stanford_labrat 5 hours ago
Every few months I like to ask ChatGPT to do the "thinking" part of my job (scientist) and see how the responses stack up. At the beginning of 2022 it was useless because the output was garbage (hallucinations and fake data). Nowadays it's still useless, but for different reasons: it just regurgitates things already known and published, and it can't come up with novel hypotheses and mechanisms or ways to test them. Which makes sense, given how I understand LLMs operate.
doomslayer999 20 minutes ago
I am also a scientist and reached the same conclusion. I just use it to summarize papers, occasionally write boilerplate, and sometimes stand in for a quick Google search when the question is easy.
simianwords 4 hours ago
It is already being used in pure math research.