moregrist 4 hours ago
> This approach dismisses the cases where AI submissions generally are better.

You're perhaps missing the not-so-subtle subtext of Peter Woit's post, and his entire blog, which is: while AI is getting better, it's still not _good_ by the standards of most science. However, it's as good as hep-th, where (according to Peter Woit) the bar is incredibly low. His thesis is part "the whole field is bad" and part "the arXiv for this subfield is full of human slop." I don't have the background to judge whether Peter Woit's argument has merit, but he has been making it consistently for 25+ years.
zozbot234 3 hours ago | parent | next
What about the new result that was recently derived by GPT 5.2 Pro/Deep Research? That was also hep-th. https://openai.com/index/new-result-theoretical-physics/ https://arxiv.org/abs/2602.12176
tossandthrow 4 hours ago | parent | prev
My comment was more a response to the proposed gatekeeping of science as a human activity. Yes, AI is still not good in the grand scheme of things. But everyone actively using it has grown concerned over the past two months at the leapfrogging of LLMs, and surprised, since they thought we had reached a plateau. We will see in a year or two whether humans still hold an advantage in research; currently very few do in software development, whatever they may think of themselves.