woolion | 3 days ago
A lot of people will dismiss this with the usual AI complaints; I suspect they have never done real research. Getting into a paper can be a really long endeavor. The notation might not be entirely self-contained, or it might be used in an alien or confusing way. Managing to get into it might finally reveal that the paper's results are not applicable to your own work, a point that is often intentionally obscured to get the paper through publication. Lowering the investment needed to understand a specific paper could really help you focus on the most relevant results, to which you can then dedicate your full resources.

That said, for now I tend to favor approaches that only summarize rather than produce "active systems" -- given the approximate nature of LLMs, every step should be properly human-reviewed. So it's not clear what signal you can take from such an AI approach to a paper.

Related, a few days ago: "Show HN: Asxiv.org – Ask ArXiv papers questions through chat" https://news.ycombinator.com/item?id=45212535