| ▲ | fyredge a day ago |
Yes and no. The first thing to understand is that in academia, knowledge is the work. You are being trained to absorb existing knowledge, hypothesise new knowledge, and test whether it is valid. LLMs are a useful tool if you want to generate text, but in the context of research this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%?

The business of research is the business of factual knowledge. Every piece of information should be, and is expected to be, scrutinized. That's why dishonesty (falsifying data, plagiarism, etc.) is so severely looked down upon. I would say your use case is not dishonest, but I would also like you to think from the perspective of the university. How would they know whether their students are using it honestly, as you did? How can they, with their limited resources, make sure that research integrity is upheld in the face of automated hallucinations?

At the end of the day, the question is not whether using AI is dishonest; it's whether you can walk into an antagonistic panel and defend your claim that you understand the knowledge of your field (without live AI help). If you can do that, and also make sure that the contents are not hallucinated, then I don't see why not.
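A quick illustration of the error-rate point above (a sketch, not from the thread): even a small per-claim error rate compounds across the many claims in a review. The claim count and the rates below are illustrative assumptions.

    # Chance that at least one claim in a review is wrong, assuming
    # independent errors at a fixed per-claim rate. The claim count
    # and the rates are illustrative assumptions, not thread data.
    claims = 100  # e.g., factual claims cited across one review

    for rate in (0.10, 0.05, 0.01, 0.001):
        p_any_error = 1 - (1 - rate) ** claims
        print(f"per-claim error {rate:.1%}: "
              f"P(at least one bad claim in {claims}) = {p_any_error:.1%}")

Even at a 0.1% per-claim rate, roughly one review in ten would contain at least one wrong claim under these assumptions, which is why per-claim scrutiny is hard to avoid.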
| ▲ | latand6 13 hours ago | parent |
Yeah, that's exactly my point. The AI is just taking over the boring job of collecting evidence, and I'm the validator. This way I can process papers much faster than without AI, primarily because I don't have to spend 70% of my time reading abstracts and sections of papers I'll never need; doing that manually is very exhausting. That being said, I feel more productive in terms of generating insights beyond what the AI said. I also have a chat interface where I can basically ask anything I want about the PDF (and yeah, I'm aware of NotebookLM, I just don't trust Gemini).
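For readers wondering what a chat-over-a-PDF setup like the one described above might look like, here is a minimal sketch. It assumes the pypdf library for text extraction and an OpenAI-compatible chat endpoint; the model name, file name, and prompts are placeholders, not details from the thread.

    # Minimal chat-over-a-PDF loop: extract the text, then answer
    # questions grounded in it. Assumes `pip install pypdf openai`
    # and an API key in OPENAI_API_KEY; the model and file names
    # are placeholders, not details from the thread.
    from pypdf import PdfReader
    from openai import OpenAI

    def load_pdf_text(path: str) -> str:
        """Concatenate the extractable text of every page."""
        reader = PdfReader(path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)

    def ask(client: OpenAI, paper_text: str, question: str) -> str:
        """Ask one question, telling the model to stay inside the paper."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system",
                 "content": "Answer only from the paper below. If the answer "
                            "is not in the paper, say so.\n\n" + paper_text},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        client = OpenAI()
        paper = load_pdf_text("paper.pdf")  # hypothetical file
        print(ask(client, paper, "What is the main hypothesis of this paper?"))

A real tool would chunk the paper and retrieve only the relevant passages rather than stuffing the full text into the prompt (long papers can exceed the model's context window), and, per the comment above, every answer still needs human validation.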