Ask HN: Is using AI tooling for a PhD literature review dishonest?
8 points by latand6 a day ago | 22 comments
I'm a PhD student in structural engineering. My dissertation is about using LLM agents to automate FEA calculations in software commonly used by Ukrainian companies. I'm writing my literature review now, and I've vibecoded a personal local dashboard that helps me manage the process.

The workflow: I use LLM agents to fill in a LaTeX template in a GitHub repo (this automates formatting, and I can review the diffs in an IDE). I run ChatGPT Pro to find papers relevant to my topic and explain how they're relevant, then I collect the ones whose PDFs are available online. Everything lives in a folder structure of plain files, Markdown and JSON. The dashboard works like this: I run Codex through a web chat to identify quotes relevant to my dissertation topic and why, and it combines them into a set of claims, each linked to its supporting quote. Then I review every quote and every claim manually and tick the boxes. There's also a button that runs a verification script, which validates that each exact quote really appears in the PDF. This way I collect real evidence and pick up new insights as I read.

I remember doing all of this manually during my master's degree in the UK. It was a terrible, tedious experience, partly because I have ADHD.

So my question is: is this dishonest? I can defend every claim in the review, because I built the verification pipeline and reviewed each item manually. Arguably I understand the literature better than if I had read and highlighted everything myself. But I know many universities would treat any AI-generated text as academic misconduct, and I don't quite understand the principle behind that position. If you outsource proofreading, nobody cares. Using Grammarly is the same thing. But if I use an LLM to produce text from verified, structured, human-reviewed evidence, it might be considered dishonest.
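The verification step I describe can be sketched roughly like this (a minimal illustration, not my actual script; the claims-JSON schema, file layout, and the `pypdf` dependency are assumptions):

```python
import json
import re
from pathlib import Path


def normalize(text: str) -> str:
    """Collapse whitespace and lowercase, so line breaks and
    hyphen-free wrapping in the extracted PDF text don't break matching."""
    return re.sub(r"\s+", " ", text).strip().lower()


def quote_in_pdf(quote: str, pdf_path: Path) -> bool:
    """Return True if the quote appears verbatim in the PDF's extracted text."""
    # Imported lazily so the rest of the tooling runs without pypdf installed.
    from pypdf import PdfReader

    full_text = normalize(
        " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    )
    return normalize(quote) in full_text


def verify_claims(claims_file: Path, pdf_dir: Path) -> list[dict]:
    """Check every extracted quote against its source PDF; return failures.

    Assumed schema: a JSON list of {"quote": ..., "claim": ..., "pdf": ...}.
    """
    claims = json.loads(claims_file.read_text())
    failures = []
    for claim in claims:
        pdf = pdf_dir / claim["pdf"]
        if not pdf.exists() or not quote_in_pdf(claim["quote"], pdf):
            failures.append(claim)
    return failures
```

The key design point is normalizing both sides before the substring check: PDF text extraction mangles line breaks and spacing, so a raw string comparison would reject quotes that are actually present.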

love2read (21 hours ago):
Someone against AI will tell you yes; someone for AI will tell you no. The only thing I can really say is that I don't agree that having ADHD should grant you a reprieve from the normal rules.

Acacian (8 hours ago):
The verification pipeline is the most valuable part of your workflow. Most people who use AI for literature reviews skip exactly that step: they trust the output and move on. What you're describing is closer to building a testing harness than "using AI to write." You're asserting claims, checking them against source PDFs, and reviewing manually. That's more rigorous than most manual lit reviews, where people skim abstracts and cite papers they half-read. Document the pipeline as methodology in your dissertation. That turns a potential misconduct question into a contribution.

austinjp (19 hours ago):
While your dashboard sounds fancy, this part raises issues:

> I run ChatGPT Pro to collect all relevant papers

Any literature review must be reproducible. If you can't say exactly what queries you ran against exactly which databases, you'll get into trouble. Whether or not that's the way things should be is irrelevant: it's the way things are.

You should ask your supervisor whether your approach is okay. If necessary, frame it hypothetically: "would it be okay if I were to...?" If your supervisor is unavailable, seek advice from their colleagues.

Since you mention ADHD, you're likely strongly motivated by novelty. Don't spend time building a dashboard that you could spend writing your thesis. If you're not getting support from your university, get it now. It might not help, but it signals to the university that you're engaging with the system.

fyredge (20 hours ago):
Yes and no. The first thing to understand is that in academia, knowledge is the work. You are being trained to absorb existing knowledge, hypothesise new knowledge, and test whether it is valid.

LLMs are a useful tool if you want to generate text, but in the context of research this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%? Research is the business of factual knowledge. Every piece of information should be, and is expected to be, scrutinized. That's why dishonesty (falsifying data, plagiarism, etc.) is so severely looked down upon.

I would say your use case is not dishonest, but I would also like you to think from the university's perspective. How would they know whether their students are using it as honestly as you did? How can they, with their limited resources, make sure research integrity is upheld in the face of automated hallucinations?

At the end of the day, the question is not whether using AI is dishonest; it's whether you can walk into an antagonistic panel and defend the claim that you understand the knowledge of your field (without live AI help). If you can do that, and also make sure the contents are not hallucinated, then I don't see why not.

matzalazar (11 hours ago):
Think about it this way: 70 years ago, would a physicist have been considered a cheater for using a calculator to solve complex differential equations in their daily work? People tend to frame the moral dilemmas of new technology through the lens of everyday human tasks, and I think that's just prejudice.

malshe (18 hours ago):
I don't think what you are doing is dishonest, but my opinion hardly matters. My advice is to talk to your dissertation committee chair and find out whether they think it is dishonest. Also read your university's AI usage policies. If they don't consider what you are doing a permissible use of AI, no amount of assurance on HN or any other online forum is going to help you.

Neosmith_amit (19 hours ago):
No, I don't think it is dishonest. At the same time, I would recommend documenting your methodology explicitly in the dissertation: describe the verification pipeline, and make clear what you reviewed manually versus what was automated. That transparency converts "dishonest?" into "methodologically rigorous."

Here is the thing: academic policy is not really about honesty; it is about trust. Universities cannot distinguish your workflow from that of someone who prompted GPT to write their lit review wholesale. More than an ethical distinction, I believe the rules around AI usage are blunt because enforcement is hard.

QubridAI (21 hours ago):
Not dishonest if you verify everything and understand it deeply, but you should be transparent about your AI use, since many universities care more about disclosure than about the method itself.

bjourne (19 hours ago):
You cannot copy others' work and claim it as your own; thus, you cannot copy ChatGPT's work and claim it as your own. There is a qualitative difference between having an LLM generate text and having a program spell- and grammar-check text. Since you are not going to highlight which passages ChatGPT wrote for you, and instead intend to pass them off as your own creative work, it is dishonest. Very dishonest. If caught, you will get in trouble and may be kicked out of your academic programme.

adampunk (21 hours ago):
I don’t know if it is dishonest. What I do know is that it will only save you time if you have a very specific and testable need. Otherwise it will appear to save time while producing something you won’t be proud of.