TheAceOfHearts 5 days ago

Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.

This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.

jbstack 5 days ago

> I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more.

I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.

The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself, e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.

mzajc 5 days ago

> I ask follow up questions to make sure I understand why the AI's answer works.

I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?

[0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits variable expansion, but it kept giving me convincing yet incorrect answers.

jbstack 5 days ago

It's a tradeoff. After using ChatGPT for a while you develop somewhat of an instinct for when it might be hallucinating, especially when you start probing it for the "why" part and you get a feel for whether its explanations make sense. Having at least some domain knowledge helps too - you're more at risk of being fooled by hallucinations if you are trying to get it to do something you know nothing about.

While not foolproof, combining this with some basic fact-checking (e.g. quickly skim-reading a command's man page to make sure the explanation for each flag sounds right, or reading the relevant paragraph of the manual), plus the fact that you see in practice whether the proposed solution fixes the problem, gets you to a reasonably high level of accuracy most of the time.
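A spot check can be as quick as grepping the man page for the flag in question (the command and flag here are purely illustrative):

  # does the suggested flag really do what the LLM claims?
  man rsync | grep -n -A3 -- '--delete'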

Even with the risk of hallucinations it's still a great time saver, because you short-circuit the process of working out which command is useful and reading the whole man page / manual until you understand which parts do the job you want. It's not perfect, but neither is Googling - that can lead to incorrect answers too.

To give an example of my own, the other day I was building a custom Incus virtual machine image from scratch from an ISO. I wanted to be able to provision it with cloud-init (which comes configured by default in cloud-enabled stock Incus images). For some reason, even with cloud-init installed in the guest, the host's provisioning was being ignored. This is a rather obscure problem for which Googling was of little use because hardly anyone makes cloud-init enabled images from ISOs in Incus (or if they do, they don't write about it on the internet).

At this point I could have done one of two things: (a) spend hours or days learning all about how cloud-init works and how Incus interacts with it until I eventually reached the point where I understood what the problem was; or (b) ask ChatGPT. I opted for the latter and quickly figured out the solution and why it worked, thus saving myself a bunch of pointless work.
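Leaving the specifics of the fix aside, the first checks in that kind of session look something like this inside the guest (these are real cloud-init commands, but the debugging path itself is only a sketch):

  # did cloud-init run at all, and how did it finish?
  cloud-init status --long
  # did the host's provisioning data actually reach the guest?
  sudo cloud-init query userdata
  # datasource detection details end up in the log:
  less /var/log/cloud-init.log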

majewsky 5 days ago

Does it work better when the AI is instructed to describe a method of answering the question, instead of answering the question directly?

For example, in this specific case, I am enough of a domain expert to know that this information is accessible by running `man systemd.service` and looking for the description of command line syntax (findable with grep for "ExecStart=", or, as I have now seen in preparing this answer, more directly with grep for "COMMAND LINES").
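Concretely, that lookup is a one-liner, assuming the systemd man pages are installed:

  # jump straight to the relevant section of systemd.service(5):
  man systemd.service | less '+/COMMAND LINES'
  # or, non-interactively:
  man systemd.service | grep -n -A4 'ExecStart='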

mzajc 5 days ago

That's a much better option since the LLM is no longer the source of truth. Unfortunately, it only works in cases where the feature is properly documented, which isn't the case here.

dpkirchner 5 days ago

Could you give an example of an ExecStart line that uses a colon? I haven't found any documentation for that while using Google and I don't have examples of it in my systemd unit files.

mzajc 5 days ago

Yup, it's undocumented for some reason. I don't remember where I saw it used, but as an example:

  [Service]
  ExecStart=/bin/echo $PATH

will log the expanded value of $PATH, while

  [Service]
  ExecStart=:/bin/echo $PATH

will log the literal string $PATH.
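If you want to see both behaviours side by side, a throwaway oneshot unit works, since oneshot services may have multiple ExecStart= lines (the unit name here is made up):

  # /etc/systemd/system/test-expand.service (hypothetical scratch unit)
  [Unit]
  Description=Demonstrate the ExecStart ':' prefix

  [Service]
  Type=oneshot
  ExecStart=/bin/echo expanded: $PATH
  ExecStart=:/bin/echo literal: $PATH

After `systemctl daemon-reload` and `systemctl start test-expand`, `journalctl -u test-expand` shows one line with the real search path and one with the literal string $PATH.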

kjkjadksj 5 days ago

I think the school experience proves that doesn't work. Reminds me of a teacher carefully breaking down a problem on the board while you nod along as it unfolds in front of you. The question is whether you can do it yourself come the exam. If all you did to prepare was watch the teacher solve it, with no attempt to solve it from scratch yourself during practice, you will fail the exam.

jbstack 5 days ago

That very much depends on the topic being studied. I've passed plenty of exams at different levels (school, university, professional qualifications) just by reading the textbook and memorising key facts. I'd agree with you if we were talking about something like maths.

Also, there's a huge difference between passively watching a teacher write an explanation on a board, and interactively quizzing the teacher (or in this case, LLM) in order to gain a deeper and personalised understanding.

kjkjadksj 4 days ago

The issue is verifying that the LLM is returning actual facts. By the time you've done that by consulting sufficiently reliable sources, you no longer need the LLM.

giancarlostoro 5 days ago

When Firefox added autocorrect and I started using it, I made it a point to learn what it was telling me was correct, so I could write more accurately. I have since become drastically better at spelling. I still goof, and I'm even worse when pronouncing words I've read but never heard. English is my second language, mind you.

I think any developer worth their salt would use LLMs to learn quicker and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before but cannot recall my last solution to, which is frustrating; I could see how an LLM could help that solution come back quicker. Sometimes it's 'first time setup' stuff that you haven't had to do for five years, so you forget; maybe you wrote it down on a wiki two jobs ago, but an LLM could help you remember.

I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.

lazide 5 days ago

I’d consider it similar to always using a GPS/Google Maps/Apple Maps to get somewhere without thinking about it first.

It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.

Usually it’s good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double check actual addresses and just put in a business name or whatever). In many edge cases depending on the use case, it leads to being stuck, because the maps data is wrong, or doesn’t have updated locations, or can’t consider weather conditions, etc. especially if we’re talking in the mountains or outside of major cities.

Doing it blindly has led to numerous people dying after getting themselves into progressively dumber situations.

People still got stuck using paper maps, and sometimes they even died, but it was much rarer, and people were more aware that they were lost instead of persisting in thinking they weren't. So: different failure modes.

Paper maps were very inconvenient, so people dealt with that through more human interaction and more buffer time, which had its own costs.

In areas where there are active bad actors (Eastern Europe nowadays, other areas in that region at times), it leads to actively pathological outcomes.

It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.

Manik_agg 5 days ago

I agree. Asking an LLM to write for you is lazy, and it also produces sub-par results (I don't know about the brain rot).

I also like preparing a draft and using an LLM for critique; it helps me figure out blind spots or better ways to articulate things.

defgeneric 5 days ago

This is exactly the problem, but there's still a sweet spot where you can quickly get up to speed on technical areas adjacent to your specialty and not have small gaps in your own knowledge hold you back from the main task. I was quickly able to do some signal processing for underwater acoustics in C, for example, and don't really plan to become highly proficient in it. I was able to get something workable and move on to other tasks while still getting an idea of what was involved if I ever want to come back to it. In the past I would have just read a bunch of existing code.