mcv 7 hours ago

This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.

I recently worked on something very complex that I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, and the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues. But at that point the AI clearly had no real idea what it was doing either, and just made things worse.
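
To give an idea of the shape of the thing (a heavily simplified TypeScript sketch, not my actual code: it assumes an acyclic input, does longest-path layering and a single barycenter sweep, and uses naive coordinates where the real pipeline uses Brandes-Köpf):

    // Simplified sketch, TypeScript. Assumes a DAG (phase 1, cycle removal,
    // is skipped) and swaps naive x/y assignment in for Brandes-Köpf.
    type Edge = [from: string, to: string];

    function layeredLayout(nodes: string[], edges: Edge[]) {
      // Phase 2: longest-path layering.
      const layerOf = new Map<string, number>();
      const rank = (n: string): number => {
        if (!layerOf.has(n)) {
          const preds = edges.filter(([, t]) => t === n).map(([s]) => s);
          layerOf.set(n, preds.length ? 1 + Math.max(...preds.map(rank)) : 0);
        }
        return layerOf.get(n)!;
      };
      nodes.forEach((n) => rank(n));

      const layers: string[][] = Array.from(
        { length: Math.max(...layerOf.values()) + 1 },
        () => []
      );
      for (const n of nodes) layers[layerOf.get(n)!].push(n);

      // Phase 3: one top-down barycenter sweep to reduce edge crossings
      // (real implementations iterate up and down until stable).
      for (let i = 1; i < layers.length; i++) {
        const pos = new Map(layers[i - 1].map((n, x) => [n, x] as [string, number]));
        const bary = (n: string) => {
          const ps = edges.filter(([, t]) => t === n).map(([s]) => pos.get(s) ?? 0);
          return ps.length ? ps.reduce((a, b) => a + b, 0) / ps.length : 0;
        };
        layers[i].sort((a, b) => bary(a) - bary(b));
      }

      // Phase 4: naive grid coordinates. This is where Brandes-Köpf would
      // align and compact nodes to get straight, balanced edges.
      return layers.flatMap((layer, y) => layer.map((id, x) => ({ id, x, y })));
    }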

So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.

So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.

vidarh 3 hours ago | parent | next [-]

My "actual job" isn't to write code, but to solve problems.

Writing code has just typically been how I've needed to solve those problems.

That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.

I get to spend more time on my actual job.

mythical_39 44 minutes ago | parent | next [-]

wait, did you see the part where the person you are replying to said that writing the code themself was essential to correctly solving the problem?

Because they didn't understand the architecture or the domain models otherwise.

Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.

I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?

If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?

Funny story-- I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True-- the caller is an existing customer of one of our competitors". Not at all what I meant.
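
In hindsight, pinning down the label set instead of asking for a boolean would have caught it. Roughly (a sketch only; callLlm is a hypothetical stand-in for whatever client you actually use):

    // Sketch only: a closed label set so "customer of a competitor" can't
    // collapse into a bare "True". callLlm is a hypothetical stand-in for
    // whatever client you actually use.
    declare function callLlm(prompt: string): Promise<string>;

    const LABELS = ["our_customer", "competitor_customer", "not_a_customer"] as const;
    type CallerStatus = (typeof LABELS)[number];

    async function classifyCaller(transcript: string): Promise<CallerStatus> {
      const prompt = [
        "Classify the caller in the transcript. Reply with exactly one of:",
        '- "our_customer": has an active account WITH US',
        '- "competitor_customer": a customer of another vendor, not of ours',
        '- "not_a_customer": neither',
        "Transcript:",
        transcript,
      ].join("\n");
      const raw = (await callLlm(prompt)).trim();
      // Reject anything outside the label set instead of guessing.
      if ((LABELS as readonly string[]).includes(raw)) return raw as CallerStatus;
      throw new Error(`Unexpected label from model: ${raw}`);
    }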

Kamq 3 hours ago | parent | prev | next [-]

> My "actual job" isn't to write code, but to solve problems.

Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.

> That has increasingly shifted to "just" reviewing code

It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.

jvanderbot an hour ago | parent | next [-]

> often a benefit to having a human have an understanding of the concrete details of the system

Further elaborating from my experience.

1. I think we're in the early stages, where agents are useful because we still know enough to coach well - knowledge inertia.

2. I routinely make the mistake of allowing too much autonomy, and will have to spend time cleaning up poor design choices that were either inserted by the agent, or were forced upon it because I had lost lock on the implementation details (usually both in a causal loop!)

I just have a policy of moving slowly and carefully now through the critical code, vs letting the agent steer. They have overindexed on passing tests and "clean code", producing things that cause subtle errors time and time again in a large codebase.

> burn the time to understand it.

It seems self-evident to me that writing produces better understanding than reading. In fact, whenever I've tried to understand a difficult codebase, probing and rewriting often produced a better understanding than reading alone, even if those changes were never kept.

laurentiurad 2 hours ago | parent | prev [-]

It's like any other muscle, if you don't exercise it, you will lose it.

It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading the code given by someone else (either a human or a machine) is not enough; you end up evaluating perhaps the main use cases and the style. Most of the time you will find gaps while writing the code yourself.

HumblyTossed 27 minutes ago | parent | prev | next [-]

You're right that a dev's job is to solve problems. However, one loses a lot of that if one doesn't think in computerese - and only reading code isn't enough. One has to write code to understand code. So to do one's _actual_ job, one cannot depend solely on "AI" to write all the code.

thefaux 3 hours ago | parent | prev | next [-]

This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of AI-generated code is ultimately creating more problems than it is solving, and that the friction of manual coding may ultimately prove to be a great virtue.

Difwif 2 hours ago | parent [-]

This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

How we work changes, and the extra complexity buys us productivity. The vast majority of software will be AI-generated, tools will exist to continuously test/refine it, and hand-written code will be for artists, hobbyists, and an ever-shrinking set of hard problems where a human still wins.

Kbelicius 2 hours ago | parent | next [-]

> This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

This to me looks like an analogy that would support what GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge for those who practice those techniques of least resistance in the short term.

This is not me saying big farming bad or something like that, just that your analogy, to me, seems perfectly in sync with what the GP is saying.

teeray an hour ago | parent | prev | next [-]

This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.

hluska 2 hours ago | parent | prev | next [-]

I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.

HumblyTossed 17 minutes ago | parent | next [-]

I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...

Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.

(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.

paulcole 2 hours ago | parent | prev [-]

They may be early but they’re not wrong.

lazide 2 hours ago | parent [-]

That could be said about hover cars too.

HumblyTossed 17 minutes ago | parent [-]

The Moller car is just weeks away, haven't you heard?

player1234 2 hours ago | parent | prev [-]

[dead]

eaglelamp 2 hours ago | parent | prev | next [-]

All employees solve problems. Developers have benefited from the special techniques they have learned to solve problems. If these techniques are obsolete, or are largely replaced by minding a massive machine, the character of the work, the pay for performing it, and social position of those who perform it will change.

wiseowise 20 minutes ago | parent | prev | next [-]

So what happens when LLM provider and/or internet is down or you're out of credits?

blibble an hour ago | parent | prev | next [-]

this is the standard consultant vs employee angle

if you're a consultant/contractor that's bid a fixed amount for a job: you're incentivised to slop out as much as possible to complete the contract as quickly as possible

and then if you do a particularly bad job, you'll probably be kept on to fix up the problems

vs. a permanent employee who is incentivised to do the job well, sign it off and move onto the next task

notanastronaut 24 minutes ago | parent | prev [-]

I'm in the same boat. There are a lot of things I don't know, and using these models helps give direction and narrow focus towards solutions I didn't know about previously. I augment my knowledge, not replace it.

Some people learn from rote memorization, some people learn through hands on experience. Some people have "ADHD brains". Some people are on the spectrum. If you visit Wikipedia and check out Learning Styles, there's like eight different suggested models, and even those are criticized extensively.

It seems a sort of parochial universalism has coalesced, but people should keep in mind we don't all learn the same.

Archer6621 6 hours ago | parent | prev | next [-]

That's a nice anecdote, and I agree with the sentiment - skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information, but for many people the overall trade-off in terms of time and energy savings is worth it, giving them room to do more or other things.

scyzoryk_xyz 3 hours ago | parent | next [-]

If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.

Trade-offs around "room to do more or other things" are an interesting and recurring theme of these conversations. Like two ends of a spectrum: on one end the ideal of the process-oriented artisan taking the long way to mastery, on the other the trailblazer moving fast and discovering entirely new things.

Coming back to the encyclopedia example: I'm already seeing that my own skillset for researching online has atrophied and become less relevant. Both because searching isn't as helpful as it used to be and because my muscle memory is shifting towards reaching for the chat window.

andai 25 minutes ago | parent | next [-]

It's a servant, in the Claude Code mode of operation.

If you outsource a skill consistently, you will be engaging less with that skill. Depending on the skill, this may be acceptable, or a desirable tradeoff.

For example, using a very fast LLM to interactively make small edits to a program (a few lines at a time) outsources the work of typing, remembering stdlib names and parameter order, etc.

This way of working is more akin to power armor, where you are still continuously directing it, just with each of your intentions manifesting more rapidly (and perhaps with less precision, though it seems perfectly manageable if you keep the edit size small enough).

Whereas "just go build me this thing" and then you make a coffee is qualitatively very different, at that point you're more like a manager than a programmer.

wiseowise 11 minutes ago | parent | prev | next [-]

> perhaps AI is the electric scooter for the mind

More like a mobility scooter for the disabled. Literally Wall-E in the making.

coole-wurst 3 hours ago | parent | prev [-]

Maybe it was always about where you are going and how fast you can get there? And AI might be a few mph faster than a bicycle, and still accelerating.

chairmansteve 4 hours ago | parent | prev [-]

"I reckon this is even the case when using it as an interactive encyclopedia".

Yes, that is my experience. I have done some C# projects recently, a language I am not familiar with. I used the interactive encyclopedia method and "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.

OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.

I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!
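
As an illustration of the workflow, here's the level of pseudocode I hand over and roughly what comes back (TypeScript here to keep the thread's examples in one language; my actual projects were C#):

    // Illustration only (my real projects were C#). The pseudocode in:
    //   for each order: mark it stale if it is older than 30 days
    // and roughly what the LLM hands back:
    const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

    interface Order {
      id: string;
      createdAt: Date;
      stale?: boolean;
    }

    function markStale(orders: Order[], now: Date = new Date()): void {
      for (const order of orders) {
        order.stale = now.getTime() - order.createdAt.getTime() > THIRTY_DAYS_MS;
      }
    }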

isolli 7 hours ago | parent | prev | next [-]

This mirrors my experience exactly. We have to learn how to tame the beast.

sothatsit 6 hours ago | parent | next [-]

I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.

If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.

But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.

A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.

orenp 3 hours ago | parent [-]

I'd say the new problem is knowing when understanding is important and where it's okay to delegate.

It's similar to other abstractions in this way, but on a larger scale due to LLMs having so many potential applications. And of course, due to the non-determinism.

sevenzero 6 hours ago | parent | prev [-]

[flagged]

jeffreygoesto an hour ago | parent | prev | next [-]

ELK (Eclipse Layout Kernel) is a very good package that solves exactly that; you might want to check out its JavaScript port: https://github.com/kieler/elkjs
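
A minimal call looks roughly like this (adapted from the elkjs README; check the repo for the current API. "layered" is ELK's Sugiyama-style algorithm):

    // Adapted from the elkjs README (check the repo for the current API).
    // "layered" is ELK's Sugiyama-style layered algorithm.
    import ELK from "elkjs";

    const elk = new ELK();

    const graph = {
      id: "root",
      layoutOptions: { "elk.algorithm": "layered" },
      children: [
        { id: "n1", width: 30, height: 30 },
        { id: "n2", width: 30, height: 30 },
      ],
      edges: [{ id: "e1", sources: ["n1"], targets: ["n2"] }],
    };

    elk.layout(graph).then((laidOut) => {
      // Each child now carries x/y coordinates computed by the layout.
      for (const n of laidOut.children ?? []) console.log(n.id, n.x, n.y);
    });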

jstummbillig 2 hours ago | parent | prev | next [-]

> But letting it write the actual code was a mistake

I think you not asking questions about the code is the problem (insofar as it still is a problem). But it certainly has gotten easy not to.

exodust 2 hours ago | parent | prev | next [-]

Similarly, I leave Cursor's AI in "ask" mode. It puts code there, leaving me to grab what I need and integrate it myself. This forces me to look closely at the code and prevents the "runaway" feeling where the AI does too much and you're feeling left behind in your own damn project. It's not AI chat causing cognitive debt, it's agents!

foxes 4 hours ago | parent | prev | next [-]

Is this a copilot ad?

gambiting 20 minutes ago | parent [-]

It does read like one.

PatronBernard 4 hours ago | parent | prev [-]

> a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning.

I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.

But I fully agree: for complex problems that require insight, LLMs can waste your time with their sycophancy.

TheColorYellow 3 hours ago | parent | next [-]

This is a technical forum, isn't pretentious name dropping kind of what we do?

Seriously though, I appreciated it, because my curiosity got the better of me and I went down a quick rabbit hole on Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best initiative for learning). So yeah man, let's keep name dropping pretentious technical details, because that's half the reason I surf this site.

And yes, I did use ChatGPT to familiarize myself with these concepts briefly.

fatherwavelet 2 hours ago | parent [-]

I think many are not doing anything like this, so to the person who is not interested in learning anything, technical details like this sound like pretentious name dropping, because that is how they relate to the world.

Everything to them is a social media post for likes.

I have explored all kinds of graph layouts in various network science contexts via LLMs, and guess what? I don't know much about graph theory beyond G = (V,E). I am not really interested either. I am interested in what I can do with and learn from G. Everything to the right of the equals sign I leave to Gemini; it is already beyond my ability. I am just not that smart.

The standard narrative on this board seems to be something akin to having to master all volumes of Knuth before you can even think of writing a React CRUD app. Ironic, since I imagine so many learned programming by just programming.

I know I don't think as hard when using an LLM. Maybe that is a problem for people with 25 more IQ points than me. If I had 25 more IQ points maybe I could figure out stuff without the LLM. That was not the hand I was dealt though.

I get the feeling there is immense intellectual hubris on this forum, so that when something like this comes up, it is a dog whistle for these delusional, Erdős-in-their-own-mind people to come out of the woodwork to tell you how LLMs can't help you with graph theory.

If that wasn't the case there would be vastly more interesting discussion on this forum instead of ad nauseam discussion on how bad LLMs are.

I learn new things everyday from Gemini and basically nothing reading this forum.

hluska 2 hours ago | parent | prev | next [-]

I’ve been forced down that path and based on that experience it added a whole lot. Maybe you just don’t understand the problem?

wanderlust123 an hour ago | parent | prev [-]

There is nothing pretentious about what they said. Why are you so insecure/sensitive?