bovermyer 3 days ago

Most of that article is just good career advice in general, so I'll just comment on the part about AI.

One major problem I see with the use of AI is that it will prevent people from building an understanding of <insert problem domain X here>. This will reduce people's ability to drive AI correctly, creating a circular problem.

gloxkiqcza 3 days ago | parent | next [-]

> One major problem I see with the use of AI is that it will prevent people from building an understanding of <insert problem domain X here>.

I don’t really think this is a problem. AI is a tool; you still learn while using it. If you actually read, debug and maintain the produced code, which I consider a must for complex production systems, it’s not really that different compared to reading documentation and using Stack Overflow (i.e., coding the way it was done 10 years ago). It’s just much more efficient and it makes problems easier to miss. Standard practices of AI assisted development are slowly forming and I expect them to improve over time.

nerptastic 3 days ago | parent | next [-]

I’ll bite - I’ve been a dev at a new company for about a year and a half. I had mostly done front end work before this, so my SQL knowledge was almost nonexistent.

I’m now working in the backend, and SQL is a major requirement. Writing what I would call “normal” queries. I’ve been reaching for AI to handle this, pretty much the whole time - because it’s faster than I am.

I am picking up tidbits along the way. So I am learning, but there’s a huge caveat. I notice I’m learning extremely slowly. I can now write a “simple” complexity query by hand with no assistance, and grabbing small chunks of data is getting easier for me.
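(For illustration, a query at roughly that "simple" level of complexity might look like the following. The schema and data are hypothetical, sketched with Python's built-in sqlite3 just so it runs standalone.)

```python
import sqlite3

# Hypothetical schema for illustration: customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# A "simple"-complexity query: one join, an aggregate, a sort.
rows = conn.execute("""
    SELECT c.name, SUM(o.total) AS spent
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY spent DESC
""").fetchall()

print(rows)  # [('Ada', 65.0), ('Grace', 15.0)]
```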

I am “reading, debugging, and maintaining” the queries, but LLMs bring the effort on that task down to pretty much zero.

I guarantee if I spent even 1 week just taking an actual SQL class and just… doing the learning, I would be MUCH further along, and wouldn’t need the AI at all. It’s now my “query tool”. Yeah, it’s faster than I am, but I’m reliant on it at this point. I will SLOWLY improve, but I’ll still continue to just use AI for it.

All that to say, I don’t know where the future goes - our company doesn’t have time to slow down for me to learn SQL, and the tool does a fine job - it’s been 1.5 years and the world hasn’t ended, I can READ queries rather quickly - but writing them is outsourced to the model.

In the past, if a query was written on stack overflow, I would have to modify it (sometimes significantly) to achieve my goal, so maybe the learning was “baked in” to the translation process.

Now, the LLM gives me exactly what I need, no extra “reinforcement” work done on my end.

I do think these tools can be used for learning, but that effort needs to be dedicated. In many cases I’m sure other juniors are in a similar position. I have a higher output, but I’m not quickly increasing my understanding. There’s no incentive for me to slow down, and my manager would scoff at the idea, really. It’s a tough spot to be in.

eska 2 days ago | parent | next [-]

I can corroborate this. I coached mechanical engineers who had to learn some programming to conduct research by analyzing factory machine data I provided (they were the domain experts). The ones who learned Python and SQL using AI had hardly learned anything after half a year; the ones I pointed to the API docs and a beginner tutorial weren’t just much further along, they were also on a faster trajectory for the future. I think AI is a beginner trap because it allows them to throw shit at the wall and see what sticks. It is much more useful in the hands of an expert in the long term.

Ekaros 2 days ago | parent [-]

I think this has been shown for the vast majority with homework. You just don't learn much by copying homework from somewhere else. Actual effort is needed for the learning process. Unless you are some weird, most likely rare, genius...

Also makes me think of all the incidental learning that can go on, like noticing other things when looking at API docs. Might not be useful now, but could very well be later.

bbogdn2 3 days ago | parent | prev [-]

Sounds pretty dire and/or super exploitative if a company can't spare a week for an individual employee to learn the tools of the job.

exceptione 3 days ago | parent | prev | next [-]

> AI is a tool

That would be groundbreaking news. A tool works either deterministically or it is broken.

A more helpful analogy is "AI is outsourced labor". You can review all code from overseas teams as well, but if you start to think of them as a tool, you've been promoted a few levels too far into management.

subhobroto 3 days ago | parent | prev [-]

> It’s just much more efficient and it makes problems easier to miss. Standard practices of AI assisted development are slowly forming and I expect them to improve over time

Bravo! IMHO, AI just underscores the core engineering practices that high-quality engineers have been following already.

AI is a tool that provides high leverage - if you've been following practices that allow sloppy coding, AI will absolutely amplify it.

If anything, I would guess that the AI assisted future will require engineers to think through the problem more upfront and consider edge cases instead of jumping in and typing out the first thing that comes to mind - the AI can spit out code way faster.

There's an alternate, vibe coded universe where engineers just spit out slop but, as I wrote in another comment here, there are tools to detect that. These are tools that sound "enterprisey", and that's because before AI, no one else had to deal with such a scale of code - it was just far too expensive to read, update and create PRs.

Those boundaries are coming down, and now almost everyone who can pay for oxygen tanks has a shot at scaling Mt. Everest.

subhobroto 3 days ago | parent | prev [-]

> One major problem I see with the use of AI is that it will prevent people from building an understanding of <insert problem domain X here>. This will reduce people's ability to drive AI correctly, creating a circular problem.

Very much the opposite. LLMs do a fantastic job of increasing accessibility of knowledge.

They have wide exposure to content and are incredibly good at spotting patterns and suggesting both well established norms from the current domain and serendipitous cross domain concepts.

I feel the concern you share is that LLMs expose new frontiers to people who otherwise might not even have imagined those frontiers exist, and then those people do a lazy or superficial job of it because they lack any internal motivation to do a deep dive on it.

bovermyer 3 days ago | parent [-]

We must be from _wildly_ different backgrounds.

subhobroto 3 days ago | parent [-]

It's possible we are from very similar backgrounds but bring wildly different perspectives. I'm always suspicious of statements of the form "X will prevent people from Y".

The amount of progress humans have made over 2000 years, especially the last 100, is just phenomenal. If anything, there's very little evidence that any X prevents people from Y. X certainly might not require people to Y anymore but I don't question what a motivated person can do just like I don't question what an unmotivated person can refuse to do.

It would help if you expanded on "it will prevent people from building an understanding of <insert problem domain X here>", but could one interpretation of it be "AI will prevent people from building an understanding of good software design patterns" because AI allows people to just vibe code and pay no attention to design patterns at all?

bovermyer 2 days ago | parent [-]

Let me start by asking you this:

Where does the majority of an individual's general knowledge come from?

subhobroto 2 days ago | parent [-]

Their environment and life experiences.

bovermyer a day ago | parent [-]

Alright, so we have our first point of divergence. For me, the majority of my general knowledge comes from reading.

Where does the majority of an individual's _understanding_ come from?

subhobroto a day ago | parent [-]

> Alright, so we have our first point of divergence. For me, the majority of my general knowledge comes from reading

Reading is part of your environment - you didn't start reading in a vacuum - the very tools you use to read right now (language, symbols, syntax) are environmental gifts. Something, even if you're consciously unaware of it, made you read that material. The specific script you read is part of your environment (it's unlikely you will know what symbols in Hindi mean if you didn't grow up in that environment or weren't biased towards it). Your physical brain structure depends on the languages you speak and will physically morph as you change those languages.

If you have absolutely no clue what "mutton paratha" is, you won't ever start reading about "mutton paratha" right away. (there's another thread we can discuss about human biology, original thought and serendipity but that is better done over email than in HN comments). If you can hear and see, a lot of your general knowledge comes from hearing and seeing too, even if you're consciously unaware of it or don't acknowledge it. If you saw your dad working on cars all your young life, you might not think you have any idea about cars but you might be surprised.

> Where does the majority of an individual's _understanding_ come from?

Understanding is more complex than knowledge. Understanding and knowledge are at different levels of abstraction, but you cannot understand something you have no knowledge of. Thus knowledge is a prerequisite to understanding but insufficient on its own. With sufficient understanding, you can synthesize knowledge. Nikola Tesla synthesized knowledge about AC induction motors - knowledge that's now taught even in high schools - from his understanding of EMF. However, given the same knowledge of EMF, many people in Tesla's time didn't have sufficient understanding or motivation or capacity to birth the AC induction motor.

You can be extremely knowledgeable but understand absolutely nothing. LLMs are, I argue, excellent examples of this - they have tremendous knowledge but depend on human intervention and discourse to tease it out (as great as SOTA LLMs are, they are not going to randomly design apps without an initial prompt which provides them agency and biases them towards autonomy).

(Again, I'm discarding some details that we can discuss over email. For example: I can strongly argue that LLMs have some understanding of what their knowledge is - like the semantic distance between "mutton" and "paratha" and how it can change across various manifolds in latent space.)

If we agree on this take on understanding, then I argue that the majority of an individual's understanding comes from their consciousness, when they have exercised those neural networks and processed their knowledge to arrive at inferences, reinforcing (belief) or dissociating from (disbelief) assertions.

bovermyer a day ago | parent [-]

We agree on that take on understanding, yes. More succinctly, I think we could say that understanding comes from an application of, and interaction with, knowledge. The more connections to a bit of information that an individual has, the better their understanding.

This thread's getting a bit long, so I'm happy to continue this discussion via email, if you like. My email is ben@overmyer.net.