staticshock 3 hours ago

The eloquence with which this point gets (repeatedly) made continues to improve each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), in which the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.

Edit: 9 babies → 9 mothers

bla3 2 hours ago | parent | next [-]

> "can't make 9 babies in a month"

It's "9 women can't make a baby in one month".

staticshock 2 hours ago | parent | next [-]

Hah, right, I mixed it up!

bluefirebrand 32 minutes ago | parent | prev [-]

In fairness, 9 women can't make 9 babies in a month either

ctvdev 2 hours ago | parent | prev | next [-]

> That is, we are not yet at the "aphorism" stage of the discourse

we learn by doing

ipython 2 hours ago | parent | next [-]

I’ve also seen, along those lines, “there is no compression algorithm for experience” - a nice summary of the HN posts from today.

nkrisc an hour ago | parent | prev [-]

Put differently: you get good at what you actually do, not what you think you're doing.

If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.

embedding-shape 2 hours ago | parent | prev | next [-]

How about "Intelligence amplification, not artificial intelligence"?

Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".

viccis 44 minutes ago | parent | prev | next [-]

>the medium is the message

If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.

IceDane an hour ago | parent | prev | next [-]

Outsource manual labor, not your brain.

xnx 3 hours ago | parent | prev [-]

This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited so we offload everything we can to move to the next level of abstraction.

lsy 2 hours ago | parent | next [-]

AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.

Krssst 18 minutes ago | parent | next [-]

Non-determinism is not as much of a problem as the lack of a spec. C++ has its standard, Python has its reference manual. One can refer to those to reliably predict how a program will behave without thinking about the generated assembly. LLMs have no spec.

lukan an hour ago | parent | prev | next [-]

"You can’t treat a prompt like source code because it will give you a different output every time you use it"

But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.

Either way, this is what I focus my thinking on right now, something that was always important and with AI even more so: crystal-clear language describing what the program should do and how.

That requires enough thinking effort.

lelanthran 35 minutes ago | parent [-]

Didn't work for the prod data that the AI nuked in spite of prompts saying "DON'T FUCKING GUESS", just like that, in all caps: https://news.ycombinator.com/item?id=47911524

What makes you think it will work for you?

IceDane an hour ago | parent | prev [-]

It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.

girvo an hour ago | parent [-]

> Some people are even comparing them to compilers.

A lot of people are using them as such, too. Look at all the people talking about "my fleets of agents working on 4 different projects": they aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 consistently get things backwards at the margins, mess up, and introduce bugs. We wouldn't accept a compiler that did any of this at this scale or level.

staticshock 2 hours ago | parent | prev | next [-]

That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."

There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.

ua709 3 hours ago | parent | prev | next [-]

I get your point; I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI, so I'm not sure I get to move to the next level of abstraction. But maybe that's just me.

willhslade 24 minutes ago | parent [-]

Are compilers deterministic?

imiric an hour ago | parent | prev [-]

The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.

Never mind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of the inevitable consequences of the current hype cycle, there will be a great crash, followed by an industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before.