anttiharju a day ago

For me it boils down to this: I'm much less tied to tech stacks I've previously worked on and can pick up unfamiliar ones quicker.

Democratization they call it.

jijijijij a day ago

> and can pick up unfamiliar ones quicker

Do you tho? Does "picking up" a skill mean the same thing it used to? Do you fact-check all the stuff AI tells you? How certain are you that you're learning correct information? Struggling through unfamiliar topics, making mistakes, and figuring out solutions by testing internal hypotheses is a big part of how deep, explanatory knowledge is acquired by human brains. Or maybe it's always been 10,000 kilowatt-hours, after all.

Even if you did actually learn different tech stacks faster with AI telling you what to do, it's still a momentary thing, since these systems are fundamentally poisoned by their own talk. So shit's basically frozen in time, still limited to pre-AI-slop information, or requires insane amounts of manual sanitization. And who's gonna write the content for clean new training data anyway?

Mind you, I am talking about the long-term prospects of this technology and a cost-value evaluation. Maybe I am grossly ignorant/uninformed, but to me it all just doesn't add up if you project the inherent limitations onto wider adoption and draw the obvious logical conclusions. That is, if humanity isn't stagnating and new knowledge is being created.

anttiharju a day ago

> Do you tho?

Recent success I've been happy with has been moving my laptop config to Nix package manager.

A common complaint people have is Nix the language. It's a bit awkward, "JSON-like". I probably would not have had the patience to engage with it in the little time I have available. But AI mostly gets the syntax right, which lets me engage with it, and by this point I think I have a decent grasp of the ecosystem and even the syntax. It's been roughly a year, I think.

Like, I don't know all the constructs available in the language, but I can still reason as a commoner that I probably don't want to define my username multiple times in my config, especially when trying to make the setup reproducible on an arbitrary set of personal laptops. So for a new laptop I just define one new array item as a source of truth and everything downstream just works. A sketch of what I mean is below.
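
To illustrate, a minimal sketch (not my actual config; the hostnames, usernames, and option paths are made up, assuming nix-darwin-style options):

    let
      # Single source of truth: one entry per laptop (illustrative values).
      machines = [
        { hostname = "laptop-1"; username = "antti"; }
        { hostname = "laptop-2"; username = "antti-work"; }
      ];
    in
    builtins.listToAttrs (map (m: {
      name = m.hostname;
      value = {
        # Everything downstream derives from the one entry above.
        networking.hostName = m.hostname;
        users.users.${m.username}.home = "/Users/${m.username}";
      };
    }) machines)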

I feel like with AI the architectural properties matter more than the low-level details. Nix has the nice property of reproducibility/declarativeness. You could for sure put even more effort into alternative solutions, but if they lack reproducibility I think you're going to keep suffering, no matter how much AI you have available.

I am certain my config has some silliness in it that someone more experienced would pick out, but ultimately I'm not sure how much that matters. My config is still reproducible enough that I have my very custom env up and running after a few commands on an arbitrary MacBook.

> Does "picking up" a skill mean the same thing it used to?

I personally feel confident in helping people move their config to Nix, so I would say yes. But it's a big question.

> Do you fact-check all the stuff AI tells you? How certain are you that you're learning correct information?

Well, usually I have a more or less testable setup, so I can verify whether the desired effect was achieved. Sometimes things don't work, which is when I start reaching for the docs or source code of, for example, the library I'm trying to use.

> Struggling through unfamiliar topics, making mistakes, and figuring out solutions by testing internal hypotheses is a big part of how deep, explanatory knowledge is acquired by human brains.

I don't think this is lost. I iterate a lot. I think the Claude Code author does too; didn't they have something like +40k/-38k lines of changes over the past year or so? I still use GitHub issues to track what I want to get done when a solution is difficult to reach, and I comment progress on them. Recently I did that with my struggles cross-compiling Rust from Linux to macOS (a sketch of the kind of config involved is below). It's just easier to iterate, and I don't need to sleep on it overnight to get unstuck.
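
For context, the kind of thing that took iterating on, as an illustrative .cargo/config.toml; this assumes an osxcross-built clang wrapper already on PATH, and the wrapper name/Darwin version is a placeholder that varies per SDK:

    # Hypothetical .cargo/config.toml; assumes
    # `rustup target add x86_64-apple-darwin` has been run
    # and an osxcross clang wrapper is on PATH.
    [target.x86_64-apple-darwin]
    linker = "x86_64-apple-darwin22-clang"

    # Then: cargo build --target x86_64-apple-darwin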

> since these systems are fundamentally poisoned by their own talk,

_I_ feel like this goes into overthinking territory. I think software and systems will still live or die by their merits. Same applies to training data. If bugs regularly make it to end users and a competing solution has fewer defects, I don't think the buggy solution will stay afloat any longer thanks to AI. So, I'd argue, the training data will be OK. Paradigms can still exist, like the Theory of Modern Go discouraging globals and init functions (sketched below). And I think Tesla also had to deal with something like this before modern LLMs? As in, not all drivers drove well enough that Tesla wanted to use their data for training Autopilot.
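
For anyone unfamiliar with that paradigm, roughly this (my own illustration, not taken from the article):

    package main

    import (
        "net/http"
        "time"
    )

    // Discouraged (per the paradigm): package-level state wired up in init().
    var client *http.Client

    func init() { client = &http.Client{Timeout: 10 * time.Second} }

    // Preferred: construct dependencies explicitly and inject them.
    type Server struct{ client *http.Client }

    func NewServer(c *http.Client) *Server { return &Server{client: c} }

    func main() {
        s := NewServer(&http.Client{Timeout: 5 * time.Second})
        _ = s
    }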

I really enjoyed your reply, thank you.

jijijijij 21 hours ago

I understand your use case and the benefit you experienced, but quite frankly, I don't think it's easy to generalize or extrapolate that to common problems that would justify the expense of this technology.

Having a compiler or any kind of easy, fast, formalized "sanity" check is a huge privilege of coding work when it comes to AI usage, something missing in almost every other industry. But even in tech, the full extent of such capabilities is limited to a few programming languages etc. Outside of those, confidence in understanding or vouching for the output is limited without actually RTFM. I mean, move fast and break things, but I don't think the quality of knowledge gained is comparable to doing it the hard way.

Side note: I also think, prospectively, it's really bad if pressure for efficiency on the tech stack is reduced by making any mess seemingly manageable through an AI interface (which is insanely inefficient and wasteful in someone else's backyard). Dev "pain" and productivity pressure are needed to improve ergonomics and performance, driving innovation. Why would Nix improve if it can be managed through a chat interface?

Similarly, the whole human communication quirk of writing prose to each other becomes utterly meaningless if expansion and compression of information is expected to happen through AI intermediaries anyway. A prose letter is formality and subtext of respectful human interaction, which becomes worthless/offensive if done by a mindless machine. And if the need for prose vanishes, the AI overhead is fantastical, ridiculous compared to just sending the prompt's information bits directly to the recipient (without expansion and compression in between). Nothing makes me wanna scream more than LLMs talking with each other, no matter the language. That's just insanely stupid. If there is no value in formality and indirect communication, we can just cut that and use a trivially simple and amazingly efficient protocol for information exchange.

> _I_ feel like this goes into overthinking territory. I think software and systems will still live or die by their merits. Same applies to training data. If bugs regularly make it to end users and a competing solution has fewer defects, I don't think the buggy solution will stay afloat any longer thanks to AI.

But where is the training data coming from? I also think this fallaciously extrapolates from prior tech innovation and questionable market narratives (considering the tech oligopoly). These models are not like Lego; they can't be tuned or adjusted piece by piece, there is nothing linear about them. And if you spend all that investment money on manually fine-tuning answers, the tech itself doesn't warrant the cash to begin with. That's not AI, that's just a lot of effort. The pyramids are also evidence of laborious human hubris and great expense, a sight to behold, but hardly a technological revolution with great ROI.

I don't think refeeding model degradation is comparable to bad human input as with Autopilot (besides, as if that's actually working/solved :D). Thing is, the frequency of faulty human information is probably rather constant, while AI slop is exploding, drowning out the total available human-crafted content (again, the scenario is wide AI adoption). And that's not even considering unique feedback mechanisms that are not fully understood. Who is gonna put out handcrafted, thoroughly thought-out training content anymore, when the skill of learning itself has widely atrophied? And who is gonna do it for free? Or who is gonna pay for it, when you also have to pay for the absurd energy and resource expenses at some point? Keep in mind, AI, in contrast to human intelligence, does not gain functional understanding and relies on statistical tricks from crunching stupid amounts of data. One thought-out piece of code, sufficient for you to get cooking, means nothing to the machine. To train these things, you need massive input. Again, where is all that clean data coming from? How much are you willing to pay for an AI service to help you a little with Nix?

If there was no refeeding degradation, we would have escape velocity and AGI. In that case, all bets are off and money becomes meaningless anyway. The expenses, the investments don't make sense. Nothing of this shit makes sense :D