slopinthebag 8 hours ago

> I didn’t write any piece of code there. There are several known issues, which I will task the agent to resolve, eventually. Meanwhile, I strongly advise against using it for anything beyond a studying exercise.

Months of effort and three separate tries yielded something kind of working, but buggy, untested, and not recommended for anyone to use. Unfortunately, some folks will just read the headline and proclaim that AI has solved programming. "Ubiquitous hardware support in every OS is going to be a solved problem!" Or my favourite: instead of software, we will just have the LLM output bespoke code for every single computer interaction.

Actually a great article and well worth reading, just ignore the comments because it's clear a lot of people have just read the headline and are reading their own opinions into it.

petcat 8 hours ago | parent | next [-]

The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.

Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.

acedTrex 8 hours ago | parent | next [-]

> Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.

The part to do with AI is that it was not able to produce a comprehensive and bug-free driver with minimal effort from the human.

That is the point.

rayiner 6 hours ago | parent | next [-]

Why is that the metric? In my job, I get drafts from junior employees that require major revisions, often rewriting significant parts. It’s still faster to have someone take the first pass. Why can’t AI coding be used the same way? Especially if AIs are capable of following your own style and design choices, as well as testing code against a test suite, why isn’t it easier to start from a kind-of-working baseline than to rebuild from scratch?

dangus 6 hours ago | parent | prev [-]

I’m not able to provide a comprehensive bug free driver.

Gigachad 8 hours ago | parent | prev | next [-]

Seems like they did put in quite a bit of effort, but were not knowledgeable enough on wifi drivers to go further.

So hardware drivers are not a solved problem where you can just ask chatgpt for a driver and it spits one out for you.

freeplay 6 hours ago | parent [-]

If you could write drivers in javascript, it probably would have done just fine /s

slopinthebag 8 hours ago | parent | prev | next [-]

> The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.

Yes, and that's what I'm pointing out: they vibe coded it and the headline is somewhat misleading, although it's not the author's fault if you don't go read the article before commenting.

But it does have to do with AI (obviously), and specifically the capabilities of AI. If you need to be knowledgeable about how wifi drivers work and put in effort to get a decent result, that obviously speaks volumes about the capabilities of the vibe coding approach.

petcat 8 hours ago | parent [-]

I strongly suspect that somebody with domain knowledge around Wi-Fi drivers and OS kernel drivers could prompt the LLM to spit out a lot more robust code than this guy was able to. That's not a knock on him, he was just trying to see what he could do. It's impressive what he actually accomplished given how little effort he put forth and how little knowledge he had about the subject.

slopinthebag 7 hours ago | parent | next [-]

Someone with domain knowledge could also just write the code instead of trying to get the stochastic prediction machine to generate it. I thought the whole point was to allow people without said expertise to generate it. After all, that seems to be the promise.

cortesoft 7 hours ago | parent | next [-]

> Someone with domain knowledge could also just write the code instead of trying to get the stochastic prediction machine to generate it.

Well, people with the domain knowledge exist, yet they have not yet written this driver... why not?

Because there is other code those experts want to write, and they don't have time to write it all... but what if they could just give a fairly straightforward prompt and have the LLM do it for them? And if it only took minor tweaks to the prompt to have it write drivers for all the myriad combinations of hardware and software? At that point, there might be enough time to write it all.

Just because people exist that can DO all the work doesn't mean we have enough person-hours to do ALL the work.

dollylambda 6 hours ago | parent [-]

> Because there is other code those experts want to write, and they don't have time to write it all... but what if they could just give a fairly straightforward prompt and have the LLM do it for them?

Then pretty soon they wouldn't be the experts anymore?

cortesoft 6 hours ago | parent [-]

Maybe? But you could make the same argument that programmers today aren't "experts" at computers because they don't know how to build CPUs.

There is no reason to believe you can't gain expertise while still using higher and higher level abstractions. Yes, you will lose some of that low level expertise, but you can still be an expert at the problem set itself.

6 hours ago | parent | prev | next [-]
[deleted]
garciasn 7 hours ago | parent | prev | next [-]

Clearly there wasn't much appetite for someone to do that.

luckydata 7 hours ago | parent | prev [-]

It will be like that at some point soon, just not now. Are you trying to make the point that because this technology is not yet perfect the fact that it can already do so much is unimpressive?

slopinthebag 7 hours ago | parent [-]

Will it happen before or after we get fusion energy? I heard that was coming soon too.

ctoth 7 hours ago | parent | prev [-]

@petcat Is your nickname a description or an instruction?

dude250711 8 hours ago | parent | prev [-]

> The person intentionally didn't put in much effort.

Aren't you just describing every vibe code ever?

Come to think of it, that is probably my main issue with AI art/books etc. They never put in any effort. In fact, even the competition is about putting in the least effort.

jomohke 7 hours ago | parent | prev | next [-]

You're validly critiquing where it is now.

The hype people are excited because they're guessing where it's going.

This is notable because it's a milestone that was not previously possible: a driver that works, from someone who spent ~zero effort learning the hardware or driver programming themselves.

It's not production ready, but neither is the first working version of anything. Do you see any reason that progress will stop abruptly here?

1024core 7 hours ago | parent | next [-]

Not a huge fan of @sama, but he is quoted as saying: this is the worst these models will ever be!

Puts all criticism in a new perspective.

slopinthebag 7 hours ago | parent | next [-]

That's like Bill Gates saying XP is the worst Windows will ever be

usef- 7 hours ago | parent | next [-]

Not Windows: Operating systems. We did get more capable operating systems. The point of the quote is "this is the worst the SOTA will ever be".

If Windows XP were fully supported today I still wouldn't use it, personally, despite having respect for it in its era. The core technologies of newer OSes, e.g. how the sandboxing, security, memory, and driver stacks are implemented, have vastly improved.

slopinthebag 7 hours ago | parent [-]

You're just moving the goal posts unfortunately. The point is that positive progress is never actually guaranteed.

usef- 6 hours ago | parent [-]

Of course not. But I believe your Windows example was implying fundamental tech got worse.

The original "worst" quote is implying SOTA either stays the same (we keep using the same model) or gets better.

People have been predicting that progress will halt for many years now, just like the many years of Moore's law. By all indications AI labs are not running short of ideas yet (even judging purely by externally-visible papers being published and model releases this week).

We're not even throwing all of what is possible on current hardware technology at the issue (see the recent demonstration chips fabbed specifically for LLMs, rather than general purpose, doing 14k tokens/s). It's true that we may hit a fundamental limit with current architectures, but there's no indication that current architectures are at a limit yet.

k1musab1 7 hours ago | parent | prev [-]

Aged like milk.

cactusplant7374 7 hours ago | parent | prev [-]

That assumes he is all knowing.

democracy 2 hours ago | parent | prev | next [-]

>> Do you see any reason that progress will stop abruptly here?

I do. When someone thinks they are building next-generation super software for $20 a month using AI, they conveniently forget that someone else is paying the remaining $19,980 for compute power and electricity.

staplers 7 hours ago | parent | prev | next [-]

People extrapolate from new leaps in invention way too early though, believing these leaps are becoming the standard. Look at cars, airplanes, phones, etc.

After we landed on the moon people were hyped for casual space living within 50 years.

The reality is it often takes much much longer as invention isn't isolated to itself. It requires integration into the real world and all the complexities it meets.

Even more so, we may have AI models that can do anything perfectly, but they will require so much compute that only the richest of the rich are able to use them, and they effectively won't exist for most people.

slopinthebag 7 hours ago | parent | prev [-]

> Do you see any reason progress will stop abruptly here?

Yeah, money and energy. And fundamental limitations of LLMs. I mean, I'm obviously guessing as well because I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯

I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.

selridge 7 hours ago | parent | next [-]

So far the people who bet against scaling laws have all lost money. That does not mean that their luck won’t change, but we should at least admit the winning streak.

slopinthebag 7 hours ago | parent [-]

You mean Moore's law? Which is now dead?

selridge 7 hours ago | parent | next [-]

No I don't mean that. I mean the LLM parameter scaling laws. More importantly, it doesn't matter if I mean that or Moore's law or anything else, because I'm not making a forward looking prediction.
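For concreteness, the scaling laws being referred to here are power-law fits of loss against model and data size, like the one published in the Chinchilla paper (Hoffmann et al., 2022). A rough sketch, using that paper's fitted constants purely as an illustration:

```python
# Chinchilla-style scaling law: predicted loss falls as a power law in
# parameter count N and training tokens D. Constants are the published
# fits from Hoffmann et al. (2022); treat the numbers as illustrative.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up both axes keeps lowering predicted loss; the formula itself
# has no built-in cliff, only diminishing returns toward the floor E.
small = predicted_loss(1e9, 20e9)     # ~1B params, 20B tokens
large = predicted_loss(70e9, 1.4e12)  # ~70B params, 1.4T tokens
print(small > large)  # True
```

The point of the fit is the shape, not the exact constants: returns diminish smoothly rather than stopping at some parameter count.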

Read what I wrote.

What I'm saying is: if you bet AGAINST [LLM] scaling laws--meaning you bet that the output would peter out naturally somehow--you've lost 100% so far.

100%

Tomorrow could be your lucky day.

Or not.

slopinthebag 5 hours ago | parent [-]

This weekend I had 100% success at the blackjack table, until I didn't and lost.

I guess we'll see :)

selridge 4 hours ago | parent [-]

You gonna go read up on some 0% success rate strategies on the way?

What I’m saying is that we act as though claims about these scaling laws have never been tested. People feel free to just assert that any minute now the train will stop. They have been saying that since the stochastic parrots paper.

It has not come true yet.

Tomorrow could be it. Maybe the day after. But it would then be the first victory.

_zoltan_ 7 hours ago | parent | prev [-]

It's not dead. It's enough to look at GB200/GB300 vs Vera Rubin specs.

azakai 6 hours ago | parent | prev | next [-]

At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.

I do agree that exponential progress to AGI is speculation.

conception 6 hours ago | parent | prev | next [-]

You think all AI companies will never release a better model days after they all release better models?

That is a position to take.

empthought 7 hours ago | parent | prev [-]

I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.

I think of it as just another leap in human-computer interface for programming, and a welcome one at that.

nitwit005 6 hours ago | parent [-]

If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.

rayiner 6 hours ago | parent | prev | next [-]

I don’t get this response. This is amazing! What percentage of programmers can even write a buggy FreeBSD kernel driver? If you were tasked at developing this yourself, wouldn’t it be a huge help to have something that already kind of works to get things started?

bluGill 4 hours ago | parent [-]

A fairly high percentage could, but some could start today while others would need a few months of study before they know how to start (and would then take 10x longer than the first person to get it working).

boplicity 7 hours ago | parent | prev | next [-]

Programmers have always been in search of an additional layer of abstraction. LLM coding feeds exactly into this impulse.

etcetera1 7 hours ago | parent | prev | next [-]

> instead of software we will just have the LLM output bespoke code for every single computer interaction.

That's sort of the idea behind GPU upscaling: You increase gaming performance and visual sharpness by rendering games at lower resolutions and use algorithms to upscale to the monitor's native resolution. Somehow cheaper than actually rendering at high resolution: Let the GPU hallucinate the difference at a lower cost.
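The principle can be shown with the simplest possible upscaler, nearest-neighbor sampling. Real techniques like DLSS use learned models, but the basic move, reconstructing output pixels that were never rendered, is the same (this sketch is illustrative, not how any GPU actually implements it):

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2D image by an integer factor using nearest-neighbor
    sampling: each low-res pixel is repeated factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Render "cheap" at 2x2, then fill in a 4x4 "native" output:
low_res = np.array([[0, 1],
                    [2, 3]])
high_res = upscale_nearest(low_res, 2)
print(high_res)
# [[0 0 1 1]
#  [0 0 1 1]
#  [2 2 3 3]
#  [2 2 3 3]]
```

The GPU only computed 4 pixels but displays 16; smarter upscalers just make better guesses about the 12 it invented.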

rozal 6 hours ago | parent | prev [-]

[dead]