KoolKat23 5 days ago

It's still early stages, that is why.

It is not yet good enough or there is not yet sufficient trust. Also there are still resources allocated to checking the code.

I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit. It's splitting hairs and our computers are powerful enough now that it doesn't matter.

Immateriality has abstracted those particular few lines of code away.

LandR 5 days ago | parent | next [-]

>> I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit.

I do. This sort of attitude is how we have machines more powerful than ever yet everything still seems to run like shit.

const_cast 5 days ago | parent [-]

This is barely related but I bet that the extra 70 MB of RAM isn't even waste - it's probably an optimization. It's possible they're spinning up a JS VM preemptively so when you do navigate you have a hot interpreter for the inevitable script. Maybe they allocate memory for the DOM too.
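
As a rough illustration of that "pay memory up front, save latency later" pattern (purely a hypothetical sketch, not Brave's actual code; PrewarmedWorkerPool and the script path are made-up names):

    // Hypothetical: keep workers warm before they're needed instead of
    // creating them on demand. Costs RAM immediately, saves startup latency.
    class PrewarmedWorkerPool {
      private idle: Worker[] = [];

      constructor(size: number, scriptUrl: string) {
        // Allocate workers up front; this memory shows up as "idle" usage.
        for (let i = 0; i < size; i++) {
          this.idle.push(new Worker(scriptUrl));
        }
      }

      // Handing one out later is instant: no interpreter startup on navigation.
      acquire(): Worker | undefined {
        return this.idle.pop();
      }

      release(worker: Worker): void {
        this.idle.push(worker);
      }
    }

    // A new-tab page might hold something like this in the background,
    // which a process monitor reports as "wasted" memory.
    const pool = new PrewarmedWorkerPool(2, "/scripts/tab-runtime.js");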

KoolKat23 5 days ago | parent [-]

Probably the case. I felt bad using this as an example since I don't know the specifics, but I thought it was an easy way to convey my point (sorry if so, Brave developers).

ncruces 5 days ago | parent | prev | next [-]

> It's still early stages, that is why.

Were we advised to check compiler output every single time "in the early days"?

No, that's not the difference.

A compiler from whatever high/low level language is expected to translate a formal specification of an algorithm faithfully. If it fails to do so, the compiler is buggy, period.

An LLM is expected to understand fuzzy language and spit out something that makes sense.

It's a fundamentally different task, and I trust a human more with this. Certainly, humans are judged by their capability to do this: apply common sense, ask for necessary clarification, and question what they're being asked to do.

rafterydj 5 days ago | parent | prev | next [-]

I feel like I'm taking crazy pills or misunderstanding you. Shouldn't it matter that they are using 70mb of RAM more or less totally wastefully? Maybe not a deal breaker for Brave, sure, but waste is waste.

I understand the world is about compromises, but all the gains of essentially every computer program ever could be summed up as the accumulation of small optimizations. Likewise, the accumulation of small wastes kills legacy projects more than anything else.

Mtinie 5 days ago | parent | next [-]

It could matter, but what isn't clear to me is whether 70MB is wasteful in this specific context. Maybe? Maybe not?

Flagging something as potentially problematic is useful, but without more information about the trade-offs being made, this may already be an optimized way to do whatever Brave is doing that requires the 70MB of RAM. Perhaps the non-optimal way it previously did it required 250MB of RAM, and this is a significant improvement.

5 days ago | parent [-]
[deleted]
KoolKat23 5 days ago | parent | prev [-]

Yes, it can be construed as wasteful. But it's exactly that, a compromise. Could the programmer spend their time better elsewhere, generating more value? Not doing so is also wasteful.

Supply and demand will decide what compromise is acceptable and what that compromise looks like.

ToucanLoucan 5 days ago | parent | prev | next [-]

> It's still early stages, that is why.

I have been hearing (reading?) this for a solid two years now, and LLMs were not invented two years ago: they are ostensibly the same tech as they were back in 2017, with larger training pools and some optimizations along the way. How many more hundreds of billions of dollars is it reasonable to throw at a technology that has never once exceeded the lofty heights of "fine"?

At this point this genuinely feels like silicon valley's fever dream. Just lighting dumptrucks full of money on fire in the hope that it does something better than it did the previous like 7 or 8 times you did it.

And normally I wouldn't give a shit, money is made up and even then it ain't MY money, burn it on whatever you want. But we're also offsetting any gains toward green energy by standing up these stupid datacenters everywhere to power this shit, not to mention the water requirements.

SamPatt 5 days ago | parent | next [-]

The difference between using Cursor when it launched and using Cursor today is dramatic.

It was basically a novelty before. "Wow, AI can sort of write code!"

Now I find it very capable.

player1234 3 days ago | parent [-]

Trillions different?

KoolKat23 5 days ago | parent | prev [-]

I know from my own use case that it went from Gemini 1.5 being unusable to Gemini 2.0 being usable, so two years makes a big difference. It's out there right now, being used in business, making money. This is tangible.

I suspect there's a lot more use out there generating money than you realize. There's no moat in using it, so I'm pretty sure it's kept on the downlow for fear of competitors catching up (which is quick and cheap to do).

How far can one extrapolate? I defer to the experts actually making these things and to those putting money on the line.

vrighter 4 days ago | parent | prev | next [-]

I hate this "early stages" argument. It either works, or it doesn't. If it only works sometimes, that's called "alpha software" and it should not be released and hyped as a finished product. The early days of the GUI paradigm started with a fully working GUI at release. Windows didn't sometimes refuse to open. The OS didn't sometimes open the wrong program. The system didn't sometimes hallucinate a second cursor. The system worked, and then it was shipped.

The "early stages" argument means "not fit for production purposes" in any other case. It should also mean the same here. It's early stages because the product isn't finished (and can't be, at least with current knowledge)

throwaway1777 5 days ago | parent | prev | next [-]

[dead]

5 days ago | parent | prev [-]
[deleted]