blablabla123 3 hours ago

Despite the flashy title, that's the first "sober" analysis of the technology from a CEO that I've read. While not really news, it's also worth mentioning that the energy requirements are impossible to fulfill.

Also, I've been using ChatGPT intensively for months now for all kinds of tasks, and I've tried Claude etc. None of it is on par with a human. The code snippets are straight out of Stack Overflow...

delaminator 3 hours ago | parent | next [-]

Your assessment of Claude simply isn’t true.

Or Stack Overflow is really good.

I’m producing multiple projects per week that would each have taken weeks of work by hand.

bloppe 2 hours ago | parent | next [-]

Would you mind sharing some of these projects?

I've found Claude's usefulness is highly variable, though somewhat predictable. It can write `jq` filters flawlessly every time, whereas I would normally spend 30 minutes scanning the docs because nobody memorizes `jq` syntax. And it can comb through the server logs in every pod of my k8s clusters extremely fast. But it often struggles to make quality code changes in a large codebase, or to write good documentation that isn't just an English translation of the code it's documenting.
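
For a concrete sense of the kind of thing I mean, here's an illustrative one-liner (made up for this comment; the pod name and log fields are hypothetical):

    # assumes JSON-formatted log lines with hypothetical .level, .ts, .msg fields
    kubectl logs my-pod | jq -r 'select(.level == "error") | [.ts, .msg] | @tsv'

It writes filters like that correctly on the first try, which would otherwise send me back to the manual.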

gloosx 27 minutes ago | parent | next [-]

It's always "I'm producing 300 projects in a nanosecond," but it almost never gets as far as actually sharing or deploying them ;)

steve_adams_86 an hour ago | parent | prev [-]

Claude has taught me so much about how to use jq better, and, more generally, far more efficient ways of using the command line. It's great. Ironically, the more I learn, the less I want to ask it to do things.

written-beyond 2 hours ago | parent | prev | next [-]

I'm just as much of an avid fan of LLM code generation as you may be, but I do wonder about the practicality of spending time making projects anymore.

Why build them if others can just generate them too? Where is the value in making so many projects?

If the value is in who can best sell them to people who can't generate them, isn't it just a matter of time before someone else generates one and becomes better than you at selling it?

jstummbillig 2 hours ago | parent [-]

The value is that we need a lot more software, and now that building software has become so much less time-consuming, you can sell it at a different price point to people who could not, or would not, have paid for it previously.

eschaton 2 hours ago | parent [-]

We don’t need more software, we need the right software implemented better. That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

Here’s a hint: Nobody should ever write a CRUD app, because nobody should ever have to write a CRUD app; that’s something that can be generated fully and deterministically (i.e. by a set of locally-executable heuristics, not a goddamn ocean-boiling LLM) from a sufficiently detailed model of the data involved.
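
To make that concrete, here's a toy sketch in Python (the table and column names are hypothetical) of what deterministic generation from a data model looks like:

    # Toy sketch: deterministically generate CRUD SQL from a data model.
    # Table name, column names, and key are hypothetical examples.
    def generate_crud(table, columns, key="id"):
        cols = ", ".join(columns)
        placeholders = ", ".join("?" for _ in columns)
        assignments = ", ".join(f"{c} = ?" for c in columns if c != key)
        return {
            "create": f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
            "read":   f"SELECT {cols} FROM {table} WHERE {key} = ?",
            "update": f"UPDATE {table} SET {assignments} WHERE {key} = ?",
            "delete": f"DELETE FROM {table} WHERE {key} = ?",
        }

    for name, sql in generate_crud("users", ["id", "name", "email"]).items():
        print(f"{name}: {sql}")

Same model in, same code out, every time. No sampling, no GPUs, no data center.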

In the 1970s you could wire up an OS-level forms library to your database schema and then serve literally thousands of users from a system less powerful than the CPU in a modern peripheral or storage controller. And in less RAM, too.

People need to take a look at what was done before in order to feel a proper degree of shame about how things are being done now.

steve_adams_86 an hour ago | parent [-]

> That’s not something LLMs can possibly give us because they’re fucking pachinko machines.

I mostly agree, but I do find them useful for fuzzing out tests and finding issues with implementations. I've moved away from using LLMs for larger architectural sketches because, over longer time scales, I no longer find they actually save time. But I do think they're useful for finding ways to improve correctness and safety in code.

It isn't the exciting and magical thing AI platforms want people to think it is, and it isn't indispensable, but I like having it handy sometimes.

The key is that it still requires an operator who knows something is missing, or that there are still improvements to be made, and how to suss them out. This is far less likely to occur in the hands of people who don't know, in which case I agree that it's essentially a pachinko machine.

blablabla123 2 hours ago | parent | prev | next [-]

Sure, but these are likely just variations on existing things. And even then, the quality is still behind the original.

eschaton 2 hours ago | parent | prev [-]

I produce a lot of shit every week too, but I don’t brag about my digestive system on “Hacker” “News.”

will4274 2 hours ago | parent | prev [-]

> While not really news, it's also worth mentioning that the energy requirements are impossible to fulfill

If you believe this, you must also believe that global warming is unstoppable. OpenAI's energy needs are large compared to the current electricity market, but not so large compared to the current energy market as a whole. Environmentalists usually suggest that the solution to global warming is electrification (converting non-electrical energy use to electrical energy) and then making that electricity clean. OpenAI's energy needs are something like 10% of the current worldwide electricity market, but less than 1% of the current worldwide energy market.

rvnx 2 hours ago | parent [-]

Imagine how big a pile of trash there will be once the current generation of graphics cards used for LLM training becomes outdated. It will crash the hardware market (which is good news for gamers).