bambax 18 hours ago

The problem with LLMs is when they're used for creativity or for thinking.

Just because LLMs are indeed useful in some (even many!) contexts, including coding, especially to get something started or, as in your example, to transcode an existing code base to another platform, doesn't mean they will change everything.

It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

More like AI is the new VBA. Same promise: everyone can code! Comparable excitement -- although the hype machine is orders of magnitude more efficient today than it was then.

eru 17 hours ago | parent | next [-]

I don't know about VBA, but spreadsheets actually delivered (to a large extent) on the promise that 'everyone can write simple programs'. So much so that people don't see creating a spreadsheet as coding.

Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

TeMPOraL 17 hours ago | parent | next [-]

Right. Spreadsheets already delivered on their promise (and then some) decades ago, and the irony is, many people - especially software engineers - still don't see it.

> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

That is still the refrain of corporate IT. I see plenty of comments both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code / asking your software department to make a tool for you.

I guess those who do get it end up working on SaaS products targeting the "shadow IT" market :).

ben_w 16 hours ago | parent | next [-]

>> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.

> That is still the refrain of corporate IT. I see plenty of comments both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code / asking your software department to make a tool for you.

In retrospect, this is also a great description of why two of my employers ran low on investors' interest.

rwmj 15 hours ago | parent | prev [-]

Software engineers definitely do understand that spreadsheets are widely used and useful. It's just that we also see the awful downsides of them - like no version control, being proprietary, and having to type obscure incantations into tiny cells - and realise that actual coding is just better.
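(To make those "obscure incantations" concrete, here's a contrived sketch - the formula and the data are hypothetical, not from any real workbook:)

    # A typical one-cell spreadsheet "incantation" might read:
    #   =IFERROR(VLOOKUP(A2, Prices!$A$2:$B$500, 2, FALSE) * (1 - $D$1), "N/A")
    # The same logic as ordinary code, where it can be named, tested, and diffed:
    def discounted_price(item, prices, discount):
        """Look up an item's price and apply a discount; None if unknown."""
        price = prices.get(item)
        return price * (1 - discount) if price is not None else None

    prices = {"widget": 9.99, "gadget": 24.50}  # stand-in for a Prices sheet
    print(discounted_price("widget", prices, 0.10))    # 8.991
    print(discounted_price("sprocket", prices, 0.10))  # None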

To bring this back on topic: software engineers see AI as a better search tool or a code-suggestion tool on the one hand, but also as having downsides (hallucinating, being used by people to generate large amounts of slop that humans then have to sift through).

TeMPOraL 14 hours ago | parent [-]

> It's just that we also see the awful downsides of them - like no version control, being proprietary, and having to type obscure incantations into tiny cells

Right. But this also tends to make us forget sometimes that those things aren't always a big deal. It's the distinction between solving an immediate problem vs. building a proper solution.

(That such a one-off solution tends to become a permanent fixture in an organization - or household - is unfortunately an unsolved problem of human coordination.)

> and realise that actual coding is just better.

It is, if you already know how to do it. But then we overcompensate in the opposite direction, and suddenly 90% of the "actual coding" turns into dealing with build tools and platform bullshit, at which point some of us (like myself) look back at spreadsheets in envy, or start using LLMs to solve sub-problems directly.

It's actually unfortunate, IMO, that LLMs are so over-trained on React and all kinds of modern webshit - this makes them almost unable to give you simple solutions for anything involving the web, unless you specifically prompt them to go full vanilla and KISS.

rwmj 14 hours ago | parent | next [-]

I'm constantly surprised that no one has mainstreamed version control. I see so many cases where it could be applied: document creation and editing, web site updates, spreadsheets ... even the way that laws are amended in Parliament [1]

[1] https://www.gov.uk/guidance/legislative-process-taking-a-bil... https://www.gov.uk/government/publications/amending-bills-st...

gedy 10 hours ago | parent | prev [-]

> But this also tends to make us forget sometimes that those things aren't always a big deal. It's the distinction between solving an immediate problem vs. building a proper solution.

I agree about "code quality" not being a huge issue for some use cases; however, having worked at places with entrenched spreadsheet workflows (like currently), I think that non-engineers still need help seeing that they don't need a faster horse - i.e., that the task can often be automated away entirely. Many, many times a "spreadsheet" is ironically used for a very inefficient manual task.

TeMPOraL 7 hours ago | parent [-]

> Many, many times a "spreadsheet" is ironically used for a very inefficient manual task.

Right. But spreadsheets and "shadow IT" aren't really about technology - they're about autonomy, about how the organization is structured internally. No one chooses a bad process from the start - spreadsheets are the easiest (and often the only possible) way to solve an immediate problem, and even as they turn into IT horror stories, there is usually no point at which the people using them could make things better on their own. The "quality solutions", conversely, are usually top-down and don't give users much control over the process - instead of adoption, this just breeds resistance.

bambax 17 hours ago | parent | prev | next [-]

True, Excel is in the same category.

6510 15 hours ago | parent | prev [-]

People know which ingredients to use, the ratios, how long to bake and cook them but the design of the kitchen prevents them from cooking the meal? Professional cooks debate which gas tube to use with which adapter and how to organize all the adapters according to ISO standards while the various tubes lay on the floor all over the building. The stove switches off if you try to use the wrong brand of pots. The cupboard has a retina scanner. Eventually people go to the back of the garden and make a campfire. There is no fridge there and no way to wash dishes. They are even using the wrong utensils. The horror!

mettamage 16 hours ago | parent | prev | next [-]

> everyone can code!

I work directly with marketers, and even if you give them something like n8n, they find it hard to be precise. Programming teaches you a "precise mindset" that people don't have when they aren't thinking about tech professionally.

I wonder if seasoned UX designers can code now. They do think professionally about software. I wonder if it's at a deep enough granularity such that they can simply use natural language to get something to work.

MattSayar 7 hours ago | parent | next [-]

Our UX designers have been using Windsurf to prototype things they started in Figma. They seem pretty happy with it. Of course there's a big step in getting it production-ready, but it really smooths the conversation with engineering.

petra 16 hours ago | parent | prev [-]

Can an LLM detect a lack of precision and point it out to you?

TheOtherHobbes 15 hours ago | parent | next [-]

Sometimes, yes. Reliably, no.

LLMs don't have enough of a model of the world to understand anything. There was a paper floating around recently about how someone trained an ML system on orbital dynamics. The result was a system that could calculate orbits correctly, but it completely failed to extract the underlying - simple - math. Instead it basically frankensteined together its own system of epicycles which solved a very narrow range of problems but lacked any generality.

LLM coding has the same problems. Sometimes you get lucky, sometimes you don't. And if you strap on an emulator and test rig and allow the machine to flail around inside it, sometimes working code falls out.

But there's no abstracted model of software development as a process in there, either in theory or in practice. And no understanding of vague goals with constraints and requirements that can be inferred creatively from outside the training data.

antonvs 4 hours ago | parent [-]

> LLMs don't have enough of a model of the world to understand anything.

This is binary thinking, and it's fallacious.

For your orbital mechanics example, sure, it's difficult for LLMs to develop good models of the physical world, in large part because they aren't able to interact with the world directly and have to rely on human texts to describe it to them.

For your software development example, you're making a similar mistake: the fact that their strongest suit is not producing fully working systems doesn't mean that they have no world model, or that their successes are as random as you seem to think ("Sometimes you get lucky, sometimes you don't," "sometimes working code falls out.")

But if you try, for example, asking an LLM to identify a bug in a program, or ask it questions about how a program works, you'll find that from a functional perspective, they exhibit excellent understanding that strongly implies a good world model. You may be taking your own thought processes for granted too much to realize how good they are at this. The idea that "there's no abstracted model of software development as a process in there" is hard to reconcile with the often superhuman responses they're capable of, when you use them in the scenarios they're most effective at.

staunton 15 hours ago | parent | prev | next [-]

An LLM can even ignore lack of precision and just guess what you wanted, usually correctly, unless what you want is very unusual.

TeMPOraL 15 hours ago | parent | prev [-]

It can! Though you might need to ask for it, otherwise it may take what it thinks you mean and run off with it, at which point you'll discover the lack of precision only later, when the LLM gets confused or the result is nothing like what you actually expected.

TeMPOraL 17 hours ago | parent | prev | next [-]

> It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).

I personally agree with Andrew Ng here (and I've literally arrived at the exact same formulation before becoming aware of Ng's words).

I take "new electricity" to mean it'll touch everything people do and become part of every endeavor in some shape or form. Much like electricity. That doesn't mean taking over literally everything; there are plenty of things we don't use electricity for, because alternatives - usually much older alternatives - are still better.

There are still plenty of internal combustion engines on the ground, in the seas, and in the skies, and many of them (mostly at the extremely light and extremely heavy ends of the spectrum) are not going to be replaced by electric motors any time soon. Plenty of manufacturing and construction is still done by means of hydraulic and pneumatic power. We also sometimes sidestep electricity for heating purposes by going straight from sunlight to heat. Etc.

But even there, electricity-based technology is present in some form. The engine may be a humongous diesel-burning colossus, built from heat, metal, and a lot of pneumatics, positioned and held in place by hydraulics - but all the sensors on it are electric, where in the past some would have been hydraulic and the rest wouldn't have existed at all; it's controlled and operated by an electricity-based computing network; it was designed on computers, and so on.

In this sense, I think "AI is the new electricity" is believable. It's a qualitatively new approach to computing that's directly or indirectly applicable everywhere, and that people already try to apply to literally everything[0]. And, much like with electricity, time and economics will tell which of those applications make sense, which were dead ends, and which were plain dumb in retrospect.

--

[0] - And they really did try to stuff electricity everywhere back when it was the hot new thing. Same with nuclear energy a few decades later. We still laugh at how people 100 years ago imagined the future would look... in between crying that we got short-changed by reality.

camillomiller 16 hours ago | parent [-]

AI is not a fundamental physical element. AI is mostly closed and controlled by people who will inevitably use it to further their power and to centralize wealth and control. With this in mind, we acted to make electricity a publicly controlled service. There is absolutely no intention, nor the political strength, to do the same with AI in the West.

ben_w 16 hours ago | parent | next [-]

There are a few levels to this:

• That it's software means that any given model could easily be nationalised by government order, or whatever.

• Everyone quickly copying OpenAI - and more recently DeepSeek - showed that once people know what kind of things actually work, it's not too hard to replicate them.

• We've only got a handful of ideas about how to align* AI with any specific goal or value, and a lot of ways it does go wrong. So even if every model were put into public ownership, it's not going to help, not yet.

That said, if the goal is to give everyone access to an AI that demands 375 W/capita 24/7, that means the new servers would double the global demand for electricity, with all that entails.

* Last I heard (a while back now, so this may have changed): if you have two models, there isn't even a way to rank them as more-or-less aligned vs. anything. Despite all the active research in this area, we're all just vibing alignment, corporate interests included.
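(A quick back-of-the-envelope check of that 375 W/capita figure - a minimal sketch, assuming a world population of ~8 billion and the ~3 TW of average global electricity generation cited further down the thread:)

    # Does 375 W/capita of AI demand really double global electricity use?
    population = 8.0e9        # people (assumption)
    per_capita_ai = 375       # W per person, the figure from the comment above
    electricity_now = 3.0e12  # W; ~3 TW average global electricity (assumption)

    ai_demand = population * per_capita_ai               # total AI draw, in watts
    print(f"AI demand: {ai_demand / 1e12:.1f} TW")       # -> 3.0 TW
    print(f"Relative to today: {ai_demand / electricity_now:.0%}")  # -> 100%, i.e. a doubling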

ijk 6 hours ago | parent [-]

Public control over AI models is a distinct thing from everyone having access to an AI server (not that national AI would need a 1:1 ratio of servers to people, either).

It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there's too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open weight models.

ben_w 5 hours ago | parent [-]

> It's pretty obvious that the play right now is to lock down the AI as much as possible and use that to facilitate control over every system it gets integrated with. Right now there's too many active players to shut out random developers, but there's an ongoing trend of companies backing away from releasing open weight models.

More the opposite, despite the obvious incentive to do as you say in order to have any hope of a return on investment. OpenAI *tried* to make that a trend with GPT-2, on the grounds that it's irresponsible to give out a power tool in the absence of any idea of what "safety tests" even mean in that context, but lots of people mocked them for it, and it looks like only they and Anthropic take such risks seriously. Or possibly just Anthropic, depending on how cynical you are about Altman.

TeMPOraL 16 hours ago | parent | prev [-]

Electricity here is meant as a technology (or a set of technologies) exploiting a particular physical phenomenon - not the phenomenon itself.

(If it were the latter, then you could argue everything uses electricity if it relies in any way on matter being solid, because AFAIK the furthest we've got on the question of "why I don't fall through the chair I'm sitting on" is... "electromagnetism".)

camillomiller 16 hours ago | parent [-]

Either way, it still feels like a stretched and inappropriate comparison at best, or a disingenuous and asinine one at worst.

ben_w 17 hours ago | parent | prev | next [-]

While I'd agree with your first line:

> The problem with LLMs is when they're used for creativity or for thinking.

And while I also agree that it's currently closer to "AI is the new VBA" because of the current domain in which consumer AI* is most useful.

Despite that, I'd also aver that being useful in simply "many" contexts will make AI "the new electricity". Electricity itself is (or recently was) only about 15% of global primary power, about 3 TW out of about 20 TW: https://en.wikipedia.org/wiki/World_energy_supply_and_consum...

Are LLMs 15% of all labour? Not just coding, but overall? No. The economic impact would be directly noticeable if it were that much.

Currently though, I agree. New VBA. Or new smartphone, in that we ~all have and use them, while society as a whole simultaneously cringes a bit at this.

* Narrower AI such as AlphaFold etc. would, in this analogy, be more like a Steam Age factory which had a massive custom steam engine in the middle distributing motive power to the equipment directly: it's fine at what it does, but you have to make it specifically for your goal and can't easily adapt it for something else later.

informal007 16 hours ago | parent | prev [-]

LLMs are helpful for creativity and thinking when you run out of your own ideas.

andybak 16 hours ago | parent [-]

I sometimes feel that a lot of people bringing up the topic of creativity have never spent much time thinking, studying, and self-reflecting on what "creativity" actually is. It's a complex topic, and one that's mixed up with many other complex topics ("originality", "intellectual property", "aesthetic value", "art vs engineering", etc.).

You see a lot of motte-and-bailey arguments in this discussion, as people shift (often subconsciously) between different definitions of key terms and different historical perspectives.

I'd recommend trying to gain at least a passing familiarity with art history and the social history of art/design, etc. Reading a bit of Edward de Bono and Douglas Hofstadter isn't a bad shout either (although it's many years since I've read the former, so I can't guarantee it will stand up as well as my teenage self thought it did).