razorbeamz 7 hours ago

Yes, ChatGPT and friends are essentially the same thing as the predictive text keyboard on your phone, but scaled up and trained on more data.

XenophileJKO 7 hours ago | parent [-]

So this idea that they replay "text" they saw before is fundamentally kind of wrong. They replay "abstract concepts at varied conceptual levels".

razorbeamz 7 hours ago | parent [-]

The important point I'm trying to reinforce is that LLMs are not capable of calculation. They can give an answer based on the fact that they have seen lots of calculations and their results, but they cannot actually perform mathematical functions.

XenophileJKO 7 hours ago | parent [-]

That is a pretty bold assertion for a meatball of chemical and electrical potentials to make.

razorbeamz 6 hours ago | parent [-]

Do you know what "LLM" stands for? They are large language models, built on predicting language.

They are not capable of mathematics because mathematics and language are fundamentally separated from each other.

They can give you an answer that looks like a calculation, but they cannot perform one. The most convincing LLM systems have even been programmed to recognize that they've been asked to perform a calculation, hand the task off to a calculator, and then receive the calculator's output back as part of the prompt.
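The hand-off pattern described here can be sketched roughly as follows. This is a toy illustration, not any real tool-calling API: the function names and the crude trigger/extraction logic are all hypothetical, standing in for the model's learned decision to invoke a tool.

```python
import ast
import operator

# Safe arithmetic evaluator standing in for the external "calculator" tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Deterministically evaluate a simple arithmetic expression."""
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def answer(question: str) -> str:
    # A real model *learns* to decide when to call the tool; we fake
    # that decision with a naive check for arithmetic operators.
    if any(op in question for op in "+-*/"):
        expr = question.rstrip("?= ").split()[-1]  # naive expression extraction
        result = calculator(expr)
        # The tool's output is fed back to the model as more prompt text.
        return f"The answer is {result}."
    return "(model answers directly)"

print(answer("What is 3+5?"))  # The answer is 8.
```

The point of the pattern: the arithmetic itself is done by ordinary deterministic code, and the language model only routes the request and phrases the result.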

But it is fundamentally impossible for an LLM to perform a calculation entirely on its own, the same way it is fundamentally impossible for an image recognition AI to suddenly write an essay, or for a calculator to generate a photo of a giraffe in space.

People like to think of "AI" as one thing but it's several things.

gf000 5 hours ago | parent | next [-]

What calculations? Do you mean "3+5" or a generic Turing-machine like model?

In either case, this "it's a language model" is a pretty dumb argument to make. You may want to reason about the fundamental architecture, but even that quickly breaks down. A sufficiently large neural network can execute many kinds of calculations. In "one shot" mode it can't be Turing complete, but by the same weird technicality, neither does your computer have an infinite tape. It simply doesn't matter from a practical perspective, unless you actually go "out of bounds" during execution.

50T parameters give plenty of state space to do all kinds of calculations, and you really can't reason about it in a simplistic way like "this is just a DFA".

Let alone when you run it in a loop.

razorbeamz 4 hours ago | parent | next [-]

> What calculations? Do you mean "3+5" or a generic Turing-machine like model?

Either one. An LLM cannot solve 3+5 by adding 3 and 5. It can only "solve" 3+5 by knowing that within its training data, many people have written that 3+5=8, so it will produce 8 as an answer.

An LLM, similarly, cannot simulate a Turing machine. It can produce a text output that resembles a Turing machine based on others' descriptions of one, but it is not actually reading and writing bits to and from a tape.
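For contrast, actually simulating a Turing machine means maintaining explicit state, a head position, and tape writes. A minimal illustrative example (the one-state bit-flipping machine below is purely hypothetical, just to make "reading and writing bits to a tape" concrete):

```python
def run_tm(tape: list, rules: dict, state: str = "A", max_steps: int = 100) -> list:
    """Run a tiny Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, move 'L'/'R', next_state).
    """
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                      # write to the tape
        head += 1 if move == "R" else -1        # move the head
        if head >= len(tape):
            tape.append("_")                    # extend the tape on demand
    return tape

# One-state machine that flips bits left to right until it hits a blank.
flip = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}
print(run_tm(list("1011_"), flip))  # ['0', '1', '0', '0', '_', '_']
```

Every step here is an exact, deterministic state transition, which is the behavior the comment argues a next-token predictor does not literally perform.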

This is why LLMs still struggle to tell you how many r's are in the word "strawberry". They can't count. They can't do calculations. They can only reproduce text based on having examined the mathematical examples in the human corpus.

gf000 4 hours ago | parent [-]

With all due respect, this is just plain false.

The reason "strawberry" is hard for LLMs is that the model sees it as something like str / aw / berry: three opaque token identifiers it can't see inside. Can you write down a random word you just heard in a language you don't speak?
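The effect can be illustrated without any model at all. Assume a toy subword vocabulary (the split and the token IDs below are made up; real BPE tokenizers produce different pieces):

```python
# Toy subword vocabulary: the model receives opaque IDs, not characters.
vocab = {"str": 1012, "aw": 675, "berry": 8771}

def tokenize(word: str) -> list:
    """Greedy longest-match split of a word against the toy vocabulary."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                ids.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise KeyError(f"no token covers {word[i:]!r}")
    return ids

print(tokenize("strawberry"))      # [1012, 675, 8771]
# From those three integers alone there is no way to read off that the
# underlying characters contain three 'r's; at the character level the
# count is trivial:
print("strawberry".count("r"))     # 3
```

So the counting failure is plausibly a consequence of the input representation, not evidence that the network cannot compute anything.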

gpderetta 2 hours ago | parent | prev [-]

> In "one shot" mode it can't be Turing complete, but in a weird technicality neither does your computer have an infinite tape

Nor our brains, in fact.

parasubvert 6 hours ago | parent | prev | next [-]

Mathematics and language really aren't fundamentally separated from one another.

By your definition, humans can't perform calculation either. Only a calculator can.

arw0n 4 hours ago | parent | prev | next [-]

Mathematics is a language. Everything we can express mathematically, we can also express in natural language. The really interesting underlying question is: is there anything worth knowing that cannot be expressed in language? That's the theoretical boundary of LLM capability.

eudoxus 6 hours ago | parent | prev | next [-]

This is a really poor take: trying to put a firewall between mathematics and language implies that something whose conceptual understanding is rooted in language is incapable of reasoning in mathematical terms.

You're also conflating "mathematics" and "calculation". Who cares about calculation? As you say, we have calculators for that.

Mathematics is just logical reasoning and exploration using language, albeit a very specific, dense, concise, low-level one. But you can always take any mathematical formula and express it in "language"; it will just take far more "symbols".

This might be the worst take in this entire comment section. And I'm not even an overly hyped vibe coder, just someone who understands mathematics.

charcircuit 4 hours ago | parent | prev [-]

>it is fundamentally impossible for an image recognition AI to suddenly write an essay

You can already do this today with every frontier model. You can give it an image and have it write an essay about it. Both image patches (small pieces of an image) and text get turned into tokens in the same sequence the LLM learns to model.