Remix.run Logo
shafyy 11 hours ago

Enough with this analogy. It's flawed on so many levels. First and foremost, stop devaluing humanity and hyping up AI companies by parroting their party line. Second, LLMs don't learn. They can hold only a very limited amount of context, as you know, and every time you have to start over. So fuck no, "teaching" an LLM is nothing like teaching an actual human.

KeplerBoy 11 hours ago | parent | next [-]

It all went south when we started to call it "learning" instead of "fitting parameters".

fxtentacle 10 hours ago | parent | next [-]

"Fitting" is still too generous a word choice, because it implies that it's easy to identify the best solution.

I suggest "randomly adjusting parameters while trying to make things better", as that accurately reflects the "precision" that goes into stuffing LLMs with more data.
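To be fair, "randomly adjusting parameters while trying to make things better" is pretty much what stochastic gradient descent does: pick a random example, nudge the parameters a little in whatever direction shrinks the error on it, repeat. A toy sketch in plain Python (fitting y = w*x to noisy data; all names are illustrative):

```python
import random

random.seed(0)

# Toy data generated from y = 3x plus noise; the "true" parameter is 3.0.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(1, 50)]]

w = 0.0    # start from an arbitrary parameter value
lr = 0.01  # learning rate: how big each nudge is

for _ in range(200):
    x, y = random.choice(data)   # pick a random example (the "stochastic" part)
    grad = 2 * (w * x - y) * x   # gradient of squared error (w*x - y)^2 w.r.t. w
    w -= lr * grad               # nudge w to make that error a bit smaller

# After enough nudges, w ends up near 3.0 -- fitted, not "learned" in any human sense.
```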

bonoboTP 10 hours ago | parent | prev | next [-]

It was called learning already back when the field was called cybernetics and foundational figures like Shannon worked on this kind of stuff. People tried to decipher learning in the nervous system and implement the extracted principles in machines, such as Hebbian learning, the Perceptron algorithm, etc. This stuff goes back to the 40s/50s/60s, so things must have gone south pretty early then.
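Hebbian learning, for reference, is about as simple as these early rules get: "neurons that fire together wire together", i.e. strengthen each weight in proportion to the product of its input and the neuron's output. A minimal sketch (illustrative only, not any specific historical implementation):

```python
# Hebbian rule: dw_i = eta * x_i * y, where y is the neuron's output.
def hebbian_step(weights, inputs, eta=0.1):
    y = sum(w * x for w, x in zip(weights, inputs))  # linear neuron output
    return [w + eta * x * y for w, x in zip(weights, inputs)]

# Repeatedly presenting the same pattern strengthens the weights along it.
w = [0.1, 0.1]
for _ in range(5):
    w = hebbian_step(w, [1.0, 0.0])
# The weight on the active input grows; the inactive one stays put.
```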

Imustaskforhelp 10 hours ago | parent | prev [-]

I agree with you so much. I have seen so many people, even on Hacker News, attribute human qualities to LLMs.

This Grammarly thing seems to be a bastardized form of that, not even sparing the dead.

I'd say the AI companies had some incentive to muddy the waters here.

weird-eye-issue 10 hours ago | parent | prev | next [-]

> very limited amount of context

This isn't 2023 anymore

simianwords 10 hours ago | parent | prev [-]

Absolutely they can learn. You are being emotional, and the original point is correct.

I give the LLM my codebase, and it indeed learns about it and can answer questions.

RichardLake 10 hours ago | parent | next [-]

That isn't learning. It can read things in its context and generate material to help answer further prompts, but that doesn't change the model weights. It is just updating the context.

Unless you are actually fine-tuning models, in which case, sure, learning is taking place.
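The distinction in rough code terms, assuming a generic `model` object with frozen weights (all names here are illustrative, not any real API):

```python
# In-context "learning": nothing about the model changes; the codebase is
# just prepended to every prompt. Drop the context and it's all gone.
def answer_with_context(model, codebase, question):
    prompt = codebase + "\n\n" + question  # weights untouched
    return model.generate(prompt)

# Fine-tuning: the weights themselves move, so the knowledge persists
# across conversations that start with an empty context window.
def fine_tune_step(weights, gradient, lr=1e-5):
    return {name: w - lr * gradient[name] for name, w in weights.items()}
```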

simianwords 9 hours ago | parent | prev [-]

I don't know why you think it matters how it works internally. Whether it changes its weights or not is not important. Does it behave like a person who learns a thing? Yes.

If I showed a human a codebase and they answered questions about it well, I would say the human learned it. The analogy breaks down at some point because of limited context, but "learning" is a good enough word.

RichardLake 8 hours ago | parent [-]

Maybe because I work with a legacy programming language that has far less material in the training data? For me it makes a difference, because the model partly needs to "learn" the language itself and keep that in the context, along with the codebase-specific stuff. For something where the model already knows the language and only needs the codebase-specific stuff, it might feel different.

simianwords 6 hours ago | parent [-]

But my codebase isn't in the training set, yet it learns it and I can ask questions about it.
