hparadiz 6 hours ago

10 years from now: "Can you believe they did anything with such a small context window?"

this_user 6 hours ago | parent | next [-]

More likely: "Can you believe they were actually trying to use LLMs for this?"

nipponese 5 hours ago | parent [-]

OSes and software engineers did not end up using less RAM.

gitonup 4 hours ago | parent [-]

Measurable responses to the environment lag behind; Moore's law has been slowing down (edit: and demand has been speeding up, a lot).

From a sustainability standpoint alone, I really hope the parent post's quote comes true. I've personally seen LLMs used over and over to complete the same task when they could have been used once to generate a script, and I'd really like to still be able to afford to own my own hardware at home.

hparadiz 2 hours ago | parent [-]

How many times have we implemented Hello World?

I'm using local models on a six-year-old AMD GPU that would have felt like a technology indistinguishable from magic 10 years ago. I ask it for crc32 in C and it gives me an answer. I ask it to play a game with me. It does. To an isolated human this is like a magic talking box. But it's not magic. It doesn't use more energy than playing a video game, either.

lionkor 5 hours ago | parent | prev | next [-]

10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"

agoodusername63 2 hours ago | parent | next [-]

I would love to live in a world where my coworkers learned from their mistakes.

Is this Human 2.0? I only have the 1.0a beta in the office.

I get the joke, but it really does highlight how flimsy the argument is for humans. IME humans frequently make simple errors everywhere, don't learn from them, and very rarely get things right the first time. Damn. Sounds like LLMs. And those are only getting better. Humans aren't.

cheevly 5 hours ago | parent | prev [-]

Imagine believing humans don't make the same mistakes. You live in a different universe than me, buddy.

recursive 5 hours ago | parent | next [-]

Sometimes we repeat mistakes. But humans are capable of occasionally learning. I've seen it!

saalweachter 4 hours ago | parent [-]

I've always wanted a better way to test programmers' debugging skills in an interview setting. Sometimes just working problems gets at it, but usually only the "can you re-read your own code and spot a mistake" sort of debugging.

Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to fake it well enough on common mistakes in common idioms, which might get you pretty far, and to fall flat on novel code.

The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.

Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.

And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed, and ... maybe it is? They never understand the problem well enough to fix it.

creesch 5 hours ago | parent | prev [-]

I mean, that is not what they are writing, buddy.

mbreese 6 hours ago | parent | prev | next [-]

10 years from now: “what’s a context window?”

sghiassy 6 hours ago | parent [-]

10 years from now: “come with me if you want to live”

Terminator 2 Clip: https://youtu.be/XTzTkRU6mRY?t=72&si=dmfLNDqpDZosSP4M

MattGaiser 5 hours ago | parent | prev | next [-]

I am kind of already at that point. For all the complaining about context windows being stuffed with MCPs, I am curious what people are up to and how many MCPs they have for this to be a problem.

berziunas 6 hours ago | parent | prev | next [-]

“640K ought to be enough for anybody”

hparadiz 5 hours ago | parent [-]

I dunno why you're getting downvoted. This is funny.

smrtinsert 5 hours ago | parent | prev [-]

"That was back when models were so slow and weighty they had to use cloud based versions. Now the same LLM power is available in my microwave"