peterlk 5 hours ago

Modern AI is a miracle. The math that makes it work is beautiful and really impressive. For example, if you wanted to map all knowledge on earth, how would you do it? AI answers that question by building a high-dimensional vector space of embeddings, and traversing that space moves you through a topology of basically every concept humans have.
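A minimal sketch of that idea: concepts become vectors, and "nearness" in the space tracks relatedness. The vectors and words below are invented for illustration (real embeddings have hundreds or thousands of dimensions and come from a trained model), but the distance measure is the standard one.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- values are made up for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.2, 0.1, 0.7]),
    "apple": np.array([0.0, 0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 for vectors pointing the same way,
    # near 0.0 for unrelated (orthogonal) directions.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # related concepts: high
print(cosine(embeddings["king"], embeddings["apple"]))  # unrelated: low
```

Traversing the space just means moving between such vectors; nearby points are nearby concepts.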

Or another thought: why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it’s still much better than what you might expect from a Markov model of n-grams.
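For contrast, here is roughly what that baseline looks like: a bigram Markov model that only knows which word tends to follow which. The tiny corpus is invented for illustration. It produces locally plausible strings but cannot chain "if a then b, if b then c" into a conclusion.

```python
import random
from collections import defaultdict

# A plain bigram Markov model -- the "parrot" baseline.
corpus = "if a then b . if b then c . a is true .".split()

follow = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follow[word].append(nxt)

def generate(start, n=8, seed=0):
    # Sample a chain by repeatedly picking a word seen after the last one.
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = follow.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("if"))  # fluent-looking word salad, no inference performed
```

That an LLM does substantially better than this on multi-step puzzles is the surprising part.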

Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.

So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, the deeper technical work will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.

jakelsaunders94 4 hours ago | parent | next [-]

This is a really interesting take, and maybe shows that I haven't been thorough enough with my reading. My guess is that the deep technical articles are few and far between and the higher-level 'hot takes' are what fill the room. Do you have any recommendations for interesting places to start?

peterlk 3 hours ago | parent [-]

My favorites are the micrograd series by Andrej Karpathy on youtube [0], and “Why Deep Learning Works Unreasonably Well” [1]

The greats on youtube are also worth watching: 3B1B, numberphile, etc.

[0] https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9Gv... [1] https://youtu.be/qx7hirqgfuU?si=8zmrbazuvnz379gk

Chinjut 4 hours ago | parent | prev | next [-]

Why is it that a stochastic parrot can solve logic puzzles consistently and accurately?

peterlk 3 hours ago | parent [-]

Attention is all you need…?

The short answer, as far as I’m aware, is that no one really knows. The longer answer is that we have a lot of partial answers that, in my mind, basically boil down to: model architectures trace a walk through the high-dimensional vector space of concepts, and we’ve tuned them to land on the right answer. The fact that they do so consistently says something about how we encode logic in language and about the effectiveness of these embedding/latent spaces.
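The mechanism behind each step of that walk is scaled dot-product attention from "Attention Is All You Need": each token's output is a weighted blend of every token's value vector, with weights set by similarity. A minimal sketch (the three 2-dimensional token vectors are made up for illustration):

```python
import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V -- scaled dot-product attention.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy token vectors: tokens 0 and 1 point in similar directions, token 2 doesn't.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])

out, w = attention(X, X, X)
print(np.round(w, 2))  # each row sums to 1; similar tokens attend to each other
```

Each output row is nudged toward the tokens it resembles, which is one concrete sense in which the model "moves" through the latent space.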

bigstrat2003 2 hours ago | parent | prev [-]

> Or another thought; why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%...

It can't. As you say in the very next sentence. If it isn't solving any given puzzle with a 100% success rate, but randomly failing, then it isn't consistent.