mikert89 5 hours ago

There's no limit to the algorithms. People don't understand yet. They can learn the whole universe with a big enough compute cluster. We built a generalizable learning machine.

totaa 4 hours ago | parent | next [-]

The question is: will we experience resource constraints before we get there? What if the step up to post-scarcity is gated by a compute level just out of our reach?

mikert89 4 hours ago | parent [-]

Human ingenuity will solve this.

__loam 4 hours ago | parent [-]

Or we'll have ecological collapse.

teaearlgraycold 4 hours ago | parent | prev [-]

Not sure if this is satire.

Edit: What we have built is a natural language interface to existing, textually recorded information. Transformers cannot learn the whole universe because the universe has not yet been recorded into text.

lukeschlather 3 hours ago | parent | next [-]

Transformers operate on images and a variety of sensor data. They can also operate completely on non-textual inputs and outputs. I don't know what the ceiling on their capabilities is, but the complaint that they only operate on text seems just obviously wrong. There are numerous examples; one is meteorological forecasting, which ingests a variety of time-series sensor inputs and produces outputs such as time-series temperature maps. https://www.nature.com/articles/s41598-025-07897-4
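
To make that concrete, here's a minimal sketch (PyTorch; the names and shapes are mine, not from the paper) of a transformer that never touches text — it embeds raw sensor readings per timestep instead of looking up token embeddings:

    # Minimal sketch: a transformer over raw sensor readings, no text anywhere.
    # Positional encoding omitted for brevity; names/shapes are illustrative.
    import torch
    import torch.nn as nn

    class SensorForecaster(nn.Module):
        def __init__(self, n_sensors=8, d_model=64, horizon=24):
            super().__init__()
            # Replaces the token-embedding lookup used for text: project each
            # timestep's vector of sensor readings into the model dimension.
            self.embed = nn.Linear(n_sensors, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, horizon)  # e.g. future temperatures

        def forward(self, x):                # x: (batch, time, n_sensors)
            h = self.encoder(self.embed(x))  # (batch, time, d_model)
            return self.head(h[:, -1])       # forecast from the last timestep

    model = SensorForecaster()
    past = torch.randn(2, 96, 8)  # 96 timesteps of 8 sensor channels
    print(model(past).shape)      # torch.Size([2, 24])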

0x3f 4 hours ago | parent | prev | next [-]

Based on a glance at their other comments: not satire.

firecall 3 hours ago | parent | prev | next [-]

AFAIK the data does not need to be text.

teaearlgraycold 2 hours ago | parent [-]

Well, diffusers are trained unsupervised on raw pictures. I don't know how they train multi-modal LLMs on images, but yes, obviously they are consuming media other than just text. I don't think, though I would be happy to be corrected, that models glean much of their "knowledge" from non-textual training data.
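
For what it's worth, my rough understanding of the common recipe (happy to be corrected here too): a pretrained vision encoder turns an image into patch embeddings, a small trainable projector maps those into the LLM's token-embedding space, and the LLM is trained with the ordinary next-token loss on paired image/text data. A toy sketch with made-up shapes:

    import torch
    import torch.nn as nn

    d_vision, d_llm = 1024, 4096  # illustrative dimensions

    # A frozen pretrained image encoder (e.g. a ViT) would produce these
    # patch embeddings; often only the projector is trained at first.
    patch_embeds = torch.randn(1, 576, d_vision)   # 576 patches for one image

    projector = nn.Linear(d_vision, d_llm)         # vision -> LLM token space
    image_tokens = projector(patch_embeds)         # (1, 576, d_llm)

    text_tokens = torch.randn(1, 32, d_llm)        # embedded caption tokens
    # The LLM sees [image tokens, caption tokens] as one sequence and is
    # trained with the usual next-token loss on the caption part.
    sequence = torch.cat([image_tokens, text_tokens], dim=1)
    print(sequence.shape)                          # torch.Size([1, 608, 4096])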

mikert89 32 minutes ago | parent [-]

You couldn't be more wrong.

supliminal 4 hours ago | parent | prev | next [-]

It’s more than likely not.

erelong 4 hours ago | parent | prev | next [-]

Poe's (c)law?

bryogenic 4 hours ago | parent [-]

Poe’s (C)law: The more absurd AI-generated content becomes, the more likely people are to believe it is real.

alfalfasprout 4 hours ago | parent | prev [-]

100% agreed. Sadly, there are lots of people out there running on "trust me bro, just need more compute". Hopefully we don't consume all the planet's resources trying.

xvector 4 hours ago | parent [-]

I reevaluated my priors long ago when I saw that scaling laws show no sign of stopping, no sign of plateau.

Strangely, some people on HN seem to desperately cling to the notion that it's all going to come to a halt. This is unscientific. What evidence do you have - any evidence - that the scaling laws are due to come to an end?

0x3f 4 hours ago | parent | next [-]

All the curves have been levelling off as expected. Not really sure what you're talking about.

solenoid0937 3 hours ago | parent [-]

They have not; every successful pre-train of late has shown performance increases greater than what the scaling laws predict.

0x3f 3 hours ago | parent [-]

Those gains come from architecture, data quality, etc. Scaling laws relate only to data volume and compute, holding other factors constant.
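
For reference, the published form is a power law in exactly those two quantities. A quick sketch using the Chinchilla fit (Hoffmann et al. 2022; the constants are their fitted values, treat them as illustrative):

    # loss = E + A / N**alpha + B / D**beta
    # N = parameter count, D = training tokens (everything else held constant).
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(N, D):
        return E + A / N**alpha + B / D**beta

    # 10x-ing params and data keeps lowering loss, by ever-smaller amounts:
    for N, D in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
        print(f"N={N:.0e}, D={D:.0e} -> predicted loss {predicted_loss(N, D):.2f}")

Note the irreducible term E: on these fits the curve never flattens to zero slope on a log-log plot, but the absolute gain per 10x keeps shrinking, which is probably why "it keeps scaling" and "it's levelling off" can both look true depending on how you plot it.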

rishabhaiover 4 hours ago | parent | prev | next [-]

I suspect it's not that people don't see the progress; it's that they don't fully trust laws that aren't truly backed by physics, the way the transistor laws are. Empirically, we see that scaling works and continues to work.

esafak 2 hours ago | parent [-]

https://en.wikipedia.org/wiki/Neural_scaling_law

skybrian 3 hours ago | parent | prev | next [-]

Why should we have strong priors in either direction? Maybe it will keep scaling for decades like Moore's law. Maybe not.

teaearlgraycold 3 hours ago | parent | prev | next [-]

I’d like to see something that indicates models are getting better without the need for more training data. I would expect most gains are coming from more and better-labeled data. We’re racing towards a complete encyclopedia of human knowledge. If we get there, that’s only a drop in the bucket of all knowable things.

shimman 3 hours ago | parent | prev | next [-]

Bro, the planet is literally experiencing a climate disaster, and you think the solution is to create more systems that are misaligned with the ecosystem humans depend on?

I guess the great filter is a real thing and not just a thought experiment.

xvector 3 hours ago | parent [-]

I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that yields actual productivity gains (and could potentially solve the very climate crisis you complain about).

teaearlgraycold 3 hours ago | parent [-]

Completely agree. Meat should be priced to include externalities. People can get used to beans. Beans are great!

FridgeSeal 3 hours ago | parent | prev [-]

The issue people have isn’t some interpretation of scaling laws; it’s whether the planet’s ecology is going to be able to sustain this endeavour.

I shouldn’t have to say this out loud, but if the environment collapses, we will die, and no amount of “just a bit more scaling bro, just think of the gains” will matter.

xvector 3 hours ago | parent [-]

People's voluntary dietary choices cause far more suffering and ecological damage than AI, and for much less return or economic output. But you tell people to switch to plant-based foods and they lose their shit.

ori_b 10 minutes ago | parent [-]

Yes. There's more than one thing that needs to change if we're going to make it through this.