| ▲ | cyanydeez 5 hours ago |
They already put a model into silicon, and it's crazy fast: https://chatjimmy.ai/ I'm pretty sure there's a three-year design goal starting this year to do the same for any of the Qwen, DeepSeek, etc. models. There's a lot you could do with sped-up models of this quality. It might even turn out that the real bubble is the giant data centers: 80-90% of use cases could be served by a silicon chip with a model baked in, rather than, as you say, bloated SOTA.
| ▲ | LarsDu88 3 hours ago | parent | next |
And that's an ASIC that still operates digitally. Imagine a chip with baked-in weights that does its math in analog, with a 20x reduction in the number of circuit elements needed for a multiplication op. If there's a breakthrough in memristors, you could see another 20x reduction in circuit elements (get rid of memory bottlenecks, do multiplication ops as addition of log-transformed voltages). The ceiling for how far AI can go is ultra high.
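The log-domain trick mentioned above is just the identity log(a) + log(b) = log(a*b): an analog circuit can multiply by summing log-transformed voltages and exponentiating the result. A minimal numerical sketch (the function name is mine, and this models only the math, not any real circuit):

```python
import math

def log_domain_multiply(a: float, b: float) -> float:
    """Multiply two positive values by adding their logarithms,
    mirroring how an analog circuit could sum log-transformed
    voltages instead of using a full digital multiplier array."""
    return math.exp(math.log(a) + math.log(b))

print(log_domain_multiply(3.0, 4.0))  # ~12.0
```

Note the restriction to positive inputs; real analog designs handle signs and dynamic range separately, which is part of why this saves circuit elements but costs precision.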
| ▲ | clickety_clack 5 hours ago | parent | prev |
It would be pretty cool to have interchangeable USB keys with models on them.