butz | 6 hours ago
How about we use all that AI to start doing some serious optimization of existing software? Reduce memory requirements by half, or even more.
TeMPOraL | 5 hours ago
Plenty of people do. AI is one of the few major general technological breakthroughs, comparable to the Internet and electricity. It's potentially applicable to everything, which is why right now everyone is trying to apply it to everything. Including developing new optimization algorithms, optimizing optimizing compilers, optimizing applications, optimizing systems, optimizing hardware, ... Big AI vendors are at the forefront of it, because they're the ones who actually pay for the AI revolution, so any efficiency improvement saves them money.
undersuit | 3 hours ago
Reducing LLM memory contention will just allow LLMs to use more memory.
echelon | 4 hours ago
We are. I'm writing a metric ton of Rust code with Claude Code.
cyanydeez | 6 hours ago
LLMs are intrinsically designed for token production, which is typically inversely related to optimization and efficiency.