Intermernet 17 hours ago |

Over the last few decades I've seen people make the same comment about spell checking, voice recognition, video encoding, 3D rendering, audio effects, and many more. I'm happy to say that LLM usage will only become properly integrated into background workflows once we have performant local models. People are madly trying to monetise cloud LLMs before the inevitable rise of local-only LLMs severely diminishes the market.
tsimionescu 10 hours ago | parent | next [-]

Time will tell, but right now we're not solving the problem of running LLMs by increasing efficiency; we're solving it with massive, unprecedented investments in compute, and in electrical power itself. Companies certainly weren't building nuclear power stations to run their spell checkers, or even their 3D renderers. LLMs are unprecedented in this way.