| ▲ | metabrew 7 hours ago |
| I tried the chatbot. Jarring to see a large response come back instantly at over 15k tok/sec. I'll take one with a frontier model please, for my local coding and home AI needs. |
|
| ▲ | grzracz 7 hours ago | parent | next [-] |
| Absolute insanity to see a coherent text block that takes at least 2 minutes to read generated in a fraction of a second. Crazy stuff... |
| |
| ▲ | pjc50 7 hours ago | parent | next [-] | | Accelerating the end of the usable text-based internet one chip at a time. | |
| ▲ | VMG 6 hours ago | parent | prev | next [-] | | Not at all, if you consider the pre-LLM internet: instant responses were the standard expectation when you loaded a website. The slow word-by-word typing is what we got used to with LLMs. If these techniques become widespread, we may grow accustomed to the "old" speed again, where content loads ~instantly. Imagine a content forest like Wikipedia being generated instantly, like a Minecraft world... | |
| ▲ | kleiba 7 hours ago | parent | prev [-] | | Yes, but the quality of the output leaves something to be desired. I just asked about some sports history and got a mix of correct information and totally made-up nonsense. Not unexpected for an 8B model, but it raises the question of what the use case is for such small models. | | |
| ▲ | kgeist 5 hours ago | parent | next [-] | | 8B models are great at converting unstructured data to a structured format. Say you want to transcribe all your customer calls and get a list of the issues they discussed most often. Currently, with the larger models, that takes me hours. A chatbot that tells you various fun facts is not the only use case for LLMs. They're language models first and foremost, so they're good at language-processing tasks (where they don't "hallucinate" as much). Their ability to memorize various facts (with some "hallucinations") is an interesting side effect which is now abused to make them into "AI agents" and whatnot, but they're just general-purpose language-processing machines at their core. | | | |
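The structured-extraction workflow described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual pipeline: the prompt wording, the JSON array format, and helper names like `parse_issues` and `top_issues` are assumptions. A small local model served behind any chat-completion API would supply the replies; only the prompt-building and reply-parsing side is shown here.

```python
import json
from collections import Counter

# Hypothetical prompt template -- the exact wording is an assumption.
# The model is asked to reply with a plain JSON array of short strings.
EXTRACTION_PROMPT = (
    "List the customer issues mentioned in this call transcript "
    'as a JSON array of short strings, e.g. ["billing error"].\n\n'
    "Transcript:\n{transcript}"
)

def build_prompt(transcript: str) -> str:
    """Format the extraction prompt for one transcript."""
    return EXTRACTION_PROMPT.format(transcript=transcript)

def parse_issues(model_output: str) -> list[str]:
    """Pull the JSON array out of a model reply, tolerating surrounding prose."""
    start, end = model_output.find("["), model_output.rfind("]")
    if start == -1 or end == -1:
        return []
    try:
        issues = json.loads(model_output[start:end + 1])
    except json.JSONDecodeError:
        return []
    # Normalize casing so identical issues tally together.
    return [i.strip().lower() for i in issues if isinstance(i, str)]

def top_issues(replies: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Tally parsed issues across many model replies, most common first."""
    counts: Counter[str] = Counter()
    for reply in replies:
        counts.update(parse_issues(reply))
    return counts.most_common(n)
```

Because small models occasionally wrap the JSON in chatty prose or emit malformed output, the parser scans for the outermost brackets and silently drops anything it cannot decode rather than failing the whole batch.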
| ▲ | djb_hackernews 6 hours ago | parent | prev [-] | | You have a misunderstanding of what LLMs are good at. | | |
| ▲ | cap11235 6 hours ago | parent | next [-] | | Poster wants it to play Jeopardy, not process text. | |
| ▲ | kleiba 6 hours ago | parent | prev | next [-] | | Care to enlighten me? | | |
| ▲ | vntok 6 hours ago | parent [-] | | Don't ask a small LLM about precise factual minutiae. Alternatively, ask yourself how plausible it sounds that all the facts in the world could be compressed into 8B parameters while remaining intact and fine-grained. If your answer is that it sounds pretty impossible... well, it is. |
| |
| ▲ | IshKebab 6 hours ago | parent | prev | next [-] | | I don't think he does. Larger models are definitely better at not hallucinating. Enough that they are good at answering questions on popular topics. Smaller models, not so much. | |
| ▲ | paganel 6 hours ago | parent | prev [-] | | Not sure you're correct, as the market is betting trillions of dollars on these LLMs, hoping they'll be close to what the OP expected to happen in this case. | | |
| ▲ | raincole 5 hours ago | parent [-] | | The market didn't throw trillions of dollars into developing Llama 3 8B. What the GP expected to happen already happened around late 2024 to early 2025, when LLM frontends got web search features. It's old tech now. | | |
| ▲ | paganel 4 hours ago | parent [-] | | The GP’s point was about LLMs generally, no matter the interface. I agree that this particular model is (relatively speaking) ancient in the AI world, but go back 3 or 4 years and this (pretty complex “reasoning” at almost-instant speed) would have seemed straight out of a science-fiction book. |
|
|
|
|
|
|
| ▲ | stabbles 7 hours ago | parent | prev | next [-] |
| Reminds me of that solution to Fermi's paradox, that we don't detect signals from extraterrestrial civilizations because they run on a different clock speed. |
| |
| ▲ | dintech 6 hours ago | parent | next [-] | | Iain M Banks’ The Algebraist does a great job of covering that territory. If an organism had a lifespan of millions of years, it might perceive time and communication differently from, say, a house fly or us. | |
| ▲ | xyzsparetimexyz 6 hours ago | parent | prev [-] | | :eyeroll: |
|
|
| ▲ | pennomi 2 hours ago | parent | prev | next [-] |
| Yeah, feeding that speed into a reasoning loop or a coding harness is going to revolutionize AI. |
|
| ▲ | an hour ago | parent | prev [-] |
| [deleted] |