| ▲ | MPSimmons 6 days ago |
| The crux is how big the L is in the local LLMs. Depending on what it's used for, you can actually get really good performance on topically trained models when leveraged for their specific purpose. |
|
| ▲ | rickdeckard 6 days ago | parent [-] |
| There's a lot of L's in LLLM, so overall it's hard to tell what you're trying to say... Is it 'Local'? 'Large'? 'Language'? |
| |
| ▲ | fennecfoxy 5 days ago | parent | next [-] |
| Clearly the Large part, given the context... LLMs usually miss stuff like this, funnily enough. |
| ▲ | touristtam 5 days ago | parent | prev | next [-] |
| Do you see the C for Cheap in there? Me neither. |
| ▲ | rickdeckard 5 days ago | parent [-] |
| Sorry, I'm not following. Cheap in terms of what, hardware cost? From Apple's point of view a local model would be the cheapest possible to run, as the end-user pays for both the hardware and the consumption... |
| |
| ▲ | triceratops 5 days ago | parent | prev [-] |
| Username checks out. |
|