niek_pas 8 hours ago:
Can someone tell me what the advantages are of doing this over using, e.g., the ChatGPT web interface? Is it just a privacy thing?
0000000000100 8 hours ago:
Privacy is a big one, but avoiding censorship and reducing costs are the other ones I've seen. Not so sure about the cost argument anymore, though: you'd have to use LLMs a ton before buying brand-new GPUs pays off (hosted models are pretty reasonably priced these days).
explorigin 8 hours ago:
Privacy, offline availability, and software that lasts as long as the hardware can run it.
elpocko 8 hours ago:
Privacy, freedom, huge selection of models, no censorship, higher flexibility, and it's free as in beer.
zarekr 8 hours ago:
This is a way to run open-source models locally. You need the right hardware, but it's a very efficient way to experiment with the newest models, fine-tuning, etc. ChatGPT uses massive models which are not practical to run on your own hardware. Privacy is also a concern for many people, particularly in enterprise settings.
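For illustration, a minimal local-inference sketch using llama-cpp-python (the model file is a placeholder; any instruction-tuned GGUF should work):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Load a quantized GGUF model from disk; n_ctx sets the context window.
    llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}]
    )
    print(out["choices"][0]["message"]["content"])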
JKCalhoun 7 hours ago:
I did a blog post about my preference for offline [1]. LLMs would fall under the same criteria for me. Maybe not so much the distraction-free aspect of being offline, but as a guard against the ephemeral nature of online services. I'm less concerned about privacy, for whatever reason.

[1] https://engineersneedart.com/blog/offlineadvocate/offlineadv...
pletnes 8 hours ago:
You can chug through a big text corpus at little cost.
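As a sketch of what that might look like, batch-processing a folder of text files against a local model with llama-cpp-python (paths, model file, and prompt are made up for the example):

    from pathlib import Path
    from llama_cpp import Llama

    llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=8192)

    # Summarize every document in the corpus; the only cost is local compute time.
    for doc in Path("corpus").glob("*.txt"):
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Summarize in two sentences:\n\n" + doc.read_text()}]
        )
        print(doc.name, "->", out["choices"][0]["message"]["content"])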
priprimer 8 hours ago:
You get to find out all the steps! Meaning you learn more.
throwawaymaths 5 hours ago:
Yeah, but if you've got a GPU you should probably think about using vLLM. Last time I tried llama.cpp (granted, that was several months ago) the UX was atrocious; vLLM basically gives you an OpenAI-compatible API with no fuss. That's saying something, since generally speaking I loathe Python.
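A rough sketch of that workflow (the model name is a placeholder; vLLM serves an OpenAI-compatible endpoint on localhost:8000 by default):

    # Start the server separately, e.g.: vllm serve meta-llama/Llama-3.1-8B-Instruct
    from openai import OpenAI

    # Point the standard OpenAI client at the local vLLM server; the API key is ignored.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "Why run models locally?"}],
    )
    print(resp.choices[0].message.content)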
cess11 7 hours ago:
For work I routinely need to translate confidential documents. Sending those to some web service in a jurisdiction that doesn't even have basic data-protection guarantees is not an option. Putting them through a local LLM is rather efficient, however.
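A minimal offline-translation sketch along those lines, again with llama-cpp-python (model choice and file names are placeholders; translation quality depends heavily on the model you pick):

    from llama_cpp import Llama

    llm = Llama(model_path="models/qwen2.5-7b-instruct.Q4_K_M.gguf", n_ctx=8192)

    def translate(text: str, target: str = "English") -> str:
        # Everything stays on the local machine; nothing leaves the box.
        out = llm.create_chat_completion(messages=[
            {"role": "system", "content": f"Translate the user's text into {target}. Output only the translation."},
            {"role": "user", "content": text},
        ])
        return out["choices"][0]["message"]["content"]

    print(translate(open("contract.txt", encoding="utf-8").read()))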