GaryBluto | 16 hours ago:

> I think you halucinated this up. (Quote from original comment, pre malicious-edit)

No point in responding to a troll, but for the other people who may be reading this comment chain: he's used LLMs for various tasks. Not to mention that he founded TextSynth, an entire service that revolves around them.

https://textsynth.com/
https://bellard.org/ts_sms/
yeasku | 16 hours ago (reply):

[flagged]
dwaltrip | 15 hours ago (reply):

> TextSynth provides access to large language, text-to-image, text-to-speech or speech-to-text models such as Mistral, Llama, Stable Diffusion, Whisper thru a REST API and a playground. They can be used for example for text completion, question answering, classification, chat, translation, image generation, speech generation, speech to text transcription, ...

???
simonw | 15 hours ago (reply):

You're confused. The compression algorithm was something different. TextSynth is an LLM inference server, similar to (but older than) llama.cpp.
yeasku | 14 hours ago (reply):

Creating llama.cpp-like software is not using LLMs to develop software either.
[deleted] | 15 hours ago
|
|
|