delaminator · 17 hours ago
> Working with some of these huge models, I can see how AI has some use, especially if it's under my own local control. But it'll be a long time before I put much trust in what I get out of it. I treat it like I do Wikipedia: maybe good for a jumping-off point, but don't ever let AI replace your ability to think critically!

It is a little sad that they gave someone an uber machine and this was the best he could come up with. Question answering is interesting, but it's not the most interesting thing one can do, especially with a home rig.

The realm of the possible:

- Video generation: CogVideoX at full resolution, longer clips; Mochi or Hunyuan Video with extended duration
- Image generation at scale: FLUX batch generation, 50 images simultaneously
- Fine-tuning: actually train something; show LoRA on a 400B model, or full fine-tuning on a 70B

But I suppose "You have it for the weekend" means chatbot go brrrrr and snark.
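For readers unfamiliar with why LoRA makes fine-tuning a 400B model even thinkable on a home rig: instead of updating a full weight matrix, you train a small low-rank update on top of it. A minimal NumPy sketch of the idea, with toy dimensions rather than anything model-sized (the shapes, zero-init of B, and the alpha/r scaling follow the standard LoRA formulation; the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); W itself is never updated
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as a no-op:
# the output equals the frozen model's output.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters vs. full fine-tuning of this one matrix:
print(W.size)           # 8192 params if you fine-tuned W directly
print(A.size + B.size)  # 768 params for the rank-4 adapter
```

The parameter count shrinks by roughly d/r per matrix, which is why adapter training fits in memory budgets that full fine-tuning of the same model never would.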
benjismith · 16 hours ago
> show LoRA on a 400B model, or full fine-tuning on a 70B

Yeah, that's what I wanted to see too.
theshrike79 · 16 hours ago
Yeah, I don't understand why people use LLMs for "facts". You can get those from Wikipedia or a book. Use them for something creative: write a short story on spec, generate images.

Or the best option: give it tools and let it actually DO something, like "read my message history with my wife, find the top 5 gift ideas she might have hinted at, and search for options to purchase them". Perfect for a local model. There's no way in hell I'd feed my messages to a public LLM, but the one sitting next to me that I can turn off the second it twitches the wrong way? Sure.
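The gift-hint idea above is mostly plumbing: the local model decides to call a tool, and the tool does the deterministic work over private data. A toy sketch of such a tool, with a plain function standing in for the model and entirely made-up messages and hint patterns (nothing here is a real messaging API):

```python
import re

# Hypothetical local chat history; in the real setup this would come
# from the user's own message store, never leaving the machine.
MESSAGES = [
    "I really want to try that new pottery class sometime",
    "my headphones died again today :(",
    "ok, see you at 6",
    "that espresso machine at Sam's place was amazing",
]

# Crude heuristics for "this line sounds like a hint" -- a local LLM
# would do this judgment call itself; regexes just keep the sketch testable.
HINT_PATTERNS = [
    r"\bwant to try\b",
    r"\b\w+ died\b",
    r"\bwas amazing\b",
]

def find_gift_hints(messages, limit=5):
    """Tool the model could call: scan chat history for hint-like lines."""
    hits = [m for m in messages if any(re.search(p, m) for p in HINT_PATTERNS)]
    return hits[:limit]

hints = find_gift_hints(MESSAGES)
print(hints)  # logistics like "see you at 6" are filtered out
```

The design point is the privacy boundary: the tool returns only the candidate lines, so even if a later step (say, a product search) does go over the network, the raw history stays local.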