daoboy | 4 days ago
I'm running Hermes Mistral and the very first thing it did was start hallucinating. I recently started an audio dream journal and want to keep it private, so I set up Whisper to transcribe the .wav files and dump them into an Obsidian folder. The plan was to add a local LLM step to clean up the punctuation and paragraphs. I entered instructions to clean the transcript without changing or adding anything else. Hermes responded by inventing an interview with Sun Tzu about why he wrote The Art of War. When I stopped the process it apologized and said it had misunderstood when I talked about Sun Tzu. I never mentioned Sun Tzu or even provided a transcript, just instructions. We went around with this for a while before I could even get it to admit the mistake, and it refused to identify why it occurred in the first place.

Having to meticulously check for weird hallucinations will be far more time-consuming than just doing the editing myself. The same logic applies to a lot of the areas I'd like to have a local LLM for. Hopefully they'll get there soon.
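For anyone wanting to try the same setup, here's a minimal sketch of the pipeline described above. It assumes the openai-whisper package and a local Ollama server; the model tag, file paths, and prompt wording are placeholders, not what the parent actually used:

    # Minimal sketch: transcribe a .wav with Whisper, then ask a local
    # LLM (via Ollama's HTTP API) to fix punctuation only.
    import json
    import urllib.request

    import whisper  # pip install openai-whisper

    def transcribe(wav_path: str) -> str:
        model = whisper.load_model("base")  # small model; pick per your hardware
        return model.transcribe(wav_path)["text"]

    def clean_punctuation(transcript: str) -> str:
        if not transcript.strip():
            # Never send an empty transcript -- with nothing to edit,
            # the model is free to invent content, as described above.
            raise ValueError("empty transcript")
        prompt = (
            "Fix only punctuation and paragraph breaks in the transcript "
            "below. Do not add, remove, or rephrase any words.\n\n"
            + transcript
        )
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # default Ollama endpoint
            data=json.dumps({
                "model": "nous-hermes2",  # hypothetical tag; use your local model
                "prompt": prompt,
                "stream": False,
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        text = transcribe("dream.wav")
        cleaned = clean_punctuation(text)
        with open("vault/dream-journal.md", "w") as f:  # Obsidian vault folder
            f.write(cleaned)

A cheap sanity check afterwards is to diff the model's output against the raw transcript (difflib in the standard library works) and flag any run of inserted words; that catches exactly the failure mode described here.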
simonh | 4 days ago
It’s often been assumed that accuracy and ‘correctness’ would be easy to implement on computers, because they operate on logic in some sense; it’s originality and creativity that would be hard, or impossible, because they aren’t logical. Science fiction has been full of such assumptions. Yet here we are, and the actual problem is inventing heavy enough training sticks to beat our AIs out of constantly making things up and lying about it.

I suppose we shouldn’t be surprised in hindsight. We trained them on human communicative behaviour, after all. Maybe using Reddit as a source wasn’t the smartest move. Reddit in, Reddit out.