| ▲ | ggm 17 hours ago |
| I believe in dead salmon, they do. |
|
| ▲ | exe34 16 hours ago | parent | next [-] |
| Thank you for the giggle; I misread this as a statement of faith and a non-sequitur. |
| |
| ▲ | moffkalast 14 hours ago | parent | next [-] |
| I had an fMRI and also believe in dead salmon now; it's a common side effect, but it's worth it for the diagnostic data they get. |
|
| ▲ | oniony 13 hours ago | parent | prev [-] |
| Yeah, really needed the comma on the left side of the parenthesis. |
|
|
| ▲ | lgas 17 hours ago | parent | prev [-] |
| They cause hallucinations in dead salmon? I find that hard to believe. |
| |
| ▲ | ggm 17 hours ago | parent | next [-] |
| https://www.scientificamerican.com/blog/scicurious-brain/ign... |
|
| ▲ | lgas 16 hours ago | parent [-] |
| I'm not 100% sure I'd call that a hallucination, but it's close enough and interesting enough that I'm happy to stand corrected. |
|
| ▲ | bitwize 16 hours ago | parent [-] |
| When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"... |
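The salmon "activation" in the linked piece is the textbook multiple-comparisons artifact: test enough voxels of pure noise at p < 0.05 without correction and some will cross the threshold. A minimal sketch of that effect, hypothetical and not the study's actual pipeline, using numpy and scipy:

    # Hypothetical illustration: many uncorrected tests on pure noise yield
    # spurious "activations"; a multiple-comparisons correction removes them.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels = 10_000   # independent tests, standing in for voxels
    n_samples = 20      # measurements per "voxel", all pure noise
    alpha = 0.05

    noise = rng.normal(size=(n_voxels, n_samples))
    p_values = stats.ttest_1samp(noise, 0.0, axis=1).pvalue

    uncorrected = int((p_values < alpha).sum())            # roughly alpha * n_voxels expected
    bonferroni = int((p_values < alpha / n_voxels).sum())  # roughly 0 expected after correction

    print(f"uncorrected 'activations': {uncorrected}")
    print(f"Bonferroni-corrected 'activations': {bonferroni}")

With 10,000 uncorrected tests at alpha = 0.05 you expect on the order of 500 spurious "activations"; a corrected threshold removes essentially all of them, which is the kind of multiple-comparisons correction the salmon poster was arguing for.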
|
| ▲ | baq 13 hours ago | parent [-] |
| It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck. |
|
| ▲ | K0balt 11 hours ago | parent [-] |
| “Undesirable output” might be more accurate, since there is absolutely no difference in the process of creating a useful output vs. a “hallucination” other than the utility of the resulting data. I had a partially formed insight along these lines: LLMs exist in this latent space of information that has so little external grounding. A sort of dreamspace. I wonder if embodying them in robots will anchor them to some kind of ground-truth source? |
|
| ▲ | furyofantares 17 hours ago | parent | prev [-] |
| Loss of consciousness seems equally unlikely. |
|