jruohonen | a day ago
"""
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect. And implicitly related, philosophically:
johnnyanmac | 14 hours ago
> This isn’t hypothetical. This is happening now, in real-world workflows.

Yes, that's part of why AI has its bad rep. It has uses to streamline workflows, but people are treating it like an oracle, when it very, very, very clearly is not. Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.
cmiles74 | a day ago
Anyone using these tools would do well to take this article to heart.
gneuron | a day ago
Reads like it was written by AI.