ajross 2 hours ago
What's interesting to me is that this kind of behavior -- slightly-buffleheaded synthesis of very large areas of discourse with widely varying levels of reliability/trustworthiness -- is actually sort of one of the best things about AI research, at least for me? I'm pretty good at reading the original sources. But what I don't have in a lot of cases is a gut that tells me what's available. I'll search for some vague idea (like, "someone must have done this before") with the wrong jargon and an unclear explanation. And the AI will... sort of figure it out and point me at a bunch of people talking about exactly the idea I just had.

Now, sometimes they're loons and the idea is wrong, but the search will tell me who the players are, what jargon they're using to talk about it, what the relevant controversies around the ideas are, etc. And I can take it from there. But without the AI it's actually a long road between "I bet this exists" and "Here's someone who did it right already".