| ▲ | rockemsockem 7 days ago |
| I'm not talking about time saving. AI seems to speed up my searching a bit, since I can get results quickly without having to find the right query and then find a site that actually answers my question, but that's minor, as nice as it is. I use AI in my personal life to learn about things I never would have otherwise, because it makes the cost of finding basic knowledge essentially zero: diet improvement ideas based on a few quick questions about gut function, recently learning how to gauge tsunami severity, and tons of other things. Once you have a few fundamental terms and phrases for a new topic, it's easy to validate the information with some quick googling too. How much have you actually tried using LLMs, and did you just use normal chat or some big, grand, complex tool? I mostly just use chat and prefer to enter my code artisanally. |
|
| ▲ | lisbbb 7 days ago | parent | next [-] |
| How much of that is junk knowledge, though? I mean, sure, I love looking up obscure information, particularly about cosmology and astronomy, but in reality it's not making me better or smarter; it's just kind of "science junk food." It feels good, though. I feel smarter. I don't think I am, though, because the things I really need to work on about myself are getting pushed aside. |
| |
| ▲ | rockemsockem 6 days ago | parent [-] | | For me it's pretty much all knowledge that I'm immediately operationalizing. I occasionally use it to look up actors and stuff too, but most of the time it's information that provides direct value to me. |
|
|
| ▲ | flkiwi 7 days ago | parent | prev | next [-] |
| This is kind of how I use it:
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but that are not particularly valuable
It's a remarkable accomplishment with the potential to change a lot of things very quickly, but right now it (by which I mean the publicly available models) is only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway. |
|
| ▲ | fzeroracer 7 days ago | parent | prev | next [-] |
| Why not just look up the information directly instead of asking a machine that you can never truly validate? If I need information, I can just keyword-search Wikipedia, follow the chain there, and then validate the sources along with outside information. An LLM would actually cost me time, because I would still need to do all of the above anyway, making it a meaningless step. If you skip the above, it's "cheaper," but you're implicitly trusting the lying machine not to lie to you. |
| |
| ▲ | rockemsockem 7 days ago | parent | next [-] | | See my previous statement: > Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too. You're practically saying that looking at the index in the back of a book is a meaningless step. It is significantly faster, so much so that I am able to ask it things that would previously have taken an indeterminate amount of time to research, for just simple information, not deep understanding. Edit: Also, I can truly validate literally any piece of information it gives me. Like I said, it makes validation via Wikipedia or other places easy once you have the right terms, which I may not have known ahead of time. | | |
| ▲ | fzeroracer 7 days ago | parent [-] | | Again, why would you not just use Wikipedia as your index? I'm asking why you would use the index that lies and hallucinates instead of a perfectly good index elsewhere. You're using a machine that ingests and regurgitates things like Wikipedia back to you. Why not skip the middleman entirely? | | |
| ▲ | rockemsockem 7 days ago | parent [-] | | Because the middleman is faster and practically never lies/hallucinates for simple queries, and the middleman can handle vague queries that Google and Wikipedia cannot. It's the same reason you use Wikipedia instead of reading all the citations on Wikipedia. | | |
| ▲ | fzeroracer 7 days ago | parent [-] | | > Because the middleman is faster and practically never lies/hallucinates for simple queries How do you KNOW it doesn't lie/hallucinate? In order to know that, you have to verify what it says. And in order to verify what it says, you need to check other outside sources, like Wikipedia. So what I'm saying is: why bother wasting time with the middleman? "Vague queries" can be distilled into simple keyword searches: if I want to know what a tsunami is, I can simply plug that keyword into a Wikipedia search and skim the page or ctrl-f for the information I want instantly. If you assume it doesn't lie/hallucinate because it was right on previous requests, you fall into exactly the trap that blows your foot off eventually, because sometimes it can and will hallucinate even benign things. | | |
| ▲ | rockemsockem 6 days ago | parent [-] | | I feel like you're coming from a very strange place: you use advanced technology that saves you time and expands your personal knowledge base, while saying that more advanced technology that saves even more time and expands your knowledge base further is useless and a time sink. For most questions, it is far faster to validate a correct answer than to figure out the answer from scratch. Vague queries CANNOT be distilled into simple keyword searches when you don't know where to start without significant time investment, and ctrl-f relies on you and the article sharing the exact same preferred vocabulary for the exact same concepts. I do not assume that LLMs don't lie or hallucinate; I start with the assumption that they will be wrong. Which, for the record, is the same assumption I apply to both websites and human beings. |
| |
| ▲ | lisbbb 7 days ago | parent | prev [-] | | A lot of formerly useful search tools, particularly Google, are just trash now, absolute trash. |
| ▲ | agent_turtle 7 days ago | parent | prev [-] |
| OpenAI is currently valued in the hundreds of billions of dollars. That's an insane number for a product that "speeds up searching a bit". |
| |
| ▲ | rockemsockem 6 days ago | parent [-] | | Did you miss the part where I said it helps me find new knowledge that I wouldn't have otherwise? That is pretty significant in my book. | | |
| ▲ | agent_turtle 4 days ago | parent [-] | | I don't know how you use search, but I often find incredible information that I didn't explicitly search for. How do you quantify such things? How can you say with a straight face that this magic box gives you more relevant information (which may be wrong!) and that it will revolutionize the workforce? | | |
| ▲ | rockemsockem 4 days ago | parent [-] | | I am searching for the incredible information, and I can't find it without the LLM because I don't know the proper terminology ahead of time, and search isn't that good unless you know exactly what you want. |