| ▲ | pluc 5 hours ago |
| AI killed curiosity. At least Google made you search and look at alternatives, AI just gives you solutions, whether right or wrong. In a few years, the cognitive decline will be obvious. The only people who remain curious are the people who actively want to, despite AI, and most of the time against it. Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so. Knowledge used to be power; now knowledge is money and they won't let us have it for much longer. |
|
| ▲ | hrimfaxi 5 hours ago | parent | next [-] |
| AI enables curious people to explore. Why do you say it kills curiosity? If anything, its output is so recognizable that I'd say it kills creativity. |
| |
▲ | pluc 5 hours ago | parent | next [-] | | It enables people to solve, not explore. It's a solution engine, not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity. | | |
| ▲ | agubelu 4 hours ago | parent | next [-] | | Strong disagree. One of my favorite use cases for LLM chatbots is to satisfy random niche curiosities whenever they cross my mind and get pointers for further reading. This often leads to going down some niche rabbit hole and learning some interesting stuff in the process. Whenever I tried the same with Google in the past, more often than not I couldn't find what I was looking for, because I didn't know the correct keywords to search for in order to start getting relevant results. With ChatGPT & co. I can just pose the question in natural language, get results and continue exploring. | |
| ▲ | Brendinooo 4 hours ago | parent | prev | next [-] | | A couple of weeks ago I was interested in how people have interpreted the Tower of Babel narrative over time, so I used Claude to do a bunch of research to identify interpretations over time and look for historical trends. I don't think it "solved" anything, and it all felt more curiosity-driven. It led to a bunch of in-person conversations and followup questions. So I guess I'd say it's more about how you're using the tool and what kinds of problems you're looking to solve with it. A calculator can be dinged for getting effortless answers at every turn or it can be praised for enabling a higher volume of solved math problems and enabling more complex work for a broader set of people. | |
| ▲ | DangitBobby 4 hours ago | parent | prev | next [-] | | It gets me past the non-productive barriers and allows me to explore problems and scenarios I could never have done before due to impossible to justify time cost for myself and expense for my clients. | |
▲ | lemoncookiechip 4 hours ago | parent | prev | next [-] | | That's a deeply cynical way of seeing things. Grabbing a book to search for an answer is no different than being told by someone else that the answer is on page 153, line 6. It's about what you as an individual are seeking from the activity. If you're just copy-pasting answers and you don't internalize what is being said, sure, you're not being curious or, more importantly, learning. This DOES NOT mean that every person who engages with an LLM is doing that, or doing it every time, and just like a search engine or a book can lead you down interesting rabbit holes, so can an LLM; it's just a matter of how fast and to what end. The real issue is the hallucinations, which, for people unfamiliar with a topic, can lead them to believe that what they're being told is fact when it's not. LLMs also like leaving URLs and sources out of their replies to save on tokens if you don't remind them, which is annoying. This whole discussion is a bunch of anecdotal evidence, which is fair, so I'll give my own. I've found myself engaging more with obscure topics that interest me via LLMs than I did with a search engine, because the barrier is lower. I don't have to sift through horribly designed websites filled with fluff that doesn't interest me, many with dozens of scripts trying to run (uBO + NoScript, thumbs up), some demanding that certain JS run just for me to see plain text, some slow to browse with topics hidden under sub-sub-menus. It's annoying and just one of many barriers, language being another. | |
| ▲ | lxgr 4 hours ago | parent | prev | next [-] | | Speak for yourself. Looking at my LLM chat history, about 90% of my questions are focused on understanding systems better, not having it solve a concrete problem for me. Do you never click through to the sources or experimentally test the information presented to you by the LLM? If not, who's stopping you? To me, this seems a bit like a tenured academic complaining about the abundance of research assistants working for them preventing them from properly understanding things anymore. | |
| ▲ | Kon5ole 3 hours ago | parent | prev | next [-] | | I think it just changes the level where you spend your thinking. You think things like "is the accordion a better user experience than the side tabs" instead of "why the f is the third accordion pane empty?" Sure, the curiosity of figuring out where you made the mistake is gone, but that was never very valuable. It's just a detour that forces you to be curious about something else. | |
▲ | hrimfaxi 4 hours ago | parent | prev [-] | | It can enable people to go directly to solutions, but it also enables alternative paths. AI may not be nurturing creativity where it is not present, but it doesn't seem to be responsible for people's disinterest in anything beyond their immediate need. The real problem is that most people either don't see the value in their curiosity or don't have the time to indulge it. Even the language we use: "indulgence," to describe scratching that itch. How funny. Because curiosity is a luxury. | | |
| ▲ | lxgr 4 hours ago | parent [-] | | > curiosity is a luxury. It is indeed. Curiosity, for me, very often stems out of a particular kind of idleness and boredom, paired with a tricky question I can't find an immediate answer to. And I can definitely still be bored that way even with LLMs. |
|
| |
| ▲ | debazel 4 hours ago | parent | prev | next [-] | | Until you explore "too deep" and get your whole account banned for suspicious activity and permanently grief your whole career. | | |
▲ | leetrout 4 hours ago | parent [-] | | Serious fear I have. I brought it up two years ago, and got downvoted when I brought it up again a couple of months ago. There is a story on the front page right now about someone losing their child's family videos to a YouTube ban. We hear about this stuff all the time.
I suspect we are gonna be in somewhat of an arms race with AI products as the bubble grows over the next 18-24 months. This makes me worried about how disadvantaged people are going to be if they lose access to the better platform (whichever that ends up being). Do you think AI is going to be so important that we would benefit from legal protections for access? Or do you think the models and technology will become so small that we'll be able to personalize / decentralize the tech and still have it be useful / competitive? https://news.ycombinator.com/item?id=40784126 | | |
▲ | ivankra 4 hours ago | parent [-] | | Happening already. My new Claude Max account got instabanned after just a few messages asking it to debug some stuff for me that apparently read as a ToS violation. Nothing remotely controversial. The main model didn't even complain; some dumber background censorship model flagged it. |
|
| |
| ▲ | mring33621 4 hours ago | parent | prev | next [-] | | Agree. I have learned so much, so rapidly, over the last 3 years, thanks to these AI tools. These things can be a poisoned chalice, leading to weaker long-term performance, or they can be a force multiplier. It's up to you how you use them. | |
▲ | rich_sasha 2 hours ago | parent | prev [-] | | Eh, dunno. I've been gaslit (gaslighted?) by AI quite a few times. Along these lines: here's a design problem, how do I fix it? Oh, known problem, here's the only sane way of doing it. Then I poke holes, and the AI tells me no, no, do like Computer say. Eventually it relents, tells me I'm right to push back, and does a 180: agreeing with me, adding options, etc. The RL meta-optimisation clearly sometimes pushes it toward "here's one solution, end of story." |
|
|
| ▲ | lxgr 4 hours ago | parent | prev | next [-] |
| > AI killed curiosity. Only if you let yours be killed. There will always be a demand for high-value signal, even though it might not be as easy to find anymore. But then again, has it ever been? > Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so. I have sympathy for that argument when it comes to locked bootloaders, closed-source software etc., but with AI? How? Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code? I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever. |
| |
▲ | pluc 4 hours ago | parent | next [-] | | > Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code? Microsoft owns Copilot and controls GitHub, LinkedIn, etc. Google owns Gemini and controls search results for most of the web. Meta owns whatever their model is called now and controls person-to-person relationships on the web. Etc. It's up to any of them to flip the switch and make AI the default entry point when they decide that their AI isn't gaining enough traction. And then you can just hide the source data as proprietary information. Is it cynical? Sure, but I don't think we can say it's unlikely. | |
| ▲ | thepasch 4 hours ago | parent | prev [-] | | > I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever. This is what gets me every single time. I genuinely don’t think this is a hard realization to come to, and yet, the vast majority of arguments from both sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything. And this is on HN, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d expect to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.” And it seems to be worse than just “no one’s thought it through properly.” I’ve literally had someone show a fundamental incapability to understand the concept. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in any project, not just when it comes to AI. That these tools aren’t just doers, but also searchers. 
That they’re pretty much the best rubber ducky that’s ever been created, and that I argue a rubber ducky is exactly what you should be using for in any contexts that don’t have it automate trivial and testable work. The guy refused to read any of it and, after three walls of text, continued claiming I’m “advocating for the LLM to guide me.” There is some sort of deeply instinctive and intrinsically defensive reflex that a lot of people seem to immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It’s baffling. |
|
|
| ▲ | kingleopold 4 hours ago | parent | prev | next [-] |
| In a few years, the filters they implement in AI models will be insane too. Right now they only block bad content; in the future, information itself will be limited. |
|
| ▲ | wilde 4 hours ago | parent | prev [-] |
| Google killed curiosity. At least libraries made you search and read alternatives. Google just gives you solutions, whether right or wrong. |
| |
| ▲ | amazingamazing 4 hours ago | parent [-] | | Google search doesn't "just give you solutions" | | |
▲ | kroolik 4 hours ago | parent [-] | | It first gives you a page of ads, then a scraped version of the solution that steals content for ads, and then the AMP version of the solution that doesn't work because of JS or whatnot. | | |
|
|