murillians 17 hours ago

“ A few weeks ago, I was wrestling with a major life decision. Like I've grown used to doing, I opened Claude”

Is this where we’re at with AI?

nacozarina 16 hours ago | parent | next [-]

People used to cast lots to make major life decisions.

Putting a token predictor in the mix — especially one incapable of any actual understanding — seems like a natural evolution.

Absolved of the burden of navigating our noisy, incomplete, and dissonant thoughts, we can surrender ourselves to the oracle and just obey.

lionkor 11 hours ago | parent [-]

Yes, but it's incredibly dangerous when the operator of the token predictor can give you, personally, different behavior and can influence your decisions even more directly than before.

meindnoch 15 hours ago | parent | prev | next [-]

Some people are incapable of internal thought. They have to verbalize/write down their thoughts so they can hear/read them back, and that's how they make progress. In a way, these people's brains do work like LLMs.

ACCount37 12 hours ago | parent [-]

There is no evidence whatsoever that having or lacking an inner monologue confers any advantage or disadvantage.

For all we know, it's just two paths the brain can take to arrive at the same destination.

ga_to 12 hours ago | parent [-]

The comment (at least by my reading of it) did not cast any judgment on whether this was a good or bad thing.

AlecSchueler 8 hours ago | parent [-]

The response didn't suggest that it did.

hxstroy2 8 hours ago | parent [-]

It absolutely did. Seems like you may be an example of exactly what they're discussing, and it looks disadvantageous to me.

SoftTalker 11 hours ago | parent | prev | next [-]

It does strike me as pretty crazy, but I'm at the other end of the spectrum: I almost never think about using an AI for anything. I've tried Claude, I think, twice (it wasn't very helpful). The only other AI I've ever used is the "AI summaries" that Duck Duck Go sometimes shows at the top of its search results.

nl 15 hours ago | parent | prev | next [-]

If this is surprising to you, then your circle is fairly unusual.

For example, HBR recently reported that the number one use for ChatGPT is "Therapy/companionship":

https://archive.is/Y76c5

Miraltar 17 hours ago | parent | prev | next [-]

Delegating life decisions to AI is obviously quite stupid, but it can really help you lay out and question your thoughts, even if it's obviously biased.

skywhopper 16 hours ago | parent | prev | next [-]

A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don't see their conversational partners as equally aware entities — they're just sounding boards for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots' obsequious enabling, and these folks are instantly hooked.

haar 15 hours ago | parent [-]

Do you just mean external vs internal processing/thinking?

senordevnyc 10 hours ago | parent | prev [-]

I constantly use AI like this: for life decisions, for complicated logistics situations, for technical decisions and architectures, etc. I'm not having it make any decisions for me; I'm just talking through things with another entity that has a vast breadth of knowledge and will almost always suggest a different angle or approach that I hadn't considered.

Here's an example of the kinds of things I've talked with ChatGPT about in the last few weeks:

- I'm moving to a new area and I share custody of my daughter, so this adds a lot of complications around logistics. Talked through all that.

- Had it research niche podcasts and YouTube channels for advertising/sponsorship opportunities for my SaaS.

- Talked through a really complex architecture decision that's a mix of technical info and big tradeoffs for cost and customer experience.

- Did some research and talked through options for buying two new vehicles for the upcoming move, and what kinds work best for my use cases (which are complex).

- Lots and lots of discussions around complex tax planning for 2026 and beyond.

Again, these models have vast knowledge, as well as access to search and other tools to gather up-to-date info and sift through it far faster than I can. Why wouldn't I talk through these things with them? In my experience, with a few guardrails ("double check this" or "search and verify that X..."), I'm finding them more trustworthy than most experts in those fields. For example, I've gotten all kinds of incorrect tax advice from CPAs. Sometimes ChatGPT is out of date, but it's generally pretty accurate around taxes ime, especially if I have it search to verify things.