awesomeusername 4 days ago

I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.

zdragnar 4 days ago | parent | next [-]

It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.

With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.

[0] https://x.com/AnnalsofIMCC/status/1953531705802797070

II2II 4 days ago | parent | next [-]

There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.

With respect to the former, I firmly believe that existing LLMs should not be presented as a source of authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes as much; it is something people have to deal with outside of the technological realm anyhow. For example, if you ask a friend for help, you do so with the understanding that, as a friend, they are helping to the best of their ability. Yet you don't automatically assume they are right: either they do the footwork for you to ensure accuracy, or you check the accuracy of what they tell you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.

I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a search result coupled with an authoritative-sounding name automatically sounds more accurate than a search result that is a blog posting. (I'm not suggesting this is the only criterion people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.

Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or refute them. But, of course, many people are lazy, and nobody has the expertise to analyze the output of an LLM outside of their personal experience and training.

terminalshort 4 days ago | parent | prev | next [-]

You cite one case for LLMs, but I can cite 250,000 a year for licensed doctors doing the same: https://pubmed.ncbi.nlm.nih.gov/28186008/. Bureaucracy doesn't work for anyone but the bureaucrats.

laserlight 4 days ago | parent | next [-]

Please show me one doctor who recommended taking a rock each day. LLMs have a different failure mode than professionals. People are aware that doctors or therapists may err, but I've already seen countless instances of people asking for relationship advice from sycophantic LLMs and thinking that the advice is “unbiased”.

pmarreck 3 days ago | parent | next [-]

> LLMs have a different failure mode than professionals

That actually supports the use case of collaboration, since the weaknesses of both humans and LLMs would potentially cancel each other out.

shmel 4 days ago | parent | prev | next [-]

Homeopathy is a good example. To an uneducated person it sounds convincing enough, and yes, there are doctors prescribing homeopathic pills. I am still fascinated that it exists.

fl0id 4 days ago | parent [-]

That’s actually an example of something different. And since it’s basically a placebo, it only harms people’s wallets (mostly). That cannot be said for random LLM failure modes. And whether it can be prescribed by doctors depends very much on the country.

ivell 4 days ago | parent [-]

I don't think it is that harmless. Belief in homeopathy often delays patients from seeking timely intervention.

pmarreck 3 days ago | parent [-]

Yes. See: Steve Jobs, maybe.

terminalshort 4 days ago | parent | prev [-]

An LLM (or doctor) recommending that I take a rock can't hurt me. Screwing up in more reasonable-sounding ways is much more dangerous.

zdragnar 4 days ago | parent [-]

Actually, swallowing a rock will almost certainly cause problems. Telling your state medical board that your doctor told you to take a rock will have a wildly different outcome than telling a judge that you swallowed one because ChatGPT told you to do so.

Unless the judge has you examined and you are found to be incompetent, they're most likely to just tell you that you're an idiot and throw out the case.

terminalshort 4 days ago | parent [-]

They can't hurt me by telling me to do it because I won't.

pmarreck 3 days ago | parent | prev [-]

Wow. I had never heard this before.

nullc 4 days ago | parent | prev | next [-]

You don't need a "regulated license" to hold someone accountable for harm they caused you.

The reality is that professional licensing in the US often works to shield its communities from responsibility, though its primary function is just preventing competition.

oinfoalgo 4 days ago | parent | prev [-]

I would suspect at some point we will get models that are licensed.

Not tomorrow, but I just can't imagine this not happening in the next 20 years.

fl0id 4 days ago | parent | prev | next [-]

When has technological progress leveled the playing field? Like never. At best it shifted it, as when a machine manufacturer got rich alongside existing wealth. There is no reason for this to go differently with AI, and it’s far from certain that it will become better at anything anytime soon. Cheaper, sure. But then people might see slight improvements from talking to an original Eliza/Markov bot, and nobody advocated using those as therapy.

jakelazaroff 4 days ago | parent | prev | next [-]

Why is that a foregone conclusion?

quantummagic 4 days ago | parent [-]

Because meat isn't magic. Anything that can be computed inside your physical body can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica; there's no reason to think otherwise.

shkkmo 4 days ago | parent | next [-]

> Because meat isn't magic. Anything that can be computed inside your physical body can be calculated in an "artificially" constructed replica

That is a big assumption, and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds, but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction, and some of it is due to the differences in our needs driving the design choices.

Meat isn't magic, but it also isn't silicon.

It's possible that our "meat" architecture depends on low internal latency, low external latency, quantum effects, and/or some other biological quirks that simply can't be replicated directly on silicon-based chip architectures.

It's also possible that brains are chaotic systems that can't be replicated, and each artificial human brain would require equivalent levels of experience and training, in ways that don't make them any cheaper or more available than humans.

It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.

There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "foregone conclusion".

quantummagic 4 days ago | parent [-]

> We can't make machines that fly like birds

Not only can we, they're mere toys: https://youtu.be/gcTyJdPkDL4?t=73

--

I don't know how you can believe in science and engineering, and not believe all of these:

1. Anything that already exists, the universe is able to construct (i.e., the universe fundamentally accommodates the existence of intelligent objects).

2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2.

4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.

This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.

But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with a von Neumann architecture, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.

shkkmo 4 days ago | parent [-]

> 1. Anything that already exists, the universe is able to construct

I didn't claim that we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meat brains.

> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.

I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous to assume that we will, or even can, eventually understand and master every law of nature.

We've already proven that we can't prove every true thing about natural numbers. I think there may well be limits on what is knowable about our universe (at least from inside of it).
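
For reference, a rough formal statement of the result being alluded to (Gödel's first incompleteness theorem); the notation here is standard textbook phrasing, not the commenter's:

    \textbf{Theorem (Gödel I).} Let $T$ be a consistent, recursively
    axiomatizable theory extending Peano arithmetic. Then there exists
    a sentence $G_T$ in the language of arithmetic such that
    $T \nvdash G_T$ and $T \nvdash \neg G_T$.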

> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.

I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.

What I said is that it is not a foregone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely, but not certain.

jakelazaroff 4 days ago | parent | prev | next [-]

Even if we grant that for the sake of argument, there are two leaps of faith here:

- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!

- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!

quantummagic 4 days ago | parent [-]

I don't think you've refuted the point, though. There's no reason to think that the apparatus we employ to animate ourselves will remain inscrutable forever. Unless you believe in a religious soul, all that stands in the way of the scientific method yielding results is time.

> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance

In that eventuality, it really doesn't matter. The point remains: given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure won't be because it is fundamentally impossible; it will be because we ran out of time to continue the effort.

jakelazaroff 4 days ago | parent [-]

No one has given a point to refute? The OP offered up the unsubstantiated belief that AI will some day be better than doctors/therapists/etc. You've added that it's not impossible — which, sure, whatever, but that's not really relevant to what we're discussing, which is whether it will happen to our society.

quantummagic 4 days ago | parent [-]

OP didn't specify a timeline, or that it would happen for us personally to behold, just that it is inevitable. You've correctly pointed out that there are things that can slow or even halt progress, but I don't think that undermines what I at least see as the main point: that there's no reason to believe anything fundamental stands in the way of achieving full "artificial intelligence"; i.e., the doubters are being too pessimistic. Citing the destruction of humanity as a reason why we might fail can be said about literally every other human pursuit as well, which to my mind renders it a rather unhelpful objection to the idea that we will indeed succeed.

jakelazaroff 4 days ago | parent [-]

The article is about Illinois banning AI therapists in our society today, so I think the far more reasonable interpretation is that OP is also talking about our society today — or at least, in the near-ish future. (They also go on to talk about how it would affect different people in our society, which I think also points to my interpretation.)

And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.

quantummagic 4 days ago | parent [-]

Well, I've already overstepped polite boundaries in answering for the OP. Maybe you're right, and he thinks such advancements are right around the corner. On my most hopeful days, I do. Let's just hope that the short term reason for failure isn't a Mad Max hellscape.

treespace8 4 days ago | parent | prev | next [-]

Haven't we found that there is a limit? Math itself is an abstraction. There is always a conversion process (turning the real world into a 1 or a 0) that has an error rate, e.g., 0.000000000000001 gets rounded to 0.
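
As a quick illustration of the kind of rounding being described here, a minimal sketch in Python (the specific constants are illustrative choices, not from the comment):

    # IEEE 754 doubles cannot represent most decimal fractions exactly,
    # so values are rounded the moment they enter the machine.
    print(0.1 + 0.2 == 0.3)   # False: both operands were rounded on entry
    print(0.1 + 0.2)          # 0.30000000000000004
    # At large magnitudes, rounding absorbs whole integers:
    print(1e16 + 1 == 1e16)   # True: the +1 is rounded away entirely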

Every automation I have seen needs human tuning in order to keep working. The more complicated, the more tuning. This is why self-driving cars and voice-to-text still rely on a human to monitor and tune.

Meat is magic. And can never be completely recreated artificially.

pmarreck 3 days ago | parent | prev | next [-]

> Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica.

What's hilarious about this argument (besides the fact that it smacks of the map-territory fallacy: https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation) is that for most of my life (53 years), we've been trying to simulate a nematode or a Drosophila (two of the most-studied creatures of all time; note that we COMPLETELY understand their nervous systems) and have failed to create anything remotely convincing of "life" (note that WE are the SOLE judges of what is "alive"; there is no 100% foolproof mechanistic algorithm to detect "life"; look up the cryptobiosis of tardigrades or wood frogs for an extra challenge; therein lies part of the problem). Worse, we cannot even convincingly simulate a single cell's behavior in any generous span of time (say, using a month of computation for 10 seconds of a cell's "life"). And yes, there have been projects attempting to do these things this entire time. You should look them up. Tons of promise, zero delivery.

> Given enough time, we'll create that replica, there's no reason to think otherwise.

Note how structurally similar this is to a "God of the gaps" argument (just substitute "materialism-given-unlimited-time" for "God").

And yet... I agree that we should continue to try. I just think we will discover something interesting in... never succeeding, ever... while you will continue to refer to the "materialism-given-unlimited-time of the gaps" argument, assuming (key word there) that it must be successful. Because there can't possibly be anything else going on. LOL. Naive.

(Side note, but related: I couldn't help noticing that most of the AI doomers are materialist atheists.)

kevin_thibedeau 4 days ago | parent | prev [-]

It's sort of nice when medical professionals have real emotions and can relate to their patients. A machine emulation won't ever do the same. It will be like a narcissist faking empathy.

Mtinie 4 days ago | parent | prev | next [-]

I agree with you that egalitarian, low-cost care is becoming a very likely possibility.

I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.

guappa 4 days ago | parent | prev | next [-]

I wish I were so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?

sssilver 4 days ago | parent | prev | next [-]

Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?

kolinko 4 days ago | parent | next [-]

With most tech we reach the law of diminishing returns. Sure, there is still variation, but very little:

- the best laptop/phone/TV in the world doesn’t offer much more than the most affordable

- you can get a pen for free nowadays that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)

- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU at least)

- flying/travel is similar

- computer games and entertainment, and software in general

The more we remove human work from the loop, the more democratised and scalable the technology becomes.

socalgal2 4 days ago | parent | prev [-]

Does it matter? If mine is way better than what I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years ago. Does it matter that it's not a $1599 iPhone 16 Pro Max 1TB?

esseph 4 days ago | parent | next [-]

If the claim was that it would level the playing field, it seems like it wouldn't really do that?

olyjohn 4 days ago | parent | prev [-]

A therapist is not a phone. Everybody deserves the best care, not just what is better than we have now. That's a low bar.

intended 4 days ago | parent | prev | next [-]

Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.

II2II 4 days ago | parent | prev [-]

I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.

At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in-depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer, because, yeah, it was employment-related stress, and the only way I could afford the human service was through insurance offered by my employer. While there are significant privacy concerns with LLMs as they stand today, you don't have that direct relationship between whoever is offering the service and the people in your life.

On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate, and the purpose of the exercises is best described as a delaying tactic: they provided a framework for deeper thought between discussions, because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than big exercises that delay the conversation by a couple of weeks, they can offer bite-sized exercises that enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.

Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribing medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well-meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who couldn't care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future, since I suspect the vendors of these models won't push for it until they have established their role in the marketplace.

nullc 4 days ago | parent [-]

Why do you think the lack of time limits is an advantage?

There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.

You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.

And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.

Oh, and, as an aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them, they'll very likely be obligated to hand the transcripts over, and you may have no standing to resist it.