tracerbulletx 2 days ago

Ugh, you just fancy auto-completed a sequence of electrical signals from your eyes into a sequence of nerve impulses in your fingers to say that. And how do I know you're not hallucinating? Last week a different human told me an incorrect fact and they were totally convinced they were right!

adamredwoods 2 days ago | parent | next [-]

Humans base their "facts" on consensus-driven education and knowledge. Anything that falls into the range of "I think this is true," "I read this somewhere," or "I have a hunch" is more acceptable from a human than from an LLM. Humans are also more likely to hedge their uncertain answers with qualifying phrasing. LLMs can't do this; they don't have a way to track which of their answers are possibly incorrect.

deadbabe 2 days ago | parent | prev [-]

The human believes it was right.

The LLM doesn’t believe it was right or wrong. It doesn’t believe anything any more than a mathematical function believes 2+2=4.

tracerbulletx 2 days ago | parent | next [-]

Obviously LLMs are missing many important properties of the brain, like spatial, temporal, and chemical factors, as well as the many interconnected feedback networks between different types of neural networks that go well beyond what LLMs do.

Beyond that, they are the same thing. Signal Input -> Signal Output

I do not know what consciousness actually is so I will not speak to what it will take for a simulated intelligence to have one.

Also, I never used the word "believes"; I said "convinced." If it helps, I can say it "acted in a way as if it had high confidence in its output."

cratermoon a day ago | parent [-]

Obviously sand is missing many important properties of integrated circuits, like semiconductivity, electric interconnectivity, transistors, and p-n junctions.

Beyond that, they are the same thing.

istjohn 2 days ago | parent | prev [-]

Can you support that assertion? What's your evidence?

cratermoon a day ago | parent [-]

not the OP but https://www.tandfonline.com/doi/abs/10.1080/0951508070123951...