CooCooCaCha 2 days ago

This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

biophysboy 2 days ago | parent | next [-]

Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses per neuron). If AGI is defined as "approximate humans", I think it's gonna be a while.

That said, I don't think computers need to be human to have an emergent intelligence. It can differ in kind, if not in degree.

cmsj 2 days ago | parent [-]

Just to put some numbers on "extremely highly connected" - there are about 90 billion neurons in a human brain, but the connections between them number in the range of 100 trillion.

That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
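Those two figures also back up the "thousands of synapses" claim above; the arithmetic is quick to check:

```python
neurons = 90e9     # ~90 billion neurons in a human brain
synapses = 100e12  # ~100 trillion connections between them

# Average connections per neuron -- on the order of a thousand-plus.
per_neuron = synapses / neurons
print(round(per_neuron))
```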

dmwilcox 2 days ago | parent | prev [-]

I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

Take the same model weights, give it the same inputs, and you get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited compared with what humans are used to.

What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of modern computers is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will; controlled but not controlled, artificially designed but not deterministic.

In fact, that we've made a computer as unreliable as a human at reproducing data (a la hallucinating/making s** up) is an achievement in itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure; write my thesis, not so much).

krisoft 2 days ago | parent [-]

> What's the machine code of an AGI gonna look like?

Right now the guess is that it will be mostly a bunch of multiplications and additions.
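To make that concrete: a neural network layer really is just a matrix of weights multiplied against a vector of activations, summed up. A toy forward pass in plain Python (illustrative only, not any real model):

```python
def matvec(matrix, vector):
    # One layer of a neural net is essentially this:
    # multiply each weight by an input, then add.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[0.1, 0.2],
           [0.3, 0.4]]
activations = matvec(weights, [1.0, 2.0])
# [0.1*1 + 0.2*2, 0.3*1 + 0.4*2] is approximately [0.5, 1.1]
```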

> It makes one illegal instruction and crashes?

And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?

> I jest but really think about the metal.

Ok. I'm thinking about the metal. What should this thinking illuminate?

> The inside of modern computers is tightly controlled with no room for anything unpredictable.

Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically the thermal noise of the sensor. Voilà: your computer now has unpredictability. People who actually make semiconductors can probably come up with even simpler ways to integrate unpredictability right on the same chip we compute with.
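In fact you don't even need to build the capped-CCD rig: commodity operating systems already collect hardware and environmental noise into an entropy pool, exposed via `os.urandom`. A minimal sketch:

```python
import os

def unpredictable_bits(n_bytes=16):
    # os.urandom draws from the kernel's entropy pool, which mixes in
    # hardware/environmental noise -- not a seeded, replayable algorithm.
    return int.from_bytes(os.urandom(n_bytes), "big")

# Two draws of 128 bits each; with overwhelming probability they differ.
print(unpredictable_bits())
print(unpredictable_bits())
```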

You still haven't really argued why you think "unpredictableness" is the missing component, of course. Besides the fact that it just feels right to you.

dmwilcox 2 days ago | parent [-]

Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me. Computers can treat data as code and code as data pretty easily; it's core to several languages (like Lisp). As such, making illegal instructions or violating the straitjacket of the system such an "intelligence" would operate in is likely.

If you could make an intelligent process, what would it think of an operating system kernel (the thing you have to ask for everything: I/O, memory, etc.)? Does the "intelligent" process fear for itself when it's going to get descheduled? What is the bit pattern for fear? Can you imagine an intelligent process in such a place, as a static representation of data in RAM? To write something down, you call out to a library and maybe the CPU switches out to a brk system call to map more virtual memory? It all sounds frankly ridiculous. I think AGI proponents fundamentally misunderstand how a computer works and are engaging in magical thinking and taking the market for a ride.

I think it's less about the randomness and more about the fact that all the functionality of a computer is defined up front: in software, in training, in hardware. Sure, you can add randomness and pick between two paths randomly, but a computer couldn't spontaneously pick to go down a path that wasn't defined for it.

krisoft 15 hours ago | parent [-]

> Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me.

It very much can. Jump scares and deep grief are known to cause heart attacks; it's called stress cardiomyopathy. Or your meatsuit can indirectly do it by ingesting the wrong chemicals.

> If you could make an intelligent process, what would it think of an operating system kernel

Idk. What do you think of your hypothalamus? It can make you unconscious at any time. It in fact makes you unconscious about once a day. Do you fear it? What if one day it won’t wake you up? Or what if it jacks up your internal body temperature and cooks you alive from the inside? It can do that!

Now you might say you don’t worry about that, because through your long life your hypothalamus proved to be reliable. It predictably does what it needs to do, to keep you alive. And you would be right. Your higher cognitive functions have a good working relationship with your lower level processes.

Similarly, for an AGI to be intelligent it needs to have a good working relationship with the hardware it is running on. That means that if the kernel is temperamental and, I don't know, keeps descheduling the higher-level AGI process, then the AGI will malfunction and not appear that intelligent. Same as if you meet Albert Einstein while he is chemically put to sleep. He won't appear intelligent at all! At best he will be just drooling there.

> Can you imagine an intelligent process in such a place, as static representation of data in ram?

Yes. You can’t? This is not really a convincing argument.

> It all sounds frankly ridiculous.

I think what you are doing is looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?

> a computer couldn't spontaneously pick to go down a path that wasn't defined for it.

Can you?