Sufficient Structures for Conscious Computers

Written on June 14, 2016

From Scott Aaronson’s “My reply to Roger Penrose”

At the same time, I also firmly believe that, if anyone thinks that way, the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer that passes the Turing test. It’s their job!

This reminded me of his post trash-talking Tononi’s IIT. But I do have 2 things I can (only barely) articulate as plausibly making our central nervous systems (CNS) relevantly different from digital computers. (Please note that I didn’t say “brain,” because these 2 things would fail in the “brain in a vat” scenario… much as I think general AI will fail if it’s a computer in a vat.)

Both of these are intimately related to the parallelism/connectionism of the CNS, but only in that such architecture is probably a prerequisite for them … a (perhaps only somewhat) parallel architecture is, I think, necessary but not sufficient. The 2 things are:

  1. Circular reasoning and/or
  2. Dynamic modularity.

By circular reasoning, I mean (fundamentally) the ability to conceive and reason about impredicative sets, those defined by quantifying over a totality that includes the very set being defined. This is something human mathematicians do on a regular basis. But, to the best of my limited knowledge, it can only be simulated by digital computers (though analog computers may well be able to do it – I don’t know). Is a simulation good enough? Possibly. One could argue that human brains can only simulate such reasoning as well, using ambiguous placeholders for higher-order structures. Cue shouting matches about intuitionism, unification, etc.
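For concreteness, here is a standard textbook illustration of an impredicative definition (my own example, not one from Aaronson’s post): the supremum of a nonempty, bounded set of reals S is the least of all its upper bounds,

\[
\sup S \;=\; \min\{\, u \in \mathbb{R} \;:\; x \le u \ \text{for all}\ x \in S \,\}.
\]

Since sup S is itself one of the upper bounds u being quantified over, the object is picked out by reference to a totality that already contains it, which is exactly the kind of circularity at issue.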

By dynamic modularity, I mean the ability to (nearly instantaneously) reorganize one’s thinking to include new concepts, primarily new conceptions of old concepts. The simplest version of this is anything like the Necker Cube, which presents a seemingly dualistic way of conceiving/perceiving the lines on the page. At first, you may see it only one way. Then if you (for whatever reason) flip to seeing it the other way, you may find it hard to flip back. After some practice, of course, you can flip back and forth (in vs. out) at will. Seeing it the 1st way is one conception. Seeing it the 2nd way is a 2nd conception. And the ability to flip in and out at will is yet a 3rd conception.

This dynamic ability to reframe, re-ground, re-interpret, and re-build everything we’re thinking is something humans regularly can do (and do), but that digital computers may or may not be able to do. I think the embodied/situated cognition stance may enable a computer (digital or analog, depending on the interface to the milieu) to do this. Otherwise, I simply don’t see a plausible path for doing it.