closures

May 13, 2015

I debated whether this entry should go here or in my personal log, which contains things one should not talk about in a professional context. As I age, I notice the line between professional and personal blurs and clarifies, depending on the context. So I left the decision up to whichever had the more recent entry. And here we are.

This post on an analogy between Singularity doomsayers and skeptical theists evoked a common whipping boy of mine: the idealism of closure (or the closure that is idealism). By “closure”, I basically mean the computer science concept: a function packaged together with the environment that binds its variables. For most purposes, though, I extend this to the bound elements of any conceivable context. So, for example, when you’re arguing with someone about the meaning of the word “God”, there is no closure at all, because that word is so vague as to be useless; every variable is a free variable. (Perhaps it’s better to say it has the trivial closure rather than no closure.)
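For the concrete computer-science sense, here is a minimal Python sketch (the names are mine, purely for illustration): a closure binds some variables from its defining environment, while anything it does not bind stays free and is resolved only when the code actually runs.

```python
def make_greeter(name):
    # 'name' is BOUND: the closure captures it from this enclosing scope.
    def greet():
        # 'punctuation' is FREE: it is looked up at call time, in whatever
        # global context the function happens to run.
        return "Hello, " + name + punctuation
    return greet

greeter = make_greeter("world")

punctuation = "!"
print(greeter())   # Hello, world!

punctuation = "?"
print(greeter())   # Hello, world?  <- the free variable shifted under us
```

On this view, arguing about “God” is like calling a function whose every name is free: nothing in the context is pinned down, so the closure binds nothing at all.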

The problem with all three positions in that article (skeptical theism, the treacherous turn, and the author’s analogy between the two) is that each depends on some form of closure, the idea that some elements of the context are definitely bound (PDF). I maintain, especially in conversations with Singularitarians, that nothing is closed, that the universe is open. By the way, while the treacherous turn (of some type) might be thought of as the heart of 99% of science fiction out there, it is 100% countered by the openness trope: no matter how superintelligent the AI (or how many omni-X properties God has), there will always be some free variable we can pick at to eventually unravel the dastardly plan.

Of course, the trope depends on some fundamental principles, the most important of which is sensitivity to initial conditions, the hallmark of deterministic chaos and directly relevant to arguments about what machines will or will not do (see the sketch below). Another is the stability of attractors. For example, how stable is the first-mover advantage gained by the first superintelligent AI? My claim is that such attractors are always much less stable than we think they are, than we idealize them to be, especially when writing philosophy books and articles. Yes, if we accept the author’s analogy, it bifurcates the space, making the doomsayer scenario and geek nirvana deeper, more stable attractors than they otherwise would be. But it’s a long leap from stable to irreversible.
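To make the sensitivity point concrete, a minimal sketch using the logistic map (my choice of textbook example, not anything from the linked article): two trajectories that start a hair’s breadth apart are completely decorrelated within a few dozen steps.

```python
# Logistic map at r = 4, a standard example of deterministic chaos:
#   x_{n+1} = r * x_n * (1 - x_n)
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two initial conditions, nearly identical
for n in range(60):
    x, y = logistic(x), logistic(y)
    if n % 10 == 9:
        print(f"step {n+1:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
# The gap roughly doubles each step; by step ~40 it is order 1, and the
# initial conditions buy you no practical predictability at all.
```

That exponential divergence is the free-variable picking in miniature: the tiniest opening compounds, which is why I doubt any attractor, first-mover or otherwise, is as stable as the philosophy books idealize it to be.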
