I awoke last night from a recurring dream in which I am sitting in the presence of a bunch of older dorks complaining about the spoiled children who act as if they have infinite memory, efficient time-slicing, well-shielded circuits, etc. The phrase one of the older dorks says that “pulls my string” goes something like:
“How do we include what we need without pulling the whole universe in with it?” It’s commonly recognized, I think, that this is an ad infinitum argument against runaway systems (e.g. Java’s libraries).
In my dream, I try to challenge the dorks by pointing out there exists a similar ad infinitum argument the other way, narrowing the context so that only things precisely and critically relevant to a single particular use CASE ‡ are included. This evokes Gödel’s guess, at the request of Burks, at what von Neumann meant when he said:
“It’s a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically infinitely longer to describe.” A complete specification for a use case will be infinite. (Don’t look at me, blame von Neumann!) So the argument, here, is simply that hyper-specific applications carry their own infinity, approached from below (→0), in contrast to the dorks’ ad infinitum argument (→∞) against too much functionality.
Anyway, I always awake frustrated at this point because I can’t make them understand. But whatever. As alluded to above, there also exists a third ad infinitum argument (one my old business partner Chris called “The Spanish Inquisition”), wherein a customization environment, whose purpose is to arrive at a complete specification, takes infinitely long to configure. This is why Apple users complain about “dinking around with Linux”: most Linux distributions have so many more options for customizing one’s experience. If you coerce the user into answering all those questions before anything works, you get the Inquisition. And nobody expects the Inquisition! Maybe as I’m dying, when my time comes, my dream will continue and, if Yog accepts me into the Void, I’ll manage to make this point to the dorks.
Of course, in reality, I don’t need to make this point to anyone. Luckily, we now have Opinionated Software to go along with the Unix Philosophy that makes my point better than I ever could. Those who know me will expect this to devolve into an unread screed about unitarity and closure(s). But I just posted something about that. So, I can bail this time.
‡ I emphasize the word “case” in order to point out its particular nature. In software, we are sometimes sloppy and call a collection of situations a “use case”, e.g. when the same “use case” is exercised, but with slightly different input. Such a collection is not a use case; it is a collection of separate use cases. For something to be a single case, it has to “be the case” that everything is identical, not merely indistinguishable.
In reading Machine Experiments and Theoretical Modelling: from Cybernetic Methodology to Neuro-Robotics, I noticed the authors rely to some extent on the concept of “near decomposability”, citing The Sciences of the Artificial, Third Edition as motivation.
However, this raises the same problem I continually have with any discussion of hierarchical systems. I try/fail to point out my problem to people at conferences, in my own writing, in the papers I review, etc. But the point never lands. It is (of course) similar to the previous post on layers vs. levels. Here we can use Simon’s presentation to make the case, though.
Therein, Simon uses the example of a building with perfectly insulating exterior walls, imperfectly insulating walls between rooms, and badly insulating cubicle partitions within rooms. He then goes on to talk about the diffusion of heat through the building. At least I assume he’s only talking about diffusion, because the matrix he arranges into nearly decomposable units contains the diffusion coefficients, but nothing about any advection. So, in this idealized example, the inter-room interactions are clearly weaker than the inter-cubicle interactions.
However, what if we add another MODE of interaction like, say, forced air HVAC? It strikes me that inter-room interactions will be stronger than inter-cubicle interactions under both advection and diffusion. Imagine the continually cold person sitting in the cubicle nearest the vent in your office.
So, the point being made is that some variables can be idealized as an aggregate, somewhat isolable from other aggregated variable sets. But if this is the purpose of the “nearly decomposable” concept, it’s at high risk of inscription error. Any system with a huge number of modes of interaction, over and above any number of variables within each mode, will force you to idealize down to particular modes. And, thereby, you’ve inscribed the aggregation rather than discovered it. And any predictions you may make off your near decomposition will have that choice programmed in.
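The worry about adding a second mode of interaction can be sketched numerically. Everything below is invented for illustration: the coupling strengths, the duct layout, and the helper function are my own assumptions, not anything from Simon’s text.

```python
import numpy as np

# Hypothetical coupling matrix for 4 cubicles in 2 rooms (2 cubicles each).
# Entry [i, j] is the interaction strength between cubicles i and j.
# Under diffusion alone, intra-room coupling (1.0) dwarfs inter-room
# coupling through the imperfect walls (0.01): nearly decomposable.
diffusion = np.array([
    [0.0,  1.0,  0.01, 0.01],
    [1.0,  0.0,  0.01, 0.01],
    [0.01, 0.01, 0.0,  1.0 ],
    [0.01, 0.01, 1.0,  0.0 ],
])

# A second mode: forced-air HVAC ducts linking cubicle 0 in room A
# directly to cubicle 2 in room B (numbers invented for illustration).
advection = np.array([
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

combined = diffusion + advection

def offdiag_block_max(m):
    """Strongest inter-room (off-block) interaction."""
    return m[:2, 2:].max()

print(offdiag_block_max(diffusion))  # weak: 0.01, decomposition holds
print(offdiag_block_max(combined))   # strong: ~2.01, the room/cubicle
                                     # decomposition no longer holds
```

The decomposition into rooms was only “discovered” under the diffusion-only idealization; once the second mode is added, the same aggregation is revealed as a choice that was inscribed up front.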
In a discussion of this paper, I argued that the virtus dormitiva is not “viciously” circular because it restates the proposition in different language. Of course the different language might be trivial or it might be significant. So, one could argue that if the different language were only trivially different, then it really is vicious. But whatever. My point was that this different language is a layer that has to be reduced or eliminated in order to demonstrate the circularity. Apparently, this word “layer” presents a problem for some people. Many people seem to think in terms of hierarchy when talking about real or logical systems. E.g. an axiomatic system allows propositions to be composed of “lower level” elements (axioms and, even lower, the alphabet). E.g. an organization like a corporation is a higher level collective, composed of lower level departments or people. Etc.
The concept of levels assumes that directional hierarchy. It’s effectively an order in which any two levels are comparable: one is ≥ or ≤ the other. But I posit that some systems may not submit even to a partial order, where there is no cumulative relationship between any 2 components. Or, more likely, the relationship between any two components is not as simple as ≥. My favorite middle-ground example is the onion. There’s no up or down with respect to the center. You can vary your direction and stay at the same layer. Of course, all you need do is switch to polar coordinates and you get your partial order. But that’s not the point.
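The onion remark can be made concrete with a toy sketch (the coordinates and comparison functions are my own invention): comparing points coordinatewise gives a genuine partial order with incomparable pairs, while switching to the polar radius makes every pair comparable, which is exactly the re-coordinatization move mentioned above.

```python
import math

def leq_cartesian(p, q):
    """Coordinatewise <=: a genuine partial order; some pairs incomparable."""
    return p[0] <= q[0] and p[1] <= q[1]

# Two points in different directions but on the same shell of the onion.
a, b = (3.0, 4.0), (4.0, 3.0)
print(leq_cartesian(a, b), leq_cartesian(b, a))  # False False: incomparable

# Switch to the polar radius and every pair becomes comparable:
# the order reappears, but only because we imposed those coordinates.
r = lambda p: math.hypot(*p)
print(r(a) <= r(b) or r(b) <= r(a))  # True: always comparable by radius
```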
I don’t want to impute the properties I’m trying to discover. So, why bias the conversation by using levels when we could use the more generic term layers just as effectively?
Anyway, the point that caused me to write this log entry is: What does it mean to have a logic that requires the more generic concept of layers and does not succumb to the concept of levels? What does a non-hierarchically layered logic look like?
Well, my (largely ignorant) guess at an answer is paraconsistent logic. But it’s useful to first consider non-monotonic logic, which I think (in my ignorance) would be a partial order. Here, when you add a new proposition to an extant argument, the truth value of previously derived conclusions can change. When that happens, you have to have a handler that resolves the situation. E.g. is the truth value of the new proposition weighted more heavily than the older ones? Can you find a single old proposition that contradicts the new one? Etc. But the objective is to accumulate propositions into a singular argument.
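A minimal sketch of that non-monotonic behavior, using the classic birds-fly default (the predicate names and the resolution policy, where a more specific fact defeats the default, are invented; real non-monotonic systems are far richer):

```python
# Default reasoning: birds fly, unless an exception is asserted.
# Adding a proposition can retract a previously held conclusion,
# which is exactly the non-monotonic behavior described above.

def concludes_flies(facts):
    """Apply the default 'birds fly' unless a defeater is present."""
    if "penguin" in facts:      # more specific fact defeats the default
        return False
    return "bird" in facts

facts = ["bird"]
print(concludes_flies(facts))   # True: the default applies

facts.append("penguin")         # adding a new proposition...
print(concludes_flies(facts))   # False: ...retracts the old conclusion
```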
Paraconsistent logics allow persistent contradiction. You still have to try to resolve any conflicts, because your purpose is not to allow any old nonsense to pollute your argument(s). But you have alternatives for how you handle the inconsistency. One example might be to unify as many of the propositions as possible into a “lower level” argument, then allow 2 lines of inference at a “higher level”, where the mutually incompatible propositions are appended to the unified part of the argument. This would still be a partial order. But another example might be to maintain 2 entirely independent lines of inference.
In the latter case, where we maintain 2 independent arguments, transformations and consequences of the arguments flow out in one dimension (forward in time, if you like). So we have a 2-dimensional construct: argument # vs. consequences. And if we also consider that new propositions can be added, then we have a 3rd dimension. Movement in any of those 3 dimensions might change how you view the arguments. And there may not be a simple relation (like ≥) that characterizes the differences as you move in those dimensions. Hence, levels are no longer a useful concept, in general.
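The “2 independent arguments” alternative might be sketched like this (the branching policy and the toy contradiction relation are my own assumptions, not a real paraconsistent calculus):

```python
# Keep one branch per consistent subset of propositions. A contradiction
# does not poison the whole system; it just opens a new line of inference.

def add_proposition(branches, prop, contradicts):
    """Append prop to every branch it doesn't contradict; if it
    contradicts all of them, open a new branch for it."""
    placed = False
    for branch in branches:
        if not any(contradicts(prop, p) for p in branch):
            branch.append(prop)
            placed = True
    if not placed:
        branches.append([prop])
    return branches

# Toy contradiction relation: "p" conflicts with "not p".
conflict = lambda a, b: a == "not " + b or b == "not " + a

branches = [["q"]]
for prop in ["p", "not p", "r"]:
    add_proposition(branches, prop, conflict)

print(branches)  # [['q', 'p', 'r'], ['not p', 'r']]
```

The two branches are the independent arguments; consequences (here, just the accumulated propositions) grow along one dimension, while the branch index supplies the other, with no ≥ relating the branches to each other.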
As usual, I hope I don’t sound like too much of an idiot in what I’m saying. But I thought I’d document it before I forget and have to rethink the thought.