gepr

glen ropella

Deontological Ethics via Ambiguous Aphorisms

Written by  on April 12, 2018

This post, Effective Leadership Requires Integrity and Intellectual Humility, whose gist I entirely accept and endorse, really irritates me, not least because the entire body of “Leadership” literature follows the new-age, self-help genre. But mostly, it’s irritating because each and every one of these rules is pure idealism, in its most pernicious form: idealism that masquerades as practicality. Perhaps I’m only thinking this way because, just yesterday, I started trying/failing to grok the opposition between stoicism and skepticism (spurred on by my skeptical inferences from C.S. Peirce’s “pragmaticism”). But all that aside, I’ll criticize each of the 15 rules with what I think are practical situations where each is simply wrong, morally and effectively. But, again, I wholeheartedly agree with the underlying message and intent. So, my criticism is, perhaps, a bit stupid.

  1. Underachievers make excuses; Winners make time.

     Winners also provide explanations, reasons, for their failures. What is the difference between an excuse and a reason for one’s failures? When is a justification/explanation for some series of events or situation an inadequate explanation, and when is it an authentic attempt to identify one’s own flaws? I think the answer lies in one’s willingness to admit failure. If the one who failed pretends they haven’t failed and provides a justification for the situation, then we label it an excuse. If they provide the exact same justification, yet admit their role in the outcome, then they’re “making time”. My criticism is that one should be exceedingly careful when accusing another (or themselves, especially in catastrophic conditions like addiction or a political situation where people die or whatnot) of making excuses.

     Further, everyone is a loser and a failure. There are no winners, in any absolute sense. The only difference between a “winner” and an “underachiever” is that, like the expert, the winner has failed more times than the underachiever has even tried.

  2. Learn when to give in, but don’t EVER give up.

     Yes, please give up sometimes, most importantly, when it’s time for you to die. A significant part of wisdom is knowing how to choose one’s battles. If you choose the wrong battle, please give up that battle. Don’t insist on “giving in” and saying things like “Let’s call it a draw, then”, when/if you’ve actually lost. Knowing how to lose gracefully is critical. But a loss is not a compromise, which is what this ambiguous aphoristic rule seems to imply.

  3. Lapses in judgment are human, but there is no such thing as a lapse of integrity.

     Yes, there are lapses in integrity. Everyone experiences these. As always, the difference is how one handles them. If you happen to lapse in your integrity (e.g. doing something you’ve preached against doing), admit it, show that you recognize your fallibility, and thereby restore your steady-state integrity in the face of your instantaneous failing. This ambiguous rule relies on a conflation of unitary actions versus trends or collections of actions. And that conflation is over and above my previous criticism against “excuse making”, in that it’s reputational. The state of “having integrity” is assigned by, attributed by, other people, not oneself. You cannot tell whether or not you have integrity. The best thing you can do is pay attention to whether others think you have it.

  4. Intelligent people fight fire with water, never with fire.

     This is just blatantly, ironically, false. We often take away the fuel of a wildfire with a controlled burn … literally fighting fire with fire. Pffft. Metaphorically, the implication we’re supposed to take is something like, perhaps, getting angry when someone else is angry at you, or fudging data to fix the fudging someone else has done, or whatever … the old “two wrongs don’t make a right” thing-a-ma-jig. But, again, the ambiguity is subverting the message. In fact, this aphorism’s ambiguity is guilty of what it professes against. Don’t apply ambiguous metaphors in order to persuade someone that some other ambiguous aphorism, the details of which will be supplied by the naïve audience, is practically meaningful.

     In reality, a response to a situation is … situational. Sometimes we fight fire with fire. Sometimes we fight it with water. Sometimes we let it burn (fight it with no response at all). Don’t let vague rules rule your response.

  5. Be equally passionate about understanding others as you are in your desire to be understood.

     This is one of the least objectionable in this list of 15. However, it is still so idealistic as to encourage those who fail at it to think badly of themselves, further prompting them to “make excuses” for their failures. Everyone is (sometimes) selfish. It’s just a fact of biology. Yes, higher life forms are more capable of empathy. But even we humans, with our big brains, are incapable of simulating every other perspective of every other thing we happen to interact with.

     So, I claim it’s adequate to try to understand others as much as you try to be understood. But when you fail, simply admit it and keep trying. In the end, the two are the same thing, anyway. Understanding is mutual. If they understand you, then you understand them, and vice versa. Also, complete understanding is impossible. So, the 80/20 rule applies. At some point, it’s OK to stop and just sit quietly, basking in your 80% adequate understanding.

  6. Know to be INTERESTING, you must first be INTERESTED! Be an articulate and passionate storyteller but a more memorable listener!

     This is demonstrably false, cf rule #14. Most of the “characters” you might be inclined to reference will (at least appear to) be narcissists, who care and think much more about themselves than they do about others. If reality TV has taught us anything, it is that many of us are voyeurs, entertained by showboats and hotdogs. To boot, many of those “characters” are intellectually humble, at least during some activities, and then very self-absorbed in other activities. Take Richard Feynman as an example. When he’s exhibiting how he thinks, rather than encouraging us to think, his words are fundamentally different. Or, imagine a top-tier athlete. When they’re performing, they aren’t listening to you, they’re showing you.

     So, no, you don’t have to be interested to be interesting, and you don’t always have to mix demonstrating with explaining.

  7. Understand that while TIME may heal all wounds, it will kill all deals. Be expeditious, passionate and determined to finish what you start.

     This is simple to criticize if you refer back to my criticism of rule #3. It conflates the instantaneous with the longer term. Some deals must be quick and some take a long time to gestate. Don’t let an ambiguous rule interfere with your situational awareness. … And know when to walk away from a deal gone bad. Sometimes “finishing what you start” means simply backing off, going away.

  8. Possess a childlike curiosity to go along with a contagious smile and an awe-inspiring humbleness.

     Or, perhaps, express a cynical nihilism so that your IoT device is more secure against hackers than those built with childlike naïveté, championed by overly enthusiastic advocates who claim they don’t understand the technology?

     “It takes a village” to do anything. Some in the village are grumpy old experts who complain about every change because they engineered the thing (be it a bureaucratic process or a restored 1957 Chevy) to work the way it does. Some in the village are spirit-uplifting innovators from whom ideas (good and bad) gush as if from an artesian well. Leadership, and what it means to be humble, depend fundamentally on the type of people you’re leading. Know when to express childlike curiosity and when to express mature criticality, and apply your manifold expressions situationally.

  9. Know what you don’t know, and be so open-minded that you learn something from EVERYONE you meet, EVERY single day.

     Again, who could argue with this? Well, Dunning and Kruger tell us that nobody can actually know what they don’t know. But what’s meant, I suppose, is to simply be a bit doubtful about what you think you know and accept the doubt others express about what you may or may not know. And, most importantly, be ready to modify your knowledge, to incorporate new information. Build, and be proud of, your ability to change your mind.

     But, as always, remember that others also cannot know what they don’t know and should be doubtful of their own knowledge. So, when you meet a sanctimonious blow-hard who wants you to learn from them, feel free to be closed-minded and reject their attempts to prove that you know less than they know (while simultaneously learning everything you can of what they know, of course).

  10. Have high expectations of others, but even higher expectations of yourself.

     More importantly, forgive yourself when you fail. And forgive others when they fail. Learn from your failure. Learn from others’ failure. And encourage others to learn from your, and their, failure.

     I.e. have high expectations for the ability to forgive and learn, in everyone.

  11. Compromise to cooperate, but never at the expense of your principles, integrity or your partners.

     Unless your principles are toxic (or false), of course. Rule #11 directly contradicts the other rules mentioning “open-minded” or “childlike”, etc. If your (reputational) integrity is built on, say, a capitalist commitment to slavery or an efficient logistics for dealing heroin to trust-fund twenty-somethings, then perhaps it would be best for you to compromise and cooperate at the expense of your principles.

     This criticism may seem ridiculous. But witness the polarization in debates about, e.g., gun control, abortion, or affirmative action. These debates become polarized precisely because the participants stick to their idealistic principles without understanding the implications of practical applications of those principles.

     So, always be ready to compromise at the expense of your principles and integrity. Again, don’t let some rule interfere with your situational awareness.

  12. Be understated with a grasp that extends past your reach. The world already has too many overstated people with reaches that extend way beyond their grasp.

     And if we all followed this rule, no progress would ever be made. The history of innovations is replete with arrogant jerks who reached beyond their grasp. The gist, however, is to realize that our collective grasp is better than your individual grasp, from Aristotle to Albert Einstein. So, don’t be shy. Reach as far as you can and help us grasp more than we ever have before. If you’re accidentally a bit of a jerk and cause some bad consequences, recognize it, accept others’ criticism, and get better.

  13. Forever be optimistic and genuinely excited about the future success of others and let them know it.

     Again, nobody can be forever anything. Even if we’re someday immortal, I’d bet we’ll still change over time. Sometimes people get depressed. That’s fine. If you can’t come out of it on your own, you should admit it and ask for help. Sometimes our optimism blinds us to bad actors (e.g. Enron or Bernie Madoff … or Donald Trump). When that happens, abandon your optimism for as long as it takes to put in place good countermeasures.

     Nobody is, or should be, always any one way. We’re living, evolving creatures. Be alive and evolve, and help others live and evolve.

  14. Demand the highest in quality from your partners and always choose “character” over “characters”.

     Rule #14 is, again, ironically arguing against the very humility it is supposed to argue for. You, being intellectually humble, don’t know the difference between “character” and “characters”. Is Joe merely an esoteric jerk? Or does he, perhaps, have high integrity and adhere to his ideal of his craft? A better aphorism would be to facilitate the personalities and competencies of the partners you have, and choose new partners so that they complement the partners you already have. Don’t assume that you can define “quality” any better than any of the characters you partner, or might partner, with.

  15. Express genuine concern, interest, benevolence, compassion, tolerance, and understanding towards everyone.

     Do not express concern, interest, benevolence, compassion, tolerance, and understanding toward someone who is demonstrably misanthropic. Bad actors must first show their willingness to be part of a team before they will clearly understand any good-faith acts or expressions toward them. If they do not understand the damage their actions entail, then expressing concern, interest, etc. will only encourage them to continue in their bad actions. But if/when a bad actor stops or reduces their bad actions, then encourage them and facilitate their good actions, regardless of how badly they’ve acted in the past. See my criticism of rule #10.

In summary, don’t make decisions based on a set of ill-stated rules that rarely apply or are very difficult to apply. Instead, how about this for an aphorism:

Pay attention and try not to be a jerk.

Contextual Adhesion

Written by  on September 6, 2017

I awoke from this recurring dream last night where I am sitting in the presence of a bunch of older dorks complaining about the spoiled children who act as if they have infinite memory, efficient time-slicing, well-shielded circuits, etc. The phrase one of the older dorks says that “pulls my string” goes something like: How do we include what we need without pulling the whole universe in with it? It’s commonly recognized, I think, that this is an ad infinitum argument against runaway systems (e.g. Java’s libraries).

In my dream, I try to challenge the dorks by pointing out that there exists a similar ad infinitum argument in the other direction: narrowing the context so that only things precisely and critically relevant to a single particular use case[‡] are included. This evokes Gödel’s guess, at the request of Burks, at what von Neumann meant when he said: It’s a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically infinitely longer to describe. A complete specification for a use case will be infinite. (Don’t look at me, blame von Neumann!) So the argument, here, is simply that hyper-specific applications carry their own infinity in the narrowing direction (→0), in contrast to the dorks’ ad infinitum argument (→∞) against too much functionality.

Anyway, I always awake frustrated at this point because I can’t make them understand the point. But whatever. As alluded to above, there also exists a third ad infinitum argument (that my old business partner Chris called “The Spanish Inquisition”), wherein a customization environment, whose purpose is to arrive at a complete specification, will be infinitely long. This is why Apple users complain about “dinking around with Linux”, because most Linux distributions have so many more options for customizing one’s experience. If you coerce the user into answering those questions prior to anything working, you get the Inquisition. And nobody expects the Inquisition! Maybe as I’m dying, when my time comes, my dream will continue and, if Yog accepts me into the Void, I’ll manage to make this point to the dorks.

Of course, in reality, I don’t need to make this point to anyone. Luckily, we now have Opinionated Software to go along with the Unix Philosophy that makes my point better than I ever could. Those who know me will expect this to devolve into an unread screed about unitarity and closure(s). But I just posted something about that. So, I can bail this time.

[‡] case

I emphasize the word “case” in order to point out its particular nature. In software, we are sometimes sloppy and would call a collection of situations a “use case”. E.g. when the same “use case” is exercised, but with slightly different input. Such a collection is not a use case. It is a collection of separate use cases. For something to be a single case, it has to “be the case” that everything is identical, not merely indistinguishable.

Near Decomposability

Written by  on September 6, 2017

In reading Machine Experiments and Theoretical Modelling: from Cybernetic Methodology to Neuro-Robotics, I noticed the authors rely to some extent on the concept of “near decomposability” and cite The Sciences of the Artificial, Third Edition as a motivation.

However, this raises the same problem I continually have with any discussion of hierarchical systems. I try/fail to point out my problem to people at conferences, in my own writing, in the papers I review, etc. But the point never lands. It is (of course) similar to the previous post on layers vs. levels. Here we can use Simon’s presentation to make the case, though.

Therein, Simon uses the example of a building with perfectly insulating walls to the outside world and imperfectly insulating walls between rooms, and then badly insulating cubicles within rooms. He then goes on to talk about the diffusion of heat through the building. At least I assume he’s only talking about diffusion because the matrix he arranges into nearly decomposable units contains the diffusion coefficients, but nothing about any advection. So, in this idealized example, the inter-room interactions are clearly weaker than the inter-cubicle interactions.

However, what if we add another MODE of interaction like, say, forced air HVAC? It strikes me that inter-room interactions will be stronger than inter-cubicle interactions under both advection and diffusion. Imagine the continually cold person sitting in the cubicle nearest the vent in your office.

So, the point being made is that some variables can be idealized as an aggregate, somewhat isolable from other aggregated variable sets. But if this is the purpose of the “nearly decomposable” concept, it’s at high risk of inscription error. Any system with a huge number of modes of interaction, over and above any number of variables within each mode, will force you to idealize down to particular modes. And, thereby, you’ve inscribed the aggregation rather than discovered it. And any predictions you may make off your near decomposition will have that choice programmed in.
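Simon’s idealized building can be made concrete with a toy interaction matrix. The sketch below is mine, not Simon’s, and all the coefficients are hypothetical: a diffusion-only matrix is nearly decomposable by room, but adding a second, made-up HVAC (advection) mode that ducts heat between rooms destroys that decomposition.

```python
import numpy as np

# Four cubicles, two rooms (cubicles 0,1 in room A; 2,3 in room B).
# Diffusion-only coupling: strong within a room, weak between rooms.
# Coefficients are hypothetical, chosen only to illustrate the structure.
diffusion = np.array([
    [0.0, 0.9, 0.1, 0.1],
    [0.9, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.9],
    [0.1, 0.1, 0.9, 0.0],
])

def cross_block_mass(m):
    """Fraction of total coupling that crosses the room boundary."""
    cross = m[:2, 2:].sum() + m[2:, :2].sum()
    return cross / m.sum()

# Nearly decomposable: little of the coupling crosses room walls.
print(cross_block_mass(diffusion))  # 0.8 / 4.4, about 0.18

# Add a second mode of interaction: a forced-air duct joining the rooms.
hvac = np.zeros((4, 4))
hvac[0, 2] = hvac[2, 0] = 2.0  # duct between the two vented cubicles
combined = diffusion + hvac

# The aggregate matrix no longer decomposes along room walls:
# most of the coupling now crosses the "weak" boundary.
print(cross_block_mass(combined))
```

The point survives the toy: which aggregation looks “natural” depends entirely on which modes of interaction you chose to include.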

Layers vs Levels

Written by  on September 6, 2017

In a discussion of this paper, I argued that the virtus dormitiva is not “viciously” circular because it restates the proposition in different language. Of course the different language might be trivial or it might be significant. So, one could argue that if the different language were only trivially different, then it really is vicious. But whatever. My point was that that different language is a layer that has to be reduced or eliminated in order to demonstrate the circularity. Apparently, this word “layer” presents a problem for some people. Many people seem to think in terms of hierarchy when talking about real or logical systems. E.g. an axiomatic system allows propositions to be composed of “lower level” elements (axioms and, even lower, the alphabet). E.g. an organization like a corporation is a higher level collective, composed of lower level departments or people. Etc.

The concept of levels assumes that directional hierarchy. It’s effectively a partial order where any level is ≥ or ≤ any other level. But I posit that some systems may not submit to a partial order, where there is no cumulative relationship between any 2 components. Or, more likely, the relationship between any two components is not as simple as ≥. My favorite middle ground example is the onion. There’s no up or down with respect to the center. You can vary your direction and stay at the same layer. Of course, all you need do is switch to polar coordinates and you get your partial order. But that’s not the point.

I don’t want to impute the properties I’m trying to discover. So, why bias the conversation by using levels when we could use the more generic term layers just as effectively?

Anyway, the point that caused me to write this log entry is: What does it mean to have a logic that requires the more generic concept of layers and does not succumb to the concept of levels? What does a non-hierarchically layered logic look like?

Well, my (largely ignorant) guess at an answer is paraconsistent logic. But it’s useful to first consider non-monotonic logic, which I think (in my ignorance) would be a partial order. Here, when you add a new proposition to an extant argument, its truth value could change. When that happens, you have to have a handler that resolves the situation. E.g. is the truth value of the new proposition weighted more heavily than the older ones? Can you find a single old proposition that contradicts the new one? Etc. But the objective is to accumulate propositions in a singular argument.

Paraconsistent logics allow persistent contradiction. You still have to try to resolve any conflicts, because your purpose is not to allow any old nonsense to pollute your argument(s). But you have alternatives for how you handle the inconsistency. One example might be to unify as many of the propositions as possible into a “lower level” argument, then allow 2 lines of inference at a “higher level”, where the mutually incompatible propositions are appended to the unified part of the argument. This would still be a partial order. But another example might be to maintain 2 entirely independent lines of inference.
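That second alternative, maintaining independent lines of inference, can be sketched as a toy data structure (my own construction, not any real paraconsistent prover): arguments are sets of signed atoms, and a contradicting proposition forks an argument instead of trivializing it.

```python
# Toy illustration only (not a real logic engine): an "argument" is a
# frozenset of signed atoms like ("p", True). Adding a proposition that
# contradicts a line forks that line rather than exploding it, so
# mutually incompatible lines of inference coexist.

def contradicts(line, prop):
    atom, truth = prop
    return (atom, not truth) in line

def add_proposition(lines, prop):
    """Add prop to every line; fork any line it contradicts."""
    result = []
    for line in lines:
        if contradicts(line, prop):
            # keep the old line AND a new line built from the
            # consistent remainder plus the new proposition
            consistent = {p for p in line if p[0] != prop[0]}
            result.append(line)
            result.append(frozenset(consistent | {prop}))
        else:
            result.append(frozenset(line | {prop}))
    return result

lines = [frozenset()]
for prop in [("p", True), ("q", True), ("p", False)]:
    lines = add_proposition(lines, prop)

# Two independent lines survive: {p, q} and {not-p, q}.
for line in lines:
    print(sorted(line))
```

A non-monotonic handler would instead pick one line as the winner; here both persist until some other resolution policy is applied.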

In the latter case, where we maintain 2 independent arguments, transformations and consequences of the arguments flow out in one dimension (forward in time, if you like). So we have a 2 dimensional construct. Argument # vs. consequences. And if we also consider that new propositions can be added, then we have a 3rd dimension. Movement in any of those 3 dimensions might change how you view the arguments. And there may not be a simple relation (like ≥) that characterizes the differences as you move in those dimensions. Hence, levels is no longer a useful concept, in general.

As usual, I hope I don’t sound like too much of an idiot in what I’m saying. But I thought I’d document it before I forget and have to rethink the thought.

Sufficient Structures for Conscious Computers

Written by  on June 14, 2016

From Scott Aaronson’s “My reply to Roger Penrose”

At the same time, I also firmly believe that, if anyone thinks that way, the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer that passes the Turing test. It’s their job!

This reminded me of his post trash-talking Tononi’s IIT. But I do have 2 things I can barely articulate as plausibly making our central nervous systems (CNS) relevantly different from digital computers. (Please note that I didn’t say “brain”, because these 2 things would fail in the “brain in a vat” scenario … much as I think general AI will fail if it’s a computer in a vat.)

Both of these are intimately related to the parallelism/connectionism of the CNS, but only in that such architecture is probably a prerequisite for them … a (perhaps only somewhat) parallel architecture is, I think, necessary but not sufficient. The 2 things are:

  1. Circular reasoning and/or
  2. Dynamic modularity.

By circular reasoning, I mean (fundamentally) the ability to conceive and reason about impredicative sets, those that are defined by quantification over the entire set. This is something human mathematicians do on a regular basis. But, to the best of my limited knowledge, it can only be simulated by digital computers (though analog computers may well be able to do it – I don’t know). Is a simulation good enough? Possibly. One could argue that human brains can only simulate such reasoning, as well, using ambiguous placeholders for higher order structures. Cue shouting matches about intuitionism, unification, etc.
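For concreteness, here is a standard example of an impredicative definition (my choice of example, not Aaronson’s): the least upper bound of a bounded set of reals is picked out by quantifying over the collection of all upper bounds, a collection to which the supremum itself belongs.

```latex
% sup S is defined by quantifying over ALL upper bounds of S,
% a collection that contains sup S itself: an impredicative definition.
\[
  \sup S \;=\; \min \{\, b \in \mathbb{R} \;\mid\; \forall x \in S,\ x \le b \,\}
\]
```

Predicative foundations reject such definitions as circular; working mathematicians use them routinely.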

By dynamic modularity, I mean the ability to (nearly instantaneously) reorganize one’s thinking to include new concepts, primarily new conceptions of old concepts. The simplest version(s) of this is anything like the Necker Cube: which represents a seemingly dualistic way of conceiving/perceiving the lines on the page. At first, you may see it only one way. Then if you (for whatever reason) flip to seeing it the other way, you may find it hard to flip back. After some practice, of course, you can flip back and forth (in vs. out) at will. Seeing it the 1st way is one conception. Seeing it the 2nd way, is a 2nd conception. And the ability to flip in and out at will is yet a 3rd conception.

This dynamic ability to reframe, re-ground, re-interpret, re-build everything we’re thinking seems to be regularly doable (and done) by humans, but digital computers may or may not be able to do it. I think the embodied/situated cognition stance may enable a computer (digital or analog, depending on the interface to the milieu) to do this. Absent that, I simply don’t see a plausible path for doing it.

Hard to vary

Written by  on March 29, 2016

While trying to compose a presentation for the upcoming SpringSim 2016 meeting, I wanted to express Deutsch’s concept of a good explanation being “hard to vary”. Here’s a quote from The Beginning of Infinity:

Fallibilism entails not looking to authorities but instead acknowledging that we may always be mistaken, and trying to correct errors. We do so by seeking good explanations – explanations that are hard to vary in the sense that changing the details would ruin the explanation.

The point I’m trying to make in the presentation is about building simulations that are easy to change. But by “easy to change”, what I really mean is “easy to falsify”. This point isn’t very easy to express to a group of (mostly) engineers, whose expertise and purpose is to design very particular solutions to very particular problems. This sort of stereotype of “the engineer” should be fairly clear to the layperson. Engineers are often quick to drill down into the detailed nitty gritty of any situation. To the synoptician, whose skill is seeing the horizon, the view from 30,000 feet, such a skill is best characterized as “falling down” into an abyss, rather than “drilling down” into detail.

The same can be said of a stereotypical scientist, whose motivation is to discover knowledge regardless of any particular gravity well of detail. E.g. a biologist who discovers she needs to learn, say, fluid dynamics, in order to make progress on the biology, does just that … learns fluid dynamics. Walls, boundaries, disciplines, are all battered down in relentless pursuit of the objective.

One way to think of the difference might be that the engineer accumulates globs of detail and packs them carefully together to build something. By contrast, the scientist encapsulates globs of detail, sloughs off what she can and drags what she must as she wades through the surrounding muck.

The point in my presentation is that cumulative globs of detail are a hindrance to science. And, although it might be counter-intuitive, this gels nicely with Deutsch’s “hard to vary” concept, because what I think he really means is: contains only the critical detail. It’s a kind of duality: the less detail your accurate theory has, the more difficult it is to change/delete any given detail without breaking the theory. That economy of detail makes such theories both easy to change and hard to vary. And this is the appropriate paradigm for scientific M&S.

By the way, I’m flirting with learning about the new-ish push toward the Philosophy of Engineering (e.g. here). This guy even seems to claim that Engineering (capital E) is larger, contains, science. At least that’s what I think I heard. But the above paradigm difference seems to argue against that.

There is a middle ground, I suppose, where both Engineering and Science require some form of intervention, activism. The idea is that you cannot understand independent of manipulation. But to me, this would imply that both Engineering and Science are (different) specialized descendants of the more basic condition of consciousness.

Attraction vs. Repulsion

Written by  on February 9, 2016

While criticizing this article at the CSS reading group, I made the comment that I thought it was a terrible (yet great in one way) article because it spanned the gamut of topics without digging deep enough into any one topic. I paid homage to the method, adopted by another member of the group, of studying “seminal” (paternalism, anyone?) or otherwise canonically “good” articles, rather than studying bad ones. The other person then made the statement that (paraphrasing) “attraction is a better gradient to follow than repulsion because the number of outcomes from attraction is lower than that of repulsion. With repulsion, you can end up anywhere.” I didn’t pick nits with him at the time because, in these meetings, I prefer to stay close to the topic. But his position exhibits the very thing the article cautions against: the concept that there is an objective reality and we can approach it by targeting. So, it would have been on topic … albeit argumentative.

I much prefer the concept of constraint-based reasoning, which is founded on repulsion. Any solution within the solution space bounded by the constraints is reasonable. And perhaps the optimal solution is located in the “middle” of the bounded space. So, the only way attraction is a better method than repulsion is if the space has a large ratio of unbounded to bounded sides.
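The contrast can be sketched as a toy (my own sketch, with hypothetical constraints): repulsive constraints only rule candidates out, so any point they fail to repel is an acceptable solution, and repeated search yields a diversity of answers rather than one mimicked target.

```python
import random

random.seed(0)

# Hypothetical bounded solution space: points in [0, 10] x [0, 10].
# Constraints are repulsive: each one only says what is NOT acceptable.
constraints = [
    lambda x, y: x + y > 4,       # repel the lower-left corner
    lambda x, y: x < 9,           # repel the right edge
    lambda x, y: abs(x - y) < 6,  # repel extreme asymmetry
]

def admissible(x, y):
    """A candidate is reasonable iff no constraint repels it."""
    return all(c(x, y) for c in constraints)

# Constraint-based search: accept ANY point the constraints fail to
# repel. No single target is privileged, so the accepted solutions
# spread across the whole bounded region.
solutions = []
while len(solutions) < 100:
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    if admissible(x, y):
        solutions.append((x, y))

xs = [s[0] for s in solutions]
print(min(xs), max(xs))  # the solutions span the admissible region
```

An attraction-based search would instead collapse every run toward one target point; the repulsion-based version keeps the whole admissible region in play.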

To boot, when a space is relatively bounded, relying on attraction is mind-numbingly restricted. And this is the core problem with both PhD programs, big science funding, and peer review. The extent to which one is free to bounce around in solution space is severely limited if one tries to mimic previous results. By contrast, if you can clearly explicate your boundaries, then you have the freedom to find any plausible solution/invention within those boundaries and it will likely be more novel than solutions found by mimicry/attraction.

Special characters across all X apps

Written by  on December 9, 2015

I finally got tired of keeping Emacs open to make special character entry easier in other applications. So, I set up my $HOME/.XCompose file like so:

include "%L"   # import the default Compose file for your locale
<Multi_key> <m> <E>        : "∃"   U2203 # THERE EXISTS
<Multi_key> <bar> <E>      : "∄"   U2204 # THERE DOES NOT EXIST
<Multi_key> <m> <e>        : "∈"   U2208 # element of
<Multi_key> <bar> <e>      : "∉"   U2209 # NOT AN ELEMENT OF
<Multi_key> <bar> <equal>  : "≢"   U2262 # NOT IDENTICAL TO
<Multi_key> <m> <^>        : "∩"   U2229   # intersection
<Multi_key> <m> <v>        : "∪"   U222a   # union
<Multi_key> <m> <R>        : "ℝ"   U211d   # reals
<Multi_key> <m> <C>        : "ℂ"   U2102   # complex
...
<Multi_key> <backslash> <A>   : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<Multi_key> <backslash> <a>   : "α"   U03B1    # GREEK SMALL LETTER ALPHA
...

In order for it to work in Icedove and Firefox, you have to set the GTK_IM_MODULE environment variable for those (and other apps). I chose to place it in .xsessionrc so it would be there for all of them:

export GTK_IM_MODULE=xim

IMAG MSM Posters

Written by  on September 29, 2015

I’ve uploaded our IMAG MSM posters to our homepage.

The TACM group was interesting this year. We haven’t had much success collaborating between the MSM meetings. So, this year’s meeting talked quite a bit about how to do that. My proposal that we explore more formal methods was met with a few quizzical responses, so I suggested we consider the following efforts:

The following 2 are indirectly related, discussing reasoners over ontologies:

And, finally, I have high hopes for Homotopy Type Theory, but have yet to use it practically.

Delusion Tradeoff

Written by  on August 18, 2015

Tweaked by this article, I was reminded of these two concepts:

pareidolia
is a psychological phenomenon involving a stimulus (an image or a sound) wherein the mind perceives a familiar pattern where none actually exists.
apophenia
the experience of seeing meaningful patterns or connections in random or meaningless data.

Both are common in simulation, though perhaps not as common as pre-emptive registration. But what this article evoked in me was the idea of a fine line between delusions. Nobody actually has control over “their life”. Yes, if you assume free will, you can assert that there are small things we have control over (when to eat, whether to watch TV or read a book, etc.). And successive iterations of those small controls can carry one into entirely different regions of the possible. But that control is very fragile. For example, saving up for a comfortable retirement can be done. But it depends on a lifetime of discipline. And, most importantly, it depends on a lifetime relatively free of tragedy like medical bills, the drug addiction of a family member, fire, flood, earthquake, etc.

My argument is that this sort of control, of the very fine-grained mechanisms, is not relevant to the belief in conspiracy theories talked about in this article. Rather, the type of control sought by these people is delusional … the illusion of control. And, if that’s the case, then what we’re faced with is a choice between two types of delusion:

  • The false belief that you have control over your life, or
  • The false belief that others (individuals or cabals) have control over things (e.g. your life).

As usual, when faced with such a choice, the best bet is to choose a little of both. The least delusional position is to bounce between the two delusions, picking and choosing the parts that are, while still false, more true than any other combination.

Shambling Zombie Thoughts

Written by  on May 27, 2015

I just returned from this workshop and, as usual, the most interesting stuff happens in between talks and over pints of beer in the evening. One of the interesting discussions happened over 2 non-consecutive evenings. It started with a question from one of the other attendees about whether or not I’d heard of memetics. I had, mainly due to a cultural evolution working group I participated in when I was still at the SFI. But I took the opportunity to whip my whipping boy, again, and suggested that memetics is a metaphor and, depending on where you land regarding the ubiquity of metaphor, any study of memetics should loudly exclaim its assumptions up front. This exploded into a 3-way argument about the grounding of thought. I took the position that all thoughts are inextricably grounded in physiology, whereas the other 2 took (variations of) the position that ideas are somewhat independent of the physiological structures that implement them. We all agree that until/unless we can find the maps between the physiology and the ideas, it’s still useful to study the transmission of ideas as if it were independent, much like studying chemistry without having to always refer back to physics.

In any case, that discussion [d]evolved into a discussion about determinism and free will. I am forced to claim that ideas are epiphenomenal, or at least, limited to constraining their generators. I.e. thoughts are an effect, not a cause. My competitors in the discussion insisted that ideas can cause behaviors. Of course, I brought up the evidence that we make decisions before we’re conscious of those decisions. I also brought up the challenge that they bear the burden to distinguish emotion from thought and instinct from reason (or involuntary reflexes from idea-caused behaviors).

All of this is expected during a conference on agent-based modeling (ABM), of course. But I suppose it left me pre-adapted to these articles:

and a reconsideration of my own log entry: Atheism and the Meaning of Life.

Determinism and Free Will in ABM

ABM has a very sloppy history. The phrase is used to describe lots of different types of models, most of which have no clear concept of an agent. To my mind, most models described as ABMs are simply discrete time models, which have been in common use for a very long time. I typically define an agent as an encapsulated object that has control over its own agenda. This means that, by definition, it cannot be purely reactive to its context. It must embody some type of “free will”, some unpredictability or other idiosyncratic attributes. These models typically require the installation of a pseudo-random number generator (pRNG) inside the agent. Without that, the object becomes a slave to whatever other processes call its methods or change its state. (Another way might be to embed some actual parallelism inside the agent, allowing its behavior to be a function of some other stochastic process.)

But embedding a pRNG inside the agent does not magically imply that the symbols or sub-symbols used by the agent are independent of its context. The underlying grammar, the space of potential states of the agent is defined by its interface with the outside world. Only its selection of points in its state space is free. Everything else is bound.
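The distinction above (free selection over a bound state space) can be sketched in a few lines of Java. This is my own illustration, not code from any actual ABM toolkit; the class and method names are hypothetical:

```java
import java.util.Random;

// A minimal sketch of an agent whose *selection* within its state space is
// driven by its own embedded pRNG, while the state space itself (the "grammar")
// is fixed by its interface to the outside world.
class BoundedAgent {
  private final Random rng;     // the agent's private source of "free will"
  private final int stateCount; // the grammar: bound by the interface, not the agent
  private int state = 0;

  BoundedAgent(long seed, int stateCount) {
    this.rng = new Random(seed);
    this.stateCount = stateCount;
  }

  // The agent freely picks a point in its state space; it can never step outside it.
  int step() {
    state = rng.nextInt(stateCount); // free selection over a bound domain
    return state;
  }

  int getState() { return state; }

  public static void main(String[] args) {
    BoundedAgent a = new BoundedAgent(42L, 5);
    for (int i = 0; i < 3; i++) System.out.println(a.step());
  }
}
```

However many times `step()` is called, the agent's trajectory never leaves the interval fixed by its interface; only the choice of point within it is "free".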

This formalizes the argument against memes evolving independently from the underlying machinery in which they’re implemented.

Progressivism in Evolution

In the 2nd installment of the argument, a 4th player made the comment that he simply did not want to think that all his thoughts and feelings were purely and directly derived from his (cumulative) context. He claimed that the other position, where ideas can be causative … inspiring even, was more appealing, more beautiful. I didn’t give him the chance, but I suspected he would go on to cite something like what Einstein said about the beautiful theory being more likely true. (Of course, Einstein wasn’t a biologist, and biologists have a different concept of beauty … namely one that prizes excruciatingly exquisite messiness.) But the inevitable objection follows: there is no obvious purpose to a purely determined life. If ideas are epiphenomenal, then why do we do what we do? Why are we as we are? Is there purpose … intention to any act?

In response, I invoked the idea that I learned (or badly inferred, to help him with plausible deniability) from this friend of mine that evolution is (fundamentally) a way for mechanisms to progressively grow more and more complex … to steadily harness order from an ever more disordering universe. At which point my beauty-invoking discussant brightened a bit. He made the comment that viewing evolution this way, combined with the context-driven deterministic epiphenomenality of ideas leads to a kind of social cohesion that is sometimes lacking in, at least, economic thought. We are all part of the same machine, pursuing a common dream.

The Supernatural as Artificial Social Cohesive

This finally leads me back to my own log entry on a local group meeting where a bunch of local atheists seem to continue to avoid trying to find any true biological explanation of religious belief or faith in the supernatural. A point I raised at that humanist meeting was in response to a comment that churches tend to have a physical location, which provides some of the glue that holds the congregation together. It’s a rallying point, something very concrete around which the group can cohere. The same function is served by concepts of God, faith, and good behavior, even if those concepts are demonstrably false. What my beauty-seeking discussant above was looking for was some concept/idea around which we could all cohere. Much like 2 competing Christian denominations that disagree about, say, drinking alcohol or whatnot yet still cohere around their ideal, Christ, the common dream of biological species banding together to harvest the increasingly rarified order in a heat-death universe might provide that for some who otherwise find determinism a bit depressing.

Intuition and Emotion vs. Symbolic Thought and Language

And finally, the article by the neoreactionary invokes that group of humanist/atheists for me because I’m consistently disagreeing with them in a deeply urgic way. This usually manifests in the distinction I make between agnostic (without knowledge) versus atheist (without gods). But with the launch description of The Future Primaeval, and their break with the LessWrong crowd and subsequent break with MoreRight, I recognized that my discomfort with atheism is very similar to my discomfort with the hyper-rationalists (like the LessWrong crowd).

One of my defenses of theism lies in the very tiny window kept open by universal consciousness and the anthropic universe. I just cannot bring myself to plug that hole, to be hyper-crisp and rely on the Law of Noncontradiction. Hence my fascination with second-order and paraconsistent logic. And it should be relatively obvious how this relates to the somewhat artificial distinction between emotion vs. thought, instinct vs. reason. And this is why A Short Argument for Traditions makes some intuitive sense to me. Being part of this large, overwhelmingly deterministic, order-grasping machine implies, to some extent, that some traditions are best followed until better alternatives present themselves.

Where does this leave us regarding the ontological status of thoughts/ideas/memes? Oh who knows. I’m probably still jet-lagged.

closures

Written by  on May 13, 2015

I debated whether this entry should go here or in my personal log, which contains things one should not talk about in a professional context. As I age, I notice the line between professional and personal blurs and clarifies, depending on the context. So, I left the decision up to which one had the more recent entry. And here we are.

This post on an analogy between Singularity doomsayers and skeptical theists evoked a common whipping boy of mine: the idealism of closure (or the closure that is idealism). By “closure”, I basically mean the computer science concept. For most purposes, though, I extend this to the bound elements of any conceivable context. So, for example, when you’re arguing with someone about the meaning of the word “God”, there is no closure at all because that word is so vague as to be useless. I.e. every variable is a free variable. (Perhaps it’s better to say it’s the trivial closure rather than has no closure.)
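The computer science sense of "closure" can be shown in a toy Java example of my own (the names here are hypothetical, not from the linked post): the lambda closes over, i.e. binds, one variable, while its parameter stays free until application:

```java
import java.util.function.IntUnaryOperator;

// A minimal illustration of "closure" in the CS sense: the returned lambda
// captures `bound`, fixing its meaning permanently, while its parameter `x`
// remains a free variable until the function is actually applied.
class ClosureDemo {
  static IntUnaryOperator makeAdder(int bound) {
    // `bound` is closed over (bound); `x` is the remaining free variable.
    return x -> x + bound;
  }

  public static void main(String[] args) {
    IntUnaryOperator addFive = makeAdder(5);
    System.out.println(addFive.applyAsInt(2)); // prints 7
  }
}
```

In these terms, arguing about a word like "God" is arguing over an expression in which nothing has ever been bound: every variable is still free, so there is no closure to evaluate.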

The problem with all three positions in that article (skeptical theism, the treacherous turn, and the author’s analogy between the two) is that all three depend on some form of closure, the idea that some elements of the context are definitely bound (PDF). I maintain, especially in conversations with Singularians, that nothing is closed, that the universe is open. By the way, while the treacherous turn (of some type) might be thought of as the heart of 99% of science fiction out there, it is 100% countered by the openness trope. No matter how super intelligent the AI will be (or no matter how many omni-X properties God has), there will always be some free variable we can pick at to eventually unravel their dastardly plan.

Of course, the trope depends on some fundamental principles, the most important of which is sensitivity to initial conditions (critical to deterministic chaos, which is critically relevant when arguing about what machines will or will not do). Another is the stability of attractors. E.g. how stable is the first mover advantage gained by the first super intelligent AI? My claim is that such attractors are always much less stable than we think they are, than we idealize them to be, especially when writing philosophy books and articles. Yes, if we accept the author’s analogy, it bifurcates the space, making the doomsayer and geek nirvana deeper, more stable attractors than they otherwise would be. But it’s a long leap from stable to irreversible.

HBOOT hacks

Written by  on May 1, 2015

Relying quite a bit on [BOOTSPLASH] iElvis’s Custom One Splash Screens and Tutorial: How to Customize/Modify/Hack your HBoot.img, I finally got around to replacing that blinding white default boot splash from HTC on my One Mini (aka m4). For posterity, the basic processes are as follows.

Replacing the “developer build” text HTC writes when you unlock/S-off the phone with my name in red:

$ adb shell
$ su
# dd if=/dev/block/mmcblk0p12 of=/sdcard/hboot.img
# exit
$ exit
$ adb pull /sdcard/hboot.img
$ emacs hboot.img
M-x hexl-mode

hboot
Replace the text starting with “This build …” with whatever you want. But be sure to only overwrite characters. Don’t delete or add any, because that would change the file size and perhaps brick your phone. I tried lots of replacements, but didn’t take the time to figure out how to do it right. So a short string like my name was best. Save the file, then:

$ adb push hboot.img /sdcard/hboot.img
$ adb shell
$ su
# dd if=/sdcard/hboot.img of=/dev/block/mmcblk0p12
# exit
$ exit
$ adb reboot

Replacing the HTC logo with the white background:

$ adb shell
$ su
# dd if=/dev/block/mmcblk0p13 of=/sdcard/defaultsplash.img
# exit
$ exit
$ adb pull /sdcard/defaultsplash.img
$ ./nbimg -w 720 -h 1280 -F defaultsplash.img
$ file defaultsplash.img.bmp
defaultsplash.img.bmp: PC bitmap, Windows 3.x format, 720 x 1280 x 24

defaultsplash.img
Create a new BMP image that matches that one. I use the GIMP, obviously. Then:

$ ./nbimg -F new.bmp
$ mkdir tmp
$ cd tmp
$ unzip ../splash-one-mini_super-mario.zip
Archive: ../splash-one-mini_super-mario.zip
creating: META-INF/
creating: META-INF/com/
creating: META-INF/com/google/
creating: META-INF/com/google/android/
inflating: META-INF/com/google/android/update-binary
inflating: META-INF/com/google/android/updater-script
creating: cache/
inflating: cache/splash.565
$ cp ../new.bmp.img cache/splash.565
$ zip -r newbootimg.zip .
$ adb push newbootimg.zip /sdcard
$ adb reboot recovery

Then in recovery, just install that zip like any other file. Here’s a video showing my new boot splash:

CPOC

Written by  on April 27, 2015

When feasible, I still use my CPOC (Constituent, Process, Observable, Causality) method for designing and constructing simulations. My claim is that any given system can (usually) be equivalently described in any one of those four “languages”. When I first make that claim, many in the (whatever) audience balk. But I side-step most objections by a) hand-waving about possible isomorphisms between various formalisms and b) submitting a caveat that I really just want them to model agnostically, with as little preloaded bias as possible … so it doesn’t really matter if the claim is rigorously true in every (or any) case.

Last weekend, a friend told me about the recent kerfuffle over Numberphile’s proof that the sum of the natural numbers comes to -1/12. (I personally like this discussion the best.) I didn’t believe it then, but I hadn’t run across it before and wanted to keep an open mind. In any case, my friend and I then launched into a discussion about the Platonists vs. the constructivists. I suggested that none of these results are “real” in any sense. They are all just artifacts of the way we’ve formulated the questions we’re asking. My friend lands on the other side, that these results have some (ontological) reality, existence, and we discover them … at least some of them. (Of course, I knew he felt that way, which is why I chose to be a constructivist for this conversation. To be honest, I’m agnostic and try to think either way depending on the circumstances.)

The superstitious tend to assert that things come in 3s. So, to validate that, I also noticed these 2 blurbs this morning:

Of course, this flows right along the lines of my previous post about Lee Rudolph’s comments on Hardy’s “astonishingly beautiful complex”. So, the superstition is really just a cognitive bias.

Anyhoo… Near the end of Scott’s post, one of my old saws was evoked: the extent to which the content of our minds/brains is separated from the environment in which we’re embedded. This topic also came up with my friend over the weekend. Our discussion was largely about the very natural language of humans, that of things, objects, “nouns”, as contrasted with the language of behaviors, processes, or “verbs”. W.r.t. the equivalency of models written in different languages, I can confidently assert that a system can be equivalently described in terms of states (objects) versus state transitions (processes). In that discussion, one of the examples I used was a human (or any organism). We think of it as an object. But given that our skin is semi-permeable, our cells die off and are replaced, we eat, defecate, etc., where I end and you begin is really not very well defined. A human is, I argued, more accurately described as a large cluster of processes. It’s not an object at all.
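The noun/verb equivalence claim can be illustrated with a deliberately tiny, made-up system of my own: the same trajectory described once as an object holding state and once as a fold over a list of transition functions:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// A toy illustration of describing the same system two ways: as an object with
// mutable state (the "noun" view) and as a composition of state-transition
// functions (the "verb" view). Both descriptions yield the same trajectory.
class NounVerb {
  // Noun view: the state lives inside the object.
  static class Counter {
    int value = 0;
    void increment() { value++; }
    void doubleIt() { value *= 2; }
  }

  public static void main(String[] args) {
    // Noun view: mutate an object.
    Counter c = new Counter();
    c.increment(); c.increment(); c.doubleIt();

    // Verb view: the "system" is just a list of transitions folded over an initial state.
    List<UnaryOperator<Integer>> process =
        List.of(v -> v + 1, v -> v + 1, v -> v * 2);
    int state = 0;
    for (UnaryOperator<Integer> t : process) state = t.apply(state);

    System.out.println(c.value + " == " + state); // prints 4 == 4
  }
}
```

The point is not that one view is correct but that either is complete: anything expressible as states-plus-mutation is expressible as an initial condition plus a sequence of transitions, and vice versa.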

But that was all simply background for what I want to point out in this log entry, related to Scott’s post and all the rest. I tend to think that math is only mysterious where everything else is mysterious, at that border between what’s inside versus what’s outside our selves. Hence, when we talk about a proof or disproof of something that’s intuitive, we’re really talking about a) our own internal, isolated, ways of thinking and b) the socially constructed “complex” providing the medium for our inter-individual communications. Often, when something is counter-intuitive to one person but intuitive to another, perhaps it’s because one person’s internal “complex” (or “landscape”) is more like the socially agreed upon “complex” that is the body of math as a whole.

Finally, one of my ongoing (though still agnostic) assertions is that our brains/minds are really rather flat. What’s inside us is directly defined by what comes in and what goes out. You are mostly a naively traceable, albeit stigmergic, product of the stimulus you received from your environment. (Now, I have an alternative ongoing assertion: that we are all little isolated universes of thought, which is tragic, actually, because that means when someone dies, a universe of knowledge dies with them. Hence, though it may be counter-intuitive, the idea that we’re predetermined bags of meat, defined completely by our stimulus, is more optimistic than that alternative. If the alternative is true, just think of the knowledge we lose when a species goes extinct. Ugh. That perspective is profoundly depressing.)

Extended Physiology & IIT

Written by  on April 1, 2015

I don’t commit to Tononi’s Integrated Information Theory of consciousness either (though I first read about it in a book he co-authored with Gerald Edelman). But Scott Aaronson’s criticism is unsatisfying. It took me awhile to realize why (and I’m still not sure). But the basic idea is that the organization of something like a DVD player (and the math behind something like a codec) is an artifact of consciousness.

I’ve long been convinced that our artifacts (combustion engines, dams, scissors, cities, plowed fields, etc.) are simply extensions of our sensorimotor surfaces in the same way our hands and feet (and eardrums and retina) are extensions of our brains. E.g. the brain of a person born color blind is different, in a fundamental way, from the brain of a person born with all the color receptors. Similarly, a person born in the ubiquitous presence of smartphones has a fundamentally different brain than one born, say, in the 1930s.

Hence, although Scott’s argument works for the necessary but insufficient conclusion, I think it’s wiser to suggest that IIT’s Φ doesn’t measure the extent to which some thing (artifact, living or dead) is conscious. It measures the extent to which the cause(s) of the thing were conscious. I add the parenthetical plural of cause to indicate that perhaps an efficient cause of the thing (the agent) is conscious, but the material cause is not. It seems fairly clear that the DVD player would not have emerged without humans having created it. But to evoke Robert Rosen, when the thing being considered is its own cause, both the thing and its cause are conscious.

The interesting next step, of course, is when/if the intelligent design yahoos will pick up on this. It seems rather obvious that an eyeball doesn’t just “fall together”… it must be painstakingly developed, just like the DVD player. The trick is that the DVD player is not its own cause, whereas the eyeball is (… or almost is, to the extent that eyeballs are extensions of the nervous system of the animal). This is what led Rosen’s critics to accuse him of vitalism … and is the heart of our modern forms of anthropo- and bio-centrism. Where the intelligent design guys are clearly misguided, the ideas of pan-psychism or … pan-life-ism are not as clearly wrong.

Intuitive concept of space

Written by  on March 27, 2015

I had an extensive discussion about the meaning of the word “space” recently. Having been (somewhat) trained in math, that biases my understanding of the word, I think. But since most of my adult life has been spent programming, I think that biases it more. The ultimate bias, however, is the medium in which our bodies sense and act. This discussion centered around “visualization” and how/why visualizations (particularly of simulations) appeal to us humans, why they help us understand the often cryptic mechanisms inside a computation. I believe they help us understand things because they appeal to the 4D spacetime in which we live. And hence, there is an intuition that’s evoked by a visualization (as well as an audialization or … a hapticalization? … how about an olfactorization? … or even an inertialization?).

Given that there are these 3 basic usage domains for the word “space” (math, compsci, and natural), when someone talks about a visualization, what do they really mean? I tend to think they mean the latter. They mean “render your abstract stuff into something in 4D so my natural senses are stimulated”. That forces me to contrast the three uses of “space” and try to establish the distinctive properties of a natural space. I won’t go through all the hemming and hawing to do that. I’ll just make my assertion. What I think makes a space natural is affine geometry, basically the preservation of parallel lines. That’s what gives us our intuition about the translation and rotation of rigid objects, distance, perspective, etc.
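That assertion about affine geometry can be checked numerically. This is just my own toy sketch with an arbitrary linear map, not any standard library routine: two parallel lines have proportional direction vectors d and 2d; under x ↦ Ax + b the translation b drops out of directions entirely, and A preserves their proportionality:

```java
// Numerical check that an affine map x -> Ax + b preserves parallelism.
// Parallel lines share (proportional) direction vectors; their images have
// directions A*d and A*(2d) = 2*(A*d), which are still proportional. The
// translation b never touches direction vectors at all.
class AffineDemo {
  // Apply the linear part A (2x2, row-major) to a direction vector d.
  static double[] linear(double[] A, double[] d) {
    return new double[] { A[0]*d[0] + A[1]*d[1], A[2]*d[0] + A[3]*d[1] };
  }

  // 2D cross product (z-component): zero iff u and v are parallel.
  static double cross(double[] u, double[] v) { return u[0]*v[1] - u[1]*v[0]; }

  public static void main(String[] args) {
    double[] A = { 2, 1, 0, 3 };        // an arbitrary invertible linear part
    double[] d1 = linear(A, new double[] { 1, 2 }); // image direction of line 1
    double[] d2 = linear(A, new double[] { 2, 4 }); // image direction of line 2 (2d)
    System.out.println(cross(d1, d2));  // prints 0.0: still parallel
  }
}
```

Perspective projection, by contrast, is not affine (it divides by depth), which is arguably why rendered vanishing points feel "deep" rather than flat: they violate exactly this invariant.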

Of course, people are different. So, it’s reasonable to assume that some people think more in terms of graphs (networks), some in terms of sequence, order, number, etc. But I tend to think those are in the minority. I’d enjoy evidence to the contrary.

exegesis of the esoteric

Written by  on March 23, 2015

In Chapter 2 of Lee Rudolph’s Qualitative Mathematics for the Social Sciences, Rudolph asserts:

truth is what they come to believe more firmly as they function better [...] As such, truth is always conditional and subject to amendment: but it has, always and unconditionally, a net or web of meaning that anchors it pretty firmly to many places

He’s saying this in the context of 2 comments written by G.H. Hardy about the function of mathematicians, chronologically:

(1922) The function of a mathematician is [...] to observe the facts about his own intricate system of reality, [...] to record the results of his observations in a series of maps, each of which is a branch of pure mathematics.

(1940) The function of a mathematician is [...] to prove new theorems

And Rudolph makes the following statement about the first of those quotes:

Hardy wrote his Apology at the end of his mathematical career, when he was convinced, perhaps correctly, that “his creative powers as a mathematician at last, in his sixties, [had] left him.”

Rudolph’s point about the timing of the two statements is critical, the difference being before and after Hardy felt his powers had left him. But the point does not imply what Rudolph infers about mathematical truth. One of the things that all of us do, mathematicians included, is become more conservative as we age (in thought and action, not necessarily political beliefs). As whatever powers we had leave us, we are left with the fossils of the exercising of those powers. This is true of both the complete structure as well as the more sparse anchors set more firmly amongst the less firm surroundings. My guess is that those anchors seed the “crystal.” Whatever anchors we become convicted of … convinced of … while younger, tend to accumulate cruft and barnacles, leading to a stigmergic mess of arbitrarily decided and fossilized dogma … that we then carry to our graves. (Unless we have a near-death epiphany or, as more research is showing, loosen up that “crystal” in some other way.)

What Hardy successfully exhibits with his change is the path from ideological conviction to transpersonal artifact. Just like science, what matters are the artifacts we produce, the less semantically (and metaphorically) laden, the better. For math, it comes in the form of proofs. For science, it comes in the form of recipes that anyone with an equivalent sensorimotor manifold can execute. Hence, mathematical truth is not a (or many) set(s) of beliefs inside the heads of mathematicians. It is the proofs written on paper and magnetic/optical media all around us. I think we’re finally approaching a demonstration of that with the homotopy type theory (HoTT) project, whereby mathematical truth would be fully instantiated in computing machinery. Even if HoTT fails, it’ll be a huge step toward externalizing mathematics (exegesis of the esoteric … pretty much the inverse of what Rudolph concludes).

Gnome 3 top bar drag auto maximize

Written by  on October 22, 2014

This log has become more about arbitrary tech tricks than anything else. So, I may as well log this as well. The single most irritating thing about Gnome 3 has been the automatic maximize for windows when I drag them to (or rather past) the top bar. There are numerous posts out there claiming it’s the fault of Compiz or Metacity and referring to various options that don’t necessarily even exist in Gnome 3.14. The only solution I could find is this. Side edge tiling is mildly useful… but disabling the absolutely stupid auto-resize top edge tiling is definitely worth the loss. For lazy people who don’t want to click the link (or in case it disappears before this log):

$ dconf read /org/gnome/shell/overrides/edge-tiling
$ dconf write /org/gnome/shell/overrides/edge-tiling false
$ dconf read /org/gnome/shell/overrides/edge-tiling
false

Java 8 Nashorn problem

Written by  on October 7, 2014

I got a __noSuchProperty__ error when trying to use my old JavaScript files with Java 8, despite the claims on this site: https://wiki.openjdk.java.net/display/Nashorn/Rhino+Migration+Guide (which may change by the time you read this, so I’ll excerpt it here):

JavaImporter and with

Nashorn supports JavaImporter constructor of Rhino. It is possible to locally import multiple java packages and use it within a ‘with’ statement.

It seems like there is some problem with the JavaImporter… or there’s some problem with the way I’m [ab]using it, of course. In any case, this page helped me locate the problem. I simply changed from using the Java StrictMath class to using the JavaScript Math object:

--- test.js	2014-10-07 10:34:15.771959410 -0700
+++ testnew.js	2014-10-07 10:34:20.739826080 -0700
-  tempy = StrictMath.exp(-constant*ind);
+  tempy = Math.exp(-constant*ind);

This just kicks the can down the street, of course. I may or may not have to solve the problem for real at some point. But this is fine for now. Perhaps when OpenJDK 8 is released, it’ll be easier to debug? Below is the rest of the story.

Execution:

gepr@yog:~/lang/java/js-script-engine$ java -cp . Eval
test eval(0.0) = 1.0 : test eval(0.0) = 1.0
test eval(10.0) = 4.539992976248485E-5 : test eval(10.0) = 0.006737946999085467
test eval(20.0) = 2.061153622438558E-9 : test eval(20.0) = 4.539992976248485E-5
test eval(30.0) = 9.357622968840175E-14 : test eval(30.0) = 3.059023205018258E-7
test eval(40.0) = 4.248354255291589E-18 : test eval(40.0) = 2.061153622438558E-9
gepr@yog:~/lang/java/js-script-engine$ sudo update-alternatives --config java
There are 4 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      auto mode
  1            /usr/bin/gij-4.8                                 1048      manual mode
  2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071      manual mode
  3            /usr/lib/jvm/jdk-7-oracle-x64/jre/bin/java       317       manual mode
  4            /usr/lib/jvm/jdk-8-oracle-x64/jre/bin/java       318       manual mode

Press enter to keep the current choice[*], or type selection number: 4
update-alternatives: using /usr/lib/jvm/jdk-8-oracle-x64/jre/bin/java to provide /usr/bin/java (java) in manual mode
gepr@yog:~/lang/java/js-script-engine$ java -cp . Eval
test eval(0.0) = 1.0 : test eval(0.0) = 1.0
test eval(10.0) = 4.539992976248485E-5 : test eval(10.0) = 0.006737946999085467
test eval(20.0) = 2.061153622438558E-9 : test eval(20.0) = 4.539992976248485E-5
test eval(30.0) = 9.357622968840175E-14 : test eval(30.0) = 3.059023205018258E-7
Exception in thread "main" java.lang.AssertionError: __noSuchProperty__ placeholder called
	at jdk.nashorn.internal.objects.NativeJavaImporter.__noSuchProperty__(NativeJavaImporter.java:105)
	at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:557)
	at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:209)
	at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:378)
	at jdk.nashorn.internal.runtime.ScriptObject.invokeNoSuchProperty(ScriptObject.java:2113)
	at jdk.nashorn.internal.runtime.ScriptObject.megamorphicGet(ScriptObject.java:1805)
	at jdk.nashorn.internal.scripts.Script$\^eval\_.runScript(:5)
	at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:535)
	at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:209)
	at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:378)
	at jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:568)
	at jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:525)
	at jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:521)
	at jdk.nashorn.api.scripting.NashornScriptEngine.eval(NashornScriptEngine.java:192)
	at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:233)
	at Eval.eval(Eval.java:31)
	at Eval.main(Eval.java:52)
gepr@yog:~/lang/java/js-script-engine$ 

Eval.java:

import java.io.FileNotFoundException;
import java.io.IOException;
import javax.script.ScriptException;

public class Eval {
  final static javax.script.ScriptEngineManager manager = new javax.script.ScriptEngineManager();
  final static javax.script.ScriptEngine engine = manager.getEngineByExtension("js");
  javax.script.ScriptContext context = new javax.script.SimpleScriptContext();
  public javax.script.Bindings scope = null;

  String script = null;

  public Eval(String fileName) {
    context.setBindings(engine.createBindings(), javax.script.ScriptContext.ENGINE_SCOPE);
    scope = context.getBindings(javax.script.ScriptContext.ENGINE_SCOPE);

    script = convertStreamToString(
             getClass().getClassLoader().getResourceAsStream(fileName));
  }

  public static String convertStreamToString(java.io.InputStream is) {
    java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
    return s.hasNext() ? s.next() : "";
  }

  public double eval(double t) {
    double retVal = Double.NaN;
    Object result = null;
    scope.put("ind", t);  // bind the independent variable for this call
    try {
      result = engine.eval(script, scope);
    } catch (ScriptException e) {
      System.err.println(e.getMessage());
      e.printStackTrace();
      System.exit(-1);
    }
    // Nashorn may hand back either a Double or an Integer; coerce to double.
    if (result instanceof Double)
      retVal = (Double)result;
    else
      retVal = ((Integer)result).doubleValue();
    return retVal;
  }

  public static void main(String args[]) {
    Eval se1 = null;
    Eval se2 = null;
    se1 = new Eval("test.js");
    se2 = new Eval("test.js");
    se1.scope.put("constant", 1.00);
    se2.scope.put("constant", 0.50);
    for (double t=0.0; t<50.0; t += 10.0) {
      System.out.print("test eval("+t+") = "+ se1.eval(t)+" : ");
      System.out.println("test eval("+t+") = "+ se2.eval(t));
    }
  }
}
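
The `convertStreamToString` helper uses a Scanner idiom worth calling out: the delimiter `\A` matches only the beginning of input, so the Scanner's single token is the entire stream. A minimal, self-contained sketch of the same trick (class and variable names here are mine, not from the post):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class SlurpDemo {
  // Same idiom as Eval.convertStreamToString: "\\A" anchors the delimiter
  // at the start of input, so next() returns everything in one token.
  static String slurp(InputStream is) {
    java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
    return s.hasNext() ? s.next() : "";
  }

  public static void main(String[] args) {
    String script = "var x = 1;\nx + 1;\n";
    InputStream is = new ByteArrayInputStream(script.getBytes(StandardCharsets.UTF_8));
    System.out.println(slurp(is).equals(script)); // prints true: round-trips intact
  }
}
```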

test.js:

// Import java.lang.StrictMath so it can be referenced inside with().
var math = new JavaImporter(java.lang.StrictMath);
var constant;
var ind;
with (math) {
  // The value of this assignment is what engine.eval() returns.
  tempy = StrictMath.exp(-constant*ind);
}
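
For reference, the value test.js hands back on each call is just exp(-constant * ind). The same curve computed directly in plain Java (a standalone sketch; the class name is mine) shows what the two Eval instances should print for constant = 1.0 and constant = 0.5:

```java
public class DecayCheck {
  // Mirrors test.js: exp(-constant * ind), via the same StrictMath call.
  static double decay(double constant, double ind) {
    return StrictMath.exp(-constant * ind);
  }

  public static void main(String[] args) {
    // Same loop as Eval.main: t = 0, 10, 20, 30, 40
    for (double t = 0.0; t < 50.0; t += 10.0) {
      System.out.println("t=" + t + " : " + decay(1.00, t) + " : " + decay(0.50, t));
    }
  }
}
```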

Akka First Tutorial

Written by  on September 9, 2014

I found this tutorial:
http://doc.akka.io/docs/akka/2.0.2/intro/getting-started-first-java.html
very useful. But Akka 2.0.2 is pretty old at this point. Only a few changes are needed to update it to Akka 2.3.6. The following unified diff shows those differences.

--- pi/Pi.java  2014-09-09 12:49:36.988942607 -0700
+++ nbpi/pi/src/pi/Pi.java      2014-09-09 13:49:12.677146112 -0700
@@ -4,9 +4,8 @@
 import akka.actor.ActorSystem;
 import akka.actor.Props;
 import akka.actor.UntypedActor;
-import akka.actor.UntypedActorFactory;
 import akka.routing.RoundRobinRouter;
-import akka.util.Duration;
+import scala.concurrent.duration.Duration;
 import java.util.concurrent.TimeUnit;
 
 public class Pi {
@@ -47,8 +46,6 @@
 
   public static class Worker extends UntypedActor {
      
-    // calculatePiFor ...
-     
     public void onReceive(Object message) {
       if (message instanceof Work) {
         Work work = (Work) message;
@@ -85,14 +82,15 @@
       this.nrOfElements = nrOfElements;
       this.listener = listener;
      
-      workerRouter = this.getContext().actorOf(new Props(Worker.class).withRouter(new RoundRobinRouter(nrOfWorkers)),
+      workerRouter = this.getContext().actorOf(Props.create(Worker.class).withRouter(new RoundRobinRouter(nrOfWorkers)),
                                                "workerRouter");
+      
     }
      
     public void onReceive(Object message) {
       if (message instanceof Calculate) {
-        for (int start = 0; start < nrOfMessages; start++) {
-          workerRouter.tell(new Work(start, nrOfElements), getSelf());
+        for (int st = 0; st < nrOfMessages; st++) {
+          workerRouter.tell(new Work(st, nrOfElements), getSelf());
         }
       } else if (message instanceof Result) {
         Result result = (Result) message;
@@ -131,23 +129,19 @@
     ActorSystem system = ActorSystem.create("PiSystem");
      
     // create the result listener, which will print the result and shutdown the system
-    final ActorRef listener = system.actorOf(new Props(Listener.class), "listener");
+    final ActorRef listener = system.actorOf(Props.create(Listener.class), "listener");
      
     // create the master
-    ActorRef master = system.actorOf(new Props(new UntypedActorFactory() {
-        public UntypedActor create() {
-          return new Master(nrOfWorkers, nrOfMessages, nrOfElements, listener);
-        }
-      }), "master");
+    ActorRef master = system.actorOf(Props.create(Master.class, nrOfWorkers, nrOfMessages, nrOfElements, listener), "master");
      
     // start the calculation
-    master.tell(new Calculate());
+    master.tell(new Calculate(), ActorRef.noSender());
      
   } 
 }