Tuesday, November 18, 2008

Convergence 08 conference

I just got back from the Convergence 08 conference. I was conflicted about whether to go because I have been trying to minimize distractions while writing, but the topics and people at that conference were just so close a match to the themes of my book that I really had to go, and I'm glad I did.

The format was an unconference, which allows a lot of flexibility in what attendees get out of the event. In my case, I was able to spend several hours talking to really smart people about nanotech, consciousness, AI, cognitive enhancement, and of course, writing. I ran a discussion group on the topic of "Convergence and Near-term Speculative Fiction" which was very useful and interesting, not least because in that group I met a professor of English literature specializing in speculative fiction who also teaches composition. He agreed to give me some feedback on my writing, which I'm (somewhat nervously) excited to get.

There was a session called something like "Limits of Knowledge" that I had to skip because it conflicted with another session I wanted to attend, but it sounded interesting, particularly since I have recently spent some time trying to understand a paper by David Wolpert on just this topic. Wolpert, a researcher in physics and computer science at NASA, purports to show that (a) it is impossible to have an "inference machine" (basically any physical device, with or without human input) capable of predicting everything that can happen in its universe, and (b) at most one inference machine can fully predict the behavior of all the other inference machines in its universe. These results hold independently of the laws of physics of a given universe, so they are presumably valid across the (capital "M") Multiverse of all possible universes.
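I won't pretend the following is Wolpert's actual construction, but the self-referential flavor of claim (b) can be illustrated with a little cartoon of my own devising: two boolean-output "predictors," each wired to respond to a prediction about the other. A brute-force check in Python over the possible outputs shows they can't both be right:

    # Toy illustration (my own, not Wolpert's formalism): two devices A and B
    # each emit a single bit. A is built to emit the OPPOSITE of whatever it
    # predicts B will emit; B is built to emit exactly what it predicts A
    # will emit. Can both predictions be correct?
    from itertools import product

    consistent = []
    for a_bit, b_bit in product([0, 1], repeat=2):
        a_correct = (a_bit == 1 - b_bit)   # A's prediction of B was right (then negated)
        b_correct = (b_bit == a_bit)       # B's prediction of A was right (and echoed)
        if a_correct and b_correct:
            consistent.append((a_bit, b_bit))

    print(consistent)   # prints [] -- no assignment lets both be right,
                        # so at most one of the two can be a universal predictor

It's the same Liar-style loop that drives the diagonalization arguments below, just stripped down to two bits.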

I can't say I fully understand the arguments in detail, but Wolpert is using a variation on Cantor's diagonalization method, similar to Turing's proof that there is no guaranteed way of determining whether an arbitrary program, run on an arbitrary input, will halt. If Wolpert's results hold, it means not only that we don't know everything, but that it is impossible in principle to know everything. Of course, the proof does not say precisely what we can know; it just constructs an example of something that no inference machine can know. (Along the way Wolpert also gives a formal definition of what it means to "know" something, which is also interesting, but one topic at a time!)
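To make the diagonal trick concrete (this is the standard textbook argument, nothing specific to Wolpert's paper), here's a toy Python sketch. The halts() function is hypothetical; the whole point is that no correct implementation of it can exist:

    # Suppose, for contradiction, that someone hands us a perfect halting
    # detector. (This function is hypothetical; the argument shows it
    # cannot exist.)
    def halts(program, data):
        """Supposedly returns True if program(data) eventually halts, False otherwise."""
        ...  # assumed to be always correct

    # The diagonal construction: a program that does the opposite of
    # whatever halts() predicts it will do when fed its own source.
    def contrary(program):
        if halts(program, program):
            while True:      # halts() said "halts", so loop forever
                pass
        else:
            return           # halts() said "loops forever", so halt immediately

    # Now ask: does contrary(contrary) halt?
    # If halts(contrary, contrary) is True, contrary(contrary) loops forever.
    # If it is False, contrary(contrary) halts immediately.
    # Either way halts() is wrong about at least one input, so no such
    # perfect detector can exist.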

I've been wondering what the implications of this might be for simulated universes in which the entity running the simulation introduces knowledge into the simulation. Would that possibility get around Wolpert's results? I suspect not, given that even with the addition of an oracle, Turing's halting paradox still exists: no oracle machine is capable of solving its own halting problem. The parallel to Wolpert is that even though one might think any question about the simulated universe could be answered by those running the simulation, the true "universe" (which includes the simulator itself) cannot know everything about itself; that, in turn, raises the question of whether everything about the simulation can be known either.
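The relativized version of the argument has exactly the same shape. Rewriting the sketch above so that every program, including the would-be decider, has access to the injected knowledge (again, just my own illustration), the diagonal move goes through unchanged:

    # Same diagonal argument, one level up. Assume every program now has
    # access to an oracle that injects outside knowledge (say, answers to
    # the ordinary halting problem). Suppose halts_with_oracle() correctly
    # decides halting for these oracle-equipped programs.
    def halts_with_oracle(program, data):
        """Hypothetical: True iff program(data), run with oracle access, halts."""
        ...

    def contrary(program):
        # contrary() also has oracle access, so it may call halts_with_oracle()
        if halts_with_oracle(program, program):
            while True:
                pass
        else:
            return

    # contrary(contrary) defeats halts_with_oracle() exactly as before.
    # Adding the oracle moves the boundary of what can be decided, but the
    # enlarged system still cannot fully predict itself.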

In any case, this all just reinforces my agreement with J.B.S. Haldane: "I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose."