Monday, November 16, 2009

2D Consciousness?

Life has a way of keeping me from getting bored. The last several months have been, shall we say, interesting, in the Chinese Curse sense. In any case, I have been making progress on the book, and I'm soon going to have to decide whether to pursue publishing it, put it out as a podcast, or ... I don't know.

In the meantime, I've written up some of the ideas I've been mulling over for a while now, having to do with the possibility of uploading a mind into a computer. The link is here:

Consciousness and 2D Computation: a Curious Conundrum

The take-away is that it is possible to set up a scenario that blurs the distinction between performing a computation and using the results of a computation, in a way that raises some troubling questions I haven't been able to answer satisfactorily for myself. The thought experiment itself combines a number of ideas that have been out there for a while in the AI and philosophy of mind communities, though perhaps in a slightly novel way. I ran it past some smart folks at the recent Singularity Summit, all of whom indicated that they wanted to think about it further before responding to it, which told me that it must be, as I say, at least slightly novel.

I'm very interested in feedback on these ideas.

Tuesday, March 31, 2009

Heading out

Tonight is my last night in Fairbanks. It's been a great six and a half months. I didn't get as much writing done as I originally hoped, but I did get a fair bit, and I did accomplish my health goals, so overall I'm very happy.

On the humble inquiry theme, I'll just put a few thoughts here about a rather interesting problem in thermodynamics that is not widely appreciated. Thermodynamics is generally considered a very solid, well-established part of physics, but one aspect of it is extremely problematic: because the underlying physical laws are time-reversible, the same statistical reasoning behind the second law of thermodynamics (disorder always increases) also predicts that disorder should increase as you look toward the past. Yet we know that is not the case. I had realized this long ago after reading about it in the Feynman Lectures on Physics, but it wasn't until I read Robin Hanson's comments on the Overcoming Bias blog that I considered just how "crazy" the standard explanation for this really is.
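A toy model makes the symmetry concrete. The sketch below is my own illustration, not from any of the sources above: particles evolve under perfectly time-reversible free motion, x(t) = x0 + v*t, starting from a tightly clustered (low-disorder) state at t = 0, and we measure spatial spread at various times.

```python
import random

# Toy illustration of time-reversible dynamics: free-streaming particles
# with positions x(t) = x0 + v*t. The initial positions are tightly
# clustered (low disorder); velocities are random.
random.seed(42)
N = 1000
x0 = [random.gauss(0, 1) for _ in range(N)]   # clustered initial positions
v  = [random.gauss(0, 1) for _ in range(N)]   # random velocities

def spread(t):
    """Variance of particle positions at time t (a crude 'disorder' proxy)."""
    xs = [x + vi * t for x, vi in zip(x0, v)]
    m = sum(xs) / N
    return sum((x - m) ** 2 for x in xs) / N

# Disorder grows in BOTH time directions away from the special moment t = 0:
print(spread(-10), spread(0), spread(10))
```

Running it, the spread at t = -10 and t = +10 dwarfs the spread at t = 0. Nothing in the dynamics picks out a direction of time; only the special initial condition does, which is exactly why a low-entropy past demands explanation.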

The "explanation" is, of course, that at some point in the past the disorder was very low. I put "explanation" in quotes, because it really isn't an explanation at all. It's just a statement that disorder was lower in the past without giving any reason for that. With the big bang, it's assumed that the disorder must have been low, but so far no one has come up with a compelling reason why that should be so. The center of an explosion is not normally known for being a nice, orderly place. Where did the order come from?

One suggestion that has been made is discussed by Feynman in his Lectures on Physics: maybe we just happen to live in a more ordered fluctuation in a much larger unordered region. But if that were the case, you would not expect to see the same amount of order in every direction equally, especially in such a large volume as our visible universe. In other words, it's much more likely that you would have a fluctuation that resulted in a single solar system than a galaxy, let alone 80 billion galaxies. The "random fluctuation in a larger volume" argument is not popular because of this.

It's sobering to realize that as we watch the Universe unfold, everything we see is the unwinding of the "spring" that was wound up at some point in the past. To paraphrase Feynman, even a falling drop of water cannot be completely understood until the mystery of the beginnings of the universe is reduced from speculation to scientific understanding. We're still a long way from that.


Update: I recently watched an interesting lecture that Roger Penrose gave at Princeton in 2003 which talks about this issue. He gives an estimate of the volume of the phase space for our observable universe of 10^10^123. One over that enormous number is an estimate of the likelihood of the special conditions that would have to be present at the Big Bang in order to give rise to a universe with the amount of order we currently observe. That number (10^10^123) is so huge that it overwhelms the anthropic principle: as I mentioned above, you don't need anything near that much order to get observers like us. There must be something else going on, but so far, no one has come up with a good explanation for all that order.


Further update: Sean Carroll gives a fascinating interview that covers many of these same issues. It's well worth watching.

Sunday, March 1, 2009

Next stop: Austin

This is just a quick update to let people know that I've decided to continue writing for a while rather than return to work. Yes, given the economic situation it's not an ideal time, but I want to keep writing even though I'm burning through my retirement money at a much faster rate than I had planned. I'm making good progress on my goals (both for writing and health), but I'm not yet where I want to be, so I've decided to keep working on both for now. But where to live?

Alaska has been great, but it's too expensive, especially during the tourist season, so after investigating a lot of possibilities, I've decided to spend the next six months or so in Austin, Texas. There's no single compelling reason why I chose Austin, but I think it's going to work out well for my purposes. Tomorrow I'm flying down to visit my sister in Wasilla, then Tuesday I'm headed to Austin to check things out and make some necessary arrangements.

In my capacity as a humble inquirer, I've been thinking a lot recently about computationalism, the notion that the mind is fundamentally a simulatable computational process. I used to be fairly confident that this view is correct, but the more I've thought about it the less sure I am. This whole topic is one that I am deeply intrigued by, and it certainly triggers feelings of humility in the face of the daunting complexity of the brain, and the many subtleties and conundrums in the philosophy of mind. For a useful and balanced introduction to many of the important thinkers in this area, I recommend the Conscious Entities blog, run by Peter Hankins. There is a lot of good stuff there.

I have come up with some interesting, and at least partially novel (as far as I know) thought experiments in this area that I want to post about sometime when the muse strikes me, but for now, I'll just relate a story, draw a comparison, and ask some questions to give a flavor of my thinking.

Many years ago, I came home from work one day and discovered that our cat had caught a small bird, not yet fully fledged, and was playing with it. The bird was bleeding and badly injured; clearly there was nothing I could do for it but to put it out of its misery. I shooed the cat away, took a heavy metal pole that was nearby, and placed it on the poor little creature's head. It had been cheeping pitifully, but when it felt the pole it cheeped frantically. I pushed down, sharp and hard, and the cheeping stopped. I dug a little grave for the bird and buried it. There is no doubt in my mind that the bird was experiencing something akin to fear and pain, and it still tugs a little at my heartstrings thinking about this incident now, more than thirty years later.

Compare that to a desktop computer with a camera attached to it, with the camera pointed at the outlet on the wall where the computer is plugged in. Running on the computer is a simple program that monitors the image coming in, and if it detects motion near the outlet it starts making painful cheeping sounds. The closer to the outlet, and the longer the foreign object stays near the outlet, the louder and more pathetic the sounds become. Question: should I feel any qualms about unplugging the computer? For the sake of argument, let's suppose I personally wrote the program, and I understand that there is a variable in the computer which I call pain which the program updates based on input from the camera. When pain increases, the sound increases.
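For concreteness, here is a minimal sketch of the kind of program described above. The variable name pain follows the description; everything else, such as the distance-based update rule and the stubbed-out audio, is my own assumption, since a real version would do motion detection on actual camera frames:

```python
# Hypothetical sketch of the outlet-watching program. In a real version,
# distance_to_outlet would be computed by vision code from camera frames;
# here it is simply passed in.

def update_pain(pain, distance_to_outlet, dwell_time):
    """Raise the `pain` variable as the object gets closer and lingers longer."""
    if distance_to_outlet < 1.0:                  # motion detected near the outlet
        pain += (1.0 - distance_to_outlet) * (1 + dwell_time)
    else:
        pain = max(0.0, pain - 0.5)               # decay when nothing is near
    return pain

def cheep(pain):
    """Map the pain variable to a sound volume (stub for real audio output)."""
    return min(100, int(pain * 10))               # louder as pain grows, capped

# An object approaches the outlet and stays there: the cheeping escalates.
pain = 0.0
for t in range(5):
    pain = update_pain(pain, distance_to_outlet=0.5, dwell_time=t)
    print("volume:", cheep(pain))
```

The point of the sketch is how transparent it all is: rename pain to pleasure and swap the sound file, and nothing about the computation changes, which is exactly what makes the intuition pump work.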

Now, I may have a slight feeling of psychological resistance to pulling the plug, but still I would know full well that no matter what the value of pain is in my program, there is no entity that is actually feeling pain. This can be rendered even more obvious by renaming the variable pleasure, and by changing the sound from more and more frantic cheeping to ever more contented sighs. Clearly (to me, anyway), in neither case with my program is any real feeling happening.

What is the fundamental difference between these two cases? Is it just a matter of complexity and organization? Is there any combination of symbol manipulations that results in a system that feels real pain? Or is it just symbols being manipulated? If everything about the bird is describable by the laws of physics, would it be possible, in principle, to run a computer program that simulated exactly what was going on in the bird's nervous system? If so, would it be morally acceptable to run such a simulation, if there was a chance that somehow real suffering was happening? Questions abound. I have much more to say on this and related topics.