Sunday, March 1, 2009

Next stop: Austin

This is just a quick update to let people know that I've decided to continue writing for a while rather than return to work. Yes, given the economic situation it's not an ideal time, and I'm burning through my retirement money at a much faster rate than I had planned, but I want to keep writing for now. I'm making good progress on my goals (both for writing and health), but I'm not yet where I want to be, so I've decided to keep working on both of them. But where to live?

Alaska has been great, but it's too expensive, especially during the tourist season, so after investigating a lot of possibilities, I've decided to spend the next six months or so in Austin, Texas. There's no single compelling reason why I chose Austin, but I think it's going to work out well for my purposes. Tomorrow I'm flying down to visit my sister in Wasilla, then Tuesday I'm headed to Austin to check things out and make some necessary arrangements.

In my capacity as a humble inquirer, I've been thinking a lot recently about computationalism, the notion that the mind is fundamentally a simulatable computational process. I used to be fairly confident that this view is correct, but the more I've thought about it, the less sure I am. The whole topic deeply intrigues me, and it certainly induces humility in the face of the daunting complexity of the brain and the many subtleties and conundrums in the philosophy of mind. For a useful and balanced introduction to many of the important thinkers in this area, I recommend the Conscious Entities blog, run by Peter Hankins. There is a lot of good stuff there.

I have come up with some interesting, and at least partially novel (as far as I know) thought experiments in this area that I want to post about sometime when the muse strikes me, but for now, I'll just relate a story, draw a comparison, and ask some questions to give a flavor of my thinking.

Many years ago, I came home from work one day and discovered that our cat had caught a small bird, not yet fully fledged, and was playing with it. The bird was bleeding and badly injured; clearly there was nothing I could do for it but put it out of its misery. I shooed the cat away, took a heavy metal pole that was nearby, and placed it on the poor little creature's head. It had been cheeping pitifully, but when it felt the pole it cheeped frantically. I pushed down, sharp and hard, and the cheeping stopped. I dug a little grave for the bird and buried it. There is no doubt in my mind that the bird was experiencing something akin to fear and pain, and it still tugs a little at my heartstrings thinking about this incident now, more than thirty years later.

Compare that to a desktop computer with a camera attached, pointed at the wall outlet where the computer is plugged in. Running on the computer is a simple program that monitors the incoming image, and if it detects motion near the outlet it starts making painful cheeping sounds. The closer the foreign object gets to the outlet, and the longer it stays there, the louder and more pathetic the sounds become. Question: should I feel any qualms about unplugging the computer? For the sake of argument, let's suppose I personally wrote the program, and I understand that it contains a variable, which I call pain, that it updates based on input from the camera. When pain increases, the sound increases.
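For concreteness, here is a minimal Python sketch of the kind of program I have in mind. The camera and the speaker are stubbed out with placeholder functions (a real version would analyze camera frames and play audio), and the particular thresholds and increments are arbitrary, but the essential logic is there: a variable called pain driven by how close something comes to the outlet, and the volume of the cheeping driven by pain.

    import random
    import time

    def distance_to_outlet():
        # Placeholder for the camera plus motion detection: returns how far the
        # nearest moving object is from the outlet (0.0 = touching, 1.0 = far away).
        return random.random()

    def cheep(volume):
        # Placeholder for the speaker: just prints instead of playing a sound.
        print("cheep! (volume %.2f)" % volume)

    pain = 0.0  # the variable at the heart of the thought experiment

    while True:
        threat = 1.0 - distance_to_outlet()       # closer object -> bigger threat
        if threat > 0.5:
            pain = min(1.0, pain + 0.1 * threat)  # pain builds while the threat lingers
        else:
            pain = max(0.0, pain - 0.05)          # pain fades once the threat withdraws
        if pain > 0.0:
            cheep(volume=pain)                    # louder, more frantic cheeping as pain grows
        time.sleep(0.5)

Nothing in this loop depends on the name pain; rename the variable and the program behaves exactly the same, which is part of the point below.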

Now, I may feel a slight psychological resistance to pulling the plug, but I would still know full well that no matter what the value of pain is in my program, there is no entity that is actually feeling pain. This can be made even more obvious by renaming the variable to pleasure, and by changing the sound from more and more frantic cheeping to ever more contented sighs. Clearly (to me, anyway), no real feeling is happening in either version of my program.

What is the fundamental difference between these two cases? Is it just a matter of complexity and organization? Is there any combination of symbol manipulations that results in a system that feels real pain? Or is it just symbols being manipulated? If everything about the bird is describable by the laws of physics, would it be possible, in principle, to run a computer program that simulated exactly what was going on in the bird's nervous system? If so, would it be morally acceptable to run such a simulation, if there were a chance that somehow real suffering was happening? Questions abound. I have much more to say on this and related topics.
