Thursday, June 14, 2012

I got some stuff published

Well, after two and a half years (!), I figure it's time to update this blog.

After I put up the article on machine consciousness mentioned in my last posting in this blog, I had some interesting email exchanges about it with several people, including Mark Bishop and David Chalmers, both of whom have written on this topic, and both of whom encouraged me to write something up for publication. I have finally done so. Part of the delay was due to my other writing project (Do Not Go Gentle, more on that below), and part was due to the fact that in November 2010 my sabbatical ended and I went back to full-time work.

The article I wrote is titled "Counterfactuals, Computation, and Consciousness", and it's published in Cognitive Computation. It is available online from Springer, or if you don't have institutional access to that journal, you can read a less-nicely-formatted version of it on my website:

muhlestein.com/consciousness/ccc.html

For the sake of conciseness, I decided to write in detail about a single thought experiment that illustrates one of the difficulties of computationalism: the philosophical claim that a purely computational theory of mind is possible. Computationalism has been the leading theory of mind for some time, though it does have its detractors. I had more or less assumed that something like uploading to a purely computational substrate would work, in the sense that an uploaded mind would have conscious experience, but I never thought seriously about it until I was plotting my book and realized I'd have to come down one way or the other on the question. Unfortunately, I've come to believe there are serious reasons to doubt that it would work as some hope.

Very clearly, there is computation going on in the brain, as has been demonstrated beautifully, for example, in studies of visual processing. But what about consciousness itself? The more I thought about it, the less convinced I became that a purely computational account would do the job. The problem for me is the abstract nature of computation, and the fact that it is possible to blur the distinction between a recording of a computation, which almost everybody (with the exception of patternists like Ben Goertzel) agrees could not be conscious, and a bona fide computation, which computationalists assert could be conscious, as long as it is the right sort of computation.
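To make that blurring concrete, here is a toy sketch of my own (it is not from the paper, and the names are mine): a "live" computation and a replay of a recording of that computation produce exactly the same observable outputs, so nothing external distinguishes them.

```python
# Toy illustration (my own, hypothetical): a bona fide computation versus
# a replayed recording of it, which an outside observer cannot tell apart.

def live_step(state):
    """A bona fide computation: each output is produced on the fly."""
    return (state * 1103515245 + 12345) % 2**31  # simple LCG update

# Run the computation once and record its trace.
state = 42
trace = []
for _ in range(10):
    state = live_step(state)
    trace.append(state)

# "Replay" is just a lookup into the recording -- no transition rule is
# ever computed, yet the observable outputs are identical.
def replay_step(i):
    return trace[i]

# Re-run the live computation and compare it with the replay, step by step.
state = 42
for i in range(10):
    state = live_step(state)
    assert state == replay_step(i)  # indistinguishable from the outside
```

The computationalist has to say why the first run could be conscious while the replay could not, when the two are behaviorally identical.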

Beyond needing to know how to think about this for use in my novel, I have a personal interest in this as well: I have many friends in the transhumanist community who are looking forward to uploading their consciousness into a computational substrate via a destructive scanning of their brains. I'd be the last person to try to prevent someone from doing that, but before I could recommend it, I'd want to be double damn sure that it works. The failure mode, if it doesn't work, is quite horrifying: your loved one uploads and announces that they are conscious, everything's fine, life is good, etc. But if computationalism is false, they would in fact be dead, and their behavior would be the output of a program, no more conscious than a one-liner that prints "Hello, World." So we definitely want to get this right!

In other news, as most readers of this blog will already know, I did manage to get the first two volumes of my novel, Do Not Go Gentle, put up on Amazon as ebooks:

Do Not Go Gentle Book One: Discovery
Do Not Go Gentle Book Two: Kinaadman

People seem to be enjoying the story, and though I have made no attempt whatsoever to promote it, I've had a gratifying response so far. At some point I guess I'll spend some effort getting the word out about it, but for now, I'm enjoying just working on it.

I'm about 2/3 done with the third installment of the story, and now that I've got the consciousness paper finished, I'm finally getting back to work on it. I'm hoping to finish book 3 before the end of the year. I have a blog for the story, which is at millstorm.com, "Millstorm" being the nom de plume I'm using.

Monday, November 16, 2009

2D Consciousness?

Life has a way of keeping me from getting bored. The last several months have been, shall we say, interesting, in the Chinese Curse sense. In any case, I have been making progress on the book, and I'm soon going to have to decide whether to pursue publishing it, or putting it out as a podcast, or ... I don't know.

In the meantime, I've written up some of the ideas I've been mulling over for a while now, having to do with the possibility of uploading a mind into a computer. The link is here:

Consciousness and 2D Computation: a Curious Conundrum

The take-away is that it is possible to set up a scenario that blurs the distinction between performing a computation and using the results of a computation, in a way that raises some troubling questions I haven't been able to answer satisfactorily for myself. The thought experiment itself combines a number of ideas that have been out there for a while in the AI and philosophy of mind communities, though perhaps in a slightly novel way. I ran it past some smart folks at the recent Singularity Summit, all of whom indicated that they wanted to think about it further before responding to it, which told me that it must be, as I say, at least slightly novel.

I'm very interested in feedback on these ideas.

Tuesday, March 31, 2009

Heading out

Tonight is my last night in Fairbanks. It's been a great six and a half months. I didn't get as much writing done as I originally hoped, but I did get a fair bit, and I did accomplish my health goals, so overall I'm very happy.

On the humble inquiry theme, I'll just put a few thoughts here about a rather interesting problem in thermodynamics that is not widely appreciated. Thermodynamics is generally considered to be a very solid, well-established part of physics, but there is an aspect of it that is extremely problematic: because physical law is time-reversible, the second law of thermodynamics (disorder always increases) applies equally in both time directions, and therefore predicts that disorder in the past should also have been higher. Yet we know that is not the case. I had realized this long ago after reading about it in the Feynman Lectures on Physics, but it wasn't until I read Robin Hanson's comments on the Overcoming Bias blog that I considered just how "crazy" the standard explanation for this really is.

The "explanation" is, of course, that at some point in the past the disorder was very low. I put "explanation" in quotes, because it really isn't an explanation at all. It's just a statement that disorder was lower in the past without giving any reason for that. With the big bang, it's assumed that the disorder must have been low, but so far no one has come up with a compelling reason why that should be so. The center of an explosion is not normally known for being a nice, orderly place. Where did the order come from?
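A back-of-the-envelope sketch shows why an ordered past is so special. In a toy Boltzmann-style model (my own illustration, nothing more) of N particles distributed between two boxes, the "ordered" macrostate corresponds to vanishingly few microstates compared to the mixed one:

```python
# Toy illustration: Boltzmann-style counting for N particles in two boxes.
# A macrostate with k particles on the left has W = C(N, k) microstates;
# ordered macrostates (all particles on one side) are vanishingly rare.
from math import comb

N = 100
W_mixed = comb(N, N // 2)   # evenly mixed: ~1e29 microstates
W_ordered = comb(N, 0)      # all on one side: exactly 1 microstate

# The mixed macrostate is overwhelmingly more probable, which is why
# disorder increases going forward -- and why a highly ordered past
# is exactly the thing that needs explaining.
assert W_ordered == 1
assert W_mixed > 10**28
```

Statistical reasoning explains why order decays toward disorder; it says nothing about why there was so much order to begin with.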

One suggestion that has been made is discussed by Feynman in his Lectures on Physics: maybe we just happen to live in a more ordered fluctuation in a much larger unordered region. But if that were the case, you would not expect to see the same amount of order in every direction equally, especially in such a large volume as our visible universe. In other words, it's much more likely that you would have a fluctuation that resulted in a single solar system than a galaxy, let alone 80 billion galaxies. The "random fluctuation in a larger volume" argument is not popular because of this.

It's sobering to realize that as we watch the Universe unfold, everything we see is the unwinding of the "spring" that was wound up at some point in the past. To paraphrase Feynman, even a falling drop of water cannot be completely understood until the mystery of the beginnings of the universe is reduced from speculation to scientific understanding. We're still a long way from that.

---

Update: I recently watched an interesting lecture that Roger Penrose gave at Princeton in 2003 which talks about this issue. He gives an estimate of the volume of the phase space for our observable universe of 10^10^123. One over that enormous number is an estimate of the likelihood of the special conditions that would have to be present at the Big Bang in order to give rise to a universe with the amount of order we currently observe. That number (10^10^123) is so huge that it overwhelms the anthropic principle: as I mentioned above, you don't need anything near that much order to get observers like us. There must be something else going on, but so far, no one has come up with a good explanation for all that order.
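To get a feel for how large 10^10^123 is: just writing it out in decimal would take about 10^123 digits, vastly more digits than there are particles (roughly 10^80) in the observable universe. A quick sanity check (the particle count is a standard rough estimate, not Penrose's figure):

```python
# The number 10**(10**123) cannot be computed directly, but its digit
# count can: it has 10**123 + 1 decimal digits.
digits = 10**123 + 1
particles = 10**80  # rough standard estimate for the observable universe

# Even writing the number down is impossible: one digit per particle
# would fall short by a factor of roughly 10**43.
assert digits > particles * 10**42
```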

---

Further update: Sean Carroll gives a fascinating interview on edge.org which covers many of these same issues. It's well worth watching.

Sunday, March 1, 2009

Next stop: Austin

This is just a quick update to let people know that I've decided to continue writing for a while rather than return to work. Yes, given the economic situation it's not an ideal time, but I want to keep writing for now even though I'm burning through my retirement money at a much faster rate than I had planned. I'm making good progress on my goals (both for writing and health), but I'm not yet where I want to be so I have decided to keep working on both of those goals for now. But where to live?

Alaska has been great, but it's too expensive, especially during the tourist season, so after investigating a lot of possibilities, I've decided to spend the next six months or so in Austin, Texas. There's no single compelling reason why I chose Austin, but I think it's going to work out well for my purposes. Tomorrow I'm flying down to visit my sister in Wasilla, then Tuesday I'm headed to Austin to check things out and make some necessary arrangements.

In my capacity as a humble inquirer, I've been thinking a lot recently about computationalism, the notion that the mind is fundamentally a simulatable computational process. I used to be fairly confident that this view is correct, but the more I've thought about it the less sure I am. This whole topic is one that I am deeply intrigued by, and it certainly triggers feelings of humility in the face of the daunting complexity of the brain, and the many subtleties and conundrums in the philosophy of mind. For a useful and balanced introduction to many of the important thinkers in this area, I recommend the Conscious Entities blog, run by Peter Hankins. There is a lot of good stuff there.

I have come up with some interesting, and at least partially novel (as far as I know) thought experiments in this area that I want to post about sometime when the muse strikes me, but for now, I'll just relate a story, draw a comparison, and ask some questions to give a flavor of my thinking.

Many years ago, I came home from work one day and discovered that our cat had caught a small bird, not yet fully fledged, and was playing with it. The bird was bleeding and badly injured; clearly there was nothing I could do for it but to put it out of its misery. I shooed the cat away, took a heavy metal pole that was nearby, and put it on the poor little creature's head. It had been cheeping pitifully, but when it felt the pole it cheeped frantically; then I pushed down, sharp and hard, and the cheeping stopped. I dug a little grave for the bird and buried it. There is no doubt in my mind that the bird was experiencing something akin to fear and pain, and it still tugs a little at my heartstrings thinking about this incident now, more than thirty years later.

Compare that to a desktop computer with a camera attached to it, with the camera pointed at the outlet on the wall where the computer is plugged in. Running on the computer is a simple program that monitors the image coming in, and if it detects motion near the outlet it starts making painful cheeping sounds. The closer to the outlet, and the longer the foreign object stays near the outlet, the louder and more pathetic the sounds become. Question: should I feel any qualms about unplugging the computer? For the sake of argument, let's suppose I personally wrote the program, and I understand that there is a variable in the computer which I call pain which the program updates based on input from the camera. When pain increases, the sound increases.

Now, I may have a slight feeling of psychological resistance to pulling the plug, but still I would know full well that no matter what the value of pain is in my program, there is no entity that is actually feeling pain. This can be rendered even more obvious by renaming the variable pleasure, and by changing the sound from more and more frantic cheeping to ever more contented sighs. Clearly (to me, anyway), in neither case with my program is any real feeling happening.
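A minimal sketch of the sort of program I have in mind (the function and variable names are hypothetical, my own invention for illustration) makes the point vivid: renaming the variable from pain to pleasure changes nothing about what the machine actually computes.

```python
# Hypothetical sketch of the cheeping-computer program described above.
# The variable name is pure labeling; the computation is identical either way.

def update(level, motion_near_outlet, frames_nearby):
    """Raise the internal level the longer an object lingers near the outlet."""
    if motion_near_outlet:
        return min(100, level + 1 + frames_nearby // 10)
    return max(0, level - 1)  # decay when nothing is nearby

def volume(level):
    """Map the internal level to output loudness (0-100)."""
    return level  # louder cheeps (or more contented sighs) as the level rises

# Whether we call the state 'pain' or 'pleasure', the numbers are the same:
pain = pleasure = 0
for frame in range(50):
    pain = update(pain, motion_near_outlet=True, frames_nearby=frame)
    pleasure = update(pleasure, motion_near_outlet=True, frames_nearby=frame)
assert pain == pleasure  # the label carries no computational weight
```

Nothing in the program's behavior depends on which word I chose, which is exactly why I'm confident no real feeling is happening in either version.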

What is the fundamental difference between these two cases? Is it just a matter of complexity and organization? Is there any combination of symbol manipulations that results in a system that feels real pain? Or is it just symbols being manipulated? If everything about the bird is describable by the laws of physics, would it be possible, in principle, to run a computer program that simulated exactly what was going on in the bird's nervous system? If so, would it be morally acceptable to run such a simulation, if there was a chance that somehow real suffering was happening? Questions abound. I have much more to say on this and related topics.

Wednesday, December 10, 2008

Reflections on life and death

Later today I’m getting on a plane to travel to Utah to attend a family funeral. It’s a difficult time for the whole family, as there have been two horribly tragic deaths in the family in the last few days. Not only have I been grieving for these losses, but it also brings back a lot of other feelings for me, especially about the deaths of my dad in 1976 and my mom in 2003.

My dad’s death was incredibly painful to me. We had always been very close, but only a few weeks before he died he and I had spent almost an entire month on a road trip together. When he died suddenly (and way too young) from a massive heart attack that February afternoon I was devastated. It was a long time before I felt okay again. I still miss him.

One of the hardest things I’ve ever done in my life was to give a tribute to my mom at her funeral. Mom’s death was not totally unexpected, as she had been fighting cancer for some time, but still it was sudden and something I was mentally unprepared for. I knew she was sick, but I expected her to pull through and live another twenty years, so when I heard the news I was again devastated. I had not planned to speak at Mom’s funeral service, but my sisters somehow managed to convince me that I ought to be the one to give a tribute from the children. How I was able to keep it together enough to give any kind of decent tribute I still don't know.

A real difficulty I faced was how to be true to my own “materialistic” views yet honor the religious feelings of Mom and the rest of the family, especially at a time when so much comfort comes from the hope and faith that religion can give. Everyone in the family of course knows we don’t share the same ideas about religion, but that has not (maybe surprisingly) been a real source of conflict. As my mom told me more than once, “I don’t know how it’s going to work, but you have a good heart and I know you are going to be all right in the end.” I hope you’re right, Monz, I hope you're right.

At her funeral service I told a few remembrances and stories about her, and talked about some recent experiences she and I had shared. I concluded by saying that I hoped we would do what it takes for us to all be together again, and I quoted from the end of C.S. Lewis’ story The Last Battle, challenging us all to go “higher up and further in.” We both loved that part at the end which speaks of “beginning Chapter One of the Great Story which no one on earth has read: which goes on forever: in which every chapter is better than the one before.”

I’m sure more than one person who heard my talk that day was intrigued by my call to do what it takes to be together again, because on the face of it, this sounds like something a religious believer would say. The explanation for this, as well as the scare quotes on “materialistic” in the paragraph above, may take a bit to explain, and some effort to understand, but I’ll give it a try.

One underlying fact is just the theme of this blog: we know we don’t know everything. Setting aside for a moment the implications that fact may have for belief in existing traditional religions, and staying strictly within a rationalistic, materialistic framework, given the uncertainty that the humble inquirer must acknowledge, who can say confidently what limits exist regarding what can be known or accomplished? For example, we know that everything that happens leaves its ripples on the universe, so is information ever truly lost? The general consensus among physicists is no, it is not. Then is it really possible to rule out, in principle, the prospect of bringing people back to life, keeping in mind we might have googolplexes of years, and conceivably googolplexes of universes to work with? As a matter of fact, under the Multiverse view that many mainstream physicists hold, not only is this possible, it happens with probability one. This and many related ideas are explored in my friend Michael Perry’s book Forever For All.

If/when this is accomplished, it will be by the actions of beings with Enlightened Self Interest who lovingly want to bring about the maximum goodness that can be attained. Whatever your theory of value, if anything has value human life does, so that will certainly be something that must be considered in any plan for maximum happiness. It seems to me that living an orderly, compassionate life right now is a remarkably helpful thing in advancing to a time where we can fix up everything that is currently broken. So that’s the explanation for my “doing what it takes to be together again” remark. Reflecting on it right now, it doesn’t take much of a stretch to fit that belief into (an appropriately expanded) religious framework. Which is why I put “materialistic” in quotes.

There is a lot of uncertainty about these ideas, but as I strongly believe and have mentioned earlier in this blog, that's okay, uncertainty is good. I like this quote, from Ursula K. Le Guin: “The only thing that makes life possible is permanent, intolerable uncertainty: not knowing what comes next.” And I like these lines from one of Paul Simon’s recent songs:

My children are laughing, not a whisper of care
My love is brushing her long chestnut hair
I don't believe a heart can be filled to the brim
Then vanish like mist as though life were a whim

Maybe and maybe and maybe some more
Maybe’s the exit that I’m looking for

Acts of kindness
Like rain in a drought
Release the spirit with a whoop and a shout

Tuesday, November 18, 2008

Convergence 08 conference

I just got back from the Convergence 08 conference. I was conflicted about whether to go because I have been trying to minimize distractions while writing, but the topics and people at that conference were just so close a match to the themes of my book that I really had to go, and I'm glad I did.

The format was an unconference, which allows a lot of flexibility in what attendees get out of the event. In my case, I was able to spend several hours talking to really smart people about nanotech, consciousness, AI, cognitive enhancement, and of course, writing. I ran a discussion group on the topic of "Convergence and Near-term Speculative Fiction" which was very useful and interesting, not least because in that group I met a professor of English literature specializing in speculative fiction who also teaches composition. He agreed to give me some feedback on my writing, which I'm (somewhat nervously) excited to get.

There was a session called something like "Limits of Knowledge" which conflicted with something else I wanted to attend, but sounded interesting, particularly since I have recently spent some time trying to understand a paper by David Wolpert on just this topic. Wolpert, a researcher in physics and computer science at NASA, purports to show that (a) it is impossible to have an "inference machine" (basically any physical device, with or without human input) which is capable of predicting everything that can happen in its universe, and (b) at most one inference machine can fully predict the behavior of all other inference machines in its universe. These results hold independent of the laws of physics of a given universe, so they are presumably valid across the (capital "M") Multiverse of all possible universes.

I can't say I fully understand the arguments in detail, but Wolpert is using a variation on Cantor's diagonalization method, similar to Turing's proof that it is impossible to come up with a guaranteed way of determining whether an arbitrary program+input will halt. If Wolpert's results hold, it means that not only do we not know everything, it is impossible in principle to know everything. Of course, the proof does not say precisely what we can know; it just constructs an example of something that no inference machine can know. (Along the way Wolpert also gives a formal definition of what it means to "know" something, which is also interesting, but one topic at a time!)
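The diagonalization move that Wolpert and Turing both rely on can be seen in miniature. Here's my own toy version (it is not Wolpert's formalism): given any finite list of binary sequences, flipping the diagonal produces a sequence guaranteed to be missing from the list.

```python
# Toy version of Cantor's diagonal argument: for any list of n binary
# sequences (each of length n), the "flipped diagonal" differs from the
# i-th sequence at position i, so it cannot appear anywhere in the list.

def diagonal_escape(sequences):
    """Return a sequence that differs from every listed one."""
    return [1 - seq[i] for i, seq in enumerate(sequences)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
escaped = diagonal_escape(listed)  # flips positions (0,0), (1,1), (2,2), (3,3)

assert escaped == [1, 0, 1, 0]
assert escaped not in listed
for i, seq in enumerate(listed):
    assert escaped[i] != seq[i]  # differs from sequence i at position i
```

No matter how the list is chosen, the escape always works, which is the same structural trick behind "no inference machine can predict everything in its universe."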

I've been wondering what the implications of this might be for simulated universes in which the entity running the simulation introduces knowledge into the simulation. Would this possibility get around Wolpert's results? I suspect not, given that even with the addition of an oracle, Turing's halting paradox still exists: no oracle machine is capable of solving its own halting problem. The parallel to Wolpert is that even though one might think that any question about the simulated universe could be answered by those running the simulation, the true "universe" (which includes the simulator itself) cannot know everything about itself, which raises the question of whether it is possible to know everything about the simulation.

In any case, this all just reinforces my agreement with J.B.S. Haldane: "I have no doubt that in reality the future will be vastly more surprising than anything I can imagine. Now my own suspicion is that the Universe is not only stranger than we suppose, but stranger than we can suppose."

Thursday, October 30, 2008

I am a bad blogger

I've been so distracted with the financial meltdown, the presidential election, and working on my book that I have neglected to do any updates here. But just in case anyone was worried that I had accidentally eaten yellow snow or fallen prey to some other peril of the North, I thought I'd better post something here.

I've been making slow but steady progress on my book. I enjoy working on it a lot, once I get going, but as I said, it's been hard to keep my focus. I still hope to finish most of it by next spring, especially since due to the financial crisis I'll almost certainly have to go back to work then. Sigh.

Just to mention something in line with my "theme" of humble inquiry, I recently learned a new word: agnotology. It refers to the study of ignorance and doubt that are deliberately manufactured in order to obscure an issue or justify a course of action. A prime example can be seen in the actions of the tobacco companies, which for years attempted to create doubts about what was, in fact, a scientific consensus on the dangers of smoking. Other examples include dangers of pollution, climate change, radiometric dating, and ... the possibility of a black hole headed for the solar system! Ha ha!