I have endless debates about things that appear to be purely philosophical but that, to me, are now practical. Sometimes this is a reach. Recently my great friend (who is welcome to identify himself, though it would be impolite for me to do so) was discussing the issue of consciousness via Facebook. I have been prodding this friend to start a blog, and he, likely half jokingly, said, “When I figure out consciousness I will start it.” When I pointed out that I might have to wait a while to read his blog, he said that figuring this out was not a matter of science, in that it likely didn’t involve more experimentation. Instead he felt a Eureka moment might occur, an ancient question would thereby be resolved, and his blog launched. Until very recently I would have said that he was 100% wrong. Now I think there is that possibility, though my thoughts on why are likely different from his.
It is possible that consciousness is not something to be found, but rather something to be conceptualized. An excellent book by Thomas Metzinger called “The Ego Tunnel” deals with this directly, giving evidence that consciousness itself may be an illusion. I am sympathetic to this theory, as it fits very nicely with my well documented (and likely boring to my friends and family) strong belief that free will is an illusion. If you are interested in this, look here and here for my views, and here for an opposing view by Massimo Pigliucci. The idea of a self with no free will is hard enough to digest. It means that we are complex parts of nature, but no different from anything else in nature itself. In theory we could be predicted, given enough information. There are only two possibilities: either everything is determined, which is how the large stuff behaves, or it involves quantum fluctuations, which are random, which is the small stuff. Either way there is no free will.
I have taken this debate past the philosophical, to help engineers, and myself, who are interested in artificial intelligence learn how to create it. That is, to allow a system to have options but only one right answer: it weighs alternatives, yet given the same state and inputs only one outcome is ever possible (a toy sketch of what I mean follows below). A machine that feels is more complicated, of course. How does a machine feel that it is choosing, the way we feel that we are choosing? I don’t have an answer, but I assume it can be accomplished the way I think it is accomplished for people. The perception of free will must somehow be tied to an evolutionary need, at some point, to feel free. Perhaps it is why we care for our children, or why we create technologies. I have no idea, yet any of those things can be programmed.
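For the engineers in the audience, here is a toy sketch of the idea, not anyone's actual AI. The function names and the scoring rule are invented purely for illustration; the only point is that a system can entertain several options while the outcome, given its state, is already fixed.

```python
# A minimal, purely illustrative sketch (the names and scoring are hypothetical):
# the agent "considers" several options, but given the same state and inputs
# it can only ever pick one of them -- options, but only one right answer.

def score(option, state):
    # Toy scoring rule: prefer options closer to the state's "goal".
    return -abs(option - state["goal"])

def choose(options, state):
    """Deterministically pick the option with the highest score for this state."""
    # Score every option; with a fixed state, the scores (and thus the winner)
    # never change from run to run.
    scored = [(score(option, state), option) for option in options]
    return max(scored)[1]

if __name__ == "__main__":
    state = {"goal": 7}
    options = [3, 5, 7, 11]
    # It looks like a choice among four options, but the outcome is fixed: 7.
    print(choose(options, state))
```

The system "deliberates" in the sense that it evaluates every option, yet there was never any possibility of a different answer, which is roughly the picture of our own choosing that the no-free-will view suggests.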
So what if the same is true of consciousness itself, and why should we not think that it is? This is an open question for myself, and more importantly for people who know something about this. It is also my shot at a simplistic Eureka to beat out my friend.
Ha! Eureka, I get consciousness. It doesn’t exist! Or maybe I am wrong...