
As I look at these fictional characters trying to cope with the ever-present annoyance of emotion, I wonder whether the quality the Vulcan finds in this exercise is merely mechanistic. What is it about creating unique forms based on a particular moment that makes a Vulcan easier for us Trekkies to relate to? The one human characteristic I always think of as crucial to our ability both to improve ourselves and to thrive as a species is empathy. Wikipedia defines empathy as “the capacity to recognize emotions that are being experienced by another sentient or fictional being.” At first glance this definition does not seem to apply to a solitary act, but I would argue that in one very important way it does. Combining rationality with creativity is an applied discipline. When a meditation produces variations that depend on the personal interactions of the day, those interactions must play an important role. Therefore, to complete the task effectively, the Vulcan must possess empathy. This may be a stretch, but it is allowed to be; we are talking about characters in a TV space drama. The analogies it suggests, however, I think are transferable to us actual humans, not just to pointy-eared aliens.
This apparent contradiction between the logical, mechanistic mode and the creative, empathetic one is worth considering not only when assembling a starship crew, but also in a technology company, or even in an artificial intelligence. I was attending a salon at which the social scientist and bestselling author Jonathan Haidt was a guest speaker. The host was a libertarian organization, the Reason Foundation, whose name, Haidt seemed to suggest (based on his in-depth research), explains libertarians fairly well. It seems that empathy and rational decision-making ability are inversely proportional. Libertarians seem to be the Vulcans of the political spectrum (my words, not his). This is in some ways similar to very high-functioning individuals with autism, such as many people with Asperger’s syndrome, who populate some of the best computer science departments and Silicon Valley development labs. Haidt implies that libertarian logical rigor tends to come paired with a lack of empathy. While he certainly has a lot of research behind him, there may very well be something more fundamental about the nature of humanity that he is missing.
Though I hate to jump around among fictional characters, real people, and robots, I am going to do it anyway. Many people, who I promise you are not all nuts, are considering the programming parameters for creating human-like artificial intelligence. I and many others think that computer technology is accelerating at such a rate that we will soon face both practical and ethical questions about what, and who, these future highly intelligent machines should be. The former Singularity Institute, now known as MIRI, has been working on this very issue with full-time researchers and yearly conferences.
Perhaps there is something to the dichotomy of empathy and reason that we should consider when allowing our machines to become sentient. You will notice I say “allowing,” as I have decided to take a rather libertarian approach to artificial intelligence. As an analogy, humans have programming encoded in our DNA. A good generalized AI algorithm has an equivalent, written not in ATCG base pairs on a biopolymer but in binary logic, composed in C++ and eventually etched onto silicon. We already know how to create learning algorithms; IBM’s Watson, most Google products, and thousands of other systems use them. These are machines that start as newborns with pre-programmed tools for learning and get smarter as they age. In some ways current AI does better than humans, and in some ways humans do better than computers.
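To make the analogy concrete, here is a minimal sketch (my own toy, not how Watson or any Google product actually works) of a learner whose fixed update rule plays the role of the "genome," while everything it knows is acquired from experience:

```python
# Toy online learner: the update rule is the "pre-programmed DNA";
# the weight it learns is everything acquired after "birth."

def make_learner(lr=0.1):
    """Return (predict, learn) for a one-weight model: y ≈ w * x."""
    state = {"w": 0.0}  # born knowing nothing

    def predict(x):
        return state["w"] * x

    def learn(x, y):
        # The fixed, innate rule: nudge the weight to reduce error.
        error = predict(x) - y
        state["w"] -= lr * error * x

    return predict, learn

predict, learn = make_learner()

# True relationship: y = 2x. The learner "ages" as it sees examples.
for _ in range(100):
    for x in (1.0, 2.0, 3.0):
        learn(x, 2.0 * x)

# After training, predict(4.0) is very close to 8.0.
```

The point of the sketch is only that the learning *rule* is fixed in advance while the learned behavior is not, which is the sense in which such machines start as newborns and get smarter.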
This is old stuff (well, Watson was two years ago; I guess that is old in modern tech terms), and easy to understand. What we don’t know about human genomics, and equally don’t know how to do in computer science, is how to find, or program, the fundamental structure that lets pure logical thinking and empathy co-exist. The reason they should co-exist in AI may seem obvious: we would want machines that have inherently superior abilities, such as perfect memories, but that also have the heart of an empath, or at the very least the heart of a libertarian Vulcan.
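One very simple way to imagine logic and empathy co-existing in a machine (this is entirely my own illustrative framing, not anything MIRI has proposed) is as two terms in a single decision rule: a purely logical utility for the agent itself, plus a weighted term for the utility of others:

```python
# Hypothetical sketch: empathy as a weight on others' welfare
# inside an otherwise purely logical utility-maximizing choice.

def choose(actions, empathy_weight=0.5):
    """Pick the action maximizing self-utility plus weighted others-utility."""
    def score(action):
        return action["self_utility"] + empathy_weight * action["others_utility"]
    return max(actions, key=score)

actions = [
    {"name": "hoard", "self_utility": 10, "others_utility": -8},
    {"name": "share", "self_utility": 6,  "others_utility": 5},
]

# With empathy_weight=0.5: hoard scores 6.0, share scores 8.5, so "share" wins.
# With empathy_weight=0.0 (a pure Vulcan), "hoard" wins instead.
```

A single scalar weight is obviously a caricature of empathy, but it shows why the two need not be mutually exclusive in a machine: logic supplies the maximization, and empathy is just part of what gets maximized.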