The talk I gave yesterday at the MPA Nanotech conference had plenty of the usual flaws to criticize, and then some. Mainly I was speaking about regenerative medicine, something I didn't even know existed at this time last year. I admitted this the second I started talking, as a way to keep the attendees from throwing it back at me later. I talked about
what I know with some authority. When I spoke about newer topics, I became like a child discovering the wonders of the sea on his first trip to the beach. In this case the wonders were new body parts made of silly putty, which I must admit sounds even more childish than I mean it to. When the questions came around, I got one that was completely different, and that to me must have come out of a philosophical wing of the nanotech community I didn't know existed. The question concerned some high-resolution images I had taken of the surface of a pig's stomach, a material called ECM (extracellular matrix), and of a piece of rubber (this is the silly putty). These
are two kinds of scaffolds that can be used to regenerate organs. The man, who I later learned was not a philosopher at all but rather a materials scientist at a Polish technical university, asked me, "How do you know those images are real?"
I was certain that this was a language issue, so I spoke slowly to him in the condescending way that I try to avoid in complicated technical talks with people who come from different linguistic backgrounds. I told him that I used a highly repeatable microscope and special image recognition software that my company actually made, and that I could assure him the images were indeed real. That, however, wasn't exactly the question he was asking. He wanted to know how I could trust imaging to evaluate nanoporosity, which was what I was showing. This was far deeper,
and I became defensive, telling him with even more superiority than before that light had long been established as a physical reality. "But they are just grey pixel values. You should use other methods, such as absorption of nanoparticles, to verify your findings," he told me. The argument went on for some time, with the rest of the room joining the debate and a surprising number, I thought, siding with the Polish scientist. I had truly entered a debate about which type of physics was more real. Was a photon less real than a carbon particle? Were pixels more removed from direct observation than weight measurements taken after nanoparticle absorption? I didn't agree with his argument, but I appreciated it all the same.
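To make the disagreement concrete: estimating porosity from an image really does come down to deciding which grey pixel values count as pore and which count as material. The sketch below is only an illustration of that general idea, not the software I actually used; the filename is made up, the Otsu threshold is just one common choice, and the assumption that darker pixels are pores is exactly the kind of judgment the Polish scientist was questioning.

```python
# Illustrative sketch only: estimating areal porosity from "just grey pixel values".
# Hypothetical filename; assumes darker regions are pores, which is itself a choice.
import numpy as np
from skimage import io, filters

image = io.imread("scaffold_micrograph.png", as_gray=True)  # load micrograph as grayscale
threshold = filters.threshold_otsu(image)  # automatic split between dark and bright pixels
pores = image < threshold                  # label darker pixels as pore space
porosity = pores.mean()                    # fraction of the imaged area classified as pore
print(f"Estimated areal porosity: {porosity:.1%}")
```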
Right after I sat down from the debate, I started thinking about something my wife Marine had shown me. She is a speech and language pathologist, and she once showed me an example of something called the McGurk effect (see video below). It is an illusion of sorts. A man repeats a sound while we watch him. We are then shown the same audio paired with a mouth movement different from the one that produces the sound. We hear the sound change, even though the audio has not changed at all. This happens because our visual sense overpowers our auditory sense. We can verify this by closing our eyes, at which point the sounds are clearly perceived as identical. Marine works with children with autism and told me that in autism the McGurk effect is not always present, which is a fascinating window into how differently people's neural structures can process the same input.
So what does the McGurk effect have to do with my experience at the conference? Maybe nothing. Maybe it is as simple as Polish materials scientists viewing the world differently than American physicists. That, however, is only part of it. It is true that different disciplines in science see things differently; we are trained differently, and that makes us skeptical of other approaches. Looked at this way, both he and I were objectively wrong, and we should have realized that it was just a matter of perception. From another perspective, however, it could be like the other aspect of the McGurk effect: not the difference between autistic and non-autistic observation, but the accuracy of the autistic observation. The McGurk effect likely points to an important survival skill; perhaps vision is the more urgent sense to rely on. What the McGurk effect shows, however, is that the non-autistic perception is not as accurate as the autistic one. Both views of the world carry a subjective truth, but only the autistic view matches the objective truth.
So what about using microscopes with sensors and image processing rather than weighing nanoparticles after absorption? Is one more objectively real than the other? Perhaps the Polish scientist is right in one way. We program our computers and design our sensors with our human minds, the same minds that survive well precisely because of the kind of misinformation the McGurk effect reveals. But science is more of an autistic-style endeavor in some respects: we avoid human variability whenever possible in order to see what is really going on. The answer still remains unclear to me, however. When working in domains such as the nanoscale, we are forced to create information using algorithms. We make an artificial vision of the invisible, using computers to model what we expect is happening at a scale we can actually see with our eyes. When creating these models, we increasingly use a form of AI that tries to learn as we do, accounting for variability and disregarding what is not important. In essence we are programming a McGurk-style response into our machines, and those machines then tell us about a reality that resembles the subjective one we experience. We don't often admit to ourselves that this is what we are doing, but it really is. Maybe this was the Polish scientist's point, but if it was, he is also missing the McGurk-style effect in his own measurements, which are also designed by humans and involve several steps of extrapolation where we are likely blinded by our own errors, just as we are when viewing the McGurk illusion.
All of this leads to an important point, though. Science must aim to observe as independently as possible, or, when it cannot, to acknowledge what it is doing. Biomimicry, for example, is a technological human activity that embraces and acknowledges perception as the basis for creating AI. The problem is that we remain stuck in a loop of thought on this topic: our ever-compensating brains decide what to do, even when we allow our machines to learn.