Before I refute these ideas, I want to admit that to a large extent I agree with the premise as it relates to our current state of knowledge. This is especially true if you are comparing pictures, as was done on Saturday. The nebulae resembled the neurons so much that if all of the information contained in those images were all we knew, or were ever to know, we would basically just count, and the brain would win this slapdown as the more complicated object. I say complicated as a way to distinguish that word from complexity, which I think of differently (despite Google telling me that they are indeed synonyms). This distinction is why the brain's supposed superiority is scientifically arguable. I actually wish that the images did tell the whole story, since I make instruments for taking images. I want those images to tell a story, and they do. The story they tell is observational, however, and involves imagination. We also make software for interpretation and classification so that we are not fooled by our eyes. Those tools are perhaps the most important part, but without a way to scale deduction from these observations we are left with the simplistic idea that everything that looks complicated is inherently more complex. I have reduced this argument before, and this is where it gets tricky (or complicated and complex, I guess).
Complexity theory is of course a real thing, and involves modeling by some of the most creative computer scientists and statisticians I know. The Santa Fe Institute is dedicated to this study, and the sophistication of the complexity models used varies from those we may have an intuitive sense about to those that are completely non-intuitive. No matter what type of complexity is being addressed (agent-based modeling, toy models, etc.), there are two basic distinctions that are often made up front. It goes back to the late 1940s and the work of the mathematician Warren Weaver, who described complexity as being either organized or disorganized. I am not in any way an expert on either, but in the age of big data we can think of disorganized complexity as something that traditionally involved statistical methods for understanding bulk behavior. The farther away we are from direct reduction, the more we rely on the randomness of disorganized systems. In other words, we need a lot of data points to make sense of them. Mathematics has made us better at such systems through the adoption of chaos theory, another enormously large discipline that attempts to break down the complexity of disorganized systems into predictable components. Many people are familiar with chaos theory in a colloquial sense from the famous butterfly effect, where a single flap of a butterfly's wings changes conditions everywhere. A chaotic and complex system is dynamic, and the farther we are from detailed observation, the more we rely on complex models. An organized system would appear easier to cope with, but it assumes direct correlations. If randomness is eliminated and a new entity is observed, it is said to emerge. Emergence and quantum energy are two of the most overused and misunderstood concepts in physics. Complexity theorists use emergence in a way that allows for a new, agglomerated data point, which can then be used in either an organized or a disorganized complexity model.
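For readers who would rather see that sensitivity than take it on faith, here is the textbook logistic map, a standard chaotic toy system I am adding purely for illustration (it has nothing to do with Saturday's images): two starting points that differ by one part in a billion end up completely unrelated after a few dozen steps.

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a textbook chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion.
a = logistic_map(0.200000000)
b = logistic_map(0.200000001)

# The trajectories track each other at first and then become unrelated:
# a tiny change in the starting conditions ruins any long-range prediction.
for step in (0, 10, 20, 30, 40, 50):
    print("step %2d:  %.6f  vs  %.6f" % (step, a[step], b[step]))
```

That is all the butterfly effect means in practice: the system is perfectly deterministic, yet without impossibly precise observation the only honest tools left are statistical ones.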
Having said all of that, I will propose a narrow definition of complexity, and it is not by itself my complete argument against the Braincentric-universe theory that I am describing. The brief account of how others describe complexity is here only to suggest that perhaps complexity really resides in the combination of the mathematical necessities of disorganized systems and the observable, predictable aspects of organized ones. It also means that, if we are using imaging, anything recorded by a sensor carries basically equivalent amounts of data (yes, sensors can be different, but astrophysicists and neuroscientists use similar sensors and analyze bit values either way). So, while I won't discard the work of the smarter, dedicated complexity theorists, I will use my own definitions, which fall somewhere between a disorganized and an organized system, as I think most do. Rather than talk about the brain specifically, though, I will propose some basic axioms, which I know borders on pretentious, but which I think is necessary to understand my point.
a) The difficulty of predicting the patterns of a given system is equivalent to its complexity. Therefore:
b) A single organized object that is small relative to the larger disorganized objects within the same space is more complex, because:
c) its potential for location within that larger space is far greater.
d) Any aggregation of smaller objects results in reduced complexity (a toy sketch of these axioms follows below).
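To make the axioms slightly less abstract, here is a toy sketch. It is entirely my own construction with made-up numbers, nothing more than the axioms restated in code: the score is the logarithm of how many distinct positions an object could occupy in a space, so a bigger space means more potential locations and harder prediction, while aggregation shrinks the count.

```python
import math

def location_complexity_log10(space_volume, object_volume):
    """Toy score for the axioms: the (log of the) number of distinct places a
    single object could sit inside a space.  More possible positions means
    harder prediction, and so (by axiom a) more complexity."""
    return math.log10(space_volume / object_volume)

# Axioms (b) and (c): the same small object is "more complex" in a larger space,
# because its potential for location is greater.
print(location_complexity_log10(space_volume=1e6,  object_volume=1.0))   # 6.0
print(location_complexity_log10(space_volume=1e12, object_volume=1.0))   # 12.0

# Axiom (d): aggregating ten unit objects into a single ten-unit lump shrinks
# the number of places it can be, so the score drops.
print(location_complexity_log10(space_volume=1e6, object_volume=10.0))   # 5.0
```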
This may seem both very easy to understand and very wrong at the same time, but I want to point out just two examples, neither of which jumps straight to the brain and the universe, for which I am sorry. Instead they come from experimental physics, which is at least closer to my field. They do take into account the Murch et al. idea of complexity as being about large numbers and energy.
a) One bit of science I have worked on is putting nanoparticles in polymers. This seems like a strange thing to do, but there are great practical reasons for such an experiment. In the case of what my colleagues and I have done, it was to look at how many of these nanoparticles you could put into the polymer matrix in order to achieve certain "complex" behaviors. Coincidentally (or not), we were looking for ways to generate electricity and to improve material strength. The result is that if you put a small amount of the nanoparticle into a polymer, it is very hard to deal with. There are laws of thermodynamics, polymer entanglements, weak polymer-filler bonds, etc., which make the location and the control of these particles very difficult to predict. So I call this complex. What we were after was something complex, but this level of complexity put the "mind" of the particle out of our control, so we needed to add more. As a bit of a side note, we did not think of this; we were just the first to experiment with it. The theory was that there was a near-perfect amount of complexity that we could achieve. This is called percolation, and refers to a theoretical loading level at which flow between particles can occur. In this case the flow of electricity actually takes place without direct contact, but through quantum tunneling. So the goal was to fill with as few of the particles as possible while still allowing percolation to occur. This gives a good picture of complexity that rises to a level (the percolation point) beyond which the system becomes less complex with additional loading. So this is again an example of increased simplicity with greater numbers and more energy.
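For readers who want to see the threshold idea in the abstract, here is a minimal sketch of two-dimensional site percolation. It is a generic lattice toy with made-up parameters, not a model of our nanocomposites (which percolate at different loadings and conduct by tunneling rather than direct contact): fill a grid at random and check whether a path of filled sites spans it.

```python
import random
from collections import deque

def percolates(grid):
    """True if filled sites connect the top row to the bottom row
    (4-neighbour connectivity on a square lattice)."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    queue = deque((0, c) for c in range(n) if grid[0][c])
    for _, c in queue:
        seen[0][c] = True
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                queue.append((nr, nc))
    return False

def percolation_probability(loading, n=40, trials=100):
    """Estimate the chance that an n x n lattice, randomly filled at the given
    loading fraction, has a connected path of filled sites from top to bottom."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < loading for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials

if __name__ == "__main__":
    # Near the 2-D site-percolation threshold (~0.593) the behaviour flips:
    # below it a spanning path almost never forms, above it it almost always does.
    for loading in (0.40, 0.55, 0.59, 0.65, 0.80):
        print("loading %.2f -> spanning probability %.2f"
              % (loading, percolation_probability(loading)))
```

The flip from "almost never spans" to "almost always spans" around a loading of roughly 0.59 is the lattice analogue of the percolation point described above: below it the particle locations are effectively unpredictable, and above it adding more filler only makes the system more ordinary.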
b) My second example of complexity is something I am much less knowledgeable about, but still more knowledgeable about than neuroscience (which I know is supposed to be the point of my whole essay, but still…). This week I had two Walter Murch experiences, the first one being early in the week when I saw the brilliant, thought-provoking, and personally moving film "Particle Fever" at the New York Film Festival. Walter edited this film, and since it contained footage taken over seven years, his contribution to it must have been profound. "Particle Fever" is an almost existential journey into the lives of a few scientists involved with the LHC discovery of the Higgs boson. Readers here have certainly followed this highly publicized discovery, and the multibillion-dollar project that produced it. It was the largest experiment in history, with a primary aim of observing the presence of something very small and very important. The Higgs is a key discovery in completing the Standard Model of particle physics, and even now, with it found, the implications remain truly complex. This is to say that with all of the resources of the world physics community, and a very large device (the LHC itself), the particle is more complex than the instrumentation, or even the people, that it took to produce it.
So after all of this my point is actually incredibly simple (not complex at all). The human brain is tightly packed. There are a lot of relatively small things, billions of them, as we are reminded, and they are bounded within a very small volume. This essentially makes the brain predictable, as we can theoretically map all of these points. The brain is not infinite in any direction. We can also use a very old way of placing each brain into a disorganized system, by treating the organization of each brain as a single emergent data point. When we do this and compare the small population of Earth to the galaxies alone, which move through a potentially infinite space, it is impossible to consider a single brain to be more complex (a rough back-of-envelope version of this comparison follows below). Again we find ourselves small and simple.
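For what it is worth, here is that back-of-envelope comparison, using only commonly cited order-of-magnitude figures; the numbers are my own additions for illustration, not figures from the talk.

```python
import math

# Rough, commonly cited order-of-magnitude figures, used only to illustrate
# axiom (c): complexity tracks the potential for location within a space.
NEURONS_PER_BRAIN = 8.6e10   # ~86 billion neurons
BRAIN_VOLUME_M3   = 1.2e-3   # ~1.2 liters
GALAXIES          = 2e11     # hundreds of billions in the observable universe
UNIVERSE_RADIUS_M = 4.4e26   # observable universe, ~93 billion light-years across

def room_per_object_log10(volume_m3, count):
    """Order of magnitude of the space available per object: a crude stand-in
    for how unpredictable each object's location can be."""
    return math.log10(volume_m3 / count)

universe_volume_m3 = (4.0 / 3.0) * math.pi * UNIVERSE_RADIUS_M ** 3

print("objects: %.1e neurons in one brain vs %.1e galaxies" % (NEURONS_PER_BRAIN, GALAXIES))
print("room per neuron (log10 m^3): %5.1f" % room_per_object_log10(BRAIN_VOLUME_M3, NEURONS_PER_BRAIN))
print("room per galaxy (log10 m^3): %5.1f" % room_per_object_log10(universe_volume_m3, GALAXIES))
```

By this crude measure the galaxies alone already outnumber the neurons in a single brain, and each galaxy has something like eighty more orders of magnitude of room in which to be somewhere else, which is all the paragraph above is really claiming.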