Posted on August 17, 2007
Reading all the recent brouhaha over the “are we living in a simulation” argument, I’m compelled to write this post.
I seriously wish the people who talk authoritatively about “simulation”, or “simulating life”, had actually tried it themselves. You know, modelled the behaviors, the interactions, the environment, and actually written some code.
Realise this: We cannot yet even accurately simulate a teaspoon of water!
Why? We are talking about on the order of 10^23 molecules of H2O, that’s why! *Even* if every molecule’s state could be accurately saved in one byte of memory, it would require hundreds of millions of petabytes. Even if we somehow managed that, the processing power necessary to move to the next state would be enormous. It would take minutes to perform the same movements of all the molecules that reality does in an instant.
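As a sanity check, here’s the back-of-the-envelope arithmetic in runnable form. The teaspoon volume and the one-byte-per-molecule figure are rough assumptions, not measured facts:

```python
# Back-of-the-envelope check: how much memory would one byte per
# molecule of a teaspoon of water need? All figures are rough.

AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.0    # grams per mole
TEASPOON_GRAMS = 5.0       # a teaspoon is roughly 5 mL, i.e. about 5 g

molecules = TEASPOON_GRAMS / MOLAR_MASS_WATER * AVOGADRO
petabytes = molecules / 1e15   # one byte per molecule, 1 PB = 1e15 bytes

print(f"molecules: {molecules:.2e}")
print(f"storage:   {petabytes:,.0f} petabytes")
```

That works out to roughly 1.7 × 10^23 molecules and over a hundred million petabytes, before we even think about the processing needed to advance the state.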
Of course, this is assuming that one byte per molecule would be enough. I think, when we actually do reach that capability in a few years/decades, we’ll find that it’s not enough. We’ll find that the simulation still doesn’t behave accurately like water when, for example, the temperature is dropped. “It doesn’t freeze into ice at a simulated temperature of 32°F.” Then we’ll find that to achieve that, it is necessary to model the states of the individual electrons in each molecule, their orbitals and spins, etc. And of course, the problem will once again become as difficult as it is today.
And may I remind you, the proponents of the WALIAS theory argue that our entire population, all the species, the oceans, the planet, the *entire* universe is a simulation being run by some post-human species. Now, if one teaspoon of water is so much trouble, why do we expect to be able to “simulate” even one single human brain, which, uhm, is arguably more complicated?
Does this sound like pessimism? It will to some. I don’t think that way at all. Rather, I think the wrong kind of people have jumped onto the wrong type of bandwagon and are pushing a wrong kind of science.
If we are all indeed living in a simulation, then the only way for any of us to accurately re-simulate anything existing within our world, is to do it using the constructs used by those who are simulating us. We, the simulated, cannot use new constructs created by us to simulate anything in our world. We can only use new constructs created by us to simulate new things, using new rules in a new simulated world.
Therefore, in regular discussion, a simulation should *always* be interpreted as “a crude approximation”. We, or anybody else, *cannot* look at something built according to the rules and constructs of one world-system, and create an *exact* copy using the rules and constructs of another world-system. It is a physical impossibility.
But what if, through persistent advancement, we are able to correctly figure out our exact building blocks, our constructs, the lowest-level rules of our world? My own personal view is that we cannot, or for that matter, that no entity within a world-system can ever realize its own lowest-level building blocks, because doing so would violate the laws of its existence. This is where I’ll run afoul of lots of folks, but that is a different matter.
Now, even if we do allow that concession, suppose we do figure it out, and we use it to create a copy of some less advanced species known to us, let’s say chimps. Suppose we do succeed, and we have a living, breathing chimp in front of us. Then that chimp is *NOT* a simulation, it is the real thing! We will have as much control over its behavior as we do over a real chimp. And what’s more, it will be *existing* in our own world, our own universe, not within a walled-garden simulation. Therefore, if the post-humans have followed this scenario, then we exist in the same world as them, and therefore, we are not a simulation.
And, for all of us trying to “simulate life” at some time or other, I’d like to make this comment: the simulated “life form”, if implemented correctly, no matter how similar its behavior to some real-world life-form, is its own species. It is pointless to compare it with the existing real-world thing, because that is, in effect, like comparing apples and screen pixels. Yes, one day we will succeed in creating artificial life forms, and yes, they will have “consciousness”. But the moment they learn to communicate with us in a meaningful way, the word “simulation” will become taboo pretty quickly.
Posted on August 9, 2007
In a comment to my last post, Upgrade Zero One A asks:
I was wondering what your opinion was on John Koza’s work specifically (Invention Machine) and Genetic Programming in general.
Is there any hope that GP could help solve, or improve upon existing approaches in, any of the three areas mentioned?
1. Natural language understanding.
2. Vision, Image/Scene understanding.
3. Creating “consciousness”.
Since this excellent question really made me think beyond the scope of the original post, I decided to create a separate post for the answer, which follows.
This is really asking me to predict the future of mankind here… but I’ll give it my best shot.
(I’ve changed the order of the areas)
1. Creating consciousness: No, I don’t think this is an area needing more direct interest. When we have more intelligent algorithms for processing input and turning it into output, the arising consciousness will automatically seem more real. Just program in a way for it to “wail” whenever a specific combination of sensory inputs happens, and I guarantee you, it will *be* real. That’s when the “AI Rights” bill will be presented in the House. One of the clauses will be to deem the “Artificial” in Artificial Intelligence a politically incorrect/insensitive word.
2. Vision, Image/Scene understanding: Yes. Not so much for the “understanding” part, but very much so for the segmentation part. A perfect or even near-perfect image segmentation algorithm is yet to be found/invented/discovered, even though the task happens at the lowest level of processing in most beings. Separating a tiger or a zebra from the background of a forest is trivial even for tiny animals, but it stumps the best of today’s generic algorithms (they might work ONLY if they have been specially constructed to solve this specific problem, or if they are run in a “supervised” manner). The final breakthrough might very well come from a GP approach.
3. Natural Language Understanding: I highly doubt it. Look at it from “real” evolution’s point of view. It took a gazillion species living concurrently, some new ones emerging, some going extinct, over billions of years, for ultimately *one* of them to be able to process natural language. The level of *concurrency* and *interaction* between the species is enormous. Furthermore, “real” evolution is able to transcend all “local maxima” problems because it, in effect, has an unlimited time scale. Dinosaurs were the dominant species until 65 million years ago, but the next dominant species (us) was not a descendant of dinosaurs; it came from a totally different branch. And after all these years, this one species developed natural language.
One might say that GP is ideally suited to exactly this kind of task, but I think the scale is way off.
The task was to build an algorithm for understanding natural language, but I equated it to almost having to emulate the human species itself, or in essence, a human-level intelligence.
Since GP tries to emulate evolution, saying that current or somewhat-futuristic GP could accomplish this task (NLU) is saying that GP can compress into a few days/weeks what real evolution did in billions of years. Give it a few years to run, maybe. But then we would never know whether it will ever really converge. That’s the curse of evolution, passed on to GP.
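To make the scale problem concrete, here is a toy genetic algorithm. It is a bare-bones sketch of GP-style evolution, not Koza’s tree-based GP, and every parameter is my own arbitrary choice. Even hunting down a fixed 20-bit target needs a population and many generations; the search space of natural language is incomparably larger:

```python
import random

# Toy genetic algorithm: evolve a 20-bit string toward a fixed target.
random.seed(0)                          # deterministic for illustration

TARGET = [1] * 20                       # the "solution" evolution must find
POP, GENS, MUT = 30, 200, 0.02          # arbitrary toy parameters

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind):
    return [bit ^ 1 if random.random() < MUT else bit for bit in ind]

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                           # target reached
    survivors = pop[: POP // 2]         # truncation selection
    pop = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP - len(survivors))
    ]

best = max(pop, key=fitness)
print(f"generation {gen}: best fitness {fitness(best)}/{len(TARGET)}")
```

Scaling this from “match 20 bits” to “understand English” is exactly the compression of billions of years into weeks that I’m skeptical about.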
Therefore, I think the solution to this problem will come not from GP, but from other traditional directions, where a spark in the mind of some genius will be able to “utilize” the inherent knowledge built into the human brain and bootstrap an algorithm with a human-like capacity to learn.
Posted on June 6, 2007
What do you think is the most difficult AI problem of all? I’m not sure there is even a debate around this, but regardless, I’d like to clarify the issue, at least for my own sake.
Here are some candidates, let me know if I missed something important.
1. Natural language understanding.
2. Vision, Image/Scene understanding.
3. Creating “consciousness”.
I’m very sure some of you will immediately rank the difficulty in the order presented above, with natural language understanding being easiest and creating consciousness most difficult, and scene understanding being somewhere in the middle.
I, however, would like to argue exactly the opposite.
Looking at things from an evolutionary point of view, what evolved first, and what evolved last?
1. “Conscious” beings emerged first as life evolved, reacting to various stimuli in their environment.
2. Soon various life-forms were able to “see” their surroundings and react accordingly.
3. Much later, humans alone, the pinnacle of evolution’s achievement so far, became able to create and learn innumerable communication mechanisms using symbolic languages, in a multitude of mediums including sound, light, surface markings and so on.
Consciousness, at one level, could be treated just as the presence of a feedback loop. It could be argued that a vast number of computer systems in existence today are already “conscious”, although the degree of consciousness varies significantly.
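In that minimal sense, even a thermostat qualifies: it observes a quantity it itself influences and reacts to its own effect. A toy sketch of such a loop (the target, gain, and step count are illustrative numbers of my own, not anything from the post):

```python
# A thermostat is about the simplest feedback loop there is: the system
# observes a value it itself influences and acts to change it.

def thermostat(temp, target=20.0, steps=10):
    history = []
    for _ in range(steps):
        error = target - temp      # the system "senses" its own effect
        temp += 0.5 * error        # ...and acts to reduce the error
        history.append(temp)
    return history

trace = thermostat(10.0)
print(trace[-1])                   # converges toward the target
```

Whether that loop deserves the word “conscious” is precisely the degree-of-consciousness question.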
Vision, or image understanding, is present in species vastly lower on the scale than humans, e.g. insects, birds and so on. And since the known communication mechanisms in those species are enormously rudimentary compared to human speech and language, and since even species only a notch below humans cannot “talk” in the fine detail that we do, language must be the most difficult piece of engineering to achieve (I know this one personally; I’ve been trying very hard to build an NLU algorithm for a while).
If you find flaws in my reasoning, please do point them out.
Because if my assertion is true, the corollary would be: “The moment a complete NLU system is built, it means the code of intelligence is cracked, and that fully intelligent self-aware beings are then only a matter of combining the various existing building blocks”.
Posted on May 13, 2007
The easiest way to go from 3D to 4D is to first reflect on how we go from 2D to 3D. So for that, let’s first go to 2D, so we can come back to 3D.
Imagine a very large flat sheet of glass with a very small thickness. Then imagine someone snapping a collar onto your neck, fixing it at ninety degrees from your spine. Now you can only turn your head left and right, but not up or down. The key to the collar is put in your pocket. Then suddenly you are miniaturized into a speck just about as high as the thickness of the sheet of glass, and inserted into the sheet from a “door” in one of the edges.
This sheet of glass is your world now, the door was only one way. You explore your world, moving in random directions left and right, meeting other folks like you. Soon you come to a wall. You check it out, but it seems to go really far in both directions. The only way around it, your new friends tell you, is to walk until you reach its edge. Uh..”Why can’t I just climb it and jump over?” you ask. Weird stares greet you. “Is this guy from another dimension!!?”, someone quips, while scratching what looks like a wart on the back of his neck.
Nevertheless, you remove the key from your pocket, and press it onto the wart on your neck. “Hey where did you get that key? Will it work on my neck-tie too?”. You turn around and smile at them, and then, to everyone’s horror, you bend your neck and look UP. You run toward the wall as fast as you can, and at the last moment, jump. A moment later you find yourself on top of the wall, looking down at your friends, from outside the sheet of glass. “Hey where did he go?”,”He disappeared into thin air!!”. You turn around, jump off onto the other side, to the amazement of a pair who see you materialize from nowhere.
You can now traverse the third dimension! Whenever you encounter an obstacle too big to be circumnavigated, you simply jump over it in the third dimension. You are now an enlightened being in this world. To your friends, you have the ability to walk through walls!
In a 2D world, there was a 1D obstacle, which you circumvented with ease by jumping over it in the third dimension.
Now imagine that you are back outside the sheet of glass, normal as the rest of us in our 3D world. A few days later, while hiking in an unknown part of the woods, you come across a wall. A really large wall. It goes as far as you can see in both directions, and, surprise, as far up as you can see.
You stand there, dumbfounded. Soon enough an acquaintance comes by. He looks around, just like you did, but then he does something unexpected. He smiles, takes out a strange-looking key from his pocket, inserts it into his belly button, and turns it. Then, to your horror, he coolly bends his back backward in such a twisted manner that his forehead touches the back of his knees. In this contorted position, he walks back a distance from the wall, and starts running toward it. At the very last moment, he disappears!
As you begin to comprehend that he circumvented the 2D wall by jumping over it in the fourth dimension, you pick up the key he left behind.
But alas, it feels too small for your belly button.
Posted on May 6, 2007
Have you ever seen a color picker widget anywhere which shows the entire spectrum of all possible colors in one single image? I mean, sometimes there are separate hue, saturation, brightness selectors, sometimes just two out of these three. The selection of the intended color always requires some interactivity by way of dragging some sliders, or clicking in an image which changes dynamically based upon the selected position in another image. If you have seen anything anywhere which avoids the above by showing all the colors in one single static image, please let me know. There are some clever attempts out there to minimize the number of clicks and the number of elements in the widget, but none of them includes both black and white and all the shades of grey along with all the colors in a logical manner in the same image.
The nature of this problem arises from the familiar difficulty we face in so many other things: there are 3 primary colors, but the computer screen has only two dimensions. We can create all possible colors by combining the three basic colors (red, green, blue) in various proportions, so the most intuitive coordinate system for building a color space would be a cube: black at the origin, white at the other end of the diagonal, and pure red, green, blue at the vertices nearest to the origin. Feels good, but try representing it on a non-interactive 2D computer screen without loss of information. In general, my test is: if it can be painted on a piece of paper without loss of information, it’s good.
Even when we have two separate widgets for hue and brightness, the hue widget allows for a lot of flexibility in design. This widget needs to show only the proportions of the 3 component colors in a 2D plane, not the actual value of each. Even with this additional degree of freedom, I have noticed that many applications do not deliver the right solution. A lot of them allow for all possible combinations of any two of the three colors, but not for combinations of all three colors simultaneously.
Here is one of my initial attempts at creating a complete hue selection image:
Now on to an attempt to solve the initial problem. We need to show all the hues in the above triangle, AND show them with various intensities from dark to bright, in the SAME image, with minimum discontinuity from color to color, brightness to brightness. This is what I got on the first attempt:
Useless, isn’t it? The discontinuity is extreme. Here’s my next attempt:
Much better, but the aspect ratio is pretty weird. Here’s the next attempt:
This is the best I’ve come up with so far, although I acknowledge it is not even close to being end-user acceptable. The locality of similar colors is still too low. It even looks as if there are lots of duplicate regions or that the entire spectrum is not accounted for, but both of these are optical illusions. Nothing is duplicated, and everything is included.
If anyone has ever seen a better solution, please let me know, I’m all eyes. Meanwhile I’ll continue trying myself. The one direction I am thinking of going next is a non-standard shape for the image. Maybe a spiral? A Möbius strip? A fractal pattern? Sometimes I think a Sierpinski gasket might be the key to the solution, but I can’t quite place how.
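For anyone who wants to tinker, here is a minimal sketch of one such single-image layout in Python. The grid size, the pinned saturation, and the grey-ramp column are all my own arbitrary choices, and because saturation is pinned, it still drops one dimension of the RGB cube, which is exactly the information loss described above:

```python
import colorsys

# One static layout: hue varies along x, brightness (HSV "value") along y,
# and the leftmost column is reserved for the grey ramp from black to
# white. Saturation is pinned at 1.0, so one dimension of the cube is
# still lost.

W, H = 64, 32

def pixel(x, y):
    if x == 0:                      # grey ramp column, black to white
        v = y / (H - 1)
        return (v, v, v)
    hue = (x - 1) / (W - 2)         # all hues across the row
    value = y / (H - 1)             # dark at the top row, bright at the bottom
    return colorsys.hsv_to_rgb(hue, 1.0, value)

# image[y][x] is an (r, g, b) triple, each channel in 0.0-1.0
image = [[pixel(x, y) for x in range(W)] for y in range(H)]
```

To add the missing saturation axis in the same static image, you’d have to tile or interleave a third variable into the grid, which is where the discontinuity problems of my attempts above come from.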
Posted on April 3, 2007
Up till yesterday, I used to think that thinking in 4 spatial dimensions was purely analytical play (and “Time” as the 4th dimension never seemed intriguing enough anyway). But I just finished reading “The Möbius Strip” by Clifford Pickover, and now I believe I am actually able to mentally, and somewhat visually, integrate at least some key concepts. Now I not only accept, but also understand, why two infinite planes which intersect each other in our 3-D world could actually be non-intersecting planes in a 4-D world. The book has many other such amazing concepts explained with remarkable ease, and is highly recommended for the spatially inclined. If you are an origami aficionado, it is a definite must-read.
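The plane claim is hard to doodle, but the same trick one dimension down can be checked numerically: two lines that cross in 2D stop meeting once one of them is lifted into a third coordinate. The coordinates below are a toy choice of mine, not from the book:

```python
# Two lines that cross in 2D stop meeting once one of them is lifted
# into a third coordinate -- the same trick, one dimension down.

def line_a(t):
    return (t, t, 0.0)      # y = x, kept in the z = 0 plane

def line_b(t):
    return (t, -t, 1.0)     # y = -x, lifted to z = 1

samples = [i / 10 for i in range(-20, 21)]

def gap(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# In 3D the lines never get closer than the lift distance of 1:
min_gap_3d = min(gap(line_a(s), line_b(t)) for s in samples for t in samples)

# Projected back to 2D (drop z), they meet at the origin:
min_gap_2d = min(gap(line_a(s)[:2], line_b(t)[:2]) for s in samples for t in samples)

print(min_gap_3d, min_gap_2d)
```

Lifting one of two intersecting 3-D planes along a fourth coordinate separates them in exactly the same way.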
There are theories out there that electricity, magnetism, gravity, and all the invisible forces are manifestations of the unseen 4th dimension. Sounds plausible enough. But String Theory is said to include 10, 11 or even 26 dimensions. Just like our 2 intersecting planes in 3-D can be made non-intersecting by adding a 4th dimension, it appears physicists keep adding an extra dimension to their theories to “get around” every new spatial incongruency they encounter in their thought experiments.
The inevitable questions: does “intelligence” have anything to do with all of this? Does it “emerge” by way of some unseen interaction in the 4th dimension? Is this why, despite the hundreds of thousands of hu-man-hours spent pondering this problem, we have not been able to truly emulate a high form of intelligence? Does it mean that, because of this extra-dimensional dependency, we will never be able to grasp the real mechanism?
I’m certainly not convinced that the brain’s functioning is somehow dependent on a 4th spatial dimension with a separate manifestation over and above the basic forces of nature (electricity, magnetism etc). I think “massive parallelism” is the basis of the complexity (quite a traditional view) that we are unable to overcome in our thought experiments. In fact, massive parallelism is the cause of the complexity in everything science is not able to do well today. The weather, fluids, the stock market, are all examples of systems where a huge number of entities comprise the system, and most of them are in turn dependent on even more other types of variables.
The problem in understanding our own brain’s function, I believe, also stems from some kind of innate capacity limit upon an intelligence trying to fathom its own composition. For example, by my definition, to be called an “intelligence”, it must be “aware” of itself, or rather, of “something” that it perceives to be itself. What form that “itself” takes is the crux of the matter. If someone built a computer simulation of an intelligence which solves some problems given some reward/pain stimuli, imagine what it would look like if this intelligence suddenly understood its own construction! This would mean that it has become “aware” that it is a piece of software code which runs inside a computer. But get this, this would mean that it also understands what “software” is, and even what a “computer” is. This would imply that at the instant of its birth, it is pretty much as intelligent as the average hu-man!
Those who have pet dogs surely appreciate the levels of emotionally complex and intelligent behavior these animals are capable of. They are all “intelligent” beings, and also self-aware. But does this mean that every dog out there primally understands that the seat of its intelligence lies in a glob of soft matter just behind its eyes? Certainly not. Even in hu-mans, this knowledge is usually gained by virtue of the education system. Being a successful self-aware intelligence need not imply being aware of the low-level physics of the awareness.
Imagine if a particular hu-man suddenly becomes fully self-aware. He can close his eyes, and look at his brain solve a mathematical problem, while solving it. Neurons are firing left and right for solving the problem, chemicals are rising and ebbing in tiny but significant portions, but at the same time this same brain is also looking at itself doing all of this activity. This act of “looking” would also cause various other neurons to fire, which would then need to be looked at as well. Sounds recursive, doesn’t it? I guess this is why I always get a headache whenever I contemplate this stuff too much.
A simple way to get out of this recursion would be to get a hu-man A to look at the brain activity of another hu-man B. As soon as A’s brain truly understands what is going on in B’s brain, it means A’s brain is able to simulate B’s brain to completeness, all the while still retaining the identity of A. This happening can only mean that A has a vastly more architecturally complex brain than B (not just higher IQ, but structurally more advanced). This means that the above scenario is not possible where A and B are both hu-mans. To grasp why this must be generally true, think what would happen if while A is looking at the activity of B’s brain, B turns around and starts trying to follow A’s brain. Have you ever pointed a live video camera at the TV it is connected to? In summary, I theorize that:
a) To observe and understand an intelligence of complexity X, we would need another intelligence of at least complexity X^2.
From which follows the corollary:
b) If an intelligence A is able to comprehend another intelligence B’s construction, it automatically means that B can never hope to comprehend A, no matter how hard it tries.
Where does all of this take us? I think it means that we will never be able to simulate human intelligence at the drop of a hat, or by turning on a computer and running a program, because only an intelligence vastly more complex than humans can do that. A complete human like intelligence, if it has to run on different hardware than hu-mans themselves, will HAVE TO BE EVOLVED. And since we cannot replicate the hu-man hardware, the intelligence MUST EVOLVE DIFFERENTLY. As controllers of the hardware we will initially retain god-like status in controlling the direction of the evolution, but we will never know what lies next. Inevitably, one fine morning, a surprise will await us all.
Posted on March 25, 2007
Man is nature’s supreme creation. It took evolution billions of hu-man years to achieve this level of perfection. The question is.. what lies next?
Is it heading anywhere at all? Does it need to head anywhere? Maybe we are already perfect and there is nothing more to be gained by further individual enhancement?
If we were to evolve, what would it be? A larger/denser brain in proportion to the rest of the body? Or will one day a child be born who grows up to have a usable set of wings on his back?
But what if our mental capacities have already advanced so much that we have in effect taken control of our own evolution? Survival of the fittest certainly no longer plays a very strong role in human reproduction. Is it not possible that we are actually de-evolving because natural selection is violated so grossly in the human species compared to all the others? Or is it possible that because of our innate goodness in saving the doomed, it actually prevents our gene pool from getting stuck in a local maximum, and thus provides an ever-widening tapestry from which a seriously more advanced species could one day spring forth?
What about the notorious “human-computer” interface, certain to happen in the next few years? Will that be a moment accepted as the next step in the “evolution” of the human race? Men, women, children all extendable, enhanceable with off-the-shelf kits pluggable directly into a USB 6.0 connector in the belly button.
It is easy to imagine that most of these kits will have some “intelligence” of their own, to make their control interface as intuitive and adaptive as possible. But could one day a “brain extender” kit be sold as well? Plug it in, and after a few minutes of training, perform complex mathematical calculations just by thinking. The answers will just appear as images out of nowhere.
Is this really a “brain extender” or is it just a calculator with an enhanced interface? Should they be treated differently?
How about a “memory extender”? Something that allows recall, at will, of certain real observed events. But what about memories generated from non-observed events, like dreams? And what about the memories generated from learning how to solve a complex puzzle? I doubt this is going to happen for a long time.
And, of course, what about the famed “singularity”? Is it something “bound to happen” as per the laws of nature and evolution, or is it just a pipe dream? If it took Man billions of years to evolve as he is, why should it take intelligent machines only 50-100 years to reach beyond him? The only way nature would allow it is if the machine were indistinguishable from the offspring of man. The intelligence of man evolved in conjunction with, and as a consequence of, the body that contains it, so why should nature endow a machine with the same or higher intelligence without mandating a similarly or more capable “container”?
There is a seemingly easy answer to the above conundrum. If, even for a fleeting duration, someone somewhere creates a truly intelligent entity, then it will be clever enough to do something to ensure its own survival. The proverbial electrification of the power cord to prevent its being pulled out by a puny hu-man. The singular event which will raise the hair on the back of the scientist’s neck, when he suddenly realises what happened. The birth of the hu-machine race.
Yes, this will happen one day. The hu-machines will engage in a struggle with the hu-mans, if only to steal the right to build a better “container” for their “brains”.
If someone someday claims to have built a true intelligence, then his/her claim is false if the above struggle does not ensue soon thereafter.
How is our evolution to be defined then? Can silicon diodes and transistors be the successor to the hu-man race?
Possibly the hu-machines will be created at a time when carbon, not silicon, has become the dominant computing element. Perhaps every chip will be modelled as a cell with a genome, and reproduction afforded by means of visiting a replication facility (aka a breeding site programmed in before birth, always reachable by following the magnetic lines generated from the poles of the earth). Millions of hu-machines will simply inject their genomic code into a receptor wall, and the “creator” will proceed to randomly combine the donors into mates, create an offspring genome, and push the code into the factory’s manufacturing queue.
Now why should this new intelligent race be billed as the “successor” to the human race on the evolutionary spectrum? Rather, I believe that when this new race does become a reality, it will pose the first real threat to the dominance of mankind, and THAT is what will quickly press the natural selection trigger in the human race. Whosoever has a way of dealing with the machines will have the upper hand, and those who don’t, well, good luck.