The brain, the body, and survival

Does the brain exist and evolve to prolong the survival of the body? Or does the brain use the body to prolong its own survival? Or are they one and the same thing and should not be discussed as two separate entities?

Hypothetically, is it possible to “evolve” a brain as complex and intelligent as a human’s, without attaching it to a human-like body?

On the flip side, is it possible to create a human-like body and expect it to survive out in the world, without providing it with a human-like brain?

To me, asking questions like these instantly exposes the flaws in traditional AI techniques. Everything that has been built so far learns only because it is force-fed information that we want it to be able to process.

If it doesn’t care about life and death, can it learn?

If it doesn’t yearn, does it learn?


Intelligence without a brain

Is it possible to have “intelligence” without having a brain to account for it? Are bacteria intelligent? Better still, are plants intelligent?

What defines intelligence? If a bacterium is able to move slowly in the right direction and attach itself to the right foreign bodies to enhance its own chances of survival, is it intelligent? Or is it just following a set of very basic rules which any machine could be programmed to follow, and is therefore not intelligent?

A plant grows and orients itself correctly in the direction of sunlight. Even if that is just a result of some chemical imbalances caused by differences in light receptivity, can it be called intelligence?

What is intelligence after all? Is it just that trait which enables an organism to survive in its environment? Or is it actually the opposite: the capacity of an organism to do things which might endanger its survival? The second one might sound counter-intuitive. But humans are known to indulge in the most extreme forms of self-endangerment (even suicide), and if humans are the most intelligent species on the planet, then maybe there is something to it?

Are animals just a higher and more complex form of the intelligence found in plants (or single-celled organisms, etc.), or is there some magic threshold of novel behavior above which everything can be called intelligent? What is that threshold?

What about things such as “collective” intelligence, where multiple simpler organisms collaborate to enhance the chances of survival of the entire group? This is very common all across the living spectrum, but it can be applied closer to home too: is an individual neuron cell within a brain intelligent? If not, is the brain itself intelligent, or is it the containing entity that is intelligent?

Lots of questions, but I believe that until questions such as these are answered, or at least until we shed our inhibitions in attempting to find the answers, we won’t get closer to solving the mystery of intelligence.

Typically, there are two types of behavior exhibited by any organism. One is genetic, i.e. what the species is programmed to do from birth. A bird finds a worm tasty; well, so be it. Every living species has some kind of pre-programmed attitude towards its environment. This attitude is a result of evolution, but nevertheless, its whole purpose is to prolong the survival/reproductive chances of the organism.

The second type of behavior is more interesting. All the changes in attitude that happen to an organism, based upon the specifics of its own environment, are what it has “learned” in its lifetime. Therefore, multiple copies of the same organism would, in general, behave differently over time if their environments are not absolutely identical.

If someone builds a robot with a huge number of pre-programmed behaviors for different situations (like answering questions), we might commonly call it intelligent. But enlightened folks would test whether it actually learns to correctly answer new questions that it was not pre-programmed with. If it does, it would be deemed amazing and truly intelligent.

In contemporary terms, this is the closest definition of intelligence I can think of thus far: the capacity of an organism to learn new behavior in a given situation, which is more appropriate for its survival than what it knew the last time around.

So a simple test for intelligence in an entity would be: take a snapshot of the entity’s behavioral specifics at birth (or, to be fair, only after the basic minimal physical development necessary for survival is complete), as a set of various stimulus/response pairs, and take another snapshot at a sufficient age, but reasonably before the expected average life-span. Subtract the two, and if the difference shows a marked increase in chances of survival across many situations, then it is intelligent.
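As a toy sketch of this test, where every stimulus, response, and survival score below is invented purely for illustration:

```python
# Toy sketch of the snapshot-and-subtract intelligence test.
# All stimuli, responses, and scores here are hypothetical.

def survival_score(snapshot, score_table):
    """Average survival value of an entity's responses across stimuli."""
    return sum(score_table[stim][resp] for stim, resp in snapshot.items()) / len(snapshot)

# Hypothetical survival value of each possible response to each stimulus.
score_table = {
    "predator_near": {"freeze": 0.2, "flee": 0.8},
    "food_spotted":  {"ignore": 0.1, "approach": 0.9},
}

# Behavioral snapshot taken at "birth" (after basic physical development)...
at_birth = {"predator_near": "freeze", "food_spotted": "ignore"}
# ...and another taken later in life, well before the expected life-span.
later = {"predator_near": "flee", "food_spotted": "approach"}

# "Subtract the two": a marked increase suggests learned, adaptive behavior.
delta = survival_score(later, score_table) - survival_score(at_birth, score_table)
print(f"survival delta: {delta:+.2f}")  # positive => intelligent, by this test
```

The hard part, of course, is hidden inside `score_table`: measuring how much any given response actually improves survival.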

Does this seem like a fair test?

If it is, then it would mean that entities like collaborating bacterial colonies are actually intelligent. Every bacterium follows some basic chemical signalling rules, and the behavior of the group as a whole adapts to never-before-seen circumstances. What we call a brain in higher organisms is also, of course, a collection of multiple single cells called neurons, communicating with each other using chemical neurotransmitters and actual physical pathways and junctions, all working in conjunction to enhance the survival of the containing body.

Where does this leave plants? Unless I see real evidence of a plant’s “behavior” changing over time in response to the same stimuli, I would probably classify plants as “not intelligent”. But, surprisingly, for a forest I would think quite the opposite. Who knows what kinds of chemical signals nearby plants exchange with each other during their lifespans? If a bacterial colony can be intelligent, a forest of giant trees, each living a thousand years, has to be infinitely more intelligent.

In my next post, I’ll try to tackle a related topic, “intelligence without survival”.

Is the brain simpler than we think?

For the last couple of years I’ve been wrapping my brain around the question of knowledge representation and the decision-making process within the… uh, brain. I’m not fully there yet, but I am close to the conclusion that the basic principles can be modelled using simple probability theory, applied in a hierarchical manner. The deeper the hierarchy, the more intelligent the being, in general.

Conversely, looking at all the research around modelling neurons directly in software to emulate brain-like behavior: maybe all that goo in our cranial cavity is actually nature’s way of building something that can do mathematical operations, albeit massively in parallel? Do we really need to model the brain dendrite by dendrite, axon by axon, synapse by synapse to get the same basic functionality in software?

Recently I have been researching and writing code for sentence segmentation, a rather well-beaten topic in the research community, particularly for segmenting Chinese sentences into words (Chinese is usually written without any spaces between the words/characters). Since I only wanted to create a generic algorithm (and I don’t understand Chinese), Ijusttakearegularenglishsentenceandremoveallthespacesbetweenthewordsasmytestinput. (I just take a regular English sentence and remove all the spaces between the words as my test input.)

The algorithm then figures out where exactly to put the spaces such that the output will make sense… or rather, the “most” sense. For example, consider a fragment of input: “isawhim”. This could be “i saw him”, or it could be “is a whim”. If you knew absolutely nothing else about the context in which this was said, which would you choose? Probably the first one. But if you knew that this fragment was preceded by an “it”, as in “itisawhim”, then it’s obvious that the second choice is the better one, because “it i saw him” doesn’t “make sense”. Consider the case of “comealong”. If you knew it was succeeded by “now”, then it would be “come along”, but if it was succeeded by “way”, it would be “come a long”. Usually it’s not just the directly preceding or succeeding words which provide the evidence, but many words before and beyond.
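To make the “isawhim” example concrete, here is a toy sketch; the bigram counts below are entirely invented, but they show how surrounding context can flip which segmentation scores higher:

```python
# Toy illustration of context-driven disambiguation.
# The bigram counts are made up; a real system would estimate
# them from large text corpora.

from collections import Counter

# Hypothetical counts of word pairs seen together "in the wild".
bigrams = Counter({
    ("it", "is"): 50, ("is", "a"): 40, ("a", "whim"): 5,
    ("i", "saw"): 30, ("saw", "him"): 20, ("it", "i"): 1,
})

def score(words):
    """Sum of bigram counts; higher means the sequence 'makes more sense'."""
    return sum(bigrams[pair] for pair in zip(words, words[1:]))

# Without context, "i saw him" edges out "is a whim":
print(score(["i", "saw", "him"]))       # 30 + 20 = 50
print(score(["is", "a", "whim"]))       # 40 + 5  = 45

# With the preceding "it", the evidence flips decisively:
print(score(["it", "i", "saw", "him"]))  # 1 + 30 + 20 = 51
print(score(["it", "is", "a", "whim"]))  # 50 + 40 + 5 = 95
```

The same mechanism extends to longer windows: the more surrounding words you score against, the more evidence you accumulate for one reading over another.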

In layman’s terms, the algorithm works by maintaining a running measure of “goodness” across a streaming input of continuous characters without spaces. It doesn’t need to know a start or an end. At every point it evaluates the thousands of possible valid combinations that could be formed over a large number of characters, and eliminates those which would cause goodness to drop in the future. The goal is not to move in the direction of short-term maximum goodness, but rather long-term lack of badness. In other words, it will do somewhat bad things to get to a good future, short of doing something suicidal.
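One way to sketch this idea in code is a small Viterbi-style dynamic program that scores whole segmentations rather than making greedy local choices. The vocabulary and bigram scores below are invented for illustration (a real system would learn them from corpora), and this is only a sketch of the principle, not the actual algorithm described above:

```python
# Toy bigram segmenter: picks the segmentation with the best total
# log-score, so a locally tempting word can lose to one that keeps
# the rest of the stream "good". All scores here are invented.

import math

# Hypothetical bigram scores (higher = the pair occurs together more often).
bigrams = {
    ("<s>", "come"): 1.0, ("come", "along"): 2.0, ("come", "a"): 1.0,
    ("along", "now"): 2.0, ("along", "way"): 0.01,
    ("a", "long"): 2.0, ("long", "way"): 3.0, ("long", "now"): 0.01,
}
words = {"come", "a", "long", "along", "way", "now"}

def segment(text):
    """Viterbi over states (position, last word) -> (score, words so far)."""
    best = {(0, "<s>"): (0.0, [])}
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 10), i):        # cap word length at 10
            w = text[j:i]
            if w not in words:
                continue
            for (pos, prev), (s, seq) in list(best.items()):
                if pos != j:
                    continue
                cand = s + math.log(bigrams.get((prev, w), 1e-6))
                if (i, w) not in best or cand > best[(i, w)][0]:
                    best[(i, w)] = (cand, seq + [w])
    # Best complete segmentation: highest-scoring state at the end of text.
    end = max((v for (pos, _), v in best.items() if pos == len(text)),
              key=lambda v: v[0])
    return end[1]

print(segment("comealongnow"))  # ['come', 'along', 'now']
print(segment("comealongway"))  # ['come', 'a', 'long', 'way']
```

Note how, in “comealongway”, the locally attractive word “along” loses: keeping it would force the terrible pair (“along”, “way”) later, so the search accepts the short-term cost of three small words to avoid the long-term badness, which is exactly the trade-off described above.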

The “long term” is the key here. In time-based terms, some activity happens at the millisecond level (like interpreting the sounds in sentences we hear as words), some happens at the level of fractions of a second (applying meaning to the words as we interpret the sentence), and some at the level of seconds (the meaning of a sentence, the emotion within a sentence). The important thing to note is that activity at higher levels affects past activity at lower levels. For example, someone starts saying a sentence in a nice tone, “jack is a…”, and the words “great”, “nice”, etc. start flashing in our minds; but then the tone changes, and it ends with “…complete jerk”, said with sarcasm. At this point, the nice image of jack dissolves, and the sentence is interpreted in a different way.

Taking this up to higher levels, it’s not difficult to visualize how the task of going from point A to point B allows us to tolerate the nastiness of a bad journey, because the pleasure of reaching is much greater than the problems encountered along the way. Going up a level, point B might not actually be a particularly enjoyable city, but we accept it because it might result in a future career advancement.


Here’s hoping this blog never returns.