Weak Intelligence

by M. Gams

Can computers ever become intelligent?

1."NO, unless constructed significantly differently", says weak artificial intelligence (weak AI), "regardless how fast computers become" (Penrose, Gams).

2."YES", says strong artificial intelligence, "just wait for faster computers" (Moravec, Minsky).

3."NEVER", say mentalists.
 

Can machines ever become intelligent?

"No", say mentalists. Weak and strong-AI researchers say "Sure" - we humans are also kind of machines. And, weak-AI researchers unlike strong-AI ones assume that humans are very, very special computing machines, stronger than current computers formally modeled by Turing machines thus contradicting the Church-Turing thesis (Wegner; Bringsjord Strong AI Is Simply Silly, AI Magazine,1997). Advances in science will very probably enable construction of intelligent machines sooner or later, even though we might never be fully able to explain all intelligent processes. The "easy" task (just a term) or question is how to construct an intelligent machine, the difficult or "hard" task is to explain and understand consciousness and intelligence (Chalmers, Dennett).
 

These are different viewpoints in the old AI debate (Hauser; Rose; Humphrys), starting with weak-AI pioneers Dreyfus and Winograd, followed by Searle, who defined the term weak AI and devised the Chinese-room experiment, and by a weak-AI revival in the last decade: Penrose (The Emperor's New Mind, Oxford University Press, 1990) argued that humans can see the truth of statements that no formal system can prove, while Aaron Sloman (The emperor's real mind: review of Roger Penrose's The Emperor's New Mind, Artificial Intelligence, 1992) graded the strong-weak AI opposition into stupid extremes and possible compromises.

In other words, can intelligent behavior, consciousness and other essential human properties be achieved by a computing mechanism performing an algorithm on digital data? Do the computer and the brain have equivalent computing power? Can an appropriately complex system of mathematical equations describe and solve any problem, e.g., can we represent the performance of the human brain by a set of equations? Can formal sciences based on mathematics solve all real-life problems?

The last question represents the core of the problem. Science and technology have solved many mysteries and helped humans become masters of at least our solar system. But why can we not simulate the performance of the human brain with computers, which can calculate at least 1000 times faster than humans? Computers progress exponentially according to Moore's law, doubling their performance every 2 years, while humans hardly improve from generation to generation. If computers had been even slightly intelligent 10 years ago, computer intelligence should clearly be noticeable by now. This is the empirical argument presented by Wilkes, Dreyfus, Searle and others.
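The compounding behind that argument is easy to make concrete. A minimal sketch of the arithmetic (the function name `moore_factor` is my own illustration, not from the text):

```python
def moore_factor(years, doubling_period=2.0):
    """Performance multiplier after `years` of doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over the 10-year span mentioned above, hardware alone gains a factor of 32:
print(moore_factor(10))  # 2 ** 5 = 32.0
```

So if even a sliver of intelligence were present a decade ago, a 32-fold speedup should have made it hard to miss - which is exactly the empirical point being made.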

Why is human civilization so interested in these questions, and why are there so many different viewpoints? Why are AI movies like The Matrix or Spielberg's recent A.I. attracting human imagination all over the world? Because common sense tells us that there is something very deep and relevant in these matters. Because our common sense tells us that the strong-AI viewpoint is wrong.

The debate is often related to the relevance and power of science and AI. In my opinion, it is quite acceptable that science cannot solve all problems in all situations, but it remains, on the other hand, the best means of producing scientific truths (a synonym for truth). If AI cannot achieve true intelligence on current digital computers with existing approaches, then we just have to invent new scientific comprehensions upgrading current knowledge. From this viewpoint, attacks on AI in general (Penrose, Dreyfus) are misguided, since they actually attack strong AI and promote weak AI.

One of the problems is that it is hard to define what intelligence and consciousness are. All attempts to define them by formal means have produced nothing reasonable. The most famous is the Turing test (Computing Machinery and Intelligence, Mind, 59:433-60, 1950), in which a human must, in a limited time and through computer-mediated communication with two subjects, discover which is the human and which the human-simulating program. Many researchers think that the Turing test is useless for many reasons: some, like Dreyfus and Searle, say that even passing the test is not sufficient to accept a program as conscious, while others claim that what is needed first is a kind of animal intelligence (Brooks). Indeed, humans would recognize even an animal type of intelligence if it were present in computers, but we can see that there is nothing like that in computers so far.
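The test protocol itself is purely mechanical and can be sketched in a few lines. All four callables here (`ask`, `reply_human`, `reply_machine`, `judge`) are hypothetical placeholders for the interrogator and the two hidden subjects, not anything from Turing's paper:

```python
import random

def turing_test(ask, reply_human, reply_machine, judge, rounds=5):
    """Minimal sketch of the Turing-test protocol described above.

    `ask(label, round)` produces the interrogator's next question,
    `reply_human`/`reply_machine` answer it, and `judge(transcripts)`
    returns the label the interrogator believes is the program.
    Returns True if the machine was unmasked, False if it passed.
    """
    labels = ["A", "B"]
    random.shuffle(labels)  # hide who sits behind which label
    subjects = dict(zip(labels, [reply_human, reply_machine]))
    machine_label = labels[1]  # labels[1] was paired with the machine

    transcripts = {label: [] for label in labels}
    for r in range(rounds):
        for label in labels:
            question = ask(label, r)
            transcripts[label].append((question, subjects[label](question)))

    return judge(transcripts) == machine_label
```

The philosophical dispute is precisely about what a passing verdict from `judge` would prove, not about this mechanical protocol.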

Consider the strong-AI extremes claiming that thermostats have minds and that Einstein's brain, written down in the form of a book, could possess intelligence (Sloman). Have you ever seen a book performing any action (just any) on its own? Even if there were something resembling intelligence in the Einstein book, it would be a kind of static copy, which clearly could not perform anything. Being intelligent means one is at least capable of performing actions on one's own.
 

Consider another example: when decoding the human genome, scientists were surprised that so few genes determine so much about us. Obviously, the power must lie in the interactions between genes. Another example: neurons in our brains transmit just 0 or 1, like artificial switches. Again, the neurons themselves cannot produce such complex behavior; it must be the way they interact with each other. My assumption is that it is not the special power of the basic brain tissue (Hameroff, Penrose), but the way thinking processes interact with each other, with the environment and with other actors in it. I call it the Principle of multiple knowledge (Weak Intelligence, 2001), a principle dividing current computers from intelligent thinking mechanisms. It is in a way similar to Heisenberg's principle, which describes an essential property of the sub-atomic world. The similarity is greater with Wegner's claim that interaction is stronger than algorithms - in other words, that intelligent agents on the Internet (Bradshaw) are in principle stronger than stand-alone computers, i.e. universal Turing machines. The major reason for superior performance is the open, truly interactive environment, which not only cannot be formalized, but enables solving tasks in principle better than with the Turing machine. Several other researchers, like Minsky or Perlis, have also presented similar ideas (please email me if you think you should be mentioned at this site).
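The Principle concerns interacting processes, but even the weakest form of "more heads know more" can be quantified. A minimal Condorcet-style sketch, assuming independent voters (an independence simplification of mine, not the author's formulation):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Three mediocre experts already beat any single one of them:
print(majority_accuracy(0.7, 3))  # ~0.784, better than 0.7
```

Even under independence the combination outperforms its parts; the Principle argues that genuine interaction among knowledge sources adds more than this simple voting model captures.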
 

There are also several other confirmations of the major weak-AI thesis in the book "Weak Intelligence", briefly presented here:

When presenting the principle of multiple knowledge (henceforth, the Principle) at various AI conferences, two groups of objections were observed:
1. "The principle is not valid since it contradicts the Church-Turing thesis. The Turing machine with multiple tapes corresponds to one Turing machine with one tape."
Reply: This strong-AI position, asserting the all-mightiness of the formal sciences, is addressed throughout the book. The essential difference lies in multiple programs interacting with one another, hence the principle of multiple knowledge.
2. "So what if the Principle holds? Everybody knows that more heads know more, but that doesn't change anything."
Reply: Consider a couple of consequences:
- Basic computer-science and scientific books alike will have to add another chapter saying that the Turing machine is not the universal computing mechanism it was thought to be in the last century. Wegner's interactive "Turing machine" is the needed modification. (But for formal domains there is nothing wrong with the Church-Turing thesis and the universal Turing machine.)
- Humans, albeit slower than computers, are in principle stronger computing mechanisms than current computers. To design computers able, at least in theory, to achieve intelligence and consciousness, they have to be designed in this multiple, interacting way.
- Humans are much more complex than currently thought. A book describing Einstein's brain is not just enormous; each statement points to thousands of other statements, and they dynamically interact with each other. Combinatorial explosion makes the book in effect unreadable (not to mention that the computing mechanism is still absent).
- Humans are not the single identities we see ourselves as; rather, each of us has a multiple personality as the essential computing mechanism, and each of us has a society of minds as proposed by Minsky.
- We might not be living in one universe, as we perceive it. There is a huge number of possible universes and we travel through them using information from others as well.
 
The book "Weak Intelligence" firmly defines the weak-AI approach as a regular scientific discipline. It is claimed that these new methods in theory enable producing intelligence in computers