On the 20th of March the Ghent strong AI meetup group will discuss John McCarthy's 2007 paper From here to human-level AI, so I decided to highlight some parts and write up my notes and thoughts.
1. What is human-level AI?
There are two approaches to human-level AI, but each presents difficulties. It isn’t a question of deciding between them, because each should eventually succeed; it is more a race.
- If we understood enough about how the human intellect works, we could simulate it.
- To the extent that we understand the problems achieving goals in the world presents to intelligence we can write intelligent programs. That's what this article is about.
Much of the public recognition of AI has been for programs with a little bit of AI and a lot of computing.
There are some big projects and a lot of researchers attacking the human-level AI problem with one of the two approaches. The first approach, or at least part of it, is used by the American BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) and the European Human Brain Project. The second approach, and especially everything related to deep learning, has recently had a fair amount of publicity due to human-level and super-human results on some pattern recognition tasks (image classification, face verification) and games (human-level performance on 29 Atari games). For an overview and history of deep learning you can check out the 88-page overview article by Jürgen Schmidhuber. As a side note, Jürgen Schmidhuber recently did an interesting AMA (ask me anything) on Reddit (summary from FastML).
2. The common sense informatic situation
The key to reaching human-level AI is making systems that operate successfully in the common sense informatic situation.
In general a thinking human is in what we call the common sense informatic situation. It is more general than any bounded informatic situation. The known facts are incomplete, and there is no a priori limitation on what facts are relevant. It may not even be decided in advance what phenomena are to be taken into account. The consequences of actions cannot be fully determined. The common sense informatic situation necessitates the use of approximate concepts that cannot be fully defined and the use of approximate theories involving them. It also requires nonmonotonic reasoning in reaching conclusions.
Nonmonotonic reasoning: a logic is non-monotonic if some conclusions can be invalidated by adding more knowledge (source).
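The definition is easy to demonstrate concretely. Here is a minimal sketch (my own illustration, not from the paper) using the classic birds-fly default: a conclusion drawn by default is withdrawn once a new fact is learned.

```python
# Default reasoning: conclude that a bird flies unless we know an exception.
# Adding knowledge (Tweety is a penguin) invalidates the earlier conclusion,
# which is exactly what makes the reasoning nonmonotonic.

def flies(animal, facts):
    """Default rule: a bird flies unless it is known to be a penguin."""
    return ("bird", animal) in facts and ("penguin", animal) not in facts

facts = {("bird", "tweety")}
conclusion_before = flies("tweety", facts)   # default conclusion holds

facts.add(("penguin", "tweety"))             # a new fact arrives...
conclusion_after = flies("tweety", facts)    # ...and the conclusion is withdrawn
```

In a monotonic logic, growing the set of facts can only grow the set of conclusions; here it shrinks it.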
Common sense facts and common sense reasoning are necessarily imprecise. The imprecision necessitated by the common sense informatic situation applies to computer programs as well as to people.
3. The use of mathematical logic
Mathematical logic was devised to formalize precise facts and correct reasoning. Its founders, Leibniz, Boole and Frege, hoped to use it for common sense facts and reasoning, not realizing that the imprecision of concepts used in common sense language was often a necessary feature and not always a bug. The biggest success of mathematical logic was in formalizing purely mathematical theories for which imprecise concepts are unneeded. Since the common sense informatic situation requires using imprecise facts and imprecise reasoning, the use of mathematical logic for common sense has had limited success. This has caused many people to give up. Others devise extended logical languages and even extended forms of mathematical logic.
Further on, he notes that using different concepts and different predicate and function symbols in a new mathematical logic language might still make mathematical logic adequate for expressing common sense. But he is not very optimistic.
Success so far has been moderate, and it isn’t clear whether greater success can be obtained by changing the concepts and their representation by predicate and function symbols or by varying the nonmonotonic formalism.
4. Approximate concepts and approximate theories
Other kinds of imprecision are more fundamental for intelligence than numerical imprecision. Many phenomena in the world are appropriately described in terms of approximate concepts. Although the concepts are imprecise, many statements using them have precise truth values.
He follows up with two clarifying examples, one about the concept Mount Everest and the other about the welfare of a chicken. Mount Everest is an approximate concept because exactly which pieces of rock and ice constitute it is unclear. Still, it is possible to infer solid conclusions from a foundation built on a quicksand of approximate concepts without definite extensions, e.g. if you haven't been to Asia then you've never climbed Mount Everest. The core of the welfare-of-a-chicken problem is: is it better to raise a chicken with care and nice food and then slaughter it, or would it have a better life in the wild, risking starvation and foxes? McCarthy concludes from this:
There is no truth of the matter to be determined by careful investigation of chickens. When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.
In order to reach human-level AI we'll have to represent approximate concepts in a way that lets a computer reason about them.
5. Nonmonotonic reasoning
6. Elaboration tolerance
7. Formalization of context
8. Reasoning about events - especially action
Human-level intelligence requires reasoning about strategies of action, i.e. action programs. It also requires considering multiple actors, as well as concurrent and continuous events. Clearly we have a long way to go.
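To make "strategies of action" concrete, here is a toy sketch (my own construction, not McCarthy's formalism): actions are partial functions from states to states, and a strategy, i.e. an action program, is found by brute-force forward search over action sequences.

```python
# A two-block world.  A state maps each block to what it rests on; actions
# have preconditions and may fail (return None).  The planner searches for a
# sequence of actions (a strategy) whose result satisfies the goal.

def move(block, dest):
    """Action: put `block` on `dest`, provided both are clear."""
    def act(state):
        on = dict(state)
        if any(support == block for support in on.values()):
            return None                    # something is stacked on `block`
        if dest != "table" and any(s == dest for s in on.values()):
            return None                    # `dest` already has a block on it
        on[block] = dest
        return frozenset(on.items())
    return act

def plan(start, goal, actions, max_steps=3):
    """Breadth-first search for an action sequence reaching the goal."""
    frontier = [(frozenset(start.items()), [])]
    for _ in range(max_steps + 1):
        next_frontier = []
        for state, path in frontier:
            if set(goal.items()) <= state:
                return path
            for name, act in actions.items():
                new = act(state)
                if new is not None:
                    next_frontier.append((new, path + [name]))
        frontier = next_frontier
    return None

# Start: A sits on B, B on the table.  Goal: B on A.
actions = {"A->table": move("A", "table"), "B->A": move("B", "A")}
strategy = plan({"A": "B", "B": "table"}, {"B": "A"}, actions)
```

The found strategy first clears B by moving A to the table, then moves B onto A. Concurrent actors and continuous events, which McCarthy also demands, are well beyond this single-actor, discrete-step toy.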
9. Introspection and self-awareness
People have a limited ability to observe their own mental processes. For many intellectual tasks introspection is irrelevant. However, it is at least relevant for evaluating how one is using one’s own thinking time. Human-level AI will require introspective ability. In fact programs can have more than humans do, because they can examine themselves, both in source and compiled form and also reason about the current values of the variables in the program.
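McCarthy's point that programs can examine themselves in source and compiled form, and reason about the current values of their variables, is trivially demonstrable today. A small illustration (mine, not from the paper) using Python's standard introspection facilities:

```python
# A running function can observe its own local variables via its frame, and
# the program can examine the function's compiled (bytecode) form with `dis`.
import dis
import inspect

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    # the function observes its own live local variables...
    locals_now = inspect.currentframe().f_locals
    return a, sorted(locals_now)

value, local_names = fib(10)

# ...and the program examines fib in compiled form:
opnames = {ins.opname for ins in dis.Bytecode(fib)}
```

Humans have no comparable access to their own "compiled form"; this is one of the few respects in which programs already exceed human introspective ability.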
10. Heuristics
The largest qualitative gap between human performance and computer performance is in the area of heuristics, even though the gap is disguised in many applications by the millions-fold speed advantage of computers. The general purpose theorem proving programs run very slowly, and the special purpose programs are very specialized in their heuristics.
I think the problem lies in our present inability to give programs domain and problem dependent heuristic advice.
McCarthy advocates the usage of declarative heuristics and explains the concept of postponable variables in constraint satisfaction problems.
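A toy graph-coloring CSP (my own example, with a hypothetical `slack` measure, not McCarthy's formalization) can show the spirit of postponable variables: a vertex with more colors available than uncolored neighbors can always be assigned last, so a declaratively stated ordering heuristic defers it and lets the search tackle the critical vertices first.

```python
# Backtracking graph coloring with a declarative variable-ordering heuristic.
# slack(v) = colors still usable for v minus its uncolored neighbors: high
# slack marks a postponable vertex, low slack marks a critical one.

def color_graph(vertices, edges, colors, assignment=None):
    assignment = dict(assignment or {})
    free = [v for v in vertices if v not in assignment]
    if not free:
        return assignment

    def slack(v):
        # Colors of already-assigned neighbors are ruled out for v.
        used = {assignment[u] for (a, b) in edges if v in (a, b)
                for u in (a, b) if u != v and u in assignment}
        open_neighbors = sum(1 for (a, b) in edges if v in (a, b)
                             and (a if a != v else b) not in assignment)
        return (len(colors) - len(used)) - open_neighbors

    v = min(free, key=slack)                 # least slack = hardest first
    for c in colors:
        if all(assignment.get(a if b == v else b) != c
               for (a, b) in edges if v in (a, b)):
            result = color_graph(vertices, edges, colors,
                                 {**assignment, v: c})
            if result is not None:
                return result
    return None

# A triangle a-b-c plus a pendant vertex d: d has three colors but only one
# neighbor, so it is postponable and gets colored last.
solution = color_graph("abcd",
                       [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")],
                       ["red", "green", "blue"])
```

The heuristic is declarative in the sense that it is stated as a property of variables (their slack) rather than baked into the search procedure, which is the direction McCarthy advocates.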
11. Psychological, social and political obstacles
In this article McCarthy states that although the main problems in reaching human-level AI lie in the inherent difficulty of the scientific problems, research is also hampered by the computer science world's focus on connecting basic research to applied problems. Artificial intelligence has encountered philosophical and ideological (religious) objections, but the attacks on AI have been fairly limited.
As the general public gets more and more acquainted with the potential dangers of human-level AI, and especially super-human-level AI, I believe that the pressure against AI research will increase.
An interesting book by Nick Bostrom covering the dangers of AI is Superintelligence: Paths, Dangers, Strategies.
Between us and human-level intelligence lie many problems. They can be summarized as that of succeeding in the common sense informatic situation.
If you want to read more about the road to human-level AI and superintelligence and its possible consequences, then I can recommend this two-part article by Tim Urban on his blog Wait But Why.