Beyond Simple Agents - The INM-Computer Kindergarten as a Testbed of the Future

One day people will want to have them: computer programs which have feelings, which can learn, understand, and speak any natural language, and which have a will of their own. In Frankfurt (Germany), the Knowbotic Interface Group is developing and testing such programs by running a computer kindergarten. Compared with the usual Artificial Intelligence or Artificial Life paradigms, the key concept of the Knowbotic Interface Group is 'consciousness', understood as an interface to body structures. This allows the modelling of processes which have been impossible to model before.

In the following I wish to present to you a software agent project which is quite different from what is known today as a software agent. My intention is to give you the main arguments needed to understand our approach, followed by a short overview of what we have done so far.

Beyond simple agents?

Today we are fascinated by the first agents which are able to do some simple but nevertheless useful jobs for us (e.g. filtering email, finding and retrieving information on the internet, scheduling, diary management, mobility management, workflow management, network management and control, buying and selling services, negotiation, price discovery, manufacturing planning and control, process control, traffic control, ...). And, for sure, if these agents prove practical for most normal users, we will have a growing demand for such agents. An interesting question for the future is how far we can extend the capabilities of software agents, especially with regard to their 'human-likeness': can we seriously expect to have, in the near future, software agents with which we can communicate as with a human person? Will we have software agents which are driven by emotions as well as by various types of logic? Will these software agents be able to learn any natural language the way children do? Will they have a will of their own? The kind of answer we can give to these questions will depend on the strategy with which we attack the problem.

The key problem to be solved in the case of a human-like agent is the problem of language. The main component of language is what we call 'meaning'.

This meaning is interrelated with nearly all aspects of the conscious worldview of a human person, not to forget the different learning processes which generate that worldview.

Thus if you find a way to cope with the meaning of natural language, you will also have a way to deal with nearly all aspects of conscious knowledge.

The problem of language meaning is so fundamental for the construction of human-like software agents that I classify agents in a twofold way: there are (i) SIMPLE AGENTS, which cannot handle natural language, and (ii) there are HUMAN-LIKE AGENTS, which can.

A useful strategy I - consciousness

Now, let's have a look at a strategy for attacking the problem of building human-like agents. Clearly we will need a plan for how to build such an agent, some kind of description; in the ideal case, a formal scientific theory. And together with such a formal theory we need a domain of useful facts. Where should these facts come from?

The most direct way would be to work in a behavior-oriented fashion, looking at a human organism, collecting responses, and trying to infer the stimuli which trigger these responses. But this will not help us, as CHOMSKY (1959) demonstrated in the case of SKINNER's 'Verbal Behavior'. Most features of the consciousness-related aspects of meaning are located in the inner structures and processes of the organism. An empirically minded researcher has almost no chance of inferring these from stimulus-response data sets alone.

You can try to improve your data set by also taking into account the physiological, especially the neurophysiological, structures and processes of the organism. Correlating stimulus-response patterns with neurological findings improves our knowledge of possible internal processes remarkably.

But I have to state here that this is still not enough! There is one main reason why not: none of the behavioral or neuronal data have any direct relationship to our conscious experiences.

If we want to 'explain' conscious experience by behavioral and/or neurological data, we can only try to correlate the findings of these domains with the findings of our consciously known experiences.

This presupposes that we have useful descriptions of our conscious experiences which we can relate to the behavioral and physiological data.

An explicit description of the structure of our consciousness is therefore a precondition for making more general statements about the relationship between overt and neuronal processes on the one hand and conscious experiences and activities on the other.

Thus if you take your conscious experience as your point of view, you can get all the data you need for the reconstruction of language meaning; you will not need any behavioral or neurological data to set up your theoretical model. You are free to use neurological data if you want. But why should you? There is no need for a software agent to be a 1-to-1 copy of the biological system of humans. It is sufficient that an agent mimics the general structures and functions that characterize the human consciousness, together with the meaning structures which are closely related to them.

With such a consciousness-based approach (in philosophy this is termed a phenomenological approach) you have all the data you need to set up a formal theory which you can use as a plan to build software agents which mimic a human consciousness and, as part of that consciousness, language meaning.

A useful strategy II - learning

This description is still lacking one important feature: 'learning'! Every human person is bound to learn all the time, especially a child faced with a complex, dynamic environment. Thus if we want our software agents to act in a human-like way, they must be able to learn like humans do.

It was the famous Alan Mathison TURING who, in 'Intelligent Machinery' (1948) and in 'Computing Machinery and Intelligence' (1950), already claimed in a visionary way that computers should be educated and trained like children if we want them to behave like humans do.

A computer kindergarten as a knowbotic interface

These two principles, a data set based on our consciousness and a training of agents like children, constitute the framework of our research at the Institute for New Media (INM) in Frankfurt (Germany).

In the spirit of TURING we have designed a computer-kindergarten called a knowbotic interface.

A computer-kindergarten realized as a knowbotic interface contains at least the following three elements: (i) an artificial environment called the 'world'; (ii) a human-like agent called a 'knowbot'; (iii) a representative of the human user, called a 'pseudo-knowbot', which appears in the world like a knowbot but acts as a teacher.
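These three elements can be sketched as a minimal object model. The sketch is purely illustrative; the class and method names (Inhabitant, World, Knowbot, PseudoKnowbot, enter, population) are my own placeholders, not taken from the INM code base.

```java
import java.util.ArrayList;
import java.util.List;

// (i)-(iii) as a minimal object model; all identifiers are hypothetical.
interface Inhabitant {
    String name();                          // anything that can appear in the world
}

class Knowbot implements Inhabitant {       // (ii) the human-like agent
    private final String name;
    Knowbot(String name) { this.name = name; }
    public String name() { return name; }
}

class PseudoKnowbot implements Inhabitant { // (iii) the user's in-world representative,
    private final String userName;          //   looking like a knowbot but acting as a teacher
    PseudoKnowbot(String userName) { this.userName = userName; }
    public String name() { return userName; }
}

class World {                               // (i) the artificial environment
    private final List<Inhabitant> inhabitants = new ArrayList<>();
    void enter(Inhabitant i) { inhabitants.add(i); }
    int population() { return inhabitants.size(); }
}
```

A session would then amount to creating one World, letting a Knowbot enter it, and letting the user's PseudoKnowbot enter it as well.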

A knowbotic interface can be operated as a stand-alone program or within a network. In the networked version, every knowbotic interface knows about all the other participants, and together they simulate a common world in which every participant can interact with all the others.

A network of connected knowbotic interfaces is more than a pure chat channel: you can not only speak with one another, but also see the others acting, smell them, touch all objects with specific touch experiences, and taste things you put into your mouth. This is possible because the objects of the world are always specified with regard to all human sensory channels. Besides this, your actions have distinct mechanical effects on your environment. If, for example, you touch a knowbot, the knowbot will sense the strength of your touch and can also locate the region on the surface of his or her body where you touched.
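A touch interaction of the kind just described can be sketched as a simple event record. The names and the 0-to-1 strength scale are assumptions made for illustration, not the project's actual encoding.

```java
// Hypothetical touch event: a touch carries a strength and a location
// on the surface of the touched body, as described in the text.
class TouchEvent {
    final double strength;     // assumed scale: 0.0 (barely felt) .. 1.0 (strong push)
    final String bodyRegion;   // region on the body surface, e.g. "left shoulder"

    TouchEvent(double strength, String bodyRegion) {
        this.strength = strength;
        this.bodyRegion = bodyRegion;
    }

    // What the touched knowbot could report about the sensation.
    String describe() {
        return (strength > 0.5 ? "strong" : "gentle") + " touch on " + bodyRegion;
    }
}
```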

What a Knowbot is / could be

A Knowbot is a computer program which is based on a formal philosophical theory of the human consciousness. A Knowbot can therefore generally show no more features of consciousness than a philosophical theory can reveal. But it will be stronger than any purely empirical theory. Knowbots of this type I will call normal knowbots.

Because any formal theory can be changed at will in any possible formal direction, one can develop new, fictional types of conscious structures which are different from the ones known from humans. This could be a very fascinating domain of research. These knowbots I will call fictional knowbots.

According to our knowledge of humans, we have to postulate that a normal Knowbot must be able to acquire knowledge of its environment as well as of any natural language, like a child. He must have drives and emotions, he must develop social attitudes, and he must be able to communicate with respect to his individual goals and the actual situation.

Normal Knowbots will therefore have a perception-like structure providing sensory information about the environment; they will have some kind of situation representation and a memory-like structure, which in particular contains a model of the 'self' of the knowbot as well as different models of 'others'. Further, we have to assume proprioceptive data signaling states similar to human body states, and a planning capability to simulate possible continuations 'in the future'. In close relationship to the planning ability we have to presuppose some abilities to act. Finally, we assume a language capability to acquire, develop, and apply linguistic structures with respect to all 'internal' structures.
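The component list above can be summarized as a set of interfaces a normal Knowbot would have to implement. Every identifier here is a hypothetical placeholder for the structures described, not the actual INM design; the EchoPlanner stub only shows how one piece would plug in.

```java
// Hypothetical decomposition of a normal Knowbot into the components
// named in the text; all identifiers are placeholders.
interface Perception { double[] sense(); }               // sensory information about the environment
interface SituationModel { void update(double[] p); }    // representation of the current situation
interface Memory { void store(String fact); }            // holds the 'self' model and models of 'others'
interface Proprioception { double bodyState(String s); } // internal states similar to human body states
interface Planner { String simulate(String goal); }      // possible continuations 'in the future'
interface Actuator { void act(String action); }          // abilities to act
interface Language { String utter(String intent); }      // acquire, develop, apply linguistic structures

// A trivial planner stub, just to show how a component would be realized.
class EchoPlanner implements Planner {
    public String simulate(String goal) { return "imagined: " + goal; }
}
```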

What we have done so far

At the Ars Electronica Festival in June last year and during the Telepolis Festival in Luxemburg in November, we presented a first version of a knowbotic interface. It was realized on NeXTStep, coded in Objective-C, and demonstrated knowbots which were able to learn and handle one-word sentences.

Based on the experience with the first version, we started the development of a second version in November. This version will be written completely in Java. The knowbots of this second version will be able to learn and handle two- to three-word sentences, they will be able to distinguish simple speech acts like 'questions', 'answers', 'commands', and 'simple assertions', and they will be able to identify individual objects.
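For two- to three-word utterances, the four speech acts could in principle be distinguished by very simple surface cues. The following classifier is purely illustrative; the project's actual method is not described here, and the cues and example utterances are my assumptions.

```java
// Hypothetical surface-cue classifier for the four simple speech acts.
class SpeechActs {
    static String classify(String utterance) {
        String u = utterance.trim().toLowerCase();
        if (u.endsWith("?")) return "question";            // "ball red?"
        if (u.endsWith("!")) return "command";             // "take ball!"
        if (u.startsWith("yes") || u.startsWith("no"))
            return "answer";                               // "yes", "no ball"
        return "simple assertion";                         // "ball red"
    }
}
```

A real learner would of course have to acquire such cues from teaching interactions rather than have them hard-coded.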

A first simple test version, still without the network capability, has been available since July 29, 1996. A first test version with the full network capability will be available during autumn 1996. We expect the final version in September 1998.

The Institute for New Media is a non-profit institution and is open to cooperation with anyone who is interested in the project. If you want to stay in contact with the project, watch this page.

Possible Applications

Well, the main target of the Knowbotic Interface Project is research centered on human consciousness (and possible alternatives!) in the realm of computer-aided Philosophy (CAP). This is quite a new research paradigm in its own right, which will last for many years. Besides this necessary research, many applications are already conceivable which can become valuable products in the market. Here I can give only a small set of examples from a nearly infinite list of possible applications: the Knowbotic Interface without Knowbots (KInt0) can be a platform for advanced chat channels, which can use product placement in the environment as a way of commercially efficient advertisement.

A KInt0 can also be used by insurance companies and banks as a diagnosis and test tool for the behavior of clients and agents. You set up a dedicated environment and then observe how the person in question behaves. If you add real Knowbots, you can set up quite demanding test environments.

A KInt0 is also an ideal platform for the development and realization of diverse types of plan-game environments (Planspiel-Umgebungen). Plan-game environments are a very efficient and proven tool for many realistic learning tasks where complex environments and groups are involved. Adding Knowbots would improve such learning environments even further.

A complete Knowbotic Interface with a first generation of speaking Knowbots (KInt1) could surely be a valuable tool in the realm of edutainment, either as a test environment for real social agents in the social sciences or as a learning environment for the exploration of different social aspects, including the internal structures of social agents.

A KInt1 could also solve many unsolved problems of today's Interactive Television discussion without any expensive investments! With KInt1 and the WWW you have an existing infrastructure for very rich multimedia and interactive environments, which you could 'feed' into the main TV channels as is. No special decoder, no expensive cables; you would only use what is already there.

A KInt1 can also be used as an environment for interactive computer games enhanced with intelligent, emotional characters, the knowbots. This market is already very hot and can become one of the biggest and fastest growing markets in the future.

(This is a revised form of a speech given at the Developers Day of the 5th International WWW Conference in Paris, May 10, 1996.)

Bibliography

For all references made in the article, look to the Memography. There you will find a selection of books and articles used in the Knowbotic Interface Project.

Gerd Döben-Henisch. Comments are welcome.

Institut für neue Medien



Daimlerstrasse 32

60314 Frankfurt am Main, Deutschland.

Tel +49-(0)69-941963-10

Fax: +49- (0)69-941963-22 (Gerd Döben-Henisch)