
Epistemics: the knowledge level

"the trouble with 'knowledge' is that we don't know what it is"

Donald Broadbent, pers. comm., c. 1975.

Newell drew extensively on Broadbent's ideas about the organisation of mind, notably the distinction between short-term and long-term memory that Broadbent and his peers did so much to articulate. However, Newell's interpretation was based on computational ideas about the dynamic operation of the cognitive system as well as the functional perspective of different kinds of memory. Furthermore, he believed in the theoretical importance of understanding the content of cognition. Where Broadbent viewed long-term knowledge merely as a "store of conditional probabilities" (figure 2), Newell realised that if…then… rules could represent a far greater range of types of knowledge about an agent's task environment.

Broadbent viewed the problem of mind primarily as an engineering challenge, in which the aim is to understand how the cognitive system works and how well it performs: how fast, how reliably, and why it degrades under stress, for instance (Broadbent, 1971). Newell recognised the need for an engineering diagram and an understanding of the components, but he also understood that if psychology is to have anything to say about individual cognition we need an account of what we know as individuals. Our knowledge, after all, plays the director's role in everything we do. In 1982 he published one of the seminal papers of recent AI, "The knowledge level", which articulated one of the important contributions of AI to cognitive science.

It is now uncontroversial that knowledge can be understood in formal, computational and even mathematical terms, but also that theories of knowledge require different constructs from those needed for understanding physical or biological systems. In this section I give a short overview of current knowledge representation theory. To keep the presentation brief, I use the simple image in figure 5 to explain some of the ideas.


The standard way of talking about knowledge nowadays is to describe it as a collection of expressions in a symbolic language. Such languages can include informal natural language (indeed, I shall use natural language to present my examples here), but work in this area is increasingly formal. From an AI point of view, a formal semantics is necessary if we want to design cognitive systems that can apply knowledge in making inferences, solving problems, taking decisions, enacting plans and so on in a reliable and sound fashion.

A good knowledge modelling language is compositional; we build “sentences” in the language out of simpler phrases and sentences according to clear rules. The starting point on the knowledge ladder is the symbol. Symbols are easy to represent in computers but by themselves have no meaning. Meaning is introduced progressively, by defining relationships between symbols using epistemic conventions. These conventions sanction the composition of symbolic sentences into increasingly complex expressions in the language.
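To make the idea of composition concrete, here is a minimal sketch in Python (all symbol and relation names are invented for illustration): atomic symbols are bare tokens with no meaning of their own, and "sentences" are built from symbols, or recursively from other sentences, according to a simple rule.

def sentence(subject, relation, obj):
    """Compose a sentence from two terms and a relation symbol."""
    return (relation, subject, obj)

# Atomic symbols carry no meaning by themselves...
apple, fruit, tree = "apple", "fruit", "tree"

# ...meaning enters through conventions that sanction composition:
s1 = sentence(apple, "a_kind_of", fruit)   # a classification
s2 = sentence(apple, "grows_on", tree)     # a description
s3 = sentence(s1, "implies", s2)           # sentences compose recursively

print(s1)
print(s3)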



Figure 5. The knowledge ladder

The most primitive epistemic convention is classification, in which a symbol is assigned to some class (of object or other concept) that it is required to represent. In early computational systems the only classifications that people were interested in were mathematical types ("integer", "string", etc.). With the rise of cognitive science, however, a further classification convention emerged. Ontological assignment assigns a symbol to a conceptual category (usually a humanly intelligible category, though there is no reason why it must be). For example, the symbol "apple" can be assigned to the class "fruit" using the relationship symbol "a kind of". Ontological assignment begins the transformation of meaningless symbols into meaningful concepts that can be the object of reasoning and other cognitive operations. One important cognitive operation that is usually taken to be a direct consequence of ontological assignment is epistemic inheritance: if the parent class has some interpretation then all the things that are assigned to this class also inherit that interpretation.
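A small sketch may help here (the classes and interpretations are invented for illustration): ontological assignments form a chain of "a kind of" links, and a symbol inherits the interpretation of every class above it in the chain.

# Toy ontology: each symbol is assigned to a parent class.
a_kind_of = {
    "apple": "fruit",
    "fruit": "plant_product",
}

# Interpretations attached to classes.
interpretation = {
    "fruit": {"edible": True},
    "plant_product": {"organic": True},
}

def inherited_interpretation(symbol):
    """Collect what a symbol inherits by walking up its 'a kind of' links."""
    facts = {}
    while symbol is not None:
        facts = {**interpretation.get(symbol, {}), **facts}  # nearer classes win
        symbol = a_kind_of.get(symbol)
    return facts

print(inherited_interpretation("apple"))  # {'organic': True, 'edible': True}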


Another class of epistemic conventions concerns descriptions, in which concepts are composed into sentences, such as "apples grow on trees" and "apples are larger than grapes", and into relationships between specific instances of concepts, like "this apple is rotten" and "that grape is smaller than this apple". The semantic networks developed by psychologists and AI researchers in the 'sixties and 'seventies were the earliest knowledge representations of this type, but there have been many variants on this basic scheme. In the weakest description systems there are no constraints; anything can enter into any kind of relationship with anything else. In stronger description systems there are more constraints on what is semantically meaningful. In the famous Chomskian sentence "colourless green ideas sleep furiously" the verb "sleep" is normally a relationship that constrains the noun phrase to refer to an object that is animate, not abstract. In a weak epistemic system, such as a language that is only syntactically defined, this sentence is acceptable. In a stronger epistemic system, which imposes semantic constraints on the objects that can enter into particular relationships or on the relationships that can exist between certain kinds of objects, the sentence is unacceptable.
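The difference can be sketched in a few lines of Python (the tiny ontology and constraint are invented for illustration): the weak system accepts any well-formed composition, while the strong system imposes a selectional restriction on the subject of "sleep".

ontology = {"ideas": "abstract", "dogs": "animate"}
constraints = {"sleep": {"subject": "animate"}}  # a selectional restriction

def acceptable(subject, relation, strong=True):
    """Weak system: any composition is legal. Strong system: check constraints."""
    if not strong:
        return True  # being syntactically well-formed is enough
    required = constraints.get(relation, {}).get("subject")
    return required is None or ontology.get(subject) == required

print(acceptable("ideas", "sleep", strong=False))  # True  (weak system)
print(acceptable("ideas", "sleep", strong=True))   # False (semantic violation)
print(acceptable("dogs", "sleep", strong=True))    # True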

Rules can be seen as a special kind of description. Logical implication can be viewed as a relationship between descriptions such that if one set of descriptions is true then so is the other; from the logician's point of view one set of descriptions has the logical role of "premises" and the other the status of "conclusions". If I say "Socrates is a man" and assert that "all men are mortal", then you are entitled (under certain conventions about "all" that we agree on) to conclude that Socrates is mortal. Similarly, if I say that "Flora is an ancestor of Jane" and "Esther is an ancestor of Flora", then you are entitled, under certain interpretations of the relationship "ancestor of", to conclude that "Esther is an ancestor of Jane". Some epistemic systems support higher-order properties such as transitivity of relations, so that if we assert that some relationship, such as "ancestor of", is transitive, then the knowledge system is sanctioned to deduce all valid ancestor relationships.
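As a rough illustration (the names follow the example above), declaring "ancestor of" transitive sanctions computing the transitive closure of the asserted facts, which deduces every valid ancestor relationship:

asserted = {("esther", "flora"), ("flora", "jane")}  # (ancestor, descendant)

def transitive_closure(pairs):
    """Deduce all pairs sanctioned by transitivity of the relation."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                # a ancestor_of b and b ancestor_of d entail a ancestor_of d
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

print(transitive_closure(asserted))
# {('esther', 'flora'), ('flora', 'jane'), ('esther', 'jane')}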


Since the introduction of the earliest knowledge representation systems, many different schemes have been developed which introduce further epistemic conventions and allow them to be combined. For example, "frame" systems exploit the epistemic convention of ontological condensation, in which collections of individual descriptions and rules that share a concept can be merged to represent a complex object. An apple, for instance, is a fruit that can be described as having a certain type of skin, colour, size and so on.
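A frame for "apple" might be sketched like this (the slot names and values are invented for illustration); the separate descriptions that share the concept are condensed into a single structured object with a parent class:

apple_frame = {
    "a_kind_of": "fruit",               # classification
    "slots": {                          # condensed descriptions
        "skin":   "smooth",
        "colour": "red or green",
        "size":   "larger than a grape",
    },
}

def describe(frame):
    """Read the condensed descriptions back as a sentence."""
    details = ", ".join(f"{slot}: {value}"
                        for slot, value in frame["slots"].items())
    return f"a {frame['a_kind_of']} with {details}"

print("An apple is", describe(apple_frame))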

Attempts to build systems that can demonstrate high levels of cognitive functioning in complex domains are leading to more complex and abstract objects, or models. Two important types of model are scenarios and tasks. Scenarios are significant, distinctive situations that occur repeatedly (in the medical domain common scenarios are 'patients with meningitis' or 'being late for clinic'). Tasks are pre-planned procedures that are appropriate in particular situations (such as 'finding out what is wrong with someone with a high temperature' or 'calling a taxi'). Despite their apparent complexity, many such models can be understood as compositions of concepts, descriptions, rules and, recursively, other models.

To take an example from medicine, we have found that a large proportion of the expertise of skilled clinicians can be modelled in terms of a small ontology of tasks: plans, actions and decisions. These models can be composed into complex goal-directed processes carried out over time. The simplest task model has a small set of critical properties, whose values can be formalised using descriptions and rules. These include preconditions (descriptions which must be true for the task to be relevant in a context), postconditions (descriptions of the effects of carrying out the task), trigger conditions (scenarios in which the task is to be invoked) and scheduling constraints that describe sequential aspects of the process (as in finding out what is wrong with a patient before deciding what the treatment should be). Task-based representations of expertise seem to be more natural and powerful than rule-based models in complex worlds like medicine (Fox, 2003).
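A minimal sketch of such a task model, with invented medical conditions standing in for the concrete values, might look like this: preconditions and trigger scenarios gate when a task is relevant, postconditions describe its effects, and scheduling constraints order tasks within a plan.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    preconditions: list = field(default_factory=list)   # must hold for relevance
    postconditions: list = field(default_factory=list)  # effects of the task
    triggers: list = field(default_factory=list)        # scenarios that invoke it
    after: list = field(default_factory=list)           # scheduling constraints

diagnose = Task(
    name="diagnose",
    preconditions=["patient has a high temperature"],
    postconditions=["diagnosis is known"],
    triggers=["patient presents with fever"],
)
treat = Task(
    name="decide treatment",
    preconditions=["diagnosis is known"],
    after=["diagnose"],  # find out what is wrong before deciding the treatment
)

def runnable(task, facts, done):
    """A task may run when its preconditions hold and its predecessors are done."""
    return (all(p in facts for p in task.preconditions)
            and all(t in done for t in task.after))

print(runnable(treat, facts={"diagnosis is known"}, done={"diagnose"}))  # True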


Rules have provided the basic constructs for a number of successful knowledge representation languages for implementing cognitive systems, notably production systems and logic programming languages. Task-based systems now promise to provide powerful formalisms for modelling complex human expertise. This is being extensively investigated in medical AI; a number of these efforts are reviewed by Peleg et al. (2003), and practical tools for this kind of modelling are emerging (see Figure 6).



Figure 6. The PROforma process modelling system (Fox & Das, 2000), which is used for modelling expertise in terms of an ontology of tasks (plans, decisions, actions). A PROforma model is a hierarchy of plans containing decisions, actions and sub-plans (left panel) configured as a network (right panel; arrows indicate scheduling constraints). Rectangles = plans; circles = decisions; squares = actions; diamonds = enquiries (actions that return information). PROforma models are automatically processed into a description language, which can be "enacted" by a suitable interpreter (e.g. Sutton and Fox, 2003).

In recent years many knowledge representation systems, from semantic networks and frame systems to rule-based systems such as production systems and logic programming languages, have come to be seen as members of a family of knowledge representation systems called Description Logics (Baader et al., 2003). The early ideas about knowledge representation developed by psychologists, for example Quillian (1968), stimulated many experimental investigations of the adequacy of particular languages. These have in turn given way to more formal theories of knowledge representation. The work of Brachman and his colleagues (1979, 1984; Nardi & Brachman, 2003) has been central in this development and has led to a much deeper understanding of the requirements on effective knowledge representation systems, and of their computational and other properties.


Most of the work on formal representation is being carried out in AI rather than psychology, and I have been asked whether this implies a rejection of connectionist or "sub-symbolic" theories of knowledge, which are influential in psychology and neuroscience. I believe it does not. Whether or not one believes the human brain/mind system is "really" symbolic, we must use symbolic formalisms if we are to develop any theory in this field. (We describe the acceleration of an object falling in a gravitational field in a symbolic mathematical language, but we do not conclude from this that the object is therefore carrying out some sort of symbolic computation!)

This brief presentation of basic ideas in knowledge representation does not do justice to the huge amount of work done in the last three decades. Recent developments in non-classical and higher-order logics, for example, may yield important insights into epistemic properties of minds, both natural and artificial. Though such "exotic" systems face significant technical challenges, they may offer enormous computational power for cognitive systems in the future. We are certainly approaching a time when we can reasonably claim that we "know what knowledge is" and that we can predict from basic principles the properties that different knowledge representation systems will have. Description logics will provide some of the theory that will be needed to understand, and design, different kinds of minds.


