Discussion

The aim of this chapter has been to review a variety of images of mind, from the structure and operation of cognitive systems to ways of visualising knowledge and mental states. We cannot understand the mind without being able to discuss the knowledge that enables it to interpret its environment, solve problems and plan actions; the states it moves through in carrying out these tasks; and the control processes that ensure the whole shebang works properly to achieve its goals.

We need to bring together a number of types of account if we are to develop a general theory of mind. As in many sciences, from physics to biology, we need one kind of theory to understand static structures and another to shed light on dynamic behaviour. But the cognitive sciences need more. Cognitive agents have to be able to reflect on their knowledge and beliefs, make decisions and schedule actions in response, anticipate the possible consequences of their actions and plan for contingencies. For this we need at least two types of theory that we do not need in the physical and biological sciences: epistemic theories to understand the nature of knowledge and what makes a good representation, and pathic theories to understand what Dennett would call the intentional aspects of cognition.

Even when we have reasonably complete accounts of all these levels there will, of course, still be much to do. Many psychologists who are interested in explaining the foundations of mental states will in the end wish to ground their theories in neurology and biology; Broadbent (1958) made comments to that effect. With the rise of cognitive neuropsychology the time may be coming when we can make a reasonable fist of mapping down from an understanding of the functional architecture of the mind to the structural architecture of the brain. For example, the work of John Anderson (http://act-r.psy.cmu.edu/) and of Marcel Just, Pat Carpenter and others (http://www.ccbi.cmu.edu/) is showing promising progress on how the kinds of production rule system that Newell introduced may be implemented in a way that is consistent with what we know about the static organisation and dynamic operation of the human brain, as revealed by physical imaging techniques such as PET and functional MRI.
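
To give a flavour of what this involves, the following sketch shows the recognize-act cycle that production rule systems of this kind share: match rule conditions against working memory, fire a matching rule, and repeat until no rule applies. The rules and memory contents are invented for illustration; this is not ACT-R's actual machinery.

    # Toy production system: facts are tuples in a working-memory set.
    working_memory = {("goal", "make-tea"), ("have", "kettle")}

    # Each production: (name, condition facts, facts its action adds).
    productions = [
        ("boil-water", {("goal", "make-tea"), ("have", "kettle")},
         {("kettle", "boiling")}),
        ("brew", {("kettle", "boiling")}, {("tea", "brewed")}),
    ]

    def recognize_act(memory, rules, max_cycles=10):
        """Fire the first rule whose conditions all hold and whose action
        would add something new; stop at quiescence or the cycle limit."""
        for _ in range(max_cycles):
            for name, conditions, additions in rules:
                if conditions <= memory and not additions <= memory:
                    memory |= additions
                    print("fired:", name)
                    break               # conflict resolution: first match wins
            else:
                break                   # quiescence: no rule fired
        return memory

    recognize_act(working_memory, productions)  # fires boil-water, then brew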


What we would really like, of course, is a single theory that encompasses all these levels of description in one unified framework. That was what Newell was after, and it is probably still asking too much, but one attempt along these lines may point us in a promising direction. This is a piece of work by David Glasspool, reported in another chapter in this volume. Glasspool's interests are in the convergence of theoretical ideas from cognitive neuropsychology and AI, particularly the requirements that natural and artificial agents must both satisfy if they are to deal successfully with complex and unpredictable environments. His chapter explores a number of similarities between ideas in cognitive neuroscience and in agent research. He notes, for example, that natural cognitive processes and agent systems both require a combination of reactive control, to respond to unpredictable events in their environments, and deliberative control, to ensure continuity of purpose and behaviour over time.
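
A minimal sketch of this reactive/deliberative combination, using entirely hypothetical class and method names (this is an illustration, not Glasspool's implementation), might look as follows: reactive rules pre-empt the ongoing plan when unpredictable events occur, while the plan itself provides continuity of purpose.

    # Illustrative sketch only: all names are invented for this example.
    class HybridAgent:
        def __init__(self, plan):
            self.plan = list(plan)      # deliberative layer: ordered plan steps
            self.reactive_rules = {     # reactive layer: event -> response
                "obstacle": "swerve",
                "alarm": "stop",
            }

        def step(self, percept):
            # Reactive control: unpredictable events pre-empt the plan.
            if percept in self.reactive_rules:
                return self.reactive_rules[percept]
            # Deliberative control: otherwise continue the current plan,
            # giving behaviour continuity of purpose over time.
            return self.plan.pop(0) if self.plan else "idle"

    agent = HybridAgent(plan=["leave-base", "cross-field", "collect-sample"])
    for percept in [None, "obstacle", None, None, None]:
        print(agent.step(percept))      # leave-base, swerve, cross-field, ...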

Glasspool's model of high-level cognition is based on Shallice's (1988) account of human executive (frontal lobe) function and uses established concepts from cognitive psychology and neuropsychology, brought together in a variant of the domino model. The static architecture of this part of the human cognitive system is modelled as a COGENT box-and-arrow diagram. Working memory for "beliefs" and another temporary store for plans ("schemas") are modelled as short-term storage buffers in COGENT, while the dynamic aspects of the system are implemented by several distinct processing components. Low-level reactive operations are simulated using production rules, and high-level deliberative control is based on transitions between decision and plan states, under the influence of belief and goal states stored in appropriate memory buffers. An intermediate mechanism that controls the selection of actions and plans implements contention scheduling (Norman & Shallice, 1986; Shallice, 2002). Glasspool's demonstration suggests that a theory of cognitive functioning that unifies several very different views of mind may be within reach.
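
The following toy sketch illustrates the core idea of contention scheduling in the spirit of Norman and Shallice (1986), with invented schemas and a deliberately crude activation measure (this is not Glasspool's or COGENT's implementation): candidate action schemas are activated by the current belief and goal states, and the most strongly activated schema is selected to control behaviour.

    # Current contents of the belief and goal buffers (invented examples).
    beliefs = {"kettle-boiling", "cup-empty"}
    goals = {"drink-tea"}

    # Each candidate schema lists the belief/goal states that excite it.
    schemas = {
        "pour-water": {"kettle-boiling", "cup-empty", "drink-tea"},
        "answer-phone": {"phone-ringing"},
        "stir-tea": {"drink-tea"},
    }

    def contention_schedule(schemas, beliefs, goals):
        """Select the schema whose triggering conditions best match the
        current belief and goal states (highest activation wins)."""
        context = beliefs | goals
        activation = {name: len(triggers & context)
                      for name, triggers in schemas.items()}
        return max(activation, key=activation.get)

    print(contention_schedule(schemas, beliefs, goals))  # -> pour-water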

Conclusion


Boxes and arrows, ladders, dominoes and the other images of mind discussed in this chapter are just a few of many possible ways of visualising cognition. The future will no doubt bring more. These images give important insights into the nature of cognition, but incomplete ones. A general theory of mind should unify these different perspectives. That task, however, demands more than images and metaphors. Only if we can translate our partial theories into a common, formal framework will we be able to achieve a unified theory that is consistent with the traditional physical account while explaining intentional concepts like belief, goal and commitment, and perhaps even ruach, leb and nepesh.

Endnotes


  1. Paul MacDonald, personal communication

  2. There are reasons to consider this simplistic today, as we shall see in the next section, but it is a good scientific strategy to work with a simple hypothesis until there is a compelling reason to complicate things.

  3. COGENT can be downloaded from http://cogent.psyc.bbk.ac.uk/

  4. This pedagogical device cannot do justice to the great body of research in this field but, as with the rest of this discussion, my purpose is not to offer a thorough review of the subject but to give a flavour of what is going on.

  5. "ascribing human feelings to a god or inanimate object"

  6. For this reason I prefer not to make an anthropopathic attribution to BDI agents, and prefer simply to refer to this kind of theory as "pathic" (as in empathic, telepathic and psychopathic, but not homeopathic!).

References


Anderson, J.R. & Lebiere, C. (1998) The atomic components of thought. Mahwah, NJ: Erlbaum.

Baader, F., Calvanese, D., McGuinness, D., Nardi, D. & Patel-Schneider, P. (eds) (2003) The Description Logic Handbook: Theory, Implementation and Applications. Cambridge: Cambridge University Press.


Brachman, R.J. (1979) On the epistemological status of semantic networks. In N.V. Findler (ed) Associative Networks, 3–50. New York: Academic Press.

Brachman, R.J. & Levesque, H.J. (1984) The tractability of subsumption in frame-based description languages. Proc. 4th Nat. Conf. on Artificial Intelligence (AAAI-84), 34–37.

Broadbent, D.E. (1958) Perception and Communication. Pergamon Press.

Broadbent, D.E. (1971) Decision and Stress. London: Academic Press.

Chambers, W&R (eds) (1884) Chambers' Information for the People, 5th Edition, London: W&R Chambers.

Cohen, P.R. & Levesque, H.J. (1987) Persistence, intention and commitment. In M.P. Georgeff & A.L. Lansky (eds) Proceedings of the 1986 Workshop on Reasoning about Actions and Plans, 297–340. San Mateo, CA: Morgan Kaufmann.

Cooper, R. & Fox, J. (1998) COGENT: A visual design environment for cognitive modeling. Behavior Research Methods, Instruments and Computers, 30 (4), 553–564.

Cooper, R.P. (2002) Modeling high-level cognitive processes, Mahwah, New Jersey: Lawrence Erlbaum.

Cooper, R. & Shallice, T. (1995) SOAR and the case for unified theories of cognition. Cognition, 55, 115–149.

Cooper, R., Fox, J., Farringdon, J. & Shallice, T. (1996) A systematic methodology for cognitive modeling. Artificial Intelligence, 85, 3–44.

Das, S., Fox, J., Elsdon, D. & Hammond, P. (1997) A flexible architecture for a general intelligent agent. Journal of Experimental and Theoretical Artificial Intelligence, 9, 407–440.

Dennett, D.C. (1996) Kinds of Minds: Towards an understanding of consciousness. London: Weidenfeld and Nicolson.

Forgy, C. & McDermott, J. (1977) OPS, a domain independent production system language. Proc. 5th Int. Joint Conf. on Artificial Intelligence, Cambridge, Massachusetts, 933–939.


Fox, J. (2003) Logic, probability and the cognitive foundations of rational belief. Journal of Applied Logic, 1, 197-224.

Fox, J., Beveridge, M. & Glasspool, D. (2003) Understanding intelligent agents: analysis and synthesis. Artificial Intelligence Communications, 16 (3).

Fox, J. & Das, S. (2000) Safe and Sound: Artificial Intelligence in Hazardous Applications, AAAI and MIT Press.

Fox, J. & Parsons, S. (1998) Arguing about beliefs and actions. In A. Hunter & S. Parsons (eds) Applications of Uncertainty Formalisms. Berlin: Springer.