Article: 2 of 6 

ARTNO 0021200-0

ARTICLE automata, theory of

TEXT {awt-oh-mat'-uh}

The theory of automata deals with the fundamental behavioral principles of automatic machines. It provides the theoretical framework for automated processes in which sensors and detectors replace human sense organs, actuators powered by electric motors or hydraulic forces replace human muscles, and microprocessors replace the human brain. Automata theory was once largely an abstract branch of mathematics, an attempt to define the behavior of machines or systems in terms of data inputs and outputs. Specific areas of interest include computing machine theory, language or grammar theory, logic theory, and ARTIFICIAL INTELLIGENCE. Recently, automata theory has been implemented in practical computer-based operations that mimic human processes.
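The input-output view can be made concrete with a small sketch. The following Python fragment is a hypothetical illustration, not drawn from the article itself: it defines a finite automaton whose entire behavior is a table mapping the current state and the input symbol to a next state, here tracking whether an even or odd number of 1s has been read.

# A minimal finite-automaton sketch (hypothetical example): the machine's
# behavior is defined entirely by how each input symbol moves it between states.
TRANSITIONS = {
    ("even", "1"): "odd",   # reading a 1 flips the parity
    ("even", "0"): "even",
    ("odd", "1"): "even",
    ("odd", "0"): "odd",
}

def run(inputs, state="even"):
    """Feed a string of 0s and 1s to the automaton and report the final state."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run("10110"))  # -> "odd" (an odd number of 1s was read)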

In 1936, Alan TURING, an English mathematician, systematized automata theory by developing, on paper, a model for the digital computer. Turing proved that his "universal computer" could solve virtually any mathematical or logical problem that could be formulated in a logically consistent manner. He also showed that certain types of mathematical problems were beyond its powers.

In the 1940s, Warren McCulloch and Walter Pitts, of the Massachusetts Institute of Technology (MIT), developed the theory of neural net automata. Essentially an abstract model of the human nervous system, the theory describes a system of neurons that, when stimulated above a certain threshold, will fire and transmit an impulse along connecting fibers. Neural net automata theory has been used in a variety of automata applications.
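The threshold behavior described above can be sketched in a few lines of Python. The weights and threshold below are illustrative assumptions, not values from McCulloch and Pitts's work; the point is only that the unit fires when its combined stimulation reaches the threshold.

# Sketch of a McCulloch-Pitts-style threshold neuron (the weights and threshold
# are illustrative): the unit "fires" (outputs 1) only when its weighted
# stimulation reaches the threshold.
def neuron(inputs, weights, threshold):
    stimulation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if stimulation >= threshold else 0

# Two excitatory connections; the neuron fires only if both are active.
print(neuron([1, 1], [1, 1], threshold=2))  # -> 1 (fires)
print(neuron([1, 0], [1, 1], threshold=2))  # -> 0 (does not fire)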

The 1950s brought a surge of enthusiasm for automata theory. American linguist Noam CHOMSKY showed that automated language analysis could be performed on both human language and programming languages. John McCarthy of MIT coined the term artificial intelligence and defined it as "the science of making computers do things which if done by men would require intelligence." This clever definition sidestepped a troubling issue: whether or not computers can actually think. Claude Elwood SHANNON, also of MIT, outlined methods whereby a machine could be programmed to play winning chess and learn from its experiences. Additional advances were made by the researcher Herbert A. SIMON, who developed the "General Problem Solver," a program that was able to prove a well-known theorem in a way that was simpler than traditional methods, and by Terry Winograd, who developed a program that could understand both verbal language and complex geometric manipulations.

Automata theorists have struggled to learn how machines might be made to see, to hear, to learn from experience, and to think. Critics have accused researchers in the field of producing few results, and some have predicted that machines will never be able to mimic real human thinking. Nevertheless, automatic machines that perform routine tasks have become commonplace in industry.

Active areas in modern automata theory include studies in expert systems and man-machine interactions (see CYBERNETICS). Expert systems are computer programs that to some extent encapsulate the expertise of highly trained professionals. Caduceus, for example, is a knowledge-based system that diagnoses blood diseases from 1,000 programmed rules, and Prospector is used to locate mineral deposits using geological data supplied by its programmers.
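The rule-based character of such systems can be illustrated with a toy Python sketch. The rules below are invented for illustration and bear no relation to the actual Caduceus or Prospector knowledge bases; each rule fires when all of its conditions appear among the observed findings.

# A toy rule-based diagnosis sketch (hypothetical rules, not the Caduceus
# knowledge base): a rule fires when every one of its conditions is observed.
RULES = [
    ({"fatigue", "pale skin"}, "possible anemia"),
    ({"fever", "low platelet count"}, "possible infection"),
]

def diagnose(findings):
    """Return every conclusion whose conditions are all satisfied."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]

print(diagnose({"fatigue", "pale skin", "fever"}))  # -> ['possible anemia']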

One possibly rich field for basic research is cellular automata. First elucidated by mathematicians John VON NEUMANN and Stanislaw Ulam in the early 1950s, cellular automata are made up of mathematical "cells"--squares on a chessboard offer a good analogy--that change their value, or state, according to simple rules, or ALGORITHMS. The state of each "cell" is affected by the states of neighboring cells. Researchers in this field have used computers to model such natural processes as snowflake formation. Some automata theorists believe it possible to simulate complex physical systems, including the human nervous system, through cellular automata.
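A one-dimensional example shows how simple the rules can be. In the Python sketch below, an illustrative choice of rule rather than one tied to any particular study, each cell's next state is the exclusive-or of its two neighbors, which grows a Sierpinski-triangle pattern from a single live cell.

# A one-dimensional cellular-automaton sketch: each cell holds 0 or 1, and its
# next state depends only on its two neighbors (here, their exclusive-or).
def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 15
row[7] = 1  # a single "live" cell in the middle
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)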

Advances in applied areas, such as VOICE RECOGNITION and machine learning, have been frustrated by the slow speeds of present-day computers. Higher speeds are needed to handle the data involved in such processes. The creation of a fully autonomous land vehicle that can avoid obstacles and move safely at freeway speeds, for example, will require computer processing speeds 100 to 10,000 times greater than the maximum speeds now possible. PARALLEL PROCESSING techniques that mimic the processes of the human brain, and newer generations of smaller, faster microelectronics, will be needed to attain such goals. DONNA AND TOM LOGSDON

Bibliography: Arbib, Michael, Computers and the Cybernetic Society, 2d ed. (1984); Logsdon, Tom, Computers Today and Tomorrow (1985); McCorduck, Pamela, Machines Who Think (1981).

Copyright notice:

Copyright by Grolier Electronic Publishing, Inc.

---- End of Article ----