Automata Theory, concept that describes how machines mimic human behavior. The theory proposes that human physical functions and behavior can be simulated by a mechanical or computer-controlled device. Applications of automata theory have included imitating human comprehension and reasoning skills using computer programs, duplicating the function of muscles and tendons by hydraulic systems or electric motors, and reproducing sensory organs by electronic sensors such as smoke detectors.
The concept of automata,
or manlike machines, has historically been associated with any self-operating
machine, such as watches or clockwork songbirds driven by tiny springs and
gears. But in the late 20th century, the science of robotics (the development
of computer-controlled devices that move and manipulate objects) largely
supplanted automata as the field concerned with replicating motion based on
human anatomy. Modern
theories of automata currently focus on reproducing human thought patterns and
problem-solving abilities using artificial intelligence and other advanced
computer-science techniques.
The History of Automata Theory
Automata date to ancient
times. During the Han dynasty (3rd century BC),
a mechanical orchestra was constructed for the Chinese Emperor. The
13th-century English philosopher and scientist Roger Bacon is credited with
creating an artificial talking head, and in the 17th century French philosopher
and mathematician René Descartes reportedly built a female automaton as a
traveling companion. In the 18th and 19th centuries, intricate machines were
constructed that imitated some human actions, such as limb movement, but these
devices were little more than sophisticated windup toys.
Mimicry of human mental
abilities did not begin until the advent of electronics and mathematical logic
structures. In the mid-20th century, the British mathematician Alan Turing
designed a theoretical machine to process equations without human direction.
The machine, now known as a Turing machine, in concept resembled an automatic
typewriter that used symbols for mathematics and logic instead of letters. Turing
intended the device to be used as a “universal machine” that could be
programmed to duplicate the function of any other existing machine. Turing's
machine was the theoretical precursor to the modern digital computer.
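The idea can be illustrated with a short simulator. This is only a sketch of the concept, not Turing's original formulation; the machine and its transition table below are invented for illustration (it flips every bit on the tape and halts at the first blank).

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run a simple one-tape Turing machine until it reaches the halt state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        # Each rule maps (state, symbol read) -> (next state, symbol to write, move)
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Illustrative rule set: invert bits left to right, halt on blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("1011", rules))  # -> 0100_
```

Because the behavior lives entirely in the transition table, swapping in a different table yields a different machine from the same simulator, which is the sense in which Turing's device was "universal."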
In the 1940s and 1950s
American researchers Warren McCulloch and Walter Pitts at the Massachusetts
Institute of Technology developed artificial neurons, or neural networks, to
theoretically bridge the structure of the human brain and the
still-to-be-invented modern computer. The human brain contains roughly 100
billion nerve cells, or neurons, each of which is connected to thousands of
other neurons.
This allows neurons to transmit information along different pathways, depending
on the stimulation they receive. McCulloch and Pitts theorized that this same
mode of information transmission, an interconnected network, might be
reproduced with electronic components. Artificial neural networks have since
been shown capable of learning from experience and improving their
computational performance.
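The McCulloch-Pitts model can be sketched in a few lines: a unit fires (outputs 1) when the weighted sum of its inputs reaches a threshold. The weights and threshold below are chosen by hand for illustration; this is the 1943-style threshold unit, not a modern trainable network.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: fire if weighted input meets threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Example: a single unit acting as an AND gate (weights 1, 1; threshold 2).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), (1, 1), 2))
```

Wiring such units together, so that one unit's output feeds another's input, gives the interconnected-network behavior McCulloch and Pitts described.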
In 1956 American social
scientist and Nobel laureate Herbert Simon and American computer scientist
Allen Newell at Carnegie Mellon University in Pennsylvania devised a
program called Logic Theorist that simulated human thinking on computers,
although at first the program was simply written on index cards due to the
scarcity of computers. Newell and Simon later created a modified version of the
program called the General Problem Solver (GPS). The GPS was unique in that it
was programmed to achieve a goal and then find the means to reach that goal—as
opposed to starting from a problem or question and working toward an eventual
solution. The GPS process involved backward thinking, going from the final
solution to the present state. Using computers from the Rand Corporation to
test GPS, Simon and Newell found that in some cases GPS did act as if it were
capable of human reasoning. This was one of the first breakthroughs in the
computer-science field that would later be known as artificial intelligence.
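The goal-first strategy can be sketched as a toy backward-chaining solver: start from the desired goal and recursively look for rules whose conclusions establish it, bottoming out in known facts. The rules and facts here are invented for illustration and are far simpler than GPS's means-ends analysis.

```python
def prove(goal, rules, facts):
    """Work backward from a goal: true if it is a known fact, or if some
    rule concludes it and all of that rule's subgoals can be proved."""
    if goal in facts:
        return True
    for conclusion, subgoals in rules:
        if conclusion == goal and all(prove(g, rules, facts) for g in subgoals):
            return True
    return False

# Hypothetical rule base: each entry maps a conclusion to its subgoals.
rules = [
    ("problem_solved", ["subgoal_a", "subgoal_b"]),
    ("subgoal_a", ["fact_1"]),
    ("subgoal_b", ["fact_2"]),
]
facts = {"fact_1", "fact_2"}
print(prove("problem_solved", rules, facts))  # -> True
```

Note the direction of search: the solver never enumerates consequences of the facts; it asks only what would suffice to reach the goal, mirroring GPS's reasoning from the final solution back to the present state.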
The first artificial intelligence
conference occurred at Dartmouth College in New Hampshire in 1956. This
conference inspired researchers to undertake projects that emulated human
behavior in the areas of reasoning, language comprehension, and communications.
In addition to Newell and Simon, computer scientists and mathematicians Claude
Shannon, Marvin Minsky, and John McCarthy laid the groundwork for creating
“thinking” machines from computers. Contemporary fields of interest resulting
from early artificial intelligence research include expert systems, cellular
automata, and artificial life.