
Artificial Intelligence - History, Approaches, Fundamental System Issues, Knowledge Representation, Reasoning and Searching, Learning, Applications, Robotics, Computer Vision


Artificial intelligence (AI) is a scientific field whose goal is to understand intelligent thought processes and behavior and to develop methods for building computer systems that act as if they are “thinking” and can learn for themselves. Although the study of intelligence is the subject of other disciplines such as philosophy, physiology, psychology, and neuroscience, people in those disciplines have begun to work with computational scientists to build intelligent machines. Computers offer a vehicle for testing theories of intelligence, and such testing in turn enables further exploration and understanding of the concept of intelligence.

The growing information needs of the electronic age require sophisticated mechanisms for information processing. As Richard Forsyth and Roy Rada (1986) point out, AI can enhance information processing applications by enabling the computer systems to store and represent knowledge, to apply that knowledge in problem solving through reasoning mechanisms, and finally to acquire new knowledge through learning.

History

The origin of AI can be traced to the end of World War II, when people started using computers to solve nonnumerical problems. The first attempt to create intelligent machines was made by Warren McCulloch and Walter Pitts in 1943, when they proposed a model of artificial networked neurons and claimed that properly defined networks could learn, thus laying the foundation for neural networks.

In 1950, Alan Turing published “Computing Machinery and Intelligence,” in which he explored the question of whether machines can think. He also proposed the Turing Test as an operational measure of intelligence for computers. The test requires that a human observer interrogate (i.e., interact with) a computer and a human through a Teletype. Both the computer and the human try to persuade the observer that she or he is interacting with a human at the other end of the line. The computer is considered intelligent if the observer cannot tell the difference between the computer’s responses and the human’s responses.

In 1956, John McCarthy coined the term “artificial intelligence” at a conference where the participants were researchers interested in machine intelligence. The goal of the conference was to explore whether intelligence can be precisely defined and specified in order for a computer system to simulate it. In 1958, McCarthy also invented LISP, a high-level AI programming language that continues to be used in AI programs. Other languages used for writing AI programs include Prolog, C, and Java.

Approaches

Stuart Russell and Peter Norvig (1995) have identified the following four approaches to the goals of AI: (1) computer systems that act like humans, (2) programs that simulate the human mind, (3) knowledge representation and mechanistic reasoning, and (4) intelligent or rational agent design. The first two approaches focus on studying humans and how they solve problems, while the latter two approaches focus on studying real-world problems and developing rational solutions regardless of how a human would solve the same problems.

Programming a computer to act like a human is a difficult task and requires that the computer system be able to understand and process commands in natural language, store knowledge, retrieve and process that knowledge in order to derive conclusions and make decisions, learn to adapt to new situations, perceive objects through computer vision, and have robotic capabilities to move and manipulate objects. Although this approach was inspired by the Turing Test, most programs have been developed with the goal of enabling computers to interact with humans in a natural way rather than passing the Turing Test.

Some researchers focus instead on developing programs that simulate the way in which the human mind works on problem-solving tasks. Among the first attempts to imitate human thinking were the Logic Theorist and General Problem Solver programs developed by Allen Newell and Herbert Simon. Their main interest was in simulating human thinking rather than in solving problems correctly. Cognitive science is the interdisciplinary field that studies the human mind and intelligence. The basic premise of cognitive science is that the mind uses representations that are similar to computer data structures and computational procedures that are similar to the computer algorithms that operate on those structures.

Other researchers focus on developing programs that use logical notation to represent a problem and use formal reasoning to solve a problem. This is called the “logicist approach” to developing intelligent systems. Such programs require huge computational resources to create vast knowledge bases and to perform complex reasoning algorithms. Researchers continue to debate whether this strategy will lead to computer problem solving at the level of human intelligence.

Still other researchers focus on the development of “intelligent agents” within computer systems. Russell and Norvig (1995, p. 31) define these agents as “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.” The goal for computer scientists working in this area is to create agents that incorporate information about the users and the use of their systems into the agents’ operations.

Fundamental System Issues

A robust AI system must be able to store knowledge, apply that knowledge to the solution of problems, and acquire new knowledge through experience. Among the challenges that face researchers in building AI systems, there are three that are fundamental: knowledge representation, reasoning and searching, and learning.

Knowledge Representation

What AI researchers call “knowledge” appears as data at the level of programming. Data becomes knowledge when a computer program represents and uses the meaning of some data. Many knowledge-based programs are written in the LISP programming language, which is designed to manipulate data as symbols.

Knowledge may be declarative or procedural. Declarative knowledge is represented as a static collection of facts together with a set of procedures for manipulating those facts. Procedural knowledge, which captures how to do something, is described by executable code that performs some action. Usually, both kinds of representation are needed to capture the knowledge of a particular domain.
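
As a small illustration of the distinction, the following sketch, written in Python rather than LISP, stores a few invented family facts declaratively and pairs them with a procedure that knows how to derive new relationships from them; the names and relations are made up for the example.

    # Declarative knowledge: a static collection of facts (invented for illustration).
    facts = {
        ("parent", "alice", "bob"),
        ("parent", "bob", "carol"),
    }

    # Procedural knowledge: executable code that knows *how* to combine the facts,
    # here deriving grandparent relationships from the stored parent facts.
    def grandparents(facts):
        return {
            (a, c)
            for (rel1, a, b1) in facts
            for (rel2, b2, c) in facts
            if rel1 == rel2 == "parent" and b1 == b2
        }

    print(grandparents(facts))   # {('alice', 'carol')}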

First-order predicate calculus (FOPC) is the best-understood scheme for knowledge representation and reasoning. In FOPC, knowledge about the world is represented as objects and relations between objects. Objects are real-world things that have individual identities and properties, which are used to distinguish the things from other objects. In a first-order predicate language, knowledge about the world is expressed in terms of sentences that are subject to the language’s syntax and semantics.
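
As a small illustration (not taken from any particular system), the facts “Socrates is a human” and “every human is mortal” could be written as the FOPC sentences

    Human(Socrates)
    ∀x (Human(x) → Mortal(x))

from which a reasoning procedure can derive the new sentence Mortal(Socrates).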

Reasoning and Searching

Problem solving can be viewed as searching. One common way to deal with searching is to develop a production-rule system. Such systems use rules that tell the computer how to operate on data and control mechanisms that tell the computer how to follow the rules. For example, a very simple production-rule system has two rules: “if A then B” and “if B then C.” Given the fact (data) A, an algorithm can chain forward to B and then to C. If C is the solution, the algorithm halts.
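
A minimal sketch of such a forward-chaining production-rule system, written in Python for illustration (the rule names A, B, and C follow the example above), might look like this:

    # Rules are (premise, conclusion) pairs: "if A then B" and "if B then C".
    rules = [("A", "B"), ("B", "C")]
    facts = {"A"}          # the known data
    goal = "C"

    changed = True
    while changed and goal not in facts:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires" and adds its conclusion
                changed = True

    print(goal in facts)   # True: the system chained forward from A to B to C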

Matching techniques are frequently an important part of a problem-solving strategy. In the above example, the rules are activated only if A and B exist in the data. The match between the A and B in the data and the A and B in the rules need not be exact, and various deductive and inductive methods may be used to ascertain whether an adequate match exists.

Generate-and-test is another approach to searching for a solution. The user’s problem is represented as a set of states, including a start state and a goal state. The problem solver generates a state and then tests whether it is the goal state. Based on the results of the test, another state is generated and then tested. In practice, heuristics, or problem-specific rules of thumb, must be found to expedite and reduce the cost of the search process.
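
The following Python sketch illustrates generate-and-test on a toy problem invented for the example: states are numbers, the start state is 0, the goal state is 12, and the generator proposes successors by adding 3 or 5. A real problem solver would rely on problem-specific heuristics to prune the states it generates.

    from collections import deque

    start, goal = 0, 12

    def generate(state):
        return [state + 3, state + 5]   # propose successor states

    def is_goal(state):
        return state == goal            # test a state against the goal

    frontier = deque([start])
    seen = {start}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):                          # test
            print("reached goal:", state)
            break
        for nxt in generate(state):                 # generate
            if nxt not in seen and nxt <= goal:     # crude pruning rule
                seen.add(nxt)
                frontier.append(nxt)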

Learning

The advent of highly parallel computers in the late 1980s enabled machine learning through neural networks and connectionist systems, which simulate the structure and operation of the brain. Parallel computers can work on a task together, with each processor handling only part of it. Such systems use a network of interconnected processing elements called “units.” Each unit corresponds to a neuron in the human brain and can be in an “on” or “off” state. In such a network, the input to one unit is the output of another unit. Such networks of units can be programmed to represent short-term and long-term working memory and also to represent and perform logical operations (e.g., comparisons between numbers and between words).
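
A single unit of this kind can be sketched in a few lines of Python; the weights and threshold below are chosen so that the unit switches “on” only when both of its binary inputs are on, that is, it computes a logical AND. The values are illustrative, not drawn from any particular system.

    def unit(inputs, weights, threshold):
        # The unit turns "on" (1) when the weighted sum of its inputs
        # reaches the threshold, and stays "off" (0) otherwise.
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, unit([a, b], weights=[1, 1], threshold=2))   # logical AND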

A simple model of a learning system consists of four components: the physical environment where the learning system operates, the learning element, the knowledge base, and the performance element. The environment supplies some information to the learning element, the learning element uses this information to make improvements in an explicit knowledge base, and the performance element uses the knowledge base to perform its task (e.g., play chess, prove a theorem). The learning element is a mechanism that attempts to discover correct generalizations from raw data or to determine specific facts using general rules. It processes information using induction and deduction. In inductive information processing, the system determines general rules and patterns from repeated exposure to raw data or experiences. In deductive information processing, the system determines specific facts from general rules (e.g., theorem proving using axioms and other proven theorems). The knowledge base is a set of facts about the world, and these facts are expressed and stored in a computer system using a special knowledge representation language.
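
The toy Python sketch below follows this four-component model under invented assumptions: the “environment” supplies a handful of labeled temperature readings, the learning element induces a simple threshold rule from them, the rule is stored in the knowledge base, and the performance element applies it to new cases.

    # Environment: raw observations supplied to the learning element (invented data).
    environment = [(2, "cold"), (8, "cold"), (15, "warm"), (22, "warm")]

    # Learning element: induce a general rule (a threshold) from the raw data.
    cold_max = max(t for t, label in environment if label == "cold")
    warm_min = min(t for t, label in environment if label == "warm")
    knowledge_base = {"threshold": (cold_max + warm_min) / 2}

    # Performance element: apply the learned rule to perform the task.
    def classify(temperature):
        return "warm" if temperature > knowledge_base["threshold"] else "cold"

    print(classify(5), classify(18))   # cold warm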

Applications

There are two types of AI applications: stand-alone AI programs and programs that are embedded in larger systems, where they add capabilities for knowledge representation, reasoning, and learning. Some examples of AI applications include robotics, computer vision, natural-language processing, and expert systems.

Robotics

Robotics is the intelligent connection of a computer’s perception to its actions. Programs written for robots perform functions such as trajectory calculation, interpretation of sensor data, execution of adaptive control, and access to databases of geometric models. Robotics is a challenging AI application because the software has to deal with real objects in real time. An example of a robot guided by humans is the Sojourner surface rover, which explored the area of the Red Planet where the Mars Pathfinder landed in 1997 and was guided in real time by NASA controllers. Larry Long and Nancy Long (2000) note that other robots can act autonomously, reacting to changes in their environment without human intervention. Military cruise missiles are an example of autonomous robots that have intelligent navigational capabilities.

Computer Vision

The goal of a computer vision system is to interpret visual data so that meaningful action can be based on that interpretation. The problem, as John McCarthy (2000) points out, is that the real world has three dimensions, while the input to cameras on which computer action is based represents only two dimensions. The three-dimensional characteristics of the image must be determined from various two-dimensional manifestations. To detect motion, a chronological sequence of images is studied, and the images are interpreted in terms of high-level semantic and pragmatic units. More work is needed to represent three-dimensional data (easily perceived by the human eye) to the computer. Advances in computer vision technology will have a great effect on the creation of mobile robots. While most robots are stationary, some mobile robots with primitive vision capability can detect objects in their path but cannot recognize them.
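
As a rough illustration of detecting motion across a sequence of images, the Python sketch below compares two tiny, invented grayscale frames pixel by pixel and flags pixels whose brightness changes by more than a threshold; a real vision system would work on camera images and add noise filtering and higher-level interpretation.

    frame1 = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
    frame2 = [[10, 10, 10],
              [10, 90, 10],
              [10, 10, 10]]

    THRESHOLD = 30   # minimum brightness change counted as motion (illustrative)
    motion = [
        [1 if abs(a - b) > THRESHOLD else 0 for a, b in zip(row1, row2)]
        for row1, row2 in zip(frame1, frame2)
    ]
    print(motion)   # only the center pixel is flagged as moving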

Natural-Language Processing

Language understanding is a complex problem because it requires programming to extract meaning from sequences of words and sentences. At the lexical level, the program uses words, prefixes, suffixes, and other morphological forms and inflections. At the syntactic level, it uses a grammar to parse a sentence. Semantic interpretation (i.e., deriving meaning from a group of words) depends on domain knowledge to assess what an utterance means. For example, “Let’s meet by the bank to get a few bucks” means one thing to bank robbers and another to weekend hunters. Finally, to interpret the pragmatic significance of a conversation, the computer needs a detailed understanding of the goals of the participants in the conversation and the context of the conversation.
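
The Python sketch below illustrates just the lexical and syntactic levels on a toy scale: each word is looked up in a small, invented lexicon, and a single hand-written grammar pattern (determiner, noun, verb, determiner, noun) decides whether the sentence parses. A real system would use far larger lexicons and grammars, plus the semantic and pragmatic knowledge described above.

    lexicon = {
        "the": "Det", "a": "Det",
        "dog": "N", "cat": "N", "bank": "N",
        "chased": "V", "saw": "V",
    }

    def parse(sentence):
        tags = [lexicon.get(w) for w in sentence.lower().split()]   # lexical level
        return tags == ["Det", "N", "V", "Det", "N"]                # syntactic level

    print(parse("The dog chased a cat"))    # True
    print(parse("Dog the chased cat a"))    # False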

Expert Systems

Expert systems consist of a knowledge base and mechanisms/programs that infer how to act using that knowledge. The knowledge base is often created jointly by knowledge engineers and domain experts. One of the first expert systems, MYCIN, was developed in the mid-1970s. MYCIN employed a few hundred if-then rules about meningitis and bacteremia in order to deduce the proper treatment for a patient who showed signs of either of those diseases. Although MYCIN did better than students or practicing doctors, it did not contain as much knowledge as physicians routinely need to diagnose such diseases.
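
The Python sketch below shows the general shape of such a rule-based inference mechanism; the medical-sounding findings, rules, and conclusions are invented for illustration and are not MYCIN’s actual rules or certainty factors.

    # Each rule pairs a set of required findings with a conclusion (all invented).
    rules = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis", "positive_culture"}, "recommend_treatment_X"),
    ]

    def infer(findings):
        conclusions = set(findings)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= conclusions and conclusion not in conclusions:
                    conclusions.add(conclusion)   # the rule fires
                    changed = True
        return conclusions - set(findings)

    print(infer({"fever", "stiff_neck", "positive_culture"}))
    # {'suspect_meningitis', 'recommend_treatment_X'}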

Although Alan Turing’s prediction that computers would be able to pass the Turing Test by the year 2000 was not realized, much progress has been made, and novel AI applications have been developed, such as industrial robots, medical diagnostic systems, speech recognition in telephone systems, and chess playing (where IBM’s Deep Blue supercomputer defeated world champion Garry Kasparov).

Conclusion

The success of any computer system depends on its being integrated into the workflow of those who are to use it and on how well it meets user needs. A major future direction for AI concerns the integration of AI with other systems (e.g., database management, real-time control, or user interface management) in order to make those systems more usable and adaptive to changes in user behavior and in the environment in which they operate.
