Artificial intelligence (AI), also called Machine Intelligence (MI), is intelligence exhibited by machines.
The goal of AI is to create technology that allows machines to function in an intelligent manner.
The general problem of simulating (or creating) intelligence has been broken down into sub-problems: particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.
[1] Reasoning, Problem Solving
Early researchers developed algorithms that imitated the step-by-step reasoning humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, algorithms can require enormous computational resources (many of them experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical beyond a certain problem size). So the search for more efficient problem-solving algorithms is a high priority.
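The combinatorial explosion can be made concrete with a toy sketch (the branching factor and depths below are arbitrary, chosen only for illustration): counting the nodes a full search would expand in a state space with a fixed branching factor shows how quickly exhaustive search becomes infeasible.

```python
# Count the nodes a full search expands in a tree with a fixed
# branching factor, down to a given depth. The total grows
# exponentially with depth -- the "combinatorial explosion".
def count_nodes(branching: int, depth: int) -> int:
    if depth == 0:
        return 1
    return 1 + branching * count_nodes(branching, depth - 1)

for d in (5, 10, 20):
    print(d, count_nodes(3, d))
# Depth 20 with branching factor 3 already expands over 5 billion nodes.
```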
Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has progressed using "sub-symbolic" problem solving:
[a] Embodied Agent approaches emphasize the importance of sensorimotor skills to higher reasoning.
[b] Neural Net research attempts to simulate the structures inside the brain that give rise to this skill.
[c] Statistical approaches to AI mimic the human ability to guess.
[2] Knowledge Representation, Commonsense Knowledge
Knowledge Representation and Knowledge Engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains.
A representation of "what exists" is an Ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.
The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), and more. Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.
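Real ontologies are written in formalisms like the Web Ontology Language, but the core idea of an "is-a" hierarchy with subsumption queries can be sketched in a few lines (the concept names below are invented for the example):

```python
# A toy ontology: each concept maps to its parent ("is-a" link).
ontology = {
    "Dog": "Mammal",
    "Cat": "Mammal",
    "Mammal": "Animal",
    "Animal": "Thing",
}

def is_a(concept, ancestor):
    """Subsumption query: walk the is-a chain upward."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ontology.get(concept)
    return False

print(is_a("Dog", "Animal"))   # a Dog is an Animal
print(is_a("Animal", "Dog"))   # but not the other way around
```

An automated reasoner over a description logic does far more than this (consistency checking, classification, inference of implicit facts), but subsumption is the basic query it answers.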
Among the most difficult problems in knowledge representation are:
[a] Default Reasoning and the Qualification Problem
Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
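One common workaround for the qualification problem is default reasoning with explicit exceptions: assert the general rule, then let known exceptions override it. A minimal sketch (the species lists are illustrative, not exhaustive; that incompleteness is exactly McCarthy's point):

```python
# Default rule: birds fly -- unless a known exception applies.
default_flies = True
exceptions = {"penguin": False, "ostrich": False, "kiwi": False}

def flies(kind: str) -> bool:
    # An exception overrides the default; everything else gets the default.
    return exceptions.get(kind, default_flies)

print(flies("sparrow"))   # default applies
print(flies("penguin"))   # exception overrides
```

The difficulty is that the exception table can never be complete: an injured sparrow, a bird in a cage, a dead bird all violate the default too, and enumerating every such qualification by hand is hopeless.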
[b] The Breadth of Commonsense Knowledge
The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the Internet, and thus be able to add to its own ontology.
[c] The Subsymbolic form of some Commonsense Knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed" or an art critic can take one look at a statue and realize that it is a fake. These are non-conscious and sub-symbolic intuitions or tendencies in the human brain. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.
[3] Automated Planning and Scheduling
Intelligent agents must be able to Set Goals and Achieve them. They need a way to visualize the future - a representation of the state of the world with the ability to predict how their actions will change it - and to make choices that maximize the utility (or "value") of the available options.
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only actor, it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.
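Utility-maximizing choice under uncertainty can be sketched in a few lines (the actions, probabilities, and utilities below are invented for the example): weight each possible outcome's utility by its probability, and pick the action with the highest expected value.

```python
# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "go_left":  [(0.8, 10), (0.2, -5)],
    "go_right": [(0.5, 20), (0.5, -10)],
}

def expected_utility(outcomes):
    """Sum of each outcome's utility weighted by its probability."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
# go_left wins: 0.8*10 + 0.2*(-5) = 7, versus 0.5*20 + 0.5*(-10) = 5,
# even though go_right offers the larger best-case payoff.
```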
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.
[4] Machine Learning (ML)
Machine learning is the study of Computer Algorithms that Improve Automatically through Experience.
Unsupervised Learning is the ability to find patterns in a stream of input.
Supervised Learning includes both classification and numerical regression.
Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories.
Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.
In Reinforcement Learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
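A minimal reinforcement-learning sketch is the two-armed bandit (the arm names, reward probabilities, and epsilon-greedy parameters below are all invented for the example): the agent never sees the true reward probabilities, only the sequence of rewards, yet its running value estimates converge toward the better arm.

```python
import random

random.seed(0)
true_means = {"A": 0.3, "B": 0.7}   # hidden from the agent
values = {"A": 0.0, "B": 0.0}       # the agent's estimates
counts = {"A": 0, "B": 0}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the current best estimate,
    # occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_means[action] else 0.0
    counts[action] += 1
    # Incremental mean update of the estimated value for this arm.
    values[action] += (reward - values[action]) / counts[action]

print(values)  # estimate for B should end up well above A's
```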
These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
Within developmental robotics, developmental learning approaches allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).
[5] Natural Language Processing (NLP)
NLP gives machines the ability to Read/Understand Human Language.
A sufficiently powerful NLP system would enable NL UIs and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering, and machine translation.
A common method of processing and extracting meaning from natural language is through Semantic Indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.
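Semantic indexing proper works at the level of meaning, but its foundation is the same structure used for plain keyword retrieval: an inverted index mapping each term to the documents that contain it. A toy sketch (the documents below are invented for the example):

```python
# Three tiny "documents" to index.
docs = {
    1: "the cat sat on the mat",
    2: "the dog chased the cat",
    3: "dogs and cats make good pets",
}

# Build an inverted index: term -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for term in set(text.split()):
        index.setdefault(term, set()).add(doc_id)

print(sorted(index["cat"]))  # documents 1 and 2 mention "cat"
```

Note that a purely lexical index misses document 3 for the query "cat" because it contains only the plural "cats"; closing that gap (stemming, synonyms, embeddings) is where the "semantic" part of semantic indexing comes in.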
[6] Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world.
Computer Vision is the ability to analyze visual input.
A few selected sub-problems are Speech Recognition, Facial Recognition, and Object Recognition.
[7] Robotics
Intelligence is required for Robots to handle tasks such as object manipulation and navigation, with sub-problems such as localization, mapping, and motion planning.
These systems require that an agent is able to:
[a] Be spatially cognizant of its surroundings
[b] Learn from and build a map of its environment
[c] Figure out how to get from one point in space to another
[d] Execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object)
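Steps [b] and [c] above can be sketched with the simplest motion-planning setup (the grid and obstacle layout below are invented for the example): given a map of free cells and obstacles, breadth-first search finds a shortest path from one point to another.

```python
from collections import deque

# Occupancy grid: 0 = free cell, 1 = obstacle.
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_path_length(start, goal):
    """BFS over 4-connected free cells; returns step count or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # goal unreachable

print(shortest_path_length((0, 0), (2, 3)))  # 5 steps around the wall
```

Real robot planners must additionally handle continuous space, kinematic constraints, and uncertain maps, but grid search is the standard starting point.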
[8] Social intelligence & Affective computing
Affective computing is the study and development of systems that can Recognize, Interpret, Process, and Simulate Human Affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. A motivation for the research is the ability to simulate empathy: the machine would interpret human emotions and adapt its behavior to give an appropriate response to those emotions.
Emotion and social skills are important to an intelligent agent for two reasons:
[a] Being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as Game Theory and Decision Theory necessitate that an agent be able to detect and model human emotions.
[b] In an effort to facilitate HCI (Human-Computer Interaction), an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.
[9] Creativity & Computational creativity
A sub-field of AI addresses Creativity both:
[a] Theoretically (from a Philosophical and Psychological perspective)
[b] Practically (via specific implementations of Systems that Generate Novel and Useful Outputs)
[10] General Intelligence - Artificial General Intelligence and AI-complete
Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.
A few believe that anthropomorphic features like Artificial Consciousness or an Artificial Brain may be required for such a project.
Many of the problems above also require that general intelligence be solved. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). A problem like machine translation is considered "AI-complete" because all of these problems need to be solved simultaneously in order to reach human-level machine performance.
Source:
https://en.wikipedia.org/wiki/Artificial_intelligence