
 Introduction to AI

Definition of AI
What is AI?
Artificial Intelligence is concerned with the design of intelligence in an artificial device. The
term was coined by McCarthy in 1956.
According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”. 
 
Typical AI problems
While studying the typical range of tasks that we might expect an “intelligent entity” to perform, we
need to consider both common-place tasks and expert tasks.
Examples of common-place tasks include:
• Recognizing people and objects.
• Communicating (through natural language).
• Navigating around obstacles on the streets.
 
These tasks are performed routinely and matter-of-factly by people and some other animals.
Expert tasks include:
• Medical diagnosis.
• Mathematical problem solving
• Playing games like chess .
These tasks cannot be done by all people; they can only be performed by skilled specialists.
 
Intelligent behaviour
It constitutes:
• Perception, involving image recognition and computer vision
• Reasoning
• Learning
• Understanding language, involving natural language processing and speech processing
• Solving problems
• Robotics
 
 Turing test
In artificial intelligence (AI), the Turing Test is a method for determining whether or
not a computer is capable of thinking like a human. The test is named after Alan
Turing.
According to this kind of test, a computer is deemed to have artificial intelligence if it can mimic human responses under specific conditions.
In Turing's test, if the human being conducting the test is unable to consistently
determine whether an answer has been given by a computer or by another human
being, then the computer is considered to have "passed" the test.
In the basic Turing Test, there are three terminals. Two of the terminals are operated
by humans, and the third terminal is operated by a computer.
Each terminal is physically separated from the other two.
One human is designated as the questioner. The other human and the computer are
designated the respondents.
The questioner interrogates both the human respondent and the computer according to a specified format, within a certain subject area and context, and for a preset length of time .
After the specified time, the questioner tries to decide which terminal is operated by
the human respondent and which terminal is operated by the computer.
The test is repeated many times.
If the questioner makes the correct determination in half of the test runs or less, the
computer is considered to have artificial intelligence, because the questioner regards it
as "just as human" as the human respondent.
 
Approaches to AI
Strong AI :
It aims to build machines that can truly reason and solve problems. These machines
should be self-aware, and their overall intellectual ability needs to be indistinguishable
from that of a human being. Strong AI maintains that suitably programmed machines
are capable of cognitive mental states.
 
Weak AI:
It deals with the creation of some form of computer-based artificial intelligence that
cannot truly reason and solve problems, but can act as if it were intelligent. Weak AI
holds that suitably programmed machines can simulate human cognition.  
 
Applied AI:
It aims to produce commercially viable "smart" systems, such as a
security system that is able to recognise the faces of people who are permitted to enter
a particular building. Applied AI has already enjoyed considerable success.
 
Cognitive AI:
Computers are used to test theories about how the human mind works: for example,
theories about how we recognise faces and other objects, or about how we solve
abstract problems.
 
 
Limits of AI Today
Today’s successful AI systems operate in well-defined domains and employ narrow,
specialized knowledge. Common-sense knowledge is needed to function in complex, open-ended worlds. Such a system also needs to understand unconstrained natural language.
However, these capabilities are not yet fully present in today’s intelligent systems.
Today’s AI systems have been able to achieve limited success in some of these tasks.
• In Computer vision, the systems are capable of face recognition
• In Robotics, we have been able to make vehicles that are mostly autonomous.
• In Natural language processing, we have systems that are capable of simple
machine translation.
• Today’s Expert systems can carry out medical diagnosis in a narrow domain
• Speech understanding systems are capable of recognizing several thousand words of
continuous speech
• Planning and scheduling systems have been employed in scheduling experiments
with the Hubble Space Telescope.
• Learning systems are capable of categorizing text into about 1,000
topics
• In Games, AI systems can play chess at the Grand Master level (world-champion
strength), as well as checkers, etc.
 
 
What can AI systems NOT do yet?
• Understand natural language robustly (e.g., read and understand articles in a
newspaper)
• Interpret an arbitrary visual scene
• Learn a natural language
• Construct plans in dynamic real-time domains
• Exhibit true autonomy and intelligence
 
Introduction to Agent
Agents
-An agent acts in an environment.
-An agent perceives its environment through sensors.
-The complete set of inputs at a given time is called a percept.
-The current percept, or a sequence of percepts can influence the actions of an agent.
- The agent can change the environment through actuators or effectors.
- An operation involving an effector is called an action. Actions can be grouped into action
sequences.
- The agent can have goals which it tries to achieve.
Thus, an agent can be looked upon as a system that implements a mapping from percept sequences to
actions. A performance measure has to be used in order to evaluate an agent. An autonomous agent decides autonomously which action to take in the current situation to maximize progress towards its goals.
-Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators is an agent.
-The agent function maps percept sequences to actions: f: P* → A
- The agent program runs on the physical architecture to produce f
• agent = architecture + program 
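The mapping f: P* → A and the slogan "agent = architecture + program" can be sketched as a minimal Python interface. The class, method, and percept names here are illustrative assumptions, not from the text:

```python
# A minimal sketch of "agent = architecture + program".
# The agent program maps a percept sequence to an action (f: P* -> A);
# the architecture feeds percepts in and executes the chosen action.

class Agent:
    def __init__(self, program):
        self.program = program   # the agent program: percept history -> action
        self.percepts = []       # the percept sequence P*

    def step(self, percept):
        """Receive one percept through the 'sensors' and return an action."""
        self.percepts.append(percept)
        return self.program(self.percepts)

# Example program: act only on the most recent percept (a reflex policy).
def reflex_program(percept_history):
    latest = percept_history[-1]
    return "brake" if latest == "obstacle" else "drive"

agent = Agent(reflex_program)
print(agent.step("clear"))     # drive
print(agent.step("obstacle"))  # brake
```

Note that the program receives the whole percept sequence; a particular agent design (such as the reflex policy above) may choose to look only at the latest percept.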
 

 
 
Rational Agent
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, based on the evidence provided by
the percept sequence and whatever built-in knowledge the agent has.
Rationality is the state of being reasonable and sensible, and having good judgment.
 
 
Structure of Agents
1. Table Driven agents
2. Simple reflex agents
3. Model-based reflex agents
4. Goal-based agents
5. Utility-based agents 
 

 
 
Table-driven agents use a lookup table to associate perceptions from the environment with possible actions.
There are several drawbacks to this technique:
• The entire percept sequence must be kept in memory
• The table takes a long time to build
• The agent has no autonomy

Simple Reflex Agents
 
 

 
•A simple reflex agent works by finding a rule whose condition matches
the current situation, and then doing the action associated with that rule.
Such agents choose actions based only on the current percept.
They are rational only if a correct decision can be made on the basis of the current
percept alone, which requires the environment to be fully observable.
 
Function Simple-Reflex-Agent(percept)
static: rules /* condition-action rules */
state <- Interpret_Input(percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
return (action)
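The pseudocode above can be turned into a small runnable sketch. The rule representation as (condition, action) pairs and the vacuum-style percepts are illustrative assumptions:

```python
# Simple reflex agent: condition-action rules over the current percept only.
# Rules are modeled as (condition, action) pairs; this encoding is an
# assumption for illustration, not part of the original pseudocode.

RULES = [
    (lambda state: state == "dirty", "suck"),
    (lambda state: state == "clean", "move"),
]

def interpret_input(percept):
    """Map the raw percept to a state description (identity here)."""
    return percept

def rule_match(state, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(state):
            return action
    return "noop"   # fall back when no rule's condition matches

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, RULES)

print(simple_reflex_agent("dirty"))  # suck
print(simple_reflex_agent("clean"))  # move
```

Because the agent consults nothing but the current percept, two different world states that produce the same percept will always get the same action, which is exactly the limitation the model-based agent below addresses.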
 

 
 
They use a model of the world (knowledge about “how things happen in the world”) to choose their actions. They maintain an internal state: a representation of unobserved aspects of the current state, which depends on the percept history.
Sensors may not provide access to the complete state of the world, so the agent
needs to maintain internal state information to distinguish between world states that
generate the same perceptual input but are significantly different.
 
 
Function Reflex-Agent-With-State(percept)
static: state, /* description of the current world state */
rules /* set of condition-action rules */
state <- Update_State(state, percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
state <- Update_State(state, action)
return (action)
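A minimal sketch of the same idea in Python. The world model here, which simply remembers the previous percept, is an illustrative assumption standing in for a real model of "how things happen in the world":

```python
# Model-based reflex agent: keeps an internal state so that world states
# producing the same percept can still be told apart.

def update_state(state, percept):
    """Fold the new percept into the internal state (the 'model')."""
    state = dict(state)                      # keep the update side-effect free
    state["last_seen"] = state.get("seen")   # remember the previous percept
    state["seen"] = percept
    return state

def choose_action(state):
    # Two identical percepts in a row are indistinguishable without internal
    # state; the remembered history lets the agent react to the *change*.
    if state["seen"] == state["last_seen"]:
        return "explore"
    return "investigate"

state = {}
for percept in ["wall", "wall", "door"]:
    state = update_state(state, percept)
    print(choose_action(state))  # investigate, explore, investigate
```

The key difference from the simple reflex agent is that the decision depends on `state`, which summarizes percept history, rather than on the current percept alone.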
 
 

 
 
Knowing the current state of the environment is often not enough to decide on an action:
•Goal information is needed
•Goal information is combined with the possible actions in order to choose actions that achieve the goal
•It is not always easy to decide on the best action to take
•Search and planning are used to find action sequences that achieve the goal
    - The goal-based approach is more flexible than the reflex agent, since the knowledge
supporting a decision is explicitly modeled, thereby allowing for modifications.
 
Function Goal-Based-Agent(percept)
static: state, /* description of the current world state*/
rules /*set of condition-action rules */
goal /* set of specific success states */
state <-Update_State(state, percept)
rule <-Rule_Match(state, rules)
action <-Rule_Action(rule)
state <-Update_State(state, action)
if (state in goal) then
return (action)
else
percept <-Obtain_Percept(state, goal)
return(Goal-Based-Agent(percept)) 
 
 

 
 
Goals alone are not enough for high-quality behavior:
•In goal-based agents, states are classified only as successful or unsuccessful
•A method is needed to distinguish the level of utility, or gain, in a state
•Utility: a function which maps a (successful) state to a real
number describing the associated degree of success
•Utility functions allow for:
–Specifying tradeoffs between conflicting or alternative goals
–Specifying a way in which the likelihood of success can be weighed against the importance of alternative goals
 
 
Function Utility-Based-Agent(percept)
static: state, /* description of the current world state */
rules /* set of condition-action rules */
goal /* set of specific success states */
state <- Update_State(state, percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
state <- Update_State(state, action)
score <- Obtain_Score(state)
if (state in goal) and Best_Score(score) then
return (action)
else
percept <- Obtain_Percept(state, goal)
return (Utility-Based-Agent(percept))
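The core of the utility-based idea, scoring candidate states with a real-valued function and choosing the best action, can be sketched in a few lines. The one-dimensional world and the distance-based utility below are illustrative assumptions:

```python
# Utility-based agent: instead of a binary goal test, score the successor
# state of each candidate action with a utility function and pick the best.

GOAL = 5   # target position on a number line (illustrative world)

def utility(position):
    """Map a state to a real number: closer to the goal is better."""
    return -abs(GOAL - position)

def utility_based_agent(position, actions=(-1, 0, +1)):
    # Evaluate each action by the utility of the state it leads to.
    return max(actions, key=lambda a: utility(position + a))

print(utility_based_agent(2))   # 1  (moving right increases utility)
print(utility_based_agent(7))   # -1 (moving left increases utility)
```

Because utility is a real number rather than a success flag, the same mechanism can express tradeoffs between goals: conflicting objectives simply contribute different terms to the score.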
 
 

 
 
Agent Environment
Environments in which agents operate can be defined in different ways 
 
1. Observability
    In terms of observability, an environment is either fully or partially observable.
    In a fully observable environment, all of the environment relevant to the action being
considered is observable. In such environments, the agent does not need to keep track of
changes in the environment. A chess-playing system is an example of a system that operates
in a fully observable environment.
 
    In a partially observable environment, the relevant features of the environment are only
partially observable. A bridge-playing program is an example of a system operating in a
partially observable environment.
 
2. Determinism
In deterministic environments, the next state of the environment is completely determined by
the current state and the agent’s action. Image-analysis systems are examples of this kind of
situation: the processed image is determined completely by the current image and the
processing operations.
 
If an element of interference or uncertainty occurs, then the environment is stochastic. Note
that a deterministic yet partially observable environment will appear stochastic to the agent.
Autonomous vehicles that navigate terrain, such as the Mars rovers, are an example: the new
environment the vehicle finds itself in is stochastic in nature.
 
If the environment state is wholly determined by the preceding state and the actions of
multiple agents, then the environment is said to be strategic. Example: chess. There are two
agents, the players, and the next state of the board is strategically determined by the players’
actions.
 
 
3. Episodicity
An episodic environment means that subsequent episodes do not depend on what actions
occurred in previous episodes.
In a sequential environment, the agent engages in a series of connected episodes.
 
 
4. Dynamism
Static Environment: does not change from one state to the next while the agent is
considering its course of action. The only changes to the environment are those caused by
the agent itself. A static environment does not change while the agent is thinking. The
passage of time as an agent deliberates is irrelevant. The agent doesn’t need to observe the
world during deliberation. 
 
A Dynamic Environment changes over time independent of the actions of the agent; thus,
if an agent does not respond in a timely manner, this counts as a choice to do nothing.
 
 
5. Continuity
If the number of distinct percepts and actions is limited, the environment is discrete, otherwise it is continuous.
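The five dimensions above can be collected into one small structure. The example classifications of chess and a Mars rover follow the text, but the field encoding is an illustrative assumption:

```python
from dataclasses import dataclass

# Encode the five environment dimensions from the text as fields.
@dataclass
class Environment:
    name: str
    fully_observable: bool   # vs. partially observable
    determinism: str         # "deterministic", "stochastic", or "strategic"
    episodic: bool           # vs. sequential
    static: bool             # vs. dynamic
    discrete: bool           # vs. continuous

# Chess: fully observable, strategic (two agents), sequential, static, discrete.
chess = Environment("chess", True, "strategic", False, True, True)

# A Mars rover navigating terrain: partially observable, stochastic,
# sequential, dynamic, continuous.
rover = Environment("mars rover", False, "stochastic", False, False, False)

print(chess)
print(rover)
```

Classifying an environment along these axes before designing an agent makes the design constraints explicit: for instance, a partially observable environment forces the agent to maintain internal state.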
 
 
