Autonomous Robotic Agents

Agents have their origin in psychology, artificial intelligence, and distributed artificial intelligence. They integrate learning, planning, reasoning, and knowledge representation, and their goal is to execute complex tasks on behalf of users that would otherwise be hard to accomplish. Users can assign new goals for the agents to achieve, in contrast to conventional software systems, which limit users to previously specified goals that cannot be modified.

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment by means of effectors (Russell & Norvig 1995).

An autonomous agent is one whose behavior is based mainly on its own experience, although it may draw on certain built-in knowledge.

Just as evolution has given animals enough built-in reflexes to survive long enough to learn for themselves, it is reasonable to provide an intelligent agent with certain initial knowledge together with the ability to learn.

To the extent that an agent acts on built-in suppositions, its behavior will be satisfactory only as long as those suppositions hold, and it lacks flexibility. A truly autonomous agent is capable of functioning successfully across a broad spectrum of environments, given enough time to adapt. There is little or no dependency on abstract world representations; the robot's interaction with the world is expressed through behaviors rather than plans.

There exist different types of autonomous agents:

  • Human agents have organs, such as eyes and ears, serving as sensors, while body parts, such as hands, legs, and mouth, serve as effectors.
  • Robotic agents replace these sensors with cameras and range sensors, such as infrared or ultrasound devices, and replace the effectors with motors.
  • Software agents receive perceptions and execute actions in formats such as encoded bit strings.
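The sensor/effector view of an agent described above can be sketched as a minimal percept-to-action loop. All names here are hypothetical illustrations, not part of any particular library:

```python
# Minimal sketch of the agent abstraction: percepts arrive through
# sensors, an agent program maps each percept to an action, and
# effectors (not modeled here) carry the action out.

class Agent:
    def __init__(self, program):
        self.program = program  # function mapping a percept to an action

    def step(self, percept):
        return self.program(percept)

# A trivial reflex program: back away when an obstacle is perceived.
def reflex(percept):
    return "retreat" if percept == "obstacle" else "advance"

robot = Agent(reflex)
action = robot.step("obstacle")
```

The same `Agent` shell can host a learning program in place of the fixed reflex, which is what distinguishes an autonomous agent from one driven purely by built-in knowledge.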

Software agents vary and can be classified as:

  • expert assistants, software agents assisting users in complex decision making or knowledge processing, such as medical monitoring, industrial control, business process administration, manufacturing, and air traffic control.
  • softbots, software agents interacting with real world software environments, such as operating systems, the Internet, and the Web.
  • synthetic agents, software agents operating in simulated worlds, such as virtual worlds, MUDs, or video games. Emphasis is given to qualities such as believability and personality rather than intelligence and specialization, and such agents can play roles in interactive entertainment systems, art, and education.

Autonomous agents are studied as either single or multiple agents.

Autonomous Robotic Multi-Agents

A significant amount of research on multi-agent systems exists. An important early work is Fukuda's CEBOT system, which demonstrates the self-organizing behavior of a group of heterogeneous robotic agents. Beni and Hackwood's research on swarm robotics demonstrates large-scale cooperation in simulation. Work at MIT by Brooks and Mataric shows the development of subsumption-based multi-agent teams.

These systems are characterized by their reactive control nature:

  • A decomposition of robotic goals into a collection of primitive behaviors.
    Behaviors are either selected through arbitration or permitted to activate concurrently;
  • Perceptual strategies are closely associated with each reactive behavior, providing only the information necessary for each activity; global world models are generally avoided at this level, yielding faster real-time robotic response.
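The two activation schemes above can be sketched as follows. The behavior records, priorities, and outputs are hypothetical, assuming each behavior reports a 2-D velocity vector:

```python
# Sketch of the two behavior-activation schemes for reactive control:
# arbitration picks a single winning behavior, while concurrent
# activation lets every active behavior contribute to the motion.

def arbitrate(behaviors):
    """Arbitration: the highest-priority active behavior wins outright."""
    active = [b for b in behaviors if b["active"]]
    if not active:
        return (0.0, 0.0)
    winner = max(active, key=lambda b: b["priority"])
    return winner["output"]

def cooperate(behaviors):
    """Concurrent activation: all active outputs are summed vectorially."""
    active = [b["output"] for b in behaviors if b["active"]]
    return (sum(v[0] for v in active), sum(v[1] for v in active))

behaviors = [
    {"name": "avoid",  "priority": 2, "active": True, "output": (-0.5, 0.0)},
    {"name": "wander", "priority": 1, "active": True, "output": (0.3, 0.4)},
]
```

Subsumption-style systems follow the first pattern (higher layers suppress lower ones); schema-based systems such as AuRA follow the second.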

Many systems have been developed that include multiple identical units to carry out tasks of foraging, grazing, and consuming objects in a cluttered world:

  • Foraging consists of searching the environment for objects (referred to as attractors) and carrying them back to a central location. Robots performing this task would potentially be suitable for garbage collection or specimen collection in a hazardous environment.
  • Grazing is similar to lawn mowing: the robot team must adequately cover the environment. Grazing robots might be used to mow, plow, or seed fields, vacuum houses, perform surveillance, or remove scrub from a lumber-producing forest.
  • Consuming requires the robot to perform work on the attractors in place, rather than carrying them back. Applications might include toxic waste cleanup, assembly, or cleaning tasks.

Communication mechanisms include:

  • State communication enhances the performance of the social system in quantifiable ways. When state communication is permitted, robots are able to detect the internal state (wander, acquire, or deliver) of other robots in a manner analogous to the display behavior of animals.
  • Goal communication involves the transmission and reception of specific goal-oriented information. Implementation on mobile robots requires data to be encoded, transmitted, received, and decoded.
  • Emergent behavior is evidenced by the phenomenon of recruitment, the shared effort of many robots to perform a task, which occurs even in the absence of communication between the agents.
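The internal states named above (wander, acquire, deliver) can be sketched as a small foraging state machine with one state-communication rule. The transition conditions and the recruitment rule are illustrative assumptions, not a fielded implementation:

```python
# Sketch of a foraging robot's internal states and a state-communication
# step in which a robot reads the displayed states of its teammates.

WANDER, ACQUIRE, DELIVER = "wander", "acquire", "deliver"

def step(state, sees_attractor, holding, at_home):
    """One transition of the foraging state machine."""
    if state == WANDER and sees_attractor:
        return ACQUIRE   # attractor detected: move toward it
    if state == ACQUIRE and holding:
        return DELIVER   # attractor grasped: carry it to the home base
    if state == DELIVER and at_home:
        return WANDER    # dropped off: resume searching
    return state

def recruit(own_state, teammate_states):
    """State communication: a wandering robot joins a teammate
    that is already acquiring an attractor (display behavior)."""
    if own_state == WANDER and ACQUIRE in teammate_states:
        return ACQUIRE
    return own_state
```

With `recruit` disabled, robots still converge on shared tasks by independently sensing the same attractors, which is the communication-free form of recruitment described above.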

Little research has been conducted on multi-agent systems based on biological studies and even less so on systems that have been fielded on working robots. These aspects are an important concern in the research project entitled Ecological Robotics: A Schema-theoretic Approach.

Autonomous Robot Architecture (AuRA)

In the Autonomous Robot Architecture (AuRA), developed at the College of Computing's Mobile Robot Laboratory at Georgia Tech, a hybrid deliberative/reactive architecture is employed in which motor schemas provide the reactive component of navigation. Instead of planning by predetermining an exact route through the world and then trying to coerce the robot to follow it, motor schemas (behaviors) are selected and instantiated in a manner that enables the robot to interact successfully with unexpected events while still striving to satisfy its higher-level goals. Motor schema outputs are, in a sense, analogous to potential fields. Multiple active schemas are usually present, each producing a velocity vector that drives the robot in response to its perceptual stimulus. The robot only needs to compute the single vector at its current location. Each of the individual schemas posts its contribution to the robot's motion at a centralized location, where the resultant vectors are summed and normalized to fit within the limits of the robot vehicle, yielding a single combined velocity for the robot. These vectors are continually updated as new perceptual information arrives, with the result being immediate response to any new sensory data.
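The schema-combination step can be sketched as follows, assuming each schema returns a 2-D velocity vector at the robot's current location. The schema names, gains, and sphere-of-influence parameter are illustrative assumptions, not AuRA's actual values:

```python
import math

# Sketch of motor-schema combination: each schema produces a velocity
# vector; the contributions are summed and normalized to the vehicle's
# speed limit, yielding one combined velocity.

def move_to_goal(pos, goal, gain=1.0):
    """Attractive schema: unit vector from pos toward the goal, scaled by gain."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    return (gain * dx / dist, gain * dy / dist)

def avoid_obstacle(pos, obstacle, sphere=2.0, gain=1.0):
    """Repulsive schema: pushes away, growing stronger as the robot
    penetrates the obstacle's sphere of influence; zero outside it."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    dist = math.hypot(dx, dy) or 1e-9
    if dist > sphere:
        return (0.0, 0.0)
    magnitude = gain * (sphere - dist) / sphere
    return (magnitude * dx / dist, magnitude * dy / dist)

def combine(vectors, max_speed=1.0):
    """Sum all active schema outputs and normalize to the vehicle's limits."""
    vx = sum(v[0] for v in vectors)
    vy = sum(v[1] for v in vectors)
    speed = math.hypot(vx, vy)
    if speed > max_speed:
        vx, vy = vx * max_speed / speed, vy * max_speed / speed
    return (vx, vy)

pos = (0.0, 0.0)
v = combine([move_to_goal(pos, (10.0, 0.0)),
             avoid_obstacle(pos, (1.0, 1.0))])
```

Because only the vector at the current position is ever computed, no global field is stored; the combined velocity is simply recomputed each cycle as new perceptual data arrives.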

The advantages of this form of navigation are many. Rapid computation and the ability to be mapped onto parallel architectures make real-time response easily attainable. Modular construction eases the integration of new motor behaviors, simplifying both system maintenance and transfer to new problem domains. Motor schemas readily reflect uncertainty in perception, when such a measure is available, and react immediately to environmental sensor data. These factors all contribute to the needs of a navigational system that successfully supports a robot's intentional goals. MissionLab is designed to support such a system in a simulated environment.
