
Introduction to autonomous agents from a developer perspective – Part one

  • ajitjaokar 


What are autonomous AI agents? 

Autonomous AI agents are systems capable of performing tasks without human intervention. 

Agents have been around in various incarnations. Most recently, a degree of autonomy was achieved through reinforcement learning (RL). However, RL is still hard to deploy beyond virtual environments and games. Autonomous AI agents (called agents henceforth in this document) are more complex. They are designed to perceive their environment, make decisions based on their perceptions and pre-programmed knowledge, and execute actions to achieve specific goals. Today's agents are also built on LLMs, which makes them more capable.

In various forms, conventional AI agents (predating the LLM-based autonomous agents) are already used in some capacity: for example, in self-driving cars, robotics, virtual assistants like Alexa, autonomous drones, and algorithmic trading. 

However, the real potential of agents, as we discuss here, lies in their capacity to solve problems at a higher level of abstraction. In simple terms, if you want to book a holiday to Greece, the AI can split this high-level task into subtasks and autonomously execute them to produce an overall solution. It is this ability of autonomous AI agents to execute a high-level task end to end that makes agent technology significant. 
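The holiday example can be sketched as a decompose-and-execute loop. Everything here is illustrative: the subtask names and the hand-written `rules` table stand in for what an LLM-based agent would generate dynamically.

```python
# Hypothetical sketch: an agent splitting a high-level goal into subtasks
# and executing each one. The rules table is a stand-in for an LLM call.

def decompose(goal):
    """Split a high-level goal into ordered subtasks (hand-written here)."""
    rules = {
        "book a holiday to Greece": [
            "search flights to Athens",
            "book hotel",
            "plan itinerary",
            "arrange airport transfer",
        ]
    }
    return rules.get(goal, [goal])  # unknown goals stay as a single task

def execute(task):
    # Placeholder executor: a real agent would call tools/APIs here
    return f"done: {task}"

results = [execute(t) for t in decompose("book a holiday to Greece")]
```

The key point is the shape of the loop, not the rules table: the agent owns the decomposition, and each subtask can in turn be delegated or further decomposed.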

In this blog post series, we explore autonomous agents from the perspective of a developer. 

Workflow of autonomous agents

Autonomous agents involve a series of steps:

  1. Sensing and perception where the agent gathers data from its environment using various sensors. The raw sensor data is preprocessed to eliminate noise and extract relevant features. 
  2. Based on the sensing, the agent constructs a model of its environment, which could be a physical map for a robot or a conceptual map for a software agent. It also determines the context, including its own state in the environment.
  3. The agent identifies its objectives based on predefined goals or learned behaviors. The agent then develops a plan to achieve its goals, which includes assessing different actions for their effectiveness in achieving the goal.
  4. The agent chooses and performs the best action based on its decision-making process.
  5. The agent gathers feedback from the environment about the results of its actions, which could be immediate sensor data or delayed outcomes.
  6. The agent updates its models and decision-making processes based on the feedback, which might involve updating machine learning models or refining rules.
  7. The agent may need to communicate with other agents or humans, which could involve sending data, reporting status, or coordinating actions.
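The steps above can be condensed into a sense → model → plan → act → learn loop. This is a minimal sketch; the class and method names are my own assumptions, and the scoring heuristic is a toy stand-in for real decision-making.

```python
# Minimal sketch of the agent loop described in steps 1-6 above.
# All names are illustrative; scoring is a toy heuristic.

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.model = {}       # the agent's internal model of the environment
        self.history = []     # feedback log used for learning (step 6)

    def sense(self, environment):
        # Step 1: gather (already-preprocessed) observations
        return dict(environment)

    def update_model(self, observation):
        # Step 2: fold observations into the world model
        self.model.update(observation)

    def plan(self):
        # Step 3: evaluate candidate actions against the goal
        candidates = ["wait", "move", "act"]
        return max(candidates, key=self.score)

    def score(self, action):
        # Toy heuristic: "act" is best once the target is visible
        return 1 if action == "act" and self.model.get("target_visible") else 0

    def act_and_learn(self, action, feedback):
        # Steps 4-6: execute, record feedback, refine future decisions
        self.history.append((action, feedback))

agent = Agent(goal="reach target")
obs = agent.sense({"target_visible": True})
agent.update_model(obs)
action = agent.plan()
agent.act_and_learn(action, feedback="success")
```

In a deployed agent, `plan` would be an LLM call or a learned policy and `act_and_learn` would update model weights or prompts, but the control flow is the same.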

From a developer’s perspective, creating and deploying an autonomous AI agent involves a series of systematic steps, encompassing design, development, testing, and deployment. 

Here is a general workflow for deploying agents:

  1. Problem Definition and Requirements Gathering
  2. Design
  3. Data Collection and Preparation
  4. Model Development
  5. Integration and System Development
  6. Implementation of Learning and Adaptation
  7. Testing and Validation
  8. Deployment
  9. User Interaction and Feedback (if applicable)
  10. Iteration and Improvement

However, this flow hides the complexity of agent development.

Andrew Ng describes the four design patterns of agentic workflows as Reflection, Tool Use, Planning, and Multi-agent Collaboration.

We can expand on these four design patterns as follows:

1. Reflection: Reflection refers to the ability of an AI agent to think about its own thinking process. This includes evaluating its actions, learning from experiences, and adapting its strategies based on past performance. Reflection enables agents to improve over time, make better decisions, and avoid repeating mistakes.

Key aspects of reflection include: Self-Monitoring – the agent monitors its own performance and processes; Learning from Experience – using techniques like reinforcement learning, the agent learns from the feedback its actions receive; and Adaptive Behavior – the agent modifies its strategies and behaviors based on past outcomes and new information.

Examples of reflection include autonomous vehicles that continuously analyse driving decisions and update the driving model based on new data, and game-playing agents that evaluate past games to improve strategies and decision-making in future games.
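A bare-bones version of the reflection pattern can be sketched as an agent that logs the outcome of each strategy it tries and shifts toward whichever has worked best. The strategy names and results below are illustrative.

```python
# Sketch of reflection: self-monitoring (record), learning from
# experience (success rates), and adaptive behavior (best_strategy).
from collections import defaultdict

class ReflectiveAgent:
    def __init__(self):
        self.outcomes = defaultdict(list)  # strategy -> list of 0/1 results

    def record(self, strategy, success):
        # Self-monitoring: log each attempt's result
        self.outcomes[strategy].append(1 if success else 0)

    def best_strategy(self):
        # Learning from experience: prefer the highest observed success rate
        rates = {s: sum(r) / len(r) for s, r in self.outcomes.items()}
        return max(rates, key=rates.get)

agent = ReflectiveAgent()
agent.record("aggressive", False)
agent.record("aggressive", False)
agent.record("cautious", True)
best = agent.best_strategy()  # adaptive behavior: switch to "cautious"
```

LLM-based agents implement the same idea with prompts ("critique your previous answer, then retry") rather than success-rate tables, but the record-evaluate-adapt loop is identical.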

2. Tool Use: Tool Use involves AI agents leveraging external tools or resources to achieve their goals. Key aspects include: using APIs, databases, and other software tools to obtain information or perform actions; delegating specific tasks to specialized tools so the agent can focus its own processing on decision-making and coordination; and seamlessly integrating external tools into the agent’s workflow to enhance functionality.

Examples include Robotic Process Automation (RPA).
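The tool-use pattern is essentially a registry plus a dispatcher: the agent decides which tool to call, and the tool does the work. In this sketch, plain functions stand in for APIs and databases; all tool names are illustrative.

```python
# Sketch of tool use: a registry of external "tools" (here plain
# functions standing in for APIs/databases) and a dispatcher.

TOOLS = {}

def tool(name):
    """Register a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("calculator")
def calculator(expression):
    # Stands in for a real calculation service
    return eval(expression, {"__builtins__": {}})

@tool("lookup")
def lookup(key):
    # Stands in for a database or web-search API call
    return {"capital_of_greece": "Athens"}.get(key, "unknown")

def dispatch(tool_name, argument):
    # The agent's job: choose the tool; the tool's job: do the work
    return TOOLS[tool_name](argument)

answer = dispatch("lookup", "capital_of_greece")
```

LLM frameworks implement the same pattern by exposing tool descriptions to the model and parsing its chosen tool call out of the response; the registry-and-dispatch structure is unchanged.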

3. Planning: Planning refers to the ability of an AI agent to formulate a sequence of actions to achieve a specific goal. Planning involves anticipating future states, considering various actions and their outcomes, and selecting the optimal sequence to reach the desired objective.

Key aspects of planning include: Goal Setting – defining clear objectives for the agent to achieve; Action Sequencing – developing a series of steps that lead from the current state to the goal state; and Contingency Handling – planning for alternative actions in case of unexpected changes or failures.

Examples include Robotics – an autonomous robot planning a path to navigate through an environment while avoiding obstacles; and Supply Chain Management – planning logistics and inventory management to ensure timely delivery of goods.
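Action sequencing, the core of the planning pattern, can be illustrated with a breadth-first search over a small state graph: the planner anticipates future states and returns the shortest action sequence from start to goal. The room graph below is a toy example of my own.

```python
# Sketch of planning as search: BFS over a state graph returns the
# shortest sequence of states from start to goal.
from collections import deque

def plan(graph, start, goal):
    """Return the shortest path of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # contingency handling: no plan exists

# Toy environment: rooms a robot can move between
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
route = plan(graph, "A", "D")
```

Real planners add costs, uncertainty, and replanning on failure, but the principle is the same: search over anticipated future states before acting.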

4. Multi-Agent Collaboration

Multi-Agent Collaboration involves multiple AI agents working together to achieve a common goal. This pattern is crucial in scenarios where tasks are too complex or large for a single agent to handle. Collaboration requires communication, coordination, and sometimes negotiation among agents.

Key aspects include: Communication – agents exchange information to align their actions and strategies; Coordination – agents synchronize their actions to avoid conflicts and ensure efficient task execution; and Negotiation – in some cases, agents negotiate to resolve conflicts or distribute resources.

Examples of multi-agent collaboration include: Swarm Robotics – multiple robots collaborating to perform tasks like search and rescue, environmental monitoring, or construction; and Distributed Computing – multiple AI systems working together to solve large-scale computational problems, such as data analysis or simulations.
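A minimal form of the coordination aspect can be sketched as a coordinator that splits a shared task into disjoint chunks, hands each chunk to a worker agent, and merges the partial results. Names are illustrative; real systems would add messaging and negotiation on top of this.

```python
# Sketch of multi-agent collaboration: a coordinator partitions the work
# (coordination) and combines each agent's partial result (communication).

class Worker:
    def __init__(self, name):
        self.name = name

    def process(self, chunk):
        # Each agent handles its own slice of the work
        return sum(chunk)

def coordinate(workers, task):
    # Coordination: split the task so agents don't duplicate effort
    size = len(task) // len(workers)
    chunks = [task[i * size:(i + 1) * size] for i in range(len(workers))]
    chunks[-1].extend(task[len(workers) * size:])  # remainder to last agent
    # Communication: collect each agent's partial result and combine
    return sum(w.process(c) for w, c in zip(workers, chunks))

workers = [Worker("w1"), Worker("w2"), Worker("w3")]
total = coordinate(workers, list(range(10)))  # partial sums of 0..9
```

This map-reduce shape is the simplest collaboration topology; peer-to-peer designs replace the central coordinator with direct agent-to-agent negotiation.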

In the next section, we will discuss the implications for developers.