3 min read

A Simple Framework for Designing AI Agents.

Agentic AI is the next stop (or are we there already?) on the winding, weaving and land-mine-ridden road to AGI. Clients have switched focus from 'WTF is AI?' to 'WTF is an AI Agent?'.

💡
An AI Agent is a computational entity that perceives its environment through sensors and acts within that environment through actuators to achieve specified goals. Such agents often leverage machine learning techniques, including reinforcement learning and deep learning, to learn optimal action policies and improve their performance over time.
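The definition above can be sketched as a minimal perceive-decide-act loop. This is an illustrative toy, not any particular framework's API: the `ThermostatAgent`, its thresholds, and the environment dict are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ThermostatAgent:
    target_temp: float = 21.0
    log: list = field(default_factory=list)

    def perceive(self, environment: dict) -> float:
        """Sensor: read the current temperature from the environment."""
        return environment["temperature"]

    def decide(self, temperature: float) -> str:
        """Policy: choose an action that moves us toward the goal."""
        if temperature < self.target_temp - 0.5:
            return "heat"
        if temperature > self.target_temp + 0.5:
            return "cool"
        return "idle"

    def act(self, environment: dict, action: str) -> None:
        """Actuator: change the environment and remember what we did."""
        delta = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
        environment["temperature"] += delta
        self.log.append(action)

env = {"temperature": 18.0}
agent = ThermostatAgent()
for _ in range(5):
    agent.act(env, agent.decide(agent.perceive(env)))
```

A learning agent would replace the hand-written `decide` policy with one trained via reinforcement learning, but the sense-decide-act skeleton stays the same.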

Some Trend Analysis for Context.

The Agentic AI market is experiencing substantial growth, with projections indicating a Compound Annual Growth Rate (CAGR) of 43.8% and a market value reaching $196.6 billion by 2034, up from $5.2 billion in 2024. (Source: Market.us)

and ...

By 2028, a third of all enterprise software applications will include agentic AI capabilities, up from less than 1% in 2024, and about 15% of all work decisions will be made by agents rather than humans. (Source: Tech Target)

Irrespective of how we slice this bread, Agentic AI will dominate the airwaves (and hype cycles) for at least the next 5 years.

A Framework for Designing AI Agents.

I like to refer to agents as one-trick ponies. They should have a specific identity and defined objective, access to long-term memory and tools, and, finally, a penchant for inter-agent collaboration.

Specific Defined Identity and Objective (or 'Who am I? What is my purpose?')

An agent must have an 'identity'. For example, the image below shows results from Gemini before and after it was given a SPECIFIC identity:

Prompt without any identity clarified returns what looks like a definition from a textbook...
... while on the other hand, giving it an identity (3rd-grade teacher) results in a playful response.

Short-Term and Long-Term Memory (a.k.a 'I have a vague recollection of the last time I did this work').

Agents keep learning from the outcomes of their processing and combine those outcomes with memories of what they have done in the past (through a process called reflection).

💡
In the realm of Agentic AI, "reflection" refers to a crucial process that enables AI agents to improve their performance over time. It's akin to how humans review their actions and learn from their experiences.

Some known techniques to help with Long-Term Memory are Vector Databases, Knowledge Graphs, Memory Buffers, Reflexion Frameworks, Semantic Memory and Episodic Memory.
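To make the vector-database idea concrete, here is a minimal sketch of long-term memory as an in-memory vector store. The toy deterministic "embedding" (character-code sums per word) stands in for a real embedding model, and every name here is illustrative; a production agent would use a learned embedding and an actual vector database.

```python
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hash each word to a slot."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class LongTermMemory:
    """Store past outcomes; recall the most similar one for a new task."""

    def __init__(self):
        self.entries: list[tuple[list[float], str]] = []

    def remember(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def recall(self, query: str) -> str:
        return max(self.entries, key=lambda e: cosine(e[0], embed(query)))[1]

memory = LongTermMemory()
memory.remember("deploy failed because the API key expired")
memory.remember("customer prefers weekly email summaries")
```

Reflection then becomes a loop: after each task, the agent writes a lesson into `remember`, and before the next similar task it calls `recall` to fold that vague recollection back into its plan.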

Proper Planning (a.k.a 'One thing at a time').

Software engineers use SOLID, a set of principles for designing maintainable object-oriented software. The S in SOLID stands for the Single Responsibility Principle ('a class should have only one reason to change') and pushes developers to think through how they lay out the web of functionality in their systems. Such architectures are easier to manage and troubleshoot, and since agents are also a software construct, arguably, S can (and should) apply to them too. Each step in an agent, therefore, should be responsible for only one kind of output, which can be sent to the next step(s) in the agent or to a different agent altogether.
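The one-output-per-step idea can be sketched as a pipeline where each stage does exactly one job and hands a single kind of result to the next. The stage names below are illustrative assumptions, not any framework's API.

```python
def extract_numbers(text: str) -> list[float]:
    """Single responsibility: pull numeric tokens out of raw text."""
    return [float(tok) for tok in text.split() if tok.replace(".", "", 1).isdigit()]

def summarize(numbers: list[float]) -> dict:
    """Single responsibility: turn numbers into summary statistics."""
    return {"count": len(numbers), "total": sum(numbers)}

def format_report(stats: dict) -> str:
    """Single responsibility: render statistics for the next step (or agent)."""
    return f"{stats['count']} values, total {stats['total']}"

# The pipeline is just composition; each step can be tested or swapped alone.
report = format_report(summarize(extract_numbers("sold 3 units at 4.5 and 2 more")))
```

Because each step owns one kind of output, a misbehaving stage can be debugged (or handed to a different agent) without touching the others.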

Inter-Agent-Collaboration (a.k.a 'Let's hold hands and sing Kumbaya').

This suggestion points towards different agents exchanging information, learning from one another, teaching one another, and ultimately becoming better at what they are tasked with doing.

💡
The collaboration of AI agents, also known as multi-agent systems (MAS), is a fascinating area of AI research. It aims to create systems where multiple autonomous agents work together to achieve common or individual goals.

Well-known agentic collaboration platforms include OpenAI Gym multi-agent environments, AWS RoboMaker, Microsoft Azure IoT, Ray (Anyscale) and Mesa (Python).


I write to remember, and if, in the process, I can help someone learn about Containers, Orchestration (Docker Compose, Kubernetes), GitOps, DevSecOps, VR/AR, Architecture, and Data Management, that is just icing on the cake.