Category: Multi-Agent Systems | Frameworks: Microsoft AutoGen Core | Language: Python 3.14+
A comprehensive demonstration of Microsoft AutoGen Core framework fundamentals, showcasing custom agent creation, message routing, runtime management, and LLM integration patterns.
- 🤖 Custom Agent Development: Build custom agents using RoutedAgent base class
- 📨 Message Handling: Implement custom message types and handlers
- 🔄 Runtime Management: Single-threaded agent runtime with lifecycle management
- 🧠 LLM Integration: OpenAI model integration through AssistantAgent delegation
- 🔀 Agent Communication: Inter-agent messaging with typed message protocols
- ⚡ Asynchronous Processing: Full async/await support for concurrent operations
The system demonstrates two types of custom agents: a SimpleAgent with a predefined response pattern, and an LLM-backed MyLLMAgent that delegates to OpenAI through an AssistantAgent.
```python
import asyncio
from dataclasses import dataclass

from autogen_core import (
    AgentId,
    MessageContext,
    RoutedAgent,
    SingleThreadedAgentRuntime,
    message_handler,
)
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


@dataclass
class Message:
    content: str


class SimpleAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("Simple")

    @message_handler
    async def on_my_message(self, message: Message, ctx: MessageContext) -> Message:
        return Message(
            content=f"This is {self.id.type}-{self.id.key}. You said '{message.content}' and I disagree."
        )


class MyLLMAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("LLMAgent")
        model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
        self._delegate = AssistantAgent("LLMAgent", model_client=model_client)

    @message_handler
    async def handle_my_message_type(self, message: Message, ctx: MessageContext) -> Message:
        print(f"{self.id.type} received message: {message.content}")
        text_message = TextMessage(content=message.content, source="user")
        response = await self._delegate.on_messages([text_message], ctx.cancellation_token)
        reply = response.chat_message.content
        print(f"{self.id.type} responded: {reply}")
        return Message(content=reply)


async def main():
    runtime = SingleThreadedAgentRuntime()
    await SimpleAgent.register(runtime, "simple_agent", lambda: SimpleAgent())
    await MyLLMAgent.register(runtime, "LLMAgent", lambda: MyLLMAgent())
    runtime.start()  # Start processing messages in the background.

    # Agent communication
    response = await runtime.send_message(Message("Hi there!"), AgentId("LLMAgent", "default"))
    print(">>>", response.content)
    response = await runtime.send_message(Message(response.content), AgentId("simple_agent", "default"))
    print(">>>", response.content)
    response = await runtime.send_message(Message(response.content), AgentId("LLMAgent", "default"))
    print(">>>", response.content)

    await runtime.stop()
    await runtime.close()


if __name__ == "__main__":
    asyncio.run(main())
```

Message flow:

User Input → Runtime → LLM Agent → OpenAI API → Response → Simple Agent → Response → LLM Agent → Final Output
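Running the pipeline above needs a live OpenAI key. The core mechanic, a runtime dispatching a typed message to a registered agent and awaiting the reply, can be sketched with plain asyncio. This is a simplified stand-in; all names here are illustrative, not AutoGen's API:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Message:
    content: str


class MiniRuntime:
    """Toy runtime: maps agent names to async handlers (illustration only)."""

    def __init__(self) -> None:
        self._agents = {}

    def register(self, name, handler) -> None:
        self._agents[name] = handler

    async def send_message(self, message: Message, agent_name: str) -> Message:
        # Dispatch to the named agent and await its typed reply.
        return await self._agents[agent_name](message)


async def simple_agent(message: Message) -> Message:
    return Message(content=f"You said '{message.content}' and I disagree.")


async def demo() -> str:
    runtime = MiniRuntime()
    runtime.register("simple_agent", simple_agent)
    reply = await runtime.send_message(Message("Hi there!"), "simple_agent")
    return reply.content


print(asyncio.run(demo()))  # → You said 'Hi there!' and I disagree.
```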
- Python 3.14+
- OpenAI API key
- UV package manager

- Clone the repository:

```shell
git clone <repository-url>
cd 41-autogen_core
```

- Install dependencies:

```shell
uv sync
```

- Set up environment variables:

```shell
# Create .env file
OPENAI_API_KEY=your_openai_api_key_here
```

Run the demo:

```shell
uv run main.py
```

Expected output:

```
LLMAgent received message: Hi there!
LLMAgent responded: Hello! I'm an AI assistant powered by AutoGen Core. How can I help you today?
>>> Hello! I'm an AI assistant powered by AutoGen Core. How can I help you today?
>>> This is simple_agent-default. You said 'Hello! I'm an AI assistant powered by AutoGen Core. How can I help you today?' and I disagree.
LLMAgent received message: This is simple_agent-default. You said 'Hello! I'm an AI assistant powered by AutoGen Core. How can I help you today?' and I disagree.
```
- SimpleAgent: Basic agent with predefined response pattern
- LLMAgent: LLM-powered agent using OpenAI gpt-4o-mini
- Runtime: SingleThreadedAgentRuntime for message processing
- Message Protocol: Custom Message dataclass for typed communication
- Model: gpt-4o-mini
- Client: OpenAIChatCompletionClient
- Delegation: AssistantAgent for LLM capabilities
- Message Type: TextMessage for LLM communication
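The delegation pattern listed above, an agent holding a model client and forwarding messages to it, can be shown with a stub client. `EchoModelClient` and `DelegatingAgent` are hypothetical names standing in for `OpenAIChatCompletionClient` and `AssistantAgent`:

```python
import asyncio


class EchoModelClient:
    """Stand-in for a real model client; just echoes the prompt back."""

    async def create(self, prompt: str) -> str:
        return f"echo: {prompt}"


class DelegatingAgent:
    """Holds a client and forwards messages to it, mirroring how
    MyLLMAgent delegates to its AssistantAgent."""

    def __init__(self, client) -> None:
        self._client = client

    async def handle(self, content: str) -> str:
        return await self._client.create(content)


async def demo_delegation() -> str:
    agent = DelegatingAgent(EchoModelClient())
    return await agent.handle("Hi there!")


print(asyncio.run(demo_delegation()))  # echo: Hi there!
```

Swapping the client for a real one changes no agent code, which is the point of the pattern.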
- AutoGen Core Fundamentals: Understanding the core framework architecture
- Custom Agent Creation: Building agents with RoutedAgent base class
- Message Handling: Implementing typed message protocols with decorators
- Runtime Management: Agent lifecycle and runtime configuration
- LLM Integration: Delegation patterns for AI model integration
- Asynchronous Programming: Async/await patterns in agent communication
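Because every handler is a coroutine, independent requests can be issued concurrently with `asyncio.gather`. The sketch below uses a stub `ask` coroutine in place of `runtime.send_message`:

```python
import asyncio


async def ask(agent_name: str, content: str) -> str:
    # Stand-in for runtime.send_message; the sleep simulates I/O latency.
    await asyncio.sleep(0.01)
    return f"{agent_name} handled '{content}'"


async def fan_out() -> list[str]:
    # Both requests run concurrently; total wait is one sleep, not two.
    return list(await asyncio.gather(
        ask("simple_agent", "a"),
        ask("LLMAgent", "b"),
    ))


print(asyncio.run(fan_out()))
```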
- Agent Registration: Ensure agents are registered before runtime.start()
- Message Types: Verify message types match handler signatures
- Runtime Lifecycle: Always call stop() and close() after runtime usage
- API Key Issues: Verify OpenAI API key is correctly set in environment
- Dependency Conflicts: Ensure compatible AutoGen versions
For debugging agent communication, you can add logging:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
# This will show detailed agent runtime information
```

The system includes built-in error handling:
```python
import logging

logger = logging.getLogger(__name__)

try:
    response = await runtime.send_message(message, agent_id)
except Exception as e:
    logger.error(f"Agent communication failed: {e}")
    # Implement fallback or retry logic
```

Dependencies:

```toml
dependencies = [
    "autogen-agentchat==0.4.9.3",  # Agent framework
    "autogen-core>=0.4.9.3",       # Core framework
    "autogen-ext>=0.4.0",          # OpenAI extensions
    "openai>=1.0.0",               # OpenAI API client
    "python-dotenv>=1.2.1",        # Environment variables
    "tiktoken>=0.5.0",             # Token counting
]
```

- Framework Fundamentals: Deep dive into AutoGen Core architecture
- Custom Agent Patterns: Demonstrates extensible agent design
- Type Safety: Dataclass-based message protocols
- Runtime Management: Professional agent lifecycle handling
- LLM Integration: Seamless OpenAI model integration
- Communication Patterns: Inter-agent messaging with routing
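The dataclass-based message protocol can be hardened slightly. Below is a sketch using a frozen dataclass with a validation hook; this is an optional pattern, not something the project requires, and `ChatMessage` is a hypothetical type:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChatMessage:
    """Frozen so payloads can't be mutated in transit between agents."""

    content: str

    def __post_init__(self) -> None:
        # Validate at construction time rather than inside each handler.
        if not isinstance(self.content, str):
            raise TypeError("content must be a string")


msg = ChatMessage("Hi there!")
print(msg)  # ChatMessage(content='Hi there!')
```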
- Agent Registration: Dynamic agent registration with factory patterns
- Message Routing: Type-safe message handling with decorators
- Runtime States: Proper runtime lifecycle management
- Delegation Patterns: Clean separation between agents and LLMs
- Error Handling: Robust exception management in async contexts
- Extensibility: Easy to add new agent types and message protocols
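For the exception-management point, one common pattern is an exponential-backoff retry wrapper around sends. `send_with_retry` and `flaky_send` below are illustrative helpers, not part of AutoGen's API:

```python
import asyncio


async def send_with_retry(send, message, attempts: int = 3, base_delay: float = 0.01):
    """Retry an async send, doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return await send(message)
        except Exception:
            if attempt == attempts - 1:
                raise  # Out of attempts: surface the last error.
            await asyncio.sleep(base_delay * 2 ** attempt)


# Flaky stand-in for runtime.send_message: fails once, then succeeds.
calls = {"n": 0}


async def flaky_send(message):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return f"ok: {message}"


print(asyncio.run(send_with_retry(flaky_send, "Hi")))  # ok: Hi
```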
- The system demonstrates AutoGen Core's low-level agent capabilities
- Message handlers use decorators for type-safe routing
- Runtime management follows proper async patterns
- LLM integration uses delegation for clean architecture
- The framework supports complex multi-agent workflows
- Custom message types enable flexible communication protocols
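The decorator-based routing noted above can be approximated in a few lines. The real `@message_handler` infers the message type from the handler's parameter annotation; this toy version registers it explicitly:

```python
import asyncio
from dataclasses import dataclass

_HANDLERS = {}


def handler(message_type):
    """Toy version of @message_handler: map a message type to a coroutine."""

    def decorator(func):
        _HANDLERS[message_type] = func
        return func

    return decorator


@dataclass
class Ping:
    text: str


@handler(Ping)
async def on_ping(message: Ping) -> str:
    return f"pong: {message.text}"


async def dispatch(message):
    # Route by the message's concrete type, as the runtime does for handlers.
    return await _HANDLERS[type(message)](message)


print(asyncio.run(dispatch(Ping("hello"))))  # pong: hello
```

Adding a new protocol is then just a new dataclass plus a decorated handler.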
- Multi-Runtime Support: Multiple agent runtimes with communication
- Advanced Message Types: Complex message protocols with validation
- Agent State Management: Persistent agent state across sessions
- Performance Monitoring: Agent performance metrics and logging
- Security Features: Message encryption and access controls
- Configuration Management: YAML/JSON configuration for agents
- Testing Framework: Comprehensive agent testing utilities
Project Repository: 41-autogen_core