Creating intelligent AI applications that can handle complex tasks requires more than just making basic API calls. This tutorial will guide you through building a complete agentic AI workflow using OpenRouter API, which gives you access to multiple AI models (from OpenAI, Anthropic, Google, and more) through a single, consistent interface.
What You'll Learn
- Setting up a proper development environment for AI applications
- Managing API keys securely using environment variables
- Implementing a simple yet powerful agentic workflow
- Accessing models from different providers through OpenRouter
- Testing and comparing responses from various AI models
OpenRouter is a unified API gateway that gives you access to hundreds of AI models from various providers through a single endpoint. Instead of managing multiple API integrations, you can:
- Access models from OpenAI, Anthropic, Google, and others with one API
- Switch between models without changing your code
- Take advantage of automatic fallbacks and cost optimization
- Build more resilient AI applications with multi-model support
To follow this tutorial, you'll need:
- Python 3.8 or higher
- Basic familiarity with Python and API concepts
- A free OpenRouter account
Step 1: Project Setup
Let's start by setting up our project structure:
mkdir agentic-ai-workflow
cd agentic-ai-workflow
Create a requirements.txt file with the necessary dependencies:
openai>=1.0.0
python-dotenv>=1.0.0
requests>=2.28.2
jupyter>=1.0.0
notebook>=6.5.3
It's best practice to create a virtual environment to isolate your project dependencies. Here's how to set it up:
# Create a virtual environment (you can use python3.12 or whatever version you have)
python3 -m venv venv
# Activate the virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
Now install the dependencies within your virtual environment:
pip install -r requirements.txt
If the above command doesn't work, you might need to specify python3:
python3 -m pip install -r requirements.txt
Create a .env file to store your API key securely (never commit this to version control):
# OpenRouter API Key
# Get yours at
OPENROUTER_API_KEY=your_openrouter_api_key_here
# Your site URL and name (optional, used for OpenRouter leaderboard)
YOUR_SITE_URL=
YOUR_SITE_NAME=Your App Name
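Before moving on, it can help to confirm that the key actually loads. Here's a minimal sanity check (it assumes the .env file sits in the directory you run it from; the filename check_env.py is only a suggestion):
# check_env.py
import os
from dotenv import load_dotenv
# load_dotenv() reads key=value pairs from .env into the process environment
load_dotenv()
print("OPENROUTER_API_KEY loaded:", bool(os.getenv("OPENROUTER_API_KEY")))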
Step 2: Basic Client Setup
Let's create our first script to set up and test the OpenRouter client:
# basic_setup.py
"""
Basic OpenRouter API setup example
This script demonstrates how to properly set up the OpenRouter API client
using the OpenAI SDK with environment variables for API key management.
"""
import os
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables from .env file
load_dotenv()
def setup_openrouter_client():
"""
Initialize the OpenRouter client using the OpenAI SDK
with proper configuration.
Returns:
OpenAI: Configured OpenAI client pointing to OpenRouter
"""
# Get API key from environment variables
api_key = os.getenv("OPENROUTER_API_KEY")
if not api_key:
raise ValueError(
"OpenRouter API key not found. Please set the OPENROUTER_API_KEY "
"environment variable in your .env file."
)
# Initialize the client with OpenRouter configuration
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=api_key,
# Optional headers for OpenRouter leaderboard
default_headers={
"HTTP-Referer": os.getenv("YOUR_SITE_URL", ""),
"X-Title": os.getenv("YOUR_SITE_NAME", "Agentic AI Demo")
}
)
return client
def test_connection():
"""Test the connection to OpenRouter API with a simple completion request."""
try:
client = setup_openrouter_client()
# Make a simple test request
completion = client.chat.completions.create(
model="openai/gpt-3.5-turbo", # OpenRouter model format
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, world!"}
],
)
print("
Successfully connected to OpenRouter API!")
print(f"Model response: {completion.choices[0].message.content}")
except Exception as e:
        print(f"Error connecting to OpenRouter API: {e}")
if __name__ == "__main__":
test_connection()
Run this script to test your connection:
python basic_setup.py
If everything is set up correctly, you should see a successful connection message and a response from the AI model:
Successfully connected to OpenRouter API!
Model response: Hello! How can I assist you today?
Step 3: Comparing Different Models
One of the key benefits of OpenRouter is the ability to switch between different AI models easily. Let's create a script to compare responses from various models:
# model_comparison.py
"""
OpenRouter Model Comparison Example
This script demonstrates how to access models from different providers
(OpenAI, Anthropic, Google, etc.) using OpenRouter's unified API.
"""
import os
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables from .env file
load_dotenv()
# Initialize the OpenRouter client
def get_client():
"""
Initialize and return the OpenRouter client
"""
api_key = os.getenv("OPENROUTER_API_KEY")
if not api_key:
raise ValueError("OPENROUTER_API_KEY not found in environment variables")
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=api_key,
default_headers={
"HTTP-Referer": os.getenv("YOUR_SITE_URL", ""),
"X-Title": os.getenv("YOUR_SITE_NAME", "Model Comparison Demo")
}
)
return client
def generate_response(model, prompt):
"""
Generate a response using the specified model
Args:
model (str): OpenRouter model identifier
prompt (str): Text prompt to send to the model
Returns:
str: The model's response
"""
client = get_client()
response = client.chat.completions.create(
model=model,
messages=[
{"role": "user", "content": prompt}
]
)
return response.choices[0].message.content
def compare_models(prompt):
"""
Compare responses from different models for the same prompt
Args:
prompt (str): The prompt to send to all models
"""
# Define models from different providers to test
models = {
"OpenAI GPT-3.5": "openai/gpt-3.5-turbo",
"OpenAI GPT-4": "openai/gpt-4",
"Anthropic Claude": "anthropic/claude-instant-v1",
"Google PaLM 2": "google/palm-2-chat-bison",
"Mistral": "mistralai/mistral-7b-instruct-v0.2"
}
print(f"Prompt: {prompt}\n")
print("-" * 50)
for name, model_id in models.items():
try:
print(f"\n{name} ({model_id}):")
response = generate_response(model_id, prompt)
print(f"Response: {response}\n")
print("-" * 50)
except Exception as e:
print(f"Error with {name}: {str(e)}")
print("-" * 50)
if __name__ == "__main__":
# Example prompt for comparison
test_prompt = "Explain quantum computing in simple terms."
compare_models(test_prompt)
Run this script to see how different models respond to the same prompt:
python model_comparison.py
This allows you to compare the strengths and weaknesses of different models for your specific use case. Here's an example of the output you might see:
Prompt: Explain quantum computing in simple terms.
--------------------------------------------------
OpenAI GPT-3.5 (openai/gpt-3.5-turbo):
Response: Quantum computing is a type of computing that uses the principles of quantum mechanics to perform calculations. In traditional computing, information is stored in bits, which can either be a 0 or a 1. However, in quantum computing, information is stored in qubits, which can be 0, 1, or both at the same time. This allows quantum computers to perform multiple calculations simultaneously, making them much faster and more powerful than traditional computers for certain tasks. Quantum computing has the potential to revolutionize fields such as cryptography, drug discovery, and artificial intelligence.
--------------------------------------------------
OpenAI GPT-4 (openai/gpt-4):
Response: Quantum computing is a type of computing that's very different from the computers we use every day. It uses principles of quantum mechanics (a branch of physics that deals with phenomena on a very small scale, like molecules, atoms, and subatomic particles) to process information.
In regular computers, the fundamental unit of information is a "bit", which can be either a 0 or a 1. But in a quantum computer, it uses "quantum bits" or "qubits". A qubit can be both 0 and 1 at the same time, thanks to a property in quantum mechanics called superposition.
Additionally, qubits can be entangled, another property in quantum mechanics. When qubits are entangled, the state of one qubit is directly related to the state of the other, no matter how far they are.
These properties allow quantum computers to process a vast number of possibilities all at the same time, solve complex problems more rapidly compared to classical machines, and could revolutionize fields such as cryptography, optimization, drug discovery, and more. However, it is also important to note that quantum computing is still in early stages of development.
--------------------------------------------------
Mistral (mistralai/mistral-7b-instruct-v0.2):
Response: Quantum computing is a type of computing that uses the principles of quantum mechanics to perform operations on data. While classical computers use bits, which can only be in two states (0 or 1), quantum computers use quantum bits, or qubits. Qubits can exist in multiple states simultaneously, thanks to quantum mechanics phenomena such as superposition and entanglement.
Here's a simple analogy to help understand the concept of a qubit: Imagine a classic coin. In a classical computer, the coin can be heads or tails, just like a bit can be either 0 or 1. However, in a quantum computer, the qubit can be both heads and tails simultaneously, thanks to superposition. This means that quantum computers can perform many calculations at once, making them potentially much faster than classical computers for certain tasks.
Quantum computing is still in its infancy and faces significant challenges before it becomes a mainstream technology. But its potential to solve complex problems more efficiently than classical computers has made it an exciting area of research. Some of the potential applications include breaking encryption codes, optimizing complex systems, and simulating chemical reactions for drug discovery.
--------------------------------------------------
Note: You might see errors for some models if they're not available through your OpenRouter account or if the model IDs have changed since this tutorial was written.
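If you want to check which model IDs are currently valid, OpenRouter exposes a public model-listing endpoint. The sketch below uses the requests library from our requirements.txt and assumes the endpoint still returns its catalog under a "data" key, so treat it as a starting point rather than a guaranteed contract:
# list_models.py
import requests
# Fetch the catalog of models currently available through OpenRouter
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = resp.json().get("data", [])
print(f"{len(models)} models available")
for model in models[:20]:  # print the first 20 IDs as a sample
    print(model.get("id"))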
Step 4: Building an Agentic AI Workflow
Now, let's create the main agentic workflow. An agentic AI workflow involves breaking down complex tasks into manageable steps and executing them sequentially:
# agent_example.py
"""
Simple Agentic AI Workflow Example with OpenRouter
This script demonstrates a basic agentic workflow where an AI agent:
1. Analyzes a user query
2. Breaks it down into steps
3. Executes each step sequentially
4. Compiles a final response
"""
import os
import json
from dotenv import load_dotenv
from openai import OpenAI
import time
# Load environment variables from .env file
load_dotenv()
class Agent:
"""A simple AI agent that can solve tasks through multi-step reasoning"""
def __init__(self, model="openai/gpt-4"):
"""
Initialize the agent with the specified model
Args:
model (str): The OpenRouter model identifier to use
"""
self.model = model
self.client = self._setup_client()
self.conversation_history = []
def _setup_client(self):
"""Set up the OpenRouter client"""
api_key = os.getenv("OPENROUTER_API_KEY")
if not api_key:
raise ValueError(
"OpenRouter API key not found. Please set the OPENROUTER_API_KEY "
"environment variable in your .env file."
)
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=api_key,
default_headers={
"HTTP-Referer": os.getenv("YOUR_SITE_URL", ""),
"X-Title": os.getenv("YOUR_SITE_NAME", "Agentic AI Demo")
}
)
return client
def _call_llm(self, messages):
"""
Make an API call to the language model
Args:
messages (list): List of message objects for the conversation
Returns:
str: The model's response content
"""
try:
response = self.client.chat.completions.create(
model=self.model,
messages=messages
)
return response.choices[0].message.content
except Exception as e:
print(f"Error calling LLM: {e}")
return None
def analyze_task(self, user_query):
"""
Break down a user query into discrete steps
Args:
user_query (str): The user's request
Returns:
list: List of steps to solve the task
"""
system_prompt = """
You are an AI task planner. Your job is to break down a user's request
into a series of clear, discrete steps that can be executed sequentially.
Respond with a JSON array of steps, where each step has:
1. A "description" field describing what needs to be done
2. A "reasoning" field explaining why this step is necessary
Format your response as a valid JSON array without any additional text.
"""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"Break down this task into steps: {user_query}"}
]
response = self._call_llm(messages)
try:
# Extract the JSON array from the response
steps = json.loads(response)
return steps
except json.JSONDecodeError:
print("Error: Could not parse response as JSON")
print(f"Raw response: {response}")
return []
def execute_step(self, step, context):
"""
Execute a single step in the plan
Args:
step (dict): The step to execute
context (str): Context from previous steps
Returns:
str: Result of executing the step
"""
system_prompt = """
You are an AI assistant focusing on executing a specific task step.
Use the provided context and step description to complete this specific step only.
Your response should be detailed and directly address the step's requirements.
"""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"Context so far: {context}\n\nExecute this step: {step['description']}\n\nReasoning: {step['reasoning']}"}
]
return self._call_llm(messages)
def compile_results(self, steps_results, user_query):
"""
Compile the results of all steps into a final response
Args:
steps_results (list): Results from each executed step
user_query (str): The original user query
Returns:
str: Final compiled response
"""
system_prompt = """
You are an AI assistant that compiles information from multiple processing steps
into a coherent, unified response. Your goal is to present the information clearly
and directly address the user's original query.
"""
steps_text = "\n\n".join([f"Step {i+1} result: {result}" for i, result in enumerate(steps_results)])
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"Original query: {user_query}\n\nResults from steps:\n{steps_text}\n\nPlease provide a comprehensive, unified response to the original query."}
]
return self._call_llm(messages)
def solve(self, user_query):
"""
Solve a task through multi-step reasoning
Args:
user_query (str): The user's request
Returns:
dict: A dictionary containing the original query, steps taken,
results of each step, and the final response
"""
        print(f"Analyzing task: {user_query}")
steps = self.analyze_task(user_query)
if not steps:
return {"error": "Could not break down the task into steps"}
        print(f"Breaking down into {len(steps)} steps:")
for i, step in enumerate(steps):
print(f" {i+1}. {step['description']}")
steps_results = []
context = ""
for i, step in enumerate(steps):
            print(f"\nExecuting step {i+1}: {step['description']}")
result = self.execute_step(step, context)
steps_results.append(result)
context += f"\nStep {i+1} result: {result}"
            print("Completed")
# Add a small delay to avoid rate limits
time.sleep(1)
        print("\nCompiling final response...")
final_response = self.compile_results(steps_results, user_query)
return {
"query": user_query,
"steps": steps,
"step_results": steps_results,
"final_response": final_response
}
def main():
"""Main function to demonstrate the agent's capabilities"""
# Initialize the agent with a capable model
agent = Agent(model="openai/gpt-4")
# Example query
user_query = "Research and suggest three possible vacation destinations for a family with young children, considering budget-friendly options."
# Solve the task
result = agent.solve(user_query)
# Print the final response
print("\n" + "=" * 50)
print("FINAL RESPONSE:")
print("=" * 50)
print(result["final_response"])
if __name__ == "__main__":
main()
Run the agent example with:
python agent_example.py
When you run this script, you'll see the agent breaking down the task, executing each step, and compiling a final response. Here's what the output might look like:
Analyzing task: Research and suggest three possible vacation destinations for a family with young children, considering budget-friendly options.
Breaking down into 6 steps:
1. Identify the important factors to consider when choosing a family vacation destination
2. Consider the interests and needs of young children
3. Determine the family's budget for the vacation
4. Research vacation destinations that match the identified factors and budget
5. Narrow down the list to three possible vacation destinations
6. Provide a brief overview for each suggested destination, including attractions, accommodations, and estimated cost
Executing step 1: Identify the important factors to consider when choosing a family vacation destination
Completed
Executing step 2: Consider the interests and needs of young children
Completed
Executing step 3: Determine the family's budget for the vacation
Completed
Executing step 4: Research vacation destinations that match the identified factors and budget
Completed
Executing step 5: Narrow down the list to three possible vacation destinations
Completed
Executing step 6: Provide a brief overview for each suggested destination, including attractions, accommodations, and estimated cost
Completed
Compiling final response...
==================================================
FINAL RESPONSE:
==================================================
For a family vacation with young children, three budget-friendly options could be Disney World in Florida, Yellowstone National Park in Wyoming, and San Diego, California.
Disney World is a classic choice, providing a magical experience for children. The park offers numerous entertainment options such as Magic Kingdom Park, Epcot, Disney's Animal Kingdom, and Disney's Hollywood Studios. The estimated cost for a family of four for a week, including park tickets, meals, and accommodation, may range from $3,500 to $6,000. You can choose to stay in one of Disney's resort hotels or opt for a vacation rental outside the park to save costs.
Alternatively, Yellowstone National Park allows your family to explore nature's wonders. Major attractions include wildlife viewing, geothermal features like Old Faithful, hiking trails, and ranger-led programs tailored for kids. Accommodation options range from campsites starting at $30 per night or lodges like the Old Faithful Inn. A week-long trip for a family of four, taking into account park admission, accommodation, and meals, could range from $1,000 - $4,000.
Lastly, San Diego offers a mix of city and marine life. You can visit the famous San Diego Zoo or the New Children's Museum, relax at the beach, or get adventurous at SeaWorld and Legoland. Family-friendly hotels like the Paradise Point Resort & Spa start from $200 per night, and vacation rentals in the city could be a more economical choice. A one-week stay's estimated cost may range between $2,500 -$4,500, inclusive of accommodation, food, and entry to various attractions.
These estimates serve as a rough guide and may vary based on factors like the time of travel, specific activities chosen, and the mode of transportation used. Regardless of the destination you choose, ensure that it aligns with your children's interests, has kid-friendly attractions and accommodation, food options that cater to their palate, and is safe and easily accessible. It's also essential to double-check any travel restrictions due to COVID-19.
As you can see, the agent has successfully:
- Broken down the complex query into logical steps
- Executed each step with careful consideration
- Compiled the information into a comprehensive, well-structured response
Let's break down what's happening in our agent_example.py:
- Task Planning: The agent uses a "task planner" persona to break down the user's request into discrete steps.
- Structured Output: The planning phase returns a JSON structure with clear steps and reasoning (see the example after this list).
- Sequential Execution: Each step is executed in order, building on the context of previous steps.
- Progressive Context: Each step's result is added to the context for future steps.
- Final Synthesis: All results are compiled into a coherent, comprehensive response.
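For reference, a well-formed plan from the planning phase looks roughly like the snippet below. The step contents are hypothetical; what matters is that the response is a bare JSON array whose items carry description and reasoning fields, since that is exactly what analyze_task passes to json.loads:
[
  {
    "description": "Identify the factors that matter when choosing a family vacation destination",
    "reasoning": "These criteria guide all of the later research steps"
  },
  {
    "description": "Shortlist destinations that match the criteria and budget",
    "reasoning": "Narrowing the options makes the final comparison manageable"
  }
]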
This approach has several advantages:
- Complex Problem Solving: Breaking down complex problems into manageable steps.
- Improved Reasoning: Each step can focus on a specific aspect of the problem.
- Better Explainability: The process is transparent and the reasoning is visible.
- Flexibility: You can swap out models for different steps based on their strengths (a sketch of this follows the list).
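To illustrate the flexibility point, here's a small, hypothetical variant of the Agent class that plans with a stronger model and executes individual steps with a cheaper one. It assumes agent_example.py is importable from the same directory; the class name and model choices are only examples:
# mixed_model_agent.py
from agent_example import Agent

class MixedModelAgent(Agent):
    """Plan with a stronger model, execute individual steps with a cheaper one."""

    def __init__(self, planner_model="openai/gpt-4", executor_model="openai/gpt-3.5-turbo"):
        # The planner model drives analyze_task() and compile_results()
        super().__init__(model=planner_model)
        self.executor_model = executor_model

    def execute_step(self, step, context):
        # Temporarily switch models so _call_llm uses the cheaper one for this step
        planner = self.model
        self.model = self.executor_model
        try:
            return super().execute_step(step, context)
        finally:
            self.model = planner

if __name__ == "__main__":
    agent = MixedModelAgent()
    result = agent.solve("Plan a weekend trip to the mountains on a budget.")
    print(result["final_response"])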
This agentic workflow pattern can be applied to many real-world scenarios:
- Research Assistants: Breaking down research questions into investigation steps
- Content Creation: Planning, researching, drafting, and refining content
- Data Analysis: Processing, analyzing, and interpreting data
- Customer Support: Diagnosing issues, finding solutions, and providing explanations
- Decision Support: Analyzing options, weighing pros and cons, and making recommendations
Here are some ways to extend this project:
- Add Memory: Implement a vector database to give your agent long-term memory.
- Add Tools: Enable your agent to use tools (like web search, calculators, etc.).
- Optimize Cost: Implement a model router that uses cheaper models for simpler tasks.
- Improve Error Handling: Add retry logic and better error handling (a minimal retry sketch follows this list).
- Add User Feedback: Implement a feedback loop to improve the agent's responses.
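As an example of the error-handling idea, here's a minimal retry helper with exponential backoff that could wrap the API call inside _call_llm. It's a sketch: in real code you would catch the specific rate-limit and timeout exceptions raised by the OpenAI SDK rather than a bare Exception:
# retry_helper.py
import time

def call_with_retries(client, model, messages, max_retries=3, base_delay=2.0):
    """Call the chat completions endpoint, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(model=model, messages=messages)
            return response.choices[0].message.content
        except Exception as exc:
            wait = base_delay * (2 ** attempt)
            print(f"Attempt {attempt + 1}/{max_retries} failed ({exc}); retrying in {wait:.0f}s")
            time.sleep(wait)
    return None  # all attempts failed; the caller decides how to handle this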
You've built a complete agentic AI workflow using OpenRouter API! This approach allows you to create more sophisticated AI applications that can break down complex problems into manageable steps.
The key benefits of using OpenRouter for your agentic workflow include:
- Model Flexibility: Easily switch between models from different providers.
- Cost Optimization: Use the most cost-effective model for each task.
- Redundancy: If one provider is unavailable, your application can fall back to others.
- Simplified Integration: One API for many models means less code to maintain.
Now, you have the foundation to build more sophisticated AI agents that can solve complex problems through multi-step reasoning. The possibilities are endless!
Get the complete code on .
Happy building!