# Migration Guide: v0.1

This guide covers migrating from v0.0.x to v0.1+, which introduced the strategy pattern. See the full migration document for complete details.
## Key Changes in v0.1

### 1. Strategy Pattern Introduction
v0.1 replaced the direct `agent=` parameter on `LLMWorkItem` with a `strategy=` parameter.

**Before (v0.0.x):**

```python
from async_batch_llm import LLMWorkItem
from pydantic_ai import Agent

agent = Agent("gemini-2.0-flash", result_type=Output)

work_item = LLMWorkItem(
    item_id="1",
    agent=agent,  # Direct agent
    prompt="..."
)
```
**After (v0.1+):**

```python
from async_batch_llm import LLMWorkItem, PydanticAIStrategy
from pydantic_ai import Agent

agent = Agent("gemini-2.0-flash", result_type=Output)
strategy = PydanticAIStrategy(agent=agent)  # Wrap the agent in a strategy

work_item = LLMWorkItem(
    item_id="1",
    strategy=strategy,  # Use the strategy
    prompt="..."
)
```
### 2. Why the Change?
The strategy pattern decouples the framework from specific LLM providers:
- ✅ Support any LLM provider (OpenAI, Anthropic, Google, LangChain, custom)
- ✅ Each strategy encapsulates provider-specific logic
- ✅ Framework handles retry, timeout, rate limiting uniformly
- ✅ Easy to test with mock strategies
- ✅ Resource lifecycle management (prepare/cleanup)
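The testing benefit is easy to see in practice. A minimal sketch of a mock strategy — it duck-types the `execute()` signature used by strategies rather than importing `LLMCallStrategy`, so it runs standalone; the class name and canned values are hypothetical:

```python
import asyncio

class MockStrategy:
    """Hypothetical test double with the same execute() signature as a strategy."""

    def __init__(self, canned_output: str):
        self.canned_output = canned_output
        self.calls = []  # record (prompt, attempt) pairs for assertions

    async def execute(self, prompt: str, attempt: int, timeout: float):
        self.calls.append((prompt, attempt))
        # Return the same (output, token-usage) shape a real strategy would.
        tokens = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
        return self.canned_output, tokens

strategy = MockStrategy("mocked answer")
output, tokens = asyncio.run(strategy.execute("Hello?", attempt=1, timeout=30.0))
print(output)  # mocked answer
```

Because the framework only ever calls `execute()`, a double like this can stand in for any provider in unit tests.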
### 3. Built-in Strategies

v0.1 introduced several built-in strategies:

- `PydanticAIStrategy` - Wraps PydanticAI agents
- `GeminiStrategy` - Direct Gemini API calls
- `GeminiCachedStrategy` - Gemini with context caching
### 4. Custom Strategies
You can now create custom strategies for any provider:

```python
from async_batch_llm import LLMCallStrategy

class OpenAIStrategy(LLMCallStrategy[str]):
    def __init__(self, client, model: str):
        self.client = client
        self.model = model

    async def execute(self, prompt: str, attempt: int, timeout: float):
        # Call the provider; the framework supplies attempt and timeout.
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}]
        )
        output = response.choices[0].message.content
        # Report token usage back to the framework alongside the output.
        tokens = {
            "input_tokens": response.usage.prompt_tokens,
            "output_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        }
        return output, tokens
```
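Because the client is plain constructor state, a custom strategy like this can be exercised without network access by passing a stub client. A sketch — the stub attribute paths mirror what the class touches, and the class body is repeated without the `LLMCallStrategy` base so the example runs standalone:

```python
import asyncio
from types import SimpleNamespace

class OpenAIStrategy:
    """Same body as above, minus the base class, so this sketch is self-contained."""

    def __init__(self, client, model: str):
        self.client = client
        self.model = model

    async def execute(self, prompt: str, attempt: int, timeout: float):
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}]
        )
        output = response.choices[0].message.content
        tokens = {
            "input_tokens": response.usage.prompt_tokens,
            "output_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        }
        return output, tokens

# Hypothetical stub mimicking the OpenAI client attributes the strategy uses.
class _FakeCompletions:
    async def create(self, model, messages):
        return SimpleNamespace(
            choices=[SimpleNamespace(message=SimpleNamespace(content="stubbed reply"))],
            usage=SimpleNamespace(prompt_tokens=5, completion_tokens=3, total_tokens=8),
        )

fake_client = SimpleNamespace(chat=SimpleNamespace(completions=_FakeCompletions()))

strategy = OpenAIStrategy(client=fake_client, model="gpt-4o-mini")
output, tokens = asyncio.run(strategy.execute("Hi", attempt=1, timeout=30.0))
print(output, tokens["total_tokens"])  # stubbed reply 8
```

The same stub technique works for any provider: swap out the attribute paths the fake exposes to match whatever your `execute()` calls.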