# Contributing to async-batch-llm

Thank you for considering contributing to async-batch-llm!
## Development Setup

### 1. Clone and Install
```bash
git clone https://github.com/geoff-davis/async-batch-llm.git
cd async-batch-llm

# Create virtual environment and install dependencies
uv venv
uv sync --all-extras
```
### 2. Install Pre-commit Hooks

This will automatically run code quality checks before each commit.
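A typical install, assuming the repository ships a `.pre-commit-config.yaml` (the commands below are pre-commit's standard invocations, not project-specific ones):

```bash
# Install the git hooks defined in .pre-commit-config.yaml
uv run pre-commit install

# Optionally, run every hook against the full repo once
uv run pre-commit run --all-files
```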
## Development Workflow

### Running Tests
```bash
# Run all tests
uv run pytest

# Run specific test file
uv run pytest tests/test_basic.py -v

# Run with coverage
uv run pytest --cov=async_batch_llm --cov-report=html
```
### Code Quality

Always run quality checks before committing:
```bash
# Format code
uv run ruff format src/ tests/ examples/

# Lint and auto-fix issues
uv run ruff check src/ tests/ examples/ --fix

# Verify linting passes
uv run ruff check src/ tests/ examples/

# Type check
uv run mypy src/async_batch_llm/ --ignore-missing-imports

# Or run all checks at once
make ci
```
### Documentation

Build and preview documentation:
```bash
# Install docs dependencies
uv sync --extra docs

# Serve docs locally
uv run mkdocs serve

# Build docs
uv run mkdocs build
```

Then visit http://localhost:8000.
### Markdown Linting
```bash
# Lint markdown files
npx markdownlint-cli2 "README.md" "docs/**/*.md" "CLAUDE.md"

# Auto-fix markdown issues
npx markdownlint-cli2 "README.md" "docs/**/*.md" "CLAUDE.md" --fix

# Or use make target
make markdown-lint-fix
```
## Pre-Commit Checklist

Before committing, ensure:

- ✅ All tests pass: `uv run pytest`
- ✅ Linting passes: `uv run ruff check src/ tests/`
- ✅ Type checking passes: `uv run mypy src/async_batch_llm/`
- ✅ Markdown is clean: `make markdown-lint`

Or run everything at once:
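The Makefile target mentioned under Code Quality bundles all of these checks:

```bash
make ci
```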
## Pull Request Guidelines

- **Create a feature branch**: `git checkout -b feature/your-feature`
- **Write tests**: Add tests for new functionality
- **Update docs**: Update relevant documentation
- **Run quality checks**: Ensure all checks pass
- **Write clear commit messages**: Explain the "why", not just the "what"
- **Open PR**: Provide a clear description of changes
## Project Structure

```text
async-batch-llm/
├── src/async_batch_llm/     # Main package
│   ├── base.py              # Core data models
│   ├── parallel.py          # Main processor
│   ├── llm_strategies/      # Strategy implementations
│   ├── observers/           # Observer implementations
│   └── testing/             # Testing utilities
├── tests/                   # Test suite
├── examples/                # Example scripts
├── docs/                    # Documentation
└── CLAUDE.md                # AI assistant context
```
## Adding New Strategies

To add a new LLM provider strategy:

- Create the strategy in `src/async_batch_llm/llm_strategies/`
- Implement the `LLMCallStrategy` protocol
- Add tests in `tests/`
- Add an example in `examples/`
- Update documentation in `docs/examples/custom-strategies.md`
Example:

```python
from async_batch_llm import LLMCallStrategy


class MyProviderStrategy(LLMCallStrategy[str]):
    async def execute(self, prompt: str, attempt: int, timeout: float):
        # Your implementation
        return output, tokens
```
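A strategy can be exercised directly before wiring it into the processor. The sketch below uses a stand-in class with the same `execute` signature as the example above; `EchoStrategy`, its echoed output, and the token count are hypothetical placeholders, not the real provider API:

```python
import asyncio


class EchoStrategy:
    """Stand-in with the same execute signature as the example above."""

    async def execute(self, prompt: str, attempt: int, timeout: float):
        # A real strategy would call the provider's API here.
        return f"echo: {prompt}", 3  # (output, tokens) placeholder


async def main() -> None:
    strategy = EchoStrategy()
    output, tokens = await strategy.execute("hello", attempt=1, timeout=30.0)
    print(output, tokens)


asyncio.run(main())
```

Driving the coroutine yourself like this makes it easy to unit-test retry and timeout handling in isolation.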
## Questions?

- Open an issue
- Start a discussion
## License
By contributing, you agree that your contributions will be licensed under the MIT License.