A comprehensive, community-maintained registry of AI/LLM model configurations. This repository provides standardized model metadata including pricing, features, and token limits across all major AI providers.
LLM model configs change often: prices drop, features expand, limits shift. This repository keeps that information current across providers and makes fixing stale data easy.
- Unified Schema — Consistent model configuration format across 17 providers
- Up-to-Date Pricing — Current cost information for input/output tokens, batch processing, and caching
- Feature Tracking — Know exactly what each model supports (vision, tools, structured output, etc.)
- Open Source — Community-driven updates ensure accuracy and coverage
| Provider | Models | Description |
|---|---|---|
| OpenAI | 81 | GPT-4, GPT-4o, GPT-5, o1, o3, DALL-E, Whisper, TTS |
| Anthropic | 21 | Claude 3, Claude 3.5, Claude 4 |
| AWS Bedrock | 139 | Claude, Llama, Titan, Mistral on AWS |
| Azure OpenAI | 77 | OpenAI models on Azure |
| Azure AI Foundry | 65 | Azure AI models |
| Google Vertex AI | 110 | Gemini, PaLM on GCP |
| Google Gemini | 25 | Gemini Pro, Ultra, Flash |
| Mistral AI | 37 | Mistral, Mixtral, Codestral |
| Cohere | 16 | Command, Embed models |
| Groq | 14 | Fast inference models |
| Together AI | 39 | Open source model hosting |
| DeepInfra | 67 | Open source model hosting |
| Perplexity | 25 | Search-augmented models |
| Cerebras | 8 | Fast inference models |
| Databricks | 28 | Databricks-hosted models |
| SambaNova | 16 | Enterprise AI models |
| AI21 | 13 | Jamba models |
```bash
git clone https://github.com/truefoundry/models.git
```

Each model YAML file follows this schema:
```yaml
# Required
model: gpt-4o                      # Model identifier

# Pricing
costs:
  input_cost_per_token: 0.0000025
  output_cost_per_token: 0.00001
  cache_read_input_token_cost: 0.00000125

# Token limits
limits:
  max_input_tokens: 128000
  max_output_tokens: 16384

# Features (array of strings)
features: [chat, vision, function_calling, tools]

# Metadata
mode: chat
original_provider: openai
is_deprecated: false
```
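The per-token costs make it easy to estimate what a request would cost. The snippet below is an illustrative sketch only: it assumes Python with PyYAML installed, uses the field names shown above, and the token counts are hypothetical.

```python
import yaml

# Load one model config; path and field names follow the schema above.
with open("providers/openai/gpt-4o.yaml") as f:
    config = yaml.safe_load(f)

costs = config["costs"]

# Hypothetical token counts for a single request.
input_tokens = 1_200
output_tokens = 350

estimate = (
    input_tokens * costs["input_cost_per_token"]
    + output_tokens * costs["output_cost_per_token"]
)
print(f"Estimated request cost: ${estimate:.6f}")
```

The files themselves are organized by provider, one directory per provider: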
```
providers/
├── <provider>/
│   ├── default.yaml        # Default params for all models under this provider
│   ├── <model>.yaml
│   └── ...
```
Example:
```
providers/
├── openai/
│   ├── default.yaml
│   ├── gpt-4o.yaml
│   ├── gpt-4o-mini.yaml
│   └── ...
├── anthropic/
│   ├── default.yaml
│   ├── claude-3-5-sonnet.yaml
│   └── ...
└── ...
```
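One way to consume this layout (a sketch, not an official loader): walk `providers/`, overlay each model file on its provider's `default.yaml`, and filter by the `features` array. The shallow-merge rule and the `vision` feature value are assumptions for illustration; it requires Python with PyYAML installed.

```python
from pathlib import Path
import yaml

def load_models(root: str = "providers"):
    """Yield (provider_name, merged_config) pairs, overlaying provider defaults."""
    for provider_dir in sorted(Path(root).iterdir()):
        if not provider_dir.is_dir():
            continue
        default_file = provider_dir / "default.yaml"
        defaults = yaml.safe_load(default_file.read_text()) if default_file.exists() else {}
        for model_file in sorted(provider_dir.glob("*.yaml")):
            if model_file.name == "default.yaml":
                continue
            overrides = yaml.safe_load(model_file.read_text()) or {}
            # Assumed merge rule: model values win over provider defaults (shallow merge).
            yield provider_dir.name, {**(defaults or {}), **overrides}

# Example: list models that declare vision support.
for provider, config in load_models():
    if "vision" in config.get("features", []):
        print(provider, config.get("model"))
```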
We welcome contributions! Please see our Contributing Guide for details.
- Clone the repository
- Create a new branch (`git checkout -b add-new-model`)
- Add or update model configurations
- Validate your YAML files
- Submit a pull request
To add a new model:

```bash
# Copy an existing model as a template
cp providers/openai/gpt-4o.yaml providers/openai/new-model.yaml

# Edit with your model's configuration
# Submit a PR!
```

Model pricing changes frequently. If you notice outdated pricing:
- Check the provider's official pricing page
- Update the relevant YAML file
- Submit a PR with a link to the source
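Note that provider pricing pages usually quote dollars per million tokens, while these configs store cost per single token. A quick conversion sketch (plain Python; the dollar figures are only examples, matching the gpt-4o values shown in the schema above):

```python
# Convert a pricing-page figure (USD per 1M tokens) to the per-token
# value used in the YAML files.
usd_per_million_input = 2.50      # e.g. "$2.50 / 1M input tokens"
usd_per_million_output = 10.00    # e.g. "$10.00 / 1M output tokens"

input_cost_per_token = usd_per_million_input / 1_000_000    # 0.0000025
output_cost_per_token = usd_per_million_output / 1_000_000  # 0.00001

print(input_cost_per_token, output_cost_per_token)
```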
Validate your YAML files before submitting:
```bash
# Using Python
python -c "import yaml; yaml.safe_load(open('providers/openai/gpt-4o.yaml'))"

# Using yq
yq eval '.' providers/openai/gpt-4o.yaml
```

This project is licensed under the MIT License; see the LICENSE file for details.