feat: add MiniMax as LLM provider with M2.7 as default model#247

Open
octo-patch wants to merge 2 commits into Alibaba-NLP:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch octo-patch commented Mar 15, 2026

Summary

Add MiniMax as an LLM provider for the summary model and qwen-agent framework, with the latest MiniMax-M2.7 as the default model.

Changes

  • Add MiniMax LLM provider (TextChatAtMiniMax) in the qwen-agent framework with auto-detection by model name
  • Support MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, MiniMax-M2.5, and MiniMax-M2.5-highspeed models
  • Add MINIMAX_API_KEY fallback in tool_visit.py for page summarization
  • Update .env.example with MiniMax configuration
  • Document MiniMax integration in README
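A minimal sketch of the model-name auto-detection the first bullet describes, assuming a simple substring match on "minimax" (the actual registry code in qwen-agent may differ; `detect_provider` and the `'oai'` fallback name are illustrative):

```python
# Sketch of provider auto-detection by model name, as described in this PR.
# SUPPORTED_MINIMAX_MODELS lists the models named in the Changes section;
# the function and return values are assumptions, not the actual diff.
SUPPORTED_MINIMAX_MODELS = {
    'MiniMax-M2.7',
    'MiniMax-M2.7-highspeed',
    'MiniMax-M2.5',
    'MiniMax-M2.5-highspeed',
}

def detect_provider(model_name: str) -> str:
    """Pick an LLM provider key from the model name (illustrative only)."""
    if 'minimax' in model_name.lower():
        return 'minimax'
    return 'oai'  # fall back to the OpenAI-compatible default provider
```

With this, any of the four supported models routes to the MiniMax provider without the caller naming it explicitly.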

Why

MiniMax-M2.7 is the latest flagship model, with enhanced reasoning and coding capabilities and a 204K-token context window well suited to long-document summarization tasks.

Testing

  • Python syntax validation passed for all modified files
  • MiniMax provider inherits from TextChatAtOAI (OpenAI-compatible), minimal risk
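Since the provider inherits from the OpenAI-compatible `TextChatAtOAI`, configuring it should amount to an OpenAI-style config dict. A hypothetical sketch (the endpoint URL and config keys are assumptions, not taken from the PR's code):

```python
import os

# Hypothetical config builder for the MiniMax provider. The base URL is an
# assumed placeholder; check MiniMax's API documentation for the real one.
def build_minimax_cfg(model: str = 'MiniMax-M2.7') -> dict:
    return {
        'model': model,                                # M2.7 is the PR's default
        'model_server': 'https://api.minimax.example/v1',  # assumed endpoint
        'api_key': os.getenv('MINIMAX_API_KEY', ''),
    }
```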

octo-patch and others added 2 commits March 15, 2026 09:33
…work

MiniMax offers OpenAI-compatible LLM APIs with models like MiniMax-M2.5
(204K context). This commit adds MiniMax support in three areas:

- New minimax provider in qwen-agent LLM registry with auto-detection
  for model names containing "minimax"
- Fallback to MINIMAX_API_KEY in tool_visit.py for page summarization
- Configuration examples in .env.example and README documentation

- Set MiniMax-M2.7 as default model (was M2.5)
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model documentation
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Update tool_visit.py, minimax.py provider, README, and .env.example
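The MINIMAX_API_KEY fallback for page summarization might look like the following sketch (the primary variable name `API_KEY` and the helper are illustrative; only `MINIMAX_API_KEY` comes from the PR):

```python
import os

# Sketch of the key-resolution fallback described for tool_visit.py:
# prefer the existing summary-model key, then fall back to MINIMAX_API_KEY.
def resolve_summary_api_key() -> str:
    return (
        os.getenv('API_KEY')             # assumed primary key variable
        or os.getenv('MINIMAX_API_KEY')  # fallback added by this PR
        or ''
    )
```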
@octo-patch octo-patch changed the title Add MiniMax as LLM provider for summary model and qwen-agent framework feat: add MiniMax as LLM provider with M2.7 as default model Mar 18, 2026