feat: add MiniMax provider support for prompt refinement #916

Open
octo-patch wants to merge 1 commit into hpcaitech:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch

Summary

This PR adds MiniMax as an alternative LLM provider for Open-Sora's prompt refinement step, alongside the existing OpenAI/GPT-4o integration.

Changes

  • opensora/utils/prompt_refine.py

    • Add _get_client(model) — routes to MiniMax API (OpenAI-compatible) when model starts with "MiniMax", otherwise uses OpenAI
    • Add has_minimax_key() — checks whether MINIMAX_API_KEY is set
    • Add _strip_think_tags(text) — strips <think>…</think> chain-of-thought blocks emitted by MiniMax reasoning models
    • Add _extra_tokens(model) — allocates 500 extra output tokens for MiniMax models to accommodate thinking tokens
    • Add refine_prompts_by_minimax() — convenience wrapper that defaults to MiniMax-M2.7
    • refine_prompt() and refine_prompts() now accept an optional model parameter (defaults to PROMPT_MODEL env var, falls back to "gpt-4o")
  • tests/test_minimax_prompt_refine.py — 18 unit tests covering routing, API key validation, model selection, temperature constraints, think-tag stripping, and the refine_prompts_by_minimax() wrapper

Supported models

| Model | Description |
| --- | --- |
| MiniMax-M2.7 | Default (peak performance) |
| MiniMax-M2.7-highspeed | Faster variant |
Usage

```bash
# Use MiniMax for prompt refinement
export MINIMAX_API_KEY=your_key_here
export PROMPT_MODEL=MiniMax-M2.7
```

Or programmatically:

```python
from opensora.utils.prompt_refine import refine_prompts_by_minimax

refined = refine_prompts_by_minimax(["a cat playing in snow"], type="t2v")
```

API references

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model support in
  prompt_refine.py using the OpenAI-compatible API
- Add _get_client() to route between OpenAI and MiniMax based on model name
- Add has_minimax_key() utility to check MINIMAX_API_KEY availability
- Add refine_prompts_by_minimax() convenience wrapper
- Support PROMPT_MODEL env var to set default refinement model
- Support MINIMAX_BASE_URL env var (defaults to https://api.minimax.io/v1)
- Strip <think> chain-of-thought blocks from MiniMax responses
- Allocate extra token budget for MiniMax thinking blocks
- Add 18 unit tests covering all new functionality
