Bug Description
When I use a Mistral reasoning model, the thinking part is always included in the response, without any <think> tag. The show_thinking parameter does not change this behavior, so it is impossible to parse the output and split the reasoning from the final answer.
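For reference, this is the kind of split I am trying to do. It is a minimal sketch that assumes the reasoning is wrapped in <think>...</think> tags, which is what I expected show_thinking=True to produce:

import re

def split_thinking(text: str) -> tuple[str, str]:
    # Split a completion into (reasoning, answer), assuming the reasoning
    # is wrapped in a <think>...</think> block.
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No tag present (the current behavior): everything is the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer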
Version
0.14.12
Steps to Reproduce
Use the following piece of code to test this:
from llama_index.core.base.llms.types import (
ChatMessage,
MessageRole,
TextBlock,
)
from llama_index.llms.mistralai import MistralAI
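# my_api_key is a placeholder; substitute your own Mistral API key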
llm = MistralAI(model="magistral-small-latest", api_key=my_api_key, show_thinking=True)
response = llm.chat([ChatMessage(role=MessageRole.USER, blocks=[TextBlock(text="What is the capital of France?")])])
print("response", response)
Relevant Logs/Tracebacks