[Bug]: Mistral reasoning not correctly parsed #20456

@gpanneti

Description

Bug Description

When I use a Mistral reasoning model, the thinking content is always included directly in the response, without any <think> tags. The show_thinking parameter has no effect on this.

As a result, it is impossible to parse the output in order to split the reasoning part from the answer part.
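For context, this is roughly the splitting I would like to do once the tags are emitted. The split_reasoning helper below is only an illustration on my side, not part of llama-index, and it assumes the reasoning is wrapped in <think>...</think> tags:

import re

def split_reasoning(text: str) -> tuple[str, str]:
    # Assumes the reasoning is wrapped in <think>...</think> tags.
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No tags present: this is what currently happens,
        # so the whole text is treated as the answer.
        return "", text
    reasoning = match.group(1).strip()
    answer = (text[: match.start()] + text[match.end():]).strip()
    return reasoning, answer

# e.g. with the response from the reproduction below:
# reasoning, answer = split_reasoning(response.message.content)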

Version

0.14.12

Steps to Reproduce

Use the following piece of code to test this:

import os

from llama_index.core.base.llms.types import (
    ChatMessage,
    MessageRole,
    TextBlock,
)
from llama_index.llms.mistralai import MistralAI

my_api_key = os.environ["MISTRAL_API_KEY"]  # any valid Mistral API key

llm = MistralAI(model="magistral-small-latest", api_key=my_api_key, show_thinking=True)
response = llm.chat(
    [ChatMessage(role=MessageRole.USER, blocks=[TextBlock(text="What is the capital of France?")])]
)
print("response", response)

Relevant Logs/Tracebacks
