Conversation

@devin-ai-integration devin-ai-integration bot commented Jan 14, 2026

Summary

This PR fixes issue #4238, where Gemini models fail with UNEXPECTED_TOOL_CALL errors when used in hierarchical crews. The root cause: tools were not being passed to llm.call() and llm.acall(), which prevented native function calling for models like Gemini that require tools to be declared upfront.

Changes:

  • Added _extract_tools_from_context() helper function that extracts tools from executor context (CrewAgentExecutor.tools or LiteAgent._parsed_tools) and converts them to dict format (a sketch follows this list)
  • Updated get_llm_response() to extract and pass tools to llm.call()
  • Updated aget_llm_response() to extract and pass tools to llm.acall()
  • Updated type hint for executor_context in aget_llm_response() to accept LiteAgent | None
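A minimal sketch of what the helper might look like, reconstructed from the description above (the attribute names CrewAgentExecutor.tools and LiteAgent._parsed_tools come from this PR; everything else is an assumption, not the actual implementation):

```python
from typing import Any


def _extract_tools_from_context(executor_context: Any) -> list[dict[str, Any]] | None:
    """Pull tools off the executor context and normalize them to dicts.

    Hypothetical reconstruction: CrewAgentExecutor exposes .tools and
    LiteAgent keeps ._parsed_tools, per the PR description.
    """
    if executor_context is None:
        return None

    raw_tools = getattr(executor_context, "tools", None) or getattr(
        executor_context, "_parsed_tools", None
    )
    if not raw_tools:
        return None

    # Normalize to the {name, description, args_schema} dict format
    # that llm.call() consumes.
    return [
        {
            "name": tool.name,
            "description": tool.description,
            "args_schema": tool.args_schema,
        }
        for tool in raw_tools
    ]
```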

Important implementation detail: The tools parameter is only passed to llm.call()/llm.acall() when tools are actually available (not None). This maintains backward compatibility with existing code that checks "tools" in kwargs to determine if tools were provided.
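Concretely, the call site presumably looks something like this sketch (the function shape is assumed, not copied from the diff):

```python
def get_llm_response(llm, messages, executor_context=None, **kwargs):
    """Hypothetical shape of the wrapper, showing the conditional pass-through."""
    tools = _extract_tools_from_context(executor_context)
    if tools is not None:
        # Add the key only when tools exist, so downstream code that
        # checks `"tools" in kwargs` still sees no key on tool-less calls.
        kwargs["tools"] = tools
    return llm.call(messages, **kwargs)
```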

Review & Testing Checklist for Human

  • Verify tool dict format compatibility: The tools are converted to {name, description, args_schema} format (see the illustrative dict after this checklist). Confirm this is compatible with how llm.call() processes tools for Gemini and other providers
  • Test with actual Gemini model: Run a hierarchical crew with Gemini (e.g., gemini/gemini-2.0-flash-exp) to verify the fix resolves the UNEXPECTED_TOOL_CALL error
  • Regression test with other LLM providers: Verify that not passing tools when no tools are available doesn't break OpenAI, Anthropic, or other providers
  • Verify LiteAgent integration: The fix supports both CrewAgentExecutor and LiteAgent; confirm both paths work correctly
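For reference, a delegation tool normalized this way might look like the following (field values and the schema class are illustrative; only the three keys come from this PR):

```python
from pydantic import BaseModel


class DelegateWorkInput(BaseModel):
    """Hypothetical args schema for a delegation tool."""

    task: str
    context: str
    coworker: str


# Illustrative example of the normalized dict passed to llm.call().
delegation_tool_as_dict = {
    "name": "Delegate work to coworker",
    "description": "Delegate a specific task to one of your coworkers.",
    "args_schema": DelegateWorkInput,
}
```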

Recommended test plan:

  1. Create a hierarchical crew with a Gemini model as the manager LLM (see the sketch after this list)
  2. Add agents with delegation tools
  3. Run the crew and verify no UNEXPECTED_TOOL_CALL errors occur
  4. Run existing integration tests to check for regressions
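A minimal reproduction along those lines, assuming standard crewAI usage (agent and task details are placeholders; the model ID is the one suggested in the checklist above):

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Find relevant facts",
    backstory="An analyst who digs up information.",
    allow_delegation=True,
)
writer = Agent(
    role="Writer",
    goal="Summarize findings",
    backstory="A concise technical writer.",
    allow_delegation=True,
)

task = Task(
    description="Research and summarize recent Gemini model releases.",
    expected_output="A short summary paragraph.",
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,
    manager_llm="gemini/gemini-2.0-flash-exp",
)

# Before this fix, the manager's delegation tool calls could raise
# UNEXPECTED_TOOL_CALL because tools were never declared to Gemini.
result = crew.kickoff()
print(result)
```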

Notes

This fixes issue #4238 where Gemini models fail with UNEXPECTED_TOOL_CALL
errors because tools were not being passed to the LLM call.

Changes:
- Add _extract_tools_from_context() helper function to extract tools from
  executor context (CrewAgentExecutor or LiteAgent) and convert them to
  dict format compatible with LLM providers
- Update get_llm_response() to extract and pass tools to llm.call()
- Update aget_llm_response() to extract and pass tools to llm.acall()
- Add comprehensive tests for the new functionality

Co-Authored-By: João <[email protected]>
@devin-ai-integration
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

devin-ai-integration bot and others added 2 commits January 14, 2026 22:54
This fixes the CI test failures by only passing the tools parameter to
llm.call() and llm.acall() when tools are actually available. This
maintains backward compatibility with existing code that checks
'tools' in kwargs to determine if tools were provided.

The previous commit always passed tools=None, which caused tests that
check 'tools' in kwargs to fail because the key was always present.

Co-Authored-By: João <[email protected]>
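The failure mode that second commit describes, sketched as a hypothetical test (not the actual test code from the PR):

```python
from unittest.mock import MagicMock


def test_tools_key_absent_without_tools():
    """With no executor context, llm.call() must not receive a 'tools' kwarg."""
    mock_llm = MagicMock()
    get_llm_response(
        mock_llm,
        messages=[{"role": "user", "content": "hi"}],
        executor_context=None,
    )
    _, kwargs = mock_llm.call.call_args
    # Unconditionally passing tools=None made checks like this fail,
    # because the key was then always present.
    assert "tools" not in kwargs
```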