fix(turbomind): fix dimension mismatch in ApplyTokenBitmaskInplace #4456
windreamer wants to merge 1 commit into InternLM:main
Conversation
Pull request overview
This PR fixes a Turbomind guided-decoding crash caused by a batch-dimension mismatch between sliced logits (only active/generating requests) and the guided-decoding bitmask (previously sliced using the full batch request count).
Changes:
- Update GuidedDecoding::ApplyMask() to slice the bitmask using logits.shape(0) rather than d.matchers.size() (see the sketch after this list).
- Add clarifying comments explaining why logits.shape(0) is the correct dimension to use when some requests in the batch are no longer generating.
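A minimal sketch of the corrected slice, with stand-in types: the Tensor and Decoding structs below are assumptions for illustration, not the lmdeploy API; only the ApplyMask name, logits.shape(0), and d.matchers.size() come from the PR.

```cpp
#include <vector>

// Stand-in tensor type (not turbomind's): shape(0) is the batch
// dimension and slice(0, n) keeps the first n rows.
struct Tensor {
    long rows = 0, cols = 0;
    long shape(int dim) const { return dim == 0 ? rows : cols; }
    Tensor slice(int /*dim*/, long n) const { return {n, cols}; }
};

// Hypothetical per-batch guided-decoding state.
struct Decoding {
    std::vector<int> matchers;  // one matcher per request in the batch
    Tensor           bitmask;   // token bitmask for the whole batch
};

void ApplyMask(Decoding& d, Tensor& logits)
{
    // logits may already be sliced to the requests that are still
    // generating, so its batch dimension can be smaller than
    // d.matchers.size(); slice the bitmask to match logits instead.
    Tensor bitmask = d.bitmask.slice(0, logits.shape(0));  // was: d.matchers.size()
    // ... apply bitmask to logits in place ...
}
```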
windreamer force-pushed from 8fe41f0 to 20de205
The bug occurs when the batch contains requests that have finished generation
but are still waiting in the batch for synchronization. In this case,
generation_size (the number of requests still generating) is less than
matchers.size() (the total number of requests in the batch).
Root cause:
- In generation.cc Forward(), logits is sliced to match generation_size:
env.produce("logits", logits.slice(0, gs));
- But in guided_decoding.cc ApplyMask(), bitmask was sliced using
d.matchers.size() instead of the actual logits batch size.
This causes TM_CHECK(logits_shape.first == bitmask_shape.first) to fail
in apply_token_bitmask_inplace_cuda.cu when the dimensions don't match.
Fix: Use logits.shape(0) instead of d.matchers.size() to ensure the
bitmask has the same batch dimension as logits.
Co-authored-by: openhands <openhands@all-hands.dev>
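The mismatch can be reproduced in isolation. The following standalone sketch uses a made-up Tensor2D type and arbitrary sizes; only generation_size, matchers.size(), and the batch-dimension check mirror the description above.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <vector>

// Made-up 2-D tensor stand-in: shape(0) is the batch dimension and
// slice(0, n) keeps the first n rows.
struct Tensor2D {
    std::size_t rows, cols;
    std::size_t shape(int dim) const { return dim == 0 ? rows : cols; }
    Tensor2D slice(int /*dim*/, std::size_t n) const { return {n, cols}; }
};

int main()
{
    const std::size_t batch_size      = 8;  // requests in the batch
    const std::size_t generation_size = 5;  // requests still generating

    std::vector<int> matchers(batch_size);          // one matcher per request
    Tensor2D logits_full{batch_size, 32000};        // [batch, vocab]
    Tensor2D bitmask_full{batch_size, 32000 / 32};  // [batch, vocab/32]

    // generation.cc slices logits down to the active requests
    // (env.produce("logits", logits.slice(0, gs)) in the PR).
    Tensor2D logits = logits_full.slice(0, generation_size);

    // Buggy: slicing the bitmask by the full matcher count leaves its
    // batch dimension larger than logits', tripping the TM_CHECK.
    Tensor2D bad = bitmask_full.slice(0, matchers.size());
    std::printf("buggy: logits %zu vs bitmask %zu\n", logits.shape(0), bad.shape(0));

    // Fixed: slice by logits.shape(0) so both batch dimensions agree.
    Tensor2D good = bitmask_full.slice(0, logits.shape(0));
    assert(logits.shape(0) == good.shape(0));  // mirrors the TM_CHECK
    std::printf("fixed: logits %zu vs bitmask %zu\n", logits.shape(0), good.shape(0));
    return 0;
}
```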
windreamer force-pushed from 20de205 to 270e541
fix: #4453