Fix llama-embedding test accuracy issue #68

Open

zhaixuejun1993 wants to merge 306 commits into ravi9:dev_backend_openvino from zhaixuejun1993:xuejun/fix-accracy-issue
Conversation

@zhaixuejun1993
Collaborator

Make sure to read the contributing guidelines before submitting a PR

    if (op->src[0]->op == GGML_OP_PERMUTE || op->src[1]->op == GGML_OP_PERMUTE) {
        return true;
    }
    // if (op->src[0]->op == GGML_OP_PERMUTE || op->src[1]->op == GGML_OP_PERMUTE) {
Collaborator Author


@ravi9 @cavusmustafa These changes may cause some test-backend-ops cases to fail. Maybe we can label the failing cases as OV-unsupported for now, since disabling llama-embedding is not easy.

@ggerganov force-pushed the dev_backend_openvino branch from 76e4057 to e73b4d4 on March 13, 2026 at 10:44
@wine99 force-pushed the dev_backend_openvino branch from 996b739 to b6c83aa on March 17, 2026 at 02:25

7 participants