Several bug fixes #786
Conversation
📝 Walkthrough

Reworks objective handling to pass both user-space and solver-space objectives, adds bidirectional objective conversion, makes root-solution setting idempotent, introduces configurable presolve threading, and adds conditional logging to suppress verbose LP messages during MIP solves.
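The walkthrough mentions a bidirectional objective conversion between user space and solver space. For orientation only, below is a minimal standalone sketch of what such a conversion can look like, assuming presolve applies a scale and an offset and that maximization is handled by negation; the struct and field names are hypothetical and are not cuOpt's actual transform.

```cpp
#include <cstdio>

// Hypothetical scale/offset pair; the real transform used by cuOpt is not shown here.
struct objective_transform_t {
  double scale    = 1.0;   // e.g. scaling applied during presolve
  double offset   = 0.0;   // constant term removed from the objective
  bool   maximize = false; // the solver minimizes internally

  // user space -> solver space
  double to_solver(double user_obj) const
  {
    const double obj = (user_obj - offset) / scale;
    return maximize ? -obj : obj;
  }

  // solver space -> user space (exact inverse of to_solver)
  double to_user(double solver_obj) const
  {
    const double obj = maximize ? -solver_obj : solver_obj;
    return obj * scale + offset;
  }
};

int main()
{
  const objective_transform_t t{2.0, 5.0, true};
  const double user_obj   = 17.0;
  const double solver_obj = t.to_solver(user_obj);
  std::printf("user=%g solver=%g round-trip=%g\n", user_obj, solver_obj, t.to_user(solver_obj));
  return 0;
}
```

The round trip in `main` illustrates why mixing up the two spaces shifts the objective that B&B compares against, which is the kind of mismatch the PR description mentions for `CMS750_4`.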
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ❌ 2 failed (1 warning, 1 inconclusive) | ✅ 1 passed
…ed concurrent solution when the B&B is already running. Moved the number of threads in the probing cache to the settings struct.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/mip/presolve/probing_cache.cu (1)
859-910: Add defensive validation for `num_threads` before pool sizing and parallel launch

The code assumes `settings.num_threads` is always positive, but since `settings_t` is a public struct, external code can set `num_threads` to 0 (or modify it between initialization and use). If 0 is assigned:

- `modification_vector_pool` and `substitution_vector_pool` would have size 0
- `#pragma omp parallel num_threads(0)` produces undefined behavior (OpenMP spec)
- If threads spawn despite 0, `thread_idx` from `omp_get_thread_num()` will exceed pool bounds, causing out-of-bounds access

Add a defensive clamp before sizing the pools:
Proposed fix
```diff
- const size_t num_threads = bound_presolve.settings.num_threads;
+ const i_t requested_threads = bound_presolve.settings.num_threads;
+ size_t num_threads =
+   requested_threads > 0 ? static_cast<size_t>(requested_threads)
+                         : static_cast<size_t>(omp_get_max_threads());
+ if (num_threads == 0) { num_threads = 1; }
```

Aligns with multithreading safety guidelines to prevent resource exhaustion and thread safety violations.
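For context, here is a self-contained illustration of the clamping pattern together with the pool sizing and per-thread indexing the comment is worried about. It is not the actual probing-cache code; the variable names are placeholders.

```cpp
#include <omp.h>

#include <cstdio>
#include <vector>

int main()
{
  const int requested_threads = 0;  // imagine a caller left settings.num_threads at 0

  // Clamp to a positive thread count before sizing the per-thread pools.
  size_t num_threads = requested_threads > 0
                         ? static_cast<size_t>(requested_threads)
                         : static_cast<size_t>(omp_get_max_threads());
  if (num_threads == 0) { num_threads = 1; }

  // One scratch vector per thread; the pool size must match the parallel region width.
  std::vector<std::vector<double>> scratch_pool(num_threads);

#pragma omp parallel num_threads(num_threads)
  {
    // omp_get_thread_num() stays in [0, num_threads), so this index is in bounds.
    const int thread_idx = omp_get_thread_num();
    scratch_pool[thread_idx].assign(16, 0.0);
  }

  std::printf("ran with %zu threads\n", num_threads);
  return 0;
}
```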
cpp/src/linear_programming/solve.cu
Outdated
```cpp
#include <thread>  // For std::thread
```
```cpp
#define CUOPT_LP_SOLVER_LOG_INFO(...) \
```
Can't we undef the previous macro and still use CUOPT_LOG_INFO here?
I think it is better to add another macro, since there are some methods where we do not pass the settings as an argument, and hence they need to use the old version.
Ideally, we should add a `logger_t` to the settings rather than using `CUOPT_LOG_INFO` or `rapids::logger_t`.
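A rough sketch of the conditional-logging idea under discussion: a macro that only logs when the caller's settings allow it, so a MIP solve can silence the LP solver. The `log_to_console` flag and both macro bodies below are hypothetical simplifications, not the actual cuOpt definitions.

```cpp
#include <cstdio>

// Hypothetical settings type carrying a verbosity flag (the real settings type differs).
struct lp_settings_t {
  bool log_to_console = true;
};

// Simplified stand-in for the unconditional logging macro.
#define CUOPT_LOG_INFO(...)   \
  do {                        \
    std::printf(__VA_ARGS__); \
    std::printf("\n");        \
  } while (0)

// Only logs when the settings passed by the caller allow it, so a MIP solve can
// hand in settings with log_to_console == false to silence the LP solver.
#define CUOPT_LP_SOLVER_LOG_INFO(settings, ...)                     \
  do {                                                              \
    if ((settings).log_to_console) { CUOPT_LOG_INFO(__VA_ARGS__); } \
  } while (0)

int main()
{
  const lp_settings_t verbose{true};
  const lp_settings_t quiet{false};
  CUOPT_LP_SOLVER_LOG_INFO(verbose, "iteration %d, objective %.3f", 10, 1.5);
  CUOPT_LP_SOLVER_LOG_INFO(quiet, "this message is suppressed");
  return 0;
}
```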
CMS750_4 + Fixed hard-coded number of threads
```cpp
// Check if crossover was stopped by dual simplex
if (crossover_status == crossover_status_t::OPTIMAL) {
  settings_.log.printf("\nCrossover found an optimal solution for the root relaxation\n\n");
```
This is where we should indicate that PDLP/Barrier won. Can you use the is_pdlp_solution to determine which of them won and print it here?
I did not find a way to get the winning solver from `optimization_problem_solution_t` unless we change the API (#787). This object is user-facing, so the change would need to be reflected on the Python side as well.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.hpp`:
- Around line 87-97: The plain bool is_root_solution_set is accessed
concurrently (read in set_root_relaxation_solution and written in
solve_root_relaxation), causing a data race; change the declaration of
is_root_solution_set to std::atomic<bool> and update all reads/writes in
functions such as set_root_relaxation_solution and solve_root_relaxation to use
.load()/.store() with appropriate memory orders (e.g., memory_order_acquire for
reads and memory_order_release for writes) to match the existing
root_crossover_solution_set_ usage, ensuring thread-safe checks and updates.
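A minimal, self-contained illustration of the suggested pattern: an `std::atomic<bool>` flag published with release semantics by the writer and observed with acquire semantics by the reader. The type and member names below are placeholders for the members described in the comment.

```cpp
#include <atomic>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct root_state_t {
  std::atomic<bool> is_root_solution_set{false};  // was a plain bool, i.e. a data race
  std::vector<double> root_solution;              // payload published by the writer
};

// Writer side (e.g. solve_root_relaxation): fill the payload, then publish the flag.
void publish_root_solution(root_state_t& state)
{
  state.root_solution = {1.0, 2.0, 3.0};
  state.is_root_solution_set.store(true, std::memory_order_release);
}

// Reader side (e.g. set_root_relaxation_solution): only read the payload once the
// flag is visible, so the acquire load pairs with the release store above.
void try_set_root_solution(root_state_t& state)
{
  if (state.is_root_solution_set.load(std::memory_order_acquire)) {
    std::printf("root solution already set (%zu entries), skipping\n", state.root_solution.size());
  } else {
    std::printf("root solution not set yet\n");
  }
}

int main()
{
  root_state_t state;
  std::thread writer(publish_root_solution, std::ref(state));
  writer.join();  // joined before reading to keep this example deterministic
  try_set_root_solution(state);
  return 0;
}
```

The acquire/release pairing mirrors how `root_crossover_solution_set_` is already handled, which is why the comment suggests matching that existing pattern.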
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 1372-1386: The plain bool is_root_solution_set is written in
branch_and_bound.cpp (at the shown block) and concurrently read by
set_root_relaxation_solution(), causing a data race; change its declaration to
std::atomic<bool> (matching root_crossover_solution_set_) or guard all accesses
with mutex_upper_ to synchronize with upper-bound updates; update all
reads/writes of is_root_solution_set to use atomic operations (or lock
mutex_upper_) and ensure consistency with existing root_crossover_solution_set_
usage patterns.
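For the alternative mentioned above, guarding the flag with the existing upper-bound mutex instead of making it atomic, a minimal sketch follows; the member names mirror the comment, but the surrounding class is hypothetical.

```cpp
#include <cstdio>
#include <mutex>

class branch_and_bound_sketch_t {
 public:
  // Writer path: record that the root solution was set, under the same mutex
  // that already protects upper-bound updates.
  void mark_root_solution_set()
  {
    std::lock_guard<std::mutex> lock(mutex_upper_);
    is_root_solution_set_ = true;
  }

  // Reader path: check the flag under the same lock, so the read is ordered
  // with respect to the write above.
  bool root_solution_already_set()
  {
    std::lock_guard<std::mutex> lock(mutex_upper_);
    return is_root_solution_set_;
  }

 private:
  std::mutex mutex_upper_;
  bool is_root_solution_set_ = false;
};

int main()
{
  branch_and_bound_sketch_t bb;
  std::printf("before: %d\n", bb.root_solution_already_set());
  bb.mark_root_solution_set();
  std::printf("after: %d\n", bb.root_solution_already_set());
  return 0;
}
```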
- Around line 1351-1358: The composite iteration count from the
barrier+crossover solution (root_crossover_soln_) is being copied into
root_relax_soln_.iterations and later used via exploration_stats_.total_lp_iters
to compute diving limits, which is incorrect; update the swap logic around
root_relax_soln_, root_crossover_soln_, root_vstatus_, root_status,
user_objective and solver_name so that you do not propagate the composite
barrier+crossover iteration total into the dual-simplex/diving accounting—either
reset iter to 0 (or to a separate dual-simplex pivot counter) immediately after
assigning root_relax_soln_ = root_crossover_soln_ or preserve and copy a stored
dual-simplex iteration field instead; ensure exploration_stats_.total_lp_iters
and lp_settings.iteration_limit continue to use the dual-simplex pivot count (or
a fresh counter) rather than the barrier+crossover composite value so diving
workers aren’t terminated prematurely.
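A small sketch of the bookkeeping fix being suggested: the barrier+crossover solution is copied into the root-relaxation slot, but its composite iteration count is not allowed to leak into the dual-simplex pivot accounting. All struct and field names here are illustrative, not the real ones.

```cpp
#include <cstdio>

// Illustrative stand-ins for the solution and statistics types referenced above.
struct lp_solution_t {
  double objective  = 0.0;
  int    iterations = 0;  // for the crossover solution this is a barrier+crossover total
};

struct exploration_stats_t {
  long total_lp_iters = 0;  // should only accumulate dual-simplex pivot counts
};

int main()
{
  const lp_solution_t root_crossover_soln{42.0, 5000};  // composite barrier+crossover count
  lp_solution_t root_relax_soln;
  exploration_stats_t stats;

  const int dual_simplex_pivots = 120;  // pivots actually performed by dual simplex

  // Take the crossover solution as the root relaxation solution...
  root_relax_soln = root_crossover_soln;
  // ...but do not let the composite iteration total drive the diving limits:
  root_relax_soln.iterations = dual_simplex_pivots;  // or reset to 0

  stats.total_lp_iters += root_relax_soln.iterations;
  std::printf("objective=%g, LP iterations counted=%ld\n",
              root_relax_soln.objective, stats.total_lp_iters);
  return 0;
}
```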
/merge
- `CMS750_4` due to an incorrect root objective from the PDLP: it provides an objective in user space, while B&B expects the objective in the solver space.
- `cuopt_cli`: the number of threads in the probing cache can now be controlled through `bounds_update::settings_t`.

Checklist
Summary by CodeRabbit
New Features
Bug Fixes
Style