Failing tests #94

@lrobion

Description

Latest build on main has 3 tests failing:

  • Test 12: YAML reader
  • Test 19: Explicit Advection - Diffusion w/ Shear
  • Test 21: Operator Split Advection-Diffusion

Sadly, tests 19 and 21 have been failing for as long as I can remember, which is itself an issue...

Test 12 failing is new, however, and was introduced by #75. It's due to the input.yaml files not being kept consistent between the one in examples/issl_rhi140/, the one in Code.v05-00/defaults/input.yaml, and the three in the test files. The functionality itself is fine; the test was just checking for things that have been removed or were not being read.
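One way to keep those copies from drifting apart again would be a small test asserting that every copy exposes the same top-level keys. A minimal sketch, assuming the suite uses Catch2 and yaml-cpp; the file list and test name are illustrative:

```cpp
#include <catch2/catch_test_macros.hpp>  // Catch2 v3; v2 uses <catch2/catch.hpp>
#include <yaml-cpp/yaml.h>
#include <set>
#include <string>
#include <vector>

// Collect the top-level keys of a YAML file.
static std::set<std::string> topLevelKeys(const std::string& path) {
    std::set<std::string> keys;
    for (const auto& kv : YAML::LoadFile(path))
        keys.insert(kv.first.as<std::string>());
    return keys;
}

TEST_CASE("input.yaml copies stay in sync", "[yaml]") {
    // Illustrative list; the three copies under the test files would go here too.
    const std::vector<std::string> copies = {
        "examples/issl_rhi140/input.yaml",
        "Code.v05-00/defaults/input.yaml",
    };
    const auto reference = topLevelKeys(copies.front());
    for (const auto& path : copies)
        REQUIRE(topLevelKeys(path) == reference);
}
```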

More generally, however, we should be systematic about running the tests for new PRs. I think there are two reasons why this is not done today:

  • Tests 19 and 21 (both advection-diffusion solver related) have always failed, so it's easy to miss that an additional test is failing.
  • Tests 18 and 20 (also advection-diffusion solver related) take an absurd amount of time to run (~250 s and ~60 s respectively). This discourages people from running the tests, or leads them to interrupt the run without reading the full report, again making it easy to miss a new failure.

At a minimum, I think we should change the parameters of tests 18 and 20 so that the suite finishes in a reasonable amount of time. We also need to decide what to do with the failing ones.
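If speeding them up turns out to be hard, another option (assuming the suite uses Catch2) is to hide the slow cases from the default run while keeping them runnable on demand. A sketch, with an illustrative tag name:

```cpp
// A tag starting with '.' hides the test from the default run;
// `./tests "[slow]"` still executes it explicitly.
TEST_CASE("Explicit Advection - Diffusion w/ Shear", "[.slow]") {
    // ... existing test body, unchanged ...
}
```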

Currently, test 19 fails because it does not pass REQUIRE( std::abs(maxx-xmax_exp) < 0.01 ): the measured difference is 0.011, just over the 0.01 tolerance. Is the expected tolerance too tight, so that we can simply loosen it a little? Is there a problem in the implementation? No idea.

Test 21 fails because it does not pass REQUIRE( std::abs(maxx-0.575) < 0.01 ): here the difference is 0.08, eight times the tolerance. The same questions apply, though the much larger miss makes a pure tolerance fix harder to justify.
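If loosening the tolerance is the right call, a floating-point matcher would at least make the margin explicit and produce a readable failure message. A sketch assuming Catch2 v3; the 0.02 margin is purely illustrative:

```cpp
#include <catch2/matchers/catch_matchers_floating_point.hpp>
using Catch::Matchers::WithinAbs;

// Equivalent to std::abs(maxx - xmax_exp) <= 0.02, but the failure
// output reports the value, the target, and the margin.
REQUIRE_THAT(maxx, WithinAbs(xmax_exp, 0.02));
```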

Ideally, we fix them, but I do not know enough about the numerics to make informed decisions there. The alternative is to comment them out. In my opinion, keeping them with no intention of fixing them is worse than removing them, because it introduces noise and makes it easy to brush off failing tests. There is also something to be said for a middle ground: adopt a regression-testing approach for these two tests and simply ensure that the values do not change. The current situation is bad, however.
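For that regression-testing middle ground, the assertion would pin whatever value the solver currently produces instead of the analytical expectation, so future changes fail loudly rather than drifting silently. A sketch; the pinned value is a placeholder to be recorded from a trusted run on main:

```cpp
// Hypothetical regression pin for test 21: locks in today's output.
// A comment pointing back at this issue would flag that the pinned
// value is known to deviate from the analytical expectation.
const double maxx_pinned = /* record from current main */ 0.0;
REQUIRE_THAT(maxx, Catch::Matchers::WithinAbs(maxx_pinned, 1e-3));
```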

Would love to hear other opinions on this. @ian-ross @sdeastham
