svFSIplus
Integration testing is an essential part of software development. It is performed when integrating code changes into the main development branch to verify that the code works as expected. Below is a quick guide on how to run and add integration tests for svFSI.
There are two things you need to do before you can run a test case: build svFSI and install Git LFS to download the test cases.
Follow the build instructions outlined here. Importantly, to automatically run test cases with pytest (see below), you need to build svFSI in the build folder in the repository root.
You need to install Git LFS (Large File Storage) to run any test. We use Git LFS to track large files, such as the meshes and boundary conditions of all test cases, because tracking them directly with Git would significantly increase the repository size. The actual file contents are stored on GitHub, while the files in the repository only contain a hash (Object ID) that is tracked with Git. All file extensions currently tracked with Git LFS are listed in the `.gitattributes` file.
When using Git LFS for the first time, you need to follow these simple steps:

1. Install the Git LFS extension on your machine.
2. Set it up in your local svFSIplus repository with
```
git lfs install
```

After performing these steps once, you never need to worry about Git LFS again. All large files are handled automatically during all Git operations, like `push`, `pull`, or `commit`.

You can run an individual test by navigating to the ./tests/cases/<physics>/<test>
folder of the test you want to run and executing svFSIplus with the `svFSI.xml`
input file as an argument. A more elegant way, e.g., to run a whole group of tests, is using pytest
. By default, it will run all tests defined in the test_*.py
files in the ./tests folder. Tests and input files in ./tests/cases are grouped by physics type, e.g., struct, fluid, or fsi (using the naming convention from EquationType
). Here are a couple of useful pytest commands: `pytest -v` gives verbose output, `pytest -k <keyword>` runs only tests whose names match a keyword (e.g., `pytest -k fluid`), and `pytest -x` stops after the first failure. For more options, simply call `pytest -h`.
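The same selection can also be driven from Python through `pytest.main`. The small helper below is a hypothetical sketch (the function name and defaults are made up for illustration; only `pytest.main` itself is part of pytest's API):

```python
import pytest

def run_tests(test_dir="tests", keyword=""):
    # Programmatic equivalent of `pytest -v -k <keyword> <test_dir>`.
    # Returns pytest's exit code: 0 means all selected tests passed,
    # 5 means no tests were collected.
    args = ["-v", test_dir]
    if keyword:
        args += ["-k", keyword]
    return pytest.main(args)
```

This can be handy when wrapping the test suite in another script, e.g., `run_tests("tests", "fluid")` to run only the fluid cases.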
We expect that new code is fully covered with at least one integration test. We also strive to increase our coverage of existing code. You can have a look at our current code coverage with Codecov. It analyzes every pull request and checks how the coverage changed (ideally, it increased) and whether any non-covered lines were modified. We avoid modifying untested lines of code, as there is no guarantee that the code will still do the same thing as before.
Here are some steps you can follow to create a new test for the code you implemented. This will satisfy the coverage requirement (see above) and help other people who want to run your code. A test case is a great way to show what your code can do! Ideally, you do this early in your development. Then you can keep running your test case as you are refactoring and optimizing your code.
1. Create a folder for your test case under ./tests/cases and append the test to the corresponding Python test_*.py file. If you created a new physics type, create a new case folder and a new test_*.py file.
2. The tests run automatically with GitHub Actions when opening your pull request. You should see in your coverage report that your new code is covered.

If you want to parameterize values in your test case, you can use pytest
fixtures. We currently use them to automatically loop over different numbers of processors, meshes, or input files.
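As a minimal sketch of such a fixture (the test name, case, and processor counts below are made up for illustration and do not correspond to the repository's actual helpers):

```python
import pytest

@pytest.fixture(params=[1, 2, 4], ids=lambda n: f"np{n}")
def n_proc(request):
    # Yields each processor count in turn; pytest runs every test that
    # requests this fixture once per value.
    return request.param

def test_fluid_pipe(n_proc):
    # Placeholder body: a real test would launch svFSIplus on the case
    # with n_proc processors and compare against reference results.
    assert n_proc >= 1
```

With this fixture, `pytest -v` reports three separate runs of `test_fluid_pipe` (np1, np2, np4), so each processor count shows up individually in the test report.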