Contributing#

We welcome anyone interested in contributing to this project, whether with new ideas or suggestions, by filing bug reports, or by contributing code.

You are invited to submit pull requests / issues to our GitHub repository.

Development Setup#

For linting, formatting and checking your code contributions against our guidelines (e.g. we use Black as code style), we use pre-commit:

  1. Installation: conda install -c conda-forge pre-commit or pip install pre-commit

  2. Usage:
    • To automatically activate pre-commit on every git commit: run pre-commit install

    • To run it manually: pre-commit run --all-files
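The hooks themselves are defined in a .pre-commit-config.yaml file at the repository root. As a sketch, a configuration wiring up Black could look like the following; the repository URL pattern is standard, but the pinned revision is an example value, not necessarily the repository's actual configuration:

```yaml
# Illustrative sketch -- see the repository's actual .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2   # pinned Black version (example value)
    hooks:
      - id: black
```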

Running Tests#

Testing is essential for maintaining code quality. We use pytest as our testing framework.

Basic Testing#

To run the test suite:

# Install development dependencies
pip install -e ".[dev,solvers]"

# Run all tests
pytest

# Run tests with coverage
pytest --cov=./ --cov-report=xml --doctest-modules linopy test

# Run a specific test file
pytest test/test_model.py

# Run a specific test function
pytest test/test_model.py::test_model_creation

GPU Testing#

Tests for GPU-accelerated solvers (e.g., cuPDLPx) are automatically skipped by default since CI machines and most development environments don’t have GPU hardware. This ensures tests pass in all environments.

To run GPU tests locally (requires GPU hardware and CUDA):

# Run all tests including GPU tests
pytest --run-gpu

# Run only GPU tests
pytest -m gpu --run-gpu

GPU tests are automatically detected based on solver capabilities; no manual marking is required. When you add a new GPU solver to linopy, tests using that solver will automatically be marked as GPU tests.
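The skip-by-default behaviour described above is the kind of thing pytest's hook system handles in a conftest.py. The sketch below is illustrative only: linopy detects GPU tests from solver capabilities, whereas this sketch stands in a hard-coded set of solver names and assumes tests carry a `gpu` marker.

```python
# conftest.py sketch -- illustrative, not linopy's actual implementation
import pytest

GPU_SOLVERS = {"cupdlpx"}  # assumption: names of GPU-capable solvers


def pytest_addoption(parser):
    # Register the --run-gpu flag used in the commands above.
    parser.addoption(
        "--run-gpu",
        action="store_true",
        default=False,
        help="run tests that require GPU hardware",
    )


def pytest_collection_modifyitems(config, items):
    # Without --run-gpu, attach a skip marker to every test marked 'gpu'.
    if config.getoption("--run-gpu"):
        return
    skip_gpu = pytest.mark.skip(reason="GPU tests need --run-gpu")
    for item in items:
        if "gpu" in item.keywords:
            item.add_marker(skip_gpu)
```

With this in place, a plain `pytest` run reports GPU tests as skipped, while `pytest --run-gpu` executes them.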

See the GPU-Accelerated Solving guide for more information about GPU solver setup and usage.

Performance Benchmarks#

When working on performance-sensitive code, use the internal benchmark suite in benchmarks/ to check for regressions.

# Install benchmark dependencies
pip install -e ".[benchmarks]"

# Quick timing benchmarks
pytest benchmarks/ --quick

# Compare timing between branches
pytest benchmarks/test_build.py --benchmark-save=master
pytest benchmarks/test_build.py --benchmark-save=my-feature --benchmark-compare=0001_master

# Compare peak memory between branches
python benchmarks/memory.py save master --quick
python benchmarks/memory.py save my-feature --quick
python benchmarks/memory.py compare master my-feature

See benchmarks/README.md for full details on models, phases, and usage.

Contributing examples#

Nice examples are always welcome.

You can even submit your Jupyter notebook (.ipynb) directly as an example. For contributing notebooks (and for working with notebooks in git in general), we suggest the following workflow:

First, make sure the pre-commit hook from the Development Setup section above is installed; this has to be done only once. The hook checks whether any notebook included in a commit contains non-empty output cells.

Then for every notebook:

  1. Write the notebook (let’s call it foo.ipynb) and place it in examples/foo.ipynb.

  2. Ask yourself: Is the output in each of the notebook’s cells relevant to the example?

    • Yes: Leave it there. Just make sure to keep the number of pictures etc. to a minimum.

    • No: Clear the output of all cells, e.g. via Edit -> Clear All Outputs in JupyterLab.

  3. Provide a link to the documentation: Include a file foo.nblink located in doc/examples/foo.nblink that points at the notebook file.

  4. Link your file in the documentation:

    Either

    • Include your doc/examples/foo.nblink directly in one of the documentation’s toctrees; or

    • Tell us where in the documentation you want your example to show up

  5. Commit your changes. If the pre-commit hook you installed above kicks in, confirm your decision (‘y’) or go back (‘n’) and clear the output of the notebook’s cells.

  6. Create a pull request for us to accept your example.
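The .nblink file from step 3 is a small JSON file pointing from the documentation tree to the notebook. Assuming the layout described above (notebook in examples/, link file in doc/examples/), the relative path would look like this; the exact path is an assumption based on that layout:

```json
{
    "path": "../../examples/foo.ipynb"
}
```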

The support for the .ipynb notebook format in our documentation is realised via the extensions nbsphinx and nbsphinx_link.