This page describes some of the tools and conventions followed by Common Partial Wave Analysis. Where possible, we use the source code of the AmpForm repository as example, because its file structure is comparable to that of other ComPWA repositories.
To start developing, simply run the following from a cloned repository on your machine:
```shell
conda env create
conda activate ampform
pre-commit install --install-hooks
```
```shell
python3 -m venv ./venv
source ./venv/bin/activate
python3 -m pip install -c .constraints/py3.8.txt -e .[dev]
pre-commit install --install-hooks
```
Replace `3.8` with the Python version you use on your machine.
See Virtual environment for more info.
When developing source code, it is safest to work within a virtual environment, so that all package dependencies and developer tools are safely contained. This is helpful in case something goes wrong with the dependencies: just trash the environment and recreate it. In addition, you can easily install other versions of the dependencies, without affecting other packages you may be working on.
Two common tools to manage virtual environments are Conda and Python's built-in venv. In either case, you have to activate the environment whenever you want to run the framework or use the developer tools.
All packages maintained by the ComPWA organization provide a Conda environment file that defines all requirements for working on the source code of that repository. To create an environment specific to this repository, simply navigate to the main folder of the source code and run:

```shell
conda env create
```
Conda now creates an environment with the name that is defined in the environment file. In addition, it will install the framework itself in "editable" mode, so that you can start developing right away.
If you have Python's venv available on your system, you can create a virtual environment with it. Navigate to some convenient folder and run:

```shell
python3 -m venv ./venv
```
This creates a folder called venv where all Python packages will be contained. To activate the environment, run:

```shell
source ./venv/bin/activate
```
When developing a package, it is most convenient if you install it in “editable” mode. This allows you to tweak the source code and try out new ideas immediately, because the source code is considered the ‘installation’.
```shell
python3 -m pip install -e .
```
Internally, this calls:

```shell
python3 setup.py develop
```
This will also install all dependencies required by the package.
Optional dependencies are installed with the extras syntax:

```shell
pip install tensorwaves[jax,scipy]
pip install .[test]  # local directory, not editable
pip install -e .[dev]  # editable + all dev requirements
```

In shells that interpret square brackets, such as Zsh, quote the extras:

```shell
pip install "tensorwaves[jax,scipy]"
pip install ".[test]"  # local directory, not editable
pip install -e ".[dev]"  # editable + all dev requirements
```
Developers require several additional tools besides the dependencies required to run the package itself (see Automated coding conventions). All those additional requirements can be installed with the last example.
Pinning dependency versions
To ensure that developers use exactly the same versions of the package dependencies and developer requirements, some of the repositories provide constraint files. These files can be used to ‘pin’ all versions of installed packages as follows:
```shell
python3 -m pip install -c .constraints/py3.8.txt -e .
```
The syntax works just as well for Optional dependencies:
```shell
python3 -m pip install -c .constraints/py3.8.txt -e ".[doc,sty]"
python3 -m pip install -c .constraints/py3.8.txt -e ".[test]"
python3 -m pip install -c .constraints/py3.8.txt -e ".[dev]"
```

(The quotes protect the square brackets in shells such as Zsh.)
It may be that new commits in the repository modify the dependencies. In that case, you have to rerun this command after pulling new commits from the repository:
```shell
git checkout main
git pull
pip install -c .constraints/py3.8.txt -e ".[dev]"
```
If you still have problems, it may be that certain dependencies have become redundant. In that case, trash the virtual environment and create a new one.
Julia is an upcoming programming language in High-Energy Physics. While ComPWA is mainly developed in Python, we try to keep up with new trends and are experimenting with Julia as well.
Julia can be downloaded here or can be installed within your virtual environment with juliaup. To install Julia system-wide on Linux and macOS, you'll have to unpack the downloaded tar file to a location that is easily accessible. Here's an example, where we also make the Julia executable available to the system:
Install juliaup for installing and managing Julia versions.
```shell
conda install juliaup -c conda-forge
```
Optional: select Julia version
By default, this provides you with the latest Julia release. Optionally, you can switch versions as follows:
```shell
juliaup add 1.9
juliaup default 1.9
```
You can switch back to the latest version with:
```shell
juliaup default release
```
```shell
cd ~/Downloads
tar xzf julia-1.9.2-linux-x86_64.tar.gz
mkdir ~/opt ~/bin
mv julia-1.9.2 ~/opt/
ln -s ~/opt/julia-1.9.2/bin/julia ~/bin/julia
```
Make sure that `~/bin` is listed in the `PATH` environment variable, e.g. by updating it in your shell profile.
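For instance, assuming you use Bash, you could append the following line to your `~/.bashrc` (the file name and shell are assumptions; adapt this to your own setup):

```shell
# Prepend the user-local bin folder, so that the julia symlink is found
export PATH="$HOME/bin:$PATH"
```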
Alternatively, to install Julia system-wide under `/opt`:

```shell
cd ~/Downloads
tar xzf julia-1.9.2-linux-x86_64.tar.gz
sudo mv julia-1.9.2 /opt/
sudo ln -s /opt/julia-1.9.2/bin/julia /usr/local/bin/julia
```
Just as in Python, it's safest to work with a virtual environment. You can read more about Julia environments here. An environment is defined through a `Project.toml` file (which defines direct dependencies) and a `Manifest.toml` file (which exactly pins the installed versions of all recursive dependencies). Don't touch these files; they are automatically managed by the package manager. It does make sense though to commit both `Project.toml` and `Manifest.toml` files, so that the environment is reproducible for each commit (see also Pinning dependency versions).
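For illustration, a minimal `Project.toml` could look as follows. The package name and UUID below are placeholders invented for this example; in practice the package manager writes these entries for you when you run `Pkg.add`:

```toml
[deps]
# Direct dependencies are listed as 'Name = "UUID"' (placeholder UUID here)
SomePackage = "00000000-0000-0000-0000-000000000000"

[compat]
# Optional version constraints for the dependencies and Julia itself
SomePackage = "1.2"
julia = "1.9"
```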
Automated coding conventions
Where possible, we define and enforce our coding conventions through automated tools, instead of describing them in documentation. These tools perform their checks when you commit files locally (see Pre-commit), when running tox, and when you make a pull request.
The tools are mainly configured through `tox.ini` and the workflow files under `.github`. If you run into persistent linting errors, this may mean we need to further specify our conventions. In that case, it's best to create an issue or a pull request and propose a policy change that can be formulated through those config files.
To activate pre-commit locally, install the hooks:

```shell
pre-commit install --install-hooks
```
Upon committing, pre-commit runs a set of checks, as defined in the `.pre-commit-config.yaml` file, over all staged files. You can also quickly run all checks over all indexed files in the repository with the command:

```shell
pre-commit run -a
```
Whenever you submit a pull request, this command is automatically run on GitHub Actions and on pre-commit.ci, ensuring that all files in the repository follow the same conventions as set in the config files of these tools.
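For reference, the pre-commit configuration is a YAML file that lists hook repositories with pinned revisions. The sketch below is a generic example, not the actual ComPWA configuration:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0 # pinned tag of the hook repository
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```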
More thorough checks can be run in one go with the following command:

```shell
tox
```
This command will run `pytest`, perform all style checks, build the documentation, and verify cross-references in the documentation and the API. It's especially recommended to run tox before submitting a pull request!
More specialized tox jobs are defined in the `tox.ini` config file, under each `testenv` section. You can list all environments, along with a description of what they do, by running:

```shell
tox -av
```
All style checks, testing of the documentation and links, and unit tests are performed upon each pull request through GitHub Actions (see status overview here). The checks are defined under the `.github` folder. All checks performed for each PR have to pass before the PR can be merged.
Formatters are tools that automatically format source code, or some document. Naturally, this speeds up your own programming, but these tools are particularly important when collaborating, because a standardized format avoids line conflicts in Git and makes diffs in code review easier to read.
For the Python source code, we use Ruff. For other code, we use Prettier. All of these formatters are "opinionated formatters": they offer only limited configuration options, so as to keep formatting as uniform as possible.
Linters point out when certain style conventions are not correctly followed. Unlike with formatters, you have to fix the errors yourself. As mentioned in Automated coding conventions, style conventions are formulated in config files. The main linter that ComPWA projects use is Ruff.
Throughout this repository, we follow American English (en-us) spelling conventions. As a tool, we use cSpell, because it can check variable names in camel case and snake case. This way, a spell checker helps you avoid mistakes in the code as well! cSpell is enforced through pre-commit.
Accepted words are tracked through the `.cspell.json` file. As with the other config files, `.cspell.json` formulates our conventions with regard to spelling and can be continuously updated while our code base develops. In the file, the `words` section lists words that you want to see as suggested corrections, while `ignoreWords` are just the words that won't be flagged. Try to be sparse in adding words: if some word is just specific to one file, you can ignore it inline, or you can add the file to the `ignorePaths` section if you want to ignore it completely.
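For illustration, such a configuration could contain sections like the following (the listed words and paths are invented for this example):

```json
{
  "words": ["ampform", "helicity"],
  "ignoreWords": ["xdist"],
  "ignorePaths": ["*.bib", "docs/_build"]
}
```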
It is easiest to use cSpell in Visual Studio Code, through the Code Spell Checker extension: it provides linting, suggests corrections from the `words` section, and enables you to quickly add or ignore words.
The fastest way to run all tests is with the command:

```shell
pytest -n auto
```

The flag `-n auto` causes `pytest` to run with a distributed strategy, as provided by the `pytest-xdist` plugin.
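As a reminder of what pytest collects: any function whose name starts with `test_` in a file matching `test_*.py` is picked up automatically. A toy sketch (the tested function is hypothetical, not actual framework code):

```python
# content of test_example.py (hypothetical module)
def add_intensities(a: float, b: float) -> float:
    """Toy stand-in for real framework code."""
    return a + b


def test_add_intensities() -> None:
    # pytest collects this function because its name starts with test_
    assert add_intensities(1.0, 2.0) == 3.0
```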
Try to keep test coverage high. You can compute the current coverage by running:

```shell
tox -e cov
```

and opening `htmlcov/index.html` in a browser.
To get an idea of performance per component, run the profiling tox job and check the stats and the `prof/combined.svg` output file.
Jupyter notebooks can also be used as tests. See more info here.
The documentation that you find on ComPWA pages like pwa.rtfd.io is built with Sphinx. Sphinx also builds the API page of the packages and therefore checks whether the docstrings in the Python source code are valid and correctly interlinked.
We make use of Markedly Structured Text (MyST), so you can write the documentation in both Markdown and reStructuredText. In addition, it’s easy to write (interactive) code examples in Jupyter notebooks and host them on the website (see MyST-NB)!
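For instance, a MyST Markdown source file can mix plain Markdown with Sphinx roles and directives. A small sketch (the cross-referenced function is just an example, and the `:::` admonition syntax assumes the `colon_fence` MyST extension is enabled):

```md
# Some section

A cross-reference to {func}`ampform.get_builder` renders as an API link.

:::{note}
Admonitions can be written with colon fences in MyST.
:::
```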
You can quickly build the documentation with the command:
```shell
tox -e doc
```
If you are doing a lot of work on the documentation, sphinx-autobuild is a nice tool to use. Start it with:

```shell
tox -e doclive
```
This will start a server at http://127.0.0.1:8000 where you can continuously preview the changes you make to the documentation.
Finally, a nice feature of Read the Docs, where we host our documentation, is that documentation is built for each pull request as well. This means that you can preview the documentation for your changes. For more info, see here, or just click "details" under the RTD check once you submit your PR.
The docs folder can also contain Jupyter notebooks. These notebooks are rendered as HTML by MyST-NB. The notebooks are also run and tested whenever you make a pull request, so they also serve as integration tests.
If you want to improve those notebooks, we recommend working with Jupyter Lab, which is installed with the dev requirements. Jupyter Lab offers a nicer developer experience than the default Jupyter notebook editor does. A few useful Jupyter Lab plugins are also installed through the optional dependencies.
Now, if you want to test all notebooks in the documentation folder and check what their output cells will look like in the documentation, you can do so with:

```shell
tox -e docnb
```
This command takes more time than `tox -e doc`, but it is good practice to do this before you submit a pull request. It's also possible to continuously generate the HTML pages, including cell output, while you work on the notebooks:

```shell
EXECUTE_NB= tox -e doclive
```
Notebooks are automatically formatted through pre-commit (see Formatting). If you want to format the notebooks automatically as you're working, you can do so with the formatter extension that is automatically installed with the dev requirements.
Install the Julia kernel for Jupyter with:

```shell
julia -e 'import Pkg; Pkg.add("IJulia")'
```

or, from the Julia REPL:

```julia
import Pkg
Pkg.add("IJulia")
```
Usually, this also installs a Jupyter kernel directly. Optionally, you can define a Jupyter kernel manually:
```shell
julia -e 'using IJulia; installkernel("julia")'
```

or, from the Julia REPL:

```julia
using IJulia
installkernel("julia")
```
and select it as kernel in the Jupyter notebook.
As mentioned in Julia, Julia can be installed within your Conda environment through juliaup. This is, however, not yet a virtual environment for Julia itself. You can create a virtual environment for Julia itself by, for instance, defining it through a code cell like this:

```julia
using Pkg
Pkg.activate(".")  # if environment is defined in this folder
Pkg.instantiate()
```
See Jupyter notebook with Julia kernel for an example.
Additionally, you can install a Language Server for Julia in Jupyter Lab. To do so, run:

```shell
julia -e 'import Pkg; Pkg.add("LanguageServer")'
```

or, from the Julia REPL:

```julia
using Pkg
Pkg.add("LanguageServer")
```
The source code of all ComPWA repositories is maintained with Git and GitHub. We keep track of issues with the code, documentation, and developer set-up with GitHub issues (see for instance here). This is also the place where you can report bugs.
If you are new to working with GitHub, have a look at the tutorials on GitHub Skills.
We keep track of issue dependencies, time estimates, planning, pipeline statuses, et cetera with GitHub project boards (GitHub Issues). The main project boards are:
Some issues are not public. To get access, you can request to become member of the ComPWA GitHub organization. Other information that is publicly available are:
Issue labels: help to categorize issues by type (maintenance, enhancement, bug, etc.). The labels are also used in the sub-sections of the release notes.
Milestones: way to bundle issues and PRs for upcoming releases.
All of these are important for the Release flow and therefore also serve as a way to document the framework.
While our aim is to maintain long-term, stable projects, PWA software projects are academic projects that are subject to change and often require swift modifications or new features for ongoing analyses. For this reason, we work in different layers of development. These layers are represented by Git branches.
stable: Represents the latest release of the package that can be found on both the GitHub release page and on PyPI (see Release flow). The documentation of the stable branch is also the default view you see on Read the Docs (RTD).
main: Represents the upcoming release of the package. This branch is not guaranteed to be stable, but has high CI standards and can only be updated through reviewed pull requests. The documentation of the main branch can be found on RTD under "latest".
Epic branches: When working on a feature or larger refactoring that may take a longer time (think of implementing a new PWA formalism), we isolate its development under an 'epic branch', separate from the main branch. Eventually, this epic branch is to be merged back into main; until then, it is available for discussion and testing.
Pull requests to an epic branch require no code review and the CI checks are less strict. This allows for faster development, while still offering the possibility to discuss new implementations and keeping track of related issues.
Epic branches can be installed with pip as well. Say that a certain epic is located under the branch `epic/some-title` and that the source code is located under https://github.com/ComPWA/ampform; it can then be installed as follows:

```shell
python3 -m pip install git+https://github.com/ComPWA/ampform@epic/some-title
```
The main branch and Epic branches can be updated through pull requests. It is best to create such a pull request from a separate branch, which does not have any CI or code review restrictions. We call this a “feature branch”.
Please use conventional commit messages: start the commit with one of the semantic keywords below in UPPER CASE, followed by a colon and the commit header. The message itself should be in imperative mood; just imagine the commit giving a command to the code framework. So for instance:
```
DX: implement coverage report tools
FIX: remove typo in raised `ValueError` message
MAINT: remove redundant print statements
DOC: rewrite welcome pages
BREAK: removed `formulate_model()` alias method
```
The allowed semantic keywords (commit types) are as follows:

- `FEAT`: new feature added to the package
- `ENH`: improvements and optimizations of existing features
- `FIX`: bug has been fixed
- `BREAK`: breaking changes to the API
- `BEHAVIOR`: changes that may affect the framework output
- `DOC`: improvements or additions to documentation
- `MAINT`: maintenance and upkeep improvements
- `DX`: improvements to the Developer Experience
Keep pull requests small. If the issue you try to address is too big, discuss in the team whether the issue can be converted into an Epic and split up into smaller tasks.
Before creating a pull request, run Tox.
Also use a conventional commit message style for the PR title. This is because we follow a linear commit history and the PR title will become the eventual commit message. A linear commit history is important for the Release flow and it is easier to navigate through changes once something goes wrong. In fact, in a linear commit history, commits that have been merged into the main branch become more like small intermediate patches between the minor and major releases.
PRs can only be merged through ‘squash and merge’. There, you will see a summary based on the separate commits that constitute this PR. Leave the relevant commits in as bullet points. See the commit history for examples. This comes in especially handy when drafting a release!
Releases are managed with the GitHub release page, see for instance the one for AmpForm. The release notes there are automatically generated from the PRs that were merged into the main branch since the previous tag and can be viewed and edited as a release draft if you are a member of the ComPWA organization. Each of the entries is generated from the PR titles, categorized by issue label.
Once a release is made on GitHub for a repository with source code for a Python package, a new version is automatically published on PyPI and the stable branch is updated to this latest tag. The package version is taken from the Git tag associated with the release on GitHub (see setuptools-scm). This way, the release notes on GitHub serve as a changelog as well!
Release tags have to follow the Semantic Versioning scheme! This ensures that the tag can be used by setuptools-scm (in case the repository is a Python package). In addition, milestones with the same name as the release tag are automatically closed.
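To see why the Semantic Versioning format matters for tooling, note that version tags must sort numerically, not lexicographically. A small sketch with a hypothetical helper:

```python
def parse_semver(tag: str) -> tuple[int, int, int]:
    """Split a tag like '0.14.1' (or 'v0.14.1') into comparable integers."""
    major, minor, patch = tag.lstrip("v").split(".")
    return int(major), int(minor), int(patch)


# Integer tuples compare component-wise, so releases order correctly,
# whereas comparing the raw strings would put "0.15.0" before "0.9.9"
assert parse_semver("0.15.0") > parse_semver("0.9.9")
assert "0.15.0" < "0.9.9"
```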
Even though we try to standardize the developer set-up of the repositories, we encourage you to use the code editors that you feel comfortable with. Where possible, we therefore define settings of linters, formatters, etc. in config files that are specific to those tools (in `pyproject.toml` where possible), not in the configuration files of the code editors.
Still, where code editor settings can be shared through configuration files in the repository, we provide recommended settings for the code editor as well. This is especially the case for VSCode.
We are open to other code editors as well. An example would be maintaining a local vimrc for users who prefer VIM. Other IDEs we’d like to support are PyCharm, Atom, IntelliJ with Python. So we’ll gladly integrate your editor settings where possible as you contribute to the frameworks!
Visual Studio Code
We recommend using Visual Studio Code as it's free, regularly updated, and very flexible through its wide range of user extensions.
If you add or open this repository as a VSCode workspace, the workspace settings will ensure that you have the right developer settings for this repository. In addition, VSCode will automatically recommend that you install a number of extensions that we use when working on this code base. They are defined in the `.vscode/extensions.json` file.
You can still specify your own settings in either the user or encompassing workspace settings, as the VSCode settings that come with this repository are folder settings.
Conda and VSCode
ComPWA projects are best developed with Conda and VSCode. The complete developer install procedure then becomes:
```shell
git clone https://github.com/ComPWA/ampform.git  # or some other repo
cd ampform
conda env create
conda activate pwa  # or whatever the environment name is
code .  # open folder in VSCode
```
Writing durable software
ComPWA strives to follow best practices from software development in industry. Following these standards not only makes the code easier to maintain and the software more reliable, it also provides you with the opportunity to learn about these practices while developing the code-base. Below you can find some resources we highly recommend you to be familiar with.
Software development in Python
Clean Code: A Handbook of Agile Software Craftsmanship (2009) by Robert Martin (“Uncle Bob”) 
This gist with a comprehensive summary of the core principles of Martin’s Clean Code
Test-Driven Development with Python (2017) by Harry Percival 
The classic: Test-Driven Development by Example (2002) by Kent Beck 
Composition over inheritance: Subclassing in Python Redux by Hynek Schlawack. A comprehensive article on the topic with illustrative examples in Python and several references to other important articles.
C++ Core Guidelines: while this document is intended for C++ developers, it is an excellent, up-to-date set of guidelines that apply to any programming language.
LeetCode: practice algorithms through coding problems