ci-fairy - a CLI tool

ci-fairy is a command-line helper tool for some commonly performed tasks, primarily those executed by the CI. ci-fairy is optimized to be run from a CI script where its arguments are hardcoded and change very little. It does not do a lot of error handling; expect to see exceptions where things go wrong.

Use within gitlab-ci.yml

The ci-fairy tool is available by extending the template:

include:
  - project: 'freedesktop/ci-templates'
    file: '/templates/ci-fairy.yml'

CI jobs can then invoke ci-fairy directly from their script section, for example:

   - ci-fairy check-merge-request --require-allow-collaboration

The template guarantees the following preinstalled packages: bash, git, python and pip. It is recommended that CI jobs that want to pip install use a Python venv to avoid version conflicts.

The distribution the template is based on is subject to change without notice.


Installation

As ci-fairy is likely used within a CI job, installation via pip3 directly from git is recommended:

pip3 install git+

The @12345deadbeef suffix specifies the git sha to check out. The git sha should match the sha of the included templates.

Or where a local checkout is available:

pip3 install .

Alternatively, you may invoke the ci-fairy tool directly from within the git tree without prior installation. Note that pip3 takes care of all dependencies; for local invocations you must install them manually.

Use of the GitLab CI Environment

ci-fairy will make use of the predefined environment variables set by GitLab CI where possible. This includes but is not limited to CI_PROJECT_PATH, CI_SERVER_URL, and CI_JOB_TOKEN.

In most cases, ci-fairy does not need arguments beyond those specific to the interaction with the project at hand.


Authentication

Authentication is handled automatically through the $CI_JOB_TOKEN environment variable. Where the ci-fairy tool is called outside a CI job, use the --authfile argument to provide the path to a file containing the value of a GitLab private token with ‘api’ access.

For example, if your private token has the value abcd1234XYZ, authentication can be performed like this:

$ echo "abcd1234XYZ" > /path/to/authfile
$ ci-fairy --authfile /path/to/authfile <....>

Where --authfile is used within a CI job, specify the token as a GitLab predefined environment variable of type “File”. The “Key” is the file name given to --authfile; the value is the token value of your private token.


Linting the .gitlab-ci.yml file

GitLab has an online linter, but it requires copy/pasting the .gitlab-ci.yml file into the web form. ci-fairy makes this simpler:

$ cd /path/to/project.git
$ ci-fairy lint

ci-fairy will find the .gitlab-ci.yml file, upload it to the CI linter, and print any errors.


pre-commit hooks

You can incorporate some ci-fairy checks into your pre-commit setup. Add the following snippet to your .pre-commit-config.yaml:

- repo:
  rev: master
  hooks:
    # The hooks below are pre-push by default
    - id: lint
    - id: check-commits
    - id: generate-template

Currently all ci-fairy pre-commit hooks default to stages: ["push"], i.e. they run pre-push, not pre-commit. Please see the pre-commit documentation for further details.


By default pre-commit installs the pre-commit hook only. Run pre-commit install --hook-type pre-push to enable the ci-fairy hooks.

Deleting registry images


The CI_JOB_TOKEN does not have sufficient permissions to delete images, --authfile is required for this task.

To delete an image from your container registry, use the delete-image subcommand. This subcommand provides three modes: deleting a specific image tag, deleting all but a given tag, or deleting all images.

ci-fairy delete-image --repository fedora/30 --all
ci-fairy delete-image --repository fedora/30 --exclude-tag 2020-03.11.0
ci-fairy delete-image --repository fedora/30 --tag "2020-03-*"

During testing, use the --dry-run argument to ensure that no tags are actually deleted.

Templating .gitlab-ci.yml

The .gitlab-ci.yml file can become repetitive where multiple jobs use the same setup (e.g. testing a build across different distributions). ci-fairy provides a command to generate a .gitlab-ci.yml (or any file, really) from a Jinja2 template, sourcing the data from a YAML file.

For example, let’s assume the YAML file .gitlab-ci/config.yml with a list of distributions:

distributions:
  fedora: [32, 31]
  ubuntu: [19.04]

packages:
  fedora:
    needed: ['gcc', 'valgrind']
    use_qemu: false
  ubuntu:
    needed: ['curl', 'wget', 'valgrind']
    use_qemu: true

And the template .gitlab-ci/ci.template that uses those to generate jobs (see the Jinja2 documentation for details):

{% for distro in distributions %}
{% for version in distributions[distro] %}
{{distro}}@{{version}}:
   extends: .fdo.container-build@{{distro}}@x86_64
   variables:
      FDO_DISTRIBUTION_VERSION: "{{version}}"
      FDO_DISTRIBUTION_PACKAGES: "{{" ".join(packages[distro].needed)}}"
{% endfor %}
{% endfor %}
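The " ".join(packages[distro].needed) expression in the template turns the package list into the space-separated string assigned to FDO_DISTRIBUTION_PACKAGES. A quick illustration in plain Python (where Jinja2's attribute access packages[distro].needed becomes a dictionary lookup):

```python
packages = {'fedora': {'needed': ['gcc', 'valgrind']},
            'ubuntu': {'needed': ['curl', 'wget', 'valgrind']}}

# Jinja2's packages[distro].needed corresponds to packages[distro]['needed']
for distro in packages:
    print(" ".join(packages[distro]['needed']))
# gcc valgrind
# curl wget valgrind
```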

Now let’s generate the .gitlab-ci.yml file:

$ ci-fairy generate-template

and our file will look like this:

fedora@32:
   extends: .fdo.container-build@fedora@x86_64
   variables:
      FDO_DISTRIBUTION_VERSION: "32"
      FDO_DISTRIBUTION_PACKAGES: "gcc valgrind"

fedora@31:
   extends: .fdo.container-build@fedora@x86_64
   variables:
      FDO_DISTRIBUTION_VERSION: "31"
      FDO_DISTRIBUTION_PACKAGES: "gcc valgrind"

ubuntu@19.04:
   extends: .fdo.container-build@ubuntu@x86_64
   variables:
      FDO_DISTRIBUTION_VERSION: "19.04"
      FDO_DISTRIBUTION_PACKAGES: "curl wget valgrind"


Without arguments, ci-fairy generate-template always uses the files .gitlab-ci/config.yml and .gitlab-ci/ci.template and (over)writes the file .gitlab-ci.yml.

This command does not need a connection to the GitLab instance.

Templating any file

While the defaults generate a .gitlab-ci.yml file, it’s possible to invoke ci-fairy with any YAML or template file:

$ ci-fairy generate-template --config somefile.yml mytemplate.tmpl

ci-fairy doesn’t care about the template file type; you can use any text file as a template source.

To allow using the same YAML file for multiple templates, ci-fairy allows for the selection of the root node. For example:

$ cat template.tpl
qemu needed? {{use_qemu}}
$ ci-fairy generate-template --root=/packages/fedora --config distributions.yml template.tpl
qemu needed? true
$ ci-fairy generate-template --root=/packages/ubuntu --config distributions.yml template.tpl
qemu needed? false
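Conceptually, --root walks the parsed YAML dictionary down the /-separated path. A minimal Python sketch of that lookup (select_root is a hypothetical helper for illustration, not ci-fairy's implementation):

```python
def select_root(config, root):
    # walk the parsed YAML tree down the /-separated path
    node = config
    for part in root.strip('/').split('/'):
        node = node[part]
    return node

# a dictionary shaped like the distributions.yml example above
config = {'packages': {'fedora': {'use_qemu': True},
                       'ubuntu': {'use_qemu': False}}}
print(select_root(config, '/packages/fedora')['use_qemu'])  # True
```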

ci-fairy also provides helper functions that can be used from Jinja2 templates:

  • ci_fairy.hashfiles(): takes a list of one or more filenames and produces a 64 character hexdigest string. It hashes both the files’ content and the filenames themselves. The used hash algorithm is unspecified but stable. This can be used to generate checksums from files. Example:

      FEDORA_TAG: {{ci_fairy.hashfiles(...)}}
  • ci_fairy.sha256sum(path, prefix=True): takes a path to a file and produces a string in the form of sha256-<hexdigest> if prefix=True and in the form of <hexdigest> if prefix=False. Unlike hashfiles() it hashes only the file contents. This can be used to generate checksums from files. Example:

      CHECKSUM: {{ci_fairy.sha256sum('tarball.tar.xz')}}
  • ci_fairy.import_module(module_name): takes a string which is a python module name and exports that module as a variable. This is useful to import a module not exposed by ci_fairy itself. Example:

    {% set time = ci_fairy.import_module( 'time' ) %}
    {{ time.time() }}
  • ci_fairy.nodes: exposes the full set of config nodes as a dictionary. This allows iterating over all root nodes. Example:

    {% for key in ci_fairy.nodes %}
    - root node in config: {{ key }}
    {% endfor %}
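The semantics of sha256sum() and the general shape of hashfiles() can be sketched in Python. Note that ci-fairy's actual hashfiles() algorithm is unspecified (only "stable"), so this sketch illustrates the idea but will not reproduce its digests:

```python
import hashlib
import os
import tempfile

def sha256sum(path, prefix=True):
    # hashes only the file contents, like ci_fairy.sha256sum()
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return 'sha256-' + digest if prefix else digest

def hashfiles(*paths):
    # hashes file names and contents; ci-fairy's real algorithm is
    # unspecified, so this is an illustration, not a reimplementation
    h = hashlib.sha256()
    for path in paths:
        h.update(path.encode())
        with open(path, 'rb') as f:
            h.update(f.read())
    return h.hexdigest()  # 64 hex characters

# demo: hash a scratch file
tmp = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(tmp, 'w') as f:
    f.write('hello')
print(len(hashfiles(tmp)))  # 64
```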


Root nodes in the config files must not be named ci_fairy to avoid namespace clashes.

For convenience, users are encouraged to use the unicode character 🧚 as an alias for ci_fairy. This makes it clearer in the template that the object is not part of the config file.



Extensions to YAML syntax

In addition to the ci_fairy or 🧚 keyword, some keywords have been added to the YAML language to allow easier configuration.


The extends keyword aims at replicating the GitLab CI/CD extends keyword.

There are a few differences, however, and ci-fairy only supports the following:

Supported in: top-level dictionaries

The extends: keyword makes the current dictionary inherit all members of the extended dictionary, according to the following rules:

  • where the value is a non-empty dict, the base and new dicts are merged

  • where the value is a non-empty list, the base and new list are concatenated

  • where the value is an empty dict or empty list, the new value is the empty dict/list.

  • otherwise, the new value overwrites the base value.


foo:
  bar: [1, 2]
  baz:
    a: 'a'
    b: 'b'
  bat: 'foobar'

subfoo:
  extends: foo
  bar: [3, 4]
  baz:
    c: 'c'
  bat: 'subfoobar'

Results in the effective values for subfoo:

subfoo:
   bar: [1, 2, 3, 4]
   baz: {a: 'a', b: 'b', c: 'c'}
   bat: 'subfoobar'
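The merge rules above can be sketched in Python. This is an illustration of the documented semantics (with a shallow dict merge, which the rules leave unspecified), not ci-fairy's implementation:

```python
def extend(base, new):
    # apply the documented extends: rules member by member
    result = dict(base)
    for key, value in new.items():
        if key in result and isinstance(value, dict) and value:
            result[key] = {**result[key], **value}   # non-empty dict: merge
        elif key in result and isinstance(value, list) and value:
            result[key] = result[key] + value        # non-empty list: concatenate
        else:
            result[key] = value                      # empty or scalar: overwrite
    return result

foo = {'bar': [1, 2], 'baz': {'a': 'a', 'b': 'b'}, 'bat': 'foobar'}
subfoo = {'bar': [3, 4], 'baz': {'c': 'c'}, 'bat': 'subfoobar'}
print(extend(foo, subfoo))
# {'bar': [1, 2, 3, 4], 'baz': {'a': 'a', 'b': 'b', 'c': 'c'}, 'bat': 'subfoobar'}
```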


The include keyword aims at replicating the local form of the GitLab CI/CD include keyword.

Supported in: top-level only

The include: keyword includes the specified file in place. The path to the included file is relative to the source file.


# content of firstfile
foo:
    bar: [1, 2]

# content of secondfile
foo:
    baz: [3, 4]
include: firstfile

subfoo:
  extends: foo

The included file is processed with the normal YAML rules. Specifically, where two files define a top-level key with the same name, the later-defined section overwrites the earlier one. The position of the include statement thus matters a lot.
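This overwrite-on-include ordering can be sketched in Python (resolve_includes is a hypothetical helper operating on pre-parsed files, not ci-fairy's loader):

```python
def resolve_includes(files, name):
    # splice included files in place; a later definition of the same
    # top-level key overwrites an earlier one
    merged = {}
    for key, value in files[name]:
        if key == 'include':
            merged.update(resolve_includes(files, value))
        else:
            merged[key] = value
    return merged

# each file reduced to its ordered top-level (key, value) pairs
files = {
    'firstfile':  [('foo', {'bar': [1, 2]})],
    'secondfile': [('foo', {'baz': [3, 4]}),
                   ('include', 'firstfile'),
                   ('subfoo', {'extends': 'foo'})],
}
# firstfile's foo overwrites secondfile's earlier foo definition
print(resolve_includes(files, 'secondfile')['foo'])  # {'bar': [1, 2]}
```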


The version keyword (optional) specifies the YAML file version. It is handled as an attribute on the config object and does not show up in the dictionary otherwise. It must be a top-level entry in the YAML file in the form “version: 1” (or any other integer) and is exposed as the “version” attribute.

Where other files are included, the version of each included file must be identical to the first version found in any file.

Checking commits

Git commit messages have a few standard requirements that apply across projects. ci-fairy can verify that the commits to be merged meet those standards.

$ ci-fairy check-commits master..HEAD

By default, this checks for any leftover fixup!/squash! commits and some cosmetic requirements. Other available checks include whether a Signed-off-by: line is present (--signed-off-by) or ensuring that such a line is not included (--no-signed-off-by). See the --help output for the list of available checks.

It is possible to provide custom rule definitions in YAML format, either in .gitlab-ci/commit-rules.yml if it exists, or by specifying a file with the --rules-file option. An example rules file:

    - regex: ($CI_MERGE_REQUEST_PROJECT_URL/(-/)?(issues|merge_requests)/[0-9]+)
      message: Commit message must contain a link to an issue or merge request
    - regex: '^\S*\.[ch]:'
      message: Commit message subject prefix should not include .c, .h etc.
      where: subject

ci-fairy uses the CI environment where available and will do the right thing without requiring options. It does need the $FDO_UPSTREAM_REPO variable to be set to the upstream project’s full name. An example .gitlab-ci.yml section:

variables:
  FDO_UPSTREAM_REPO: libinput/libinput

commit message check:
   image: something_that_has_pip
   script:
     - pip3 install git+
     - ci-fairy check-commits --signed-off-by

ci-fairy uses $CI_MERGE_REQUEST_TARGET_BRANCH_NAME if available, otherwise it defaults to the main or master branch of the $FDO_UPSTREAM_REPO, whichever exists. For the above snippet, the commit range checked will thus be upstream/master..HEAD.

Checking merge requests

ci-fairy can be used to check merge requests for common requirements:

$ ci-fairy check-merge-request --require-allow-collaboration

The --require-allow-collaboration flag checks for the merge request to allow edits from maintainers.

This can be used in the CI to fail a merge request pipeline if that checkbox is not set. There are two ways to invoke this through the CI: either through a detached merge pipeline or by searching for a merge request with the same commit sha as our branch. The detached merge pipeline has side-effects you should familiarize yourself with.

Checking merge requests within a pipeline

Here is an example for utilizing FDO_UPSTREAM_REPO:

variables:
  FDO_UPSTREAM_REPO: 'project/name'

check-merge-request:
  image: golang:alpine
  script:
    - apk add python3 git
    - pip3 install git+
    - ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
  artifacts:
    when: on_failure
    reports:
      junit: check-merge-request.xml

When pushing to a branch, this snippet will search project/name for an open merge request that has the same git sha as this pipeline. That merge request is the one the checks are performed on. If no suitable merge request is found, ci-fairy's exit code is 2. Otherwise, ci-fairy exits with status 0 on success or 1 in case of failure.

The advantage of this approach is that the job runs as part of the normal CI pipeline and no detached merge pipeline needs to be run.

The disadvantage of this approach is that the pipeline may complete before a merge request is filed, so the job may fail despite the (future) merge request having the checkbox set.

Checking merge requests within a detached pipeline

Here is an example for utilizing detached merge pipelines:

# work around the detached merge request pipeline issues by defining
# rules for all jobs
workflow:
  rules:
    - when: always

check-merge-request:
  image: golang:alpine
  script:
    - apk add python3 git
    - pip3 install git+
    - ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
      when: always
    - when: never  # override the workflow's always rule
  artifacts:
    when: on_failure
    reports:
      junit: check-merge-request.xml

Note that defining rules: for a single job has the side-effect of running detached merge pipelines. This is almost never what you really want. To work around this, the above example defines global rules in workflow, effectively providing a rules definition for all jobs. Ensure that this is compatible with your project’s CI.

Waiting for a pipeline

ci-fairy can be used to wait for the completion of a pipeline.

$ ci-fairy wait-for-pipeline

In the default invocation with no arguments, ci-fairy will guess the GitLab project path based on $GITLAB_USER_ID if set, or $USER otherwise, combined with the basename of the current directory. The pipeline to wait for defaults to the pipeline matching the current git sha:

$ whoami
$ cd myproject
$ git push -q gitlab mybranch
$ ci-fairy wait-for-pipeline
status: success   | 90/90 | created:  0 | pending:  0 | running:  0 | failed:  1 | success: 89
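The project-path guess described above amounts to roughly the following (guess_project_path is a hypothetical helper for illustration, not part of ci-fairy's API):

```python
import os

def guess_project_path():
    # documented default: $GITLAB_USER_ID if set, else $USER, combined
    # with the basename of the current directory (illustration only)
    user = os.environ.get('GITLAB_USER_ID') or os.environ.get('USER')
    return '{}/{}'.format(user, os.path.basename(os.getcwd()))
```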

Please see the --help output for more details on controlling which pipeline to wait for.