Commit 3033c4d - expand maintaining kit docs
fiona-naughton committed Oct 31, 2024 (parent: 00302ae)
Showing 4 changed files with 250 additions and 11 deletions.
Binary file added docs/source/img/finding-ci-error.gif
95 changes: 95 additions & 0 deletions docs/source/maintaining-a-kit/improving.rst
#####################
Improving your MDAKit
#####################

Adding new features
===================
Developing your MDAKit can involve, for example, adding new features, or
altering existing ones to improve performance or user-friendliness. This may
come at your own initiative, or at the request of a user.

How you develop your kit is up to you! Just remember it's a good idea to
develop the tests and documentation alongside the new code (and to run the
existing tests to ensure you aren't causing any unintended changes).
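For instance, a new feature and its test can grow together. A minimal sketch of
what this looks like in practice (the function, values, and file name here are
hypothetical, not from any real kit - adapt them to your own API):

```python
# test_rolling_mean.py - sketch of a test written alongside a new feature.
# All names here are illustrative placeholders.

def rolling_mean(values, window):
    """New feature: rolling mean over a sequence of numbers."""
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]


def test_rolling_mean():
    # Written at the same time as the feature, so regressions surface early.
    assert rolling_mean([1.0, 2.0, 3.0, 4.0], 2) == [1.5, 2.5, 3.5]
```

Running ``pytest`` over the whole suite then exercises both the new feature and
the existing behaviour in one go.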


Go beyond the minimal MDAKit Registry requirements
==================================================
The requirements for registering an MDAKit are intentionally minimal to keep
the entry barrier low. **Going beyond these minimal requirements is highly
recommended**.

This may include (but is not limited to):

- **More tests**: can you get to 100% coverage?!
- **More documentation**: Explain each part of your code. Add examples. Make
your documentation visually appealing and easy to navigate!
- :ref:`Add a logo <logo>`: Spice up your MDAKit with your very own logo!
- **Use tooling and workflows**: don't rely solely on the MDAKit Registry's CI
run - set up your own automated testing and alerts!
- **Community resources**: make it easy for users to report issues, ask
questions - or even contribute to your MDAKit themselves!
- **Release on PyPI/conda-forge**: make it easier for users to install your kit!
- :ref:`Make a journal publication <publishing>`: get recognition for your code!

If applicable, remember to :ref:`update your kit's metadata <update-metadata>`
so new features are reflected on the Registry.


.. _update-metadata:

Updating the MDAKit's metadata
==============================
As your kit evolves, you may wish to update the information displayed on its
Registry page - for example, to expand the description, add a publication,
or update the installation instructions.

If this is the case, simply edit your kit's ``metadata.yaml`` in your fork of
the ``MDAKits`` repository, then make a new PR. The MDAKits team will then
review and merge your PR to apply the changes.
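As a sketch, such an edit might touch only a few fields. The excerpt below is
illustrative only - the field names and values are examples, not taken from a
real kit; check them against your own ``metadata.yaml`` and the registry's
template before editing:

```yaml
# Illustrative excerpt of a kit's metadata.yaml - example values only
description:
    Analyses widget dynamics in MD trajectories (expanded description).
install:
    - pip install mdakit-example          # updated install instructions
publications:
    - https://doi.org/10.xxxxx/example    # newly added publication (placeholder)
```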


.. _publishing:

Publishing your MDAKit!
=======================
Publishing an article in a journal such as `JOSS <https://joss.readthedocs.io/>`_
is a good way to get recognition for your work and provide a citable source for
users.

`JOSS papers`_ are short and relatively simple, and in meeting the requirements
for registering an MDAKit you will already have satisfied many of JOSS's
submission requirements. Once you've built up your kit's documentation and
testing, consider publication with JOSS or a similar journal!


.. _logo:

Adding a logo for your MDAKit
=============================
A custom logo can add some pizazz to your MDAKit. You are welcome to create an
entirely custom logo, use the default `'empty gear' template`_,
or modify the template - feel free to place your logo within the gears!

If you used the MDAKits cookiecutter, there is already a placeholder logo in
your documentation. The `MDAKits cookiecutter documentation`_ has information
on updating this.

.. note::
   MDAnalysis recommends that kit authors carefully check that any material
   used in their kit - including logos - is used under an appropriate license
   and does not infringe on any copyrights.

   MDAnalysis cannot give any legal advice on these matters; if you are unsure
   about any legal matters, please consult a lawyer.


.. _`MDAKits cookiecutter documentation`:
   https://cookiecutter-mdakit.readthedocs.io/en/latest/usage.html#documentation-configuration

.. _`JOSS papers`:
   https://joss.readthedocs.io/en/latest/submitting.html#submission-process

.. _`'empty gear' template`:
   https://github.com/MDAnalysis/branding/tree/main/templates/MDAKits

125 changes: 125 additions & 0 deletions docs/source/maintaining-a-kit/keep-healthy.rst
*************************
Keeping an MDAKit healthy
*************************

.. _failingci:

If an MDAKit fails the weekly CI
================================

In the event that a kit no longer passes its tests, an issue is automatically
raised on the `MDAKits GitHub`_, the maintainers (as identified in
``metadata.yaml``) are notified, and the kit's CI badges appearing on the
:ref:`Registry <mdakits>` will be updated.

Note that two tests are run:

- **develop**, using the 'current' version of your MDAKit (installed when
running the commands under ``src_install`` in ``metadata.yaml``), and the
current ``develop`` branch of MDAnalysis.

- **latest**, (if applicable) using the latest release of your MDAKit (installed
when running the commands under ``install`` in ``metadata.yaml``), and the
most recent release of MDAnalysis.

Depending on the nature of the failure, one or both of **develop** and
**latest** may be failing.
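In ``metadata.yaml`` terms, the two runs differ in which install commands they
use. An illustrative sketch (the package name and repository URL are
placeholders, not a real kit):

```yaml
# Illustrative metadata.yaml excerpt - placeholder names and URLs
install:        # used by the 'latest' run: most recent release
    - pip install mdakit-example
src_install:    # used by the 'develop' run: current development version
    - pip install git+https://github.com/example-org/mdakit-example@main
```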


Why did CI fail?
----------------
There are a number of reasons the CI tests may fail: it could be an internal
issue arising as you develop your kit, or it may indicate that updates are
needed to keep in line with changes in MDAnalysis or other dependencies.
The failure may reflect a single test that no longer passes, or a larger error
that prevents your kit from being installed or any tests from running.

If you don't already have an idea what is causing your kit to fail, you can read
the CI log file to find the exact point of failure and accompanying error
messages:

#. Click on the *'Actions'* tab on the
`MDAKits Github page <https://github.com/MDAnalysis/MDAKits/>`_.

#. Click on the most recent *'GH Actions Cron CI'* job.

#. Under *'Annotations'*, find and click the failing job(s) with your kit's
name. Failing jobs should show a red cross and be grouped at the top.

#. You should be directed to the place in the CI log where the failure occurs.
Some scrolling may be required to find the origin of the error.

.. image:: ../img/finding-ci-error.gif
   :alt: Finding the error message after being notified of a failing CI run


Fixing a failure
----------------
Once the point of failure has been identified, you can set about trying to fix
it. The exact fix required will of course depend on exactly what went wrong, but
hopefully the error message(s) in the log will be enough to get you started.

Any fixes will be applied in your kit's home repository - no direct interaction
with the Registry is required.

If you're still not sure what's gone wrong or how to fix it, you can comment on
the issue that was raised on the `MDAKits GitHub`_. The MDAKits team, or
other members of the community, may be able to help - but remember, ultimate
responsibility remains with **you**.


After applying a fix
--------------------
Once you have applied a fix to your MDAKit (and, if applicable, pushed a new
release with these changes applied), no further action is required from you.

Assuming that the fix does indeed solve the issue, the tests will pass the next
time the automated CI is run. After the successful run, the CI badges on the
:ref:`Registry <mdakits>` will be restored to 'passing' and the issue raised on
the `MDAKits GitHub`_ will be automatically closed.


Keeping an eye out for upstream changes
=======================================
Avoid failing tests before they happen!

Just as you are likely to keep improving your kit, the upstream packages on
which it relies - including MDAnalysis - will also continue to evolve.
Sometimes, this means that things your kit relies on will no longer work.
Keeping an eye out for such changes will allow you to modify your kit
appropriately *before* the upstream change is fully applied and your code
starts to fail.

Usually, a package will warn of upcoming changes that may affect downstream
usage (e.g. a change in how a function is to be called) by raising a warning
whenever the affected function is used. If your kit relies on any such
soon-to-change features then, assuming the relevant code is covered by your
kit's tests, these warnings will be triggered when running the tests and will
appear in the logs of the automated CI runs -
it pays to keep an eye on these!
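The warning mechanism itself is plain Python. A self-contained sketch of the
pattern (the function below is hypothetical, but MDAnalysis and most scientific
Python packages signal deprecations in a similar way):

```python
import warnings


def old_function():
    # An upstream package flags an upcoming change like this; the warning
    # shows up in pytest output, and hence in the Registry's CI logs.
    warnings.warn(
        "old_function is deprecated and will be removed in a future "
        "release; use its replacement instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return "result"


# A test that exercises old_function triggers (and can record) the warning:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_function()

print(caught[0].category.__name__)  # DeprecationWarning
```

Running your suite with ``pytest -W error::DeprecationWarning`` turns such
warnings into failures, so upstream changes cannot slip by unnoticed.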

It is also a good idea to check the release notes of new releases of packages
your kit uses, and to watch for announcements of major upcoming changes.


Keeping support windows in mind
===============================
Your kit should specify which versions of its dependencies (including Python
itself) it works with. Ideally, as new versions of these dependencies are
released, your kit will be updated to work with them.

It is *not* expected that your kit remain compatible with *all* historic
releases - indeed, many old versions of these packages will not work with each
other. Each of these packages also has a **support window**: the period after
a given release during which its developers will continue to ensure it works
as intended.

`SPEC0 <https://scientific-python.org/specs/spec-0000/>`_ is a Scientific
Python ecosystem recommendation that sets out a timeline for which versions of
Python and common dependencies should be supported (and mutually compatible)
at any given time. You can follow SPEC0 to decide which Python and dependency
versions your kit should aim to support, and when older versions can be
dropped.
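As a sketch, a kit following SPEC0 might declare bounds like the following in
its ``pyproject.toml`` (the version numbers are examples only - consult the
current SPEC0 table for the versions in today's support window):

```toml
# Illustrative pyproject.toml excerpt - example version bounds only
[project]
requires-python = ">=3.11"
dependencies = [
    "numpy>=1.26",
    "MDAnalysis>=2.7",
]
```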


.. _`MDAKits GitHub`:
   https://github.com/MDAnalysis/MDAKits/issues
41 changes: 30 additions & 11 deletions docs/source/maintainingakit.rst
Maintaining an MDAKit
=====================
like to see covered here, please get in touch via
`MDAnalysis GitHub Discussions`_.

Successfully registering an MDAKit is not the end of the journey! Like a pet, a
software package requires ongoing care to keep it healthy and thriving.
This includes both adding new features and ensuring the MDAKit continues to run
as intended. **While not required for an MDAKit to remain on the registry, such
activities are highly encouraged**.

There are a variety of reasons a kit may behave unexpectedly after being
submitted to the registry. Apart from active development of the kit itself,
changes in the kit's dependencies, or even Python itself, can introduce new
functionality or deprecate old functionality.

As part of the MDAKit Registry, your kit's tests (as specified in
``metadata.yaml``) are automatically rerun each week, so that you (and potential
users) have assurance that your code still works as intended, or are notified
when it does not.

**However, the ultimate responsibility for maintaining your MDAKit remains with
you.**

The sections below provide some information to keep in mind for maintaining your
MDAKit after registration.

.. toctree::
   :maxdepth: 2

   maintaining-a-kit/keep-healthy
   maintaining-a-kit/improving


.. _`MDAnalysis GitHub Discussions`:
   https://github.com/MDAnalysis/mdanalysis/discussions


Want to go even further beyond your MDAKit? You can also help us out by
:ref:`reviewing other MDAKits <reviewers-guide>`!
