Thread-MPI domain decomposition is working, but we should also allow MPI-based domain decomposition when plugins are in use. This requires further work on how MPI is initialized and finalized inside and outside of GROMACS, and on sharing MPI communicators between GROMACS and the calling code.
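To make the communicator-sharing idea concrete, here is a minimal sketch using mpi4py: a world communicator is split so that each ensemble member gets its own sub-communicator, which an MPI-enabled simulation could then use for domain decomposition. The `run_simulation` function and the `ensemble_size` value are hypothetical placeholders, not part of any existing gmxapi or GROMACS interface.

```python
# Sketch: split the world communicator so each ensemble member gets a
# sub-communicator for intra-simulation (domain decomposition) work.
from mpi4py import MPI

world = MPI.COMM_WORLD
ensemble_size = 2  # hypothetical: two coupled simulations

# Assign ranks round-robin to ensemble members.
member = world.Get_rank() % ensemble_size
member_comm = world.Split(color=member, key=world.Get_rank())

def run_simulation(comm):
    """Placeholder for handing `comm` to an MPI-enabled simulation."""
    print("member %d: rank %d of %d"
          % (member, comm.Get_rank(), comm.Get_size()))

run_simulation(member_comm)
member_comm.Free()
```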
A first step is to build the full stack with MPI compilers and with MPI enabled in GROMACS, while gmxapi and sample_restraint themselves should not need any MPI dependencies.
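A quick way to check which MPI flavor an installed GROMACS was built with is to inspect its version output. The sketch below assumes a `gmx` binary on the PATH and an "MPI library:" line in the output of `gmx --version`; the binary name (e.g. `gmx_mpi` for MPI builds) and the exact wording may vary between builds and versions.

```python
# Sketch: report the MPI flavor of an installed GROMACS build.
import subprocess

def gromacs_mpi_flavor(gmx_bin="gmx"):
    """Return the reported MPI library (e.g. 'thread_MPI' or 'MPI')."""
    out = subprocess.run([gmx_bin, "--version"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.strip().lower().startswith("mpi library"):
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    print(gromacs_mpi_flavor())
```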
Enabling an MPI-enabled GROMACS build must not break existing ensemble features in gmxapi, which currently rely on mpi4py in a Python-based Context implementation. Either mpi4py must coexist with GROMACS's use of MPI, or the parallel Context must receive and use the communicator obtained when GROMACS initializes MPI. In the latter case, we must make sure that the communicator is no longer needed after GROMACS finalizes the MPI environment.
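One way the coexistence could look, as a sketch only: mpi4py can be told not to initialize or finalize MPI itself, and the Context can guard its communicator access with `MPI.Is_initialized()` / `MPI.Is_finalized()` so it never touches the communicator after the MPI environment has been torn down elsewhere (e.g. by an MPI-enabled GROMACS). The `ensemble_communicator` helper is hypothetical, not an existing gmxapi interface.

```python
# Sketch: attach to an MPI environment whose lifetime is managed elsewhere,
# rather than initializing or finalizing MPI from Python.
import mpi4py
mpi4py.rc.initialize = False   # do not call MPI_Init on import
mpi4py.rc.finalize = False     # do not call MPI_Finalize at exit

from mpi4py import MPI

def ensemble_communicator():
    """Return a communicator only while MPI is initialized and not finalized."""
    if not MPI.Is_initialized() or MPI.Is_finalized():
        return None
    return MPI.COMM_WORLD

comm = ensemble_communicator()
if comm is not None:
    print("rank", comm.Get_rank(), "of", comm.Get_size())
```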
The gromacs-gmxapi testing matrix includes both MPI and Thread-MPI builds.
This is involved to troubleshoot and has caused a lot of headaches when switching back and forth between toolchains. I want to isolate it as much as possible and keep up to date with the changes to the communicator interfaces upstream. I suggest pursuing MPI communicator sharing in a fresh fork of GROMACS 2018 with minimal gmxapi features to start with. I would like to start more or less immediately so that the solution can be integrated with gmxapi before further gmxapi development adds complexity.