
enable MPI domain decomposition with plugins #50

Open · 1 of 6 tasks
eirrgang opened this issue Apr 29, 2018 · 3 comments

eirrgang commented Apr 29, 2018

Thread-MPI domain decomposition is working, but we should also allow MPI domain decomposition with plugins. This requires some more fiddling with MPI initialization both inside and outside of GROMACS, and with the sharing of MPI communicators.

A first step is to build the full stack with MPI compilers and with MPI enabled in GROMACS, while gmxapi and sample_restraint themselves should not need any MPI dependencies.

Building GROMACS with MPI must not break the existing ensemble features in gmxapi, which currently rely on mpi4py in a Python-based Context implementation. Either mpi4py must coexist with GROMACS's MPI usage, or the parallel Context must receive and use the communicator obtained when GROMACS initializes MPI. In the latter case, we must make sure that the communicator is no longer needed after GROMACS finalizes the MPI environment.
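
For the second option, here is a minimal mpi4py sketch of a Context that borrows, rather than owns, a communicator; the class name `EnsembleContext` and its methods are hypothetical, not current gmxapi API:

```python
# Hypothetical sketch: "EnsembleContext" is a placeholder name, not current gmxapi API.
from mpi4py import MPI


class EnsembleContext:
    """Parallel Context that borrows, rather than owns, an MPI communicator."""

    def __init__(self, communicator=None):
        # Duplicate the borrowed communicator so that traffic in this Context
        # cannot collide with messages sent elsewhere (e.g. inside GROMACS).
        parent = communicator if communicator is not None else MPI.COMM_WORLD
        self._comm = parent.Dup()

    def ensemble_rank(self):
        return self._comm.Get_rank()

    def close(self):
        # Release the duplicate before MPI is finalized: after MPI_Finalize
        # (whether called by GROMACS or by mpi4py at interpreter exit),
        # the communicator must not be touched at all.
        if self._comm is not None and not MPI.Is_finalized():
            self._comm.Free()
        self._comm = None
```

Duplicating the borrowed communicator keeps the lifetime question explicit: the Context owns only its duplicate and must release it before whoever initialized MPI finalizes the environment.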

  • gromacs-gmxapi testing matrix includes MPI and Thread-MPI builds
  • libgmxapi test client initializes MPI and GROMACS reuses the environment: Allow client management of MPI environment for GROMACS #57
  • gmxapi Context can determine ensemble member rank and perform ensemble reduce operations (see the sketch after this list)
  • restraint plugin has access to call-back framework that can make one call per ensemble member
  • restraint plugin can be registered and initialized once per tMPI particle-pair force-calculating thread
  • restraint plugin can be registered and initialized once per MPI particle-pair force-calculating rank
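
For the ensemble rank/reduce item above, a minimal mpi4py sketch of the intended semantics, assuming one MPI rank per ensemble member; the helper name `ensemble_reduce` is a placeholder rather than existing gmxapi API:

```python
# Hypothetical illustration: "ensemble_reduce" is a placeholder helper, not gmxapi API.
import numpy
from mpi4py import MPI


def ensemble_reduce(comm, local_value):
    """Average a scalar per-member quantity (e.g. a restraint bias) over the ensemble."""
    send = numpy.array([local_value], dtype='d')
    recv = numpy.zeros_like(send)
    comm.Allreduce(send, recv, op=MPI.SUM)
    return recv[0] / comm.Get_size()


# Assume one MPI rank per ensemble member, so the member index is simply the
# rank in the ensemble communicator (COMM_WORLD here for illustration).
comm = MPI.COMM_WORLD
member = comm.Get_rank()
mean_bias = ensemble_reduce(comm, float(member))
```
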
@peterkasson

Yes--as you know, this is high priority for some of our target applications.

eirrgang commented May 2, 2018

This is kind of an involved thing to troubleshoot, and it has caused a lot of headaches when switching back and forth between toolchains. I want to isolate it as much as possible and keep up to date with the changes occurring in the communicator interfaces upstream. I suggest that I pursue MPI communicator sharing in a fresh fork of GROMACS 2018 with minimal gmxapi features to start with. I would like to start more or less immediately so that the solution can be integrated with gmxapi before further gmxapi development adds complexity.

@eirrgang

Also ref kassonlab/sample_restraint#15
