diff --git a/README.md b/README.md
index a88f794..1292726 100755
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ Implemented in this package (the `ClusterManagers.jl` package):
 | PBS (Portable Batch System) | `addprocs_pbs(np::Integer; qsub_flags=``)` or `addprocs(PBSManager(np, qsub_flags))` |
 | Scyld | `addprocs_scyld(np::Integer)` or `addprocs(ScyldManager(np))` |
 | HTCondor[^1] | `addprocs_htc(np::Integer)` or `addprocs(HTCManager(np))` |
-| Slurm | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
+| Slurm (deprecated - consider using [SlurmClusterManager.jl](https://github.com/kleinhenz/SlurmClusterManager.jl) instead) | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
 | Local manager with CPU affinity setting | `addprocs(LocalAffinityManager(;np=CPU_CORES, mode::AffinityMode=BALANCED, affinities=[]); kwargs...)` |
 
 [^1]: HTCondor was previously named Condor.
@@ -93,7 +93,7 @@ julia>
 	From worker 2:	compute-6
 	From worker 3:	compute-6
 ```
-Some clusters require the user to specify a list of required resources. 
+Some clusters require the user to specify a list of required resources.
 For example, it may be necessary to specify how much memory will be needed by the job - see this [issue](https://github.com/JuliaLang/julia/issues/10390).
 The keyword `qsub_flags` can be used to specify these and other options.
 Additionally the keyword `wd` can be used to specify the working directory (which defaults to `ENV["HOME"]`).
@@ -176,10 +176,10 @@ ElasticManager:
   Active workers : []
   Number of workers to be added  : 0
   Terminated workers : []
-  Worker connect command : 
+  Worker connect command :
     /home/user/bin/julia --project=/home/user/myproject/Project.toml -e 'using ClusterManagers; ClusterManagers.elastic_worker("4cOSyaYpgSl6BC0C","127.0.1.1",36275)'
 ```
 
-By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false))` to the first form of the `ElasticManager` constructor. 
+By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false)` to the first form of the `ElasticManager` constructor.
 
-Once workers are connected, you can print the `em` object again to see them added to the list of active workers. 
+Once workers are connected, you can print the `em` object again to see them added to the list of active workers.
diff --git a/src/slurm.jl b/src/slurm.jl
index 95ec145..a5dd617 100644
--- a/src/slurm.jl
+++ b/src/slurm.jl
@@ -15,6 +15,14 @@ end
 
 function launch(manager::SlurmManager, params::Dict, instances_arr::Array,
                 c::Condition)
+    let
+        msg = "`ClusterManagers.addprocs_slurm` and `ClusterManagers.SlurmManager` " *
+              "(from the `ClusterManagers.jl` package) are deprecated. " *
+              "Consider using the `SlurmClusterManager.jl` package instead."
+        funcsym = :addprocs_slurm
+        force = true
+        Base.depwarn(msg, funcsym; force)
+    end
     try
         exehome = params[:dir]
         exename = params[:exename]
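
Note for reviewers: the deprecation warning added in `src/slurm.jl` points users at SlurmClusterManager.jl. The sketch below shows what that migration might look like; it is not part of this diff, and the zero-argument `SlurmManager()` constructor and its reliance on the Slurm environment (e.g. `SLURM_NTASKS`) are assumptions based on that package's documented usage.

```julia
# Migration sketch (assumes SlurmClusterManager.jl's documented API; run this
# from inside an existing Slurm allocation, e.g. a script submitted with
# `sbatch` or a shell obtained via `salloc`).
using Distributed
using SlurmClusterManager   # replacement suggested by the deprecation message

# Unlike ClusterManagers' SlurmManager(np), this constructor takes no worker
# count: it starts one worker per task in the current allocation.
addprocs(SlurmManager())

# Quick sanity check that workers landed on the allocated nodes.
@everywhere println("hello from $(myid()):$(gethostname())")
```

Since the warning is emitted with `force = true`, `Base.depwarn` shows it even under the default `--depwarn=no`, so existing `addprocs_slurm` users will see the notice without passing any extra flags.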