Commit

Deprecate the Slurm functionality in this package (and point users to `SlurmClusterManager.jl` instead)
DilumAluthge committed Jan 2, 2025
1 parent 2a2d8c6 commit fe6a94c
Showing 2 changed files with 13 additions and 5 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -21,7 +21,7 @@ Implemented in this package (the `ClusterManagers.jl` package):
| PBS (Portable Batch System) | `addprocs_pbs(np::Integer; qsub_flags=``)` or `addprocs(PBSManager(np, qsub_flags))` |
| Scyld | `addprocs_scyld(np::Integer)` or `addprocs(ScyldManager(np))` |
| HTCondor[^1] | `addprocs_htc(np::Integer)` or `addprocs(HTCManager(np))` |
| Slurm | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
| Slurm (deprecated - consider using [SlurmClusterManager.jl](https://github.com/kleinhenz/SlurmClusterManager.jl) instead) | `addprocs_slurm(np::Integer; kwargs...)` or `addprocs(SlurmManager(np); kwargs...)` |
| Local manager with CPU affinity setting | `addprocs(LocalAffinityManager(;np=CPU_CORES, mode::AffinityMode=BALANCED, affinities=[]); kwargs...)` |

[^1]: HTCondor was previously named Condor.
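
The Slurm row above is the entry point this commit deprecates. A minimal sketch of the old call next to the suggested replacement, assuming SlurmClusterManager.jl's documented pattern of launching one worker per task inside an existing Slurm allocation (the worker count `2` is illustrative):

```julia
using Distributed

# Deprecated path provided by this package (ClusterManagers.jl):
using ClusterManagers
addprocs_slurm(2)            # equivalent to addprocs(SlurmManager(2))

# Suggested replacement, run from inside an sbatch/salloc allocation.
# SlurmClusterManager.jl also exports a `SlurmManager`, so qualify the name
# if both packages are loaded in the same session:
using SlurmClusterManager
addprocs(SlurmClusterManager.SlurmManager())   # one worker per allocated Slurm task
```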
@@ -93,7 +93,7 @@ julia> From worker 2: compute-6
From worker 3: compute-6
```

Some clusters require the user to specify a list of required resources.
For example, it may be necessary to specify how much memory will be needed by the job - see this [issue](https://github.com/JuliaLang/julia/issues/10390).
The keyword `qsub_flags` can be used to specify these and other options.
Additionally the keyword `wd` can be used to specify the working directory (which defaults to `ENV["HOME"]`).
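
As a hedged illustration of those keywords, a sketch of an SGE launch; the queue name, memory request, and temporary working directory are placeholders rather than values taken from this repository:

```julia
using Distributed, ClusterManagers

# The flags inside the backticks are passed through to qsub unchanged, and
# `wd` overrides the default working directory (ENV["HOME"]) for the workers.
addprocs_sge(4;
             qsub_flags=`-q my_queue -l h_vmem=4G`,
             wd=mktempdir())
```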
@@ -176,10 +176,10 @@ ElasticManager:
Active workers : []
Number of workers to be added : 0
Terminated workers : []
Worker connect command :
/home/user/bin/julia --project=/home/user/myproject/Project.toml -e 'using ClusterManagers; ClusterManagers.elastic_worker("4cOSyaYpgSl6BC0C","127.0.1.1",36275)'
```

By default, the printed command uses the absolute path to the current Julia executable and activates the same project as the current session. You can change either of these defaults by passing `printing_kwargs=(absolute_exename=false, same_project=false)` to the first form of the `ElasticManager` constructor.
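
For instance, a sketch of constructing the manager with both printing defaults turned off; the keyword-based constructor and the `addr`/`port` values here are assumptions chosen for illustration:

```julia
using ClusterManagers

# Bind to an automatically chosen address and a free port, and strip the
# absolute exename and project activation from the printed connect command.
em = ElasticManager(addr=:auto, port=0,
                    printing_kwargs=(absolute_exename=false, same_project=false))
```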

Once workers are connected, you can print the `em` object again to see them added to the list of active workers.
8 changes: 8 additions & 0 deletions src/slurm.jl
@@ -15,6 +15,14 @@ end

function launch(manager::SlurmManager, params::Dict, instances_arr::Array,
c::Condition)
let
msg = "`ClusterManagers.addprocs_slurm` and `ClusterManagers.SlurmManager` " *
"(from the `ClusterManagers.jl` package) are deprecated. " *
"Consider using the `SlurmClusterManager.jl` package instead."
funcsym = :addprocs_slurm
force = true
Base.depwarn(msg, funcsym; force)
end
try
exehome = params[:dir]
exename = params[:exename]
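The new code in `src/slurm.jl` above relies on `Base.depwarn` with `force = true`, which prints the warning even when Julia was not started with `--depwarn=yes`. A minimal standalone sketch of the same pattern; the function and message names are illustrative and belong to neither package:

```julia
new_entry_point(args...) = args   # stand-in for the replacement functionality

function old_entry_point(args...)
    # `force = true` emits the warning regardless of the --depwarn setting.
    Base.depwarn(
        "`old_entry_point` is deprecated; use `new_entry_point` instead.",
        :old_entry_point;
        force = true,
    )
    return new_entry_point(args...)
end

old_entry_point(1, 2)   # warns, then forwards to new_entry_point
```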
