Provides miscellaneous utility functions
Set the timezone
This function sets the timezone on the cluster node. The timezone to set is a mandatory parameter and must be present in /usr/share/zoneinfo, e.g. "US/Mountain" or "America/Los_Angeles".
After setting the timezone, it is advisable to restart the engine daemons on the master and worker nodes.
set_timezone "America/Los_Angeles"
- $1 (string): Timezone to set
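The behavior described above can be sketched as a shell function. This is an illustrative sketch, not the original implementation; the ZONEINFO_DIR and LOCALTIME_FILE overrides are hypothetical hooks added here for testability.

```shell
# Hypothetical sketch of set_timezone, assuming the standard zoneinfo layout.
# ZONEINFO_DIR and LOCALTIME_FILE overrides are illustrative hooks only.
set_timezone() {
    local tz="$1"
    local zdir="${ZONEINFO_DIR:-/usr/share/zoneinfo}"
    local target="${LOCALTIME_FILE:-/etc/localtime}"

    if [ -z "$tz" ]; then
        echo "set_timezone: a timezone argument is mandatory" >&2
        return 1
    fi
    if [ ! -e "${zdir}/${tz}" ]; then
        echo "set_timezone: ${tz} not found under ${zdir}" >&2
        return 1
    fi
    # Point the system clock configuration at the requested zone
    ln -sf "${zdir}/${tz}" "$target"
}
```

As noted above, restart the engine daemons after the timezone change takes effect.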
Add a public key to authorized_keys
add_to_authorized_keys "ssh-rsa xyzxyzxyzxyz...xyzxyz [email protected]" ec2-user
- $1 (string): Public key to add to authorized_keys file
- $2 (string): User for which the public key is added. Defaults to ec2-user
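A minimal sketch of such a function, assuming the target user's home directory can be resolved via getent; the AUTH_KEYS_FILE override is a hypothetical hook for illustration, not part of the original interface.

```shell
# Hypothetical sketch of add_to_authorized_keys; appends idempotently.
add_to_authorized_keys() {
    local pubkey="$1"
    local user="${2:-ec2-user}"
    # AUTH_KEYS_FILE override is illustrative; normally derived from the user
    local auth_file="${AUTH_KEYS_FILE:-$(getent passwd "$user" | cut -d: -f6)/.ssh/authorized_keys}"

    mkdir -p "$(dirname "$auth_file")"
    touch "$auth_file"
    # Avoid duplicate entries if the key is already present
    grep -qF "$pubkey" "$auth_file" || echo "$pubkey" >> "$auth_file"
    chmod 700 "$(dirname "$auth_file")"
    chmod 600 "$auth_file"
}
```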
Provides a function to configure the AWS CLI
Configure AWS CLI
A credentials file containing the AWS Access Key and the AWS Secret Key, separated by a space, comma, tab, or newline, must be provided.
configure_awscli -p exampleprofile -r us-east-1 -c /path/to/credentials/file
- -p string Name of the profile. Defaults to default
- -r string AWS region. Defaults to us-east-1
- -c string Path to credentials file
- 0: AWS CLI is configured
- 1: AWS CLI or credentials file not found
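The flags and exit codes above could be implemented along these lines. This is a sketch, assuming the standard `aws configure set` subcommand is available; it is not the original implementation.

```shell
# Hypothetical sketch of configure_awscli matching the flags documented above.
configure_awscli() {
    local profile="default" region="us-east-1" creds=""
    local OPTIND opt
    while getopts "p:r:c:" opt; do
        case "$opt" in
            p) profile="$OPTARG" ;;
            r) region="$OPTARG" ;;
            c) creds="$OPTARG" ;;
            *) return 1 ;;
        esac
    done

    command -v aws >/dev/null 2>&1 || { echo "aws CLI not found" >&2; return 1; }
    [ -f "$creds" ] || { echo "credentials file not found: $creds" >&2; return 1; }

    # The two keys may be separated by space, comma, tab, or newline;
    # normalize the separators and take the first two fields.
    local access_key secret_key
    access_key=$(tr ',\t\n' '   ' < "$creds" | awk '{print $1}')
    secret_key=$(tr ',\t\n' '   ' < "$creds" | awk '{print $2}')

    aws configure set aws_access_key_id "$access_key" --profile "$profile"
    aws configure set aws_secret_access_key "$secret_key" --profile "$profile"
    aws configure set region "$region" --profile "$profile"
}
```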
Provides a function to install a Python virtualenv
Install and activate a Python virtualenv
This function activates the new virtualenv, so any libraries you need can be installed with "pip install" after calling it.
Alternatively, you can use a requirements file. For example, to use a requirements file stored in S3 or Azure Blob Store, run:
/usr/lib/hadoop2/bin/hadoop dfs -get {s3|wasb}://path/to/requirements/file /tmp/requirements.txt
pip install -r /tmp/requirements.txt
install_python_env 3.6 /path/to/virtualenv/py36
- $1 (float): Version of Python to use. Defaults to 3.6
- $2 (string): Location to create virtualenv in. Defaults to /usr/lib/virtualenv/py36
- 0: Python virtualenv was created and activated
- 1: Python executable for virtualenv couldn't be found or installed
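A sketch of the arguments and exit codes above, under the assumption that a matching pythonX.Y interpreter with the stdlib venv module is available; the original may use the separate virtualenv tool instead.

```shell
# Hypothetical sketch of install_python_env using the stdlib venv module.
install_python_env() {
    local version="${1:-3.6}"
    local venv_dir="${2:-/usr/lib/virtualenv/py36}"
    local py="python${version}"

    if ! command -v "$py" >/dev/null 2>&1; then
        echo "install_python_env: ${py} not found" >&2
        return 1
    fi
    "$py" -m venv "$venv_dir" || return 1
    # Activate in the current shell so subsequent "pip install" calls
    # target the new environment.
    # shellcheck disable=SC1090,SC1091
    . "${venv_dir}/bin/activate"
}
```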
Provides a function to mount an NFS volume
Mounts an NFS volume on master and worker nodes
Instructions for AWS EFS mount:
- After creating the EFS file system, create a security group
- Create an inbound traffic rule for this security group that allows traffic on port 2049 (NFS) from this security group as described here: https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-create-security-groups.html
- Add this security group as a persistent security group for the cluster from which you want to mount the EFS store, as described here: http://docs.qubole.com/en/latest/admin-guide/how-to-topics/persistent-security-group.html
TODO: add instructions for Azure file share
mount_nfs_volume "example.nfs.share:/" /mnt/efs
- $1 (string): Path to NFS share
- $2 (string): Mount point to use
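The mount itself might look like the following sketch. The NFSv4.1 options mirror AWS's published recommendations for EFS and are assumptions here, not taken from the original; running the real mount requires root.

```shell
# Hypothetical sketch of mount_nfs_volume.
mount_nfs_volume() {
    local share="$1"
    local mount_point="$2"

    mkdir -p "$mount_point"
    # NFSv4.1 with these options follows AWS's recommendation for EFS mounts
    mount -t nfs4 \
        -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
        "$share" "$mount_point"
}
```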