8 changes: 8 additions & 0 deletions README.md
@@ -72,6 +72,14 @@ The [Kubernetes documentation style-guide](https://kubernetes.io/docs/contribute
encourages the use of `UpperCamelCase` (also known as `PascalCase`) when referring to Kubernetes resources.
Please make sure to follow this guide when writing documentation that involves Kubernetes.

#### Usage of console and bash blocks

- Use `console` for any command-line session that includes output from the command.
- Start each command line with `$`; the prompts are stripped on copy in the Sphinx-rendered docs.
- Use `\` to break long lines.
- Do not add in-line comments (e.g. `# comment`) after a line continuation, because they break copying.
- Use `bash` for any bash script (`.sh` file), or for command-line entries without output.
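
Applied to a concrete snippet, the two fence types look like this (illustrative commands only):

````markdown
```console
$ echo "hello"
hello
```

```bash
$ pip install \
    "dask-cloudprovider[aws]"
```
````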

### Notebooks

The `examples` section of these docs are written in Jupyter Notebooks and built with [MyST-NB](https://myst-nb.readthedocs.io/en/latest/).
2 changes: 1 addition & 1 deletion source/cloud/aws/ec2-multi.md
@@ -20,7 +20,7 @@ Install the AWS CLI tools following the [official instructions](https://docs.aws

Also install `dask-cloudprovider` and ensure you select the `aws` optional extras.

```console
```bash
$ pip install "dask-cloudprovider[aws]"
```

2 changes: 1 addition & 1 deletion source/cloud/aws/ec2.md
@@ -35,7 +35,7 @@ If you use the AWS Console, please use the default `ubuntu` user to ensure the N
Depending on where your ssh key is, when connecting via SSH you might need to do

```bash
ssh -i <path-to-your-ssh-key-dir>/your-key-file.pem ubuntu@<ip address>
$ ssh -i <path-to-your-ssh-key-dir>/your-key-file.pem ubuntu@<ip address>
```
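
If SSH later rejects the key with a `WARNING: UNPROTECTED PRIVATE KEY FILE!` message, restricting the key file to owner-only read access usually resolves it. A minimal sketch using a stand-in file (`key.pem` is a placeholder for your real key path):

```shell
# Create a stand-in key file for illustration; use your real .pem path instead.
touch key.pem
# SSH requires private keys to be readable only by their owner.
chmod 400 key.pem
# Confirm the mode is now 400 (owner read-only).
stat -c '%a' key.pem
```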

If you get prompted with a `WARNING: UNPROTECTED PRIVATE KEY FILE!`, and get a
10 changes: 6 additions & 4 deletions source/cloud/aws/eks.md
@@ -14,7 +14,7 @@ First you'll need to have the [`aws` CLI tool](https://aws.amazon.com/cli/) and

Ensure you are logged into the `aws` CLI.

```console
```bash
$ aws configure
```

@@ -28,16 +28,18 @@ In your aws console under `EC2` in the side panel under Network & Security > Key
key pair or import (see "Actions" dropdown) one you've created locally.

2. If you are not using your default AWS profile, add `--profile <your-profile>` to the following command.

3. The `--ssh-public-key` argument is the name assigned during creation of your key in AWS console.
```

```console
```bash
$ eksctl create cluster rapids \
--version 1.30 \
--nodes 3 \
--node-type=g4dn.xlarge \
--timeout=40m \
--ssh-access \
--ssh-public-key <public key ID> \ # Name assigned during creation of your key in aws console
--ssh-public-key <public key ID> \
--region us-east-1 \
--zones=us-east-1c,us-east-1b,us-east-1d \
--auto-kubeconfig
@@ -48,7 +50,7 @@ With this command, you’ve launched an EKS cluster called `rapids`. You’ve sp
To access the cluster we need to pull down the credentials.
Add `--profile <your-profile>` if you are not using the default profile.

```console
```bash
$ aws eks --region us-east-1 update-kubeconfig --name rapids
```

12 changes: 6 additions & 6 deletions source/cloud/azure/aks.md
@@ -14,7 +14,7 @@ First you'll need to have the [`az` CLI tool](https://learn.microsoft.com/en-us/

Ensure you are logged into the `az` CLI.

```console
```bash
$ az login
```

@@ -23,7 +23,7 @@ $ az login
Now we can launch a GPU enabled AKS cluster. First launch an AKS cluster.

```bash
az aks create -g <resource group> -n rapids \
$ az aks create -g <resource group> -n rapids \
--enable-managed-identity \
--node-count 1 \
--enable-addons monitoring \
@@ -77,13 +77,13 @@ Microsoft.ContainerService/GPUDedicatedVHDPreview Registered

When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the `az provider register` command:

```console
```bash
$ az provider register --namespace Microsoft.ContainerService
```

Then install the aks-preview CLI extension, use the following Azure CLI commands:

```console
```bash
$ az extension add --name aks-preview
```

@@ -92,7 +92,7 @@ $ az extension add --name aks-preview
`````

```bash
az aks nodepool add \
$ az aks nodepool add \
--resource-group <resource group> \
--cluster-name rapids \
--name gpunp \
@@ -108,7 +108,7 @@ Here we have added a new pool made up of `Standard_NC48ads_A100_v4` instances wh
Then we can install the NVIDIA drivers.

```bash
helm install --wait --generate-name --repo https://helm.ngc.nvidia.com/nvidia \
$ helm install --wait --generate-name --repo https://helm.ngc.nvidia.com/nvidia \
-n gpu-operator --create-namespace \
gpu-operator \
--set operator.runtimeClass=nvidia-container-runtime
6 changes: 3 additions & 3 deletions source/cloud/azure/azure-vm.md
@@ -54,7 +54,7 @@ Prepare the following environment variables.
| `AZ_SSH_KEY` | public ssh key | `~/.ssh/id_rsa.pub` |

```bash
az vm create \
$ az vm create \
--name ${AZ_VMNAME} \
--resource-group ${AZ_RESOURCEGROUP} \
--image ${AZ_IMAGE} \
@@ -110,7 +110,7 @@ Next we need to allow network traffic to the VM so we can access Jupyter and Das
| `AZ_NSGRULENAME` | Name for NSG rule | `Allow-Dask-Jupyter-ports` |

```bash
az network nsg rule create \
$ az network nsg rule create \
-g ${AZ_RESOURCEGROUP} \
--nsg-name ${AZ_NSGNAME} \
-n ${AZ_NSGRULENAME} \
@@ -129,7 +129,7 @@ Next, we can SSH into our VM to install RAPIDS. SSH instructions can be found by
When connecting via SSH by doing

```bash
ssh -i <path-to-your-ssh-key-dir>/your-key-file.pem azureuser@<vm-ip-address>
$ ssh -i <path-to-your-ssh-key-dir>/your-key-file.pem azureuser@<vm-ip-address>
```

you might get prompted with a `WARNING: UNPROTECTED PRIVATE KEY FILE!`, and get a
32 changes: 28 additions & 4 deletions source/cloud/gcp/dataproc.md
@@ -6,14 +6,28 @@ RAPIDS can be deployed on Google Cloud Dataproc using Dask. For more details, se

It is strongly recommended that you copy the initialization scripts into your own Storage bucket to prevent unintended upgrades from upstream in the cluster:

```console
```bash
$ REGION=<region>
```

```bash
$ GCS_BUCKET=<bucket_name>
```

```bash
$ gcloud storage buckets create gs://$GCS_BUCKET
```

```bash
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh gs://$GCS_BUCKET
```

```bash
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/dask/dask.sh gs://$GCS_BUCKET
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh gs://$GCS_BUCKET
```

**1. Create Dataproc cluster with Dask RAPIDS.** Use the gcloud command to create a new cluster. Because of an Anaconda version conflict, script deployment on older images is slow; we recommend using Dask with Dataproc 2.0+.
Expand All @@ -24,12 +38,23 @@ At the time of writing [Dataproc only supports RAPIDS version 23.12 and earlier
Please ensure that your setup complies with this compatibility requirement. Using newer RAPIDS versions may result in unexpected behavior or errors.
```

```console
```bash
$ CLUSTER_NAME=<CLUSTER_NAME>
```

```bash
$ DASK_RUNTIME=yarn
```

```bash
$ RAPIDS_VERSION=23.12
```

```bash
$ CUDA_VERSION=11.8
```

```bash
$ gcloud dataproc clusters create $CLUSTER_NAME\
--region $REGION\
--image-version 2.0-ubuntu18\
@@ -42,7 +67,6 @@ $ gcloud dataproc clusters create $CLUSTER_NAME\
--optional-components=JUPYTER\
--metadata gpu-driver-provider=NVIDIA,dask-runtime=$DASK_RUNTIME,rapids-runtime=DASK,rapids-version=$RAPIDS_VERSION,cuda-version=$CUDA_VERSION\
--enable-component-gateway

```

[GCS_BUCKET] = name of the bucket to use.\
12 changes: 6 additions & 6 deletions source/cloud/gcp/gke.md
@@ -14,16 +14,16 @@ First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/g

Ensure you are logged into the `gcloud` CLI.

```console
```bash
$ gcloud init
```

## Create the Kubernetes cluster

Now we can launch a GPU enabled GKE cluster.

```console
gcloud container clusters create rapids-gpu-kubeflow \
```bash
$ gcloud container clusters create rapids-gpu-kubeflow \
--accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \
--zone us-central1-c --release-channel stable
```
@@ -40,14 +40,14 @@ executable. Install gke-gcloud-auth-plugin for use with kubectl by following htt
you will need to install the `gke-gcloud-auth-plugin` to be able to get the credentials. To do so,

```bash
gcloud components install gke-gcloud-auth-plugin
$ gcloud components install gke-gcloud-auth-plugin
```
````

## Get the cluster credentials

```console
gcloud container clusters get-credentials rapids-gpu-kubeflow \
```bash
$ gcloud container clusters get-credentials rapids-gpu-kubeflow \
--region=us-central1-c
```

10 changes: 7 additions & 3 deletions source/cloud/gcp/vertex-ai.md
@@ -35,16 +35,20 @@ You can create a new RAPIDS conda environment and register it with `ipykernel` f

```bash
# Create a new environment
conda create -y -n rapids \
$ conda create -y -n rapids \
{{ rapids_conda_channels }} \
{{ rapids_conda_packages }} \
ipykernel
```

```bash
# Activate the environment
conda activate rapids
$ conda activate rapids
```

```bash
# Register the environment with Jupyter
python -m ipykernel install --prefix "${DL_ANACONDA_HOME}/envs/rapids" --name rapids --display-name rapids
$ python -m ipykernel install --prefix "${DL_ANACONDA_HOME}/envs/rapids" --name rapids --display-name rapids
```

Then refresh the Jupyter Lab page and open the launcher. You will see a new "rapids" kernel available.
19 changes: 11 additions & 8 deletions source/cloud/nvidia/brev.md
@@ -15,7 +15,7 @@ There are two options to get you up and running with RAPIDS in a few steps, than

### Option 1. Setting up your Brev GPU Instance

1. Navigate to the [Brev console](https://brev.nvidia.com/org) and click on "Create your first instance".

![Screenshot of the "Create your first instance" UI](/_static/images/platforms/brev/brev1.png)

@@ -88,15 +88,15 @@ To create and use a Jupyter Notebook, click "Open Notebook" at the top right aft
If you want to access your launched Brev instance(s) via Visual Studio Code or SSH using terminal, you need to install the [Brev CLI according to these instructions](https://docs.nvidia.com/brev/latest/brev-cli.html) or this code below:

```bash
sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/main/bin/install-latest.sh)" && brev login
$ sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/main/bin/install-latest.sh)" && brev login
```

#### 2.1 Brev CLI using Visual Studio Code

To connect to your Brev instance from VS Code open a new VS Code window and run:

```bash
brev open <instance-id>
$ brev open <instance-id>
```

It will automatically open a new VS Code window for you to use with RAPIDS.
@@ -106,23 +106,23 @@ It will automatically open a new VS Code window for you to use with RAPIDS.
To access your Brev instance from the terminal run:

```bash
brev shell <instance-id>
$ brev shell <instance-id>
```

##### Forwarding a Port Locally

Assuming your Jupyter Notebook is running on port `8888` in your Brev environment, you can forward this port to your local machine using the following SSH command:

```bash
ssh -L 8888:localhost:8888 <username>@<ip> -p 22
$ ssh -L 8888:localhost:8888 <username>@<ip> -p 22
```

This command forwards port `8888` on your local machine to port `8888` on the remote Brev environment.

Or for port `2222` (default port).

```bash
ssh <username>@<ip> -p 2222
$ ssh <username>@<ip> -p 2222
```

Replace `username` with your username and `ip` with the ip listed if it's different.
@@ -166,6 +166,9 @@ print(gdf)
- Please note: Git is not preinstalled in the RAPIDS container, but it can be installed in the running container using

```bash
apt update
apt install git -y
$ apt update
```

```bash
$ apt install git -y
```
2 changes: 2 additions & 0 deletions source/conf.py
@@ -99,6 +99,8 @@

copybutton_prompt_text = r">>> |\.\.\. |\$ |In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "
copybutton_prompt_is_regexp = True
copybutton_line_continuation_character = "\\"
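
These settings tell sphinx-copybutton which prompts to strip from copied text and which character marks a continued line. A rough sketch of the prompt-stripping behavior, using the same regex as above (an illustration only, not copybutton's actual implementation):

```python
import re

# Prompt pattern from conf.py; with copybutton_prompt_is_regexp = True,
# sphinx-copybutton drops a matching prefix from each copied line.
PROMPT = r">>> |\.\.\. |\$ |In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "

def strip_prompts(block: str) -> str:
    """Remove a leading prompt (if any) from every line of a copied block."""
    return "\n".join(re.sub(rf"^(?:{PROMPT})", "", line) for line in block.splitlines())

print(strip_prompts('$ pip install "dask-cloudprovider[aws]"'))
# -> pip install "dask-cloudprovider[aws]"
```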


suppress_warnings = ["myst.header", "myst.nested_header"]

2 changes: 1 addition & 1 deletion source/guides/caching-docker-images.md
@@ -45,7 +45,7 @@ spec:

You can create this Daemonset with `kubectl`.

```console
```bash
$ kubectl apply -f caching-daemonset.yaml
```

6 changes: 3 additions & 3 deletions source/guides/colocate-workers.md
@@ -8,15 +8,15 @@ First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/g

Ensure you are logged into the `gcloud` CLI.

```console
```bash
$ gcloud init
```

## Create the Kubernetes cluster

Now we can launch a GPU enabled GKE cluster.

```console
```bash
$ gcloud container clusters create rapids-gpu \
--accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \
--zone us-central1-c --release-channel stable
@@ -163,7 +163,7 @@ spec:

You can create this cluster with `kubectl`.

```console
```bash
$ kubectl apply -f rapids-dask-cluster.yaml
```
