From bed025b76e283b6fb225d8e81f4fa424b0fa5c74 Mon Sep 17 00:00:00 2001 From: ncclementi Date: Mon, 25 Aug 2025 19:03:23 -0400 Subject: [PATCH 1/4] add config to copy properly multiline blocks --- source/conf.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/conf.py b/source/conf.py index fef5089f..d7ce882e 100644 --- a/source/conf.py +++ b/source/conf.py @@ -99,6 +99,8 @@ copybutton_prompt_text = r">>> |\.\.\. |\$ |In \[\d*\]: | {2,5}\.\.\.: | {5,8}: " copybutton_prompt_is_regexp = True +copybutton_line_continuation_character = "\\" + suppress_warnings = ["myst.header", "myst.nested_header"] From 390d9c53e67c888f0e4f20f67d9b2f3b57baa615 Mon Sep 17 00:00:00 2001 From: ncclementi Date: Mon, 25 Aug 2025 19:07:24 -0400 Subject: [PATCH 2/4] make usage of bash and console consistent --- source/cloud/aws/ec2.md | 4 +-- source/cloud/aws/eks.md | 4 ++- source/cloud/azure/aks.md | 12 +++---- source/cloud/azure/azure-vm.md | 12 +++---- source/cloud/gcp/dataproc.md | 28 +++++++++++++-- source/cloud/gcp/gke.md | 8 ++--- source/cloud/gcp/vertex-ai.md | 12 ++++--- source/cloud/nvidia/brev.md | 29 +++++++++------- source/guides/mig.md | 6 ++-- source/guides/scheduler-gpu-optimization.md | 4 +++ source/platforms/coiled.md | 38 ++++++++++----------- source/platforms/kserve.md | 8 +++++ source/platforms/snowflake.md | 36 +++++++++---------- source/tools/kubernetes/dask-helm-chart.md | 8 +++-- 14 files changed, 129 insertions(+), 80 deletions(-) diff --git a/source/cloud/aws/ec2.md b/source/cloud/aws/ec2.md index 67f1ad9e..8cd0ff40 100644 --- a/source/cloud/aws/ec2.md +++ b/source/cloud/aws/ec2.md @@ -34,8 +34,8 @@ If you use the AWS Console, please use the default `ubuntu` user to ensure the N ````{tip} Depending on where your ssh key is, when connecting via SSH you might need to do -```bash -ssh -i /your-key-file.pem ubuntu@ +```console +$ ssh -i /your-key-file.pem ubuntu@ ``` If you get prompted with a `WARNING: UNPROTECTED PRIVATE KEY FILE!`, and get a diff --git a/source/cloud/aws/eks.md b/source/cloud/aws/eks.md index 8eef60bc..99ff84ed 100644 --- a/source/cloud/aws/eks.md +++ b/source/cloud/aws/eks.md @@ -28,6 +28,8 @@ In your aws console under `EC2` in the side panel under Network & Security > Key key pair or import (see "Actions" dropdown) one you've created locally. 2. If you are not using your default AWS profile, add `--profile ` to the following command. + +3. The `--ssh-public-key` argument is the name assigned during creation of your key in AWS console. ``` ```console @@ -37,7 +39,7 @@ $ eksctl create cluster rapids \ --node-type=g4dn.xlarge \ --timeout=40m \ --ssh-access \ - --ssh-public-key \ # Name assigned during creation of your key in aws console + --ssh-public-key \ --region us-east-1 \ --zones=us-east-1c,us-east-1b,us-east-1d \ --auto-kubeconfig diff --git a/source/cloud/azure/aks.md b/source/cloud/azure/aks.md index e81e3451..1c77de86 100644 --- a/source/cloud/azure/aks.md +++ b/source/cloud/azure/aks.md @@ -22,8 +22,8 @@ $ az login Now we can launch a GPU enabled AKS cluster. First launch an AKS cluster. 
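To illustrate what the `copybutton_line_continuation_character = "\\"` setting added to `source/conf.py` above enables (the command below is hypothetical and shown only as a sketch of the copy behaviour): when a reader uses the copy button on a block like this, the `$` prompt should be stripped and, because the first line ends in `\`, the continuation line copied together with it as a single command.

```bash
$ echo "a multi-line command" \
    "that should be copied as one command"
```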
-```bash -az aks create -g -n rapids \ +```console +$ az aks create -g -n rapids \ --enable-managed-identity \ --node-count 1 \ --enable-addons monitoring \ @@ -91,8 +91,8 @@ $ az extension add --name aks-preview ````` -```bash -az aks nodepool add \ +```console +$ az aks nodepool add \ --resource-group \ --cluster-name rapids \ --name gpunp \ @@ -107,8 +107,8 @@ Here we have added a new pool made up of `Standard_NC48ads_A100_v4` instances wh Then we can install the NVIDIA drivers. -```bash -helm install --wait --generate-name --repo https://helm.ngc.nvidia.com/nvidia \ +```console +$ helm install --wait --generate-name --repo https://helm.ngc.nvidia.com/nvidia \ -n gpu-operator --create-namespace \ gpu-operator \ --set operator.runtimeClass=nvidia-container-runtime diff --git a/source/cloud/azure/azure-vm.md b/source/cloud/azure/azure-vm.md index 4fdfc4d0..f316583c 100644 --- a/source/cloud/azure/azure-vm.md +++ b/source/cloud/azure/azure-vm.md @@ -53,8 +53,8 @@ Prepare the following environment variables. | `AZ_USERNAME` | User name of VM | `rapidsai` | | `AZ_SSH_KEY` | public ssh key | `~/.ssh/id_rsa.pub` | -```bash -az vm create \ +```console +$ az vm create \ --name ${AZ_VMNAME} \ --resource-group ${AZ_RESOURCEGROUP} \ --image ${AZ_IMAGE} \ @@ -109,8 +109,8 @@ Next we need to allow network traffic to the VM so we can access Jupyter and Das | `AZ_NSGNAME` | NSG name for the VM | `${AZ_VMNAME}NSG` | | `AZ_NSGRULENAME` | Name for NSG rule | `Allow-Dask-Jupyter-ports` | -```bash -az network nsg rule create \ +```console +$ az network nsg rule create \ -g ${AZ_RESOURCEGROUP} \ --nsg-name ${AZ_NSGNAME} \ -n ${AZ_NSGRULENAME} \ @@ -128,8 +128,8 @@ Next, we can SSH into our VM to install RAPIDS. SSH instructions can be found by ````{tip} When connecting via SSH by doing -```bash -ssh -i /your-key-file.pem azureuser@ +```console +$ ssh -i /your-key-file.pem azureuser@ ``` you might get prompted with a `WARNING: UNPROTECTED PRIVATE KEY FILE!`, and get a diff --git a/source/cloud/gcp/dataproc.md b/source/cloud/gcp/dataproc.md index eb78cd48..12ba3063 100644 --- a/source/cloud/gcp/dataproc.md +++ b/source/cloud/gcp/dataproc.md @@ -8,12 +8,26 @@ It is strongly recommended that you copy the initialization scripts into your ow ```console $ REGION= +``` + +```console $ GCS_BUCKET= +``` + +```console $ gcloud storage buckets create gs://$GCS_BUCKET +``` + +```console $ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh gs://$GCS_BUCKET +``` + +```console $ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/dask/dask.sh gs://$GCS_BUCKET -$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh gs://$GCS_BUCKET +``` +```console +$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh gs://$GCS_BUCKET ``` **1. Create Dataproc cluster with Dask RAPIDS.** Use the gcloud command to create a new cluster. Because of an Anaconda version conflict, script deployment on older images is slow, we recommend using Dask with Dataproc 2.0+. @@ -26,10 +40,21 @@ Please ensure that your setup complies with this compatibility requirement. 
Usin ```console $ CLUSTER_NAME= +``` + +```console $ DASK_RUNTIME=yarn +``` + +```console $ RAPIDS_VERSION=23.12 +``` + +```console $ CUDA_VERSION=11.8 +``` +```console $ gcloud dataproc clusters create $CLUSTER_NAME\ --region $REGION\ --image-version 2.0-ubuntu18\ @@ -42,7 +67,6 @@ $ gcloud dataproc clusters create $CLUSTER_NAME\ --optional-components=JUPYTER\ --metadata gpu-driver-provider=NVIDIA,dask-runtime=$DASK_RUNTIME,rapids-runtime=DASK,rapids-version=$RAPIDS_VERSION,cuda-version=$CUDA_VERSION\ --enable-component-gateway - ``` [GCS_BUCKET] = name of the bucket to use.\ diff --git a/source/cloud/gcp/gke.md b/source/cloud/gcp/gke.md index 7217d9e3..2947b0b9 100644 --- a/source/cloud/gcp/gke.md +++ b/source/cloud/gcp/gke.md @@ -23,7 +23,7 @@ $ gcloud init Now we can launch a GPU enabled GKE cluster. ```console -gcloud container clusters create rapids-gpu-kubeflow \ +$ gcloud container clusters create rapids-gpu-kubeflow \ --accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \ --zone us-central1-c --release-channel stable ``` @@ -39,15 +39,15 @@ executable. Install gke-gcloud-auth-plugin for use with kubectl by following htt ``` you will need to install the `gke-gcloud-auth-plugin` to be able to get the credentials. To do so, -```bash -gcloud components install gke-gcloud-auth-plugin +```console +$ gcloud components install gke-gcloud-auth-plugin ``` ```` ## Get the cluster credentials ```console -gcloud container clusters get-credentials rapids-gpu-kubeflow \ +$ gcloud container clusters get-credentials rapids-gpu-kubeflow \ --region=us-central1-c ``` diff --git a/source/cloud/gcp/vertex-ai.md b/source/cloud/gcp/vertex-ai.md index 74ce2ac1..34bf0ff7 100644 --- a/source/cloud/gcp/vertex-ai.md +++ b/source/cloud/gcp/vertex-ai.md @@ -33,18 +33,22 @@ You can find out your current system CUDA Toolkit version by running `ls -ld /us You can create a new RAPIDS conda environment and register it with `ipykernel` for use in Jupyter Lab. Open a new terminal in Jupyter and run the following commands. -```bash +```console # Create a new environment -conda create -y -n rapids \ +$ conda create -y -n rapids \ {{ rapids_conda_channels }} \ {{ rapids_conda_packages }} \ ipykernel +``` +```console # Activate the environment -conda activate rapids +$ conda activate rapids +``` +```console # Register the environment with Jupyter -python -m ipykernel install --prefix "${DL_ANACONDA_HOME}/envs/rapids" --name rapids --display-name rapids +$ python -m ipykernel install --prefix "${DL_ANACONDA_HOME}/envs/rapids" --name rapids --display-name rapids ``` Then refresh the Jupyter Lab page and open the launcher. You will see a new "rapids" kernel available. 
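If the new kernel does not appear after refreshing Jupyter Lab, a quick sanity check (a minimal sketch, assuming the `jupyter` command is available in the terminal used above) is to list the registered kernelspecs and confirm that `rapids` shows up:

```bash
$ jupyter kernelspec list
```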
diff --git a/source/cloud/nvidia/brev.md b/source/cloud/nvidia/brev.md index fb0a4d8f..8def077e 100644 --- a/source/cloud/nvidia/brev.md +++ b/source/cloud/nvidia/brev.md @@ -87,16 +87,16 @@ To create and use a Jupyter Notebook, click "Open Notebook" at the top right aft If you want to access your launched Brev instance(s) via Visual Studio Code or SSH using terminal, you need to install the [Brev CLI according to these instructions](https://docs.nvidia.com/brev/latest/brev-cli.html) or this code below: -```bash -sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/main/bin/install-latest.sh)" && brev login +```console +$ sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/main/bin/install-latest.sh)" && brev login ``` #### 2.1 Brev CLI using Visual Studio Code To connect to your Brev instance from VS Code open a new VS Code window and run: -```bash -brev open +```console +$ brev open ``` It will automatically open a new VS Code window for you to use with RAPIDS. @@ -105,24 +105,24 @@ It will automatically open a new VS Code window for you to use with RAPIDS. To access your Brev instance from the terminal run: -```bash -brev shell +```console +$ brev shell ``` ##### Forwarding a Port Locally Assuming your Jupyter Notebook is running on port `8888` in your Brev environment, you can forward this port to your local machine using the following SSH command: -```bash -ssh -L 8888:localhost:8888 @ -p 22 +```console +$ ssh -L 8888:localhost:8888 @ -p 22 ``` This command forwards port `8888` on your local machine to port `8888` on the remote Brev environment. Or for port `2222` (default port). -```bash -ssh @ -p 2222 +```console +$ ssh @ -p 2222 ``` Replace `username` with your username and `ip` with the ip listed if it's different. @@ -165,7 +165,10 @@ print(gdf) - [Brev Docs](https://brev.dev/) - Please note: Git is not preinstalled in the RAPIDS container, but can be installed into the container when it is running using -```bash -apt update -apt install git -y +```console +$ apt update +``` + +```console +$ apt install git -y ``` diff --git a/source/guides/mig.md b/source/guides/mig.md index a4b4b450..7a5c0d30 100644 --- a/source/guides/mig.md +++ b/source/guides/mig.md @@ -20,7 +20,7 @@ Physical GPUs can be addressed by their indices `[0..N)` (where `N` is the total The simplest way to determine the names of MIG instances is to run `nvidia-smi -L` on the command line. -```bash +```console $ nvidia-smi -L GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-84fd49f2-48ad-50e8-9f2e-3bf0dfd47ccb) MIG 2g.10gb Device 0: (UUID: MIG-41b3359c-e721-56e5-8009-12e5797ed514) @@ -65,8 +65,8 @@ Suppose you have 3 MIG instances on the local system: To start a `dask-cuda-worker` that the address to the scheduler is located in the `scheduler.json` file, the user would run the following: -```bash -CUDA_VISIBLE_DEVICES="MIG-41b3359c-e721-56e5-8009-12e5797ed514,MIG-65b79fff-6d3c-5490-a288-b31ec705f310,MIG-c6e2bae8-46d4-5a7e-9a68-c6cf1f680ba0" dask-cuda-worker scheduler.json # --other-arguments +```console +$ CUDA_VISIBLE_DEVICES="MIG-41b3359c-e721-56e5-8009-12e5797ed514,MIG-65b79fff-6d3c-5490-a288-b31ec705f310,MIG-c6e2bae8-46d4-5a7e-9a68-c6cf1f680ba0" dask-cuda-worker scheduler.json # --other-arguments ``` Please note that in the example above we created 3 Dask-CUDA workers on one node, for a multi-node cluster, the correct MIG names need to be specified, and they will always be different for each host. 
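Because the MIG UUIDs differ on every host, it can be convenient to build the `CUDA_VISIBLE_DEVICES` list from the `nvidia-smi -L` output instead of typing it by hand. A rough sketch (the `grep` pattern is an assumption and may need adjusting for your driver's output format):

```bash
# Collect the MIG UUIDs reported by nvidia-smi into a comma-separated list
$ MIG_DEVICES=$(nvidia-smi -L | grep -o 'MIG-[0-9a-f-]*' | paste -sd, -)

# Start a worker on all MIG instances found on this host
$ CUDA_VISIBLE_DEVICES="$MIG_DEVICES" dask-cuda-worker scheduler.json
```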
diff --git a/source/guides/scheduler-gpu-optimization.md b/source/guides/scheduler-gpu-optimization.md index 10c9da1e..0db2b0d3 100644 --- a/source/guides/scheduler-gpu-optimization.md +++ b/source/guides/scheduler-gpu-optimization.md @@ -78,12 +78,16 @@ The operator has a Helm chart which can be used to manage the installation of th ```console $ helm repo add dask https://helm.dask.org "dask" has been added to your repositories +``` +```console $ helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "dask" chart repository Update Complete. ⎈Happy Helming!⎈ +``` +```console $ helm install --create-namespace -n dask-operator --generate-name dask/dask-kubernetes-operator NAME: dask-kubernetes-operator-1666875935 NAMESPACE: dask-operator diff --git a/source/platforms/coiled.md b/source/platforms/coiled.md index 8d456092..dae7000a 100644 --- a/source/platforms/coiled.md +++ b/source/platforms/coiled.md @@ -13,14 +13,14 @@ Head over to [Coiled](https://docs.coiled.io/user_guide/setup/index) and registe Once your account is set up, install the coiled Python library/CLI tool. -```bash -pip install coiled +```console +$ pip install coiled ``` Then you can authenticate with your Coiled account. -```bash -coiled login +```console +$ coiled login ``` For more information see the [Coiled Getting Started documentation](https://docs.coiled.io/user_guide/setup/index). @@ -29,8 +29,8 @@ For more information see the [Coiled Getting Started documentation](https://docs The simplest way to get up and running with RAPIDS on Coiled is to launch a Jupyter notebook server using the RAPIDS notebook container. -```bash -coiled notebook start --gpu --container {{ rapids_notebooks_container }} +```console +$ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` ![Screenshot of Jupyterlab running on Coiled executing some cudf GPU code](../_static/images/platforms/coiled/coiled-jupyter.png) @@ -43,8 +43,8 @@ By default when running remote operations Coiled will [attempt to create a copy All Coiled commands can be passed a container image to use. This container will be pulled onto the remote VM at launch time. -```bash -coiled notebook start --gpu --container {{ rapids_notebooks_container }} +```console +$ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` This is often the most convenient way to try out existing software environments, but is often not the most performant due to the way container images are unpacked. @@ -77,27 +77,27 @@ dependencies: - dask-labextension ``` -```bash -coiled env create --name rapids --gpu-enabled --conda rapids-environment.yaml +```console +$ coiled env create --name rapids --gpu-enabled --conda rapids-environment.yaml ``` Then you can specify this software environment when starting new Coiled resources. -```bash -coiled notebook start --gpu --software rapidsai-notebooks +```console +$ coiled notebook start --gpu --software rapidsai-notebooks ``` ## CLI Jobs You can execute a script in a container on an ephemeral VM with [Coiled CLI Jobs](https://docs.coiled.io/user_guide/cli-jobs.html). -```bash -coiled run python my_code.py # Boots a VM on the cloud, runs the scripts, then shuts down again +```console +$ coiled run python my_code.py # Boots a VM on the cloud, runs the scripts, then shuts down again ``` We can use this to run GPU code on a remote environment using the RAPIDS container. 
You can set the coiled CLI to keep the VM around for a few minutes after execution is complete just in case you want to run it again and reuse the same hardware. -```concole +```console $ coiled run --gpu --name rapids-demo --keepalive 5m --container {{ rapids_container }} -- python my_code.py ... ``` @@ -123,14 +123,14 @@ Calculate violations by day of week took: 1.238 seconds To start an interactive Jupyter notebook session with [Coiled Notebooks](https://docs.coiled.io/user_guide/notebooks.html) run the RAPIDS notebook container via the notebook service. -```bash -coiled notebook start --gpu --container {{ rapids_notebooks_container }} +```console +$ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` Note that the `--gpu` flag will automatically select a `g4dn.xlarge` instance with a T4 GPU on AWS. You could additionally add the `--vm-type` flag to explicitly choose another machine type with different GPU configuration. For example to choose a machine with 4 L4 GPUs you would run the following. -```bash -coiled notebook start --gpu --vm-type g6.24xlarge --container nvcr.io/nvidia/rapidsai/notebooks:24.12-cuda12.5-py3.12 +```console +$ coiled notebook start --gpu --vm-type g6.24xlarge --container nvcr.io/nvidia/rapidsai/notebooks:24.12-cuda12.5-py3.12 ``` ## Dask Clusters diff --git a/source/platforms/kserve.md b/source/platforms/kserve.md index 89994d60..f501c349 100644 --- a/source/platforms/kserve.md +++ b/source/platforms/kserve.md @@ -123,11 +123,19 @@ We will show you concrete examples below. But first some general notes: ```console $ INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +``` + +```console $ INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \ -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') +``` + +```console $ SERVICE_HOSTNAME=$(kubectl get inferenceservice -n kserve-test \ -o jsonpath='{.status.url}' | cut -d "/" -f 3) +``` +```console $ curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \ "http://${INGRESS_HOST}:${INGRESS_PORT}/v2/models//infer" \ -d @./payload.json diff --git a/source/platforms/snowflake.md b/source/platforms/snowflake.md index affe48a7..0ab19606 100644 --- a/source/platforms/snowflake.md +++ b/source/platforms/snowflake.md @@ -219,8 +219,8 @@ RUN pip install "snowflake-snowpark-python[pandas]" snowflake-connector-python Build the image in the directory where your Dockerfile is located. Notice that no GPU is needed to build this image. -```bash -docker build --platform=linux/amd64 -t /rapids-nb-snowflake:latest . +```console +$ docker build --platform=linux/amd64 -t /rapids-nb-snowflake:latest . 
``` #### Install SnowCLI @@ -240,8 +240,8 @@ SELECT CURRENT_ORGANIZATION_NAME(); --org SELECT CURRENT_ACCOUNT_NAME(); --account name ``` -```bash -snow connection add +```console +$ snow connection add ``` ```bash @@ -263,8 +263,8 @@ token file path: Test the connection: -```bash -snow connection test --connection "CONTAINER_HOL" +```console +$ snow connection test --connection "CONTAINER_HOL" ``` To be able to push the docker image we need to get the snowflake registry hostname @@ -291,31 +291,31 @@ ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE; and if you are using the Snowflake Connector for Python you need: -```bash -pip install "snowflake-connector-python[secure-local-storage]" +```console +$ pip install "snowflake-connector-python[secure-local-storage]" ``` ```` -```bash -snow spcs image-registry login --connection CONTAINER_HOL +```console +$ snow spcs image-registry login --connection CONTAINER_HOL ``` We tag and push the image, make sure you replace the repository url for `org-account.registry.snowflakecomputing.com/container_hol_db/public/image_repo`: -```bash -docker tag /rapids-nb-snowflake:latest /rapids-nb-snowflake:dev +```console +$ docker tag /rapids-nb-snowflake:latest /rapids-nb-snowflake:dev ``` Verify that the new tagged image exists by running: -```bash -docker image list +```console +$ docker image list ``` Push the image to snowflake: -```bash -docker push /rapids-nb-snowflake:dev +```console +$ docker push /rapids-nb-snowflake:dev ``` ```{note} @@ -375,8 +375,8 @@ Anything that is added to this directory will persist. We use `snow-cli` to push this `yaml` file: -```bash -snow stage copy rapids-snowpark.yaml @specs --overwrite --connection CONTAINER_HOL +```console +$ snow stage copy rapids-snowpark.yaml @specs --overwrite --connection CONTAINER_HOL ``` Verify that your `yaml` was pushed properly by running the following SQL in the diff --git a/source/tools/kubernetes/dask-helm-chart.md b/source/tools/kubernetes/dask-helm-chart.md index 276d4c4e..b4a0602e 100644 --- a/source/tools/kubernetes/dask-helm-chart.md +++ b/source/tools/kubernetes/dask-helm-chart.md @@ -78,7 +78,9 @@ First, setup port forwarding from the cluster to external port: ```console # For the Jupyter server $ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-jupyter 8888:8888 +``` +```console # For the Dask dashboard $ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-scheduler 8787:8787 ``` @@ -114,11 +116,13 @@ Worker metrics can be examined in dask dashboard. In case you want to scale up the cluster with more GPU workers, you may do so via `kubectl` or via `helm upgrade`. -```bash +```console $ kubectl scale deployment rapids-release-dask-worker --replicas=8 +``` -# or +or +```console $ helm upgrade --set worker.replicas=8 rapids-release dask/dask ``` From 536626135514ebcf6a55e0d3e49bcb9b83d4fb18 Mon Sep 17 00:00:00 2001 From: ncclementi Date: Wed, 3 Sep 2025 17:33:20 -0400 Subject: [PATCH 3/4] fix merge conflict --- README.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/README.md b/README.md index a305ece8..0d631cf3 100644 --- a/README.md +++ b/README.md @@ -72,6 +72,14 @@ The [Kubernetes documentation style-guide](https://kubernetes.io/docs/contribute encourages the use of `UpperCamelCase` (also known as `PascalCase`) when referring to Kubernetes resources. Please make sure to follow this guide when writing documentation that involves Kubernetes. 
+#### Usage of console and bash blocks + +- Use `console` for any set of commands that are entered in a command line. + - Start the line with `$`, they'll be omitted upon copying in sphinx rendered docs. + - Use `\` to break lines. + - Do not add comments like `# comment` in-line when using break lines because this will break the copying. +- Use `bash` for any bash script `.sh`. + ### Notebooks The `examples` section of these docs are written in Jupyter Notebooks and built with [MyST-NB](https://myst-nb.readthedocs.io/en/latest/). From dd57307b2f13c08ff6ab8b92968435e78cedb6b4 Mon Sep 17 00:00:00 2001 From: ncclementi Date: Fri, 5 Sep 2025 11:57:51 -0400 Subject: [PATCH 4/4] use console when output otherwise use bash --- README.md | 4 ++-- source/cloud/aws/ec2-multi.md | 2 +- source/cloud/aws/ec2.md | 2 +- source/cloud/aws/eks.md | 6 +++--- source/cloud/azure/aks.md | 12 +++++------ source/cloud/azure/azure-vm.md | 6 +++--- source/cloud/gcp/dataproc.md | 22 ++++++++++----------- source/cloud/gcp/gke.md | 8 ++++---- source/cloud/gcp/vertex-ai.md | 6 +++--- source/cloud/nvidia/brev.md | 16 +++++++-------- source/guides/caching-docker-images.md | 2 +- source/guides/colocate-workers.md | 6 +++--- source/guides/scheduler-gpu-optimization.md | 8 ++++---- source/platforms/coiled.md | 22 ++++++++++----------- source/platforms/kserve.md | 8 ++++---- source/platforms/kubeflow.md | 4 ++-- source/platforms/kubernetes.md | 6 +++--- source/platforms/snowflake.md | 18 ++++++++--------- source/tools/kubernetes/dask-helm-chart.md | 10 +++++----- source/tools/kubernetes/dask-operator.md | 2 +- 20 files changed, 85 insertions(+), 85 deletions(-) diff --git a/README.md b/README.md index 0d631cf3..35a99d30 100644 --- a/README.md +++ b/README.md @@ -74,11 +74,11 @@ Please make sure to follow this guide when writing documentation that involves K #### Usage of console and bash blocks -- Use `console` for any set of commands that are entered in a command line. +- Use `console` for any set of commands that are entered in a command line that contain output from the command. - Start the line with `$`, they'll be omitted upon copying in sphinx rendered docs. - Use `\` to break lines. - Do not add comments like `# comment` in-line when using break lines because this will break the copying. -- Use `bash` for any bash script `.sh`. +- Use `bash` for any bash script `.sh`, or command line entry without output. ### Notebooks diff --git a/source/cloud/aws/ec2-multi.md b/source/cloud/aws/ec2-multi.md index 942d65c8..97fcc9bc 100644 --- a/source/cloud/aws/ec2-multi.md +++ b/source/cloud/aws/ec2-multi.md @@ -20,7 +20,7 @@ Install the AWS CLI tools following the [official instructions](https://docs.aws Also install `dask-cloudprovider` and ensure you select the `aws` optional extras. 
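As a concrete illustration of the `console` vs `bash` guidance added to the README above (both snippets are hypothetical and only demonstrate the convention): use `console` when the block also shows the command's output,

```console
$ nvidia-smi -L
GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
```

and use `bash` for a plain command-line entry with no output shown:

```bash
$ conda activate rapids
```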
-```console +```bash $ pip install "dask-cloudprovider[aws]" ``` diff --git a/source/cloud/aws/ec2.md b/source/cloud/aws/ec2.md index 8cd0ff40..5e0a467b 100644 --- a/source/cloud/aws/ec2.md +++ b/source/cloud/aws/ec2.md @@ -34,7 +34,7 @@ If you use the AWS Console, please use the default `ubuntu` user to ensure the N ````{tip} Depending on where your ssh key is, when connecting via SSH you might need to do -```console +```bash $ ssh -i /your-key-file.pem ubuntu@ ``` diff --git a/source/cloud/aws/eks.md b/source/cloud/aws/eks.md index 99ff84ed..77194fe7 100644 --- a/source/cloud/aws/eks.md +++ b/source/cloud/aws/eks.md @@ -14,7 +14,7 @@ First you'll need to have the [`aws` CLI tool](https://aws.amazon.com/cli/) and Ensure you are logged into the `aws` CLI. -```console +```bash $ aws configure ``` @@ -32,7 +32,7 @@ key pair or import (see "Actions" dropdown) one you've created locally. 3. The `--ssh-public-key` argument is the name assigned during creation of your key in AWS console. ``` -```console +```bash $ eksctl create cluster rapids \ --version 1.30 \ --nodes 3 \ @@ -50,7 +50,7 @@ With this command, you’ve launched an EKS cluster called `rapids`. You’ve sp To access the cluster we need to pull down the credentials. Add `--profile ` if you are not using the default profile. -```console +```bash $ aws eks --region us-east-1 update-kubeconfig --name rapids ``` diff --git a/source/cloud/azure/aks.md b/source/cloud/azure/aks.md index 1c77de86..91d0916d 100644 --- a/source/cloud/azure/aks.md +++ b/source/cloud/azure/aks.md @@ -14,7 +14,7 @@ First you'll need to have the [`az` CLI tool](https://learn.microsoft.com/en-us/ Ensure you are logged into the `az` CLI. -```console +```bash $ az login ``` @@ -22,7 +22,7 @@ $ az login Now we can launch a GPU enabled AKS cluster. First launch an AKS cluster. -```console +```bash $ az aks create -g -n rapids \ --enable-managed-identity \ --node-count 1 \ @@ -77,13 +77,13 @@ Microsoft.ContainerService/GPUDedicatedVHDPreview Registered When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the `az provider register` command: -```console +```bash $ az provider register --namespace Microsoft.ContainerService ``` Then install the aks-preview CLI extension, use the following Azure CLI commands: -```console +```bash $ az extension add --name aks-preview ``` @@ -91,7 +91,7 @@ $ az extension add --name aks-preview ````` -```console +```bash $ az aks nodepool add \ --resource-group \ --cluster-name rapids \ @@ -107,7 +107,7 @@ Here we have added a new pool made up of `Standard_NC48ads_A100_v4` instances wh Then we can install the NVIDIA drivers. -```console +```bash $ helm install --wait --generate-name --repo https://helm.ngc.nvidia.com/nvidia \ -n gpu-operator --create-namespace \ gpu-operator \ diff --git a/source/cloud/azure/azure-vm.md b/source/cloud/azure/azure-vm.md index f316583c..e7f90ab3 100644 --- a/source/cloud/azure/azure-vm.md +++ b/source/cloud/azure/azure-vm.md @@ -53,7 +53,7 @@ Prepare the following environment variables. 
| `AZ_USERNAME` | User name of VM | `rapidsai` | | `AZ_SSH_KEY` | public ssh key | `~/.ssh/id_rsa.pub` | -```console +```bash $ az vm create \ --name ${AZ_VMNAME} \ --resource-group ${AZ_RESOURCEGROUP} \ @@ -109,7 +109,7 @@ Next we need to allow network traffic to the VM so we can access Jupyter and Das | `AZ_NSGNAME` | NSG name for the VM | `${AZ_VMNAME}NSG` | | `AZ_NSGRULENAME` | Name for NSG rule | `Allow-Dask-Jupyter-ports` | -```console +```bash $ az network nsg rule create \ -g ${AZ_RESOURCEGROUP} \ --nsg-name ${AZ_NSGNAME} \ @@ -128,7 +128,7 @@ Next, we can SSH into our VM to install RAPIDS. SSH instructions can be found by ````{tip} When connecting via SSH by doing -```console +```bash $ ssh -i /your-key-file.pem azureuser@ ``` diff --git a/source/cloud/gcp/dataproc.md b/source/cloud/gcp/dataproc.md index 12ba3063..c4557337 100644 --- a/source/cloud/gcp/dataproc.md +++ b/source/cloud/gcp/dataproc.md @@ -6,27 +6,27 @@ RAPIDS can be deployed on Google Cloud Dataproc using Dask. For more details, se It is strongly recommended that you copy the initialization scripts into your own Storage bucket to prevent unintended upgrades from upstream in the cluster: -```console +```bash $ REGION= ``` -```console +```bash $ GCS_BUCKET= ``` -```console +```bash $ gcloud storage buckets create gs://$GCS_BUCKET ``` -```console +```bash $ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh gs://$GCS_BUCKET ``` -```console +```bash $ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/dask/dask.sh gs://$GCS_BUCKET ``` -```console +```bash $ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh gs://$GCS_BUCKET ``` @@ -38,23 +38,23 @@ At the time of writing [Dataproc only supports RAPIDS version 23.12 and earlier Please ensure that your setup complies with this compatibility requirement. Using newer RAPIDS versions may result in unexpected behavior or errors. ``` -```console +```bash $ CLUSTER_NAME= ``` -```console +```bash $ DASK_RUNTIME=yarn ``` -```console +```bash $ RAPIDS_VERSION=23.12 ``` -```console +```bash $ CUDA_VERSION=11.8 ``` -```console +```bash $ gcloud dataproc clusters create $CLUSTER_NAME\ --region $REGION\ --image-version 2.0-ubuntu18\ diff --git a/source/cloud/gcp/gke.md b/source/cloud/gcp/gke.md index 2947b0b9..b808008f 100644 --- a/source/cloud/gcp/gke.md +++ b/source/cloud/gcp/gke.md @@ -14,7 +14,7 @@ First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/g Ensure you are logged into the `gcloud` CLI. -```console +```bash $ gcloud init ``` @@ -22,7 +22,7 @@ $ gcloud init Now we can launch a GPU enabled GKE cluster. -```console +```bash $ gcloud container clusters create rapids-gpu-kubeflow \ --accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \ --zone us-central1-c --release-channel stable @@ -39,14 +39,14 @@ executable. Install gke-gcloud-auth-plugin for use with kubectl by following htt ``` you will need to install the `gke-gcloud-auth-plugin` to be able to get the credentials. 
To do so, -```console +```bash $ gcloud components install gke-gcloud-auth-plugin ``` ```` ## Get the cluster credentials -```console +```bash $ gcloud container clusters get-credentials rapids-gpu-kubeflow \ --region=us-central1-c ``` diff --git a/source/cloud/gcp/vertex-ai.md b/source/cloud/gcp/vertex-ai.md index 34bf0ff7..c29ce9a9 100644 --- a/source/cloud/gcp/vertex-ai.md +++ b/source/cloud/gcp/vertex-ai.md @@ -33,7 +33,7 @@ You can find out your current system CUDA Toolkit version by running `ls -ld /us You can create a new RAPIDS conda environment and register it with `ipykernel` for use in Jupyter Lab. Open a new terminal in Jupyter and run the following commands. -```console +```bash # Create a new environment $ conda create -y -n rapids \ {{ rapids_conda_channels }} \ @@ -41,12 +41,12 @@ $ conda create -y -n rapids \ ipykernel ``` -```console +```bash # Activate the environment $ conda activate rapids ``` -```console +```bash # Register the environment with Jupyter $ python -m ipykernel install --prefix "${DL_ANACONDA_HOME}/envs/rapids" --name rapids --display-name rapids ``` diff --git a/source/cloud/nvidia/brev.md b/source/cloud/nvidia/brev.md index 8def077e..9800218b 100644 --- a/source/cloud/nvidia/brev.md +++ b/source/cloud/nvidia/brev.md @@ -15,7 +15,7 @@ There are two options to get you up and running with RAPIDS in a few steps, than ### Option 1. Setting up your Brev GPU Instance -1. Navigate to the [Brev console](https://brev.nvidia.com/org) and click on "Create your first instance". +1. Navigate to the [Brev bash](https://brev.nvidia.com/org) and click on "Create your first instance". ![Screenshot of the "Create your first instance" UI](/_static/images/platforms/brev/brev1.png) @@ -87,7 +87,7 @@ To create and use a Jupyter Notebook, click "Open Notebook" at the top right aft If you want to access your launched Brev instance(s) via Visual Studio Code or SSH using terminal, you need to install the [Brev CLI according to these instructions](https://docs.nvidia.com/brev/latest/brev-cli.html) or this code below: -```console +```bash $ sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/main/bin/install-latest.sh)" && brev login ``` @@ -95,7 +95,7 @@ $ sudo bash -c "$(curl -fsSL https://raw.githubusercontent.com/brevdev/brev-cli/ To connect to your Brev instance from VS Code open a new VS Code window and run: -```console +```bash $ brev open ``` @@ -105,7 +105,7 @@ It will automatically open a new VS Code window for you to use with RAPIDS. To access your Brev instance from the terminal run: -```console +```bash $ brev shell ``` @@ -113,7 +113,7 @@ $ brev shell Assuming your Jupyter Notebook is running on port `8888` in your Brev environment, you can forward this port to your local machine using the following SSH command: -```console +```bash $ ssh -L 8888:localhost:8888 @ -p 22 ``` @@ -121,7 +121,7 @@ This command forwards port `8888` on your local machine to port `8888` on the re Or for port `2222` (default port). 
-```console +```bash $ ssh @ -p 2222 ``` @@ -165,10 +165,10 @@ print(gdf) - [Brev Docs](https://brev.dev/) - Please note: Git is not preinstalled in the RAPIDS container, but can be installed into the container when it is running using -```console +```bash $ apt update ``` -```console +```bash $ apt install git -y ``` diff --git a/source/guides/caching-docker-images.md b/source/guides/caching-docker-images.md index 4c17c009..89cb174b 100644 --- a/source/guides/caching-docker-images.md +++ b/source/guides/caching-docker-images.md @@ -45,7 +45,7 @@ spec: You can create this Daemonset with `kubectl`. -```console +```bash $ kubectl apply -f caching-daemonset.yaml ``` diff --git a/source/guides/colocate-workers.md b/source/guides/colocate-workers.md index 4a3dfa7a..d773472e 100644 --- a/source/guides/colocate-workers.md +++ b/source/guides/colocate-workers.md @@ -8,7 +8,7 @@ First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/g Ensure you are logged into the `gcloud` CLI. -```console +```bash $ gcloud init ``` @@ -16,7 +16,7 @@ $ gcloud init Now we can launch a GPU enabled GKE cluster. -```console +```bash $ gcloud container clusters create rapids-gpu \ --accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \ --zone us-central1-c --release-channel stable @@ -163,7 +163,7 @@ spec: You can create this cluster with `kubectl`. -```console +```bash $ kubectl apply -f rapids-dask-cluster.yaml ``` diff --git a/source/guides/scheduler-gpu-optimization.md b/source/guides/scheduler-gpu-optimization.md index 0db2b0d3..6e99bde5 100644 --- a/source/guides/scheduler-gpu-optimization.md +++ b/source/guides/scheduler-gpu-optimization.md @@ -14,7 +14,7 @@ First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/g Ensure you are logged into the `gcloud` CLI. -```console +```bash $ gcloud init ``` @@ -22,7 +22,7 @@ $ gcloud init Now we can launch a GPU enabled GKE cluster. -```console +```bash $ gcloud container clusters create rapids-gpu \ --accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \ --zone us-central1-c --release-channel stable @@ -35,7 +35,7 @@ a2-highgpu-2g, each with two A100 GPUs. Now create a new nodepool on this GPU cluster. -```console +```bash $ gcloud container node-pools create scheduler-pool --cluster rapids-gpu \ --accelerator type=nvidia-tesla-t4,count=1 --machine-type n1-standard-2 \ --num-nodes 1 --node-labels dedicated=scheduler --zone us-central1-c @@ -207,7 +207,7 @@ spec: You can create this cluster with `kubectl`. -```console +```bash $ kubectl apply -f rapids-dask-cluster.yaml ``` diff --git a/source/platforms/coiled.md b/source/platforms/coiled.md index dae7000a..8dbd4132 100644 --- a/source/platforms/coiled.md +++ b/source/platforms/coiled.md @@ -13,13 +13,13 @@ Head over to [Coiled](https://docs.coiled.io/user_guide/setup/index) and registe Once your account is set up, install the coiled Python library/CLI tool. -```console +```bash $ pip install coiled ``` Then you can authenticate with your Coiled account. -```console +```bash $ coiled login ``` @@ -29,7 +29,7 @@ For more information see the [Coiled Getting Started documentation](https://docs The simplest way to get up and running with RAPIDS on Coiled is to launch a Jupyter notebook server using the RAPIDS notebook container. 
-```console +```bash $ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` @@ -43,7 +43,7 @@ By default when running remote operations Coiled will [attempt to create a copy All Coiled commands can be passed a container image to use. This container will be pulled onto the remote VM at launch time. -```console +```bash $ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` @@ -77,13 +77,13 @@ dependencies: - dask-labextension ``` -```console +```bash $ coiled env create --name rapids --gpu-enabled --conda rapids-environment.yaml ``` Then you can specify this software environment when starting new Coiled resources. -```console +```bash $ coiled notebook start --gpu --software rapidsai-notebooks ``` @@ -91,20 +91,20 @@ $ coiled notebook start --gpu --software rapidsai-notebooks You can execute a script in a container on an ephemeral VM with [Coiled CLI Jobs](https://docs.coiled.io/user_guide/cli-jobs.html). -```console +```bash $ coiled run python my_code.py # Boots a VM on the cloud, runs the scripts, then shuts down again ``` We can use this to run GPU code on a remote environment using the RAPIDS container. You can set the coiled CLI to keep the VM around for a few minutes after execution is complete just in case you want to run it again and reuse the same hardware. -```console +```bash $ coiled run --gpu --name rapids-demo --keepalive 5m --container {{ rapids_container }} -- python my_code.py ... ``` This works very nicely when paired with the cudf.pandas CLI tool. For example we can run `python -m cudf.pandas my_script` to GPU accelerate our Pandas code without having to rewrite anything. For example [this script](https://gist.github.com/jacobtomlinson/2481ecf2e1d2787ae2864a6712eef97b#file-cudf_pandas_coiled_demo-py) processes some open NYC parking data. With `pandas` it takes around a minute, but with `cudf.pandas` it only takes a few seconds. -```console +```bash $ coiled run --gpu --name rapids-demo --keepalive 5m --container {{ rapids_container }} -- python -m cudf.pandas cudf_pandas_coiled_demo.py Output @@ -123,13 +123,13 @@ Calculate violations by day of week took: 1.238 seconds To start an interactive Jupyter notebook session with [Coiled Notebooks](https://docs.coiled.io/user_guide/notebooks.html) run the RAPIDS notebook container via the notebook service. -```console +```bash $ coiled notebook start --gpu --container {{ rapids_notebooks_container }} ``` Note that the `--gpu` flag will automatically select a `g4dn.xlarge` instance with a T4 GPU on AWS. You could additionally add the `--vm-type` flag to explicitly choose another machine type with different GPU configuration. For example to choose a machine with 4 L4 GPUs you would run the following. -```console +```bash $ coiled notebook start --gpu --vm-type g6.24xlarge --container nvcr.io/nvidia/rapidsai/notebooks:24.12-cuda12.5-py3.12 ``` diff --git a/source/platforms/kserve.md b/source/platforms/kserve.md index f501c349..51bdb99d 100644 --- a/source/platforms/kserve.md +++ b/source/platforms/kserve.md @@ -120,22 +120,22 @@ We will show you concrete examples below. 
But first some general notes: - Triton-FIL uses v2 version of KServe protocol, so make sure to use `v2` URL when sending inference request: -```console +```bash $ INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}') ``` -```console +```bash $ INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \ -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') ``` -```console +```bash $ SERVICE_HOSTNAME=$(kubectl get inferenceservice -n kserve-test \ -o jsonpath='{.status.url}' | cut -d "/" -f 3) ``` -```console +```bash $ curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \ "http://${INGRESS_HOST}:${INGRESS_PORT}/v2/models//infer" \ -d @./payload.json diff --git a/source/platforms/kubeflow.md b/source/platforms/kubeflow.md index 394e12ba..f231e441 100644 --- a/source/platforms/kubeflow.md +++ b/source/platforms/kubeflow.md @@ -54,7 +54,7 @@ Once the Notebook is ready, click Connect to launch Jupyter. You can verify everything works okay by opening a terminal in Jupyter and running: -```console +```bash $ nvidia-smi ``` @@ -215,7 +215,7 @@ Create a file with the above contents, and then apply it into your user’s name For the default `user@example.com` user it would look like this. -```console +```bash $ kubectl apply -n kubeflow-user-example-com -f configure-dask-dashboard.yaml ``` diff --git a/source/platforms/kubernetes.md b/source/platforms/kubernetes.md index 51c4baae..ffc98e1b 100644 --- a/source/platforms/kubernetes.md +++ b/source/platforms/kubernetes.md @@ -255,19 +255,19 @@ spec: ```` -```console +```bash $ kubectl apply -f rapids-notebook.yaml ``` The container creation takes approximately 7 min, you can check the status of the Pod by doing: -```console +```bash $ kubectl get pods ``` Once it's ready, Jupyter will be accessible on port `30002` of your Kubernetes nodes via `NodePort` service. Alternatively you could use a `LoadBalancer` service type [if you have one configured](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) or a `ClusterIP` and use `kubectl` to port forward the port locally and access it that way. -```console +```bash $ kubectl port-forward service/rapids-notebook 8888 ``` diff --git a/source/platforms/snowflake.md b/source/platforms/snowflake.md index 0ab19606..e20e4ca5 100644 --- a/source/platforms/snowflake.md +++ b/source/platforms/snowflake.md @@ -219,7 +219,7 @@ RUN pip install "snowflake-snowpark-python[pandas]" snowflake-connector-python Build the image in the directory where your Dockerfile is located. Notice that no GPU is needed to build this image. -```console +```bash $ docker build --platform=linux/amd64 -t /rapids-nb-snowflake:latest . 
``` @@ -240,7 +240,7 @@ SELECT CURRENT_ORGANIZATION_NAME(); --org SELECT CURRENT_ACCOUNT_NAME(); --account name ``` -```console +```bash $ snow connection add ``` @@ -263,7 +263,7 @@ token file path: Test the connection: -```console +```bash $ snow connection test --connection "CONTAINER_HOL" ``` @@ -291,30 +291,30 @@ ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE; and if you are using the Snowflake Connector for Python you need: -```console +```bash $ pip install "snowflake-connector-python[secure-local-storage]" ``` ```` -```console +```bash $ snow spcs image-registry login --connection CONTAINER_HOL ``` We tag and push the image, make sure you replace the repository url for `org-account.registry.snowflakecomputing.com/container_hol_db/public/image_repo`: -```console +```bash $ docker tag /rapids-nb-snowflake:latest /rapids-nb-snowflake:dev ``` Verify that the new tagged image exists by running: -```console +```bash $ docker image list ``` Push the image to snowflake: -```console +```bash $ docker push /rapids-nb-snowflake:dev ``` @@ -375,7 +375,7 @@ Anything that is added to this directory will persist. We use `snow-cli` to push this `yaml` file: -```console +```bash $ snow stage copy rapids-snowpark.yaml @specs --overwrite --connection CONTAINER_HOL ``` diff --git a/source/tools/kubernetes/dask-helm-chart.md b/source/tools/kubernetes/dask-helm-chart.md index b4a0602e..d16250dc 100644 --- a/source/tools/kubernetes/dask-helm-chart.md +++ b/source/tools/kubernetes/dask-helm-chart.md @@ -57,7 +57,7 @@ You can compute password hash by following the [jupyter notebook guide](https:// ### Installing the Helm Chart -```console +```bash $ helm install rapids-release --repo https://helm.dask.org dask -f rapids-config.yaml ``` @@ -75,12 +75,12 @@ For simplicity, this guide will setup access to the Jupyter server via port forw First, setup port forwarding from the cluster to external port: -```console +```bash # For the Jupyter server $ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-jupyter 8888:8888 ``` -```console +```bash # For the Dask dashboard $ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-scheduler 8787:8787 ``` @@ -116,13 +116,13 @@ Worker metrics can be examined in dask dashboard. In case you want to scale up the cluster with more GPU workers, you may do so via `kubectl` or via `helm upgrade`. -```console +```bash $ kubectl scale deployment rapids-release-dask-worker --replicas=8 ``` or -```console +```bash $ helm upgrade --set worker.replicas=8 rapids-release dask/dask ``` diff --git a/source/tools/kubernetes/dask-operator.md b/source/tools/kubernetes/dask-operator.md index 78e6c981..b24a69dd 100644 --- a/source/tools/kubernetes/dask-operator.md +++ b/source/tools/kubernetes/dask-operator.md @@ -128,7 +128,7 @@ spec: You can create this cluster with `kubectl`. -```console +```bash $ kubectl apply -f rapids-dask-cluster.yaml ```
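After applying the manifest at the end of the dask-operator page, one way to confirm that the cluster came up (a rough sketch, assuming the operator's `DaskCluster` custom resource is installed; resource names may vary with the operator version) is:

```bash
# List the DaskCluster resources managed by the operator
$ kubectl get daskclusters

# Check that the scheduler and worker pods have started
$ kubectl get pods
```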