From 6793d1e0bbae381c7ff8f68dea8b25c61b77a987 Mon Sep 17 00:00:00 2001
From: Erik Ordentlich
Date: Sat, 1 Nov 2025 13:37:35 -0700
Subject: [PATCH 1/2] start faq entry on cuda/native code error possible causes

Signed-off-by: Erik Ordentlich
---
 docs/site/FAQ.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/site/FAQ.md b/docs/site/FAQ.md
index 83a28eea..7616e59f 100644
--- a/docs/site/FAQ.md
+++ b/docs/site/FAQ.md
@@ -20,3 +20,9 @@ Python 3.10 or higher.
 This error occurs when the product of Arrow batch size and row dimension exceeds 2,147,483,647 (INT32_MAX), typically with very wide datasets (many features per row), causing Arrow serialization to fail. For example, if you set `max_records_per_batch = 10000` and your data has `row_dimension = 300000` (i.e., 300,000 features per row), then `10000 × 300000 = 3,000,000,000`, which exceeds the Arrow limit of 2,147,483,647 (INT32_MAX) and will cause this error.
 Be aware that some Spark Rapids ML algorithms (such as NearestNeighbors) may convert sparse vectors to dense format internally if the underlying cuML algorithm does not support sparse input. This conversion can significantly increase memory usage, especially with wide datasets, and may make the Arrow size limit error more likely.
 To mitigate this, lower the value of `spark.sql.execution.arrow.maxRecordsPerBatch` (for example, to 5,000 or less) so that the product of the batch size and the number of elements per row stays within Arrow's maximum allowed size.
+
+### What are some possible causes of low-level CUDA and/or native code errors?
+
+ - NaNs or nulls in the input data. These are currently passed directly into the cuML layer and may trigger such errors.
+ - NCCL communication library does not allow communication between processes on the same GPU. Check your Spark GPU configs to ensure 1 task per GPU during fit() calls.
+ - Previously unknown bugs. Please file an issue.
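The Arrow size arithmetic in the FAQ text above can be sketched as a quick pre-flight check. This is a minimal illustration, not part of the patch or the Spark Rapids ML API; the function and variable names are hypothetical:

```python
# Sanity check for the Arrow serialization limit described in the FAQ entry:
# the product of records per batch and features per row must stay at or
# below INT32_MAX (2,147,483,647).
INT32_MAX = 2_147_483_647


def arrow_batch_fits(max_records_per_batch: int, row_dimension: int) -> bool:
    """True if a batch of this shape stays within Arrow's INT32_MAX element limit."""
    return max_records_per_batch * row_dimension <= INT32_MAX


def max_safe_batch_size(row_dimension: int) -> int:
    """Largest maxRecordsPerBatch value whose product with row_dimension fits."""
    return INT32_MAX // row_dimension


# The FAQ's example: 10,000 records x 300,000 features = 3,000,000,000 elements.
print(arrow_batch_fits(10_000, 300_000))  # False: exceeds 2,147,483,647
print(max_safe_batch_size(300_000))       # 7158
```

For the 300,000-feature example, any `spark.sql.execution.arrow.maxRecordsPerBatch` of 7,158 or less keeps the product under the limit, consistent with the FAQ's suggestion to try 5,000 or less.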
From e2b45a8a1281e21edc2c7274e8c2b1886b5ff029 Mon Sep 17 00:00:00 2001
From: Erik Ordentlich
Date: Mon, 3 Nov 2025 19:07:46 -0800
Subject: [PATCH 2/2] add pointer to stage level scheduling

Signed-off-by: Erik Ordentlich
---
 docs/site/FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/site/FAQ.md b/docs/site/FAQ.md
index 7616e59f..53b2dd31 100644
--- a/docs/site/FAQ.md
+++ b/docs/site/FAQ.md
@@ -24,5 +24,5 @@ Be aware that some Spark Rapids ML algorithms (such as NearestNeighbors) may con
 ### What are some possible causes of low-level CUDA and/or native code errors?
 
  - NaNs or nulls in the input data. These are currently passed directly into the cuML layer and may trigger such errors.
- - NCCL communication library does not allow communication between processes on the same GPU. Check your Spark GPU configs to ensure 1 task per GPU during fit() calls.
+ - NCCL communication library does not allow communication between processes on the same GPU. [Stage level scheduling](https://nvidia.github.io/spark-rapids-ml/performance.html#stage-level-scheduling) can avoid this but it is not supported in all cases. Check requirements and adjust your Spark GPU configs to ensure 1 task per GPU during fit() calls if needed.
  - Previously unknown bugs. Please file an issue.
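The "1 task per GPU" requirement in the NCCL bullet can be expressed with standard Spark resource configs. A sketch, assuming one GPU per executor; the application name and core count are illustrative only, and with stage-level scheduling (see the linked performance docs) the CPU-stage settings can differ:

```shell
# Illustrative spark-submit configs for one task per GPU during fit() calls.
# spark.task.resource.gpu.amount=1 makes each task claim a whole GPU, so two
# tasks can never share one GPU and trigger the NCCL same-GPU restriction.
spark-submit \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=1 \
  --conf spark.executor.cores=8 \
  your_app.py
```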