diff --git a/docs/site/FAQ.md b/docs/site/FAQ.md
index 83a28eea..53b2dd31 100644
--- a/docs/site/FAQ.md
+++ b/docs/site/FAQ.md
@@ -20,3 +20,9 @@ Python 3.10 or higher.
 This error occurs when the product of the Arrow batch size and the row dimension exceeds 2,147,483,647 (INT32_MAX), typically with very wide datasets (many features per row), causing Arrow serialization to fail. For example, if you set `max_records_per_batch = 10000` and your data has `row_dimension = 300000` (i.e., 300,000 features per row), then `10000 × 300000 = 3,000,000,000`, which exceeds the limit and causes this error.
 Be aware that some Spark Rapids ML algorithms (such as NearestNeighbors) may convert sparse vectors to dense format internally if the underlying cuML algorithm does not support sparse input. This conversion can significantly increase memory usage, especially with wide datasets, and may make the Arrow size limit error more likely.
 To mitigate this, lower the value of `spark.sql.execution.arrow.maxRecordsPerBatch` (for example, to 5,000 or less) so that the product of the batch size and the number of elements per row stays within Arrow's maximum allowed size.
+
+### What are some possible causes of low-level CUDA and/or native code errors?
+
+ - NaNs or nulls in the input data. These are currently passed directly into the cuML layer and may trigger such errors.
+ - The NCCL communication library does not allow communication between multiple processes on the same GPU. [Stage level scheduling](https://nvidia.github.io/spark-rapids-ml/performance.html#stage-level-scheduling) can avoid this, but it is not supported in all cases. Check the requirements and, if needed, adjust your Spark GPU configs to ensure 1 task per GPU during `fit()` calls; see the example configuration below.
+ - Previously unknown bugs. Please file an issue.
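+
+For example, the following is a minimal sketch of one way to ensure 1 task per GPU during `fit()`: with 1 GPU per executor and each task requesting a whole GPU, Spark schedules at most one task per GPU at a time. The values are illustrative, and cluster-manager-specific settings such as the GPU discovery script are omitted; adapt them to your deployment.
+
+```python
+from pyspark.sql import SparkSession
+
+# Illustrative settings: one GPU per executor and each task requires a full GPU,
+# so at most one task runs on a GPU at a time during fit().
+# In a real deployment these are typically supplied at launch, e.g. via spark-submit --conf.
+spark = (
+    SparkSession.builder
+    .config("spark.executor.resource.gpu.amount", "1")
+    .config("spark.task.resource.gpu.amount", "1")
+    .getOrCreate()
+)
+```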
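+
+Relatedly, the Arrow batch size mitigation described earlier on this page can be applied to an existing Spark session at runtime. A minimal sketch, assuming `spark` is your active `SparkSession` and that `5000` is an illustrative value to be tuned to your row dimension:
+
+```python
+# Choose the value so that maxRecordsPerBatch * row_dimension stays below
+# 2,147,483,647 (INT32_MAX).
+spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", "5000")
+```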