The official Colab notebook for the Derm Foundation model, 'quick_start_with_hugging_face.ipynb', fails on a GPU runtime with the following error:
NotFoundError: Graph execution error:
Detected at node XlaCallModule defined at (most recent call last):
The current platform CUDA is not among the platforms required by the module: [CPU]
[[{{node XlaCallModule}}]]
tf2xla conversion failed while converting _inference_2174[]. Run with TF_DUMP_GRAPH_PREFIX=/path/to/dump/dir and --vmodule=xla_compiler=2 to obtain a dump of the compiled functions.
[[StatefulPartitionedCall/StatefulPartitionedCall]] [Op:__inference_signature_wrapper_inference_fn_5294]
This occurs when loading the model from Hugging Face and attempting inference on GPU. The same notebook works on CPU, but GPU execution is broken.
Steps to Reproduce:
- Open the official Colab notebook: [Link to Notebook]
- Change runtime to GPU (Runtime → Change runtime type → T4 GPU).
- Run the notebook until model inference.
- Error occurs at 'output = infer(inputs=tf.constant([input_tensor]))'.
Troubleshooting Attempted:
- Forced CPU execution (works, but too slow).
- Verified CUDA/cuDNN compatibility (GPU is detected).
- Tested on multiple Colab instances.
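The CPU fallback mentioned above can be forced explicitly with a device scope, without switching the whole runtime off GPU. A minimal sketch (only the `infer(inputs=tf.constant([input_tensor]))` call shape comes from the notebook; the helper name is ours):

```python
import tensorflow as tf

def run_on_cpu(infer, input_tensor):
    """Pin the serving call to CPU so the embedded XlaCallModule
    executes on the one platform the SavedModel was serialized
    for ([CPU]), even when a GPU is visible."""
    with tf.device("/CPU:0"):
        return infer(inputs=tf.constant([input_tensor]))
```

This keeps the rest of the pipeline (preprocessing, downstream classifiers) free to use the GPU; only the foundation-model call is pinned.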
Urgency:
This blocks GPU-accelerated workflows for researchers and clinicians using the official notebook. Please:
- Clarify if GPU support is intended for this model.
- Provide a GPU-compatible version or fix the Colab notebook.
- Update the Hugging Face model card with GPU compatibility details.