Hi Nathaniel,
I am referring to an issue I opened a few days ago on the GitHub page of bifacial_radiance by NREL:
https://github.com/NatLabRockies/bifacial_radiance/issues/458
I am using the Python package bifacial_radiance to drive the Radiance software; the irradiance analysis is performed by calling rtrace from within bifacial_radiance.
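To make the setup concrete, here is a minimal sketch of how such an rtrace invocation can be assembled as a subprocess command. The flag values (`-ab`, `-ad`) and the octree filename are illustrative assumptions, not the exact parameters bifacial_radiance passes internally:

```python
import shlex

def build_rtrace_cmd(octfile, ambient_bounces=2, ambient_divisions=1024):
    """Assemble an rtrace command line.

    The ambient parameters here are illustrative defaults only; the real
    values come from bifacial_radiance's analysis settings.
    """
    return [
        "rtrace",
        "-i",                          # report irradiance instead of radiance
        "-h",                          # suppress the output header
        "-ab", str(ambient_bounces),   # ambient bounces
        "-ad", str(ambient_divisions), # ambient divisions
        octfile,                       # compiled scene octree (placeholder name)
    ]

cmd = build_rtrace_cmd("scene.oct")
print(shlex.join(cmd))
```

In the actual workflow the scan points (ray origins and directions) are piped to rtrace on stdin by bifacial_radiance; they are omitted here for brevity.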
I recently switched from running the simulations locally on Windows (11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00 GHz, 4 cores, no GPU) to a Linux machine with an NVIDIA Tesla M10 (5 multiprocessors). I successfully installed Radiance and then Accelerad.
The software finds the GPU; however, memory usage is limited to approximately 700-800 MiB per rtrace process:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla M10            On  | 00000000:0B:00.0 Off |                  N/A |
| N/A   47C    P0    41W /  53W |    708MiB /  8192MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A    691523      C   rtrace                            705MiB |
+-----------------------------------------------------------------------------+
```
My question is: why is the memory usage limited to roughly 700 MiB per process?
Running multiple simulations at once (one for each timestamp) did not change anything:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla M10            On  | 00000000:0B:00.0 Off |                  N/A |
| N/A   46C    P0    44W /  53W |   2368MiB /  8192MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A   2729473      C   rtrace                            705MiB |
|    0   N/A  N/A   2730899      C   rtrace                            830MiB |
|    0   N/A  N/A   2732326      C   rtrace                            830MiB |
+-----------------------------------------------------------------------------+
```
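For reference, the parallel runs above were launched along these lines. This is only a minimal sketch, assuming one octree file per timestamp; the filenames, flags, and worker count are placeholders, not the actual script:

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Hypothetical octree files, one per simulated timestamp (placeholder names).
OCTFILES = ["scene_0800.oct", "scene_1200.oct", "scene_1600.oct"]

def rtrace_cmd(octfile):
    # -i: irradiance mode, -h: no header; illustrative flags only.
    return ["rtrace", "-i", "-h", octfile]

def run_one(octfile):
    """Run a single rtrace process; the scan rays arrive on stdin in the
    real bifacial_radiance workflow and are omitted here."""
    return subprocess.run(rtrace_cmd(octfile), capture_output=True, text=True)

def run_all(octfiles, max_workers=3):
    """One OS process per timestamp, so each run shows up as a separate
    rtrace entry in nvidia-smi, as in the output above."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one, octfiles))

# results = run_all(OCTFILES)   # commented out: requires rtrace on PATH
```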
Unfortunately, the results from these simulations also do not agree with comparable runs on Windows...
Is there anything else I can try?
Thanks in advance!