Description
When running rfd3, there is a line in the output that says:
You are using a CUDA device ('NVIDIA GeForce RTX 4060 Laptop GPU') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
If I modify the rfd3 script in my environment's bin to this:
#!/opt/.miniconda/envs/protein_foundry/bin/python3.12
try:
    import torch
    # TO-DO: make if-statement for tensor cores?
    torch.set_float32_matmul_precision("high")
except Exception:
    print("Failed to set matmul precision.")

import sys
from rfd3.cli import app

if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(app())
then the warning goes away. That said, I have no idea whether this is a good idea or whether the precision loss will be harmful in some way.
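For the TO-DO comment in that patch, my rough idea was to gate the call on the device's compute capability (Tensor Cores were introduced with Volta, compute capability 7.0). A minimal sketch of what I mean, not something I've tested against rfd3:

import torch

# Sketch only: enable the TF32 matmul path when the GPU has Tensor Cores
# (compute capability 7.0 or higher).
if torch.cuda.is_available():
    major, _ = torch.cuda.get_device_capability()
    if major >= 7:
        torch.set_float32_matmul_precision("high")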
I tried using seed=42 on the command line; however, I got subtle differences between runs even without this modification, which makes me think I need an additional setting to make things deterministic. Oddly, the "high" precision setting produced output that differed more from my baseline than the "medium" setting did. My best guess is that some time-derived pseudo-randomness is affecting the outputs.
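In case it helps track down the nondeterminism, these are the extra PyTorch settings I was planning to try on top of seed=42 (I don't know whether rfd3 already sets any of them internally):

import torch

# Sketch only: settings I would try for run-to-run reproducibility.
torch.manual_seed(42)                     # seed the CPU and CUDA RNGs
torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic ops
torch.backends.cudnn.benchmark = False    # avoid autotuned kernel selection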
Since "medium" gave me the more similar result, I'm inclined to leave that setting in my bin file, but since I really don't know what I'm doing, I thought it wise to ask the experts. Thanks!