Stable #8
Replies: 8 comments 2 replies
-
Hi, could you please provide the example file "off_axis.TXT" that you show in the PSF_Analysis.ipynb case, so we can try out your nice examples? Thank you!
-
Hi! I have added it to the data folder. You may use it from there. Thank you!
-
Thank you! Curiously, on a Mac M1 (no CUDA), I am only able to use an array of 180x180 or else torch instantly seg-faults at the data = torch.tensor line of PSF_Analysis.ipynb. The same happens at a Python command line. Python 3.9+, PyTorch 2.x from PyPI.
Is there some limit or memory issue? I have over 40 GB free, and it happens whether the device is 'cpu' or 'mps'.
Smaller data tensors work. I took the center "square" of the off_axis.txt, size 160x160, and was able to do the conv2d.
PyTorch is new to me, so I am working through it.
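For anyone hitting the same seg-fault, the crop-then-convolve workaround described above can be sketched roughly like this. Synthetic random data stands in for off_axis.TXT so the snippet is self-contained, and the 3x3 box kernel is just a placeholder, not the notebook's actual PSF kernel:

```python
import numpy as np
import torch
import torch.nn.functional as F

# In the notebook the data comes from the data folder, e.g.:
#   data = np.loadtxt("data/off_axis.TXT")
# Here a synthetic 512x512 array stands in for it.
data = np.random.default_rng(0).random((512, 512))

# Take the central 160x160 "square" (the size that worked on the Mac M1).
n = 160
c = data.shape[0] // 2
crop = data[c - n // 2 : c + n // 2, c - n // 2 : c + n // 2]

# conv2d expects a (batch, channels, H, W) tensor.
img = torch.tensor(crop, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
kernel = torch.ones((1, 1, 3, 3)) / 9.0  # simple 3x3 box filter as a stand-in

out = F.conv2d(img, kernel, padding=1)
print(out.shape)  # torch.Size([1, 1, 160, 160])
```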
Regards
r.
-
Just FYI:
(My) Mac M1 with torch (PyTorch) 2.0.x only works with an input tensor of 160x160 or less; otherwise it seg-faults.
I reverted to torch 1.13 (same problem). Finally, back down at 1.10.2, things work, but there is no Mac GPU ('mps') support.
I know this may be irrelevant to you, but it's just info on recent versions. I know torch is bleeding edge and most people are on CUDA, but there are a lot of us on Mac M1 hardware for dev.
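For others on Apple Silicon, a quick sanity check of the installed torch version and 'mps' availability might look like this. A minimal sketch; the getattr guard is only there so it also runs on pre-1.12 builds, which lack the mps backend entirely:

```python
import torch

# Report the installed version (mps support landed in torch 1.12).
print(torch.__version__)

# Fall back to CPU when the Metal (mps) backend is absent or unavailable.
mps_ok = getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available()
device = torch.device("mps" if mps_ok else "cpu")
print(device)
```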
Thanks for your help (listening).
Enjoying working through the PSF_Analysis examples.
Regards,
r.
Robert F. Meyer
NASA's Global Differential GPS System
Orbiter and Radio Metric Systems Group
Jet Propulsion Laboratory
4800 Oak Grove Dr., 238-600
Pasadena, California 91109-8099
http://www.gdgps.net
E-Mail: ***@***.***
-
64 GB RAM! It's only a 512x512 array, and even as float64 (doubles) that should easily run through 'cpu' and 'mps'. Something is wrong with torch. <= v1.11.x worked, but there is no 'mps' "GPU" support. I don't think torch is well tested and debugged on Mac M1+. It's some sort of memory-access issue, since it seg-faults. A 2.097 MB limit?? I see lots of complaints on discuss.pytorch.org. I will try Colab again; I think I tried it when I didn't have the input txt files. But Google probably has CUDA.
r.
On Sep 7, 2023, at 20:24, Avinash CK wrote:
How much RAM do you have? I think it could be a RAM memory shortage. I would suggest using Google Colab; that way you can test everything to its full extent. Hope it was helpful.
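The 2.097 MB figure above checks out; a quick back-of-the-envelope calculation confirms that a 512x512 float64 array is nowhere near 40+ GB of free RAM, so a plain out-of-memory condition seems unlikely:

```python
# Memory footprint of the full input array: 512 x 512 doubles at 8 bytes each.
n = 512
bytes_per_float64 = 8
size_mb = n * n * bytes_per_float64 / 1e6
print(size_mb)  # 2.097152 (MB), matching the 2.097 MB figure above
```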
-
PS: yes, it all works in Colab, but that uses
torch.__version__
2.0.1+cu118
and surely isn't running on Mac M1/M2!
Oh, well...
Thank you,
r.
-
64 GB is more than sufficient, I guess. Then yeah, it's probably a Mac compatibility issue, as you suggested.
-
This release includes Resolved Stellar Population Image Simulation
This discussion was created from the release Stable.