A Python class to produce images of debris disks, as a function of different grain sizes. It works for scattered light images as well as for thermal emission (with some crude-ish approximation for the dust temperature).
Radiation pressure increases the eccentricity of the particles, and the increase depends on their sizes. The idea here is to consider a size distribution and, instead of creating one single image, bin the size distribution in ng intervals and create ng images, to capture the different spatial scales of the debris disk. Afterwards, you can for instance do all sorts of linear combinations to find the images that best reproduce your observations.
The main motivation of the approach is to get rid of as many assumptions regarding the dust properties as possible. Therefore, instead of working directly with the "true" sizes of the particles, the code directly uses their β values, the ratio between the radiation pressure and gravitational forces acting on each grain. As β scales inversely with the grain size, binning the distribution in β is effectively binning it in size, without having to assume anything about the optical properties of the dust.
Simply clone the repository, change to the directory and install using the develop option.
```
python3 setup.py develop
```
The dependencies are numpy, matplotlib (though technically not necessary except for the example), and most importantly numba to speed things up a little. With numba you have the possibility to pre-compile parts of the code, which can significantly improve the runtime speed. In our case, since we are launching many particles and doing very simple math operations, this is quite fantastic.
Once you have cloned and installed the repository, you should therefore compile some of the code by running
```
python3 frame.py
```
This will create a file called 'frame.cpython-38-x86_64-linux-gnu.so' (the name will vary depending on your operating system and Python version). And that's it, you should be able to successfully run the following:
```
python3 betadisk.py
```
The code is pretty simple to use, you start by initializing the class as
```
from betadisk import BetaDisk
disk = BetaDisk()
```
which has the following parameters (all optional):
```
nx = 300            # [int] number of pixels for the images
nl = 10_000_000     # [int] number of particles to be launched
ng = 10             # [int] number of grain size intervals
pixscale = 0.01226  # [float] size of one pixel in arcsec
bmin = 0.001        # [float] minimum value for beta
bmax = 0.49         # [float] maximum value for beta
nb = 50             # [int] number of bins for the phase function (see later)
slope = 0.5         # [float] "Correction" for the size distribution (see next paragraph)
dx = 0.             # [float] possibility to have an offset in the x direction
dy = 0.             # [float] and in the y direction
```
Those should be enough (and mostly self-explanatory) to produce scattered light images. The slope parameter is not the slope of the size distribution: a power-law slope in grain size translates into a different power-law slope in β (since β scales inversely with the grain size), and the slope parameter is there to correct for this when weighting the ng intervals.
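To illustrate what happens in β space, here is a minimal sketch of the reasoning (mine, not the actual binning scheme of BetaDisk; the power-law form and the value of q are assumptions): for a size distribution dn(s) ∝ s^(-q) ds and β ∝ 1/s, the number of particles per β interval scales as β^(q-2) dβ.
```python
import numpy as np

# Illustrative sketch only, not the internals of BetaDisk:
# with dn(s) ~ s**(-q) ds and beta ~ 1/s, one gets dn(beta) ~ beta**(q - 2) dbeta
q = 3.5                                  # classical collisional cascade slope (assumption)
bmin, bmax, ng = 0.001, 0.49, 10
edges = np.logspace(np.log10(bmin), np.log10(bmax), num = ng + 1)
# Relative number of particles in each of the ng beta intervals
weights = edges[1:]**(q - 1.) - edges[:-1]**(q - 1.)
weights /= weights.sum()
```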
If you want to model thermal images, for instance for ALMA observations, you will need to provide the following additional parameters:
```
thermal = False  # [boolean] False by default, need to switch it to True
lstar = None     # [float] to compute the temperature we need a stellar luminosity
dpc = None       # [float] we will also need a distance in pc
wave = None      # [float] and we need to provide a wavelength in microns
```
For thermal images, we need to provide the luminosity because the temperature of the dust grains is estimated by inverting the following equation (Wyatt 2008):
```
r = (278.3 / Tdust)**2 * Lstar**0.5
```
where r will be in au. Since all the distances in the code are in arcseconds, we also need the distance in pc for the conversion.
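As an illustration, here is what this relation gives once inverted for Tdust (the values for lstar, dpc, and the separation are made up, and I am assuming the thermal parameters are passed at initialization like the others above):
```python
import numpy as np
from betadisk import BetaDisk

lstar, dpc, wave = 8.7, 50.0, 880.0      # example values: Lsun, pc, microns
disk = BetaDisk(thermal = True, lstar = lstar, dpc = dpc, wave = wave)

# Inverting r = (278.3 / Tdust)**2 * Lstar**0.5 for a grain at 1.5 arcsec
r_au = 1.5 * dpc                         # 1 arcsec at dpc pc corresponds to dpc au
tdust = 278.3 * lstar**0.25 / np.sqrt(r_au)
print(f'Tdust ~ {tdust:.1f} K at {r_au:.0f} au')   # ~55 K
```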
Afterwards, you can call the main method
```
disk.compute_model(a = 1.5)
```
which has the following parameters:
```
a = 1.5             # [float] the reference radius of the disk, in arcsec
dr = 0.25           # [float] the standard deviation for the width of the main belt (normal profile)
incl = 45.          # [float] inclination of the disk, in degrees
pa = -110.          # [float] position angle of the disk, in degrees
opang = 0.05        # [float] opening angle of the disk
pfunc = np.ones(nb) # [np.array] array containing the phase function
is_hg = True        # [boolean] should we use the HG approximation
ghg = 0.            # [float or np.array] value for the asymmetry parameter for the HG phase function
dpi = False         # [boolean] if is_hg is True, we can also model polarimetric observations
```
The "traditional" way to deal with the phase function, either in scattered or polarized light observations, would be to use the Henyey-Greenstein (HG) approximation, which is parametrized by a single value, the asymmetry parameter g. Alternatively, the user can provide a tabulated phase function, sampled at nb scattering angles.
The result of this second approach is that we have two arrays: one for the scattering angle and one for the phase function. In compute_model, if one does not pass the pfunc parameter, the array is filled with nb ones (an isotropic phase function); otherwise, the user can provide the pfunc array to the method so that it is used when computing the images. In both cases, the array for the scattering angle is automatically created based on the length of the phase function array, with scattering angles between 0 and 180 degrees.
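For instance, one could pass a tabulated phase function as follows (the shape of the phase function below is arbitrary, purely for illustration):
```python
import numpy as np
from betadisk import BetaDisk

nb = 50
# The scattering angle array is created internally from the length of pfunc,
# spanning 0 to 180 degrees; we only tabulate the phase function itself
theta = np.linspace(0., 180., num = nb)
pfunc = 1. + np.cos(np.radians(theta))**2   # arbitrary example shape
disk = BetaDisk(nb = nb)
disk.compute_model(a = 1.5, pfunc = pfunc, is_hg = False)
```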
There is also the possibility to use the HG approximation by setting the boolean is_hg to True and providing a value for ghg. The latter value can be a float or a np.array with a length equal to ng. If only a single value is provided, all the different ng images will have the same phase function, but if an array is passed, then all the different images will use different phase functions (see the example below). If is_hg is True you can also set dpi to True to include the Rayleigh scattering term to mimic polarimetric observations.
As a last remark on the phase function: if is_hg is set to True, it takes precedence over the values passed in pfunc. Therefore, if you want to use the tabulated approach, make sure that is_hg = False (the default value).
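For reference, the HG phase function has the standard form sketched below; my understanding is that dpi = True further modulates it by a Rayleigh-like polarized fraction, so an illustrative version (mine, not necessarily the exact implementation in the package) would be:
```python
import numpy as np

def hg(theta_deg, g):
    """Standard Henyey-Greenstein phase function."""
    cth = np.cos(np.radians(theta_deg))
    return (1. - g**2) / (4. * np.pi * (1. + g**2 - 2. * g * cth)**1.5)

def hg_pol(theta_deg, g):
    """HG times a Rayleigh-like polarized fraction (assumed behavior of dpi = True)."""
    cth = np.cos(np.radians(theta_deg))
    return hg(theta_deg, g) * (1. - cth**2) / (1. + cth**2)
```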
In the example method you can find a minimal working example, quite similar to the following:
```
import numpy as np
from betadisk import BetaDisk

nx, ng = 1_000, 12
ghg = np.linspace(0.9, 0.5, num = ng)
disk = BetaDisk(nx = nx, ng = ng)
disk.compute_model(a = 1.5, dr = 0.030, incl = 40.0, pa = -120.0, opang = 0.05, \
                   is_hg = True, ghg = ghg)
print(np.shape(disk.model))
# Should return something like (12, 1000, 1000)
```
Note that I did not really notice a decrease in performance when increasing the size of the images. The positions of all the particles have to be computed in any case; there is only a check towards the end to see whether they are in the field of view or not. So it should be only slightly slower with nx = 1024 than with nx = 200. This might be useful if you want to model ALMA observations and need to sample your Fourier transform with high enough spatial resolution.
If you were to use this code in your research, I would appreciate it if you could cite this paper (Olofsson et al. 2022).
