@yookoon Thank you for the interesting work and for the code repository. We are trying to support your proposed method in Lightning-UQ-Box, our UQ library for deep learning, and have a couple of questions about the regression framework.
Taking the regression case on the UCI datasets as an example: the MLP module has a logvar parameter that is a single scalar for the entire model, and during both training and testing the loss is computed with this homoscedastic logvar parameter here. Additionally, for sampling-based methods such as BNNs or your proposed density framework, N samples are drawn in this loop; however, only the model's mean predictions are averaged, so sampling the weights has no effect on the predictive uncertainty, which remains a single learned parameter for the entire model. This seems counterintuitive, since the point of BNNs is to model epistemic uncertainty, which should influence the overall predictive uncertainty obtained from the model. A minimal sketch of the setup as we understand it follows.
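To make our reading concrete, here is a small sketch (all names such as `HomoscedasticMLP` and `n_samples` are our own, hypothetical, not the repository's) of why a single global logvar makes the reported variance independent of both the input and the weight samples:

```python
import torch
import torch.nn as nn


class HomoscedasticMLP(nn.Module):
    """Toy model mirroring the setup we describe: a mean head plus a
    single global log-variance parameter shared across all inputs."""

    def __init__(self, in_dim: int = 1, hidden: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # one scalar log-variance for the entire model
        self.logvar = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = HomoscedasticMLP()
x = torch.randn(8, 1)

# MC prediction loop as we understand it: N forward passes (in the actual
# method, weights would be resampled on each pass), but only the mean
# predictions are averaged.
n_samples = 10
means = torch.stack([model(x) for _ in range(n_samples)])
pred_mean = means.mean(dim=0)

# The reported variance is just the fixed global logvar: it does not depend
# on the input x or on the sampled weights.
pred_var = model.logvar.exp().expand_as(pred_mean)
```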
In Figure 1 of your paper, you show results on a toy regression dataset with input-dependent uncertainty; however, as far as I can tell, the repository does not contain the code to generate this figure. In the paper, based on Equations 7-9, you state: "Consequently, predictive uncertainty will be high for test inputs that are improbable in the training density and low for those that are more probable, providing intuitive and reliable predictive uncertainty." However, I fail to see how that can be the case when a single logvar parameter serves as the predictive uncertainty. I was therefore wondering whether you could help me understand the notion of predictive uncertainty used in the regression case; the decomposition we would have expected is sketched below. Thanks in advance!
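For reference, what we would have expected is the standard moment-matched predictive variance for a mixture of $N$ sampled Gaussian predictors (this is the usual ensemble/BNN formula, not a claim about your Equations 7-9):

$$
\hat{\mu}(x) = \frac{1}{N}\sum_{n=1}^{N} \mu_{\theta_n}(x),
\qquad
\widehat{\mathrm{Var}}(x)
= \underbrace{\frac{1}{N}\sum_{n=1}^{N} \sigma^2_{\theta_n}(x)}_{\text{aleatoric}}
+ \underbrace{\frac{1}{N}\sum_{n=1}^{N} \bigl(\mu_{\theta_n}(x) - \hat{\mu}(x)\bigr)^2}_{\text{epistemic}}
$$

With a single global logvar, the aleatoric term reduces to the constant $\exp(\text{logvar})$, and it is only the epistemic term, the spread of the sampled means, that could vary with the input; as far as we can tell, that term is not included in the current code.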