RH November 2023 code review #2
Hi Dina,
This review primarily covers general usability and the instructions; I have not done a thorough review of the analysis code beyond following the instructions, but have sense-checked it throughout.
Overall, the layout, usability and reactivity are all good and fit for purpose, and the comments below are mainly constructive, with only a few (subjective) points, which I have listed below. I tried to break the model with nonsensical values and was not able to; the validation throughout is sensible. The instructions for amending make sense, and I like that the page layouts can be amended in Word; I feel the instructions are suitable for an audience inexperienced in R, and they are well written. The only changes I have made to this branch are spelling fixes and a single change to an input value where the min was not set - see the commits for this. I have not made any other changes, as the need for them may be subjective.
Review comments
In the 'chip and bin' method, only integers can be used. This causes a problem when non-integer values such as 'relative risk' are to be elicited. There should either be an option to reduce box sizes to 0.1 or 0.5 when the difference between the upper and lower thresholds is less than a certain value, or a statement that values requiring decimals are not suitable for this method.
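To illustrate the kind of rule I mean, a minimal sketch (function name and thresholds are hypothetical, not from the app) that picks a finer bin width when the elicited range is narrow:

```r
# Sketch only: choose a bin width so that a narrow range (e.g. a relative
# risk elicited between 0 and 4) still gives a usable number of bins.
# The widths and the cap of ~20 bins are illustrative assumptions.
choose_bin_width <- function(lower, upper, max_bins = 20) {
  range  <- upper - lower
  widths <- c(0.1, 0.5, 1)           # candidate widths, finest first
  for (w in widths) {
    if (range / w <= max_bins) return(w)   # finest width that fits
  }
  widths[length(widths)]             # fall back to integer bins
}
```

Something like this would keep integer bins for wide ranges while allowing decimals where the thresholds are close together.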

Another concern, in the above image and in the one below, is that if the min is set to 0 and the max is set to 4, the chips and bins method pads the graph so that the min/max range that can be entered is -1 to 5. These values can be filled in and used, and this should not be allowed.

The consent form defaults should be amended to state that by using this model you agree to the terms. At the moment there is a question, but there is no way for a user to refuse; and if they do refuse while conditional_release is set to FALSE, the app can be used anyway, and the consent (or lack of it) is not recorded anywhere.
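As a rough sketch of what I have in mind (all input IDs and the file path here are hypothetical, not the app's actual names): gate the rest of the app on consent and write the answer to disk alongside the responses.

```r
# Sketch only: record the consent decision and block use without it.
observeEvent(input$consent_submit, {
  if (isTRUE(input$consent_agree)) {
    rv$consented <- TRUE   # downstream UI renders only when TRUE
    # persist the consent record so it is not lost
    saveRDS(
      list(user = input$user_id, consented = TRUE, time = Sys.time()),
      file.path("analysis_files", paste0(input$user_id, "_consent.rds"))
    )
  } else {
    showModal(modalDialog(
      "You must agree to the terms to use this app.", easyClose = TRUE
    ))
  }
})
```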
When any of the questions is saved, the tabs refresh and the user finds themselves back on the question 1 tab (e.g. a user fills in Q3, presses save, and then lands back on Q1 instead of Q3) - this is confusing. I would recommend using reactiveValues() to record the tab that the tabbox should re-render on.
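Something along these lines (the input IDs "question_tabs" and "save" are placeholders for whatever the app actually uses):

```r
# Sketch only: remember the active tab so that saving re-selects the
# question the user was on, rather than dropping them back to Q1.
rv <- reactiveValues(active_tab = "q1")

observeEvent(input$question_tabs, {
  rv$active_tab <- input$question_tabs   # track the currently selected tab
})

observeEvent(input$save, {
  # ... existing save logic ...
  updateTabsetPanel(session, "question_tabs", selected = rv$active_tab)
})
```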
I would recommend adding a new tab or a pop-up once the user has finished the last question, thanking them and reminding them to send their responses.
There is a mention of being able to send to Dropbox, and I noticed instructions in one of the scripts. When we last looked at this, Dropbox was not compatible with shinyapps.io. Can you confirm that the instructions provided do work with shinyapps.io? Otherwise there needs to be a note stating this, or the mention of Dropbox should be removed.
If all of the chips are in a single bin then the maximum probability is 98%, not 100%. Obviously this is a silly example, but some validation should be used here.

In the analysis_code.R script: firstly, it is a small thing, but there is no default “analysis_files/experts_responses” file, so this should be created or users encouraged to make their own. Secondly, the code looks fine, but there are issues with the names the exported files are assumed to have. For instance, I clicked 'save' a few times throughout, and my test files were named as in the image below; named this way, the code could not correctly extract the user ID or the question number. There needs to be guidance on how these files must be named before running the aggregation and analysis code.
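For example, if the documented convention were "&lt;userID&gt;_question&lt;k&gt;.csv" (a hypothetical convention - the actual one needs to be stated in the guidance), the extraction could be made robust to stray files like this:

```r
# Sketch only: parse user ID and question number from a response filename,
# assuming a documented "<userID>_question<k>.csv" convention (hypothetical).
parse_response_file <- function(fname) {
  m <- regmatches(fname, regexec("^(.+)_question(\\d+)\\.csv$", fname))[[1]]
  if (length(m) == 0) return(NULL)   # name does not follow the convention
  list(user_id = m[2], question = as.integer(m[3]))
}

files <- list.files("analysis_files/experts_responses", pattern = "\\.csv$")
meta  <- Filter(Negate(is.null), lapply(files, parse_response_file))
```

Files that do not match the convention are skipped rather than silently mis-parsed, which would also make the repeated-save problem visible.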
