function to do permutation test added in analysis #536
base: master
Conversation
| """ | ||
| Conducts a permutation test to compute the p-value for the t-test statistic. | ||
|
|
||
| Parameters: |
Suggested change: use Args: here instead of Parameters:
| Parameters:
|     sample1 (array-like): distribution 1.
|     sample2 (array-like): distribution 2.
|     statistic (function): The function to compute the test statistic.
Maybe call this statistic_fn to be more consistent with other function arguments in aopy.
|     num_permutations (int): The number of permutations to perform.
|
| Returns:
|     p_value (float): The p-value for the permutation test.
Can you be more descriptive about the p-value? Is it the probability of the null distribution being above the original sample statistic?
| # Perform permutation test using ttest_ind from scipy.stats directly
| p_value = aopy.analysis.permutation_test(sample1, sample2, ttest_ind, num_permutations)
| self.assertAlmostEqual(p_value, 0.997)
I'm a bit confused by this; I expected a small p-value if the samples are from different distributions.
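One likely explanation (a hypothetical illustration with made-up data, not the PR's test fixtures): with the upper-tail comparison perm_stats > obs_stat, a strongly negative observed t-statistic sits below almost the entire null distribution, so the p-value comes out close to 1 even when the samples clearly differ.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
sample1 = rng.normal(0.0, 1.0, 100)   # lower mean
sample2 = rng.normal(1.0, 1.0, 100)   # higher mean

# Observed t-statistic is strongly negative when sample1 < sample2
obs_stat = ttest_ind(sample1, sample2)[0]

pooled = np.concatenate([sample1, sample2])
perm_stats = []
for _ in range(1000):
    perm = rng.permutation(pooled)
    perm_stats.append(ttest_ind(perm[:100], perm[100:])[0])
perm_stats = np.asarray(perm_stats)

# Upper-tail proportion: nearly every permuted statistic exceeds a large
# negative observed statistic, so this prints a value close to 1
print(np.sum(perm_stats > obs_stat) / 1000)
```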
Thanks for adding this! It seems like a more intuitive and simplified version of the scipy function https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.permutation_test.html. Maybe you could point to that function in the docstring in case folks need more control over the permutation test.
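For reference, a minimal sketch of the scipy function mentioned above (placeholder data and a mean-difference statistic, not aopy's t-test setup):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample1 = rng.normal(0.0, 1.0, 50)
sample2 = rng.normal(0.5, 1.0, 50)

def mean_diff(x, y, axis):
    # Difference of means; vectorized over the resampling axis
    return np.mean(x, axis=axis) - np.mean(y, axis=axis)

res = stats.permutation_test((sample1, sample2), mean_diff, vectorized=True,
                             n_resamples=10000, alternative='two-sided')
print(res.statistic)            # observed statistic
print(res.pvalue)               # two-sided permutation p-value
print(res.null_distribution)    # all permuted statistics
```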
| # that are more extreme than the observed statistic
| p_value = np.sum(perm_stats > obs_stat) / num_permutations
|
| return p_value
I also want the null distribution and the observed statistic returned.
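One possible shape for that return (a sketch only; the helper name and three-value return are assumptions, not the PR's code):

```python
import numpy as np

def upper_tail_pvalue(perm_stats, obs_stat):
    # Hypothetical helper: return the p-value together with the null
    # distribution (perm_stats) and the observed statistic (obs_stat)
    perm_stats = np.asarray(perm_stats)
    p_value = np.sum(perm_stats > obs_stat) / perm_stats.size
    return p_value, perm_stats, obs_stat
```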
| # Calculate the p-value as the proportion of permutation test statistics
| # that are more extreme than the observed statistic
| p_value = np.sum(perm_stats > obs_stat) / num_permutations
Is it possible to add a two-sided test as well?
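A two-sided p-value could compare absolute values; a small sketch assuming perm_stats and obs_stat as in the diff above (the helper name is hypothetical):

```python
import numpy as np

def two_sided_pvalue(perm_stats, obs_stat):
    # Hypothetical two-sided variant: proportion of permuted statistics at
    # least as extreme (in absolute value) as the observed statistic
    perm_stats = np.asarray(perm_stats)
    return np.mean(np.abs(perm_stats) >= np.abs(obs_stat))
```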
Added a function to perform a permutation test. The current code takes any statistics function as input, as long as the first value returned by the statistics function is the statistic.
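Based on that description, a minimal sketch of what such a function might look like (an assumption about the implementation, not the merged code; the statistic_fn name follows the review suggestion above):

```python
import numpy as np

def permutation_test(sample1, sample2, statistic_fn, num_permutations):
    """Sketch: shuffle the pooled samples, recompute the statistic on each
    split, and report the upper-tail p-value.

    statistic_fn can be any callable (e.g. scipy.stats.ttest_ind) whose
    first return value is the test statistic.
    """
    sample1, sample2 = np.asarray(sample1), np.asarray(sample2)
    obs_stat = statistic_fn(sample1, sample2)[0]

    pooled = np.concatenate([sample1, sample2])
    n1 = sample1.size
    rng = np.random.default_rng()

    perm_stats = np.empty(num_permutations)
    for i in range(num_permutations):
        perm = rng.permutation(pooled)
        perm_stats[i] = statistic_fn(perm[:n1], perm[n1:])[0]

    # Proportion of permuted statistics exceeding the observed statistic
    p_value = np.sum(perm_stats > obs_stat) / num_permutations
    return p_value
```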