
Conversation

@pavi-rajes (Collaborator)

Added a function to perform a permutation test. The current code takes any statistics function as input, as long as the first value returned by that function is the test statistic.
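
For context, a minimal sketch of what such a function might look like, reconstructed from the docstring and code snippets quoted below (argument names and the default number of permutations are assumptions, not the exact PR code):

import numpy as np

def permutation_test(sample1, sample2, statistic, num_permutations=1000):
    # Observed test statistic; only the first value returned by `statistic`
    # is used (e.g. scipy.stats.ttest_ind returns (statistic, pvalue))
    obs_stat = statistic(sample1, sample2)[0]

    combined = np.concatenate([np.asarray(sample1), np.asarray(sample2)])
    n1 = len(sample1)

    # Build the null distribution by shuffling the pooled data and re-splitting
    perm_stats = np.empty(num_permutations)
    for i in range(num_permutations):
        shuffled = np.random.permutation(combined)
        perm_stats[i] = statistic(shuffled[:n1], shuffled[n1:])[0]

    # One-sided p-value: proportion of permuted statistics above the observed one
    p_value = np.sum(perm_stats > obs_stat) / num_permutations
    return p_value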

"""
Conducts a permutation test to compute the p-value for the t-test statistic.

Parameters:
Collaborator

Args:

Parameters:
sample1 (array-like): distribution 1.
sample2 (array-like): distribution 2.
statistic (function): The function to compute the test statistic.
Collaborator

maybe call this statistic_fn to be more consistent with other function arguments in aopy

num_permutations (int): The number of permutations to perform.

Returns:
p_value (float): The p-value for the permutation test.
Collaborator

Can you be more descriptive about the p-value? Is it the probability of the null distribution being above the original sample statistic?


# Perform permutation test using ttest_ind from scipy.stats directly
p_value = aopy.analysis.permutation_test(sample1, sample2, ttest_ind, num_permutations)
self.assertAlmostEqual(p_value, 0.997)
Collaborator

I'm a bit confused by this; I expected a small p-value if the samples are from different distributions.

@leoscholl (Collaborator)

thanks for adding this! seems like a more intuitive and simplified version of the scipy function https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.permutation_test.html

maybe you could point to that function in the docstring in case folks need more control over the permutation test
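
For reference, a small example of that scipy function (illustrative only; the aopy wrapper's exact interface may differ):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample1 = rng.normal(loc=0.0, scale=1.0, size=50)
sample2 = rng.normal(loc=0.5, scale=1.0, size=50)

# Difference in means as the test statistic (any callable works)
def mean_diff(x, y, axis=-1):
    return np.mean(x, axis=axis) - np.mean(y, axis=axis)

res = stats.permutation_test((sample1, sample2), mean_diff,
                             permutation_type='independent',
                             n_resamples=1000, alternative='two-sided')
print(res.statistic)           # observed statistic
print(res.pvalue)              # two-sided p-value
print(res.null_distribution)   # full permutation null distribution

The result object also exposes the observed statistic and the full null distribution, which relates to the requests further down in this thread.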

# that are more extreme than the observed statistic
p_value = np.sum(perm_stats > obs_stat) / num_permutations

return p_value
Collaborator

I also want the null distribution and the observed statistic to be returned.


# Calculate the p-value as the proportion of permutation test statistics
# that are more extreme than the observed statistic
p_value = np.sum(perm_stats > obs_stat) / num_permutations
Collaborator

Is it possible to add a two-sided test as well?
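
One way a two-sided p-value could be computed from the same quantities (a sketch with a hypothetical helper name, not the PR's implementation):

import numpy as np

def two_sided_p_value(obs_stat, perm_stats):
    # Proportion of permuted statistics at least as extreme as the observed
    # statistic in absolute value (two-sided alternative)
    return np.mean(np.abs(np.asarray(perm_stats)) >= np.abs(obs_stat))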
