Clarification of Computing Likelihood? #4

@RylanSchaeffer

Description

I have two questions about this line of code:

https://github.com/csmfindling/behavior_models/blob/master/models/expSmoothing_prevAction.py#L52

  1. If I'm understanding correctly, the goal is to compute a likelihood to combine with the prior formed from the exponentially smoothed history of actions. The likelihood that is needed is specifically p(observation | stimulus side = right), so that we end up with a posterior p(stimulus side = right | observations). However, this code seems to compute p(observation < 0 | signed stimulus contrast strength). Are the two distributions equivalent? I would think not, but if they aren't interchangeable, why is p(observation < 0 | signed stimulus contrast strength) the correct likelihood?

I would think that if the model assumes the mouse knows the true signed stimulus contrast strengths and their variances, then the mouse should compute \sum_{signed stimulus contrast strength} p(o | signed stimulus contrast strength) p(signed stimulus contrast strength | stimulus side = right)
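To make the marginalization I have in mind concrete, here is a minimal sketch. All of the names, the contrast levels, the noise scale, and the uniform contrast prior are my own assumptions for illustration, not taken from the repo:

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(x, mu, sigma):
    """CDF of a Gaussian observation model: p(observation < x | mean mu)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical signed contrast levels and perceptual noise (my assumptions).
contrasts = np.array([-1.0, -0.25, -0.0625, 0.0625, 0.25, 1.0])
sigma = 0.3

# p(signed contrast | stimulus side = right): uniform over positive
# contrasts, purely for illustration.
p_c_given_right = np.where(contrasts > 0, 1.0, 0.0)
p_c_given_right /= p_c_given_right.sum()

# p(observation < 0 | signed contrast) for each contrast level.
p_obs_neg_given_c = np.array([gauss_cdf(0.0, c, sigma) for c in contrasts])

# Marginalized likelihood p(observation < 0 | stimulus side = right),
# i.e. the sum written above.
lik_right = np.sum(p_obs_neg_given_c * p_c_given_right)
```

That is, the per-contrast likelihood gets averaged under p(contrast | side), rather than conditioning on a single contrast directly.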

  2. Why don't the minimum and maximum truncations introduce truncation errors?
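To clarify what I mean by truncation errors, here is the kind of clipping I have in mind (the epsilon value and array are hypothetical, just to illustrate the concern):

```python
import numpy as np

# Bounding probabilities away from 0 and 1 before taking logs.
eps = 1e-7
p = np.array([0.0, 1e-9, 0.5, 1.0 - 1e-9, 1.0])
p_clipped = np.clip(p, eps, 1.0 - eps)

# The clip keeps log(p) finite, but any true probability below eps is
# reported as exactly eps -- that discrepancy is the truncation error
# I'm asking about.
log_lik = np.log(p_clipped)
```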
