This repository was archived by the owner on Oct 21, 2020. It is now read-only.

Probability math speedup? #4

@o-jasper

Description

find_height has to iterate the map, and then the roll often throws that computation away. Probabilities can get pretty low, even sub-1%.

Nodes are already chosen at random; we want to add to that that they're run in the right fractions. We could approximate the different probabilities with bins: each bin corresponding to probability p[k] has n[k] entries, so the overall frequency f[k] at which it occurs is f[k] = n[k]*p[k].

The idea: randomly pick locations and put them into some bin-list k as the top probability is calculated. Measure the n[k] of each bin by a simple count of encounters, likely as a running average, tuned to how quickly the area is expected to change. Then work through the list of each bin at the proper frequency.

Bins with high f[k] and low n[k] may run out, and then you go find more, likely encountering a lot of the other k instead. If a bin-list gets too long, dump the oldest entries (since we're essentially hoping the information isn't stale). Of course, if this happens a lot, the savings are lost. On the other hand, finding many entries to dump lowers the estimate of how many are out there, i.e. n[k] and thus f[k], via the moving average. I am not sure how much faster it would be in the end.
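The scheme above can be sketched roughly as follows. This is a minimal illustration, not code from this repository: the class and method names (`BinnedSampler`, `add`, `draw`) are hypothetical, and the bin probabilities `p[k]` are assumed to be supplied by the caller after its one expensive probability computation.

```python
import random
from collections import deque

class BinnedSampler:
    """Sketch: group locations into bins by their (already computed)
    probability, and draw from bin k at frequency f[k] = n[k] * p[k],
    so the expensive computation isn't repeated and thrown away by a
    failed roll. All names here are hypothetical."""

    def __init__(self, bin_probs, max_len=1000):
        self.bin_probs = bin_probs             # representative p[k] per bin
        self.bins = [deque() for _ in bin_probs]
        self.max_len = max_len                 # cap per bin-list

    def add(self, location, p):
        # Put the location in the bin whose p[k] is closest to p.
        k = min(range(len(self.bin_probs)),
                key=lambda i: abs(self.bin_probs[i] - p))
        self.bins[k].append(location)
        if len(self.bins[k]) > self.max_len:
            self.bins[k].popleft()             # dump the oldest (likely stale)

    def draw(self):
        # Pick a bin with weight f[k] = n[k] * p[k], then pop its oldest entry.
        weights = [len(b) * p for b, p in zip(self.bins, self.bin_probs)]
        total = sum(weights)
        if total == 0:
            return None                        # all bins empty: caller must refill
        r = random.random() * total
        for b, w in zip(self.bins, weights):
            if r < w:
                return b.popleft()
            r -= w
        return self.bins[-1].popleft()         # guard against float rounding
```

A running average of each bin's refill rate (to estimate n[k] when entries get dumped) is left out here for brevity.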

I might try implementing it later.
