
Optimizations inspired by ssqueezepy #69

@cboulay

Description

ssqueezepy has a bunch of optimizations around FFT and CWT.

  • numba jit
  • fftw if available
  • optional parallel processing for fft
  • torch

jit

I tried this in the adaptive scaler and it was slower than my existing implementation. However, it's possible I was using something numba couldn't handle and it silently fell back to object mode. I should try again with nopython=True (i.e. @njit) so numba-incompatible code raises an error instead of quietly degrading.
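A minimal sketch of what the strict-mode retry could look like; the `ewma_scale` function and its update rule below are hypothetical stand-ins for the actual adaptive scaler, not code from this repo.

```python
# Sketch: strict nopython mode via @njit, so incompatible code raises a
# TypingError instead of silently falling back to object mode.
# `ewma_scale` is a hypothetical stand-in for the real adaptive scaler.
import numpy as np
from numba import njit

@njit(cache=True)
def ewma_scale(data, mean, std, alpha):
    out = np.empty_like(data)
    for i in range(data.shape[0]):
        mean = alpha * data[i] + (1.0 - alpha) * mean
        std = alpha * abs(data[i] - mean) + (1.0 - alpha) * std
        out[i] = (data[i] - mean) / (std + 1e-12)
    return out

# First call compiles; time later calls against the current implementation.
x = np.random.randn(10_000)
_ = ewma_scale(x, 0.0, 1.0, 0.01)
```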

fftw

This should be an easy win. The first sample will take much longer to process while FFTW does its planning, but subsequent processing will be faster.
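A sketch of the "fftw if available" pattern, assuming pyfftw as the binding; the interface cache is what keeps calls after the first one cheap.

```python
# Sketch: prefer pyfftw's FFTW-backed FFT when installed, fall back to numpy.
# The first call on a given shape pays the planning cost; the interface
# cache reuses the plan for later same-shaped inputs.
import numpy as np

try:
    import pyfftw
    pyfftw.interfaces.cache.enable()

    def rfft(x):
        return pyfftw.interfaces.numpy_fft.rfft(x, planner_effort="FFTW_MEASURE")
except ImportError:
    from numpy.fft import rfft

x = np.random.randn(4096)
X = rfft(x)  # slow once per shape (planning), fast afterwards
```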

parallel

scipy.fft.fft(..., workers=N) -- does this actually help? Isn't the thread overhead more than the savings?
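Easiest to answer empirically; a quick benchmark sketch below, where the (channels, samples) block shape is a hypothetical example and should be swapped for our realistic chunk sizes, since for short blocks the thread overhead can indeed dominate.

```python
# Sketch: benchmark scipy.fft with and without worker threads on a
# hypothetical (channels, samples) chunk.
import numpy as np
from scipy import fft as sp_fft
from timeit import timeit

x = np.random.randn(64, 4096)

t_single = timeit(lambda: sp_fft.rfft(x, axis=-1, workers=1), number=200)
t_multi = timeit(lambda: sp_fft.rfft(x, axis=-1, workers=-1), number=200)
print(f"workers=1: {t_single:.3f}s   workers=-1: {t_multi:.3f}s")
```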

torch

We will need all the nodes in a standard pipeline to handle torch tensors before the overhead of moving data to the GPU/MPS is worth it.
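A sketch of what the on-device path could look like once every node accepts torch tensors; device selection and MPS coverage of torch.fft depend on the installed torch version, so treat this as an assumption-laden outline rather than a plan.

```python
# Sketch: keep data on-device end to end so the host<->device copy only
# happens once. MPS coverage of torch.fft ops varies by torch version,
# so the CPU fallback matters.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(64, 4096, device=device)  # hypothetical (channels, samples) block
X = torch.fft.rfft(x, dim=-1)             # stays on-device; no round-trip to numpy
```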
