feat: add inverse trigonometric functions (asin, acos, atan) #3338

Open

AndrewTKent wants to merge 1 commit into huggingface:main from AndrewTKent:add-inverse-trig-ops

Conversation

@AndrewTKent

Summary

Adds asin, acos, and atan with full autodiff support across all backends.

Changes

| Layer | Files |
| --- | --- |
| Core API | tensor.rs, op.rs |
| Autodiff | backprop.rs |
| CPU | mkl.rs, accelerate.rs |
| CUDA | cuda_utils.cuh, unary.cu |
| Metal | unary.metal, unary.rs, metal_backend/mod.rs |
| Tests | tensor_tests.rs, grad_tests.rs |

Backend Coverage

| Backend | f32 | f64 | f16 | bf16 | fp8 |
| --- | --- | --- | --- | --- | --- |
| CPU | ✓ | ✓ | ✓ | ✓ | – |
| CUDA | ✓ | ✓ | ✓ | ✓ | ✓ |
| Metal | ✓ | – | ✓ | ✓ | – |
| MKL | ✓ | ✓ | – | – | – |
| Accelerate | ✓ | ✓ | – | – | – |
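With the PR applied, the same call should work across devices and dtypes. A minimal sketch of a dtype sweep (assuming candle's usual `Device`/`DType` API; the commented GPU constructors require the matching cargo features, and fp8 is CUDA-only per the table above):

```rust
use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Swap in Device::new_cuda(0)? or Device::new_metal(0)? to exercise
    // the GPU kernels instead of the CPU path.
    let dev = Device::Cpu;
    for dtype in [DType::F32, DType::F64, DType::F16, DType::BF16] {
        let t = Tensor::new(&[0.0f64, 0.5, -0.5], &dev)?.to_dtype(dtype)?;
        // Upcast before printing so the half types display as plain floats.
        let y = t.asin()?.to_dtype(DType::F64)?.to_vec1::<f64>()?;
        println!("{dtype:?}: {y:?}");
    }
    Ok(())
}
```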

Gradients

d/dx asin(x) = 1/√(1-x²)
d/dx acos(x) = -1/√(1-x²)  
d/dx atan(x) = 1/(1+x²)

Test values validated against PyTorch.
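With the branch checked out, the rules can also be sanity-checked end to end through autodiff (a sketch assuming candle's `Var`/`backward` API; at x = 0.5 the analytic value is 1/√(1 − 0.25) ≈ 1.1547):

```rust
use candle_core::{Device, Var};

fn main() -> candle_core::Result<()> {
    // d/dx asin(x) at x = 0.5 should be 1/sqrt(1 - 0.25) ≈ 1.1547.
    let x = Var::new(&[0.5f32], &Device::Cpu)?;
    let y = x.asin()?;
    let grads = y.backward()?;
    let dx = grads.get(&x).expect("gradient for x");
    let analytic = 1.0 / (1.0f32 - 0.5f32 * 0.5f32).sqrt();
    println!("autodiff: {:?}, analytic: {analytic}", dx.to_vec1::<f32>()?);
    Ok(())
}
```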

Usage

let t = Tensor::new(&[0.0f32, 0.5, -0.5], &Device::Cpu)?;
let asin = t.asin()?;  // [0.0, 0.5236, -0.5236]
let acos = t.acos()?;  // [1.5708, 1.0472, 2.0944]
let atan = t.atan()?;  // [0.0, 0.4636, -0.4636]

Closes #3270

Add asin, acos, and atan operations with full backend support:
- CPU via UnaryOpT trait with MKL/Accelerate vectorization
- CUDA kernels for f32, f64, f16, bf16, and fp8e4m3
- Metal shaders for f32, f16, bf16 (contiguous + strided)

Includes gradient computation for autodiff:
- d/dx asin(x) = 1/√(1-x²)
- d/dx acos(x) = -1/√(1-x²)
- d/dx atan(x) = 1/(1+x²)

Tests validated against PyTorch reference values.
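For reviewers unfamiliar with the CPU path, the shape of the change is roughly the following self-contained sketch. This is a simplified stand-in, not the actual patch: candle's real `UnaryOpT` also covers integer dtypes and carries GPU kernel names for dispatch, and the real ops are generated by macro.

```rust
// Simplified stand-in for candle's UnaryOpT pattern on the CPU path.
// Requires the `half` crate for the f16/bf16 types.
use half::{bf16, f16};

trait UnaryOpT {
    const NAME: &'static str;
    fn f32(v: f32) -> f32;
    fn f64(v: f64) -> f64;
    fn f16(v: f16) -> f16;
    fn bf16(v: bf16) -> bf16;
}

struct Asin;

impl UnaryOpT for Asin {
    const NAME: &'static str = "asin";
    fn f32(v: f32) -> f32 {
        v.asin()
    }
    fn f64(v: f64) -> f64 {
        v.asin()
    }
    // Half types round-trip through f32, since `half` has no native asin.
    fn f16(v: f16) -> f16 {
        f16::from_f32(v.to_f32().asin())
    }
    fn bf16(v: bf16) -> bf16 {
        bf16::from_f32(v.to_f32().asin())
    }
}

fn main() {
    // asin(0.5) = pi/6 ≈ 0.5236 in every dtype, up to rounding.
    println!("{}(0.5) = {}", Asin::NAME, Asin::f32(0.5));
    println!("{}(0.5) = {}", Asin::NAME, Asin::f16(f16::from_f32(0.5)));
}
```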