diff --git a/README.md b/README.md
index 85ae1e6..a330670 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 # BatchNLPKernels.jl
 
-[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://klamike.github.io/BatchNLPKernels.jl/dev/)
-[![Build Status](https://github.com/klamike/BatchNLPKernels.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/klamike/BatchNLPKernels.jl/actions/workflows/CI.yml?query=branch%3Amain)
+[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://LearningToOptimize.github.io/BatchNLPKernels.jl/dev/)
+[![Build Status](https://github.com/LearningToOptimize/BatchNLPKernels.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/LearningToOptimize/BatchNLPKernels.jl/actions/workflows/CI.yml?query=branch%3Amain)
 [![Coverage](https://codecov.io/gh/LearningToOptimize/BatchNLPKernels.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/LearningToOptimize/BatchNLPKernels.jl)
 
 `BatchNLPKernels.jl` provides [`KernelAbstractions.jl`](https://github.com/JuliaGPU/KernelAbstractions.jl) kernels for evaluating problem data from a (parametric) [`ExaModel`](https://github.com/exanauts/ExaModels.jl) for batches of solutions (and parameters). Currently, the following functions (as well as their non-parametric variants) are exported: