Merged
9 changes: 9 additions & 0 deletions .gitignore
@@ -10,3 +10,12 @@ Cargo.lock
**/*.rs.bk

.vscode

.env/

*.egg-info
__pycache__
*.so

build/

7 changes: 4 additions & 3 deletions Cargo.toml
@@ -12,15 +12,16 @@ readme = "README.md"
repository = "https://github.com/nadavrot/arpfloat"

[dependencies]

pyo3 = { version = "0.24.1", optional = true }

[dev-dependencies]
criterion = "0.4"
criterion = "0.5"

[[bench]]
name = "main_benchmark"
harness = false

[features]
default = ["std"]
default = ["std", "python"]
std = []
python = ["pyo3", "std"]
56 changes: 54 additions & 2 deletions README.md
@@ -15,7 +15,8 @@ types can scale to hundreds of digits, and perform very accurate calculations.
In ARPFloat the rounding mode is part of the type system, and this defines
away a number of problems that show up when using fenv.h.

`no_std` environments are supported by disabling the `std` feature.
`python` bindings are supported by enabling the `python` feature.
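The two feature switches above correspond to ordinary Cargo invocations; a sketch (the exact commands are an assumption based on the feature names, not taken from the project's docs):

```shell
# Build for a no_std environment by dropping the default features:
cargo build --no-default-features

# Build with the Python bindings (pulls in pyo3, implies std):
cargo build --features python
```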

### Example
```rust
@@ -125,9 +126,60 @@ The program above will print this output:
....
```


The [examples](examples) directory contains a few programs that demonstrate the use of this library.

### Python Bindings

The library has Python bindings, which can be installed with `pip install -e .`:

```python
>>> from arpfloat import Float, Semantics, FP16, BF16, FP32, fp64, pi

>>> x = fp64(2.5).cast(FP16)
>>> y = fp64(1.5).cast(FP16)
>>> x + y
4.

>>> sem = Semantics(10, 10, "NearestTiesToEven")
>>> sem
Semantics { exponent: 10, precision: 10, mode: NearestTiesToEven }
>>> Float(sem, False, 0b1000000001, 0b1100101)
4.789062

>>> pi(FP32)
3.1415927
>>> pi(FP16)
3.140625
>>> pi(BF16)
3.140625
```
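The `Float(sem, sign, exponent, mantissa)` constructor above appears to take a biased exponent and a mantissa without the implicit leading bit; a plain-Python decode under that assumption reproduces the printed value. The helper below is illustrative only, not part of the arpfloat API:

```python
def decode(exp_bits, precision, sign, biased_exp, mantissa):
    """Decode a normal number: (-1)^s * (1 + m / 2^(p-1)) * 2^(E - bias).

    Assumes a normal encoding (not zero/subnormal/inf/NaN); sketch only.
    """
    bias = 2 ** (exp_bits - 1) - 1            # IEEE-style exponent bias
    significand = 1 + mantissa / 2 ** (precision - 1)
    return (-1) ** sign * significand * 2 ** (biased_exp - bias)

# The Semantics(10, 10, ...) example above: bias = 511, E = 513,
# significand = 1 + 101/512, so the value is (1 + 101/512) * 4.
print(decode(10, 10, 0, 0b1000000001, 0b1100101))  # 4.7890625
```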

Arpfloat allows you to experiment with new floating point formats. For example,
Nvidia's new [FP8](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html)
format can be defined as:

```python
import numpy as np
from arpfloat import FP32, fp64, Semantics, zero

# Create two random numpy arrays in the range [0,1)
A0 = np.random.rand(1000000)
A1 = np.random.rand(1000000)

# Calculate the numpy dot product of the two arrays
print("Using fp32 arithmetic : ", np.dot(A0, A1))

# Create the fp8 format (4 exponent bits, 3 mantissa bits + 1 implicit bit)
FP8 = Semantics(4, 3 + 1, "NearestTiesToEven")

# Convert the arrays to fp8
A0 = [fp64(x).cast(FP8) for x in A0]
A1 = [fp64(x).cast(FP8) for x in A1]

dot = sum([x.cast(FP32)*y.cast(FP32) for x, y in zip(A0, A1)])
print("Using fp8/fp32 arithmetic: ", dot)
```
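The precision loss in the FP8 cast can also be emulated without arpfloat: rounding each positive value to a 4-bit significand (1 implicit + 3 stored bits) shows what an E4M3-style cast does to a single number. The `quantize` helper below is a sketch that ignores zero, subnormals, exponent range, and the sign bit:

```python
import math

def quantize(x, mantissa_bits=3):
    """Round x > 0 to the nearest value with (mantissa_bits + 1) significand bits.

    Illustrative only: ignores zero, subnormals, overflow, and sign.
    """
    e = math.floor(math.log2(x))         # exponent of the leading bit
    step = 2.0 ** (e - mantissa_bits)    # spacing of representable values near x
    return round(x / step) * step

print(quantize(0.3))   # 0.3125 -- only 16 values per binade survive
```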

### Resources

There are excellent resources out there, some of which are referenced in the code:
61 changes: 61 additions & 0 deletions arpfloat/__init__.py
@@ -0,0 +1,61 @@
#!/usr/bin/env python3
"""
ARPFloat: Arbitrary Precision Floating-Point Library

This library provides arbitrary precision floating-point arithmetic with
configurable precision and rounding modes. It implements IEEE 754
semantics and supports standard arithmetic operations.

Examples:
>>> from arpfloat import Float, Semantics, FP16, FP32, fp64, pi
>>> x = fp64(2.5).cast(FP16)
>>> y = fp64(1.5).cast(FP16)
>>> x + y
4.

>>> sem = Semantics(10, 10, "Zero")
>>> sem
Semantics { exponent: 10, precision: 10, mode: Zero }
>>> Float(sem, False, 1, 13)
.0507

>>> pi(FP32)
3.1415927
>>> pi(FP16)
3.140625
>>> pi(BF16)
3.140625

Constants:
BF16, FP16, FP32, FP64, FP128, FP256: Standard floating-point formats
pi, e, ln2, zero: Mathematical constants
Float, Semantics: Classes for representing floating-point numbers and their semantics
i64, fp64: Constructors for creating Float objects from integers and floats
"""

from ._arpfloat import PyFloat as Float
from ._arpfloat import PySemantics as Semantics
from ._arpfloat import pi, e, ln2, zero, fma
from ._arpfloat import from_fp64 as fp64
from ._arpfloat import from_i64 as i64

# Add __radd__ method to Float class for sum() compatibility


def _float_radd(self, other):
if isinstance(other, (int, float)) and other == 0:
return self
return self.__add__(other)

Float.__radd__ = _float_radd

# Define standard floating-point types
# Parameters match IEEE 754 standard formats
BF16 = Semantics(8, 8, "NearestTiesToEven") # BFloat16
FP16 = Semantics(5, 11, "NearestTiesToEven") # Half precision
FP32 = Semantics(8, 24, "NearestTiesToEven") # Single precision
FP64 = Semantics(11, 53, "NearestTiesToEven") # Double precision
FP128 = Semantics(15, 113, "NearestTiesToEven") # Quadruple precision
FP256 = Semantics(19, 237, "NearestTiesToEven") # Octuple precision

version = "0.1.10"
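A note on the `__radd__` patch in `arpfloat/__init__.py` above: `sum()` starts its accumulation from the integer `0`, and `0 + x` falls back to `x.__radd__(0)` once `int.__add__` returns `NotImplemented`, so absorbing a zero left operand is what makes `sum()` work over a list of `Float`s. A minimal stand-in class (hypothetical, not part of arpfloat) shows the dispatch:

```python
class Box:
    """Minimal stand-in for Float, illustrating why __radd__ is needed."""

    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        return Box(self.v + other.v)

    def __radd__(self, other):
        # sum() begins with the int 0; absorb it so sum([Box, ...]) works.
        if isinstance(other, (int, float)) and other == 0:
            return self
        return self.__add__(other)

total = sum([Box(1), Box(2), Box(3)])
print(total.v)  # 6
```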
20 changes: 20 additions & 0 deletions examples/fma.py
@@ -0,0 +1,20 @@
import numpy as np
from arpfloat import FP32, fp64, Semantics, zero, fma

# Create two random numpy arrays in the range [0,1)
A0 = np.random.rand(1024)
A1 = np.random.rand(1024)

# Create the fp8 format (4 exponent bits, 3 mantissa bits + 1 implicit bit)
FP8 = Semantics(4, 3 + 1, "NearestTiesToEven")

# Convert the arrays to FP8
B0 = [fp64(x).cast(FP8) for x in A0]
B1 = [fp64(x).cast(FP8) for x in A1]

acc = zero(FP32)
for x, y in zip(B0, B1):
acc = fma(x.cast(FP32), y.cast(FP32), acc)

print("Using fp8/fp32 arithmetic: ", acc)
print("Using fp32 arithmetic : ", np.dot(A0, A1))
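Why route the accumulation in `examples/fma.py` through `fma`? A fused multiply-add rounds once, so it preserves low-order bits that a separate multiply-then-add destroys. The pure-Python demonstration below (using `fractions.Fraction` as the exact reference rather than arpfloat) constructs a case where the unfused result is exactly zero while the true value is 2**-60:

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -30          # exactly representable in float64
c = -(1.0 + 2.0 ** -29)       # also exact

# Unfused: a*a is exactly 1 + 2**-29 + 2**-60, but float64 rounds it
# to 1 + 2**-29, so the subsequent add cancels to exactly zero.
print(a * a + c)              # 0.0

# Exact arithmetic keeps the low-order term that a single fused
# multiply-add would have preserved.
exact = Fraction(a) * Fraction(a) + Fraction(c)
print(float(exact))           # 2**-60, about 8.67e-19
```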
29 changes: 29 additions & 0 deletions setup.py
@@ -0,0 +1,29 @@
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
name="arpfloat",
version="0.1.10", # Match the version in Cargo.toml
description="Arbitrary-precision floating point library",
author="Nadav Rotem",
author_email="nadav256@gmail.com",
url="https://github.com/nadavrot/arpfloat",
rust_extensions=[
RustExtension(
"arpfloat._arpfloat",
binding=Binding.PyO3,
debug=False,
features=["python"],
)
],
package_data={"arpfloat": ["py.typed"]},
packages=["arpfloat"],
zip_safe=False,
python_requires=">=3.6",
classifiers=[
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
],
)