Merged
42 changes: 42 additions & 0 deletions .github/workflows/FloatTypes.yml
@@ -0,0 +1,42 @@
name: Float type promotion

on:
push:
branches:
- main
pull_request:

# needed to allow julia-actions/cache to delete old caches that it has created
permissions:
actions: write
contents: read

# Cancel existing tests on the same PR if a new commit is added to a pull request
concurrency:
group: ${{ github.workflow }}-${{ github.ref || github.run_id }}
cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}

jobs:
floattypes:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6

- uses: julia-actions/setup-julia@v2
with:
version: "1"

- uses: julia-actions/cache@v2

- name: Run float type tests
working-directory: test/floattypes
run: |
julia --project=. --color=yes -e 'using Pkg; Pkg.instantiate()'
julia --project=. --color=yes main.jl setup f64
julia --project=. --color=yes main.jl run f64
julia --project=. --color=yes main.jl setup f32
julia --project=. --color=yes main.jl run f32
julia --project=. --color=yes main.jl setup f16
julia --project=. --color=yes main.jl run f16
julia --project=. --color=yes main.jl setup min
julia --project=. --color=yes main.jl run min
1 change: 1 addition & 0 deletions .gitignore
@@ -6,3 +6,4 @@ Manifest.toml
**.~undo-tree~

benchmarks/*.json
LocalPreferences.toml
14 changes: 14 additions & 0 deletions HISTORY.md
@@ -1,3 +1,17 @@
# 0.40.15

DynamicPPL now allows you to set the type that log-probabilities are initialised with, using the `DynamicPPL.set_logprob_type!` function.
This records a compile-time preference, so you will need to restart Julia for the change to take effect.

This allows model log-probability accumulation to work with different numerical precisions.
For example, if your model is defined using distributions that are parameterised by `Float32` only (and you avoid promoting them to `Float64` elsewhere in the model), then after calling `DynamicPPL.set_logprob_type!(Float32)` the resulting log-probabilities will also be `Float32`.
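As a sketch of the intended workflow (the model here is illustrative, not taken from this changelog):

```julia
using DynamicPPL, Distributions

# Record the preference (a restart of Julia is required afterwards).
DynamicPPL.set_logprob_type!(Float32)

# After restarting, a model parameterised entirely by Float32 will
# accumulate Float32 log-probabilities.
@model function f()
    x ~ Normal(0.0f0, 1.0f0)
end
```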

Previously, DynamicPPL would automatically choose a `Float64` log-probability, causing any lower-precision model to be promoted.

The function `DynamicPPL.get_input_vector_type(::LogDensityFunction)` is now exported; it lets you query the type of the parameter vector that a `LogDensityFunction` expects.

`DynamicPPL.typed_identity` is deprecated; please use `Bijectors.VectorBijectors.TypedIdentity()` instead (it does the same thing).

# 0.40.14

Fixed `check_model()` erroneously failing for models such as `x[1:2] .~ univariate_dist`.
4 changes: 3 additions & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "DynamicPPL"
uuid = "366bfd00-2699-11ea-058f-f148b4cae6d8"
version = "0.40.14"
version = "0.40.15"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -22,6 +22,7 @@ LogDensityProblems = "6fdf6af0-433a-55f7-b3ed-c6c6e0b8df7c"
MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
OrderedCollections = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
PrecompileTools = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
Preferences = "21216c6a-2e73-6563-6e65-726566657250"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
@@ -70,6 +71,7 @@ MarginalLogDensities = "0.4.3"
Mooncake = "0.4.147, 0.5"
OrderedCollections = "1"
PrecompileTools = "1.2.1"
Preferences = "1.5.2"
Printf = "1.10"
Random = "1.6"
ReverseDiff = "1"
15 changes: 9 additions & 6 deletions docs/src/api.md
@@ -72,6 +72,7 @@ The [LogDensityProblems.jl](https://github.com/tpapp/LogDensityProblems.jl) inte

```@docs
LogDensityFunction
get_input_vector_type
```

Internally, this is accomplished using [`init!!`](@ref) on:
@@ -193,12 +194,6 @@ DynamicPPL.prefix

## Utilities

`typed_identity` is the same as `identity`, but with an overload for `with_logabsdet_jacobian` that ensures that it never errors.

```@docs
typed_identity
```

It is possible to manually increase (or decrease) the accumulated log likelihood or prior from within a model function.

```@docs
@@ -396,6 +391,14 @@ DynamicPPL.split
DynamicPPL.combine
```

The float type used for accumulation of log-probabilities is defined by a compile-time preference:

```@docs
DynamicPPL.LogProbType
DynamicPPL.set_logprob_type!
DynamicPPL.NoLogProb
```

```@docs
VNTAccumulator
DoNotAccumulate
6 changes: 4 additions & 2 deletions src/DynamicPPL.jl
@@ -114,7 +114,6 @@ export AbstractVarInfo,
@model,
# Utilities
OrderedDict,
typed_identity,
# Model
Model,
getmissings,
@@ -129,6 +128,7 @@ export AbstractVarInfo,
LogDensityFunction,
OnlyAccsVarInfo,
to_vector_params,
get_input_vector_type,
# Leaf contexts
AbstractContext,
contextualize,
@@ -201,8 +201,10 @@ export AbstractVarInfo,
# Convenience macros
@addlogprob!,
check_model,
set_logprob_type!,
# Deprecated.
generated_quantities
generated_quantities,
typed_identity

# Reexport
using Distributions: loglikelihood
10 changes: 6 additions & 4 deletions src/accumulators/default.jl
@@ -17,13 +17,14 @@ types like LogPriorAccumulator, LogJacobianAccumulator, and LogLikelihoodAccumul
"""
abstract type LogProbAccumulator{T<:Real} <: AbstractAccumulator end

# The first of the below methods sets AccType{T}() = AccType(zero(T)) for any
# AccType <: LogProbAccumulator{T}. The second one sets LogProbType as the default eltype T
# when calling AccType().
"""
LogProbAccumulator{T}()

Create a new `LogProbAccumulator` accumulator with the log prior initialized to zero.
Create a new `LogProbAccumulator` with the log probability initialized to `zero(T)`.

LogProbAccumulator()

Create a new `LogProbAccumulator{DynamicPPL.LogProbType}` accumulator.
"""
(::Type{AccType})() where {T<:Real,AccType<:LogProbAccumulator{T}} = AccType(zero(T))
(::Type{AccType})() where {AccType<:LogProbAccumulator} = AccType{LogProbType}()
@@ -59,6 +60,7 @@ function combine(acc::LogProbAccumulator, acc2::LogProbAccumulator)
end

acclogp(acc::LogProbAccumulator, val) = basetypeof(acc)(logp(acc) + val)
acclogp(acc::LogProbAccumulator{NoLogProb}, val) = basetypeof(acc)(val)

function Base.convert(
::Type{AccType}, acc::LogProbAccumulator
2 changes: 1 addition & 1 deletion src/accumulators/vector_params.jl
@@ -17,7 +17,7 @@ initialisation strategy and collect the vectorised parameters corresponding to t
strategy.
"""
function VectorParamAccumulator(ldf::LogDensityFunction)
et = eltype(_get_input_vector_type(ldf))
et = eltype(get_input_vector_type(ldf))
dim = ldf._dim
vals = Vector{et}(undef, dim)
set_indices = falses(dim)
2 changes: 2 additions & 0 deletions src/deprecated.jl
@@ -1 +1,3 @@
@deprecate generated_quantities(model, params) returned(model, params)

typed_identity = Bijectors.VectorBijectors.TypedIdentity()
16 changes: 13 additions & 3 deletions src/logdensityfunction.jl
@@ -309,7 +309,17 @@ function _default_vnt(model::Model, transform_strategy::AbstractTransformStrateg
return getacc(oavi, Val(VECTORVAL_ACCNAME)).values
end

function _get_input_vector_type(::LogDensityFunction{M,A,L,G,R,P,X}) where {M,A,L,G,R,P,X}
"""
DynamicPPL.get_input_vector_type(::LogDensityFunction)

Get the type of the vector `x` that should be passed to `LogDensityProblems.logdensity(ldf,
x)`.

Note that if you pass a vector of a different type, it will be converted to the correct
type. This function, however, lets you determine upfront what kind of vector should be
passed in. It is also useful for checking, e.g., whether `Float32` or `Float64` parameters are expected.
"""
function get_input_vector_type(::LogDensityFunction{M,A,L,G,R,P,X}) where {M,A,L,G,R,P,X}
return X
end
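A hedged sketch of how this might be used (assuming `model` is an existing DynamicPPL model; aside from `get_input_vector_type`, the names follow the surrounding code):

```julia
ldf = LogDensityFunction(model)
# Query the expected input vector type, e.g. Vector{Float64} or Vector{Float32}.
VT = get_input_vector_type(ldf)
x = convert(VT, randn(LogDensityProblems.dimension(ldf)))
LogDensityProblems.logdensity(ldf, x)
```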

@@ -415,7 +425,7 @@ function LogDensityProblems.logdensity_and_gradient(
)
# `params` has to be converted to the same vector type that was used for AD preparation,
# otherwise the preparation will not be valid.
params = convert(_get_input_vector_type(ldf), params)
params = convert(get_input_vector_type(ldf), params)
return if _use_closure(ldf.adtype)
DI.value_and_gradient(
LogDensityAt(
@@ -586,7 +596,7 @@ that.
"""
function to_vector_params(vector_values::VarNamedTuple, ldf::LogDensityFunction)
return to_vector_params_inner(
vector_values, ldf._varname_ranges, eltype(_get_input_vector_type(ldf)), ldf._dim
vector_values, ldf._varname_ranges, eltype(get_input_vector_type(ldf)), ldf._dim
)
end

2 changes: 1 addition & 1 deletion src/transformed_values.jl
@@ -179,7 +179,7 @@ struct UntransformedValue{V} <: AbstractTransformedValue
end
Base.:(==)(tv1::UntransformedValue, tv2::UntransformedValue) = tv1.val == tv2.val
Base.isequal(tv1::UntransformedValue, tv2::UntransformedValue) = isequal(tv1.val, tv2.val)
get_transform(::UntransformedValue) = typed_identity
get_transform(::UntransformedValue) = Bijectors.VectorBijectors.TypedIdentity()
get_internal_value(tv::UntransformedValue) = tv.val
set_internal_value(::UntransformedValue, new_val) = UntransformedValue(new_val)

112 changes: 85 additions & 27 deletions src/utils.jl
@@ -2,48 +2,106 @@
# defined in other files.
function subset end

using Preferences: @load_preference, @set_preferences!

"""
The type for all log probability variables.
DynamicPPL.NoLogProb <: Real

Singleton type that represents the absence of a log probability value. This is used as the
default type parameter for `LogProbAccumulator` when no log probability value is needed, to
avoid defining a concrete type such as `Float64` that would cause unwanted type promotion
when accumulating log probabilities of other types (e.g., `Float32`).

This is Float64 on 64-bit systems and Float32 on 32-bit systems.
Adding any value to `NoLogProb()` returns that value unchanged. In other words, `NoLogProb` is a
true additive identity, and one that additionally preserves the type of the other operand.
"""
const LogProbType = float(Real)
struct NoLogProb <: Real end
Base.zero(::Type{NoLogProb}) = NoLogProb()
Base.convert(::Type{T}, ::NoLogProb) where {T<:Number} = zero(T)
Base.promote_rule(::Type{NoLogProb}, ::Type{T}) where {T<:Number} = T
Base.iszero(::NoLogProb) = true
Base.hash(::NoLogProb, h::UInt) = hash(0.0, h)
Base.:(+)(::NoLogProb, ::NoLogProb) = NoLogProb()
Base.:(-)(::NoLogProb, ::NoLogProb) = NoLogProb()
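A sketch of the additive-identity behaviour that these definitions, together with Julia's standard promotion machinery, should give:

```julia
# NoLogProb() + NoLogProb() stays NoLogProb(); mixing with a concrete
# float promotes via promote_rule and convert, so the zero is typed.
NoLogProb() + NoLogProb()  # NoLogProb()
NoLogProb() + 1.0f0        # 1.0f0 (Float32 preserved)
iszero(NoLogProb())        # true
```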

const FLOAT_TYPE_PREF_KEY = "floattype"

"""
typed_identity(x)
DynamicPPL.LogProbType

Identity function, but with an overload for `with_logabsdet_jacobian` to ensure
that it returns a sensible zero logjac.
The default type used for log-probabilities in DynamicPPL.jl. This is a compile-time constant
that can be set via [`set_logprob_type!`](@ref), which under the hood uses Preferences.jl.

The problem with plain old `identity` is that the default definition of
`with_logabsdet_jacobian` for `identity` returns `zero(eltype(x))`:
https://github.com/JuliaMath/ChangesOfVariables.jl/blob/d6a8115fc9b9419decbdb48e2c56ec9675b4c6a4/src/with_ladj.jl#L154
Note that this does not prevent computations within the model from promoting the
log-probability to a different type. In essence, `LogProbType` specifies the *lowest*
possible type that log-probabilities can be, and DynamicPPL promises to not insert any extra
operations that would cause this to be promoted to a higher type. However, DynamicPPL cannot
guard against user code inside models.

This is fine for most samples `x`, but if `eltype(x)` doesn't return a sensible type (e.g.
if it's `Any`), then using `identity` will error with `zero(Any)`. This can happen with,
for example, `ProductNamedTupleDistribution`:

```julia
julia> using Distributions; d = product_distribution((a = Normal(), b = LKJCholesky(3, 0.5)));

julia> eltype(rand(d))
Any
```

The same problem precludes us from eventually broadening the scope of DynamicPPL.jl to
support distributions with non-numeric samples.

For example, in:

```julia
@model f() = x ~ Normal(0.0, 1.0)
```

the log-probability of the model will always be promoted to `Float64`, regardless of the
value of `LogProbType`, because the logpdf of `Normal(0.0, 1.0)` is a `Float64`. On the
other hand, in:

```julia
@model f() = x ~ Normal(0.0f0, 1.0f0)
```

the log-probability of the model will be `Float32` if `LogProbType` is `Float32` or lower.
"""
const LogProbType = let
logp_pref = @load_preference(FLOAT_TYPE_PREF_KEY, "f64")
if logp_pref == "f64"
Float64
elseif logp_pref == "f32"
Float32
elseif logp_pref == "f16"
Float16
elseif logp_pref == "min"
NoLogProb
else
error("Unsupported log probability preference: $logp_pref")
end
end

Furthermore, in principle, the type of the log-probability should be separate from the type
of the sample. Thus, instead of using `zero(LogProbType)`, we should use the eltype of the
LogJacobianAccumulator. There's no easy way to thread that through here, but if a way to do
this is discovered, then `typed_identity` is what will allow us to obtain that custom
behaviour.
"""
function typed_identity end
@inline typed_identity(x) = x
@inline Bijectors.with_logabsdet_jacobian(::typeof(typed_identity), x) =
(x, zero(LogProbType))
@inline Bijectors.inverse(::typeof(typed_identity)) = typed_identity
set_logprob_type!(::Type{T}) where {T}

Set the log probability type for DynamicPPL.jl, [`DynamicPPL.LogProbType`](@ref), to `T`.
Permitted values are `Float64`, `Float32`, `Float16`, and `NoLogProb`. The default in
DynamicPPL is `Float64`.

`NoLogProb` is a special type that is the "lowest" possible float type. This means that the
log probability will be promoted to whatever type the model dictates. This is a totally
unintrusive option, which can be useful if you do not know in advance what log probability
type you are targeting, or want to troubleshoot a model to see what type the log probability
is being promoted to. However, this can also cause type stability issues and performance
degradations, so we generally recommend setting a specific log probability type if you know
what type you want to target.

This function uses Preferences.jl to set a compile-time constant, so you will need to
restart your Julia session for the change to take effect.
"""
function set_logprob_type!(::Type{T}) where {T}
new_pref = if T == Float64
"f64"
elseif T == Float32
"f32"
elseif T == Float16
"f16"
elseif T == NoLogProb
"min"
else
throw(ArgumentError("Unsupported log probability type: $T"))
end
@set_preferences!(FLOAT_TYPE_PREF_KEY => new_pref)
@info "DynamicPPL's log probability type has been set to $T.\nPlease note you will need to restart your Julia session for this change to take effect."
end

"""
@addlogprob!(ex)