Releases: JPK314/rlgym-learn

v1.0.5 (01 Jul 12:21, f8309bd)

Fix rlgym_learn.pyi not working on some systems where typeshed types are not patched in at runtime

v1.0.4 (29 May 02:29, e3fb39e)

  • Add Python 3.13 support
  • Remove the "device" parameter from the learning coordinator config model (it should be decided by each agent controller's config)
  • Update example scripts for rlgym-learn-algos 0.2.0

v1.0.3 (07 May 06:37, cc5682a)

  • Update and correct bindings
  • Add a method for instantiating a timestep outside of Rust
  • Rename the config model parameter in the generate_config method
  • Fix state setting
  • Update tests and quick start guide to use the correct kwargs
  • Minor error message improvements

v1.0.2 (26 Apr 22:25, 5249567)

  • Fix hard-coded RLViser integration in the env process
  • Update pyo3 and numpy versions
  • Fix examples

v1.0.1 (29 Mar 20:56, 22e42dc)

Update quick_start_guide.py and fix the .pyi bindings for the numpy serde type

v1.0.0 (25 Mar 12:49, 2ea8932)

First major release

v0.1.8 (24 Mar 04:08, d628359)

  • Refactor the PPO implementation into rlgym-learn-algos
  • Implement Rocket League helper functions and classes
  • Improve flexibility of state info and timestep data: an addition to EnvActionResponse lets you choose when to use the state serde to receive the state from the given env process
  • Slight QOL changes

v0.1.7 (08 Mar 23:37, aa94013)

Various optimizations

v0.1.6 (02 Mar 19:53, e422ba6)

  • Switch from state metrics to shared info
  • Allow setting shared info keys as part of env actions
  • Fix issues reported on GitHub

v0.1.5 (27 Feb 09:44, 07a6be4)

Update pyany-serde to support Linux