v3.0.0a3
Pre-release
New alpha release! This release adds a large batch of changes: more HookedTransformer functionality has been ported over, and Bridge support has been added for several more architectures, giving more options in the new module. Together, these changes noticeably improve compatibility with existing HookedTransformer-based code.
What's Changed
- Setup deprecated hook aliases and got the majority of the main demo running properly by @bryce13950 in #976
- Linear test coverage by @bryce13950 in #977
- Create Bridge for every Gemma 3 module by @degenfabian in #966
- Add Bridges for every module in GPT2 by @degenfabian in #967
- Cache hook aliases & stop at layer by @bryce13950 in #978
- Create Bridges for every module in Bloom models by @degenfabian in #970
- Create Bridges for every module in Gemma 2 by @degenfabian in #971
- Create bridges for every module in Gemma 1 by @degenfabian in #972
- Create bridges for every module in Mistral by @degenfabian in #979
- Remove that output_attention flag defaults to true in boot function by @degenfabian in #982
- Create bridge for every module in GPT-J by @degenfabian in #974
- Create bridge for every module in Llama by @degenfabian in #975
Full Changelog: v3.0.0a2...v3.0.0a3