mattperls-code/probing-rank-llama
Probing Rank LLaMa

An exploration of how classical IR feature signals arise in neural reranking models.

R Squared Progressions for Several Classical IR Features

We can begin to understand which classical IR features the learned reranking model borrows from by analyzing how different neurons correlate with a given feature. Specifically, we try to predict the feature's value from the outputs of some set of neurons; a successful prediction demonstrates that those neurons carry a signal correlated with that feature.

In this particular case, the prediction is a linear combination of a layer's activations (a linear probe). A high R² indicates that a signal similar to the feature is preserved, and potentially used, by the network at that layer.
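The layer-wise probing described above can be sketched as follows. This is a minimal illustration with synthetic data, not the repository's actual code: the activation tensors, the injected signal, and the stand-in feature are all hypothetical, and in practice `activations` would hold hidden states extracted from the reranker while `feature` would be a classical IR score (e.g. BM25) computed for the same query–document pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activations[l] holds the layer-l hidden states for a
# batch of query-document pairs; `feature` stands in for a classical IR
# signal (e.g. BM25) computed for the same pairs.
n_pairs, hidden_dim, n_layers = 200, 64, 4
feature = rng.normal(size=n_pairs)
activations = [rng.normal(size=(n_pairs, hidden_dim)) for _ in range(n_layers)]
# Inject a correlated signal into the later layers so the probe has
# something to find (in the real setting this comes from the model itself).
for l in range(2, n_layers):
    activations[l][:, 0] += 3.0 * feature

def probe_r2(acts, target):
    """Fit a linear probe (least squares with a bias term) and return R^2."""
    X = np.hstack([acts, np.ones((len(acts), 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    pred = X @ coef
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# The R^2 progression across layers shows where the feature's signal appears.
r2_by_layer = [probe_r2(acts, feature) for acts in activations]
for l, r2 in enumerate(r2_by_layer):
    print(f"layer {l}: R^2 = {r2:.3f}")
```

In this synthetic run the probe's R² jumps at the layers where the correlated signal was injected, which is the kind of progression the plots above visualize for real features.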

Mapping specific neurons to features may eventually help us understand how LLMs make decisions, speed up evaluation, reduce model size, and adjust high-level behavior.

About

Detecting Classical IR Features In Neural Reranking Models
