LM Eval harness scores are bugged in large runs maybe? #860

Open
dlwh opened this issue Jan 21, 2025 · 0 comments

dlwh (Member) commented Jan 21, 2025

The Tootsie 9b runs' LM eval harness scores tanked to random chance this weekend, across the board. I'm pretty sure this is the fault of sequence packing (i.e. my fault).

[Image: eval harness scores over training, dropping to chance]

Perplexities are all still fine, so the timing points to sequence packing.

[Image: perplexity curves, unaffected]
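For context, a minimal hypothetical sketch (not the actual eval code here) of why packing can break per-example scoring while leaving aggregate perplexity intact: with packing, tokens from several eval examples share one sequence, so each token's log-prob has to be attributed to the right example via segment ids (and a packing-aware attention mask). If those ids or the mask are wrong, log-likelihoods mix across examples and the harness's argmax over answer choices degenerates to chance, while a plain mean over all tokens (perplexity) barely changes.

```python
# Hypothetical illustration, not this repo's implementation.
import numpy as np

def per_example_loglik(token_logprobs: np.ndarray,
                       segment_ids: np.ndarray,
                       num_examples: int) -> np.ndarray:
    """Sum token log-probs per packed example; segment id -1 marks padding."""
    totals = np.zeros(num_examples)
    for seg in range(num_examples):
        totals[seg] = token_logprobs[segment_ids == seg].sum()
    return totals

# Toy usage: two answer choices packed into one 8-token sequence.
logprobs = np.array([-0.2, -0.1, -0.3, -5.0, -4.0, -6.0, 0.0, 0.0])
segments = np.array([0, 0, 0, 1, 1, 1, -1, -1])
print(per_example_loglik(logprobs, segments, num_examples=2))
# -> [-0.6, -15.0]; with corrupted segment ids the comparison between
# choices is meaningless, so accuracy falls to chance even though the
# per-token average (perplexity) looks normal.
```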

@dlwh dlwh added the p1 label Jan 21, 2025
@dlwh dlwh self-assigned this Jan 21, 2025