
Did JEPA Learn Anything?


What I worked on

Trained a JEPA model on my environment and transition dataset. Now I need to validate that it has actually learned the environment dynamics. The feature set (n=4) and action space (n=3) are small, so here’s what I’ve landed on.

  1. Prediction Test
    • Run on a fresh set of transitions
    • Compare predicted next state to actual
    • Compute errors separately for each feature
    • Compare against two baselines: a “do nothing” predictor that returns the current state unchanged & a tiny supervised model trained directly on next-state prediction

Thinking: If it beats the do-nothing baseline and gets close to the supervised one, then it has learned something (rough sketch below)
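A rough sketch of how I picture this comparison, in Python with NumPy. The `predict_next_state` and `supervised_model` callables, the array shapes, and the per-feature MAE metric are placeholders, not the real pipeline:

```python
# Rough sketch of the prediction test; predict_next_state, supervised_model,
# and the array shapes are placeholders for whatever the real pipeline uses.
import numpy as np

def per_feature_mae(pred, actual):
    """Mean absolute error, computed separately for each of the 4 features."""
    return np.abs(pred - actual).mean(axis=0)

def run_prediction_test(states, actions, next_states,
                        predict_next_state, supervised_model):
    # states/next_states: (N, 4) float arrays, actions: (N,) int array
    jepa_pred = predict_next_state(states, actions)    # decoded JEPA prediction
    noop_pred = states                                 # "do nothing" baseline
    sup_input = np.concatenate([states, actions[:, None]], axis=1)
    sup_pred = supervised_model.predict(sup_input)     # tiny supervised baseline
    return {
        "jepa": per_feature_mae(jepa_pred, next_states),
        "do_nothing": per_feature_mae(noop_pred, next_states),
        "supervised": per_feature_mae(sup_pred, next_states),
    }
```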

  2. Action Table
    • For each action (noop, eat, forward) look at what the model predicts will change
    • Noop should decrease energy, eat should increase energy, and forward should increase x_pos

Thinking: If these don’t align, then the model isn’t using the action info correctly (sketch below)
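Something like the following could produce that table. The action ids, feature names, and `predict_next_state` wrapper are all assumed for illustration:

```python
# Hypothetical action table: mean predicted change per feature, per action.
import numpy as np

ACTIONS = {0: "noop", 1: "eat", 2: "forward"}          # assumed action ids
FEATURES = ["energy", "x_pos", "feat_2", "feat_3"]     # assumed feature order

def print_action_table(states, actions, predict_next_state):
    deltas = predict_next_state(states, actions) - states
    for a, name in ACTIONS.items():
        mean_delta = deltas[actions == a].mean(axis=0)
        row = ", ".join(f"{f}: {d:+.3f}" for f, d in zip(FEATURES, mean_delta))
        print(f"{name:8s} -> {row}")
```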

  3. Latent Memory
    • With only four input features, each one should show up as a correlation with at least one latent dimension

Thinking: If a feature doesn’t correlate with any latent dim, then the model probably dropped it (sketch below)
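A quick way to check this is the strongest absolute Pearson correlation between each input feature and any latent dimension. `encode` here is a hypothetical wrapper around the JEPA encoder:

```python
# Sketch: strongest |Pearson r| between each input feature and any latent dim.
# `encode` is a hypothetical wrapper that maps (N, 4) states to (N, D) latents.
import numpy as np

def feature_latent_correlations(states, encode, feature_names):
    z = encode(states)
    for i, name in enumerate(feature_names):
        corrs = np.array([abs(np.corrcoef(states[:, i], z[:, j])[0, 1])
                          for j in range(z.shape[1])])
        best = int(np.argmax(corrs))
        print(f"{name}: best latent dim {best}, |r| = {corrs[best]:.2f}")
```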

  4. VICReg Check
    • no latent dim has near-zero variance
    • latent dims aren’t copies of each other

Thinking: first time using VICReg, so I’m following best practice here to check the representation didn’t collapse (sketch below)
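A minimal version of those two checks on a batch of latents might look like this; the variance floor and correlation ceiling are arbitrary thresholds I’d tune:

```python
# Minimal collapse checks on a batch of latents z with shape (N, D).
# The variance floor and correlation ceiling are arbitrary thresholds.
import numpy as np

def vicreg_health_check(z, var_floor=1e-2, corr_ceiling=0.95):
    stds = z.std(axis=0)
    dead_dims = np.where(stds < var_floor)[0]            # near-zero variance
    corr = np.corrcoef(z, rowvar=False)
    np.fill_diagonal(corr, 0.0)
    pairs = np.argwhere(np.abs(corr) > corr_ceiling)     # near-duplicate dims
    redundant = [(int(i), int(j)) for i, j in pairs if i < j]
    print("near-zero variance dims:", dead_dims.tolist())
    print("highly correlated dim pairs:", redundant)
```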

What I noticed

  • Some of these checks rely on the decoder, so it may need its own sanity check (see the round-trip sketch below)
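One simple decoder check is a round-trip: encode a known state, decode it straight back, and look at the per-feature reconstruction error. `encode` and `decode` are hypothetical wrappers:

```python
# Decoder round-trip sanity check; `encode` and `decode` are hypothetical
# wrappers around the JEPA encoder and the separately trained decoder.
import numpy as np

def decoder_roundtrip_mae(states, encode, decode):
    recon = decode(encode(states))
    return np.abs(recon - states).mean(axis=0)   # per-feature reconstruction error
```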

“Aha” Moment

n/a

What still feels messy

  • JEPA loss curves aren’t meaningful to me right now. I wonder if that will change in the future.

Next step

  • Notebook to code these checks up