Latent Space in AI
1 min read
What I worked on
Tried to understand what latent space actually represents and how it connects to feature learning and ideas from the Alberta Plan. Looked at where latent space lives inside a neural network.
What I noticed
- Latent space is the internal learned representation of input data
- It typically exists between network layers
- The Alberta Plan focuses on learning which features matter most
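The notes above can be sketched in a few lines. This is a minimal toy example (random, untrained weights, all names my own invention): the "latent space" is just the hidden-layer activations sitting between the input and output layers, compressing an 8-dimensional input into a 3-dimensional code.

```python
import numpy as np

# Toy encoder/decoder with assumed random weights -- untrained, purely
# illustrative of where the latent representation lives.
rng = np.random.default_rng(0)

x = rng.normal(size=8)            # raw input: 8 features
W_enc = rng.normal(size=(3, 8))   # encoder weights: compress 8 -> 3
W_dec = rng.normal(size=(8, 3))   # decoder weights: expand 3 -> 8

latent = np.tanh(W_enc @ x)       # the internal learned representation (3-dim)
recon = W_dec @ latent            # reconstruction from the latent code

print(latent.shape)  # -> (3,): a compressed code for the 8-dim input
print(recon.shape)   # -> (8,): back to input dimensionality
```

In a trained autoencoder, `W_enc` and `W_dec` would be learned so that `recon` approximates `x`, forcing the 3-dim `latent` to keep only the features that matter.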
“Aha” Moment
Latent space is the model’s internal world — its way of compressing meaning from raw data.
What still feels messy
What parts of a model contribute most to structure in latent space and how to measure “goodness” of representations.
Next step
Compare different architectures to see how latent spaces differ visually.
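One way to do that comparison (a sketch under my own assumptions, using synthetic stand-in data): collect latent vectors from each architecture and project them to 2-D with PCA via SVD, so their structure can be compared on a scatter plot.

```python
import numpy as np

# Stand-in for 100 latent vectors of dimension 16 collected from one model;
# in practice these would be hidden-layer activations over a dataset.
rng = np.random.default_rng(1)
latents = rng.normal(size=(100, 16))

# PCA via SVD: center the data, then project onto the top-2 right singular
# vectors (the two directions of greatest variance in latent space).
centered = latents - latents.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T

print(coords_2d.shape)  # -> (100, 2): ready for a scatter plot
```

Running the same projection on latents from two different architectures gives two comparable 2-D pictures of how each one organizes its internal representation.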