Vectors Over Tokens for Non-Invasive BCI
What I worked on
Explored whether the Semantic Pointer Architecture (SPA) is a useful abstraction for non-invasive BCI, using speech recognition as a grounding example rather than starting from EEG or fNIRS, where I don't have a good reference point.
What I noticed
- SPA binds vectors with circular convolution, which is associative, commutative, and distributive over addition, making it possible to bind and unbind concepts (with some noise)
- Formally, a word is represented as a composite semantic pointer: WORD = Σᵢ (PHONEMEᵢ ⊗ Tᵢ), where ⊗ is circular convolution and each Tᵢ is a position/transition tag vector
- The phoneme vectors themselves are not discovered during learning; they are fixed targets in a known semantic space
- New words can be learned without forgetting or retraining: when the same phonemes + transitions are observed repeatedly, they stabilize into a single composite vector (labeling that vector is a separate problem)
- Recognition and readout require an associative (cleanup) memory, since the system works by comparing the similarity of an observed SP against stored SPs (see the sketch after this list)
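
To make binding, composition, and cleanup concrete, here is a minimal NumPy sketch. It's my own illustration, not code from an SPA library: the phoneme names, the dimensionality D, and the tag vectors are all assumed. `bind` is circular convolution via FFT, `unbind` binds with the approximate inverse (the involution), and readout is a cosine-similarity cleanup against the stored phoneme vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality; higher D means less binding noise

def unit_vector(d):
    """A fresh random semantic pointer on the unit hypersphere."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution via FFT (associative, commutative, distributive)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(a, b):
    """Approximate unbinding: bind with the involution of b."""
    b_inv = np.concatenate(([b[0]], b[1:][::-1]))
    return bind(a, b_inv)

# Hypothetical phoneme pointers and position/transition tags (illustrative only)
phoneme_seq = ["K", "AE", "T"]
phonemes = {p: unit_vector(D) for p in phoneme_seq}
tags = [unit_vector(D) for _ in phoneme_seq]

# WORD = Σᵢ (PHONEMEᵢ ⊗ Tᵢ)
word = sum(bind(phonemes[p], t) for p, t in zip(phoneme_seq, tags))

# Readout at position 2: unbind T₂, then clean up against the vocabulary
noisy = unbind(word, tags[1])
sims = {p: np.dot(noisy, v) / np.linalg.norm(noisy) for p, v in phonemes.items()}
print(max(sims, key=sims.get))  # "AE" with high probability at D = 512
```

The unbound result is only approximately the stored phoneme, which is why the cleanup memory is load-bearing: the noise grows with the number of bound terms and shrinks as D increases.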
“Aha” moment
- SPA reframes learning: because structure is assumed up front, backprop-style discovery isn't needed, and learning becomes alignment to a known space rather than discovery of one.
What still feels messy
- This feels more naturally suited to intent recognition than raw speech recognition, since phoneme SPs still need to be aligned to real speech signals starting from randomly initialized vectors; it's unclear whether that alignment is more efficient than existing ASR pipelines.
- The approach feels promising for handling signal drift, since adaptation would be small continual updates to composite vectors (sketched below) rather than repeated fine-tuning of a large model.
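
As a hedged sketch of what "small continual updates" could mean, here is one plausible rule (my assumption, not something from the SPA literature): nudge the stored composite vector toward each drifted observation with an exponential moving average and renormalize, so adaptation is a cheap per-observation update rather than any retraining:

```python
import numpy as np

def adapt(stored, observed, lr=0.05):
    """Nudge a stored composite SP toward a drifted observation.

    EMA update with a hypothetical rate lr; renormalize so the
    pointer stays on the unit hypersphere."""
    v = (1 - lr) * stored + lr * observed
    return v / np.linalg.norm(v)

# Usage: after each recognized instance under drifted conditions,
# stored_word = adapt(stored_word, observed_encoding)
```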
Next step
None.
Open Questions
- Could SPA be useful for agent-to-agent communication, where shared semantic spaces can be assumed up front?