
LLM-Like Computations for Language
Key Points:
- Researchers from the Hebrew University, Google Research, and Princeton found that the human brain processes spoken language sequentially, mirroring the layered architecture of large language models (LLMs) such as GPT-2 and Llama 2: early neural responses align with early model layers, while later responses correspond to deeper layers in regions such as Broca’s area.
- The study challenges traditional rule-based theories of language comprehension: contextual embeddings derived from the AI models predicted brain activity better than classical linguistic features, supporting a dynamic, context-driven account of how the brain integrates meaning.
- Using electrocorticography (ECoG) recordings from participants listening to a podcast, the team demonstrated a temporal alignment between neural activity and the stepwise transformations across LLM layers, pointing to a shared computational strategy between brains and language models (see the encoding-model sketch after this list).
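The layer-by-layer comparison described above is typically implemented as an encoding model: per-layer contextual embeddings are extracted from the LLM for each word of the stimulus transcript, and a regularized linear regression is fit to predict word-aligned neural activity from each layer. The sketch below illustrates that general pipeline in Python with GPT-2 and ridge regression; the transcript snippet, the random neural_response array, and all variable names are placeholders rather than the study's actual data or code.

```python
# Minimal sketch of a layer-wise encoding analysis: extract per-layer contextual
# embeddings from GPT-2 for each token of a transcript, then fit a ridge-regression
# encoding model that predicts a (toy) neural response per token from each layer.
# Illustrative only -- the real study uses word-aligned ECoG activity, not noise.

import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Placeholder stimulus text standing in for the podcast transcript.
transcript = "so I was walking home from the station when it started to rain"

# One forward pass; hidden_states is a tuple of (n_layers + 1) tensors,
# each shaped [1, n_tokens, hidden_dim].
inputs = tokenizer(transcript, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
hidden_states = outputs.hidden_states

n_tokens = inputs["input_ids"].shape[1]

# Hypothetical neural response: one value per token for a single electrode
# (e.g., high-gamma power); random here purely to keep the sketch runnable.
rng = np.random.default_rng(0)
neural_response = rng.standard_normal(n_tokens)

# Fit one encoding model per layer and compare cross-validated prediction accuracy.
# In the study's logic, earlier layers should best predict earlier neural responses
# and deeper layers later ones; here we only show the per-layer model fitting.
for layer_idx, layer in enumerate(hidden_states):
    X = layer.squeeze(0).numpy()  # [n_tokens, hidden_dim] feature matrix
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7))
    scores = cross_val_score(ridge, X, neural_response, cv=5, scoring="r2")
    print(f"layer {layer_idx:2d}: mean cross-validated R^2 = {scores.mean():.3f}")
```

With real recordings, the same loop would be repeated across electrodes and across time lags relative to word onset, which is how a temporal correspondence between layer depth and response latency can be assessed.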
