    Google’s ‘Nested Learning’ paradigm could solve AI's memory and continual learning problem

By The Tech Guy | November 21, 2025

    Researchers at Google have developed a new AI paradigm aimed at solving one of the biggest limitations in today’s large language models: their inability to learn or update their knowledge after training. The paradigm, called Nested Learning, reframes a model and its training not as a single process, but as a system of nested, multi-level optimization problems. The researchers argue that this approach can unlock more expressive learning algorithms, leading to better in-context learning and memory.

    To prove their concept, the researchers used Nested Learning to develop a new model, called Hope. Initial experiments show that it has superior performance on language modeling, continual learning, and long-context reasoning tasks, potentially paving the way for efficient AI systems that can adapt to real-world environments.

    The memory problem of large language models

Deep learning algorithms helped obviate the need for the careful feature engineering and domain expertise required by traditional machine learning: fed vast amounts of data, models could learn the necessary representations on their own. However, this approach presented its own set of challenges that couldn’t be solved by simply stacking more layers or creating larger networks, such as generalizing to new data, continually learning new tasks, and avoiding suboptimal solutions during training.

Efforts to overcome these challenges produced the innovations behind Transformers, the foundation of today's large language models (LLMs). These models have ushered in "a paradigm shift from task-specific models to more general-purpose systems with various emergent capabilities as a result of scaling the 'right' architectures," the researchers write. Still, a fundamental limitation remains: LLMs are largely static after training and can't update their core knowledge or acquire new skills from new interactions.

    The only adaptable component of an LLM is its in-context learning ability, which allows it to perform tasks based on information provided in its immediate prompt. This makes current LLMs analogous to a person who can't form new long-term memories. Their knowledge is limited to what they learned during pre-training (the distant past) and what's in their current context window (the immediate present). Once a conversation exceeds the context window, that information is lost forever.

    The problem is that today’s transformer-based LLMs have no mechanism for “online” consolidation. Information in the context window never updates the model’s long-term parameters — the weights stored in its feed-forward layers. As a result, the model can’t permanently acquire new knowledge or skills from interactions; anything it learns disappears as soon as the context window rolls over.
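As a rough illustration of this split between frozen weights and a rolling context window, consider the toy sketch below. Everything here is made up for demonstration: a dictionary stands in for trained parameters and a list slice stands in for the context window; no real model works this simply.

```python
# Toy illustration of the "static after training" limitation: parameters
# (long-term knowledge) never change at inference time, so anything that
# rolls out of the context window is simply gone. All names are illustrative.

WEIGHTS = {"capital_of_france": "Paris"}  # fixed at training time
CONTEXT_WINDOW = 3                        # tokens of working memory

def answer(question, context):
    # The model can draw on its frozen weights plus whatever still fits
    # in the context window -- and nothing else.
    visible = context[-CONTEXT_WINDOW:]
    if question in WEIGHTS:
        return WEIGHTS[question]
    if question in visible:
        return "seen in context"
    return "unknown"

context = ["my_name_is_Ada", "filler", "filler", "filler"]
print(answer("my_name_is_Ada", context))     # "unknown": rolled out of the window
print(answer("capital_of_france", context))  # "Paris": baked into the weights
```

The point of the sketch: no code path ever writes to `WEIGHTS` during inference, which is exactly the missing "online consolidation" mechanism.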

    A nested approach to learning

    Nested Learning (NL) is designed to allow computational models to learn from data using different levels of abstraction and time-scales, much like the brain. It treats a single machine learning model not as one continuous process, but as a system of interconnected learning problems that are optimized simultaneously at different speeds. This is a departure from the classic view, which treats a model's architecture and its optimization algorithm as two separate components.

    Under this paradigm, the training process is viewed as developing an "associative memory," the ability to connect and recall related pieces of information. The model learns to map a data point to its local error, which measures how "surprising" that data point was. Even key architectural components like the attention mechanism in transformers can be seen as simple associative memory modules that learn mappings between tokens. By defining an update frequency for each component, these nested optimization problems can be ordered into different "levels," forming the core of the NL paradigm.
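The idea of ordering components by update frequency can be sketched with a toy scheduler. This is not Google's implementation; the class, component names, and frequencies below are illustrative assumptions, meant only to show how "levels" fall out of how often each component is allowed to update.

```python
# Hypothetical sketch of Nested Learning's core framing: each component of
# one model is its own optimization problem with its own update frequency.
# Faster levels adapt to immediate information; slower levels consolidate
# more abstract knowledge over longer horizons.

class Level:
    def __init__(self, name, update_every):
        self.name = name
        self.update_every = update_every  # steps between parameter updates
        self.updates = 0

    def maybe_update(self, step):
        # A component only consolidates when its frequency comes due.
        if step % self.update_every == 0:
            self.updates += 1

levels = [
    Level("attention (fast, in-context)", update_every=1),
    Level("short-term memory", update_every=10),
    Level("long-term memory (slow)", update_every=100),
]

for step in range(1, 1001):
    for level in levels:
        level.maybe_update(step)

for level in levels:
    print(level.name, level.updates)  # 1000, 100, and 10 updates respectively
```

Ranking the components by `update_every` is what yields the nested "levels" the paradigm is named for.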

    Hope for continual learning

The researchers put these principles into practice with Hope, an architecture designed to embody Nested Learning. Hope is a modified version of Titans, another architecture Google introduced in January to address the transformer model's memory limitations. While Titans had a powerful memory system, its parameters were updated at only two speeds: one for a short-term memory mechanism and one for a long-term memory module.

    Hope is a self-modifying architecture augmented with a "Continuum Memory System" (CMS) that enables unbounded levels of in-context learning and scales to larger context windows. The CMS acts like a series of memory banks, each updating at a different frequency. Faster-updating banks handle immediate information, while slower ones consolidate more abstract knowledge over longer periods. This allows the model to optimize its own memory in a self-referential loop, creating an architecture with theoretically infinite learning levels.
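The memory-bank hierarchy can be sketched as follows. This is purely illustrative: the consolidation rule here is a simple running average and the frequencies are arbitrary, whereas the actual CMS update rules are learned as part of the model.

```python
# Illustrative sketch of a "Continuum Memory System": a chain of memory
# banks, each consolidating at a different frequency. Fast banks absorb
# raw input; slower banks consolidate summaries of the faster bank below.

class MemoryBank:
    def __init__(self, update_every):
        self.update_every = update_every
        self.state = 0.0
        self.buffer = []

    def write(self, value, step):
        self.buffer.append(value)
        if step % self.update_every == 0:
            # Consolidate buffered information into this bank's state.
            self.state = sum(self.buffer) / len(self.buffer)
            self.buffer.clear()
            return self.state  # passed up to the next (slower) bank
        return None

banks = [MemoryBank(1), MemoryBank(4), MemoryBank(16)]

for step in range(1, 17):
    signal = float(step)  # stand-in for an incoming token representation
    for bank in banks:
        out = bank.write(signal, step)
        if out is None:
            break        # slower banks wait for their frequency to come due
        signal = out     # consolidated state flows up the hierarchy
```

After 16 steps, the fast bank holds the latest input, while the slowest bank holds a heavily consolidated summary of everything it was passed.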

    On a diverse set of language modeling and common-sense reasoning tasks, Hope demonstrated lower perplexity (a measure of how well a model predicts the next word in a sequence and maintains coherence in the text it generates) and higher accuracy compared to both standard transformers and other modern recurrent models. Hope also performed better on long-context "Needle-In-Haystack" tasks, where a model must find and use a specific piece of information hidden within a large volume of text. This suggests its CMS offers a more efficient way to handle long information sequences.
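For reference, perplexity is the exponential of the average negative log-likelihood a model assigns to each token in a sequence. A toy computation, using made-up per-token probabilities rather than any real model's output:

```python
import math

def perplexity(token_probs):
    # Exponential of the mean negative log-likelihood: lower means the
    # model found the sequence less "surprising".
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident = [0.9, 0.8, 0.95, 0.85]  # model predicts each token well
uncertain = [0.2, 0.1, 0.25, 0.15]  # model is frequently surprised

assert perplexity(confident) < perplexity(uncertain)
```

A model that assigned probability 1.0 to every token would score the minimum perplexity of exactly 1.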

This is one of several efforts to create AI systems that process information at different levels. The Hierarchical Reasoning Model (HRM), by Sapient Intelligence, uses a hierarchical architecture to learn reasoning tasks more efficiently. The Tiny Reasoning Model (TRM), by Samsung, improves on HRM with architectural changes that raise performance while increasing efficiency.

While promising, Nested Learning faces some of the same challenges as these other paradigms in realizing its full potential. Current AI hardware and software stacks are heavily optimized for classic deep learning architectures, and Transformer models in particular, so adopting Nested Learning at scale may require fundamental changes to that stack. However, if it gains traction, it could lead to far more efficient LLMs that can continually learn, a capability crucial for real-world enterprise applications where environments, data, and user needs are in constant flux.

    © 2026 SynapseFlow All Rights Reserved.
