    AI21’s Jamba Reasoning 3B Redefines What “Small” Means in LLMs — 250K Context on a Laptop

By The Tech Guy · October 8, 2025 · 3 min read
    The latest addition to the small model wave for enterprises comes from AI21 Labs, which is betting that bringing models to devices will free up traffic in data centers. 

AI21’s Jamba Reasoning 3B is a “tiny” open-source model that can run extended reasoning, generate code and respond based on ground truth. It handles a context window of more than 250,000 tokens and can run inference on edge devices. 

    The company said Jamba Reasoning 3B works on devices such as laptops and mobile phones. 

    Ori Goshen, co-CEO of AI21, told VentureBeat that the company sees more enterprise use cases for small models, mainly because moving most inference to devices frees up data centers.  

    “What we're seeing right now in the industry is an economics issue where there are very expensive data center build-outs, and the revenue that is generated from the data centers versus the depreciation rate of all their chips shows the math doesn't add up,” Goshen said. 

    He added that in the future “the industry by and large would be hybrid in the sense that some of the computation will be on devices locally and other inference will move to GPUs.”

    Tested on a MacBook

Jamba Reasoning 3B combines the Mamba architecture with Transformers, allowing it to run a 250K-token context window on devices. AI21 said the model achieves 2-4x faster inference speeds, and Goshen credited the Mamba architecture for much of that speed. 

    Jamba Reasoning 3B’s hybrid architecture also allows it to reduce memory requirements, thereby reducing its computing needs. 
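The memory saving is easy to see with a back-of-envelope estimate: attention layers must cache keys and values for every token in the window, while Mamba layers keep a small fixed-size state instead. The sketch below uses hypothetical layer counts and head dimensions (not AI21’s published configuration) to illustrate the effect at a 250K-token context:

```python
# Back-of-envelope KV-cache comparison at long context.
# All dimensions are illustrative assumptions, not AI21's published config.

def kv_cache_bytes(attn_layers, kv_heads, head_dim, context, bytes_per_val=2):
    """Memory for keys + values across all attention layers (fp16)."""
    return 2 * attn_layers * kv_heads * head_dim * context * bytes_per_val

CONTEXT = 250_000  # Jamba Reasoning 3B's advertised context window

# Hypothetical pure-transformer 3B model: every layer caches K/V.
pure = kv_cache_bytes(attn_layers=32, kv_heads=8, head_dim=128, context=CONTEXT)

# Hypothetical Mamba/Transformer hybrid: only a few layers are attention;
# the Mamba layers carry a small fixed-size state rather than a cache.
hybrid = kv_cache_bytes(attn_layers=4, kv_heads=8, head_dim=128, context=CONTEXT)

print(f"pure transformer KV cache: {pure / 2**30:.1f} GiB")
print(f"hybrid KV cache:           {hybrid / 2**30:.1f} GiB")
```

Under these assumed dimensions, the long-context cache shrinks roughly in proportion to the attention-layer count, which is what makes a 250K window plausible in laptop RAM.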

    AI21 tested the model on a standard MacBook Pro and found that it can process 35 tokens per second. 

    Goshen said the model works best for tasks involving function calling, policy-grounded generation and tool routing. He said that simple requests, such as asking for information about a forthcoming meeting and asking the model to create an agenda for it, could be done on devices. The more complex reasoning tasks can be saved for GPU clusters. 
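The on-device versus GPU-cluster split Goshen describes can be sketched as a simple router. The task categories come from the article; the step threshold and function names are illustrative assumptions, not part of any AI21 product:

```python
# Hypothetical hybrid-inference router: keep light agentic tasks on-device,
# escalate heavier reasoning to a GPU-backed endpoint. The threshold is an
# illustrative assumption, not from AI21.

LOCAL_TASKS = {"function_calling", "policy_grounded_generation", "tool_routing"}

def route(task_type: str, estimated_reasoning_steps: int) -> str:
    """Return 'device' for light local-friendly tasks, 'gpu_cluster' otherwise."""
    if task_type in LOCAL_TASKS and estimated_reasoning_steps <= 5:
        return "device"
    return "gpu_cluster"

# A simple calendar request stays local; multi-step analysis escalates.
print(route("function_calling", 2))             # device
print(route("policy_grounded_generation", 12))  # gpu_cluster
```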

    Small models in enterprise

    Enterprises have been interested in using a mix of small models, some of which are specifically designed for their industry and some that are condensed versions of LLMs. 

    In September, Meta released MobileLLM-R1, a family of reasoning models ranging from 140M to 950M parameters. These models are designed for math, coding and scientific reasoning rather than chat applications. MobileLLM-R1 can run on compute-constrained devices. 

    Google’s Gemma was one of the first small models to come to the market, designed to run on portable devices like laptops and mobile phones. Gemma has since been expanded. 

Companies like FICO have also begun building their own models. FICO launched its FICO Focused Language and FICO Focused Sequence small models, which answer only finance-specific questions. 

Goshen said the big difference AI21’s model offers is that it is even smaller than most small models, yet it can run reasoning tasks without sacrificing speed. 

    Benchmark testing 

In benchmark testing, Jamba Reasoning 3B demonstrated strong performance compared to other small models, including Qwen 4B, Meta’s Llama 3.2 3B, and Phi-4-Mini from Microsoft. 

It outperformed all of those models on the IFBench test and Humanity’s Last Exam, although it came in second to Qwen 4B on MMLU-Pro. 

    Goshen said another advantage of small models like Jamba Reasoning 3B is that they are highly steerable and provide better privacy options to enterprises because the inference is not sent to a server elsewhere. 

    “I do believe there’s a world where you can optimize for the needs and the experience of the customer, and the models that will be kept on devices are a large part of it,” he said. 
