    AI21’s Jamba Reasoning 3B Redefines What “Small” Means in LLMs — 250K Context on a Laptop

    By The Tech Guy · October 8, 2025

    The latest addition to the small model wave for enterprises comes from AI21 Labs, which is betting that bringing models to devices will free up traffic in data centers. 

    The company’s Jamba Reasoning 3B is a “tiny” open-source model that can run extended reasoning, generate code and respond based on ground truth. Jamba Reasoning 3B handles a context window of more than 250,000 tokens and can run inference on edge devices. 

    The company said Jamba Reasoning 3B works on devices such as laptops and mobile phones. 

    Ori Goshen, co-CEO of AI21, told VentureBeat that the company sees more enterprise use cases for small models, mainly because moving most inference to devices frees up data centers.  

    “What we're seeing right now in the industry is an economics issue where there are very expensive data center build-outs, and the revenue that is generated from the data centers versus the depreciation rate of all their chips shows the math doesn't add up,” Goshen said. 

    He added that in the future “the industry by and large would be hybrid in the sense that some of the computation will be on devices locally and other inference will move to GPUs.”

    Tested on a MacBook

    Jamba Reasoning 3B combines the Mamba architecture with Transformers, which allows it to run a 250K-token context window on devices. AI21 said the model delivers 2-4x faster inference speeds, and Goshen said the Mamba architecture contributed significantly to that speed. 

    Jamba Reasoning 3B’s hybrid architecture also allows it to reduce memory requirements, thereby reducing its computing needs. 
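
    For a rough sense of why that matters at long context, consider a back-of-envelope comparison: a pure Transformer’s KV cache grows linearly with sequence length, while Mamba-style state-space layers keep a fixed-size recurrent state no matter how long the context is. The sketch below uses assumed dimensions for a roughly 3B-parameter model, not AI21’s published specifications.

        # Illustrative back-of-envelope comparison; all dimensions are assumptions,
        # not AI21's published specs for Jamba Reasoning 3B.
        n_layers = 32          # decoder blocks (assumption)
        n_kv_heads = 8         # KV heads under grouped-query attention (assumption)
        head_dim = 128         # per-head dimension (assumption)
        bytes_per_val = 2      # fp16/bf16 storage
        context = 250_000      # tokens held in context

        # Pure Transformer: KV cache scales linearly with context (2x for keys and values).
        kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val * context
        print(f"Transformer KV cache at 250K tokens: {kv_cache_bytes / 1e9:.1f} GB")

        # Mamba-style layer: fixed-size recurrent state, independent of context length.
        d_model = 2560         # model width (assumption)
        state_dim = 16         # SSM state size per channel (assumption)
        mamba_state_bytes = n_layers * d_model * state_dim * bytes_per_val
        print(f"Mamba recurrent state at any context length: {mamba_state_bytes / 1e6:.1f} MB")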

    AI21 tested the model on a standard MacBook Pro and found that it can process 35 tokens per second. 
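
    That tokens-per-second figure is AI21’s own measurement, but the same kind of number is easy to reproduce on your own hardware by timing a generation and dividing the new tokens by the elapsed seconds. Here is a minimal sketch using the Hugging Face transformers API; the model identifier is a placeholder, so substitute whichever checkpoint you actually run.

        # Minimal throughput check: generate text locally and report tokens per second.
        # The model id is a placeholder, not a verified checkpoint name.
        import time
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "your-org/your-small-model"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        prompt = "Draft an agenda for tomorrow's planning meeting."
        inputs = tokenizer(prompt, return_tensors="pt")

        start = time.perf_counter()
        output = model.generate(**inputs, max_new_tokens=256)
        elapsed = time.perf_counter() - start

        new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
        print(f"{new_tokens} tokens in {elapsed:.1f}s -> {new_tokens / elapsed:.1f} tok/s")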

    Goshen said the model works best for tasks involving function calling, policy-grounded generation and tool routing. He said that simple requests, such as asking for information about a forthcoming meeting and asking the model to create an agenda for it, could be done on devices. The more complex reasoning tasks can be saved for GPU clusters. 
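
    That division of labor, lightweight requests served on-device and heavier reasoning escalated to a GPU cluster, is essentially a routing layer in front of two backends. The sketch below shows one way such a router could look; the complexity heuristic, function names and thresholds are illustrative assumptions rather than anything AI21 ships.

        # Hypothetical hybrid router: keep simple requests on-device and
        # escalate complex reasoning to a remote GPU-backed service.
        def run_local(prompt: str) -> str:
            # Placeholder for on-device inference with a small model.
            return f"[local model] {prompt[:40]}..."

        def run_remote(prompt: str) -> str:
            # Placeholder for a call to a GPU cluster inference endpoint.
            return f"[gpu cluster] {prompt[:40]}..."

        def looks_complex(prompt: str) -> bool:
            # Crude heuristic: very long prompts or multi-step reasoning cues go remote.
            cues = ("prove", "step by step", "analyze", "compare across")
            return len(prompt) > 2000 or any(cue in prompt.lower() for cue in cues)

        def route(prompt: str) -> str:
            return run_remote(prompt) if looks_complex(prompt) else run_local(prompt)

        # A simple scheduling request stays on the device.
        print(route("Create an agenda for tomorrow's meeting with the design team."))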

    Small models in enterprise

    Enterprises have been interested in using a mix of small models, some of which are specifically designed for their industry and some that are condensed versions of LLMs. 

    In September, Meta released MobileLLM-R1, a family of reasoning models ranging from 140M to 950M parameters. These models are designed for math, coding and scientific reasoning rather than chat applications. MobileLLM-R1 can run on compute-constrained devices. 

    Google’s Gemma was one of the first small models to come to the market, designed to run on portable devices like laptops and mobile phones. Gemma has since been expanded. 

    Companies like FICO have also begun building their own models. FICO launched its FICO Focused Language and FICO Focused Sequence small models that will only answer finance-specific questions. 

    Goshen said the big difference with AI21’s model is that it is even smaller than most models, yet it can run reasoning tasks without sacrificing speed. 

    Benchmark testing 

    In benchmark testing, Jamba Reasoning 3B demonstrated strong performance compared to other small models, including Qwen 4B, Meta’s Llama 3.2 3B, and Phi-4-Mini from Microsoft. 

    It outperformed all of these models on the IFBench test and Humanity’s Last Exam, although it came in second to Qwen 4B on MMLU-Pro. 

    Goshen said another advantage of small models like Jamba Reasoning 3B is that they are highly steerable and provide better privacy options to enterprises because the inference is not sent to a server elsewhere. 

    “I do believe there’s a world where you can optimize for the needs and the experience of the customer, and the models that will be kept on devices are a large part of it,” he said. 
