    Intel and SambaNova just built a three-chip AI machine that splits work between GPUs, RDUs, and Xeon

By The Tech Guy | April 12, 2026 | 3 Mins Read
    • GPUs handle prefill operations by converting prompts into key-value caches
    • SambaNova RDUs generate tokens at high throughput and low latency
    • Intel Xeon 6 processors manage workload distribution and execute compiled code

    Intel and SambaNova Systems have introduced a joint hardware blueprint combining GPUs, SambaNova RDUs, and Intel Xeon 6 processors for large-scale inference workloads.


    The system assigns GPUs to prefill operations, RDUs to decoding, and Xeon CPUs to execution and orchestration tasks across agent-driven environments.
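The three-way split described above amounts to a routing decision: each stage of a request is pinned to the chip class suited to it. A minimal sketch of that idea follows; the stage and device names are ours, chosen for illustration, and are not taken from the Intel/SambaNova design:

```python
# Hypothetical sketch of stage-to-hardware routing in a
# heterogeneous inference pipeline (illustrative names only).

STAGE_TO_DEVICE = {
    "prefill": "gpu",       # prompt -> key-value cache
    "decode": "rdu",        # token generation
    "orchestrate": "cpu",   # scheduling, tool calls, compiled code
}

def route(stage: str) -> str:
    """Return the device class assigned to a pipeline stage."""
    if stage not in STAGE_TO_DEVICE:
        raise ValueError(f"unknown stage: {stage}")
    return STAGE_TO_DEVICE[stage]

def run_request(prompt: str) -> list[str]:
    """Trace which device class handles each stage of one request."""
    return [f"{stage}->{route(stage)}"
            for stage in ("prefill", "decode", "orchestrate")]

print(run_request("hello"))
```

The point of the table is that no single device appears twice: each stage lands on the hardware the blueprint assigns to it.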

    “Agentic AI is moving into production — and the winning pattern we’re seeing is GPUs to start the job, Intel Xeon 6 to run it, and SambaNova RDUs to finish it fast,” said Rodrigo Liang, CEO and co-founder of SambaNova Systems.


    CPU is the execution and control layer

    This design is scheduled to be available in the second half of 2026 for enterprises, cloud providers, and sovereign deployments.

    The architecture places Intel Xeon 6 processors at the center of system control, where they manage workload distribution, execute code, and coordinate tool interactions.

These duties include handling compilation, validating outputs, and maintaining communication between simultaneous processes.

    “When thousands of simultaneous coding agents are generating tool calls, retrieval requests, code builds, and encrypted inter-agent messages, the CPU is not a background component — it is the system’s executive and action layer,” said Harry Ault, CRO of SambaNova.
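The "executive and action layer" role described in the quote can be pictured as a validate-and-dispatch loop sitting between the model and its tools. The sketch below is our own toy illustration of that pattern, not vendor code; the tool registry and message format are invented for the example:

```python
# Hypothetical sketch of the CPU-side orchestration role:
# validate model output, route it to a tool, return the result.
# Message format and tool names are illustrative assumptions.

import json

def validate(output: str) -> dict:
    """Reject malformed model output before it reaches a tool."""
    msg = json.loads(output)  # raises on invalid JSON
    if "tool" not in msg or "args" not in msg:
        raise ValueError("missing tool or args field")
    return msg

# Stand-in tool registry; a real system would hold retrieval,
# build, and messaging endpoints here.
TOOLS = {"add": lambda args: sum(args)}

def dispatch(output: str):
    """One orchestration step: validate, route, execute."""
    msg = validate(output)
    return TOOLS[msg["tool"]](msg["args"])

print(dispatch('{"tool": "add", "args": [2, 3]}'))  # -> 5
```

With thousands of agents, the CPU runs many such steps concurrently, which is why the quote frames it as an active layer rather than a host.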


    The statement defines the CPU as the primary layer responsible for system behavior rather than a supporting component.

    According to SambaNova, Xeon 6 delivers more than 50% faster LLVM compilation times compared with Arm-based server CPUs.

    It also delivers up to 70% faster vector database performance compared with other x86-based systems.



These figures relate to execution speed within coding and retrieval workflows. In this configuration, GPUs process the prefill stage by converting prompts into key-value caches.

    SambaNova RDUs operate as the decoding layer, generating tokens at high throughput and low latency.

    Xeon 6 processors function as both host CPUs and execution engines, managing system-level operations and running compiled workloads.
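The prefill/decode split that motivates this division of labor can be shown with a toy model. In the sketch below, the "model" is a placeholder: prefill processes the whole prompt in one parallel pass to build a key-value cache, while decode generates tokens one at a time, extending that cache at each step. All names here are our own illustration:

```python
# Toy illustration of the prefill/decode phases of autoregressive
# inference. The KV cache entries and token rule are placeholders.

def prefill(prompt_tokens):
    """Process the full prompt once, producing one KV-cache entry
    per prompt token (the token itself stands in for real K/V)."""
    return [("kv", t) for t in prompt_tokens]

def decode(kv_cache, max_new_tokens):
    """Generate tokens sequentially; each step appends one KV
    entry and emits one token (a dummy naming rule here)."""
    out = []
    for i in range(max_new_tokens):
        tok = f"tok{i}"          # placeholder for a sampled token
        kv_cache.append(("kv", tok))
        out.append(tok)
    return out

cache = prefill(["the", "quick", "fox"])
new_tokens = decode(cache, 3)
# Prefill is parallel over the prompt (compute-bound, the stage the
# design gives to GPUs); decode is sequential per token
# (latency-bound, the stage the RDUs target).
```

The asymmetry is the whole argument for heterogeneity: one pass that parallelizes well versus a token-by-token loop that rewards low-latency hardware.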

    “Production inference is moving toward heterogeneous hardware — no single chip type is optimal for every stage of an agentic workflow,” said Banghua Zhu, co-founder and CTO at RadixArk.

    He added that combining RDUs with Xeon CPUs allows systems to maintain compatibility with existing software environments.

    The system is designed to run inside existing air-cooled data centers without requiring new builds.

    According to the companies, this allows scaling of inference workloads without additional strain on water and energy resources.

    As Nvidia and Groq continue to focus on improving inference throughput and latency, this announcement adds a layer of competition.

    It offers an alternative approach that distributes workloads across multiple hardware layers rather than relying on a single processing model.




