
    This ‘Machine Eye’ Could Give Robots Superhuman Reflexes

By The Tech Guy | February 20, 2026


You’re driving in a winter storm at midnight. Icy rain lashes your windshield, instantly glazing it with frost. Your eyes dart across the highway, scanning for any movement: wildlife, a struggling vehicle, a highway responder trying to pass. Whether you find safe passage or meet catastrophe hinges on how fast you see and react.


    Even experienced drivers struggle with bad weather. For self-driving cars, drones, and other robots, a snowstorm could cause mayhem. The best computer-vision algorithms can handle some scenarios, but even running on advanced computer chips, their reaction times are roughly four times greater than a human’s.

    “Such delays are unacceptable for time-sensitive applications…where a one-second delay at highway speeds can reduce the safety margin by up to 27m [88.6 feet], significantly increasing safety risks,” Shuo Gao at Beihang University and colleagues wrote in a recent paper describing a new superfast computer vision system.
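
The arithmetic behind that figure is straightforward: at 27 meters per second, about 60 miles per hour, a car covers 27 meters during a single second of processing delay.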

    Instead of working on the software, the team turned to hardware. Inspired by the way human eyes process movement, they developed an electronic replica that rapidly detects and isolates motion.

    The machine eye’s artificial synapses connect transistors into networks that detect changes in the brightness of an image. Like biological neural circuits, these connections store a brief memory of the past before processing new inputs. Comparing the two allows them to track motion.

    Combined with a popular vision algorithm, the system quickly separates moving objects, like walking pedestrians, from static objects, like buildings. By limiting its attention to motion, the machine eye needs far less time and energy to assess and respond to complex environments.

    When tested on autonomous vehicles, drones, and robotic arms, the system sped up processing times by roughly 400 percent and, in most cases, surpassed the speed of human perception without sacrificing accuracy.

    “These advancements empower robots with ultrafast and accurate perceptual capabilities, enabling them to handle complex and dynamic tasks more efficiently than ever before,” wrote the team.

    Two Motion Pictures

    A mere flicker in the corner of an eye captures our attention. We’ve evolved to be especially sensitive to movement. This perceptual superpower begins in the retina. The thin layer of light-sensitive tissue at the back of the eye is packed with cells fine-tuned to detect motion.

    Retinal cells are a curious bunch. They store memories of previous scenes and spark with activity when something in our visual field shifts. The process is a bit like an old-school film reel: Rapid transitions between still frames lead to the perception of movement.

Every cell is tuned to detect visual changes in a particular direction (for example, left to right or top to bottom) but is otherwise dormant. These activity patterns form a two-dimensional neural map that the brain interprets as speed and direction within a fraction of a second.
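
The classic computational model of this kind of direction selectivity is the Hassenstein-Reichardt correlator, which compares a delayed signal from one photoreceptor with the current signal from its neighbor. The sketch below is a minimal Python illustration of that textbook model, not the circuit from the paper; the signal shapes and delay value are invented for the example.

```python
import numpy as np

def reichardt_detector(left, right, delay=1):
    """Minimal Hassenstein-Reichardt correlator (textbook model, not the
    paper's circuit). `left` and `right` are 1D arrays of brightness
    samples over time from two neighboring photoreceptors.

    Positive output means motion from left to right; negative means
    right to left; zero means no consistent direction.
    """
    # Delay each channel by shifting it in time (a crude synaptic memory).
    left_delayed = np.roll(left, delay)
    right_delayed = np.roll(right, delay)
    left_delayed[:delay] = 0
    right_delayed[:delay] = 0

    # Correlate each receptor's delayed signal with its neighbor's
    # current signal, then subtract the two mirror-image half-detectors.
    rightward = left_delayed * right  # stimulus hit the left receptor first
    leftward = right_delayed * left   # stimulus hit the right receptor first
    return rightward - leftward

# A bright edge passing left to right: the right receptor sees the same
# spike one step later, so the detector's summed response is positive.
t = np.arange(20)
left = (t == 5).astype(float)
right = (t == 6).astype(float)
print(reichardt_detector(left, right).sum())  # > 0, i.e. rightward motion
```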

    “Biological vision excels at processing large volumes of visual information” by focusing only on motion, wrote the team. When driving across an intersection, our eyes intuitively zero in on pedestrians, cyclists, and other moving objects.

    Computer vision takes a more mathematical approach.

A popular type called optical flow analyzes differences between pixels across visual frames. The algorithm segments pixels into objects and infers movement based on changes in brightness. This approach assumes that objects maintain brightness as they move. A white dot, for example, remains a white dot as it drifts to the right, at least in simulations. Neighboring pixels are also expected to move in tandem, another marker of motion.
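
One widely used implementation of this idea is dense optical flow in OpenCV, which estimates a per-pixel motion vector between two frames under exactly that brightness-constancy assumption. A minimal sketch with a synthetic moving square (the frame sizes and parameter values here are arbitrary illustrative choices):

```python
import cv2
import numpy as np

# Two synthetic grayscale frames: a bright square that shifts 3 px right.
prev = np.zeros((120, 160), np.uint8)
curr = np.zeros((120, 160), np.uint8)
prev[50:70, 40:60] = 255
curr[50:70, 43:63] = 255

# Dense optical flow (Farneback): one (dx, dy) vector per pixel, inferred
# from local brightness changes under the brightness-constancy assumption.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# Motion magnitude per pixel; large values mark the moving square.
magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("peak motion:", magnitude.max(), "px/frame")  # roughly 3 for the square
```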

    Although inspired by biological vision, optical flow struggles in real-world scenarios. It’s an energy hog and can be laggy. Add in unexpected noise—like a snowstorm—and robots running optical flow algorithms will have trouble adapting to our messy world.

    Two-Step Solution

    To get around these problems, Gao and colleagues built a neuron-inspired chip that dynamically detects regions of motion and then focuses an optical flow algorithm on only those areas.

    Their initial design immediately hit a roadblock. Traditional computer chips can’t adjust their wiring. So the team fabricated a neuromorphic chip that, true to its name, computes and stores information at the same spot, much like a neuron processes data and retains memory.

    Because neuromorphic chips don’t shuttle data from memory to processors, they’re far faster and more energy-efficient than classical chips. They outshine standard chips in a variety of tasks, such as sensing touch, detecting auditory patterns, and processing vision.

    “The on-device adaptation capability of synaptic devices makes human-like ultrafast visual processing possible,” wrote the team.

    The new chip is built from materials and designs commonly used in other neuromorphic chips. Similar to the retina, the array’s artificial synapses encode differences in brightness and remember these changes by adjusting their responses to subsequent electrical signals.

When processing an image, the chip converts the data into voltage changes, which activate only a handful of synaptic transistors; the others stay quiet. This means the chip can filter out irrelevant visual data and focus optical flow algorithms only on regions with motion.
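
In software terms, the chip acts roughly as a motion gate in front of the optical flow step: a thresholded frame difference keeps the changed regions and blanks everything else, so the expensive algorithm only touches pixels that moved. The sketch below emulates that two-step idea with NumPy and OpenCV; the threshold is an arbitrary illustrative value, and a software emulation cannot reproduce the hardware's speed, since the real chip skips quiet pixels entirely rather than masking them afterward.

```python
import cv2
import numpy as np

def motion_gated_flow(prev, curr, threshold=15):
    """Software emulation of the two-step idea: detect changed regions
    first, then keep optical flow output only where something moved."""
    # Step 1: temporal difference, the role played by the synaptic
    # transistor array. Pixels whose brightness barely changed stay quiet.
    diff = cv2.absdiff(prev, curr)
    mask = (diff > threshold).astype(np.uint8)

    # Dilate the mask so whole moving objects, not lone pixels, survive.
    mask = cv2.dilate(mask, np.ones((9, 9), np.uint8))

    # Step 2: optical flow, gated to the regions flagged as moving.
    flow = cv2.calcOpticalFlowFarneback(
        prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow * mask[..., None]
```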

In tests, the two-step setup boosted processing speed. When analyzing a video of a pedestrian about to dash across a road, the chip picked up on subtle shifts in body position and predicted which direction they’d run within roughly 100 microseconds, faster than a human. Compared to conventional computer vision, the machine eye roughly doubled the ability of self-driving cars to detect hazards in a simulation. It also improved the accuracy of robotic arms by over 740 percent thanks to better and faster tracking.

The system is compatible with computer vision algorithms beyond optical flow, such as the YOLO neural network, which detects objects in a scene, making it adaptable to different uses.
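
The same gating idea carries over to a detector like YOLO: instead of running the network on the full frame, one can crop to the region that the motion mask flagged and detect only there. A hedged sketch, assuming the `ultralytics` package and its pretrained weights; the threshold and helper below are illustrative inventions, not the paper's integration.

```python
import cv2
import numpy as np
from ultralytics import YOLO  # assumes the ultralytics package is installed

model = YOLO("yolov8n.pt")  # small pretrained detector (assumed weights file)

def detect_in_motion_region(prev_gray, curr_gray, frame_bgr, threshold=15):
    """Run YOLO only inside the bounding box of changed pixels."""
    mask = (cv2.absdiff(prev_gray, curr_gray) > threshold).astype(np.uint8)
    xs = np.where(mask.any(axis=0))[0]  # columns with motion
    ys = np.where(mask.any(axis=1))[0]  # rows with motion
    if xs.size == 0:
        return []  # nothing moved; skip the detector entirely
    crop = frame_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return model(crop)  # detections, coordinates relative to the crop
```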

    “We do not completely overthrow the existing camera system; instead, by using hardware plug-ins, we enable existing computer vision algorithms to run four times faster than before, which holds greater practical value for engineering applications,” Gao told the South China Morning Post.
