    Alibaba's AgentEvolver lifts model performance in tool use by ~30% using synthetic, auto-generated tasks

By The Tech Guy · November 26, 2025

    Researchers at Alibaba’s Tongyi Lab have developed a new framework for self-evolving agents that create their own training data by exploring their application environments. The framework, AgentEvolver, uses the knowledge and reasoning capabilities of large language models for autonomous learning, addressing the high costs and manual effort typically required to gather task-specific datasets.


    Experiments show that compared to traditional reinforcement learning–based frameworks, AgentEvolver is more efficient at exploring its environment, makes better use of data, and adapts faster to application environments. For the enterprise, this is significant because it lowers the barrier to training agents for bespoke applications, making powerful, custom AI assistants more accessible to a wider range of organizations.

    The high cost of training AI agents

    Reinforcement learning has become a major paradigm for training LLMs to act as agents that can interact with digital environments and learn from feedback. However, developing agents with RL faces fundamental challenges. First, gathering the necessary training datasets is often prohibitively expensive, requiring significant manual labor to create examples of tasks, especially in novel or proprietary software environments where there are no available off-the-shelf datasets.

    Second, the RL techniques commonly used for LLMs require the model to run through a massive number of trial-and-error attempts to learn effectively. This process is computationally costly and inefficient. As a result, training capable LLM agents through RL remains laborious and expensive, limiting their deployment in custom enterprise settings.

    How AgentEvolver works

    The main idea behind AgentEvolver is to give models greater autonomy in their own learning process. The researchers describe it as a “self-evolving agent system” designed to “achieve autonomous and efficient capability evolution through environmental interaction.” It uses the reasoning power of an LLM to create a self-training loop, allowing the agent to continuously improve by directly interacting with its target environment without needing predefined tasks or reward functions.
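The self-training loop described above can be sketched in very rough terms: the agent alternates between acting in its target environment and updating itself on the experiences it collected. This is a hypothetical illustration, not the paper's actual algorithm; the model and environment are trivial stubs, and all names (`interact`, `update`, `evolve`) are invented.

```python
# Very rough sketch of a self-training loop: interact with the
# environment, collect experiences, update the agent, repeat.
# No real model or environment is used; both are trivial stubs.

def interact(environment, skill):
    # Higher skill yields more useful experiences per round (a stand-in
    # for the model acting more competently in the environment).
    return [f"experience-{i}" for i in range(skill)]

def update(skill, experiences):
    # Stand-in for fine-tuning: each round of collected experience
    # nudges the agent's capability upward.
    return skill + (1 if experiences else 0)

def evolve(environment, rounds=3):
    skill, history = 1, []
    for _ in range(rounds):
        experiences = interact(environment, skill)
        history.extend(experiences)
        skill = update(skill, experiences)
    return skill, len(history)

final_skill, total = evolve(environment=None, rounds=3)
```

The key property the sketch captures is compounding: a more capable agent gathers richer experience, which in turn produces a more capable agent on the next round.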

    “We envision an agent system where the LLM actively guides exploration, task generation, and performance refinement,” the researchers wrote in their paper.

    The self-evolution process is driven by three core mechanisms that work together.

    The first is self-questioning, where the agent explores its environment to discover the boundaries of its functions and identify useful states. It’s like a new user clicking around an application to see what’s possible. Based on this exploration, the agent generates its own diverse set of tasks that align with a user’s general preferences. This reduces the need for handcrafted datasets and allows the agent and its tasks to co-evolve, progressively enabling it to handle more complex challenges. 

    Yunpeng Zhai, a researcher at Alibaba and co-author of the paper, told VentureBeat that the self-questioning mechanism effectively turns the model from a “data consumer into a data producer,” dramatically reducing the time and cost required to deploy an agent in a proprietary environment.
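A hypothetical sketch of self-questioning might look like this: probe the environment to discover which functions exist, then derive one candidate task per function and keep only those matching a coarse user preference. The environment and the "LLM" task generator are both stubs, and every name here is invented for illustration.

```python
# Self-questioning sketch: explore the app's surface, then generate
# synthetic tasks from what was discovered. All names are hypothetical.

class ToyEnvironment:
    """Fake application exposing a few callable operations."""
    def __init__(self):
        self.functions = {"list_emails": None, "send_email": None, "archive": None}

def discover_functions(env):
    # Exploration step: like a new user clicking around the app
    # to see what is possible.
    return sorted(env.functions)

def propose_tasks(functions, preference):
    # LLM stub: one synthetic task per discovered function, filtered
    # by a coarse user-preference string.
    tasks = [f"Write a scenario that exercises `{fn}`" for fn in functions]
    return [t for t in tasks if preference in t]

env = ToyEnvironment()
tasks = propose_tasks(discover_functions(env), preference="email")
```

In the real framework, both discovery and task generation would be driven by the LLM itself, and the resulting tasks would grow harder as the agent improves.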

    The second mechanism is self-navigating, which improves exploration efficiency by reusing and generalizing from past experiences. AgentEvolver extracts insights from both successful and unsuccessful attempts and uses them to guide future actions. For example, if an agent tries to use an API function that doesn't exist in an application, it registers this as an experience and learns to verify the existence of functions before attempting to use them in the future.
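The API-verification example above can be sketched as an experience store that shapes later behavior. This is an assumption-laden illustration, not the paper's implementation: the stored "lesson" is a hard-coded rule, and all names (`ExperienceStore`, `call_tool`) are invented.

```python
# Self-navigating sketch: a failure is recorded as a reusable
# experience ("verify a function exists before calling it") that
# changes how the agent acts on later attempts.

class ExperienceStore:
    def __init__(self):
        self.lessons = set()

    def record_failure(self, reason):
        if reason == "unknown_function":
            self.lessons.add("verify_before_call")

def call_tool(env_functions, name, store):
    # Once the lesson is learned, check existence up front.
    if "verify_before_call" in store.lessons and name not in env_functions:
        return "skipped: function does not exist"
    if name not in env_functions:
        store.record_failure("unknown_function")
        return "error: unknown function"
    return f"ok: {name}"

store = ExperienceStore()
funcs = {"search"}
first = call_tool(funcs, "fetch_page", store)   # fails, lesson recorded
second = call_tool(funcs, "fetch_page", store)  # avoided via experience
```

The point is that the second attempt is cheaper than the first: the agent generalizes from one failure instead of repeating it.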

    The third mechanism, self-attributing, enhances learning efficiency by providing more detailed feedback. Instead of just a final success or failure signal (a common practice in RL that can result in sparse rewards), this mechanism uses an LLM to assess the contribution of each individual action in a multi-step task. It retrospectively determines whether each step contributed positively or negatively to the final outcome, giving the agent fine-grained feedback that accelerates learning. 

    This is crucial for regulated industries where how an agent solves a problem is as important as the result. “Instead of rewarding a student only for the final answer, we also evaluate the clarity and correctness of each step in their reasoning,” Zhai explained. This improves transparency and encourages the agent to adopt more robust and auditable problem-solving patterns.
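Per-step credit assignment of this kind can be sketched as follows, with the LLM judge replaced by a rule-based stub: each step in a trajectory gets its own +1/-1 label instead of the whole episode sharing one terminal reward. The judging rule and all names are invented for illustration.

```python
# Self-attributing sketch: dense per-step feedback instead of a single
# end-of-episode reward. The LLM judge is a deterministic stub here.

def judge_step(step, final_success):
    # Stub for the LLM judge: a step flagged as a wasted retry is
    # penalized; every other step inherits the sign of the outcome.
    if step.get("retried_failure"):
        return -1.0
    return 1.0 if final_success else -1.0

def attribute(trajectory, final_success):
    # Retrospectively score each action's contribution.
    return [judge_step(step, final_success) for step in trajectory]

trajectory = [
    {"action": "open_app"},
    {"action": "call_bad_api", "retried_failure": True},
    {"action": "submit_answer"},
]
credits = attribute(trajectory, final_success=True)
```

Even in a successful episode, the wasteful middle step receives a negative score, which is exactly the kind of fine-grained signal a sparse terminal reward cannot provide.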

    “By shifting the training initiative from human-engineered pipelines to LLM-guided self-improvement, AgentEvolver establishes a new paradigm that paves the way toward scalable, cost-effective, and continually improving intelligent systems,” the researchers state.

    The team has also developed a practical, end-to-end training framework that integrates these three mechanisms. A key component is the Context Manager, which controls the agent's memory and interaction history. One open question is scale: while today's benchmarks test only a limited number of tools, real enterprise environments can involve thousands of APIs. 

    Zhai acknowledges this is a core challenge for the field, but notes that AgentEvolver was designed to be extended. “Retrieval over extremely large action spaces will always introduce computational challenges, but AgentEvolver’s architecture provides a clear path toward scalable tool reasoning in enterprise settings,” he said.
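One common way to handle thousands of APIs, sketched below under stated assumptions, is to retrieve a small candidate set of tools per query rather than exposing the full catalog to the model. The article does not specify how AgentEvolver does this; here plain token overlap stands in for embedding similarity, and all names are hypothetical.

```python
# Tool-retrieval sketch for large action spaces: rank a catalog of
# tool descriptions against the query and keep only the top-k.
# Token overlap is a crude stand-in for embedding similarity.

def score(query, description):
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve_tools(query, catalog, k=2):
    # `catalog` maps tool name -> natural-language description.
    ranked = sorted(catalog, key=lambda t: score(query, catalog[t]), reverse=True)
    return ranked[:k]

catalog = {
    "send_email": "send an email message to a recipient",
    "resize_image": "resize an image to given dimensions",
    "create_invoice": "create and send an invoice to a customer",
}
top = retrieve_tools("send a message to a customer", catalog, k=2)
```

Only the retrieved subset would then be placed in the agent's context, keeping the prompt small no matter how large the catalog grows.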

    A more efficient path to agent training

    To measure the effectiveness of their framework, the researchers tested it on AppWorld and BFCL v3, two benchmarks that require agents to perform long, multi-step tasks using external tools. They used models from Alibaba’s Qwen2.5 family (7B and 14B parameters) and compared their performance against a baseline model trained with GRPO, a popular RL technique used to develop reasoning models like DeepSeek-R1.

    The results showed that integrating all three mechanisms in AgentEvolver led to substantial performance gains. For the 7B model, the average score improved by 29.4%, and for the 14B model, it increased by 27.8% over the baseline. The framework consistently enhanced the models' reasoning and task-execution capabilities across both benchmarks. The most significant improvement came from the self-questioning module, which autonomously generates diverse training tasks and directly addresses the data scarcity problem.

    The experiments also demonstrated that AgentEvolver can efficiently synthesize a large volume of high-quality training data. The tasks generated by the self-questioning module proved diverse enough to achieve good training efficiency even with a small amount of data.

    For enterprises, this provides a path to creating agents for bespoke applications and internal workflows while minimizing the need for manual data annotation. By providing high-level goals and letting the agent generate its own training experiences, organizations can develop custom AI assistants more simply and cost-effectively.

    “This combination of algorithmic design and engineering pragmatics positions AgentEvolver as both a research vehicle and a reusable foundation for building adaptive, tool-augmented agents,” the researchers conclude.

    Looking ahead, the ultimate goal is much bigger. “A truly ‘singular model’ that can drop into any software environment and master it overnight is certainly the holy grail of agentic AI,” Zhai said. “We see AgentEvolver as a necessary step in that direction.” While that future still requires breakthroughs in model reasoning and infrastructure, self-evolving approaches are paving the way.
