
Multi-Task Optimization over Networks of Tasks



Computer Science > Machine Learning
[Submitted on 23 Apr 2026]

Julian Hatzky, Thomas Bartz-Beielstein, A. E. Eiben, Anil Yaman

Abstract: Multi-task optimization is a powerful approach for solving a large number of tasks in parallel. However, existing algorithms face distinct limitations: population-based methods scale poorly and remain underexplored for large task sets, while the approaches that do scale beyond a thousand tasks are mostly MAP-Elites variants and rely on a fixed, discretized archive that disregards the topology of the task space. We introduce MONET (Multi-Task Optimization over Networks of Tasks), a multi-task optimization algorithm that models the task space as a graph: tasks are nodes, and edges connect tasks that are close in the task parameter space. This representation enables knowledge transfer between tasks and remains tractable for high-dimensional problems while exploiting the topology of the task space. MONET combines social learning, which generates candidates from neighboring nodes via crossover, with individual learning, which refines a node's own solution independently via mutation. We evaluate MONET on four domains (archery, arm, and cartpole with 5,000 tasks each; hexapod with 2,000 tasks) and show that it matches or exceeds the performance of existing MAP-Elites-based baselines across all four domains.

Comments: 14 pages, 5 figures
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2604.21991 [cs.LG] (or arXiv:2604.21991v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.21991

Submission history:
[v1] Thu, 23 Apr 2026 18:18:20 UTC (1,805 KB), from Julian Hatzky
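The abstract describes MONET's two mechanisms only at a high level. As a rough, non-authoritative illustration, the Python sketch below implements one MONET-style generation under stated assumptions: a k-nearest-neighbor graph over task parameters, uniform crossover for social learning, Gaussian mutation for individual learning, and greedy per-task replacement. All names (knn_graph, monet_step, fitness_fn) and hyperparameters are hypothetical, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def knn_graph(task_params, k=5):
    # ASSUMPTION: edges link each task to its k nearest neighbors in
    # task-parameter space; the paper may build the graph differently.
    d = np.linalg.norm(task_params[:, None] - task_params[None, :], axis=-1)
    return [np.argsort(row)[1:k + 1] for row in d]  # skip self at index 0

def monet_step(solutions, fitness_fn, task_params, neighbors, sigma=0.05):
    # One generation over all task nodes.
    n_tasks, dim = solutions.shape
    for t in range(n_tasks):
        # Social learning: uniform crossover with a random neighbor's solution.
        donor = solutions[rng.choice(neighbors[t])]
        mask = rng.random(dim) < 0.5
        social = np.where(mask, donor, solutions[t])
        # Individual learning: Gaussian mutation of the node's own solution.
        individual = solutions[t] + sigma * rng.standard_normal(dim)
        # Greedy replacement: keep a candidate only if it improves task t.
        for cand in (social, individual):
            if fitness_fn(cand, task_params[t]) > fitness_fn(solutions[t], task_params[t]):
                solutions[t] = cand.copy()
    return solutions

# Toy usage on a made-up objective: each task prefers solutions near its own
# parameter vector (purely illustrative, not one of the paper's domains).
tasks = rng.random((100, 3))
sols = rng.random((100, 3))
nbrs = knn_graph(tasks, k=5)
fit = lambda x, p: -np.linalg.norm(x - p)
for _ in range(50):
    sols = monet_step(sols, fit, tasks, nbrs)

In this sketch the graph is what carries knowledge transfer: a good solution found at one node can spread to neighboring tasks through crossover, which is the role the abstract assigns to social learning.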
Article Info
Source: arXiv AI
Category: AI & Machine Learning
Published: Apr 27, 2026
Archived: Apr 27, 2026
Full Text: Saved locally