CyberIntel ⬡ News
◬ AI & Machine Learning · Apr 15, 2026

INTARG: Informed Real-Time Adversarial Attack Generation for Time-Series Regression

arXiv Security Archived Apr 15, 2026 ✓ Full text saved

arXiv:2604.11928v1 · Announce Type: cross



Computer Science > Machine Learning

[Submitted on 13 Apr 2026]

INTARG: Informed Real-Time Adversarial Attack Generation for Time-Series Regression

Gamze Kirman Tokgoz, Onat Gungor, Tajana Rosing, Baris Aksanli

Abstract: Time-series forecasting aims to predict future values by modeling temporal dependencies in historical observations. It is a critical component of many real-world systems, where accurate forecasts improve operational efficiency and help mitigate uncertainty and risk. Machine learning (ML), and especially deep learning (DL), models have gained widespread adoption for time-series forecasting, but they remain vulnerable to adversarial attacks. However, many state-of-the-art attack methods are not directly applicable in time-series settings, where storing complete historical data or performing attacks at every time step is often impractical. This paper proposes an adversarial attack framework for time-series forecasting under an online bounded-buffer setting, leveraging an informed and selective attack strategy. By selectively targeting time steps where the model exhibits high confidence and the expected prediction error is maximal, the framework produces fewer but substantially more effective attacks. Experiments show that the framework can increase prediction error by up to 2.42x while attacking fewer than 10% of time steps.
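The abstract describes an online attack that keeps only a bounded buffer of history and perturbs a small, informed subset of time steps. The paper's actual INTARG algorithm is not reproduced here; the sketch below is a minimal illustration under assumed details: a generic `forecaster` callable, a confidence proxy based on buffer variance, an expected-error proxy based on the last residual, a running quantile threshold to enforce the attack budget, and a simple bounded (L-infinity) perturbation. All of these choices are hypothetical stand-ins, not the authors' method.

```python
import numpy as np
from collections import deque

def selective_online_attack(series, forecaster, buffer_size=32,
                            eps=0.1, budget=0.1):
    """Illustrative selective attack under a bounded buffer.

    Scores each step by (confidence proxy) x (expected-error proxy)
    and perturbs the newest observation, within an eps-ball, only
    when the score falls in the top `budget` fraction seen so far.
    """
    buf = deque(maxlen=buffer_size)   # bounded history buffer
    scores = []                       # running scores for the threshold
    attacked = np.array(series, dtype=float)
    n_attacks = 0
    for t, x in enumerate(series):
        buf.append(x)
        if len(buf) < buffer_size:
            continue                  # warm-up: buffer not yet full
        hist = np.asarray(buf)
        pred = forecaster(hist)                 # point forecast
        err_proxy = abs(hist[-1] - pred)        # last residual
        conf_proxy = 1.0 / (1.0 + np.std(hist)) # low variance = confident
        score = conf_proxy * err_proxy
        scores.append(score)
        thresh = np.quantile(scores, 1.0 - budget)
        if score >= thresh:
            # Push the observation away from the forecast, bounded by eps.
            attacked[t] = x + eps * np.sign(x - pred)
            n_attacks += 1
    return attacked, n_attacks
```

A mean-of-buffer forecaster is enough to exercise the sketch; in the paper's setting the forecaster would be a trained DL model and the scoring rule is what makes the attack "informed" rather than uniform over time steps.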
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Cite as: arXiv:2604.11928 [cs.LG] (arXiv:2604.11928v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2604.11928
Submission history: [v1] Mon, 13 Apr 2026 18:16:39 UTC (1,237 KB), submitted by Onat Gungor