◐ Insider Threat & DLP · Mar 16, 2026

Understanding AI insider risk before it becomes a problem - Help Net Security

Help Net Security · Archived Mar 16, 2026 · ✓ Full text saved


✦ AI Summary · Claude Sonnet


Help Net Security · January 5, 2026

In this Help Net Security video, Greg Pollock, Head of Research and Insights at UpGuard, discusses AI use inside organizations and the insider risks tied to it. He outlines two problems: employees who use AI tools to speed up their work but share data with unapproved services, and hostile actors who use AI to gain trusted roles inside companies.

Pollock walks through research showing how common unapproved AI use has become, including among senior staff, and explains why it creates data, legal, and compliance gaps that security teams may not see. He also describes how state-backed groups have used AI to fake skills, land jobs, and move inside networks.

The video connects these issues to cyber risk posture management. Pollock stresses the need for employee education, open reporting, and visibility into data flows, with the focus on managing risk while supporting productivity across the organization.
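The "visibility into data flows" point can be made concrete with a minimal sketch: scanning web-proxy logs for traffic to AI-like domains that are not on an organization's approved list. Everything here is an assumption for illustration: the log format, the domain names, and the `AI_DOMAIN_HINTS` heuristic are hypothetical, not from the video or any real DLP product.

```python
# Minimal sketch of shadow-AI visibility: flag proxy-log entries that reach
# AI services not on an approved list. The log format and all domain names
# below are hypothetical examples, not real endpoints.

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}

# Substrings loosely associated with generative-AI endpoints (illustrative
# heuristic only; a real deployment would use a curated domain feed).
AI_DOMAIN_HINTS = ("openai", "anthropic", "gemini", "-ai.", "chat")

def flag_unapproved_ai(log_lines):
    """Return (user, domain) pairs for requests to AI-like domains
    that are not on the approved list."""
    flagged = []
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <domain> <bytes_out>"
        parts = line.split()
        if len(parts) < 4:
            continue  # skip malformed lines rather than fail
        _, user, domain, _ = parts[:4]
        looks_like_ai = any(hint in domain for hint in AI_DOMAIN_HINTS)
        if looks_like_ai and domain not in APPROVED_AI_DOMAINS:
            flagged.append((user, domain))
    return flagged

sample = [
    "2026-01-05T09:00 alice approved-ai.example.com 1200",
    "2026-01-05T09:01 bob chat.unvetted-ai.example 884211",
]
print(flag_unapproved_ai(sample))  # only bob's unapproved request is flagged
```

In keeping with the video's emphasis on supporting productivity, output like this would feed an education and open-reporting workflow rather than automatic blocking.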
Article Info
Source: Help Net Security
Category: ◐ Insider Threat & DLP
Published: Mar 16, 2026
Archived: Mar 16, 2026
Full Text: ✓ Saved locally