ISC West 2026: During the AI Boom, Trust Is Crucial - ASIS
Illustration by iStock
SYSTEMS SELECTION AND INTEGRATION
DIGITAL TRANSFORMATION
ARTIFICIAL INTELLIGENCE (AI)
ISC West 2026: During the AI Boom, Trust Is Crucial
By Claire Meyer and Sara Mosqueda 30 March 2026
Today in Security
Trust remains a prized commodity for the security industry, especially in the growing era of artificial intelligence (AI). Last week, thousands of security practitioners convened in Las Vegas, Nevada, for ISC West, and they brought their enthusiasm, skepticism, and confusion about AI applications with them.
AI has been a common theme at ISC West for the past four years or so, echoing the explosive growth of AI applications across most industries. But this year, some manufacturers sought to reassure potential clients about the security and privacy of their data—and the ethics behind those AI-enabled tools.
There was no shortage of AI-enabled systems at ISC West (although many are fairly straightforward uses of LLMs or vision-language models for natural language search across video feeds), and they shared some key themes: simplified identification of and search for key events, multilanguage support, evidence management, and report generation. Others dived more deeply into cross-functional value generation, such as using existing video surveillance infrastructure to answer business intelligence questions for other departments, like time-based visual checks for signage placement or endcap setup in retail stores, said a representative from intelligent cloud video surveillance company OpenEye.
But it’s important for end users to understand where their data is going, as well as how security systems’ AI tools have been trained, said Tim Palmquist, vice president, Americas, at Milestone, in a conversation with Security Management. For example, is your security surveillance data being shared with AI companies to improve or train their models? Was the original data set used to train the security system ethically collected? Is the organization’s data anonymized before being shared? Have users (including your more sensitive departments, such as R&D) consented to share the data? Is the surveillance data staying within a private large language model (LLM) for your organization, or is it being distributed or sold more widely?
Security manufacturers need to be clear about their ethical AI guardrails so they can lead in a more balanced way, Palmquist said. But that also means end users and consultants need to ask more informed and precise questions about where their data is going.
Some manufacturers are also placing guardrails on how systems can be used. For instance, Genetec and Securitas Technology both demonstrated how users cannot search for incidents or clips based on stereotypes or potentially derogatory terminology that could be used to abuse the system or go against company policies, such as “find anyone looking suspicious” or “identify all women in the scene.” Instead, users can be prompted toward more specific terms aligned with appropriate use, identifying activities or other observable characteristics, to support more ethical AI practices.
Some AI systems explain themselves in turn. Crisis24’s new AiiA strategic intelligence platform, powered by Palantir, includes sources and reasoning for the different insights it provides in its President’s Brief reports, said Ansel Stein, vice president of operations at Crisis24. This helps improve users’ confidence in the intelligence, “supercharging human capacity” so analysts and organizations can get ahead of potential disruptions, he said.
Defining those use cases, restrictions, and methods—and communicating about those sources and guardrails to employees—can help boost trust and transparency in AI solutions.
But trust issues extend beyond AI, as they always have. Security thought leaders called for professionals to look beyond traditional physical solutions that have kept adversaries out, placing greater focus on identifying the threats they may have already let in: the insider.
Often, failures of trust create opportunities for an insider threat to act against an organization, according to Haywood Talcove, CEO, government, LexisNexis Risk Solutions. Talcove spoke at the ISC West keynote session on Wednesday, 25 March.
“You can have the strongest fence [until] when somebody walks through it or when someone is let in,” Talcove said. “It’s not the perimeter that fails—it’s trust. Knowing who and what to trust to let into our sites or let within our perimeter.”
Whether they use physical or digital vectors, attackers leverage trust as a weapon against targets. A trusted person can abuse his or her access to a network or organization to steal funds, data, other identities, and more, and that person’s identifiers, such as a password, badge, or another form of access, can be stolen and misused to the same end.
“Identity today is the one attack surface that every adversary has to pass through,” Talcove said.
Talcove’s emphasis on the value of trust and concern over insider threats were echoed on the show floor. Various vendors were marketing products that cater to customers’ potential concerns about insider threats or fraud, from on-site data storage and access, facial authentication, and privacy filters within security systems to AI that identifies anomalies in access control or surveillance data, protections for employee and company data, and more.
HID, for example, touted its new converged credentials solution, which manages employees’ identities through a single, unified platform so they can access facilities, log into computers, and authenticate cloud applications with a single credential—whether a smart card, security key, or micro reader. This can help to eliminate pain points and risks associated with traditional access and multifactor authentication while improving visibility into employees’ activity and permissions. It can also make organizations more resistant to phishing by reducing reliance on easily shared passwords, HID representatives told Security Management.
Converged or holistic security solutions like these were highlighted in multiple presentations and booths—as IT’s influence in security purchases and technology decisions grows, the cybersecurity implications of identity management and networked security systems cannot be overstated. AI’s accelerating influence only makes this more of an imperative.
As Axis Communications cofounder Martin Gren said in a press briefing at ISC West, “Connection without protection—that’s risk.”
Claire Meyer is editor-in-chief at Security Management. This was her 15th ISC West.
Sara Mosqueda is associate editor for Security Management. This was her first ISC West.
Share your ISC West takeaways with us on LinkedIn.