
Proactive Preparation and Hardening Against Destructive Attacks: 2026 Edition

Mandiant · Archived Mar 16, 2026




March 6, 2026 · Mandiant

Written by: Matthew McWhirt, Bhavesh Dhake, Emilio Oropeza, Gautam Krishnan, Stuart Carrera, Greg Blaum, Michael Rudden

UPDATE (March 13): Added guidance around abuse or misuse of endpoint / MDM platforms.

Background

Threat actors leverage destructive malware to destroy data, eliminate evidence of malicious activity, or manipulate systems in a way that renders them inoperable. Destructive cyberattacks can be a powerful means to achieve strategic or tactical objectives; however, the risk of reprisal is likely to limit their use to very select incidents. Destructive cyberattacks can include destructive malware, wipers, or modified ransomware. When conflict erupts, cyberattacks are an inexpensive and easily deployable weapon, and it should come as no surprise that instability leads to increases in attacks.

This blog post provides proactive recommendations that organizations should prioritize to protect against a destructive attack within an environment. The recommendations include practical and scalable methods that can help protect organizations not only from destructive attacks, but also from incidents where a threat actor attempts to perform reconnaissance, escalate privileges, move laterally, maintain access, and achieve their mission.

The detection opportunities outlined in this blog post are meant to act as supplementary monitoring to existing security tools. Organizations should leverage endpoint and network security tools as additional preventative and detective measures. These tools use a broad spectrum of detective capabilities, including signatures and heuristics, to detect malicious activity with a reasonable degree of fidelity.
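As a toy illustration of the baseline-driven detection approach this post advocates (the data layout, field names, and three-sigma threshold below are illustrative assumptions, not from the post), a per-user baseline of failed-login counts can flag activity that diverges sharply from an account's own history:

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: {user: [daily failed-login counts]} -> {user: (mean, stdev)}."""
    return {u: (mean(c), stdev(c)) for u, c in history.items() if len(c) >= 2}

def divergent_users(baseline, today, sigmas=3.0):
    """Flag users whose count today exceeds their own mean by `sigmas` stdevs.
    Unknown users get a zero baseline, so any burst from them is flagged too."""
    flagged = []
    for user, count in today.items():
        mu, sd = baseline.get(user, (0.0, 0.0))
        if count > mu + sigmas * max(sd, 1.0):  # floor stdev to damp zero-variance noise
            flagged.append(user)
    return flagged

history = {"alice": [1, 0, 2, 1, 1], "bob": [0, 1, 0, 1, 0]}
baseline = build_baseline(history)
print(divergent_users(baseline, {"alice": 2, "bob": 25}))  # ['bob']
```

The key point is the one the post makes: the rule fires on divergence from a pre-established, per-entity baseline rather than on a fixed signature.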
The custom detection opportunities referenced in this blog post are correlated to specific threat actor behavior and are meant to flag anomalous activity identified by its divergence from normal patterns. Effective monitoring depends on a thorough understanding of an organization's unique environment and on pre-established baselines.

Organizational Resilience

While the core focus of this blog post is aligned to technical- and tactical-focused security controls, technical preparation and recovery are not the only strategies. Organizations that include crisis preparation and orchestration as key components of security governance can naturally adopt a "living" resilience posture. This includes:

- Out-of-Band Incident Command and Communication: Establish a pre-validated, "out-of-band" communication platform that is completely decoupled from the corporate identity plane. This ensures that key stakeholders and third-party support teams can coordinate and communicate securely, even if the primary communication platform is unavailable.
- Defined Operational Contingency and Recovery Plans: Establish baseline operational requirements, including manual procedures for vital business functions, to ensure continuity during restoration or rebuild efforts. Organizations must also develop prioritized application recovery sequences and map the essential dependencies needed to establish a secure foundation for recovery goals.
- Pre-Establish Trusted Third-Party Vendor Relationships: Based on the range of technologies and platforms vital to business operations, develop predefined agreements with external partners to ensure access to specialists for legal / contractual requirements, incident response, remediation, recovery, and ransomware negotiations.
- Practice and Refine the Recovery: Conduct exercises that validate the end-to-end restoration of mission-critical services using isolated, immutable backups and out-of-band communication channels, ensuring that recovery timelines (RTO) and data integrity (RPO) are tested, practiced, and current.

Google Security Operations

Google Security Operations (SecOps) customers have access to these broad category rules and more under the Mandiant Intel Emerging Threats, Mandiant Frontline Threats, Mandiant Hunting Rules, and CDIR SCC Enhanced Data Destruction Alerts rule packs. The activity discussed in this blog post is detected in Google SecOps under the following rule names:

- BABYWIPER File Erasure
- Secure Evidence Destruction And Cleanup Commands
- CMD Launching Application Self Delete
- Copy Binary From Downloads
- Rundll32 Execution Of Dll Function Name Containing Special Character
- Services Launching Cmd
- System Process Execution Via Scheduled Task
- Dllhost Masquerading
- Backdoor Writing Dll To Disk For Injection
- Multiple Exclusions Added To Windows Defender In Single Command
- Path Exclusion Added to Windows Defender
- Registry Change to CurrentControlSet Services
- Powershell Set Content Value Of 0
- Overwrite Disk Using DD Utility
- Bcdedit Modifications Via Command
- Disabling Crash Dump For Drive Wiping
- Suspicious Wbadmin Commands
- Fsutil File Zero Out

Recommendations Summary

Table 1 provides a high-level overview of the guidance in this blog post.

- External-Facing Assets: Protect against the risk of threat actors exploiting an externally facing vector or leveraging existing technology for unauthorized remote access.
- Critical Asset Protections: Protect specific high-value infrastructure and prepare for recovery from a destructive attack.
- On-Premises Lateral Movement Protections: Protect against a threat actor with initial access into an environment from moving laterally to further expand their scope of access and persistence.
- Credential Exposure and Account Protections: Protect against the exposure of privileged credentials to facilitate privilege escalation.
- Preventing Destructive Actions in Kubernetes and CI/CD Pipelines: Protect the integrity and availability of Kubernetes environments and CI/CD pipelines.

Table 1: Overview of recommendations

1. External-Facing Assets

Identify, Enumerate, and Harden

To protect against a threat actor exploiting vulnerabilities or misconfigurations via an external-facing vector, organizations must determine the scope of applications and organization-managed services that are externally accessible. Externally accessible applications and services (both on-premises and cloud) are often targeted by threat actors for initial access by exploiting known vulnerabilities, brute-forcing common or default credentials, or authenticating using valid credentials.

To proactively identify and validate external-facing applications and services, consider:

- Leveraging a vulnerability scanning technology to identify assets and associated vulnerabilities.
- Performing a focused vulnerability assessment or penetration test with the goal of identifying external-facing vectors that could be leveraged for authentication and access.
- Verifying with technology vendors whether the products leveraged by an organization for external-facing services require patches or updates to mitigate known vulnerabilities.

Any identified vulnerabilities should not only be patched and hardened; the identified technology platforms should also be reviewed to verify that suspicious activity or technology/device modifications have not already occurred.

The following table provides an overview of capabilities to proactively review and identify external-facing assets and resources within common cloud-based infrastructures.
- Google Cloud: Security Command Center
- Amazon Web Services: AWS Config / Inspector
- Microsoft Azure: Defender External Attack Surface Management (Defender EASM)

Table 2: Overview of cloud provider attack surface discovery capabilities

Enforce Multi-Factor Authentication

External-facing assets that leverage single-factor authentication (SFA) are highly susceptible to brute-force attacks, password spraying, or unauthorized remote access using valid (stolen) credentials. External-facing applications and services that currently allow SFA should be configured to support multi-factor authentication (MFA). Additionally, MFA should be leveraged for accessing not only on-premises external-facing managed infrastructure, but also cloud-based resources (e.g., software-as-a-service [SaaS] such as Microsoft 365 [M365]).

When configuring multi-factor authentication, the following methods are commonly considered (ranked from most to least secure):

1. Fast IDentity Online 2 (FIDO2)/WebAuthn security keys or passkeys
2. Software/hardware Open Authentication (OAUTH) token
3. Authenticator application (e.g., Duo / Microsoft [MS] Authenticator / Okta Verify)
4. Time-based One-Time Password (TOTP)
5. Push notification (least preferred option), using number matching when possible
6. Phone call
7. Short Message Service (SMS) verification
8. Email-based verification

Risks of Specific MFA Methods

Push Notifications

If an organization is leveraging push notifications for MFA (e.g., a notification that requires acceptance via an application, or an automated call to a mobile device), threat actors can exploit this type of MFA configuration for attempted access, as a user may inadvertently accept a push notification on their device without the context of where the authentication was initiated.
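A common heuristic for catching push-notification abuse is to flag accounts that receive a burst of prompts in a short window with no successful sign-in in between. A minimal sliding-window sketch (event shape and thresholds are illustrative assumptions, not a vendor API):

```python
from collections import defaultdict

def prompt_bomb_suspects(events, max_prompts=5, window_s=600):
    """events: iterable of (timestamp_s, user, kind), kind in
    {'push_sent', 'auth_success'}. Flag users who receive more than
    `max_prompts` pushes inside any `window_s`-second window, where a
    successful authentication resets the burst."""
    pushes, flagged = defaultdict(list), set()
    for ts, user, kind in sorted(events):
        q = pushes[user]
        if kind == "auth_success":
            q.clear()            # a legitimate login ends the burst
            continue
        q.append(ts)
        while q and ts - q[0] > window_s:
            q.pop(0)             # slide the window forward
        if len(q) > max_prompts:
            flagged.add(user)
    return flagged

burst = [(i * 60, "carol", "push_sent") for i in range(6)]
print(prompt_bomb_suspects(burst))  # {'carol'}
```

In production this logic would typically live in a SIEM correlation rule rather than application code, but the windowing idea is the same.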
Phone/SMS Verification

If an organization is leveraging phone calls or SMS-based verification for MFA, these methods are not encrypted and are susceptible to interception by a threat actor. These methods are also vulnerable if a threat actor is able to transfer an employee's phone number to an attacker-controlled subscriber identity module (SIM) card. This would result in MFA notifications being routed to the threat actor instead of the intended employee.

Email-Based Verification

If an organization is leveraging email-based verification for validating access or retrieving MFA codes, and a threat actor has already established the ability to access the email of their target, the actor could also retrieve the email(s) needed to validate and complete the MFA process.

If any of these MFA methods are leveraged, consider:

- Training remote users to never accept or respond to a logon notification when they are not actively attempting to log in.
- Establishing a method for users to report suspicious MFA notifications, as this could be indicative of a compromised account.
- Ensuring there are messaging policies in place to prevent the auto-forwarding of email messages outside the organization.

Time-Based One-Time Password

Time-based one-time password (TOTP) relies on a shared secret, called a seed, known by both the authenticating system and the authenticator possessed by an end user. If a seed is compromised, the TOTP authenticator can be duplicated and used by a threat actor.

Detection Opportunities for External-Facing Assets and MFA Attempts

- Brute Force (T1110 – Brute Force): Search for a single user with an excessive number of failed logins from external Internet Protocol (IP) addresses. This risk can be mitigated by enforcing a strong password, MFA, and lockout policy.
- Password Spray (T1110.003 – Password Spray): Search for a high number of accounts with failed logins, typically from similar origination addresses.
- Multiple Failed MFA, Same User (T1110 – Brute Force; T1078 – Valid Accounts): Search for multiple failed MFA conditions for the same account. This may be indicative of a previously compromised credential.
- Multiple Failed MFA, Same Source (T1110.003 – Password Spray; T1078 – Valid Accounts): Search for multiple failed MFA prompts for different users from the same source. This may be indicative of multiple compromised credentials and an attempt to "spray" MFA prompts/tokens for access.
- External Authentication from an Account with Elevated Privileges (T1078 – Valid Accounts): Privileged accounts should use internally managed and secured privileged access workstations for access and should not be accessible directly from an external (untrusted) source.
- Adversary-in-the-Middle (AiTM) Session Token Theft (T1557 – Adversary in the Middle): Monitor for sign-ins where the authentication method succeeds but the session originates from an IP/ASN inconsistent with the user's prior sessions. Detect logins from newly registered domains or known reverse-proxy infrastructure (EvilProxy, Tycoon 2FA). Correlate sign-in logs for "isInteractive: true" sessions with anomalous user-agent strings or geographically impossible travel.
- MFA Fatigue / Prompt Bombing (T1621 – MFA Request Generation): Search for accounts receiving more than five MFA push notifications within a 10-minute window without a corresponding successful authentication.
- Post-Authentication MFA Device Registration (T1098.005 – Account Manipulation: Device Registration): Monitor audit logs for new MFA device registrations (AuthenticationMethodRegistered) occurring within 60 minutes of a sign-in from a new IP or device. Attackers who steal session tokens via AiTM immediately register their own MFA device for persistent access.
- OAuth/Consent Phishing (T1550.001 – Use Alternate Authentication Material): Monitor for OAuth application consent grants with high-privilege scopes (Mail.Read, Files.ReadWrite.All) from unrecognized application IDs.

Table 3: Detection opportunities for external-facing assets and MFA attempts

2. Critical Asset Protections

Domain Controller and Critical Asset Backups

Organizations should verify that backups for domain controllers and critical assets are available and protected against unauthorized access or modification. Backup processes and procedures should be exercised on a continual basis. Backups should be protected and stored within secured enclaves that include both network and identity segmentation.

If an organization's Active Directory (AD) were to become corrupted or unavailable due to ransomware or a destructive attack, restoring Active Directory from domain controller backups may be the only viable option to reconstitute domain services. The following domain controller recovery and reconstitution best practices should be proactively reviewed by organizations:

Verify that there is a known good backup of domain controllers and SYSVOL shares (e.g., back up C:\Windows\SYSVOL from a domain controller). For domain controllers, a system state backup is preferred.

Note: For a system state backup to occur, Windows Server Backup must be installed as a feature on a domain controller. The following command can be run from an elevated command prompt to initiate a system state backup of a domain controller.

wbadmin start systemstatebackup -backuptarget:<targetDrive>:

Figure 1: Command to perform a system state backup

The following command can be run from an elevated command prompt to perform a SYSVOL backup. (Manage auditing and security log permissions must also be configured for the account performing the backup.)
robocopy c:\windows\sysvol c:\sysvol-backup /copyall /mir /b /r:0 /xd

Figure 2: Command to perform a SYSVOL backup

Proactively identify domain controllers that hold flexible single master operation (FSMO) roles, as these will need to be prioritized for recovery in the event that a full domain restoration is required.

netdom query fsmo

Figure 3: Command to identify domain controllers that hold FSMO roles

- Offline backups: Ensure offline domain controller backups are secured and stored separately from online backups.
- Encryption: Backup data should be encrypted both in transit (over the wire) and at rest or when mirrored for offsite storage.
- DSRM password validation: Ensure that the Directory Services Restore Mode (DSRM) password is set to a known value for each domain controller. This password is required when performing an authoritative or nonauthoritative domain controller restoration.
- Configure alerting for backup operations: Backup products and technologies should be configured to detect and alert on operations critical to the availability and integrity of backup data (e.g., deletion of backup data, purging of backup metadata, restoration events, media errors).
- Enforce role-based access control (RBAC): Access to backup media and the applications that govern and manage data backups should use RBAC to restrict the scope of accounts that have access to the stored data and configuration parameters.
- Testing and verification: Both authoritative and nonauthoritative domain controller restoration processes should be documented and tested on a regular basis. The same testing and verification processes should be enforced for critical assets and data.

Business Continuity Planning

Critical asset recovery is dependent upon in-depth planning and preparation, which is often included within an organization's business continuity plan (BCP).
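Complementing the testing-and-verification guidance above, backup freshness is easy to check mechanically. A minimal sketch, assuming a hypothetical inventory that records the last verified system state backup per domain controller (the names and one-day threshold are illustrative):

```python
from datetime import datetime, timedelta

def stale_backups(inventory, now, max_age_days=1):
    """inventory: {dc_name: datetime of last verified system state backup, or None}.
    Return DCs whose most recent backup is missing or older than `max_age_days`."""
    cutoff = now - timedelta(days=max_age_days)
    return sorted(dc for dc, last in inventory.items() if last is None or last < cutoff)

now = datetime(2026, 3, 6, 12, 0)
inventory = {
    "DC01": datetime(2026, 3, 6, 2, 0),   # fresh backup from last night
    "DC02": datetime(2026, 3, 3, 2, 0),   # stale: three days old
    "DC03": None,                          # never backed up
}
print(stale_backups(inventory, now))  # ['DC02', 'DC03']
```

A check like this could feed the backup-operations alerting described above, turning "do we have a known good backup?" into a daily pass/fail signal.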
Planning and recovery preparation should include the following core competencies:

- A well-defined understanding of crown-jewel data and supporting applications that align to backup, failover, and restoration tasks prioritizing mission-critical business operations
- Clearly defined asset prioritization and recovery sequencing
- Thoroughly documented recovery processes for critical systems and data
- Trained personnel to support recovery efforts
- Validation of recovery processes to ensure successful execution
- Clear delineation of responsibility for managing and verifying data and application backups
- Online and offline data backup retention policies, including initiation, frequency, verification, and testing (for both on-premises and cloud-based data)
- Established service-level agreements (SLAs) with vendors to prioritize application- and infrastructure-focused support

Continuity and recovery planning can become stale over time, and processes are often not updated to reflect environment and personnel changes. Prioritizing evaluations, continuous training, and recovery validation exercises will enable an organization to be better prepared in the event of a disaster.

Detection Opportunities for Backups

- Volume Shadow Deletion (T1490 – Inhibit System Recovery): Search for instances where a threat actor deletes volume shadow copies to inhibit system recovery. This can be accomplished using the command line, PowerShell, and other utilities.
- Unauthorized Access Attempt (T1078 – Valid Accounts): Search for unauthorized users attempting to access the media and applications that are used to manage data backups.
- Suspicious Usage of the DSRM Password (T1078 – Valid Accounts): Monitor security event logs on domain controllers for Event ID 4794 ("An attempt was made to set the Directory Services Restore Mode administrator password"), and monitor the following registry key on domain controllers:

HKLM\System\CurrentControlSet\Control\Lsa\DSRMAdminLogonBehavior

Figure 4: DSRM registry key for monitoring

The possible values for the registry key noted in Figure 4 are:

- 0 (default): The DSRM Administrator account can only be used if the domain controller is restarted in Directory Services Restore Mode.
- 1: The DSRM Administrator account can be used for a console-based logon if the local Active Directory Domain Services service is stopped.
- 2: The DSRM Administrator account can be used for console or network access without needing to reboot a domain controller.

Table 4: Detection opportunities for backups

IT and OT Segmentation

Organizations should ensure that there is both physical and logical segmentation between corporate information technology (IT) domains, identities, networks, and assets and those used in direct support of operational technology (OT) processes and control. By enforcing IT and OT segmentation, organizations can inhibit a threat actor's ability to pivot from corporate environments to mission-critical OT assets using compromised accounts and existing network access paths.

OT environments should leverage separate identity stores (e.g., dedicated Active Directory domains) that are not trusted by or cross-used in support of corporate identity and authentication. The compromise of a corporate identity or asset should not result in a threat actor's ability to directly pivot to accessing an asset that has the ability to influence an OT process.
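The identity-segmentation principle above implies a simple detection: any authentication where the account's home environment differs from the target asset's environment is a boundary violation. A minimal sketch, assuming a hypothetical tagging of accounts and assets as "IT" or "OT" (names below are invented for illustration):

```python
def cross_environment_logons(events, account_env, asset_env):
    """events: iterable of (account, asset) authentication attempts.
    account_env / asset_env: maps from principal to its home environment
    ('IT' or 'OT'). Flag any attempt that crosses the boundary, or that
    involves an untagged principal (unknowns should never pass silently)."""
    violations = []
    for account, asset in events:
        src = account_env.get(account)
        dst = asset_env.get(asset)
        if src is None or dst is None or src != dst:
            violations.append((account, asset))
    return violations

account_env = {"corp\\jsmith": "IT", "ot\\hmi-op": "OT"}
asset_env = {"FILESRV01": "IT", "PLC-ENG-01": "OT"}
# A corporate (IT) account touching an OT asset is flagged; OT-to-OT is not.
print(cross_environment_logons(
    [("corp\\jsmith", "PLC-ENG-01"), ("ot\\hmi-op", "PLC-ENG-01")],
    account_env, asset_env))
```

This mirrors the detection in Table 5 below (failed logins crossing segmented environments), applied at the log-analysis layer.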
In addition to separate AD forests for IT and OT, segmentation should also include technologies that may have dual use in the IT and OT environments (backup servers, antivirus [AV], endpoint detection and response [EDR], jump servers, storage, virtual network infrastructure). OT segmentation should be designed such that if there is a disruption in the corporate (IT) environment, the OT process can safely function independently, without a direct dependency (account, asset, network pathway) on the corporate infrastructure. For any dependencies that cannot be readily segmented, organizations should identify potential short-term processes or manual controls to ensure that the OT environment can be effectively isolated if evidence of an IT (corporate)-focused incident were detected.

Segmenting IT and OT environments is a best practice recommended by industry standards such as the National Institute of Standards and Technology (NIST) SP 800-82r3: Guide to Operational Technology (OT) Security and IEC 62443 (formerly ISA99). According to these best-practice standards, segmenting IT and OT networks should include the following:

- OT attack surface reduction by restricting the scope of ports, services, and protocols that are directly accessible within the OT network from the corporate (IT) network.
- Incoming access from corporate (IT) into OT must terminate within a segmented OT demilitarized zone (DMZ). The OT DMZ must require that a separate level of authentication and access be granted (outside of leveraging an account or endpoint that resides within the corporate IT domain).
- Explicit firewall rules should restrict both incoming traffic from the corporate environment and outgoing traffic from the OT environment. Firewalls should be configured using the principle of deny by default, with only approved and authorized traffic flows permitted. Egress (internet) traffic flows for all assets that support OT should also follow the deny-by-default model.
- Identity (account) segmentation must be enforced between corporate IT and OT. An account or endpoint within either environment should not have any permissions or access rights assigned outside of its respective environment.
- Remote access to the OT environment should not leverage the same accounts that have remote access permissions assigned within the corporate IT environment. MFA using separate credentials should be enforced for remotely accessing OT assets and resources.
- Training and verification of manual control processes, including isolation and reliability verification for safety systems.
- Secured enclaves for storing backups, programming logic, and logistical diagrams for systems and devices that comprise the OT infrastructure.
- The default usernames and passwords associated with OT devices should always be changed from the default vendor configuration(s).

Detection Opportunities for IT and OT Segmented Environments

- Network Service Scanning (T1046 – Network Service Scanning): Search for instances where a threat actor is performing internal network discovery to identify open ports and services between segmented environments.
- Unauthorized Authentication Attempts Between Segmented Environments (T1078 – Valid Accounts): Search for failed logins for accounts limited to one environment attempting to log in within another environment. This can detect threat actors attempting to reuse credentials for lateral movement between networks.

Table 5: Detection opportunities for IT and OT segmented environments

Egress Restrictions

Servers and assets that are infrequently rebooted are highly targeted by threat actors for establishing backdoors that create persistent beacons to command-and-control (C2) infrastructure.
By blocking or severely limiting internet access for these types of assets, an organization can effectively reduce the risk of a threat actor compromising servers, extracting data, or installing backdoors that leverage egress communications to maintain access.

Egress restrictions should be enforced so that servers, internal network devices, critical IT assets, OT assets, and field devices cannot attempt to communicate with external sites and addresses (internet resources). The concept of deny by default should apply to all servers, network devices, and critical assets (both IT and OT), with only allow-listed and authorized egress traffic flows explicitly defined and enforced. Where possible, this should include blocking recursive Domain Name System (DNS) resolutions not included in an allow-list, to prevent communication via DNS tunneling.

If possible, egress traffic should be routed through an inspection layer (such as a proxy) to monitor external connections and block any connections to malicious domains or IP addresses. Connections to uncategorized network locations (e.g., a domain that has been recently registered) should not be permitted. Ideally, DNS requests would be routed through an external service (e.g., Cisco Umbrella, Infoblox DDI) to monitor for lookups of malicious domains.

Threat actors often attempt to harvest credentials (including New Technology Local Area Network [LAN] Manager [NTLM] hashes) based upon outbound Server Message Block (SMB) or Web-based Distributed Authoring and Versioning (WebDAV) communications. Organizations should review and limit the scope of egress protocols that are permissible from any endpoint within the environment.
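The deny-by-default model described above reduces to a small decision procedure. A minimal sketch (the port-to-protocol mapping and the choice to deny high-risk ports even when allow-listed are this example's assumptions, not a prescribed policy):

```python
# Well-known ports for the high-risk egress protocols discussed in the post.
HIGH_RISK_PORTS = {21: "FTP", 22: "SSH", 69: "TFTP", 445: "SMB", 3389: "RDP"}

def egress_decision(dest_host, dest_port, allow_list):
    """Deny-by-default egress policy: a flow is permitted only if the exact
    (host, port) pair is explicitly allow-listed. As a belt-and-suspenders
    choice, high-risk protocol ports are denied even when allow-listed."""
    if dest_port in HIGH_RISK_PORTS:
        return ("deny", f"high-risk protocol ({HIGH_RISK_PORTS[dest_port]})")
    if (dest_host, dest_port) in allow_list:
        return ("allow", "explicitly allow-listed")
    return ("deny", "not on allow-list (default deny)")

allow = {("updates.example.com", 443)}
print(egress_decision("updates.example.com", 443, allow))  # allowed
print(egress_decision("evil.example.net", 443, allow))     # denied: default deny
print(egress_decision("updates.example.com", 445, allow))  # denied: SMB egress
```

Real enforcement happens on firewalls and proxies, of course; the value of expressing the policy this way is that it can be unit-tested against the intended traffic matrix before deployment.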
While Hypertext Transfer Protocol (HTTP) (Transmission Control Protocol [TCP]/80) and HTTP Secure (HTTPS) (TCP/443) egress communications are likely required for many user-based endpoints, the scope of external sites and addresses can potentially be limited using web traffic-filtering technologies. Ideally, organizations should only permit egress protocols and communications based upon a predefined allow-list. Common high-risk ports for egress restrictions include:

- File Transfer Protocol (FTP)
- Remote Desktop Protocol (RDP)
- Secure Shell (SSH)
- Server Message Block (SMB)
- Trivial File Transfer Protocol (TFTP)
- WebDAV

Detection Opportunities for Suspicious Egress Traffic Flows

- External Connection Attempt to a Known Malicious IP (TA0011 – Command and Control): Leverage threat feeds to identify attempted connections to known bad IP addresses.
- External Communications from Servers, Critical Assets, and Isolated Network Segments (TA0011 – Command and Control): Search for egress traffic flows from subnets and addresses that correlate to servers, critical assets, OT segments, and field devices.
- Outbound Connections Attempted Over SMB (T1212 – Exploitation for Credential Access): Search for external connection attempts over SMB, as this may be an attempt to harvest credential hashes.

Table 6: Detection opportunities for suspicious egress traffic flows

Virtualization Infrastructure Protections

Threat actors often target virtualization infrastructure (e.g., VMware vSphere, Microsoft Hyper-V) as part of their reconnaissance, lateral movement, data theft, and potential ransomware deployment objectives. Securing virtualization infrastructure requires a Zero Trust network posture as a primary defense. Because management appliances often lack native MFA for local privileged accounts, identity-based security alone can be a high-risk single point of failure.
If credentials are compromised, the logical network architecture becomes the final line of defense protecting the virtualization management plane. To reduce the attack surface of virtualized infrastructure, a best practice for VMware vSphere vCenter ESXi and Hyper-V appliances and servers is to isolate and restrict access to the management interfaces, essentially enclaving these interfaces within isolated virtual local area networks (VLANs) (network segments) where connectivity is only permissible from dedicated subnets where administrative actions can be initiated. To protect the virtualization control plane, organizations must consider a "defense-in-depth" network model. This architecture integrates physical isolation and east-west micro-segmentation to remove all access paths from untrusted networks. The result is a management zone that remains isolated and resilient, even during an active intrusion. VMware vSphere Zero-Trust Network Architecture  The primary goal is to ensure that even if privileged credentials are compromised, the logical network remains the definitive defensive layer preventing access to virtualization management interfaces. Immutable VLAN Segmentation: Enforce strict isolation using distinct 802.1Q VLAN IDs for host management, Infrastructure/VCSA, vMotion (non-routable), Storage (non-routable), and production Guest VMs. Virtual Routing and Forwarding (VRF): Transition all infrastructure VLANs into a dedicated VRF instance. This ensures that even a total compromise of the "User" or "Guest" zones results in no available route to the management zone(s). Layer 3 and 4 Access Policies The management network must be accessible only from trusted, hardened sources. PAW-Exclusive Access: Deconstruct all direct routes from the general corporate LAN to management subnets. Access must originate strictly from a designated Privileged Access Workstation (PAW) subnet. 
Ingress Filtering (Management Zone): ALLOW: TCP/443 (UI/API) and TCP/902 (MKS) from the PAW subnet only. DENY: Explicitly block SSH (TCP/22) and VAMI (TCP/5480) from all sources except the PAW subnet. Restrictive Egress Policy: Enforce outbound filtering at the hardware gateway (as the VCSA GUI cannot manage egress). To prevent persistence using C2 traffic and data exfiltration, block all internet access except to specific, verified update servers (e.g., VMware Update Manager) and authorized identity providers. Host-Based Firewall Enforcement Complement network firewalls with host-level filtering to eliminate visibility gaps within the same VLAN. VCSA (Photon OS): Transition the default policy to "Default Deny" via the VAMI or, preferably, at the OS level using iptables/nftables for granular source/destination mapping.  ESXi Hypervisors: Restrict all services (SSH, Web Access, NFC/Storage) to specific management IPs by deselecting "Allow connections from any IP address." Additional information related to VMware vSphere VCSA host based firewalls. A listing of administrative ports associated with VMWare vCenter (that should be targeted for isolation). Hyper-V Zero-Trust Network Architecture  Similar to vSphere, Hyper-V requires strict isolation of its various traffic types to prevent lateral movement from guest workloads to the management plane. VLAN Segmentation: Organizations must enforce isolation using distinct VLANs for Host Management, Live Migration, Cluster Heartbeat (CSV), and Production Guest VMs. Non-Routable Networks: Traffic for Live Migration and Cluster Shared Volumes (CSV) should be placed on non-routable VLANs to ensure these high-bandwidth, sensitive streams cannot be intercepted from other segments. Layer 3 and 4 Access Policies The management network must be accessible only from trusted, hardened sources. PAW-Exclusive Access: Deconstruct all direct routes from the general corporate LAN to management subnets. 
Access must originate strictly from a designated Privileged Access Workstation (PAW) subnet.
- Ingress Filtering (Management Zone): ALLOW WinRM/PowerShell Remoting (TCP/5985 and TCP/5986), RDP (TCP/3389), and WMI/RPC (TCP/135 and dynamic RPC ports) strictly from the PAW subnet. If using Windows Admin Center, allow HTTPS (TCP/443) to the gateway. DENY: explicitly block SMB (TCP/445), RPC/WMI (TCP/135), and all other management traffic from untrusted sources to prevent credential theft and lateral movement.
- Restrictive Egress Policy: Enforce outbound filtering at the network gateway. To prevent persistence via C2 traffic and data exfiltration, block all internet access from Hyper-V hosts except to specific, verified update servers (e.g., internal WSUS), authorized Active Directory domain controllers, and Key Management Servers (KMS).

Host-Based Firewall Enforcement

Use Windows Firewall with Advanced Security (WFAS) to achieve a defense-in-depth posture at the host level.

- Scope Restriction: For all enabled management rules (e.g., File and Printer Sharing, WMI, PowerShell Remoting), modify the Remote IP Address scope to "These IP addresses" and enter only the PAW and management server subnets.
- Management Logging: Enable logging for dropped packets in the Windows Firewall profile. This allows the SIEM to ingest "denied" connection attempts, which serve as high-fidelity indicators of internal reconnaissance or unauthorized access attempts.

Additional information related to Hyper-V host-based firewalls. Additional information related to securing Hyper-V.

General Virtualization Hardening

To protect management interfaces for VMware vSphere, the VMkernel network interface card (NIC) should not be bound to the same virtual network assigned to virtual machines running on the host. Additionally, ESXi servers can be configured in lockdown mode, which only allows console access from the vCenter server(s). Additional information related to lockdown mode.
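The layered ingress policy described above can be expressed as a simple decision function. This is a minimal sketch, not a firewall implementation: the subnets and the set of PAW-permitted ports are illustrative assumptions standing in for an organization's own addressing plan and rule set.

```python
import ipaddress

# Hypothetical subnets for illustration; substitute your own addressing plan.
PAW_SUBNET = ipaddress.ip_network("10.10.50.0/24")   # assumed PAW subnet
MGMT_SUBNET = ipaddress.ip_network("10.10.10.0/24")  # assumed management zone

# Ports reachable from the PAW subnet only: UI/API (443), MKS (902),
# plus SSH (22) and VAMI (5480), which are denied from all other sources.
PAW_ALLOWED_PORTS = {443, 902, 22, 5480}

def ingress_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Decide whether a TCP flow into the management zone should pass."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    if dst not in MGMT_SUBNET:
        return True   # not management-zone traffic; out of scope here
    if src not in PAW_SUBNET:
        return False  # default deny for all non-PAW sources
    return dst_port in PAW_ALLOWED_PORTS
```

The same shape applies to the Hyper-V rules, with the port set swapped for WinRM, RDP, and WMI/RPC. The key property is the default-deny branch: any source outside the PAW subnet is rejected before ports are even considered.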
The SSH protocol (TCP/22) provides a common channel for accessing a physical virtualization server or appliance (vCenter) for administration and troubleshooting. Threat actors commonly leverage SSH for direct access to virtualization infrastructure to conduct destructive attacks. In addition to enclaving access to administrative interfaces, SSH access to virtualization infrastructure should be disabled and only enabled for specific use cases. If SSH is required, network ACLs should be used to limit where connections can originate.

Identity segmentation should also be configured for access to administrative interfaces associated with virtualization infrastructure. If Active Directory authentication provides direct integrated access to the physical virtualization stack, a threat actor that has compromised a valid Active Directory account (with permissions to manage the virtualization infrastructure) could potentially use the account to directly access virtualized systems to steal data or perform destructive actions. Authentication to virtualized infrastructure should rely upon dedicated and unique accounts that are configured with strong passwords and are not reused for additional access within the environment. Additionally, management interfaces associated with virtualization infrastructure should only be accessed from isolated privileged access workstations, which prevent the storing and caching of passwords used for accessing critical infrastructure components.

Protecting Hypervisors Against Offline Credential Theft and Exfiltration

Organizations should implement a proactive, defense-in-depth technical hardening strategy to systematically address security gaps and mitigate the risk of offline credential theft from the hypervisor layer. The core of this attack is an offline credential theft technique known as a "Disk Swap."
Once an adversary has administrative control over the hypervisor (vSphere or Hyper-V), they perform the following steps:

- Target Identification: The actor identifies a critical virtualized asset, such as a Domain Controller (DC).
- Offline Manipulation: The target VM is powered off, and its virtual disk file (e.g., .vmdk for VMware or .vhd/.vhdx for Hyper-V) is detached.
- NTDS.dit Extraction: The disk is attached to a staging or "orphaned" VM under the attacker's control. From this unmonitored machine, they copy the NTDS.dit Active Directory database.
- Stealthy Recovery: The disk is re-attached to the original DC, and the VM is powered back on, leaving minimal forensic evidence within the guest operating system.

Hardening and Mitigation Guidance

To defend against this technique, organizations must implement a defense-in-depth strategy that focuses on cryptographic isolation and strict lifecycle management.

- Virtual Machine Encryption: Encrypt all Tier 0 virtualized assets (e.g., Domain Controllers, PKI, and backup servers). Encryption ensures that even if a virtual disk file is stolen or detached, it remains unreadable without access to the specific keys.
- Strict Decommissioning Processes: Do not leave powered-off or "orphaned" virtual machines on datastores. These "ghost" VMs are ideal staging environments for attackers. Formally decommission assets by deleting their virtual disks rather than just removing them from the inventory.
- Harden Hypervisor Accounts: Disable or restrict default administrative accounts (such as root on ESXi or the local Administrator on Hyper-V hosts). Enforce lockdown mode (a VMware ESXi feature) where possible to prevent direct host-level changes outside of the central management plane.
- Remote Audit Logging: Enable and forward all hypervisor-level audit logs (e.g., hostd.log, vpxa.log, or Windows event logs for Hyper-V) to a centralized SIEM.
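With hypervisor audit logs forwarded to a SIEM, the Disk Swap sequence above becomes detectable as a correlation: a disk detached from a Tier 0 VM and re-attached to a different VM shortly afterwards. The following is a minimal sketch of that correlation logic; the event dictionary fields and the 30-minute window are illustrative assumptions, not the actual vCenter event schema.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)  # assumed correlation window

def find_disk_swaps(events, tier0_vms):
    """Flag possible Disk Swap sequences.

    events: list of dicts with illustrative keys 'time', 'type'
    ('disk_detach' / 'disk_attach'), 'vm', and 'disk'.
    tier0_vms: set of VM names considered Tier 0 assets.
    Returns (source_vm, staging_vm, disk) tuples for review.
    """
    detaches = [e for e in events
                if e["type"] == "disk_detach" and e["vm"] in tier0_vms]
    alerts = []
    for d in detaches:
        for e in events:
            # Same disk, different VM, attached within the window after detach.
            if (e["type"] == "disk_attach"
                    and e["disk"] == d["disk"]
                    and e["vm"] != d["vm"]
                    and timedelta(0) <= e["time"] - d["time"] <= WINDOW):
                alerts.append((d["vm"], e["vm"], d["disk"]))
    return alerts
```

In practice the detach and attach events would come from parsed vCenter reconfiguration events (or Hyper-V VMMS logs); the point is the pairing of detach and cross-VM reattach within a short window, which is rare in legitimate administration.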
Protecting Backups

Security measures must encompass both production and backup environments. An attack on the production plane is often coupled with a simultaneous focus on backup integrity, creating a total loss of operational continuity. Virtual disk files (VMDK for VMware and VHD/VHDX for Hyper-V) represent a high-value target for offline data theft and direct manipulation.

Hardening and Mitigation Guidance

To mitigate the risk of offline theft and backup manipulation, organizations must implement a "default encrypted" policy across the entire lifecycle of the virtual disk.

- At-Rest Encryption for All Tier 0 Assets: Implement vSphere VM Encryption or Hyper-V shielded VMs for all critical infrastructure (e.g., Domain Controllers, certificate authorities). This ensures that the raw VMDK or VHDX files are cryptographically protected, rendering them unreadable if detached or mounted by an unauthorized party.
- Encrypted Backup Repositories: Ensure that the backup application is configured to encrypt backup data at rest using a unique key stored in a separate, hardened Key Management System (KMS). This prevents direct manipulation of the backup files even if the backup storage itself is compromised.
- Network Isolation of Storage and Backups: Isolate the storage management network and the backup infrastructure into dedicated, non-routable VLANs. Access to the backup console and repositories must require phishing-resistant MFA and originate from a designated Privileged Access Workstation (PAW).
- Immutability and Air-Gapping: Use immutable backup repositories to ensure that once a backup is written, it cannot be modified or deleted by any user, including a compromised administrator, for a set period. This provides a definitive recovery point in the event of a ransomware attack or intentional data sabotage.
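The immutability property described above can be stated precisely: deletion is refused until the retention period expires, and the caller's privilege level plays no part in the decision. This minimal sketch models that contract; the class and field names are illustrative, not any vendor's API.

```python
from datetime import datetime, timedelta

class ImmutableBackup:
    """Sketch of a WORM-style retention check on a backup object."""

    def __init__(self, name: str, written_at: datetime, retention_days: int):
        self.name = name
        self.written_at = written_at
        self.retain_until = written_at + timedelta(days=retention_days)

    def can_delete(self, now: datetime, is_admin: bool = False) -> bool:
        # Privilege is deliberately ignored: even a compromised
        # administrator cannot delete the object before expiry.
        return now >= self.retain_until
```

Real immutable repositories enforce this at the storage layer (object lock, hardware WORM), which is what makes it hold even when the backup application itself is compromised.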
Detection Opportunities for Monitoring Virtualization Infrastructure

- Unauthorized Access Attempt to Virtualized Infrastructure (T1078 – Valid Accounts): Search for attempted logins to virtualized infrastructure by unauthorized accounts.
- Unauthorized SSH Connection Attempt (T1021.004 – Remote Services: SSH): Search for instances where an SSH connection is attempted when SSH has not been enabled for an approved purpose or is not expected from a specific origination asset.
- ESXi Shell/SSH Enablement (T1059.004 – Command and Scripting Interpreter): Monitor ESXi hostd.log and shell.log for the SSH service being enabled via DCUI, the vSphere Client, or API calls. Alert on any ESXi SSH enablement event that was not preceded by an approved change request.
- Bulk VM Power-Off Events (T1529 – System Shutdown/Reboot): Detect sequences where multiple VMs are powered off within a short time window (e.g., more than 5 VMs in 10 minutes) via vCenter events. Correlate with vpxd.log "ReceivedPowerOffVM" events.
- VMDK File Access from Non-Standard Processes (T1486 – Data Encrypted for Impact): Monitor for processes accessing .vmdk, .vmx, .vmsd, or .vmsn files outside of normal VMware service processes (hostd, vpxd, fdm).
- execInstalledOnly Disablement (T1562.001 – Impair Defenses: Disable or Modify Tools): Monitor ESXi shell.log for execution of "esxcli system settings encryption set" with "--require-exec-installed-only=F" or "--require-secure-boot=F". Alert on any cryptographic enforcement disablement event that was not preceded by an approved change request.
- vCenter SSO Identity Modification (T1556 – Modify Authentication Process): Monitor vCenter events and vpxd.log for modifications to SSO identity sources, including the addition of new LDAP providers or changes to vsphere.local administrator group membership. Alert on an identity source change not initiated from a designated PAW subnet.
- VM Disk Detach and Reattach to Non-Inventory VM (T1486 – Data Encrypted for Impact): Detect sequences where a virtual disk is removed from a Tier 0 asset via "vim.event.VmReconfiguredEvent" and subsequently attached to an orphaned or non-standard inventory VM. Correlate with "vim.event.VmRegisteredEvent" events on non-standard datastore paths within the same time window.
- VCSA Shell Command Anomaly (T1059.004 – Command and Scripting Interpreter: Unix Shell): Monitor VCSA shell audit logs for execution of high-risk commands (e.g., wget, curl, psql, certificate-manager) by any user following an interactive SSH session. Alert on any instance where these commands are executed outside of an approved change window.
- Bulk Snapshot Deletion (T1490 – Inhibit System Recovery): Detect sequences where snapshots are removed across multiple VMs within a short time window via vCenter events. Correlate with "vim-cmd vmsvc/snapshot.removeall" execution in hostd.log to confirm host-level action.

Table 7: Detection opportunities for VMware vSphere

Protecting Against DDoS Attacks

A distributed denial-of-service (DDoS) attack is an example of a disruptive attack that could impact the availability of cloud-based resources and services. Modern DDoS protection must extend beyond the legacy concepts of filtering and rate limiting to include cloud-native capabilities that can scale to counter adversarial capabilities. In addition to third-party DDoS and web application protection services, the following table provides an overview of DDoS protection capabilities within common cloud-based infrastructures.
- Google Cloud: Google Cloud Armor
- Amazon Web Services: AWS Shield
- Microsoft Azure: Azure DDoS Protection
- Cloud Platform Agnostic: Imperva WAF, Akamai WAF, Cloudflare DDoS Protection

Table 8: Common cloud capabilities to mitigate DDoS attacks

Hardening the Cloud Perimeter

With the hybrid operating model of modern infrastructure, cloud consoles and SaaS platforms are high-value targets for credential harvesting and data exfiltration. Minimizing these risks requires a dual-defense strategy: robust identity controls to prevent unauthorized access, and platform-specific guardrails to protect access to resources and data and to minimize the attack surface.

Strong Authentication Enforcement

Strong authentication is the foundational requirement for cloud resilience and securing cloud infrastructure. As in on-premises environments, a compromise of a privileged credential, token, or session could lead to unintended consequences that result in a high-impact event for an organization. To mitigate these pervasive risks, organizations must unconditionally enforce strong authentication for all external-facing cloud services, administrative portals, and SaaS platforms.

Organizations should enforce the use of phishing-resistant authenticators, such as FIDO2 (WebAuthn) hardware tokens or passkeys, or certificate-based authentication for accounts assigned privileged roles and functions. For non-privileged users, authenticator software (e.g., Microsoft Authenticator or Okta Verify) should be configured to utilize device-bound factors such as Windows Hello for Business or Touch ID. Additionally, organizations should leverage the combination of identity and device attestation as part of the authentication transaction. This includes enforcing a validated-device access policy that restricts privileged access to only originate from managed, compliant, and healthy devices.
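A validated-device policy of this kind reduces to a conjunction of claims carried by the session. The sketch below assumes a session object with illustrative fields for the authentication method and device posture; real deployments would evaluate this in a conditional-access engine rather than application code.

```python
# Authentication methods treated as phishing-resistant for this sketch.
PHISHING_RESISTANT = {"fido2", "passkey", "certificate"}

def allow_privileged_access(session: dict) -> bool:
    """Grant privileged access only when every claim holds.

    session keys (illustrative assumptions): 'auth_method',
    'device_managed', 'device_compliant'. Missing claims fail closed.
    """
    return (session.get("auth_method") in PHISHING_RESISTANT
            and session.get("device_managed", False)
            and session.get("device_compliant", False))
```

Note the fail-closed defaults: a session that cannot attest to device management or compliance is denied, rather than assumed healthy.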
Trusted network zones should be defined to restrict access to cloud resources from the open internet. Untrusted network zones should be defined to block authentication originating from anonymizing services such as VPNs or Tor. Using device-bound session credentials where possible mitigates the risk of session token theft.

Identity and Device Segmentation for Privileged Actions

The implementation of privileged access workstations (PAWs) is a critical defense against threat actors attempting to compromise administrative sessions. A PAW is a highly hardened, dedicated hardware endpoint used exclusively for sensitive administrative tasks. Administrators should use a non-privileged account for daily tasks, while privileged actions are permitted only from the hardened PAW or from explicitly defined IP ranges. This "air gap" between daily communication and administration prevents an adversary from moving laterally from a compromised non-privileged identity to a privileged context within hybrid environments.

Just-in-Time Access and the Principle of Least Privilege

Static, standing privileges present a security risk in hybrid environments. Following a zero-trust cloud architecture, administrative privileges should be entirely ephemeral. Implementing Just-in-Time (JIT) and Just-Enough-Access (JEA) mechanisms ensures that administrators are granted only the specific, granular permissions necessary to perform a discrete task, and only for a strictly limited duration, after which the permissions are automatically revoked. This architectural model provides organizations with the ability to enforce approvals for privileged actions, enhanced monitoring, and detailed visibility into any privileged actions taken within a specific session.

Securing Non-Human Identities

Organizations should implement identity governance practices that include processes to rotate API keys, certificates, service account secrets, tokens, and sessions on a predefined basis.
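A rotation policy like this is straightforward to audit: compare each credential's creation time against the maximum permitted age. The sketch below is illustrative; the 90-day limit and the inventory field names are assumptions standing in for an organization's own policy and identity-governance data.

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)  # assumed rotation policy

def keys_due_for_rotation(keys, now: datetime):
    """Return IDs of non-human credentials past the rotation deadline.

    keys: iterable of dicts with illustrative fields 'id' and
    'created' (a datetime of when the secret was issued).
    """
    return [k["id"] for k in keys if now - k["created"] > MAX_AGE]
```

Run against an exported inventory of API keys, service account secrets, and certificates, this kind of check turns the "predefined basis" into an enforceable, reportable control.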
AI agents or identities tied to autonomous workflows should be configured with strictly scoped permissions and associated monitoring. Non-privileged users should be restricted from authorizing third-party application integrations or creating API keys without organizational approval. Continuous scanning should be performed to identify and remediate hard-coded secrets and sensitive credentials across all cloud and SaaS environments.

Storage Infrastructure Security and Immutable Backups

The strategic objective of a destructive cyberattack, whether for extortion or sabotage, is to prolong recovery and reconstitution efforts by ensuring data is irrecoverable. Modern adversaries systematically target the backup plane as part of a destructive event. If backups remain mutable or share an identity plane with the primary environment, attackers can delete or encrypt them, transforming an incident into a prolonged and chaotic recovery exercise.

While modern redundancy for backups should include multiple data copies across diverse media, geographic separation can be subverted as a defensive strategy if logical access is unified. To ensure resilience against destructive attacks, the secondary recovery environment should reside within a sovereign cloud tenant or isolated subscription. This environment should be governed by an independent Identity and Access Management (IAM) plane, using distinct credentials and administrative personas that share no commonality with the production environment.

Backups within an isolated environment must be anchored by immutable storage architectures. By leveraging hardware-verified Write-Once, Read-Many (WORM) technology, the recovery plane guarantees data integrity: once committed, data cannot be modified, encrypted, or deleted, even by accounts with root or global administrative privileges, until the retention period expires.
This creates a definitive "fail-safe" that ensures a known-good recovery point remains accessible regardless of potential security risks in the primary environment. Additional defense-in-depth security architecture controls relevant to common cloud-based infrastructures are included in Table 9.

- Google Cloud: identity controls: IAM Deny Policies; secrets governance: Secret Manager; network controls: VPC Service Controls; policy guardrails: Organization Policy Service
- Amazon Web Services: identity controls: IAM Identity Center; secrets governance: Secrets Manager; network controls: Verified Access; policy guardrails: Service Control Policies
- Microsoft Azure: identity controls: Entra ID (PIM); secrets governance: Azure Key Vault; network controls: Azure Virtual Network, Private Link; policy guardrails: Azure Policy
- Cloud-agnostic security solutions: Okta, SailPoint, Ping Identity, HashiCorp Vault, CyberArk, Zscaler, Netskope SSE, Wiz, Palo Alto Prisma Cloud, Orca Security

Table 9: Common cloud capabilities for infrastructure hardening

Detection Opportunities for Protecting Cloud Infr