Disrupting Ransomware with Advanced File System Techniques

Technology is becoming more effective at early ransomware detection. Solutions can often automatically shut down attacks and minimize the damage. It is reasonable to conclude, however, that there is no truly impenetrable ransomware defense. The more complete answer lies in recovery.

Maintaining pristine datasets that are more readily restored, minimizing loss while preserving data integrity, is arguably as important as prevention in a cybersecure posture. This means making data immune from damage or alteration, and in some cases tracking access and copy patterns, so recovery from attacks is possible without paying ransoms or exposing sensitive information in the first place.

Such threats require redefining baselines around how the enterprise operates. In the case of IT, this implies taking a hard look at how we store, access and manage data. A study by the University of Texas indicates that 94% of companies that suffer catastrophic data loss do not survive. Should data become inaccessible for 10 days or more, according to the National Archives & Records Administration, most companies will close their doors within a year.

Trusting Backups for Recovery

A fundamental question in the enterprise is whether or not backups can, in fact, be trusted after a ransomware attack. Restoring from a backup after a disaster almost always involves data loss. Even organizations that follow strict data backup procedures may find that their files have been encrypted.

While cybersecurity best practices often cite the “3-2-1” rule for storing backup copies of data (three copies of the data, on two different types of media, with one copy kept offsite), even this is insufficient on its own; cybercriminals routinely probe for any network-attached medium.

Moving backups offline may deny cybercriminals the opportunity to encrypt this data. However, the data itself becomes an “island” from which recovery is possible only using the last unadulterated backup point. Enterprise firms, particularly those with highly distributed workforces and cross-functional teams, access and modify their data as much as thousands of times each hour.

Typically, copies of user data are stored separately from the primary data, often at another location. For additional resilience, copies are stored offsite. This invariably results in some data loss, particularly if files are restored from tape, because the interval between the last backup and the attack itself creates an operational gap.

Few incident response plans adequately address the data restoration process, which can benefit from architectures that place resources closer to the edge. Interest in edge data storage backup has grown as a result of the pandemic and the prevalence of remote work.

From a prevention standpoint, some solutions can look for suspicious activity at the edge, such as actions consistent with certain attack profiles, to spot file modifications. Specifically, they can ingest file-system activity data from hosts to populate an endpoint file-system data model. However, recovery is not a part of this equation.
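
To make the idea concrete, here is a minimal sketch of what such an endpoint activity model might look like: it tracks per-host file-modification rates and flags bursts consistent with machine-speed encryption. The class names, event fields and threshold are illustrative assumptions, not any particular vendor's implementation.

```python
from collections import defaultdict, deque
from dataclasses import dataclass
import time

@dataclass
class FileEvent:
    host: str
    path: str
    action: str        # e.g. "modify", "rename", "delete"
    timestamp: float

class EndpointActivityModel:
    """Flags hosts whose file-modification rate spikes within a sliding window."""
    def __init__(self, window_seconds: int = 60, modify_threshold: int = 500):
        self.window = window_seconds
        self.threshold = modify_threshold
        self.events = defaultdict(deque)   # host -> recent modify/rename timestamps

    def ingest(self, event: FileEvent) -> bool:
        """Record one event; return True if the host's activity looks suspicious."""
        if event.action not in ("modify", "rename"):
            return False
        recent = self.events[event.host]
        recent.append(event.timestamp)
        # Drop events that have fallen out of the sliding window.
        while recent and event.timestamp - recent[0] > self.window:
            recent.popleft()
        return len(recent) > self.threshold

model = EndpointActivityModel()
suspicious = model.ingest(FileEvent("edge-filer-01", "/share/q3.xlsx", "modify", time.time()))
```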

Some cloud providers offer an iSCSI target that allows for the creation of block storage volumes. In cached mode, data is written to the cloud storage service while frequently accessed data is retained in a local cache; in stored mode, data is kept locally and asynchronously backed up to the cloud. Either way, volume snapshots offer space-efficient, versioned copies for data recovery.
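
The two modes can be pictured roughly as follows. This is a conceptual sketch only; the object-store client is a stand-in rather than any real cloud SDK, and the class names are assumptions made for illustration.

```python
class ObjectStoreStub:
    """Stand-in for a cloud object storage service."""
    def __init__(self):
        self.objects = {}
    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data

class GatewayVolume:
    def __init__(self, mode: str, store: ObjectStoreStub):
        assert mode in ("cached", "stored")
        self.mode = mode
        self.store = store
        self.local = {}      # local cache (cached mode) or primary copy (stored mode)
        self.pending = []    # writes awaiting asynchronous upload (stored mode)

    def write(self, key: str, data: bytes) -> None:
        if self.mode == "cached":
            self.store.put(key, data)   # primary copy goes straight to the cloud service
            self.local[key] = data      # keep a hot copy for fast local reads
        else:
            self.local[key] = data      # primary copy stays on local disk
            self.pending.append(key)    # backed up to the cloud asynchronously

    def replicate(self) -> None:
        """In a real gateway this runs continuously in the background."""
        for key in self.pending:
            self.store.put(key, self.local[key])
        self.pending.clear()

    def snapshot(self) -> dict:
        # Real snapshots are space-efficient (copy-on-write); this is just a point-in-time view.
        return dict(self.local)

vol = GatewayVolume("stored", ObjectStoreStub())
vol.write("block-0001", b"\x00" * 4096)
vol.replicate()
restore_point = vol.snapshot()
```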

Approaches that aim to provide an immutable data infrastructure, for instance preventing stored data from being altered and encrypting it so it is illegible to unauthorized eyes, have become so valuable that IT decision-makers often rank them alongside the total cost of ownership benefits of the solutions they acquire. File sharing systems, which often rely on data consolidation for unstructured data management, are now understood to be a crucial part of an enterprise’s ransomware recovery and protection arsenal.

Edge storage now broadly includes data residing in NAS appliances. Global file systems have been designed to use any variation of public or private cloud infrastructure as if it were a data center, even though data may be stored up to thousands of miles away from users.

Placing compute resources at the edge has also proven effective: appliances known as filers control communication with the cloud object store. Edge appliances, in some cases, can be configured to cache the most frequently used files, draw data down from a cloud object store when needed and send real-time file changes to cloud storage as new data.
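
A filer of this kind behaves much like a least-recently-used cache layered over an object store. The sketch below uses hypothetical CloudObjectStore and EdgeFiler classes to show the read-through and write-back pattern; it is not any vendor's actual appliance logic.

```python
from collections import OrderedDict

class CloudObjectStore:
    """Stand-in for a cloud object store; keys map to stored objects."""
    def __init__(self):
        self.objects = {}
    def get(self, key: str) -> bytes:
        return self.objects[key]
    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data

class EdgeFiler:
    """Caches hot files locally, fetches misses from the cloud, pushes changes back up."""
    def __init__(self, store: CloudObjectStore, cache_size: int = 1024):
        self.store = store
        self.cache = OrderedDict()   # path -> data, most recently used last
        self.cache_size = cache_size

    def read(self, path: str) -> bytes:
        if path in self.cache:                 # hot file: served from the edge
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.store.get(path)            # cold file: drawn down from the object store
        self._remember(path, data)
        return data

    def write(self, path: str, data: bytes) -> None:
        self._remember(path, data)
        self.store.put(path, data)             # real-time change sent on to cloud storage

    def _remember(self, path: str, data: bytes) -> None:
        self.cache[path] = data
        self.cache.move_to_end(path)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)     # evict the least recently used file

cloud = CloudObjectStore()
cloud.put("finance/q3.xlsx", b"original contents")
filer = EdgeFiler(cloud, cache_size=2)
filer.read("finance/q3.xlsx")    # miss: fetched from the cloud, now cached
filer.read("finance/q3.xlsx")    # hit: served locally
```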

File Storage in the Real World

Machine processes, like those involved in a ransomware attack, can encrypt files many times faster than a human can respond. However, IT departments now recognize that purpose-built file system architecture can be used, in some instances, to actually stop an attack on stored files without having to identify the cause.

The victim of a Curator ransomware crypto-locker attack recently leveraged the capabilities of their global file system to disable write access to the file that was spreading the attack, thereby preventing further contamination of the file network. This was achieved by disabling the Windows CIFS/SMB license through a Panzura file system instance, automatically cutting communication with the affected filer and forcing all locations into read-only mode.

Restoring data from such file systems is a different process than restoring from traditional backup storage. In the case of a file system that catalogs changes to every file in its metadata, the IT team has the option to roll back each file to the desired version. This type of granular snapshot restoration is designed to take less time than restoring from an offsite backup, with arguably better reliability.
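
As a rough illustration, rolling back from such metadata amounts to choosing, for every file, the newest version recorded before the attack began. The metadata layout and object keys below are assumptions made for the example, not a specific product's schema.

```python
from datetime import datetime, timezone

# Per-file version history: path -> list of (snapshot_time, object_key), oldest first.
metadata = {
    "finance/q3.xlsx": [
        (datetime(2023, 9, 1, tzinfo=timezone.utc), "obj-1187"),
        (datetime(2023, 9, 4, tzinfo=timezone.utc), "obj-2291"),   # version encrypted by the attacker
    ],
}

def rollback_plan(metadata: dict, attack_started: datetime) -> dict:
    """Return {path: object_key} pointing at the last clean version of each file."""
    plan = {}
    for path, versions in metadata.items():
        clean = [key for ts, key in versions if ts < attack_started]
        if clean:
            plan[path] = clean[-1]      # newest version recorded before the attack
    return plan

plan = rollback_plan(metadata, datetime(2023, 9, 3, tzinfo=timezone.utc))
# -> {"finance/q3.xlsx": "obj-1187"}
```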

Immutable data architectures are now also seen by IT professionals as providing an advantage over legacy storage and protection methods. Immutable object stores are designed to be impervious to overwriting. As users edit files within the file system, changes are synced to the cloud as new data objects. The metadata for each file can, in this example, be updated with every edit, recording which object blocks comprise a “file” or dataset at any given time.
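
One way to picture this append-only layout: each edit produces new, content-addressed objects, and the file's metadata simply gains another version entry pointing at them, so earlier versions remain intact. The structures below are illustrative assumptions, not the on-disk format of any specific file system.

```python
import hashlib

class ImmutableObjectStore:
    """Objects are written once, keyed by content hash, and never rewritten."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        oid = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(oid, data)   # existing objects are left untouched
        return oid

class FileMetadata:
    """Records, per version, which object blocks comprise the file."""
    def __init__(self):
        self.versions = []                    # each entry: list of block object ids

    def record_edit(self, store: ImmutableObjectStore, blocks: list) -> None:
        self.versions.append([store.put(block) for block in blocks])

store = ImmutableObjectStore()
meta = FileMetadata()
meta.record_edit(store, [b"hello ", b"world"])       # original file contents
meta.record_edit(store, [b"hello ", b"ransomware"])  # an edit creates new objects only
# meta.versions[0] still points at the original, untouched blocks.
```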

As previously discussed, caching the most frequently used files at the edge can effectively limit a ransomware attack to local data. This is because malware simply crawls file directories, and does not discern whether a file is in the cache or not. For the victim of the Curator attack, for instance, the data held in the file system was not encrypted because the ransomware instead generated data that was written to cloud storage as completely new objects.

Moreover, when malware encounters files that are not held in the cache, they must be retrieved from the cloud store, which takes more time and monopolizes bandwidth. Some advanced file systems are now augmented by data services platforms that can spot this activity and limit the breadth of damage, contributing to faster recovery.

Recent storage techniques in global file systems, while not necessarily designed for ransomware protection, make it possible to recover data from exceptionally “clean” files. With this in mind, a broader cyberresilience stance is emerging.


Glen Shok

Glen Shok has more than 25 years of experience in enterprise technology including past roles with Oracle, EMC and Cisco. He is vice president of strategic alliances at Panzura, a provider of unstructured data management solutions. He offers commentary on data storage and virtualization, NAS, cloud computing and other enterprise software topics.
