Protecting Sensitive Data During a Pandemic
in the Age of Regulation and Breaches
Of all the radical changes brought about by the coronavirus pandemic, the way we interact with digital data may have been the most impactful.
As the world began to shut down in early-to-mid 2020, global activity shifted dramatically to the online sphere. Real-world activities such as social interaction, schooling, and, most importantly, work quickly moved to digital platforms.
These changes brought a slew of new challenges at an unprecedented scale. Simply equipping individuals and organizations with both the hardware and the infrastructure needed to operate exclusively online proved an enormous task. But beyond the logistics, the shift to digital platforms alerted much of the business world to how severely online exposure threatens data security.
According to a report compiled by Cybersecurity Insiders (CI), the sheer scale of remote work exponentially increased data security risks across industries. In a poll of hundreds of IT professionals, more than two-thirds (69%) were concerned about Work From Home (WFH) security risks. The majority of security workers pointed to low user awareness training and insecure home or public WiFi networks as the biggest issues. Others pointed to the use of at-risk personal devices for work tasks and sensitive data leakage as prime threat contributors.
Researchers at CI were not alone. A similar report by Tessian claims that nearly half of employees (48 percent) are less likely to follow safe data practices when working from home. The data, according to Tessian, highlights how traditional security solutions are failing to curb insider threats and accidental data loss in the age of the remote workforce.
As revealing as these stats are, it would be a mistake to assume they represent a wholly new phenomenon. These trends are better understood as the continuation of a years-long pattern in the world of data security, one in which the threats to company data have become increasingly apparent, and the regulations, laws, and standards designed to combat them increasingly pervasive.
The Age of Regulation
2018 was marked by many as the year of digital regulation. Two major laws in particular came online that changed the face of the data security industry. Europe’s General Data Protection Regulation (GDPR), which came into effect in May of that year, was the most important legal document affecting personal information and its handling to appear in a generation. It introduced sweeping changes to data security, specifically as it pertains to personally identifiable information, or PII. Barely a month after GDPR took effect, the much-anticipated California Consumer Privacy Act (CCPA) was passed by the state’s legislature. Coming from the home of Silicon Valley and a slew of globally influential tech companies, the CCPA was a major shock to the IT universe.
The influence both these laws had on how companies relate to the sensitive data they collect is difficult to overstate.
Practically speaking, the most impactful point contained in the two laws was codifying the “right” of individuals to order their data deleted. Companies were obligated to provide customers clear methods for making delete requests and to comply with those requests within a short, legally defined timeframe. Needless to say, the legal requirement to dispose of PII on demand put tremendous responsibility on firms to properly store, and keep track of, the troves of data they gathered on clientele.
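The bookkeeping such a “right to delete” implies can be sketched in a few lines. The class and names below are purely illustrative, not any particular vendor’s or regulator’s system:

```python
from datetime import datetime, timedelta

class PiiInventory:
    """Toy inventory tracking where each customer's personal data lives."""

    def __init__(self, response_deadline_days=45):
        # CCPA allows 45 days to honor a verified request;
        # GDPR expects action "without undue delay" (roughly a month).
        self.deadline = timedelta(days=response_deadline_days)
        self.locations = {}   # customer_id -> list of storage locations
        self.requests = {}    # customer_id -> datetime the request arrived

    def register(self, customer_id, location):
        """Record that a system stores personal data for this customer."""
        self.locations.setdefault(customer_id, []).append(location)

    def request_deletion(self, customer_id):
        """Log a delete request and return the compliance deadline."""
        received = datetime.now()
        self.requests[customer_id] = received
        return received + self.deadline

    def purge(self, customer_id):
        """Erase every recorded location and close the open request."""
        removed = self.locations.pop(customer_id, [])
        self.requests.pop(customer_id, None)
        return removed
```

The hard part in practice is not the deletion itself but the register step: a company can only honor a request swiftly if it has tracked every copy of a customer’s data in the first place.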
It’s worth noting that CCPA and GDPR are not identical with regard to this right. First, CCPA was designed to protect only customers of larger companies: businesses with annual gross revenues above $25 million, or those that deal primarily in processing personal information. GDPR, on the other hand, applies to any entity that processes personal data. As for the scope of subjects’ rights, CCPA’s “Right to Delete” applies only to information given to a company by the subjects themselves, not data collected by a third party. GDPR’s “Right to be Forgotten,” by contrast, applies to any and all stored data that falls under the PII category.
Their substantial differences aside, GDPR and CCPA have one important thing in common: both make it incumbent on companies to demonstrate they have a handle on the private data they process. CCPA requires processors to maintain “reasonable security procedures” to ensure client data is not lost, and holds companies directly responsible for any loss that results from inadequate protection. Later, one of the more important amendments to CCPA extended the “reasonable security” requirement to employee data as well, requiring companies to (a) safeguard personal information and (b) notify employees of what personal information was collected. Similarly, GDPR’s famous Article 32 obligates controllers and processors to implement “appropriate technical and organizational measures” to protect data “appropriate to the [level of] risk.”
The Limits of Protection
The vague language of these requirements presents a huge challenge as far as compliance is concerned. How are companies meant to know they’re adequately protecting their data? The question is especially difficult given the rules’ stratified nature: the more sensitive the data, the more protection it requires.
Scrambling to meet compliance standards, companies naturally ran into all the limitations of traditional Data Loss Prevention (DLP) approaches. After spending huge sums of capital and man-hours to deploy often highly complex platforms, upending their network protocols, and retraining their workforces, companies remain unable to say with confidence that their data is protected from exfiltration, or even from basic human error. Worse still is the toll DLP tends to take on company operations: even when a platform succeeds in providing a modicum of protection, it more often than not hinders productivity.
Despite the industry’s huge efforts to refine DLP rules to fit unique users and specific business cases, oversensitive DLP policies tend to misinterpret employees’ actions or intent, with the result that users are regularly blocked from completing their work.
Classic symptoms of this overarching problem include the dreaded false positive dilemma, where an overzealous system generates constant alerts that keep IT staff and engineers chasing ghosts. On the other side of the spectrum lies the threat of false negatives, which allow important files and data streams to slip through the cracks. At root, all these pitfalls are a function of the narrowly defined, non-dynamic approaches to DLP that, despite their outdatedness, have remained standard in the industry.
When all is said and done, these archaic methods have left huge gaps in DLP capabilities when it comes to data classification and data discovery. Many DLP programs, even those with machine learning functions, rely on pre-set algorithms and regular-expression patterns in order to function. These rules define what counts as “sensitive data” and determine which controls and safety measures are activated in any given scenario, leading to serious problems with identification accuracy, the key to effective DLP coverage. Furthermore, such rigid platforms are unable to keep pace with the constant data “switch-up” characteristic of the contemporary business world: many DLP solutions simply are not equipped to deal with the regular introduction of new confidential or proprietary data into their systems.
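A minimal sketch makes the brittleness concrete. The patterns below are generic examples of the fixed, regex-driven rules described above, not any vendor’s actual rule set:

```python
import re

# Simplified rule set of the kind many legacy DLP engines use:
# fixed regular expressions mapped to a sensitivity label.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of sensitivity labels whose pattern matches."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

# A canonically formatted SSN is caught...
assert classify("SSN: 123-45-6789") == {"ssn"}
# ...but trivially reformatting it slips past the rule (false negative),
assert classify("SSN: 123 45 6789") == set()
# ...while an unrelated internal reference number that happens to share
# the 3-2-4 digit shape triggers an alert (false positive).
assert classify("order ref 555-12-4444") == {"ssn"}
```

Because the rules encode only surface form, any data that was not anticipated when the patterns were written, such as a newly introduced proprietary document type, is invisible to them until someone updates the rule set by hand.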
All of this translates into serious complications for companies trying to navigate the new and complex world of data regulation.
Data Security for the Modern Age
GTB’s Data Security That Works™ has empowered companies to take on today’s data protection challenges.
Using patented artificial intelligence models, GTB’s data protection programs employ an intelligent, learning-based system to manage sensitive data. Rather than relying on set models, GTB programs continually analyze data with ‘intelligent’ algorithms to dynamically identify relevant files and data sets. The platform’s AI homes in on safe user behavior over time, ensuring that threatening anomalies are flagged while regular operations stay streamlined.

This accuracy has allowed GTB to solve the false positive problem, providing high-resolution identification while leaving the vast majority of activity undisturbed. False negatives are also drastically limited: based on indicators learned from previously identified sensitive files and data streams, GTB can track proprietary content even when elements of a file or packet are changed. Attempted data leaks, intentional or accidental, can then be blocked even after the data is altered from its original form.

Deploying GTB’s platform is also seamless; it can be fully integrated into a network without long and costly set-ups or undue interruptions to business continuity.
GTB gives administrators seeking to protect their own and their client’s data the best of both worlds: High-security assurance, along with all the benefits of an efficient and automated platform.
“100 percent catch rate for data leakage? You bet!
We give this product our SC Lab Approved rating, the highest recognition we offer.”
For these reasons, GTB is a top choice among those who take data protection seriously, and is used by major players across industries including finance, healthcare, defense, and government.