Category Archives: DATA EXFILTRATION

Zero Trust Data Protection

Zero Trust Data Protection: Out with the Old. Conventional security models, those based on firewalls, IDS, and the like, operate on the outdated assumption that "everything on the inside of an organization’s network can be trusted". The contemporary threat landscape facing IT has shown that this is simply not true. The increased attack…
Read more

Insider Threat and 3rd Party Liability

PageUp and the 3rd Party Liability Problem

3rd Party Liabilities

The tech world was thrown into a frenzy over the recent hack of international HR service provider PageUp.

In late June, PageUp’s chief executive reported “unusual activity” in the company’s IT infrastructure. An investigation was launched, and emergency notifications were distributed to PageUp’s broad client base.

The industry quickly understood: the implications of this hack were potentially devastating.

PageUp specializes in storing personal details of workforce personnel. The company boasts two million active users across 190 countries. All of this data was now suspected of being compromised.

The most recent development in the PageUp damage report was the potential exposure of data belonging to UK food and hospitality giant Whitbread. The hotel and coffee shop operator acknowledged that some current and prospective employees’ data may have been compromised during the PageUp hack. Whitbread sent a message to potentially affected individuals stating that personal details collected during recruitment processes “may have been accessed and could potentially be used for identity theft.”

Whitbread has reportedly suspended its use of PageUp’s services.

The Third-Party Liability Problem

The PageUp breach and its subsequent fallout highlight the ever-present, and increasingly risky, threat to data posed by third-party outsourcing.

Third-party contractors are extremely attractive targets for cyber criminals. As one industry leader put it: “information like dates of birth and even maiden names […] gives cyber-criminals all that they need to successfully monetize the hack, from phishing attacks to identity theft.”

The risk posed by third-party vendors is especially acute in the era of heightened compliance demands set by current data regulations. Laws like the EU GDPR place full responsibility on companies for whomever they trust to handle their data. In the medical industry, HIPAA requirements likewise extend to any outside service provider handling patients’ personal data.

Getting a Handle on the Data

Enterprises need to take control of their sensitive data, whether it resides on their own networks or is managed by an outsourced provider.

This means companies need to vet their digital-service supply chains more seriously. Managers must get clear answers from service providers on key questions:

  1. What are the security standards for personnel data?
  2. How up to date are the company’s data loss prevention tools?
  3. How does the contractor handle regulatory compliance?

Ensuring that contractors meet tight security standards is the only way for companies to safely employ third parties to handle their most sensitive data.

Badmouthing Data Loss Prevention (DLP) is Fashionable

Is DLP Really Dead?

I recently came across several digital security vendor sites that describe themselves as a “DLP alternative.”

Perusing their pages, I came across comments such as “DLP is hard to deploy,” “DLP is hard to maintain,” and the classic: “DLP is heavy on the Endpoint.” It’s clear that these security vendors are trying to influence analysts by inserting these negative sentiments into the industry’s discourse on DLP. Of course, terms such as “hard” or “heavy” are subjective at best and can’t be taken as a concrete, professional assessment.

But my real issue with remarks like these is their shallow understanding of Data Loss Prevention.

Vendors and analysts tend to do a mediocre job of explaining what DLP actually is. Most people treat DLP as a single, specific product. In reality, DLP is a set of tools designed to protect data in various states. Here’s my definition: a DLP system performs real-time Data Classification on Data in Motion and Data at Rest and enforces predefined security policies on those streams and stores.

This definition also requires us to flesh out our terms. “Data in Motion” means data on its way from a network to another destination, such as the internet, an external storage device (USB), or even a printer or fax machine. “Data at Rest” is data that resides in databases or in any unstructured file anywhere on the network. “Data Classification” is the implementation of a DLP policy using specific markers, such as credit card or Social Security numbers. These policies allow a given transmission of data to be placed in a specific category such as PCI or HIPAA.
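
To make these terms concrete, here is a minimal, hypothetical sketch of template-style Data Classification. The regex markers are illustrative only, not production-grade validators and not any vendor’s actual patterns; they simply tag a piece of Data in Motion with a policy category:

```python
import re

# Simple marker patterns of the kind a template-based DLP policy might use.
# Illustrative only: real validators are far stricter (e.g., Luhn checks).
MARKERS = {
    "PCI": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # candidate credit card numbers
    "HIPAA": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # candidate Social Security numbers
}

def classify(payload: str) -> set[str]:
    """Return the policy categories whose markers appear in the payload."""
    return {category for category, pattern in MARKERS.items()
            if pattern.search(payload)}

# Example: an outbound email body about to leave the network ("Data in Motion").
outbound = "Patient John Doe, SSN 123-45-6789, card 4111 1111 1111 1111"
print(classify(outbound))  # {'PCI', 'HIPAA'}
```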

From the definition above, one can see that DLP is not a single tool but rather a set of content-aware tools spanning a wide range of applications, including Network DLP, Endpoint Device Controls, Application Controls, and Data Discovery.

So Which Part is Dead?  

Now that GDPR is in full effect, it is hard to understand how Data Discovery is dead or even “seriously ill,” as some observers have put it. One of the basic GDPR requirements is to inventory and classify data containing Personally Identifiable Information, or PII. Such data can reside in a wide range of storage areas, including file shares, cloud storage, and in-house databases. Once the data are discovered, they need to be protected from dissemination to unauthorized entities. Far from being a thing of the past, DLP tools will play a vital role in achieving compliance with the most important set of data regulations ever to hit the world of information technology.

Today's DLP tools are designed mainly to protect PII.

This is a requirement of most data protection regulations in existence, such as PCI, HIPAA, CA1386, the California Consumer Privacy Act of 2018, GLBA, GDPR, NY DFS Cybersecurity, PDPA, and SOX. But protection isn’t as simple as guarding personal details stored on the network. Effective DLP requires a system capable of comprehensive Data Discovery. Achieving Data Discovery means understanding where all enterprise data is located and mitigating the risk of loss through remedial actions such as the following (a sketch of such a pass appears after the list):

  •  Changing Folder Security Permissions
  •  Moving/Copying the data to another secure folder
  •  Encryption
  •  Redacting images with sensitive data
  •  Enforcing Digital Rights Management
  •  Classification
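
To illustrate, a minimal Data at Rest discovery pass might look like the following sketch. It reuses the hypothetical classify() helper from the earlier example and applies just one remedial action, moving flagged files to a restricted quarantine folder; the paths are made up for illustration:

```python
import shutil
from pathlib import Path

def discover_and_quarantine(share: Path, quarantine: Path) -> None:
    """Walk a file share, classify each readable file, and move any
    file containing regulated data into a restricted folder."""
    quarantine.mkdir(parents=True, exist_ok=True)
    for path in share.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: a real system would log and retry
        categories = classify(text)  # classifier from the earlier sketch
        if categories:
            print(f"{path}: matched {sorted(categories)}, quarantining")
            shutil.move(str(path), quarantine / path.name)

# Hypothetical mount points, for illustration only.
discover_and_quarantine(Path("/mnt/fileshare"), Path("/mnt/quarantine"))
```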

In addition to these passive defense steps, DLP must also have ways of identifying threats and protecting against attacks on a network. Proprietary algorithms, such as GTB’s artificial intelligence programs, can identify even partial data matches, keeping managers alert to any attempt at data exfiltration by a malicious insider or by malware. Though inaccurate on its own at detecting data exfiltration, User/Entity Behavior Analytics (UEBA) combined with intelligent DLP may be able to identify the presence of malicious programs on a system. In this way, systems such as GTB’s address the insider threat as well, ensuring that neither a company’s personnel nor its digital applications become a vector for data loss.
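
As a toy illustration of the behavioral side, one of the simplest UEBA-style signals is to baseline each user’s outbound transfer volume and flag large deviations. Real UEBA products use far richer models; this only sketches the principle:

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float,
                 threshold: float = 3.0) -> bool:
    """Flag today's outbound transfer volume if it sits more than
    `threshold` standard deviations above the user's historical mean."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    return sigma > 0 and (today_mb - mu) / sigma > threshold

# A user who normally uploads ~10 MB/day suddenly pushes 500 MB out.
print(is_anomalous([8.0, 12.0, 9.5, 11.0, 10.2], 500.0))  # True
```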

But here’s the million-dollar question: if DLP is so essential, why is it getting such a bad rap?

Let’s try to understand where this negative perception came from.

Here are some of the end-user complaints, as described in a Deloitte survey entitled “DLP Pitfalls”:

  1. “High volumes of false positives lead to DLP operations team frustration & inability to focus on true risks”
  2. “Legitimate business processes are blocked”
  3. “Frustration with the speed at which the DLP solution becomes functional”
  4. “Unmanageable incident queues”

These complaints stem from the fact that most DLP vendors have mediocre detection capabilities. This is because almost all systems use predefined data patterns, called templates, to locate and classify data on a system. While templates are easy to define and use, they produce waves of false positives that render the system useless from a practical perspective. Customers are left feeling they’ve bought an expensive toy rather than a system meant to secure their data. No wonder customers are frustrated with DLP’s capabilities and question its value.

So, is it possible to solve the dreaded false positives dilemma produced by DLP systems?  

Fortunately, the answer is yes.

Using content fingerprinting of PII and defining multi-field detection policies, such as combining last-name and account-number markers within a certain proximity, homes in on specific data and whittles away irrelevant files. With this multi-tiered scheme, the system detects the company’s actual data rather than a generic pattern that may or may not be relevant, an approach that has been shown to reduce false positives to almost zero.
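
GTB’s actual algorithms are proprietary, so the following is only a from-scratch sketch of the general principle: hash the real field values taken from the PII store, then raise a violation only when two fingerprinted fields from the same record appear within a set proximity of each other:

```python
import hashlib
import re

def fp(value: str) -> str:
    """Fingerprint a field value; only hashes leave the PII store."""
    return hashlib.sha256(value.lower().encode()).hexdigest()

# Fingerprints of actual records, e.g. (last_name, account_number) pairs.
RECORDS = {(fp("Smith"), fp("9876543210"))}
PROXIMITY = 100  # max characters allowed between the two matching fields

def violates(text: str) -> bool:
    """True if two fields from the same fingerprinted record occur
    within PROXIMITY characters of each other."""
    tokens = [(m.start(), fp(m.group())) for m in re.finditer(r"\w+", text)]
    for i, (pos_a, hash_a) in enumerate(tokens):
        for pos_b, hash_b in tokens[i + 1:]:
            if pos_b - pos_a > PROXIMITY:
                break  # tokens are in document order, so stop early
            if (hash_a, hash_b) in RECORDS or (hash_b, hash_a) in RECORDS:
                return True
    return False

print(violates("Re: Smith, balance on account 9876543210"))  # True
print(violates("Smith alone triggers nothing"))               # False
```

Note the design choice: a lone last name or a lone number matches nothing, which is exactly how multi-field policies whittle away false positives.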

While some DLP vendors support content “fingerprinting,” they do not promote the technique, for good reason: the number of fingerprints produced can become so large that the volume crashes the system or, at the very least, slows down the network.

But this is not true of all DLP systems. GTB’s proprietary fingerprinting technology allows customers to fingerprint up to 10 billion fields without network degradation.

And as for the concern DLP systems are “unmanageable” and hard to use?

I disagree with the premise.

Even the more sophisticated functions of a DLP system, such as running a select statement on one PII table while defining a multi-column policy on specific fields, are actually quite simple.
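
For example, building such a multi-field policy from a database can be as little as a select and a set comprehension. This sketch assumes a hypothetical SQLite file and table, and reuses the fp() fingerprint helper from the previous sketch:

```python
import sqlite3

def load_policy(db_path: str) -> set[tuple[str, str]]:
    """Select two PII columns and fingerprint each row into a
    multi-field detection policy (the RECORDS set used above)."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT last_name, account_number FROM customers"  # hypothetical table
        ).fetchall()
    finally:
        conn.close()
    return {(fp(str(last)), fp(str(acct))) for last, acct in rows}

RECORDS = load_policy("customers.db")  # hypothetical database file
```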

In summary, DLP is not a singular tool, and not all DLP systems are the same.

Contrary to the naysayers, the growth projections for the industry clearly show that DLP is not “seriously ill” and is definitely not dead.

Equifax Submits Statement to Congressional Committees Regarding Cybersecurity Incident

Equifax Submits Additional Statements to Congress Regarding the Incident. Equifax submitted a statement to congressional committees to supplement the company’s responses regarding the extent of the incident impacting U.S. consumers. As announced on September 7, 2017, the information stolen by the attackers primarily included: “As a result of its analysis of the standardized data elements, including using…
Read more