HELP! I need to find my Data before I can secure it!
REALLY?
Do I need to first Classify?
Many analysts and some DLP vendors are touting their data discovery capabilities and promoting the notion that one must first Discover and Classify data before it can be secured. Analysts suggest that ad-hoc classification is prone to user error and therefore recommend shifting the focus from user awareness and training toward automation.
“Discover, Classify … then Protect”?
There is a real problem with this model's logic. It fails because the scheme leaves huge amounts of sensitive data unprotected against breaches while discovery and classification are still underway. We suggest that data can be protected even when its location is unknown, and that such protection can be done without classifying the data.
Auto-classification, or system-based classification, works by first mapping the classification schema to the DLP policies. The system uses various detection engines to scan files and match the data against the pre-defined DLP policy. For example, SSN is mapped to Restricted: once the system identifies an SSN in a file, it classifies that file as Restricted. Enterprises then create rules and security policies around the classification of such files. From a data loss/leak prevention point of view, such a file may be in motion to the Internet or by email, saved to a USB device, printed, or stored on the network in an unsecured location. All of these actions would violate the company's security policy. But if the same detection engine that identified the SSN is applied while the file is in motion, it will certainly detect the SSN in that file even if the file was never classified.
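To make the point concrete, here is a minimal Python sketch of a detection rule mapped to a classification level and then applied to content in motion. The SSN and credit-card patterns, the level names, and the block/allow actions are assumptions for illustration only, not GTB's actual detection engine or policy schema.

```python
import re
from typing import Optional

# Hypothetical mapping of detection rules to classification levels.
DETECTION_RULES = {
    "SSN": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "Restricted"),
    "Credit Card": (re.compile(r"\b(?:\d[ -]?){15,16}\b"), "Confidential"),
}
LEVEL_RANK = {"Confidential": 1, "Restricted": 2}

def classify(content: str) -> Optional[str]:
    """Return the highest classification level triggered by the content."""
    matched = [level for pattern, level in DETECTION_RULES.values()
               if pattern.search(content)]
    return max(matched, key=LEVEL_RANK.get) if matched else None

def inspect_in_motion(content: str, declared_label: Optional[str]) -> str:
    """Apply the same detection engine to data in motion.

    The decision is based on the detected content, so a file that was
    never classified at rest is still caught on its way out.
    """
    if classify(content) == "Restricted":
        return "BLOCK"   # violates policy regardless of declared_label
    return "ALLOW"

# An unclassified file leaving by email still triggers the SSN rule.
print(inspect_in_motion("Employee SSN: 123-45-6789", declared_label=None))  # BLOCK
```

The point of the sketch is that the enforcement decision rides on the detected content itself, so prior discovery or classification is not a prerequisite for protection.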
When is classification needed?
There may be many sensitive documents, such as a board's meeting minutes, that the DLP detection engine or the DLP system is unable to easily define. In such cases, ad-hoc classification is needed and the security policies around the classification level should be enforced. Organizations should therefore still have policies around classification levels: for example, quarantine an email at a certain classification level or higher, or disallow certain users from opening a file at a certain classification level and above.
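As a sketch of what such level-based policies might look like, consider the short Python example below. The classification ladder, the clearance model, and the quarantine/deliver actions are assumptions made for illustration, not any specific product's policy language.

```python
# Illustrative classification ladder, lowest sensitivity first.
LEVEL_ORDER = ["Public", "Internal", "Confidential", "Restricted"]

def rank(level: str) -> int:
    return LEVEL_ORDER.index(level)

def email_action(classification: str) -> str:
    """Quarantine outbound email classified Confidential or higher."""
    return "QUARANTINE" if rank(classification) >= rank("Confidential") else "DELIVER"

def may_open(user_clearance: str, file_classification: str) -> bool:
    """Disallow opening files classified above the user's clearance."""
    return rank(user_clearance) >= rank(file_classification)

print(email_action("Restricted"))          # QUARANTINE
print(may_open("Internal", "Restricted"))  # False
```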
What about email classification?
While we showed how ad-hoc classification is useful for data at rest, ad-hoc email classification may result in user errors. Imagine classifying an email at a certain level and then attaching a file with a higher classification level. We call this misclassification, and advanced systems will force the user to correct the error before the email is sent, or correct it automatically.
The same is the case if the email contains data that was mapped to a certain classification level but the user classified the email at a lower level. Here again, advanced systems should be able to correct the error automatically.
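Here is a hedged sketch of how such automatic correction might work, reusing the same kind of level ordering as in the earlier example. The escalation rule is an assumption about how an "advanced system" could behave, not a documented GTB feature.

```python
# Assumed classification ladder, highest sensitivity last.
LEVEL_ORDER = ["Public", "Internal", "Confidential", "Restricted"]

def rank(level: str) -> int:
    return LEVEL_ORDER.index(level)

def corrected_label(email_label: str, attachment_labels: list, detected_label: str) -> str:
    """Escalate the email to the highest classification found among its
    user-chosen label, its attachments, and its detected content."""
    return max([email_label, detected_label, *attachment_labels], key=rank)

# The user labels the email Internal, but an attachment is Restricted:
print(corrected_label("Internal", ["Restricted"], "Internal"))  # Restricted
```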
Why not classify during DLP protection – in real-time?
GTB has built a system designed to empower users to detect, classify and protect sensitive data all at the same time!
Using smart, proprietary algorithms, one can skip the expensive, arduous initial discovery-and-classification exercise and move straight to data protection. GTB optimizes data protection policies, providing tailored controls for maximum data loss protection.
GTB offers seamless Data Security with virtually zero false positives, ensuring that compliance does not come at the expense of business operations.
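To illustrate the single-pass idea of detecting, classifying, and enforcing in one step at the egress point, here is a minimal sketch; the pattern, labels, and actions are illustrative assumptions and not GTB's proprietary algorithms.

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # illustrative detector only

def protect_in_real_time(outbound_content: str) -> str:
    """Detect, classify on the fly, and enforce in a single pass."""
    label = "Restricted" if SSN.search(outbound_content) else "Public"
    return "QUARANTINE" if label == "Restricted" else "ALLOW"

print(protect_in_real_time("Payroll file with SSN 123-45-6789"))  # QUARANTINE
```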
GTB Data Security Benefits for SRM Admins
Visibility: Accurately discover sensitive data; detect and address broken business processes or insider threats, including sensitive data breach attempts.
Protection: Automate data protection, breach prevention and incident response both on and off the network; for example, find and quarantine sensitive data within files exposed on user workstations, FileShares and cloud storage.
Notification: Alert users on violations to raise awareness and educate the end user about cybersecurity and corporate policies.
Education: Start targeted cyber-security training; e.g., identify end users who violate policies and train them.
- Employees and organizations have knowledge and control of the information leaving the organization, where it is being sent, and where it is being preserved.
- Ability to allow user classification, giving users influence over how the data they produce is controlled, which increases protection and end-user adoption.
- Control your data across your entire domain in one Central Management Dashboard with Universal policies.
- Many levels of control, together with the ability to warn end users of potentially non-compliant or risky activities, protecting against malicious insiders and human error.
- Full data discovery detects sensitive data anywhere it is stored and provides strong classification, watermarking, and other controls.
- Delivers full technical controls on who can copy what data, to what devices, what can be printed, and/or watermarked.
- Integrate with GRC workflows.
- Reduce the risk of fines and non-compliance.
- Protect intellectual property and corporate assets.
- Ensure compliance within industry, regulatory, and corporate policy.
- Ability to enforce boundaries and control what types of sensitive information can flow where.
- Control data flow to third parties and between business units.