In a recent Economist Intelligence Unit survey of 476 executives from around the globe, more than a third of respondents said that their companies had suffered significant data losses over the course of the last 12 months. On the other hand, more than 80% of respondents said that their protection procedures were at least “somewhat” effective. However, executives at organizations that suffered big data breaches – like the US Government – were presumably confident that their protection systems were safe too, until they discovered that they were in fact not.
The term big data emerged around 2005; the phrase refers to a wide range of information sets that are too large to be managed and processed by traditional data management tools. 2015 was a big year for big data; some of the major tools and platforms achieved mainstream adoption. Now that big data has become essential for all business enterprises, major security problems have come to the forefront. Therefore, let us look at some of the biggest data security challenges companies of all sizes are facing in 2016.
You would be surprised to know how the data collected about any one person can be processed and analyzed to build a surprisingly complete picture of them. Consequently, the establishments that own the information are legally responsible for its security. Attempts to anonymize certain data do little to protect people’s privacy, because there is so much data available that some of it can be used as a link for re-identification. User information is in transit all the time, being accessed by internal users, outside contractors, and business partners who share it for research.
One of the greatest challenges when implementing a big data security system is respecting privacy concerns while still permitting usage and analysis to continue. While a privacy breach has ethical and legal implications, a large amount of data is useless if it cannot be put to work; this is one of the reasons only 0.5% of data is being used and analyzed at the moment. Granular access control acts on every piece of data individually, ensuring a high level of both security and usability. However, efficient implementation of granular access control faces some major problems: keeping track of privacy requirements and policies in a cluster-computing setting, and keeping track of user access and the proper enforcement of security requirements.
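To make the idea of granular access control concrete, here is a minimal sketch of field-level filtering: each field of a record carries its own policy, and a reader only sees the fields their role permits. The `Policy`, `Record`, and `redact_for` names, the roles, and the sample data are all hypothetical, invented for illustration; real systems attach such policies inside the data platform itself rather than in application code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """Hypothetical access policy attached to a single field."""
    allowed_roles: frozenset

@dataclass
class Record:
    fields: dict    # field name -> value
    policies: dict  # field name -> Policy

def redact_for(record: Record, role: str) -> dict:
    """Return only the fields this role may see; the rest are withheld."""
    return {
        name: value
        for name, value in record.fields.items()
        if role in record.policies[name].allowed_roles
    }

# Illustrative record: each field has its own, independent policy.
patient = Record(
    fields={"name": "Alice", "diagnosis": "flu", "zip": "94110"},
    policies={
        "name": Policy(frozenset({"doctor"})),
        "diagnosis": Policy(frozenset({"doctor", "researcher"})),
        "zip": Policy(frozenset({"doctor", "researcher", "analyst"})),
    },
)

print(redact_for(patient, "researcher"))  # {'diagnosis': 'flu', 'zip': '94110'}
```

The point of the sketch is that access decisions happen per field, not per record or per table, which is what makes auditing and policy tracking so much harder at cluster scale.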
Real-time monitoring is designed to alert the company at the very first sign of an attack; however, the amount of feedback from a SIEM (security information and event management) system, whose aim is to provide a big-picture view of the data, is enormous. Companies that have the resources to closely monitor this feedback and separate real attacks from false alarms are rare. Fortunately, there are providers that can offer an alternative through remote support software, for both small businesses and larger enterprises.
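The core triage problem described above – separating real attacks from noise in a flood of SIEM events – can be sketched as a simple aggregation rule. The event shape, the `triage` function, and the five-failure threshold are assumptions made up for this example; production SIEM correlation rules are far richer (time windows, severity scoring, cross-source correlation).

```python
from collections import defaultdict

def triage(events: list, threshold: int = 5) -> list:
    """Collapse a raw event stream into a short list of suspects.

    Hypothetical rule: flag any source IP that produced `threshold`
    or more failed-login events; everything else is treated as noise.
    """
    failures = defaultdict(int)
    for event in events:
        if event["type"] == "login_failed":
            failures[event["src"]] += 1
    return [src for src, count in failures.items() if count >= threshold]

# Simulated stream: one noisy attacker, one user who just mistyped twice.
stream = (
    [{"type": "login_failed", "src": "203.0.113.9"}] * 7
    + [{"type": "login_failed", "src": "198.51.100.4"}] * 2
    + [{"type": "login_ok", "src": "198.51.100.4"}]
)

print(triage(stream))  # ['203.0.113.9']
```

Ten raw events reduce to a single alert worth a human's attention, which is exactly the kind of reduction that makes monitoring feedback manageable.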
As we discussed in the last paragraph, the goal of real-time monitoring is to give the company a heads-up at the first sign of trouble. Since this does not always happen, because of the difficulty of identifying real risks among the huge number of false alarms, it is crucial to have regular, granular audits to recognize breaches after the fact. Audit information can help establish exactly what happened, so that future breaches can be identified and avoided. An effective audit depends on numerous factors – controlled and timely access to the information, the integrity of that information, and so on.
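Since the paragraph above hinges on the integrity of audit information, here is a minimal sketch of one common technique for protecting it: hash-chaining, where each audit record includes the hash of its predecessor, so tampering with any past entry breaks the chain. The function names and record layout are hypothetical, chosen just for this illustration.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash preceding the first record

def append_entry(log: list, entry: dict) -> None:
    """Append an audit entry chained to the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"entry": entry, "prev": prev}, sort_keys=True)
    log.append({
        "entry": entry,
        "prev": prev,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(log: list) -> bool:
    """Recompute every hash; any altered record breaks the chain."""
    prev = GENESIS
    for record in log:
        payload = json.dumps({"entry": record["entry"], "prev": prev},
                             sort_keys=True)
        if (record["prev"] != prev
                or hashlib.sha256(payload.encode()).hexdigest() != record["hash"]):
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, {"user": "jdoe", "action": "read", "table": "customers"})
append_entry(log, {"user": "jdoe", "action": "export", "table": "customers"})
print(verify(log))  # True

log[0]["entry"]["action"] = "noop"  # simulate after-the-fact tampering
print(verify(log))  # False
```

A chain like this does not prevent a breach, but it ensures the audit trail itself can be trusted when reconstructing what happened.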
The majority of solutions and platforms are still struggling to handle the vast volume, variety, and velocity of big data. So far, security has been a tacked-on feature of the management tools; however, it is now evident that the value of big data lies in both the company’s ability to leverage it for better products and its ability to protect it from outside attacks.