Data | June 28, 2019
If you still aren’t convinced that data security issues are a big – and costly – deal, consider that Marriott’s data breach, announced last year, could end up costing the hospitality business $3.5 billion when all is said and done.
That’s not even including the hit to Marriott’s reputation, which is much harder to put a dollar amount on.
With big data bigger than ever, big data security is more important than ever. If you're like so many organizations right now – either getting started with or already deep into big data – check out these 6 big data security issues for 2019 and beyond.
With major data breaches hitting well-known entities, such as Panera Bread, Facebook, Equifax and now Marriott (and too many others to name), many big data experts are looking ahead to find ways to solve the security issues that led to these data breaches.
“Data breaches are a terrifying top trend in the cybercrime world that shows no sign of slowing any time soon,” writes Martin Hron for Avast. “…while some data breaches are deliberate attacks, others are simply neglected databases that security auditors find lying around the web like unguarded, unlocked safes.”
In other words, enterprises have a big responsibility to handle their big data in a way that protects customer and employee data. They also have to be sure to design and use databases accordingly to best uphold this responsibility.
There is an urgency in big data security that cannot be ignored – particularly since the major issues facing big data change from year to year. Enterprises putting big data to good use must face the inherent security challenges – including everything from fake data generation to distributed frameworks.
If cybercriminals have access to your database, they can generate fake data and place it in your data lake (AKA “a centralized repository that allows you to store all your structured and unstructured data at any scale”). That’s why for businesses that rely on real-time data analytics or the Internet of Things (IoT), both limiting access AND being able to detect fake data generation are crucial first steps in protecting your data (and by proxy, your customers).
For example, a financial firm may be unable to identify fraud if it's getting false flags from this fake data. A manufacturing company may get a false temperature report, resulting in a production slowdown and a serious loss of revenue.
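Detecting fake data generation often starts with simple anomaly detection on incoming readings. Here is a minimal sketch of that idea (the function name and the sample sensor feed are illustrative, not from any particular product): flag any reading that deviates sharply from the recent trend.

```python
from statistics import mean, stdev

def flag_suspect_readings(readings, window=10, k=3.0):
    """Flag readings that deviate sharply from the recent trend.

    A reading is suspect if it falls more than k standard deviations
    from the mean of the preceding `window` readings.
    """
    suspects = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            suspects.append(i)
    return suspects

# A stable temperature feed with one implausible spike injected at index 14
feed = [70.1, 70.3, 69.9, 70.2, 70.0, 70.4, 69.8, 70.1, 70.2, 70.0,
        70.3, 69.9, 70.1, 70.2, 250.0, 70.1]
print(flag_suspect_readings(feed))  # [14]
```

A real deployment would use more robust statistics and cross-check readings against neighboring sensors, but the principle is the same: injected fake data tends to break the statistical patterns of the genuine stream.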
One of the core components that make big data environments functional is granular access control. Depending on roles, you can grant different users different levels of access to your database and dashboard. At face value, controlling access makes big data more secure.
But as companies use increasingly large sets of data and increasingly complex dashboards, this granular access control can become more difficult and actually open enterprises up to more vulnerabilities. For example, if only a handful of people at your company have access to a particular data set, it may take longer for a breach to be noticed.
More commonly, granular access limits the specific information a user can see in a data set – even if they need access to other parts of the data. This complicates both the performance and maintenance of the system.
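At its simplest, field-level granular access control maps each role to the set of fields it may see and strips everything else from query results. This is a minimal sketch of that pattern (the role names and fields here are hypothetical):

```python
# Role-based, field-level access control: each role maps to the set of
# fields it may read; anything not whitelisted is stripped from results.
ROLE_FIELDS = {
    "analyst": {"region", "revenue"},
    "support": {"region"},
    "admin":   {"region", "revenue", "customer_email"},
}

def filter_record(record, role):
    """Return only the fields the given role is allowed to read."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

row = {"region": "EMEA", "revenue": 125000, "customer_email": "a@example.com"}
print(filter_record(row, "support"))  # {'region': 'EMEA'}
```

Even in this toy version you can see the maintenance burden: every new data set, field, and role multiplies the rules that must be kept correct, which is exactly where vulnerabilities creep in at scale.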
Security audits should be built into any system development life cycle – particularly where big data is concerned. However, these security audits are a rarity in the real world.
In many cases, the security audit is overlooked since working with big data already comes with a wide range of challenges – and a security audit is just one more thing to add to the list. This is compounded by the fact that many companies lack qualified employees to design and implement an effective security audit.
One option is to partner with an expert who can conduct your audit and give you the application security you need.
For enterprises to put big data to work, most will need to distribute data analytics across multiple systems. Hadoop, for example, is designed for scalable and distributed computing in a big data environment. But Hadoop originally had no security at all, and effective security in distributed frameworks is still a challenge.
The result? It can take a lot longer for companies to identify a breach when it does occur.
“There continues to be a temporal disconnect between the time frame for attacks versus response,” Satya Gupta, CTO of Virsec, told CPO Magazine recently. “The report points out that attack chains act within minutes while the time to discovery is more likely to be months. This gap must be tightened and security tools need to focus on real-time attack detection if we are to have any chance to curtail these breaches.”
Data provenance is helpful for identifying where a breach comes from, because you can use this technique to track the flow of data using metadata.
But metadata management is a strategic problem for enterprises of all shapes and sizes. As data flows from an increasing number of sources – from unstructured data to data processed in real time – tracking it can be difficult without the right framework. It’s not a new big data concern, but it is an ongoing problem.
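One common way to make provenance trackable is to wrap each record in metadata at every processing step: where it came from, when, and a hash linking it to the record it was derived from. A minimal sketch (the `stamp` helper and the sensor/ETL names are illustrative assumptions):

```python
import hashlib
import json
import time

def stamp(payload, source, parent_hash=None):
    """Wrap a payload with provenance metadata: its source, a timestamp,
    and a hash chaining it to the record it was derived from."""
    record = {
        "payload": payload,
        "source": source,
        "timestamp": time.time(),
        "parent": parent_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()
    return record

raw = stamp({"temp_f": 70.2}, source="sensor-17")
cleaned = stamp({"temp_c": 21.2}, source="etl-normalize",
                parent_hash=raw["hash"])
# Following the `parent` pointers reconstructs the data's path
# through the pipeline, which is what makes breach tracing possible.
```

When a breach is discovered, walking this chain of parent hashes backward shows which source and which processing step the compromised data passed through.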
Real-time big data analytics are becoming an increasingly popular tool to add to an enterprise’s competitive arsenal. But implementing security compliance tools for real-time analytics is even more complicated and generates a huge amount of data on its own.
The tools should be designed to avoid triggering false breach warnings when there is no danger, since chasing these "false positives" is time consuming in a real-time environment. They can, in turn, distract from real threats and waste resources.
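One simple way to cut down on false positives is to alert only when an anomaly persists rather than on every single flagged event. A minimal sketch of that debouncing idea (function name and threshold are illustrative):

```python
def alert_on_sustained(anomaly_flags, required_consecutive=3):
    """Raise an alert only when an anomaly persists for several
    consecutive observations, filtering out transient noise."""
    streak = 0
    alerts = []
    for i, is_anomalous in enumerate(anomaly_flags):
        streak = streak + 1 if is_anomalous else 0
        if streak == required_consecutive:
            alerts.append(i)
    return alerts

# One transient blip is ignored; a sustained run triggers a single alert.
flags = [False, True, False, False, True, True, True, True, False]
print(alert_on_sustained(flags))  # [6]
```

The trade-off is latency: requiring three consecutive anomalous observations delays detection by those observations, which is why real-time security tooling has to tune this balance carefully.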
The good news is that none of these big data security issues are unsolvable. Using best practices for big data architecture and gaining expertise over time, enterprises can be sure to get the benefit of big data without sacrificing security.
To start, the modern enterprise should choose the right data security solution for a big data environment. “Whenever data is mentioned, security should automatically follow; especially when you consider big data is everywhere – on-premise[s], in the cloud, streaming from sensors and devices, and moving further across the internet,” according to Anna Russell, writing about data security solutions for TechRadar.
Russell notes that the best practices for data security in a big data environment are similar to those of any development project: scalability, accessibility, performance, flexibility and the use of hybrid environments.
Beyond those, enterprises would do well to hire an application security engineer – or at least to partner with a development team that has a proven record for creating secure big data environments. The expert you partner with should be well aware of modern security threats and attacks, work through the full software development life cycle, have a focus on application encryption, and be able to model potential cyber threats.
“Data-centric security solutions that meet these criteria will better serve companies for years to come as the amount of data collected grows and privacy and data protection concerns become mainstream and litigious,” concludes Russell in her article.
The best way to develop and build out a big data environment that addresses each of these big data security issues is to start with a data strategy and roadmap. A thorough roadmap can help you piece everything together into one coherent plan.
Whether or not you've mapped out a data strategy for your organization before, data security is constantly evolving. Are you curious to know where you stand on data security issues for 2019 and beyond?
Rather than take on a big project like this alone, call us at RTS Labs to find out if partnering with us for your own data security strategy and roadmap could give you the answers you need.
Contact us to talk about how we can help.