Businesses have realized the importance of data and the power of harnessing it to become an insights-driven enterprise. However, there is a trade-off between technological innovation and security.
The adoption of emerging technologies like 5G will fuel the proliferation of Internet of Things (IoT) devices, which are often built with only basic security controls, creating a larger attack surface. At the same time, reliance on data means that data breaches can cause greater damage. Moreover, the post-COVID-19 way of working means that we will likely be more reliant on the technologies that enable us to work and interact from a distance, albeit with less proven security.
Artificial Intelligence (AI) can be the best weapon in a company’s cybersecurity arsenal and, thus, it is becoming increasingly integral to information security. From the multitude of ways AI is used in business to its role in creating safer smart cities and safeguarding transportation, AI impacts nearly every aspect of our lives. In fact, in its Reinventing Cybersecurity with Artificial Intelligence report, Capgemini found that 61 percent of respondents can no longer detect data breach attempts without the help of AI. This perspective informed the decision of 48 percent of the surveyed organizations to increase their digital security spending for AI by an average of 29 percent in 2020.
However, AI is not placed solely in the hands of the good. Malicious actors can and will adopt technologies such as AI and machine learning (ML) faster than security leaders can. The use of malicious AI and ML will create new challenges for every business wishing to safeguard its most precious asset: data.
Data security is not the only challenge businesses must face. Privacy should also concern business leaders for a handful of reasons, especially in the wake of cyber criminals taking advantage of COVID-19 to hack organizations and monetize stolen identities. Consumers are deeply concerned with how their data is collected and used, including by new COVID-19 contact tracing apps. A barrage of news about data breaches, government surveillance, corporate misconduct, deep fakes and biases has soured consumer sentiment on current data practices and has diminished the level of trust people place in new technologies.
In this rapidly changing environment, regulators and national or transnational authorities strive to protect both consumer rights and business innovation by framing strategies and policies towards excellence and trust.
What is bias in AI, anyway?
The issue of bias in AI arises from the fact that software products are human creations; the biases of their creators get hard-coded into their behavior. As Fei-Fei Li noted, deep learning systems are “bias in, bias out.” While the algorithms that drive AI may appear to be neutral, the data and applications that shape the outcomes of those algorithms are not. What matters are the people building a system, the data they are using, and why they are building it.
Needless to say, AI suffers from bias in all its many applications. That goes for information security, as well. A recent survey from O’Reilly found that 59 percent of respondents didn’t check for fairness, bias or ethical issues while developing their ML models. Not only that, but nearly one in five organizations (19 percent) revealed that they struggled to adopt AI due to a lack of data, data quality and/or development skills.
These inadequacies can ultimately come together and skew the outcome of an AI-powered security solution. For example, a limited data sample might prevent a tool from flagging certain behavior as suspicious. The attacker behind this false negative could then carry on with their malicious activity, move deeper into the organization’s network and escalate into a security incident without raising any red flags. On the flip side, an improperly tuned algorithm could flag otherwise benign network traffic as malicious, preventing business-critical information from getting through and burdening security teams with unnecessary investigations into false positives.
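As a toy illustration (not drawn from any real product), consider a hypothetical detector that flags data transfers above a statistical threshold learned from a baseline sample. If the baseline is too narrow, both failure modes above appear; all numbers here are invented for the sketch:

```python
# Minimal sketch: a threshold-based "anomaly detector" trained on a
# narrow, biased baseline, illustrating how limited data yields both
# false negatives and false positives. All figures are hypothetical.

def train_threshold(baseline_mb):
    """Flag any transfer larger than mean + 3 * stdev of the baseline."""
    mean = sum(baseline_mb) / len(baseline_mb)
    var = sum((x - mean) ** 2 for x in baseline_mb) / len(baseline_mb)
    return mean + 3 * var ** 0.5

# Baseline drawn only from one team's light traffic (a biased sample).
narrow_baseline = [1.0, 1.2, 0.9, 1.1, 1.0]  # MB per session
threshold = train_threshold(narrow_baseline)  # roughly 1.35 MB

def is_suspicious(transfer_mb):
    return transfer_mb > threshold

# A slow, low-volume exfiltration stays under the threshold:
# a false negative that raises no red flags.
print(is_suspicious(1.3))    # False

# A legitimate nightly backup far exceeds the narrow baseline:
# a false positive that triggers an unnecessary investigation.
print(is_suspicious(500.0))  # True
```

A broader, more representative baseline (and a threshold tuned against known-benign heavy traffic) would narrow the gap between what the detector considers “normal” and what actually is.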
Excising bias from an AI-powered solution
The computer science and AI communities aren’t unaware of bias. In fact, some companies like Google and Microsoft have plans to develop their own ethical guidelines for AI. This is all well and good, but oftentimes these initiatives don’t consider the cultural and social nuances that shape different interpretations of “ethical behavior.”
Another issue is that companies need to follow through on implementing those principles. Unfortunately, that’s not a foregone conclusion. Companies can often change course or sacrifice their idealism to address financial pressures.
Governmental organizations, such as the European Union, have launched initiatives to address the ethical challenges of “trustworthy AI”, although these are still in the consultation stage. Until then, developers and organizations can place themselves ahead of the competition and begin the shift towards ethical AI by reconceiving the modelling process.
As part of this operational shift, it is important to take social science into account. That means involving more diverse computer scientists, data sources and security teams in protecting organizations. Doing so will help account for contextual and perceptual differences, thereby improving both the efficiency of the algorithms and the scope of the input data. At the same time, these new AI models should allow for a degree of dynamism so that they can evolve as we change culturally and socially.
Consumer rights and privacy fines
AI is fed with data gathered from a handful of sources. A majority of this data is personal, and coupled with the inevitable algorithmic bias discussed earlier, it creates many privacy-related concerns, not only for privacy groups but for all citizens.
2018 was a breakthrough year for consumer rights and data privacy regulations with the introduction of GDPR. GDPR has made a global impact since every company processing personal data of European citizens is subject to the provisions and requirements of the regulation.
2020 is going to be just as busy from a data protection standpoint. The new California Consumer Privacy Act (CCPA) will be enforced from July 1, 2020, and while the CCPA may not be an omnibus-style law like the GDPR, it has been inspired by it, particularly around data subject rights. More countries, like India, are implementing regulations to help with international data exchange, and we can expect to see additional legislation put into place.
Unfortunately, privacy compliance is still lagging. What is certain is that we won’t see all organizations becoming compliant in 2020. It’s still the case that too many companies don’t want to invest in privacy or simply don’t pay enough attention to achieving compliance. As a result, we can expect to see data protection authorities enforcing compliance and levying fines.
Customers are more aware now than ever before of the rights associated with data privacy regulations around the world. And as breaches hit the headlines nearly every week, 2020 will be the year customers start to ask more questions and demand more control over where organizations are storing data and how they are protecting it. Forrester predicts that privacy class-action lawsuits will increase by 300 percent in 2020. As a result, data discovery, classification and remediation, protecting sensitive data through automated workflows, will become an important initiative for enterprises.
The 5G hackathon
Privacy issues will be compounded even more as a result of the increased connectivity brought on by 5G. The fifth generation of wireless technology is already here. Telecommunications companies in the U.S. have begun rolling out 5G service to major cities, while five countries in the EU have commercial 5G service. More and more consumers are expected to have full access to the technology by the end of next year.
5G technology will make the IoT a greater part of our everyday lives. Growth is expected to explode particularly among outdoor surveillance cameras, smart cities and connected cars, fueled by the ultra-fast 5G network to allow IoT devices to transfer exponentially more information. In fact, Gartner predicts that the 5G IoT endpoint installed base will approach 49 million units by 2023.
While 5G availability is exciting, it brings new cybersecurity challenges that pose threats to the majority of IoT devices. In the rush to beat the competition, security is still an afterthought as opposed to a forethought. This makes the expanded IoT landscape a nightmare for cybersecurity experts who must figure out how to protect cell phones, security systems, vehicles, smart homes, and a variety of other devices from being breached.
Hackers will try to profit from the proliferation of IoT data. In the impending 5G-enabled world, attack surfaces will be larger than ever before, providing more opportunities for consumers and businesses to be hacked. In addition, high bandwidth will empower criminals to launch much larger attacks that could cripple entire enterprise networks. Some of the most common types of attacks that companies need to prepare for include botnets, distributed denial of service (DDoS), RFID spoofing, Trojans, malware and malicious scripts.
On the privacy side, matters become more complex. 5G service providers will have extensive access to the large amounts of data sent by user devices, which could show exactly what is happening inside a user’s home. At the very least, metadata could describe the living environment, in-house sensors and parameters such as temperature, pressure and humidity. Such data could compromise a user’s privacy or could be manipulated and misused. In addition, service providers could decide to sell this type of data to other companies, such as advertisers, to open up new revenue streams.
Security challenges ahead
The global rollout of 5G and the increasing integration of AI into all forms of daily activity create new security challenges for trustworthy technology innovation. The absence of a security-by-design and privacy-by-design mindset will make the 2020s a record-breaking decade for cyber-attacks on connected devices, putting consumer privacy at risk. This is especially true given the new post-COVID-19 working environment we are entering. The increase in breaches will result in larger fines for organizations that fail to comply with privacy and security regulations. High-tech vendors and government organizations should join forces to develop frameworks that promote excellence and trust, deter threat actors and preserve what technology was developed for: technical progress and a better quality of life.