The future of Artificial Intelligence (AI) depends on many factors. Advancements in computing power and the implementation of regulations are just two external influences that could significantly impact what AI will be able to do in the years to come. Before we get to the exciting future uses, however, it’s important to understand exactly where we are today. And that’s not as clear as it should be.

There are a lot of misconceptions about AI, machine learning, and deep learning. Most people don’t have an accurate sense of the differences or relationships between them: machine learning is a subset of AI, and deep learning is a subset of machine learning. As a result, we often see people use the term AI whenever they refer to machines doing seemingly autonomous work. This misuse of the terminology stems from misinformation and a lack of awareness. But this is only part of the problem.

More significant concerns arise when technology providers over-promise on how their solutions can solve problems or when end users have unrealistic expectations. We are seeing sweeping claims about what AI, in the broadest sense of the word, can do. Unfortunately, much of the current technology can’t deliver.

To take full advantage of AI’s potential, we must develop a clear understanding of its current capabilities and limitations so we can see where it might help moving forward.

Not yet ready for critical applications

We’ve seen that, in some of the spaces where AI and machine learning have great potential, they don’t yet have the accuracy required to be fully reliable. Facial recognition AI is an excellent example. 

In 2014, the FBI arrested an innocent man on two separate occasions for bank robbery. The arrests were made based on false-positive identifications by the FBI’s facial recognition system. Eventually, it was established that the crimes had been committed by a man with a similar facial structure, but not before the accused had his life upended by false accusations.

But this doesn’t mean we should abandon facial recognition. On the contrary. We just have to be more precise in our understanding of exactly what it can do. There is an ever-increasing number of applications where a computer vision algorithm can be effective. 

With Street View, Google uses facial recognition to determine whether there are any people in a captured image. The company is not interested in identifying the people, just in knowing whether they are present. Once the algorithm finds a person in an image, the system automatically blurs the face to protect privacy. Given the sheer volume of imagery Google captures to produce Street View, this task would be impossible to perform manually.
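The detect-then-blur pipeline described above can be sketched in a few lines. This is a simplified illustration, not Google’s implementation: the face boxes here are supplied by hand, where a real system would get them from a trained detection model, and the image is a small grayscale grid rather than real imagery.

```python
# Minimal sketch of a "detect, then blur" anonymization pipeline.
# The detected face boxes are stand-ins for the output of a real
# face-detection model.

def box_blur(image, box, radius=1):
    """Return a copy of `image` with the region `box` box-blurred.

    image: list of rows of grayscale pixel values (0-255)
    box:   (top, left, bottom, right), bottom/right exclusive
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    top, left, bottom, right = box
    for y in range(top, bottom):
        for x in range(left, right):
            # Average the pixel with its neighbors inside the image bounds.
            vals = [
                image[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = sum(vals) // len(vals)
    return out

def anonymize(image, detected_faces):
    """Blur every detected face region; leave the rest of the image untouched."""
    for box in detected_faces:
        image = box_blur(image, box)
    return image

# Example: a 4x4 image with one high-contrast "face" in the top-left 2x2.
image = [
    [255, 255, 0, 0],
    [255, 255, 0, 0],
    [0,   0,   0, 0],
    [0,   0,   0, 0],
]
blurred = anonymize(image, detected_faces=[(0, 0, 2, 2)])
```

Only the pixels inside the detected box are modified; everything outside it passes through unchanged, which is why occasional misses (an unblurred face) or false positives (a blurred non-face) are possible when the detector errs.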

There is the occasional mistake in the process: Street View sometimes includes a face that is not blurred or blurs something that is not a face. But this is not a critical application, and Google is quick to remedy the problem once notified. What this shows is that, while we can’t yet depend on AI for high-security applications, we can certainly use it to augment operations.

Making AI part of operations 

Implementing AI in a business process can bring real value. But, as with any technology, you need to know where and how it can fit in. Expecting today’s AI to perform with 100% accuracy can be a dangerous assumption. However, if you’re doing something like estimating how many people entered a secure area, using a computer vision algorithm can save a ton of time. It’s important to understand its uses and limitations. 
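The people-counting example above can be sketched as a simple line-crossing count. This is a hedged illustration under stated assumptions: an upstream detector and tracker (not shown) supplies, for each tracked person, their sequence of positions along the doorway axis; the names and the doorway coordinate are hypothetical.

```python
# Rough sketch of "how many people entered a secure area" from tracked
# detections. Assumes an upstream computer vision detector/tracker has
# already produced, per person ID, their positions over time along the
# axis perpendicular to the doorway.

def count_entries(tracks, doorway_y):
    """Count tracks that cross the doorway line from outside
    (position < doorway_y) to inside (position >= doorway_y)."""
    entries = 0
    for positions in tracks.values():
        for y0, y1 in zip(positions, positions[1:]):
            if y0 < doorway_y <= y1:
                entries += 1
                break  # count each person at most once
    return entries

# Hypothetical tracker output: one position sequence per tracked person.
tracks = {
    "person_a": [2, 4, 6, 8],   # crosses the line at y=5: entered
    "person_b": [9, 8, 7],      # already inside, never crosses inward
    "person_c": [1, 2, 3],      # approaches but never crosses
}
print(count_entries(tracks, doorway_y=5))  # -> 1
```

Note that the count is only as good as the detections feeding it, which is exactly the point of the paragraph above: an approximate tally saves enormous manual effort, but it should not be treated as a 100%-accurate security control.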

Education around the possible applications of AI is key. Too often, we see organizations think AI is cool and can do everything for them. The reality is that, when adopting an AI-based solution, customers need to be educated on what pitfalls might arise and whether the solution really can address all of their use cases. At the same time, solutions providers have to be transparent about what their technology can feasibly do.

Transparency builds trust 

We have to be careful about releasing solutions. Before any product is launched, technology providers must work to ensure that they target the right space and solve specific problems. This allows us to provide effective answers to real-world scenarios. 

Then, we have to be open about what we are doing. Customers need to know the kind of AI we are using, what data a system is collecting, and the type of information it outputs. Solutions providers can build trust by being transparent about whether they’re looking for faces, for example, or about how long they’re storing data.

When everyone understands the true capabilities of today’s artificial intelligence, we can better manage expectations and position ourselves to take full advantage of what the future of AI has to offer.


This article originally ran in Today’s Cybersecurity Leader, a monthly cybersecurity-focused eNewsletter for security end users, brought to you by Security magazine.