Security sits down with Ara Ghazaryan, Scylla co-founder and Vice President of Artificial Intelligence (AI), to discuss the benefits and limits of AI and the ethical concerns surrounding the technology.
Security: What is your background and current role?
Ghazaryan: I hold a Ph.D. in Optics and Molecular Physics. Over the course of 15 years, I was a postdoctoral researcher at Munich Technical University, Pusan National University, and National Taiwan University, specializing in optics, imaging techniques, and computer vision. As Co-founder and Vice President of AI at Scylla, my responsibilities include building and training effective and ethical AI-powered physical threat detection solutions.
Security: What are the benefits of AI?
Ghazaryan: There are many benefits. Some counter the natural limitations of human beings, such as the constraints of our senses and reaction speeds. Others take routine and manual tasks off our shoulders, much as computers do, but with more flexibility.
In computer vision, which I specialize in, AI can improve quality control in manufacturing, reduce road fatalities through self-driving cars, and help monitor security cameras to keep people safe.
Security: What are the limits of AI?
Ghazaryan: Naturally, there are limits to AI-powered systems and solutions. Some stem from the limits of computing power and speed, which constrain the types of problems that can be solved with today’s technology. Some applications may be too slow to be practical, or not cost-effective if they require too much expensive hardware.
There are also data-related limitations. Training and testing depend heavily on the quantity and quality of available datasets, which are often scarce. Limited data can lead to results that are not reliable in the real world.
Other limits relate to our ability to discover or apply AI algorithms for particular types of problems, especially when we cannot express those problems in terms that let us determine whether we have found a good solution.
We also have the issue that some AI approaches are like a “black box.” It can be very difficult to explain why a system has made a particular decision. In some fields, such as medicine, this can be an obstacle to the greater adoption of AI.
Some of these limits are provisional and will be resolved, sooner or later, by advances in hardware and approaches. Others require broader debate within society to arrive at an acceptable solution.
Security: Are there ethical concerns with AI?
Ghazaryan: Definitely. AI can raise ethical concerns. These issues often arise from biased or inadequate training data, or from poor quality control.
A well-known example is the case in which face recognition algorithms worked well on white males but could not accurately detect people of color, especially women. This problem stemmed from a biased training set.
A related ethical concern is when face recognition is used to recognize and oppress certain ethnic groups. This is a clear misuse of technology.
Another example comes from the field of self-driving cars. In one case, a self-driving car struck and killed a cyclist: the person who was meant to be supervising the vehicle did not intervene when required, and the car could not handle the situation on its own. In that case, AI was used without proper supervision.
Security: What can be done to mitigate some of those ethical concerns?
Ghazaryan: As practitioners, we in the AI community work consciously to keep AI solutions free of ethical issues, so that AI is a benefit rather than a liability to society.
First, pay a great deal of attention to collecting unbiased training data. For a face recognition solution, that means training on faces representing people of different ethnicities and genders, which helps ensure that the solution can properly recognize a face irrespective of the person’s ethnicity or gender.
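The kind of training-data audit described here can be sketched in a few lines of Python. This is a minimal, illustrative helper, not Scylla's actual tooling; the attribute name, tolerance threshold, and toy counts are all assumptions for the example:

```python
from collections import Counter

def check_balance(samples, attribute, tolerance=0.5):
    """Flag groups that are underrepresented relative to the largest group.

    `samples` is a list of dicts of per-sample metadata; `attribute` is the
    demographic field to audit (e.g. "gender"). A group is flagged when its
    count falls below `tolerance` times the count of the most common group.
    (Field name and threshold are illustrative, not from the interview.)
    """
    counts = Counter(s[attribute] for s in samples)
    largest = max(counts.values())
    return [group for group, n in counts.items() if n < tolerance * largest]

# Toy training-set metadata, heavily skewed toward one group.
data = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(check_balance(data, "gender"))  # → ['female']
```

A check like this only catches imbalance in attributes you record, so the metadata itself needs to cover the groups you care about.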
Second, test your solutions carefully before deploying them in the real world. Ensure that your training data covers different situations, and test within your development facilities under realistic conditions.
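One simple way to surface the bias problem discussed earlier during this testing phase is to break evaluation results down by demographic group rather than reporting a single overall accuracy. A minimal sketch, assuming a hypothetical record format and toy numbers:

```python
def accuracy_by_group(records):
    """Compute recognition accuracy separately for each demographic group.

    `records` is a list of (group, predicted, actual) tuples; the format is
    illustrative. A large accuracy gap between groups signals the kind of
    training-set bias discussed above and warrants more work before deployment.
    """
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation results: the model performs worse on one group.
results = (
    [("group_a", 1, 1)] * 9 + [("group_a", 0, 1)] * 1
    + [("group_b", 1, 1)] * 6 + [("group_b", 0, 1)] * 4
)
print(accuracy_by_group(results))  # → {'group_a': 0.9, 'group_b': 0.6}
```

An overall accuracy of 75% would hide the disparity here; the per-group breakdown makes it visible.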
For vendors in particular: be mindful of whom you provide your solutions to. Never sell your software without understanding what it will be used for, and refuse to supply anyone who would use it to oppress a particular group or in a way that is detrimental to society.