AI: a new challenge for cybersecurity & privacy

AI improves security processes, but it also creates new threats.

In recent years, Artificial Intelligence has offered techniques to improve business processes, perform tasks more efficiently, and analyze large amounts of data in a short time. However, these same techniques can also be used to threaten users' privacy and security.

Accelerometers used to harvest your data

In 2016, a Danish graduate drew the world's attention with his master's thesis at the IT University of Copenhagen. Tony Beltramelli, then 25 years old, had discovered a way to guess the passwords typed by users of smartwatches such as the Apple Watch.

Beltramelli warned that “by their very nature of being wearable, these devices, however, provide a new pervasive attack surface threatening users’ privacy, among others.”

Thanks to the accelerometers built into these wearable devices, an application can tell whether you are making a natural hand movement or typing something on a tablet or computer. Beltramelli went further: after collecting enough data, he showed that accelerometer readings combined with machine learning techniques make it possible to guess the password typed on a mobile device. The task is made even easier by the fact that most keyboards are standardized in shape and size.
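To give a sense of how such an attack is put together (this is only a minimal sketch under simplified assumptions, not Beltramelli's actual pipeline, which used deep recurrent networks on raw sensor traces), keystroke inference can be framed as supervised classification: each window of accelerometer samples recorded around a key press is labeled with the key, and a model learns to map motion features to keys.

```python
# Minimal sketch of keystroke inference from wrist-worn accelerometer data.
# The feature set and classifier are illustrative assumptions, not the
# method used in the thesis (which trained deep recurrent networks).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(window: np.ndarray) -> np.ndarray:
    """Summarize a (n_samples, 3) window of x/y/z accelerometer readings."""
    return np.concatenate([
        window.mean(axis=0),                       # average acceleration per axis
        window.std(axis=0),                        # how "jerky" the motion is
        window.max(axis=0) - window.min(axis=0),   # peak-to-peak amplitude
    ])

def train_keystroke_model(windows, labels):
    # windows: accelerometer windows captured around each keystroke
    # labels:  which key (e.g. 0-9 on a PIN pad) was pressed in each window
    X = np.array([features(w) for w in windows])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
    model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```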

Beltramelli was able to verify this because he had full access to the internal mechanisms of a smartwatch. A user who has not installed any third-party application and has not granted it permissions might be safe. However, in September 2018, a team of researchers from the International Computer Science Institute in California discovered that some application developers were using malicious techniques to collect users' data without requiring permission. The figure was not trivial: more than 1,300 applications obtained accurate geolocation information and phone identifiers.

The researchers said in their talk “50 Ways to Leak Your Data” that “apps can circumvent the permission model and gain access to protected data without the user’s consent, using both covert and side channels. These channels occur when there is a way to access protected resources that is not audited by the security mechanisms, leaving the resources unprotected.”

With this technique, applications such as Photo Identifier were able to send user geolocation information by taking advantage of the permissions granted to another application.
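One side channel highlighted by that research involves photo metadata: an app that was never granted location permission can still learn where a photo was taken if an app that does hold the permission has written GPS coordinates into the image's EXIF data and saved the file to shared storage. The sketch below illustrates the idea with Pillow; the file path is hypothetical and the code is only a simplified illustration of the channel, not the researchers' tooling.

```python
# Sketch: reading location from a photo's EXIF data without ever holding
# the location permission. The path below is a hypothetical example.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path: str) -> dict:
    exif = Image.open(path)._getexif() or {}
    # Tag 34853 ("GPSInfo") holds the GPS block written by a camera app
    # that did have location access when the photo was taken.
    gps_raw = exif.get(34853, {})
    return {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

# Any app that can read shared storage can recover the coordinates.
print(gps_from_photo("/sdcard/DCIM/Camera/IMG_0001.jpg"))
```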

In the same way, applications can take advantage of the accelerometer to collect data while the phone's speaker is in use, as published by a team of security researchers (Abhishek Anand, Chen Wang, Jian Liu, Nitesh Saxena, Yingying Chen) in early July 2019.

The researchers created a proof-of-concept application that mimics the behavior of a malicious attacker. The app recorded speech vibrations with the accelerometer and sent the information to a server controlled by the attacker.

In this way, an application can capture the conversations a user holds with another person through audio messages, or learn about their musical tastes. The researchers pointed out that data such as birthdays or bank account numbers are less likely to be compromised by this method, since such digit sequences are rarely spoken aloud in voice messages.

However, this technique (called “Spearphone” by its developers) can identify a person’s gender with 90% accuracy, while recognizing the speaker with 80% accuracy.
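As a deliberately crude illustration of the intuition behind those numbers (the real Spearphone classifiers are trained machine-learning models on much richer features), the speech reverberations that reach the accelerometer carry pitch information, and typical male and female pitch ranges differ. The sample rate and threshold below are assumptions for illustration only.

```python
# Crude illustration of the Spearphone intuition: pitch-related energy in
# the accelerometer signal hints at speaker gender. Not the paper's method.
import numpy as np
from scipy.signal import welch

def dominant_voice_frequency(z_axis: np.ndarray, sample_rate: float = 500.0) -> float:
    """Estimate the strongest frequency in the typical voiced-speech band."""
    freqs, power = welch(z_axis, fs=sample_rate, nperseg=256)
    band = (freqs >= 80) & (freqs <= 250)   # rough human pitch range, in Hz
    return freqs[band][np.argmax(power[band])]

def guess_gender(z_axis: np.ndarray) -> str:
    # ~165 Hz is a commonly cited (and very rough) male/female pitch boundary.
    return "male" if dominant_voice_frequency(z_axis) < 165 else "female"
```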

This is again a side channel of the kind described above: an alternative route to a protected resource that is not audited by the security mechanism, leaving the resource effectively unprotected.

Some mitigation techniques suggested by the researchers were to reduce the accelerometer's sampling rate or to vary the phone's maximum volume, so that the accelerometer readings become harder to exploit.
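The rate-limiting idea is easy to see with a sketch: by the Nyquist limit, a stream resampled to a low rate cannot carry frequencies above half that rate, which pushes voiced-speech content out of reach. The 500 Hz and 50 Hz figures below are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the rate-limiting mitigation: decimating the accelerometer
# stream before handing it to unprivileged apps removes speech-band content.
from scipy.signal import decimate

def limit_accelerometer_rate(samples, original_rate=500, target_rate=50):
    factor = original_rate // target_rate
    # decimate() low-pass filters before downsampling, so frequencies above
    # target_rate / 2 (25 Hz here), including voiced speech, are removed.
    return decimate(samples, factor, axis=0)
```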

You’re judged by your face

Another technology that has raised concerns about potential uses against people's privacy and security is facial recognition. Concerns about its misuse came to a head in San Francisco, where in May 2019 the city government was banned from using facial recognition technology. Activists and developers questioned the need to use this technology for police purposes.

In the case of facial recognition, the problem does not lie in the danger that a criminal “hacks” your face and uses it to access bank accounts, but in the ways in which the State can use this technology to maintain control and discriminate against certain groups.

One of the problems reported is this technology's lack of accuracy when recognizing faces that are not white: the error rate is 0.8% for white men, but rises to 34.7% for dark-skinned women.

The New York Times reported that Chinese authorities used this technology to monitor members of ethnic minorities, such as the Muslim population of the Xinjiang region, in what it described as an expression of racism. Several Chinese startups were involved in developing these facial identification systems, such as Yitu, Megvii, SenseTime, and CloudWalk. Some of them, like Megvii, also develop technology for startups in the United States.

The case of China may sound like an extreme situation, but more and more studies are emerging that point out how many AI algorithms show bias in interpreting data, a problem that lies not in the technology itself but in how it is coded and trained.

Such situations have led to warnings about the risks of Artificial Intelligence, such as those from Elon Musk, who has described AI as “more dangerous than nuclear weapons.”

All technology carries risks, and this has been true since humanity's first tools were created. Fire and the wedge are hazards when used without caution. Outlawing new technologies is an unrealistic way out. A more promising approach is to anticipate the problems they create and find innovative solutions to them.