AI is dumb (and police should know it)

Artificial intelligence promises to make increasingly important decisions in the future. Should we let it?


As AI takes on ever more important decisions, people must also become increasingly aware of the limitations of these technologies and the dangers they involve.

Set aside Elon Musk’s worry that machines will keep replacing humans until we are relegated to the status of house cats. AI presents other, more immediate problems.

These problems are not science fiction. They are real and, if we are not cautious, they will become increasingly common.

In the New York Times, reporter Kashmir Hill showed that much of the technology in use today is pretty dumb, although we think it’s smart. And this has caused unnecessary problems for many individuals.

One of these cases is that of Robert Julian-Borchak Williams, a Black man in Michigan who was accused of shoplifting. The accusation grew out of sloppy police work built on faulty facial recognition technology.

Robert Julian-Borchak Williams was wrongfully arrested because of a facial recognition error.

The software used by police showed Williams’ driver’s license photo among possible matches with the man in surveillance footage, leading to Williams’ arrest for a crime he did not commit.

The reporter said in an interview that police should use facial recognition matches only as an investigative lead. Instead, people treat facial recognition as a kind of magic. That is how you end up with someone arrested on the basis of faulty software combined with inadequate police work.

But humans, not just computers, misidentify people in criminal cases. Eyewitness testimony has long proved unreliable at recognizing faces, and that unreliability has been a selling point for many facial recognition technologies.

However, machines do not recognize faces better than humans. A federal study of facial recognition algorithms found that they are biased and misidentify people of color at higher rates than white people.

The study included the two algorithms used in the search for images that led to Williams’ arrest.

Sometimes the algorithm is good and sometimes it is bad, and there is not always a good way to tell the difference. And, generally, policymakers, the government, and the police are not required to vet the technology before using it.

Companies that sell facial recognition software say it does not offer a perfect “match”. It gives a score for how likely each facial image in the database is to match the one you are looking for. The tech companies say that, on its own, none of this should lead to an arrest. (At least, that’s what they tell a journalist for The New York Times.)
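The scoring the vendors describe can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual pipeline: the names, the three-number “embeddings”, and the toy database are all invented here, and real systems use learned feature vectors with hundreds of dimensions.

```python
# Hypothetical sketch of how a face-matching system ranks candidates.
# All vectors and names are invented for illustration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "database" of license-photo embeddings (invented numbers).
database = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.4, 0.8, 0.2],
    "person_C": [0.5, 0.5, 0.5],
}

# Embedding extracted from grainy surveillance footage (also invented).
probe = [0.6, 0.6, 0.4]

# Rank every candidate in the database by similarity score.
scores = sorted(
    ((cosine_similarity(vec, probe), name) for name, vec in database.items()),
    reverse=True,
)

best_score, best_name = scores[0]
print(f"Top match: {best_name} (score {best_score:.2f})")

# The crucial point: the top-ranked score is only a lead, never an
# identification. Someone is always ranked first, even when the true
# person is not in the database at all.
```

Notice what the sketch makes obvious: the system always produces a “best match” with a plausible-looking score, whether or not the actual culprit is anywhere in the database. That is exactly why a score is a lead, not evidence.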

But on the ground, officers see a picture of the suspect alongside a photo of the most likely match, and it looks like the correct answer. Facial recognition can work well with some high-quality, close-up images. But police usually have grainy video or a sketch, and the software performs poorly in those cases.

We seem to know that computers get things wrong, yet we still believe the answers they spit out.

A Kansas farm owner was harassed by police and random visitors because of a flaw in software that maps people’s locations from their Internet addresses. People incorrectly assumed the mapping software was precise. Facial recognition has the same problem: people do not dig into the technology or read the fine print about its inaccuracies.
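The mapping flaw can be sketched as follows. This is a hypothetical illustration of the failure mode, not the actual geolocation database involved: the IP prefixes, coordinates, and the `locate` function are all made up for the example. The key behavior is real, though: when a lookup cannot resolve an address precisely, some databases fall back to a single coarse default coordinate, which happens to be a real place where a real person lives.

```python
# Hypothetical sketch of the IP-mapping flaw described above: unresolvable
# addresses collapse onto one default coordinate. All data here is invented.

# Toy geolocation table: known IP prefixes -> (latitude, longitude).
KNOWN_LOCATIONS = {
    "203.0.113": (40.71, -74.01),  # example prefix "located" in New York
}

# Fallback used when only the country is known: a single point near the
# geographic center of the United States.
COUNTRY_CENTER_US = (39.83, -98.58)

def locate(ip_address):
    """Return (lat, lon, precise) for an IP; precise=False means fallback."""
    prefix = ip_address.rsplit(".", 1)[0]
    if prefix in KNOWN_LOCATIONS:
        return (*KNOWN_LOCATIONS[prefix], True)
    # Millions of addresses the database cannot place all land on the
    # same default point -- which, on a map, looks like one real property.
    return (*COUNTRY_CENTER_US, False)

lat, lon, precise = locate("198.51.100.7")
print(lat, lon, precise)
```

Anyone who treats that fallback coordinate as a street address will keep knocking on the same innocent door, which is essentially what happened to the Kansas farm. The `precise` flag in the sketch is the fine print people never read.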