AI is not for reaffirming your biases

Computers are not as infallible as we would sometimes like to believe, and people must become more aware of this as we hand machines ever more important decisions.

A wrongful arrest and an algorithm that allegedly identifies criminal tendencies alert us to the dangers of transferring our prejudices to AI.

In the 19th century, when writers imagined an automated world, they assumed that machines would perform certain tasks much better than humans. Playing chess, for example.

Writers like Edgar Allan Poe argued that the mastermind behind the Turkish chess player (an automaton that supposedly knew how to play chess) had to be human, because a true machine would never lose a game.

There are tasks that machines can perform without error, such as complex mathematical calculations. In many others, however, they are far from the infallible devices we imagine, and the stakes grow as we rely on them for important decisions.

Are machines fairer?

In The Number One Model CE Machine, a science fiction tale by the Russian writer and physicist Anatoly Dneprov, a group of homeless people discuss the amazing things machines can now do. One points out that there are already machines capable of determining whether a person is guilty just by asking seemingly random, unrelated questions to which the accused answers only yes or no.

Today we have machines that do similar jobs, but the results have not been entirely satisfactory. Developers themselves acknowledge that their inventions still need refinement, and that the quality of their analyses depends heavily on the quality of the data they are fed.

However, even the police who use these tools have not taken this into account. In January of this year, Robert Julian-Borchak Williams, a Black man from Wayne County, Michigan, was wrongfully arrested after being misidentified by an algorithm.

Williams turned up when police searched an AI-analyzed database for potential suspects, and they arrested him on that match alone, even though DataWorks Plus, the company that developed the software, had already warned that its identification system was not infallible.

Williams’ case showed that AI is far from eliminating racial bias or errors in identifying suspects. On the contrary, it seems to aggravate these problems.

Reaffirming racial prejudice

In a revival of the pseudoscience of phrenology, facial recognition software has allegedly been used to estimate a person’s criminal tendencies.

Springer planned to publish research related to this technology. However, AI experts from the Coalition for Critical Technology opposed the publication and called on the publisher not to validate this study, or any similar one, in the future.

The coalition’s argument is that any research of this type is necessarily biased because the data is biased from the start: the United States justice system is typically harsher on people of color than on white people.

If we ignore this fact, we run the risk of validating existing social prejudices.
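
To see how this can happen, consider a minimal sketch in plain Python. The numbers are entirely hypothetical and the "model" is deliberately naive; the point is only that when arrests, rather than actual offenses, are used as training labels, a system learns the policing disparity and reports one group as "more criminal":

```python
import random

random.seed(42)

# Hypothetical simulation: both groups offend at the SAME true rate,
# but group B is policed more heavily, so its offenses are more often
# recorded as arrests. The arrests, not the offenses, become the labels.
TRUE_OFFENSE_RATE = 0.10
ARREST_PROB = {"A": 0.30, "B": 0.70}  # chance an offense leads to arrest

def make_person(group):
    offended = random.random() < TRUE_OFFENSE_RATE
    arrested = offended and random.random() < ARREST_PROB[group]
    return {"group": group, "arrested": arrested}

population = [make_person(g) for g in ("A", "B") for _ in range(50_000)]

# "Training": a naive model that just learns the arrest frequency per group.
def learned_risk(group):
    members = [p for p in population if p["group"] == group]
    return sum(p["arrested"] for p in members) / len(members)

for g in ("A", "B"):
    print(f"group {g}: learned 'criminality' score = {learned_risk(g):.3f}")

# Group B's score comes out more than twice as high, even though the true
# offense rate is identical: the model has merely encoded the biased labels.
```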

AI still needs improvement

Researchers once believed that it would be easy for a machine to identify the best moves in a chess game through calculation alone. They later discovered that it was not so simple.

The first chess programs could not beat even novice players. After more than 30 years of research, however, a computer managed to beat the world chess champion. Since then, computers have become practically unbeatable at the game.

In the same way, we must recognize that today’s AI is only just learning to work with large amounts of data. It will take many years for it to improve. Hopefully, in that time, we will learn to free ourselves from our prejudices.