If AI is predicting your future, are you still free?


As you read these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that decided you would be exposed to this article, because it predicted you would read it. Algorithmic predictions can determine whether you get a loan, a job, an apartment, or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No government agency supervises them. No one informs you about the prophecies that determine your fate. Worse still, a search through the academic literature on the ethics of prediction shows that it is an underexplored field of knowledge. As a society, we have not thought through the ethical implications of making predictions about people - beings who are supposed to be endowed with agency and free will.

Defying the odds is at the heart of what it means to be human. Our greatest heroes are those who defied their odds: Abraham Lincoln, Mahatma Gandhi, Marie Curie, Helen Keller, Rosa Parks, Nelson Mandela, and beyond. They all succeeded far beyond expectations. Every school teacher knows children who have achieved more than was dealt in their cards. In addition to improving everyone's baseline, we want a society that allows and encourages actions that defy the odds. But the more we use AI to categorize people, predict their futures, and treat them accordingly, the more we narrow human agency, which in turn exposes us to uncharted risks.

Human beings have been using prediction since before the Oracle of Delphi. Wars have been waged on the strength of such predictions. In more recent decades, forecasting has been used to inform practices such as setting insurance premiums. Those predictions tended to be about large groups of people - for example, how many people out of 100,000 will crash their cars. Some of those individuals would be more careful and lucky than others, but premiums were roughly homogenous (except for broad categories such as age groups) on the assumption that pooling risk allows the higher costs of the less careful and less lucky to be offset by the relatively lower costs of the careful and lucky. The larger the pool, the more predictable and stable premiums were.


Today, predictions are mostly made by machine learning algorithms that use statistics to fill in the gaps of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk across large sets of people. Instead, predictions have become individualized, and you increasingly pay your own way, according to your personal risk scores - which raises a new set of ethical concerns.
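To make the shift concrete, here is a minimal sketch in Python of the difference between pooled and individualized pricing. All of the numbers are invented for illustration; they are not drawn from any real insurer or model.

```python
# Illustrative only: hypothetical per-person crash probabilities and loss amount.
risk_scores = [0.02, 0.05, 0.01, 0.30, 0.08]
avg_loss = 10_000  # assumed cost of one crash

# Classic pooling: everyone pays the group's average expected loss.
pooled_premium = sum(r * avg_loss for r in risk_scores) / len(risk_scores)

# Individualized pricing: each person pays according to their own predicted risk.
individual_premiums = [r * avg_loss for r in risk_scores]

print(f"Pooled premium for everyone: {pooled_premium:.2f}")
for i, premium in enumerate(individual_premiums):
    print(f"Person {i}: {premium:.2f}")
```

Under pooling, every person in this toy group pays 920; under individualized pricing, the unluckiest prediction carries a premium of 3,000 while the luckiest pays 100. The careful and lucky no longer subsidize anyone, which is precisely the change the paragraph above describes.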

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments, biases about risk, and values are embedded in it. There can be more and less accurate forecasts, to be sure, but the relationship between probability and actuality is far more tenuous and ethically problematic than some assume.

Today's institutions, however, often try to pass off predictions as if they were a model of objective reality. And even when AI's predictions are merely probabilistic, they are often interpreted as deterministic in practice - partly because human beings are bad at understanding probability and partly because the incentives around risk avoidance end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with lower risk scores.)
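The slide from probabilistic score to deterministic treatment is easy to picture in code. The sketch below uses hypothetical candidates and scores: the 75 percent figure is only an estimate, yet once candidates are ranked, the person carrying it is simply never chosen.

```python
# Illustrative only: hypothetical candidates and their predicted
# probability of being a "bad employee".
candidates = {"A": 0.75, "B": 0.40, "C": 0.55}

# The scores are probabilistic, but ranking turns them into a hard verdict:
# the lowest-risk candidate always wins, and A's 25 percent chance of
# being a good employee never gets tested.
hired = min(candidates, key=candidates.get)
print(f"Hired: {hired}")  # prints "Hired: B"
```

Nothing in this logic treats 0.75 as uncertain; the probability functions, in effect, as a fact about the candidate.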
