Opinion

The Society of the Future

Artificial Intelligence: who owns the data, and how is it used?

The advent of artificial intelligence, which promises enormous benefits for the society of the future, also fills us with anxiety and fear.

We marvel at the new developments and applications in health, transportation and finance, while at the same time we are frightened by the disruptive effect of new technologies on labour markets, commerce and industry.

We treat such events as if they were unrelated and beyond our influence. Maybe it is time to trade that attitude for a less emotional and more reasoned approach, identifying the critical factors on which we must act.

The pillars of artificial intelligence include: a) Big Data (the large amounts of data collected on the web, through platforms such as Facebook, Instagram, YouTube and others); b) algorithms; and c) Deep Learning (which structures these algorithms in layers, creating an “artificial neural network” to learn and make autonomous decisions).
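To make the idea of “layers” concrete, here is a minimal sketch, in Python with NumPy and with invented inputs and weights, of a two-layer artificial neural network turning data into a prediction; real deep learning systems learn their weights from large volumes of data rather than choosing them at random.

```python
import numpy as np

# Toy two-layer "artificial neural network": each layer transforms the data
# it receives from the previous one. Weights here are random placeholders;
# in real deep learning they are learned from large volumes of data.
rng = np.random.default_rng(0)

x = rng.normal(size=4)          # input: a handful of features about a user
W1 = rng.normal(size=(8, 4))    # first layer of weights
W2 = rng.normal(size=(1, 8))    # second layer of weights

hidden = np.maximum(0, W1 @ x)  # layer 1: linear combination + ReLU non-linearity
score = W2 @ hidden             # layer 2: combine hidden features into one prediction

print(float(score[0]))          # the network's "autonomous decision" (a raw score)
```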

The “data” we generate when we choose between options offered by the platforms we use on the web is “mined” by algorithms (revealing a personal profile of sex, age, political and moral preferences, etc.), allowing the machines to “learn” to predict outcomes.

This is how a man found out that his young daughter was pregnant, according to Yuval Harari. She began receiving offers aimed at pregnant women from the mall she frequented. Questioned by her parents, she honestly denied it. They then complained to the mall's management, only to find out weeks later that their daughter really was pregnant.

Can our data be used in ways we do not authorize? Beyond the headaches of the aforementioned youngster and her family, “data” is a source of extraordinary wealth in the 21st century.

Data is to the information age what land was to agricultural societies or machinery to the industrial age. “Data-driven” companies grow eight times faster than global GDP. Maybe it is time to protect “data ownership”, preserving the “wealth” of those who generate it.

The book “Argentina 4.0” (2013) announces the emergence of a “New Economy” that will bring new rules and regulations. To be convinced, it is enough to look at the European Union's new data-ownership legislation (2018). Have you already authorized your usual platforms to use your data while accessing them in Europe? European legislation holds that “data” belongs to the person who generates it.

Its value, nevertheless, is realized only when algorithms use it in large volumes, “aggregating” it to feed the processes of “Machine Learning” or “Deep Learning”.

There lies another great problem. The “data” we generate carries our prejudices and other cultural patterns. By relying on it, artificial intelligence processes reproduce and amplify them.

Algorithms decide, based on the preferences we reveal when interacting on the web, which movie Netflix will suggest next or which video YouTube will offer next. Their massive use, however, will present us with more complex choices.
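A deliberately simplified sketch of that mechanism, in Python and with invented viewing data rather than Netflix's or YouTube's actual systems, might look like this:

```python
# Invented viewing history; not Netflix's or YouTube's actual systems.
watch_history = ["thriller", "thriller", "documentary", "thriller", "comedy"]

catalogue = {
    "thriller": ["Movie A", "Movie B"],
    "documentary": ["Movie C"],
    "comedy": ["Movie D"],
}

# The "revealed preference": the genre the user has chosen most often.
favourite = max(set(watch_history), key=watch_history.count)

# The next suggestion simply comes from that genre.
suggestion = catalogue[favourite][0]
print(f"Because you watched mostly {favourite}, next up: {suggestion}")
```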

Political leaders blame algorithms for segmenting their proposals, showing them only to audiences that already sympathize with them and thus preventing the leaders from attracting new voters.

What happens when a company that has historically hired more male than female programmers, for example, decides to resort to artificial intelligence for new recruitments? According to the expert Kriti Sharma, the algorithm will infer from the company's past preferences that men are better than women in that field, and will therefore recommend men first for new hires.
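A minimal sketch of that feedback loop, in Python and with invented hiring data rather than any real company's records, shows how a naive scoring rule ends up ranking men first:

```python
# Invented past hiring decisions; not any real company's records.
past_hires = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False), ("female", True), ("female", False), ("female", False),
]

def hire_rate(gender: str) -> float:
    """Fraction of past candidates of this gender who were hired."""
    outcomes = [hired for g, hired in past_hires if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive "algorithm" that scores new candidates by the historical hire rate
# of people like them reproduces the company's past imbalance.
for gender in ("male", "female"):
    print(gender, round(hire_rate(gender), 2))
# Prints: male 0.8, female 0.33 -> it "infers" men are better candidates only
# because the company hired more of them in the past.
```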

These are unacceptable behaviours, which would not be tolerated if they were carried out by humans. There is no reason to accept them from algorithms.

We will have to work on the “black box” of “Machine Learning” processes and on the artificial neural networks of “Deep Learning” to prevent their predictions from harming people’s lives. Do we have any idea of how many decisions – such as the insurance premium we will pay or our credit rating – are taken by algorithms every day based on our sex, race or religion?

In the words of Zeynep Tufekci’s TED talk: “we cannot outsource our moral responsibilities to machines”; artificial intelligence “does not give us a get out of ethics free card”.

Woody Allen says he is interested in the future because that is where he will spend the rest of his life. I like to add that we have the opportunity to design it. Let’s do it. Artificial intelligence cannot and should not replicate or amplify human prejudices. Let’s regulate the ownership of the data it uses and the ethics of the algorithms that govern its learning process.
