
Will superintelligent machines take over the world?
Science fiction has taught us to fear artificial intelligence: machines that possess the same rational capabilities as humans and can process an almost infinite quantity of information.
There are many storylines about superintelligent robots that gain some sort of consciousness, surpass our intelligence, and end up rebelling against their creators and harming humans. This fear is legitimate: most of us, even though we constantly use AI systems in our daily lives, are not familiar with how they work or with their real impact on us. Most AI does not even look like a fictional robot at all.
What should we worry about?
We should not be concerned about AIs turning against us and ultimately driving us to extinction, at least not yet: they lack consciousness and emotion, and they need to be financed, built and maintained by humans. The true problem is that AI magnifies issues already present in our society: inequality, discrimination, power asymmetries and oppression.
The AI sector, part of the big-tech sector, contributes heavily to climate change and to poor labor conditions, especially in developing countries. The most important concern, however, is the space we have given AI systems in the decision-making processes of governments, public administrations, courts and banks. All these institutions use AI systems to make decisions that are fundamental to our lives: where public spending should go, how much we owe the state, the length of our sentence if we are convicted of a crime, and whether we have access to credit, respectively. It is natural, then, that questions arise about the accountability, transparency, fairness and governance of AI.
Who is responsible for decisions taken by AI? On what grounds do AI systems make those decisions? Is a decision made by an AI fair? How do we develop regulations and standards to govern algorithms, rather than letting them govern us?
AI is a tool that should make our lives easier, not an obscure phenomenon we should view with suspicion. AI systems can help us expand knowledge and have useful applications such as self-driving cars, smart homes, recommendation systems, face and speech recognition, and many more. Some can even engage in conversation, like ChatGPT, which has rapidly gained millions of users thanks to its many capabilities.
The use of AI systems could be profoundly beneficial to humans, but if we do not take the right precautions, such as keeping humans in control of decisions made by AI or creating standards for its use through governance, unanticipated negative consequences could follow.
But how does AI work? What if it is biased?
While old AI responded exactly to the code (tasks) written by humans, modern AI is machine learning built to emulate a human brain and its neurons, creating a neural network. A signal travels through artificial neurons as it would in a human brain. Essentially, the system is fed data and learns from it, mimicking the patterns already present in the data used to train the machine. The AI then produces an output according to what it was asked to do.
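To make the "fed data, learns the patterns" loop concrete, here is a minimal sketch in Python: a tiny two-layer neural network trained with gradient descent to mimic a toy pattern (XOR). The problem, layer sizes and learning rate are all illustrative assumptions; real systems work on the same principle, only with millions or billions of parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR pattern we want the network to mimic.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant 1 so each layer also learns a bias term.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 8))   # 2 inputs + bias -> 8 hidden neurons
W2 = rng.normal(size=(9, 1))   # 8 hidden neurons + bias -> 1 output

for step in range(10_000):
    # Forward pass: the signal travels through the layers.
    h = sigmoid(with_bias(X) @ W1)
    out = sigmoid(with_bias(h) @ W2)

    # Backward pass: nudge the weights to better fit the training data.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * with_bias(h).T @ grad_out
    W1 -= 0.5 * with_bias(X).T @ grad_h

print(out.round(2))  # approaches [[0], [1], [1], [0]]: the net has learned XOR
```

Nothing in the network "understands" XOR; the weights simply settle into whatever values reproduce the training data, which is exactly why flawed training data produces flawed outputs.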
Training data consists of huge amounts of historical data, and it is inevitably flawed because it replicates existing issues in our society. The data concerns millions of people, including the pictures and information uploaded to social media every day. For more critical data, such as healthcare records, criminal records and credit data, the privacy implications are even more serious.
AI amplifies gender and racial bias, discriminating against certain categories of people while privileging others. Categorization during data collection involves political and social choices, and it is one of the sources of discriminatory and unfair outcomes.
To counter the social threat of amplifying existing inequalities, we should try to build fairer systems, for example by using diverse training data or by providing a fast way to challenge AI decisions when their unintended or unwanted consequences cause injury.
Bias in the system may lead to an unfair outcome that prejudices certain individuals simply because the algorithm has categorized them in a certain way, reinforcing ableist, ageist, racist, sexist and generally discriminatory ideas.
Bias can also have statistical sources: the data may oversimplify the real world, so the AI may not be making its decision in the appropriate context.
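The mechanism is easy to demonstrate. The sketch below generates a hypothetical "hiring" dataset in which past decisions penalized one group at equal skill, then fits an ordinary logistic regression to it; the model learns group membership itself as a predictor, reproducing the historical skew. All names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n).astype(float)  # 0 = group A, 1 = group B
skill = rng.normal(size=n)                   # skill identically distributed in both groups

# Historical labels: hired if skilled enough, minus a penalty applied to group B.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Fit a plain logistic regression on [skill, group, intercept] by gradient descent.
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(5_000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

print("weight on group membership:", round(w[1], 2))  # clearly negative

def score(g):
    """Predicted hiring probability for a candidate with skill 1.0 in group g."""
    return 1.0 / (1.0 + np.exp(-(np.array([1.0, g, 1.0]) @ w)))

# Two equally skilled candidates now receive different scores:
print("group A:", round(score(0.0), 2), "group B:", round(score(1.0), 2))
```

The model is not malicious; it is accurate about a biased past, which is precisely the problem.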
How can AI seriously harm us? The COMPAS recidivism algorithm
Algorithms are increasingly used by judges to assess the likelihood that a defendant will re-offend. ProPublica, a non-profit investigative journalism organization, analyzed one of the leading risk assessment algorithms in the US, called COMPAS.
The report shows that black defendants were often incorrectly classified as high risk, deemed more likely to commit another crime. Even when a black defendant had no criminal record, the system would classify him as a higher risk to the community than a white defendant who had offended before.
White defendants, on the other hand, were more likely to be incorrectly classified as low risk, even when violent offences were involved. Looking at realized recidivism rates after two years, many of the black defendants classified as high or medium risk did not re-offend, while many of the white defendants classified as low risk ended up re-offending. Black defendants who did not re-offend were nearly twice as likely to have been labeled high risk.
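The core of ProPublica's analysis can be expressed as a comparison of error rates across groups: the false positive rate (labeled high risk but did not re-offend) and the false negative rate (labeled low risk but did re-offend). The sketch below shows that calculation on a handful of invented placeholder records, not the real COMPAS data.

```python
from dataclasses import dataclass

@dataclass
class Record:
    group: str        # defendant's group (placeholder data, not real cases)
    high_risk: bool   # classified medium/high risk by the tool
    reoffended: bool  # actually re-offended within two years

records = [
    Record("black", True, False), Record("black", True, True),
    Record("black", True, False), Record("black", False, True),
    Record("white", False, True), Record("white", False, True),
    Record("white", True, True),  Record("white", False, False),
]

def error_rates(records, group):
    rs = [r for r in records if r.group == group]
    non_reoffenders = [r for r in rs if not r.reoffended]
    reoffenders = [r for r in rs if r.reoffended]
    # False positive: flagged high risk despite never re-offending.
    fpr = sum(r.high_risk for r in non_reoffenders) / len(non_reoffenders)
    # False negative: labeled low risk despite re-offending.
    fnr = sum(not r.high_risk for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for g in ("black", "white"):
    fpr, fnr = error_rates(records, g)
    print(f"{g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

The point of the comparison is that a tool can have similar overall accuracy for both groups while distributing its mistakes very differently, harming one group through false alarms and favoring the other through missed ones.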
The software's bias against black people influenced the lives of many, since its scores were used to determine sentence length and parole. While the risk assessment did not explicitly include race as one of the drivers of the risk level, it included many variables correlated with it, such as poverty and unemployment.
This is just one way in which people of color are still discriminated against by machine learning; in search engines they are both underrepresented and misrepresented. This case study shows the negative impact an inaccurate or biased AI can have on our daily lives and our personal freedom, threatening our rights.
Not all decisions are equal
We should not place all our decision-making power in AI, but use it to make more informed decisions. In critical sectors like welfare, healthcare, criminal justice and warfare, we should not rely entirely on automated decision-making systems.
There is of course a difference between having an AI decide who receives an organ transplant and having a virtual assistant (Alexa, Siri, Google Assistant) decide which song to play. AI systems have already autonomously killed people and dramatically impacted households. In both cases serious problems surfaced: imprecise automated drone strikes killed people other than the target (the United States' NSA drone strikes), and automated systems assigned debts that were not owed to people who then committed suicide (Robodebt in Australia).
These are extreme examples in which AI mistakes cost human lives, but they help us understand the profound impact such systems can have on us. Other sectors in which we should use AI carefully are workplace hiring and surveillance: workers are increasingly monitored for efficiency and treated like robots (Amazon). A credit rating AI can give someone a bad credit score, and an AI hiring system can automatically exclude an applicant from a job. The Chinese Social Credit System relies on AI to decide whether a citizen can travel abroad, access loans or access private schools.
AI systems used in the decision-making processes of governments and financial institutions should be tested, and research studies published, before national deployment, which is the opposite of how things usually go.
Transparency, privacy and accountability
As far as transparency is concerned, we cannot see how modern AI systems operate or what leads them to their output; they are essentially a black box. To better understand the mistakes an AI makes, its output should include an explanation of its decision, helping us identify how it was misled.
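As a hedged illustration of what such an explanation could look like, the sketch below implements one simple, generic approach (not the method of any particular product): probe the black box by perturbing one input at a time and reporting how much the output moves. The "credit model", its weights and its inputs are all invented.

```python
def explain(model, inputs, delta=1.0):
    """Return per-feature sensitivity of `model` around `inputs`."""
    baseline = model(inputs)
    sensitivities = {}
    for name, value in inputs.items():
        # Re-run the model with one feature nudged by `delta`.
        perturbed = dict(inputs, **{name: value + delta})
        sensitivities[name] = model(perturbed) - baseline
    return sensitivities

# A toy "credit score" black box with invented weights, for demonstration.
def credit_model(inputs):
    return 300 + 40 * inputs["income"] - 60 * inputs["missed_payments"]

applicant = {"income": 3.0, "missed_payments": 2.0}
print(explain(credit_model, applicant))
# {'income': 40.0, 'missed_payments': -60.0} -> missed payments dominate
```

An explanation of this kind does not open the black box, but it does tell an affected person which inputs drove the decision, which is the minimum needed to contest it.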
A lot of data is collected to train AI systems. Everything people upload to social media, such as selfies, videos, audio and personal information, is collected in huge datasets. Even mugshots have been used to train face recognition AI, without the consent of the people portrayed and without any benefit to them.
But who is accountable for the decisions an AI takes and their impact on people's lives? If the AI is at fault, is the creator responsible for the unintended consequences, or the user?
Accountability is still a tricky question: even when AI companies break the law, they rarely face consequences, and they invariably put up statements claiming their algorithms have been fixed. When users are held accountable instead, it takes a long time to claim damages (Robodebt).
Attempts to regulate AI have been pursued since Edward Snowden's revelations in 2013, when he leaked highly classified information from the National Security Agency, an intelligence agency of the United States. People realized that many AI systems were being secretly developed by national governments. A state can use AI for security purposes to monitor its citizens, whoever enters its borders, and whoever uses its internet infrastructure; this covers both domestic crime and terrorism. People who have already been convicted can be heavily monitored, and screening can lead to drone strikes in which the identity of the target is not known but data or metadata suggests they are a terrorist. Furthermore, AI can detect undocumented workers, leading to their deportation.
Given that AI systems are built in secret by governments, some states may be unwilling to regulate them if regulation leaves them unprotected against other states' use of AI for espionage or warfare, or if it hinders their own security purposes. UNESCO has developed global standards on the ethics of AI, and there have also been national and regional developments such as the GDPR in Europe. The UNESCO recommendations focus on the social implications of AI and its impact on human rights and fundamental freedoms. The main pillars of the legal framework are fairness, non-discrimination, the right to privacy, human oversight, transparency and the ability to explain decisions. These are in fact the issues and values one should keep in mind when talking about AI.
Environmental and labor issues
Cheap labor is used heavily in the AI sector. Underpaid and exploited workers from disadvantaged communities perform alienating, repetitive tasks that give the impression that certain AI systems are autonomous. Human labor is required to build, maintain and test AI.
Beyond exploitation, workers are subjected to the massive surveillance AI systems make possible: they are continuously monitored to extract the highest possible productivity from them, and even their social media is watched.
Another important concern is the impact the AI industry has on climate change. Building and maintaining AI requires large quantities of critical resources, such as metals.
The costs of mining are borne by the disadvantaged communities where extraction takes place, which may even lie in conflict zones, while the benefits are enjoyed by big corporations in developed countries. A huge amount of electricity is needed to run the computational infrastructure; just think of the enormous data centers.
AI has a global supply chain, with materials and production phases scattered all over the world, so even its logistics carry a high carbon footprint, driven by transport.
"AI for good"? Who does it truly serve?
Finally, to get the whole picture of the real issues, we need to understand that AI serves corporations and states; that is, capitalism and politics.
AI reinforces the inequality between developing and developed countries, since possessing powerful AI can sustain economic growth and strengthen the power of a nation. Not only does AI consolidate the power of one nation against others, it also reinforces governments' control over their own citizens, especially through surveillance, the possession of critical data, and the pushing of particular political agendas. AI should serve the interests of the people its decisions will affect, not just those of its creators or investors, whether state actors or big corporations; profit and power cannot be eliminated from the picture, only addressed through governance.
References
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Andrejevic, M. (2019). Automated Media. Routledge.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Graf, P. The Future of AI [Video]. TEDxSonomaCounty, YouTube.
Shane, J. The danger of AI is weirder than you think [Video]. TED, YouTube.
Flew, T. (2021). Regulating platforms. Polity.
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica.
Harris, S. Can we build AI without losing control over it? [Video]. TED, YouTube.