
    Data, Algorithms and “Artificial” Intelligence: What Is The Problem?

    The storage and control of data by large corporations has fueled greater exploitation, racism and gender discrimination


    Data, algorithms and artificial intelligence (AI) have a constant presence in debates across many regions of the world, ranging from futuristic technologies such as autonomous vehicles to everyday applications that negatively affect our communities. As feminist, anti-capitalist and anti-racist activists, we must understand the implications and politics of these technologies, since in many cases they accentuate inequalities of wealth and power and reproduce racial and gender discrimination.

    Data, algorithms and artificial intelligence occupy more and more space in our lives, although, in general, we are hardly aware of their existence. Their impacts can be equally invisible at times, but they are related to all our struggles for a more just world. Access to these technologies is uneven, and the balance increasingly tilts toward powerful institutions such as the armed forces, the police and businesses.

    Only a few private actors have the computational capacity to run the most robust AI models, so even universities depend on them for their research. As for data, we produce it every day, sometimes consciously, sometimes simply by carrying our smartphones with us all the time without even using them.

    A scandal that made headlines

    A few years ago, the Facebook-Cambridge Analytica scandal made headlines: data was used to influence votes and elections in the UK and the US. We generally only learn about such cases from whistleblowers [1], since there is an almost total lack of transparency around these algorithms and the data fed into them, which makes it difficult to understand their impact. Some examples help us understand how these technologies, and the way they are implemented, change decision-making methods, worsen working conditions, intensify inequality and oppression, and even damage the environment.

    Automated decision making (ADM) systems use data and algorithms to make decisions on behalf of humans. They are changing not only how decisions are made, but also where and by whom. In some cases, they shift decision-making from public space to private spaces, or effectively place control over public space in the hands of private companies.

    Some insurers have implemented ADM and AI technologies to determine whether claims are legitimate. According to them, this is a more efficient and profitable way to make these decisions. But information about what data is used and what criteria are applied to these determinations is often not made available to the public, because it is considered a trade secret.

    In some cases, insurers even use data to forecast risks and calculate rates based on expected behavior, which is just a new way of undermining the principle of solidarity that underpins group insurance while reinforcing neoliberal and individualistic principles.
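    To make the arithmetic concrete, here is a toy sketch in Python (the names and numbers are invented, not drawn from any real insurer's method) of how individualized, risk-scored pricing departs from the pooled premium that group insurance is built on.

```python
# Toy sketch (invented numbers, not any real insurer's method) contrasting a
# solidarity-based pooled premium with individualized, risk-scored pricing.
expected_claim_cost = {"ana": 200.0, "bea": 200.0, "carla": 1400.0}

# Pooled premium: everyone pays an equal share of the group's expected cost.
pooled_premium = sum(expected_claim_cost.values()) / len(expected_claim_cost)
print(f"Pooled premium for everyone: {pooled_premium:.2f}")  # 600.00

# Risk-scored premium: each person pays their own predicted cost.
for name, predicted_cost in expected_claim_cost.items():
    print(f"Risk-scored premium for {name}: {predicted_cost:.2f}")
# The person the model scores as "high risk" (carla) now carries the burden
# alone, which is what the text means by undermining solidarity.
```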

    Furthermore, these models use data from the past to predict future outcomes, which makes them inherently conservative and predisposed to reproduce or even intensify forms of discrimination suffered in the past. Although race is not used directly as an input, indicators such as ZIP codes generally serve as proxies for it, so these AI models tend to discriminate against racialized communities.
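    A minimal, hypothetical sketch of this proxy effect (synthetic data, scikit-learn assumed available): the model never sees the protected attribute, yet it reproduces the historical gap through a correlated ZIP-code feature.

```python
# Hypothetical illustration with synthetic data: a model trained on past,
# biased decisions reproduces the bias through a proxy feature (zip_code),
# even though the protected attribute (group) is never given to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                # protected attribute, never an input
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)   # strongly correlated proxy
# Historical decisions were biased: group 1 was approved far less often.
past_approval = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

model = LogisticRegression().fit(zip_code.reshape(-1, 1), past_approval)
pred = model.predict(zip_code.reshape(-1, 1))

print("Predicted approval rate, group 0:", pred[group == 0].mean())  # ~0.9
print("Predicted approval rate, group 1:", pred[group == 1].mean())  # ~0.1
```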

    Both private and public

    Not only private companies but also governments have put AI systems in place to provide services more efficiently and detect fraud – which is usually synonymous with cost reduction. Chile is among the countries that have started a program to use AI to manage healthcare, reduce waiting times and make treatment decisions. Critics of the program fear that the system will cause harm by perpetuating prejudices based on race, ethnicity, country of origin and gender.

    Argentina developed a model in collaboration with Microsoft to prevent school dropouts and early pregnancy. Based on information such as neighborhood, ethnicity, country of origin or hot water supply, an algorithm predicts which girls are most likely to become pregnant, and the government directs services on that basis. In fact, the government is using this technology to avoid having to implement comprehensive sexuality education, which, incidentally, does not even enter into the model’s calculations for predicting teenage pregnancy.

    “Smart cities”

    Under the banner of “Smart Cities”, city governments are handing over entire neighborhoods to private companies for experimentation with technologies. Sidewalk Labs, a subsidiary of Alphabet (the company that owns Google), wanted to develop a neighborhood in Toronto, Canada, and collect massive amounts of data on residents to, among other things, predict their movements in order to regulate traffic. The company even had plans to apply its own taxes and control some public services. If it weren’t for the activists who mobilized against this project, the government would have simply handed over the public space to one of the largest and most powerful private companies in the world.

    Putting decision-making power over public space in the hands of private companies is not the only problem with initiatives such as “Smart Cities”. An example from India shows that they also tend to create large-scale surveillance mechanisms. Lucknow City Police recently announced a plan to use cameras and facial recognition technology (FRT) to identify expressions of distress on women’s faces.

    Under the guise of combating violence against women, several cities in India have spent exorbitant amounts of money to implement surveillance systems, money that could have been invested in community-led projects to combat gender-based violence.

    Rather than address the root of the problem, the government perpetuates patriarchal norms by creating surveillance regimes. Additionally, facial recognition technology has been shown to be significantly less accurate for anyone who is not a white cis man, and emotion-recognition technology is considered deeply flawed.

    AI is driving heightened surveillance in many areas of life in many countries, but especially in liberal democracies: from proctoring software that monitors students during online exams to what is known as ‘smart surveillance’, which tends to intensify the surveillance of already marginalized communities.

    Body cameras

    A good example of this is body cameras, which have been heralded as a solution to police brutality and serve as an argument against demands for budget cuts or even the abolition of the police. From a feminist perspective, it should be noted that surveillance technologies exist not only in public space, but also play an increasingly important role in cases of domestic violence.

    Law enforcement authorities also create “gang databases” that generate further discrimination against racialized communities. It is well known that private data-mining companies such as Palantir or Amazon support immigration agencies in deporting undocumented immigrants. AI is also used to predict which crimes will occur and who will commit them. Because these models are based on past crime and criminal-record data, they are heavily skewed against racialized communities. In fact, they can contribute to crime rather than prevent it.

    Another example of how these AI surveillance systems support white supremacy and heterosexual patriarchy is airport security. Black women, Sikh men [3] and Muslim women are more frequently subjected to invasive searches. And because these models and technologies enforce cisnormativity, trans and non-binary people are flagged as divergent and singled out for inspection.

    Surveillance technologies are not only used by the police, immigration agencies and the military. It is increasingly common for companies to monitor their employees using AI. As in any other context, surveillance technologies in the workplace reinforce existing discrimination and power disparities.

    This development may have started within the big platform and big data companies [4], but data capitalism, the fastest-growing sector, imposes new working conditions not only on workers in that sector – its scope is even greater. Probably the best-known example of this type of surveillance is Amazon, where employees are constantly monitored and, if their productivity rates continually fall below expectations, are automatically fired.

    Retail sector

    Other examples include clothing retail, where tasks such as arranging merchandise for display are now decided by algorithms, depriving workers of their autonomy. Black people and other racialized people, especially women, are more likely to hold low-paid and unstable jobs and are therefore often the most affected by this dehumanization of work. Platform companies like Amazon or Uber, backed by huge amounts of capital, not only change their industries but also manage to impose changes in legislation that weaken worker protections and affect entire economies.

    That is what they did in California with Proposition 22, claiming that the change would create better opportunities for racialized working women. However, a recent study concluded that the change actually “legalized racial subordination.”

    So far we have seen that AI and algorithms contribute to power disparities, shift decision-making from public spaces to opaque private companies, and intensify the harms inherent in racist, capitalist, heteropatriarchal and cisnormative systems. Furthermore, these technologies frequently attempt to give the impression that they are fully automated, when in reality they rely on a large amount of cheap labor.

    And, when fully automated, they can consume absurd amounts of energy, as demonstrated in the case of some language-processing models. Bringing these facts to light has cost leading researchers their jobs. Activists have developed strategies to resist these technologies and/or make the damage they cause visible.

    Understanding the harms

    In general, the first step in these strategies is to understand the harms that can result and to document where the technologies are being applied. The Our Data Bodies project produced the Digital Defense Playbook, a resource aimed at raising public awareness of how communities are affected by data-driven technologies.

    The No la Mía IA [Not My AI] platform, for example, has been mapping biased and harmful AI projects in Latin America. The group Organizers Warning Notification and Information for Tenants [OWN-IT!] built a databank in Los Angeles to help tenants fight rent increases. In response to predictive policing technology, activists created the White Collar Crime Risk Zone map to anticipate where in the US financial crime is most likely to occur.

    Some people have decided to stop using certain tools, such as the Google search engine or Facebook, thus refusing to provide even more data to these companies. They argue that the problem is not individual data, but the datasets used to restructure our environments in ways that extract ever more from us in the form of data and labor, and that are becoming less and less transparent.

    Data obfuscation

    Another strategy is data obfuscation or masking: activists have created plug-ins that randomly click on Google ads or randomly “like” Facebook pages to confuse the algorithms (a minimal sketch of this idea follows below). There are also ways to prevent AI from recognizing faces in photos and using them to train algorithms. The Oracle for Transfeminist Technologies takes a completely different approach: a card deck that invites a collective exercise in imagining different technologies.
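    As a purely illustrative sketch of the obfuscation idea behind the plug-ins mentioned above (not the code of any actual plug-in), one can mix real queries with random decoys so that the interest profile inferred from the traffic becomes noisy:

```python
# Illustrative only: interleave a user's real queries with random decoy
# queries so that the interest profile inferred from the traffic is blurred.
import random

DECOY_TOPICS = [
    "weather forecast", "bus schedule", "recipe ideas",
    "local news", "sports scores", "gardening tips",
]

def obfuscated_stream(real_queries, decoys_per_query=3):
    """Yield each real query mixed with random decoys, in shuffled order."""
    for query in real_queries:
        batch = [query] + random.sample(DECOY_TOPICS, k=decoys_per_query)
        random.shuffle(batch)
        yield from batch

for q in obfuscated_stream(["tenant rights lawyer", "union organizing meeting"]):
    print(q)
```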

    Indigenous people living on Turtle Island (the US and Canada) are already very familiar with surveillance and with the collection of large volumes of data about them that are then used against them. From this experience, they created approaches to First Nations data sovereignty: principles for data collection, access and ownership intended to prevent further harm and to enable First Nations, Métis and Inuit peoples [5] to benefit from their own data.

    AI, algorithms and data-driven technologies are not just troublesome privacy issues; much more is at stake. As we organize our struggles, we most likely use technologies that produce data for companies profiting from data capitalism. We need to be aware of the implications of this, of the damage these technologies cause and of how to resist them, so that our mobilizations are successful.
