Why do algorithms create these problems? And how do we go about questioning artificial intelligence?

Have you ever heard the story of the world's smartest horse? The horse, trained by the German Wilhelm von Osten at the end of the 19th century, could perform all kinds of commands given to it by humans (Crawford, 2021). As it turned out, the horse's apparent intelligence was a projection of the human mind: it was responding to subtle, unconscious cues from the people around it rather than actually reasoning. Let's return to today's society, where people have trained machines into programs with advanced intelligence – artificial intelligence (AI). There is no doubt that the advent of AI has brought us many advantages in simulation and optimization through the training of algorithmic models. However, when humans place these machine models in the context of a more complex human society, controversial issues arise. If you are interested in these topics, please continue to explore the world of AI on this page.


Platform-discourse shaping

When we describe a program today, we habitually refer to it as a platform, implying a system that is programmable and capable of collecting and processing data (De Reuver et al., 2018). The definition and role of a platform vary across contexts, and the term also carries a political connotation: a platform can describe the 'agenda' or 'ideas' of a political party, group, or candidate. When people express their views in public life, they do so based on their own ideas, and the target stakeholders shape the platform in a specific context so that it resonates with a specific audience (Gillespie, 2014). In the shift from mass society to digital platforms, platform creators often structure their platforms in ways conducive to their own development, shaping content in line with popular preferences.

In shaping this environment, platforms use algorithms to bring artificial intelligence into close contact with us. Digital media are material objects and infrastructures that exist in the real world and involve interaction with objects and spaces (Awan & O’Loughlin, 2009). According to Mark Andrejevic and Mark Burdon (2014), we live in a world that is increasingly becoming a ‘sensor society,’ where public spaces are filled with sensors that capture people, things, environments, and their interactions. Simply put, the rhythms of our lives are interconnected through these sensors and the data-processing capabilities of digital media.

What makes a platform work – Algorithms and machine learning

According to Striphas (2015), an algorithm is a collection of logical steps used to quickly organize a body of data and act on it to achieve a desired outcome. Based on this definition, we can see how TikTok operates: it analyzes and makes judgments about its audience and pushes out media content that users love. Users' interests are a complex feature to represent in data-driven form; however, technology companies use algorithms to capture people's interests and build massive databases (Pi et al., 2020).
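To make this idea concrete, here is a minimal sketch of interest-based ranking in Python. The tags, interest weights, and scoring rule are all hypothetical illustrations, not TikTok's actual system, which is proprietary and far more complex:

```python
# A minimal sketch of interest-based ranking, assuming hypothetical data:
# each video has tags, and a user profile stores interest weights
# learned from past viewing behavior.

user_interests = {"beer": 0.8, "travel": 0.5, "cooking": 0.1}

videos = [
    {"id": 1, "tags": ["beer", "bar"]},
    {"id": 2, "tags": ["cooking", "recipe"]},
    {"id": 3, "tags": ["travel", "beer"]},
]

def score(video, interests):
    """Sum the user's interest weights for every tag on the video."""
    return sum(interests.get(tag, 0.0) for tag in video["tags"])

# Rank the catalog by how well each video matches the user's interests.
ranked = sorted(videos, key=lambda v: score(v, user_interests), reverse=True)
print([v["id"] for v in ranked])  # -> [3, 1, 2]
```

Even in this toy form, the logic shows why the database of captured interests matters: the richer the user profile, the more precisely content can be matched to it.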

The key to this is machine learning, through which algorithms implement the programs constructed by engineers. Machine learning is not a specific or single formula; it is a learning method that improves through constant feedback rather than following a fixed sequence. Initially, algorithms are trained by generating models from a corpus already validated by designers or previous users. These models formalize the problem and the goal in machine terms (Zhou, 2021). For example, when we type the word "beer" into TikTok, the videos that appear contain the critical features of "beer," i.e., a yellow liquid with foam. However, with hundreds of millions of pieces of content, it seems impossible for the platform to categorize every video by hand.

TikTok displays results when typing “beer”

Machine learning algorithms and training processes can be divided into two basic types: supervised and unsupervised.

Supervised methods use predefined features, such as people, objects, logos, or text in images, during training, and tune the classification algorithm over multiple training rounds to improve accuracy. This approach typically trains on labeled data and validates on held-out test data to measure the classifier's generalization ability and detect overfitting.
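The sketch below illustrates this supervised workflow with scikit-learn, using its built-in handwritten-digit images as a stand-in for labeled video frames; the train/test split mirrors the training-and-validation process described above:

```python
# A minimal supervised-learning sketch: train on labeled data,
# then validate on held-out test data to estimate generalization.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # images with known (labeled) classes

# Hold out test data so overfitting to the training set is detectable.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # the classifier being tuned
clf.fit(X_train, y_train)                # training on labeled examples

# Validation on unseen data, as described above.
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```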

Supervised Learning vs Unsupervised Learning (Source: https://www.v7labs.com/blog/supervised-vs-unsupervised-learning)

On the other hand, unsupervised methods do not rely on predefined features but instead inductively identify recurring patterns in the input data. This approach can help organize cultural material, surface underlying visual features in image datasets that are difficult for humans to perceive, such as color and composition, and support clustering and linking operations. Instead of pre-labeled training data, unsupervised methods rely on converting digital images into abstract representations for comparison and clustering. Rather than focusing on specific symbolic features, this approach organizes collections of images by their underlying attributes.
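Here is a minimal sketch of that unsupervised approach, using randomly generated images as stand-ins for a real collection: each image is reduced to an abstract color representation, and k-means groups the collection without any labels:

```python
# A minimal unsupervised sketch: k-means groups images by an abstract
# representation (here, a mean-color vector) with no labels involved.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.random((60, 32, 32, 3))  # 60 fake 32x32 RGB images

# Convert each image into an abstract representation: mean R, G, B.
features = images.reshape(60, -1, 3).mean(axis=1)  # shape (60, 3)

# Cluster the collection by underlying color attributes, not by labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(kmeans.labels_[:10])  # cluster assignment for the first ten images
```

A real system would use a richer representation than mean color, such as embeddings from a neural network, but the principle is the same: the structure emerges from the data itself.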

An interesting study from China found that gay creators on Douyin (the Chinese version of TikTok) like to create content in their bedrooms, and that this content is also more popular with the gay community. One possible reason is that the bedroom is a private space in which members of the gay community can explore their sexuality and express themselves (Wang & Zhou, 2022). Expressing one's identity in public spaces remains problematic in China; the algorithm therefore learns to identify these characteristics and associate them with the gay community.

Creators on Douyin use the bedroom and dim lighting to emphasize a sense of privacy and an exploration of their sexuality.

Algorithmic discrimination

We tend to think of discrimination as existing only between humans, and we may find it hard to believe that it exists in machines. However, a platform's algorithms are a mapping of human social experience. In earlier cases, we can already find racist tendencies in platforms: Google's photo service, for example, labeled African Americans as gorillas, and when a TikTok creator used phrases such as 'Black' and 'Black Lives Matter,' he was prompted to remove the inappropriate content (Murray, 2021). These algorithmic biases are related to the distribution of power in the online world, and the operation of platform systems carries the biases of human society into the machine.

Furthermore, and more worryingly, the examples above are only the algorithmic biases that humans have managed to identify. The operation of algorithms usually involves more complex processes, and audiences receive content passively. Because audiences have limited time to view media content, algorithms must identify features of content that are likely to be popular in order to engage the audience within a short period (Lupinacci, 2021). In this workflow, whether the algorithm recommends content fairly is questionable: do these invisible forces of the Internet reinforce deep-seated prejudices? Such concerns deserve consideration, and solutions should be offered.

For more on how algorithms discriminate, check out the video below.

Are Algorithms Stealing Our Privacy?

As mentioned in our earlier introduction to algorithms, they analyze features by capturing elements of information such as images, and privacy concerns arise from this. When we take a photo with our smartphone, the algorithm can capture our facial features to help the platform recommend content. As algorithms become more prevalent in society, several social and psychological factors may lead to the rationalization of reduced privacy, thus weakening users' awareness of privacy protection. These factors include perceptions of the benefits and convenience of algorithms, a low perceived probability of suffering harm, exposure to adverse outcomes only after use has begun, and the sense that privacy is already no longer guaranteed (Fast & Jago, 2020). When you start using an app, the platform usually requires you to tick a box accepting a privacy clause; however, few people read it carefully. We seem too impatient to experience the convenience digital apps bring us.

Furthermore, as the popularity of platforms in the digital age has made it almost impossible to live without them, people have given up on protecting their privacy. Platforms instill in users the idea that unless we give up our obsession with privacy, we will not be able to access this 'territory.' Since platform operations are not open and transparent, it is difficult for individual users to push back against a platform's problems. Therefore, multiple stakeholders need to be involved in solving these practical problems.

Say no to algorithmic discrimination

After racist incidents such as labeling Black people as gorillas, Google apologized and banned its system from recognizing photos of gorillas. This seemingly simple move has piqued the interest of algorithm researchers: is there a better way to operate that circumvents such failures? A report from the European Union offers some possible approaches. Algorithm companies need to create feedback mechanisms that improve the accuracy of their algorithms by listening to audiences of different backgrounds, and algorithms tested in multi-scenario experiments can reveal the discrimination that people with protected characteristics may face (European Union Agency for Fundamental Rights, 2022). Advanced algorithms therefore require human oversight and, where appropriate, human intervention.

Although some countries have introduced regulations to prevent algorithmic discrimination, Criado and Such (2019) argue that there are further ways to improve the detection of digital discrimination:

  1. Define the legal and ethical framework: Anti-discrimination legislation needs to be developed globally, and the scope of legal protection needs to be clarified. This includes clearly defining discrimination and, when it occurs, considering the diversity of cultural, social, and ethical attitudes.
  2. Developing a new understanding of the concepts of discrimination and anti-discrimination: The concepts of discrimination and anti-discrimination need to be revisited and redefined from the perspective of decision-making algorithms. This includes considering factors such as implicit bias and unequal treatment in decision-making algorithms and proposing new concepts and definitions to understand better and prevent digital discrimination.
  3. Define formal non-discriminatory computational specifications: Obligatory (deontic) logical specifications can define the acceptable treatment of users, prohibit otherwise unacceptable behavior, and act as a common language between humans and machines, promoting algorithmic transparency; see the sketch after this list.
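To illustrate point 3, here is a minimal sketch of one such formal check: it encodes demographic parity (positive-decision rates across a protected attribute may not differ by more than a tolerance) as a machine-checkable rule. The decision data, groups, and tolerance below are hypothetical, and real specifications in the literature are considerably richer:

```python
# A minimal sketch of a machine-checkable non-discrimination rule:
# demographic parity across a protected attribute, with hypothetical data.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = approved, 0 = rejected
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

TOLERANCE = 0.2  # the "acceptable treatment" threshold the spec encodes
gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}", "OK" if gap <= TOLERANCE else "VIOLATION")
```

Because the rule is explicit and executable, it can be audited by humans and enforced by machines alike, which is exactly the kind of common language the authors call for.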

Defending our privacy

As we have mentioned, users' awareness of privacy is weak, and in the Internet space an individual user's power against capital is insignificant. In this context, the government is essential in protecting people's data privacy.

Firstly, the government can hold hearings to question the people in charge of the companies involved.

Watch the video below, which covers the recent hearing of TikTok CEO Shou Zi Chew before the US Congress and features a heated debate on how TikTok protects data privacy.

Hearings serve multiple purposes in holding companies to data-privacy standards: gathering opinions and insights, enhancing transparency and openness, guiding compliance and improvement, resolving disputes and controversies, and promoting the development of laws and regulations. They provide a forum for parties to express their views and improve practices, enhance corporate transparency and credibility, and lead to laws and regulations better suited to protecting data privacy (Federal Trade Commission, 2012).

According to guidance from the Office of the Victorian Information Commissioner (n.d.), there should be restrictions on the collection of data, instructions on the use of data, and restrictions on the use of information. These restrictions are needed because AI's potential to leak data goes beyond the act of collecting it.

While algorithms can efficiently process the vast amounts of data generated by human societies, their decision-making can be influenced by factors such as the amount of data, sample bias, and program design. Therefore, when using algorithms, we should analyze them in depth to understand their possible social and ethical implications and avoid these negative effects in subsequent development and use. At the same time, we should think critically about the results algorithms produce in different contexts and avoid over-reliance on them at the expense of human intelligence and judgment.

References

CBS. (2023, March 23). TikTok CEO Shou Zi Chew testifies before House committee as lawmakers push to ban app [Video]. YouTube. https://www.youtube.com/watch?v=x1xEuK0Fxu8

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

De Reuver, M., Sørensen, C., & Basole, R. C. (2018). The digital platform: A research agenda. Journal of Information Technology, 33(2), 124-135.

European Union Agency for Fundamental Rights. (2022). Bias in algorithms: Artificial intelligence and discrimination. European Union.

Federal Trade Commission. (2012). The government's role in privacy: Getting it right. https://www.ftc.gov/news-events/news/speeches/governments-role-privacy-getting-it-right

Fast, N. J., & Jago, A. S. (2020). Privacy matters… or does it? Algorithms, rationalization, and the erosion of concern for privacy. Current Opinion in Psychology, 31, 44-48.

Gillespie, T., Boczkowski, P. J., & Foot, K. A. (Eds.). (2014). Media technologies: Essays on communication, materiality, and society. MIT Press.

Lupinacci, L. (2021). 'Absentmindedly scrolling through nothing': Liveness and compulsory continuous connectedness in social media. Media, Culture & Society, 43(2), 273-290.

MIT Media Lab. (2018, February 10). Gender Shades [Video]. YouTube. https://www.youtube.com/watch?v=TWWsW1w-BVo

Murray, C. (2021, July 10). TikTok algorithm error sparks allegations of racial bias. NBC News. https://www.nbcnews.com/news/us-news/tiktok-algorithm-prevents-user-declaring-support-black-lives-matter-n1273413

Office of the Victorian Information Commissioner. (n.d.). Artificial intelligence and privacy – issues and challenges. OVIC. https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-and-privacy-issues-and-challenges/

Pi, Q., Zhou, G., Zhang, Y., Wang, Z., Ren, L., Fan, Y., … & Gai, K. (2020, October). Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (pp. 2685-2692).

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412.

Wang, S., & Zhou, O. T. (2022). Being recognized in an algorithmic system: Cruel optimism in gay visibility on Douyin and Zhihu. Sexualities, 13634607221106912.
