Algorithms and the Rise of Echo Chambers

ARIN6902 Blog Post (Week 5) by Daisy (SID: 530010050)

In the current generation of online media, computer algorithms are the primary governors of the messages you ingest on a day-to-day basis. These calculations are complex and draw on data such as educational background, age, and political views, all of which users release when they agree to a platform's terms of service. Almost all algorithm-driven consumer platforms, such as social media applications and shopping sites, strive to get you to consume the content you like most. Moreover, our preferences and ideas are reflected back to us through epistemically conditioned social media networks, in addition to the information we ingest, which is shaped by the communities we join. We typically form small groups in the comment sections of famous creators on Instagram and Snapchat, or within instant messaging groups on Facebook and WhatsApp. Machine learning, coupled with the rise of algorithms, has changed the way information is presented to users (Crawford, 2021). Companies such as Facebook have been able to identify individual preferences and interests and steer material that aligns with them. However, while this convenience is hard to dismiss, algorithms have also exposed users to content that confirms their existing beliefs and biases, creating echo chambers that polarize users along ideological lines.
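
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of an engagement-optimised feed. The topics, weights, and update rule are all invented for illustration and do not describe any real platform's ranking system; the point is the feedback loop, where content matching prior preferences ranks higher and each click strengthens that preference.

```python
# Hypothetical sketch of engagement-based personalisation.
# Topic names, weights, and the +0.05 update rule are invented;
# this is not any real platform's code.

user_interests = {"politics_left": 0.9, "politics_right": 0.1, "sports": 0.5}

posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "sports"},
]

def predicted_engagement(post):
    # Content matching existing preferences scores highest.
    return user_interests.get(post["topic"], 0.0)

for round_number in range(3):
    feed = sorted(posts, key=predicted_engagement, reverse=True)
    top = feed[0]  # the user mostly sees (and clicks) the top item
    # Feedback loop: the click strengthens the preference that put the
    # item on top, so similar content ranks even higher next round.
    user_interests[top["topic"]] = min(1.0, user_interests[top["topic"]] + 0.05)
    print(f"round {round_number}: top of feed = {top['topic']}")
```

Run over many rounds, the same topic stays at the top of the feed indefinitely: this self-reinforcing loop is the basic shape of an echo chamber.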

Case Study

The 2016 US presidential election will surely be remembered as one of the most contentious in the annals of the world's most advanced democracy. Against all odds, Donald Trump won an election that pundits had predicted would go to Hillary Clinton. Social media played a significant role in that unforeseen victory. Personalized content on Facebook and Twitter polarized voters along ideological lines, creating tension that sometimes manifested itself in rows between supporters of the two candidates. The election demonstrated that democracy has evolved, as have the means of achieving success within it. Despite winning the popular vote, Hillary Clinton's campaign made mistakes such as spending more than $200 million on television ads, twice what Donald Trump spent. According to Ortega (2016), Clinton spent $30 million on social media, whereas Trump invested more than $90 million in social media advertising, which helped drive his political message home and reach more people. Essentially, social media allows campaigns to make more direct contact with the people they want to reach, generate funds, and spread ideas. It played an even more important role in spreading the two candidates' agendas across borders.

In Europe, the proponents of leaving the EU used social media effectively in the Brexit vote. There were many reasons for the success of the Leave campaign, but the structure of social media amplified its message across European nations. There are also various allegations that such networks were manipulated to propagate disinformation and emotional contagion, such as the fabricated claim that Pope Francis was backing Donald Trump for the Oval Office (Baer, 2016). This resulted in a flood of condemnation directed at Facebook, Google, and Twitter, although the damage had already been done, thanks to the reinforcement provided by social media and the populist rhetoric of leaders such as President Rodrigo Duterte of the Philippines. Today there are billions of users on social media networks that have been in operation for decades, and these networks have given rise to echo chambers that have driven election campaigns in democracies and even shifted global politics. For instance, the AI used by Facebook and other social media companies learns members' ideas and feeds them back to the network. In other words, algorithms have reinforced the echo chamber's political notions and biases, regardless of where they came from, and repackaged them in a new context.

Impact of Algorithms and Echo Chambers

Undermining Democracy

Echo chambers created by algorithms can spread misinformation, creating political polarization that makes it difficult to reach consensus. The rise of echo chambers on social media has had serious consequences, especially for democracy. It has been demonstrated that reinforcing social networks usually diminish informational cross-pressures and hence reduce attitudinal ambivalence, which has a visible impact on public support for the political system (Lindner & Aichholzer, 2020). Unambivalent people are more likely than ambivalent people to have assimilated fewer unfavorable ideas and formed stronger emotions about their favored political entity. Similarly, it is well known that election winners show greater levels of democratic adherence than other people. Echo chambers on social media thus influence a number of attitudinal factors that can amplify these psychological repercussions of electoral contests. Because of their ambivalence-reducing qualities, social media echo chambers can widen the observed democratic disparity between followers of winning and losing parties: they are expected to increase satisfaction with democracy among election winners while having the opposite effect on followers of unsuccessful parties. At the same time, social media echo chambers can reduce voter uncertainty about a particular political party and hence, in that narrow sense, support democratic engagement.

Algorithmic bias

Algorithms offer far more than basic automation. Their allure has always been objective, data-driven, informed decision-making. Although this promise is within reach, companies should consider and mitigate the possible risks ahead of time, including ensuring that their software and systems do not produce bias against specific groups of people. Algorithms trained on biased data can entrench stereotypes that have discriminatory outcomes (Just & Latzer, 2017). Because automated systems can treat similarly situated people and things differently, studies have uncovered worrisome instances of automated decision-making that falls short of these standards. Some algorithms can replicate and even amplify human bias, especially against vulnerable groups. For instance, the automated risk assessments used by US courts to set bail and sentencing limits can produce erroneous findings with significant consequences for certain groups, such as longer jail terms or higher bail for people of color. In such cases the algorithm produces results that are consistently less favorable for one group, with no meaningful differences between groups to justify the harm. Algorithmic bias can thus result from incomplete or unrepresentative data, or from relying on data that encodes historic inequalities.
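
The court example can be made concrete with a toy sketch. The Python snippet below is entirely hypothetical: the group names, rates, and the naive "model" are invented to show how a system that learns from skewed historical records reproduces the inequality baked into those records; it is not any real risk-assessment tool.

```python
# Hypothetical illustration of bias from unrepresentative data.
# Assume two groups reoffend at the same true rate, but group B was
# historically policed more heavily, so more of its members appear
# in the training records as "re-arrested". All numbers are invented.

TRUE_REOFFENCE_RATE = 0.30  # assumed identical for both groups
historical_rearrest_rate = {"group_a": 0.30, "group_b": 0.55}

def risk_score(group):
    # A naive model that simply learns the base rate from the skewed
    # records reproduces the historical inequality as "risk".
    return historical_rearrest_rate[group]

for group in ("group_a", "group_b"):
    print(group, "predicted risk:", risk_score(group),
          "| true rate:", TRUE_REOFFENCE_RATE)
# group_b is scored nearly twice as risky, even though the underlying
# behaviour was assumed identical: the bias lives in the data.
```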

Recommendations

Accountability and transparency

Proponents of democracy have stressed the need for transparency and accountability in the use of emerging technologies, to ensure that everyone is served equitably and without bias. This principle has been extremely beneficial in making transparent information the foundation for more effective governance and service delivery. In this regard, there is a need to regulate algorithmic decision-making to hedge against the negative impacts of echo chambers (Flew, 2021). For instance, efforts toward freedom of information and effective monitoring of public budgets have produced tangible, beneficial changes in how AI and echo chambers are handled. At the multilateral level, initiatives such as the Open Government Partnership have genuinely advanced transparency by creating frameworks for nations to track progress and exchange lessons learned from the use of algorithms and other technologies. Even so, the evidence on the long-term effect of echo chambers and algorithms on people's lives is mixed: assumptions often go unproven, theories of change are hazy, and local circumstances are inadequately analyzed. Attempts to demonstrate how openness about algorithms can lead to greater accountability are often misunderstood or absent, and hence the political will for reform remains difficult to generate. Technologies such as algorithms are frequently treated as the end objective rather than as part of a broader plan to engage people and governments effectively. It is therefore recommended that algorithms be used in a transparent manner, to move from open government to open governance.

In addition, to ensure algorithmic accountability, third-party audits are a common technique for providing transparency; this is also referred to as limited openness. For instance, after the Federal Trade Commission (FTC) received complaints about Google, watchdog analyses by FTC staff found that Google's search algorithms often caused its own services to appear ahead of others in the results. To ensure transparency, the evaluation methods and findings were made public. Although the FTC determined that Google's activities were not anti-competitive, the negative press generated by the probe prompted Google to make changes. It must also be acknowledged that algorithmic decisions can be opaque for technical and social reasons, or intentionally opaque to safeguard intellectual property (Andrejevic, 2019). Sometimes algorithms are too complicated to describe, or explaining them would require disclosing data that breaches a nation's privacy laws. Regardless of the reasons, organizations, private businesses, and governments around the world should be looking into new ways to handle automated decision-making responsibly while providing as much information as possible to the general public in order to build confidence.
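
As a purely hypothetical sketch of what such a self-preferencing audit might look like in code, the Python snippet below, with invented domain names and logged data, measures how often a platform's own services outrank competitors in search results. It assumes the auditor already holds a log of result lists per query; how those logs are collected is a separate problem.

```python
# Hypothetical ranking audit in the spirit of the FTC example.
# The query log, URL format, and "platform.com" prefix are invented.

logged_results = {
    "flights": ["platform.com/flights", "rival-travel.com", "other.com"],
    "shopping": ["rival-shop.com", "platform.com/shopping", "other.com"],
    "maps": ["platform.com/maps", "rival-maps.com", "other.com"],
}

OWN_PREFIX = "platform.com"

def own_service_rank(results):
    # Return the 1-based position of the platform's own service,
    # or None if it does not appear in the result list.
    for position, url in enumerate(results, start=1):
        if url.startswith(OWN_PREFIX):
            return position
    return None

ranks = [own_service_rank(results) for results in logged_results.values()]
top_share = sum(1 for rank in ranks if rank == 1) / len(ranks)
print(f"own service ranked first in {top_share:.0%} of audited queries")
```

Publishing both the method (the code above) and the resulting share is what distinguishes an accountable audit from a bare accusation.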

Media literacy

There should be a conscious effort to train the public in media literacy, and critical thinking and fact-checking should be emphasized in media literacy classes in schools. The challenge for educators teaching media literacy is to build awareness of a media environment that is heavily controlled by algorithms, which differs from the older screen media ecology. Educators can address this by explaining how platforms combine data collected online and offline to target consumers with tailored media messages and advertisements, and to determine the order in which content is distributed. The consequence of this individualized pursuit of information is a media environment that best fits each user's preferences (Valtonen et al., 2019). Informed and critical media users can look past the recommendations returned by algorithmically driven platforms such as Facebook, TikTok, or Google, but those without such expertise, especially younger and inexperienced users, will not be able to recognize the echo chamber and filter bubble created by the underlying algorithms. Media educators should therefore teach the public about new tools and processes and help investigate how algorithms influence the behavior of social media users, for instance with exercises like the sketch below.
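
One possible classroom exercise, sketched hypothetically in Python, is to quantify the diversity of a feed with Shannon entropy over the topics a user sees. The topic counts here are invented; the takeaway for students is that lower entropy signals a tighter bubble, making the filter bubble measurable rather than invisible.

```python
import math

# Hypothetical media-literacy exercise: measure feed diversity with
# Shannon entropy over topic counts. All counts are invented.

def topic_entropy(topic_counts):
    # Shannon entropy in bits; higher means a more diverse feed.
    total = sum(topic_counts.values())
    entropy = 0.0
    for count in topic_counts.values():
        if count:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy

balanced_feed = {"politics_left": 10, "politics_right": 10, "sports": 10}
bubbled_feed = {"politics_left": 27, "politics_right": 1, "sports": 2}

print("balanced feed entropy:", round(topic_entropy(balanced_feed), 2))
print("bubbled feed entropy: ", round(topic_entropy(bubbled_feed), 2))
# The balanced feed scores ~1.58 bits (the maximum for 3 topics);
# the bubbled feed scores far lower, making the narrowing visible.
```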

Reference List

Andrejevic, M. (2019). Automated Media. London: Routledge, pp. 44-72.

Baer, D. (2016). The 'Filter Bubble' explains why Trump won and you didn't see it coming. The Cut. https://www.thecut.com/2016/11/how-facebook-and-the-filter-bubble-pushed-trump-to-victory.html

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, pp. 1-21.

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 79-86.

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), pp. 238-258. https://doi.org/10.1177/0163443716643157

Lindner, R., & Aichholzer, G. (2020). E-democracy: Conceptual foundations and recent trends. In European E-Democracy in Practice. Cham: Springer, pp. 11-45.

Ortega, A. (2016, November 22). Social media and democracy: Trump harnessed the power of algorithms. Real Instituto Elcano. https://www.realinstitutoelcano.org/en/blog/social-media-democracy-trump-harnessed-power-algorithms/

Valtonen, T., Tedre, M., Mäkitalo, K., & Vartiainen, H. (2019). Media literacy education in the age of machine learning. Journal of Media Literacy Education, 11(2), pp. 20-36. https://digitalcommons.uri.edu/jmle/vol11/iss2/2/
