Infinite Innovation:

How Does AI Change Our Lives?

4Degrees co-founder and CTO David Vandegrift stated, “I think anybody making assumptions about intelligent software capabilities capping out at some point are mistaken.”

Although artificial intelligence has only a short seventy-year history, it has already fundamentally changed the way we live, work, and interact with the world. From the initial concept of machine learning to today’s powerful deep learning algorithms, AI has evolved continuously, steadily driving technological and social advancement. How this history continues will determine the future direction of the field and shape our lifestyles and social structures.

Critical moments in the history of AI

https://ourworldindata.org/brief-history-of-ai

Dartmouth Conference (1956): Considered the birth of AI, this conference was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon to discuss how machines could simulate human thinking.

Deep Blue vs. Kasparov (1997): IBM’s Deep Blue defeated world chess champion Garry Kasparov in a highly publicized match, demonstrating AI’s potential in strategic decision-making.

Rise of Machine Learning (2000s-present): Advances in machine learning, particularly deep learning, fueled by increases in computational power and the availability of large datasets, have led to significant breakthroughs in AI applications such as image recognition, natural language processing, and autonomous vehicles.

AlphaGo (2016): Google’s AlphaGo defeated world Go champion Lee Sedol, showcasing AI’s ability to master complex, strategy-based games using deep reinforcement learning techniques.


According to survey research from the University of Oxford, experts expect that machines will be able to write school essays by 2026 and that self-driving trucks will be capable of replacing drivers by 2027 (https://arxiv.org/pdf/1705.08807.pdf).

Artificial intelligence will certainly bring new forms of productivity and new lifestyles to our society. Its emergence promises real benefits, but in practical applications it may prove to be a double-edged sword.

The impact of AI on different industries

Regulating platforms is of paramount importance not only in the digital governance arena but also across key sectors such as education, healthcare, transportation, and warfare.

Amazon offers a vivid illustration of artificial intelligence’s impact on consumer behavior: the company employs complex, opaque shopping algorithms to personalize the customer experience, which means your purchase history and even your browsing history are retained. While these algorithms provide convenience, they can also expose personal information. As Shoshana Zuboff warns in her critique of “surveillance capitalism,” consumer data may be commodified in ways that invade privacy (Zuboff, 2019). A telling example is targeted advertising based on consumers’ shopping patterns and behavior, which can steer buyers’ decisions and, to a degree, sacrifices consumer privacy and autonomy.
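To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a retailer might score catalogue items against a profile built from purchase and browsing history. The item names, tags, and weights are invented for illustration; real recommendation systems such as Amazon’s are far more complex and are not public.

```python
# A toy, hypothetical content-based recommender: it builds a user "interest
# profile" from purchase and browsing history, then ranks catalogue items by
# how well their tags overlap with that profile. Illustrative only.
from collections import Counter

CATALOGUE = {                     # invented items and tags
    "running shoes":   {"sport", "outdoor", "fitness"},
    "yoga mat":        {"fitness", "home", "wellness"},
    "espresso maker":  {"kitchen", "coffee", "home"},
    "trail backpack":  {"outdoor", "travel", "sport"},
    "coffee grinder":  {"kitchen", "coffee"},
}

def build_profile(purchases, browsed, browse_weight=0.5):
    """Count tag exposures; browsing counts for less than purchasing."""
    profile = Counter()
    for item in purchases:
        profile.update(CATALOGUE[item])
    for item in browsed:
        for tag in CATALOGUE[item]:
            profile[tag] += browse_weight
    return profile

def recommend(profile, exclude, top_n=3):
    """Score unseen items by summed tag weights and return the best matches."""
    scores = {
        item: sum(profile[tag] for tag in tags)
        for item, tags in CATALOGUE.items()
        if item not in exclude
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

history = ["running shoes"]                 # purchased
browsed = ["trail backpack", "yoga mat"]    # only viewed
profile = build_profile(history, browsed)
print(recommend(profile, exclude=set(history)))
# -> ['trail backpack', 'yoga mat', 'espresso maker']
```

The point of the toy example is simply that every recommendation depends on retained behavioral data: remove the purchase and browsing history and the personalization disappears with it.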

The healthcare industry has been deeply affected by artificial intelligence. IBM’s Watson Health illustrates the potential of AI to assist with disease diagnosis and treatment planning. However, the controversy surrounding Google DeepMind’s deal with the NHS highlights a complex ethical landscape in which innovation and patient privacy sit in tension (Crawford, 2021, pp. 1-21). DeepMind’s access to healthcare data was found to go well beyond what was necessary for application testing, a stark reminder that innovation carries the potential for overreach.

People are products of their environment, and education is closely bound up with our lives: the settings we grow up in and the ideas we absorb shape our futures. Adaptive learning platforms, such as Carnegie Learning’s, seek to tailor the educational experience to each student’s individual needs. Yet Safiya Umoja Noble’s research in Algorithms of Oppression (2018) reveals how these same algorithms can perpetuate social biases and widen educational gaps (Noble, 2018, pp. 15-63). Because education sits at the source, differences in what students come to know only compound downstream; such systems are not necessarily narrow-minded, but they may not be inclusive enough. Standardized testing software, whether intentionally or not, tends to disadvantage students from certain backgrounds, widening educational gaps that might otherwise be closing. Platform regulation therefore plays a vital role in fostering a fair and inclusive online learning environment.

News itself carries a degree of bias: the connection between a story and its teller is one thing, and how the story is understood is another. Today’s highly developed media ecosystem, especially social media platforms such as Facebook, is heavily shaped by AI algorithms that filter and prioritize news content. As Pasquale argues in The Black Box Society (2015), these systems can create “filter bubbles” in which users are presented with a narrow, inherently subjective view of reality; people tend to see what they already want to see, so the gap between how different people perceive the same events may grow ever wider. At the same time, new media can warp what people read by pushing out enormous volumes of information, and the resulting personalized content can influence elections or incite social division (Pasquale, 2015, pp. 1-18).
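As a rough illustration of the filter-bubble dynamic Pasquale describes, the hypothetical Python sketch below ranks a news feed purely by how often the user has already clicked each story’s topic, then simulates a few days of reading. The stories, topics, and scoring rule are invented for the example and do not reflect how any real platform works in detail.

```python
# A toy, hypothetical simulation of an engagement-driven news feed.
# Stories are ranked by how often the user has clicked the same topic before,
# so each round of clicks narrows what gets shown next: a "filter bubble".
from collections import Counter
import random

STORIES = [                      # invented stories and topics
    ("Election polls tighten", "politics"),
    ("New climate report released", "climate"),
    ("Local team wins derby", "sports"),
    ("Parliament debates budget", "politics"),
    ("Heatwave breaks records", "climate"),
    ("Transfer window rumours", "sports"),
]

def rank_feed(stories, clicks):
    """Order stories by prior clicks on their topic (ties broken randomly)."""
    return sorted(stories,
                  key=lambda s: (clicks[s[1]], random.random()),
                  reverse=True)

random.seed(0)
clicks = Counter({"politics": 1})             # one early political click
for day in range(3):
    feed = rank_feed(STORIES, clicks)[:3]     # user only sees the top 3
    shown_topics = [topic for _, topic in feed]
    clicks.update(shown_topics[:1])           # user clicks the top story
    print(f"day {day}: feed topics = {shown_topics}")
# Politics stories keep floating to the top, crowding out other topics.
```

Because the only signal is past engagement, stories on the already-clicked topic keep rising to the top of the feed, which is exactly the narrowing effect the “filter bubble” metaphor describes.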

In 2017, Microsoft’s “AI for Earth” initiative illustrated both environmental responsibility and its contradictions: the company pledged US$50 million in technology and resources, focused on four areas (climate change, agriculture, biodiversity, and water), to provide organizations in these fields with services based on cloud computing and artificial intelligence. AI can aid the fight against climate change, but as Crawford highlights, celebrating such initiatives while ignoring their environmental impact overlooks the energy-intensive data centers required to support AI computation (Crawford, 2021). The paradox is that the carbon emissions associated with training large-scale machine learning models may, in the short term, offset the sustainability benefits those models are intended to deliver.

AI is applied in Tesla’s in-vehicle software system.

(https://bernardmarr.com/how-tesla-is-using-artificial-intelligence-to-create-the-autonomous-cars-of-the-future/)

The shift toward autonomous driving in transportation is epitomized by Tesla’s self-driving technology. While it has the potential to revolutionize driving and reduce accidents, it also raises safety concerns. Angela Chao’s accident is a painful lesson (https://www.bbc.com/news/world-us-canada-68622898): this fatal accident involving a Tesla has fueled discussion about how self-driving cars are programmed and about who is responsible when an accident occurs, the driver, the manufacturer, or the autonomous-driving and emergency-avoidance systems themselves. Safety must always come first; once a life has been lost, nothing can change that.

The latest headlines on April 14 reported that Iran had also used drones to attack Israel.

“The Israeli military stated that 99% of the missiles and drones launched by Iran at night were intercepted and missed their targets.” (https://www.bbc.com/news/live/world-middle-east-68737710)

Lethal autonomous weapons systems (LAWS) represent the cutting edge of military technology: artificial intelligence systems that can select targets for attack without human intervention. Such systems have the potential to reduce the risk to human soldiers by removing them from direct combat, but in the current tense climate, could a failure in an AI decision-making system lead to the destruction of a region or even a regime? Could it be the nail in Richard III’s horseshoe, the small fault that loses the kingdom? Nothing related to war is trivial; the consequences are far-reaching, and perhaps that is why the world’s attention is fixed on the Middle East today.

Local news photographs: war has broken out.

(https://news24online.com/world/iran-launches-drone-and-missile-attack-on-israel-risking-significant-escalation-in-middle-east/249881/)

The deployment of these advanced systems forces us to confront critical questions: Who is responsible when AI systems make decisions that result in unintended casualties or collateral damage? How do we properly program machines to comply with international laws of war and the principles of proportionality and distinction? Additionally, the potential for machines to make decisions in fractions of a second complicates traditional military engagement strategies and raises concerns about escalatory dynamics that could lead to unintended conflicts.

Kenneth Payne observes critically in Artificial Intelligence: A Revolution in Strategic Affairs? (2018) that “the emergence of machines capable of making decisions for themselves, the hallmark of so-called artificial intelligence,” has implications not only for the future of warfare but also raises profound questions about the future of human agency and even the future of humanity itself. The deployment of AI-equipped unmanned aerial vehicles (UAVs) such as the MQ-9 Reaper is reshaping the landscape of conventional warfare. According to Paul Scharre’s Army of None (2018), advances in military technology have not only changed combat strategies but also raised moral challenges: when a machine does the killing, what exactly is this AI? Does it possess anything like autonomous consciousness? These questions encapsulate the broader existential debates that accompany the technology, debates that extend far beyond military strategy into philosophy, ethics, and the definition of human agency in a world shared with intelligent machines. The growing ability of drones to operate semi-autonomously or fully autonomously will be a key issue in the years ahead.

Sunday, 14 April 2024: Iran has launched dozens of drones and missiles at Israel, the country’s Islamic Revolutionary Guard Corps (IRGC) confirmed, after Israel said Tehran had begun attacks.

(https://www.hiiraanweyn.net/2024/04/14/iran-launches-air-attack-on-israel-with-drones-hours-away/)

Additionally, potential hacking or malfunctions within AI systems add another layer of risk. Relying on complex algorithms and data inputs to make decisions in chaotic, unpredictable environments such as battlefields is inherently error-prone, and those errors can lead to catastrophic consequences. Oversight and the capacity for intervention must therefore precede development; better still would be a world without wars at all.

The major changes brought about by AI have reshaped our world. Although its history spans only about seventy years, we should recognize that we are still in the early stages of this era. The rapid progress of recent years often makes us overlook the fact that these seemingly enormous innovations are just the beginning.

Although artificial intelligence has a relatively short history, it has already transformed our perceptions, knowledge, and behavior, and there are no signs that these trends will level off anytime soon. On the contrary, over the past decade in particular, advances in computing and networks have enabled the collection and application of data on a vast scale, and AI technology has attracted unprecedented attention, improving year after year.

All major technological advances bring both positive and negative consequences, and AI is no exception. As Terry Flew argues in Regulating Platforms, the technology’s impact is expected to grow as it continues to advance. Innovation must therefore go hand in hand with effective regulatory mechanisms that ensure technological progress aligns with ethical standards and society’s need for sustainable development.

The relevant laws and regulations should be made more comprehensive and more grounded in reality; only then can we make good use of this double-edged sword.

References:

‘The big idea: Should we worry about artificial intelligence?’ The Guardian, 29 November 2021. https://www.theguardian.com/books/2021/nov/29/the-big-idea-should-we-worry-about-artificial-intelligence

Grace, Katja & Salvatier, John, et al. ‘When Will AI Exceed Human Performance? Evidence from AI Experts’. https://arxiv.org/pdf/1705.08807.pdf

Zuboff, Shoshana (2019) The Age of Surveillance Capitalism. New York: PublicAffairs, p. 146.


Crawford, Kate (2021) The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, pp. 1-21.

Pasquale, Frank (2015) ‘The Need to Know’, in The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, pp. 1-18.

Noble, Safiya U. (2018) ‘A society, searching’, in Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, pp. 15-63.

Payne, Kenneth (2018) ‘Artificial Intelligence: A Revolution in Strategic Affairs?’, Survival, 60(5), pp. 7-32.

Scharre, Paul (2018) Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton & Company.

Just, Natascha & Latzer, Michael (2019) ‘Governance by algorithms: reality construction by algorithmic selection on the Internet’, Media, Culture & Society, 39(2), pp. 238-258.

Flew, Terry (2021) Regulating Platforms. Cambridge: Polity, pp. 79-86.
