“Willy’s Chocolate Experience” and the Future of AI Advertising

On February 10, 2024, a Reddit user by the name of “Prestigious_Try4610” created a post aimed at residents of Glasgow, Scotland. They wrote, “Has anyone else been getting FB ads for ‘Willys [sic] Chocolate Experience’? Every image is AI generated along with all the gibberish text it try’s [sic] to create. Not 1 single picture giving people an idea of what they are shelling out money for and yet people are buying up tickets” (Prestigious_Try4610, 2024). Not only was Prestigious_Try4610 able to quickly identify the art promoting the event as having been generated by an artificial intelligence program, but they also predicted the reason for its use: to create a false impression of what attendees could expect from the experience, for the purpose of selling tickets. Less than one month later, the catastrophic execution of “Willy’s Chocolate Experience” would make it one of the most infamous examples of false advertising in the public consciousness. The event and its use of generative artificial intelligence programs to create advertising images highlight both the abuse of AI for deceptive purposes and the lack of much-needed governance in the field.

The Rise of AI in Business

To moderate artificial intelligence, it must first be categorized and defined, a task which itself leads to many complications. “Each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed” (Crawford, 2021, p. 7). To define artificial intelligence is to impose limitations both on the technology and on the ways it can be moderated. From a business angle, artificial intelligence has been used to personalize customer recommendations, mediate interactions between company and consumer, gather data to provide business insights, and more, with specific tasks depending on the nature of the industry. “Generative” AI programs are those that do not simply organize data, but rather use it to produce new content such as text, video, or images. Programs that create images, such as the popular DALL-E and Midjourney, have been available for public use since the early 2020s, the former released in 2021 and the latter in 2022. For this blog post, I have chosen to focus specifically on this category of AI programs, with an emphasis on their use in advertising. This usage is growing in controversy; as observed by Kate Crawford, “If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon” (2021, pp. 7–8). With only a few sentences entered as a prompt, generative programs can create art, and by extension advertisements, that appear, at least on the surface, complex, intricate, and appealing to a potential consumer. The use of AI may be convenient, but it raises the question: why, especially for the promotion of a product, is it being used, if not to hide something? Why not show the real-life product itself? To not do so is to create mistrust about the validity of the advertised item, as the case below demonstrates.

Case Study

In February of 2024, artificially generated images began appearing as Facebook advertisements for the event “Willy’s Chocolate Experience”. Created by “House of Illuminati,” a company formed by Billy Coull, the event’s website claimed it to be “a journey filled with wondrous creations and enchanting surprises at every turn”, and prominently featured AI-generated art as its primary form of marketing. No real pictures of the space were presented to potential attendees.

The above are two of the several AI-generated images used in advertising the event.

While a cursory glance at the images may have convinced a casual viewer of their merit, they contain many errors typical of generative AI, most notably misshapen figures and objects. To someone quickly scrolling through the advertisements or website, the captions within the first image may impart an impression of fantastical language. While the words physically resemble fanciful descriptors in their arrangement of letters, a closer read shows that they are in fact nonsensical, misspelled corruptions that a human artist would not have included. The implication of the images is that they are an artist’s rendition of the actual event, thus misleading customers as to the event’s quality. Indeed, upon entry to Willy’s Chocolate Experience, attendees discovered a mostly empty warehouse with a few scattered props and tables. The scripts given to the character actors on site appeared to be generated by artificial intelligence as well (Wu, 2024), and were so long and incomprehensible that supervisors eventually told the actors to improvise their lines.

The poor quality of the event led not only to attendees calling the police to report it as a scam, but also to the incident’s internet virality. The structure and errors in Coull’s issued apology led many to speculate that it, too, may have been AI generated (IggyBall, 2024). Coull’s frequent and heavy use of AI, in both the Willy’s Chocolate Experience event and his other ventures such as novel writing, to quickly and cheaply produce a low-quality product demonstrates the need for moderation of such technologies. While the poor quality of generated content is an issue in itself, governance of the use of AI in advertising is likely more achievable, and addresses the root problem.

A Lack of Governance and Oversight

Apart from new laws outlining companies’ use of gathered consumer data, little moderation currently exists in the world of artificial intelligence, and specifically generative AI. Companies that develop artificial intelligence programs, large-scale corporations that use them, and even governments have outlined standards and expect a level of compliance with both legal and ethical restrictions on their use (Camilleri, 2023). However, none have much control over individual third parties once the programs are in their hands. Not only does a lack of governance of artificial intelligence pose a threat to consumers, but AI’s use of internal and external data to generate new content creates ethical and legal complications for producers and the general public alike. One of the “core disputes” of artificial intelligence is whether “human intelligence can be formalized and reproduced by machines” (Crawford, 2021, p. 5). While present-day AI can create text and images with nearly human-like accuracy, it is still subject to inhuman tendencies that produce telling errors. Yet while the images generated to promote Willy’s Chocolate Experience contained numerous spelling errors, the art itself was fairly cohesive; a potential attendee unacquainted with the tropes and mistakes commonly found in AI art (and not reading the written elements too closely) could easily be deceived into believing the art was a human creation, and therefore that the event itself would have the same level of effort and work put into it as the illustrations. Indeed, according to Ha et al. (2024), “normal, non-artist users are generally unable to tell the difference between human art and AI-generated images.”

This lack of distinction and mimicry of human art poses a myriad of ethical and legal issues. Firstly, elements of artists’ styles and works are often sourced and repurposed without attribution or compensation in order to train AI. Additionally, in declaring the “identification of AI images” to be “also a legal and regulatory issue” in the United States, Ha et al. (2024) write, “Commercial companies want to copyright their creative content, but the US Copyright Office has ruled that only human created artwork (or human contributions to hybrid artwork) can be copyrighted. Thus businesses using generative AI might try to pass off AI images as human art to obtain copyright.” This creates a situation in which legality may be skirted or disregarded altogether in pursuit of profit, while the company avoids paying a human artist and reaps the financial rewards of the AI’s output itself. As “Distinguishing human art from the output of generative AI models is an important problem whose impact will only grow with time” (Ha et al., 2024), this situation grows more likely as generative AI programs become more developed. Finally, the use of artificial intelligence further compounds the often poorly regulated environmental impact of the tech industry. As described by Crawford, artificial intelligence is an “extractive industry. The creation of contemporary AI systems depends on exploiting energy and mineral resources from the planet, cheap labor, and data at scale” (Crawford, 2021, p. 15). In addition to taking aspects of artists’ work, AI also contributes to the technology sector’s impact on the Earth.

Potential Solutions in Governance

The systems and networks that make up AI programs, as well as the variety of functions and users they serve, complicate the notion of governing, moderating, or even controlling the use of AI. Crawford states, “In the case of AI, there is no singular black box to open, no secret to expose, but a multitude of interlaced systems of power. Complete transparency, then, is an impossible goal.” However, the tools we do possess can be used in tandem to disentangle these systems; as she continues, “we gain a better understanding of AI’s role in the world by engaging with its material architectures, contextual environments, and prevailing politics and by tracing how they are connected” (Crawford, 2021, p. 12). As Crawford implies, a variety of systems and tactics can be created to assist in regulation. While corporations are implicitly, and often internally, tasked with a “social responsibility” (Camilleri, 2023) to use artificial intelligence ethically (most often in connection with the management of user data), there is no such expectation of or accountability for a member of the public, apart from potential social backlash, as seen in the above case study.

In the case of image-based AI generation, “A number of software and web services offer the ability to detect if an image is generated by generative AI image models” (Ha et al., 2024). These programs may be used by corporations or governments both for internal identification and to ensure public knowledge of AI use. However, as AI is defined by its attempt to resemble human creation, this task is far from straightforward, even for specialized programs. While “Commercial detectors perform surprisingly well,” they also “are heavily affected by feature space perturbations” (Ha et al., 2024). Additionally, these programs are far from flawless and can mislabel human creations as AI. The misidentification of art as either human-created or AI-generated poses legal challenges and complications for all parties involved, creating unnecessary conflict. Thus, internal regulation by producers of AI content, rather than reliance on external identification programs, is perhaps the best way to properly alert consumers to any potential deception. Recently, bills have been proposed in the U.S. states of Florida (Perry, 2024) and Wisconsin (Karnopp, 2024) to add labels to political advertisements containing material generated by AI. While these bills have yet to pass, they could set a standard that eventually spreads to advertising as a whole, marking an important step forward.


As Prestigious_Try4610 observed in their initial post, identifying the event as misleading when many others did not, “Scams like this will become more prevalent in years to come and people will keep falling for them unless they start to educate themselves as it’s obvious the media has no interest in highlighting these dangers” (2024). As artificial intelligence grows closer to replicating human creations, identifying its use will grow more challenging, especially for those unaware of its misuse. Legally enforceable moderation and identification of AI use, whether instituted by the creators of generative programs or by governments themselves, may be the best path forward. In describing the costs AI exacts on its users and the world, Crawford remarks:

“AI is neither artificial nor intelligent…In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests” (Crawford, 2021, p. 8).

Artificial intelligence is defined by the society that we live in; by continuing to reside in one that prioritizes profit over public welfare, and training AI to do the same, we risk reinforcing the strength of pre-established, money-first systems.


Camilleri, M. A. (2023, July 18). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems. https://doi.org/10.1111/exsy.13406

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.12987/9780300252392

Ha, A. Y. J., Passananti, J., Bhaskar, R., Shan, S., Southen, R., Zheng, H., & Zhao, B. Y. (2024). Organic or diffused: Can we distinguish human art from AI-generated images? arXiv:2402.03214v2 [cs.CV]. https://doi.org/10.48550/arXiv.2402.03214

House of Illuminati (2024, March 1). Willy’s Chocolate Experience. https://web.archive.org/web/20240301025857/https://willyschocolateexperience.com/

IggyBall. (2024). Definitely not a PR professional, the writing is horrible. Chat GPT at best. [Comment on the online forum post House of Illuminati apologizes for ‘Willy’s Chocolate Experience’ event]. Reddit. https://www.reddit.com/r/popculturechat/comments/1b4u47k/house_of_illuminati_apologizes_for_willys/

Karnopp, H. (2024, February 20). Deceptive AI campaign ads could target Wisconsin. Lawmakers have a plan to fight them. Milwaukee Journal Sentinel. https://www.jsonline.com/story/news/politics/elections/2024/02/20/wisconsin-could-be-hit-with-ai-campaign-ads-lawmakers-plan-to-fight/72524767007

Perry, M. (2024, February 14). Proposal to place disclaimers on political ads that use AI could become law in Florida. Florida Phoenix. https://floridaphoenix.com/2024/02/14/proposal-to-place-disclaimers-on-political-ads-that-use-ai-could-become-law-in-florida/

Prestigious_Try4610. (2024, February 10). Willys Chocolate Experience [Online forum post]. Reddit. https://www.reddit.com/r/glasgow/comments/1anlib2/willys_chocolate_experience/

Wu, D. (2024, February 27). ‘Dreadful’ Wonka-themed children’s event leads some guests to call police. The Washington Post. https://www.washingtonpost.com/world/2024/02/27/wonka-childrens-event-glasgow
