Will Meta’s attempt to rein in detailed targeted ads end discriminatory practices in online advertising?

Early this year, Meta announced that it would begin removing certain detailed targeting options from 15 January 2024 (Meta, 2024). Advertisers would be asked to update their targeting selections to comply with the change. Specifically, the following targeting options would be removed:

  • Options related to “sensitive” attributes
  • Underused options
  • Superfluous options

By 18 March 2024, ads still using the removed options would be discontinued.

Meta’s updates to detailed targeting in 2024. https://www.facebook.com/business/help/458835214668072

While the second and third items are clear enough, the first raises several questions:

  • What exactly are sensitive attributes in targeting options?
  • What negative impacts does their use have on us?
  • Will Meta’s action alone improve the situation?

I hope the analysis in this blog post will clarify these questions for you.

What are targeted ads and sensitive targeting options?

Firstly, some background on targeted advertising. Targeted advertising means businesses show ads only to groups of users who share certain characteristics, such as interests and preferences (Speicher, 2018; GCFGlobal). Among these characteristics, race, age and gender are viewed as sensitive attributes.

Negative effects relating to privacy

Such targeting borders on an invasion of privacy. Rengel described privacy as a universal right, “the right to be left alone”, meaning that one’s personal information, characteristics and circumstances remain unexposed (Rengel, 2013, as cited in Terry, 2021). On this basis, the right to privacy should not be conditioned on one’s race, ethnicity, gender or health status (Terry, 2021).

These characteristics are captured by platforms through collecting and analysing users’ online activity (Speicher, 2018). That activity can reveal personal information such as age, gender, nationality and personality traits, which in turn helps advertisers identify their target users.

Negative effects relating to exclusion

Advertisers can compile the phone numbers or email addresses of users who share a certain sensitive characteristic into a list (Speicher, 2018).

Such a list is called a PII (personally identifiable information) list. The advertiser uploads it, and the platform matches it against registered users in order to push ads to them (Speicher, 2018). Because only the PII list is processed, and not the sensitive attribute itself, the discrimination is hard to detect.
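To make this concrete, here is a minimal sketch of how PII-based audience matching is commonly described (Speicher, 2018). The emails and helper names are hypothetical; the key point is that the sensitive attribute used to build the list never appears in the upload, only hashed identifiers do.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Platforms typically require PII to be normalized (trimmed, lowercased)
    # and hashed before upload; SHA-256 is used here for illustration.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Advertiser side: a list built offline based on some sensitive attribute.
# Only the hashes are uploaded, not the attribute that selected these users.
custom_audience = [normalize_and_hash(e)
                   for e in ["alice@example.com", "bob@example.com"]]

# Platform side: match the uploaded hashes against registered users.
platform_users = {normalize_and_hash(e): e
                  for e in ["alice@example.com", "carol@example.com"]}
matched = [platform_users[h] for h in custom_audience if h in platform_users]
```

From the platform’s point of view, this looks like an ordinary list of customers; nothing in the upload reveals why these particular users were chosen, which is exactly why the bias is hard to audit.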

Meanwhile, there is a risk of this list being expanded through look-alike audiences: users who share similar traits with the selected group.

This cultivates an even stronger bias. On the one hand, people with these attributes are purposefully included; on the other hand, those without them are excluded and deprived of their legitimate rights.
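The expansion step can be pictured as a similarity search: score every other user against the seed audience and keep the closest matches. The toy sketch below uses hypothetical behavioural feature vectors; real look-alike systems are far more elaborate, but the inheritance of bias works the same way: whatever correlates with the seed list, including a sensitive attribute, also shapes the expanded audience.

```python
# Toy look-alike expansion: rank candidate users by cosine similarity
# to the average profile of the seed audience. Feature vectors are
# hypothetical behavioural signals (e.g. interaction frequencies).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / ((dot(u, u) ** 0.5) * (dot(v, v) ** 0.5))

def lookalikes(seed, candidates, k):
    # Average the seed users' feature vectors into a single profile...
    profile = [sum(col) / len(seed) for col in zip(*seed.values())]
    # ...then rank candidates by similarity to that profile.
    ranked = sorted(candidates,
                    key=lambda u: cosine(candidates[u], profile),
                    reverse=True)
    return ranked[:k]

seed = {"u1": [1.0, 0.9, 0.1], "u2": [0.9, 1.0, 0.2]}
candidates = {"u3": [0.95, 0.9, 0.15], "u4": [0.1, 0.2, 1.0]}
```

Here `lookalikes(seed, candidates, 1)` picks `u3`, the candidate most similar to the seed pair, and would do so even if that similarity were driven by a sensitive attribute rather than a legitimate commercial signal.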

Case: How Facebook’s discriminatory housing ads were handled from 2016 to 2024

In 2016, Facebook (now owned by Meta) was found to be targeting housing ads by sex, national origin and race (ProPublica, 2017). For example, advertisers could create “whites only” ads. At the time, Meta promised to build a fair system for housing ads.

ProPublica’s report on Facebook’s discriminatory housing-ad practices. https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin

However, discriminatory treatment still existed in 2017. When ProPublica (2017) tried to place housing ads on Facebook, they were able to hide these ads from African Americans, people interested in wheelchair ramps, Jews, expats from Argentina and Spanish speakers. All of these groups are protected by the Fair Housing Act; they are supposed to be treated fairly and to have access to those ads.

Discriminatory targeting options appeared when ProPublica tried to post housing ads on Facebook. https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin

In 2022, the U.S. Department of Justice reached a settlement agreement with Meta. Meta agreed to build a new system to prevent discriminatory treatment in housing ads and to pay a penalty of over $115,000 (AP, 2022).

The settlement also sent a message to all businesses in housing, employment and credit that discriminatory targeting is banned (AP, 2022).

AP’s news article about the settlement between Facebook and the US over discriminatory housing ads. https://apnews.com/article/facebook-meta-discriminatory-housing-ads-a89f557bc62932b1b608559af198dfb2

Returning to Meta’s announcement from the beginning of this year, it remains questionable what Meta will actually do. The announcement promises to remove discriminatory targeting options, but it does not specify which content will be discontinued.

Meanwhile, because of PII lists, any actually biased behaviour remains difficult to detect. Moreover, look-alike targeting is still allowed under the announcement, which could magnify even subtle discrimination.

How can discriminatory ad-targeting practices truly be tackled?

So, how could discriminatory online ad targeting truly be mitigated?

The first step is probably to look into the causes of the discrimination.

Datta (2018) suggested that the online advertising system is built on the cooperation of several roles. Publishers host online content and websites, while advertisers seek to place their ads on those websites. Ad networks link the two. Ultimately, ads are displayed to consumers. In fact, every one of these roles has the opportunity to introduce discrimination into the process.

Publishers, such as Facebook and Google, could estimate which customers prefer certain products and display more ads to those users. Advertisers could explicitly use demographic categories and keywords to exclude certain groups of people, and publishers would not stop them (Datta, 2018).

Meanwhile, competing advertisers can affect each other. Suppose advertisers consider a product more valuable to women. Advertiser A targets male and female customers equally (no bias), while advertiser B targets women more heavily than men (bias exists). In that situation, advertiser A could end up reaching only male customers (Datta, 2018).
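Datta’s example can be sketched as a toy auction: each ad impression goes to the highest bidder. The bid values below are hypothetical, chosen only to show the mechanism; real ad auctions are more complex, but the competitive spillover is the same.

```python
# Toy model of Datta's (2018) competition example. Each impression is
# won by the highest bidder. Advertiser A bids uniformly; advertiser B
# bids more for women. The bid amounts are illustrative assumptions.

impressions = ["woman"] * 5 + ["man"] * 5

def bid_A(audience):
    return 1.0  # A values everyone equally (no bias)

def bid_B(audience):
    return 1.5 if audience == "woman" else 0.5  # B skews toward women

# A wins an impression only when its bid beats B's.
wins_A = [aud for aud in impressions if bid_A(aud) > bid_B(aud)]
```

In this sketch `wins_A` contains only male impressions: B’s biased bidding prices A out of the female audience entirely, so A’s unbiased campaign still reaches a skewed audience.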

Therefore, as long as bias exists in the market, other advertisers feel compelled to join in to stay competitive and profitable. Over time this becomes habitual, and advertisers lose sight of treating all people fairly.

Next, we could look at how sensitive information reaches advertisers and publishers. Common sources include customer data and data brokers. Advertisers can also obtain details from public records, such as voter records, which include date of birth, gender and inferred race (Speicher, 2018).

Regulation and laws by governments

Given the above, relying only on the self-discipline of platforms (like Meta) and advertisers is not enough to tackle the bias. Government regulation of platforms’ and advertisers’ actions is needed, as is control over access to personal data.

Many regulations and laws already protect personal data online. For instance, the EU’s General Data Protection Regulation requires businesses to obtain customers’ consent for the data they use, and customers must be able to refuse those terms (GDPR).

Concretely, data must be collected lawfully and transparently. Companies should collect only the information necessary for legitimate purposes, keep track of customers’ updated consent, and request personally identifiable information only where strictly necessary (GDPR). Such rules help restrict companies from abusing personal data.

In Australia, there is an opt-in system called the Consumer Data Right (CDR), which gives customers the freedom to control their data. Protected information includes name, contact number and patterns of using certain services and products (CDR).

People can choose which companies may see and handle their data, and decide which types of data may be used and for what purpose; the use of biased targeting would clearly be problematic here. In addition, people can change their consent at any time and request the deletion of unnecessary information.

This right is protected and regulated by the OAIC and the Australian Competition and Consumer Commission. If people are doubtful about how a business handles their data, they can submit a CDR complaint form. Complaints are accepted either when they have remained unresolved with the business for more than 30 days, or when the issue occurred less than 12 months ago. Eligible issues include the disclosure and security of CDR data. Please see this link for an introduction video about Consumer Data Right complaints.

An introduction video about Consumer Data Right complaints. https://www.oaic.gov.au/engage-with-us/research-and-training-resources/videos/consumer-data-right-complaints

These two government regimes clearly help protect the security of customers’ online data and how it is used, making discrimination less likely to happen online.

Awareness and proactivity of individuals

Apart from government measures, the awareness and initiative of individuals are also very important for tackling bias in ad targeting.

A survey of Australians’ awareness of privacy regulation showed that only 2 in 100 Australians were aware of and could name the “Privacy Act” or the “Australian Privacy Principles”. 67% of Australians were aware of it, while 31% were entirely unaware. Full-time employees and highly educated people were more likely to be aware of it (Australian Government, 2023).

A figure about Australians’ awareness of the Privacy Act in 2023. https://www.oaic.gov.au/engage-with-us/research-and-training-resources/research/australian-community-attitudes-to-privacy-survey/australian-community-attitudes-to-privacy-survey-2023#awareness-of-the-privacy-act

Another survey showed that 38% of Australians know that the Privacy Commissioner (OAIC) administers Australia’s privacy laws and can support complaints about the suspected misuse of personal data. However, 67% of Australians are not aware of the OAIC’s role (Australian Government, 2023).

A figure about Australians’ awareness of the Privacy Commissioner (OAIC) in 2023. https://www.oaic.gov.au/engage-with-us/research-and-training-resources/research/australian-community-attitudes-to-privacy-survey/australian-community-attitudes-to-privacy-survey-2023#awareness-of-the-privacy-act

We can see that most Australians have some awareness of privacy regulation but are unaware of the organisations that could support their privacy complaints.

Another issue is the contrast between people’s attitudes and their behaviour towards privacy. On the one hand, people say they care about it; on the other hand, they tend to neglect online behaviours that disclose their personal data. For instance, when asked to share personal information to access services and products on a website, they may simply accept all cookies (Francis and Francis, 2017, as cited in Terry, 2021).

However, this can be traced back to the complex and tricky terms platforms impose, as many offer only all-or-nothing options (Terry, 2021). Even so, in 2018 the Australian Competition and Consumer Commission still raised concerns about personal information even where users were given tools to manage their privacy settings, because platforms hold an advantaged position: they control the terms and information provided (ACCC, 2018, as cited in Terry, 2021).

Conclusion

From the above discussion, we can see that companies like Meta are not solely responsible for discriminatory targeted ads. Efforts from companies (publishers and advertisers), governments and individuals are all required to mitigate the existing bias. Companies should be mindful of how they handle customers’ personal data and obey the rules governments set. Governments’ roles, as regulators of companies and supporters of individuals, are vital. As for individuals, being aware of how their own data is used, and of the official privacy institutions they can turn to for help, is crucial.

There is still a long way to go before this problem is truly solved, as the different actors influence and restrain one another, and a balance must always be struck among the roles in the online advertising market. But we still look forward to the day when there is no bias in online advertising.

Bibliography:

Australian Government. (2023). Australian Community Attitudes to Privacy Survey 2023. https://www.oaic.gov.au/engage-with-us/research-and-training-resources/research/australian-community-attitudes-to-privacy-survey/australian-community-attitudes-to-privacy-survey-2023#awareness-of-the-privacy-act

Australian Government. (2024). Consumer Data Right complaints. https://www.oaic.gov.au/engage-with-us/research-and-training-resources/videos/consumer-data-right-complaints

Australian Government. (2024). How the Consumer Data Right opt-in process works. https://www.oaic.gov.au/consumer-data-right/information-for-consumers/how-the-consumer-data-right-opt-in-process-works

Australian Government. (2024). What is the Consumer Data Right? https://www.oaic.gov.au/consumer-data-right/information-for-consumers/what-is-the-consumer-data-right

Australian Government. (2024). What you can complain to us about. https://www.oaic.gov.au/consumer-data-right/consumer-data-right-complaints/what-you-can-complain-to-us-about

Angwin, J. (2017, Nov 21). Facebook (Still) Letting Housing Advertisers Exclude Users by Race. ProPublica. https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin

Datta, A. (2018). Discrimination in Online Advertising: A Multidisciplinary Inquiry. Proceedings of Machine Learning Research.

GCFGlobal. (2024). What is Targeted Advertising. https://edu.gcfglobal.org/en/thenow/what-is-targeted-advertising/1/

Intersoft consulting. (2018). General Data Protection Regulation. https://gdpr-info.eu/art-5-gdpr/

Meta. (2024). Updates to detailed targeting. https://en-gb.facebook.com/business/help/458835214668072

Neumeister, L. (2022, June 22). Facebook and US sign deal to end discriminatory housing ads. AP. https://apnews.com/article/facebook-meta-discriminatory-housing-ads-a89f557bc62932b1b608559af198dfb2

Speicher, T. (2018). Potential for Discrimination in Online Targeted Advertising. Proceedings of Machine Learning Research.

Terry, F. (2021). Regulating platforms. Polity Press.
