Why did the use of AI lie detectors in EU customs ultimately fail?

The end of the iBorderCtrl project

Would you be willing to take a polygraph test in order to travel to Europe? In 2016, the EU sponsored the iBorderCtrl project, hoping to review travellers’ entry information in advance through data collection, AI-based electronic verification, and other methods. However, this project, originally designed to make traveller entry more efficient, generated considerable controversy and discussion, and it ultimately ended in 2019. Can an AI lie detector replace a real customs officer? Perhaps one day customs authorities really will introduce more artificial intelligence and algorithms to assist decision-making. That possibility forces us to think about how the public should respond when government departments turn to artificial intelligence for decision-making.

1. What is the iBorderCtrl project?

In 2016, the EU funded a project called iBorderCtrl. The project envisioned that travellers without EU passports would register in advance with the iBorderCtrl system. During registration, travellers needed to provide the system with information including their name, address, full itinerary, and social media accounts.[1] The system would then ask the user to turn on a camera, and an AI would judge from cues such as micro-expressions whether the user was “lying”. The iBorderCtrl project received EU funding in 2016 and finally ended in 2019. The project’s original premise was that letting travellers register and fill in information in advance would speed up border crossings, while screening in advance for travellers who might enter illegally would improve border security. Some also believed that electronic review could correct the biases and errors that human staff might make, thereby improving the efficiency and consistency of entry checks.[2]

2. Technologies applied in the iBorderCtrl project

In the iBorderCtrl project, researchers created the Automated Deception Detection System (ADDS).[3] The system asks travellers immigration questions, such as who they are and the purpose of their trip. As the traveller answers, the system scans their face through a web camera to judge whether they are lying. During this process, the test results are not disclosed to the traveller, who receives only a QR code. Border guards scan the QR code to see the polygraph results, and if a certain number of answers are judged deceptive, entry may be denied.[4]

In addition, according to the description on the former iBorderCtrl webpage, the project also uses several other tools:

- the Biometrics Module (BIO), incorporating fingerprint and palm-vein technologies for the biometric identity validation of the traveller;
- the Face Matching Tool (FMT), which receives images of the traveller (both video and photo) in order to create their initial biometric signature;
- the Hidden Human Detection Tool (HHD), which supports the border guard in searching for and detecting people hidden inside vehicles (i.e. passengers attempting an illegal border crossing);
- the Risk Based Assessment Tool (RBAT), which implements a risk-assessment routine aggregating and correlating the risk estimations produced from the processing of the traveller’s data and documents, supporting the border guard’s decision-making.

Together, this biographical and biometric information is used to analyse whether a traveller poses a risk of illegal entry.
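The risk-aggregation role described for RBAT can be sketched in a few lines of code. This is a hypothetical illustration only: the module names follow the project’s public description, but the weights, score scale, and combination rule below are assumptions for the sketch, not the actual iBorderCtrl implementation.

```python
# Hypothetical sketch of a risk-assessment routine that aggregates
# per-module outputs into a single advisory score. Weights, threshold,
# and the weighted-average rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ModuleScores:
    adds_deception: float   # ADDS: estimated probability of deception (0-1)
    bio_mismatch: float     # BIO module: biometric identity mismatch (0-1)
    fmt_mismatch: float     # FMT: face-matching mismatch (0-1)
    document_risk: float    # risk derived from travel data/documents (0-1)

# Illustrative weights: how much each module contributes to the aggregate.
WEIGHTS = {"adds_deception": 0.4, "bio_mismatch": 0.25,
           "fmt_mismatch": 0.2, "document_risk": 0.15}

def aggregate_risk(s: ModuleScores) -> float:
    """Weighted average of module scores, in [0, 1]."""
    return (WEIGHTS["adds_deception"] * s.adds_deception
            + WEIGHTS["bio_mismatch"] * s.bio_mismatch
            + WEIGHTS["fmt_mismatch"] * s.fmt_mismatch
            + WEIGHTS["document_risk"] * s.document_risk)

def advisory(s: ModuleScores, threshold: float = 0.5) -> str:
    # The score is advisory only: the border guard makes the final call.
    return "flag for secondary check" if aggregate_risk(s) >= threshold else "proceed"

traveller = ModuleScores(adds_deception=0.8, bio_mismatch=0.1,
                         fmt_mismatch=0.2, document_risk=0.3)
print(round(aggregate_risk(traveller), 3), "->", advisory(traveller))
```

Note how, under these assumed weights, even a high ADDS deception score does not by itself push the aggregate over the threshold, which mirrors the project’s claim that the system only supports, rather than replaces, the guard’s decision.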

3. Low success rate, or useless altogether? iBorderCtrl’s Automated Deception Detection System sparks controversy

The Automated Deception Detection System has also been criticized, with academics and experts arguing that lie-detection screening of this kind leads to errors and discrimination. Opponents of iBorderCtrl point to work on refugee screening in the United States, where mass surveillance was proposed as a way to control crime and detect terrorists; when modelled against data, the effect proved counterproductive, since screening accuracy is far too low to reliably pick out rare hidden terrorists (and where attackers act in groups, the system cannot screen out all of them) (Vera Wilde, 2016). At the same time, because of the gap between accuracy and the false-recognition rate, applying such a test to a huge population inevitably produces many misjudgments. iBorderCtrl staff said that the accuracy of the Automated Deception Detection System is about 76%; at that level, screening would subject a large number of ordinary inbound travellers to extra, increasingly troublesome inspections simply because of system errors.[5] The staff also expressed confidence that the success rate of lie detection could be improved to 85%. To guard against serious system errors, the iBorderCtrl project stated, “iBorderCtrl is a human in the loop system and the Border Guard will use his/her experience in making the final decision.” (iBorderCtrl, 2016).
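The base-rate concern raised by Vera Wilde can be made concrete with a little arithmetic. The sketch below assumes the reported 76% accuracy applies equally to liars and honest travellers (i.e. as both sensitivity and specificity), and hypothetically takes one million travellers of whom 1% are actually being deceptive; both assumptions are for illustration only.

```python
# Back-of-the-envelope illustration of the base-rate problem.
# The 76% figure comes from the project staff; the traveller volume
# and 1% deception prevalence are illustrative assumptions.

def screening_outcomes(n_travellers, prevalence, accuracy):
    deceptive = n_travellers * prevalence
    honest = n_travellers - deceptive
    true_positives = deceptive * accuracy        # liars correctly flagged
    false_positives = honest * (1 - accuracy)    # honest travellers flagged
    flagged = true_positives + false_positives
    precision = true_positives / flagged         # chance a flag is correct
    return false_positives, precision

fp, precision = screening_outcomes(n_travellers=1_000_000,
                                   prevalence=0.01, accuracy=0.76)
print(f"honest travellers wrongly flagged: {fp:,.0f}")        # 237,600
print(f"chance a flagged traveller is deceptive: {precision:.1%}")  # 3.1%
```

Under these assumptions, roughly 237,600 honest travellers would be wrongly flagged, and only about 3% of flagged travellers would actually be deceptive. Even at the team’s hoped-for 85% accuracy, the large majority of flagged travellers would still be honest, which is exactly why critics argue the error burden falls on ordinary people.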

Beyond accuracy, the lie-detection logic of polygraphs has itself drawn opposition from scholars. Ray Bull, professor of criminal investigation at the University of Derby, has assisted British police with interview techniques and specializes in methods of detecting deception. He told The Intercept that the iBorderCtrl project was “not credible” because there is no evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.[6]

4. Hidden data risks faced by iBorderCtrl

The European Union itself has discussed the iBorderCtrl project at length, conducting multiple studies on the soundness of the technologies it uses and on the problems the project as a whole might raise.

The EU-funded Data Justice project reported that its “main results have indicated a growing trend in relying on data collection for decision-making and organizational processes across the areas of migration, law enforcement and work. Importantly, this trend derives from a number of contextual factors that emphasizes the situated nature of data-driven systems within different institutions and that also highlights the negotiations and tensions that surround the implementation of new technologies.”[7]

In the iBorderCtrl project, although collecting passenger information and verifying identity is necessary, iBorderCtrl collects far more data than ordinary entry procedures do, and may retain that data (fingerprints, palm prints, facial scans, etc.). Compared with traditional entry and review methods, iBorderCtrl’s collection and storage of user information is worrying. In processing these data, the iBorderCtrl project must also comply with the EU’s General Data Protection Regulation (GDPR).

Subsequently, the EU noted of the Data Justice project, “By privileging a concern with social justice, the project has contributed to establishing data justice as a field that analyzes data with an explicit focus on structural inequality, highlighting the unevenness of implications of data across different groups and communities in society. It has also moved the discussion on the governance of data-driven systems to consider not just questions of privacy and data protection, but human rights more widely and particularly social and economic rights.” The possible discrimination and prejudice of data-driven systems are likewise hidden dangers in the iBorderCtrl project.

5. Lessons and reflections on the current use of AI in customs

The controversy around iBorderCtrl centres on several issues. Beyond the concerns about excessive data collection and the technical doubts about the lie detector itself discussed above, people also worry about whether artificial intelligence can genuinely replace humans in making official government decisions.

Scholars Natascha Just and Michael Latzer point out that “algorithms on the Internet can be seen as governance mechanisms, as instruments used to exert power and as increasingly autonomous actors with power to further political and economic interests on the individual but also on the public/collective level” (2017, p. 245). Artificial intelligence and algorithms have become tools for the efficient operation of government departments.

Lyria Bennett Moses takes a more cautious attitude. In her TEDx talk she noted that more and more governments hope to use a wide range of AI techniques for official decision-making and task allocation. She used Australia’s Centrelink as an example: it relied on an algorithm that matched tax and welfare data to assess who had been overpaid and by how much, and errors in the formula resulted in debt letters being sent to people who owed nothing. There are many similar examples. In one system that estimates the probability of reoffending, some African Americans were rated high risk while some white Americans with criminal records received lower ratings. Once the errors of AI algorithms are bound up with government decision-making, they tend to cause serious and troublesome consequences for ordinary people. Bennett Moses also pointed out that an individual dissatisfied with a particular “technology” or “algorithm” can switch platforms and service providers, but interacting with the government is not a matter of individual choice.

The controversy over artificial intelligence as a means of analysis and decision-making in official government departments stems from questions about the relationship between government, artificial intelligence, and the public. First, we need to ask whether an AI technology has undergone strict and repeated testing before being introduced onto government platforms, and whether its possible impacts, from the choice of underlying databases to effects on social equality in decision-making, have been sufficiently verified.

Secondly, when deciding whether to adopt AI as an aid to government decision-making, the government should involve the public in the process. If AI is to be introduced into government decision-making, the government needs to improve the public’s understanding of the technical process and the transparency of the AI chosen to participate in decisions. Finally, the government should promptly correct the erroneous decisions or unfair behaviour that AI may produce, and plan the division of responsibility in advance. If errors by AI are simply written off as calculation errors, the ordinary people affected by wrong or unfair decisions will suffer fresh neglect and harm. Being able to answer these three questions accurately and firmly is an important prerequisite for introducing AI into official government departments.

Looking back at the iBorderCtrl project through these three questions, the controversy also stems from the EU’s own uncertainty about the technology used and about its public acceptance. EU officials never offered a concrete remedy for the large number of errors produced by the polygraph’s limited accuracy. The project team stated that errors would not be fed back directly to travellers but would be reviewed again by staff; in other words, the problems created by the AI were passed on to the staff. The project failed to come to fruition because it could not respond effectively to the errors that polygraph tests inevitably produce. At this stage, therefore, the public cannot fully accept handing their information to an AI and letting its computed results replace the decisions of official departments.[8]

Final thoughts

The European Union’s iBorderCtrl proposed an ambitious idea, using artificial intelligence to help manage borders, but it ultimately failed. The project brought the security of the underlying databases, the accuracy of AI lie detection, and the public’s attitude towards customs authorities using AI squarely into focus. Reflecting on iBorderCtrl’s failure, government departments still have a long way to go before they can introduce AI as part of public decision-making.

Reference list:

1. Ainhoa (2023). The iBorderCtrl Project: Heading to Fast & Secure Border Control. [online] www.iborderctrl.eu. Available at: https://www.iborderctrl.eu/iborderctrl-project-the-quest-of-expediting-border-crossing-processes.html.

2. CORDIS | European Commission (2023). Datafication and the Welfare State | H2020. [online] Available at: https://cordis.europa.eu/project/id/759903/reporting

3. Gallagher, R. and Jona, L. (2019). We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive. [online] The Intercept. Available at: https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/.

4. Hall, L. and Clapton, W. (2021). Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit at the border. Internet Policy Review, [online] 10(4). Available at: https://policyreview.info/articles/analysis/programming-machine-gender-race-sexuality-ai-and-construction-credibility-and.

5. iBorderCtrl (2016). iBorderCtrl: Intelligent Portable Control System.

6. iBorderCtrl_NO (n.d.). iBorderCtrl? No! | iBorderCtrl.no. [online] iborderctrl.no. Available at: https://iborderctrl.no/start.

7. Just, Natascha and Latzer, Michael (2017). ‘Governance by algorithms: reality construction by algorithmic selection on the Internet’, Media, Culture & Society, 39(2), pp. 238–258.

8. Lise Endregard, H. (2022). ‘Just’ Research: A Case Study of EU-funded Research with Experimental Artificial Intelligence Technology for Border Control – Peace Research Institute Oslo (PRIO). [online] www.prio.org. Available at: https://www.prio.org/publications/13182 [Accessed 13 Apr. 2024].

9. Sánchez-Monedero, J. and Dencik, L. (2020). The Politics of Deceptive Borders: ‘Biomarkers of Deceit’ and the Case of iBorderCtrl. Information, Communication & Society, 25(3), pp. 1–18. https://doi.org/10.1080/1369118x.2020.1792530.

10. Vera Wilde (2016). Refugee Screening: A Brief Introduction (and a Request for Equipment). [online] SCQ | The Science Creative Quarterly. Available at: https://www.scq.ubc.ca/refugee-screening-a-brief-introduction-and-a-request-for-equipment/

[1] https://iborderctrl.no/start

[2] https://www.iborderctrl.eu/iborderctrl-project-the-quest-of-expediting-border-crossing-processes.html

[3] https://www.prio.org/publications/13182

[4] https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/

[5] https://www.scq.ubc.ca/refugee-screening-a-brief-introduction-and-a-request-for-equipment/

[6] https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/

[7] https://cordis.europa.eu/project/id/759903/reporting

[8] https://www.tandfonline.com/doi/full/10.1080/1369118X.2020.1792530
