This week we reiterated our call to Israel to end the war in Palestine and to stop the use of biometric mass surveillance against the people of Gaza. Recent reports from The New York Times have revealed that the use of facial recognition and AI-powered tools is contributing to the suffering and killing of civilians. Following the 7 October 2023 attacks by Hamas, Israeli intelligence officers began using facial recognition technology to conduct mass surveillance of people in Gaza, with the alleged aim of identifying individuals affiliated with Hamas or other groups.

ARTICLE 19 has long argued that the untargeted use of facial recognition to support mass surveillance in public spaces is incompatible with international human rights law. Because facial recognition technology is intrinsically invasive, it should be used only in exceptional circumstances, justified by and tied to a specific legitimate purpose permitted under international law. Yet the Israeli army has failed to be transparent about its use of facial recognition technology. This, coupled with the scarce information coming out of Gaza, makes it extremely difficult to scrutinise Israel’s actions.

Biometric technology is not always accurate – far from it. Numerous studies demonstrate that facial recognition systems are markedly less accurate for underrepresented or historically disadvantaged groups. In the case of Palestinian poet Mosab Abu Toha, a contributor to The New Yorker, Israel’s deployment of biometric mass surveillance resulted in him being wrongly identified – with severe consequences. After Abu Toha was misidentified based on data generated by facial recognition technology, he was beaten and interrogated in an Israeli detention centre for two days before being returned to Gaza.