Rite Aid Faces Five-Year Ban on Facial Recognition Technology After FTC Settlement
Rite Aid has agreed to a significant five-year ban on the use of facial recognition technology in its stores. This decision comes after the Federal Trade Commission (FTC) found that the company's use of the technology led to false accusations of crimes and disproportionately targeted people of color.
The FTC and Rite Aid reached a settlement on Tuesday following a complaint accusing the chain of deploying AI-based facial recognition software in numerous stores to flag individuals it deemed likely shoplifters or criminals. Customers flagged by the software were either barred from entering stores or asked to leave.
However, the technology's imperfections led to numerous false-positive alerts, resulting in customers being wrongfully identified as criminals. The FTC highlighted instances where Rite Aid employees publicly accused people of criminal activity, leading to embarrassment and distress for the falsely accused individuals. Some customers were even wrongly detained and subjected to searches.
In response to the settlement, Rite Aid stated that while it is pleased to reach an agreement with the FTC, it fundamentally disagrees with the allegations regarding its use of facial recognition. The company emphasized that the technology was part of a pilot program used in a limited number of stores and had been discontinued more than three years earlier, before the FTC's investigation began.
The FTC's legal filing, covering complaints from 2012 to 2020, revealed that the software often led to erroneous accusations by employees. Notably, the deployment of this facial recognition software was predominantly in neighborhoods with large Black, Latino, and Asian communities.
Samuel Levine, director of the FTC's Bureau of Consumer Protection, condemned Rite Aid's use of facial surveillance systems, citing the humiliation and other harms faced by its customers. He also said the company's violations of a prior FTC order had put consumers' sensitive information at risk.
Under the proposed order, Rite Aid is required to implement comprehensive safeguards to prevent customer harm when deploying AI-based technology. The order also restricts the company from using the technology if it cannot manage potential risks to consumers effectively.
Rite Aid has committed to enhancing and formalizing the practices and policies of its comprehensive information security program as part of the agreement with the FTC. The company must also delete the database of low-quality pictures of customer faces, labeled as “persons of interest,” and notify those customers who are in the database.
The FTC's order will take effect once it is approved by the court overseeing Rite Aid's ongoing bankruptcy proceedings. The case highlights the ethical and legal challenges surrounding facial recognition technology in commercial settings, and the need for stringent safeguards to protect consumer privacy and prevent discrimination.