Strong words from investment firms on the ethical use of facial recognition

Categorization AI algorithms should be “avoided at all costs” by companies, according to nearly two dozen industry investment firms that monitor facial recognition technology.

European asset manager Candriam, owned by New York Life Investments, and 20 other investors have updated their initiative by reviewing the facial biometrics industry. Candriam is a tortured acronym for Conviction and Responsibility In Asset Management.

Mincing words was not a priority for the report's authors.

Categorization algorithms "introduce too much potential for discrimination and human rights violations" and must be "avoided at all costs," according to a summary of the report's findings.

In effect, the investor group has “encouraged” vendors to “avoid” selling systems to law enforcement until proper regulation is in place.

And no one should expect proper regulation any time soon, something the authors say the industry already knows well. The report advises sellers and buyers to "talk openly about the ethics" of AI. Those who do, it suggests, are more likely to make good facial recognition business partners.

In another familiar industry finding, the report suggests leaders create governance policies that address human rights risks. They should also “publish a detailed human rights policy with references to how they use AI” and deal with biometrics.

In the short term, systems should be limited to identification and authentication roles.

The investor group cited three multinational companies as examples of how to mitigate human rights risks: Thales, Motorola Solutions and Microsoft. The transparency, explainability, policies and features of the Thales facial recognition system used for passport checks at Paris Charles de Gaulle airport are commended.

Candriam, which leads the group, launched its facial recognition industry initiative in March 2021.
