Saturday, December 8, 2018

Microsoft sounds an alarm over facial recognition technology


How Chinese-style monitoring could come to the United States
By Casey Newton (@CaseyNewton) | Dec 7, 2018, 6:00am EST
Illustration: Alex Castro

The Interface is a daily column about the intersection of social media and democracy. Subscribe to the newsletter here.


Sophisticated facial-recognition technology is at the heart of many of China’s more dystopian security initiatives. With some 200 million surveillance cameras, more than four times as many as the United States, China’s facial-recognition systems track members of the Uighur Muslim minority, control entry to housing complexes, and shame debtors by displaying their faces on billboards.

I often include these stories here because it seems inevitable that they will make their way to the United States, at least in some form. But before they do, a coalition of public and private interests is attempting to sound the alarm.

AI Now is a research group affiliated with New York University whose members include employees of tech companies such as Google and Microsoft. In a new paper published Thursday, the group calls on governments to regulate the use of artificial intelligence and facial recognition technologies before they can undermine basic civil liberties. The authors write:


Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.

The AI Now researchers are particularly concerned about what’s called “affect recognition”: an attempt to identify people’s emotions, and possibly manipulate them, using machine learning.


“There is no longer a question of whether there are issues with accountability,” AI Now co-founder Meredith Whittaker, who works at Google, told Bloomberg. “It’s what we do about it.”

Later in the day, Microsoft’s president, Brad Smith, echoed some of those concerns in a speech at the Brookings Institution:


We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

The paper comes a day after news that the Secret Service plans to deploy facial recognition outside the White House. Presumably, what the agency calls a “test” will not stop there:


The ACLU says that the current test seems appropriately narrow, but that it “crosses an important line by opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks” — like the road outside the White House. (The program’s technology is supposed to analyze faces up to 20 yards from the camera.) “Face recognition is one of the most dangerous biometrics from a privacy standpoint because it can so easily be expanded and abused — including by being deployed on a mass scale without people’s knowledge or permission.”

Perhaps Americans’ enduring paranoia about big government will prevent Chinese-style initiatives from ever taking root here. But I can also imagine a scenario in which a populist, authoritarian leader, constantly invoking the twin specters of terrorism and unchecked illegal immigration, rallies popular support around surveillance technology.

It feels like a conversation worth having.
