
Babysitter vetting and voice-analysis: Have we reached peak AI snakeoil?

Monday, November 26, 2018, 21:38, by BoingBoing
The ever-useful Gartner Hype Cycle identifies an inflection point in the life of any new technology: the 'Peak of Inflated Expectations,' attained just before the sharp drop-off into the 'Trough of Disillusionment.' I've lived through the hype cycles of several technologies, and one iron-clad correlate of the 'Peak of Inflated Expectations' is the 'Peak of Huckster Snake-Oil Salesmen': the moment at which con artists slap a tech buzzword onto some crooked scam and head out into the market to net a fortune before everyone gets wise to the idea that the shiny new hypefodder isn't a magic bullet.

Machine Learning has enjoyed an extraordinarily long and destructive peak, with hucksters invoking AI to sell racist predictive policing systems, racist sentencing and parole systems, and other Weapons of Math Destruction.

But those were long cons run by sophisticated hucksters with huge gangs of confederates; lately, we've been seeing a lot of short cons run by petty grifters who prey on fears to target individuals and small businesses rather than cities, nations, and Fortune 100 multinationals.

Here's an example: Predictim uses a secret 'black-box algorithm' to mine your babysitters' social media accounts and generate a 'risk rating' purporting to tell you whether you're entrusting your kid to a drug abuser, a bully, a harasser, or someone who has a 'bad attitude' or is 'disrespectful.'

This system does not weed out risky people. It is a modern-day ducking stool, used to brand people as witches. What's more, it's a near-certainty that its ranking system is racially biased and also discriminates on the basis of class (because poor and racialized people are overpoliced and more likely to be arrested or otherwise disciplined for offenses that wealthier, whiter people get away with, so if you train a machine-learning system to find the correlates of anti-social behavior, it will just tell you to steer clear of brown people and poor people).
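To make that failure mode concrete, here is a minimal, hypothetical sketch (all numbers invented for illustration) of why a classifier trained on enforcement records learns policing intensity rather than actual behavior: two groups with identical true rates of some 'anti-social' behavior end up with very different label rates once one group is recorded more often, and any 'risk score' fit to those labels inherits the gap.

    # Hypothetical illustration: biased labels, not biased behavior, drive the score.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Two groups with IDENTICAL true rates of the behavior we care about.
    group = rng.integers(0, 2, n)        # 0 = lightly policed, 1 = heavily policed
    behavior = rng.random(n) < 0.05      # 5% true rate in both groups

    # The behavior only becomes a training label if it is recorded,
    # and recording rates differ (assumed: 20% vs 80%).
    recorded = behavior & (rng.random(n) < np.where(group == 1, 0.8, 0.2))

    for g in (0, 1):
        print(f"group {g}: true rate {behavior[group == g].mean():.3f}, "
              f"label rate {recorded[group == g].mean():.3f}")
    # group 0: true rate ~0.050, label rate ~0.010
    # group 1: true rate ~0.050, label rate ~0.040

Any model trained to predict the 'recorded' column will rate the heavily policed group as roughly four times riskier despite identical underlying behavior, which is exactly the point of the ducking-stool comparison.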

But the company -- backed by the University of California at Berkeley's SkyDeck tech incubator (this is a stain on the UC system and SkyDeck) -- is finding customers, because it has found a way to play on suckers' fears. As Predictim co-founder Sal Parsa says, 'There's people out there who either have mental illness or are just born evil. Our goal is to do anything we can to stop them.'

Once babysitters click the 'I consent' link on a parent's request to give Predictim access to their social media, they are at risk of having an unaccountable algorithm assign an arbitrary, unappealable score to their name that could permanently bar them from working in their industry.

In addition to pushing junk tech, Predictim's management is a font of junk psychology: for example, CTO Joel Simonoff wants to feed data from social media streams to the unscientific Myers-Briggs test (a latter-day astrological tool) to produce an even more unscientific personality category that parents can use to discriminate against potential sitters.

Predictim doesn't promise to keep predators away from your kids, just to 'help.' But when you read the feedback of Predictim's customers, like San Francisco's Diana Werner, you see that they have somehow gotten the impression that using Predictim will keep their kids safe ('Predictim goes into depth, really dissecting a person — their social and mental status. 100 percent of the parents are going to want to use this. We all want the perfect babysitter.').

Ruby on Rails creator David Heinemeier Hansson shredded Predictim in an epic Twitter thread that shamed UC Berkeley, the company's founders and employees, and its customers.

But he has his work cut out for him, because Predictim is just for starters.

Companies like AC Global Risk have announced that they can use voice-stress analysis to identify criminals, even before they've committed crimes, using (again) proprietary machine-learning systems that can 'forever change for the better how human risk is measured.'

AC Global Risk's products are, if anything, even more dangerous than Predictim's: they're being marketed as a potential answer to Donald Trump's 'extreme vetting' obsession, and AC Global Risk is proposing to subject refugees fleeing for their lives to this unaccountable black box's judgment, potentially sending people back to be murdered in their home countries on the strength of what amounts to a random-number generator.

AC Global Risk raises every red flag: they claim that they can predict whether someone is a criminal with 97 percent accuracy, by analyzing their voices. As with Predictim, the people their algorithm condemns have no right of appeal; and as with Predictim, the company can dismiss its false positives as sour grapes from 'bad guys' the system caught, and claim that its false negatives were among that tiny 3% who slipped through its net ('Imagine how much worse it would have been if you hadn't been paying us to sit in judgment!').
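Even granting the 97 percent figure, simple base-rate arithmetic (with assumed, illustrative numbers for how many screened people are genuinely 'high risk') shows why a screen like this is mostly false alarms:

    # Back-of-the-envelope check with assumed numbers; only the 97% figure
    # comes from the company's own claim.
    population = 100_000     # people screened
    base_rate = 0.005        # assume 0.5% are genuinely 'high risk' (hypothetical)
    sensitivity = 0.97       # claimed: flags 97% of true positives
    specificity = 0.97       # claimed: clears 97% of true negatives

    true_pos = population * base_rate * sensitivity               # ~485
    false_pos = population * (1 - base_rate) * (1 - specificity)  # ~2,985

    precision = true_pos / (true_pos + false_pos)
    print(f"{true_pos + false_pos:.0f} people flagged, "
          f"of whom only {precision:.0%} are true positives")
    # -> roughly 5 out of every 6 people flagged are false positives

At a low base rate, 'flagged by the system' and 'actually dangerous' come apart almost entirely, and every one of those false positives is a person with no right of appeal.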

CEO Alex Martin has spoken of looking 'for actual risk along the continuum that is present in every human.' Yet the idea that risk is an innate and legible human trait, and that it can be read from the voice alone, rests on flawed assumptions, Princeton psychologist Alexander Todorov explained to The Intercept. Detecting how people actually feel, as opposed to how we perceive them to feel, has been a notoriously difficult problem in machine learning, Todorov continued. The possibility of mistaken impressions is further complicated by the evaluative setting. 'People at the border are already in fraught and highly emotionally charged circumstances,' Pugliese said. 'How can they comply in a so-called normal way?'

Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude. [Drew Harwell/Washington Post]

The Dangerous Junk Science of Vocal Risk Assessment [Ava Kofman/The Intercept]
https://boingboing.net/2018/11/26/ducking-stool-2-0.html