Beware the Listening Machines
One of my great pleasures in life is attending conferences on fields I'm
intrigued by, but know nothing about. (A second pleasure is writing
about these events.) So when my friend Kate Crawford invited me to a
daylong
“Listening Machine Summit,” I could hardly refuse.
What's a listening machine? The example on everyone's lips was
Hello Barbie, a version of the impossibly proportioned doll that will listen to your child speak and respond in kind. Here’s how
The Washington Post described the doll back in March: “
At a recent New York toy fair, a Mattel
representative introduced the newest version of Barbie by saying:
‘Welcome to New York, Barbie.’ The doll, named Hello Barbie, responded:
‘I love New York! Don't you? Tell me, what's your favorite part about
the city? The food, fashion, or the sights?’”
Barbie accomplishes this magic by recording your
child’s question, uploading it to a speech recognition server,
identifying a recognizable keyword (“New York”) and offering an
appropriate synthesized response. The company behind Barbie’s newfound
voice, ToyTalk, uses your child's utterance to help tune their speech recognition, likely storing the voice file for future use.
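To make the mechanics concrete, here's a minimal sketch in Python of that listen, match a keyword, respond loop. The function names and the canned-response table are my own stand-ins, not ToyTalk's actual service, and the detail worth noticing is the log that quietly accumulates every utterance:

```python
# Illustrative sketch of a "record, transcribe, keyword-match, reply" loop of
# the kind described above. Everything here is invented for illustration;
# ToyTalk's real pipeline is not public.

CANNED_RESPONSES = {
    "new york": "I love New York! Don't you? Tell me, what's your favorite "
                "part about the city? The food, fashion, or the sights?",
    "school": "School can be so much fun! What did you do today?",
}
FALLBACK = "That's so interesting! Tell me more."


def transcribe(audio: bytes) -> str:
    """Stand-in for the cloud speech-recognition step. In the real product the
    recording is uploaded to a remote server; here we pretend the audio is
    already text so the sketch runs on its own."""
    return audio.decode("utf-8")


def respond(audio: bytes, log: list) -> str:
    """Record the utterance, look for a keyword, return a canned reply."""
    text = transcribe(audio).lower()
    # The nub of the privacy question: the raw utterance gets retained.
    log.append({"audio": audio, "text": text})
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in text:
            return reply
    return FALLBACK


if __name__ == "__main__":
    log = []
    print(respond(b"What do you think of New York?", log))
    print(respond(b"I had kind of a strange day", log))
    print(f"{len(log)} utterances retained on the server")
```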
And that’s the trick with listening systems. If you can imagine reasons
why you might not want Mattel maintaining a record of things your child
says while talking to his or her doll, you should be able to imagine the
possible harms that could come from use, abuse, or interrogation of other
listening systems. (“Siri, this is the police. Give us the last hundred
searches Mr. Zuckerman asked you to conduct on Google. Has he ever
searched for bomb-making instructions?”)
As one of the speakers put it (we’re under
Chatham House rules, so I can’t tell you who), listening machines trigger all three aspects of the surveillance holy trinity:
- They're pervasive, starting to appear in all aspects of our lives.
- They're persistent, capable of keeping records of what we've said indefinitely.
- They process the data they collect, seeking to understand what people are saying and acting on what they're able to understand.
To reduce the creepy nature of their surveillant behavior, listening
systems are often embedded in devices designed to be charming, cute, and
delightful: toys, robots, and smooth-voiced personal assistants.
Proponents of listening systems see them as a major way technology
integrates itself more deeply into our lives, making it routine for
computers to become our helpers, playmates, and confidants. A video of a
robot designed to be a shared household companion sparked a great deal
of debate, both about whether we would want to interact with a robot in
the ways proposed by the product’s designers, and about how a sufficiently
powerful companion robot should behave.
If a robot observes spousal abuse, should it call the police? If the
robot is designed to be friend and confidant to everyone in the house,
but was paid for by the mother, should we expect it to rat out one of
the kids for smoking marijuana? (Underlying these questions is the
assumption that the robot will inevitably be smart enough to understand
and interpret complex phenomena. One of our best speakers made the case
that robots are very far from having this level of understanding, but
that well-designed robots are systems built to deceive us into
believing that they have these deeper levels of understanding.)
Despite the helpful provocations offered by real and proposed consumer
products, the questions I found most interesting focused on being
unwittingly and unwillingly surveilled by listening machines. What
happens when systems like ShotSpotter, currently designed to identify shots fired in a city, begin dispatching police to other events, like a rowdy pool party (just to pick a timely example)?
Workers in call centers already have their interactions recorded for
review by their supervisors—what happens when Uber drivers and other
members of the 1099 economy
are required to record their interactions with customers for possible
review? (A friend points out that many already do this as a way of defending themselves against being fired over bad reviews.) It’s one thing
to
choose to invite listening machines into your
life—confiding in Siri or a cuddly robot companion—and something
entirely different to be heard by machines installed by your employer or
by local law enforcement.
A representative of one of the consumer regulatory agencies in the
United States gave an excellent talk in which she outlined some of the
existing laws and principles that could potentially be used to regulate
listening machines in the future. While the U.S. does not have
comprehensive privacy legislation in the way many European nations do,
there are sector-specific laws that can protect against abusive
listening machines: the Children's Online Privacy Protection Act, the
Fair Credit Reporting Act, HIPAA, and others. She noted that electronic
surveillance systems had been the subject of two regulatory actions in
the U.S., where Federal Trade Commission protections against “unfair and
deceptive acts in commerce” led to action against the Aaron’s
rent-to-own chain, which installed privacy-violating software in the
laptops it rented out, capturing images of anyone in front of the
camera.
The FTC argued that this was a real and concrete harm to consumers with no
offsetting benefits, and Aaron’s settled, disabling the software. I
found the idea that existing regulations and longstanding ideas of
fairness could provide a framework for regulating listening machines
fascinating, but I'm not sure I buy it. Outside of the enforcement
context, I wonder whether these ideas provide a robust enough framework
for thinking about
future regulation of listening systems,
because I’m not sure anyone understands the implications of these
systems well enough to anticipate possible futures for them. A day
thinking about eavesdropping dolls and personal assistants left me
confident only that no one has thought enough about the implications of these systems to posit possible, desirable futures for their use.
Over the past thirty or more years, we’ve seen a particular pushmi-pullyu pattern of technology regulation, to borrow a species from Doctor Dolittle. Companies invent new technologies
and bring them to market. Consumers occasionally react, and if
sufficient numbers react loudly enough, government regulators
investigate and mandate changes. There’s a sense that this is the
correct process, that more aggressive regulation would crush innovation
before inventors could show us the benefits of their new ideas. But this
is a model in which regulation is a very modest counterweight to market
forces. So long as a product is on the market, it’s engaged in
persuading people that a new type of behavior is the new normal. When
Apple brought Siri to market, it engaged in a multi-front campaign to
persuade people that they should regularly speak to a computer to make
appointments, order dinner, check traffic conditions, and seek advice.
Apple was able to lower barriers to adoption by making the product a
pre-installed part of their very popular phone, making it available for
free, and
heavily advertising the new functionality. Even the wave of jokes about the
limits to Siri's speech recognition capabilities and
feature films that seek to complicate our relationships with digital entities
serve the purpose of calcifying "the new normal," this idea that people
talk to their phones and share sensitive information with them, and
that's just the way things are now.
Perhaps at some point, we’ll see a lawsuit challenging Apple’s use of
Siri data. Perhaps Apple will offer different financing packages for a
future iCar with lending rates determined by a personality profile
generated, in part, by a purchaser’s interactions with Siri. Empowered
by the Fair Credit Reporting Act, regulators might get involved and
demand that credit decisions be made only using transparently disclosed,
challengeable fiscal data, not correlations between one’s taste in
takeout food and creditworthiness. Fine. But by then, Apple will
already have won: We're talking to our phones, sharing our lives,
generating terabytes of data in the process. The problem with this
approach to regulation is that we rarely, if ever, have a conversation
about the technological world we’d like to have.
Do we want a world in which we confide in our phones? And how should
companies be forced to handle the data generated by these new
interactions? (At the Listening Machine Summit, smart policy people in
the room had suggestions: a “robot privilege” that would work like attorney/client privilege, preventing law enforcement from compelling a robot confidant to testify against its owner; in-line “visceral” notice of privacy risks in these systems; a ban on price discrimination based on privacy-protected data; and reform of the “third party doctrine.”) These questions, a friend points out, aren't regulatory
questions, but policy ones. The challenge is figuring out how, in our
current, barely functional political landscape, we decide what
technologies should trigger pre-emptive conversations about whether,
when, and how those products should come to market.
If my example of Siri affecting your credit score seems either fanciful
or trivial, consider the NSA's expansive data collection programs as
revealed by Edward Snowden. Again, we're seeing
pushmi-pullyu regulation in which branches of the
intelligence community got out way ahead of popular opinion and
congressional oversight, and are only now being modestly pulled back.
There's encouraging news from the world of synthetic biology, where a
powerful new technology for gene manipulation called CRISPR is promising
to revolutionize the field.
CRISPR makes it vastly easier to cut the DNA within an organism,
which allows biologists to remove genes they don't want and add genes
they do. (It turns out that the cutting is the hard part: once the cut is made, the cell's own DNA-repair machinery will incorporate a sequence you supply as the patch, writing the change you want into the genome.) By itself, CRISPR is provoking lots of thought
about what sorts of genetic manipulation are appropriate and desirable.
But a further idea—
the gene drive—is
leading to impassioned debate within the scientific world. It's
possible to make CRISPR inheritable, which means that not only can you
change the genome in an organism, but you can make it virtually certain
that its offspring will inherit the genomic change. (Inherited changes
generally propagate slowly through a population, as only half the
offspring inherit the change. But if you make a change on one copy of a chromosome and put CRISPR on the other copy, the offspring either inherits the changed gene or inherits CRISPR, which will then make the change.)
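To see why that trick matters, here's a toy Python calculation of my own (not from the summit, and wildly simplified: random mating, perfect conversion, no fitness cost) comparing how an edited gene spreads under ordinary inheritance versus under a gene drive:

```python
# Toy model contrasting ordinary Mendelian inheritance with an idealized
# CRISPR gene drive. Assumptions are mine, not the article's: random mating,
# every heterozygote converts its wild-type copy, no fitness cost.

def next_freq_mendelian(p: float) -> float:
    """A neutral edit under random mating: its frequency stays put."""
    return p

def next_freq_gene_drive(p: float) -> float:
    """With a drive, every carrier converts its wild-type copy, so carriers
    pass on only edited alleles. The new frequency equals the fraction of
    individuals carrying at least one copy: p^2 + 2p(1-p) = 2p - p^2."""
    return 2 * p - p * p

def simulate(step, p0: float = 0.01, generations: int = 25):
    freqs = [p0]
    for _ in range(generations):
        freqs.append(step(freqs[-1]))
    return freqs

if __name__ == "__main__":
    mendel = simulate(next_freq_mendelian)
    drive = simulate(next_freq_gene_drive)
    for gen in (0, 5, 10, 15, 20, 25):
        print(f"gen {gen:2d}: mendelian {mendel[gen]:.3f}  drive {drive[gen]:.3f}")
```

Under ordinary inheritance the edit stays at one percent of the population indefinitely; with the drive, it sweeps toward essentially the whole population within a couple dozen generations.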
The upshot is that it could well be possible to engineer a species of
mosquitoes that couldn't pass on malaria, or that simply couldn't
reproduce, ending the species as a whole. But who gets to make these
decisions?
The good news is that there's both a precedent of executive authority to
ban certain lines of research, and a robust tradition of debate within
the scientific community that seeks to influence this policymaking.
Smart people are making cases for and against gene drive,
and I've had the pleasure of talking to scientists trying to make gene
drive possible who are genuinely thrilled to be having public
conversations about whether, when, and how the technology should come
into play.
We need a better culture of policymaking in the IT world. We need a
better tradition of talking through the “whethers, whens, and hows” of
technologies like listening machines. And we need more conversations
that aren’t about what’s possible, but about what’s desirable.