
Body Parts

Criminals leave evidence behind, both because they’re usually not terribly rational and because it’s extremely hard not to. And technology is only making it harder not to. With DNA technology, it becomes increasingly difficult for a criminal to avoid leaving his mark, and increasingly easy for law enforcement to identify with extremely high confidence whether X did Y.

Some nations have begun to capitalize on this new advantage. And again, Britain is in the lead[11]. Beginning in 1995, the British government started collecting DNA samples to include in a national registry. The program was initially promoted as a way to fight terrorism. But in a decade, its use has become much less discriminating.

In December 2005, while riding public transportation in London, I read the following on a public announcement poster:

Abuse, Assault, Arrest: Our staff are here to help you. Spitting on DLR staff is classified as an assault and is a criminal offence. Saliva Recovery Kits are now held on every train and will be used to identify offenders against the national DNA database.

And why not? Spitting may be harmless. But it is insulting. And if the tools exist to identify the perpetrator of the insult, why not use them?

In all these cases, technologies designed either without monitoring as their aim or with just limited monitoring as their capacity have now become expert technologies for monitoring. The aggregate of these technologies produces an extraordinary range of searchable data. And, more importantly, as these technologies mature, there will be essentially no way for anyone living within ordinary society to escape this monitoring. Monitoring to produce searchable data will become the default architecture for public space, as standard as street lights. From the simple ability to trace back to an individual, to the more troubling ability to know what that individual is doing or likes at any particular moment, the maturing data infrastructure produces a panopticon beyond anything Bentham ever imagined.

“Orwell” is the word you’re looking for. And while I believe that analogies to Orwell are just about always useless, let’s make one comparison here nonetheless. While the ends of the government in 1984 were certainly vastly more evil than anything our government would ever pursue, it is interesting to note just how inefficient, relative to the current range of technologies, Orwell’s technologies were. The central device was a “telescreen” that both broadcasted content and monitored behavior on the other side. But the great virtue of the telescreen was that you knew what it, in principle, could see. Winston knew where to hide, because the perspective of the telescreen was transparent[12]. It was easy to know what it couldn’t see, and hence easy to know where to do the stuff you didn’t want it to see.

That’s not the world we live in today. You can’t know whether your search on the Internet is being monitored. You don’t know whether a camera is trying to identify who you are. Your telephone doesn’t make funny clicks as the NSA listens in. Your e-mail doesn’t report when some bot has searched it. The technologies of today have none of the integrity of the technologies of 1984. None are decent enough to let you know when your life is being recorded.

There’s a second difference as well. The great flaw to the design of 1984 was in imagining just how it was that behavior was being monitored. There were no computers in the story. The monitoring was done by gaggles of guards watching banks of televisions. But that monitoring produced no simple way for the guards to connect their intelligence. There was no search across the brains of the guards. Sure, a guard might notice that you’re talking to someone you shouldn’t be talking to or that you’ve entered a part of a city you shouldn’t be in. But there was no single guard who had a complete picture of the life of Winston.

Again, that “imperfection” can now be eliminated. We can monitor everything and search the product of that monitoring. Even Orwell couldn’t imagine that.

I’ve surveyed a range of technologies to identify a common form. In each, the individual acts in a context that is technically public. I don’t mean it should be treated by the law as “public” in the sense that privacy should not be protected there. I’m not addressing that question yet. I mean only that the individual is putting his words or image in a context that he doesn’t control. Walking down 5th Avenue is the clearest example. Sending a letter is another. In both cases, the individual has put himself in a stream of activity that he doesn’t control.

The question for us, then, is what limits there should be — in the name of “privacy” — on the ability to surveil these activities. But even that question puts the matter too broadly. By “surveil”, I don’t mean surveillance generally. I mean the very specific kind of surveillance the examples above evince. I mean what we could call “digital surveillance.”

“Digital surveillance” is the process by which some form of human activity is analyzed by a computer according to some specified rule. The rule might say “flag all e-mail talking about Al Qaeda.” Or it might say “flag all e-mail praising Governor Dean.” Again, at this point I’m not focused upon the normative or legal question of whether such surveillance should be allowed. At this point, we’re just working through definitions. In each of the cases above, the critical feature is that a computer is sorting data for some follow-up review by some human. The sophistication of the search is a technical question, but there’s no doubt that its accuracy is improving substantially.
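To make the definition concrete, here is a minimal sketch of such a rule-based sorter. Everything in it (the keyword rule, the Message structure, the sample inbox) is invented for illustration; the point is only that a machine applies a specified rule and queues whatever matches for follow-up review by a human.

```python
# A hypothetical sketch of "digital surveillance" as defined above: a
# computer applies a specified rule to a stream of messages and queues
# the matches for human follow-up. All names and data are invented.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    body: str

# The "specified rule": flag any message containing one of these terms.
# It could just as easily read {"governor dean"}.
RULE_TERMS = {"al qaeda"}

def flag_for_review(stream):
    """Yield only the messages the rule matches; a human sees nothing else."""
    for msg in stream:
        if any(term in msg.body.lower() for term in RULE_TERMS):
            yield msg  # queued for follow-up review by some human

inbox = [
    Message("a@example.com", "Lunch on Friday?"),
    Message("b@example.com", "New report on Al Qaeda financing."),
]
for hit in flag_for_review(inbox):
    print("flagged:", hit.sender)
```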

So should this form of monitoring be allowed?

When I ask this question framed precisely like this, I find two polar opposite reactions. On the one hand, friends of privacy say that there’s nothing new here. There’s no difference between the police reading your mail and the police’s computer reading your e-mail. In both cases, a legitimate and reasonable expectation of privacy has been breached. In both cases, the law should protect against that breach.

On the other hand, friends of security insist there is a fundamental difference. As Judge Richard Posner wrote in the Washington Post, in an article defending the Bush Administration’s (extensive[13]) surveillance of domestic communications, “machine collection and processing of data cannot, as such, invade privacy.” Why? Because it is a machine that is processing the data. Machines don’t gossip. They don’t care about your affair with your co-worker. They don’t punish you for your political opinions. They’re just logic machines that act based upon conditions. Indeed, as Judge Posner argues, “this initial sifting, far from invading privacy (a computer is not a sentient being), keeps most private data from being read by any intelligence officer.” We’re better off having machines read our e-mail, Posner suggests, both because of the security gain, and because the alternative snoop — an intelligence officer — would be much more nosey.

But it would go too far to suggest there isn’t some cost to this system. If we lived in a world where our every communication was monitored (if?), that would certainly challenge the sense that we were “left alone.” We would be left alone in the sense a toddler is left in a playroom — with parents listening carefully from the next room. There would certainly be something distinctively different about the world of perpetual monitoring, and that difference must be reckoned in any account of whether this sort of surveillance should be allowed.

We should also account for the “best intentions” phenomenon. Systems of surveillance are instituted for one reason; they get used for another. Jeff Rosen has cataloged the abuses of the surveillance culture that Britain has become[14]: Video cameras used to leer at women or for sensational news stories. Or in the United States, the massive surveillance for the purpose of tracking “terrorists” was also used to track domestic environmental and antiwar groups[15].

But let’s frame the question in its most compelling form. Imagine a system of digital surveillance in which the algorithm was known and verifiable: We knew, that is, exactly what was being searched for; we trusted that’s all that was being searched for. That surveillance was broad and indiscriminate. But before anything could be done on the basis of the results from that surveillance, a court would have to act. So the machine would spit out bits of data implicating X in some targeted crime, and a court would decide whether that data sufficed either to justify an arrest or a more traditional search. And finally, to make the system as protective as we can, the only evidence that could be used from this surveillance would be evidence directed against the crimes being surveilled for. So for example, if you’re looking for terrorists, you don’t use the evidence to prosecute for tax evasion. I’m not saying what the targeted crimes are; all I’m saying is that we don’t use the traditional rule that allows all evidence gathered legally to be usable for any legal end.
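Stated mechanically, the hypothetical has three moving parts: a published matching rule, a court that must act before anything else happens, and a restriction of evidence to the targeted crimes. The sketch below is mine, not a description of any real system, and every name in it is invented; it only shows that each safeguard can be written down as an explicit check.

```python
# A hypothetical encoding of the three safeguards described above. All
# names and logic are invented for illustration.

TARGETED_CRIMES = {"terrorism"}  # the known, verifiable scope of the search

def machine_match(record: str) -> set:
    """The published algorithm: flag records mentioning a targeted crime."""
    return {crime for crime in TARGETED_CRIMES if crime in record.lower()}

def court_authorizes(evidence: set) -> bool:
    """Stand-in for judicial review: nothing happens without a court's yes."""
    return bool(evidence)  # a real court would weigh far more than this

def charge_is_usable(evidence: set, charge: str) -> bool:
    """Evidence supports only the crimes surveilled for -- nothing else."""
    return charge in TARGETED_CRIMES and charge in evidence

record = "wire transfer discussed in connection with terrorism"
evidence = machine_match(record)
if court_authorizes(evidence):
    print("terrorism charge usable:", charge_is_usable(evidence, "terrorism"))
    print("tax evasion charge usable:", charge_is_usable(evidence, "tax evasion"))
```

The last check is the departure from the traditional rule: legally gathered evidence is not usable for any legal end, only against the crimes being surveilled for.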

Would such a system violate the protections of the Fourth Amendment? Should it?

The answer to this question depends upon your conception of the value protected by the Fourth Amendment. As I described in Chapter 6, that amendment was targeted against indiscriminate searches and “general warrants” — that is, searches that were not particularized to any individual, and the immunity that was granted to those engaging in that search. But those searches, like any search at that time, imposed burdens on the person being searched. If you viewed the value the Fourth Amendment protected as the protection from the unjustified burden of this indiscriminate search, then this digital surveillance would seem to raise no significant problems. As framed above, it produces no burden at all unless sufficient evidence is discovered to induce a court to authorize a search.

But it may be that we understand the Fourth Amendment to protect a kind of dignity. Even if a search does not burden anyone, or even if one doesn’t notice the search at all, this conception of privacy holds that the very idea of a search is an offense to dignity. That dignity interest is only matched if the state has a good reason to search before it searches. From this perspective, a search without justification harms your dignity whether it interferes with your life or not.

I saw these two conceptions of privacy play out against each other in a tragically common encounter in Washington, D.C. A friend and I had arranged a “police ride-along” — riding with District police during their ordinary patrol. The neighborhood we patrolled was among the poorest in the city, and around 11:00 p.m. a report came in that a car alarm had been tripped in a location close to ours. When we arrived near the scene, at least five police officers were attempting to hold three youths; three of the officers were holding the suspects flat against the wall, with their legs spread and their faces pressed against the brick.

These three were “suspects” — they were near a car alarm when it went off — and yet, from the looks of things, you would have thought they had been caught holding the Hope diamond.

And then an extraordinary disruption broke out. To the surprise of everyone, and to my terror (for this seemed a tinder box, and what I am about to describe seemed the match), one of the three youths, no older than seventeen, turned around in a fit of anger and started screaming at the cops. “Every time anything happens in this neighborhood, I get thrown against the wall, and a gun pushed against my head. I’ve never done anything illegal, but I’m constantly being pushed around by cops with guns.”

His friend then turned around and tried to calm him down. “Cool it, man, they’re just trying to do their job. It’ll be over in a minute, and everything will be cool.”

“I’m not going to cool it. Why the fuck do I have to live this way? I am not a criminal. I don’t deserve to be treated like this. Someday one of these guns is going to go off by accident — and then I’ll be a fucking statistic. What then?”

At this point the cops intervened, three of them flipping the indignant youth around against the wall, his face again flat against the brick. “This will be over in a minute. If you check out, you’ll be free to go. Just relax.”

In the first youth’s voice of rage was the outrage of dignity denied. Whether reasonable or not, whether minimally intrusive or not, there was something insulting about this experience — all the more insulting when repeated, one imagines, over and over again. As Justice Scalia has written, wondering whether the framers of the Constitution would have considered constitutional the police practice known as a “Terry stop” — stopping and frisking any individual whenever the police have a reasonable suspicion — “I frankly doubt . . . whether the fiercely proud men who adopted our Fourth Amendment would have allowed themselves to be subjected, on mere suspicion of being armed and dangerous, to such indignity[16]”.

And yet again, there is the argument of minimal intrusion. If privacy is a protection against unjustified and excessive disruption, then this was no invasion of privacy. As the second youth argued, the intrusion was minimal; it would pass quickly (as it did — five minutes later, after their identification checked out, we had left); and it was reasonably related to some legitimate end. Privacy here is simply the protection against unreasonable and burdensome intrusions, and this search, the second youth argued, was not so unreasonable and burdensome as to justify the fit of anger (which also risked a much greater danger).

From this perspective, the harm in digital surveillance is even harder to reckon. I’m certain there are those who feel an indignity at the very idea that records about them are being reviewed by computers. But most would recognize a very different dignity at stake here. Unlike those unfortunate kids against the wall, there is no real interference here at all. Very much as with those kids, if nothing is found, nothing will happen. So what is the indignity? How is it expressed?

A third conception of privacy is about neither preserving dignity nor minimizing intrusion. It is instead substantive — privacy as a way to constrain the power of the state to regulate. Here the work of William Stuntz is a guide[17]. Stuntz argues that the real purpose of the Fourth and Fifth Amendments is to make some types of regulation too difficult by making the evidence needed to prosecute such violations effectively impossible to gather.

This is a hard idea for us to imagine. In our world, the sources of evidence are many — credit card records, telephone records, video cameras at 7-Elevens — so it’s hard for us to imagine any crime that there wouldn’t be some evidence to prosecute. But put yourself back two hundred years when the only real evidence was testimony and things, and the rules of evidence forbade the defendant from testifying at all. Imagine in that context the state wanted to punish you for “sedition.” The only good evidence of sedition would be your writings or your own testimony about your thoughts. If those two sources were eliminated, then it would be practically impossible to prosecute sedition successfully.

As Stuntz argues, this is just what the Fourth and Fifth Amendments do. Combined, they make collecting the evidence for a crime like sedition impossible, thereby making it useless for the state to try to prosecute it. And not just sedition — as Stuntz argues, the effect of the Fourth, Fifth, and Sixth Amendments was to restrict the scope of regulation that was practically possible. As he writes: “Just as a law banning the use of contraceptives would tend to encourage bedroom searches, so also would a ban on bedroom searches tend to discourage laws prohibiting contraceptives[18]”.

But were not such searches already restricted by, for example, the First Amendment? Would not a law punishing seditious libel have been unconstitutional in any case? In fact, that was not at all clear at the founding; indeed, it was so unclear that in 1798 Congress passed the Alien and Sedition Acts, which in effect punished sedition quite directly[19]. Many thought these laws unconstitutional, but the Fourth and Fifth Amendments would have been effective limits on their enforcement, whether the substantive laws were constitutional or not.

In this conception, privacy is meant as a substantive limit on government’s power[20]. Understood this way, privacy does more than protect dignity or limit intrusion; privacy limits what government can do.

If this were the conception of privacy, then digital surveillance could well accommodate it. If there were certain crimes that it was inappropriate to prosecute, we could remove them from the search algorithm. It would be hard to identify what crimes constitutionally must be removed from the algorithm — the First Amendment clearly banishes sedition from the list already. Maybe the rule simply tracks constitutional limitation.

Now the key is to recognize that, in principle, these three distinct conceptions of privacy could yield different results depending on the case. A search, for example, might not be intrusive but might offend dignity. In that case, we would have to choose a conception of privacy that we believed best captured the Constitution’s protection.

At the time of the founding, however, these different conceptions of privacy would not, for the most part, have yielded different conclusions. Any search that reached beyond the substantive limits of the amendment, or beyond the limits of dignity, would also have been a disturbance. Half of the framers could have held the dignity conception and half the utility conception, but because every search would have involved a violation of both, all the framers could have endorsed the protections of the Fourth Amendment.

Today, however, that’s not true. Today these three conceptions could yield very different results. The utility conception could permit efficient searches that are forbidden by the dignity and substantive conceptions. The correct translation (as Brandeis employed the term in the Olmstead wiretapping case) depends on selecting the proper conception to translate.

In this sense, our original protections were the product of what Cass Sunstein calls an “incompletely theorized agreement[21]”. Given the technology of the time, there was no reason to work out which theory underlay the constitutional text; all three were consistent with existing technology. But as the technology has changed, the original context has been challenged. Now that technologies such as the worm can search without disturbing, there is a conflict about what the Fourth Amendment protects.

This conflict is the other side of Sunstein’s incompletely theorized agreement. We might say that in any incompletely theorized agreement ambiguities will be latent, and we can describe contexts where these latencies emerge. The latent ambiguities about the protection of privacy, for example, are being rendered patent by the evolution of technology. And this in turn forces us to choose.

Some will once again try to suggest that the choice has been made — by our Constitution, in our past. This is the rhetoric of much of our constitutional jurisprudence, but it is not very helpful here. I do not think the framers worked out what the amendment would protect in a world where perfectly noninvasive searches could be conducted. They did not establish a constitution to apply in all possible worlds; they established a constitution for their world. When their world differs from ours in a way that reveals a choice they did not have to make, then we need to make that choice.
