Book: Code 2.0

The Regulators of Speech: Publication

Floyd Abrams is one of America’s leading First Amendment lawyers. In 1971 he was a young partner at the law firm of Cahill, Gordon[6]. Late in the evening of Monday, June 14, he received a call from James Goodale, in-house counsel for the New York Times. Goodale asked Abrams, together with Alexander Bickel, a Yale Law School professor, to defend the New York Times in a lawsuit that was to be filed the very next day.

The New York Times had just refused the government’s request that it cease all publication of what we now know as the “Pentagon Papers” and return the source documents to the Department of Defense[7]. These papers, mostly from the Pentagon’s “History of U.S. Decision Making Process on Vietnam Policy”, evaluated U.S. policy during the Vietnam War[8]. Their evaluation was very negative, and their conclusions were devastating. The papers made the government look extremely bad and made the war seem unwinnable.

The papers had been given to the New York Times by someone who did think the war was unwinnable; someone who had worked in the Pentagon and helped write the report; someone who was not anti-war at first but who, over time, had come to see the Vietnam War as the impossibility it was.

This someone was Daniel Ellsberg. Ellsberg smuggled one of the 15 copies of the papers from a safe at the RAND Corporation to an offsite photocopier. There, he and a colleague, Anthony Russo, photocopied the papers over a period of several weeks[9]. Ellsberg tried without success to make the papers public by having them read into the Congressional Record. He eventually contacted the New York Times reporter Neil Sheehan in the hope that the Times would publish them. Ellsberg knew that this was a criminal act, but for him the war itself was a criminal act; his aim was to let the American people see just what kind of a crime it was.

For two and a half months the Times editors pored over the papers, working to verify their authenticity and accuracy. After an extensive review, the editors determined that they were authentic and resolved to publish the first of a ten-part series of excerpts and stories on Sunday, June 13, 1971[10].

On Monday afternoon, one day after the first installment appeared, Attorney General John Mitchell sent a telegram to the New York Times stating:

I respectfully request that you publish no further information of this character and advise me that you have made arrangements for the return of these documents to the Department of Defense[11].

When the Times failed to comply, the government filed papers to enjoin the paper from continuing to publish stories and excerpts from the documents[12].

The government’s claims were simple: These papers contained government secrets; they were stolen from the possession of the government; to publish them would put many American soldiers at risk and embarrass the United States in the eyes of the world. This concern about embarrassment was more than mere vanity: Embarrassment, the government argued, would weaken our bargaining position in the efforts to negotiate a peace. Because of the harm that would come from further publication, the Court should step in to stop it.

The argument was not unprecedented. Past courts had stopped the publication of life-threatening texts, especially in the context of war. As the Supreme Court said in Near v. Minnesota, for example, “no one would question but that a government might prevent actual obstruction to its recruiting service or the publication of the sailing dates of transports or the number and location of troops[13]”.

Yet the question was not easily resolved. Standing against precedent was an increasingly clear command: If the First Amendment meant anything, it meant that the government generally cannot exercise the power of prior restraint[14]. “Prior restraint” is when the government gets a court to stop publication of some material, rather than punish the publisher later for what was illegally published. Such a power is thought to present much greater risks to a system of free speech.[15] Attorney General Mitchell was asking the Court to exercise this power of prior restraint.

The Court struggled with the question, but resolved it quickly. It struggled because the costs seemed so high[16], but when it resolved the question, it did so quite squarely against the government. In the Court’s reading, the Constitution gave the New York Times the right to publish without the threat of prior restraint.

The Pentagon Papers is a First Amendment classic — a striking reminder of how powerful a constitution can be. But even classics get old. And in a speech that Abrams gave around the time the first edition of this book was published, he asked an incredible question: Is the case really important anymore? Or has technology rendered this protection of the First Amendment unnecessary?

Abrams’s question was motivated by an obvious point: For the government to succeed in a claim that a printing should be stopped, it must show “irreparable harm” — harm so significant and irreversible that the Court must intervene to prevent it[17]. But that showing depends on the publication not occurring — if the Pentagon Papers had already been published by the Chicago Tribune, the government could have claimed no compelling interest to stop its publication in the New York Times. When the cat is already out of the bag, preventing further publication does not return the cat to the bag.

This point is made clear in a case that came after New York Times — a case that could have been invented by a law professor. In the late 1970s, the Progressive commissioned an article by Howard Morland about the workings of an H-bomb. The Progressive first submitted the manuscript to the Department of Energy, and the government in turn brought an injunction to block its publication. The government’s claim was compelling: to give to the world the secrets of how to build a bomb would make it possible for any terrorist to annihilate any city. On March 26, 1979, Judge Robert Warren of the Western District of Wisconsin agreed and issued a temporary restraining order enjoining the Progressive from publishing the article[18].

Unlike the Pentagon Papers case, this case didn’t race to the Supreme Court. Instead, it stewed, no doubt in part because the district judge hearing the case understood the great risk this publication presented. The judge did stop the publication while he thought through the case. He thought for two and a half months. The publishers went to the Court of Appeals, and to the Supreme Court, asking each to hurry the thinking along. No court intervened.

Until Chuck Hansen, a computer programmer, ran a “Design Your Own H-Bomb” contest and circulated an eighteen-page letter in which he detailed his understanding of how an H-Bomb works. On September 16, 1979, the Press-Connection of Madison, Wisconsin, published the letter. The next day the government moved to withdraw its case, conceding that it was now moot. The compelling interest of the government ended once the secret was out[19].

Note what this sequence implies. There is a need for the constitutional protection that the Pentagon Papers case represents only because there is a real constraint on publishing. Publishing requires a publisher, and a publisher can be punished by the state. But if the essence or facts of the publication are published elsewhere first, then the need for constitutional protection disappears. Once the piece is published, there is no further legal justification for suppressing it.

So, Abrams asks, would the case be important today? Is the constitutional protection of the Pentagon Papers case still essential?

Surprisingly, Floyd Abrams suggests not[20]. Today there’s a way to ensure that the government never has a compelling interest in asking a court to suppress publication. If the New York Times wanted to publish the Pentagon Papers today, it could ensure that the papers had been previously published simply by leaking them to a USENET newsgroup, or one of a million blogs. More quickly than its own newspaper is distributed, the papers would then be published in millions of places across the world. The need for the constitutional protection would be erased, because the architecture of the system gives anyone the power to publish quickly and anonymously.

Thus the architecture of the Net, Abrams suggested, eliminates the need for the constitutional protection. Even better, Abrams went on, the Net protects against prior restraint just as the Constitution did — by ensuring that strong controls on information can no longer be achieved. The Net does what publication of the Pentagon Papers was designed to do — ensure that the truth does not remain hidden.

But there’s a second side to this story.

On July 17, 1996, TWA Flight 800 fell from the sky ten miles off the southern coast of Center Moriches, New York. Two hundred and thirty people were killed. Immediately after the accident the United States launched the (then) largest investigation of an airplane crash in the history of the National Transportation Safety Board (NTSB), spending $27 million to discover the cause of the crash, which eventually was determined to have been a mechanical failure[21].

This was not, however, the view of the Internet. From the beginning, stories circulated about “friendly fire” — missiles that were seen to hit the airplane. Dozens of eyewitnesses reported that they saw a streaking light shoot toward the plane just before it went down. There were stories about missile tests conducted by the Navy seventy miles from the crash site[22]. The Net claimed that there was a cover-up by the U.S. government to hide its involvement in one of the worst civil air disasters in American history.

The government denied these reports. Yet the more the government denied them, the more contrary “evidence” appeared on the Net[23]. And then, as a final straw in the story, there was a report, purportedly by a government insider, claiming that indeed there was a conspiracy — because evidence suggested that friendly fire had shot down TWA 800[24].

The former press secretary to President John F. Kennedy believed this report. In a speech in France, Pierre Salinger announced that his government was hiding the facts of the case, and that he had the proof.

I remember this event well. I was talking to a colleague just after I heard Salinger’s report. I recounted Salinger’s report to this colleague, a leading constitutional scholar from one of the top American law schools. We both were at a loss about what to believe. There were cross-cutting intuitions about credibility. Salinger was no nut, but the story was certainly loony.

Salinger, it turns out, had been caught by the Net. He had been tricked by the flip side of the point Floyd Abrams has made. In a world where everyone can publish, it is very hard to know what to believe. Publishers are also editors, and editors make decisions about what to publish — decisions that ordinarily are driven at least in part by the question, is it true? Statements cannot verify themselves. We cannot always tell, from a sentence reporting a fact about the world, whether that sentence is true[25]. So in addition to our own experience and knowledge of the world, we must rely on structures of reputation that build credibility. When something is published, we associate the claim with the publisher. If the New York Times says that aliens have kidnapped the President, it is viewed differently from a story with the identical words published in the National Enquirer.

When a new technology comes along, however, we are likely to lose our bearings. This is nothing new. It is said that the word phony comes from the birth of the telephone — the phony was the con artist who used the phone to trick people who were familiar with face-to-face communication only. We should expect the same uncertainty in cyberspace, and expect that it too, at first, will shake expectations of credibility.

Abrams’s argument then depends on a feature of the Net that we cannot take for granted. If there were credibility on the Net, the importance of the Pentagon Papers case would indeed be diminished. But if speech on the Net lacks credibility, the protections of the Constitution again become important.

“Credibility”, however, is not a quality that is legislated or coded. It comes from institutions of trust that help the reader separate reliable from unreliable sources. Flight 800 thus raises an important question: How can we reestablish credibility in this space so that it is not lost to the loons[26]?

In the first edition of this book, that question could only be answered hypothetically. But in the time since, we’ve begun to see an answer to this question emerge. And the word at the center of that answer is: Blog.

At this writing, there are more than 50 million weblogs on the Internet. There’s no single way to describe what these blogs are. They differ dramatically, and probably most of what gets written there is just crap. But it is wrong to judge a dynamic by a snapshot. And the structure of authority that this dynamic is building is something very new.

At their best, blogs are instances of amateur journalism — where “amateur”, again, means not second rate or inferior, but one who does what he does for the love of the work and not the money. These journalists write about the world — some from a political perspective, some from the point of view of a particular interest. But they all triangulate across a range of other writers to produce an argument, or a report, that adds something new. The ethic of this space is linking — of pointing, and commenting. And while this linking is not “fair and balanced”, it does produce a vigorous exchange of ideas.

These blogs are ranked. Services such as Technorati constantly count the blog space, watching who links to whom, and which blogs produce the greatest credibility. And these rankings contribute to an economy of ideas that builds a discipline around them. Bloggers get authority from the citation others give them; that authority attracts attention. It is a new reputation system, established not by editors or CEOs of media companies, but by an extraordinarily diverse range of contributors.
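The mechanics behind such rankings can be made concrete. Technorati’s actual algorithm was never public, so the sketch below is only a generic illustration of the underlying idea — a PageRank-style computation in which a blog’s authority grows with the authority of the blogs that cite it. The blog names are hypothetical.

```python
def rank_by_links(links, damping=0.85, iterations=50):
    """Toy link-based authority ranking.

    `links` maps each blog to the set of blogs it links to.
    Authority flows along links: being cited by a well-cited
    blog counts for more than being cited by an obscure one.
    """
    blogs = set(links) | {b for targets in links.values() for b in targets}
    n = len(blogs)
    score = {b: 1.0 / n for b in blogs}
    for _ in range(iterations):
        new = {b: (1 - damping) / n for b in blogs}
        for source, targets in links.items():
            if targets:
                # Split this blog's authority among the blogs it cites.
                share = damping * score[source] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # A blog with no outbound links spreads its weight evenly.
                for b in blogs:
                    new[b] += damping * score[source] / n
        score = new
    return score


# Hypothetical example: two blogs cite a third, so its authority rises.
scores = rank_by_links({
    "blog_a": {"blog_c"},
    "blog_b": {"blog_c"},
    "blog_c": set(),
})
```

Run on this tiny graph, `blog_c` ends up with the highest score, while the symmetric `blog_a` and `blog_b` tie — a crude stand-in for the way citation, not editorial fiat, confers authority in the blogosphere.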

And in the end, these amateur journalists have an effect. When TWA Flight 800 fell from the sky, there were theories about conspiracies that were filtered through no structure of credibility. Today, there are more structures of credibility. So when Dan Rather produced a letter on CBS’s 60 Minutes purporting to establish a certain fraud by the President, it took the blogosphere 24 hours to establish that the media company’s evidence was faked. More incredibly, it took CBS almost two weeks to acknowledge what the blogs had established[27]. The collaborative work of the blogs uncovered the truth, and in the process embarrassed a very powerful media company. And in contrast to the behavior of that media company, the blogs demonstrated something important about how the Net had matured.

This collaboration comes with no guarantees, except the guarantee of a process. The most extraordinary collaborative process in the context of content is Wikipedia. Wikipedia is a free online encyclopedia, created solely by volunteers. Since its launch at the beginning of 2001, these (literally thousands of) volunteers have created over 2 million articles. There are nine major language versions (not including the Klingon version), with about half of the total articles in English.

The aim of the Wikipedia is neutrality. The contributors edit, and reedit, to frame a piece neutrally. Sometimes that effort fails — particularly controversial topics can’t help but attract fierce conflict. But in the main, the work is an unbelievable success. With nothing more than the effort of volunteers, the most used, and perhaps the most useful encyclopedia ever written has been created through millions of uncoordinated instances of collaboration.

Wikipedia, however, can’t guarantee its results. It can’t guarantee that, at any particular moment, there won’t be errors in its entries. But of course, no one can make that guarantee. Indeed, in one study that randomly collected entries from Wikipedia and from Encyclopedia Britannica, there were just as many errors in Britannica as in Wikipedia[28].

But Wikipedia is open to a certain kind of risk that Britannica is not — maliciousness. In May 2005, the Wikipedia entry about John Seigenthaler Sr. was defaced by a prankster. Because not many people were monitoring the entry, it took four months before the error was noticed and corrected. Seigenthaler wasn’t happy about this. He, understandably, complained that it was the architecture of Wikipedia that was to blame.

Wikipedia’s architecture could be different. But the lesson here is not its failures. It is instead the extraordinary surprise of Wikipedia’s success. There is an unprecedented collaboration of people from around the world working to converge upon truth across a wide range of topics. That, in a sense, is what science does as well. It uses a different kind of “peer review” to police its results. That “peer review” is no guarantee either — South Koreans, for example, were quite convinced that one of their leading scientists, Hwang Woo-Suk, had discovered a technique to clone human stem cells. They believed it because peer-reviewed journals had reported it. But whether right to believe it or not, the journals were wrong. Hwang was a fraud, and he hadn’t cloned stem cells, or anything else worth the attention of the world.

Blogs don’t coordinate any collaborative process to truth in the way Wikipedia does. In a sense, the votes for any particular position at any particular moment are always uncounted, while at every moment they are always tallied on Wikipedia. But even if they’re untallied, readers of blogs learn to triangulate on the truth. Just as with witnesses at an accident (though better, since these witnesses have reputations), the reader constructs what must be true from a range of views. Cass Sunstein rightly worries that the norms among bloggers have not evolved enough to include internal diversity of citation[29]. That may well be true. But whatever the normal reading practice is for ordinary issues, the diversity of the blogosphere gives readers an extremely wide range of views to consider when any major issue — such as that which stung Salinger — emerges. When tied to the maturing reputation system that constantly tempers influence, this means that it is easier to balance extreme views with the correction that many voices can build.

A credibility can thus emerge that, while not perfect, is at least differently encumbered. NBC News must worry about its bottom line, because its reporting increasingly responds to it. Blogs don’t have a bottom line. They are — in the main — amateurs. Reputation constrains both, and the competition between the two forms of journalism has increasingly improved each. We have a richer environment for free speech today than five years ago — a commercial press tempered by blogs regulated by a technology of reputation that guides the reader as much as the writer.

Errors will remain. Everyone has a favorite example — mine is the ridiculous story about Al Gore claiming to have “invented the Internet.” The story originated with a CNN interview on March 9, 1999. In that interview, in response to a question about what was different about Gore over Bradley, Gore said the following:

During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth and environmental protection, improvements in our educational system[30].

As is clear from the context, Gore is stating not that he invented the technology of the Internet, but that he “took the initiative in moving forward a whole range of initiatives” that have been important to the country. But the story was retold as the claim that Gore “invented the Internet.” That’s how the Internet journalist Declan McCullagh repeated it two weeks later: “The vice president offered up a whopper of a tall tale in which he claimed to have invented the Internet.” That characterization — plainly false — stuck. In a 2003 study of the media’s handling of the story, Chip Heath and Jonathan Bendor conclude, “We show that the false version of Gore’s statement dominated the true one in mainstream political discourse by a wide margin. This is a clear failure in the marketplace of ideas, which we document in detail[31]”.

The only redeeming part of this story is that it’s simple to document the falsity — because of the Internet. Seth Finkelstein, a programmer and anti-censorware activist, has created a page on the Internet collecting the original interview and the subsequent reports about it[32]. His is the model of the very best the Internet could be. That virtue, however, didn’t carry too far beyond the Internet.
