Regulating Net-Porn

Of all the possible speech regulations on the Net (putting copyright to one side for the moment), the United States Congress has been most eager to regulate porn. That eagerness, however, has not yet translated into success. Congress has passed two pieces of major legislation. The first was struck down completely. The second continues to be battered down in its struggle through the courts.

The first statute was the product of a scare. Just about the time the Net was coming into the popular consciousness, a particularly seedy aspect of the Net was among the first to come into view. This was porn on the Net. This concern became widespread in the United States early in 1995[36]. Its source was an extraordinary rise in the number of ordinary users of the Net, and therefore a rise in use by kids, and an even more extraordinary rise in the availability of what many call porn on the Net. An extremely controversial (and deeply flawed) study published in the Georgetown Law Journal reported that the Net was awash in porn[37]. Time ran a cover story about its availability[38]. Senators and congressmen were bombarded with demands to do something to regulate “cybersmut.”

Congress responded in 1996 with the Communications Decency Act (CDA). A law of extraordinary stupidity, the CDA practically impaled itself on the First Amendment. The law made it a felony to transmit “indecent” material on the Net to a minor or to a place where a minor could observe it. But it gave speakers on the Net a defense — if they took good-faith, “reasonable, effective” steps to screen out children, then they could speak “indecently”[39].

There were at least three problems with the CDA, any one of which should have doomed it to well-deserved extinction[40]. The first was the scope of the speech it addressed: “Indecency” is not a category of speech that Congress has the power to regulate (at least not outside the context of broadcasting)[41]. As I have already described, Congress can regulate speech that is “harmful to minors”, or Ginsberg speech, but that is very different from speech called “indecent.” Thus, the first strike against the statute was that it reached too far.

Strike two was vagueness. The form of the allowable defenses was clear: So long as there was an architecture for screening out kids, the speech would be permitted. But the architectures that existed at the time for screening out children were relatively crude, and in some cases quite expensive. It was unclear whether, to satisfy the statute, they had to be extremely effective or just reasonably effective given the state of the technology. If the former, then the defenses were no defense at all, because an extremely effective block was extremely expensive; the cost of a reasonably effective block would not have been so high.

Strike three was the government’s own doing. In arguing its case before the Supreme Court in 1997, the government did little either to narrow the scope of the speech being regulated or to expand the scope of the defenses. It stuck with the hopelessly vague, overbroad definition Congress had given it, and it displayed a poor understanding of how the technology might have provided a defense. As the Court considered the case, there seemed to be no way that an identification system could satisfy the statute without creating an undue burden on Internet speakers.

Congress responded quickly by passing a second statute aimed at protecting kids from porn. This was the Child Online Protection Act (COPA) of 1998[42]. This statute was better tailored to the constitutional requirements. It aimed at regulating speech that was harmful to minors. It allowed commercial websites to provide such speech so long as the website verified the viewer’s age. Yet in June 2004, the Supreme Court enjoined enforcement of the statute[43].

Both statutes respond to a legitimate and important concern. Parents certainly have the right to protect their kids from this form of speech, and it is perfectly understandable that Congress would want to help parents secure this protection.

But both statutes are unconstitutional — not, as some suggest, because there is no way that Congress could help parents, but because the particular way that Congress has tried to help parents puts more of a burden on legitimate speech (for adults, that is) than is necessary.

In my view, however, there is a perfectly constitutional statute that Congress could pass that would have an important effect on protecting kids from porn.

To see what that statute looks like, we need to step back a bit from the CDA and COPA to identify what the legitimate objectives of this speech regulation would be.

Ginsberg[44] established that there is a class of speech that adults have a right to but that children do not. States can regulate that class to ensure that such speech is channeled to the proper user and blocked from the improper user.

Conceptually, for such a regulation to work, two questions must be answered:

1. Is the speaker uttering “regulable” speech — meaning speech “harmful to minors”?

2. Is the listener entitled to consume this speech — meaning is he a minor?

And with the answers to these questions, the logic of this regulation is:

IF (speech == regulable)
AND (listener == minor)
THEN block access.

Now between the listener and the speaker, clearly the speaker is in a better position to answer question #1. The listener can’t know whether the speech is harmful to minors until the listener encounters the speech. If the listener is a minor, then it is too late. And between the listener and the speaker, clearly the listener is in a better position to answer question #2. On the Internet especially, it is extremely burdensome for the speaker to certify the age of the listener. It is the listener who knows his age most cheaply.

The CDA and COPA placed the burden of answering question #1 on the speaker, and #2 on both the speaker and the listener. A speaker had to determine whether his speech was regulable, and a speaker and a listener had to cooperate to verify the age of the listener. If the speaker didn’t, and the listener was a minor, then the speaker was guilty of a felony.

Real-space law also assigns the burden in exactly the same way. If you want to sell porn in New York, you need both to determine whether the content you’re selling is “harmful to minors” and to determine whether the person you’re selling to is a minor. But real space is importantly different from cyberspace, at least with respect to the cost of answering question #2: In real space, the answer is almost automatic (again, it’s hard for a kid to hide that he’s a kid). And where the answer is not automatic, there’s a cheap system of identification (a driver’s license, for example). But in cyberspace, any mandatory system of identification constitutes a burden both for the speaker and the listener. Even under COPA, a speaker has to bear the burden of a credit card system, and the listener has to trust a pornographer with his credit card just to get access to constitutionally protected speech.

There’s another feature of the CDA/COPA laws that seems necessary but isn’t: They both place the burden of their regulation upon everyone, including those who have a constitutional right to listen. They require, that is, everyone to show an ID when it is only kids who can constitutionally be blocked.

So compare then the burdens of the CDA/COPA to a different regulatory scheme: one that placed the burden of question #1 (whether the content is harmful to minors) on the speaker and placed the burden of question #2 (whether the listener is a minor) on the listener.

One version of this scheme is simple, obviously ineffective, and unfair to the speaker: a requirement that a website block access with a page that says “The content on this page is harmful to minors. Click here if you are a minor.” This scheme places the burden of age identification on the kid. But obviously, it would have zero effect in actually blocking a kid. And, less obviously, this scheme would be unfair to speakers. A speaker may well have content that constitutes material “harmful to minors”, but not everyone who offers such material should be labeled a pornographer. This transparent block would stigmatize some speakers, and if a less burdensome system were possible, that stigma should render a regulation requiring it unconstitutional.

So what’s an alternative to this scheme that might actually work?

I’m going to demonstrate such a system with a particular example. Once you see the example, the general point will be easier to see as well.

Everyone knows the Apple Macintosh. It, like every modern operating system, now allows users to specify “accounts” on a particular machine. I’ve set one up for my son, Willem (he’s only three, but I want to be prepared). When I set up Willem’s account, I set it up with “parental controls.” That means I get to specify precisely what programs he gets to use, and what access he has to the Internet. The “parental controls” make it (effectively) impossible to change these specifications. You need the administrator’s password to do that, and if that’s kept secret, then the universe the kid gets to through the computer is the universe defined by the access the parent selects.

Imagine one of the programs I could select was a browser with a function we could call “kids-mode-browsing” (KMB). That browser would be programmed to watch on any web page for a particular mark. Let’s call that mark the “harmful to minors” mark, or <H2M> for short. That mark, or in the language of the Web, tag, would bracket any content the speaker believes is harmful to minors, and the KMB browser would then not display any content bracketed with this <H2M> tag. So, for example, a web page marked up “Blah blah blah <H2M>block this</H2M> blah blah blah” would appear on a KMB screen as: “Blah blah blah blah blah blah.”
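As a minimal sketch of the idea (the tag name <H2M>, the function names, and the regular-expression approach below are illustrative assumptions, not a specification), a kids-mode filter could be as simple as stripping every <H2M>-bracketed span before the page is rendered:

import re

# Matches an <H2M>...</H2M> span, tags included. DOTALL lets a tagged
# span run across lines; IGNORECASE tolerates <h2m> as well.
H2M_SPAN = re.compile(r"<H2M>.*?</H2M>", re.IGNORECASE | re.DOTALL)

def kids_mode_render(html: str) -> str:
    """Return the page with every <H2M>-bracketed span removed.
    A real kids-mode browser would do this inside its HTML parser;
    a regular-expression pass is enough to show the idea."""
    return H2M_SPAN.sub("", html)

page = "Blah blah blah <H2M>block this</H2M> blah blah blah"
print(kids_mode_render(page))   # prints: Blah blah blah  blah blah blah

An ordinary browser, which does not look for the tag, would simply ignore it and display everything.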

So, if the world of the World Wide Web was marked with <H2M> tags, and if browser manufacturers built this <H2M>-filtering function into their browsers, then parents would be able to configure their machines so their kids didn’t get access to any content marked <H2M>. The policy objective of enabling parental control would be achieved with a minimal burden on constitutionally entitled speakers.

How can we get (much of) the world of the Web to mark its harmful-to-minors content with <H2M> tags?

This is the role for government. Unlike the CDA or COPA, the regulation required to make this system work — to the extent it works, and more on that below — is simply that speakers mark their content. Speakers would not be required to block access; speakers would not be required to verify age. All the speaker would be required to do is to tag content deemed harmful to minors with the proper tag.

This tag, moreover, would not be a public marking that a website was a porn site. This proposal is not like the (idiotic, imho) proposals that we create a .sex or .xxx domain for the Internet. People shouldn’t have to relocate to a red-light district just to have adult material on their site. The <H2M> tag instead would be hidden from the ordinary user — unless that user looks for it, or wants to block that content him or herself.

Once the government enacts this law, then browser manufacturers would have an incentive to build this (very simple) filtering technology into their browsers. Indeed, given the open-source Mozilla browser technology — to which anyone could add anything they wanted — the costs of building this modified browser are extremely low. And once the government enacts this law, and browser manufacturers build a browser that recognizes this tag, then parents would have a strong reason to adopt platforms that enable them to control where their kids go on the Internet.

Thus, in this solution, the LAW creates an incentive (through penalties for noncompliance) for sites with “harmful to minors” material to change their ARCHITECTURE (by adding <H2M> tags), which creates a MARKET for browser manufacturers to add filtering to their code, so that parents can protect their kids. The only burden created by this solution is on the speaker; this solution does not burden the rightful consumer of porn at all. To that consumer, there is no change in the way the Web is experienced, because without a browser that looks for the <H2M> tag, the tag is invisible to the consumer.

But isn’t that burden on the speaker unconstitutional? It’s hard to see why it would be, if it is constitutional in real space to tell a speaker he must filter kids from his content “harmful to minors.” No doubt there’s a burden. But the question isn’t whether there’s a burden. The constitutional question is whether there is a less burdensome way to achieve this important state interest.

But what about foreign sites? Americans can’t regulate what happens in Russia. Actually, that’s less true than you think. As we’ll see in the next chapter, there’s much that the U.S. government can do and does to effectively control what other countries do.

Still, you might worry that sites in other countries won’t obey American law because it’s not likely we’ll send in the Marines to take out a noncomplying website. That’s certainly true. But to the extent that a parent is concerned about this, as I already described, there is a market already to enable geographic filtering of content. The same browser that filters on <H2M> could in principle subscribe to an IP mapping service to enable access to American sites only.
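To sketch how that might look (the country table and the function below are hypothetical stand-ins for whatever IP-mapping service a browser vendor actually subscribed to), the same kids-mode browser could refuse to load sites that do not resolve to an allowed country:

import socket
from urllib.parse import urlparse

# The parent's policy in this hypothetical kids-mode browser: American sites only.
ALLOWED_COUNTRIES = {"US"}

def country_of(ip_address: str) -> str:
    """Stand-in for a commercial IP-to-country lookup; a real browser
    would query whatever geolocation service it subscribes to."""
    toy_table = {"198.51.100.7": "US"}   # placeholder entry, documentation IP range
    return toy_table.get(ip_address, "UNKNOWN")

def allowed_by_geography(url: str) -> bool:
    """True if the site's address maps to a country the parent allows."""
    host = urlparse(url).hostname
    ip = socket.gethostbyname(host)      # ordinary DNS lookup
    return country_of(ip) in ALLOWED_COUNTRIES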

But won’t kids get around this restriction? Sure, of course some will. But the measure of success for legislation (as opposed to missile tracking software) is not 100 percent. The question the legislature asks is whether the law will make things better[45]. To substantially block access to <H2M> content would be a significant improvement, and that would be enough to make the law make sense.

But why not simply rely upon filters that parents and libraries install on their computers? Voluntary filters don’t require any new laws, and they therefore don’t require any state-sponsored censorship to achieve their ends.

It is this view that I want to work hardest to dislodge, because built within it are all the mistakes that a pre-cyberlaw understanding brings to the question of regulation in cyberspace.

First, consider the word “censorship.” What this regulation would do is give parents the opportunity to exercise an important choice. Enabling parents to do this has been deemed a compelling state interest. The kids who can’t get access to this content because their parents exercised this choice might call it “censorship”, but that isn’t a very useful application of the term. If there is a legitimate reason to block this form of access, that’s speech regulation. There’s no reason to call it names.

Second, consider the preference for “voluntary filters.” If voluntary filters were to achieve the very same end (blocking H2M speech and only H2M speech), I’d be all for them. But they don’t. As the ACLU quite powerfully described (shortly after winning the case that struck down the CDA partly on the grounds that private filters were a less restrictive means than government regulation):

The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking “inappropriate speech.” The meeting was “voluntary”, of course: the White House claimed it wasn’t holding anyone’s feet to the fire. But the ACLU and others . . . were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. . . . It was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes[46].

The ACLU’s concern is the obvious one: The filters that the market has created not only filter much more broadly than the legitimate interest the state has here — blocking <H2M> speech — they also do so in a totally nontransparent way. There have been many horror stories of sites being included in filters for all the wrong reasons (including for simply criticizing the filter)[47]. And when you are wrongfully blocked by a filter, there’s not much you can do. The filter is just a particularly effective recommendation list. You can’t sue Zagat’s just because they steer customers to your competitors.

My point is not that we should ban filters, or that parents shouldn’t be allowed to block more than H2M speech. My point is that if we rely upon private action alone, more speech will be blocked than if the government acted wisely and efficiently.

And that frames my final criticism: As I’ve argued from the start, our focus should be on the liberty to speak, not just on the government’s role in restricting speech. Thus, between two “solutions” to a particular speech problem, one that involves the government and suppresses speech narrowly, and one that doesn’t involve the government but suppresses speech broadly, constitutional values should tilt us to favor the former. First Amendment values (even if not the First Amendment directly) should lead to favoring a speech regulation system that is thin and accountable, and in which the government’s action or inaction leads only to the suppression of speech the government has a legitimate interest in suppressing. Or, put differently, the fact that the government is involved should not necessarily disqualify a solution as a proper, rights-protective solution.

The private filters the market has produced so far are both expensive and over-inclusive. They block content that is beyond the state’s interest in regulating speech. They are effectively subsidized because there is no less restrictive alternative.

Publicly required filters (which are what the <H2M> tag effectively enables) are narrowly targeted on the legitimate state interest. And if there is a dispute about that tag — if for example, a prosecutor says a website with information about breast cancer must tag the information with an <H2M> tag — then the website at least has the opportunity to fight that. If that filtering were in private software, there would be no opportunity to fight it through legal means. All that free speech activists could then do is write powerful, but largely invisible, articles like the ACLU’s famous plea.

It has taken key civil rights organizations too long to recognize this private threat to free-speech values. The tradition of civil rights is focused directly on government action alone. I would be the last to say that there’s not great danger from government misbehavior. But there is also danger to free speech from private misbehavior. An obsessive refusal even to weigh the one threat against the other does not serve the values promoted by the First Amendment.

But then what about public filtering technologies, like PICS? Wouldn’t PICS be a solution that avoided the “secret list problem” you identified?

PICS is an acronym for the World Wide Web Consortium’s Platform for Internet Content Selection. We have already seen a relative (actually, a child) of PICS in the chapter about privacy: P3P. Like PICS, P3P is a protocol for rating and filtering content on the Net. In the context of privacy, the content was made up of assertions about privacy practices, and the regime was designed to help individuals negotiate those practices.

With online speech the idea is much the same. PICS divides the problem of filtering into two parts — labeling (rating content) and then filtering (blocking content on the basis of the rating). The idea was that software authors would compete to write software that could filter according to the ratings; content providers and rating organizations would compete to rate content. Users would then pick their filtering software and rating system. If you wanted the ratings of the Christian Right, for example, you could select its rating system; if I wanted the ratings of the Atheist Left, I could select that. By picking our raters, we would pick the content we wanted the software to filter.
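The division of labor PICS envisioned (rating services label content; filtering software blocks on the basis of those labels, using whichever service the user selects) can be sketched in a few lines. The label format and category names below are illustrative assumptions, not the actual PICS label syntax:

# A rating service publishes labels: URL -> scores on its own categories.
RATING_SERVICE_A = {
    "http://example.com/page1": {"violence": 3, "nudity": 0},
    "http://example.com/page2": {"violence": 1, "nudity": 0},
}

# The user picks a rating service and the limits to enforce against it.
USER_PROFILE = {
    "labels": RATING_SERVICE_A,
    "limits": {"violence": 2, "nudity": 1},   # block anything rated above these
}

def blocked(url: str, profile: dict) -> bool:
    """True if the user's chosen rating service labels the page above
    the user's chosen limits. Unlabeled pages pass through here; a
    stricter profile could just as easily block them by default."""
    label = profile["labels"].get(url)
    if label is None:
        return False
    return any(label.get(category, 0) > limit
               for category, limit in profile["limits"].items())

print(blocked("http://example.com/page1", USER_PROFILE))   # True  (violence 3 > 2)
print(blocked("http://example.com/page2", USER_PROFILE))   # False (within limits)

Swapping in a different rating service is just a matter of pointing "labels" at a different table; nothing in the filtering code itself embodies anyone's particular judgment about content.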

This regime requires a few assumptions. First, software manufacturers would have to write the code necessary to filter the material. (This has already been done in some major browsers). Second, rating organizations would actively have to rate the Net. This, of course, would be no simple task; organizations have not risen to the challenge of billions of web pages. Third, organizations that rated the Net in a way that allowed for a simple translation from one rating system to another would have a competitive advantage over other raters. They could, for example, sell a rating system to the government of Taiwan and then easily develop a slightly different rating system for the “government” of IBM.

If all three assumptions held true, any number of ratings could be applied to the Net. As envisioned by its authors, PICS would be neutral among ratings and neutral among filters; the system would simply provide a language with which content on the Net could be rated, and with which decisions about how to use that rated material could be made from machine to machine[48].

Neutrality sounds like a good thing. It sounds like an idea that policymakers should embrace. Your speech is not my speech; we are both free to speak and listen as we want. We should establish regimes that protect that freedom, and PICS seems to be just such a regime.

But PICS contains more “neutrality” than we might like. PICS is not just horizontally neutral — allowing an individual to choose from a range of rating systems the one he or she wants; PICS is also vertically neutral — allowing the filter to be imposed at any level in the distributional chain. Most people who first endorsed the system imagined the PICS filter sitting on a user’s computer, filtering according to the desires of that individual. But nothing in the design of PICS prevents organizations that provide access to the Net from filtering content as well. Filtering can occur at any level in the distributional chain — the user, the company through which the user gains access, the ISP, or even the jurisdiction within which the user lives. Nothing in the design of PICS, that is, requires that such filters announce themselves. Filtering in an architecture like PICS can be invisible. Indeed, in some of its implementations invisibility is part of its design[49].

This should set off alarms for those keen to protect First Amendment values — even though the protocol is totally private. As a (perhaps) unintended consequence, the PICS regime not only enables nontransparent filtering but, by producing a market in filtering technology, engenders filters for much more than Ginsberg speech. That, of course, was the ACLU’s legitimate complaint against the original CDA. But here the market, whose tastes are the tastes of the community, facilitates the filtering. Built into the filter are the norms of a community, which are broader than the narrow filter of Ginsberg. The filtering system can expand as broadly as the users want, or as far upstream as sources want.

The H2M+KMB alternative is much narrower. It enables a kind of private zoning of speech. But there would be no incentive for speakers to block out listeners; the incentive of a speaker is to have more, not fewer, listeners. The only requirements to filter out listeners would be those that may constitutionally be imposed — Ginsberg speech requirements. Since they would be imposed by the state, these requirements could be tested against the Constitution, and if the state were found to have reached too far, it could be checked.

The difference between these two solutions, then, is in the generalizability of the regimes. The filtering regime would establish an architecture that could be used to filter any kind of speech, and the desires for filtering then could be expected to reach beyond a constitutional minimum; the zoning regime would establish an architecture for blocking that would not have this more general purpose.

Which regime should we prefer?

Notice the values implicit in each regime. Both are general solutions to particular problems. The filtering regime does not limit itself to Ginsberg speech; it can be used to rate, and filter, any Internet content. And the zoning regime, in principle, is not limited to zoning only for Ginsberg speech. The <H2M> kids-ID zoning solution could be used to advance other child protective schemes. Thus, both have applications far beyond the specifics of porn on the Net.

At least in principle. We should be asking, however, what incentives are there to extend the solution beyond the problem. And what resistance is there to such extensions?

Here we begin to see the important difference between the two regimes. When your access is blocked because of a certificate you are holding, you want to know why. When you are told you cannot enter a certain site, the claim to exclude is checked at least by the person being excluded. Sometimes the exclusion is justified, but when it is not, it can be challenged. Zoning, then, builds into itself a system for its own limitation. A site cannot block someone from the site without that individual knowing it[50].

Filtering is different. If you cannot see the content, you cannot know what is being blocked. Content could be filtered by a PICS filter somewhere upstream and you would not necessarily know this was happening. Nothing in the PICS design requires truth in blocking in the way that the zoning solution does. Thus, upstream filtering becomes easier, less transparent, and less costly with PICS.
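A toy contrast makes the transparency point concrete. The two functions below are illustrative, not drawn from any actual PICS implementation: a filter running on the user's own machine can announce what it blocked and why, while an upstream filter can make blocked content indistinguishable from content that simply does not exist.

def fetch(url: str) -> str:
    # Stand-in for an ordinary HTTP request.
    return f"<html>content of {url}</html>"

BLOCKED_URLS = {"http://example.com/controversial"}   # somebody's private list

def client_side_fetch(url: str) -> str:
    """Filtering at the user's machine: the block announces itself."""
    if url in BLOCKED_URLS:
        return f"BLOCKED by your filter: {url}"
    return fetch(url)

def upstream_fetch(url: str) -> str:
    """Filtering somewhere upstream: to the user, the page has
    simply vanished."""
    if url in BLOCKED_URLS:
        return "404 Not Found"
    return fetch(url)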

This effect is even clearer if we take apart the components of the filtering process. Recall the two elements of filtering solutions — labeling content, and then blocking based on that labeling. We might well argue that the labeling is the more dangerous of the two elements. If content is labeled, then it is possible to monitor who gets what without even blocking access. That might well raise greater concerns than blocking, since blocking at least puts the user on notice.

These possibilities should trouble us only if we have reason to question the value of filtering generally, and upstream filtering in particular. I believe we do. But I must confess that my concern grows out of yet another latent ambiguity in our constitutional past.

There is undeniable value in filtering. We all filter out much more than we process, and in general it is better if we can select our filters rather than have others select them for us. If I read the New York Times rather than the Wall Street Journal, I am selecting a filter according to my understanding of the values of both newspapers. Obviously, in any particular case, there cannot be a problem with this.

But there is also a value in confronting the unfiltered. We individually may want to avoid issues of poverty or of inequality, and so we might prefer to tune those facts out of our universe. But it would be terrible from the standpoint of society if citizens could simply tune out problems that were not theirs, because those same citizens have to select leaders to manage these very problems[51].

In real space we do not have to worry about this problem too much because filtering is usually imperfect. However much I’d like to ignore homelessness, I cannot go to my bank without confronting homeless people on the street; however much I’d like to ignore inequality, I cannot drive to the airport without passing through neighborhoods that remind me of how unequal a nation the United States is. All sorts of issues I’d rather not think about force themselves on me. They demand my attention in real space, regardless of my filtering choices.

Of course, this is not true for everyone. The very rich can cut themselves off from what they do not want to see. Think of the butler on a 19th-century English estate, answering the door and sending away those he thinks should not trouble his master. Those people lived perfectly filtered lives. And so do some today.

But most of us do not. We must confront the problems of others and think about issues that affect our society. This exposure makes us better citizens[52]. We can better deliberate and vote on issues that affect others if we have some sense of the problems they face.

What happens, then, if the imperfections of filtering disappear? What happens if everyone can, in effect, have a butler? Would such a world be consistent with the values of the First Amendment?

Some believe that it would not be. Cass Sunstein, for example, has argued quite forcefully that the framers embraced what he calls a “Madisonian” conception of the First Amendment[53]. This Madisonian conception rejects the notion that the mix of speech we see should solely be a function of individual choice[54]. It insists, Sunstein claims, on ensuring that we are exposed to the range of issues we need to understand if we are to function as citizens. It therefore would reject any architecture that makes consumer choice trump. Choice is not a bad circumstance in the Madisonian scheme, but it is not the end of the matter. Ithiel de Sola Pool makes a very similar point:

What will it mean if audiences are increasingly fractionated into small groups with special interests? What will it mean if the agenda of national fads and concerns is no longer effectively set by a few mass media to which everyone is exposed? Such a trend raises for society the reverse problems from those posed by mass conformism. The cohesion and effective functioning of a democratic society depends upon some sort of public agora in which everyone participates and where all deal with a common agenda of problems, however much they may argue over the solutions[55].

On the other side are scholars such as Geoffrey Stone, who insists just as strongly that no such paternalistic ideal is found anywhere in the conception of free speech embraced by our framers[56]. The amendment, he says, is merely concerned with banning state control of private choice. Since enabling private choice is no problem under this regime, neither is perfect filtering.

This conflict among brilliant University of Chicago law professors reveals another latent ambiguity, and, as with other such ambiguity, I do not think we get far by appealing to Madison. To use Sunstein against Sunstein, the framers’ First Amendment was an incompletely theorized agreement, and it is better simply to confess that it did not cover the case of perfect filtering. The framers couldn’t imagine a PICS-enabled world; they certainly didn’t agree upon the scope of the First Amendment in such a world. If we are to support one regime over another, we must do so by asserting the values we want to embrace rather than claiming they have already been embraced.

So what values should we choose? In my view, we should not opt for perfect filtering[57]. We should not design for the most efficient system of censoring — or at least, we should not do this in a way that allows invisible upstream filtering. Nor should we opt for perfect filtering so long as the tendency worldwide is to overfilter speech. If there is speech the government has an interest in controlling, then let that control be obvious to the users. A political response is possible only when regulation is transparent.

Thus, my vote is for the regime that is least transformative of important public values. A zoning regime that enables children to self-identify is less transformative than a filtering regime that in effect requires all speech to be labeled. A zoning regime is not only less transformative but less enabling (of other regulation) — it requires the smallest change to the existing architecture of the Net and does not easily generalize to a far more significant regulation.

I would opt for a zoning regime even if it required a law and the filtering solution required only private choice. If the state is pushing for a change in the mix of law and architecture, I do not care that it is pushing with law in one context and with norms in the other. From my perspective, the question is the result, not the means — does the regime produced by these changes protect free speech values?

Others are obsessed with this distinction between law and private action. They view regulation by the state as universally suspect and regulation by private actors as beyond the scope of constitutional review. And, to their credit, most constitutional law is on their side.

But as I’ve hinted before, and defend more below, I do not think we should get caught up in the lines that lawyers draw. Our question should be the values we want cyberspace to protect. The lawyers will figure out how.

The annoying skeptic who keeps noting my “inconsistencies” will want to pester me again at this point. In the last chapter, I embraced an architecture for privacy that is in essence the architecture of PICS. P3P, like PICS, would enable machine-to-machine negotiation about content. The content of P3P is rules about privacy practices, and with PICS it is rules about content. But how, the skeptic asks, can I oppose one yet favor the other?

The answer is the same as before: The values of speech are different from the values of privacy; the control we want to vest over speech is less than the control we want to vest over privacy. For the same reasons that we disable some of the control over intellectual property, we should disable some of the control over speech. A little bit of messiness or friction in the context of speech is a value, not a cost.

But are these values different just because I say they are? No. They are only different if we say they are different. In real space we treat them as different. My core argument is that we choose how we want to treat them in cyberspace.
