Who Did What, Where?

Regulability also depends upon knowing the “what” in “who did what, where?” But again, the Internet as originally designed didn’t help the regulator here either. If the Internet protocol simply cuts up data into packets and stamps an address on them, then nothing in the basic protocol would tell anyone looking at the packet what the packet was for.

For example, imagine you’re a telephone company providing broadband Internet access (DSL) across your telephone lines. Some smart innovator develops Voice-over-IP (VOIP) — an application that makes it possible to use the Internet to make telephone calls. You, the phone company, aren’t happy about that, because now people using your DSL service can make unmetered telephone calls. That freedom cuts into your profit.

Is there anything you can do about this? Relying upon just the Internet protocols, the answer is no. The “packets” of data that contain the simulated-telephone calls look just like any other packet of data. They don’t come labeled “VOIP” or with any other consistent moniker. Instead, packets are simply marked with addresses. They are not marked with explanations of what is going on with each.
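The point is concrete at the level of the protocol itself. Here is a minimal sketch in Python of parsing the fixed, 20-byte IPv4 header. It shows everything the basic protocol stamps on a packet: source and destination addresses and a transport-protocol number, but no field saying what the payload is for.

    import socket
    import struct

    def parse_ipv4_header(packet: bytes) -> dict:
        """Extract the routing fields from a raw IPv4 packet."""
        # The fixed IPv4 header is the first 20 bytes of the packet.
        (version_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        header_len = (version_ihl & 0x0F) * 4
        return {
            "src": socket.inet_ntoa(src),    # who sent it
            "dst": socket.inet_ntoa(dst),    # where it is going
            "protocol": proto,               # 6 = TCP, 17 = UDP, etc.
            "payload": packet[header_len:],  # opaque bytes: no "purpose" field
        }

At this level a VOIP call and a web page request are indistinguishable: each is just a pair of addresses wrapped around opaque payload bytes.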

But as my example is meant to suggest, we can easily understand why some would be very keen to understand what packets are flowing across their network, and not just for anti-competitive purposes. Network administrators trying to decide whether to add new capacity need to know what the existing capacity is being used for. Businesses keen to avoid their employees wasting time with sports or porn have a strong interest in knowing just what their employees are doing. Universities trying to avoid viruses or malware being installed on network computers need to know what kind of packets are flowing onto their network. In all these cases, there’s an obvious and valid will to identify what packets are flowing on the network. And as they say, where there’s a will, there’s a way.

The way follows the same technique described in the section above. Again, the TCP/IP protocol doesn’t include technology for identifying the content carried in TCP/IP packets. But it also doesn’t interfere with applications that might examine TCP/IP packets and report what those packets are about.

So, for example, consider a package produced by Ipanema Technologies. This technology enables a network owner to inspect the packets traveling on its network. As its webpage promises,

The Ipanema Systems “deep” layer 7 packet inspection automatically recognizes all critical business and recreational application flows running over the network. Real-time graphical interfaces as well as minute-by-minute reports are available to rapidly discover newly deployed applications.[17]

Using the data gathered by this technology, the system generates reports about the applications being used in the network, and who’s using them. These technologies make it possible to control network use, either to economize on bandwidth costs, or to block uses that the network owner doesn’t permit.
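Ipanema’s product is proprietary, but the core idea of layer 7 inspection can be sketched in a few lines: match payload bytes against signatures of known applications. The patterns below are simplified illustrations, not the rules any real product ships.

    # Toy layer-7 classifier: match payload bytes against application
    # signatures. These patterns are illustrative simplifications.
    APP_SIGNATURES = {
        "HTTP": [b"GET ", b"POST ", b"HTTP/1."],
        "SIP (VOIP signaling)": [b"INVITE sip:", b"SIP/2.0"],
        "BitTorrent": [b"\x13BitTorrent protocol"],
    }

    def classify(payload: bytes) -> str:
        for app, patterns in APP_SIGNATURES.items():
            if any(p in payload[:64] for p in patterns):
                return app
        return "unknown"

    print(classify(b"INVITE sip:alice@example.com SIP/2.0\r\n"))  # SIP (VOIP signaling)
    print(classify(b"GET /index.html HTTP/1.1\r\n"))              # HTTP

A report generator over such classifications is then straightforward: count packets per application and per user, which is just the kind of minute-by-minute view the vendor advertises.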

Another example of this kind of content control is a product called “iProtectYou.”[18] This product also scans packets on a network, but this control is implemented at the level of a particular machine. Parents load this software on a computer; the software then monitors all network traffic to and from that computer. As the company describes, the program can then “filter harmful websites and newsgroups; restrict Internet time to a predetermined schedule; decide which programs can have Internet access; limit the amount of data that can be sent or received to/from your computer; block e-mails, online chats, instant messages and P2P connections containing inappropriate words; and produce detailed Internet activity logs.” Once again, this is an application that sits on top of the network and watches. It intervenes in network activity when it identifies the activity as the kind the administrator wants to control.
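The product’s code is not public, but the keyword-blocking idea it describes is simple to sketch, assuming nothing more than a plain blocklist of forbidden terms (the terms below are placeholders):

    # Toy sketch of keyword blocking on outbound messages. The
    # blocklist terms are placeholders, not the product's actual list.
    BLOCKLIST = {"badword1", "badword2"}

    def allow_message(text: str) -> bool:
        return set(text.lower().split()).isdisjoint(BLOCKLIST)

    def send(text: str) -> None:
        if allow_message(text):
            print("sent:", text)
        else:
            print("blocked and logged:", text)  # such filters also keep activity logs

    send("see you at lunch")     # sent
    send("that badword1 again")  # blocked and logged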

In addition to these technologies of control, programmers have developed a wide range of programs to monitor networks. Perhaps the dominant application in this context is called “nmap” — a program

for network exploration or security auditing . . . designed to rapidly scan large networks. . . . Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.[19]

This software is “free software,” meaning the source code is available, and any modifications of the source code must be made available as well. These conditions essentially guarantee that the code necessary to engage in this monitoring will always be available.
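nmap itself works with raw packets and a large signature database; the simplest version of the idea, though, fits in a dozen lines of Python: try to connect to each port and note which ones answer. (The real tool is invoked as, for example, nmap -sV host to detect service versions.)

    import socket

    def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
        """Crude TCP connect scan: list the ports that accept connections.

        nmap does far more (raw packets, OS and version detection); this
        only illustrates the basic idea of asking a host what it offers.
        """
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    open_ports.append(port)
        return open_ports

    # Only scan machines you administer or have permission to test.
    print(scan("127.0.0.1", [22, 80, 443]))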

Finally, coders have developed “packet filtering” technology, which, as one popular example describes, “is the selective passing or blocking of data packets as they pass through a network interface. . . . The most often used criteria are source and destination address, source and destination port, and protocol.” This again is a technology that monitors “what” is carried within packets, and decides what’s allowed based upon what it finds.
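A toy rule evaluator over exactly the criteria the quote names might look like this (the rules shown are illustrative, not drawn from any real configuration):

    # Toy packet filter: rules are checked in order, the first match
    # decides, and anything unmatched is blocked (default deny).
    # Fields: action, protocol, src addr, src port, dst addr, dst port.
    RULES = [
        ("block", "tcp", "*", "*", "*", 5060),  # e.g. drop SIP (VOIP) signaling
        ("pass",  "tcp", "*", "*", "*", 80),    # allow web traffic
    ]

    def matches(pattern, value):
        return pattern == "*" or pattern == value

    def filter_packet(proto, src, sport, dst, dport) -> str:
        for action, r_proto, r_src, r_sport, r_dst, r_dport in RULES:
            fields = ((r_proto, proto), (r_src, src), (r_sport, sport),
                      (r_dst, dst), (r_dport, dport))
            if all(matches(p, v) for p, v in fields):
                return action
        return "block"

    print(filter_packet("tcp", "192.0.2.7", 40000, "198.51.100.1", 5060))  # block
    print(filter_packet("tcp", "192.0.2.7", 40000, "198.51.100.1", 80))    # pass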

In each of these cases, a layer of code complements the TCP/IP protocol, to give network administrators something TCP/IP alone would not — namely, knowledge about “what” is carried in the network packets. That knowledge increases the “regulability” of network use. If a company doesn’t want its employees using IM chat, then these technologies will enforce that rule — by blocking the packets containing IM chat. Or if a company wants to know which employees use sexually explicit speech in Internet communication, these technologies will reveal that as well. Again, there are plenty of perfectly respectable reasons why network administrators might want to exercise this regulatory authority — even if there are plenty of cases where such power would be an abuse. Because of this legitimate demand, software products like this are developed.

Now, of course, there are countermeasures that users can adopt to avoid just this sort of monitoring. A user who encrypts the data he sends across the network will avoid any filtering on the basis of key words. And there are plenty of technologies designed to “anonymize” behavior on the Net, so administrators can’t easily know what an individual is doing on a network. But these countermeasures require a significant investment for a particular user to deploy — whether of time or money. The vast majority won’t bother, and the ability of network administrators to monitor content and use of the network will be preserved.
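The encryption point can be seen directly with the keyword filter sketched above: once the payload is encrypted, the filter has nothing to match. (The XOR cipher below is only there to produce realistic-looking ciphertext for the illustration.)

    import os

    def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
        # One-time-pad-style XOR: enough to turn the payload into noise.
        return bytes(b ^ k for b, k in zip(plaintext, key))

    message = b"let's discuss badword1 on the call"
    ciphertext = xor_encrypt(message, os.urandom(len(message)))

    print(b"badword1" in message)     # True  -- plaintext is filterable
    print(b"badword1" in ciphertext)  # False (overwhelmingly likely) -- just noise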

Thus, as with changes that increased the ability to identify “who” someone is who is using a network, here too, private interests provide a sufficient incentive to develop technologies that make it increasingly easy to say “what” someone is doing who is using a network. A gap in the knowledge provided by the plain vanilla Internet is thus plugged by these privately developed technologies.
