Fiber Internet: Cities are doing it for themselves

We ought to treat the facilities that connect us to the Internet as a public utility. So argued Harold Feld, a lawyer for the advocacy group Public Knowledge, in a speech at the recent Personal Democracy Forum.

But what would that actually look like? Feld’s brief talk left that unspecified – on purpose, according to remarks he made on a mailing list I’m on.

Historically we create public utilities when we think a service is so fundamental to citizens’ participation in their economy, government or culture that we want to make sure all citizens have affordable access to it. Feld argues for treating the Net as a public utility so that the government can ensure that the access providers serve the needs of their communities, not just their own desire to maximize profits.

In the United States, much of the discussion on this subject has turned to the role of municipalities. This is consistent with how we treat provisioning other public services, such as natural gas and electricity to households. In February, the Federal Communications Commission cleared the way for this by preempting state laws that prevented municipalities from competing with private Internet access vendors.

But how would municipalities do this? There are many factors, leading to many permutations. For example:

The best ID is a web of IDs

If you want to link to a book you’ve just read, what do you use? Amazon? Sure, but suppose you don’t feel like giving them the free advertising. Maybe you use Open Library, although their book pages are a little geeky. Maybe you Google for the book and link to the publisher’s page about it. Any of these sources are better than leaving the reference unlinked, but the fact that we’re not sure what to point at is a problem.

It might seem that the solution is to have everyone link books to a single catalog of all existing books, perhaps Open Library or WorldCat.org. But there are good reasons to keep things much messier than that.

To see why, take it out of the realm of books and instead think about people. Let’s say you want to post about someone named Christina Gomez. You probably have a few options for making it clear which Christina Gomez you mean. You might link to her blog, her Twitter handle, her LinkedIn page, the bio of her on her employer’s site, the bio on the site of the choir she sings with, or her police record for the time she shot a man in Reno just to watch him die.

Fortunately, there is a way to stitch all those Christina Gomez links together. In the world of the Semantic Web — the world in which Web pages yield more of their meaning to computers examining them — there’s something called a “SameAs” statement. As the name implies, saying that one link is the SameAs another means that they are both talking about the same thing in the world…the same person, book, place, etc.
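Under the hood, a SameAs link (owl:sameAs in RDF) is just a pair of identifiers asserted to denote the same thing in the world. Software that encounters a set of such pairs can merge them into clusters of co-referring links. Here is a minimal sketch of that merging step using a union-find structure; the URLs are hypothetical stand-ins for the Christina Gomez pages above:

```python
from collections import defaultdict

# Hypothetical SameAs assertions: each pair says two URLs refer to the same person.
SAME_AS = [
    ("https://example.com/blog/cgomez", "https://twitter.example/cgomez"),
    ("https://twitter.example/cgomez", "https://employer.example/bio/gomez"),
    ("https://choir.example/members/christina", "https://example.com/blog/cgomez"),
]

def identity_clusters(pairs):
    """Merge SameAs pairs into clusters of co-referring URLs (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    clusters = defaultdict(set)
    for url in parent:
        clusters[find(url)].add(url)
    return list(clusters.values())

for cluster in identity_clusters(SAME_AS):
    print(sorted(cluster))  # all four URLs land in a single identity cluster
```

Note that SameAs is transitive: even though no single assertion connects the choir page to the employer bio, the chain of pairwise links puts them in the same cluster.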

SameAs statements, which are made visible to computers but hidden from human eyes, look like hacks to get over the unpleasant fact that we don’t all link to the same places. In fact, the world is better off with many ways of linking things. There’s richness in that messiness.

This may seem counter-intuitive. We’ve long assumed that if you want to disambiguate references – “Which Christina Gomez is this talking about?” – it’s best to have a single source that everyone uses, like having a single Social Security number or a single passport for any particular country. (“Wait, which US passport did I use when I left the country?” is a bad thing to mutter to a US Immigration officer.)

But the book Linked Data: Evolving the Web into a Global Data Space, by Tom Heath and Christian Bizer, explains why SameAs is not a bandaid for a suboptimal situation. That “bandaid” actually provides important social functions; the book lists three of them.

The better way to Net Neutrality

Open Access is a section of the Ting blog dedicated to discussions about the open Internet, net neutrality and other important online topics. Just for the record, you can put a check for us in the “for” column.

Let me be clear: I am a firm supporter of Net Neutrality policies that prevent the large access providers from controlling what we see and build on the Net. But I also favor a stronger way of preserving the neutral Net: Structural separation that prevents any business that provides access to the Net from also selling content and services over the Net. That removes the access providers’ financial incentive to give priority to some content producers over others.

But to see why structural separation is important, it helps to understand the Net’s organic neutrality, which Net Neutrality policies aim to preserve.

Organic net neutrality

There are two types of Net Neutrality. Supporters of it (like me) spend most of their time arguing for Artificial Net Neutrality: a government policy that regulates the few dominant providers of access to the Internet. In fact, we should be spending more of our time reminding people that before Artificial Net Neutrality the Internet came by its neutrality naturally, even organically.

To see the difference, you have to keep in mind (as my friend Doc Searls frequently reminds me) that Net Neutrality refers not only to a policy but to a fundamental characteristic of the Internet. The Internet is an inter-network: local networks agree to pass data (divided into packets) without discriminating among them, so that no matter what participating network you’re plugged into, you can always get and send information anywhere else on the Net. That’s the magic of the Net: It doesn’t care how you’ve plugged in, where you are, or what sort of information you’re looking for. It will all get to you, no matter where it’s coming from, what it’s about, or what type of application created it.

In fact, it’s because the creators of the Internet didn’t try to anticipate what people would use it for that it has become the greatest engine of creativity and wealth in recorded history. For example, if the Internet had been designed primarily for connecting static pages, it would have become less suitable for phone calls or video. If the current Internet access providers decide that videos are their highest priority traffic, then online games might suffer, and it would be harder to establish the next new idea — maybe it’s holograms or some new high-def audio stream or a web of astronomers working on data shared around the world.

In short, we don’t want the businesses that sell us access to the Internet to have the power to decide what gets priority on the Internet…especially since many of them are also in the content business and thus would be tempted to give preference to their own videos and music streams. Artificial Net Neutrality as a policy is intended to preserve the Internet’s non-discriminatory nature by regulating the access providers.

Even the most fervent supporters of Net Neutrality policies usually favor it only because we now have so few access providers (also known as Internet Service Providers, or ISPs). Before a series of decisions by the U.S. Federal Communications Commission beginning in 2002, and a ruling by the Supreme Court in 2005, there were more than 9,000 ISPs in that country. Now the ones that remain are either serving small, often remote, areas or are one of the tiny handful of absolute giants.

When you talk about Net Neutrality with Seth Johnson, a tireless advocate presently working at the international level to defend the Internet, he explains that before 2005, when there was a vibrant, competitive market for ISPs, the Internet was naturally neutral. Back when the Internet was composed of relatively small local networks, if an ISP wanted to promise its subscribers that it would provide a “fast lane” for movies, or games, or singing telegrams, or whatever, it could only provide that favorable discrimination within its own small network. The many other networks those packets passed through wouldn’t know or care about that one network’s preferences. Zipping packets through the last couple of miles to your house would be like speeding up a jet for the last hundred meters of its flight: it wouldn’t make any noticeable difference.

That was then. We need a Net Neutrality policy now because the giant ISPs’ own networks are so extensive that a packet of data may spend most of its time within a single network. That network can institute discriminatory practices that are noticeable. A Net Neutrality policy prevents them from giving in to this commercial temptation.

Many of us Net Neutrality advocates, including Seth and Doc, would far rather see the Internet’s natural infrastructure restored — a big network composed of many smaller networks — which would in turn restore natural Net Neutrality. We lost that infrastructure through a political process. We could get it back the same way, by once again treating the wires and cables through which Internet packets flow as a public resource, open to thousands of competing ISPs, none of which would be able to effectively discriminate among packets.

It’s a shame that we’ve let the market for ISPs become so non-competitive that we have to resort to government policies to preserve the Net’s natural neutrality. As with peaches and whole grains, an organically neutral Internet would be even better for the entire system.
