The Fall of Joi Ito

Back in 2003 I got into a blog tussle with Joi Ito, the disgraced former director of the MIT Media Lab who was forced to resign from the Lab and a number of corporate boards over ethical lapses related to Jeffrey Epstein. I was fairly amazed that Ito was hired by the Media Lab in the first place.

It seems like a job for a futurist, a technologist, or an intellectual, and Ito is none of those things. But he is well-connected, which is great for fundraising if nothing else.

Emergent Democracy

I’m not going to rehash the issues at MIT because they’ve been well covered by Ronan Farrow, Andrew Orlowski, and Evgeny Morozov. I’d like to share a post I wrote about Ito’s ideas about something he called “emergent democracy”, my reaction to them, and Ito’s reaction to my commentary. This is about schadenfreude, in other words.

Ito was one of the first to jump aboard the blog train in the days when we still called blogs “weblogs”. He tried to put together an essay mashing up the ideas Steven Johnson laid out in his 2001 book Emergence: The Connected Lives of Ants, Brains, Cities, and Software with Howard Rheingold’s 1993 musings about the Internet in The Virtual Community.

In essence, Ito claimed that the Internet could, given the creation of new tools, revolutionize the ways societies govern themselves. Instead of the musty old top-down, command-and-control model of representative democracy, the Internet could expand the circle of participation in governmental decision-making and usher in a new era of direct democracy.

Since [1993, Rheingold] has been criticized as being naive about his views. This is because the tools and protocols of the Internet have not yet evolved enough to allow the emergence of Internet democracy to create a higher-level order. As these tools evolve we are on the verge of an awakening of the Internet. This awakening will facilitate a political model enabled by technology to support those basic attributes of democracy which have eroded as power has become concentrated within corporations and governments. It is possible that new technologies may enable a higher-level order, which in turn will enable a form of emergent democracy able to manage complex issues and support, change or replace our current representative democracy. It is also possible that new technologies will empower terrorists or totalitarian regimes. These tools will have the ability to either enhance or deteriorate democracy and we must do what is possible to influence the development of the tools for better democracy.

Emergent Democracy, Joi Ito, 2003.

Mixed Results

To Ito’s and Rheingold’s credit, they didn’t see a future that was all peaches and cream. But it was fairly obvious even in those days that the popularization of the Internet was going to bring forth both good and bad results.

We can learn obscure subjects quickly, we can shop at the world’s largest store without leaving our desks, and we can learn how to fix things. But we also have Trump in the White House and a networked terror cult known as ISIS tearing it up in the Middle East.

It wasn’t either/or, it was both/and: new conveniences and new threats at the same time. Ito didn’t anticipate this, but it was always the most likely future. He also found himself unable to complete the essay, so he turned it over to one of his Wellbert friends, Jon Lebkowsky, to finish.

My Criticism

I addressed an early draft of Emergent Democracy in this post, Emergence Fantasies. It appeared to me that Ito was effectively touting a form of government like the California initiative process that would be informed by blog posts and effectively controlled by a blogger elite. The elite bar was pretty low among the blogs in 2003, so this didn’t look like progress to me.

The larger problem was the essential incoherence of Ito’s reasoning. Well-connected as he is socially, Ito is no intellectual. He also lacks a reasonable understanding of the ways legislative bodies work, at least according to my frame of reference as someone who’s been working with them for twenty years or so.

The emergence thing is also suspect. At the time, it was a fixation among the crowd that thinks of Jared Diamond, Steven Pinker, and Nassim Taleb as great thinkers, but it’s little more than trivia about the behavior of animal groups. Ant colonies are far from grass-roots democracies in any case, and they’ve fascinated political thinkers for thousands of years. I’d be happy to read a book on the biochemistry of ant colonies, but Emergence is not it.

So I said this:

Emergent democracy apparently differs from representative democracy by virtue of being unmediated, and is claimed by the author to offer superior solutions to complex social problems because governments don’t scale, or something. Emergent democracy belief requires us to abandon notions of intellectual property and corporations, apparently because such old-fashioned constructs would prevent democratic ants from figuring out where to bury their dead partners, I think. One thing that is clear is that weblogs are an essential tool for bringing emergent democracy to its full development, and another is that the cross-blog debate on the liberation of Iraq is a really cool example of the kind of advanced discourse that will solve all these problems we’ve had over the years as soon as we evolve our tools to the ant colony level.

Emergence fantasies, me, 2003.

The conversation continued on Ito’s blog under a post ironically titled Can we control our lust for power? The answer to that question was obviously “no”.

The Black List

Ito was not amused, so he black-listed me:

Mr. Bennett has a very dismissive and insulting way of engaging and is a good example of “noise” when we talk about the “signal to noise ratio”. Adam has recently taken over the fight for me on my blog. My Bennett filter is now officially on so I won’t link to his site or engage directly with the fellow any more. At moments he seems to have a point, but it’s very tiring engaging with him and I would recommend others from wasting as much time as I have.

So that’s Joi Ito for you: a man who loves Jeffrey Epstein so much that he’s willing to lie to his bosses to keep him in the Media Lab social network but can’t take honest criticism. His fall from grace was long overdue, and I’m proud to have such enemies.

How to Make a Cup of Coffee

I recently saw a funny blog post titled a “Tech Guy’s Version of the Perfect Cup of Coffee.” It wasn’t meant to be funny, but it wasn’t very knowing. The author buys Peet’s grocery store coffee beans, uses a mediocre grinder, and pours distilled water (from a pot that keeps it hot all day) over one of the Chemex filter drip pots that science geeks used in the ’70s.

When I want drip coffee, I heat reverse-osmosis-filtered water to the right temperature (201 F) on demand in a digital electric Pino kettle and pour it over a measured dose of grounds in a $15 Clever Coffee Dripper, fitted with a Filtropa filter and sitting on a gram scale. The critical part is the coffee, which I roast myself. Peet’s is an over-roasted blend that’s designed for high profitability; the darker you roast, the less caffeine in the cup, so the more coffees you can sell a given person. That’s nice work if you can get it, but any home roaster can do better with a little experience. Most of the coffee I drink is espresso and cappuccino, however.

It all starts with your coffee beans. Every couple of months, I pick up a batch of green coffee beans from Sweet Maria’s, an Oakland coffee importer run by Tom Owen, a man the New Yorker called a “mad scientist” and a member of the “coffee dream team.” His palate is impeccable, as they say, and green beans are well under half the price of roasted ones.

I roast a batch once a week or so in a Behmor 1600, an economically priced home roaster with a capacity near a pound, about what I drink in a week.
Once you know what you’re doing, it takes 20 minutes to roast a batch, probably less time than it takes to drop by a good coffee shop for some roasted beans. Roasted coffee doesn’t keep more than about ten days unless you freeze it at -10 F (that’s deep-freeze temperature, not the freezer compartment of a fridge), so topping up the inventory is a weekly chore for most people anyway.

I keep track of my inventory and roast history with a program called “Roaster Thing,” developed by a coffee geek named Ira, which supports an optional USB-connected temperature sensor. One day, Roaster Thing will control the temperature in the Behmor and I’ll be able to create custom roast profiles for each batch (heat curves that squeeze the full flavor out of each batch). An alternative is the pricier HotTop, but it’s only a half-pound device and isn’t computer-integrated.
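To make the roast profile idea concrete, here’s a minimal sketch in Python of a heat curve as data: target bean temperatures at points in the roast, interpolated in between. The numbers and the format are my own illustration, not Roaster Thing’s actual file format or a recommended curve.

    # A hypothetical roast profile: (seconds into the roast, target bean temperature in F).
    profile = [(0, 200), (300, 320), (600, 390), (900, 430), (1100, 450)]

    def target_temp(profile, t):
        """Linearly interpolate the target temperature t seconds into the roast."""
        for (t0, f0), (t1, f1) in zip(profile, profile[1:]):
            if t0 <= t <= t1:
                return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
        return profile[-1][1]  # hold the final target after the last point

    print(target_temp(profile, 450))  # 355.0, halfway up the ramp toward first crack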

After the appropriate resting period (three days, usually) I grind in a Baratza Vario that’s programmed to produce about 18 grams at the touch of a button. Ideally, I’d use the new Vario-W, which grinds to a precise weight rather than a time, but mine is close enough and the Vario-W doesn’t grind directly into the espresso basket anyway. When changing roasts, I check the weight with a $25 gram scale from Amazon.
The Baratzas are kind of messy, so I use a little funnel that keeps most of the grinds in my espresso basket.

Then I pull my shots with a Breville BES900XL Dual Boiler espresso machine. The Breville is the most “tech guy” part of the setup because of the way it was specified and designed. Breville is an Australian company that specializes in smart kitchen appliances, which is to say appliances with embedded microprocessors that make routine kitchen tasks like toasting, juicing, and blending more manageable. Australia, surprisingly enough, is a nation that’s perhaps even more obsessed with espresso than Italy is, and Breville and Sunbeam compete fiercely to serve its market for home espresso machines.

Sunbeam raised the bar with a dual-pump, dual-heater machine that permits espresso brewing and milk steaming at the same time, tricky because they require different temperatures. The Sunbeam produces a decent but not exceptional shot in most hands. It wouldn’t work well in the US because it uses thermoblock heaters that require 230-volt electricity for optimal operation.

Breville did Sunbeam one better by hiring a dream team of coffee geeks and haunting the Internet’s coffee-fanatic message forums to find out what the high-end espresso consumer was after. These forums, with names like CoffeeGeek, Home-Barista, and Coffee Snobs, are the places where coffee fanatics exchange tips on everything from beans to hacking their roasters, grinders, and espresso makers for maximum performance.

CoffeeGeek readers figured out how to adapt precise electronic temperature controllers (“proportional integral derivative” controllers or PIDs) to their machines, a practice that produced the first major upgrade to espresso making in 40 years. This is user-driven innovation.

Breville uses two PIDs in the Dual Boiler and borrows techniques from the high-end ($6500) La Marzocco GS/3 espresso maker, such as preheating the brew water with a heat exchange process driven by the higher temperature steam boiler. The Breville also pre-infuses and has the best user interface ever devised for a consumer-grade coffee machine of any type.

If you’re a tech guy and you like coffee, this is your way to a more perfect cup of coffee. The all-in price of a setup like this is close to $2000. That seems like a lot, but consider that your price per double espresso is about 50 cents. Now calculate how long it takes you to pay yourself the two grand back at $2.50 a cup.
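The payback arithmetic is simple; here’s a quick sketch in Python using the rough figures above (50 cents per double at home, $2.50 at a coffee shop):

    setup_cost = 2000.00           # all-in price of the home espresso setup
    home_cost_per_double = 0.50    # rough cost of beans, water, and power per shot
    cafe_price_per_double = 2.50   # what a coffee shop charges for the same drink

    savings_per_cup = cafe_price_per_double - home_cost_per_double
    cups_to_break_even = setup_cost / savings_per_cup
    print(cups_to_break_even)      # 1000.0 cups, a bit under three years at one a day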

And by the way, you’ll be drinking better coffee with as much or as little caffeine as you like; the roasting time and dosing control the caffeine content, you see, and the flavor is whatever you want it to be.

This is another tech guy’s version of the perfect cup of coffee.

Is Blogging Journalism?

I said on Facebook the other day that tech blogging is more like “infotainment” than journalism. The confusion around this subject is behind the reaction to the Michael Arrington VC fund and Arianna Huffington’s desire for editorial control over TechCrunch. Here’s a quote from Robert Scoble that should clarify the issue:

Several years ago Arrington and I were headed to some conference and I asked him about how he sees himself. Did he consider himself a blogger or a journalist, I asked. His answer stuck with me all this time: “I’m an entertainer.”

You can say the same thing about John Dvorak (as I have) and any number of other media figures who blog about technology.

I hope this helps with the general understanding.

Has the FCC Created a Stone Too Heavy for It to Lift?

After five years of bickering, the FCC passed an Open Internet Report & Order on a partisan 3-2 vote this week. The order is meant to guarantee that the Internet of the future will be just as free and open as the Internet of the past. Its success depends on how fast the Commission can transform itself from an old school telecom regulator wired to resist change into an innovation stimulator embracing opportunity. One thing we can be sure about is that the order hasn’t tamped down the hyperbole that’s fueled the fight to control the Internet’s constituent parts for all these years.

Advocates of net neutrality professed deep disappointment that the FCC’s rules weren’t more proscriptive and severe. Free Press called the order “fake net neutrality,” Public Knowledge said it “fell far short,” Media Access Project called it “inadequate and riddled with loopholes,” and New America Foundation accused the FCC of “caving to telecom lobbyists.” These were their official statements to the press; their Tweets were even harsher.

Free marketers were almost as angry: Cato denounced the order as “speech control,” Washington Policy Center said it “fundamentally changes many aspects of the infrastructure of the Internet,” and the Reason Foundation said it will lead to “quagmire after quagmire of technicalities, which as they add up will have a toll on investment, service and development.”

Republican Congressional leaders made no secret of their displeasure with the FCC’s disregard for their will: Rep. Fred Upton (R, Michigan), the incoming Commerce Committee Chairman, called it a “hostile action against innovation that can’t be allowed to stand”; Rep. Greg Walden (R, Oregon), incoming Chairman of the Subcommittee on Communications and Technology, called it a “power grab” and vowed to hold hearings to overturn it; and Sen. Kay Bailey Hutchison (R, Texas), Ranking Member of the Senate Commerce, Science, and Transportation Committee, said the order “threatens the future economic growth of the Internet.” Setting Internet policy is indeed a Congressional prerogative rather than an agency matter, so the longer-term solution must come from the Hill, and sooner would be better than later.

Contrary to this criticism and to snarky blogger claims, not everyone was upset with the FCC’s action, coming as it did after a year-long proceeding on Internet regulation meant to fulfill an Obama campaign pledge to advance net neutrality. The President himself declared the FCC action an important part of his strategy to “advance American innovation, economic growth, and job creation,” and Senator John Kerry (D, Massachusetts) applauded the FCC for reaching consensus.

Technology industry reaction ranged from positive to resigned: Information Technology Industry Council President and CEO Dean Garfield declared the measure “ensures continued innovation and investment in the Internet,” TechNet supported it, and National Cable and Telecommunications Association head Kyle McSlarrow said it could have been much worse. At the Information Technology and Innovation Foundation, we were pleased by the promises of a relatively humble set of rules, less so with the final details; we remain encouraged by the robust process the FCC intends to create for judging complaints, one that puts technical people on the front lines. In the end, the order got the support of the only majority that counts, three FCC commissioners.

Most of us who reacted favorably acknowledged the FCC’s order wasn’t exactly as we would have written it, but accepted it as a pragmatic political compromise that produces more positives than negatives. The hoped-for closing of the raucous debate will have immense benefits on its own, as simply bringing this distracting chapter in the Internet’s story to an end will allow more time for sober discussion about the directions we’d like the Internet to take in its future development. There is no shortage of policy issues that have been cramped by the tendency to view net neutrality as the one great magic wand with the power to solve all the Internet’s problems: The FCC has work to do on freeing up spectrum for mobile networking, the Universal Service Fund needs to be reformed, and the National Broadband Plan needs to be implemented.

If the FCC’s approach proves sound, it might well be exported to other countries, forming the basis of a consistent international approach to the oversight of an international network developed on consistent standards of its own. Such an outcome would have positive consequences for the Internet standards community, which has its own backlog of unfinished business such as scalable routing, congestion management, security, and the domestication of peer-to-peer file sharing and content delivery networks to resolve. This outcome is far from inevitable; last minute rule changes make it less likely than it might have been.

The most important thing the FCC can do in implementing its system of Internet oversight is to elevate process over proscriptive rules. The traditional approach to telecom regulation is to develop a thick sheath of regulations that govern everything from the insignias on the telephone repair person’s uniform to the colors of the insulators on RJ11 cables and apply them in top-down, command-and-control fashion. Many of those on the pro-net neutrality side are steeped in telecom tradition, and they expected such an approach from the FCC for the Internet; theirs are the angry reactions.

But the Internet isn’t a telecom network, and a foot-high stack of regulations certainly would produce the negative consequences for innovation and progress the FCC’s critics have forecast. The appropriate way to address Internet regulation is to follow the model that the Internet has developed for itself, based on a small number of abstract but meaningful principles (each of which is subject to change for good reason) applied by a broad-based community of experts in a collaborative, consultative setting. Internet standards are not devised in an adversarial setting populated by angels and devils locked into mortal combat; they come from a process that values “rough consensus and running code.”

The specifics of the FCC’s order nevertheless give pause to those well-schooled in networking. A few hours before the Commission’s vote, Commissioner Copps persuaded Chairman Genachowski to reverse the Waxman Bill’s presumption regarding the premium transport services that enable Internet TV and video conferencing to enjoy the same level of quality as cable TV. Where the early drafts permitted these services as long as they were offered for sale on a non-discriminatory basis, the final rule arbitrarily presumes them harmful.

The order makes hash of the relationship of the content accelerators provided by Akamai and others to the presumptively impermissible communication accelerators that ISPs might provide one day in order to enable HD group video conferencing and similar emerging applications. The Commission majority fears that allowing network operators to offer premium transport to leading edge apps will put the squeeze on generic transport, but fails to consider that such potential downsides of well-accepted technical practices for Quality of Service can be prevented by applying a simple quota limit on the percentage of a pipe that can be sold as “premium.” This fact, which is obvious to skilled protocol engineers, goes unmentioned in the order.
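To illustrate the overlooked point, here’s a minimal sketch in Python of the kind of quota-based admission check an operator could apply; the 20 percent cap and the interface are illustrative assumptions of mine, not anything the order or any ISP specifies.

    class PremiumQuota:
        """Cap the share of a link that can be sold as premium transport."""

        def __init__(self, link_capacity_mbps, premium_cap_fraction=0.2):
            self.capacity = link_capacity_mbps
            self.cap_fraction = premium_cap_fraction
            self.reserved_mbps = 0.0

        def request(self, mbps):
            # Reject any reservation that would push premium traffic past the quota,
            # so generic best-effort traffic always keeps the rest of the pipe.
            if self.reserved_mbps + mbps > self.capacity * self.cap_fraction:
                return False
            self.reserved_mbps += mbps
            return True

    quota = PremiumQuota(link_capacity_mbps=100)
    print(quota.request(15))   # True: fits within the 20 Mb/s premium allowance
    print(quota.request(10))   # False: would exceed the 20 percent cap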

The poor reasoning for this rule casts doubt on the FCC’s ability to enforce it effectively without outside expertise. By rejecting Internet standards such as RFC 2475 and IEEE standards such as 802.1Q that don’t conform to the telecom activists’ nostalgic, “all packets are equal” vision of the Internet, the FCC chose to blind itself to one of the central points in Tim Wu’s “Network Neutrality, Broadband Discrimination” paper that started the fight: A neutral Internet favors content applications, as a class, over communication applications and is therefore not truly an open network. The only way to make a network neutral among all applications is to differentiate loss and delay among applications; preferably, this is done by user-controlled means. That’s not always possible, so other means are sometimes necessary as well.
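User-controlled differentiation is what the DiffServ machinery of RFC 2475 already supports: the application marks its own packets and the network honors the marking, or doesn’t. Here is a minimal sketch in Python that sets the Expedited Forwarding code point on a UDP socket; IP_TOS is available on Unix-like platforms, the address and port are placeholders, and whether any given network respects the mark is up to its operators.

    import socket

    DSCP_EF = 46 << 2  # Expedited Forwarding code point, shifted into the old TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)  # mark this socket's packets
    sock.sendto(b"latency-sensitive payload", ("198.51.100.7", 5004))  # placeholder endpoint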

All in all, the Commission has built a stone too heavy for it to lift all by itself. The rules have just enough flexibility that the outside technical advisory groups that will examine complaints may be able to correct the order’s errors, but to be effective, the advisors need much deeper technical knowledge than the FCC staffers who wrote the order can provide.

It’s difficult to ask the FCC – an institution with its own 75 year tradition in which it has served as the battleground for bitter disputes between monopolists and public interest warriors – to turn on a dime and embrace a new spirit of collaboration, but without such a far-reaching institutional transformation its Internet regulation project will not be successful. Those of us who work with the FCC are required to take a leap of faith to the effect that the Commission is committed to transforming itself from a hidebound analog regulator into a digital age shepherd of innovation. Now that the Open Internet Report & Order has passed, we have no choice but to put our shoulders to the rock to help push it along. There’s no turning back now.

[cross-posted from the Innovation Policy Blog]

Speaking today in DC

This event will be webcast today:

ITIF: Events

ITIF Event: Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate

Many advocates of strict net neutrality regulation argue that the Internet has always been a “dumb pipe” and that Congress should require that it remains so. A new report by ITIF Research Fellow Richard Bennett reviews the historical development of the Internet architecture and finds that contrary to such claims, an extraordinarily high degree of intelligence is embedded in the network core. Indeed, the fact that the Internet was originally built to serve the needs of the network research community but has grown into a global platform of commerce and communications was only made possible by continuous and innovative Internet engineering. In the new ITIF report “End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate,” Bennett traces the development of the Internet architecture from the CYCLADES network in France to the present, highlighting developments that have implications for Internet policy. This review will help both engineers and policy makers separate the essentials from the incidentals, identify challenges to continued evolution, and develop appropriate policy frameworks.

See you there.

How Markey III Hurts the Internet

Take a look at my analysis of Congressman Markey’s latest foray into Internet management on Internet Evolution. It’s the Big Report that will be up for a week or so. Here’s a teaser:

Reading the latest version of Congressman Ed Markey’s (D-MA) Internet Freedom Preservation Act of 2009 is like going to your high school reunion: It forces you to think about issues that once appeared to be vitally important but which have faded into the background with time.

When the first version of this bill appeared, in 2005, the Internet policy community was abuzz with fears that the telcos were poised to make major changes to the Internet. Former SBC/AT&T chairman Ed Whitacre was complaining about Vonage and Google “using his pipes for free,” and former BellSouth CEO Bill Smith was offering to accelerate Internet services for a fee.

Our friends in the public interest lobby warned us that, without immediate Congressional action, the Internet as we knew it would soon be a thing of the past.

In the intervening years, Congress did exactly nothing to shore up the regulatory system, and the Internet appears to be working as well as it ever has: New services are still coming online, the spam is still flowing, and the denial-of-service attacks are still a regular occurrence.

Enjoy.

Nostalgia Blues

San Jose Mercury News columnist Troy Wolverton engaged in a bit of nostalgia in Friday’s paper. He pines for the Golden Age of dial-up Internet access, when Internet users had a plethora of choices:

A decade ago, when dial-up Internet access was the norm, you could choose from dozens of providers. With so many rivals, you could find Internet access at a reasonable price all by itself, without having to buy a bundle of other services with it.

There was competition because regulators forced the local phone giants to allow such services on their networks. But regulators backed away from open-access rules as the broadband era got under way. While local phone and cable companies could permit other companies to use their networks to offer competing services, regulators didn’t require them to do so and cable providers typically didn’t.

Wolverton’s chief complaint is that the DSL service he buys from Earthlink is slow and unreliable. He acknowledges that he could get cheaper service from AT&T and faster service from Comcast, but doesn’t choose to switch because he doesn’t want to “pay through the nose.”

The trouble with nostalgia is that the past never really was as rosy as we tend to remember it, and the present is rarely as bad as it appears through the lens of imagination. Let’s consider the facts.

Back in the dial-up days, there were no more than three first-class ISPs in the Bay Area: Best Internet, Netcom, and Rahul. They charged $25-30/month, over the $15-20 we also paid for a phone line dedicated to Internet access; we didn’t want our friends to get a busy signal when we were on-line. So we paid roughly $45/month to access the Internet at 40 Kb/s download and 14 Kb/s or so upstream.

Now that the nirvana of dial-up competition (read: several companies selling Twinkies and nobody selling steak) has ended, what can we get for $45/month? One choice in the Bay Area is Comcast, who will gladly provide you with a 15 Mb/s service for a bit less than $45 ($42.95 after the promotion ends), or a 20 Mb/s service for a bit more, $52.95. If this is “paying through the nose,” then what were we doing when we paid the same prices for 400 times less performance back in the Golden Age? And if you don’t want or need this much speed, you can get reasonable DSL-class service from a number of ISPs that’s 40 times faster and roughly half the price of dial-up.
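The back-of-the-envelope comparison, as a quick Python sketch using the figures above (the dial-up price folds in the dedicated phone line):

    dialup_cost, dialup_kbps = 45.00, 40      # ~$45/month for ~40 Kb/s downloads
    cable_cost, cable_kbps = 42.95, 15_000    # Comcast's 15 Mb/s tier after the promotion

    speedup = cable_kbps / dialup_kbps
    dialup_per_mbps = dialup_cost / (dialup_kbps / 1000)
    cable_per_mbps = cable_cost / (cable_kbps / 1000)
    print(f"{speedup:.0f}x faster")  # 375x
    print(f"${dialup_per_mbps:.2f} per Mb/s then vs ${cable_per_mbps:.2f} per Mb/s now")  # $1125.00 vs $2.86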

Wolverton’s column is making the rounds of the Internet mailing lists and blogs where broadband service is discussed, to mixed reviews. Selective memory fails to provide a sound basis for broadband policy, and that’s really all that Wolverton provides.

Are the FCC Workshops Fair?

The FCC has run three days of workshops on the National Broadband Plan now, for the purpose of bringing a diverse set of perspectives on broadband technology and deployment issues to the attention of FCC staff. You can see the workshop agendas here. The collection of speakers is indeed richly varied. As you would expect, the session on eGov featured a number of government people and a larger collection of folks from the non-profit sector, all but one of whom has a distinctly left-of-center orientation. Grass-roots devolution arguments have a leftish and populist flavor, so who better to make the argument than people from left-of-center think tanks?

Similarly, the sessions on technology featured a diverse set of voices, but emphasized speakers with actual technology backgrounds. Despite the technology focus, a good number of non-technologists were included, such as media historian Sascha Meinrath, Dave Burstein, Amazon lobbyist Paul Misener, and veteran telephone regulator Mark Cooper. A number of the technology speakers came from the non-profit or university sector, such as Victor Frost of the National Science Foundation, Henning Schulzrinne of Columbia University and the IETF, and Bill St. Arnaud of Canarie. The ISPs spanned the range from big operators such as Verizon and Comcast down to ISPs with fewer than 2,000 customers.

Given these facts, it’s a bit odd that some of the public interest groups are claiming to have been left out. There aren’t more than a small handful of genuine technologists working for the public interest groups; you can practically count them on one hand without using the thumb, and there’s no question that their point of view was well represented on the first three days of panels. Sascha Meinrath’s comments at the mobile wireless session on European hobbyist networks were quite entertaining, although not particularly serious. Claiming that “hub-and-spoke” networks are less scalable and efficient than wireless meshes is not credible.

The complaint has the feel of “working the refs” in a basketball game, not as much a legitimate complaint as a tactical move to crowd out the technical voices in the panels to come.

I hope the FCC rolls its collective eyes and calls the game as it sees it. Solid policy positions aren’t contradicted by sound technical analysis, they’re reinforced by it. The advocates shouldn’t fear the FCC’s search for good technical data, they should embrace it.

Let a thousand flowers bloom, folks.

Cross-posted at CircleID.

Another Net Neutrality Meltdown

Over the weekend, a swarm of allegations hit the Internet to the effect that AT&T was blocking access to the 4chan web site. This report from Techcrunch was fairly representative:

As if AT&T wasn’t already bad enough. In an act that is sure to spark internet rebellions everywhere, AT&T has apparently declared war on the extremely popular imageboard 4chan.org, blocking some of the site’s most popular message boards, including /r9k/ and the infamous /b/. moot, who started 4chan and continues to run the site, has posted a note to the 4chan status blog indicating that AT&T is in fact filtering/blocking the site for many of its customers (we’re still trying to confirm from AT&T’s side).

4chan, in case you didn’t know, is a picture-sharing site that serves as the on-line home to a lovable band of pranksters who like to launch DOS attacks and other forms of mischief against anyone who peeves them. The infamous “Anonymous” DOS attack on the Scientology cult was organized by 4chan members, which is a feather in their cap from my point of view. So the general reaction to the news that AT&T had black-holed some of 4chan’s servers was essentially “woe is AT&T, they don’t know who they’re messing with.” Poke 4chan, they poke back, and hard.

By Monday afternoon, it was apparent that the story was not all it seemed. The owner of 4chan, a fellow known as “moot,” admitted that AT&T had good reason to take action against 4chan, which was actually launching what amounted to a DOS attack against some AT&T customers without realizing it:

For the past three weeks, 4chan has been under a constant DDoS attack. We were able to filter this specific type of attack in a fashion that was more or less transparent to the end user.

Unfortunately, as an unintended consequence of the method used, some Internet users received errant traffic from one of our network switches. A handful happened to be AT&T customers.

In response, AT&T filtered all traffic to and from our img.4chan.org IPs (which serve /b/ & /r9k/) for their entire network, instead of only the affected customers. AT&T did not contact us prior to implementing the block.

moot didn’t apologize in so many words, but he did more or less admit his site was misbehaving while still calling the AT&T action “a poorly executed, disproportionate response” and suggesting that it was a “blessing in disguise” because it renewed interest in net neutrality and net censorship. Of course, these subjects aren’t far from the radar given the renewed war over Internet regulation sparked by the comments on the FCC’s National Broadband Plan, but thanks for playing.

The 4chan situation joins a growing list of faux net neutrality crises that have turned out to be nothing when investigated for a few minutes:

* Tom Foremski claimed that Cox Cable blocked access to Craig’s List on June 6th, 2006, but it turned out to be a strange interaction between a personal firewall and Craig’s List’s odd TCP settings. Craig’s List ultimately changed their setup, and the software vendor changed theirs as well. Both parties had the power to fix the problem all along.

* Researchers at the U. of Colorado, Boulder claimed on April 9, 2008, that Comcast was blocking their Internet access when in fact it was their own local NAT that was blocking a stream that looked like a DOS attack. These are people who really should know better.

The tendency to scream “censorship” first and ask questions later doesn’t do anyone any good, so before the next storm of protest arises over a network management problem, let’s get the facts straight. There will be web accounts of AT&T “censoring” 4chan for months and years to come, because these rumors never get corrected on the Internet. As long as Google indexes by popularity, and the complaints are more widespread than the corrections, the complaints will remain the “real story.” I’d like to see some blog posts titled “I really screwed this story up,” but that’s not going to happen – all we’re going to see are some ambiguous updates buried at the end of the misleading stories.

UPDATE: It’s worth noting that AT&T wasn’t the only ISP or carrier to block 4chan’s aggressive switch on Sunday. Another network engineer who found it wise to block the site until it had corrected its DDOS counter-attack posted this to the NANOG list:

Date: Sun, Jul 26, 2009 at 11:05 PM
Subject: Re: AT&T. Layer 6-8 needed.

There has been alot of customers on our network who were complaining about ACK scan reports coming from 207.126.64.181. We had no choice but to block that single IP until the attacks let up. It was a decision I made with the gentleman that owns the colo facility currently hosts 4chan. There was no other way around it. I’m sure AT&T is probably blocking it for the same reason. 4chan has been under attack for over 3 weeks, the attacks filling up an entire GigE. If you want to blame anyone, blame the script kiddies who pull this kind of stunt.

Regards,
Shon Elliott
Senior Network Engineer
unWired Broadband, Inc.

Despite the abundance of good reasons for shutting off access to a domain with a misbehaving switch, AT&T continues to face criticism for the action, some of it quite strange. David Reed, a highly vocal net neutrality advocate, went black-helicopters on the story:

I’d be interested in how AT&T managed to block *only* certain parts of 4chan’s web content. Since DNS routing does not depend on the characters after the “/” in a URL in *any* way, the site’s mention that AT&T was blocking only certain sub-“directories” of 4chan’s content suggests that the blocking involved *reading content of end-to-end communications”.

If AT&T admits it was doing this, they should supply to the rest of the world a description of the technology that they were using to focus their blocking. Since AT&T has deployed content-scanning-and-recording boxes for the NSA in its US-based switching fabric, perhaps that is how they do it. However, even if you believe that is legitimate for the US Gov’t to do, the applicability of similar technology to commercial traffic blocking is not clearly in the domain of acceptable Internet traffic management.

What happened, of course, was that a single IP address inside 4chan’s network was blocked. This IP address – 207.126.64.181 – hosts the /b/ and /r9k/ discussion and upload boards at 4chan, and DNS has nothing to do with it. Reed is one of the characters who complains about network management practices before all the relevant bodies, but one wonders if he actually understands how IP traffic is routed on the modern Internet.
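For what it’s worth, the distinction Reed finds mysterious lives entirely in the URL path, which routers never see; both boards sit behind the same host, so null-routing that one address catches both without reading a byte of content. A trivial Python illustration, using the IP from the NANOG post above:

    from urllib.parse import urlparse

    BLOCKED_IP = "207.126.64.181"  # the img.4chan.org address cited in the NANOG post

    for url in ("http://img.4chan.org/b/", "http://img.4chan.org/r9k/"):
        parts = urlparse(url)
        # A router dropping packets sees only the destination IP; the "/b/" or "/r9k/"
        # path exists only inside the HTTP request, above the routing layer.
        print(parts.hostname, parts.path, "-> blocked at", BLOCKED_IP)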

And as I predicted, new blog posts are still going up claiming that AT&T is censoring 4chan. Click through to Technorati to see some of them.

Is Broadband a Civil Right?

Sometimes you have to wonder if people appreciate the significance of what they’re saying. On Huffington Post this morning, I found an account of a panel at the Personal Democracy Forum gathering on the question of who controls the Internet’s optical core. The writer, Steve Rosenbaum, declares that Broadband is a Civil Right:

If the internet is the backbone of free speech and participation, how can it be owned by corporate interests whose primary concern isn’t freedom or self expression or political dissent? Doesn’t it have to be free?

OK, that’s a reasonable point to discuss. Unfortunately, the example that’s supposed to back up this argument is the role that broadband networks have played in the Iranian protests. Does anyone see the problem here? Narrow-band SMS on private networks was a big problem for the government of Iran in the recent protests, but broadband not so much because they could control it easily through a small number of filters.

If broadband infrastructure isn’t owned by private companies, it’s owned by governments; the networks are too big to be owned any other way. So in the overall scheme of things, if I have to choose who’s more likely to let me protest the government from among: A) The Government; or B) Anybody Else, my choice is pretty obviously not the government.

Isn’t this obvious?
