The Rise and Fall of Twitter
Apple's MobileMe service has proven to be a horror. It's been a curious thing to watch: Apple makes mistakes all the time, but they are usually small mistakes, easily swept aside in the tide of its own users' enthusiasm. But MobileMe wasn't just a cool service with some flaws: it ripped itself out of Apple's usually fertile loins like some Belial-esque monstrosity and immediately went for the throat of those who had expected to love it most.
The good news is that Steve Jobs is pissed. In an internal email to Apple employees yesterday evening, Jobs admitted that MobileMe was "not up to Apple's standards" and should have been rolled out in small chunks rather than as a "monolithic service."
"It was a mistake to launch MobileMe at the same time as iPhone 3G, iPhone 2.0 software and the App Store," Jobs said. "We all had more than enough to do, and MobileMe could have been delayed without consequence." No kidding.
The bad news: Apple still doesn't think MobileMe is up to snuff. Jobs says that Apple will "press on to make it a service we are all proud of by the end of this year." Pessimistically, that means beleaguered MobileMe users could have four months of teeth gritting ahead of them.
It also looks like the MobileMe team has been reorganized. The group will now report to iTunes honcho Eddy Cue, who also heads up the Apple Store. Cue will report directly to Jobs. There's no word on what happened to MobileMe's previous department head: fired out of a cannon into an industrial shredder is my bet.
Steve Jobs: MobileMe "not up to Apple standards" [Ars Technica]
Reposted from Read Write Web.
The Federal Communications Commission ruled this morning by a 3 to 2 vote that Comcast's arbitrary throttling of customers' use of BitTorrent was illegal. Hours before the ruling, the Electronic Frontier Foundation released software that anyone can use to see if their Internet Service Provider (ISP) is engaging in the same or similar behavior.
BitTorrent accounts for a substantial percentage of traffic on the internet and some people believe it causes unfair slowdowns for web users doing anything else online. Many other people argue that ISPs have an obligation to treat all internet traffic equally regardless of content. This is a key battle in the Network Neutrality debate.
Enforcement Against Comcast
Comcast voluntarily stopped throttling in March, but today's FCC decision is still important because, as FCC Chair Kevin Martin put it, "consumers deserve to know that the commitment is backed up by legal enforcement." Martin, a Republican, is believed by some to be taking an out-of-character populist stance on the matter because he's preparing to run for a position in the US House of Representatives.
EFF Releases "Switzerland"
The Electronic Frontier Foundation today released software called "Switzerland" (as in, the neutral country) that consumers can use to test their networks for ISP interference.
The EFF explains:
"Switzerland is an open source, command-line software tool designed to detect the modification or injection of packets of data by ISPs. Switzerland detects changes made by software tools believed to be in use by ISPs such as Sandvine and AudibleMagic, advertising systems like FairEagle, and various censorship systems. Although currently intended for use by technically sophisticated Internet users, development plans aim to make the tool increasingly easy to use."
This quote from the EFF release puts things into context:
"The sad truth is that the FCC is ill-equipped to detect ISPs interfering with your Internet connection," said Fred von Lohmann, EFF Senior Intellectual Property Attorney. "It's up to concerned Internet users to investigate possible network neutrality violations, and EFF's Switzerland software is designed to help with that effort. Comcast isn't the first, and certainly won't be the last, ISP to meddle surreptitiously with its subscribers' Internet communications for its own benefit."
On the other hand, people downloading long lists of huge media files over common networks could be seen as an onerous drain on the "bandwidth commons." Slowing down an entire neighborhood's web use because you want to get the entire archives of some TV show is arguably pretty anti-social behavior.
We'd love to get our readers' thoughts on these questions - and for those of you able to put Switzerland to use, let us know if your ISP appears to be doing the same kinds of shady things that Comcast was slapped for today. These are going to be very big issues for the near-term future of the web.
Reposted from Read Write Web.
Yesterday I made a post about how the new iPhone application, Loopt, was causing a lot of angst amongst some top bloggers, and people I admire, over the completely idiotic way in which it handles user invites. The main issue dealt with privacy concerns stemming from people getting invites from people they didn't know - people they hadn't given their phone number out to. The invites were sent, unsolicited, via SMS (a big no-no). Loopt has responded on their company blog, first making a small post that seemed to brush off the concerns without addressing the actual question. Later, when the uproar of complaints grew louder & more numerous, they attempted to quell the anger in more depth. iJustine's initial post about the problem has now made Techmeme, which should accelerate awareness. This seems to be working already, as InfoWeek has just written an article chronicling the details of the problem.
A Twitter clone that is open source?
Could it be possible?
Read/Write Web did an article on it.
Marshall Kirkpatrick is there already.
Dave Winer is there too.
A researcher at Trinity College Dublin has software that lets users map the links between Wikipedia pages. His Web site is called “Six Degrees of Wikipedia,” modeled after the trivia game “Six Degrees of Kevin Bacon.” Instead of the degrees being measured by presence in the same film, degrees are determined by articles that link to each other.
For example, how many clicks through Wikipedia does it take to get from “Gatorade” to “Genghis Khan”? Three: Start at “Gatorade,” then click to “Connecticut,” then “June 1,” then “Genghis Khan.”
Stephen Dolan, the researcher who created the software, has also used the code to determine which Wikipedia article is the “center” of Wikipedia—that is, which article is the hub that most other articles must go through in the “Six Degrees” game. Not including the articles that are just lists (e.g., years), the article closest to the center is “United Kingdom,” at an average of 3.67 clicks to any other article. “Billie Jean King” and “United States” follow, with an average of 3.68 clicks and 3.69 clicks, respectively.
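If you're wondering how numbers like that get computed: the clicks between two articles amount to a shortest-path search over the link graph, and the "center" is roughly the article with the lowest average shortest-path distance to everything else. Here's a toy Python sketch of both calculations, using a tiny hand-built link graph rather than Mr. Dolan's code or Wikipedia's real data:

```python
from collections import deque

def clicks_between(links, start, goal):
    """Breadth-first search: fewest link clicks from one article to another."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        article, clicks = queue.popleft()
        if article == goal:
            return clicks
        for neighbor in links.get(article, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, clicks + 1))
    return None  # unreachable

def average_clicks(links, start):
    """Average clicks from one article to every other reachable article;
    the article that minimizes this is the 'center' of the graph."""
    distances = {}
    queue = deque([(start, 0)])
    while queue:
        article, clicks = queue.popleft()
        if article in distances:
            continue
        distances[article] = clicks
        for neighbor in links.get(article, ()):
            queue.append((neighbor, clicks + 1))
    others = [d for a, d in distances.items() if a != start]
    return sum(others) / len(others) if others else float("inf")

# A toy link graph standing in for Wikipedia's article links.
links = {
    "Gatorade": ["Connecticut"],
    "Connecticut": ["June 1"],
    "June 1": ["Genghis Khan"],
    "Genghis Khan": [],
}
print(clicks_between(links, "Gatorade", "Genghis Khan"))  # 3
print(average_clicks(links, "Gatorade"))                  # 2.0
```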
More detailed information can be found on Mr. Dolan’s Web site.
This article originally appeared on Publishing 2.0.
Why is Google making more money every day while newspapers are making less? I’m going to pick on The Washington Post again only because it’s my local paper and this is a local example.
There were severe storms in the Washington area today, and the power went out in our Reston office. I wanted to find some information about the status of power outages to see whether we should go into the office tomorrow. Here’s what I found on the homepage of WashingtonPost.com:
This is the WASHINGTON Post, right? So where’s the news about Washington? We just got pounded by a nasty storm — but it’s not homepage worthy.
Fortunately, although it’s not top of mind for the homepage editors, it is top of mind for readers — I found the article about the storm in the list of most viewed articles in the far corner of the homepage. I went to the article, where I found highly useful information like this:
“We have a ton of trees down, a ton of traffic lights out,” said Loudoun County Sheriff’s Office spokesman Kraig Troxell.
So what’s my next step, when I can’t find what I want on the web? Of course:
Thanks, Google, just what I was looking for:
Wow, I thought — it can’t be that bad, can it? So I went back to the WashingtonPost.com homepage. This time, I clicked on the Metro section in the main navigation. Sure enough, the storm was the lead story.
And there at the top was the link to the same useless article. But then below the photo was this tiny link: Capital Weather Gang Blog: Storm Updates
I clicked on the link, and wow:
Real-time radar, frequent storm warning updates with LINKS, and… a link to that page I had been SEARCHING for on Dominion Power about outages. (Note the link to the useless news story buried at the bottom.)
It was a brilliant web-native news and information effort — BURIED three layers deep, where I couldn’t FIND it.
Is it any wonder why Google makes $20 billion on search?
And what’s the root cause problem? The useless article with no real-time data and no links was written for the PRINT newspaper. And the homepage is edited to match what will be important in the PRINT newspaper. And the navigation assumes I think like I do when I’m reading the PRINT newspaper. Want local news? Go to the metro SECTION.
The Capital Weather Gang blog is a great example of “getting” the web — but then making it impossible to find…
Oh, and if you click on the tiny Weather link on the homepage (which I only noticed on my fourth visit), you get a page that looks like the weather page in, you guessed it, the print newspaper — all STATIC.
Again, it takes another click to get to the dynamic, web-native weather blog.
Yesterday, I saw a ranking of the top 25 “newspaper websites” — and that’s exactly the problem, isn’t it? These are newsPAPER websites, instead of WEBsites.
WashingtonPost.com ranks #5, with this comment:
The figures from the WPO 10-Q indicate that revenue for the company’s online business is relatively small and represents only a modest part of the sales for the newspaper group. That is unfortunate. If any company should be right behind The New York Times in internet revenue it is the Post.
Here’s an idea for newspaper website homepages — just a search box and a list of blogs. Seriously. Instead of putting all the web-native content and publishing in the blog ghetto, like NYTimes.com does, why not make that the WHOLE site? (I mean seriously, having a blog section on the website is like having a section in the paper for 14 column inch stories.)
It’s like newspapers on the web are saying: here’s all the static stuff we produced for the paper — you want all of our dynamic web innovation? Oh, that’s downstairs, in the back room. Knock twice before you enter.
It’s a shame — so much marginalized value.
I bet I could stop going to the New York Times site entirely and just subscribe to all of their blog RSS feeds, and still get all the news, but in a web-native format, with data and LINKS.
Of course, the only way to do that is to click on 50 RSS buttons one at a time. And they only publish partial feeds.
Sigh.
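For what it's worth, stitching all those feeds into a single river is the easy part. Here's a rough sketch using the feedparser library, with placeholder feed URLs standing in for the Times' actual feed list (and partial feeds would, of course, still only hand you summaries):

```python
import time
import feedparser  # pip install feedparser

# Placeholder feed URLs; a real reader would list every blog feed on the site.
FEEDS = [
    "https://example.com/blogs/weather/rss.xml",
    "https://example.com/blogs/politics/rss.xml",
]

def latest_entries(feed_urls, limit=20):
    """Merge several RSS feeds into one reverse-chronological river of news."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append({
                "feed": parsed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published_parsed"),  # struct_time or None
            })
    entries.sort(
        key=lambda e: time.mktime(e["published"]) if e["published"] else 0.0,
        reverse=True,
    )
    return entries[:limit]

for item in latest_entries(FEEDS):
    print(f'{item["feed"]}: {item["title"]} ({item["link"]})')
```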
UPDATE:
Mark Potts had a similar frustration with the storm coverage — and it looks like he never even found the weather blog.
Another big missed opportunity — the Dominion electric site can’t tell me specifically if the power is still out in our office in Reston. But I bet Washington Post readers with offices in that area - or even in our office condo — could help me out, if someone gave them a place to do so. The Post weather blog has a ton of comments, but information is haphazard — how about a structured data form where you can post your power outage status, maybe map it on Google maps?
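Even a minimal data model would do the trick. Something like the sketch below, which is purely hypothetical (the field names aren't anything the Post or Dominion actually offers), is enough structure to tally reports by neighborhood and drop them on a map:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class OutageReport:
    """One reader-submitted power status report."""
    neighborhood: str   # e.g. "Reston Town Center"
    zip_code: str
    has_power: bool
    note: str = ""

def summarize(reports):
    """Tally reports by neighborhood so a map or list can show the hot spots."""
    still_out = Counter(r.neighborhood for r in reports if not r.has_power)
    restored = Counter(r.neighborhood for r in reports if r.has_power)
    return still_out, restored

reports = [
    OutageReport("Reston", "20190", has_power=False, note="whole block dark"),
    OutageReport("Reston", "20190", has_power=False),
    OutageReport("Herndon", "20170", has_power=True, note="came back at 6pm"),
]
still_out, back_on = summarize(reports)
print("Still out:", dict(still_out))
print("Restored:", dict(back_on))
```

Add a latitude and longitude to each report and the whole thing plots straight onto a Google map.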
Lastly, at least Google knows how to make the Post’s weather blog findable:
Courtesy of Mashable:
Have you, like so many other users, experienced problems with Twitter in the last couple of days? Not getting updates from all the people you’re following, but only a handful? The problem seems to lie with Twitter’s cache.
Here’s a quick fix for the problem. Simply find some person you’re not already following, follow them and then unfollow them. Refresh your Twitter page and voila, your Twitter cache should now be restored, and you should be getting updates from everyone.
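If you'd rather not do the follow/unfollow dance by hand, the same trick could in principle be scripted against the Twitter REST API of the day (HTTP Basic Auth, friendships/create and friendships/destroy). Treat this as a loose sketch built on my assumptions about that era's API, endpoint paths and auth included, not an official recipe:

```python
import requests
from requests.auth import HTTPBasicAuth

# Assumed 2008-era Twitter REST API endpoints and HTTP Basic Auth;
# placeholder credentials and username throughout.
API = "http://twitter.com"
AUTH = HTTPBasicAuth("your_username", "your_password")

def poke_timeline_cache(username):
    """Follow and immediately unfollow someone to nudge the stale cache."""
    requests.post(f"{API}/friendships/create/{username}.json", auth=AUTH)
    requests.post(f"{API}/friendships/destroy/{username}.json", auth=AUTH)

poke_timeline_cache("some_user_you_do_not_follow")
# ...then refresh your Twitter page and check whether everyone's updates are back.
```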
Thanks to engtech for the tip on Twitter. Check out his blog, too; it's cool.