I've been chatting with many of my friends and colleagues about an issue that's been bugging me for a while, namely whether academic research has any role to play in the emerging Web 2.0. I've been slowly coming to the conclusion that the answer is not much.
I had a similar discussion with other researchers at HotMobile a few weeks ago. When the web first came out, pretty much every systems researcher ignored it because it was so ugly. The web was not very sophisticated in terms of distributed systems, HTTP lacked elegance, HTML conflated many different ideas, and so on. The web also offered few genuinely new ideas, as evidenced by the fact that Tim Berners-Lee's first paper on the Web was (probably rightfully) rejected from an ACM conference on hypertext.
I'm sure one thing that really irked researchers about the nascent web was that it completely ignored the large body of work in hypertext and distributed systems that had preceded it. Even in 1997, as the web was rapidly expanding and well after Netscape's explosive IPO, INFOCOM (one of the leading conferences on network communication) had only one paper about the Web. By then, however, it was already too late, and the Web had taken on a life of its own.
The main lament here is that, if only we researchers had engaged with the early developers of the Web, we could have avoided many of the problems we face today. I'm not entirely convinced of this, however, since researchers really like to explore the full design space of things and be highly rigorous rather than letting things be ugly and good enough. (But that's another rant for another day.)
(Another fun one I hear often from computer science researchers is, why didn't we invent the web? I've heard this from digital library people, systems people, and HCI people. It's also funny to see how many books and articles from researchers claim that they had anticipated the web... No, you didn't, otherwise you would have gotten it out first. It's like the number of people who claim to have worked on the original Macintosh: for some reason, that number seems to keep increasing with time.)
So the question comes up again: is there any role for the research community in Web 2.0? I'm increasingly thinking that the answer is no, because the cultures, goals, and incentives of these two communities are far too misaligned.
Most of these Web 2.0 web sites come from small startup teams that care about making a successful product that lots of people use. They have the time, money, and resources for engineering that research teams do not. Most Web 2.0 teams also don't care about novelty so much as building the best implementation of something. For example, when some developers felt that del.icio.us sold out by going commercial, they just set up a clone site called del.irio.us (though the site seems to be down now).
There's also no incentive for these sites to do the kind of rigorous evaluations you would see in academic papers, because there's no time, resources, or credit for doing so. Likewise, there's no reason to publish papers at all, since doing so might help your competitors.
There could be papers published about how people use these sites and the communities that develop around them, but that seems less about anything specific to Web 2.0 and more about the general usefulness and utility of the web site itself.
So, in summary, I think that the research community will have little to directly offer to the emerging space of Web 2.0 apps, but may have some things to contribute with respect to evaluating and understanding how people use these kinds of apps in the wild and how to improve the user experience. You know, the stuff that we've already been doing at CHI.
Now, I just have to figure out what I'm going to teach about Web 2.0 in the Software Architectures for User Interfaces (SAUI) class this fall.