Interview with Bill Thompson: The Future

5 Mar

While technology has become an important part of our social, economic, and political life, most analysis of technology remains woefully inadequate – limited to singing paeans to Apple and Google and the occasional rote article about security and privacy. It is to this news market, full of amateur opining, that Mr. Bill Thompson brings his considerable intellect and analytical skills every week in his technology column for the BBC.

To those unfamiliar with his articles, Mr. Bill Thompson is a respected technology guru and a distinguished commentator on technology and copyright issues for the BBC. Mr. Thompson’s calm, measured erudition about technology comes from his extensive experience in the IT industry in varying capacities – and from a childhood without computers. “I was born in 1960. So I grew up before there were computers around. Indeed, I never touched one at school.” It was not until his third year at Cambridge University, when he was running experiments in psychology, that he first touched a computer. He says that in many ways those first experiences formed his mindset about computers. And that view – computers are there to perform a useful function – has stayed with him for over 25 years.

Mr. Thompson went on to earn a Master’s-level diploma in Computer Science from Cambridge University in 1983. After graduating, he joined a small computer firm, then left to join Acorn Computers Limited, creators of the successful BBC Micro, as a database consultant. He left Acorn because “they wanted to promote me” and joined Instruction Set as a courseware developer. After a stint at PIPEX, he found himself running the Guardian’s New Media division a decade or so ago, when the Internet was still in its infancy. After a few years managing the Guardian’s online site, Mr. Thompson left to pursue writing and commentating full time. It is in writing and providing astute analysis of technology-related issues that Mr. Thompson finds himself today.

I interviewed Mr. Thompson via Skype about a month ago. The interview covered a wide range of issues, so I have chosen to present an edited transcript (edited for both style and content) rather than a thematic essay.

The technology opinion marketplace seems to be split between technology evangelists and Luddites. Your writing, on the other hand, manifests a broad range of experience; it reflects moderated enthusiasm about what computers can do. I find it an astute and yet optimistic account.

I am fundamentally optimistic about the possibilities of this technology that we have invented to both make the world a better place and to help us recover from some of the mistakes of the past and make better decisions as a species, not just as a society, in the future. It informs my writing. It informs as well the things that I am interested in and the areas that I want to explore.

Our relationship with machines was once fraught with incomprehension and fear. Machines epitomized the large mechanized state and its dominance over the natural world. There was a spate of movies in the 1970s in which refrigerators and microwaves rose up to attack us. Over the past decade or so, our relationship has transformed to such a degree that not only do we rely on fairly sophisticated machines for our daily chores, but we also look to machines as a way to achieve utopian ideals. Fred Turner, professor of communication at Stanford, traces this rise of digital utopianism to the American counterculture in “From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism”. How do you think the relationship evolved?

The way you phrase the question leads me to think that perhaps it was the exaggerated claims of the Artificial Intelligence community that led people to worry that computers would reach the point at which they would take over. And the complete failure of AI to deliver on any of its promises has led us to a more phlegmatic and accepting attitude: these are just machines; we don’t know how to make them clever enough to threaten us, and therefore we can just get on with using them.

We now know that Skynet is not going to launch nuclear weapons at us in a Terminator world, and so we can focus on the essential humanity of the Terminator itself – certainly in the second and third movies – as a source of redemption. We can actually feel positive about the machines instead of negative about them.

When you have a computer around that crashes constantly, that is infected with viruses and malware, that doesn’t do what it is supposed to do, and so on, you are not afraid of it. You are irritated by it. You treat it as you would a recalcitrant child – something you might love and care for, something that has value, but certainly not something that is going to threaten you. And then we can use the machines. That allows us to focus on what you call the utopian or altruistic aspects. It allows us to see machines in a much broader context, one that recognizes the human agency behind them.

The dystopian stories rely on machines getting out of control but in fact, we live in a world in which the machines are being used negatively by people, by governments, by corporations, and by individuals. The failure to have AI allows us to accept that – to reject the systems they have built without rejecting the machines themselves.

And for those who actually believe that information and communication technologies are quite positive – (it allows us) to focus on what could be done for good instead of just dismissing all of the technology as being bad. It allows us to take a much more complex and nuanced point of view.

You make an excellent point. I see where you are coming from.

In a sense, it is where I am coming from, which is that I am a liberal humanist atheist. I believe we make this world, that we have the potential to make it better, and that the technologies we invent should be part of that process.

Just as I am politically socialist, I believe in equality of opportunity and social justice and all those things [similarly] I have a humanist approach to technology which is that what we have made we can make ‘do good’ for us.

Google News: Positives, Negatives, and the Rest

16 Nov

Google News is the sixth most visited news site, according to Alexa Web Traffic Rankings. Given its popularity, it deserves closer attention.

What is Google News? It is a news aggregation service that scours around ten thousand news sources, categorizes the articles, and ranks them. What sets Google News apart is that it is not monetized: it doesn’t feature ads, nor does it have deals with publishers. The other distinguishing feature is that it is run by software engineers rather than journalists.

Criticisms

1. Copyright: Some argue that the service infringes on copyrights.

2. Lost Revenue: Some argue that the service causes news sources to lose revenue.

3. Popular is not the same as important or diverse: Google News highlights popular stories and sources, and in doing so it likely exacerbates the already large gap between popular news stories and viewpoints and the rest. This criticism doesn’t ring true. Google News merely mimics the informational and economic topography of the real world, which extends into the virtual world: better-funded sites tend to be more popular, and firms more successful in the real world may have better-produced sites and hence attract more traffic. It does, however, raise the question of whether Google can do better than merely mimic the topography of the world. There are, of course, multiple problems with any such venture, especially for Google, whose search algorithm is built around measuring the popularity and authority of sites. The key problem is that news itself is little more than a popularity contest shepherded by ratings-driven (a euphemism for financially driven) news media. A look at the New York Times homepage, with its extensive selection of lifestyle articles, gives one an idea of the depth of the problem. So if Google were to venture out and produce a list of stories sorted by relevance to, say, policy – not that any such thing can easily be done – there is a good chance that an average user would find the articles irrelevant. A user-determined topical selection of stories, on the other hand, would probably be very useful. While numerous social scientists have warned against the latter approach, arguing that it may lead to further atomization and a decline in sociotropism, I believe their appeals are disingenuous, given that specialized interest in narrowly defined topics and interest in global news can flower together.

4. Transparency: Google News is not particularly transparent in the way it functions. Yet given the often abstruse and economically constrained processes that determine the content of newspapers, I don’t see why Google News’s process is any less transparent. I believe the objection primarily stems from people’s discomfort with automated processes determining the order and selection of news items. That the processes are automated does not mean they ignore the criteria commonly used by editors across newsrooms. More importantly, Google News works off the editorial decisions made by organizations across the board, for it uses details like the placement and section of an article within a news site as pointers to its relative importance. We may also want to address the question of accountability as it pertains to the veracity of news items. Because Google News provides a variety of news sources, it automatically gives users a way to check for inconsistencies within and between articles. In addition, Google News relies on the fact that, in this day and age, some blogger will post an erratum to one of its more than ten thousand source sites, which may in turn be featured within Google News.

Positives

Google News gives people the ability to mine a gargantuan number of news sources, come up with a list of news stories on the same “topic” (or event), and search for a particular topic quickly. One can envision that both the user looking for a diversity of news sources and the user looking for quick information on a particular topic could be interested in other related information. More substantively, Google News may want to collate information from its web, video, and image searches, along with links to key organizations mentioned in the articles, and put them right next to the link to the story. For example, the BBC offers a related link to India’s country profile next to a story on India. Another way Google News can add value for its users is by leveraging the statistics it compiles on when and where news stories were published – stories published in the last 24 or 48 hours, etc. I would love to see a feature called the “state of news” that shows statistical trends in which news items are getting coverage, patterns of coverage, and so on (this endeavor would be similar to Google Trends).
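
The “state of news” idea above amounts to simple counting over article metadata. The sketch below is a hypothetical illustration in Python – the article records, topic labels, and dates are invented, and a real aggregator would derive them from its crawl metadata:

```python
from collections import Counter
from datetime import date

# Invented article records as (topic, publication_date) pairs;
# a real system would pull these from the aggregator's index.
articles = [
    ("elections", date(2006, 11, 14)),
    ("elections", date(2006, 11, 15)),
    ("climate",   date(2006, 11, 15)),
    ("elections", date(2006, 11, 15)),
]

def state_of_news(articles):
    """Count stories per (topic, day) and per topic overall."""
    per_day = Counter((topic, day) for topic, day in articles)
    totals = Counter(topic for topic, _ in articles)
    return per_day, totals

per_day, totals = state_of_news(articles)
print(totals.most_common())  # which topics dominate coverage
```

Trends would then fall out of comparing these counts across time windows, much as Google Trends does for queries.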

Diversity of News Stories

What do we mean by diversity, and what kind of diversity would users find most useful? Diversity can mean diverse locations (publishers or datelines), viewpoints (for or against an issue), depth (a quick summary or a large tome), media (video, text, or audio), or types of news (reporting versus analysis). Of course, Google can address all of these concerns by setting up parallel mechanisms for each measure it deems important. For example, a map/Google News “mashup” could prove useful in highlighting where news is currently coming from. Conceptual diversity is possibly the hardest to implement. There can be a multitude of angles to a story – not just binary for-and-against positions – and facets can quickly become unruly, indefensible, and unusable. For example, if Google News splits stories based on news sources (say, liberal or conservative), people will argue over whether the right categorizations were chosen, or even over the labels themselves (social conservatives versus fiscal conservatives, for example). The same goes for splitting by organizations cited: there is a good chance that an article using statistics from the Heritage Foundation leans conservative, but that is hardly a rule. Still, I feel these measures can prove helpful in at least mining for a diversity of articles on the same topic. One of the challenges of categorization is coming up with “natural” categories – categorizations that are “intuitive” for people. Given the conceptual diversity and the related abstruseness, Google may want to refrain from offering these as clickable categories, though it may still want to use the categorization technique to display “diverse” stories. Similarly, more complex statistical measures can prove useful for subcategorization – for example, surfacing the most common phrases or keywords, or even Amazon-like statistics on reading difficulty.
Google News may also just want to list the organizations cited in the news article and leave the decision of categorization to users.
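
Mining for the most common keywords, as suggested above, can be approximated with simple term counting. This is only a sketch – the stopword list is a toy, and a real system would use phrase extraction and a far larger vocabulary – but it shows the idea:

```python
import re
from collections import Counter

# A toy stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "that", "is", "for", "on"}

def top_keywords(text, n=5):
    """Return the n most frequent non-stopword terms in an article."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(n)

sample = "The senate voted on the senate budget and the budget passed the senate"
print(top_keywords(sample))  # 'senate' and 'budget' rise to the top
```

Such keyword profiles could then group or differentiate articles on the same topic without ever exposing contentious category labels to users.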

Beyond Non-Profit
Google News’ current “philanthropic” model (people may argue otherwise, viewing it as a publicity stunt) is fundamentally flawed, for it may restrict the money the service needs to innovate and grow. Hence, it is important that Google explore possible monetization opportunities. There are two ways to monetize Google News: developing a portal (like Yahoo!) and developing tools or services that it can charge for. While Google is already forging ahead with its portal model, it has yet to make appreciable progress in offering widely incorporable tools for its Google News service. There is a strong probability that news organizations would be interested in buying a product that displays “related news items” next to news articles. This is something that Technorati already does for blogs, but there is ample room both for additional players and for improving the quality of the content. It would be interesting to see a product that displays Google News results along with Google image, blog, and video search results.

Comments Please! The Future Of Blog Comments

11 Nov

The comments sections of blogging sites often suffer from a multiplicity of problems: they are overrun by spam or by repeated entries of the same or similar point, they continue endlessly, and they are generally overcrowded with grammatical and spelling mistakes. Comments sections that were once seen as an unmitigated good are now seen as irrelevant at best and a substantial distraction at worst. Here, I discuss a few ways we can re-engineer commenting systems to mitigate some of the problems in the extant models, and possibly add value to them.

Comments are generally displayed in a chronological or reverse chronological order, which implies that, firstly, the comments are not arranged in any particular order of relevance and, secondly, that users just need to repost their comments to position them in the most favorable spot – the top or the bottom of the comment heap.

One way to “fix” this problem is with a user-based rating system for comments. A variety of sites have implemented this feature with varying levels of success. The downside of a rating system is that people don’t have to explain their vote for, or against, a comment (a point made by Phillip Winn), which occasionally leads to rating “spam”. The BBC circumvents this problem on its news forums by allowing users to browse comments either in chronological order or in order of readers’ recommendations.
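
The BBC-style toggle between chronological order and recommendation order is easy to sketch. The comment fields used here (`posted`, `recommendations`) are illustrative, not any site’s actual schema:

```python
def order_comments(comments, by="time"):
    """Order comments chronologically (default) or by reader recommendations.

    Each comment is a dict with 'text', 'posted' (a sortable timestamp),
    and 'recommendations' (a vote count).
    """
    if by == "recommendations":
        return sorted(comments, key=lambda c: c["recommendations"], reverse=True)
    return sorted(comments, key=lambda c: c["posted"])

comments = [
    {"text": "first!", "posted": 1, "recommendations": 0},
    {"text": "a considered reply", "posted": 2, "recommendations": 12},
]
print([c["text"] for c in order_comments(comments, by="recommendations")])
```

Because reposting a comment changes only its timestamp, the recommendation ordering also removes the incentive to repost for position.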

Another way we can make comments more useful is by creating message board like commenting systems that separate comments under mini-sections or “topics”. One can envision topics like “factual problems in XYZ” or “readers suggested additional resources and links” that users can file their comments under. This kind of a system can help in two ways – by collating wisdom (analysis and information) around specific topical issues raised within the article, and by making it easier for users to navigate to the topic, or informational blurb, of their choice. This system can also be alternatively implemented by allowing users to tag portions of the article in place – much like a bibliographic system that adds a hyperlink to relevant portions of the story in comments.

The above two approaches deal with ordering comments but do nothing to address the problem of short, irrelevant, repetitive comments, often posted by the same user under one or more aliases. One way to address this issue would be to set a minimum word count for comments, encouraging users to put in a more considered response. Obviously, there is a danger of angering a user, leading to a longer, more pointless comment, or to the user simply giving up; on average, though, I believe it will improve the quality of the comments. We may also want to consider algorithms that disallow repeated postings of the same comment by a user.
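
Disallowing repeated postings might work by fingerprinting a normalized form of each comment, so trivial edits (capitalization, punctuation) don’t evade the check. A minimal sketch, with class and field names of my own invention:

```python
import hashlib
import re

class CommentFilter:
    """Reject reposts of the same comment by the same user."""

    def __init__(self):
        self.seen = set()

    def fingerprint(self, text):
        # Lowercase and strip punctuation so near-identical reposts collide.
        normalized = " ".join(re.findall(r"\w+", text.lower()))
        return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

    def allow(self, user, text):
        key = (user, self.fingerprint(text))
        if key in self.seen:
            return False
        self.seen.add(key)
        return True
```

Keying on (user, fingerprint) still lets different users make the same point; catching the multi-alias case would additionally require comparing fingerprints across users.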

The best way to realize the value of comments is to ask somebody – preferably the author of the article – to write a follow-up article that incorporates relevant comments. Ideally, the author will use this opportunity to acknowledge factual errors and analyze points raised in the comments. Hopefully, this follow-up piece will be able to solicit more comments, and the process would repeat again, helping to take discussion and analysis forward.

Another way to go about incorporating comments is to use a wiki-like system of comments to create a “counter article” or critique for each article. In fact, it would be wonderful to see a communally edited opinion piece that grows in stature as multiple views get presented, qualified, and edited. Wikipedia does implement something like this in the realm of information but to bring it to the realm of opinions would be interesting.

One key limitation of most current commenting systems on news and blog sites is that they only allow users to post textual responses. As blog and news publishing increasingly leverages multimedia capabilities of the web, commenting systems would need to be developed that allow users to post their response in any media. This will once again present a challenge in categorizing and analyzing relevant comments but I am sure novel methods, aside from tagging and rating, will eventually be developed to help with the same.

The few ideas mentioned above are meant as a beginning to the discussion on this topic – and yes, comments would be really appreciated!

Remaking Blog Powered Webzines

5 Nov

“Eight percent of internet users, or about 12 million American adults, keep a blog. Thirty-nine percent of internet users, or about 57 million American adults, read blogs,” according to a study (pdf) conducted by Pew Internet & American Life Project.
The astounding number of people who maintain blogs, nearly all of whom have joined the bandwagon in the past couple of years, reflects the fact that blogs have finally delivered on the promise of the Internet – they have given the average user the ability to self-publish. Of course, the self-publishing revolution has not been limited to blogs; it extends to sites like myspace.com, flickr.com, and youtube.com that have lowered the bar to “publishing” content to the world.

From blogs to better content

The popularity of blogs has led to the creation of an enormous amount of content. This, in turn, has spawned a cottage industry devoted to finding ways to sift through that content, which has led to the evolution of tagging, “digging”, RSS, blog aggregators, and edited multiple-contributor sites like huffingtonpost.com and blogcritics.org. It is the last of these innovations I am particularly excited about, for it has the capability of creating robust, well-written independent online magazines. For these sites to compete with ‘establishment magazines’ like Newsweek, they need to rethink their business and creative plans. Most importantly, they need to focus on the following issues –

  1. Redesign and repackage. For sites like blogcritics.org to move to the next level, they need to pay more attention to web design and the packaging of their stories. To accomplish this, they may want to assign “producers” for the home page and beyond. Producers would be in charge of creating a layout for the home page and choosing the news stories and multimedia elements displayed there. By devoting more resources to design and slotting in multimedia elements, these sites can improve the user experience.

    There are twin problems with implementing this feature – labor and graphics. Blogcritics.org portrays itself as a destination for top writers and hence fails to attract talent in other areas critical to developing an online news business, including web and multimedia design and development.

    Blogcritics.org and other sites similar to it should try to reach out to other segments of professionals (like graphic designers, photo editors) needed to produce a quality magazine. They may also want to invest programming resources in creating templates to display photo galleries and other multimedia features. In addition, these sites may want to tap into the user base of sites like Flickr and Youtube so as to expand the magazine to newer vistas like news delivered via audio or video.

  2. Most-read/most-emailed story lists and relevant stories – Provide features that make readers stay longer on the site, including a list of the most read or most emailed stories. Another useful feature is a list of other relevant stories from the archives, with links to general information sites like Wikipedia. This adds value to the user experience by providing access to more in-depth information on the topic.
  3. Comments – Comments are integral to sites like blogcritics.org, but they have not been implemented well. Comments sections tend to be overrun by repeated entries, pointless entries, grammatical and spelling errors, and spam, and they run far too long. To solve this, these sites should create a comment rating mechanism and think about assigning a writer to incorporate all the relevant points from the comments into a follow-up post. A Gmail-like innovation that groups comments into discussions on a topic could also come in handy.
  4. Most successful webzines have focused on a particular sector or product – for example, webzines devoted to Apple computers. The market for news, ideas, and reviews is much more challenging, and the recent move by Gannett to use blog content will make it much harder to retain quality content producers. Hence, one must start investigating revenue-sharing mechanisms for writers and producers that tie their earnings to the number of hits their articles get.
  5. Start deliberating on an ethics policy for reviewing items, including guidelines on conflicts of interest, the use of copyrighted content, plagiarism, etc.; publish those guidelines; and set up appropriate redress mechanisms for settling disputes.
  6. Create technical solutions for hosting other media including audio, images, and video.
  7. Come up with a clear privacy and copyright policy for writers, users who comment, and content buyers. In a related point, as the organization grows, it will become important to keep content producers and other affiliates informed of any deals the publishers negotiate.
  8. Allow a transparent and simple way for writers/editors to issue corrections to published articles.
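
The revenue-sharing idea in point 4 comes down to simple proportional arithmetic. The sketch below assumes a fixed fraction of revenue is set aside as a writers’ pool – the pool fraction and hit counts are purely illustrative:

```python
def revenue_shares(hits_by_writer, revenue, writer_pool=0.5):
    """Split a fraction of site revenue among writers in proportion to hits.

    hits_by_writer: dict mapping writer name -> article hit count.
    revenue: total revenue for the period.
    writer_pool: fraction of revenue earmarked for writers (assumed here).
    """
    total_hits = sum(hits_by_writer.values())
    pool = revenue * writer_pool
    if total_hits == 0:
        return {w: 0.0 for w in hits_by_writer}
    return {w: pool * h / total_hits for w, h in hits_by_writer.items()}

print(revenue_shares({"ann": 300, "bob": 100}, 1000.0))
```

A real scheme would likely weight hits by ad rates per page and pay out on a schedule, but the proportional core would look much like this.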

From Satellites to Streets

11 Jul

There has been tremendous growth in satellite-guided navigation systems and in secondary applications relying on GIS, like finding shops near where you are. However, it remains opaque to me why we are using satellites to beam this information when we can embed RFID or similar chips in road signs for pennies. Road signs need to move from the era of ‘dumb’ painted boards to the era of electronic tags, where signs beam out information on a set frequency (or answer when queried) to which a variety of devices may be tuned.

Indeed, it would be wonderful to have “rich” devices, connected to the Internet, where people can leave their comments, just like message boards or blogs. This would remove the need for expensive satellite signal reception boxes and the cost of maintaining satellites. The concept is not limited to road signs; it can include everything from shops to homes to chips that tell a car where the curbs are so that it stays in its lane.
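
A sign’s broadcast could be as simple as a small structured payload that any tuned-in device can decode. The sketch below uses JSON and invented field names purely for illustration – no real broadcast standard is assumed:

```python
import json

def make_beacon(sign_id, kind, text, lat, lon):
    """Build the payload a 'smart' road sign might broadcast on a set
    frequency, or return when queried."""
    return json.dumps({
        "id": sign_id,
        "kind": kind,          # e.g. "speed-limit", "shop", "curb"
        "text": text,          # what the painted board would have said
        "position": {"lat": lat, "lon": lon},
    })

def parse_beacon(payload):
    """Decode a beacon payload on the receiving device."""
    return json.loads(payload)

msg = make_beacon("A14-0042", "speed-limit", "60 mph", 52.21, 0.12)
print(parse_beacon(msg)["text"])
```

A navigation device would then simply filter nearby beacons by `kind`, with no satellite downlink required for the sign data itself.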

Possibilities are endless. And we must start now.

End of Information Hierarchy

11 Nov

Today, people have a variety of ways to explore a collection via the Internet, as opposed to the carefully orchestrated explorations of a curated exhibition in a brick-and-mortar museum.

A curator comes up with a story, along with other contextual information about the exhibit, and arranges the exhibition so that the person exploring it has only a few chosen entry points and a few ways of exploring the collection. Some of these impediments are put in deliberately, while others result from hosting an exhibition in the real world, where the design of the building, etc., still matters.

Cut to the online world, and the user is untethered from most of these curated contrivances. This may be because people haven’t really figured out how best to present a virtual museum, but that is not the point I want to get into. The result of the untethered experience is that these cultural objects are seen in a twice-removed setting – e.g., a pot taken from an archaeological site, then photographed and put on the Internet. So what is the result of all this? It is hard to give an objective listing, but one can see that some of the “meaning” is lost in this journey of an artifact from the ground to the Internet.

What happens when information that was once tethered to a context or a story is made available, virtually free of context, over, say, Google? Is storing information in hierarchical networks or associations obsolete? How do you maintain the integrity of information when context-free snippets are freely available?

Say, for example, that once upon a time people learned history from a scholar who carefully chose which issues to present. Today, a teen gets his or her history by searching the web, often encountering a lot of miscellaneous information. I would argue that such a scattered exploration can leave a person with a bunch of miscellaneous trivia and no real understanding of the major issue at hand. The key idea here is that for the transmission of “knowledge”, the integrity of information is of prime value.