Idealog: Opening Interfaces to Websites and Software

20 Mar

Imagine the following scenario: You go to NYTimes.com, and are offered a choice between a variety of interfaces, not just skins or font adjustments, built for a variety of purposes by whoever wants to put in the effort. You get to pick the interface that is right for you – carries the stories you like, presented in the fashion you prefer. Wouldn’t that be something?

Currently, we are made to choose between weak customization options and ad hoc interfaces cobbled together with RSS readers. What is missing is a selection of interfaces, open-sourced or internally produced, that caters to the diverse needs and wants of the public.

We need to separate the data from the interface. Applied to the example at hand: the NY Times is in the data business, not the interface business. Like Google Maps, it can make its data available, with some stipulations (including monetization requirements), and let others take charge of creatively thinking up ways that data can be presented. If this seems too revolutionary, more middle-of-the-road options exist. The NY Times can allow people to build interfaces that are then made available on the New York Times site. For monetization, it can reserve areas of screen real estate, or charge users for some ad-free interfaces.
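
To make the data/interface split concrete, here is a minimal Python sketch under stated assumptions: the feed URL, field names, and the two renderers are all hypothetical. The point is only that the publisher exposes structured article data and independent interfaces decide how to present it.

```python
import json
from urllib.request import urlopen

# Hypothetical article feed; a real publisher API would involve keys, terms of use, and monetization rules.
FEED_URL = "https://example.com/api/articles.json"

def fetch_articles(url=FEED_URL):
    """The publisher's side of the split: serve structured article data."""
    with urlopen(url) as response:
        # Assumed shape: a list of {"headline": ..., "section": ..., "summary": ...}
        return json.load(response)

def headline_digest(articles):
    """One possible third-party interface: a terse headline digest."""
    return "\n".join(a["headline"] for a in articles)

def section_view(articles, section="Science"):
    """Another interface: summaries from the single section a reader cares about."""
    picked = [a for a in articles if a.get("section") == section]
    return "\n\n".join(f"{a['headline']}\n{a['summary']}" for a in picked)

if __name__ == "__main__":
    articles = fetch_articles()
    print(headline_digest(articles))
```

Swapping section_view for headline_digest changes the reader's experience without the publisher changing anything on its end.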

This trick can be replicated across websites, and easily extended to software. For example, MS Excel could have a variety of interfaces, all easily searchable, downloadable, and deployable, that cater to the specific needs of, say, chemical engineers, microbiologists, or programmers. The logic remains the same – Microsoft needn't be in the interface business, or, more modestly, needn't control it completely and inefficiently (it does already allow tedious customization), but can be a platform on which people build and share innovative ways to exploit the underlying engine.

An adjacent, broader, and perhaps more useful idea is to come up with a rich interface-development toolkit that provides access to processed open data.

Expanding the Database Yet Further

25 Feb

“College sophomores may not be people,” wrote Carl Hovland, quoting Tolman. Yet research done on them continues to be a significant part of all research in the social sciences. The status quo is a function of cost and effort.

In 2007, Cindy Kam proposed using university staff as a way to move beyond the ‘narrow database.’

There are two other convenient ways of expanding the database yet further: alumni and local community colleges. Using social networks, universities can also tap into the acquaintances of interested students and staff.

A common (university-wide) platform to recruit and manage panels of such respondents would not only yield greater quality control and convenience but also cost savings.

Idealog: Internet Panel + Media Monitoring

4 Jan

Media scholars have long complained about the lack of good measures of media use. Survey self-reports have been shown to be notoriously unreliable, especially for news, where there is significant over-reporting, and without good measures, research lags. The same is true for most research in marketing.

Until recently, the state-of-the-art aggregate media use measures were Nielsen ratings, which put a ‘meter’ in a few households or asked people to keep a diary of what they watched. In short, the aggregate measures were pretty bad as well. Digital media, which allow for effortless tracking, and the rise of Internet polling, however, for the first time provide an opportunity to create ‘panels’ of respondents for whom we have near-perfect measures of media use. The proposal is quite simple: create a hybrid of Nielsen on steroids and YouGov/Polimetrix or Knowledge Networks style recruiting of individuals.

Logistics: Give people free cable and Internet (~$80/month) in return for two hours of their time per month and monitoring of their media consumption. Pay people who already have cable (~$100/month) for installing a device and software. Recording channel information is enough for TV, but the Internet equivalent of a channel—the domain—clearly isn’t, as people can self-select within websites. So we only need to monitor the channel for TV, but more than the domain for the Internet.

While the number of devices on which people browse the Internet and watch TV has multiplied, there generally remains only one ‘pipe’ per house. We can install a monitoring device at the central hub for cable, automatically install software for anyone who connects to the Internet router, or do passive monitoring on the router itself. Monitoring can also be done through applications on mobile devices.
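
As a rough illustration of what router-level passive monitoring might yield, here is a minimal Python sketch that aggregates a hypothetical log of queries (timestamp, device, domain) into per-domain counts for one household. The log format and field names are assumptions; a real deployment would also need finer within-site measures, consent, and the privacy controls discussed below.

```python
import csv
from collections import Counter

def domain_counts(log_path):
    """Aggregate one household's query log into visits per domain.

    Assumes a CSV with columns: timestamp, device_id, domain (a made-up format).
    """
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["domain"]] += 1
    return counts

if __name__ == "__main__":
    # Example: print the top 10 domains for a hypothetical household log file.
    for domain, n in domain_counts("household_123_queries.csv").most_common(10):
        print(f"{domain}\t{n}")
```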

Monetizability: Consumer companies (say, Kellogg’s or Ford), communication researchers, and political hacks (e.g., how many people watched campaign ads) will all pay for it. The crucial (if modest) innovation is the ability to survey people on a broad range of topics, in addition to getting great media use measures.

Addressing privacy concerns:

  1. Limit recording to certain channels/websites, for instance, ones on which customers advertise. This changing list can be made subject to approval by the individual.
  2. Provide a web interface where people can review and suppress data before it is sent out. Of course, reassure respondents that all data are anonymized, to deter such censoring.

Ensuring privacy may lead to some data censoring, and we can try to prorate the data we get in a couple of ways (a weighting sketch follows this list):

  • Survey people on media use
  • Use Television Rating Points (TRP) by sociodemographics to weight data.
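
To illustrate the second option, here is a minimal Python sketch that prorates observed viewing within sociodemographic groups so that group totals match external TRP-style benchmarks. The group labels, benchmark numbers, and data layout are made up; a real implementation would use the panel's actual strata and published targets.

```python
def prorate_by_group(observed_hours, benchmark_hours):
    """Scale each group's observed viewing so group totals match external benchmarks.

    observed_hours: {group: {channel: hours}} measured (and possibly censored) in our panel.
    benchmark_hours: {group: total_hours} from TRP-style sociodemographic breakdowns.
    """
    adjusted = {}
    for group, channels in observed_hours.items():
        total = sum(channels.values())
        factor = benchmark_hours[group] / total if total else 0.0
        adjusted[group] = {ch: h * factor for ch, h in channels.items()}
    return adjusted

# Hypothetical example: the 18-29 group's viewing is under-recorded in our panel.
observed = {"18-29": {"news": 2.0, "sports": 3.0}, "30-49": {"news": 5.0, "sports": 5.0}}
benchmarks = {"18-29": 10.0, "30-49": 10.0}
print(prorate_by_group(observed, benchmarks))
```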

Fomenting Cross-Class Interactions

14 Jan

“A bubble” is often used to describe the shielded seclusion in which students live their lives on the Stanford campus. The word seems appropriate. After all, Stanford has a three-quarter-mile-long boundary that separates it from civilization. And even that ends in a yuppie-favored downtown lined with preppy shops. What’s more? Even that sits next to multi-million-dollar homes. To reach the seething humanity, one has to go further – to the nether regions of Palo Alto, and across into East Palo Alto. It is a task so mythically treacherous that no one volunteers, except when it is time to buy necessities from IKEA.

But even in the famously elitist bubble, there are poor people. Stanford spends about $3.2 billion to educate its roughly 15,000 students. The figure amounts to roughly $213,000 per person. About half of this money is spent on salaries and benefits. The employees range from $10/hr cafeteria workers with no benefits to administrators who earn hundreds of thousands of dollars each year. Whichever way you look at it, there is a substantial breadth of employees with whom students can interact and form partnerships, to help some and learn from others.

The shrinking conversational space

Most transactions involving cross-class interaction are economic: when you buy something or pay for a service, such as gardening or pizza delivery. Most of these interactions have been bureaucratized, with greeting protocols and thank-you protocols, and they leave little space for real human-to-human interaction and the exchange of stories. In India, the middle class often still knows about the lives of their maids and the neighborhood grocer, and that sometimes allows them to form genuine bonds of empathy, which help them look at policy, and their own lives, differently. It is only through knowing the lives of people in other classes that one can build genuine empathy. The damning fact of modern life is that even empathy has been implicated in superficial identity issues, and what empathy remains lies within the contingencies imposed on selling and buying, largely absent of information or care.

Proposal

The idea is to connect students, faculty, and professional staff with workers from lower economic strata: cashiers, janitors, construction workers, drivers, and other people whose services they rely upon every day. To do that, create an umbrella program that helps people form hyper-local chapters, each extending to a single building. For example, computer science students may start a program to teach computer skills to janitors. Social scientists may work with people to improve their literacy skills.

Detailed Proposal:

There are three parts to the proposed program:

  • Create a website that has the following capabilities
    1. Matching students with employees. The website will allow students and employees to enter detailed profiles, and it will let them search for possible “matches” based on their skill sets, the issues they want to work on, and their availability. (A minimal matching sketch follows this list.)
    2. The website will allow for both short- and long-term matching. It will let students sign up for, say, helping an employee with a résumé or a government form, and it will allow for longer-term mentoring or symbiotic matching arrangements.
    3. The website would feature a blog and wiki to advertise successful ventures and collaborative opportunities.
  • Since creating excitement around the program is essential, the program launch will be followed by an advertisement blitz including posters, presentations, and get-to-know sessions.
  • The other crucial part of this venture is ‘hardware’ – be it computers, supplies, or other things needed to make some of this possible. There would be two parts to this. One would be a Craigslist-style central clearinghouse that posts wanted and available ads. The other would be a central fund which students or employees can draw on to make all of this happen.
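
As a rough sketch of the matching feature described in the first item above, here is a minimal Python example that scores student-employee pairs by overlapping skills and interests, gated on shared availability. The profile fields and scoring rule are illustrative assumptions, not a finished design.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    skills: set = field(default_factory=set)        # skills offered or sought
    interests: set = field(default_factory=set)     # issues they want to work on
    availability: set = field(default_factory=set)  # e.g., {"Sat morning", "Mon evening"}

def match_score(student, employee):
    """Hypothetical score: overlapping skills and interests, zero without shared availability."""
    if not (student.availability & employee.availability):
        return 0
    return len(student.skills & employee.skills) + len(student.interests & employee.interests)

def best_matches(students, employees, top_n=3):
    """Return the highest-scoring student-employee pairs for display on the site."""
    pairs = [(match_score(s, e), s.name, e.name) for s in students for e in employees]
    return sorted((p for p in pairs if p[0] > 0), reverse=True)[:top_n]

# Tiny illustrative run with made-up profiles.
students = [Profile("Ana", {"resume help", "basic computing"}, {"literacy"}, {"Sat morning"})]
employees = [Profile("Luis", {"basic computing"}, {"literacy"}, {"Sat morning"})]
print(best_matches(students, employees))
```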

Comments Please! The Future Of Blog Comments

11 Nov

Oftentimes, the comments sections of blogging sites suffer from a multiplicity of problems – they are overrun by spam or by repeated entries of the same or similar point, continue endlessly, and are generally riddled with grammatical and spelling mistakes. Comments sections that were once seen as an unmitigated good are now seen as irrelevant at best, and a substantial distraction at worst. Here, I discuss a few ways we can re-engineer commenting systems to mitigate some of the problems in the extant models, and possibly add value to them.

Comments are generally displayed in a chronological or reverse chronological order, which implies that, firstly, the comments are not arranged in any particular order of relevance and, secondly, that users just need to repost their comments to position them in the most favorable spot – the top or the bottom of the comment heap.

One way to “fix” this problem is a user-based rating system for comments. A variety of sites have implemented this feature with varying levels of success. The downside of a rating system is that people don’t have to explain their vote for, or against, a comment, which occasionally leads to rating “spam”. The BBC circumvents this problem on its news forums by allowing users to browse comments either in chronological order or in order of readers’ recommendations.
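
A minimal sketch of such reader-driven ordering, in Python: each comment carries up/down votes, and the list can be sorted either chronologically or by net recommendation, as on the BBC forums mentioned above. The comment fields and sample data are made up for illustration.

```python
from datetime import datetime

# Hypothetical comment records: text, votes, and posting time.
comments = [
    {"text": "Great piece.", "up": 3, "down": 0, "posted": datetime(2006, 11, 11, 9, 0)},
    {"text": "The third paragraph gets the date wrong.", "up": 12, "down": 1, "posted": datetime(2006, 11, 11, 10, 30)},
    {"text": "First!", "up": 0, "down": 7, "posted": datetime(2006, 11, 11, 8, 55)},
]

def by_time(items, newest_first=False):
    """Plain chronological ordering."""
    return sorted(items, key=lambda c: c["posted"], reverse=newest_first)

def by_recommendation(items):
    """Order by net reader recommendations, newest first among ties."""
    return sorted(items, key=lambda c: (c["up"] - c["down"], c["posted"]), reverse=True)

for c in by_recommendation(comments):
    print(c["up"] - c["down"], c["text"])
```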

Another way we can make comments more useful is by creating message-board-like commenting systems that separate comments under mini-sections or “topics”. One can envision topics like “factual problems in XYZ” or “reader-suggested additional resources and links” that users can file their comments under. This kind of system can help in two ways – by collating wisdom (analysis and information) around specific topical issues raised within the article, and by making it easier for users to navigate to the topic, or informational blurb, of their choice. The system could also be implemented by allowing users to tag portions of the article in place – much like a bibliographic system that adds a hyperlink from relevant portions of the story to comments.

The above two ways deal with ordering the comments but do nothing to address the problem of small, irrelevant, repetitive comments, often posted by the same user under one or multiple aliases. One way to address this issue would be to set a minimum word limit for comments, which would encourage users to put in a more considered response. Obviously, there is a danger of angering the user, leading to him/her adding a longer, more pointless comment or just giving up, but on average I believe it will lead to an improvement in the quality of the comments. We may also want to consider developing algorithms that disallow repeated postings of the same comment by a user.
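
As a toy illustration of the last two points, here is a Python sketch that rejects a new comment if it falls below a minimum word count or is nearly identical to something the same user has already posted. The length threshold and similarity cutoff are arbitrary choices for illustration.

```python
import difflib

MIN_WORDS = 15           # hypothetical minimum word limit
SIMILARITY_CUTOFF = 0.9  # hypothetical threshold for "same comment"

def normalize(text):
    return " ".join(text.lower().split())

def should_reject(new_comment, previous_by_user):
    """Reject comments that are too short or near-duplicates of the user's earlier posts."""
    if len(new_comment.split()) < MIN_WORDS:
        return True
    new_norm = normalize(new_comment)
    for old in previous_by_user:
        ratio = difflib.SequenceMatcher(None, new_norm, normalize(old)).ratio()
        if ratio >= SIMILARITY_CUTOFF:
            return True
    return False

print(should_reject("Nice post!", []))  # True: below the word limit
```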

The best way to realize the value of comments is to ask somebody – preferably the author of the article – to write a follow-up article that incorporates relevant comments. Ideally, the author will use this opportunity to acknowledge factual errors and analyze points raised in the comments. Hopefully, this follow-up piece will solicit more comments, and the process will repeat, helping to take the discussion and analysis forward.

Another way to go about incorporating comments is to use a wiki-like system of comments to create a “counter article” or critique for each article. In fact, it would be wonderful to see a communally edited opinion piece that grows in stature as multiple views get presented, qualified, and edited. Wikipedia does implement something like this in the realm of information but to bring it to the realm of opinions would be interesting.

One key limitation of most current commenting systems on news and blog sites is that they only allow users to post textual responses. As blog and news publishing increasingly leverages the multimedia capabilities of the web, commenting systems will need to be developed that allow users to post their responses in any medium. This will once again present a challenge in categorizing and analyzing relevant comments, but I am sure novel methods, aside from tagging and rating, will eventually be developed to help with this.

The few ideas that I have mentioned above are meant to be seen as a beginning to the discussion on this topic and yes, comments would be really appreciated!

Remaking Blog Powered Webzines

5 Nov

“Eight percent of internet users, or about 12 million American adults, keep a blog. Thirty-nine percent of internet users, or about 57 million American adults, read blogs,” according to a study (pdf) conducted by Pew Internet & American Life Project.
The astounding number of people who maintain blogs, nearly all of whom have joined the bandwagon in the past couple of years, has been driven by the fact that blogs have finally delivered on the promise of the Internet – they have given the average user the ability to self-publish. Of course, the self-publishing revolution has not been limited to blogs but extends to places like myspace.com, flickr.com, and youtube.com, all of which have lowered the bar to “publishing” content to the world.

From blogs to better content

The popularity of blogs has led to the creation of an enormous amount of content. This, in turn, has spawned a cottage industry devoted to finding ways to sift through that content, which has led to the evolution of things like tagging, “digging”, RSS, blog aggregators, and edited, multi-contributor sites like huffingtonpost.com and blogcritics.org. It is this last innovation I am particularly excited about, for it has the capability of creating robust, well-written, independent online magazines. For these sites to be able to compete with ‘establishment magazines’ like Newsweek, they need to rethink their business and creative plans. Most importantly, they need to focus on the following issues –

  1. Redesign and repackage. For sites like blogcritics.org to move to the next level, they need to pay more attention to web design and the packaging of their stories. To accomplish this, they may want to assign “producers” for the home page and beyond. “Producers” would be in charge of creating a layout for the home page and choosing the news stories and multimedia elements displayed there. By devoting more resources to design and slotting in multimedia elements, the sites can add to the user experience.

    There are twin problems with implementing this feature – labor, and coming up with graphics. Blogcritics.org portrays itself as a destination for top writers and hence fails to attract talent in other areas critical to developing an online news business, including web and multimedia design and development.

    Blogcritics.org and similar sites should try to reach out to the other kinds of professionals (graphic designers, photo editors) needed to produce a quality magazine. They may also want to invest programming resources in creating templates to display photo galleries and other multimedia features. In addition, these sites may want to tap into the user bases of sites like Flickr and YouTube so as to expand the magazine to newer vistas like news delivered via audio or video.

  2. Most-read/most-emailed stories and relevant stories – Provide features that make readers stay longer on the site, including lists of the most-read or most-emailed stories. Another useful feature is a list of other relevant stories from the archives, and even links to general information sites like Wikipedia. This adds value by giving readers access to more in-depth information on the topic.
  3. Comments – Comments are integral to sites like blogcritics.org, but they have not been implemented well. Comments sections tend to be overrun by repeated entries, pointless entries, grammatical and spelling errors, and spam, and they run far too long. To solve this, such sites should create a comment rating mechanism, and think about assigning a writer to incorporate the relevant points from comments into a follow-up post. A Gmail-like innovation that breaks up comments into discussions on a topic can also come in handy.
  4. The most successful webzines have been ones that focus on a particular sector or product, like webzines devoted to Apple computers. The market for news, ideas, and reviews is much more challenging, and the recent move by Gannett to use blog content will make it much harder to retain quality content producers. Hence, one must start investigating revenue-sharing mechanisms for writers and producers that tie their earnings to the number of hits their articles get.
  5. Start deliberating about an ethics policy for reviewing items, including guidelines on conflicts of interest, usage of copyrighted content, plagiarism, etc.; publish those guidelines; and set up appropriate redress mechanisms for settling disputes.
  6. Create technical solutions for hosting other media including audio, images, and video.
  7. Come up with a clear privacy and copyright policy for writers, users who comment, and content buyers. In a related point, as the organization grows, it will become important to keep content producers and other affiliates informed of any deals the publishers negotiate.
  8. Allow a transparent and simple way for writers/editors to issue corrections to published articles.

From Satellites to Streets

11 Jul

There has been tremendous growth in satellite-guided navigation systems and in secondary applications that rely on these GIS systems, like finding shops near where you are. However, it remains opaque to me why we are using satellites to beam in this information when we could easily embed RFID or similar chips in road signs for pennies. Road signage needs to move from the era of ‘dumb’ painted visual boards to an era of electronic tags, where signs beam out information on a set frequency to which a variety of devices may be tuned.

Indeed, it would be wonderful to have ‘rich’ devices, connected to the Internet, where we can leave our comments, just like message boards or blogs. This would remove the need for expensive satellite signal reception boxes and the cost of maintaining satellites. The concept is not limited to road signage and can include anything and everything, from shops to homes to chips informing the car where the curbs are so that it stays in its lane.
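
As a rough sketch of what such a sign's broadcast might carry, here is a hypothetical beacon payload in Python: a small, fixed set of fields a receiver in the car could decode. The field names, JSON encoding, and single-payload format are purely illustrative; real roadside-beacon standards would differ.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SignBeacon:
    """Hypothetical payload a 'smart' road sign might broadcast on a set frequency."""
    sign_id: str      # unique identifier for the sign
    sign_type: str    # e.g., "speed_limit", "curb_edge", "shop_info"
    latitude: float
    longitude: float
    message: str      # human-readable content, e.g., "Speed limit 65"

    def encode(self):
        """Serialize the payload for broadcast; JSON is used here only for illustration."""
        return json.dumps(asdict(self)).encode("utf-8")

# A receiver in the car would decode the payload and act on it.
beacon = SignBeacon("US101-N-0042", "speed_limit", 37.44, -122.16, "Speed limit 65")
print(json.loads(beacon.encode().decode("utf-8")))
```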

Possibilities are endless. And we must start now.