Campaigns, Construction, and Moviemaking

25 Sep

American presidential political campaigns, big construction projects, and big-budget moviemaking have a lot in common. They are all complex enterprises with many moving parts, they all bring together large numbers of people for a short period, and they all need those people to hit the ground running and execute in lockstep to succeed. Success in these activities relies heavily on great software and the ability to hire competent people quickly. It remains an open opportunity to build great software for these industries: software that lets people plan and execute together.

Incentives to Care

11 Sep

A lot of people have their lives cut short because they eat too much and exercise too little. Worse, the quality of their shortened lives is typically much lower as a result of ‘avoidable’ illnesses that stem from ‘bad behavior.’ And that isn’t all. People who are not feeling great are unlikely to be as productive as those who are. Ill-health also imposes a significant psychological cost on loved ones. The net social cost is likely enormous.

One way to reduce such costly avoidable misery is to invest upfront. Teach people good habits and psychological skills early on, and they will be less likely to self-harm.

So why do we invest so little up front, especially when we know that people are ill-informed (about the consequences of their actions) and myopic?

Part of the answer is that there are few incentives for anyone else to care. Health insurance companies don’t make their profits by caring. They make them by investing wisely. And by minimizing ‘avoidable’ short-term costs. If a member is unlikely to stick with a health plan for life, why invest in their long-term welfare? Or work to minimize negative externalities that may affect the next generation?

One way to make health insurers care is to rate them on estimated quality-adjusted life years saved as a result of interventions they sponsored. That needs good interventions and good data science. And that is an opportunity. Another way is to get the government to invest heavily early on to address this market failure. A version of that would be to get the government to subsidize care that reduces long-term costs.
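To make the rating concrete, here is a minimal sketch of how such a score might be computed. All insurer names, interventions, and effect sizes below are hypothetical placeholders; real estimates would come from careful causal analyses of claims and outcomes data.

```python
# A minimal sketch of rating insurers on estimated quality-adjusted life
# years (QALYs) saved by sponsored interventions. All names and numbers
# are hypothetical illustrations, not real data.

def qalys_saved(interventions):
    """Sum estimated QALYs saved across an insurer's sponsored interventions.

    Each intervention carries an estimated effect per member (in QALYs)
    and the number of members reached.
    """
    return sum(i["qalys_per_member"] * i["members_reached"] for i in interventions)

insurers = {
    "Insurer A": [
        {"name": "smoking cessation", "qalys_per_member": 0.05, "members_reached": 20_000},
        {"name": "diabetes prevention", "qalys_per_member": 0.10, "members_reached": 5_000},
    ],
    "Insurer B": [
        {"name": "annual wellness visits", "qalys_per_member": 0.01, "members_reached": 100_000},
    ],
}

# Rank insurers by total estimated QALYs saved.
ratings = sorted(insurers.items(), key=lambda kv: qalys_saved(kv[1]), reverse=True)
for name, interventions in ratings:
    print(f"{name}: {qalys_saved(interventions):,.0f} estimated QALYs saved")
```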

Clarifai(ng) the Future of Clarifai

31 Dec

Clarifai is a promising AI start-up. In a short(ish) time, it has made major progress on an important problem. And it is rapidly rolling out products with lots of business potential. But there are still some things that it could do.

As I understand it, the base version of the Clarifai API tries to do two things at once: a) learn various recognizable patterns in images, and b) rank the patterns based on ‘appropriateness’ and the probability of being true (probab_true). I think Clarifai will have to split these two things over time and allow people to input which abstract dimensions are ‘appropriate’ for them. As the saying goes, an image is worth a thousand words. An image can carry information about social class, race, and country, but also shapes, patterns, colors, perspective, depth, time of day, etc. Clarifai should allow people to pick the dimensions appropriate for the task. Defining dimensions would be hard, but that shouldn’t stymie the effort, and ad hoc advances may be useful. For instance, one dimension could be abstract shapes and colors; another could be the more ‘human’ dimension, etc.
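To illustrate, here is what a dimension-aware tagging call might look like. The endpoint, parameter names, and response shape below are hypothetical, sketched to show the proposal rather than Clarifai's actual API.

```python
import requests

API = "https://api.example.com/v1/tag"  # hypothetical endpoint

def tag_image(image_url, dimensions, api_key):
    """Request tags restricted to the dimensions the user deems appropriate."""
    resp = requests.post(
        API,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "url": image_url,
            # The caller picks the abstract dimensions for the task,
            # e.g., shapes/colors rather than 'human' attributes.
            "dimensions": dimensions,
        },
    )
    resp.raise_for_status()
    return resp.json()["tags"]

tags = tag_image("https://example.com/photo.jpg",
                 dimensions=["shapes", "colors", "time_of_day"],
                 api_key="...")
```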

Extending the logic, Clarifai should support the building of abstract data science applications that solve a particular problem. For instance, say a user is only interested in learning whether a photo features a man or a woman, and the user wants to build a Clarifai-based classifier. (That person is me. The task is inferring the gender of first names. See here.) Clarifai could, in principle, allow the user to train a classifier that uses all the other information in the images, including jewelry, color, perspective, etc., and provide an out-of-sample error for that particular task. The crucial point is allowing users fuller access to what Clarifai can do and then letting them manage it on their end. To that end, input about user objectives needs to be built into the API. Basic hooks could be developed for classification and clustering inputs.
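Here is a minimal sketch of that workflow, assuming a hypothetical wrapper get_tag_confidences() that returns tag confidences from the API; the learning side uses scikit-learn, and cross-validation supplies the out-of-sample error estimate for the user's specific task.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def get_tag_confidences(image_url):
    """Hypothetical wrapper around the tagging API.

    Should return a {tag: confidence} dict for the image,
    e.g. {"jewelry": 0.8, "outdoor": 0.3}.
    """
    raise NotImplementedError("call the tagging API here")

# The user supplies labeled examples for their particular task.
urls = ["https://example.com/1.jpg", "https://example.com/2.jpg"]  # ...
labels = ["woman", "man"]  # ...

X = [get_tag_confidences(u) for u in urls]
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))

# Cross-validation yields the out-of-sample error estimate for the task.
print(cross_val_score(clf, X, labels, cv=5).mean())
```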

More generally, Clarifai should eventually support more user inputs and a greater variety of outputs. Limiting the product to tagging is a mistake.

There are three other general directions for Clarifai to go in. A product that automatically sections an image into multiple images and tags each section would be useful. It would allow one, for instance, to count the number of women in a photo. Another direction would be to provide the ‘best’ set of tags that collectively describe a set of images. (It may seem to violate the spirit of what I noted above, but it needn’t: a user could want just this.) By the same token, Clarifai could build general-purpose discrimination engines: a list of tags that best distinguishes one set of images from another.
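A toy sketch of the discrimination-engine idea, scoring each tag by the gap in how often it appears in two sets of images. The tag lists per image are assumed to come from the API; the data here is illustrative.

```python
from collections import Counter

def discriminating_tags(tags_a, tags_b, top_k=5):
    """tags_a, tags_b: lists of per-image tag lists, one list per image."""
    freq_a = Counter(t for tags in tags_a for t in set(tags))
    freq_b = Counter(t for tags in tags_b for t in set(tags))
    vocab = set(freq_a) | set(freq_b)
    # Score each tag by the difference in the share of images it appears in.
    score = {t: abs(freq_a[t] / len(tags_a) - freq_b[t] / len(tags_b))
             for t in vocab}
    return sorted(vocab, key=score.get, reverse=True)[:top_k]

set_a = [["beach", "people", "sunset"], ["beach", "sand"]]
set_b = [["office", "people"], ["office", "desk", "computer"]]
print(discriminating_tags(set_a, set_b))  # e.g., ['beach', 'office', ...]
```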

Beyond this, the obvious: Clarifai can also provide synonyms of tags to make tags easier to use. And it could allow users to specify if they want, say, tags in ‘UK English,’ etc.

I Recommend It: A Recommender System For Scholarship Discovery

1 Oct

The rate of production of scholarship has never been higher. And while our ability to discover relevant scholarship per unit of time has not kept pace with the production of knowledge, it too has risen sharply, most recently due to Google Scholar.

The efficiency of discovering relevant scholarship, however, has plateaued over the last few years, even as the rate of production of scholarship has kept up its steady climb. Part of the reason is that current ways of doing things cannot be made considerably more efficient very quickly. New growth in the rate of discovery will require knowledge discovery systems that get more data on the user’s needs and that have access to structured databases of academic production.

The easiest next step would perhaps be to build a collaborative recommendation system. In particular, think of a system that takes your .bib file, trawls a large database of citation lists (for example, JSTOR or PLoS), and produces recommendations for scholarship you may have missed. The logic of a collaborative recommender system is pretty simple: learn from articles that cite scholarship similar to yours. If we have metadata on the scholarship, for instance, sub-field, the actual text of an article, or even the abstract, we could recommend based on the extent to which two articles cite the same kind of scholarly article. Direct elicitations from the user (search terms are but one form) can also guide the recommendations. And meta-characteristics, for instance, the PageRank of a piece of scholarship, could be used to order the recommendations.
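A minimal sketch of the core step, assuming access to a corpus of citation lists keyed by article identifier (e.g., DOIs); similarity here is plain Jaccard overlap between reference lists.

```python
from collections import Counter

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(my_refs, corpus, n_similar=20, top_k=10):
    """my_refs: identifiers from your .bib; corpus: {article_id: [cited ids]}."""
    # Articles whose citation lists look most like yours.
    similar = sorted(corpus, key=lambda a: jaccard(my_refs, corpus[a]),
                     reverse=True)[:n_similar]
    # References those articles cite that you missed, weighted by frequency.
    mine = set(my_refs)
    missed = Counter(r for a in similar for r in corpus[a] if r not in mine)
    return [ref for ref, _ in missed.most_common(top_k)]

corpus = {
    "paperA": ["doi:1", "doi:2", "doi:3"],
    "paperB": ["doi:2", "doi:3", "doi:4"],
    "paperC": ["doi:9"],
}
print(recommend(["doi:2", "doi:3"], corpus))  # e.g., ['doi:1', 'doi:4', ...]
```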

Some Aspects of Online Learning Re-imagined

3 Sep

Trevor Hastie is a superb teacher. He excels at making complex things simple. Until recently, his lectures were only available to those fortunate enough to be at Stanford. But now, he teaches a free introductory course on statistical learning via the Stanford online learning platform. (I browsed through the online lectures — they are superb.)

Online learning, pregnant with rich possibilities, however, has met its cynics and its first failures. The proportion of students who drop off after signing up for these courses is astonishing (but so are the sign-up rates). Some preliminary data suggest that, compared to some face-to-face teaching, students enrolled in online learning don’t perform as well. But these are early days, and much can be done to refine the systems and promote the interests of the thousands upon thousands of teachers who have much to contribute to the world. In that spirit, I offer some suggestions.

Currently, Coursera, edX, and other such ventures mostly attempt to replicate an offline course online. The model is reasonable but doesn’t do justice to the unique opportunities of teaching online.

The Online Learning Platform: For Coursera etc. to be true learning platforms, they need to provide rich APIs that allow the development of both learning and teaching tools (and easy ways for developers to monetize these applications). For instance, professors could get access to visualization applications, and students could get access to applications that put them in touch with peers interested in peer-to-peer teaching. The possibilities are endless. Integration with other social networks and other communication tools, at the discretion of students, is an obvious future step.
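As a concrete illustration, here is the flavor of third-party tool such an API could enable. Every endpoint and field below is a hypothetical assumption; the point is only that both teaching tools (visualizations for professors) and learning tools (peer matching for students) become buildable once the platform exposes data.

```python
import requests

BASE = "https://api.learning-platform.example/v1"  # hypothetical base URL

def quiz_scores(course_id, token):
    """A teaching tool: fetch anonymized quiz scores for a professor's
    visualization app."""
    r = requests.get(f"{BASE}/courses/{course_id}/quiz-scores",
                     headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
    return r.json()

def find_study_peers(course_id, topic, token):
    """A learning tool: match a student with peers open to
    peer-to-peer teaching on a given topic."""
    r = requests.get(f"{BASE}/courses/{course_id}/peers",
                     params={"topic": topic, "open_to_teaching": "true"},
                     headers={"Authorization": f"Bearer {token}"})
    r.raise_for_status()
    return r.json()
```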

Learning from Peers: The single teacher model is quaint. It locks out contributions from other teachers. And it locks out teachers from potential revenues. We could turn it into a win-win. Scout ideas and materials from teachers and pay them for their contributions, ideally as a share of the revenue — so that they have a steady income.

The Online Experimentation Platform: Many researchers in education and psychology are curious and willing to contribute to this broad agenda of online learning. But no protocols or platforms have been established to exploit this willingness. The online learning platforms could release more data at one end. And at the other end, they could create a platform where researchers propose ideas that the community and the company can vet. Winning ideas can form the basis of research that is implemented on the platform.

Beyond these, there is room for courses that enhance students’ ability to succeed in these very courses. Providing students ‘life skills,’ either face-to-face or online, and teaching them ways to become more independent can help them better adapt to the opportunities and challenges of learning online.

Toward a Two-Sided Market on Uber

17 Jun

Uber prices rides based on the availability of Uber drivers in the area, the demand, and the destination. This price is the same for whoever is on the other end of the transaction. For instance, Uber doesn’t take into account that someone in a hurry may be willing to pay more for a quicker service. By the same token, Uber doesn’t allow drivers to distinguish themselves on price (Airbnb, for instance, allows this). It leads to a simpler interface but it produces some inefficiency—some needs go unmet, some drivers go underutilized, etc. It may make sense to try out a system that allows for bidding on both sides.
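A toy sketch of what two-sided price formation could look like: riders post bids (say, a premium for quicker service), drivers post asks, and a match occurs whenever the highest remaining bid clears the lowest remaining ask. Real dispatch would, of course, also weigh distance, ETA, and destination; names and prices here are illustrative.

```python
def match_rides(bids, asks):
    """bids: {rider: max price willing to pay}; asks: {driver: min price}."""
    matches = []
    riders = sorted(bids, key=bids.get, reverse=True)   # highest bid first
    drivers = sorted(asks, key=asks.get)                # lowest ask first
    for rider, driver in zip(riders, drivers):
        if bids[rider] < asks[driver]:
            break  # no remaining pair can clear
        price = (bids[rider] + asks[driver]) / 2        # split the surplus
        matches.append((rider, driver, price))
    return matches

print(match_rides({"r1": 30, "r2": 12, "r3": 20}, {"d1": 15, "d2": 10}))
# -> r1 pairs with d2, r3 with d1; r2's bid of 12 goes unmet
```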

Enabling FOIA: Getting Better Access to Government Data at a Lower Cost

28 May

The Freedom of Information Act is vital for holding the government to account. But little effort has gone into building tools that enable the fulfillment of FOIA requests. As a consequence of this underinvestment, fulfilling FOIA requests often means onerous, costly work for government agencies and long delays for the requesters. Here, I discuss a couple of alternatives. One tool that can prove particularly effective in lowering the cost of fulfilling FOIA requests is an anonymizer: a tool that detects proper nouns, addresses, telephone numbers, etc., and blurs them. This is easily achieved using modern machine learning methods. To ensure 100% accuracy, humans can quickly vet the algorithm’s suggestions along with ‘suspect words.’ Or a captcha-like system that asks people in developing countries to label suspect words as nouns etc. can be created to further reduce costs.

That covers one conventional solution. Another way to solve the problem would be to create a sandbox architecture. Rather than hand requesters stacks of redacted documents, often unnecessarily, one can give people the right to run certain queries on the data. Results of these queries can be vetted internally. A classic example would be to allow people to query government emails for the total number of times porn domains are accessed via servers at government offices.
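Here is a minimal sketch of the anonymizer described above, using spaCy's off-the-shelf named-entity recognizer to blur identifying spans. A production pipeline would add patterns for phone numbers and addresses and route low-confidence 'suspect words' to human vetters.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small general-purpose English model
SENSITIVE = {"PERSON", "ORG", "GPE", "LOC", "DATE"}  # entity labels to blur

def redact(text):
    """Replace sensitive entity spans with blocks; keep everything else."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in SENSITIVE:
            out.append(text[last:ent.start_char])
            out.append("█" * len(ent.text))
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(redact("John Smith of Acme Corp. met officials in Sacramento."))
```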

Idealog: Creating A Leaky Internet

6 Apr

The recent Wikileaks episode has highlighted the immense control national governments and private companies have over what content can be hosted. Within days of being identified by the U.S. government as a problem, the private companies in charge of hosting and providing banking services to Wikileaks withdrew support, largely neutering the organization’s ability to raise funds and host content.

Successful attempts to cut off the Internet in Egypt and Libya pose questions of a similar nature.

So two questions follow. Should anything be done about it? And if so, what? The answer to the first is not as clear, but on balance, perhaps such absolute discretionary control over the fate of ‘hostile’ information or technology should not be allowed. As to the second question, given that many of the hosting and banking companies essential to disseminating content are privately held, and susceptible to both government and market pressures, the dissemination engine ought to be as independent of those as possible (bottlenecks remain: most pipes are owned by governments or corporations). Here are three ideas:

  1. Create an international server farm on which content can be hosted by anyone but only removed after due process, set internationally. (NGO supported farms may work as well.)
  2. We already have ways to disseminate content without centralized hosting—P2P. But these systems lack a browser that collates torrents and builds a webpage in real time. Such a torrent-based browser could vastly improve the ability of P2P networks to host content. (A toy sketch follows this list.)
  3. For Libya/Egypt etc., the problem is of a different nature. We need applications like Twitter to continue to function even if the artery to central servers goes down. This can be handled by building applications so that they can run on edge servers with local data. I believe this kind of redundancy can also be useful for businesses.
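To make idea 2 concrete, here is a toy illustration of resolving a page from content-addressed chunks: the 'URL' is a manifest of hashes, each chunk can be fetched from whichever peer holds it, and hash verification means no peer can tamper with the page. Peer lookup is simulated with a dict; a real system would ride on a DHT and a BitTorrent-style swarm.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

chunks = [b"<html><body>", b"<h1>Uncensorable page</h1>", b"</body></html>"]
manifest = [sha256(c) for c in chunks]   # what the 'URL' resolves to
peers = {sha256(c): c for c in chunks}   # simulated P2P swarm

def fetch_page(manifest):
    page = b""
    for digest in manifest:
        chunk = peers[digest]              # DHT lookup in a real system
        assert sha256(chunk) == digest     # integrity: no peer can tamper
        page += chunk
    return page.decode()

print(fetch_page(manifest))
```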

Idealog: Surveys in Exchange for Temporary Internet Access

21 Feb

Recruiting diverse samples on the Internet is a tough business. Even if one uses probability sampling, one must still think of creative ways of incentivizing response. For example, Knowledge Networks samples using RDD (one can also use ABS) and then entices selected respondents with free Internet access (or money) in exchange for filling out surveys each month.

One can extend that concept in the following manner: there are a variety of places where people must wait or have time to spare, where they also increasingly have their laptops and cell phones (for example, the airport, the airplane, the dentist’s office, the DMV, the hotel room), and where they look to browse the Internet without paying any money. There are other places where people just want access to the Internet without paying (say, cafes). Hence, one way to recruit people for surveys would be to give them free temporary Internet access or local coupon(s) in return for filling out survey(s).

One can extend this method from ‘buying’ temporary Internet access to buying any sort of product (e.g., content) or service.

Idealog: ‘Automatically’ Summarize Articles and Judicial Opinions

16 Nov

While many articles and judicial opinions are never cited, a system can be established to build summaries of those that are. The idea is straightforward: harvest the sentences just before the citation in academic articles citing the article of interest. One can learn from the summaries in articles citing the article, or from the summaries of other co-cited articles. The framework can be applied to judicial opinions, to tallying positions taken by politicians/institutions on a topic, etc.
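A minimal sketch of the harvesting step for (Author Year)-style citations. The citation pattern and sentence splitter here are deliberately crude; autosum does this more carefully.

```python
import re

def citing_sentences(text, author, year):
    """Return sentences in `text` that cite (Author ... Year)."""
    pattern = re.compile(rf"\({author}[^)]*{year}\)")
    # Naive sentence split; a real pipeline would use an NLP tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if pattern.search(s)]

article = ("Prior work shows turnout is over-reported (Ansolabehere 2010). "
           "We build on that result.")
print(citing_sentences(article, "Ansolabehere", "2010"))
```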

This form of “summarization” utilizes intellectual work done by others and allows for tracking major rebuttals and weaknesses identified by others. On the downside, the summarization is necessarily limited to points made by others. Either way, the method is likely to produce superior results to methods that harvest ‘topic sentences,’ etc.

Update: In 2015, I released the first draft of autosum.

Idealog: Monetizing Websites using Mechanical Turk

14 Jul

Mechanical Turk, a system for crowd-sourcing complex Human Intelligence Tasks (HITs) by parceling out a large task and putting the parcels on a marketplace, has seen a recent resurgence, partly on the back of the success of easier-to-work-with clones like crowdflower.com.

Given that on the Internet people find it easier to part with labor than with money, one way to monetize websites may be to request that users help pay for the site by doing a task on Mechanical Turk, made easily available as part of the website. Websites can ask users to specify, as part of their profiles, the kinds of tasks they would like to work on.

For example, a news website may provide its users a choice between watching ads and doing a small number of tasks for every X articles that they read. Similarly, a news website may ask its users to proofread a sentence or two of its own articles, thereby reducing its production costs.
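A sketch of the metering logic behind that example. The threshold, profile fields, and task source are all illustrative assumptions.

```python
FREE_ARTICLES = 5  # illustrative threshold

def next_task_for(user):
    """Hypothetical: pull a micro-task matching the task types in the
    user's profile, e.g., proofreading a sentence of the site's own copy."""
    return {"type": "proofread", "text": "Teh senate voted on Tuesday."}

def on_article_view(user):
    """Decide what to show: the article, an ad, or a micro-task."""
    user["articles_read"] += 1
    if user["articles_read"] % FREE_ARTICLES == 0:
        # Honor the preference the user set in their profile.
        if user["prefers_tasks"]:
            return {"show": "task", "task": next_task_for(user)}
        return {"show": "ad"}
    return {"show": "article"}

reader = {"articles_read": 4, "prefers_tasks": True}
print(on_article_view(reader))  # the 5th article triggers a task
```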

Idealog: Opening Interfaces to Websites and Software

20 Mar

Imagine the following scenario: You go to NYTimes.com, and are offered a choice between a variety of interfaces, not just skins or font adjustments, built for a variety of purposes by whoever wants to put in the effort. You get to pick the interface that is right for you – carries the stories you like, presented in the fashion you prefer. Wouldn’t that be something?

Currently, we are made to choose between weak customization options or building some ad hoc interface using RSS readers. What is missing is an open-sourced or internally produced selection of interfaces that cater to the diverse needs and wants of the public.

We need to separate the data from the interface. Applying that to the example at hand: the NY Times is in the data business, not the interface business. So, like Google Maps, it can make its data available, with some stipulations, including monetization requirements, and let others take charge of creatively thinking of ways the data can be presented. If this seems too revolutionary, more middle-of-the-road options exist. The NY Times can allow people to build interfaces that are then made available on the New York Times site. For monetization, the NY Times can reserve areas of screen real estate or charge users for using some ad-free interfaces.
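A sketch of the separation: the publisher exposes stories as data, and a third-party interface decides the presentation. The feed URL and field names below are hypothetical stand-ins for whatever the publisher would actually expose.

```python
import requests

FEED = "https://data.nytimes.example/v1/stories"  # hypothetical data endpoint

def fetch_stories(section):
    """The publisher's side: stories as structured data, not pages."""
    r = requests.get(FEED, params={"section": section})
    r.raise_for_status()
    return r.json()["stories"]

def minimal_interface(stories):
    """One of many possible third-party interfaces: a bare headline list."""
    for s in sorted(stories, key=lambda s: s["published"], reverse=True):
        print(f"{s['published']}  {s['headline']}")

minimal_interface(fetch_stories("technology"))
```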

This trick can be replicated across websites and easily extended to software. For example, MS Excel could have a variety of interfaces, all easily searchable, downloadable, and deployable, that cater to the specific needs of, say, chemical engineers, microbiologists, or programmers. The logic remains the same: MS needn’t be in the interface business, or, more limitedly, needn’t control it completely or inefficiently (for it does allow tedious customization), but can be a platform on which people can build, and share, innovative ways to exploit the underlying engine.

An adjacent, broader, and more useful idea is a rich interface development toolkit that provides access to processed open data.

Expanding the Database Yet Further

25 Feb

“College sophomores may not be people,” wrote Carl Hovland, quoting Tolman. Yet research done on them continues to be a significant part of all research in social science. The status quo is a function of cost and effort.

In 2007, Cindy Kam proposed using university staff as a way to move beyond the ‘narrow database.’

There are two other convenient ways of expanding the database yet further: alumni and local community colleges. Using social networks, universities can also tap interested students and staff acquaintances.

A common (university-wide) platform to recruit and manage panels of such respondents would not only yield greater quality control and convenience but also cost savings.

Idealog: Internet Panel + Media Monitoring

4 Jan

Media scholars have long complained about the lack of good measures of media use. Survey self-reports have been shown to be notoriously unreliable, especially for news, where there is significant over-reporting, and without good measures, research lags. The same is true for most research in marketing.

Until recently, the state-of-the-art aggregate media use measures were Nielsen ratings, which put a ‘meter’ in a few households or asked people to keep a diary of what they watched. In short, the aggregate measures were pretty bad as well. Digital media, which allow for effortless tracking, and the rise of Internet polling, however, for the first time provide an opportunity to create ‘panels’ of respondents for whom we have near-perfect measures of media use. The proposal is quite simple: create a hybrid of Nielsen on steroids and YouGov/Polimetrix- or Knowledge Networks-style recruiting of individuals.

Logistics: Give people free cable and Internet (~$80/month) in return for two hours of their time per month and monitoring of their media consumption. Pay people who already have cable (~$100/month) for installing a device and software. Recording channel information is enough for TV, but the Internet equivalent of a channel, the domain, clearly isn’t, as people can self-select within websites. So we need to monitor only the channel for TV but more than that for the Internet.

While the number of devices on which people browse the Internet and watch TV has multiplied, there generally remains only one ‘pipe’ per house. We can install a monitoring device at the central hub for cable, and automatically install software for anyone who connects to the Internet router, or do passive monitoring on the router. Monitoring can also be done through applications on mobile devices.

Monetizability: Consumer companies (say, Kellogg’s or Ford), communication researchers, and political hacks (e.g., how many people watched campaign ads) will all pay for it. The crucial innovation (a modest one) is the addition of the ability to survey people on a broad range of topics, on top of getting great media use measures.

Addressing privacy concerns:

  1. Limit recorded information to certain channels/websites, e.g., ones on which customers advertise. This changing list can be made subject to approval by the individual.
  2. Provide a web interface where people can review/suppress the data before they are sent out. Of course, reconfirm that all data are anonymous to deter such censoring.

Ensuring privacy may lead to some data censoring, and we can try to prorate the data we get in a couple of ways (a sketch of the second follows the list):

  • Survey people on media use
  • Use Television Rating Points (TRPs) by sociodemographics to weight the data.
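A minimal sketch of the weighting option, with illustrative cell shares: the panel's demographic mix is reweighted to match population targets, so under-represented groups count for more.

```python
# Illustrative demographic shares; real targets could come from TRP
# breakdowns or census figures.
panel_share = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}
target_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Under-represented groups get weights above 1, over-represented below.
weights = {g: target_share[g] / panel_share[g] for g in panel_share}

def weighted_mean_viewing(records):
    """records: (age_group, hours watched) tuples from the panel."""
    total_w = sum(weights[g] for g, _ in records)
    return sum(weights[g] * h for g, h in records) / total_w

print(weights)
print(weighted_mean_viewing([("18-34", 2.0), ("55+", 4.0)]))
```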

Fomenting Cross-Class Interactions

14 Jan

“A bubble” is often used to describe the shielded seclusion in which students live their lives on the Stanford campus. The words seem appropriate. After all, Stanford has a three-quarters-of-a-mile-long boundary that separates it from civilization. And even that ends in a yuppie-favored downtown lined with preppy shops. What’s more, even that sits next to multi-million-dollar homes. To reach the seething humanity, one has to go further, to the nether regions of Palo Alto, and cross into East Palo Alto. It is a task so mythically treacherous that no one volunteers, except when it is time to buy necessities from IKEA.

But even in the famously elitist bubble, there are poor people. Stanford spends about $3.2 billion to educate its roughly 15,000 students. The figure amounts to roughly $213,000 per person. About half of this money is spent on salaries and benefits. The employees range from $10/hr cafeteria workers with no benefits to administrators who earn hundreds of thousands of dollars each year. Whichever way you look at it, there is a substantial breadth of employees with whom students can interact and form partnerships, to help some and learn from others.

The shrinking conversational space

Most cross-class interactions are economic: when you buy something or pay for a service, for instance, gardening or pizza delivery. Most of these interactions have been bureaucratized, with greeting protocols and thank-you protocols. They leave little space for real human-to-human interaction and the exchange of stories. In India, the middle class often still knows about the lives of their maids and the neighborhood grocer. And that sometimes allows them to form genuine bonds of empathy, which helps them look at policy, and their own lives, differently. It is only through knowing the lives of people in other classes that one can build genuine empathy. The damning fact of modern life is that even empathy has become entangled in superficial identity issues, and it lies buried under the protocols of selling and buying, largely absent of information or care.

Proposal

The idea is to connect students, faculty, and professional staff with workers from lower economic strata: for example, cashiers, janitors, construction workers, drivers, and other people whose services they rely upon every day. To do that, create an umbrella program that helps people form hyper-local chapters, each extending to one building. For example, computer science students may start a program to teach janitors how to use computers. Social scientists may work with people to improve their literacy skills.

Detailed Proposal:

There are three parts to the proposed program:

  • Create a website that has the following capabilities
    1. Matching students with employees. The website will allow students and employees to enter detailed profiles. And it will allow them to search for possible “matches” based on their skill sets, the issues they want to work on, and their availability. (A sketch of this matching step follows the list.)
    2. The website will allow for both short- and long-term matching. It will allow students to sign up, for example, to help an employee with his or her résumé or a government form. And it will allow for longer-term mentoring or symbiotic matching arrangements.
    3. The website would feature a blog and wiki to advertise successful ventures and collaborative opportunities.
  • Since creating excitement around the program is essential, the program launch will be followed by an advertisement blitz including posters, presentations, and get-to-know sessions.
  • The other crucial part of this venture is ‘hardware’: computers, supplies, or other things needed to make some of this possible. There would be two parts to this. One would be a craigslist-like central clearinghouse that posts ‘wanted’ and ‘available’ ads. The other would be a central fund that students or employees can draw on to make things happen.
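A minimal sketch of the matching step (part 1 of the website), scoring student-employee pairs by skill fit and shared availability. The profile fields and weights are illustrative.

```python
def match_score(student, employee):
    """Higher is better: weight skill fit above scheduling overlap."""
    skills = len(set(student["skills"]) & set(employee["wants_help_with"]))
    shared_hours = len(set(student["free_slots"]) & set(employee["free_slots"]))
    return skills * 2 + shared_hours

def best_matches(students, employees, top_k=3):
    pairs = [(match_score(s, e), s["name"], e["name"])
             for s in students for e in employees]
    return sorted(pairs, reverse=True)[:top_k]

students = [{"name": "Ana", "skills": ["python", "resumes"],
             "free_slots": ["Mon pm", "Wed pm"]}]
employees = [{"name": "Luis", "wants_help_with": ["resumes"],
              "free_slots": ["Wed pm"]}]
print(best_matches(students, employees))
```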

Comments Please! The Future Of Blog Comments

11 Nov

Oftentimes, the comments sections of blogging sites suffer from a multiplicity of problems: they are overrun by spam or by repeated entries of the same or similar point, they continue endlessly, and they are generally overcrowded with grammatical and spelling mistakes. Comments sections that were once seen as an unmitigated good are now seen as irrelevant at best and a substantial distraction at worst. Here, I discuss a few ways we can re-engineer commenting systems to mitigate some of the problems in the extant models, and possibly add value to them.

Comments are generally displayed in chronological or reverse-chronological order, which implies, firstly, that the comments are not arranged in any particular order of relevance and, secondly, that users just need to repost their comments to position them in the most favorable spot – the top or the bottom of the comment heap.

One way to “fix” this problem is by having a user-based rating system for comments. A variety of sites have implemented this feature with varying levels of success. The downside of a rating system is that people don’t have to explain their vote for, or against, a comment. This occasionally leads to rating “spam.” The BBC circumvents this problem on its news forums by allowing users to browse comments either in chronological order or in the order of readers’ recommendations.

Another way we can make comments more useful is by creating message-board-like commenting systems that separate comments under mini-sections or “topics.” One can envision topics like “factual problems in XYZ” or “reader-suggested additional resources and links” that users can file their comments under. This kind of system can help in two ways – by collating wisdom (analysis and information) around specific topical issues raised within the article, and by making it easier for users to navigate to the topic, or informational blurb, of their choice. This system can alternatively be implemented by allowing users to tag portions of the article in place – much like a bibliographic system that adds a hyperlink to relevant portions of the story in comments.

The above two ways deal with ordering the comments but do nothing to address the problem of small, irrelevant, repetitive comments. These are often posted by the same user under one or more aliases. One way to address this issue would be to set a minimum word limit for comments. This will encourage users to put in a more considered response. Obviously, there is a danger of angering users, leading them to add longer, more pointless comments or to just give up. On average, though, I believe it will lead to an improvement in the quality of the comments. We may also want to consider developing algorithms that disallow repeated postings of the same comment by a user (a sketch follows).
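A minimal sketch of that last idea: reject a new comment when it is a near-duplicate of something the same user (under any known alias) has already posted. Similarity here is word-set Jaccard with a fixed threshold; a production system might use shingling or MinHash instead.

```python
import re

def tokens(text):
    """Lowercased word set, ignoring punctuation and case tricks."""
    return set(re.findall(r"[a-z']+", text.lower()))

def is_repeat(new_comment, previous_comments, threshold=0.8):
    new = tokens(new_comment)
    for old in previous_comments:
        old_t = tokens(old)
        union = new | old_t
        if union and len(new & old_t) / len(union) >= threshold:
            return True
    return False

history = ["First!! This article is totally wrong."]
print(is_repeat("first... this article is TOTALLY wrong", history))  # True
```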

The best way to realize the value of comments is to ask somebody – preferably the author of the article – to write a follow-up article that incorporates relevant comments. Ideally, the author will use this opportunity to acknowledge factual errors and analyze points raised in the comments. Hopefully, this follow-up piece will solicit more comments, and the process will repeat, helping to take the discussion and analysis forward.

Another way to go about incorporating comments is to use a wiki-like system of comments to create a “counter article” or critique for each article. In fact, it would be wonderful to see a communally edited opinion piece that grows in stature as multiple views get presented, qualified, and edited. Wikipedia does implement something like this in the realm of information but to bring it to the realm of opinions would be interesting.

One key limitation of most current commenting systems on news and blog sites is that they only allow users to post textual responses. As blog and news publishing increasingly leverage the multimedia capabilities of the web, commenting systems will need to be developed that allow users to post their responses in any medium. This will once again present a challenge in categorizing and analyzing relevant comments, but I am sure novel methods, aside from tagging and rating, will eventually be developed to help with the same.

The few ideas that I have mentioned above are meant to be seen as a beginning to the discussion on this topic and yes, comments would be really appreciated!

From Satellites to Streets

11 Jul

There has been tremendous growth in satellite-guided navigation systems and in secondary applications relying on GIS, like finding shops near where you are. However, it remains opaque to me why we use satellites to beam this information when we can embed RFID or similar chips in road signs for pennies. Road signs need to move from the era of ‘dumb’ painted boards to the era of electronic tags, where signs beam out information on a set frequency (or answer when queried) to which a variety of devices may be tuned.
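To make the idea concrete, here is a sketch of what a sign's broadcast payload and a listening device might look like. The message format and fields are illustrative assumptions, not any existing standard.

```python
import json

def make_beacon(sign_id, kind, text, lat, lon):
    """Encode the sign's fixed information as a compact broadcast payload."""
    return json.dumps({
        "id": sign_id, "kind": kind, "text": text,
        "lat": lat, "lon": lon,  # position burned in at installation
    }).encode()

def handle_beacon(payload, device_kind):
    """A listening device decodes the payload and acts on relevant kinds."""
    msg = json.loads(payload)
    if msg["kind"] == "speed_limit" and device_kind == "car":
        print(f"Speed limit ahead: {msg['text']}")

beacon = make_beacon("US101-442", "speed_limit", "65 mph", 37.44, -122.16)
handle_beacon(beacon, "car")
```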

Indeed, it would be wonderful to have “rich” devices, connected to the Internet, where people can leave comments, just like on message boards or blogs. This would remove the need for expensive satellite signal reception boxes and the cost of maintaining satellites. The concept is not limited to road signs; it can include anything and everything, from shops to homes to chips informing the car where the curbs are so that it stays in the lane.

Possibilities are endless. And we must start now.