Big Data Algorithms: Too Complicated to Communicate?

11 Apr

“A decision is made about you, and you have no idea why it was done,” said Rajeev Date, an investor in data-science lenders and a former deputy director of the Consumer Financial Protection Bureau.

From NYT: If Algorithms Know All, How Much Should Humans Help?

The assertion that there is no intuition behind decisions made by algorithms strikes me as silly. So does the related assertion that such intuition cannot be communicated effectively. We can back out the logic of most algorithms. Heuristic accounts of that logic – e.g., which variables were important – can be given still more easily. For instance, for seemingly hard-to-interpret methods such as ensembles, intuition about which variables matter can be obtained in much the same way as it is for methods like bagging. And even when specific decisions are hard to convey, the meta-logic of the system can be explained to the end user.
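As a concrete illustration of backing out which variables an otherwise opaque model leans on, here is a minimal sketch using permutation importance. The data and model are stand-ins, not from any system discussed in the post:

```python
# Sketch: recovering variable importance from an opaque ensemble model.
# Shuffle one column at a time; the drop in score is a heuristic measure
# of how much the model relies on that variable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

A ranked list like this is exactly the kind of heuristic account that could be surfaced to an end user alongside a decision.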

What is true, however, is that it isn’t being done. For instance, the WSJ, covering the Orion routing system at UPS, reports:

“For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. …One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.”

WSJ: At UPS, the Algorithm Is the Driver

Communication architecture is an essential part of all human-facing systems. What to communicate, and when, are important questions that deserve careful thought. The default cannot be no communication.

The lack of systems that communicate intuition behind algorithms strikes me as a great opportunity. HCI people — make some money.

Idealog: Monetizing Websites using Mechanical Turk

14 Jul

Mechanical Turk, a system for crowdsourcing complex Human Intelligence Tasks (HITs) by parceling out a large task and putting the parcels on a marketplace, has seen a recent resurgence, partly on the back of the success of easier-to-work-with clones.

Given that on the Internet people find it easier to part with labor than with money, one way to monetize websites may be to request users to help pay for the site by doing a task on Mechanical Turk, made easily available as part of the website. Websites can ask users to fill out the kind of tasks they would like to work on as part of their profiles.

For example, a news website may provide its users a choice between watching ads and doing a small number of tasks for every X articles that they read. Similarly, a news website may ask its users to proofread a sentence or two of its own articles, thereby reducing its production costs.

Some Notable Ideas

30 Jun

Education of a senator

While top colleges have been known to buck their admissions criteria for the wrong reasons, they do on average admit smarter students, and the academic training they provide is easily above average. So it may be instructive to know what percentage of senators, the most elite of the political brass, have graced the hallowed corridors of top academic institutions, and whether the proportion attending top colleges varies by party.

Whereas 5 of the 42 Republican senators have attended top colleges (in the top 20), 22 of the 57 Democratic senators have done the same. The skew may arise because New England is home both to top schools and to a good many Democratic senators. But privilege accorded by accident of geography is no less consequential than one afforded by some more equitable regime. That is, if people of fundamentally similar caliber sit in the Senate in either party – a belief a cursory viewing of C-SPAN would absolve one of – then some, due to the happenstance of geography, have been trained better than others.

Discounting elite education, one may simply look at the extent of education. There one finds that whereas 73% of Democratic senators have an advanced degree, only 64% of Republican senators do. While it is again likely that going to a top school eases admission to good advanced-degree programs, perhaps making advanced education more attractive, one can again conjecture that, whatever the reason, some groups of people – due to mere privilege perhaps – have better skills than others.
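For reference, a quick back-of-the-envelope check of the shares implied by the raw counts quoted above:

```python
# Proportions implied by the counts given in the post.
rep_top, rep_total = 5, 42    # Republican senators from top-20 colleges
dem_top, dem_total = 22, 57   # Democratic senators from top-20 colleges

rep_share = rep_top / rep_total
dem_share = dem_top / dem_total
print(f"Republicans: {rep_share:.1%}, Democrats: {dem_share:.1%}")
# Democrats attend top colleges at roughly three times the Republican rate.
```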

Why do telephones have numbers?
Unique alphabetical addresses could always be uniquely coded as numbers or translated into signals. However, technological limitations meant that such coding wasn’t widely practiced. People were thus forced to memorize multiple long strings of numbers, a skill most people never prided themselves on, and when forced to learn it, they took to it like fish to land. Freedom from such bondage would surely be welcomed by many. And now the unsparing hands of technological change, which have made such coding ubiquitous, promise to deliver a comeuppance to telephone numbers. Providing each phone with an alphabetical ID that maps to its IP address would be one smart way of implementing this change.
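The mapping imagined here works much like a DNS lookup: a memorable alphabetical ID resolved to a numeric address the caller never sees. A minimal sketch, with names and addresses invented for illustration:

```python
# Hypothetical directory mapping human-friendly phone IDs to the
# IP addresses of handsets; analogous to DNS resolving hostnames.
directory = {
    "alice.smith": "203.0.113.17",  # made-up addresses (RFC 5737 range)
    "bob.jones":   "203.0.113.42",
}

def resolve(phone_id: str) -> str:
    """Return the numeric address behind a memorable ID."""
    try:
        return directory[phone_id.lower()]
    except KeyError:
        raise LookupError(f"no phone registered under {phone_id!r}")

print(resolve("Alice.Smith"))  # the caller only ever types the name
```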

Misinformation and Agenda Setting
People actively learn from their environment, though they do so inexpertly. Given that the average American consumes over 60 hours of media each week, one prominent place from which they learn is the media. If one were to watch a great many crime shows – merely as a result of the great many crime shows on television – one may wrongly impute that the crime rate has been increasing. A similar, though more potent, example of this may be the 70% of Americans who believe breast cancer is the number one cause of female mortality.

Idealog: Opening Interfaces to Websites and Software

20 Mar

Imagine the following scenario: You go to a website (say, the NY Times) and are offered a choice among a variety of interfaces, not just skins or font adjustments, built for a variety of purposes by whoever wants to put in the effort. You get to pick the interface that is right for you – one that carries the stories you like, presented in the fashion you prefer. Wouldn’t that be something?

Currently, we are made to choose between weak customization options and building some ad hoc interface using RSS readers. What is missing is a selection of open-sourced or internally produced interfaces that cater to the diverse needs and wants of the public.

We need to separate the data from the interface. Applying this to the example at hand: the NY Times is in the data business, not the interface business. So, like Google Maps, it can make its data available, with some stipulations (including monetization requirements), and let others take charge of thinking creatively about how that data can be presented. If this seems too revolutionary, more middle-of-the-road options exist. The NY Times can allow people to build interfaces that are then made available on the New York Times site. For monetization, the NY Times can reserve areas of screen real estate, or charge users for ad-free interfaces.
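A sketch of the separation being proposed: the publisher exposes structured article data, and any number of third-party interfaces render it. The data shape and field names below are invented for illustration, not an actual NY Times API:

```python
# Hypothetical: a publisher serves raw article records; interfaces are
# interchangeable functions that render those records however they like.
articles = [  # what a data-only endpoint might return
    {"headline": "Markets Rally", "section": "Business", "words": 800},
    {"headline": "New Telescope Launched", "section": "Science", "words": 1200},
]

def headline_only(items):
    """One possible interface: a terse headline list."""
    return "\n".join(a["headline"] for a in items)

def by_section(items, section):
    """Another: filter to the reader's preferred section."""
    return [a for a in items if a["section"] == section]

print(headline_only(articles))
print(by_section(articles, "Science"))
```

The publisher controls the data (and any monetization stipulations); the interface layer stays open to anyone willing to build one.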

This trick can be replicated across websites, and easily extended to software. For example, MS Excel could have a variety of interfaces, all easily searchable, downloadable, and deployable, that cater to the specific needs of, say, chemical engineers, microbiologists, or programmers. The logic remains the same: Microsoft needn’t be in the interface business, or, more limitedly, needn’t control it completely or inefficiently (it does allow tedious customization), but can be a platform on which people can build, and share, innovative ways to exploit the underlying engine.

An adjacent, broader, and more useful idea is a rich interface-development toolkit that provides access to processed open data.

Idealog: Internet Panel + Media Monitoring

4 Jan

Media scholars have long complained about the lack of good measures of media use. Survey self-reports are notoriously unreliable, especially for news, where there is significant over-reporting, and without good measures, research lags. The same is true for much research in marketing.

Until recently, the state-of-the-art aggregate media-use measures were Nielsen ratings, which put a `meter’ in a few households or asked people to keep a diary of what they watched. In short, the aggregate measures were pretty bad as well. Digital media, which allow for effortless tracking, and the rise of Internet polling, however, provide for the first time an opportunity to create `panels’ of respondents for whom we have near-perfect measures of media use. The proposal is quite simple: create a hybrid of Nielsen on steroids and YouGov/Polimetrix- or Knowledge Networks-style recruiting of individuals.

Logistics: Give people free cable and Internet (~$80/month) in return for two hours of their time per month and monitoring of their media consumption. Pay people who already have cable (~$100/month) for installing a device and software. Recording channel information is enough for TV, but the Internet equivalent of a channel – the domain – clearly isn’t, as people can self-select within websites. So we need to monitor only the channel for TV, but more than that for the Internet.

While the number of devices on which people browse the Internet and watch TV has multiplied, there generally remains only one `pipe’ per house. We can install a monitoring device at the central hub for cable, automatically install software for anyone who connects to the Internet router, or do passive monitoring on the router itself. Monitoring can also be done through applications on mobile devices.

Monetizability: Consumer companies (say, Kellogg’s or Ford), communication researchers, and political hacks (e.g., how many watched campaign ads) will all pay for it. The crucial (if modest) innovation is the possibility of surveying people on a broad range of topics, in addition to getting great media-use measures.

Addressing privacy concerns:

  1. Limit recording information to certain channels/websites, ones on which customers advertise, etc. This changing list can be made subject to approval by the individual.
  2. Provide a web interface where people can review/suppress the data before they are sent out. Of course, reassure respondents that all data are anonymous, to deter such censoring.

Ensuring privacy may lead to some data censoring, and we can try to adjust for the censored data in a couple of ways –

  • Survey people on media use
  • Use Television Rating Points (TRP) by sociodemographics to weight data.
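The second adjustment – reweighting by sociodemographics – can be sketched as simple post-stratification. The cell proportions below are invented for illustration:

```python
# Sketch of post-stratification: weight each respondent so the panel's
# demographic mix matches known population targets. Numbers are made up.
panel_share = {"18-34": 0.20, "35-54": 0.50, "55+": 0.30}       # our panel
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # e.g., census

weights = {g: population_share[g] / panel_share[g] for g in panel_share}

# A respondent's viewing minutes count weights[group] times toward the
# aggregate estimate, correcting for over/under-represented groups.
print(weights)
```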

Marginal value of ‘Breaking News’

12 Mar

Media organizations spend millions of dollars each year arranging the logistics of providing ‘breaking news’. They send overzealous reporters and camera crews to far-off counties (and countries, though not as often in the US), pay for live satellite uplinks, and pay for numerous other little logistical details that make up ‘live news’. They do so primarily to compete with and outdo each other, but when queried they may regale you with the mythical benefits of reporting the death of a soldier in Iraq a few minutes early to a chronically bored, apathetic US citizen. The fact is that for a large range of events, breaking news has little or no value whatsoever for a citizen. Breaking news is provided primarily as a way to introduce drama into the newscast, and in a style that exaggerates the importance of the minuscule and the irrelevant.

The more insidious element of breaking news is that repeated stories about marginal events, which most breaking news events are (for example, a small bomb blast in Iraq, or a murder in some small town in Michigan), provide little or no information to the citizen consumer about the relative gravity or importance of the event. In doing so, they make the citizen consumer think either that all news (and all issues) is peripheral or that these minor events are of critical importance. Either way, they do a disservice to society at large.

This doesn’t quite end the laundry list of deleterious effects of breaking news. The focus on breaking news ensures that most attention is given to an issue when the journalists on the ground typically know the least about it. To take this a step further: the ‘sources’ for reporting during the initial few minutes of an event are often ‘official sources’. The breaking news format thereby legitimizes the official version of events, which then gets corrected a week or a month later in the back pages of a newspaper.

While there is little hope that the contagion of ‘breaking news’ will ever stop (and it stands to reason that the web, radio, and television will continue to be afflicted by the malaise), it is possible for people to opt for longer, better-reported articles in good magazines, or to learn about an issue or event through Wikipedia, as Chaste suggested earlier in his column for this site.

Interview with Bill Thompson: Fragmented Information

10 Mar

This is the fourth and concluding part of the interview with BBC technology columnist, Mr. Bill Thompson.

part 1, part 2, part 3

That more or less completes two of the major questions that I had. Let me now move on to digital literacy and the fragmented informational landscape. Google has made facts accessible to people – too accessible, some might say. What Google has done is allow people to pick up little facts, disembodied and without contextual information. It may produce a consumer who has a very particularistic trajectory of information and opinions. Do you see that as a possibility, or does the fundamentally interlinked nature of the Internet somehow manage to make information accessible in a more complete way? On a related point, do you see that while we are becoming information-rich, we are simultaneously becoming knowledge-poor?

That is such a big question. In fact, I share your concerns. I think there is a real danger – it’s not even just that there is a surfeit of facts and a lack of knowledge, it’s that the range of facts available to us becomes defined by what is accessible through Google. And as we know, Google, or any other search engine, indexes only a small portion of the sum of human knowledge, of the sum of what is available. And we see that this effect becomes self-reinforcing: somebody researching something searches on Google, finds some information, reproduces that information and links to its source, and it becomes therefore even more dominant, more likely to be the thing people will find the next time they search. As a result, alternative points of view, more obscure references, the more complex stuff which is harder to simplify and express, drop down the Google ranking and essentially become invisible.

There is much to be said for hard research that takes time, that is careful, that uncovers this sort of deeper information and makes it available to other people. We see in the world of non-fiction publishing, particularly I think with history every year or two we see a radical revisionist biography of some major historical figure based on a close reading of the archives or access to information which was previously unavailable. So all the biographies of Einstein are having to be rewritten at the moment because his letters from the 1950s have just become available and they give us a very different view of the man and particularly of his politics. Now if our view of Einstein was one defined by what Google finds out about Einstein we would know remarkably little. So we need scholars, we need the people who are always going to delve a little more deeply and there is danger in the Google world – it becomes harder to do that and fewer people will even have access to the products of their [careful researcher’s] work because what they write will not itself make it high up the ranking, will not have a sufficient ‘page rank’.

So I actually do think Google, and the model of information access it presents us with, should be challenged, and it should only ever be one part of the system. It is a bit like Wikipedia. I teach a journalism class and I say to my students that Wikipedia may be a good place to start your research but it must never be the place to finish it. Similarly with Google: anybody who only uses the Google search engine knows too little about the world.

You bring up an important point. Search engine design, and other web usage patterns are increasingly channeling users to a small set of sites with a particular set of knowledge and viewpoints. But hasn’t that always been the case? An epidemiological study of how knowledge has traditionally spread in the world would probably show that at any one time only a small amount of knowledge is available to most people while most other knowledge withers into oblivion. So has Google really fundamentally changed the dynamics?

You are trying to do that to me again and I won’t let you.

None of this is a fundamental shift in what it means to be human. Things may be faster, we may have more access or whatever, but we have always had these problems and we have always found solutions to them. And I am not a sort of millennialist about this; I don’t think this is the end of civilization. I think we face short-term issues, and we historically have found a way around them and will again. Google’s current dominance is a blip, in a sense – it will go, I don’t know how. Ok, here’s a good way in which Google’s dominance could go. At the moment we have worries about H5N1 avian flu mutating into a form which infects humans. Let’s just suppose that this happens, and that somebody somewhere writes an obscure academic paper which describes how basically to cure it and how to prevent infection in your household. Well, all the people who rely on Google won’t find this paper and will die, and all the people who go to their library and look up the paper version will live, and therefore the Google world will be over. How about that? Something, perhaps not quite on that scale, will happen which will force us to question our dependence on Google, and that would be a good thing. We shouldn’t ever depend on anyone like that.

You know Mr. Thompson, even libraries have sort of shifted. They are increasingly interested in providing Internet access.

Yeah, it is, and it is search rather than structure. And you know, the fact is that search tools make it easy to be lazy, and we are a lazy species, and therefore we will be lazy and carry on being lazy until something bad happens because of our laziness, at which point we will mend our ways.

That’s why I had brought up the question of fragmented knowledge earlier. One of my close friends is blind, and he generally has to read through a book to reach the information that he wants. He tends to have a much fuller idea of the context, and the kind of corroboration he presents is much different from the casual, scattered, anecdotal argumentation that others present. Of course, part of that is a function of his being a conscientious arguer, but certainly part of it stems from his not having as many shortcuts to knowledge, and actually having a fuller contextual understanding of the topic at hand. The fact is that most users can now parachute in and out of information, and Google has helped make that easier.

I don’t think we see what’s really going on. There is a lot more information, a lot more to cope with, and this superficial skimming is a very effective strategy. Skim reading is something we know how to do, that we teach our children to do, that we value in ourselves and indeed in them, and skim surfing is just as valuable. You know, I monitor thirty or forty blogs, news sites, and stuff like that, and when I am doing it, I don’t look too closely at things. That doesn’t mean I don’t have the ability or the facility to do something a lot deeper and a lot more involved.

I have a fifteen-year-old daughter. She is doing her GCSE exams this year. And I have watched over the last 18 months or so how she has developed her ability to focus, her research skills, her reading around; she is surrounded by a pile of books; she has stopped using the computer as the way to find things quickly because she now needs to know stuff in depth, and she is doing all of that. So I suspect that, observing children from the outside, we see them in a certain way because we only see part of what they do, and we have to look in more detail. It is too easy to have the wrong idea, and actually I am a lot more hopeful about this, having seen it with my daughter, and I think I will start to see it with my son, who is fourteen at the moment. Again, I see his application to the things he cares about and the way he searches. He is a big fan of Oblivion, the Xbox game; his engagement and the depth of his understanding are immense. So we shouldn’t let the fact that we look at some domain of activity where they are purely superficial let us lose sight of other areas where it is not superficial at all, where they have developed exactly those skills which we would want them to have.


Bill Thompson’s blog

Interview with Bill Thompson: Copyright Law

8 Mar

This is part III of a four-part interview with Mr. Bill Thompson, noted technology columnist with the BBC.

part 1, part 2

“Copyright is not a Lockean natural right but is a limited right granted to authors in order to further the public interest. This principle is explicitly expressed in the U.S. Constitution, which grants the power to create a system of copyright to Congress in order to further the public interest in “promoting progress in science and the useful arts.” (Miller and Feigenbaum, Yale) UK copyright law dates back to the Statute of Anne from 1709, which states – “An Act for the Encouragement of Learning, by vesting the Copies of Printed Books in the Authors or purchasers of such Copies, during the Times therein mentioned.” Both seem to see copyright as something tailored toward the public good. The modern understanding of it has disintegrated into a sort of “right to make as much money as one can”. Am I correct in saying that? Please elaborate on your views on the subject.

Copyright started out as an attempt to restrict the ability of publishers of books to control absolutely what they did under contract law, to establish limitations on the period in which a work of fiction, or indeed any written work, could be exploited by one group of people, and to ensure that after a certain amount of time it was available as part of the public domain to serve the public good. So copyright has always been about taking away any absolute right the creator of a work of art, fiction, literature, or non-fiction might have, so that everyone can benefit: take away the absolute right, and grant in return a monopoly over certain forms of exploitation for a period during which they are expected to make enough money, or gain enough benefit, to encourage them to carry on creating.

So the idea is that it is a balance – give the creator enough so that they can create more, and encourage them to do that because it is good, but make sure that the products of their creative output fall into the public domain so they can be used by everyone for the wider good, on the grounds that you can never know in advance who will make the best use of someone else’s creative output and therefore it should be available. So the fact that, in the early years of the last century, a cartoonist in the United States called Walt Disney drew a mouse based on other people’s ideas is great; Disney and his family have had a lot of time to exploit the value in the mouse, but there are other people now who could do a better job with it, and they should be allowed to get their hands on the mouse and do cool stuff with it. That’s the idea, and that is the principle being broken by large corporations who see the economic advantage to themselves in extending the term of copyright and limiting the freedoms other people have, because they don’t care about the public good; they care about their own good. And legislatures, particularly in the United States but also elsewhere, have been bought off, corruptly or not, and have not been true to the original principle, which is that in the end it should all go into the public domain so that anybody who wants can make use of it and exploit it in creative ways that we cannot yet imagine. In a sense, it’s an expression of humility – it’s saying that we cannot know for sure who will be able to do the best with a work, and therefore it is in the interest of everybody that it should be available to everybody. That was the breakthrough – the insight – of copyright law 300 years ago. We are coming up on the 300th anniversary of the Statute of Anne, the first codified copyright law, and I think we should throw a big party for it.

The point is most eloquently made not by Larry Lessig, who is good, but by Richard Stallman of the Free Software Foundation, and his point is just that copyright is broken, it needs to be rebalanced, and we need a new and different approach to copyright; in a sense it is the one area of law where we actually do need to start again. I am always an advocate of trying to make old laws work with new technologies. I think that we should be very cautious about making new laws, because looking back historically it does look like today’s politicians are more stupid and more corrupt than those of older days and therefore are less likely to make good laws – that just seems to be the case. Correct me if I am wrong. And therefore we should avoid giving them the ability to screw things up. But with copyright, we are forced to. So we have to engage with the political system, we have to make sure that the people who have political power understand the issues, and we have to force them to do the right thing. In other areas, for example libel laws and all sorts of other aspects of what we do online, the existing legal framework has in fact proven remarkably robust. There have been problems over jurisdiction and problems over enforcement, but the laws themselves have applied pretty well in the networked world, and we haven’t needed that many new laws, and that is a good thing. Copyright is the one area where we clearly do.

Copyright, if minimally construed, is the right to produce copies. This particular understanding is fabulously unsuited for the Internet era where technology companies like Google have a business model based on making daily copies of content and making it searchable. Book publishers, along with some other content producers, have cried foul. It seems to me that they don’t understand the Internet model, which in a way has changed the whole dynamic of ‘copying’.

I don’t think it has changed the whole dynamic so much as it has exposed another reading of the word copy, made it the dominant reading, and so undermined part of the bargain. Parliamentary draughtsmen, the people who wrote those laws, were perfectly right to use the word as they did; it is just that we have promoted one particular facet of copying. The fact that we use the word copy to refer to the version that is made in viewing a webpage in a browser – the version that is held in display memory and all those sorts of things – means we could have avoided a lot of this fuss by redefining what the word copy means thirty or fifty years ago, or by just not using the word. It wouldn’t actually have helped the larger issue, because the real problem with copyright is not that too many incidental acts on our computer systems and networks are in principle in breach of copyright; it’s that the existence of the network makes it possible to breach copyright deliberately, almost maliciously.

As we talk, I am waiting for Episode 13 of Series 3 of Battlestar Galactica to download onto my PC via BitTorrent from the United States so I can watch it. Ok! Now that is a complete infringement of copyright.
[I reply jokingly – so I am going to the MPAA.] Feel free, I would welcome their letter. I will delete it once I have watched it, and I will buy the DVD once it comes out. But Sky here hasn’t started showing it four months after it was on the Science Fiction channel. Well, I am not going to wait four months to watch something when it is available. I mean, that’s just foolish. That exposes holes in copyright law. It also exposes holes in the economic strategy of the multinational corporations who run the broadcast industry in the UK and the US, because they just don’t understand the market or what people are doing. There are times when you have to stretch the system to demonstrate the absurdity of the old model, and that’s what I see myself as doing.

The US and EU copyright regimes differ in some marked ways. Similarly, Australian copyright law differs in its statute of limitations, which is much shorter than in the US. Post-Internet, we really do need a common international framework for copyright.

But we do. We have that. We have the World Trade Organization, we have WIPO – the World Intellectual Property Organization – and we have the Berne Convention signatories. There is an international framework for copyright. It’s as broken as anything else. We need a new Berne; we need to go back to Switzerland and renegotiate what copyright means at a global level. The framework is there, but it’s been caught out by technology.

Databases are given legal protection in EU via its database directive while similar privileges haven’t been granted in US. What do you make of this effort to give copyright to databases?

That’s just a European absurdity which we will realize was a mistake and eventually change. You have database copyright in the European Union and in some other countries, though not in the United States, and it is clearly a mistake. There is growing awareness that something needs to be done about it, because it’s not necessary to offer such protection. The idea that you get automatic protection for taking other people’s data and structuring it in a certain way has limited economic flexibility and has damaged competitiveness.

There is always a problem, you see: as new technologies emerge, people suggest new rights to go with them, and this was a case where [we drafted something into] law before wiser counsels could prevail.

The Gowers report recently received a fair bit of attention. The report, I believe, had a wonderful recommendation for handling patent applications: putting patent applications online and having an open commenting period. You in fact wrote about the report in a recent column. Can you talk a little more about it?

The Gowers report was commissioned by the Chancellor of the Exchequer, Gordon Brown, a senior government minister, basically second only to Tony Blair; indeed, Gordon Brown hopes to be Prime Minister within the next few months. Because of the way British politics works, he can probably manage that without ever getting elected, because he would simply become party leader and therefore automatically Prime Minister, the Labour Party being the dominant party in government.

Brown commissioned a man called Andrew Gowers, who had at that point just been fired as editor of the Financial Times, to carry out this report. Andrew is a nice man, but many of us doubted his ability to resist the copyright lobby, to resist the pressures, to write something which would make industry happy. But he surprised us all, partly thanks to the excellent team of people he had working for him at the Treasury in the UK. He came up with a report that wasn’t radical but was sensible, and what we do best in British politics is sensible, because people can get behind sensible. He said some things which were well argued, didn’t give in to the vested interests, and didn’t give the music industry what they wanted.

Unfortunately, the Gowers Report is just that – a report, a series of recommendations which then goes into the government machine and has to be acted on. It doesn’t do anything itself. We have a political issue here: when Gordon Brown as Chancellor of the Exchequer commissioned the report, he believed that by the time it was published he would be Prime Minister – that Tony Blair would have gone – and that he would then be in a position to take the report and say, I commissioned this when I was Chancellor and it is absolutely fantastic; now I am Prime Minister and I am going to make it happen. Unfortunately, Tony Blair has refused to go, so Gordon Brown has received the report as Chancellor and has no real power to deliver on it. So the question is, when Gordon does become Prime Minister, will it be among his priorities? Probably not. Will the world have changed? Probably. Will he have been leaned on so effectively by the very wealthy music and movie industries that he will actually dilute some of the recommendations? Tragically, probably yes. So the timing is all wrong. The opportunity Gowers presented was for Gordon Brown to say, this is great, let’s just do it. Now we are going to have to wait – eight months – and [in that time] things will have changed and there will be a lot else for Gordon Brown to do. So those of us who think the recommendations are good are trying to keep the pressure on, keep track of what is happening, have the right conversations, and make sure that when Gordon does become Prime Minister – because it looks fairly likely that he will – he is reminded of it at the right time, in the right way, so that it can turn into real change.

The other thing to remember is that a lot of the proposed changes and recommendations are actually international. There are things that will have to happen at a European level or at a global level, so to some extent it is a call for British ministers, British representatives, British commissioners in Europe, and British delegates at WIPO to behave in a different way, and it will take some time before we know whether that has been successful. The report advocates engagement at a global level; that then needs to happen.

Interview with Bill Thompson: The Political Economy of the Internet

6 Mar

This is part 2 of the interview with Bill Thompson, technology columnist for the BBC. (Part 1)

When I look at the Internet, there is this wonderful sense of volunteerism. It is incredible to see the kind of things that have come out of recent technology like the open source movement, and Wikipedia. Even Internet companies seem to have adopted sort of socially nurturing missions. How did these norms of volunteerism get created? Has technology merely enabled these norms? Or are we witnessing something entirely new here?

If you look at commons-based peer production, as Yochai Benkler calls it, what motivates people is exactly the same question as what motivates altruism. Because what we have with contributions to open source projects like Linux, or positive contributions to Wikipedia, is what would seem on the surface to be pure altruistic behavior. So we can ask the same questions. What do people get in return? And do they have to get something in return?

Pekka Himanen, in The Hacker Ethic, I think nailed what people get in return – the social value, the sense of self-worth, the rewards that you are looking for. All of that makes perfect sense to me; I don’t think we need to ask any more questions about that. You get stuff back from contributing to the Linux kernel or putting something up on SourceForge. The stuff you get back is the same sort of stuff you get back from being a good, active citizen. It is the same stuff you get back from, say, recycling your trash.

As to whether something new is emerging from what’s happening online – because it allows for distributed participation, and because the product of the online activity is, certainly in the case of open source, a tool which can itself be used elsewhere, or in the case of Wikipedia, a new approach to collating knowledge – whether something completely new or radical is coming out of there still remains to be seen. I am quite skeptical about that. I am quite skeptical of brand new emergent properties of network behavior, because we remain the same physical and psychological human beings. I am not one of those people who believes that the singularity is coming, that we are about to transcend the limitations of the corporeal body and that some magical breakthrough in humanity is going to happen thanks to the Internet and new biomedical procedures. I don’t think we are on the verge of that change.

I think that the Internet as a collaborative environment might emphasize what it is to work together and change what it means to be a good citizen, but it doesn’t fundamentally alter the debate.

But the kind of interactions that we see today wouldn’t have happened if it were not for the Internet. For example, the fact that I am talking to you today is, I believe, sufficiently radical.

But has it changed anything fundamentally? OK, it has allowed us to find each other, but in 13th-century medieval Europe there was a very rich and complicated network of traveling scholars, who would travel from university to monastery to share ideas and exchange texts. It was on a smaller scale, it was much slower, and it was at a lower level, but was it fundamentally different to what we are doing in the blogosphere or with communications like this? Just because there is more of it doesn’t mean it is automatically different.

Let me move on to a related but different topic. I imagine that the techniques developed around this distributed model can be applied in a variety of places. For example, lessons from the open source movement can be applied to how we do research. Can lessons of the Internet be applied elsewhere? Certainly, alternative forms of decision making are emerging within companies. Is the Internet creating entirely new decision models and economies?

That’s quite a big question. There’s a sort of boring answer to it, which is just that more and more organizations and more and more areas of human activity are reaching that third stage in their adoption of information and communication technologies. The first stage is where you just computerize your existing practices. The second stage is where you tinker with things and perhaps redefine certain structures. But the third stage is where you think, OK, these technologies are here, so let’s design our organizational processes, structures and functions around the affordances of the technology – which is a very hard thing to do, but something which more and more places are doing. So just as in the 1830s and 1840s organizations built themselves around the capabilities of steam systems and technologies, and in the 1920s they built themselves around the new availability of the telephone, so now, in the West certainly, it is reasonable to assume that the network is there, and that the things it makes possible it will continue to make possible. So you start to build structures, workflows and practices, businesses and indeed whole sectors of the economy around what the net does. In that sense, it is changing lots of things. As I said, I think that’s a boring insight. That’s what happens! We develop new technologies and we come to rely on them. It’s happened for the past five thousand years. So while it may be a new technology, it’s the same pattern. Joseph Schumpeter got it right in the 1930s, talking about waves of ‘creative destruction’, and everybody in the media is now talking about that, but fundamentally there is nothing different going on there.

There is a more interesting aspect to that. Are some of the outputs of the more technological areas – the open source movement and things like that – creating wholly new possibilities for human creative and economic expression? They might be. I don’t think we know yet; I think it’s too early to tell. We have seen the basis of the Western economy, and hence of the global economy, move online (become digital) over the past twenty years. As Marx would put it, the economic base has shifted. We are seeing the superstructures move now to reflect that. The idea of economic determinism is not right at every point in history, but certainly the world we live in now is a post-capitalist world. We still use the word Capitalism to describe it, but in fact the economy works in a slightly different way and we are going to need a new word for it. In that world – with a new economic base – we will find new ways of being. And we will start to see the impact in art and culture, and in forms of religious expression. You know, we haven’t yet seen a technologically based religion, and it is about time we saw something emerge whose core precepts rely on the technology.

Are we really post-Capitalist, as you put it? I would still argue that Capitalism trumps. The usage patterns of websites and the like still largely reflect the ‘old economy’. More importantly, I would argue that the promise of the Information Age has long been swallowed by the quicksand of Capital.

When I say post-Capitalist, I don’t mean it’s not capitalist. If you look at the move from the feudal economy to Capitalism, the accumulation of capital became important. It still remains very important; it is still what drives things. The rich get more, the powerful remain powerful, and indeed those who have good creative ideas get appropriated by the system. We are seeing it happen already with the online video world, where now, if you create a cool 30-second video, your goal is to monetize that asset: you put it on YouTube and try to advertise it – you become part of the system – and this continues to happen. Just in parenthesis, the idea is that we are post-Capitalist not in that we are replacing Capitalism, but in that it’s a different form of Capitalism – it is Uber Capitalism, it is Networked Capitalism. We need a new word for what we can do now. It doesn’t mean that those with capital don’t dominate, because they do, and they will continue to for some time, I imagine.

The sense that the network has had some sort of democratizing influence is misguided. It hasn’t. It has enabled much greater participation. It may well make it possible for more people to benefit from their creativity in a modest way, but I don’t think it will do anything to challenge the fundamental split between the owners of capital – those who invest their money, and for whom that counts as their work – and the wage slaves, the proletariat, those who have to do stuff every day in order to carry on and earn enough money to live. I don’t think it will change that at all.

Your comments are spot on. There is an astute understanding of the political economy of the net here, especially at a time when one constantly hears of the Internet’s wondrous power to revolutionize everything from democracy to the economy.

Yeah. The network is a product of an advanced capitalist economy, largely driven by the economic and political interests of the United States, although that balance is starting to shift. We see what is happening – India and China in particular are starting to have some influence on the evolution of the network, not very strong at the moment but growing. But again, India and China are trying to find their own ways of being industrial capitalist economies. They are not really trying to find their way to being something completely different.

The digital economy, as you pointed out, still largely reflects the ‘real’ world underneath it. Things will change and are changing in some crucial fundamental ways but the virtual world is anchored to the real world. One facet of that real world is the acute gender imbalance in the IT industry. What are your thoughts on the issue?

There have been massive advances, particularly in Europe and the United States, [which] are I think two [places] in which, over the past 100 years, we have come to accept and indeed believe that differences [in treatment] between men and women, which existed in many other societies, were just wrong. The differences which are currently enforced on billions of women around the world by their religions should be overcome. This was a historical error. There is no real difference [between genders]; the gender differential is unjust, and social justice requires equality. But gender equality is a very recent idea, a very recent innovation, and one of the last places where it has made an impact is the education system: fifty years ago the education system would push men towards science and technology and women towards art and domestic skills. I think we are just living through the consequences of that in the adults we have today, in the people of my age now. When I was in school the girls would be guided away from the sciences, and as a result technology and engineering were to a large extent male preserves. We are still correcting that historical injustice.

Now, what’s interesting is that whilst we still see that difference at the engineering level, among those who build and create the machines, we are seeing it less and less at the user level. The demographics of Internet use, computer use, laptop use, mobile phone use and all those sorts of things, certainly within the West, now reflect the general population. Over the last ten years I have watched Internet use equalize, certainly here in the UK, between men and women, and indeed the research that has been done on how computers are used in the household makes it very clear that the computer has become just another household device, as likely to be used or controlled by the women or girls in the house as by the boys. So I think at the user level, where the technology pushes through into our daily life, that distinction isn’t there anymore. It’s at the programmer level where we see fewer women programmers and fewer women web designers. There are still a lot of them out there – friends of mine, male and female, who are just as good and astute and capable at coding and developing and all those things – but we still see fewer. And I think it’s a general societal imbalance that has yet to be corrected.

Interview with Bill Thompson: The Future

5 Mar

While technology has become an important part of our social, economic and political life, most analysis of technology remains woefully inadequate, limited to singing paeans to Apple and Google and to occasional rote articles about security and privacy. It is to this news market, full of haphazard opining, that Mr. Bill Thompson brings his considerable intellect and analytical skills every week in his technology column for the BBC.

To those unfamiliar with his articles, Mr. Bill Thompson is a respected technology guru and a distinguished commentator on technology and copyright issues for the BBC. Mr. Thompson’s calm, moderated erudition about technology comes from his extensive experience in the IT industry in varying capacities – and from a childhood without computers. “I was born in 1960. So I grew up before there were computers around. Indeed, I never touched one at school.” It was not until his third year at Cambridge University, when he was running experiments in psychology, that he first touched a computer. He says that in many ways those first experiences formed his mindset about computers, and that view – computers are there to perform a useful function – has stayed with him for over 25 years.

Mr. Thompson went on to earn a Master’s-level diploma in Computer Science from Cambridge University in 1983. After graduating, he joined a small computer firm and then left to join Acorn Computers Limited, creators of the successful BBC Micro, as a database consultant. He left because “they wanted to promote me” and joined Instruction Set as a courseware developer. After a stint with PIPEX, he found himself running the Guardian’s New Media division a decade or so ago, when the Internet was still in its infancy. After a few years managing the Guardian’s online site, Mr. Thompson left to pursue writing and commenting full time. It is in writing and providing astute analysis on technology-related issues that Mr. Thompson finds himself today.

I interviewed Mr. Thompson via Skype about a month ago. The interview covered a wide range of issues, and given that diversity I have chosen to publish an edited transcript rather than an essay-style thematic story. Here is the transcript, edited for both style and content.

The technology opinion marketplace seems to be split between evangelists and Luddites. Your writing, on the other hand, manifests a broad range of experience; it reflects a moderated enthusiasm about what computers can do. I find it an astute and yet optimistic account.

I am fundamentally optimistic about the possibilities of this technology that we have invented to both make the world a better place and to help us recover from some of the mistakes of the past and make better decisions as a species, not just as a society, in the future. It informs my writing. It informs as well the things that I am interested in and the areas that I want to explore.

Our relationship with machines was once fraught with incomprehension and fear. Machines epitomized the large mechanized state and its dominance over the natural world. There was a spate of movies somewhere in the 70s when refrigerators and microwaves rose up to attack us. Over the past decade or so, our relationship has transformed to such a degree that not only do we rely on fairly sophisticated machines to do our daily chores, but we also look at machines as a way to achieve utopian ideals. Fred Turner, professor of Communication at Stanford, in “From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism” traces this rise of digital utopianism to American counterculture. How do you think the relationship evolved?

The way you phrase the question leads me to think that perhaps it was the exaggerated claims of the Artificial Intelligence community that led people to worry that computers would reach the point at which they would take over. And the complete failure of AI to deliver on any of its promises has led us to a more phlegmatic and accepting attitude: these are just machines; we don’t know how to make them clever enough to threaten us, and therefore we can just get on with using them.

We know that Skynet is not going to launch nuclear weapons at us in a Terminator world, and so we can focus on the fact that the essential humanity of the Terminator itself, certainly in the second and third movies, is a source of redemption. We can actually feel positive about the machines instead of negative about them.

When you have a computer around that crashes constantly, that is infected with viruses and malware, that doesn’t do what it is supposed to do, you are not afraid of it. You are irritated by it. And you treat it as you would a recalcitrant child – something you might love and care for, that has some value, but that is certainly not going to threaten you. And then we can use the machines. That actually allows us to focus on what you call the utopian or altruistic aspects. It allows us to see machines in a much broader context, one which recognizes that human agency is behind them.

The dystopian stories rely on machines getting out of control, but in fact we live in a world in which machines are being used negatively by people – by governments, by corporations, and by individuals. The failure of AI allows us to accept that – to reject the systems people have built without rejecting the machines themselves.

And for those who actually believe that information and communication technologies are quite positive, it allows us to focus on what could be done for good instead of just dismissing all of the technology as bad. It allows us to take a much more complex and nuanced point of view.

You make an excellent point. I see where you are coming from.

In a sense, it is where I am coming from, which is: I am a liberal humanist atheist. I believe we make this world, that we have the potential to make it better, and that the technologies we invent should be part of that process.

Just as I am politically a socialist – I believe in equality of opportunity and social justice and all those things – [similarly] I have a humanist approach to technology: what we have made, we can make ‘do good’ for us.