5 is smaller than 1.9!

10 Feb

“In the late 1990s, the leading methods caught about 80 percent of fraudulent transactions. These rates improved to 90–95 percent in 2000 and to 98–99.9 percent today. That last jump is a result of machine learning; the change from 98 percent to 99.9 percent has been transformational.

An improvement from 85 percent to 90 percent accuracy means that mistakes fall by one-third. An improvement from 98 percent to 99.9 percent means mistakes fall by a factor of twenty. An improvement of twenty no longer seems incremental.”


From Prediction Machines by Agrawal, Gans, and Goldfarb.

One way to compare the improvements is to compare the differences in percentage points: 5 and 1.9. That is what I would have done. That is because, conditional on the same difference in percentage points, the lower the base, the greater the multiplicative factor, which makes multiplicative comparisons a cheap way of making small improvements look big. Even then, for consistency, the comparison should have been between percentage increases in accuracy: (90 − 85)/85 versus (99.9 − 98)/98. But AGG had to flip the estimand to percentage errors to make the latter relative change look better.
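
To make the arithmetic concrete, here is a small Python sketch that computes all three summaries (percentage-point gain, relative gain in accuracy, and the error-reduction factor) from the numbers in the quote:

```python
# Three ways to summarize the same improvements (numbers from the quote).

def pct_point_gain(old, new):
    """Difference in accuracy, in percentage points."""
    return new - old

def relative_accuracy_gain(old, new):
    """Relative increase in accuracy."""
    return (new - old) / old

def error_reduction_factor(old, new):
    """Factor by which the error rate (100 - accuracy) shrinks."""
    return (100 - old) / (100 - new)

for old, new in [(85, 90), (98, 99.9)]:
    print(f"{old}% -> {new}%: {pct_point_gain(old, new):.1f} points, "
          f"{relative_accuracy_gain(old, new):.1%} relative gain in accuracy, "
          f"errors fall by {error_reduction_factor(old, new):.3g}x")

# 85% -> 90%: 5.0 points, 5.9% relative gain in accuracy, errors fall by 1.5x
# 98% -> 99.9%: 1.9 points, 1.9% relative gain in accuracy, errors fall by 20x
```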

Disgusting

7 Feb

Vegetarians’ stomachs turn at the thought of eating the meat of a cow that has died from a heart attack. The disgust that vegetarians experience is not principled. Nor is the greater opposition to homosexuality that people espouse when they are exposed to a foul smell. Haidt uses provocative examples like these to expose chinks in how we think about what is moral and what is not.

Knowing that what we find disgusting may not always be “disgusting,” that our moral reasoning can be flawed, is a superpower. Because thinking that you are in the right makes you self-righteous. It makes you think that you know all the facts, that you are somehow better. Often, we are not. If we stop conflating disgust with being in the right or indeed, with being right, we shall all get along a lot better.

The Best We Can Do is Responsibly Answer the Questions that Life Asks of Us

5 Feb

Faced with mass murder, it is hard to escape the conclusion that life has no meaning. For how could life have meaning when lives matter so little? As an Austrian Jew in a concentration camp, Viktor Frankl had to confront that question.

In Man’s Search for Meaning, Frankl gives two answers to the question. His first answer is a reflexive rejection of the meaninglessness of life. Frankl claims that life is “unconditional[ly] meaningful.” There is something to that, but not enough to hang on to for too long. It is also not his big point.

Instead, Frankl has a more nuanced point: “If there is … meaning in life …, then there must be … meaning in suffering.” (Because suffering is an inescapable part of life.) The meaning of suffering, according to him, lies in how we respond to it. Do we suffer with dignity? Or do we let suffering degrade us? The broader, deeper point that underpins the claim is that we cannot always choose our conditions, but we can choose the “stand [we take] toward the conditions.” And life’s meaning is stored in the stand we take, in how we respond to the questions that “life asks of us.”

Not only that: responsibly answering the questions that life asks of us is the full extent of human achievement. This means two things. First, questions about human achievement can only be answered within the context of one’s life. And second, in responsibly answering the questions that life asks of us, we attain all that humans can ever attain. In a limited life circumscribed by unavoidable suffering, for instance, the peak of human achievement is keeping one’s dignity. If your life offers you more, then, by all means, do more: derive meaning from action, from beauty, and from love. But also take solace in the fact that we can achieve the greatest heights a human can achieve in how we respond to unavoidable suffering.

Ruined by Google

13 Jan

Information on tap is a boon. But if it means that the only thing we will end up knowing (having in our heads) is where to go to find the information, it may also be a bane.

Accessible stored cognitions are vital. They allow us to verify and contextualize new information. If we need to look everything up, then, out of laziness or forgetfulness, we will end up accepting some false statements that we could have easily refuted had the relevant information been in our memory, or we will fail to contextualize some statements appropriately.

Information on tap also produces another malaise. It changes the topography of what we know. As search costs go down, people move from learning about a topic systematically to narrowly searching for whatever they need to know, now. And knowledge built on narrow searches looks like Swiss cheese.

Worse, many a time, when people find the narrow thing they are looking for, they think that that is all there is to know. For instance, in computer science and machine learning, people can increasingly execute sophisticated things without knowing much. (And that is mostly a good thing.) But getting something to work, by copying the code from StackOverflow, gives people the sense that they “know.” And when we think we know, we also “know” that there is not much more to know. Thus, information on tap reduces the horizons of our knowledge about our ignorance.

In becoming better at fulfilling our narrower needs, lower search costs may be killing expertise. And that is mostly a bad thing.

Experiments Without Control

4 Jan

Say that you are in the search engine business. And say that you have built a model that estimates how relevant an ad is based on the ‘context’: the search query, the previous few queries, the kind of device, the location, and such. Now let’s assume that for context X, the rank-ordered list of ads based on expected profit is: product A, product B, and product C. Say that you want to estimate how effective an ad for product A is in driving sales of product A. One conventional way to estimate this is to randomize at serve time: for context X, serve half the people an ad for product A and serve the other half no ad. But if it is true (and you can verify this) that an ad for product B doesn’t cause people to buy product A, then you can replace the ‘no ad’ control, which earns you nothing, with an ad for product B. With this, you can estimate the effectiveness of the ad for product A while sacrificing the least amount of revenue. Better yet, if it is also true that an ad for product A doesn’t cause people to buy product B, you can get an estimate of the efficacy of the ad for product B at the same time.
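
Here is a minimal sketch of the design, with made-up data structures; it assumes, as above, that the ad for B doesn’t cause purchases of A (and vice versa), so each arm doubles as the other’s control:

```python
import random

def assign_arm(user_id: int) -> str:
    """Randomly split context-X traffic between the two ads at serve time."""
    rng = random.Random(user_id)  # deterministic per user, for illustration
    return "ad_A" if rng.random() < 0.5 else "ad_B"

def estimate_effects(logs: list[tuple[str, int, int]]) -> tuple[float, float]:
    """logs: (arm, bought_A, bought_B) tuples, one per impression."""
    arm_a = [row for row in logs if row[0] == "ad_A"]
    arm_b = [row for row in logs if row[0] == "ad_B"]

    def rate(rows, idx):
        return sum(row[idx] for row in rows) / len(rows)

    # Effect of the ad for A on sales of A: purchase rate of A in the A arm
    # minus the rate in the B arm, which serves as the "no ad for A" control.
    effect_a = rate(arm_a, 1) - rate(arm_b, 1)
    # Symmetrically, the effect of the ad for B on sales of B.
    effect_b = rate(arm_b, 2) - rate(arm_a, 2)
    return effect_a, effect_b
```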

The Benefit of Targeting

16 Dec

What is the benefit of targeting? Why (and when) do we need experiments to estimate the benefits of targeting? And what is the right baseline to compare against?

I start with a business casual explanation, using examples to illustrate some of the issues at hand. Later in the note, I present a formal explanation that describes the assumptions precisely and clarifies under what conditions targeting may be a reasonable thing to do.

Business Casual

Say that you have some TVs to sell. And say that you could show an ad about the TVs to everyone in the city for free. Your goal is to sell as many TVs as possible. Does it make sense for you to build a model to pick out people who would be especially likely to buy a TV and only show the ad to them? No, it doesn’t. Unless ads make people less likely to purchase TVs, you are always better off reaching everyone.

You are wise. You use common sense and sell more TVs than the guy who spent a bunch of money building a model and sold fewer. You make tons of money. And you use the money to buy Honda and Mercedes dealerships. You still retain the magical power of being able to show ads to everyone for free. Your goal is to maximize profits. And selling a Mercedes nets you more profit than selling a Honda. Should you use a model to show some people ads for Mercedes and other people ads for Honda? The answer is still no. Under assumptions likely to hold, the optimal strategy is to show an ad for Mercedes first and then an ad for Honda. (You can show the Honda ad first if people who want to buy a Mercedes won’t buy a cheaper car just because they see an ad for it first.)

But what if you are limited to only one ad? What would you do? In that case, a model may make sense. Let’s see how things may look with some fake data. Let’s compare the outcomes of four strategies: two model-based targeting strategies and two target-everyone-with-one-ad strategies. To make things easier, let’s assume that selling a Mercedes nets ten units of profit and selling a Honda nets five units of profit. Let’s also assume that people will only buy something if they see an ad for their preferred product.
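
The full fake-data comparison is in the linked pdf; here is a toy version with invented numbers, under the post’s assumption that people buy only when shown an ad for their preferred product:

```python
# Toy version of the four-strategy comparison (all numbers invented).

N = 1000           # audience size
P_MERC = 0.2       # fraction who would buy a Mercedes if shown its ad
P_HONDA = 0.5      # fraction who would buy a Honda if shown its ad
PROFIT_MERC = 10   # units of profit per Mercedes sold
PROFIT_HONDA = 5   # units of profit per Honda sold
ACCURACY = 0.8     # hypothetical model: P(predicted preference is correct)

# Target-everyone strategies: one ad, shown to all.
everyone_merc = N * P_MERC * PROFIT_MERC     # 2000
everyone_honda = N * P_HONDA * PROFIT_HONDA  # 2500

# Model-based strategies: each person sees the ad the model predicts.
# A perfect model recovers every sale; an imperfect model loses the sales
# of everyone it misclassifies (a simplification).
perfect_model = N * (P_MERC * PROFIT_MERC + P_HONDA * PROFIT_HONDA)  # 4500
imperfect_model = ACCURACY * perfect_model                           # 3600

print(everyone_merc, everyone_honda, perfect_model, imperfect_model)
```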

Continue reading here (pdf).

Making an Impression: Learning from Google Ads

31 Oct

Broadly, Google Ads works as follows:

  1. Advertisers create an ad, choose keywords, and make a bid on cost-per-click (CPC). (You can also bid on cost-per-view and cost-per-impression, but we limit our discussion to CPC.)
  2. The Google Ads account team vets whether the keywords are related to the product being advertised.
  3. People see the ad from the winning bid when they search for a term that includes the keyword or when they browse content related to the keyword (some Google Ads are shown on sites that use Google AdSense).

There is a further nuance to the last step. Generally, on popular keywords, Google has thousands of candidate ads to choose from. And Google doesn’t simply choose the ad with the winning bid. Instead, it uses data to choose the ad (or the few ads) that yields the most profit (predicted Click-Through Rate (CTR) * bid). (Google probably has a more complex user utility function and doesn’t show ads below some floor on predicted CTR * bid.) In all, whom Google shows an ad to depends on the predicted CTR and the money it will make per click.
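
Here is a hypothetical sketch of that selection rule (score candidates by predicted CTR * bid, apply a floor, serve the top ad); the actual auction is surely more complex:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float            # cost-per-click bid
    predicted_ctr: float  # model's predicted click-through rate

MIN_SCORE = 0.01  # made-up floor on predicted CTR * bid

def pick_ad(candidates: list[Ad]) -> Ad | None:
    """Serve the eligible ad with the highest expected revenue per impression."""
    eligible = [ad for ad in candidates
                if ad.predicted_ctr * ad.bid >= MIN_SCORE]
    return max(eligible, key=lambda ad: ad.predicted_ctr * ad.bid,
               default=None)
```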

Given this setup, we can reason about the audience for an ad. First, the higher the bid, the broader the audience. Second, it is not clear how well Google can predict CTR per ad conditional on the keyword bid, especially when the ad run is small. If that is so, we can expect Google to show the ad with the highest bid to a roughly random subset of people searching for the keyword or browsing related content. Under such conditions, you can use the total number of impressions per demographic group as an indicator of that group’s interest in the keyword. For instance, say you make the highest bid on the keyword ‘election’ and find that your ad makes 10x as many impressions among people 65 and older as among people between the ages of 18 and 24. Under some assumptions (similar use of ad blockers, similar rates of clicking ads conditional on relevance, similar utility functions, that is, younger people are not more sensitive to irritation from irrelevant ads than older people, and so on), you can infer the relative interest of the two groups in elections.
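
As a toy illustration (made-up numbers), the inference itself is just a ratio of impression counts, and it is only valid under the assumptions listed above:

```python
# Under the stated assumptions (similar ad-blocker use, click rates
# conditional on relevance, and utility functions across groups),
# relative impression counts proxy relative interest.

def relative_interest(impressions_a: int, impressions_b: int) -> float:
    """Ratio of impressions the same highest-bid ad earns in two groups."""
    return impressions_a / impressions_b

print(relative_interest(50_000, 5_000))  # 10.0 -> 65+ look ~10x as interested
```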

The other case where you can infer relative interest in a keyword (topic) from impressions is when ad markets are thin. For common keywords like ‘elections,’ Google generally has thousands of candidate ads for national campaigns. But if you only want to show your ad in a small geographic area, or on an infrequently searched term, the candidate set can be pretty small. If your ad is the only one, then it will be shown wherever it exceeds some minimum threshold of predicted CTR * bid. Assuming a high enough bid, you can take the total number of impressions of your ad as a proxy for how often people searched for the term or browsed related content.

With all of this in mind, I discuss results from a Google Ads campaign. More here.

The Value of Predicting Bad Things

30 Oct

Foreknowledge of bad things is useful because it gives us an opportunity to a. prevent them, and b. plan for them.

Let’s refine our intuitions with a couple of concrete examples.

Many companies work super hard to predict customer ‘churn’: which customers will stop using a product over a specific period (which can be the entire lifetime). If you know in advance who is going to churn, you can: a. work to prevent it, b. make better investment decisions based on expected cash flow, and c. make better resource-allocation decisions.

Users “churn” because they don’t think the product is worth the price, which may be because a) they haven’t figured out a way to use the product optimally, b) a better product has come on the horizon, or c) their circumstances have changed. You can deal with this by sweetening the deal: you can prevent users from abandoning your product by offering them discounts. (It is useful to experiment to learn the precise demand elasticity at various predicted levels of churn.) You can also give discounts in the form of offering some premium features for free. Among people who don’t use the product much, you can run campaigns to help them use the product more effectively.
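
Here is a minimal sketch of what acting on such predictions might look like; the thresholds and offers are invented, and in practice you would tune them with the experiments mentioned above:

```python
# Pick an intervention based on predicted churn risk and product usage.
# Thresholds and offers are made up; tune them by experimenting to
# estimate demand elasticity at each predicted level of churn.

def retention_offer(churn_prob: float, heavy_user: bool) -> str:
    if churn_prob < 0.2:
        return "no action"
    if not heavy_user:
        return "usage campaign"        # help them use the product effectively
    if churn_prob < 0.5:
        return "free premium feature"  # a discount in kind
    return "price discount"

for prob, heavy in [(0.1, True), (0.4, False), (0.6, True)]:
    print(f"churn={prob:.0%}, heavy_user={heavy} -> {retention_offer(prob, heavy)}")
```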

If you can predict cash flow, you can optimally trade off risk so that you always have cash on hand to pay your obligations. Churn predictions can also help you with resource allocation. They can mean that you need to temporarily hire more customer success managers. Or they can mean that you need to lay off some people.

The second example is from patient care. If you can reasonably predict that someone will be seriously sick in a year’s time (and in many cases you can), you can use that prediction to prioritize patient care and, again, to plan investments (if you are an insurance company) and resources (if you are a health services company).

Lastly, as is obvious, the earlier you learn, the better you can plan. But generally, you need to trade off noise in prediction against head start, because things further away are harder to predict. The noise-headstart trade-off should be made thoughtfully and amended based on data.

The Other Side

23 Oct

Samantha Laine Perfas of the Christian Science Monitor interviewed me about the gap between perceptions and reality for her podcast ‘Perception Gaps’ over a month ago. You can listen to the episode here (Episode 2).

The Monitor has also made the transcript of the podcast available here. Some excerpts:

“Differences need not be, and we don’t expect them to be, reasons why people dislike each other. We are all different from each other, right. …. Each person is unique, but we somehow seem to make a big fuss about certain differences and make less of a fuss about certain other differences.”

One way to fix it:

“If you know so little and assume so much, … the answer is [to] simply stop doing that. Learn a little bit, assume a little less, and see where the conversation goes.”

The interview is based on the following research:

  1. Partisan Composition (pdf) and Measuring Shares of Partisan Composition (pdf)
  2. Affect Not Ideology (pdf)
  3. Coming to Dislike (pdf)
  4. All in the Eye of the Beholder (pdf)

Related blog posts and think pieces:

  1. Party Time
  2. Pride and Prejudice
  3. Loss of Confidence
  4. How to read Ahler and Sood

Loss of Confidence

21 Oct

We all overestimate how much we know. If the aphorism “the more you know, the more you know that you don’t know” is true, then how else could it be? But knowing more is not the only path to learning about our ignorance. Mistakes are another. When we make mistakes, we get to adjust our parameters (our understanding) of how much we know. Overconfident people, however, adjust their parameters less when they make mistakes. They don’t learn as much from their mistakes because they externalize the source of errors or don’t acknowledge the mistakes, believing it is others who are wrong, not them. So the most ignorant (the most confident) very likely make the least progress in learning about their ignorance when they make mistakes. (Ignorance is just one reason people overestimate how much they know. There are many other factors, including personality.) But if you know this, you can fix it.