Conscious Uncoupling: Separating Compliance from Treatment

18 Sep

Let’s say that we want to measure the effect on turnout of a phone call encouraging people to register to vote. Let’s define compliance as a person taking the call (as in Gerber and Green, 2000, etc.). And let’s assume that the compliance rate is low. The traditional way to estimate the effect of the phone call is via an RCT: randomly split the sample into Treatment and Control, call everyone in the Treatment Group, wait till after the election, and calculate the difference in the proportion who voted. Assuming that the treatment doesn’t affect non-compliers, etc., we can also estimate the Complier Average Treatment Effect.

But one way to think about non-compliance in the example above is as follows: “Buddy, you need to reach these people some other way.” That is a useful thing to know, but it is an observational point. You can fit a predictive model for who picks up phone calls and who doesn’t. The experiment is useful in answering how much you can persuade the people you do reach on the phone. And you can learn that by randomizing conditional on compliance.

For such cases, here’s what we can do:

  1. Call a reasonably large random sample of people. Learn a model for who complies.
  2. Use the model to target people who are likelier to comply and randomize after a person picks up. (A sketch follows.)
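
Here is a minimal sketch of that two-step design in R. The data are simulated and the variable names (picked_up, age, past_votes) are illustrative, not from any real study; in practice, the treatment-versus-control split is applied at the moment a person picks up.

# Step 1: call a random pilot sample, record who picks up, and fit a compliance model.
set.seed(1)
n_pilot <- 2000
pilot <- data.frame(age = rnorm(n_pilot, 45, 15),
                    past_votes = rbinom(n_pilot, 4, .5))
pilot$picked_up <- rbinom(n_pilot, 1, plogis(-2 + .03 * pilot$age + .3 * pilot$past_votes))
compliance_model <- glm(picked_up ~ age + past_votes, data = pilot, family = binomial)

# Step 2: score the rest of the population, call the likeliest compliers, and
# randomize message vs. control among those who pick up (pre-drawn here for simplicity).
population <- data.frame(age = rnorm(10000, 45, 15), past_votes = rbinom(10000, 4, .5))
population$p_comply <- predict(compliance_model, newdata = population, type = "response")
to_call <- population[order(-population$p_comply), ][1:1000, ]
to_call$arm <- sample(c("treatment", "control"), nrow(to_call), replace = TRUE)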

More generally, the Average Treatment Effect is useful for global rollouts of one policy. But when is that a good counterfactual to learn? Tautologically, when that is all you can do or when it is the optimal thing to do. If we are not in that world, why not learn about—and I am using an example to be concrete—a) what is a good way to reach me? b) what message most persuades me? For instance, for political campaigns, the optimal strategy is to estimate the cost of reaching people by phone, mail, f2f, etc., estimate the probability of reaching each using each of the media, estimate the payoff for different messages for different kinds of people, and then target using the medium and the message that delivers the greatest benefit. (For a discussion about targeting, see here.)

Technically, a message could have the greatest payoff for the person who is least likely to comply. And the optimal strategy could still be to call everyone. To learn treatment effects among people who are unlikely to comply (using a particular method), you will need to build experiments to increase compliance. More generally, if you are thinking about multi-armed bandits or some such dynamic learning system, the insight is to have treatment arms around both compliance and message. The other general point, implicit in the essay, is that rather than being fixated on calculating the ATE, we should be fixated on an optimization objective, e.g., the additional number of people persuaded to turn out to vote per dollar.

Sidebar

It is useful to think about the cost and benefit of an incremental voter. Let’s say you are a strategist for party p given the task of turning out voters. Here’s one way to think about the problem:

  1. The benefit of turning out a voter in an election is not limited to the election. It also increases the probability of them turning out in the next election. The benefit is pro-rated by the voter’s probability of voting for party p.

  2. The cost of turning out a voter is the sum of targeting costs and persuasion costs. The targeting costs include the cost of identifying voters who are unlikely to vote unless contacted but who would likely vote for party p; you could also build a model for persuadability and target further based on that. The persuasion costs include the cost of contacting the voter and the cost of persuading the voter.

  3. The cost of turning out a voter is likely greater than the cost of voting. For instance, some campaigns spend $150, and others think it is worth spending as much as $1,000. If cash transfers were allowed, we should be able to get people to vote at much lower prices. But given that cash transfers aren’t allowed, the only option is persuasion, and that is generally expensive. (A toy calculation follows.)
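
A toy expected-value calculation in R, with made-up numbers for every quantity (the dollar values and probabilities below are placeholders, not estimates):

# Net value of an incremental turned-out voter. The benefit is pro-rated by the
# probability the voter supports party p and includes a bump in the probability
# of voting in the next election. All numbers are hypothetical.
value_this_election <- 100   # value to party p of one extra vote now
value_next_election <- 100   # value of a vote in the next election
p_support           <- .6    # probability the voter supports party p
p_habit             <- .2    # bump in the probability of voting next time
targeting_cost      <- 5     # modeling and list costs per turned-out voter
persuasion_cost     <- 145   # contact and persuasion costs per turned-out voter

benefit <- p_support * (value_this_election + p_habit * value_next_election)
cost    <- targeting_cost + persuasion_cost
benefit - cost               # net value of the incremental voter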

Prediction Errors: Using ML For Measurement

1 Sep

Say you want to measure how often people visit pornographic domains over some period. To measure that, you build a model to predict whether or not a domain hosts pornography. And let’s assume that for the chosen classification threshold, the False Positive rate (FP) is 10% and the False Negative rate (FN) is 7%. Below, we discuss some of the concerns with using scores from such a model and ways to address the issues.

Let’s get some notation out of the way. Let’s say that we have n users and that we can iterate over them using i. Let’s denote the total number of unique domains—domains visited by any of the n users at least once during the observation window—by k. And let’s use j to iterate over the domains. Let’s denote the number of visits to domain j by user i by c_{ij} \in \{0, 1, 2, \ldots\}. And let’s denote the total number of unique domains a person visits (\sum_j 1(c_{ij} \geq 1)) by t_i. Lastly, let’s denote the predicted labels for whether or not each domain hosts pornography by p, so we have p_1, ..., p_j, ..., p_k.

Let’s start with a simple point. Say there are 5 domains, all predicted to host pornography: p_1 = p_2 = p_3 = p_4 = p_5 = 1. Say user one visits the first three sites once and user two visits all five sites once. Given that 10% of the positive predictions are false positives, the expected measurement error in user one’s score = 3 * .10 and the expected measurement error in user two’s score = 5 * .10. The general point is that the expected number of false positives increases with the number of predicted 1s. And the expected number of false negatives increases with the number of predicted 0s.
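
The same arithmetic in a few lines of R. The error rates and per-user predicted labels are the illustrative numbers from above; note that the false positive rate is used the way the example uses it, as the share of positive predictions that are wrong.

fp <- .10   # share of positive predictions that are wrong
fn <- .07   # share of negative predictions that are wrong

p_user1 <- c(1, 1, 1)         # predicted labels for the domains user one visits
p_user2 <- c(1, 1, 1, 1, 1)   # predicted labels for the domains user two visits

# Expected false positives grow with the number of predicted 1s;
# expected false negatives grow with the number of predicted 0s.
expected_error <- function(p) sum(p == 1) * fp + sum(p == 0) * fn
expected_error(p_user1)   # 0.3
expected_error(p_user2)   # 0.5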

Read more here.

Comparing Ad Targeting Regimes

30 Aug

Ad targeting is often useful when you have multiple things to sell (opportunity cost), when the cost of running an ad is non-trivial, when an irrelevant ad reduces your ability to reach the user later, or any combination of the above. (For a more formal treatment, see here.)

But say that you want proof—you want to estimate the benefit of targeting. How would you do it?

When there is one product to sell, some people have gone about it as follows: randomize to treatment and control, show the ad to a random subset of respondents in the control group and to an equal number of respondents picked by a model in the treatment group, and compare the outcomes of the two groups (it reduces to comparing subsets unless there are spillovers). This experiment can be thought of as a way to estimate how to spend a fixed budget optimally. (In this case, the budget is the number of ads you can run.) But if you were interested in finding out whether a budget allocated by a model does better than, say, random allocation, you don’t need an experiment (unless there are spillovers). All you need to do is show the ad to a random set of users. For each user, you know whether or not they would have been selected to see an ad by the model. And you can use this information to calculate payoffs for the respondents chosen by the model and for the randomly selected group.

Let me expand for clarity. Say that you can measure profit from ads using CTR. Say that we have built two different models for selecting people to whom we should show ads—Model A and Model B. Now say that we want to compare which model yields a higher CTR. There are four potential scenarios for the selection of a respondent by the models:

| model_a | model_b |
|---------|---------|
| 0       | 0       |
| 1       | 0       |
| 0       | 1       |
| 1       | 1       |

For CTR, the 0-0 cell adds no information: CTR is a probability conditional on being shown the ad. To measure which of the models is better, draw a fixed-size random sample of users picked by model_a and another random sample of the same size from users picked by model_b and compare CTRs. (The same user can be picked by both models. It doesn’t matter.)
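
A sketch of that comparison in R, assuming we logged a period in which the ad was shown to a random set of users. The column names (model_a, model_b, clicked) and the simulated click rates are illustrative.

# Log of users shown the ad at random, with each model's (hypothetical) picks.
set.seed(7)
n <- 100000
log_df <- data.frame(model_a = rbinom(n, 1, .3),
                     model_b = rbinom(n, 1, .3))
log_df$clicked <- rbinom(n, 1, .02 + .03 * log_df$model_a + .05 * log_df$model_b)

# Fixed-size random samples of the users each model would have picked.
k <- 5000
sample_a <- log_df[sample(which(log_df$model_a == 1), k), ]
sample_b <- log_df[sample(which(log_df$model_b == 1), k), ]
mean(sample_a$clicked)   # CTR among model A's picks
mean(sample_b$clicked)   # CTR among model B's picks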

Now that we know what to do, let’s understand why experiments are wasteful. The heuristic account is as follows: experiments are there to compare ‘similar people.’ When estimating allocative efficiency of picking different sets of people, we are tautologically comparing different people. That is the point of the comparison.

All this still leaves the question of how we would measure the benefit of targeting. If you had only one ad to run and wanted to choose between showing it to everyone versus showing it to fewer people, then show the ad to everyone and compare the profits from the subset of users the model would have selected with the profits from showing the ad to everyone. Generally, showing an ad to everyone will win.

If you had multiple ads, you would need to randomize. Assign each person in the treatment group to a targeted ad. In the control group, you could show an ad for a random product, or you could show an ad for the one product that yields the maximum revenue. Pick whichever of the two control strategies yields the higher number as the benchmark to compare against.

What’s Relevant? Learning from Organic Growth

26 Aug

Say that we want to find people to whom a product is relevant. One way to do that is to launch a small campaign advertising the product and learn from the people who click on the ad, or better yet, learn from the people who not only click on the ad but also go and try out the product and end up using it. But if you don’t have the luxury of running a small campaign and waiting a while, you can learn from organic growth.

Conventionally, people learn from organic growth by posing it as a supervised problem. And they generate the labels as follows: people who have ‘never’ (mostly: in the last 6–12 months) used the product are labeled as 0 and people who “adopted” the product in the latest time period, e.g., over the last month, are labeled 1. People who have used the product in the last 6–12 months or so are filtered out.

There are three problems with generating labels this way. First, not all the people who ‘adopt’ a product continue to use it. Many of the people who try it find that it is not useful or find the price too high and abandon it. This means that a lot of 1s are mislabeled. Second, the cleanest 1s are the people who ‘adopted’ the product some time ago and have continued to use it since. Removing those is thus a bad idea. Third, the good 0s are those who tried the product but didn’t persist with it, not those who never tried the product. Generating the labels this way (1s are people who tried the product and stuck with it; 0s are people who tried it and abandoned it) also mitigates one of the significant problems with learning from organic growth: people who organically find a product are different from those who don’t. Here, you are subsetting on the kinds of people who found the product, except that some found it useful and others did not. This empirical strategy has its problems, but it is distinctly better than the conventional approach.
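
One way to operationalize that label scheme in R, using a hypothetical usage log with one row per user (first_use, last_use). The 30-day activity window and the column names are arbitrary choices for illustration.

library(dplyr)

today <- as.Date("2018-08-26")
usage <- data.frame(user_id   = 1:5,
                    first_use = as.Date(c(NA, "2018-01-10", "2018-07-30", "2017-11-02", "2018-08-01")),
                    last_use  = as.Date(c(NA, "2018-02-01", "2018-08-20", "2018-08-25", "2018-08-05")))

labels <- usage %>%
  mutate(label = case_when(
    !is.na(first_use) & last_use >= today - 30 ~ 1L,   # tried it and still using it
    !is.na(first_use) & last_use <  today - 30 ~ 0L,   # tried it and abandoned it
    TRUE ~ NA_integer_)) %>%                           # never tried it: drop, not 0
  filter(!is.na(label))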

Quality Data: Plumbing ML Data Pipelines

6 Aug

What’s the difference between a scientist and a data scientist? Scientists often collect their own data, and data scientists often use data collected by other people. That is part jest but speaks to an important point. Good scientists know their data. Good data scientists must know their data too. To help data scientists learn about the data they use, we need to build systems that give them good data about the data. But what is good data about the data? And how do we build systems that deliver that? Here’s some advice (tailored toward rectangular data for convenience):

  • From Where, How Much, and Such
    • Provenance: how were each of the columns in the data created (obtained)? If the data are derivative, find out the provenance of the original data. Be as concrete as possible, linking to scripts, related teams, and such.
    • How frequently are the data updated?
    • Cost per unit of data, e.g., a cell in rectangular data.
    Both the frequency with which the data are updated and the cost per unit of data may change over time. Provenance may change as well: a new team (or person) may start managing the data. So the person who ‘owns’ the data must come back to these questions every so often. Come up with a plan.
  • What? To know what the data mean, you need a data dictionary. A data dictionary explains the key characteristics of the data. It includes:
    1. Information about each of the columns in plain language.
    2. How were the data collected? For instance, if you conducted a survey, you need the question text and the response options (if any) that were offered, along with the ‘mode’, where in the sequence of questions the item appeared, whether it was alone on the screen, etc.
    3. Data type
    4. How (if at all) are missing values generated?
    5. For integer columns, it gives the range, sd, mean, median, n_0s, and n_missing. For categorical, it gives the number of unique values, what each label means, and a frequency table that includes n_missing (if missing can be of multiple types, show a row for each).
    6. The number of duplicate rows, whether duplicates are allowed, and why you would see them.
    7. Number of rows and columns
    8. Sampling
    9. For supervised models, store correlation of y with key x_vars
  • What If? What if you have a question? Who should you bug? Who ‘owns’ the ‘column’ of data?

Store these metadata in JSON so that you can validate future ingests against them. Produce the JSON for each update. You can then flag when data are some s.d. above or below the last ingest.
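
A minimal sketch of that check in R with jsonlite. The file name, the 3-s.d. threshold, and the use of mtcars as a stand-in ingest are all illustrative.

library(jsonlite)

# Per-column summaries for the numeric columns of an ingest.
summarize_columns <- function(df) {
  lapply(df[sapply(df, is.numeric)], function(x)
    list(n_missing = sum(is.na(x)),
         mean = mean(x, na.rm = TRUE),
         sd = sd(x, na.rm = TRUE)))
}

previous <- summarize_columns(mtcars)                # stand-in for the last ingest
write_json(previous, "ingest_previous.json", auto_unbox = TRUE)

current <- summarize_columns(transform(mtcars, mpg = mpg + 25))   # a drifted ingest
for (col in names(current)) {
  z <- abs(current[[col]]$mean - previous[[col]]$mean) / previous[[col]]$sd
  if (z > 3) message("Flag: ", col, " moved ", round(z, 1), " s.d. since the last ingest")
}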

Store all this metadata with the data. For example, you can extend the dataframe class in Scala to make it so.

Auto-generate reports in markdown with each ingest.

In many ML applications, you are also ingesting data back from the user. So you need the same as above for the data you are getting from the user (and some of it at least needs to match the stored data). 

For any derived data, you need the scripts and the logic, ideally in a notebook. This is your translation function.

Where possible, follow the third normal form of databases. Only store translations when translation is expensive. Even then, think twice.

Lastly, some quality control. Periodically sit down with your team to see if you should see what you are seeing. For instance, if you are in the survey business, do the completion times make sense? If you are doing supervised learning, get a random sample of labels and assess their quality. You can also assess the quality by looking at the errors in classification that your supervised model makes. Are the errors because the data are mislabeled? Keep iterating. Keep improving. And keep cataloging those improvements. You should be able to ‘diff’ data collection, not just numerical summaries of the data. And with the method I highlight above, you should be able to.

Optimal Sequence in Which to Service Orders

27 Jul

What is the optimal order in which to service orders assuming a fixed budget?

Let’s assume that we have to service orders o_1, \ldots, o_n, with the n orders indexed by i. Let’s also assume that for each service order, we know how the costs change over time. For simplicity, let’s assume that time is discrete and partitioned into units of days. If we service order o_i at time t, we expect the cost to be c_{it}. Each service order also has an expiration time, j, after which the order cannot be serviced. The cost at the expiration time, j, is the cost of failure and is denoted by c_{ij}.

The optimal sequence of servicing orders is determined by expected losses—service first the order where the expected loss is the greatest. This leaves us with the question of how to estimate the expected loss at time t. To come up with an expectation, we need to sum over some probability distribution. For order o_i at time t, we need the probability p_{is} that we would service o_i at each later time s = t+1, \ldots, j, and the corresponding cost c_{is}. So framed, the expected loss for order i at time t =
c_{it} - \sum_{s = t+1}^{j} p_{is} \, c_{is}

However, determining p_{it} is not straightforward. New items are added to the queue at t+1. On the flip side, we also get to re-prioritize at t+1. The question is whether we will get to item o_i at t+1. (Under this framing, p_{it} is 0 or 1.) For that, we need to forecast the kinds of items in the queue tomorrow. One simplification is to assume that the items in the queue today are the same as those that will be in the queue tomorrow. Then the problem reduces to estimating the cost of punting each item again tomorrow, sorting based on the costs at t+1, and checking whether we will get to clear the item. (We can forgo the simplification by forecasting our queue tomorrow, and each day after that till j for each item, and calculating the costs.)
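
A sketch of that simplified rule in R. The costs, the daily capacity, and the crude assumption that anything not cleared tomorrow fails are all illustrative simplifications, not part of the original setup.

# Assume tomorrow's queue is today's queue. An order is cleared tomorrow only if it
# ranks within capacity when tomorrow's queue is sorted by tomorrow's costs; otherwise
# we (crudely) charge it the failure cost. Service today the orders with the largest
# expected loss from deferral. All numbers are made up.
orders <- data.frame(order_id   = 1:5,
                     cost_today = c(10, 20, 15, 30, 25),
                     cost_tmrw  = c(12, 45, 16, 31, 60),
                     cost_fail  = c(100, 80, 50, 120, 90))
capacity <- 2   # orders we can service per day

rank_tmrw <- rank(-orders$cost_tmrw, ties.method = "first")
cost_if_deferred <- ifelse(rank_tmrw <= capacity, orders$cost_tmrw, orders$cost_fail)
orders$expected_loss <- cost_if_deferred - orders$cost_today

orders[order(-orders$expected_loss), ][1:capacity, ]   # service these today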

If the data are available, we can tack on the clearing time per order and get a better answer to whether we will clear o_i at time t or not.

Optimal Sequence in Which to Schedule Appointments

1 Jul

Say that you have a travel agency. Your job is to book rooms at hotels. Some hotels fill up more quickly than others, and you want to figure out which hotels to book first so that your net booking rate is as high as it can be given the staff you have.

The logic of prioritization is simple: prioritize those hotels where the expected loss if you don’t book now is the largest. The only thing we need to do is find a way to formalize the losses. Going straight to formalization is daunting. A toy example helps.

Imagine that there are two hotels, Hotel A and Hotel B. If you call 2 days or 1 day in advance, the chances of successfully booking a room are .8 and .8 for Hotel A, and .8 and .5 for Hotel B, respectively. You can only make one call a day, so it is Hotel A or Hotel B. Also assume that failing to book a room at Hotel A and failing to book a room at Hotel B cost the same.

If you were making a decision 1 day out on which hotel to call, the smart thing would be to choose Hotel A: the probability of making a booking is larger. But ‘larger’ can be formalized in terms of losses. On Day 0, the probability goes to 0. So you stand to lose .8 units with Hotel A and .5 with Hotel B. The potential loss from waiting is thus larger for Hotel A than for Hotel B.

If you were asked to choose 2 days out, which one should you choose? For Hotel A, if you forgo calling 2 days out, your chance of successfully booking a room the next day is .8. For Hotel B, the chance is .5. Let’s play out the two scenarios. If we choose to book Hotel A 2 days out and Hotel B 1 day out, our expected batting average is (.8 + .5)/2. If we choose the opposite, our batting average is (.8 + .8)/2. It makes sense to choose the latter. Framed as expected losses, we go from .8 to .8, or 0 expected loss, for Hotel A, and a .3 expected loss for Hotel B. So we should book Hotel B 2 days out.

Now that we have the intuition, let’s move to 3 days, 2 days, and 1 day out, as that generalizes nicely to k days out. To understand the logic, let’s first work out a 101 probability question. Say that you have two fair coins that you toss independently. What is the chance of getting at least one head? The potential outcomes are HH, HT, TH, and TT. The chance is 3/4, or 1 minus the chance of getting TT (two failures), i.e., 1 - .5*.5.

The 3-days-out example is next; see the table below. If you miss the chance to call Hotel A 3 days out, the expected loss is the decline in the chance of successfully booking 2 days or 1 day out. Assume that the probabilities 2 days out and 1 day out are independent, and it becomes something similar to the example about coins. The probability of successfully booking 2 days or 1 day out is thus 1 minus the probability of failing both times. Calculate the expected losses for each hotel, and now you have a way to decide which hotel to call on Day 3.

|       | 3-day | 2-day | 1-day |
|-------|-------|-------|-------|
| Hotel A | .9    | .9    | .4    |
| Hotel B | .9    | .9    | .9    |

In our example, the numbers for Hotel A and Hotel B come to 1 – (1/10)*(6/10) = .94 and 1 – (1/10)*(1/10) = .99, respectively. Based on that, we should call Hotel A 3 days out before we call Hotel B.
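
The same calculation in R, using the probabilities from the table; the object names are illustrative.

# If you skip a hotel today (3 days out), your remaining chance of booking it is
# 1 minus the product of the failure probabilities on the remaining days (assuming
# independence). Call the hotel whose remaining chance is lowest.
p_success <- rbind(hotel_a = c(day3 = .9, day2 = .9, day1 = .4),
                   hotel_b = c(day3 = .9, day2 = .9, day1 = .9))

chance_if_skipped_today <- 1 - apply(1 - p_success[, c("day2", "day1")], 1, prod)
chance_if_skipped_today                      # hotel_a = .94, hotel_b = .99
names(which.min(chance_if_skipped_today))    # call hotel_a 3 days out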

Code 44: How to Read Ahler and Sood

27 Jun

This is a follow-up to the hilarious Twitter thread about the sequence of 44s. Numbers in Perry’s 538 piece come from this paper.

First, yes, the 44s are indeed correct. (Better yet, look for yourself.) But what do the 44s refer to? 44 is the average of all the responses. When Perry writes “Republicans estimated the share at 46 percent” (we have similar language in the paper, which is regrettable as it can be easily misunderstood), it doesn’t mean that every Republican thinks so. It may not even mean that the median Republican thinks so. See OA 1.7 for medians, OA 1.8 for distributions, but see also OA 2.8.1, Table OA 2.18, OA 2.8.2, OA 2.11, and Table OA 2.23.

Key points:

1. Large majorities overestimate the share of party-stereotypical groups in the party, except for Evangelicals and Southerners.

2. Compared to what people think is the share of a group in the population, people still think the share of the group in the stereotyped party is greater. (But how much more varies a fair bit.)

3. People also generally underestimate the share of counter-stereotypical groups in the party.

Automating Understanding, Not Just ML

27 Jun

Some of the most complex parts of Machine Learning are largely automated. The modal ML person types in simple commands for very complex operations and voila! Some companies, like Microsoft (Azure) and DataRobot, also provide a UI for this. And this has generally not turned out well. Why? Because this kind of system does too little for the modal ML person and expects too much from the rest. So the modal ML person doesn’t use it. And the people who do use it, generally use it badly. The black box remains the black box. But not much is needed to place a lamp in this black box. Really, just two things are needed:

1. A data summarization and visualization engine, preferably with some chatbot feature that guides people smartly through the key points, including the problems. For instance, start with univariate summaries, highlighting ranges, missing data, sparsity, and such. Then, if it is a supervised problem, give people a bunch of loess plots or explain the ‘best fitting’ parametric approximations with y in plain English, such as, “people who eat 1 more cookie live 5 minutes shorter on average.” (A toy version of this piece follows the list.)

2. An explanation engine, including what the explanations of observational predictions mean. We already have reasonable implementations of this.
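
A toy version of the summarization piece in R, using mtcars as a stand-in dataset: univariate summaries, a loess plot, and a plain-English reading of the best-fitting linear approximation. The sentence template is, of course, just one possibility.

summary(mtcars)                 # ranges, medians, quartiles
colSums(is.na(mtcars))          # missing data per column

# Loess plot plus the best-fitting linear slope, translated into a sentence.
fit_loess <- loess(mpg ~ wt, data = mtcars)
wt_grid   <- sort(mtcars$wt)
plot(mtcars$wt, mtcars$mpg, xlab = "weight (1,000 lbs)", ylab = "mpg")
lines(wt_grid, predict(fit_loess, data.frame(wt = wt_grid)))

slope <- coef(lm(mpg ~ wt, data = mtcars))["wt"]
cat(sprintf("A car that weighs 1,000 lbs more gets about %.1f fewer mpg on average.\n", abs(slope)))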

When you have both, you have automated complexity thoughtfully, in a way that empowers people, rather than creating a system that enables people to do fancy things badly.

Talking On a Tangent

22 Jun

What is the trend over the last X months? One estimate of the ‘trend’ over the last k time periods is what I call the ‘hold up the ends’ method: look at t_k and t_0, take the difference between the two, and divide by the number of time periods. If t_k > t_0, you say that things are going up. If t_k < t_0, you say things are going down. And if they are the same, you say that things are flat. But this method can hide important non-linearity. For instance, say unemployment went down in the first 9 months and then went up over the last 3 but ended with t_k < t_0. What is the trend?

If by trend we mean the average slope over the last k time periods, and if there is no measurement error, then the ‘hold up the ends’ method is reasonable. If there is measurement error, we would want to smooth the time series first before we hold up the ends.

Often people also care about ‘consistency’ in the trend. One estimate of consistency is the proportion of consecutive time periods in which the change has the same sign. And often people care more about later time periods than earlier ones; one could build on that intuition by weighting later changes more.
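
The three quantities in R, on a made-up monthly series that mimics the example above (down for nine months, up for three). The smoothing choice (loess) and the consistency definition are one interpretation among several.

# Made-up unemployment-like series: declines for 9 months, rises for 3.
y <- c(6.1, 5.9, 5.7, 5.6, 5.4, 5.3, 5.1, 5.0, 4.9, 5.0, 5.2, 5.3)
k <- length(y)

(y[k] - y[1]) / (k - 1)                    # 'hold up the ends' slope
y_smooth <- fitted(loess(y ~ seq_along(y)))
(y_smooth[k] - y_smooth[1]) / (k - 1)      # same, after smoothing

changes <- diff(y)
mean(sign(changes) == sign(y[k] - y[1]))         # consistency: share of changes with the overall sign
weighted.mean(changes, w = seq_along(changes))   # later changes weighted more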

Targeting 101

22 Jun

Targeting Economics

Say that there is a company that makes more than one product. And users of any one of its products don’t use all of its products. In effect, the company has a *captive* audience. The company can run an ad in any of its products about the one or more other products that a user doesn’t use. Should it consider targeting—showing different (number of) ads to different users? There are five things to consider:

  • Opportunity Cost: If the opportunity is limited, could the company make more profit by showing an ad about something else?
  • The Cost of Showing an Ad to an Additional User: The cost of serving an ad; it is close to zero in the digital economy.
  • The Cost of a Worse Product: As a result of seeing an irrelevant ad in the product, the user likes the product less. (The magnitude of the reduction depends on how disruptive the ad is and how irrelevant it is.) The company suffers in the end as its long-term profits are lower.
  • Poisoning the Well: Showing an irrelevant ad means that people are more likely to skip whatever ad you present next. It reduces the company’s ability to pitch other products successfully.
  • Profits: On the flip side of the ledger are expected profits. What are the expected profits from showing an ad? If you show a user an ad for a relevant product, they may not just buy and use the other product, but may also become less likely to switch from your stack. Further, they may even proselytize your product, netting you more users.

I formalize the problem here (pdf).

Estimating Bias and Error in Perceptions of Group Composition

14 Nov

People’s reports of perceptions of the share of various groups in the population are typically biased. The bias is generally greater for smaller groups. The bias also appears to vary by how people feel about the group—they are likelier to think that the groups they don’t like are bigger—and by stereotypes about the groups (see here and here).

A new paper makes a remarkable claim: “explicit estimates are not direct reflections of perceptions, but systematic transformations of those perceptions. As a result, surveys and polls that ask participants to estimate demographic proportions cannot be interpreted as direct measures of participants’ (mis)information since a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value…”

The claim is supported by a figure that takes the form of plotting a curve over averages. (It also reports results from other papers that base their inferences on similar figures.)

The evidence doesn’t seem adequate for the claim. Ideally, we would want to plot curves within people and show that the curves are roughly the same. (I doubt that they are.)

Second, it is one thing to claim that the reports of perceptions follow a particular rescaling formula and another to claim that people are aware of what they are doing. I doubt that people are.

Third, if the claim that ‘a large portion of apparent error on any particular question will likely reflect rescaling toward a more moderate expected value’ is true, then presenting people with correct information ought not to change how they think about groups, e.g., the perceived threat from immigrants. The calibrated error should be a much better moderator than the raw error. Again, I doubt it.

But I could be proven wrong about each. And I am ok with that. The goal is to learn the right thing, not to be proven right.

Measuring Segregation

31 Aug

The dissimilarity index is a measure of segregation. It is defined as follows:

\frac{1}{2} \sum\limits_{i=1}^{n} \left| \frac{g_{i1}}{G_1} - \frac{g_{i2}}{G_2} \right|

where:

g_{i1} is the population of group 1 in the ith area,
G_1 is the population of group 1 in the larger area
against which dissimilarity is being measured (and likewise for group 2).

The measure suffers from a couple of issues:

  1. Concerns about lumpiness. Even in a small area, are black people at one end, white people at another?
  2. Choice of baseline. If the larger area (say a state) is 95% white (Iowa is 91.3% White), dissimilarity is naturally likely to be small.

One way to address the concern about lumpiness is to provide an estimate of the spatial variance of the quantity of interest. But to measure variance, you need local measures of the quantity of interest. One way to arrive at local measures is as follows:

  1. Create a distance matrix across all addresses. Get latitude and longitude. And start with Euclidean distances, though smart measures that take account of physical features are a natural next step. (For those worried about computing super huge matrices, the good news is that computation can be parallelized.)
  2. For each address, find the n closest addresses and estimate the quantity of interest (see the sketch below). Where multiple houses are a similar distance apart, sample randomly or include them all. One advantage of using the n closest addresses rather than the addresses in a particular area is that it naturally accounts for variations in density.
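
A sketch of the local measure in R on simulated coordinates and race labels; n = 10 nearest addresses and Euclidean distance are the illustrative starting choices mentioned above.

set.seed(3)
n_addresses <- 500
addresses <- data.frame(lat  = runif(n_addresses),
                        lon  = runif(n_addresses),
                        race = sample(c("a", "b"), n_addresses, replace = TRUE, prob = c(.7, .3)))

d <- as.matrix(dist(addresses[, c("lat", "lon")]))   # Euclidean, as a start
n_nearest <- 10

# For each address, the share of its n nearest neighbors belonging to another race.
local_share_other <- sapply(seq_len(n_addresses), function(i) {
  neighbors <- order(d[i, ])[2:(n_nearest + 1)]      # drop the address itself
  mean(addresses$race[neighbors] != addresses$race[i])
})

mean(local_share_other)   # average local exposure to another race
var(local_share_other)    # spatial variance of the local measure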

But once you have arrived at the local measure, why just report variance? Why not report means of compelling common-sense metrics, like the proportion of addresses (people) for whom the closest house has people of another race?

As for baseline numbers (generally just a couple of numbers): they are there to help you interpret. They can be brought in later.

Sample This: Sampling Randomly from the Streets

23 Jun

Say you want to learn about the average number of potholes per unit paved street in a city. To estimate that quantity, the following sampling plan can be employed:

1. Get all the streets in a city from Google Maps or OSM
2. Starting from one end of the street, split each street into .5 km segments till you reach the end of the street. The last segment, or if the street is shorter than .5km, the only segment, can be shorter than .5 km.
3. Get the lat/long of start/end of the segments.
4. Create a database of all the segments: segment_id, street_name, start_lat, start_long, end_lat, end_long
5. Sample from rows of the database
6. Produce a CSV of the sampled segments (subset of step 4)
7. Plot the lat/long on Google Map — filling all the area within the segment.
8. Collect data on the highlighted segments.

For a Python package that implements this, see https://github.com/soodoku/geo_sampling.

Clustering and then Classifying: Improving Prediction for Crudely-labeled and Mislabeled Data

8 Jun

Mislabeled and crudely labeled data are common problems in data science. Supervised prediction with such data expectedly yields poor results—coefficients are biased, and accuracy with regard to the true label is poor. One solution to the problem is to hand-code the labels of an ‘adequate’ sample and infer the true labels based on a model trained on those data.

Another solution relies on the intuition (assumption) that the distance between rows (covariates) with the same label will be lower than the distance between rows with different labels. One way to leverage that intuition is to cluster the data within each label, infer true labels from the erroneous labels, and then predict the inferred true labels. For a class of problems, the method can be shown to always improve accuracy. (You can also predict just the cluster labels.)

Here’s one potential solution for a scenario where we have a binary dependent variable:

Assume a mislabeled vector, mis_label, that codes some true 0s as 1s and some true 1s as 0s.

  1. For each value of mis_label (1 and 0), use k-means with k = 2 to get two clusters within each label, for a total of four clusters.
  2. Assuming the mislabeling rate is < 50%, create a new column, est_true_label: when mis_label = 1, set est_true_label to 1 if the observation falls in the majority cluster (the cluster containing more than 50% of the mis_label = 1 observations) and to 0 otherwise; when mis_label = 0, set est_true_label to 0 if the observation falls in the majority cluster and to 1 otherwise.
  3. Predict est_true_label using logistic regression. Estimate accuracy against true_label, and estimate bias in the coefficients by comparing them to the coefficients from a logistic regression on the true labels. (A sketch follows.)
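
A minimal sketch of the procedure in R on simulated data. The covariates, the roughly 15% flip rate, and the use of plain kmeans and glm are illustrative choices; the true labels are kept only to check accuracy at the end.

set.seed(42)
n <- 1000
X <- cbind(x1 = rnorm(n), x2 = rnorm(n))
true_label <- as.integer(X[, 1] + X[, 2] + rnorm(n, sd = .5) > 0)
mis_label  <- ifelse(runif(n) < .15, 1 - true_label, true_label)   # ~15% of labels flipped

# Steps 1 and 2: cluster within each observed label; the majority cluster keeps the
# observed label, the minority cluster gets flipped.
est_true_label <- mis_label
for (lab in c(0, 1)) {
  idx <- which(mis_label == lab)
  km  <- kmeans(X[idx, ], centers = 2, nstart = 10)
  majority <- as.integer(names(which.max(table(km$cluster))))
  est_true_label[idx] <- ifelse(km$cluster == majority, lab, 1 - lab)
}

# Step 3: compare models fit on the mislabeled vs. the inferred labels.
summary(glm(mis_label ~ X, family = binomial))
summary(glm(est_true_label ~ X, family = binomial))
mean(est_true_label == true_label)   # accuracy of the inferred labels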

The Missing Plot

27 May

Datasets often contain missing values. And often enough—at least in social science data—values are missing systematically. So how do we visualize missing values? After all, they are missing.

Some analysts simply list-wise delete points with missing values. Others impute, replacing missing values with the mean or the median. Yet others use sophisticated methods to impute missing values. None of these methods, however, acknowledges in the visualizations that any of the data are missing.

It is important to acknowledge missing data.

One way to do it is to provide a tally of how much data are missing on each of the variables in a small table in the graph. Another, perhaps better, method is to plot the missing values as a function of a covariate. For bivariate graphs, the solution is pretty simple. Create a dummy vector that tallies missing values. And plot the dummy vector in addition to the data. For instance, see:
[Figure: a bivariate plot with the missing values shown alongside the observed data.]

(The script to produce the graph can be downloaded from the following GitHub Gist.)

In cases where missing values are imputed, the dummy vector can also be used to ‘color’ the points that were imputed.
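
A minimal sketch of that kind of plot in base R graphics. The referenced Gist is the author’s version; the data here are simulated, and marking missing-y observations with a row of crosses is just one way to show them.

set.seed(5)
x <- rnorm(200)
y <- 2 * x + rnorm(200)
y[x > 1] <- NA                      # y missing systematically with x

missing_y <- is.na(y)
plot(x, y, pch = 16, ylim = range(y, na.rm = TRUE))
points(x[missing_y], rep(min(y, na.rm = TRUE), sum(missing_y)),
       pch = 4, col = "red")        # red crosses mark x values where y is missing
legend("topleft", pch = c(16, 4), col = c("black", "red"),
       legend = c("observed", "y missing"))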

About 85% Problematic: The Trouble With Human-In-The-Loop ML Systems

26 Apr

MIT researchers recently unveiled a system that combines machine learning with input from users to ‘predict 85% of the attacks.’ Each day, the system winnows down millions of rows to a few hundred atypical data points and passes these points on to ‘human experts’ who then label the few hundred data points. The system then uses the labels to refine the algorithm.

At first blush, using data from users in such a way to refine the algorithm seems like the right thing to do, even the obvious thing to do. And there exist a variety of systems that do precisely this. In the context of cyber data (and a broad category of similar data), however, it may not be the right thing to do. There are two big reasons for that. First, a low false positive rate can be achieved much more easily if we do not care about the false negative rate, and there are good reasons to worry a lot about false negative rates in cyber data. Second, and perhaps more importantly, incorporating user input on complex tasks (or where the data are insufficiently rich) reduces to the following: given a complex task with inadequate time, the users rely on cheap heuristics to label the data, and the supervised part of the algorithm reduces to learning the cheap heuristics that humans use.

Interpreting Clusters and ‘Outliers’ from Clustering Algorithms

19 Feb

Assume that the data from the dominant data generating process are structured so that they occupy a few small portions of a high-dimensional space. Say we use a hard partition clustering algorithm to learn the structure of the data. And say that it does—learn the structure. Anything that lies outside the few narrow pockets of high-dimensional space is an ‘outlier,’ improbable (even impossible) given the dominant data generating process. (These ‘outliers’ may be generated by small malicious data generating processes.) Even points on the fringes of the narrow pockets are suspicious. If so, one reasonable measure of the suspiciousness of a point is its distance from the centroid of the cluster to which it is assigned; the further the point is from the centroid, the more suspicious it is. (Distance can be some multivariate distance metric, or the proportion of points assigned to the cluster that are further away from the cluster centroid than the point whose score we are tallying.)

How can we interpret an outlier (score), tautological explanations—it is improbable given the dominant data generating process—aside?

Simply providing the distance to the centroid doesn’t give enough context. And for obvious reasons, for high-dimensional vectors, providing the distance on each feature isn’t reasonable either. A better approach involves some feature selection. This can be done in various ways, all of which take the same general form. Find the distance to the centroid on the features on which the points assigned to the cluster have the least variation. Or on the features that discriminate the cluster from other clusters the best. Or on the features that predict the distance from the cluster centroid the best. Limit the features arbitrarily to a small set. On this limited feature set, calculate cluster means and standard deviations, and give the standardized distance (for a categorical variable, just provide ) to the centroid.
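
A sketch of the first variant in R (features with the least within-cluster variation), using the iris measurements as a stand-in dataset; scoring by distance to the assigned centroid and keeping three features are illustrative choices.

set.seed(9)
X  <- scale(iris[, 1:4])
km <- kmeans(X, centers = 3, nstart = 20)

# Suspiciousness score: distance of each point to its assigned centroid.
dist_to_centroid <- sqrt(rowSums((X - km$centers[km$cluster, ])^2))
suspect <- which.max(dist_to_centroid)     # the most 'outlying' point

# Keep the features with the least within-cluster variation in the suspect's cluster.
in_cluster   <- km$cluster == km$cluster[suspect]
sds          <- apply(X[in_cluster, ], 2, sd)
top_features <- names(sort(sds))[1:3]

# Standardized distance to the centroid on those features.
(X[suspect, top_features] - km$centers[km$cluster[suspect], top_features]) / sds[top_features]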

Read More (pdf with pseudo code)

Sampling With Coprimes

1 Jan

Say you want to sample from a sequence of length n. Multiples of a number that is relatively prime to n cover the entire sequence, and the entire sequence is covered before any number is repeated. This is a known result from number theory. We can use the result to sample sequentially from a series (see below for what I mean).

For instance, if the sequence is 1, 2, 3,…, 9, the number 5 is one such number (5 and 9 are coprime). Using multiples of 5, we get:

Taking multiples of 5 and reducing them mod 9, we visit the positions 5, 1, 6, 2, 7, 3, 8, 4, 9: every position in the sequence exactly once before any position repeats.
If the length of the sequence is odd, then we all know that 2 will do. But not all even numbers will do. For instance, for the same length of 9, if you were to choose 6, it would result in 6, 3, 9, and 6 again.

Some R code:


# Multiples of a number coprime to the sequence length cover every position
# exactly once before any position repeats.
seq_length = 6
rel_prime  = 5
multiples  = rel_prime*(1:seq_length)
# Map the multiples back into 1..seq_length.
multiples  = ifelse(multiples > seq_length, multiples %% seq_length, multiples)
multiples  = ifelse(multiples == 0, seq_length, multiples)
length(unique(multiples))  # equals seq_length, so every position is covered

Where can we use this? It makes passes over an address space less discoverable.

Beyond Anomaly Detection: Supervised Learning from ‘Bad’ Transactions

20 Sep

Nearly every time you connect to the Internet, multiple servers log a bunch of details about the request. For instance, details about the connecting IPs, the protocol being used, etc. (One popular tool for collecting such data is Cisco’s NetFlow.) Lots of companies analyze these data in an attempt to flag ‘anomalous’ transactions. Given the data are low quality—IPs do not uniquely map to people, and the information per transaction is vanishingly small—the chances of building a useful anomaly detection algorithm using conventional unsupervised methods are extremely low.

One way to solve the problem is to re-express it as a supervised problem. Google, various security firms, security researchers, etc. flag a bunch of IPs every day for various nefarious activities, including hosting malware (passive DNS), scanning, or actively …. Check to see if these IPs are in the database, and learn from the transactions that include the blacklisted IPs. Using the model, flag transactions that look most similar to the transactions with blacklisted IPs. And validate the flagged transactions with the highest probability of involving a malicious IP by checking whether the IPs are blacklisted at a future date or by asking a subject matter expert.
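
A sketch of the supervised re-expression in R. The flow features, the blacklist indicator, and all of the simulated values are made up for illustration; in practice the label would come from joining the transaction log to the blacklists described above.

set.seed(11)
n <- 50000
flows <- data.frame(bytes      = rlnorm(n, 8, 2),
                    duration   = rexp(n, 1 / 30),
                    dst_port   = sample(c(80, 443, 22, 6667), n, replace = TRUE),
                    dst_ip_bad = rbinom(n, 1, .01))   # 1 if the destination IP is blacklisted

model <- glm(dst_ip_bad ~ log(bytes) + duration + factor(dst_port),
             data = flows, family = binomial)

# Score all transactions and surface the ones that most resemble blacklisted-IP traffic;
# these are the candidates to validate later (future blacklisting or expert review).
flows$score <- predict(model, type = "response")
head(flows[order(-flows$score), ], 20)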