What’s the difference between a scientist and a data scientist? Scientists often collect their own data, and data scientists often use data collected by other people. That is part jest but speaks to an important point. Good scientists know their data. Good data scientists must know their data too. To help data scientists learn about the data they use, we need to build systems that give them good data about the data. But what is good data about data? And how do we build systems that deliver that? Here’s some advice (tailored toward rectangular data for convenience):
- From Where, How Much, and Such
- Provenance: how was each of the columns in the data created (obtained)? If the data are derivative, find out the provenance of the original data. Be as concrete as possible, linking to scripts, related teams, and such.
- How frequently are the data updated?
- Cost per unit of data, e.g., a cell in rectangular data.
- What? To know what the data mean, you need a data dictionary. A data dictionary explains the key characteristics of the data. It includes:
- Information about each of the columns in plain language.
- How were the data collected? For instance, if you conducted a survey, you need the question text and the response options (if any) that were offered, along with the ‘mode’, where in the sequence of questions it lies, whether it was alone on the screen, etc.
- Data type
- How (if at all) are missing values generated?
- For integer columns, give the range, sd, mean, median, n_0s, and n_missing. For categorical columns, give the number of unique values, what each label means, and a frequency table that includes n_missing (if missing values can be of multiple types, show a row for each).
- The number of duplicate rows, whether duplicates are allowed, and why you would expect to see them.
- Number of rows and columns
- For supervised models, store the correlation of y with key x_vars.
- What If? What if you have a question? Who should you bug? Who ‘owns’ the ‘column’ of data?
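A minimal sketch of the per-column summaries above, in plain Python. The function name `describe_column` and the use of `None` as the missing-value sentinel are my assumptions; a real pipeline would also record what each label means, which only a human can supply.

```python
import statistics
from collections import Counter

def describe_column(values):
    """Build a data-dictionary entry for one column of rectangular data.

    Integer columns get range, mean, median, sd, n_0s, and n_missing;
    categorical columns get unique values and a frequency table.
    """
    present = [v for v in values if v is not None]
    n_missing = len(values) - len(present)
    if present and all(isinstance(v, int) for v in present):
        return {
            "type": "integer",
            "min": min(present),
            "max": max(present),
            "mean": statistics.mean(present),
            "median": statistics.median(present),
            "sd": statistics.stdev(present) if len(present) > 1 else 0.0,
            "n_0s": present.count(0),
            "n_missing": n_missing,
        }
    freq = Counter(present)
    return {
        "type": "categorical",
        "n_unique": len(freq),
        "frequencies": dict(freq),
        "n_missing": n_missing,
    }

ages = [34, 0, 27, None, 41]
entry = describe_column(ages)
```

Running `describe_column` over every column gives you the column-name → stats mapping that the rest of the advice (JSON snapshots, reports, diffs) builds on.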
Store these data in JSON so that you can validate new data against them. Produce the JSON with each update. You can then flag when data are some number of standard deviations above or below the last ingest.
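One way to sketch the snapshot-and-flag idea. The JSON layout (column → mean/sd) and the threshold of 3 sd's are illustrative assumptions, not a standard:

```python
import json
import statistics

def column_summary(values):
    """Mean and sd of one numeric column, stored as JSON at each ingest."""
    return {"mean": statistics.mean(values), "sd": statistics.stdev(values)}

def drift_flags(current, previous, k=3.0):
    """Columns whose mean moved more than k previous-sd's since last ingest."""
    return [
        name for name, s in current.items()
        if name in previous
        and previous[name]["sd"] > 0
        and abs(s["mean"] - previous[name]["mean"]) > k * previous[name]["sd"]
    ]

# Persist each ingest's summary so the next run can compare against it.
last = {"age": {"mean": 30.0, "sd": 2.0}}
now = {"age": column_summary([45, 44, 46, 45, 45])}
snapshot_json = json.dumps(now)       # write this alongside the data
flags = drift_flags(now, last)        # "age" mean moved 7.5 sd's, so flagged
```

Writing the snapshot JSON next to each ingest is what later makes diffs between data collections possible.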
Store all this metadata with the data. For example, you can extend the dataframe class in Scala to do so.
Auto-generate reports in markdown with each ingest.
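A hedged sketch of such a report generator, taking a column → stats mapping like the data dictionary above. The section layout and the function name are my assumptions:

```python
def markdown_report(dataset_name, dictionary):
    """Render a data dictionary (column -> stats dict) as a markdown report."""
    lines = [f"# Ingest report: {dataset_name}", ""]
    for col, stats in dictionary.items():
        lines.append(f"## {col}")
        for key, val in stats.items():
            lines.append(f"- {key}: {val}")
        lines.append("")
    return "\n".join(lines)

report = markdown_report("survey_2024", {"age": {"mean": 30.5, "n_missing": 1}})
```

Because the report is derived mechanically from the stored JSON, it never goes stale relative to the data.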
In many ML applications, you are also ingesting data back from the user. So you need the same as above for the data you get from the user (and at least some of it needs to match the stored data).
For any derived data, you need the scripts and the logic, ideally under version control.
Where possible, follow the third normal form of databases. Only store translations (derived values) when computing them on the fly is expensive. Even then, think twice.
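A toy illustration of the normalization point, using plain Python dicts in place of tables (the column names are made up). The country name depends on `country_id`, not on the row key, so storing it per row violates third normal form:

```python
# Denormalized: the country name is repeated in every row, so a rename
# must touch every row and the copies can drift out of sync.
rows_denormalized = [
    {"user_id": 1, "country_id": 5, "country_name": "Kenya"},
    {"user_id": 2, "country_id": 5, "country_name": "Kenya"},
]

# Normalized: store the name once and look it up ("translate") on read.
countries = {5: "Kenya"}
rows = [{"user_id": 1, "country_id": 5}, {"user_id": 2, "country_id": 5}]

def with_country_name(row):
    """Compute the translation on read instead of storing it per row."""
    return {**row, "country_name": countries[row["country_id"]]}
```

Only if that lookup (or its real-world analogue, an expensive join or API call) becomes a bottleneck is it worth materializing the translation.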
Lastly, some quality control. Periodically sit down with your team to see if you should see what you are seeing. For instance, if you are in the survey business, do the completion times make sense? If you are doing supervised learning, get a random sample of labels. Assess their quality. You can also assess quality by looking at the errors your supervised model makes in classification. Are the errors because the data are mislabeled? Keep iterating. Keep improving. And keep cataloging those improvements. You should be able to ‘diff’ data collection, not just numerical summaries of data. And with the method I highlight above, you should be.
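With a metadata snapshot stored per ingest, the ‘diff’ is a comparison of two such snapshots. A minimal sketch (it compares column entries wholesale rather than field by field, and the snapshot shape is the column → stats mapping assumed above):

```python
def diff_metadata(old, new):
    """Report columns added, dropped, or changed between two metadata
    snapshots (plain dicts loaded from the per-ingest JSON)."""
    return {
        "added": sorted(set(new) - set(old)),
        "dropped": sorted(set(old) - set(new)),
        "changed": sorted(k for k in set(old) & set(new) if old[k] != new[k]),
    }

old = {"age": {"type": "integer"}, "zip": {"type": "categorical"}}
new = {"age": {"type": "float"}, "income": {"type": "integer"}}
delta = diff_metadata(old, new)
```

Run this between consecutive snapshots and you have a changelog of the data collection itself, not just of the numbers.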