Differential measurement error across control and treatment groups (or across pre- and post-treatment measurement waves) can vitiate estimates of the treatment effect. One prominent reason for differential error in the survey context is differential motivation. For instance, if participants are less motivated to respond accurately on the pre-treatment survey than on the post-treatment survey (or satisfice more on the pre-treatment survey than on the post-treatment survey), the pre-post difference score will be a biased estimator of the treatment effect. Weiksner (2008) provides an empirical example of this broad problem: participants acquiesced more during the pre-treatment survey (the treatment being the Deliberative Poll) than during the post-treatment survey, compromising inferences about how much attitude change occurs over the course of the treatment. One correction is to replace agree/disagree responses with construct-specific questions (Weiksner, 2008). A perhaps better solution would be to incentivize all (or a random subset of) responses to the pre-treatment survey. Possible incentives include monetary rewards, prefacing the screens with a note about how important accurate responses are to research, etc. This is the same strategy that I advocate for dealing with satisficing more generally (see here): minimizing errors, rather than the dominant though somewhat more suboptimal strategy of "balancing errors" by randomizing the response order.
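The bias mechanism can be made concrete with a small simulation. The sketch below is illustrative, not drawn from Weiksner's data: the treatment effect size, the 30% acquiescence rate, and the one-point acquiescence shift are all assumed numbers. A fraction of respondents acquiesce on the pre-treatment wave only, inflating pre-treatment scores and so deflating the naive pre-post difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_effect = 0.5                    # assumed treatment effect on a 7-point attitude scale
true_pre = rng.normal(4.0, 1.0, n)   # latent pre-treatment attitudes
true_post = true_pre + true_effect   # latent post-treatment attitudes

# Differential satisficing: an assumed 30% of respondents acquiesce on the
# pre-treatment wave, shifting their reported answer one point toward "agree".
# Post-treatment responses are measured accurately.
acquiesce = rng.random(n) < 0.3
observed_pre = true_pre + np.where(acquiesce, 1.0, 0.0)
observed_post = true_post

naive_estimate = observed_post.mean() - observed_pre.mean()
print(f"true effect:             {true_effect:.2f}")
print(f"naive pre-post estimate: {naive_estimate:.2f}")
```

Because the acquiescence shift lands only on the pre-treatment wave, the naive difference score understates the true effect by roughly the acquiescence rate times the size of the shift (here, about 0.3 points).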