Early reports suggest that some areas may face difficulty in using the Place Survey to assess progress against National Indicator targets, and at least one LA is reportedly in discussions with its Government Office about not setting improvement targets for survey-based National Indicators.
Quite apart from the debate over whether the survey questions are the right ones for assessing local progress, some LAs are talking about response rates as low as 20%. That could leave LAs struggling to meet the guidance that the “achieved sample size should be no smaller than 1,100”. As a result, findings from the survey will come with pretty big margins of error (bigger still if you want to use the data to look at particular areas or groups). And this has consequences when it comes to identifying whether targets have been achieved or not.
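To put that response-rate figure in context, here's a back-of-envelope sketch (not official guidance, and assuming a uniform response rate across the mailout) of how many questionnaires would need to be issued to hit the achieved-sample target:

```python
import math

def surveys_needed(target_sample, response_rate):
    """Questionnaires to issue to achieve target_sample returns,
    assuming a uniform response_rate across the mailout."""
    return math.ceil(target_sample / response_rate)

# At a 20% response rate, an achieved sample of 1,100 means
# issuing 5,500 questionnaires.
print(surveys_needed(1100, 0.20))
```

In practice response rates vary by area and group, so the real mailout needed to hit 1,100 representative returns could be larger still.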
We’ve explored this issue in work for the Department of Communities and Local Government, identifying which datasets are appropriate for target setting and performance assessment at neighbourhood level. To cut a long report short, the key message is that due to relatively small numbers of cases, few datasets are robust enough for target-setting at neighbourhood level (the main exceptions being benefits data from DWP), but that lots of datasets provide very useful intelligence. The full report ‘Assessing Neighbourhood-Level Data for Target Setting’ is available here.
The issue of sample size is very relevant to whether the Place Survey can be used to evaluate progress at LA level. For example, with 1,000 respondents, standard margins of error on any one indicator value are in the region of plus/minus 3 percentage points. With only 500 respondents, that's more like plus/minus 4 points. This is useful intelligence, but to demonstrate progress on these indicators you'd need to be seeing changes bigger than 6 percentage points with 1,000 completed surveys (and 8 points with 500) — any smaller change could simply be due to chance fluctuation. That's a pretty big shift to sign up to as a National Indicator target.
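As a rough illustration of where those figures come from (a sketch, not the survey's official methodology): the standard 95% margin of error for an estimated proportion from a simple random sample is approximately 1.96 × √(p(1−p)/n), which is at its widest when p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (in percentage points) for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (1000, 500):
    moe = margin_of_error(n)
    # The conservative rule used above: only treat a change as real
    # if it exceeds the two margins of error combined.
    print(f"n={n}: +/- {moe:.1f} points; "
          f"detectable change roughly > {2 * moe:.1f} points")
```

Note this assumes simple random sampling; design effects from clustering or weighting in a real postal survey would widen these intervals further.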
So, what are the key messages?
First, if we’re serious about using the Place Survey results for improving services and assessing progress against targets, local partners with low response rates will want to understand the pattern of non-response. Without this information, it’s difficult to know how representative the survey is of local residents’ views. For example, are particular groups less likely to fill in and return the postal survey (those with low literacy skills, poor English language skills, or living in the most deprived areas spring to mind)?
Second, local partners and Government Offices (and other agencies) need to be sure they’re taking margins of error into account when looking at indicators and targets based on the Place Survey and other sample surveys. It’s worth emphasising the Royal Statistical Society message that the “reporting of Performance Management data should always include measures of uncertainty”. Simply asserting that targets have been reached or not, based on indicator values, is not good enough.
Third, the process of identifying and setting targets for local priorities always needs significant input from research teams. Just because something is a policy priority does not mean that there is decent data with which we can evaluate progress. Einstein put it rather better – “Not everything that counts can be counted, and not everything that can be counted counts.”
Tom Smith, OCSI