Following on from the launch of Left behind? Understanding communities on the edge, we were thrilled to add the Community Needs Index to Local Insight. Stefan Noble, Director and Head of Research at OCSI, answers some of the questions our users have been asking about this new measure and the ‘left-behind’ areas project.
Firstly Stefan, what does the Community Needs Index aim to do?
The Community Needs Index aims to bring together data on the social and cultural factors that can lead to poorer outcomes in communities. For example, challenges around a lack of places to meet, poor facilities, poor connectivity and low participation. The index attempts to bring all the various sources of information on these themes together into a single index, so that we can identify which areas have the greatest challenges on those factors.
It is a very different index from the Index of Multiple Deprivation (IMD) as it’s more about the social needs and the social gaps that contribute towards challenges in a community – rather than the economic needs and challenges.
Why is the Community Needs Index an important addition to the data landscape?
The sorts of challenges that the index covers have been observed sociologically for a while – for example, falling levels of civic and community participation, growing problems associated with loneliness and isolation, and a rising number of local services closing down. However, the Community Needs Index is the first attempt to produce a comprehensive, quantitative measure of these social factors.
This can bring to the foreground issues that haven’t previously been identified. The same set of areas tends to get picked up again and again on more economic measures such as the IMD; however, a slightly different set of areas reveals itself when you focus on the social dimension.
For users starting to explore the Community Needs Index, what can and can’t it be used for?
In common with many indices, it is a relative rather than an absolute measure. So you can compare how your neighbourhood performs relative to other areas. However, you can’t say one area is twice as bad or half as bad as another. The number itself is only meaningful in a relative rather than absolute sense.
Secondly, this index is a neighbourhood level measure rather than a measure of a town, city or local authority. So whilst you can look at whether a larger geographical area contains higher concentrations of areas that score highly on the Community Needs Index, you can’t then draw conclusions about that larger area as a whole based on whether or not it contains high-scoring areas.
It is also important to recognise that the community needs challenges the index identifies are greatly exacerbated when they are combined with deprivation. So although the measure can tell you something in isolation, we’re particularly interested in how the two challenges – social and economic – interact with each other. This is the context in which the index was developed.
Finally, this is an area-based measure, so we are absolutely not saying that everybody in one area faces the same challenges; we are just saying that the concentrations of challenge are higher in that area.
Can you explain some of the reasons the index was developed at ward level?
We knew we wanted it to be a neighbourhood level index rather than, for example, town level as we wanted to recognise small area inequality.
Other indices and a lot of open data are published at Lower Super Output Area (LSOA) level. In the end, we selected wards over those statistical geographies as we wanted it to represent a recognised community outside of the stats geek world.
Very few people know which LSOA they live in, but people do tend to know what ward they live in because you are engaged with it when you vote and wards are often named after particular neighbourhoods. Conversely, LSOAs cross neighbourhood boundaries and don’t represent existing communities on the ground.
Wards were more appropriate for the needs of this project, which was specifically about communities rather than trying to compare equal blocks of space. As a lightweight validation, we ran the analysis using LSOAs as well to see what impact the choice would make, and it produced pretty similar results.
Additionally, there was a data availability challenge, as some indicators – such as some of the funding data and the voter turnout data – were published at ward level and not at other geographies. Finally, wards are on average larger than LSOAs, which means you have a slightly more robust sample population to work with.
Of course, there are some limitations. Ward boundaries change every year and wards can be very uneven in terms of population size: some wards in Birmingham are massive, whereas some rural ones are pretty small.
Are all of the indicators in the index weighted equally?
In general, we applied equal weighting to every indicator within a specific domain. However, in cases where we had concerns over the robustness of an indicator, we produced composite indicators by merging together several indicators that all measured the same thing. This meant each of those underlying indicators implicitly received a lower weight than the other ‘administrative measures’.
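To illustrate the implicit-weighting point, here is a minimal sketch. The indicator names and scores below are invented for illustration and are not taken from the index: when three related survey indicators are averaged into one composite before equal weighting is applied across a domain, each underlying survey indicator effectively carries a third of the weight of a standalone indicator.

```python
# Hypothetical sketch of equal weighting within a domain, where three
# survey-based indicators are merged into one composite first.
# All indicator names and values are invented for illustration only.

def composite(scores):
    """Average several related indicators into a single composite."""
    return sum(scores) / len(scores)

# Two standalone administrative indicators (hypothetical, 0-1 scale).
admin_a = 0.8
admin_b = 0.6

# Three survey-based indicators measuring the same theme.
survey_scores = [0.5, 0.7, 0.6]
survey_composite = composite(survey_scores)  # 0.6

# Equal weighting across the three resulting indicators: each gets 1/3
# of the domain score, so each survey indicator inside the composite
# effectively carries only 1/9 of the weight.
domain_score = (admin_a + admin_b + survey_composite) / 3
print(round(domain_score, 4))  # 0.6667
```

The same mechanism generalises: merging n indicators into one composite before equal weighting divides each one’s effective weight by n.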
What’s an administrative measure?
Administrative measures consist of datasets that were collected for other purposes, and therefore we can be more confident in their robustness. These datasets aren’t usually based on surveys or samples.
What themes within the Community Needs Index was it difficult to find data for?
Any data to do with participation is difficult as most of it is survey-based and relies on people responding to those surveys. There was a really big survey in 2008 on community activity and civic participation. Since then, there have been other surveys such as the Understanding Society survey and Community Life survey, which have a much smaller sample size.
Which particular finding has really stuck with you from the Community Needs Index?
I think the most surprising thing was when we looked at the levels of funding locally. We looked at funding in the ‘left-behind’ areas, other deprived areas and England as a whole. We found that the ‘left-behind’ areas, despite greater needs, are getting less funding than the other deprived areas, and both are getting less funding than the national average. We’d have expected that, on a needs basis, the areas of greatest need would be getting the most funding. So that was surprising and stuck with us.
Lastly, what’s next for the Community Needs Index and the ‘left-behind’ areas project?
We are now moving into the next phase of the left-behind areas project, which will include updating some of the data (for example, including IMD 2019), looking for additional data sources (for example, engaging with grantmakers that are not currently publishing data with 360Giving) and a methodological review.
If you have any questions on the left-behind areas project, please get in touch with Jeremy Yung at Local Trust at Jeremy.Yung@localtrust.org.uk