Evaluation

Evaluation is a central part of understanding whether and how our actions, activities, ‘interventions’ and services make a difference to the lives of those we seek to help or serve.  As with high-quality research, evaluation needs to be sensitive to the context in which it operates, and the methods employed need to be rigorous whilst also being appropriate to the activity or work being evaluated.

Rather than presenting off-the-shelf evaluation, I work with projects, practitioners and service users to identify and agree the most effective way to evaluate particular services or interventions, drawing upon a range of research / evaluation experience and evaluation tools in order to develop bespoke evaluation processes that meet the needs of organisations, commissioners or projects.  This includes, but is not limited to:

  • Programme theory / theory of change / intervention logic – exploring the assumptions behind interventions and how they are intended to operate, and then examining whether they operate as intended in practice
  • Bennett’s Hierarchy of Outcomes – exploring the different benefits and outcomes that projects and interventions seek to achieve, and how these can be measured or evidenced
  • Process evaluation – exploring the factors involved in why and how things have happened, drawing out learning for future service development
  • Qualitative and quantitative research / evaluation – combining statistical data about provision and demographics with surveys, interviews, focus groups, participatory workshops and other activities in order to build a rounded understanding of your project
  • Drawing on wider evidence – our work doesn’t take place in a vacuum, yet it can often be difficult to pull together, critically analyse and summarise the evidence base and wider context around what we do.  Academic and grey literature reviews, together with wider social data, can aid evaluations and provide support for future funding or practice development.
  • Beyond outcomes and impact – not everything of value can be measured (and not everything that can be measured is of value), so this approach recognises that outcomes and impact are not always effective indicators of the social worth of projects (or at least not in the linear way that many evaluation processes assume).  Where appropriate, other narratives of practice, and notions of ‘good’ practice, value and worth, are explored in a broader sense.

As well as bringing the rigour of high-quality analysis to evaluation (something that can be overlooked in much evaluation practice), I am also keen to ensure that evaluation findings and recommendations are communicated effectively to a variety of stakeholders.  Whilst this is not always possible (for example, where organisational sensitivities mean that some findings can’t be shared), I work towards ‘diffuse dissemination’ by presenting findings in different ways to different audiences (service users, children and young people, community groups, practitioners, managers, policymakers, etc.) in order to ensure that learning can be understood and acted upon appropriately.