Meaningful impact in Arts and Humanities research: Who decides?

Alexander P

The past three (and a bit) years of my job as Impact Research Fellow at Bath Spa University have been dominated, like so many others', by developing Impact Case Studies (ICS). Preparing these ICS for the REF has highlighted many challenges around research impact, such as the need to embed it into the work, the burden of evidencing impact and the lack of support for impact outside of the REF framework. Others have already reflected on issues including the challenges that the REF guidance posed for those trying to support researchers to write their ICS, and the problems that arise from only celebrating ‘shiny’ and uncomplicated narratives of research impact.

Supporting all REF Impact Case Studies across my institution (we submitted 22 across 9 Units of Assessment) has made me realise that existing frameworks for capturing impact from creative Arts and Humanities research are themselves a key problem.

Most impact frameworks (such as logic models and theory of change approaches) encourage researchers (or evaluators) to pre-decide what meaningful impact will look like and then to build ways of capturing these outcomes and impacts within their projects. Many frameworks also encourage you to capture as many things in a quantitative way as possible, which makes sense when the systems in place reward such quantitative outcomes. These impact frameworks tend to conceptualise research impact as taking place in an applied context, where either a research outcome is applied to a real-world situation to fix a problem, or researchers work with a community to co-devise a solution to a challenge that community is experiencing. However, impact from creative Arts and Humanities research often does not follow a process driven by a challenge that needs to be solved or overcome. This leads to the retrospective application of impact frameworks to research projects to determine what the most meaningful thing to submit will be.

In the process of applying impact frameworks that don’t quite fit, it’s easy to lose sight of what might have been most meaningful to the participants, partners or stakeholders who were involved in the process, in favour of those things that can be counted, captured or articulated in a narrative that reviewers will respond to. Even when you have been able to collaborate with your participants or stakeholders to decide what counts as meaningful impact, it is difficult to move beyond established types of value that society deems important, which largely fall into three categories: saving lives, making money and improving wellbeing (though rarely by solving the structural issues that cause inequality).

These established impact frameworks also struggle to incorporate the fact that for most people change is a cycle that includes periods of improvement and setbacks; it is rarely a neat tick box of sustainable and permanent improvement. In addition, the context in which this change takes place determines its meaning. Whilst an individual research project might be able to effect some change, this always remains situated against a backdrop of inequality and different levels of privilege. Finally, once a set of indicators is chosen to denote impact, participants who take part subsequently are not able to decide for themselves what is meaningful for them.

So this is the first challenge in developing and capturing meaningful impact from creative research: you can’t always work out beforehand what it will look like, and it will look different for different people (and even for the same person, impact is unlikely to be a steady, unchanging thing).

The second challenge for research impact in the Arts and Humanities is that existing evaluation frameworks fall into the trap of standard strategies that struggle to capture a nuanced picture of people’s experiences. There are examples of creative venues and organisations doing great work, but generally a lot of evaluation strategies boil down to capturing whether audiences liked something or not (and how much they liked it). This is in large part due to the evaluation culture that we are in (you can’t leave a restroom these days without indicating, through the medium of smiley faces, how your experience was) - which primes you for giving a very specific kind of feedback that is not very nuanced or detailed.

In this context it’s useful to briefly discuss the concept of demand characteristics, which describes the very social nature of taking part in things like research and evaluation, and how your perception of the reason for taking part shapes your responses. A concept that originated in psychology, demand characteristics highlight that when we are asked to do something in a situation that we know is research (or evaluation), we implicitly and unconsciously search for a way to make our response meaningful (see Orne, 1962, and Orne and Whitehouse, 2000, for more detail). For instance, as a participant I might decide that I want to support research and therefore take part in a study, whilst subconsciously trying to work out what the researcher wants me to say or do, so I can give them what they are looking for (there will be a minority in any situation who go against this and do their best to test the edges or disrupt the situation, usually because they do not find meaning in the social situation).

In an arts context this results in audience members filling in evaluation forms with the unconscious understanding that these evaluations matter for the artist’s or venue’s funding, and so they will provide the feedback that they think will help that artist or venue retain or attract more funding. This does not mean that they are dishonest in what they say, rather that they focus on sharing the things that they think are meaningful in the context. In practice, this usually means expressing how much they loved something, rather than providing a deeper or more complex insight into their experiences and why something might have struck a chord with them.

These two challenges taken together highlight a need for the sector to improve how we understand, develop, and capture meaningful impact from creative research, in a way that doesn’t only focus on the economic or wellbeing outcomes of arts practices. This is additionally important because creative research methods are often used as a way of communicating research outcomes from other disciplines, whilst socially engaged arts practices are commonly highlighted as the ‘best’ way of creating impact because they fall within existing impact frameworks around improving wellbeing (which then fail to capture the more radical outcomes of such work). If we don’t change the way impact from creative research is understood and captured, we risk losing valuable and meaningful insights into how research can make a difference to people’s lives, as Arts and Humanities researchers are already being driven by systems such as the REF towards approaches that fit the assessment of research rather than what is most meaningful for the work.

To explore these two things together, I have started a new research project that aims to develop an impact framework that enables participants to determine what meaningful impact looks like for them individually. The aim is for this framework to be suitable both for creative Arts and Humanities research and for supporting creative and cultural organisations to capture the impact of their work more meaningfully. The framework will also embed evaluation strategies into participants’ experiences, to enable the capturing of a richer reflection of these experiences, together with some understanding of the context in which these are meaningful (or not) for them. In addition, this type of evaluation supports participants in building reflection skills, something that is a meaningful end in itself, so that evaluation becomes a mutually beneficial process rather than one done only for the benefit of the evaluator (and whoever they represent). Currently I am working on several pilot projects to test out ideas in advance of a larger project to tackle this challenge fully. The first pilot project, The Magic Trick, is a collaboration with Coney. We have just finished collecting the data, so a full case study on the outcomes will follow in the coming months!

If you’re interested in having a conversation about anything here then please do get in touch! a.breel@bathspa.ac.uk

Image credit: Alexander P