Critical thinking not used

Research finds that museums fail to share or learn from evaluation
What kind of impact does evaluation really have on museum work? A group of museum professionals and evaluators met in London recently to discuss the subject as part of a research project, Evaluating Evaluation, which has been funded by the Heritage Lottery Fund (HLF) and the Wellcome Collection.

The results of the research will be unveiled this month, but attendees were given a preview – and it wasn’t entirely positive.

The project focused on summative evaluation – evaluation submitted at the end of a project to assess its success and impact.

The researchers, Christian Heath, professor of work and organisation at King’s College London, and Maurice Davies, the university’s visiting senior research fellow, reviewed hundreds of summative evaluations and interviewed many practitioners over the course of the project.

Multimillion-pound spend

There are no overall figures available, but it is likely that UK museums spend millions on summative evaluation every year. HLF, for its part, recommends that evaluation costs should equal no more than 3% of the grant request for projects under £2m, and 1% for grants of £2m or more.

With so much investment, it would seem unthinkable that evaluation would not have a beneficial impact on specific projects, as well as a wider bearing on museum learning and practice. But the research has found that this is frequently not the case.

The project identified several problems, both with the methodology of evaluation itself and with how museums use it, that limit the benefits summative evaluation can bring.

First among these is access to and sharing of findings. The research found that, barring some notable exceptions, summative evaluations are neither widely read nor referred to by organisations during subsequent projects, and are rarely shared with the wider sector. In some cases, organisations actively limit access to reports.

Little action has so far been taken in the UK to collate or extrapolate the information within evaluations, meaning that important observations about areas such as visitor behaviour, design and project management are likely being overlooked.

But even if findings were to be shared, the research found that because methods of data collection and analysis vary hugely between projects, it is hard to make a comparative analysis or to build a “more general corpus of findings”.

The possibility of funders setting overarching objectives for summative evaluations and establishing a repository of findings was raised at the London event, but met with some reservation.

There were concerns that this might be overly bureaucratic, that comparisons could be difficult and that there could be issues of copyright.

A further problem was evaluation’s place within the institutional framework. Some evaluators felt that their work was marginalised, treated as an end in itself rather than as an integral part of projects.

Missed opportunities

The temporary nature of many projects, with consultants often brought in from outside, means teams disperse quickly afterwards – and so does their collective expertise. One freelance design consultant at the meeting reported being excluded from the evaluation process despite having valuable insights to share.

The research also found that the purposes of summative evaluation are often in conflict. On the one hand, reports are commissioned for monitoring and accountability to funders, as well as for advocacy, which often means organisations put a glossy spin on findings and hold back from being too forthright for fear of jeopardising future grants.

On the other hand, evaluation is carried out to inform learning and practice within the project team and the wider sector, and to improve the visitor experience – so an honest appraisal of what worked and what didn’t is essential.

Ongoing process

An HLF spokeswoman told the meeting that the organisation would prefer a genuine account, and would like evaluation to be seen as an ongoing process and a valuable resource for audience research.

So what can be done to increase the impact of evaluation? Several speakers, including representatives from the Riverside Museum in Glasgow and the Museum of Archaeology and Anthropology in Cambridge, gave examples of good practice.

Individual methods differed, but key to their success was the fact that the organisations had thought about summative evaluation early on and made it a continuous process, with a joined-up strategy linking previous and future evaluations.

The research project may be the first step in motivating others to do the same.

Send feedback to: maurice.davies@ntlworld.com

Summative evaluation: Key issues

  • A lack of overarching objectives and sharing of information means evaluation often has little impact beyond specific projects and useful findings may be lost
  • Conflicting purposes – is evaluation done for advocacy and accountability to funders, or to improve learning and practice?
  • Museums can be reluctant to seek a warts-and-all perspective because they worry about jeopardising future funding
  • Evaluators can feel that their work is undervalued and seen as a necessary chore rather than a beneficial enterprise. Reports often come too late for remedial action to be taken
  • Funders tend to focus on evaluating the “finished product” and its short-term impact. Could they do more to make evaluation an ongoing process?


