Why evaluation doesn’t measure up

Christian Heath; Maurice Davies
Evaluation is a standard part of museum projects. There’s front-end evaluation, which takes place while early ideas for a project are developing.

There’s formative evaluation, which typically tests out prototype exhibits. Then, at the end of the project, there’s summative evaluation, which is – well – a little confusing. We are taking a critical look at it in a project called Evaluating Evaluation, funded by the Heritage Lottery Fund and the Wellcome Trust.

No one seems to have done the sums, but UK museums probably spend millions on evaluation each year. Given that, it’s disappointing how little impact evaluation appears to have, even within the institution that commissioned it.

In some museums a range of staff scrutinise and discuss summative evaluations and attempt to use the findings to inform subsequent projects. However, this appears to be relatively rare.

The problem is exacerbated because many project teams, both internal staff and external consultants, disperse after developing a particular exhibition or gallery. Even members of the external design companies involved in developing the project rarely get to see summative evaluations. A few museums publish some of their evaluations on the web, but many reports are seen by very few people.

It is possible that evaluation would have more impact if it were made more accessible. But availability is only one of many problems. There is huge variety in the ways in which evaluation studies are undertaken, in the themes they address, the methods they use, and in the ways in which data is analysed. They also vary in how they characterise and explore such matters as visitor motivation and learning.

Therefore, it’s difficult to compare evaluations or to draw conclusions that could inform the design and development of displays and exhibitions more generally. To be fair, formative evaluation often seems to significantly improve things such as interactive exhibits, an area in which evaluation is relatively advanced. But summative evaluation is problematic.

It’s normally sent to the project funders and often used for advocacy. In fact, many summative evaluations seem to set out to demonstrate success, rather than take an honest critical stance. People have told us that the versions of evaluations that do get circulated are often edited to play down problems.

Summative evaluations are expected to achieve the impossible: to help museums learn from failure, while proving the project met all its objectives. Is it time to rethink how the sector approaches evaluation?

Christian Heath is professor of work and organisation at King’s College London (KCL); Maurice Davies is visiting senior research fellow at KCL and head of policy and communication at the Museums Association. Examples of exemplary evaluations can be sent to: christian.heath@kcl.ac.uk; maurice.davies@ntlworld.com


