Why evaluation doesn't measure up

Christian Heath; Maurice Davies, 01.06.2012
Evaluation is a standard part of museum projects. There’s front-end evaluation, which takes place while early ideas for a project are developing.

There’s formative evaluation, which typically tests out prototype exhibits. Then, at the end of the project, there’s summative evaluation, which is – well – a little confusing. We are taking a critical look at it in a project called Evaluating Evaluation, funded by Heritage Lottery Fund and the Wellcome Trust.

No one seems to have done the sums, but UK museums probably spend millions on evaluation each year. Given that, it’s disappointing how little impact evaluation appears to have, even within the institution that commissioned it.

In some museums a range of staff scrutinise and discuss summative evaluations and attempt to use the findings to inform subsequent projects. However, this appears to be relatively rare.

The problem is exacerbated because many teams disperse after developing a particular exhibition or gallery – both internal staff and external consultants. Even members of the external design companies that were involved in the development of the project rarely get to see summative evaluations. A few museums publish some of their evaluations on the web, but many reports are seen by very few people.

It is possible that evaluation would have more impact if it were made more accessible. But availability is only one of many problems. There is huge variety in the ways in which evaluation studies are undertaken, in the themes they address, the methods they use, and in the ways in which data is analysed. They also vary in how they characterise and explore such matters as visitor motivation and learning.

Therefore, it’s difficult to compare evaluations or to draw conclusions that could inform the design and development of displays and exhibitions more generally. To be fair, formative evaluation often seems to significantly improve things such as interactive exhibits, an area in which evaluation is relatively advanced. But summative evaluation is problematic.

It’s normally sent to the project funders and often used for advocacy. In fact, many summative evaluations seem to set out to demonstrate success, rather than take an honest critical stance. People have told us that the versions of evaluations that do get circulated are often edited to play down problems.

Summative evaluations are expected to achieve the impossible: to help museums learn from failure, while proving the project met all its objectives. Is it time to rethink how the sector approaches evaluation?

Christian Heath is professor of work and organisation at King’s College London (KCL); Maurice Davies is visiting senior research fellow at KCL and head of policy and communication at the Museums Association. Examples of exemplary evaluations can be sent to:


MA Member
11.06.2012, 02:56
This sounds like important and interesting work. I look forward to seeing what the outcome is.

Reading this post and reflecting on it, I started to wonder if evaluation has become a mere reporting requirement without influencing organisational culture, as I wrote in this blog post.

It left me wondering whether funder-instigated evaluation has actually hindered the usefulness of evaluation within organisations. We are answering a funder's questions, not our own, with all the baggage associated with that distinction.