Policy & Practice - A Development Education Review

 

 

The System Cannot Fail – Reflections on The Audit Society: Rituals of Verification

Issue 11
Monitoring & Evaluation
Autumn 2010

Carlos Bruen

Auditing, monitoring, evaluating.  The mere mention of these can leave the listener or reader in a temporary state of boredom until the topic shifts to more engaging issues.  Yet as many in the development education community know all too well, assessment, checking and account-giving are an everyday part of work.  Moreover, success or failure in auditing, monitoring and evaluation can have critical implications for the future of those audited, highlighting in turn the central role of these practices in organisation and control.  Michael Power recognises this centrality in his engaging and critical exploration of the audit explosion in the United Kingdom in the late 1980s.  First published in 1997, The Audit Society: Rituals of Verification remains as relevant today as it was then.

 

            Power, questioning the meaning behind the explosion in auditing, monitoring and evaluating, distinguishes between the operational and normative characteristics of these practices.  This leads him to highlight how practices of evidence gathering are also ideas-bound, or rather, ‘systems of values and goals inscribed in the official programmes which demand it’ (1997:7).  From this, Power begins his exploration by examining the history of financial auditing.  He describes the shifting and contentious relationship between audit practices and programmatic responses to financial scandals, corporate failures and the detection of fraud.  This serves to illustrate how the audit process is a collective activity, characterised by an ambiguity that permits discretion in constructing a legitimising narrative, one that also supports the evaluation and monitoring routines and procedures themselves.  He argues that ultimately these routines and procedures, coupled with slavish adherence to performance measures, can serve simply to maintain an institutionally credible audit system.  This falls short of the ideal of productive learning and improvement that monitoring and evaluation practices arguably should set out to achieve.

 

            Power goes on to examine evaluation exercises in higher education and medicine, highlighting how an excessive focus on these practices can have dysfunctional effects on organisations.  These case studies lend further strength to his arguments, and point to an aspirational dimension in many auditing practices that is not always linked to operational capacity, improvement or the objectives of the organisations being evaluated.  Power does not, however, reject the need for and value of monitoring and evaluation outright, given that these procedures can greatly assist organisations.  Instead, his book represents a critical questioning of everyday practices that are often taken for granted, despite these practices being a powerful force for organisation and control.

 

            At times theoretically complex, and sometimes lacking empirical support, the book nonetheless opens up for questioning the consequences of checking and monitoring, and warns against the worst excesses of evaluation procedures.  Moreover, it offers rewards, particularly for those directly engaged in the auditing, monitoring and evaluation procedures required by their donors and by their own organisation.  It asks the reader to consider who and what auditing, monitoring and evaluation procedures and routines are for, to question the neutrality of monitoring techniques, and to consider them bound to the maintenance of institutional credibility.  By recognising the normative character of monitoring procedures, it asks us to question the value systems underlying these procedures and, significantly, the social relations that produce them.  As Power notes, auditing is an interactive and negotiated process.  This raises several questions: to what extent are members of the development education community, and which members, contributing to the value system underlying the official donor evaluation practices to which their organisations are subject?  And if the broader community is not contributing to any great extent, how might it ensure that it does in the future?  At a minimum, it asks for the nuts and bolts of auditing, monitoring and evaluation to be hotly debated within the development education community, so that its members might also contribute to the design and implementation of the instruments of organisation and control that these procedures represent.

 

            Provocatively, it also asks that development educators question the monitoring and evaluation procedures they construct and use to determine the effectiveness of their own development education programmes.  What normative framework shapes programme evaluation?  Are these evaluation procedures designed to enhance programme effectiveness, and how do they connect with the aspirational goal of assisting programme participants in challenging global inequalities and bringing about a more just, sustainable world?  Or are development educators caught up in rituals of verification that merely produce comforting signals for themselves and their funders?

 

References

 

Power, M (1997) The Audit Society: Rituals of Verification, Oxford: Oxford University Press.

 

 

Carlos Bruen is a researcher at the Royal College of Surgeons in Ireland. His work focuses on global health policy and governance, including analysis of the impact of global health initiatives on the health systems of southern African countries (see www.ghinet.org).

Citation: 
Bruen, C (2010) 'The System Cannot Fail – Reflections on The Audit Society: Rituals of Verification', Policy and Practice: A Development Education Review, Vol. 11, Autumn, pp. 131-133.