Article Information

Author:
Matodzi M. Amisi1

Affiliation:
1Department of Planning, Monitoring and Evaluation, South Africa

Correspondence to:
Matodzi Amisi

Email:
matodzi@presidency-dpme.gov.za

Postal address:
Room 273A, Union Buildings, Pretoria 0001, South Africa

Dates:
Received: 04 Feb. 2015
Accepted: 23 June 2015
Published: 31 Aug. 2015

How to cite this article:
Amisi, M.M., 2015, ‘Development of South Africa’s national evaluation policy and system 2011−2014’, African Evaluation Journal 3(1), Art. #109, 7 pages. http://dx.doi.org/10.4102/aej.v3i1.109

Copyright Notice:
© 2015. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Development of South Africa’s national evaluation policy and system 2011−2014
Abstract

There is growing recognition of the complex relationship between evaluation and research on the one hand, and policy and practice on the other. Policy making is inherently political, and public administration is contingent on various factors other than evidence, such as budgets, capabilities and systems. This understanding has led the Department of Planning, Monitoring and Evaluation (DPME) in South Africa to challenge conventional ideas of communication between evaluators and policymakers and practitioners, which are characterised by monologues from evaluators to policymakers and practitioners, reserved exclusively for communicating the finished product. This article is a reflection on the DPME’s emerging evaluation work, which explores the relational dynamics between evaluators and programme personnel and encourages more interactive and diversified communication throughout the evaluation process. The article offers an observation from within the public sector. The lessons and implications can be useful, firstly, to other countries establishing evaluation systems, and also to those with an interest in enhancing the use of evidence by government agencies in developing countries.

Introduction

The use of empirically derived evidence to inform policy development and review, and project decisions, is critical to improving the performance and effect of government policy and programmes (Ajakaiye 2007; Mackay 2007). Although a premium is placed on empirical evidence, its use in policy and practice remains minimal (Davies, Nutley & Smith 2000a; Dhaliwal & Tulloch 2013; Duncan 2005; Mackay 2007). South Africa is no different from other countries in this regard. When Cabinet approved the National Evaluation Policy Framework (NEPF) in 2011, it placed emphasis on increasing the utilisation of evaluative evidence in planning, budgeting and management decisions (Department of Planning, Monitoring and Evaluation 2011). This fitted in with government’s public management discourse of the time. The outcomes-based approach (the government’s approach to public management), adopted in 2010, emphasises outcomes and results and the need to focus government investment on interventions and programmes that maximise the impact of government expenditure (Presidency 2010). Despite the adoption of a results-based management policy framework five years prior, the application of empirical evidence in policy and practice remains sporadic (DPME 2014). A number of factors explain this trend, including the invisibility of and inadequate access to empirical studies, and limited capacity to apply evidence (Paine-Cronin 2011).

Evaluation is relatively new in the South African government management system. Although other agencies have been doing evaluations for some time, it was only after the establishment of the DPME and the approval of the NEPF in 2011 that evaluations were systematised and institutionalised within government. The DPME is an oversight department mandated to support the use of monitoring and evaluation evidence in government. Its responsibility is limited to the monitoring and evaluation of programmes; it does not implement social programmes itself. Consequently, the DPME sought to strengthen the evaluation-practice interface through a system that is underpinned by collaboration and partnerships, wherein ongoing communication is critical to ensuring adoption of the system and use of evaluation evidence. This article reflects on the DPME’s experience with institutionalising communication between evaluators and practitioners throughout the evaluation process. It highlights how, in a context where evaluations are nascent and a partially open window of influence exists, communication carried out this way enhances the likelihood of evaluations influencing policies, programmes and management practice.

Defining use

There is growing appreciation of the complex relationship between evidence and its use in policy and management decision making. Increasingly, scholars have conceded that policy making is inherently political, influenced by a number of imperatives (including budgetary, administrative and contextual limitations) other than scientific evidence on what works or does not. Empirical knowledge itself − particularly in social services − is not apolitical or value free and rarely provides unequivocal solutions to an issue (Duncan 2005; Jones, Datta & Jones 2009; Kitson et al. 2008; Landry, Lamari & Amara 2003; Mendizabal 2014). There is minimal demonstrated linear progression from the provision of evidence to its direct implementation (Duncan 2005; Kitson et al. 2008; Weiss 1979). In response to this challenge, concepts such as ‘evidence-influenced policy’, ‘evidence-aware policy’ (Davies & Nutley 2001; Davies et al. 2000a) and ‘evidence-inspired policy’ (Duncan 2005) emerged in research and policy terminology. These alternative concepts embrace the complexity of policy and emphasise the idea of evaluations (and research) providing substantive material that enriches the knowledge base of policy and decision makers. In these frames, scientific evidence is seen as only an ‘influencer’ of policies and decisions (Davies & Nutley 2001) within a complex policy environment. This view is corroborated by Peck and Gorzalski’s (2009) research, which found minimal instrumental use and widespread conceptual application of evaluation findings. Such application is broader than the operationalisation of evaluation recommendations and includes all potential uses and learning − the acquisition of new knowledge, skills and conceptual enlightenment − that result from both the evaluation process and evaluation findings.

The South African NEPF recognises the complexity inherent in the research-practice nexus and does not explicitly refer to evidence-based policy as its theoretical underpinning. Although emphasis is placed on the ‘use’ of evaluation findings as the primary objective of doing evaluations, ‘use’ is defined and applied broadly to include both instrumental and conceptual use. The NEPF further recognises that within the government environment, use cannot focus only on government’s application of acquired knowledge, but should include the public’s utilisation of evidence to hold government accountable. The policy framework uses the terms ‘inform’ and ‘use’ interchangeably, or within the same text, to describe the potential of evaluation evidence to influence decision and policy making in government. Evaluation is understood as part of broader outcomes-oriented public management and accountable democratic governance. In this article, the term ‘use’ is applied broadly and interchangeably with the term ‘influence’, referring to the different ways in which evaluations can (or do) affect policy, programmes, organisational operations and management. Whenever reference is made to ‘use’, ‘influence’ and evidence-based policy in this article, the reader should understand these terms in this broader sense.

Effective communication: An evolving concept

Within the knowledge development sectors, communication is often viewed as a process or event that happens after an evaluation has been completed (Edwards 2010; Lester & Braverman 2012). Much emphasis has been placed on the communication of evaluation results; numerous tools have been developed and published to aid evaluators in communicating the outcomes of their evaluations more effectively. This is often predicated on a simplistic notion of how empirical evidence influences policy and practice (Almeida & Báscolo 2006), and on the assumption that the audience will understand, agree with and accept the value of such communication and will invariably take action upon receipt of the message (Rochow 2005). There is an inclination to focus on the implementation of evaluation recommendations, which is an end product of the evaluation process and dependent on there being an evaluation report (Forss, Rebien & Carlsson 2002). This one-way communication, in which the evaluator is the transferor of knowledge to recipient policy actors, fails to recognise the complexities that characterise the policy space and is ineffective in influencing the actions of practitioners. To increase the advantage or competitive edge evaluations have in influencing policy and practice, evaluators have to view themselves and communication differently.

Sherwood, Paredes and Ordóñez (2014:33) stated that communication should be understood as more than just words and language and that it is about ‘sense-making and meaning-making as well as social organisation’. Communication involves identifying patterns, constructing order and a plausible understanding of the patterns, and interpreting the meaning of people’s experiences and observations that might otherwise be unintelligible. Through the application of communication tools, an alternative interpretation is offered, translating the incomprehensible or meaningless into information the recipients can engage with. Meaning- and sense-making are inherently the result of sharing and testing emerging ideas with others (Ancona 2010). Communication has to be a two-way process to offer an effective and meaningful representation of observed phenomena. Effective communication is, therefore, intrinsically dialogical. Dialogue, the direct relation between people, implies the sharing of thoughts and knowledge, and collective thinking, in a context where participants put aside their opinions and conclusions to fully understand the intended message (Jenlink & Banathy 2005; Rallis & Rossman 2000). It requires openness to having preconceived ideas altered in the exchange with others (Jenlink & Banathy 2005; Swidler 2011). This kind of interaction and exchange is generative; it provides material to transform existing beliefs and create new meaning (Jenlink & Banathy 2005; Rallis & Rossman 2000). In this dialogical form, communication is not something that is done or used, but a relation that is created (Jenlink & Banathy 2005); and according to Sherwood et al. (2014), it is simply the essence of being a researcher.

Seeing communication as involving meaning making and requiring interaction between various players challenges the notion of a strict separation between the spheres of research and practice, in which engagement with the potential users of evidence is reserved exclusively for sharing completed pieces of work. Evaluators can no longer afford to see themselves as abstracted from the context they study. By recommending (and communicating) a particular course of action over another, they act upon their context, becoming social actors (Sherwood et al. 2014). Rallis and Rossman (2000) argued that evaluators, as agents of change, have to see themselves as critical friends. To give empirical evidence the boost it needs to influence or inform policy discourse and practice, evaluators need to be willing to connect with those who occupy the policy and practice spaces. There has to be sustained, meaningful interaction between evaluators and programme people, informing how the evaluation is formulated, which methods can best respond to the evaluation questions, and how data are interpreted into knowledge that can be applied. Open partnerships that span the development, validation and incorporation of evidence make meaningful dialogue between the evaluator and programme officials possible, and can establish firmly, in the knowledge base of practitioners, the lessons coming out of the evaluation. This is not to compromise the evaluator’s independence, objectivity or credibility, but to enhance the probability of the evaluation being utilised in different ways (Davies & Nutley 2001; Edwards 2010).

DPME experience: Facilitating dialogical communication

With a use-oriented evaluation system, the DPME adopted a user-led approach to implementing evaluations. Programme people, and not evaluators, elect to partner with the DPME by proposing their programmes for evaluation. The conceptualisation process is carried out collaboratively between an evaluator based in the DPME and the relevant programme people. Collectively, they define the problem the evaluation is to address and identify both the methodology and the evaluation expertise best suited to respond to the evaluation questions. The discussions held between an evaluator and programme people encourage collective reflection and knowledge sharing that frame the focus of the evaluation and the evaluation questions.

Most of the programmes evaluated as part of the DPME’s National Evaluation Plan (NEP) − an annual plan of government’s important evaluations − are fairly extensive in terms of budget, the number of people they reach and political significance. Most of them are implemented by (or at least affect) a variety of institutions and government departments. The NEPF recommends that an Evaluation Steering Committee (ESC) manage each of the evaluations. The ESC is made up of the relevant institutions and departments, experts in the particular service area and programme, and the DPME, and is chaired by the department that proposed the evaluation (DPME 2011). The ESC has an overall project management role, providing substantive input on important evaluation outputs, starting with conceptualisation (i.e. the Terms of Reference [ToRs]) and progressing to the implementation of the evaluations and the recommendation of changes to the programmes evaluated. The steering committee discussions are formative; the committee is a space for constructive dialogue between the evaluator and programme people, mediated by the external voices of the DPME and industry experts, as the evaluation unfolds.

For the credibility of the evaluation findings, the DPME outsources the implementation of evaluations to independent evaluators in academia, research non-governmental organisations (NGOs) or private consulting firms. Effective and accurate communication is important during the implementation of the evaluation. The appointed independent evaluator engages closely with an advisory team (a subset of the ESC that includes key programme people) and the members of the ESC. All evaluation deliverables are subject to discussion and input from this advisory group, whilst important deliverables (i.e. the inception report and proposal of the evaluator, the literature review, methodology chapters and evaluation reports) are presented by the evaluator and discussed at ESC meetings. The ESC creates the space for dialogical communication (Rallis & Rossman 2000), in which the evaluator and programme people have sufficient interaction to influence both the evaluation and practice. It bridges the divide between the spheres of research and practice. Evaluators − although afforded a degree of independence − are neither detached from the programme space nor mere transferors of knowledge to policy makers. Evaluators are what Rallis and Rossman (2000) termed the critical friend. In a symbiotic relationship, the evaluator, through the application of evaluation methods, offers possible interpretations of practitioners’ observations and experiences. The evaluator’s interpretations, in turn, are interrogated by those affected.

This early engagement and ongoing conversation between evaluators and programme people builds the knowledge base of programme people in implementing departments, and their confidence in the evidence generated. They will, in turn, refer to and infer from the evaluation, and become advocates for it within their departments. They navigate and transcend the evaluation-practice boundaries and, by applying learnings from their participation in the evaluation, integrate new understandings from evaluation findings into practice. Furthermore, the involvement of programme stakeholders in the framing of the issue to be evaluated, the analysis of findings and the making of recommendations during the evaluation process builds trust and ownership of the process (Davies, Nutley & Smith 2000b; Peck & Gorzalski 2009; Rallis & Rossman 2000). Trust is a function of repeated personal interaction between parties (Arvey 2009) and is critical in encouraging the use of evidence (Carden 2005). When officials own an evaluation process, they champion the evaluation, and studies have shown that an inclusive and participatory evaluation process increases the likelihood of an evaluation being utilised (Patton 1997; Peck & Gorzalski 2009; Raab & Stuppert 2014).

An additional benefit of stakeholders being involved in the process is that it increases the evaluator’s understanding of the programme context, which is an enabling factor for the use of evaluative evidence (Peck & Gorzalski 2009; Raab & Stuppert 2014; Torres, Preskill & Piontek 2005). Government policies and programmes often respond to complex social problems through complicated institutional arrangements that span a number of organisations. The DPME’s experience shows that evaluators, although they have evaluative skills, sometimes have a limited understanding of the programme implementation environment. Through constant dialogical communication between the evaluator and programme people, the evaluator gains a deeper understanding and appreciation of the challenges facing a programme and the social problem it responds to, and − together with programme people − identifies ways in which the programme’s challenges can be addressed to improve performance and impact (Carden 2005). Such cooperation and dialogical communication increases the likelihood of the evaluation responding to the perceived policy and programme challenges, and of the recommendations being relevant, practical and actionable (Peck & Gorzalski 2009).

Maintaining ongoing interactive communication between practitioners and evaluators throughout the evaluation process is complex, particularly when issues are politicised and people do not respond timeously to requests. This can delay evaluations and potentially increase associated costs. However, despite the challenges, there are more benefits than risks in communication between evaluators and programme people as the evaluation unfolds. In practice, the experience has been that where programme managers are actively involved in the evaluation, and senior managers support the evaluation process, minimal resistance to implementing the findings of the evaluation is observed. The approach also builds trust between the DPME and the line departments, which extends to trust in the evaluation practice and its outcomes. This is important for the sustainability of the evaluation discipline in government departments.

Communicating findings through continued engagement

Communication during implementation is specific and has a narrow focus regarding its intended audience and purpose. Communication of findings and recommendations remains necessary for a wider audience to access them and understand their implications (Da Costa 2008; Davies & Nutley 2001; Raab & Stuppert 2014; Stetson 2008). The DPME recognises the limitations of conventional communication approaches and has diversified its communication to reach different target audiences. Conventional dissemination ideas tend to lean heavily towards researcher- and specialist-oriented communication. More value is ascribed to standard academic publications, that is, peer-reviewed journals, conference papers and the like, which fulfil the curiosity of those who already have an interest in academic knowledge (Dhaliwal & Tulloch 2013). The evaluator often takes centre stage, and the emphasis is on communicating to policymakers and practitioners, with minimal feedback to evaluators as to whether the message has been understood fully and accurately. Where there is a flood of information, competing interests and the need to balance a variety of factors (Davies et al. 2000b), these traditional methods of communicating evaluation results, dominated by monologues from evaluators to practitioners and policymakers, are insufficient and ineffective. They do not adequately incorporate evaluation findings into the knowledge base of policymakers and practitioners (Davies & Nutley 2001; Edwards 2010). Increasingly, there is a recognition that communication that is dynamic and iterative (Peck & Gorzalski 2009; Stetson 2008; Torres et al. 2005), and part of a continuous dialogue between evaluators and practitioners (Carden 2005), is more effective in influencing policy and practice. This is the thinking the DPME has adopted in its approach to communicating evaluation results.

The DPME encourages the identification of end-users of evaluations during the conceptualisation phase. This informs the development of messages and communication material for different stakeholder groups. The practical value of findings might not be as obvious to policy makers or practitioners as knowledge producers may want to think (Jones et al. 2009). Empirical research, therefore, has to be translated and adapted into formats and language that are accessible to end-users (Jones et al. 2009; Landry et al. 2003). The DPME has used different tools to translate technical evaluation reports into information that practitioners can use.

At report level, to make evaluation findings more accessible, the DPME's standardised reporting includes, in addition to the overall evaluation report and other papers that evaluators might want to extract from the evaluation, three summary reports. These consist of:

  • A one-page policy brief that provides strategically oriented information and is addressed mainly to political principals, specifically the Minister of the relevant portfolio.
  • A five-page summary targeted at the senior management of the relevant departments that summarises the evaluation process, the findings and recommendations.
  • A 25-page summary report that provides a synthesis of the overall evaluation report.

The reports are written in simplified language, free of technical jargon, to make them accessible to both programme people and the layperson, with inputs from practitioners. Whilst the main report is also made available to the public, the three summary reports are distributed more widely. A series of policy briefs that situate evaluation findings within a broader context, to highlight policy implications and recommendations, is being developed to complement the summary reports. The policy briefs will be targeted primarily at officials within the public sector and at special interest groups such as academics, civil society, and research and policy centres. They are written by programme people, which encourages them to reflect on what the evaluation implies for practice.

The written reports, however, do not adequately encourage continued dialogue around evaluation results, nor are they sufficient to ensure the adoption and use of evaluation evidence. The DPME holds the view that interaction is as important when sharing evaluation results as it is during the implementation phase. Together with the programme people, the DPME makes presentations to various groupings of stakeholders, including the senior management of the custodian department, governance forums such as the Minister and Members of the Executive Council (MINMEC), Cabinet, relevant portfolio committees in parliament and any other structures relevant to a particular evaluation. As policymakers and practitioners are effectively a conglomeration of various stakeholder groups of varying ideologies, influence and power (Pointer 2014), discussions around the evaluation results are held separately with each of the individual groupings. This is important for two reasons. Firstly, it allows messages to be tailor-made for each group, fitting in with their ideological inclinations and political affinities. This enables communication activities to be targeted appropriately and findings and recommendations to be relayed in a manner that is persuasive to a specific target group. It also minimises the chances of ideological and pragmatic conflict between stakeholders during the discussions, which could detract from the communication exercise and derail the chances of the evaluation message being heard, understood and acted upon. Secondly, it ensures that information is presented in a way that fits in with the audience’s way of seeing the world (Jones et al. 2009), which cannot be achieved with traditional communication approaches. Conventional dissemination methods tend to use authoritative language that focuses on results and observed outcomes without being grounded in an understanding of the contextual intricacies or of those who are engaged in implementation (Rallis & Rossman 2000). The language used in such presentations of evaluation results is distanced and depersonalised, not geared to the audience and their level of comprehension. It can make practitioners and policy people defensive and less likely to learn from such an evaluation (Rallis & Rossman 2000). Dialoguing on evaluation findings makes communication far more than merely the provision of information. It is a way of adding meaning to often complex and technical evaluation findings in a way that is understandable and enlightening to the target audience. This breaks down defences and improves the willingness to allow the message to challenge or complement pre-existing views.

For internal stakeholders (Ministers, MINMEC, Cabinet, parliament, etc.), oral interpersonal communication has been relatively effective in securing political support for the evaluations. It has encouraged discussions between the evaluators, practitioners and policymakers, which − according to Torres et al. (2005) − foster understanding, enhance transparency and increase the likelihood of partners committing to act on evaluation findings. Even in cases where no immediate decision is taken, the interaction demonstrates the value of evaluations to policy makers and sensitises them to it. A political context that embraces the use of evidence can only improve the utilisation of evaluation evidence (Dhaliwal & Tulloch 2013; Jones et al. 2009; Raab & Stuppert 2014).

The NEPF has repeatedly underlined the importance of sharing evaluation evidence with the public (DPME 2011). When evaluations commissioned by government agencies are not made public, an opportunity is missed to inform public discourse and policy debates, enhance the knowledge base on a particular issue and promote accountability. The DPME’s website has been a useful platform for reaching a wider audience. All evaluation reports, management responses to the evaluations and improvement plans (for the implementation of recommendations) are placed on the website once they have been presented to Cabinet. Although this is relatively new, there are indications that it is increasingly inserting evaluation evidence into the public debate.

The DPME also encourages knowledge sharing and the use of evaluation findings in various discussion forums and seminars. The seminars bring together government, civil society, academia and other sectors interested in a particular issue. They create the opportunity for discussion between participants and provide a chance for sharing knowledge and experiences. What the DPME has done differently is that programme managers present evaluation findings and lead discussions. This is a move away from conventional dissemination methods (Rallis & Rossman 2000), in which the evaluator takes centre stage and communicates to policy makers and practitioners without feedback or engagement, often using authoritative language that focuses on results and observed outcomes without being grounded in an understanding of the contextual intricacies or of those engaged in implementation. The seminars are expected to encourage sustained dialogue between evaluators and practitioners, where communicating evaluation findings is an active process that involves both the evaluator and practitioners.

A number of limitations affect the DPME’s communication strategy. The DPME is not an independent evaluation institution; it operates within the bureaucratic normative rules and institutional culture that govern the public sector. As evaluations are collaborations between the DPME and the custodian departments, all communication efforts are guided by how much the custodian departments actually want to communicate and whether they allow effective communication to take place. The DPME has no legal or institutional jurisdiction to communicate the evaluation results of an implementation programme that it has no responsibility over and for which it is not accountable. If the DPME were to communicate, on its own, the evaluation outcomes of another department’s programme, especially negative results, it could appear as an exposé of non-performance, not only of an individual programme, but of the department or ministry. This is particularly concerning given that the DPME is situated in, and is part of, the Presidency. However, if the DPME only publishes positive results widely, the credibility of the evaluation system (and of the entire government evaluation endeavour) could be jeopardised. There is a tension between insisting that departments make evaluation results accessible to a wider audience, through media and other tools, and growing the fledgling evaluation system. As evaluations are still emerging in the South African public sector, forcing departments to go public with results whilst they still experience evaluation anxieties might be met with resistance and potentially with withdrawal from participating in evaluations.

These challenges demand that the DPME reconsider the role it should, and wants to, play in encouraging wider communication of evaluation results. The DPME is an oversight department with a long-term objective of generating a continuous demand for evaluations and promoting evidence-informed policy and practice within government. At a point where the national evaluation system is only four years old, there are indications that the department can perhaps better enhance the use of evaluation evidence through strengthened, active and direct communication with the users of evidence. This can be through a collaborative approach to evaluations that sustains communication between programme people and evaluators during the evaluation process, through dialogue with policy makers, and through policy dialogue between government and civil society. These approaches have proven relatively effective in promoting evidence-informed policy, programme planning, budgeting and operations. In the longer term, as the discipline matures and evaluations become embedded in the operating processes of government, there will be more scope for evaluation findings to be communicated transparently and widely. For now, it would seem more beneficial to focus on communication that generates interest in and appreciation of evaluations, and encourages the application of lessons learned in policy and management practices.

Concluding remarks

This article has offered a reflection from a government department on the value of communication that underpins the entire evaluation process as a facilitative factor for the increased use of evaluative evidence. Conventionally, most emphasis has been placed on the communication of evaluation results. The DPME experience has, however, shown the importance of two-way, interactive communication between practitioners and evaluators throughout the implementation of an evaluation, including the communication of results. Where programme and senior managers are engaged and communicated with during the conceptualisation, implementation and discussion of emerging lessons from an evaluation, less contestation has been experienced and recommendations have been implemented with relative ease. This is not to understate the communication of results. Communication of evaluation results remains important, but should be appreciated within the same frame of collaboration and collectivism, where evaluators see themselves as actors in the policy space. Effective communication of evaluation results requires more than a once-off event where the evaluator tells practitioners what they are not doing correctly, or what is not working and what they should do. It should be part of a continuous conversation and relationship-building exercise between evaluators and practitioners. Communication practised this way can certainly boost the influence evidence has on policy and practice. It is important to note that communication is not a panacea for all the problems that hamper the application of evaluation evidence. It is simply a tool available to evaluators, one that can significantly improve the utility and usability of evaluation evidence.

Acknowledgements

My thanks go to Beryl Leach, International Initiative for Impact Evaluation (3ie), for reviewing the first draft of this article. Her insights on the topic of communication around evaluations were certainly helpful in refining the article.

Competing interests

The author declares no financial or personal relationship(s) that may have inappropriately influenced the writing of this article.

References

Ajakaiye, O., 2007, ‘Levelling the playing field: Strengthening the role of African research in policymaking in and for Sub-Saharan Africa’, in E.T. Ayuk & M.A. Marouani (eds.), The policy paradox in Africa: Strengthening the link between economic research and policy-making, pp. 19–36, Africa World Press, Trenton.

Almeida, C. & Báscolo, E., 2006, ‘Use of research results in policy decision-making, formulation, and implementation: A review of the literature’, Cadernos de Saúde Pública 22 suppl., S7–S33. PMID: 17086340, http://dx.doi.org/10.1590/S0102-311X2006001300002

Ancona, D., 2010, Sensemaking: Framing and acting in the unknown, viewed 3 September 2014, from http://www.sagepub.com/upm-data/42924_1.pdf.

Arvey, R., 2009, Why face-to-face business meetings matter, viewed 27 August 2014, from http://www.iacconline.org/content/files/WhyFace-to-FaceBusinessMeetingsMatter.pdf.

Carden, F., 2005, ‘Context matters − The influence of IDRC-supported research on policy processes’, in E.T. Ayuk & M.A. Marouani (eds.), The policy paradox in Africa: Strengthening the link between economic research and policy-making, pp. 93–129, Africa World Press, Trenton.

Da Costa, P., 2008, Study on communicating evaluation results, viewed 26 August 2014, from http://www.oecd.org/dac/evaluation/StudyonCommunicatingEvaluationResults_FINAL_091112.pdf

Davies, H.T.O. & Nutley, S.M., 2001, ‘Evidence-based policy and practice: Moving from rhetoric to reality’, paper presented at the Third International Interdisciplinary Evidence-based Policies and Indicator Systems conference, University of Durham, 3–7 July, viewed 26 Aug. 2014, from http://www.cem.org/attachments/ebe/P086-P095%20Huw%20Davies%20and%20Sandra%20Nutley.pdf.

Davies, H.T.O., Nutley, S.M. & Smith, P.C., 2000a, ‘Learning from the past, prospects for the future’, in H. Davies, S. Nutley & P. Smith (eds.), What works: Evidence based policy and practice in the public sector, pp. 351–361, Policy Press, Bristol. http://dx.doi.org/10.1332/policypress/9781861341914.003.0016

Davies, H.T.O, Nutley, S.M. & Smith, P.C., 2000b, ‘Introducing evidence-based policy and practice in public services’, in H. Davies, S. Nutley & P. Smith (eds.), What works: Evidence-based policy and practice in the public sector, pp. 351–361, Policy Press, Bristol. http://dx.doi.org/10.1332/policypress/9781861341914.003.0001

Department of Planning, Monitoring and Evaluation (DPME), 2011, The national evaluation policy framework, Presidency, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2014, ‘The study into the state and use of monitoring and evaluation in government: Synthesis report and strategic proposals for continuous improvements’, (unpublished document), Presidency, Pretoria.

Dhaliwal, I. & Tulloch, C., 2013, From research to practice: Using evidence from impact evaluations to inform development policy, viewed 26 August 2014, from http://www.povertyactionlab.org/publication/research-policy

Duncan, S., 2005, ‘Towards evidence-inspired policymaking’, Social Sciences 61, viewed 27 August 2014, from http://www.esrc.ac.uk/_images/Social%20Sciences%20issue61_tcm8-8265.pdf

Edwards, M., 2010, ‘Making research more relevant to policy: Evidence and suggestions’, in G. Bammer, A. Michaux & A. Sanson (eds.), Bridging the ‘know–do’ gap: Knowledge brokering to improve child wellbeing, pp. 55–64, Australian National University, Canberra.

Forss, K., Rebien, C.C. & Carlsson, J., 2002, ‘Process use of evaluations: Types of use that precede lessons learned and feedback’, Evaluation 8(1), 29–45. http://dx.doi.org/10.1177/1358902002008001515

Jenlink, P.M. & Banathy, B.H., 2005, ‘Dialogue: Conversation as culture creating and conscious evolving’, in B.H. Banathy & P.M. Jenlink (eds.), Dialogue as a means of collective communication, pp. 3–16, Kluwer Academic/Plenum Publishers, New York. http://dx.doi.org/10.1007/0-306-48690-3_1

Jones, N., Datta, A. & Jones, H., 2009, Knowledge, policy and power: Six dimensions of the knowledge-development policy interface, Overseas Development Institute, viewed 09 May 2014, from http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/4919.pdf

Kitson, A.L., Rycroft-Malone, J., Harvey, G., McCormack, B., Seers, K. & Titchen, A., 2008, ‘Evaluating the successful implementation of evidence into practice using the PARiHS framework: Theoretical and practical challenges’, Implementation Science 3, 1. http://dx.doi.org/10.1186/1748-5908-3-1

Landry, R., Lamari, M. & Amara, N., 2003, ‘The extent and determinants of the utilization of university research in government agencies’, Public Administration Review 63(2), 192–205. http://dx.doi.org/10.1111/1540-6210.00279

Lester, W.B. & Braverman, M.T., 2012, ‘Communicating results to different audiences’, in M.J. Braverman, N.A. Constantine & J.K. Slater (eds.), Foundations and evaluation: Contexts and practice for effective philanthropy, pp. 281–304, Wiley.

Mackay, K., 2007, How to build M&E systems to support better government, The World Bank, Washington, DC. http://dx.doi.org/10.1596/978-0-8213-7191-6

Mendizabal, E., 2014, ‘Communicating complex ideas’, in E. Mendizabal (ed.), Communicating complex ideas: Translating research into practical social and policy changes, pp. 1–12, On Think Tanks.

Paine-Cronin, G., 2011, Use of evidence in policymaking in South Africa: A sample of attitudes of senior policymakers, Report for Programme to Support Pro-Poor Policy Development (PSPPD), Pretoria.

Patton, M.Q., 1997, Utilisation-focused evaluation: The new century text, Sage, London, quoted in M.M. Mark & G.T. Henry, 2004, ‘The mechanism and outcomes of evaluation influence’, Evaluation 10(1), 35–57. http://dx.doi.org/10.1177/1356389004042326

Peck, L.R. & Gorzalski, L.M., 2009, ‘An evaluation use framework and empirical assessment’, Journal of MultiDisciplinary Evaluation 6(12), 139–155.

Pointer, R., 2014, Power and ideology are key to communicating research, viewed 05 September 2014, from http://www.politicsandideas.org/?p=1604

Presidency, 2010, Guideline to outcomes approach, Presidency, Pretoria.

Raab, M. & Stuppert, W., 2014, Review of evaluation approaches and methods for interventions related to violence against women and girls, viewed 09 July 2014, from http://r4d.dfid.gov.uk/pdf/outputs/misc_gov/61259-Raab_Stuppert_Report_VAWG_Evaluations_Review_DFID_20140626.pdf

Rallis, S.F. & Rossman, G.B., 2000, ‘Dialogue for learning: Evaluator as critical friend’, New Directions for Evaluation 86, 81–92. http://dx.doi.org/10.1002/ev.1174

Rochow, G., 2005, ‘The key role of communication theory in reporting evaluation findings in multi-institutional international evaluations’, paper presented at the Joint Canadian Evaluation Society/American Evaluation Association evaluation conference, Toronto, 28 October.

Sherwood, S., Paredes, M. & Ordóñez, A., 2014, ‘Moving from communication as a profession to communication as being in Northern Ecuador’, in E. Mendizabal (ed.), Communicating complex ideas: Translating research into practical social and policy changes, pp. 31–55, On Think Tanks.

Stetson, V., 2008, Communicating and reporting on an evaluation: Guidelines and tools, Catholic Relief Services, Baltimore.

Swidler, L., 2011, What is dialogue? viewed 28 September 2014, from http://institute.jesdialogue.org/fileadm

Torres, R.T., Preskill, H. & Piontek, M.E., 2005, quoted in V. Stetson, 2008, Communicating and reporting on an evaluation: Guidelines and tools, Catholic Relief Services, Baltimore.

Weiss, C.H., 1979, ‘The many meanings of research utilization’, Public Administration Review, 39(5), 426–431. http://dx.doi.org/10.2307/3109916


 
