About the Author(s)


James Kabuye
School of Business, Uganda Technology and Management University, Uganda

Benon C. Basheka
School of Business, Uganda Technology and Management University, Uganda

Citation


Kabuye, J. & Basheka, B.C., 2017, ‘Institutional design and utilisation of evaluation results in Uganda’s public universities: Empirical findings from Kyambogo University’, African Evaluation Journal 5(1), a190. https://doi.org/10.4102/aej.v5i1.190

Original Research

Institutional design and utilisation of evaluation results in Uganda’s public universities: Empirical findings from Kyambogo University

James Kabuye, Benon C. Basheka

Received: 18 Nov. 2016; Accepted: 24 Mar. 2017; Published: 29 June 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: The need for evidence-based decision-making has scaled up the demand for monitoring and evaluation systems in Africa. The education sector has received increasing scrutiny, owing to its centrality in promoting countries' national agendas. The higher education sub-sector has expanded in its drive to increase accessibility, albeit with numerous challenges and doubts, especially about the quality of education. Numerous evaluations have been carried out in this sub-sector in Uganda, but their results have not been used for effective decision-making. This non-utilisation of evaluation findings is attributable to the design of the institutions in which these evaluations are carried out.

Objectives: The study examined the relationship between institutional design (procedural rules, evaluation processes and institutional capacity) and utilisation of evaluation results at Kyambogo University.

Methodology: This was a cross-sectional survey involving a sample of 118 respondents whose views were obtained through the use of questionnaires and key informant interviews triangulated with documentary analysis.

Results: The study found that procedural rules, evaluation processes and evaluation capacity had a positive (0.459, 0.486 and 0.765, respectively) and a statistically significant (sig. = 0.000) effect on utilisation of evaluation results. This means that the dimensions of institutional design were important predictors of utilisation of evaluation results by a public sector agency.

Conclusion: An emerging recommendation from this study is to strengthen the evaluation competences and capacity of the university by empowering the Directorate of Planning and Development to coordinate and harmonise all evaluations and to follow up on the utilisation of their results.

Introduction

The world is experiencing an increasing demand for effective utilisation of evaluation results (Porter & Goldman 2013), and public universities in different contexts are no exception. Evaluation findings in public universities would go a long way in improving their performance levels and help to promote administrative accountability, as reported by Matsiliza (2012). Hardlife and Zhou (2013) suggest that the utilisation of evaluation results signifies a gradual shift from the traditional implementation-based approach to the contemporary results-based approach, an expectation that has in no small measure reached universities. The contemporary development process acknowledges the importance of evidence in decision-making and of data as an indispensable element of the sustainable development agenda (United Nations 2015). Unfortunately, utilisation of evaluation results remains limited, which seems to be a consequence of the setting of the organisation not matching the purpose of the evaluation, as demonstrated by Widmer and Neuenschwander (2004).

Kusek and Rist (2004) and OECD (2002) define evaluation as the systematic and objective assessment of an ongoing or completed intervention to determine its relevance, efficiency, effectiveness, impact and sustainability. They argue that an evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of the commissioners of evaluations. In agreement with Scriven (1996), the authors remind us that while evaluation is still a very young field of study, often claimed by several other disciplines such as philosophy, political science, psychology, sociology, anthropology, education, economics, communication, public administration, information technology, statistics and measurement, it is an old practice. Attempting to trace its roots as a profession raises a 'chicken-and-egg dilemma', because there is hardly any settled account of what came first (Basheka & Byamugisha 2015).

Ledermann (2012) equally reminds his audience that the question of whether or not evaluations are used is as old as the evaluation business itself, which confirms that evaluation is an old practice but a new discipline, as scholarship on the subject is relatively recent. Patton (2002) traces the history of evaluation and its utilisation to biblical times, citing the 10-day dietary trial in which Daniel, Hananiah, Mishael and Azariah, fed on vegetables and water, emerged healthier and better nourished, and the findings were put to practical use (Dan 1:1–20). Højlund (2014) defends the utilisation of evaluations from a rationalist perspective and notes that evaluation itself was born in a time of belief in a better world achieved through rational interventions and social engineering. His argument is that evaluation is inherently rationalist, causal and evolutionary in nature. The same view is shared by Cousins, Goh and Clark (2004), whose definition of evaluation as a systematic inquiry leading to judgements about a project, programme, policy or organisation's merit, worth and significance, and to support for programme (or organisational) decision-making, has important implications for evaluation.

Sandison, Vaux and Mitchell (2006) aver that the utilisation of evaluation has been a topic of lively debate in the development of the public sector since the 1970s. They demonstrate that since the 1970s, the neat and linear connection between evaluation findings and policy or programme improvement has been increasingly challenged. The rest of the article covers the theoretical and conceptual landscape, the problem statement and research questions, methodology, findings from the study, conclusions and policy implications. The theoretical and conceptual framework is addressed first.

Theoretical and conceptual framework

This study was underpinned by the utilisation-focused theoretical framework proposed by Patton (2008), who argued that utilisation-focused evaluation (UFE) begins with the premise that evaluations should be judged by their utility and actual use. UFE advocates close collaboration between evaluators and intended evaluation users. Patton's argument stems from the fact that close collaboration enhances understanding and buy-in by the intended users of evaluations, which results in increased commitment to use evaluation findings. It further allows stakeholders to improve the quality of their evaluation processes through joint planning, implementation, monitoring and self-evaluation (Widmer & Neuenschwander 2004). King and Stevahn (2013) assert that interactions with stakeholders make or break any evaluation process. Their argument is that the utilisation of an evaluation hinges on the process by which it is conducted. They add that effective interactions amongst the stakeholders of the evaluation result in a more effective evaluation process.

Smits and Champagne (2008) argue for practical participatory evaluation as a supplementary approach to evidence-based decision-making and accountability in policymaking. This in turn contributes to learning amongst the stakeholders, with the intention of reinforcing understanding, ownership of results and a sense of obligation to follow through on the results. Their argument builds on Turnbull (1999), who concludes that participative processes require stakeholders to be involved in evaluation decision-making early enough and to share joint responsibility for the evaluation report with the evaluator. In light of the current study, the UFE theory requires that the institutional design, in terms of procedural rules, evaluation processes and evaluation capacity, is adequate to support the utilisation of evaluation results, as in the conceptual framework.

Utilisation of evaluation is the use of the findings of an evaluation as well as the implementation of the recommendations of the evaluation. Johnson, Greenseid and Toal (2009) explain that evaluation use is ‘any application of evaluation processes, products, or findings to produce an effect’. Evaluation utilisation demonstrates the consequence of evaluation studies. It answers the question, ‘So what after presenting the findings of an evaluation?’. It therefore underscores the linkage between evaluation and policy. This is because the aim of evaluation is to assist people and public organisations to improve their plans, policies and practices on behalf of citizens (Weiss 1999). Utilisation of evaluation results also ensures sustainability (Schaumburg-Müller 1996). In this study, utilisation is assessed in terms of its five strands of instrumental, conceptual, process-related, symbolic and general utilisation (Balthasar 2008).

Instrumental utilisation of an evaluation is the implementation of its recommendations. This is the intended, targeted and direct use of an evaluation by the decision-makers in the intervention. According to Rich (1991), instrumental utilisation refers to 'utilization that can be documented', while Mayne (1994) regards it as the implementation of evaluation results and recommendations, and Vedung (1997) describes it as using evaluations as means in goal-directed problem-solving processes. Conceptual utilisation, by contrast, is the change in opinions, attitudes or ideas regarding certain aspects of the evaluated programme as a consequence of an evaluation (Balthasar 2009). Vedung (1997) shows that conceptual utilisation occurs when cognitive, affective and normative insights are gained through evaluations. In the same way, Weiss (1977) observes it as an ongoing sedimentation of perceptions, theories, concepts, ways of looking at the world and enlightenment. Conceptual utilisation as presented by Rossi, Lipsey and Freemen (2004) is the utilisation of evaluation findings to enhance knowledge about the type of intervention under study, with the intention of influencing thinking about issues in a general way.

Process-related utilisation as described by Patton (1997) is that which results in the sharing of the problem under investigation and develops strong networks for the commissioners of the evaluations. The same route is taken by Henry and Mark (2003), who explain it as the action or learning that takes place as a result of evaluation findings or as a result of participation in evaluation procedures. Symbolic utilisation occurs when decision-makers use evaluations to confirm their perspective and to obtain legitimation for themselves (Henry & Rog 1998). Henry and Mark (2003) conclude that it is the use of evaluation to claim a rational basis for action, or inaction, or to justify pre-existing positions. Moleko (2011) identifies symbolic utilisation of evaluation results where evaluation becomes an instrument of political manoeuvring akin to the pork-barrel approach. From this perspective, evaluations are used as a justification for what decision-makers are already interested in doing. Relatedly, Patton (2008) regards symbolic utilisation as the token use made of an evaluation result to fulfil a requirement to do evaluation or to show support for an intervention area. A combination of these four types of evaluation utilisation gives general utilisation; in this study, general utilisation captured the overall benefit derived from utilisation.

Evaluation is strongly dependent on its social and organisational context (Dahler-Larsen 2012). This shows that the extent to which evaluation results are utilised is linked to the institutional context. In this regard, the researchers' choice of institutional design is supported by the empirical studies of Balthasar (2006, 2008) and Højlund (2014), which suggest the use of institutional design to explain the utilisation of evaluation findings. Their empirical contribution motivated the researchers to study the institutional explanation for the utilisation of evaluation results. Other studies in the field of evaluation utilisation have dwelt on environment- and process-related factors (Cousins & Leithwood 1986). For example, Lester and Wilds (1990) discuss contextual variables such as the nature of the political environment where policy analysis occurs, the nature of the problem, issue salience and bureaucratic variables, user characteristics, clear definition of objectives by the decision-maker, decision-maker interest, decision-maker style and decision-maker participation, whereas Bayley (2008) presents the characteristics of the evaluation as factors that influence utilisation of evaluation results.

Mackay (2006) uses institutionalisation to describe the creation of a monitoring and evaluation (M&E) system that produces monitoring information and evaluation findings which are judged valuable by key stakeholders, which are used in the pursuit of good governance and for which there is sufficient demand to ensure the M&E function's funding and sustainability for the foreseeable future. Mackay (2006) and Kusek and Rist (2004) demonstrate that the utilisation of results should be embedded within the operational framework of public sector organisations. Kusek and Rist advise that the success of any results-based M&E system depends on how lessons learned are incorporated into the decision-making process of the institution. This requires sustaining the M&E system within the organisation, which involves demand for accountability, clear roles and responsibilities, trustworthy and credible information, accountability, capacity and incentives. Mackay (2006) adds that while public organisations in African countries collect a range of performance information, it is hardly utilised because the quality of the data is often poor. Therefore, to enhance the utilisation of evaluation results, Mackay advises that public sector organisations should build reliable data systems that support the M&E function. Moreover, Dhakal (2014) concludes that institutionalisation of evidence-based policymaking, planning and decision-making practices is the panacea for timely demand for and use of evaluations in the government sector.

Problem statement and research questions

The strength of an evaluation is measured by the extent to which its findings and recommendations are utilised (Patton 1997). Utilisation of evaluations has been appreciated by numerous scholars in the field (Patton 1997; Rebora & Turri 2011; Widmer & Neuenschwander 2004). The extent to which evaluations are utilised has been associated with the design of the institutions for which and in which they are carried out. On this subject, Balthasar (2006, 2009; 2007, cited in Ledermann 2012) has systematically presented the effect of institutional design on the utilisation of evaluations. In the case of Kyambogo University (KYU), notwithstanding the numerous evaluations that have been carried out, available evidence indicates that the level of utilisation of evaluation results is still weak. This has resulted in perpetual low performance levels, as indicated by the frequent strikes by both students and staff (GOU 2015a). This low utilisation level has been blamed on the institutional design. Therefore, through this study, the researchers intended to build on the work of Balthasar (2006, 2009; 2007, cited in Ledermann 2012) to examine the effect of the institution's procedural rules, processes and capacities on the utilisation of evaluations at KYU.

The following explicit research questions were explored:

  • How are the institutional procedural rules related to the utilisation of evaluations in Uganda’s public universities?
  • What is the effect of institutional evaluation processes on the utilisation of evaluations in Uganda’s public universities?
  • How does the institutional capacity affect the utilisation of evaluations in Uganda’s public universities?

Methodology

This study was conducted through a cross-sectional survey design. A cross-sectional survey design is a present-oriented methodology that is used to investigate populations by selecting samples to analyse and discover occurrences (Oso & Onen 2009). It was used to study a group of people just once, in a single session, focusing on the institutional design and utilisation of evaluations at KYU. Surveys are designed to provide a picture of how things are at a specific time. The cross-sectional survey design was adopted because it helps the researcher gather data from a sample of a wider population at a particular time (Amin 2005) and use such data to make inferences about the wider population.

A sample of 118 respondents was selected using simple random sampling and purposive sampling. Simple random sampling selects a sample without bias from the target or accessible population (Oso & Onen 2009). The sampling frame was a list of academic staff obtained from the Directorate of Human Resources, which was used to determine the respondents. The study also used purposive sampling to select administrative staff from the Directorate of Planning and Development (DPD), which houses the M&E function, because of their direct knowledge of the subject matter. Purposive sampling enables the researcher to decide whom to include in the sample based on their typicality (Oso & Onen 2009).
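For illustration, the sketch below shows how a simple random sample of respondents might be drawn from a sampling frame such as a staff list. It is a minimal sketch in Python under stated assumptions: the frame size, sample size and staff identifiers are hypothetical and are not drawn from the study.

```python
import random

def draw_simple_random_sample(sampling_frame, sample_size, seed=None):
    """Draw a simple random sample (without replacement) from a sampling frame."""
    rng = random.Random(seed)
    return rng.sample(sampling_frame, sample_size)

# Hypothetical sampling frame: a list of academic staff identifiers.
academic_staff = [f"staff_{i:03d}" for i in range(1, 501)]

# Draw a sample of the kind the study describes (size is illustrative only).
respondents = draw_simple_random_sample(academic_staff, sample_size=100, seed=42)
print(len(respondents), respondents[:5])
```

Fixing the seed simply makes the illustrative draw reproducible; in practice each member of the frame has an equal chance of selection, which is the defining property of simple random sampling.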

The study used a questionnaire survey for quantitative data as well as key informant interviews and document analysis for qualitative data. The questionnaire was self-administered to respondents and covered the dimensions and indicators of institutional design as well as those of utilisation of evaluation results. Closed-ended questions used a five-category Likert response continuum: strongly agree, agree, not sure, disagree and strongly disagree. Questionnaires are often a one-time data-gathering device on the variables of interest to the researcher (Amin 2005). Key informant interviews, conducted with selected staff of the DPD, were used to explore which elements of institutional design affect the utilisation of evaluations at KYU and how. The interview method was used because it provides an excellent opportunity to take note of issues that cannot be directly observed or are difficult to put down in writing, thus capturing the meanings beyond the words (Oso & Onen 2009).

In analysing the data relating to all the research questions, the researchers used the Statistical Package for the Social Sciences (SPSS), in which data were entered, edited, cleaned and sorted. This programme was used for univariate analysis, through which the study described the demographic attributes of respondents as well as the attributes of utilisation of evaluation results and of institutional evaluation procedural rules, processes and capacity. Univariate analysis of these variables yielded descriptive statistics in the form of means, frequencies and percentages. To establish the relationships amongst the study variables, bivariate analysis was carried out using Spearman's rank order correlation (Amin 2005). The correlation coefficient (r) takes a value between -1 and 1, with 1 indicating a perfect positive association and -1 a perfect negative association. A positive correlation shows a positive association between the variables (increasing values in one variable correspond to increasing values in the other), whereas a negative correlation shows a negative association (increasing values in one variable correspond to decreasing values in the other). A value close to 0 shows no association between the variables (Amin 2005).

The quantitative data were collated with the qualitative data, and responses from respondents were categorised into common responses relating to the objectives of the study, as advised by Amin (2005). The qualitative data obtained from interviews and various documents were analysed by content analysis (Kothari 2004), which helped to corroborate the data obtained through the questionnaires. Each transcript and document was read thoroughly to identify the themes to which its content belonged, and the material was then presented, interpreted and analysed.
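As a sketch of the bivariate step, the code below computes a Spearman's rank order correlation coefficient and its p-value using SciPy for two hypothetical Likert-scored variables; the variable names and values are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical Likert-coded responses (1 = strongly agree ... 5 = strongly disagree).
procedural_rules = np.array([1, 2, 2, 3, 1, 2, 4, 2, 3, 1, 2, 2])
utilisation = np.array([2, 2, 3, 3, 1, 2, 5, 2, 4, 1, 3, 2])

# Spearman's rank order correlation: r lies between -1 and 1, and the p-value
# tests the null hypothesis of no monotonic association between the ranks.
r, p_value = spearmanr(procedural_rules, utilisation)
print(f"Spearman r = {r:.3f}, p = {p_value:.3f}")
```

Because Spearman's coefficient is computed on ranks, it is suitable for ordinal Likert-type data such as the responses described above.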

Findings

The background characteristics of the respondents included sex, age, faculty or school and academic qualification. The total number of respondents was 118, sampled across the seven faculties or schools that make up the university. Of these, 66.95% were male and 33.05% were female. A total of 30.5% of the respondents were from the Faculty of Arts and Social Sciences, 18.6% from the School of Management and Entrepreneurship, and the Faculties of Education and Special Needs each contributed 13.6%, while the Faculties of Vocational Studies, Sciences and Engineering contributed 9.3%, 7.6% and 6.8% of the respondents, respectively. A total of 65.3% had a Master's Degree, 14.4% had a Bachelor's Degree, 11.0% had a Postgraduate Diploma and 9.3% had a PhD. In terms of age distribution, 44.1% were aged between 18 and 35 years, 28.8% between 36 and 45 years, 24.6% between 46 and 59 years and 2.5% were aged 60 years and above. These characteristics are a true reflection of KYU because, for instance, the Faculty of Arts and Social Sciences is the biggest faculty at the university. This implies that participation and inclusiveness of the staff who constitute the majority are critical for buy-in and ownership and hence utilisation of the evaluation results.

In terms of academic qualifications, this shows that the university's staff are largely young academics holding the lower rank of assistant lecturer. This is because the minimum requirement for appointment as a lecturer at KYU is possession of a Master's Degree and evidence of enrolment for a PhD (GOU 2015a). This is a good minimum requirement but does not cut across all Uganda's public universities; it is one of the highest requirements in the country because, in some other public universities in Uganda, people with only a Master's Degree are recruited at lecturer level. This signifies a need for harmonisation of minimum requirements for academic positions. Low academic qualifications were earlier noted by Kyambogo University (2013), which showed the proportion of PhD holders among the university's academic staff to be below NCHE standards and blamed the situation on the ban on recruitment from 2011 to 2012. In addition, Baryamureeba (2015) observed that, in the Ugandan context, PhD training is still underdeveloped.

Institutional procedural rules and utilisation of evaluation results

Institutional procedural rules were studied in terms of rules pertaining to the assumption of costs, stakeholders' involvement, and the planning and implementation of the evaluations and of the evaluation results.

Documentary review shows that rules in public universities are informed by the Universities and Other Tertiary Institutions Act, 2001, as amended in 2003 and 2006 (GOU 2006). This Act formally establishes the universities and guides the operations of the institutions. It also establishes and develops a system governing institutions of higher education. Other laws governing the country also affect the working of public universities, such as the Public Finance Management Act, which relates to the assumption of costs in evaluations (GOU 2015b). The Act has been operationalised through policies and resolutions that are consistent with the parent law. For example, numerous policies and manuals have been passed by the University Council to ensure the smooth running of the university. As a case in point, KYU now has a Human Resource Manual, Financial Management Policy, Research and Innovations Policy, ICT Policy, Records Information Management Policy, Quality Assurance Policy, Disability and Gender Mainstreaming Policies (GOU 2015a) and Strategic and Master plans.

Recent notable cases regarding procedural rules and the utilisation of results include the following. The first was the failure of the search committee for the vice chancellor owing to bending of the rules (Tumusiime 2016). Another concerned the recruitment of teaching assistants or graduate fellows, the most recent instance being a public advert in the Daily Monitor of 11 March 2016 calling for applicants for the position of graduate fellow. In this advert, the Appointments Board required applicants to have graduated within the most recent five years (a requirement outside the Human Resources Manual of the University). This was contested, and the university had to re-advertise the position on 25 July 2016 (http://kyu.ac.ug/downloads/KYU%20-%20GRADUATE%20FELLOW%20AD%20-%20JULY%202016.pdf).

The survey descriptive results showed that respondents ranked rules pertaining to the assumption of costs above rules guiding stakeholders' involvement and the planning and implementation of the evaluations and their recommendations (on the five-point scale used, lower means indicate stronger agreement). At a mean of 1.8983, 44.1% of the respondents strongly agreed that the university has rules governing evaluation costs, compared with 30.5% who agreed. With a mean of 2.0522, 52.5% of the respondents agreed that the rules require effective participation of stakeholders throughout the evaluation, while 23.7% strongly agreed; with a mean of 2.0593, 55.1% agreed that the university has rules that guide the implementation of the recommendations from the evaluations, while 21.2% strongly agreed. These results imply that the respondents appreciated the existence of rules guiding evaluations as well as the utilisation of evaluation results. It is worth noting that the respondents perceived these procedural rules to vary in importance, with the rules regarding the assumption of costs, at a mean of 1.89, ranked above all the other procedural rules considered in this study.
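A minimal sketch of how such item-level descriptives can be produced (item means and the percentage of respondents per response category, with lower means indicating stronger agreement on this 1 to 5 coding) is given below using pandas; the item names and responses are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical responses coded 1 = strongly agree ... 5 = strongly disagree.
responses = pd.DataFrame({
    "rules_on_costs": [1, 2, 1, 3, 2, 1, 2, 4, 1, 2],
    "stakeholder_involvement": [2, 2, 3, 2, 1, 2, 3, 2, 4, 2],
})

# Item means (lower values indicate stronger agreement under this coding).
print(responses.mean().round(4))

# Percentage of respondents choosing each response category for each item.
for item in responses.columns:
    pct = responses[item].value_counts(normalize=True).sort_index() * 100
    print(f"\n{item}:\n{pct.round(1)}")
```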

The survey results showed a Spearman’s rank order correlation, r = 0.459 with P = 0.000, revealing that there was a low, positive correlation between procedural rules and utilisation of evaluation results, which was statistically significant.

The above is buttressed by interview findings, which confirmed that the university has rules of procedure that eventually affect the evaluations and hence the utilisation of the evaluation results. One key informant remarked:

‘What we do is guided by the Kyambogo University budget which is approved by the parliament of the republic of Uganda and the work plans as well as the procurement plans. Some things may automatically be non-priorities and therefore unfunded in the budget. Such things cannot be implemented since implementing them may result in financial impropriety.’ (Participant 1, male, Public Administration)

The observation implies that KYU has rules and frameworks that guide operations at the university. These range from guidelines and instructions from Senate and top management, to internal policies and manuals passed by the University Council, and laws passed by the Parliament of the Republic of Uganda.

Institutional evaluation processes and utilisation of evaluation results

A review of available documents showed that, although the university has a number of existing rules of procedure as presented earlier in this article, many ad hoc tendencies are exhibited. Many committees work on an ad hoc basis; a case in point is the Ad hoc Committee of Senate Investigating Irregular Admissions of Students at KYU, formed at the 7th session of the 48th meeting held on Tuesday 12 June 2012. Another was the ad hoc committee set up by the university on 22 August 2012 to investigate the staff associations' allegations, including mismanagement of the university by the vice chancellor at the time. Others include the ad hoc committee on envisioning KYU colleges 2015–2030 and the ad hoc committee on capital development for the mid-term (2009/2010–2011/2012).

Evaluation processes were studied in terms of the triggering of evaluations, allocation and publication practices. The triggering of evaluations was studied in terms of who triggers evaluations at the university and whether they are planned or done on an ad hoc basis. The survey descriptive results showed that respondents ranked the triggering of evaluations above the other institutional processes. At a mean of 2.2393, 36.4% of the respondents strongly agreed that the university triggers evaluations on an ad hoc basis, while 22% agreed. On average, 50.8% of all respondents believed that the evaluators trigger the evaluations at the university (mean = 2.3403), compared with 12.7% who strongly agreed. On average, 24.6% of the respondents agreed and 18.6% strongly agreed that evaluators are selected through a competitive process at the university (mean = 2.735). At a mean of 2.8376, 34.7% and 11% of all respondents agreed and strongly agreed, respectively, that the results of the evaluations are disseminated to all stakeholders in the university, whereas 26.3% and 21.1% of the respondents strongly agreed and agreed, respectively, that, as part of the evaluation process, dissemination is frequently done in a timely manner.

Spearman’s rank order correlation showed that r = 0.486 with P = 0.000. This implies that there was a weak, positive and statistically significant correlation between institutional evaluation processes and utilisation of evaluation results at KYU.

Institutional capacity and utilisation of evaluation results

Institutional capacity was studied in terms of a unit responsible for evaluations at the university, the competencies of the staff (individual capability to manage evaluations), a culture of benefiting from evaluation evidence, and the adequacy of staff numbers in the responsible unit. Document review, for instance DPD (2014), shows that the unit responsible for evaluations is grappling with capacity challenges in coordinating all the M&E mandates of university programmes and projects. For instance, it has only one officer (the Senior Planning Officer) in charge of M&E, who concentrates only on financial M&E, yet the core function of the university is academics. This may explain why course units are not harmonised across departments: courses such as project planning and management, marketing or entrepreneurship are offered at the same academic level (e.g. Bachelor's) in different departments or faculties with different course content and hence different examinations.

The survey descriptive statistics showed that respondents to the survey questionnaire ranked the existence of a unit responsible for evaluation above the other indicators of institutional capacity. At a mean of 2.2, 42.4% of the respondents agreed, while 25.4% strongly agreed, that the university has a unit responsible for evaluations. However, 26.3% agreed and 10.2% strongly agreed that the unit has an adequate number of staff to manage evaluations at the university (mean = 2.9). Relatedly, 28.8% agreed, while 16.1% of all the respondents strongly agreed, that the staff have adequate competencies to manage the evaluations at the university (mean = 2.63). In addition, 26.3% of the respondents agreed that the university has a culture of benefiting from evaluation evidence, while 20.3% strongly agreed (mean = 2.68). On average, 34.7% agreed and 22.0% strongly agreed that they were involved in evaluations at the university (mean = 2.9), while 50% and 32.2% of all the respondents agreed and strongly agreed, respectively, that they had adequate capacity or capability to manage evaluations (mean = 2.06). The results imply that most of the respondents perceived themselves to possess adequate capacity to manage evaluations, that there was a unit responsible for evaluations and that the staff had sufficient competences to manage evaluations at the university, with means of 2.06, 2.20 and 2.63, respectively.

The Spearman’s rank order correlation results (r = 0.765, p = 0.000) revealed that there was a strong, positive correlation between institutional capacity and utilisation of evaluation results, which was statistically significant.

From interviews, it was clear that KYU has the capacity to manage evaluations. A key informant remarked:

‘We have the evaluation competencies in various fields of speciality. Our staff members have the competences to manage the evaluations. Some of them even consult to government ministries, department and agencies as well as other organisations that seek their expertise.’ (Participant 2, male, Economist).

This shows that individual members agree that they possess the capacity, as well as appropriate guidelines, to direct the processes of M&E in the university.

Discussion of findings

The results show that procedural rules have a positive but only moderate effect on the utilisation of evaluation results. The positive relation between procedural rules and utilisation of evaluation results is traced to the availability of laws and frameworks, in the form of policies and manuals, to guide the utilisation processes. The relation is only moderate because the legal framework, for instance the Universities and Other Tertiary Institutions Act (UTIA), is ineffective (Baryamureeba 2015). GOU (2015a) observed that respondents interviewed by the IGG investigation team were unanimously of the view that the current composition of the University Council is unmanageable. The 27 (minimum) University Council members are diverse, including the following: (1) representatives of a sector relevant to the university depending on its objectives and mission, appointed by the relevant body of that sector; (2) a representative of the Ministry of Education; (3) three appointees of the Minister of Education from the public; (4) three representatives of the administrative staff associations; (5) two representatives of the students' association; (6) three members appointed by the University Council from the public; (7) a representative of the Ministry of Finance; (8) a representative from the Ministry in charge of higher education; and (9) a representative of the District Council in whose jurisdiction the university is found, which for KYU is the Kampala Capital City Authority (KCCA). Such a composition has always resulted in University Councils being dominated by members of staff of the university, who account for at least 16 of the total membership. This makes it very difficult to take firm policy decisions towards streamlining the administration of the university.

The results are in agreement with Firme et al. (2009), who note that a set of guidelines establishes rules and procedures for properly conducting the planning, implementation and effective utilisation of evaluation results at all levels of possible implementation.

Regarding the triggering of evaluations, it is evident that who triggers an evaluation matters a great deal. Mayne, Divorski and Lemaire (1998) argue that once evaluations are triggered by those responsible for implementing the measures, difficulties arise in asking questions about the effect and relevance of the measures and programmes. This is because diverse forms of institutionalisation differ in their ability to deal with the varying information requirements of the target groups. Relatedly, Balthasar (2008) posits that the triggering of an evaluation by the unit responsible for the measures, or the implementation of the examination within the office, promotes process-related utilisation. Williams, De Laat and Stern (2002), on the other hand, aver that independent evaluations need to be carried out by people who are not involved in the implementation of a measure, contrary to Conley-Tyler (2005), who argues that internal and external evaluators can both be independent depending on the evaluation role they choose. In this regard, Schaumburg-Müller (1996) shows that the establishment of a unit responsible for the evaluation function in an institution is an important indicator of demand for evaluation and its utilisation; he cites Colombia, where evaluations are based on legislation or the constitution. Interestingly, Højlund (2014) notes that an organisation with a culture of evaluation and measurement is likely to have a culture that supports its desire to use knowledge instrumentally.

The results show that institutional evaluation processes have a positive, moderate and statistically significant effect on the utilisation of evaluation results. The moderate strength could be a result of the bureaucratic tendencies of public administration, with its vertical administrative structure. The significant effect shows that the process through which an evaluation is carried out is very important in explaining whether the results will be implemented. Particularly critical is the participation of stakeholders, which informs ownership of results. The process of doing the evaluations needs to be participatory and consultative so that the input of stakeholders is sourced and, where possible, considered (Kyambogo University 2013). Short of that, results are referred to as the evaluators', which increases the distance between the evaluators and the evaluees (Balthasar 2008).

The results show that institutional evaluation capacity has a positive and statistically significant effect on the utilisation of evaluation results. Evaluation capacity enhances the ability of the organisation to carry out good evaluations and hence to utilise the results. This is in line with Conley-Tyler's (2005) finding that building staff capacity may be a strong factor in some cases, but may make no sense for an organisation that is only going to conduct one evaluation in a very long time, say a decade. In the same vein, Léautier (2012) notes that both the capacity to conduct evaluations and the capacity to use them are critical.

Conclusions and policy implications

The current institutional procedural rules are significant and moderately positive in explaining the utilisation of evaluation results at KYU. This is because the existing legal frameworks, notably the law that establishes and guides the governance of public universities, are marred by issues that complicate the functioning of these universities. The positive sign nevertheless signifies that once the rules are improved, they guide the planning, the costs incurred in the evaluations and the implementation of the recommendations from the evaluations. Amongst the procedural rules, those ranked most highly by respondents are the rules that pertain to the assumption of costs, followed by those on the participation and involvement of stakeholders. Therefore, the cost implications need to be clear and the stakeholders need to participate actively in the evaluations so that they own the results and support the utilisation of the findings in improving the performance of public universities in Uganda.

The current institutional evaluation process has a positive, moderate and significant relation with the utilisation of evaluation results at KYU. This implies that when an evaluation is carried out through a good process, the results will be sound and acceptable and therefore more likely to be utilised. Amongst the processes for evaluation, it is critical that the evaluators are selected on merit through a competitive process, as this is deemed to increase confidence in the evaluation results. At Kyambogo, it was evident that many evaluations are commissioned on an ad hoc basis, but what is clear is that membership is on merit and the evaluators try as much as possible to consult widely prior to the generation of the final report to the commissioners of the evaluation. The null hypothesis relating to the institutional evaluation process is accepted.

Institutional evaluation capacity was found to be highly related to the utilisation of evaluation results at KYU. Amongst the indicators of institutional evaluation capacity, the competences of individuals to manage evaluations and the existence of a unit responsible for evaluations were ranked above all others. The individual respondents to the study questionnaire believed in their own competences to manage evaluations, which is important for confidence; doubt in their own skills and abilities would undermine the management of evaluations. The unit responsible for evaluations helps to coordinate and harmonise evaluation issues in the university, especially in the field of academic evaluation, so that quality assurance is enhanced in the higher education sub-sector. It is therefore paramount to strengthen the institutional evaluation capacity so that good evaluations are commissioned and overseen and the results are utilised. This is fundamental so that the utilisation of evaluation results is not left to the goodwill of individuals at the university.

The present study provides a useful departure point for higher-education technocrats and public universities' managers in Uganda to examine their policies and practices so that the results of evaluations are utilised:

  • There is a need to strengthen the unit responsible for M&E in the Directorate of Planning and Development at the university. This is expected to provide the much-needed assurance on the quality of services provided at the university. In addition, it will enhance the harmonisation of the university as one organisation such that, for example, all students who take a course unit are taught from a uniform course outline, write the same exam, are marked using one marking guide and are graded on a uniform scale.
  • The rules that govern the public universities need to be re-examined to suit the best practices of corporate governance. The need for inclusiveness and participation should not take precedence over the cardinal principle of who supervises whom. Therefore, the composition of stakeholders that constitute the university's top organs needs to be re-examined to support the utilisation of evaluation results. Clear attendant policies and rules would undoubtedly spell out the procedures for evaluations in the university so that utilisation is not grounded on the goodwill of individuals but rather on policy and procedure.

Acknowledgements

This work interrogates institutional design and its dimensions including the institutional evaluation procedural rules, process and capacity and establishes how each of these contributes to the utilisation of evaluation results in Uganda’s public universities by taking Kyambogo University as a case study. In doing so, it builds on the work of Balthasar (various years) to introduce the African and Ugandan perspective regarding institutional design and utilisation of evaluation results.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

J.K. did the conception, data collection, analysis and writing of the article. B.C.B. supervised the process and revised the article to make it scholarly.

References

Amin, M.E., 2005, Social science research: Conception, methodology and analysis, Makerere University Printery, Kampala, Uganda.

Balthasar, A., 2006, ‘The effects of institutional design on the utilization of evaluation: Evidenced using Qualitative Comparative Analysis (QCA)’, Evaluation 12(3), 354–372.

Balthasar, A., 2008, ‘Institutional design and utilization of evaluation: A contribution to a theory of evaluation influence based on Swiss experience’, Evaluation Review, viewed 22 November 2015, from http://erx.sagepub.com/content/early/2008/10/14/0193841X08322068

Balthasar, A., 2009, ‘Institutional design and utilization of evaluation: A contribution to a theory of evaluation influence based on Swiss experience’, Evaluation Review 33(3), 226–256.

Baryamureeba, V., 2015, University education needs an overhaul, viewed 23 March 2016, from http://utamu.ac.ug/newss/775-university-education-needs-an-overhaul

Basheka, B.C. & Byamugisha, A., 2015, ‘The state of monitoring and evaluation (M&E) as a discipline in Africa. From infancy to childhood?’, African Journal of Public Affairs 8(3), viewed 22 November 2015, from http://www.clear-aa.co.za/wp-content/uploads

Bayley, J.S., 2008, Maximizing the use of evaluation findings, ADB Based on Penney, T.’s Draft Research Proposal for PhD Candidature, viewed 2 December 2015, from www.adb.org/sites/default/files/evaluation

Conley-Tyler, M., 2005, ‘A fundamental choice: Internal or external evaluation?’, Evaluation Journal of Australia 4(1&2), 3–11.

Cousins, J.B., Goh, S.C. & Clark, S., 2004, ‘Integrating evaluative inquiry into the organisational culture: A review and synthesis of the knowledge base’, The Canadian Journal of Program Evaluation 19, 99–141.

Cousins, J.B. & Leithwood, K.A., 1986, ‘Current empirical research on evaluation utilization’, Review of Educational Research 56(3), 331–364. https://doi.org/10.3102/00346543056003331

Dahler-Larsen, P., 2012, The evaluation society, Stanford University Press, Stanford, CA.

Dhakal, T.R., 2014, ‘Institutionalization and use of evaluations in the public sector in Nepal’, Journal of MultiDisciplinary Evaluation 10(23), 51–58.

DPD, 2014, Status report for Directorate of Planning and Development, Kyambogo University, viewed 6 February 2016, from http://www.kyu.ac.ug/index.php/administration/directorates/planning

Firme, T.P., Letichevsky, A.C., Dannemann, Â.C. & Stone, V., 2009, ‘Evaluation culture and evaluation policy as guides to practice: Reflections on the Brazilian experience’, Ensaio: Avaliação e Políticas Públicas em Educação 17(62), 169–180.

GOU, 2006, The Universities and other Tertiary Institutions (Amendment) Act, 2006, viewed 20 October 2016, from http://www.ulii.org/ug/legislation/act/2006/2006/universities%20and%20other%20tertiary%20institutions%20ammendment%20Act%202006.pdf

GOU, 2015a, Report on investigations into mismanagement and corruption at Kyambogo University, The Inspectorate of Government, V (Governance), viewed 8 December 2015, from https://www.igg.go.ug/static/files/publications/IG_Governance_Report_on_Kyambogo_2015_2.pdf

GOU, 2015b, The Public Finance Management Act, viewed 4 October 2016, from http://www.ugandainvest.go.ug/wp-content/uploads/2016/02/Uganda_Public_Finance_Management_Act_2015_3.pdf

Hardlife, Z. & Zhou, G., 2013, ‘Utilisation of monitoring and evaluation systems by development agencies: The case of the UNDP in Zimbabwe’, American International Journal of Contemporary Research 3(3), 70–83.

Henry, G.T. & Mark, M.M., 2003, ‘Beyond use: Understanding evaluation’s influence on attitudes and actions’, American Journal of Evaluation 24(3), 293–314.

Henry, G.T. & Rog, D.J., 1998, ‘A realist theory and analysis of utilization’, in G.T. Henry, G. Julnes & M.M. Mark (eds.), Realist evaluation: An emerging theory in support of practice, pp. 89–102, Jossey-Bass, San Francisco, CA.

Højlund, S., 2014, ‘Evaluation use in the organizational context: Changing focus to improve theory’, Evaluation 20(1), 26–43, viewed 1 December 2015, from evi.sagepub.com

Johnson, K., Greenseid, L.O. & Toal S.A., 2009, ‘Research on evaluation use: A review of the empirical literature from 1986 to 2005’, American Journal of Evaluation 30, 377–410. https://doi.org/10.1177/1098214009341660

King, J.A. & Stevahn, L., 2013, Interactive evaluation practice. Mastering the interpersonal dynamics of program evaluation, Sage Publications, Los Angeles, CA.

Kothari, C.R., 2004, Research methodology methods and techniques, 2nd rev. edn., New Age International (P) Ltd Publishers, New Delhi.

Kusek, J.Z. & Rist R.C., 2004, Ten steps to a results-based monitoring and evaluation system. A handbook for development practitioners, The World Bank, Washington, DC.

Kyambogo University, 2013, Strategic plan 2012/2013–2022/2023, Kampala, Uganda, viewed 4 December 2016, from http://www.kyu.ac.ug/index.php/policies/strategic-plan

Léautier, F.A., 2012, Building evaluation capacity in Africa: Strategies and challenges. The African Capacity Building Foundation (ACBF), AFDB Evaluation Week on the theme: Evaluation for Development, December 3–6, Tunis, Tunisia, viewed 13 September 2016 from http://docslide.net/documents/building-evaluation-capacity-in-africa-strategies-and-challenges-by-dr-frannie-a-leautier-executive-secretary-the-african-capacity-building-foundation.html#

Ledermann, S., 2012, ‘Exploring the necessary conditions for evaluation use in program change’, American Journal of Evaluation 33(2), 159–178. https://doi.org/10.1177/1098214011411573

Lester, J.P. & Wilds, L.J., 1990, ‘The utilization of public policy analysis: A conceptual framework’, Evaluation and Program Planning 13, 313–319. https://doi.org/10.1016/0149-7189(90)90062-2

Mackay, K., 2006, Institutionalization of monitoring and evaluation systems to improve public sector management, Independent Evaluation Group & The Thematic Group For Poverty Analysis, Monitoring and Impact Evaluation, ECD Working Paper Series 15, The World Bank, Washington, DC.

Matsiliza, N., 2012, ‘Participatory M&E: Reviewing an inclusive approach in the South Africa’s Government wide M & E’, Africa Public Service Delivery and Performance Review 1(2), 67–83.

Mayne, J., 1994, ‘Utilizing evaluation in organizations’, in F.L. Leeuw, R. Rist & R.C. Sonnichsen (eds.), Can governments learn?, pp. 17–43, OECD’s, New Brunswick, NJ.

Mayne, J., Divorski, S. & Lemaire, D., 1998, ‘Locating evaluation: Anchoring evaluation in the executive or the legislature, or both or elsewhere?’, in R. Boyle & D. Lemaire (eds.), Building effective evaluation capacity: Lessons from practice, pp. 23–52, Transaction Publishers, New Brunswick, NJ.

Moleko, M.P., 2011, ‘Influence and originality in Michael Quinn Patton’s utilization-focused evaluation’, thesis presented in partial fulfillment of the requirements for the degree ‘Master of Philosophy in Social Science Methods’, University of Stellenbosch.

Organisation for Economic Co-operation and Development (OECD), 2002, Glossary of key terms in evaluation and results-based management, OECD, Paris.

Oso, W.Y. & Onen, D., 2009, A general guide to writing research proposal and report. A handbook for beginning researchers, rev. edn., Sitima Printers and Stationers Ltd, Kampala, Uganda.

Patton, M.Q., 2002, Qualitative research and evaluation methods, 3rd edn., Sage, Thousand Oaks, CA.

Patton, M.Q., 2008, Utilization-focused evaluation, 4th edn., Sage Publications, Thousand Oaks, CA.

Patton, M.Q., 1997, Utilization-focused evaluation: The new century text, 3rd edn., Sage, London.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1). https://doi.org/10.4102/aej.v1i1.25

Rebora, G. & Turri, M., 2011, ‘Critical factors in the use of evaluation in Italian universities’, Higher Education 61(5), 531–544, viewed 19 September 2015, from http://www.jstor.org

Rich, R.F., 1991, ‘Knowledge creation, dissemination, and utilisation’, Knowledge 12, 319–337.

Rossi, P.H., Lipsey, M.W. & Freemen, H.E., 2004, Evaluation: A systematic approach, 7th edn., Sage, Thousand Oaks, CA.

Sandison, P., Vaux, T. & Mitchell, J., 2006, ‘The utilisation of evaluations’, ALNAP review of humanitarian action: Evaluation utilisation, Claremont Press, Horsham, London.

Schaumburg-Müller, H., 1996, Issues in aid evaluation capacity building: Donor support and experiences, Report for the DAC Expert Group on Aid Evaluation, OECD, viewed 13 July 2016, from http://www.oecd.org/development/evaluation/16546669.pdf

Scriven, M., 1996, ‘The theory behind practical evaluation’, Evaluation 2(4), 393–404. https://doi.org/10.1177/135638909600200403

Smits, P.A. & Champagne, F., 2008, ‘An assessment of the theoretical underpinnings of practical participatory evaluation’, American Journal of Evaluation 29, 427, viewed 6 June 2016 from http://aje.sagepub.com/content/29/4/427

Tumusiime, C., 2016, Kyambogo University in vice chancellor crisis, viewed 10 May 2016, from http://www.observer.ug/education/45272-kyambogo-university-in-vice-chancellor-crisis

Turnbull, B., 1999, ‘The mediating effect of participation efficacy on evaluation use’, Evaluation and Program Planning 22, 131–140. https://doi.org/10.1016/S0149-7189(99)00012-9

United Nations, 2015, ‘Integrating Geospatial Information into the 2030 Agenda for Sustainable Development’, Prepared by Greg Scott, United Nations Statistics Division, University of Melbourne, Twentieth United Nations Regional Cartographic Conference for Asia and the Pacific E/CONF.104/IP.1., October 6–9, 2015.

Vedung, E., 1997, Public policy and program evaluation, Transaction Publishers, London.

Weiss, C.H., 1977, Using social research in public policy making, Volume 11 of Lexington Books Regional Science Monograph Series, Lexington Books, The University of Michigan, United States.

Weiss, C.H., 1999, The interface between evaluation and public policy, viewed 13 August 2016, from http://evi.sagepub.com/content/5/4/468

Widmer, T. & Neuenschwander, P., 2004, ‘Embedding evaluation in the Swiss Federal Administration: Purpose, institutional design and utilization’, Evaluation 10(4), 388–409. https://doi.org/10.1177/1356389004050283

Williams, K., De Laat, B. & Stern, E., 2002, The use of evaluation in the commission services, Technopolis France, Paris.


