About the Author(s)


Hesborn Wao
Division of Evidence-Based Medicine and Comparative Effectiveness Research, USF Health Morsani College of Medicine (MCOM), University of South Florida, United States

Rohin Onyango
Africa Capacity Alliance, Nairobi, Kenya

Elizabeth Kisio
Children of God Relief Institute-Lea Toto Program, Karen-Nairobi, Kenya

Moses Njatha
ICF, MEASURE Evaluation PIMA, Nairobi, Kenya

Nelson O. Onyango
School of Mathematics, University of Nairobi, Kenya

Citation


Wao, H., Onyango, R., Kisio, E., Njatha, M. & Onyango, N.O., 2017, ‘Strengthening capacity for monitoring and evaluation through short course training in Kenya’, African Evaluation Journal 5(1), a192. https://doi.org/10.4102/aej.v5i1.192

Original Research

Strengthening capacity for monitoring and evaluation through short course training in Kenya

Hesborn Wao, Rohin Onyango, Elizabeth Kisio, Moses Njatha, Nelson O. Onyango

Received: 18 Nov. 2016; Accepted: 24 Dec. 2016; Published: 13 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Weak monitoring and evaluation (M&E) systems and limited supply of M&E human resources in Africa signal the need to strengthen M&E capacity.

Objectives: This exploratory study evaluated the effect of short course training on professionals’ knowledge and skills in the areas of mixed methods research, systematic review and meta-analysis, and general principles of M&E.

Methods: A partially mixed concurrent dominant status design including quantitative (multilevel modelling and meta-analyses) and qualitative (thematic content analysis) components was employed to evaluate the impact of a 4-day short course training focusing on these areas.

Results: Thirty-five professionals participated in the training. Participants experienced an increase in knowledge in the three areas; however, the average change in knowledge did not differ across participants’ employment settings. Participants’ self-stated objectives considered SMART and belonging to a higher level of Bloom’s taxonomy were associated with change in knowledge. Based on comments made by participants, the majority intended to apply what they had learned to their work; clarity of content delivery was the most liked aspect of the training, and the use of more practical sessions was recommended as a way to improve the training.

Conclusions: This study provides preliminary evidence of the potential of short course training as an approach to strengthening M&E capacity in less-developed countries such as Kenya. It underscores the importance of participants’ self-stated objective(s) as an element to be considered in enhancing the knowledge, attitudes and skills needed for acceptable capacity building in M&E.

Introduction

Effective monitoring and evaluation (M&E) is pivotal for good governance and public resource management because it promotes transparency, accountability and a performance culture (AFDB 2009; Mackay 2006). The emphasis on results, effectiveness and impact continues to highlight the weaknesses and the limited supply of M&E capacity in Africa, thus signalling the growing need to strengthen M&E capacity development efforts in the continent (AFDB 2009; Porter & Goldman 2013).

The literature on continuing professional development in M&E identifies practice-based learning activities for enhancing professional competence and lately links it to various strategies for lifelong learning (Kuji-Shikatani 2015; Patton & Patrizi 2005; Stevahn & King 2016; Torres 2016). There is emphasis on the importance of conceptualising organisational evaluation capacity building (ECB), both in terms of the capacity to do [conduct] evaluation and to use it (Cousins et al. 2008, 2014b; Hueftle Stockdill, Baizerman & Compton 2002). Briefly, ECB refers to the intentional, continuous act of creating and sustaining organisational processes that make doing quality programme evaluation and using it routine within and between organisations (Hueftle Stockdill et al. 2002). With the rising demands for accountability and evidence-based practices, ECB continues to be a topic of great interest to many organisations. Although evaluators are engaged in ECB activities in these organisations, little is known about the strategies used and their effectiveness (Preskill & Boyle 2008). Broadly, delivery of training to individuals aspiring to work in the M&E field (pre-service) or those already practising M&E (in-service) has been either via direct approaches (e.g. university-based graduate programmes, achievement-oriented training offered by professional societies in the form of workshops, online programmes or training institutes) or via indirect approaches such as the use of practicum activities (Cousins, Bourgeois & Associates 2014a).

Over the last 10 years, Kenya has made significant progress in strengthening M&E capacity. A number of training programmes have been established by universities or colleges, research institutions and development partners. Nine universities (five public and four private) currently offer master’s level training in M&E. International organisations and communities of practice such as MEASURE Evaluation provide platforms for M&E training, including self-paced courses, webinars or live classes. Although these training programmes contribute to the supply side by preparing M&E professionals, research involving multiple organisations across Africa has shown that the contents of similar trainings tend to emphasise theory at the expense of practical application and are perceived to have limited practical utility or relevance to trainees’ work (Cousins et al. 2014b; Tarsilla 2014). These findings are consistent with those of studies conducted in developed countries (Clinton 2013; Suarez-Balcazar & Taylor-Ritzler 2013). To conduct evidence-based M&E, for example, professionals need skills in how to search for evidence [which often includes systematic review and meta-analysis (SRMA)], how to prepare evidence if none exists and how to undertake mixed methods investigations, as increasingly necessitated by the evaluation questions addressed.

For most M&E professionals, who tend to have limited time to attend formal training, short-term courses (hereafter, short courses) provide an opportunity, amongst an array of capacity building initiatives, to enhance M&E skills. By short courses, we are referring to intensive and highly specialised training aimed at refreshing and upgrading the knowledge and skills of professionals so that they can perform their work effectively. In Kenya, a number of institutions (e.g. Kenya Institute of Management, Population Studies & Research Institute at the University of Nairobi, the ADAM Consortium Project, InsideNGO and AMREF International) conduct short courses to strengthen capacity in M&E. Although short courses are increasingly used, a systematic evaluation of their impact on professionals’ knowledge and skills has not been performed, especially in less-developed countries. A recent survey of 35 national evaluation societies in 33 low- and middle-income countries found that face-to-face interaction, which often includes hands-on tacit knowledge, is associated with enhanced ECB (Dewachter & Holvoet 2016). Based on the results of this multinational study, we surmise that short courses, a training modality that includes face-to-face interaction, have the potential to strengthen the M&E capacity of participants.

Objectives

The primary purpose of this study was to evaluate the effect of short course training on professionals’ knowledge and skills in three areas: mixed methods research (MMR), SRMA and general principles of M&E. A secondary purpose was to explore the extent to which short course training facilitated application of knowledge to the workplace. This baseline evaluation was intended to inform policies related to the use of short courses as a strategy to strengthen M&E in less-developed countries such as Kenya.

Methods

Description of the short course training

A team of five trainers from the University of South Florida, the University of Nairobi, ICF, MEASURE Evaluation PIMA, Children of God Relief Institute and Africa Capacity Alliance conducted a 4-day short course consisting of three interrelated segments: MMR, SRMA and general principles of M&E. The purpose of the training was to strengthen practical M&E skills by focusing on MMR, SRMA and statistical data analysis.

The training consisted of 16 modules. Each day (08:30–17:30), four modules were covered, each lasting about 2 h. There was a 30-min morning tea break, a 1-h lunch break and a 15-min evening tea break. A typical session consisted of an interactive PowerPoint presentation with numerous practical examples, questions and answers, practice using different analysis software with trainers present for consultation, requests for verbal feedback from participants, sharing of real-life examples (e.g. having two doctoral students present a summary of their dissertation work, which was based on an MMR design) and reinforcement of key concepts at the end of each session. Moore, Green and Gallis’ conceptual model for planning and assessing continuous learning guided the structuring of the training (Moore, Green & Gallis 2009). For example, in Preparing Evidence, a sample SRMA module, the session’s activities were structured such that the trainer could assess the extent to which a participant knew what to do (e.g. could identify dichotomous, continuous or time-to-event outcomes in a meta-analysis scenario), how and when to do it (e.g. could determine whether relative risk, mean difference or hazard ratio is the appropriate effect measure) and could independently demonstrate to others how to do it (e.g. could show peers how to perform a given meta-analysis using software such as CMA or Stata).

This training was unique in several aspects. Firstly, it incorporated hands-on use of multiple software packages (RevMan, Stata, R, Excel and CMA) to perform different analyses, thus affording participants the opportunity to practice what they learned before returning to their workplace. Secondly, rather than waiting until the end of the training, participants provided feedback throughout, which informed readjustment of the training to be more relevant to their needs. Thirdly, participant diversity in terms of disciplines encouraged interaction, as all shared a common interest in M&E. Fourthly, unlike most training programmes, in which trainers all come from the same institution, ours was a multidisciplinary team from different collaborating institutions: a Carnegie African Diaspora Fellowship Program (CADFP) fellow from the University of South Florida in the United States; a CADFP host from the School of Mathematics, University of Nairobi; and three M&E experts from three different capacity development partners in Kenya (ICF, MEASURE Evaluation PIMA, Children of God Relief Institute and Africa Capacity Alliance). The composition of the training team was informed by a study indicating that collaboration between international partners and African institutions, or between in-country institutions and organisations keen on ECB, is a promising strategy for enhancing M&E capacity in Africa (Tarsilla 2014).

Study design

To evaluate the effect of the short course on participants’ knowledge and skills in the areas of MMR, SRMA and principles of M&E, a partially mixed concurrent dominant status design was employed (Leech & Onwuegbuzie 2009). Partially mixed implies that findings from the quantitative and qualitative phases were integrated after completing data analyses; concurrent implies that data for both phases were collected concurrently; and dominant implies that the quantitative phase had more weight in addressing the overarching question, ‘What is the effect of short course training on participants’ knowledge and skills?’ Numeric data quantified participants’ perception of the effect (quantitative phase), whereas qualitative data facilitated explanation of the quantified impact, thus allowing us to gain deeper insight into the impact of the training and justify the meta-inferences drawn (Greene & Caracelli 1997).

Evaluation questions

Two questions were addressed in the quantitative phase: (1) Do participants experience a change in knowledge following participation in the short course?; (2) What factors affect change in knowledge? Five questions were addressed in the qualitative phase: (1) What objectives do participants cite for attending the training?; (2) What changes in practice do participants intend to make following the training (or perceived barriers to making practice changes, if any)?; (3) What aspects of the training do participants like the most?; (4) What topics do participants suggest for future training?; and (5) How can the training be improved?

Context and participants

Data for this study were collected as part of a larger project, Building Capacity Through Training and Mentoring of Graduate Students and Early-Career Faculty in Systematic Reviews, Meta-Analysis, and Mixed Methods Applications, whose aim was to build capacity in M&E through training and mentoring in MMR and SRMA. The first author was a CADFP fellow, the senior author was the CADFP host and the remaining co-authors were senior M&E professionals in Kenya. The goal of the CADFP is to facilitate equitable, effective and mutually beneficial international higher education engagements between African Diaspora academics in the United States and Canada and scholars in Africa (CADFP 2016). Consistent with the Operations Evaluations Department’s three pillars of M&E capacity development (AFDB 2009), this training focused on strengthening individual knowledge and skills in M&E using a short course approach; enhancing institutional capacity through collaboration with the host institution and engaging experts as co-trainers; and creating an enabling environment by engaging the leadership at the host institution to consider offering similar training in the future.

The training targeted both pre- and in-service M&E professionals, including early-career faculty and graduate students interested in enhancing their programme evaluation skills and practitioners in health, social and behavioural science research from the public and private sectors. Notably, participation in this study was motivated by participants’ desire to enhance their knowledge and skills in M&E in general and specifically in two areas: MMR and SRMA. About one-third of the participants were funded by their institution. The remaining participants were either self-funded (non-students) or received a waiver (students) from the host institution.

Evaluation of the training

At the end of the training, participants were emailed a link to the evaluation survey and requested to complete it using either their laptops or smartphones. The survey, developed in Qualtrics (data management software), contained objective items (quantitative data) and open-ended items (qualitative data). The items were developed according to the first two of Kirkpatrick’s four levels of programme evaluation (Kirkpatrick & Kirkpatrick 2005), that is, reaction (participants’ opinion of the learning experience) and learning (degree to which participants perceived their knowledge changed as a result of participating in the training). Plans are underway to send a post-training survey in 6 months; in the meantime, we attempted to assess participants’ improvement in job performance by using a proxy measure: intention to make practice changes following the training.

Quantitative data

Change in knowledge, the outcome of interest, was assessed by having participants rate their level of knowledge about the content covered before and after the training using a 5-point rating scale (1 = novice to 5 = expert). Participants’ reaction to the training (e.g. degree of satisfaction with the content, extent to which information was effectively conveyed, whether the sessions were organised and whether there was an opportunity for interaction) was rated on a 5-point scale (1 = disagree strongly to 5 = agree strongly), whereas the degree of achievement of objectives was rated on a 3-point scale (1 = no, 2 = somewhat, 3 = yes).

Qualitative data

The survey included open-ended items requiring participants to state their objective for attending the session, changes in practice they intended to make following the training (or perceived barriers to making the changes), aspects of the training they liked the most, topics they suggested for future training and recommendations of how to improve the training. Responses to these items constituted the qualitative data. Qualitative data were quantitised to aid integration with the quantitative data. For example, self-stated objectives were examined for the degree to which they constituted a SMART objective (Doran 1981), each objective being scored 1 if the following elements were specified: Specific (activity or action of interest), Measurable (amount of change expected in terms of quantity, quality or frequency), Achievable (attainable given the time frame and resources), Relevant (impact of activity) and Time-bound (time frame for action). A score of 0 was recorded if an element was missing. A maximum score of 5 indicated a SMART objective, whereas a score of < 5 indicated the objective was not SMART. Similarly, we used Bloom’s taxonomy (Anderson & Krathwohl 2001; Bloom et al. 1956) to determine the highest category implied in the self-stated objective(s). Knowledge (recall of facts and basic concepts) was scored 1, comprehension (understanding of ideas and concepts) was scored 2, application (use of information in a new setting) was scored 3, analysis (drawing connections amongst ideas) was scored 4, synthesis (assembling elements to form a whole) was scored 5 and evaluation (justification of a position) was scored 6. A score of 3 or above was considered high level in Bloom’s taxonomy, whereas a score of less than 3 was considered low level. Survey items were developed by the first author and independently reviewed by the other trainers for clarity, appropriateness and relevance. Additional feedback on the survey was obtained from two volunteers who reviewed the items for clarity and appropriateness.
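To make the quantitising step concrete, the following is a minimal Python sketch of how manually coded SMART elements and a Bloom's taxonomy category could be converted into the scores described above. The function names, flag dictionary and example objective are hypothetical illustrations; in the study itself, the coding was done manually by the research team.

```python
# Minimal sketch (assumed, not the authors' code): turn manual codes for one
# self-stated objective into SMART and Bloom's taxonomy scores.

SMART_ELEMENTS = ["specific", "measurable", "achievable", "relevant", "time_bound"]

BLOOM_LEVELS = {
    "knowledge": 1, "comprehension": 2, "application": 3,
    "analysis": 4, "synthesis": 5, "evaluation": 6,
}

def smart_score(element_flags):
    """Sum one point per SMART element judged present by the manual coder."""
    return sum(int(element_flags.get(e, False)) for e in SMART_ELEMENTS)

def quantitise_objective(element_flags, bloom_category):
    """Return the scores used to classify one objective."""
    score = smart_score(element_flags)
    bloom = BLOOM_LEVELS[bloom_category]
    return {
        "smart_score": score,
        "is_smart": score == 5,      # only a full score of 5 counts as SMART
        "bloom_level": bloom,
        "high_bloom": bloom >= 3,    # application and above = high-level Bloom
    }

# Hypothetical example: an objective coded as specific, measurable and
# achievable (3 of 5 elements) at the 'application' level of Bloom's taxonomy.
print(quantitise_objective(
    {"specific": True, "measurable": True, "achievable": True},
    "application",
))
# {'smart_score': 3, 'is_smart': False, 'bloom_level': 3, 'high_bloom': True}
```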

Data analysis
Quantitative: Multilevel modelling and meta-analysis

To model change in knowledge, we considered the clustering of participants (level-1 unit of analysis) within employment settings (level-2). Acknowledging this nested nature of the data, we did not assume the outcome was invariant across employment settings (academic vs. government agency vs. nongovernmental organisations); such an assumption may lead to incorrect conclusions being drawn from the resulting inferential statistics (Raudenbush & Bryk 2002). The SAS PROC MIXED routine was used to fit hierarchical models (Singer 1998). We began with the unconditional means model to assess the variation in mean change in knowledge across employment settings. The outcome (Yij) was expressed as a linear combination of the grand mean (γ00), the setting effect (µ0j) and the random error associated with the ith participant in the jth setting (rij): Yij = γ00 + µ0j + rij, where µ0j ~iid N(0, τ00) and rij ~iid N(0, σ2). We estimated the fixed effect γ00 (the average change in knowledge) and two random effects, τ00 (variability in means across settings) and σ2 (variability within settings). Next, we added predictors, one at a time, and assessed model fit using different indices (Akaike 1973). Because the content covered in the three segments differed (MMR vs. SRMA vs. M&E), a meta-analysis technique was used to assess whether the impact of the training differed across segments. For each segment, we calculated the mean difference in change in knowledge and aggregated the mean differences for all participants in the segment. Thereafter, these aggregated mean differences were compared across segments. The participants, intervention, comparison and outcome (PICO) characteristics for the meta-analysis were as follows: Participants (each segment included at least one participant engaged in M&E-related work), Intervention (participation in a training segment), Comparison (pre-training and post-training knowledge levels were compared) and Outcome (change in knowledge = ‘post-training level’ – ‘pre-training level’).
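For readers who wish to reproduce the unconditional means (random-intercept) model outside SAS, the sketch below fits the same model with Python's statsmodels and computes the intra-class correlation. The data frame, column names and values are hypothetical stand-ins; the original analysis used SAS PROC MIXED.

```python
# Minimal sketch (assumed data): random-intercept model Y_ij = gamma_00 + u_0j + r_ij,
# with a random intercept for each employment setting.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, 'delta' = post - pre knowledge rating,
# 'setting' = employment setting. Tiny illustrative samples may trigger boundary warnings.
df = pd.DataFrame({
    "delta":   [2, 3, 3, 2, 1, 2, 1, 2, 2, 2, 3, 1],
    "setting": ["academic"] * 4 + ["government"] * 4 + ["ngo"] * 4,
})

model = smf.mixedlm("delta ~ 1", data=df, groups=df["setting"])
fit = model.fit(reml=True)
print(fit.summary())

# Intra-class correlation: share of total variance lying between settings.
tau00 = fit.cov_re.iloc[0, 0]   # between-setting variance (tau_00)
sigma2 = fit.scale              # within-setting residual variance (sigma^2)
icc = tau00 / (tau00 + sigma2)
print(f"ICC = {icc:.3f}")
```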

Qualitative: Thematic content analysis using constant comparison technique

Thematic content analysis was accomplished in four steps. Firstly, responses to open-ended items were independently coded by three members of our research team. We employed in vivo coding [i.e. assigning a section of data (word or statement) a label using a word or short phrase taken from that section] (Wao, Dedrick & Ferron 2011). This technique of coding ensures that the concepts remain as close as possible to participants’ own words. Next, we constantly compared each code with the preceding ones to avoid redundancy. The third step involved aggregating codes containing statements similar in content to form themes. Finally, we computed theme frequency (i.e. the number of participants who cited significant statements classified under a particular theme, expressed as a percentage of all participants). The strategy of quantitising qualitative data (i.e. computing theme frequency) allowed us to glean additional information from the qualitative data, thus enhancing the credibility of our findings. Microsoft Excel was used to manage qualitative data, whereas NVivo was used to perform qualitative data analysis.
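As an illustration of the theme-frequency calculation described above, the short sketch below counts each participant at most once per theme and expresses the counts as percentages of all participants. The participant identifiers and themes are hypothetical, and the sketch is not a substitute for the NVivo workflow used in the study.

```python
# Minimal sketch (assumed data): quantitise coded qualitative responses into theme frequencies.
from collections import defaultdict

def theme_frequencies(coded_responses, n_participants):
    """coded_responses maps participant_id -> set of themes assigned to that participant."""
    counts = defaultdict(int)
    for themes in coded_responses.values():
        for theme in set(themes):   # count each participant once per theme
            counts[theme] += 1
    return {t: round(100 * c / n_participants, 1) for t, c in counts.items()}

# Hypothetical coded responses for three participants.
coded = {
    "p01": {"more practical sessions"},
    "p02": {"more practical sessions", "more time on practicals"},
    "p03": {"send materials beforehand"},
}
print(theme_frequencies(coded, n_participants=3))
# {'more practical sessions': 66.7, 'more time on practicals': 33.3, 'send materials beforehand': 33.3}
```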

Results

A total of 35 participants from diverse backgrounds participated in the training (43% women; 31% students, 21% faculty members, 41% in-service M&E professionals; 31% from academic institutions, 31% from public institutions and 31% from NGOs). Participants spent an average of 24 min (range: 9–56 min) completing the survey. Although all participants were expected to participate in all modules, a few were about 5–10 min late for some modules.

Quantitative phase
Participants’ knowledge increased in each training segment

The results of the meta-analysis showed an overall statistically significant increase in mean change in knowledge following the training [standardised mean difference (SMD) = 1.60, 95% CI: 1.03, 2.17] (Figure 1). For each segment, the mean change in knowledge was as follows: MMR (SMD = 1.92, CI: 1.28, 2.57), SRMA (SMD = 1.86, CI: 1.19, 2.54) and M&E (SMD = 1.60, CI: 1.03, 2.17). There was moderate heterogeneity amongst training segments (I2 = 56.7%).

FIGURE 1: Effect of short course training on knowledge (N = 35 participants).
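For readers unfamiliar with how a pooled SMD and the I2 statistic reported above are typically obtained, the following is a minimal, self-contained sketch of DerSimonian–Laird random-effects pooling. The per-segment standard errors are hypothetical values chosen for illustration (roughly back-calculated from the reported confidence intervals); the sketch does not reproduce the software pipeline used in the study.

```python
# Minimal sketch (assumed inputs): DerSimonian-Laird random-effects pooling of
# per-segment standardised mean differences, with an I^2 heterogeneity estimate.
import math

def pool_random_effects(effects, ses):
    w = [1 / se**2 for se in ses]                                # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)  # fixed-effect pooled mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-segment variance
    w_star = [1 / (se**2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # % of variability due to heterogeneity
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Hypothetical per-segment SMDs and standard errors (illustration only).
smd, ci, i2 = pool_random_effects(effects=[1.92, 1.86, 1.60], ses=[0.33, 0.34, 0.29])
print(f"Pooled SMD = {smd:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.1f}%")
```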

Change in knowledge did not differ across employment settings

The results of the unconditional means model showed that, for each segment, the average change in knowledge did not differ across employment settings (τ00); however, there was significant variation (σ2) amongst participants within employment settings (Table 1, top panel). Estimates of the intra-class correlation (ρ), the proportion of the total variance lying between employment settings, suggested the existence of clustering of knowledge change within settings. Overall, multilevel modelling results showed a significant increase in knowledge across employment settings.

TABLE 1: Parameter estimates and standard errors for modelling change in knowledge following short course training (N = 35 unique participants).
Factors associated with change in knowledge

When other factors were added to the unconditional model (Table 1, bottom panel), two factors were associated with change in knowledge for the MMR segment: whether the objective was considered SMART and whether it belonged to a higher level of Bloom’s taxonomy. For the M&E segment, the extent to which the session’s content was perceived as applicable to work was associated with change in knowledge. For the SRMA segment, we did not find evidence of a factor associated with change in knowledge (Table 1, upper panel).

Qualitative phase

The majority of participants who consented to participate in the three sections of the evaluation survey responded to the open-ended questions (MMR 85%, SRMA 96% and M&E 100%).

Participants’ self-stated objectives for attending the training

We assumed that the content of the training was relevant and time-bound within the training lifespan. Thus, determination of whether an objective was ‘SMART’ depended on evidence of being specific, measurable and attainable. Thematic analysis revealed that only a few participants stated objectives that were classified as SMART (MMR 35%, SRMA 26% and M&E 30%). For example, ‘… to integrate quantitative and qualitative methods of analyses’ (MMR), ‘… to collect and summarise all empirical evidence’ (SRMA) and ‘… to conduct M&E quantitative analysis’ (M&E) were classified as SMART objectives because action verbs (italicised) were used to describe the activity. The vast majority of participants used less precise action verbs (e.g. ‘know’, ‘understand’, ‘appreciate’ and ‘be aware of’), which are open to multiple interpretations and difficult to measure. Similarly, participants tended to use vague phrases (italicised): ‘To be able to get a lot wiser in the programmes or software used to analyse data’ and ‘To be informed and have a broader view on the available research methods and their applications’.

Participants’ self-stated objectives classified as belonging to a higher level of Bloom’s taxonomy varied (MMR 43%, SRMA 30% and M&E 55%). Examples included ‘To learn MMR skills which in turn will assist me in the supervision of undergraduate and postgraduate academic projects’ (MMR), ‘Gain more skills in SR and MA especially as it applies to M&E’ (SRMA) and ‘To equip myself with the M&E skills that I can use in my career today and in future’ (M&E).

Changes in practice participants intended to make following the training

The majority of participants made statements suggesting that they intended to apply what they learned in their work (MMR 67%, SRMA 57% and M&E 83%). For example, MMR (‘I plan to use MMR in conducting programme evaluations and during my PHD,…I’ll apply MMR in my literature review’ and ‘Better equipped to conduct technical reviews of evaluation proposal, reviewing academic pieces of work (theses, abstracts)…better equipped to facilitate technical evaluation methods training..’), SRMA (‘Encourage more students to consider conducting a SR and MA as their thesis or dissertation if this sparks interest or appropriate for their chosen topic’ and ‘I am going to use results from existing systematic reviews more effectively at work … I plan to conduct a more structured literature review for my master’s thesis which is ongoing based on the skills I acquired’) and M&E (‘Use of work plans in my daily office tasks and projects’, ‘I am now in a better position of writing good frameworks for proposals’ and ‘Endeavour to use M&E tools to structure all the M&E activities within the projects I am in charge in the organisation’). Only six participants cited potential barriers, which we broadly categorised as institutional (‘A key barrier is a lack of buy in or mandate limitations – I will have to negotiate with my boss to go slightly beyond the scope of my mandate since most evaluations are outsourced’) and availability of software (‘Software availability especially qualitative analysis software such as NVivo that is commercial’ and ‘Access to the various statistical packages could be a hindrance’).

What participants ‘liked most’ about the training

Although ‘clarity of content delivery’ and ‘applicability of knowledge acquired to work’ emerged as the most liked aspects of the training (i.e. highest average values in Table 2, last column), there were significant differences in the frequencies of themes describing most liked aspects of the training across training segments. For MMR, the most liked aspect was ‘clarity of content delivery’ (‘The clear explanations as to the difference in undertaking MMR as opposed to not’). For SRMA, it was the ‘use of technology (software) or tools’ (‘The use of a software to do SR & MA’, ‘Use of CMA for meta-analysis’, and ‘The use of software in SR’), whereas for M&E, it was ‘intriguing nature of the content presented’ (‘I particularly liked the new or emerging issues in M&E as it brought me up to speed with what is happening in M&E especially in LDCs’).

TABLE 2: Frequency of themes describing most liked aspects of the short course training.
Topics suggested for future trainings

For the MMR segment, participants suggested qualitative data analysis (‘More practice in analysing qualitative data’ and ‘Practical on qualitative analysis’) including the use of analysis software (‘software for creating qualitative research themes’ and ‘The analysis of QUAL data’). For the SRMA, more practicals using different software and emphasising data acquisition (‘How to easily identify the variable to pick for use in SR and MA’ and ‘Critical appraisal of studies’) and interpretation (‘Interpretation of the resultant findings of the two processes’) were cited. For the M&E, participants cited ‘Developing M&E work log frame and work plan’, advanced M&E topics (‘Complexity-Aware M&E Approaches: Outcome mapping, Impact Evaluation and Communicating Data for Impact’), big data and how to set up an M&E system.

How the training could be improved

Comments from participants indicated that the majority felt the training could be improved by having ‘more practical sessions’ and dedicating ‘more time on practicals’ (i.e. highest average values in Table 3, last column). Sample comments related to the theme of ‘more practical sessions’ included MMR (‘More practical sessions on conducting both quan and qual … data interpretation is where the rubber meets the road’), SRMA (‘By using more practical examples’) and M&E (‘Include practical on methods used in M&E’). With respect to the theme of ‘more time for practicals’, participants’ comments included ‘Allocate more time to the practical approach on the qualitative data collection and analysis aspect’ and ‘Allocate more time to widen the scope of training’. A few participants noted that sending materials in advance (‘By sending contents of the training before attendance’) or customising the training to participants’ level of knowledge (‘To group participants depending on their level’) would be beneficial. Although least frequently cited, partnering and creating more awareness about the areas covered in the training (e.g. MMR and SRMA) amongst institutions could strengthen capacity in the country (‘More awareness in the learning institutions and the stakeholders’ and ‘Further partnerships’). In fact, during the closing ceremony, participants suggested that the host institution should seriously consider conducting similar short courses in the future. In response to this high demand, the fellow discussed scale-up opportunities with the institution’s leadership.

TABLE 3: Frequency of themes describing how to improve the short course training.

Ethical considerations

Although the items included in the online evaluation survey had low sensitivity, a simple informed consent was sought from each participant prior to responding to questions in each of the three segments of the survey. Participants had to click ‘Yes’ to participate in each segment or ‘No’ to decline participation. Some quotations from participants have been included in this study; however, anonymity is preserved because the quotations cannot be traced to individual respondents.

Discussion

The purpose of this study was to evaluate the effect of short course training on participants’ knowledge and skills in the areas of MMR, SRMA and general principles of M&E. Overall, we found that short course training impacts trainees’ knowledge and skills in these areas. This finding coincides with the findings of prior research highlighting the potential benefits of short courses in improving the knowledge and skills of biomedical researchers and scholars in Africa (Chima, Nkwanyana & Esterhuizen 2015). That study was similar to the current evaluation in several aspects. Firstly, recognising that knowledge of biostatistics is crucial for understanding and interpreting scientific literature and for active participation in the global research enterprise, it evaluated the impact of a short biostatistics course on knowledge and performance of statistical analysis by biomedical researchers in Africa. Secondly, participants included 40 university-affiliated biomedical researchers from South Africa who participated in a 4-day short course covering topics including descriptive and inferential biostatistics and the use of statistical software packages for data analysis. Thirdly, change in knowledge and performance was measured using objective and subjective criteria with the aid of a pre- and post-training questionnaire.

Quantitative findings of this study suggest that participants’ self-stated objectives classified as SMART and as belonging to a higher level of Bloom’s taxonomy were associated with a significant increase in participants’ knowledge. This finding is corroborated in the qualitative phase, in which participants who experienced a large increase in knowledge also tended to state objectives that were considered SMART and belonged to a higher level of Bloom’s taxonomy.

Few participants state SMART objectives

A clearly stated objective by a participant allows both the trainee and the trainer to determine whether the objective is achieved at the end of the training. In this study, we found that few participants stated objectives that were considered SMART. Participants tended to use words such as ‘develop’, ‘facilitate’ or ‘support’, which are less descriptive, less specific and difficult to measure. It is worth remembering that the greater the specificity, the greater the measurability. Another problem was the use of multiple verbs. For example, ‘To explore opportunities to increase my skills in SRMA’ could simply be stated as ‘To increase my skills…’ because exploring opportunities is a step towards increasing skills. Similarly, ‘To acquire skills necessary to conduct Systematic Review and Meta-Analysis’ could be stated as ‘To conduct Systematic…’.

Participants’ self-stated objectives are useful in evaluation of training

Trainers routinely state learning objectives at the beginning of sessions so that learners know what is expected. Findings from this evaluation suggest that asking participants to self-state their objectives for participating in a session is an important element of training evaluation that is seldom undertaken. It can be used to ascertain the degree to which participants’ objectives are met, a useful piece of information because participants’ objectives may be incongruent with the trainers’ stated objectives. We recommend that, at the beginning of a training, trainers ask participants to verbally state their objectives for attending the training and help trainees contrast objectives (i.e. specific, measurable, narrow and concrete statements) with goals (general, broad, intangible and abstract statements). In this study, the majority of participants tended to state goals instead of objectives. Words such as ‘explore’, ‘seek’ and ‘encourage’ should be avoided as they tend to describe strategies instead of objectives. The use of more precise verbs (e.g. ‘list’, ‘identify’, ‘compare and contrast’, ‘state’, ‘describe’ and ‘indicate’), which document action and are open to few interpretations, should be encouraged.

Relevance of training content to work is associated with change in knowledge

A secondary purpose of this study was to explore the extent to which short course training facilitated change in practice at work. Findings from the quantitative phase show that content perceived to be relevant to work is positively associated with an increase in knowledge. Similarly, participants’ statements of changes in practice intended following the training (qualitative phase) largely revolved around application of knowledge to work. In sum, this baseline evaluation is intended to inform future recommendations for the use of short courses to strengthen M&E capacity in less-developed countries such as Kenya. Short courses have been successfully employed in health-related training (Bayona et al. 1994; Gordon et al. 2011; Masanza et al. 2010); however, to the best of our knowledge, this is the first attempt at evaluating the impact of a short course on participants’ knowledge following the training.

Limitations of the study

Firstly, we acknowledge that the variables included in the quantitative phase constitute a non-exhaustive list of potential predictors of change in knowledge, thus limiting our conclusions. Secondly, participants’ self-reported increase in knowledge may not accurately reflect actual changes in behaviour. However, we included factors that are typically representative of the impact of educational training. Finally, we did not collect baseline data to firmly confirm the impact of short courses; however, plans are underway to send a post-training survey in 6 months. Despite these limitations, this study has several strengths. Firstly, the findings provide empirical, albeit preliminary, evidence related to the impact of short course training, which might inform the design and conduct of future studies. Secondly, with the burgeoning use of the short course approach, to our knowledge, this is the first evaluation of short course training focusing on MMR, SRMA and principles of M&E. A prior study in South Africa, which evaluated the impact of a 4-day short course on the knowledge and skills of biomedical researchers and scholars in biostatistics, employed a quantitative approach (Chima et al. 2015) and thus lacked the complementary strengths and non-overlapping weaknesses that are an important characteristic of the current mixed methods evaluation.

Conclusion and recommendations

The weak M&E systems and limited supply of M&E human resources in Africa, added to the emerging demands to successfully implement national development plans (e.g. the African Union’s Agenda 2063 and the United Nations’ Sustainable Development Goals), signal the need to strengthen M&E capacity in resource-constrained countries such as Kenya. This mixed methods evaluation provides preliminary evidence of the potential of short course training as an approach to strengthening M&E capacity in less-developed countries such as Kenya. It underscores the importance of having participants state their objectives for attending the training; this information is useful in evaluating the impact of the training. For institutions in less-developed countries that are interested in using a short course strategy to build capacity in M&E, this study makes the following recommendations:

  • Involve potential participants a priori in developing the training modules. This can be undertaken through a short needs assessment survey, which allows participants to list areas of interest. Trainers can then prepare content that meets the needs of participants.
  • Prior to the training, ask participants to state their objectives for participating in the training. Given the importance of SMART objectives, trainers should provide participants with sample SMART objectives. As revealed in this study, the majority of participants are unlikely to state SMART objectives, and such guidance would help them craft SMART objectives. By self-stating their objectives, participants will be able to determine during the training evaluation the extent to which those objectives were met. In addition, trainers will be able to ascertain the degree to which those objectives are consistent with the training objectives.
  • Rather than waiting until the end of the training, request feedback from participants during the meeting and consider adjusting the training in response to the feedback. Feedback can be sought informally during tea break or formally by asking participants to note on a piece of paper ‘what went well’ and ‘what can be improved’ at the end of the day.
  • Finally, given that participants are likely to be from diverse backgrounds, it is recommended that a multidisciplinary training team be assembled. This would ensure that examples provided are relevant to the participants. For example, in this study, a multidisciplinary training team ensured that examples of problems analysed using analysis software were from a wide range of disciplines, thus encouraging engagement in the training.

Acknowledgements

The authors acknowledge the technical support received from the faculty and staff at the University of Nairobi, School of Mathematics (SOM). We are particularly grateful to Prof. Patrick Weke, Director of SOM, for his support during the training. The evaluation was made possible by the cooperation of participants who completed the online survey.

This work was supported by a grant from the Institute of International Education’s Carnegie African Diaspora Fellowship Program, funded by the Carnegie Corporation of New York.

Competing interests

The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions

H.W. conceived the initial idea for the study and developed the study design, prepared both quantitative and qualitative data and wrote the initial draft of the manuscript. H.W. and N.O. conducted the statistical analysis, incorporating qualitative data. R.O., E.K. and M.N. were principally responsible for the literature review, data cleaning, data coding and qualitative data analysis. N.O. reviewed the statistical analyses. All authors contributed significantly to the data interpretation, drafting of the manuscript and revision of the manuscript. All authors gave their approval of the final manuscript for publication.

References

AFDB, 2009, Building capacity for monitoring and evaluation in Africa: AfDB role and experience, viewed 22 December 2016, from http://www.nec2015.net/sites/default/files/AfDB%20Brief%20-%20Role%20and%20Experience.pdf

Akaike, H. (ed.), 1973, Information theory as an extension of the maximum likelihood principle, Akademiai Kiado, Budapest.

Anderson, L.W. & Krathwohl, D.R. (eds.), 2001, A taxonomy for learning, teaching and assessing: A revision of Bloom’s educational objectives, Addison Wesley Longman, Inc., New York.

Bayona, M., Leaverton, P.E., Rangel-Sharpless, M.C. & Williams, P.D., 1994, ‘Short course training in epidemiology and biostatistics for graduate and undergraduate public health professionals’, Public Health Reports 109, 434–438.

Bloom, B.S., Engelahar, M.D., Frust, E.J., Hill, W.H. & Krathwohl, D.R., 1956, Taxonomy of educational objectives, Handbook 1: Cognitive domain, David McKay, New York.

CADFP, 2016, Carnegie African Diaspora Fellowship Program (CADFP), viewed 13 October 2016, from http://www.iie.org/Programs/Carnegie-African-Diaspora-Fellows-Program#.V_-1s-ArLIV

Chima, S.C., Nkwanyana, N.M. & Esterhuizen, T.M., 2015, ‘Impact of a short biostatistics course on knowledge and performance of postgraduate scholars: Implications for training of African doctors and biomedical researchers’, Nigerian Journal of Clinical Practice 18(Suppl), S62–S70.

Clinton, J., 2013, ‘The true impact of evaluation’, American Journal of Evaluation 35, 120–127. https://doi.org/10.1177/1098214013499602

Cousins, J.B., Bourgeois, I. & Associates, 2014a, ‘Multiple case study methods and findings’, New Directions for Evaluation 2014, 25–99. https://doi.org/10.1002/ev.20077

Cousins, J.B., Elliot, C., Amo, C., Bourgeois, I., Chouinard, J., Goh, S.C. et al., 2008, ‘Organizational capacity to do and use evaluation: Results of a Pan-Canadian survey of evaluators’, Canadian Journal of Program Evaluation 23, 1–35.

Cousins, J.B., Goh, S.C., Elliott, C.J. & Bourgeois, I., 2014b, ‘Framing the capacity to do and use evaluation’, New Directions for Evaluation 2014, 7–23. https://doi.org/10.1002/ev.20076

Dewachter, S. & Holvoet, N., 2016, ‘Facing up to (online) fashion and fads … Face-to-face contact is here to stay in M&E capacity building. Evidence from 35 National Evaluation Societies’, African Evaluation Journal 4, 1–11. https://doi.org/10.4102/aej.v4i1.158

Doran, G.T., 1981, ‘There’s a S.M.A.R.T. way to write management’s goals and objectives’, Management Review 70, 35.

Gordon, E.D., Kramer, E., Couper, I. & Brysiewicz, P., 2011, ‘Family-witnessed resuscitation in emergency departments: Doctors’ attitudes and practices’, South African Medical Journal 101, 765–767.

Greene, J.C. & Caracelli, V.J. (eds.), 1997, Advances in mixed-method evaluation: The challenges and benefits of integrating diverse programs, Jossey-Bass, San Francisco, CA.

Hueftle Stockdill, S., Baizerman, M. & Compton, D.W., 2002, ‘Toward a definition of the ECB process: A conversation with the ECB literature’, New Directions for Evaluation 2002, 7–26. https://doi.org/10.1002/ev.39

Kirkpatrick, D.L. & Kirkpatrick, J.D., 2005, Evaluating training programs: The four levels, Berrett-Koehler Publishers Inc., San Francisco, CA.

Kuji-Shikatani, K., 2015, ‘Credentialed evaluator designation program, the Canadian experience’, New Directions for Evaluation 2015, 71–85. https://doi.org/10.1002/ev.20112

Leech, N.L. & Onwuegbuzie, A.J., 2009, ‘A typology of mixed methods research designs’, Quality & Quantity 43, 265–275. https://doi.org/10.1007/s11135-007-9105-3

Mackay, K., 2006, Institutionalization of monitoring and evaluation systems to improve public sector management, Evaluation Capacity Development Working Paper Series, No. 15, World Bank, Washington, DC.

Masanza, M.M., Nqobile, N., Mukanga, D. & Gitta, S.N., 2010, ‘Laboratory capacity building for the International Health Regulations (IHR[2005]) in resource-poor countries: The experience of the African Field Epidemiology Network (AFENET)’, BMC Public Health 10(Suppl 1), S8. https://doi.org/10.1186/1471-2458-10-S1-S8

Moore, D.E., Jr., Green, J.S. & Gallis, H.A., 2009, ‘Achieving desired results and improved outcomes: Integrating planning and assessment throughout learning activities’, The Journal of Continuing Education in Health Professions 29, 1–15. https://doi.org/10.1002/chp.20001

Patton, M.Q. & Patrizi, P., 2005, ‘Case teaching and evaluation’, New Directions for Evaluation 2005, 5–14. https://doi.org/10.1002/ev.141

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1, 9. https://doi.org/10.4102/aej.v1i1.25

Preskill, H. & Boyle, S., 2008, ‘Insights into evaluation capacity building: Motivations, strategies, outcomes, and lessons learned’, Canadian Journal of Program Evaluation 23, 147–174.

Raudenbush, S.W. & Bryk, A.S., 2002, Hierarchical linear models: Applications and data analysis methods, 2nd edn., Sage, Thousand Oaks, CA.

Singer, J.D., 1998, ‘Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models’, Journal of Educational and Behavioral Statistics 23, 323–355. https://doi.org/10.2307/1165280

Stevahn, L. & King, J.A., 2016, ‘Facilitating interactive evaluation practice: Engaging stakeholders constructively’, New Directions for Evaluation 2016, 67–80. https://doi.org/10.1002/ev.20180

Suarez-Balcazar, Y. & Taylor-Ritzler, T., 2013, ‘Moving from science to practice in evaluation capacity building’, American Journal of Evaluation 35, 95–99. https://doi.org/10.1177/1098214013499440

Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2, 1–13. https://doi.org/10.4102/aej.v2i1.89

Torres, R.T., 2016, ‘Planning and facilitating working sessions with evaluation stakeholders’, New Directions for Evaluation 2016, 53–66. https://doi.org/10.1002/ev.20179

Wao, H.O., Dedrick, R.F. & Ferron, J.M., 2011, ‘Quantitizing text: Using theme frequency and theme intensity to describe factors influencing time-to-doctorate’, Quality & Quantity 45, 923–934. https://doi.org/10.1007/s11135-010-9404-y


 
