About the Author(s)


Maurice T. Agonnoude
National School of Public Health and Epidemiologic Monitoring (ENATSE), University of Parakou, Benin

François Champagne
Department of Health Administration (DASUM), University of Montreal Public Health Research Institute (IRSPUM), University of Montreal School of Public Health (ESPUM), Canada

Nicole Leduc
Department of Health Administration (DASUM), University of Montreal Public Health Research Institute (IRSPUM), University of Montreal School of Public Health (ESPUM), Canada

Citation


Agonnoude, M.T., Champagne, F., Leduc, N. 2016, ‘Evaluation involvement of local HIV/AIDS non-governmental organisations in Benin’, African Evaluation Journal 4(1), a178. http://dx.doi.org/10.4102/aej.v4i1.178

Original Research

Evaluation involvement of local HIV/AIDS non-governmental organisations in Benin

Maurice T. Agonnoude, François Champagne, Nicole Leduc

Received: 01 Apr. 2016; Accepted: 28 Sept. 2016; Published: 30 Nov. 2016

Copyright: © 2016. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: In recent years, non-governmental organisations (NGOs) and civil society have become increasingly involved in the fight against the HIV/AIDS pandemic in Africa. Although their role is well appreciated, their actions are perceived as ineffective because of a lack of monitoring and evaluation capacity.

Objective: This paper aims to describe local HIV/AIDS NGOs’ involvement in evaluation and the characteristics of this involvement.

Method: Descriptive analysis of data collected in questionnaires completed by 34 NGO executives (one per NGO).

Results: Most NGOs do not have the minimal conditions required for positive and effective involvement in evaluations. In addition, funding agencies’ expectations regarding evaluations, total human resources and NGO experience are contextual factors that explain most aspects of their involvement in evaluations.

Conclusion: This study helps funding agencies, NGO leaders and all those interested in developing evaluation capacity in these NGOs to understand the extent of the task in this area. They must keep in mind that there is no single solution for all; solutions must be adapted to the developmental level of each organisation.

Introduction

Local non-governmental organisations (NGOs) are playing an increasing role in the fight against the HIV pandemic. Despite recent advances (ONUSIDA 2010), this pandemic continues to have very harmful socio-demographic consequences in Africa (Collectif de concertation des Associations et organisations non gouvernementales du Mali 2004). Despite their willingness to act, the NGOs’ work is perceived as ineffective (Ndiaye et al. 2005; Thelot 2009). In Benin, despite many years of funding of all kinds of projects in the HIV/AIDS sector by international donors, there are no data with which to measure the impact of these interventions (Fond Africain de Développement 2004). The lack of evidence on the interventions’ impact calls into question the legitimacy of these NGOs’ actions and the support they require from their national and international partners (Sitbon & Maresca 2002). Moreover, NGOs’ involvement in evaluation can be useful for the continuous improvement of their programmes: it leads to the acquisition of individual skills, stronger decision-making capacity (Bradley et al. 2002), organisational learning (Hoole & Patterson 2008) and the use of results (Fetterman & Wandersman 2005; Mattessich, Mueller & Holm-Hansen 2009). In Benin’s context, however, there has been no study of the evaluation activities undertaken by local NGOs to improve interventions.

This paper aims to describe local HIV/AIDS NGOs’ involvement in evaluation and the characteristics of this involvement. Widespread concern about the HIV pandemic, as well as the increased involvement of civil society in this fight, has raised much interest in these NGOs. In the following literature review, we begin with the local context of the fight against HIV/AIDS in Benin and the role of NGOs in this fight; we then conceptualise evaluation as an organised system of actions before discussing the characteristics of evaluation activities within these structures.

Literature review

Context: The fight against HIV/AIDS and the role of NGOs in Benin

The HIV epidemic is at a generalised stage in Benin (prevalence equal to or greater than 1%). HIV prevalence in the overall population has been stable at around 1.2% since 2002. The estimated number of adults (15–49 years) living with HIV was 65 472 in 2013 and 69 164 in 2015 (Comité National de Lutte contre le Sida 2014). The average prevalence amongst pregnant women at the survey sites increased from 0.3% in 1990 to 4.1% in 1999 before stabilising at around 2% in 2009, according to reports by the National HIV/AIDS Committee (2005, 2008). In 2014, the estimated number of pregnant women infected by HIV during the previous 12 months was 3428. Successive evaluations of mother-to-child HIV prevention programmes in Benin found a residual transmission rate of 14.1% in 2008, 11.41% in 2012 and 7.62% in 2014 (Programme National de Lutte contre le Sida 2015). The total estimated number of HIV/AIDS orphans was 40 323 in 2013 and 38 737 in 2015 (Comité National de Lutte contre le Sida 2014).

NGOs are involved in the response to the HIV pandemic on three levels: communication activities to promote outreach and behavioural change (Mamadou & Tossou 2005), care for persons living with HIV (Bureau d’Appui en Santé Publique 2005), and counselling and voluntary screening activities (Catraye et al. 2005). Many NGOs are also involved in care for HIV/AIDS orphans. All these NGOs have direct or indirect support from one or another of the HIV projects funded by government or international donors. According to the structure in charge (Comité National de Lutte contre le Sida et les Ist 2005), there are important weaknesses in monitoring and evaluation (M&E) at the national level. These include the absence of any system for mapping all contributing actors and partners, the lack of a specific budget for M&E and of M&E units, as well as weak technical coordination and monitoring of activities at the departmental level (departmental HIV committees are decentralised structures of the National Committee). These departmental HIV committees face major obstacles, such as insufficient appropriation by the actors of the ‘three ones’ principle (one shared action framework in the fight against the epidemic, one national coordination authority, one shared monitoring and evaluation framework). In our view, this assessment supposes that field actors such as NGOs are correctly carrying out monitoring and evaluation and that insufficiencies occur only at the level of coordination by the decentralised structures. Our analysis of this situation reveals that the problem lies with the NGOs, who do not have updated information to give to the decentralised Comité National de Lutte contre le Sida (CNLS) structures because they lack the capacity to produce this information.

Evaluation: What is it and how can an NGO be involved in it?

Contandriopoulos et al. (2000:38) define evaluation as:

the process of making a value judgement about an intervention by implementing a control system that can provide scientifically valid and socially legitimate information about this intervention or any of its components to the different actors involved, so as to enable them, according to their area of expertise, to take a position on the intervention and make a judgement that can translate into action. [authors’ translation]

Not only is evaluation a major aid in decisions to undertake, pursue, amend or analyse a public health action, but it is also an appropriate means to participate in developing the collective health regulation system needed to respond to social situations. This is why White (2002) stresses the need for rigorous and systematic M&E systems in all interventions, to derive structured lessons and to identify clearly the best practices to share. Because of the lack of evaluation budgets and a limited (at best) knowledge of how to conduct an evaluation and use the information to inform programmes, this system of evaluation cannot be implemented by field actors (NGOs). Our analysis notes that, although an external observer’s perspective is necessary to lend more credibility, there are many advantages to associating field actors with the evaluation process and even with self-evaluation. In a study titled ‘HIV agencies: Can they evaluate their own services?’ conducted in Edinburgh, Williamson, Mcphail and Lewis (1995) found that self-evaluation put agencies’ executives at ease, was less threatening for them and made it easier to keep information continuously up to date, thus enabling regular comparisons of activities, all of which are core elements for continuous improvement. According to Smith (2001), M&E, especially of gender-sensitive policies, makes it possible to detect an intervention’s adverse effects and to learn about the experiences of persons who were not satisfied with the programme and their suggestions for improvement. Duignan (2002) suggests that increased interest in social policy evaluation in New Zealand could lead to more sophisticated evaluations, which would then feed back into the design and implementation of good social policies; to achieve this result, however, it is important to build a sustainable capacity for programme evaluation.

Moreover, in the specific context of organisations, evaluation can be used as a management tool. Indeed, as Champagne et al. (2009b) have pointed out, evaluation is an essential activity for managers, who must be ready to assess all aspects of the running of their organisations at any given time. The more these assessments are based on systematic processes that entail norms, means, appraisals or an evaluative inquiry process, the more valid they will be. In a broader sense, evaluation can encompass the control process in its four stages: establishment of performance norms; measurement, that is, collection of data to understand what is happening in the unit being monitored and transmission of this information to a control centre; processing and comparison of this information with the previously established norms; and feedback from the control centre to the organisational unit, in the form of directives and corrective measures that may lead to improved outcomes.
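To make this four-stage control cycle concrete, the sketch below expresses it as a minimal loop in Python. It is purely illustrative: the indicator names, targets and data structures are our own assumptions, not elements taken from the article or from the framework of Champagne et al.

```python
# Minimal, hypothetical sketch of the four-stage control cycle described above:
# (1) establish performance norms, (2) collect and transmit measurements,
# (3) compare measurements with the norms, (4) feed corrective directives back.
from dataclasses import dataclass

@dataclass
class Norm:
    indicator: str   # name of the monitored indicator (hypothetical)
    target: float    # stage 1: performance norm

def control_cycle(norms, measurements):
    """Compare reported measurements with norms and return corrective feedback."""
    feedback = []
    for norm in norms:
        observed = measurements.get(norm.indicator)  # stage 2: data transmitted to the control centre
        if observed is None:
            feedback.append(f"{norm.indicator}: no data reported; collect and transmit it")
        elif observed < norm.target:                 # stage 3: comparison with the norm
            feedback.append(f"{norm.indicator}: {observed} is below the target of {norm.target}")
    return feedback                                  # stage 4: directives sent back to the unit

norms = [Norm("clients_counselled_per_month", 50), Norm("reports_submitted_per_quarter", 3)]
print(control_cycle(norms, {"clients_counselled_per_month": 42}))
```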

What is evaluation capacity?

According to Richter et al. (2000), evaluation capacity can be defined as a combination of knowledge and abilities needed to plan, implement, manage and appraise the effectiveness of programmes. This concept of capacity is thus a general one that includes evaluation amongst the important abilities needed for good planning and management of programmes. Reinforcing this systemic capacity in an organisation is comparable to building a new one. Our study on NGOs’ involvement in evaluation is based on the assumption that these NGOs already exist, that they function with their strengths and weaknesses, and that it is important to evaluate the impact of their actions as well as their capacity to measure this impact for purposes of improving their activities. For Bamberger, Boyle, Lamaire and Schaumburg-Muller, quoted by Gilliam et al. (2003), reinforcing evaluation capacity, whatever its definition and its role, is an intentional work of continuous creation that supports a whole organisational process promoting better quality evaluations and their application in daily use. However, because having the capacity does not necessarily lead to its being used, it is more useful to see evaluation as a process in which an organisation decides to invest for the continuous improvement and effectiveness of its interventions.

Evaluation as an organised action system

For Champagne et al. (2009a), based on Parsons’ action system model, evaluation can be considered as an organised action system with four functions: adaptation, maintenance of values and meaning, production, and goal attainment. The adaptation function relates to the extent to which NGOs are able to mobilise resources (human, material and financial) for evaluation, so that they can adapt to an organisational environment to facilitate their development. This function refers not only to NGO staff and their training, which can enable them to respond to evaluation needs, but, according to Champagne et al. (2009a), also to the extent of stakeholders’ participation and the evaluator’s position in relation to decision-makers. Indeed, according to Bradley et al. (2002:270):

the inclusion of local people in the development process reflects an acknowledgement of the need to tap into the wealth of wisdom and experience of the recipients of development aid, and to work with them to move the development of their communities forward.

Similarly, the evaluator’s position with regard to decision-makers can change over time, depending on how advanced the organisation is in the process of establishing an evaluation capacity and culture. The production function provides a description of the technical process of evaluation. It is influenced by two dimensions, according to Champagne et al. (2009a), namely the degree to which the organisation’s staff participate in evaluation and their responsibility for the technical aspects of evaluations. These dimensions of adaptation and production correspond to the three dimensions of the participative evaluation framework of Cousins and Whitemore (1998). When implementing an evaluation capacity and culture, stakeholder participation (especially amongst NGO staff) must be intense and include all phases of evaluation. Similarly, responsibility for all technical aspects must at least be shared with an evaluator-consultant, or else rest entirely with NGO staff (internal evaluation). Many authors have emphasised the advantages of involving programme staff in evaluation, such as the acquisition of knowledge and abilities relating to evaluation and to the programme itself, improved decision-making abilities, efficient use of evaluations to foster a learning environment, and a commitment to using evaluation results to improve programmes (Bradley et al. 2002; Gaventa, Creed & Morrissey 1998; Hoole & Patterson 2008; Papineau & Kiely 1996).

The values maintenance function is the basis upon which evaluation is organised in NGOs. According to Champagne et al. (2009a):

by values maintenance, we mean the fact that values are synthesised within a paradigm, that they are internalised by the actors and institutionalised, within the framework of an action system, which guarantees the system a certain cohesion. [authors’ translation]

The paradigm here is that of an evaluation culture defined by Owen and McDonald (1999), quoted by Owen (2003), as a commitment to the role of evaluation in decision-making in the organisation. Adopting a culture that recognises these results provides opportunities for significant use of the results of systematic inquiry, for internal learning, and for improving organisational efficiency. A culture of internal evaluation can be defined in three dimensions: a role, function or responsibility for evaluation; opportunities for sharing and learning from evaluation results; and a commitment to understanding and using these results in existing programmes as well as in designing new programmes. In addition to a commitment to the roles and function of evaluation in decision-making in an organisation, the paradigm can be seen as a means by which researchers can identify and choose reliable techniques and tools to discover and offer new solutions to existing problems (Champagne et al. 2009a). Such a paradigm is constructed on four axes: ontology, which deals with the nature and the way in which reality is conceptualised (realism, relativism, etc.); epistemology, which describes the nature of the relationships that the evaluator establishes with research subjects (dualism, objectivism, subjectivism); methodology, which relates to the methods that are considered valid for representing, reconstructing and creating the problems to be examined and solutions that can be applied (experimental, dialectic, hermeneutic); and teleology, which defines the intentions, aims, finalities and logics that guide actors (predictions, control or creation by negotiation). According to Levy (1994) and Guba and Lincoln (1994), quoted by Champagne et al. (2009a), the combination of these four modalities leads to three paradigmatic positions: positivism, post-positivism and constructivism. These positions, together with the other dimensions (adaptation, production, maintenance of values and meaning, goal attainment), define an evaluation style that directs, enlightens and guides decision-making throughout an evaluation process. The goal attainment function defines the extent to which evaluation can reach its goals, that is to say, can produce useful and valid information to improve interventions. Two kinds of uses are very relevant in this context: instrumental use, to influence decisions directly, specifically and in a timely way; and conceptual use, or an enlightenment process that consists of knowledge useful for understanding not only the intervention in progress but also many others in the future. Courtney and Bradley Cousins (2007), in an extensive literature review exploring how the construct ‘evaluation process utilisation’ has been operationalised in empirical studies dealing directly or indirectly with this process, identified 46 uses of evaluation processes. 
They classified these uses into four groups: learning (enlightenment, development of concepts, knowledge and expertise, ability to learn and to recognise other learning opportunities, learning about the programme, intervention or organisation, etc.); actions and behaviours (not repeating previous actions, using evaluation data, results and discoveries, modifying practices, integrating evaluative inquiry into work practices, etc.); attitudes and affects (ethical, personal and professional growth, strengthening capacities and beliefs in one’s ability to influence change, better understanding and respect for others, increased commitment, etc.); and other process uses (shared reflection, social justice, creation of relationships and professional networks, organisational improvement or development, etc.). In addition to these uses of evaluation results, Champagne et al. (2009a) identified another dimension of goal attainment, namely the type of results transfer: it can be open, meaning that it is intended not only for the stakeholders involved, but also for anyone closely or remotely affected by the evaluation and its results, or it can be narrow, i.e. intended only for certain stakeholders.

Factors influencing evaluation

Gilliam et al. (2003) and Gibbs et al. (2002) put forward the following factors influencing evaluation activities in an organisation: funding agencies’ expectations, financial resources, leadership, staff (and staff stability), technologies, and the tools available for evaluation. Given that resources, leadership, staff and technologies are all part of the adaptation function of evaluation, we can say there are two important factors for evaluation activities in an organisation, namely involvement in evaluation and the expectations of funding agencies. In our view, two other factors play an important role in influencing capacity and evaluation activities, even though they are not found in the literature on this topic, namely the intervention area (urban, semi-urban or rural) and the NGO’s type of funding. For example, an NGO operating in an urban area is more open to the external world, has more visibility and is better able to mobilise international funding than a rural NGO. Once it has succeeded at its first projects, its visibility and its partners’ confidence in it for other, larger projects increase; in this way, it reinforces its expertise in the field and acquires valuable experience.

Theoretical model

The theoretical framework for this study is presented in Figure 1.

FIGURE 1: Theoretical model showing relationships between components of evaluation involvement and local context of action.

This framework considers evaluation to be an organised action system with four dimensions organised around two components: the level of involvement in evaluation and the style of involvement in evaluation. The level of involvement of an NGO is defined by the quantity and quality of human and financial resources made available for evaluation (adaptation), by the existence of a defined role or function assigned to evaluation, by learning opportunities and a commitment to use evaluation results (maintenance of values and meaning) and by the organisation’s annual rhythm of evaluations (production). The style of involvement in evaluation is defined by the positions that the staff assigned to evaluation hold in the organisation’s decision-making structure (adaptation), by the extent of stakeholder participation in evaluations, by the paradigmatic position of the head of evaluation within the NGO (values), by the participation level of NGO staff in evaluation and by the responsibility this staff can assume for the technical aspects of the evaluation (production). These two components interact to influence the NGO’s capacity to attain its goal and to continuously improve the effectiveness of its interventions. In addition, this organised system is embedded in a local context described by five components: geographic area, available human resources, type and expectations of funding agencies, main source of funding, and NGO experience.

Methods

Using the dimensions of our concept of ‘involvement in evaluation’, we aim to describe the population of local HIV/AIDS NGOs.

Population

The population studied is made up of local NGOs in Benin which are private not-for-profit associative institutions recognised by national authorities. This recognition can be either by permission of the Ministry of the Interior, like all other organisations in civil society, or by a partnership contract with the Ministry of Health.

Sampling

The inclusion criteria were: (1) having been recognised by the national authorities for a minimum of 3 years; and (2) having worked either in HIV/AIDS counselling and screening or in medical or psychosocial care for people living with HIV/AIDS for at least 3 years. The sampling frame was composed of 161 NGOs assumed by the Ministry of Health to be working actively in this field. From this population, 110 randomly selected NGOs (the number per department was proportional to the number of NGOs in each department in the directory) were contacted either at their head office or by phone. Many could not be reached because of imprecise addresses, or were reached but could not participate because they had not worked in the field for enough years. Only 62 of the NGOs contacted were able to participate in the study (38.51% of the original population). Of this group, 34 NGOs consented to participate, for a recruitment success rate of 54.84%. Table 1 presents a comparison of these three groups according to the characteristics available in the directory. We note that, apart from the seniority criterion, the NGOs that agreed to participate in the study do not differ from the other NGOs specialised in HIV/AIDS in Benin; thus, the sample can be considered representative of Benin’s NGO population. According to the field report, recently created NGOs have many problems and rarely meet the inclusion criteria.

TABLE 1: Comparison of three groups of NGOs in Benin directory of HIV/AIDS NGOs.

Definition of variables

The study has been built around three sets of variables.

Level of involvement in evaluation:

Involvement is defined by a combination of six categorical variables, each measured on an ordinal scale with values from 0 to 2 (a scoring sketch follows the list):

  • existence in the budget of specific financial resources for evaluation (no resources; insufficient resources; sufficient resources)
  • existence and quality of human resources available for evaluation (no human resources; person or team with only field experience; team with graduate training)
  • annual frequency of learning opportunities and discussion of evaluation results in the NGO (non-existent; 1 to 3; 4 and more)
  • annual frequency of new monthly data being incorporated into evaluation data in the NGO (4 or fewer; 5 to 8; 9 to 12)
  • existence of an evaluation structure in the NGO’s organisation chart and its association with the conception of new programmes (non-existent; existing but without contribution; existing and operational)
  • level of NGO staff’s participation in evaluations (low; medium; high).
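As an illustration of how these six items could be combined into the composite index reported in the Results, the sketch below sums the item scores. The aggregation rule (a simple sum, giving a 0–12 range) and the variable names are our assumptions; the article itself reports only the resulting index distribution.

```python
# Hypothetical scoring sketch for the level-of-involvement composite index,
# assuming it is the simple sum of the six ordinal items (each scored 0-2).
ITEMS = [
    "financial_resources",     # 0 = none, 1 = insufficient, 2 = sufficient
    "human_resources",         # 0 = none, 1 = field experience only, 2 = graduate-trained team
    "learning_opportunities",  # 0 = none, 1 = 1-3 per year, 2 = 4 or more per year
    "new_monthly_data",        # 0 = 4 or fewer, 1 = 5-8, 2 = 9-12 months per year
    "evaluation_structure",    # 0 = non-existent, 1 = exists without contribution, 2 = operational
    "staff_participation",     # 0 = low, 1 = medium, 2 = high
]

def involvement_index(scores: dict) -> int:
    """Sum the six ordinal item scores (0-2 each) into a composite index (0-12)."""
    for item in ITEMS:
        if scores.get(item) not in (0, 1, 2):
            raise ValueError(f"Score for '{item}' must be 0, 1 or 2")
    return sum(scores[item] for item in ITEMS)

# Example NGO: insufficient funds, field-experienced staff, frequent learning
# opportunities, little new data, no structure, medium staff participation.
example = {
    "financial_resources": 1,
    "human_resources": 1,
    "learning_opportunities": 2,
    "new_monthly_data": 0,
    "evaluation_structure": 0,
    "staff_participation": 1,
}
print(involvement_index(example))  # 5
```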
Style of involvement in evaluation:

This is defined by five nominal variables:

  • paradigmatic position of evaluation leader (positivist; constructivist; post-positivist)
  • extent of stakeholder participation (selective; medium; large)
  • evaluation staff’s position in organisational decisions (hierarchical; consultative; decision-maker)
  • staff responsibility in technical aspects of evaluations (none; shared; total)
  • type of result transfer (open; narrow).
Contextual variables:

There are five variables in this category:

  • principal source of funding: discrete variable with three modalities [local funding (self-funding, public or religious funding); foreign-state funding; foreign partners]
  • expectations of funding agencies regarding evaluations (yes; no)
  • geographic location of NGO (rural; semi-urban; urban)
  • NGO staff numbers (in full-time equivalents)
  • experience of NGO (in years).
Methods of data collection

Two methods of data collection were used:

  • document consultations: partnership contracts between the funding structure and the NGO (self-evaluation requirement from partner), organisation charts or job description cards (evaluation role or function description), periodic reports with financial information (specific resources for evaluation in the budget)
  • questionnaires completed by NGO agents in charge of monitoring and evaluating programmes, with closed questions administered by the principal investigator during an interview.
Data quality

These data collection tools were pre-tested in six NGOs with characteristics similar to those of the study population. The pre-test enabled the researchers to reformulate or divide some questions in order to make them more understandable for respondents.

Data analysis

Data collected were subjected to descriptive analysis with univariate frequencies for categorical variables, and central and dispersion parameters for continuous variables. Then bivariate analyses (chi-square test, relative risk, comparison of means by t-test or ANOVA, depending on the case) were carried out between, on the one hand, items of level and style of involvement in evaluations and, on the other, contextual variables. These analyses were performed using SPSS 17.0 software. The cut-off threshold for conclusions was α = 0.05.
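For readers who wish to reproduce this kind of bivariate analysis outside SPSS, the sketch below shows equivalent tests in Python: a chi-square test for two categorical items, and a t-test or one-way ANOVA for a continuous contextual variable across groups. The input file and column names are hypothetical; this is not the authors’ analysis code.

```python
# Illustrative bivariate analyses, assuming one row per NGO in a CSV file.
# All file and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ngo_survey.csv")

# Categorical item vs categorical contextual variable: chi-square test
table = pd.crosstab(df["funder_expects_evaluation"], df["sufficient_eval_budget"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")

# Continuous contextual variable (staff numbers) across two groups: t-test
two_groups = [g["staff_fte"].dropna() for _, g in df.groupby("high_staff_participation")]
if len(two_groups) == 2:
    t, p_t = stats.ttest_ind(*two_groups, equal_var=False)
    print(f"t = {t:.2f}, p = {p_t:.3f}")

# Continuous variable across three or more groups: one-way ANOVA
groups = [g["staff_fte"].dropna() for _, g in df.groupby("participation_level")]
f_stat, p_f = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_f:.3f}")
```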

Results

Level of involvement in evaluation

Table 2 shows NGO distribution by items related to level of involvement in evaluations. It can be seen that more than 85% of NGOs have no specific financial resources or have only insufficient resources for evaluations, whilst only 11.8% (4/34) have a person (or a team) with graduate training in programme evaluation. Likewise, nearly 80% of NGOs either have no structure or unit in charge of evaluations (29.4%), or they have a structure without a clearly defined role or a role without a structure in charge (50%). Finally, whilst 60% of NGOs have at least four learning opportunities or discussions regarding evaluations or activity reports annually, the same proportion produce new monthly data less than four times a year.

TABLE 2: NGO distribution by items related to level of involvement in evaluation.

The composite index for the level of involvement in evaluation has a near normal distribution, with both the median (5) and the statistical mode (5) near the mean (5.56).

Style of evaluation

Table 3 shows NGO distribution by items related to evaluation style. We find that most NGOs have an evaluation leader who is a decision-maker, and that more than half of them (52.9%) adopt a post-positivist position in their studies with an open transfer of results. On the other hand, the same proportion of evaluation leaders in NGOs work with a limited number of stakeholders. In most NGOs (73.5%), evaluation staff has no responsibility for the technical aspect of the evaluations.

TABLE 3: NGO distribution by items related to style of involvement in evaluation.

Contextual characteristics

More than half of the NGOs (18, or 52.9%) operate in rural areas and have local funding (public or self-funding). For exactly half of them, the contract with their funding agency requires involvement in evaluations. The total number of personnel per NGO (in full-time equivalents) ranges from 2 to 85, with a mean of 18.03 and a standard deviation of 17. The NGOs’ experience ranges from 4.21 to 21.16 years (mean = 12.4, SD = 4.14).

Contextual variables’ influence on items related to level of involvement in evaluation

Table 4 shows the results of these comparison tests with statistical test values, degrees of freedom and levels of significance. We note that having a local or a foreign partner as source of funding (28.6% versus 16.6% for other NGOs) is significantly associated with having a functional evaluation structure (p = 0.028). The existence of expectations by funding agencies of involvement in evaluation is significantly associated with the existence of sufficient specific financial resources for evaluations (23.5% versus 5.9% for NGOs without these expectations; p = 0.045) as well as a high frequency of new monthly evaluation data (64.5% versus 17.6% for NGOs without expectations regarding evaluations; p = 0.005). In addition, NGOs with a large number of personnel have significantly more opportunities for learning from and discussing evaluation results (average number of NGO personnel having four or more annual opportunities equal to 22.11 versus 10.56 for NGOs having three or fewer annual opportunities; p = 0.016). Finally, these personnel participate significantly more intensively in evaluations (average number of personnel in these NGOs is 36.46 versus 14.27 for those in which personnel participation is average and 14.97 for those in which personnel participation in evaluations is low; p = 0.026).

TABLE 4: Relations between items and composite index of level of involvement in evaluations and contextual variables.

The composite index of the level of involvement in evaluations is statistically significantly influenced by funding agencies’ expectations regarding evaluations in NGOs (average level of involvement in evaluations for these NGOs is 6.65, versus 4.47 for NGOs for which there are no expectations of evaluation from funding agencies; p = 0.002), by the total of NGO human resources (positive correlation equal to 0.447; p = 0.008) and by the total number of years of NGO experience (positive correlation equal to 0.373; p = 0.03). In contrast, the intervention area and the number of years of NGO experience have no influence on items related to involvement level.

Contextual variables’ influence on items related to style of involvement in evaluation

Table 5 shows the results of bivariate analyses between items related to style of involvement in evaluations and contextual variables. It can be seen that local funding (100% of NGOs in this group versus 87% of NGOs with another kind of funding; p = 0.012) and existence of expectations for evaluations from funding agencies (100% of these NGOs versus 88% of NGOs without expectations for evaluations; p = 0.002) are statistically significantly associated with a constructivist or post-positivist position of the evaluation leader, whilst NGOs with a large number of staff are statistically significantly associated with total responsibility for the technical aspects of evaluation (average number of staff of these NGOs equal to 33.38, versus 13.67 for those having a shared responsibility with an external evaluator and 14.87 for NGOs with no responsibility for the technical aspects of evaluation; p = 0.046). Again, the geographic area of intervention and the number of years of NGO experience have no influence on items related to style of involvement in evaluations.

TABLE 5: Relations between items related to style of involvement in evaluation and contextual variables.

Potential benefits and hazards

This study demonstrates the benefits to NGOs of using programme evaluation as a tool for the continuous improvement of their intervention quality; that improvement can lead to better care for their clients. The possible harm to an NGO as a result of its participation is the revival of concealed attribution conflicts between the powerful and the powerless in evaluation and accountability within the structure, as the latter do not have the means to implement their vision. For clients, the possible harm can be linked to the recollection of negative experiences with a given NGO. The main researcher was available to discuss all these problems with the people concerned and, together with them, to find suitable solutions.

Informed consent

Information notices and consent forms were provided to all subjects before their recruitment. The subjects’ consent was informed and free, participation was voluntary, and participants could withdraw their consent at any stage. Given the sensitivity of the data and records used in this study, a data confidentiality agreement was completed and signed by the main investigator and all research agents in the study.

Data protection

Data will be preserved for 7 years on the main investigator’s personal computer, in a personal account accessible only by fingerprint and in a password-protected file.

Ethical considerations

This work obtained permission from the Ethics Committee of the Faculty of Medicine at the University of Montreal (Comité d’Éthique de la Faculté de Médecine de l’Université de Montréal, CERFM) [Certificate number: CERFM 201011 (112) 4#422] and from the National Temporary Ethics Committee for Health Research in Benin (Comité National Provisoire d’Éthique de la Recherche en Santé au Bénin, CNPERS) [Notice number 013 of 13 October 2010] before the research started.

Trustworthiness

Reliability

The data collection and quality assurance procedures enable the researchers to have confidence in the study’s reliability.

Validity

An important aspect of this study is the triple triangulation of sources, data types and analysis methods, which ensures exhaustiveness and increases its methodological rigour and internal validity. All stakeholders’ points of view were considered (managers and staff of NGOs, public or private financing agencies, clients). Several data sources (activity reports, contracts between NGOs and financing agencies) were examined in order to compare declarations with written evidence. Both quantitative and qualitative data collection and analysis methods were used. Because of time and resource constraints, NGO recruitment was limited; this limits the external validity of the study, even though we point out several contextual factors (such as staff numbers and funding agencies’ expectations) that bear on the generalisability of the results.

Discussion

NGOs’ involvement in evaluations presupposes a minimum set of conditions: having a person (or team) well educated and trained in evaluation, having financial resources specifically designated for M&E in the organisation’s budget, and having a functional structure defined in an organisation chart. The study’s conclusion is simple: most NGOs have no human resources for evaluation or have personnel with only field experience (88.2%), most have no specific financial resources or have insufficient resources for evaluations (85%), and nearly 80% either have no functional evaluation structure in their organisation chart, or have a structure with no defined role or a role without a structure in charge. Even though these critical conditions are not met, it is very interesting to note that these NGOs are involved in at least minimal evaluation activities. Indeed, nearly half of these NGOs (41.2%) have new data which can be used as evaluation data nine months per year, and nearly one-third (29.4%) have staff that participate fairly or intensively, at least four times a year, in learning opportunities and in discussion of evaluation results. These are important opportunities for sharing experiences and especially for putting evaluation results into practice, and they indicate an awareness of the importance of evaluation for the continuous improvement of interventions. Regarding the influence of context, we note that local funding is associated with the existence of a functional structure; this supports the idea that the NGOs in this study are conscious of the importance of evaluations for their activities. However, funding agencies’ expectations explain the availability of financial resources and the production of evaluations, whilst NGO staff numbers explain not only the intensity of staff participation in evaluations but also the responsibility assumed for the technical aspects of evaluation and the multiple opportunities for learning and discussion around evaluation results, all of which are essential for ensuring the use of evaluation results and for improving interventions. We are, therefore, at least partially in agreement with Lomeña-Gelis (2013), who found in Senegal that donor policies and practices heavily influenced evaluation practice.

An attempt at classification by level of involvement in evaluations

Gibbs et al. (2002), in a study conducted in the US on improving evaluation capacity in community-based HIV prevention programmes, showed that involvement in evaluation goes through three stages: acceptance, investment and advancement. In the acceptance stage, the evaluation is carried out according to the funding agencies’ directives, with few adaptations to the NGO’s specific interventions or target population; the report is then sent to the agency without internal analysis or review procedures. Compiling these reports could provide useful information, but the NGO has neither the capacity nor the motivation to manage and use this information. In the investment stage, NGO leaders have institutionalised evaluation (either by a specific budget line or by defining a role or function) as a programme improvement tool or as a way to stimulate the development of abilities relevant to the NGO; evaluation activities go beyond process to include results; staff may adapt evaluation methods to the educational level of clients or to their risk-specific behaviours and often have access to computerised data entry and analysis. These NGOs use evaluation results to document successes and programme strengths, to identify fields of action and possible changes, and to support additional funding requests. At the advancement stage, there is much institutionalised support for evaluations and the use of progressively more sophisticated designs and methods; in addition to the previous achievements, the NGO is involved in extended and complex evaluations that are incorporated into the process of planning interventions.

Assuming that NGOs with low participation in evaluations and no responsibility for the technical aspects are at the acceptance stage, we note that this corresponds to 70.6% (24/34) of the NGOs in our study sample; thus, 8.8% (3 NGOs) are in the investment stage and 14.6% (6) would then be at the advancement stage. However, if we add for the last two stages the requirement of institutionalisation, through the definition of a role or function and a specific budget allocation for evaluation, only two NGOs remain in the investment stage (5.8%) and only one in the advancement stage (2.9%). In the sample of Gibbs et al. (2002), these proportions were 40%, 55% and 5%, respectively. Thus, more than 90% of the NGOs in our sample are at an early, inexperienced stage of involvement in evaluations. In these stages, according to Gibbs et al. (2002), it is hard for NGOs to attract and retain staff with strong data collection, analysis and processing capabilities; this is the case for 88.2% of our NGOs (30/34), where the staff in charge of evaluations have only field experience in evaluation and no specific graduate training in programme evaluation.

Carman and Fredericks (2010) identified three distinct classes from a cluster classification of 179 NGOs in the US based on 19 challenges that these NGOs faced in implementing and carrying out evaluation as an improvement tool for their interventions. The first class is for NGOs facing few challenges in implementing an evaluation system linked to the organisation’s management system; the boards of directors of these NGOs are regular consumers of evaluation and performance measurement data; these NGOs use evaluation results to promote themselves to their external stakeholders. The second class is for NGOs facing certain challenges of implementation; evaluation is linked to accountability requirements of funding agencies and accreditation bodies; these organisations find it difficult to discover what to measure and how to measure it. The third class is for NGOs facing many challenges, evaluation being only one of them; these organisations have major problems with human resources, have uncertain funding and lack all kinds of resources to develop and sustain a centralised data collection system. These three classes correspond to Gibbs’ advancement, investment and acceptance stages, respectively. In addition, according to Light (2004) quoted by Carman and Fredericks (2010), they correspond to different stages of the life cycle of these organisations.

This study shows that NGOs operating in the same context can be at different stages of development, although most of the NGOs in our sample, in the HIV/AIDS sector in Benin, were at the beginner stage. Unfortunately, there is no single capacity-strengthening solution that would fit them all (Carman & Fredericks 2010); solutions must be adapted to the development stage of each NGO. This development stage is not automatically linked to the NGO’s number of years of experience, because there is no link between years of experience and the existence of financial and human resources or of evaluation data. However, to the extent that a long-standing NGO mobilises funding and sets up evaluation structures, it can improve its involvement in evaluation. In agreement with Lomeña-Gelis (2013), we believe that building on the individual evaluation capacities of some local actors and providing more diversified, professionalised training would be promising.

Constraints and limitations of the study

This study faced the usual time and resource constraints. Because of these, we were unable to enlist a sufficiently large number of NGOs in our sample to be able to detect any independent effect of involvement in evaluations on effectiveness. The main problem was inadequate operationalisation of the concept of evaluation. Indeed, in attempting to group into 10 categories the 46 types of utilisation of evaluation processes found by Courtney and Bradley Cousins (2007) in empirical studies, we came up against categories that were not mutually exclusive; we were then unable to properly classify the utilisation examples cited by the NGOs in the field. Because of this, we were unable to explain the indirect effect, via utilisation, of the level of involvement on effectiveness. Another difficulty was operationalising and measuring important concepts related to involvement in evaluations, such as evaluation quantity, the paradigmatic position of the evaluation leader, and the effectiveness of interventions. In the future, a similar study should take the time to do an exploratory analysis to validate the different concepts and operationalise them with actors in the field.

Conclusion

This study shows that although there is strong awareness that the involvement of HIV/AIDS NGOs in evaluations is important for the improvement of interventions in Benin, this process is in its early stages. Most NGOs do not even meet the minimal conditions required for positive and effective involvement in evaluation. In addition, both funding agencies’ expectations and staff numbers are contextual factors that explain most dimensions of involvement in evaluations. This study can help funding agencies, NGO leaders and all those interested in developing evaluation capacity in these NGOs to understand the scope of the task at hand. They must keep in mind that there is no single solution for all; solutions must be adapted to the developmental level of each organisation. Future research could establish whether this portrait of NGOs’ involvement in evaluations in Benin will help make their interventions in the field more effective.

Acknowledgements

The authors acknowledge the Canadian Fellowship Programme for French-speakers (Programme canadien de Bourse de la Francophonie) for granting a PhD fellowship to the principal researcher and for sponsoring field data collection. We also thank all the NGO leaders in Benin who agreed to participate in this study, and Donna Riley for the linguistic review.

Competing interests

The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Author contributions

M.T.A., as PhD candidate, was the project leader. Under the supervision of his research director and co-director, he designed and wrote the study protocol, designed the data collection tools, obtained the ethics authorisations, monitored the entire process, planned and implemented the research budget, conducted all interviews with NGO directors, supervised all patient satisfaction surveys, managed all electronic data records, carried out all processing and analyses and wrote the manuscript. F.C., as supervisor of the PhD candidate and director of the research, supervised the elaboration of the thesis. N.L., as co-supervisor of the PhD candidate and co-director of the research, also supervised the elaboration of the thesis.

References

Bradley, J.E., Mayfield, M.V., Mehta, M.P. & Rukonge, A., 2002, ‘Participatory evaluation of reproductive health care quality in developing countries’, Social Science & Medicine 55, 269–282. http://dx.doi.org/10.1016/S0277-9536(01)00170-8

Bureau d’Appui en Santé Publique 96, 2005, Étude sur la prise en charge des personnes vivant avec le VIH (PVVIH), Comité National de Lutte contre le Sida, Projet plurisectoriel de Lutte contre le Sida (PPLS), Cotonou.

Carman, J.G. & Fredericks, K.A., 2010, ‘Evaluation capacity and nonprofit organizations is the glass half-empty or half-full?’, American Journal of Evaluation 31, 84–104. http://dx.doi.org/10.1177/1098214009352361

Catraye, J., Codo, E. & La Ruche, G., 2005, Étude sur les pratiques de dépistage du VIH au BENIN (rapport provisoire), Ministère de la Santé Publique, Coopération Française, Cotonou.

Champagne, F., Contandriopoulos, A.-P. & Tanon, A., 2009a, ‘Utiliser l’évaluation’, in A. Brousselle, F. Champagne, A.-P. Contandriopoulos & Z. Hartz (eds.), L’évaluation: concepts et méthodes, 1st edn., Les Presses de l’Université de Montréal, Montréal.

Champagne, F., Hartz, Z., Brousselle, A. & Contandriopoulos, A.-P., 2009b, ‘L’appréciation normative’, in A. Brousselle, F. Champagne, A.-P. Contandriopoulos & Z. Hartz (eds.), L’évaluation: concepts et méthodes, 1st edn., Les Presses de l’Université de Montréal, Montréal.

Collectif de Concertation des Associations et Organisations non Gouvernementales du Mali, 2004, 100 millions de personnes pourraient mourir du SIDA d’ici 2025, 27 septembre 2004.

Comité National de Lutte contre le Sida, 2014, Rapport de suivi de la déclaration de politique sur le VIH/Sida au Bénin 2014, Secrétariat permanent du Comité National de Lutte contre le Sida, Cotonou.

Comité National de Lutte contre le Sida et les Ist, 2008, Rapport National de situation à l’intention de l’UNGASS, CNLS, Gouvernement de la République du Bénin, Cotonou.

Comité National de Lutte contre le Sida et les Ist, 2005, Revue du Cadre National stratégique de lutte contre le VIH/SIDA/IST au Bénin 2001–2005 Version provisoire, Secrétariat permanent du Comité National de Lutte contre le Sida (CNLS), Cotonou.

Contandriopoulos, A.P., Champagne, F., Denis, J.L. & Avargues, M.C., 2000, ‘L’évaluation dans le domaine de la santé : concepts et méthodes’, Revue Epidémiologie et Santé Publique 48, 517–539.

Courtney, A. & Bradley Cousins, J., 2007, ‘Going through the process: An examination of the operationalization of process use in empirical research on evaluation’, New Directions for Evaluation 22.

Cousins, J.B. & Whitemore, E., 1998, ‘Framing participatory evaluation’, in E. Whitmore (ed.), Understanding and practicing participatory evaluation, New Directions for Evaluation, San Francisco, CA.

Duignan, P., 2002, ‘Building social policy evaluation capacity’, Social Policy Journal of New Zealand 19, 179–192.

Fetterman, D.M. & Wandersman, A. (eds.), 2005, Empowerment evaluation: Principles in practice, The Guilford Press, New York.

Fond Africain de Développement, 2004, Projet d’appui à la lutte contre les IST/VIH/SIDA en République du Bénin : rapport d’évaluation, Banque Africaine de Développement (BAD), Tunis.

Gaventa, J., Creed, V. & Morrissey, J., 1998, ‘Scaling up: Participatory monitoring and evaluation of a federal empowerment program’, New Directions for Evaluations, 14, 81–94. http://dx.doi.org/10.1002/ev.1119

Gibbs, D., Napp, D., Jolly, D., Westove, B. & Uhle, G., 2002, ‘Increasing evaluation capacity within community-based HIV prevention programs’, Evaluation and Program Planning 25, 9. http://dx.doi.org/10.1016/S0149-7189(02)00020-4

Gilliam, A., Barrington, T., Davis, D., Lacson, R., Uhl, G. & Phoenix, U., 2003, ‘Building evaluation capacity for HIV prevention programs’, Evaluation and Program Planning 26, 10. http://dx.doi.org/10.1016/S0149-7189(03)00012-0

Hoole, E. & Patterson, T.E., 2008, ‘Voices from the field: Evaluation as part of a learning culture’, New Directions for Evaluation, 21, 93–113. http://dx.doi.org/10.1002/ev.270

Lomeña-Gelis, M., 2013, ‘Evaluation development in Senegal’, African Evaluation Journal 1, 8.

Mamadou, A. & Tossou, E.S., 2005, Recensement et évaluation des interventions IEC et des enquêtes comportementales en matière de lutte contre les IST/VIH de 1995 à 2004 au Bénin, Programme National de Lutte contre le Sida, Projet d’appui à la lutte contre le Sida de la Coopération française, Cotonou.

Mattessich, P.W., Mueller, D.P. & Holm-Hansen, C.A., 2009, ‘Managing evaluation for program improvement at the Wilder Foundation’, in D.W. Compton & B. Michael (ed.), Managing program evaluation: Towards explicating a professional practice. New Directions for Evaluation, Malden, MA.

Ndiaye, P., El Hadj Ould, A., Diedhiou, A., Tal-Dia, A. & Lemort, J.-P., 2005, ‘Évaluation de l’utilisation du préservatif chez les élèves du collège El Mina de Nouakchott, en République islamique de Mauritanie’, Cahiers Santé 15, 6.

ONUSIDA, 2010, Rapport mondial: Rapport ONUSIDA sur l’épidémie mondiale de sida 2010, ONUSIDA, Programme commun des Nations Unies sur le VIH/Sida, Genève.

Owen, J.M., 2003, ‘Evaluation culture: a definition and analysis of its development within organisations’, Evaluation Journal of Australasia 3 (new series), 5.

Papineau, D. & Kiely, M.C., 1996, ‘Participatory evaluation in a community organization: fostering stakeholder empowerment and utilisation’, Evaluation and Program Planning 19, 15. http://dx.doi.org/10.1016/0149-7189(95)00041-0

Programme National de Lutte contre le Sida (PNLS), 2010, Repertoire des ONG intervenant dans le domaine de l’infection à VIH/Sida au Bénin, Programme National de Lutte contre le Sida, Cotonou.

Programme National de Lutte contre le Sida (PNLS), 2015, Évaluation de la transmission du VIH de la mère à l’enfant au Bénin, Programme National de Lutte contre le Sida, Cotonou.

Richter, D.L., Prince, M.S., Potts, L.H., Reininger, B.M., Thompson, M.V., Fraser, J.P. et al., 2000, ‘Assessing the HIV prevention capacity building needs of community-based organisations’, Public Health Management Practice 6, 12.

Sitbon, A. & Maresca, B., 2002, L’évaluation des campagnes de prévention du Sida, CREDOC, Département « Évaluation des politiques publiques », Paris.

Smith, M.K., 2001, ‘Enhancing gender equity in health programmes: Monitoring and evaluation’, Gender and Development, Health 9, 95–105.

Thelot, F.-L. E. 2009, ‘Le VIH/sida en Afrique du Sud et en Haïti: de l’échec de la gouvernance de l’épidémie aux difficultés d’atteindre les OMD’, Cahiers Santé 19, 12.

White, J., 2002, NGO experiences of mitigating the impacts of HIV/AIDS in sub-Saharan Africa. Facing the Challenge, United Kingdom Department for International Development, Greenwich.

Williamson, L., Mcphail, K. & Lewis, R., 1995, ‘HIV agencies: Can they evaluate their own services?’, AIDS Care 7, 6. http://dx.doi.org/10.1080/09540129550126740


