Wednesday 10 July 2019

Enhance Child Learning through Digital eLearning Tools

Save the Children Australia's Papua New Guinea Program has piloted the "eLearning Project" in Western Province through a social entrepreneurship initiative by Incentiv, in partnership with Digicel PNG and supported by the Papua New Guinea Sustainable Development Program. Education is the backbone of this nation's future, and with the introduction of digital technology, the pilot project in targeted schools in Western Province aims to enhance student learning through the use of digital online learning tools. Although there are challenges, the benefits and opportunities make this worth pursuing further, so that our children can access world-class, up-to-date learning tools that boost their learning capacity.

Thursday 23 May 2019

Evaluation - Simplifying the Idea of Monitoring, Evaluation and Learning

Most often we tend to segregate Monitoring from Evaluation and try to justify these two terminologies based on the nature or type of development project or program intervention. When I started my career working in development, I often found it very challenging to understand the approaches to Monitoring and Evaluation. Like many other professionals working in this area, I generally thought of monitoring as the day-to-day data collection of a project activity, and of evaluation as something done at the beginning, middle, or end of the life cycle of a project. This understanding is essentially a summary of the project management cycle as I understand it from the development perspective.

It took me a while to make sense of Evaluation in development practice. How can we evaluate the monitoring aspect of a project and link that to the overall evaluation of a development project?

Evaluation is a relatively new field that has emerged from a diverse array of applied social sciences. Although it is practice-oriented, there has been a proliferation of research on evaluation theory to prescribe underlying frameworks of evidence-based practice. According to Shadish, Cook, & Leviton (1991), the fundamental purpose of evaluation theory is to specify feasible practices that evaluators can use to construct knowledge about the value of social programs. This explanation of evaluation theory consists of five main components: practice, use, knowledge, valuing, and social programming.

Many development practitioners design evaluations around methodology. Although there are still ongoing debates about best-practice approaches to evaluation, especially when dealing with evaluation theories, I can confidently say that most of us working in development, particularly in non-governmental organisations and faith-based organisations, usually design evaluations around methodology. I have worked with a number of international NGOs here in Papua New Guinea, and evaluation is still a challenging concept because most evaluations of projects or programs are outsourced to external consultants, while we are tasked with the practical aspects of evaluation, mainly providing logistical support and data collection.

My aim is to socialise the idea of Evaluation and also create a platform where we can bring local Papua New Guinean Evaluators together to share their experiences and practical ideas on how we can make better evaluations based on local knowledge and understanding of where we work. 

In my own way of understanding evaluation approaches with respect to best-practice evaluation theories and methodologies, I generally group evaluation into two separate categories:
  1. Process Evaluation - Evaluating the activities of the project. In other words, process evaluation determines whether the project has been implemented as planned and, most importantly, tries to make sense of all the monitoring data that has been collected over time. In short, process evaluation looks at the key areas a project has control over during the implementation stages: inputs, processes, and outputs.
  2. Impact Evaluation - Impact evaluation tries to measure or make sense of the effect of the project or program intervention at the population level. Impact evaluation looks at understanding the effectiveness and relevance of the project.
Evaluators can use quantitative, qualitative, or a mix of both methods for data collection and analysis. However, before considering methodology, evaluators should reflect on the theoretical frameworks that guide their practice. Although evaluation is an applied science, it is important for practitioners to be knowledgeable of theory to ensure their designs are driven by intention and purpose rather than methodological tools.
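To make the distinction between the two categories concrete, here is a minimal sketch in Python, using entirely hypothetical numbers: the process check compares delivered outputs against the implementation plan, while the simple impact estimate compares the change in a population-level outcome between project and comparison communities (a difference-in-differences).

```python
# Minimal sketch of process vs impact evaluation logic.
# All figures are hypothetical, for illustration only.

# --- Process evaluation: were activities delivered as planned? ---
planned_outputs = {"teacher_trainings": 40, "school_kits": 120}
actual_outputs = {"teacher_trainings": 34, "school_kits": 120}

for output, planned in planned_outputs.items():
    delivered = actual_outputs[output]
    pct = 100 * delivered / planned
    print(f"{output}: {delivered}/{planned} delivered ({pct:.0f}% of plan)")

# --- Impact evaluation: did the population-level outcome change? ---
# Mean literacy scores at baseline and endline, project vs comparison.
baseline = {"project": 48.0, "comparison": 47.5}
endline = {"project": 61.0, "comparison": 53.0}

# Difference-in-differences: the change in the project area minus
# the change in the comparison area, netting out the background trend.
did = (endline["project"] - baseline["project"]) - (
    endline["comparison"] - baseline["comparison"]
)
print(f"Estimated impact (difference-in-differences): {did:+.1f} points")
```

The process figures tell us whether we did what we said we would do; the difference-in-differences is one common way of asking whether the population is better off than it would have been without the intervention.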


Wednesday 22 May 2019

A Strong M&E Platform - The PNG Case

It has been conventionally accepted that there is little pressure coming from PNG society for the state to perform better, especially in terms of the delivery of essential services. Among the reasons for this, it is often argued, is a “dysfunctional political system, characterised by poor links between: voters and elected politicians, political parties and governments, and ministers and public servants”. 

Additionally, extreme cultural diversity, rugged geography, difficult access to most subnational settings, traditional forms of social organisation and political authority (the “big man” culture), and customary forms of societal aggregation (the “wantok” system), are also often seen as major impediments to governance accountability and transparency in the implementation of government interventions.

Notwithstanding these constraints, it is important to acknowledge that due to a variety of factors including advances in communication and transport infrastructure over the last decade, there is today an increasing awareness of and demand for improved services. Consequently, a growing number of public officials are seeking reliable information to inform decision making, demonstrate results, and improve accountability. 

Current GoPNG M&E systems are not able to demonstrate with certainty the contribution of development funds to national development impacts, such as improved livelihoods, better health, and improved education outcomes.

Tuesday 21 May 2019

M&E for Improved Health Service Delivery: Understanding and Tracking Program/Project Performance through Answering What, When, How and Who Questions

Monitoring is the systematic and continuous collection and analysis of data about the progress of a project or program over time. It is an ongoing process of data gathering and analysis that allows adjustments to be made to activities and objectives along the way. Evaluation, on the other hand, is the systematic but periodic collection and analysis of data about a project or program, typically assessed against its stated objectives. An evaluation provides credible and useful information, enabling the incorporation of lessons learned into the decision-making processes of both recipients and financiers (donors).

Monitoring and evaluation (M&E) systems have played a critical role in advancing the field of health by applying quantitative and qualitative methods to collect and use health data to inform decision making, applying rigorous evaluations to assess program effectiveness, and designing and conducting operational research that addresses implementation challenges.

M&E systems have contributed globally to the improvement of health by tracking and evaluating the diseases that most affect the least developed countries (LDCs). By establishing strong M&E systems to track and assess performance, strengthening Health Management Information Systems (HMIS) at the facility, community, national, and international levels to manage health data, and tracking progress towards health targets, the field of M&E has contributed to the achievement of health outcomes, improving health service delivery and saving lives.
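As a simple illustration of the kind of indicator an HMIS tracks, the sketch below aggregates a coverage indicator from facility level up to an overall estimate. The facility names and figures are hypothetical, chosen only to show the calculation.

```python
# Hypothetical facility-level HMIS records: doses administered and
# the estimated target population in each catchment area.
facility_reports = [
    {"facility": "Daru HC", "doses": 820, "target_pop": 1000},
    {"facility": "Kiunga HC", "doses": 610, "target_pop": 900},
    {"facility": "Balimo HC", "doses": 450, "target_pop": 700},
]

# Facility-level coverage, then an aggregate estimate.
for r in facility_reports:
    coverage = 100 * r["doses"] / r["target_pop"]
    print(f'{r["facility"]}: {coverage:.1f}% coverage')

total_doses = sum(r["doses"] for r in facility_reports)
total_pop = sum(r["target_pop"] for r in facility_reports)
print(f"Aggregate coverage: {100 * total_doses / total_pop:.1f}%")
```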

The broad continuum of program/project evaluation stresses that evaluation results are used to make decisions that improve overall program performance. This notion attracts no substantial objection until one considers the time element and the summative nature of evaluation for the particular intervention being assessed. If an intervention is undergoing a terminal evaluation or an impact evaluation, how can that intervention itself still benefit from the results? Considering this observation, and from the practitioner's point of view, monitoring outputs are used to devise corrective actions for a program still under implementation, while evaluation outputs are used to generate lessons.

These lessons can capture positive or best practices that can be replicated in future interventions.

The infant mortality rate of Papua New Guinea fell gradually from 104.9 deaths per 1,000 live births in 1968 to 41.8 deaths per 1,000 live births in 2017. Maternal mortality rates in Papua New Guinea, however, appear to have become worse over the past five years, with the risk of dying during pregnancy or childbirth now estimated at 1 in every 120 women. The rates were revealed in a report by ChildFund Australia, which says maternal health is an unrecognised crisis in PNG.
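A quick back-of-envelope calculation puts those figures in perspective: the fall from 104.9 to 41.8 deaths per 1,000 live births over 1968-2017 works out to an average decline of just under 2% per year (assuming a constant proportional decline), while a lifetime risk of 1 in 120 corresponds to roughly 0.8% of women.

```python
# Back-of-envelope check on the cited mortality figures.
imr_1968, imr_2017, years = 104.9, 41.8, 2017 - 1968

# Average annual rate of decline, assuming a constant
# proportional (compound) decline over the whole period.
annual_decline = 1 - (imr_2017 / imr_1968) ** (1 / years)
print(f"Average annual IMR decline: {100 * annual_decline:.1f}% per year")

# Lifetime maternal risk of 1 in 120, expressed as a percentage.
print(f"Lifetime maternal risk: {100 * (1 / 120):.2f}% of women")
```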

Despite these advances, there are still major challenges to address, and Monitoring and Evaluation can significantly help. With a significant number of countries having failed to achieve the health-related Millennium Development Goals (MDGs) by the 2015 deadline, Monitoring and Evaluation is poised to continue playing a prominent role in monitoring performance and accountability, and most importantly in understanding and tracking deliverables in the health sector by answering the what, when, how, and who questions.

In addition, corruption, embezzlement, and misuse of donor resources and funds by states, Uganda for example, hinder countries from attaining further development in the health sector. This has caused many donor agencies, including the UNDP, UNICEF, ADB, World Bank, UKAID, the Civil Society Fund, and USAID, to mention but a few, to adjust their monitoring and evaluation policies, focusing on the performance, evidence of effectiveness, and impact of the programs they fund, and to increase the effort and resources devoted to monitoring, evaluation, and implementation research in order to bring about sustainable solutions to global health problems in developing countries, particularly in Sub-Saharan Africa.

In conclusion, since most program implementation frameworks are results-oriented and go hand in hand with the effective and efficient utilisation of resources, governments and donor agencies need to ensure that Monitoring and Evaluation systems are strengthened, because M&E results provide authentic best practices to learn from. The performance indicators used to measure impact, outcomes, outputs, and inputs must be SMART in nature (Specific, Measurable, Achievable, Relevant, and Time-bound).

This in turn motivates implementers once the set performance targets are achieved. Likewise, the results can show implementers the areas where they have not performed well, providing an early warning that triggers administrative decisions and corrective actions to improve performance. All of this helps improve service delivery in the health sector, thereby promoting global health and advancing health equity.
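The early-warning idea described above can be reduced to a very simple check. Here is a minimal sketch, with hypothetical indicators and figures: each actual result is compared against its target, and any shortfall beyond a tolerance threshold is flagged for corrective action.

```python
# Hypothetical performance data: target vs actual for each indicator.
indicators = [
    {"name": "Outreach clinics held", "target": 48, "actual": 45},
    {"name": "Supervised deliveries", "target": 500, "actual": 360},
    {"name": "Health workers trained", "target": 60, "actual": 61},
]

TOLERANCE = 0.90  # flag anything below 90% of target

for ind in indicators:
    achievement = ind["actual"] / ind["target"]
    status = "ON TRACK" if achievement >= TOLERANCE else "FLAG: corrective action"
    print(f'{ind["name"]}: {achievement:.0%} of target -> {status}')
```

In practice the threshold and the indicators themselves would come from the program's M&E framework; the point is simply that routine monitoring data can mechanically surface under-performing areas before an evaluation does.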

Monitoring and Evaluation Challenges for Public Institutions in Papua New Guinea

There is very low program management capacity, including M&E, within public institutions at both the national and sub-national levels. The National Statistics Office, which would normally play the coordination and data management oversight role, has had limited capacity for a long time. Furthermore, the country has limited experience in rigorous and systematic data collection and analysis. Most data collection and analysis exercises are projectised, one-off efforts, which make objective trend analysis and measuring progress very difficult, if not impossible.

As a result of the low capacity in M&E and the low priority and support given to the collection and management of information, data have tended to be of low and inconsistent quality and consequently of limited use. There have been minimal incentives to systematically generate evidence useful for decision making and improved accountability. It must also be acknowledged that in some cases there are strong disincentives to collect basic information, especially on expenditure and program implementation. Without strong political support, the likelihood of changing the culture towards better monitoring and accounting for investments and results will be minimal.

In the complex PNG context, limited information means that in the majority of cases, high-level policy decisions, for instance on the allocation of resources, are based on decision makers' partial knowledge and past experiences rather than on hard evidence. This decision-making process is prone to personal and group interests and has over time become the norm. Changing this culture, especially at the higher levels of the government and political hierarchy, requires strong political will, a strong supply of good evidence able to influence decision making, and also demand from beneficiaries for the demonstration of results.
