The use of technology for data collection has attracted considerable interest from Monitoring and Evaluation (M&E) professionals over the past decade. Few would dispute the advantages of digital data gathering (DDG) for M&E, yet several constraints still hinder its adoption. Here, the writer discusses what DDG is, the advantages of using technology for data collection, and some of the limitations that slow its uptake.
What is Digital Data Gathering?
Digital data gathering (DDG) is the process of collecting data electronically using existing technology such as personal digital assistants (PDAs), smartphones, tablets and netbooks. In other words, it is the use of digital technology to collect data or information from respondents. Continue reading
Monitoring is the systematic and routine collection of data during project implementation to establish whether an intervention is moving towards its set objectives or project goals. Data is therefore collected throughout the life cycle of the project, and the data collection tools are usually embedded in project activities to keep the process seamless. There are several types of monitoring in M&E, including process monitoring, technical monitoring, assumption monitoring, financial monitoring and impact monitoring. Continue reading
One of the constant features of M&E work is the presentation of data. M&E personnel regularly need to present data to different audiences, such as donors, local-level stakeholders and the organizational hierarchy. Without planning how to present or visualize data, M&E personnel risk choosing a method that is inappropriate for the audience, or even misleading. Continue reading
Most organizations and projects face the data quality dilemma. Analysis of project data may leave the relevant personnel with reservations about the authenticity of the data, the enumerators or even the project impacts, and M&E and other management staff may contemplate repeating the process for validation purposes. Here, we look at what data quality is, the dimensions of data quality, the causes of poor data and, finally, ways of improving data quality. Continue reading
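Some data-quality dimensions, such as completeness and validity, can be checked automatically before deeper analysis. The Python sketch below illustrates one simple approach; the field names and plausible ranges are assumptions for illustration only, not taken from any particular project.

```python
# Minimal sketch of automated data-quality checks on survey records.
# Field names and valid ranges are illustrative assumptions.

def check_record(record):
    """Return a list of quality issues found in one survey record."""
    issues = []
    # Completeness: every expected field must be present and non-empty.
    for field in ("respondent_id", "age", "district"):
        if not record.get(field):
            issues.append(f"missing {field}")
    # Validity: values must fall within plausible ranges.
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        issues.append("age out of range")
    return issues

records = [
    {"respondent_id": "R001", "age": 34, "district": "North"},
    {"respondent_id": "R002", "age": 250, "district": "North"},  # invalid age
    {"respondent_id": "", "age": 28, "district": "South"},       # missing ID
]

# Map each record to its issues so suspect entries can be reviewed or re-collected.
flagged = {r["respondent_id"] or "(blank)": check_record(r) for r in records}
```

Checks like these do not replace field supervision or re-validation, but they flag suspect records early, while re-collection is still feasible.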
Some of the most commonly confused terms in the Research, Monitoring and Evaluation (R, M&E) field are “inputs”, “activities”, “outputs”, “outcomes” and “impact”. Within R, M&E practice, it is important to distinguish between them: doing so not only ensures that appropriate indicators are identified, but also that they are effectively measured. Here, we use specific project cases to demonstrate the differences between these terms. Continue reading
An end of project report should follow the guidelines outlined below. It should include a title page, a list of abbreviations, acknowledgements, an executive summary, a table of contents, an introduction, a methodology section, a results section, a conclusions, lessons learnt and recommendations section, and an annex section. The contents of each section are outlined below. Continue reading
The major distinguishing characteristic of evaluation, unlike monitoring, is that it is conducted only periodically, at particular stages of the project. Accordingly, there are five main types of evaluation, which vary mainly according to the stage of the project. While evaluations could also be classified by other criteria, such as the methodology adopted, here we classify them by timing: formative evaluation, mid-term evaluation, summative evaluation, ex-post evaluation and meta-evaluation. Continue reading