Although most modern organisations possess extensive data, a subtler challenge persists: a lack of understanding regarding the true meaning of their metrics.
By the time dashboards appear sophisticated and reports are refined, critical decisions have already been made at the stage when real-world phenomena were first quantified. Subsequent analytics, models, and forecasts merely reflect the strengths or limitations inherent in that initial measurement.

This is evident across diverse sectors such as agriculture, healthcare, digital finance, and public policy. Despite continued investment in data infrastructure, outcomes frequently remain unchanged, often leading to significant financial and operational losses.
Estimates from multiple industries suggest that a significant share of organisational spend fails to deliver expected outcomes due to inaccurate data interpretations.
This misallocation of resources can have serious repercussions, such as delaying critical projects or misinforming strategic decisions, ultimately affecting both bottom lines and human lives.
The crux of this covert challenge is not obsolete technology or insufficient analytical expertise, but rather a failure to recognise that flawed decision-making often originates in measurement itself.
This situation creates a paradox of measurement failure that is woven into the fabric of how phenomena are described, defined, and assessed.
The Data Paradox
Many contemporary initiatives possess abundant data but lack actionable insights. Data, now big data, are generated rapidly, stored, visualised, and reported, yet decision quality has not improved in step. This gap has frequently led to misplaced investments in advanced analytics, artificial intelligence, or additional reporting tools.
A critical perspective is that, by the time dashboards appear convincing or reports are finalised, the most significant decisions have already occurred upstream, during the initial translation of reality (phenomena) into quantitative data. This pivotal stage is measurement, which sits at the upstream end of the knowledge continuum, before any data are generated.
The Knowledge Continuum
To contextualise the importance of measurement, it is useful to revisit the knowledge continuum, as follows:

Phenomena → Measurement → Data → Information → Knowledge → Decisions and Products
A closer look at the above reveals that most organisations allocate resources to the middle and downstream activities of this continuum, such as data systems, analytics, reporting, and decision tools. However, the most vulnerable points typically arise at the initial (upstream) stages and during transitions between stages.
When measurement is inadequate, all subsequent processes inherit this deficiency, irrespective of technological sophistication or analytical capability. It is imperative at this point to provide a brief description of the knowledge continuum:
| Concept | Definition | Key Property | Example |
| --- | --- | --- | --- |
| Phenomena | Phenomena are the real-world conditions, behaviours, or processes that exist independently of observation or data collection. | They are real before they are measured. | Crop growth in a field, patient recovery in a hospital, transaction behaviour in a market, machine downtime in a factory. |
| Measurement | Measurement is the disciplined process of translating real-world phenomena into numbers using explicit definitions, methods, units, and boundaries. | It determines what the numbers actually represent. | Defining crop yield as kilograms harvested per hectare at a specified moisture level; defining a patient visit as a completed clinical consultation; defining an active digital account as one with at least one transaction in the last 30 days. |
| Data | Data are the recorded numerical outputs produced by measurement. | They are numbers without meaning on their own. | Yield figures entered by enumerators, hospital attendance logs, system login counts, transaction records. |
| Information | Information is data that have been organised, aggregated, or structured to describe what happened. | It shows patterns and comparisons, but not causes. | Average yield by district, monthly outpatient attendance rates, system adoption dashboards, quarterly financial inclusion statistics. |
| Knowledge | Knowledge is interpreted information that explains patterns, identifies drivers, and supports prediction or judgment. | It connects what happened to why it happened and what it implies. | Understanding why yields differ across regions, identifying factors driving hospital performance, assessing whether digital systems are changing workflows, evaluating whether financial inclusion improves resilience. |
| Decisions and Products | Decisions and products are actions, policies, investments, or tools derived from knowledge. | They commit resources and create consequences. | Reallocating agricultural inputs, reforming health service delivery, scaling or redesigning digital platforms, adjusting investment strategy, launching new market products. |
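
To make the measurement row of the table above concrete, the short sketch below applies the yield definition from the example: kilograms harvested per hectare, corrected to a specified moisture level. The figures and the 13% reference moisture basis are illustrative assumptions, not prescribed standards.

```python
# A minimal sketch of the measurement stage: turning a raw field
# observation (phenomenon) into a defined, comparable data point.
# The 13% reference moisture level is an assumed benchmark.

STANDARD_MOISTURE_PCT = 13.0  # assumed reference moisture basis

def standardised_yield_kg_per_ha(harvest_kg: float,
                                 plot_ha: float,
                                 moisture_pct: float) -> float:
    """Yield in kg/ha, corrected to the standard moisture basis."""
    dry_equivalent_kg = harvest_kg * (100.0 - moisture_pct) / (100.0 - STANDARD_MOISTURE_PCT)
    return dry_equivalent_kg / plot_ha

# Two plots with identical raw weights report different yields once the
# measurement definition (unit area, moisture basis) is applied.
print(standardised_yield_kg_per_ha(1200, 0.5, 18.0))  # wetter harvest: ~2262 kg/ha
print(standardised_yield_kg_per_ha(1200, 0.5, 13.0))  # at the benchmark: 2400 kg/ha
```

Without the explicit definition, both plots would report the same "yield" even though they describe different amounts of crop.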
Hence, we posit that:
Phenomena exist, measurement defines them, data records them, information describes them, knowledge explains them, and decisions act on them.
If measurement fails, every step downstream inherits that failure.
Consequently, we can assess the focus of each of these concepts and the catalytic effect each provides to decisions and products in achieving transformational outcomes. A pragmatic way to explain the relationships in the continuum is to map each activity to its focus: phenomena to reality, measurement to meaning, data to records, information to description, knowledge to explanation, and decisions and products to action.

What Measurement Really Is
In this context, measurement is distinct from data collection or analytics. It is a disciplined process that assigns meaning to numbers by explicitly and defensibly linking them to real-world phenomena. For example, in a cocoa bean buying centre, discipline is demonstrated when a worker rigorously adheres to protocols for weighing beans.
The beans are weighed on a calibrated scale under controlled environmental conditions to ensure consistency and accuracy. Each bag is labelled with the date, time, and weight, and cross-referenced with a standard benchmark. This structured approach reduces uncertainty and ensures that each data point conveys precise meaning, which is essential for trustworthy measurement.
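
A minimal sketch of what such a traceable record might look like is shown below. The field names and the 10 kg benchmark weight are illustrative assumptions, not a prescribed schema.

```python
# A sketch of a traceable measurement record for the buying-centre example.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BeanWeighing:
    bag_id: str
    weighed_at: datetime
    weight_kg: float
    scale_id: str                    # which instrument produced the number
    scale_last_calibrated: datetime  # traceability to a calibration event
    reference_check_kg: float        # reading for a known benchmark weight

    def calibration_drift_kg(self, benchmark_kg: float = 10.0) -> float:
        """Deviation from a known benchmark weight (assumed 10 kg here)."""
        return self.reference_check_kg - benchmark_kg
```

Every number in such a record can be traced back to a specific instrument, time, and calibration event, which is precisely what gives it meaning.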
It requires making deliberate choices about:
- What is being observed,
- How it is observed,
- Under what conditions,
- Using which definitions, scales, and units,
- With what level of uncertainty.
Rather than diminishing credibility, acknowledging and communicating uncertainty transparently can strengthen confidence in the data. By showing, for example, a percentage range that depicts possible measurement variation, leaders can understand the inherent variability and make more informed decisions based on a realistic picture.
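
As an illustration, the sketch below reports a measurement as a range rather than a single point. The readings are invented, and a simple normal approximation (1.96 standard errors) stands in for a full uncertainty budget.

```python
# Reporting a measurement with an uncertainty range rather than a point.
from statistics import mean, stdev
from math import sqrt

readings_kg = [102.4, 98.7, 101.1, 99.8, 100.6, 97.9, 103.2, 100.0]

m = mean(readings_kg)
se = stdev(readings_kg) / sqrt(len(readings_kg))   # standard error of the mean
low, high = m - 1.96 * se, m + 1.96 * se           # approximate 95% interval

print(f"Estimated weight: {m:.1f} kg (95% interval: {low:.1f}-{high:.1f} kg)")
```

A leader reading "100.5 kg, plausibly between 99.2 and 101.7" knows far more than one reading "100.5 kg" alone.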
Two critical concepts frame measurement: (1) measurement science and (2) the science of measurement. In this article we delve into the first and will unravel the second in the next article. These will anchor the subsequent industry-specific instalments of our measurement series.
Measurement Science: From Reality to Trustworthy Data
Measurement science governs the transformation of real-world phenomena into data. Its emphasis lies not in statistical complexity but in methodological discipline.
At this stage, the questions are basic and unforgiving (the first is illustrated in the sketch below):
- Is the measurement repeatable and stable?
- Are units standardised and comparable across contexts?
- Is the measurement traceable to a reference or benchmark?
- Is uncertainty understood and controlled?
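
The repeatability check referenced above can be as simple as weighing the same bag several times and summarising the spread. In the sketch below, the readings and the 2% tolerance are assumptions for illustration.

```python
# Is the measurement repeatable? The same bag is weighed several times on
# the same scale; the coefficient of variation (CV) summarises repeatability.
# The 2% threshold is an assumed tolerance, not a universal standard.
from statistics import mean, stdev

repeat_weighings_kg = [49.8, 50.1, 49.9, 50.3, 49.7]

cv_pct = 100 * stdev(repeat_weighings_kg) / mean(repeat_weighings_kg)
print(f"Repeatability CV: {cv_pct:.2f}%")
print("repeatable within tolerance" if cv_pct < 2.0 else "investigate the instrument")
```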
In Ghana, for instance, numerous programmes encounter challenges at this stage, often without recognising the underlying measurement issues.
Consider an agricultural productivity project tracking yield improvements among smallholder farmers. Enumerators collect data across districts, seasons, and crop types. On paper, the dataset appears robust. In practice, yield estimates may rely on different plot sizes, inconsistent harvest timing, self-reported figures, or poorly calibrated scales.
The numbers are recorded accurately, yet their meaning shifts subtly across contexts. To illustrate the impact of design trade-offs, compare two yield-measurement protocols: one using inexpensive tools and methods that produce biased results, and another employing more precise, costlier techniques that produce reliable data.
The first approach may rely on uncalibrated scales and self-reported figures, which can lead to inconsistent yield estimates. In contrast, the second protocol could use standardised equipment and controlled measurement conditions, necessitating greater resource investment but providing higher accuracy.
This comparison highlights the critical trade-offs between resources and rigour that leaders must navigate, empowering them to make informed choices about measurement quality and resource allocation.
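
A small simulation makes the trade-off tangible. The bias and noise magnitudes below are invented, but the pattern is general: averaging more plots shrinks random noise, while the cheap protocol's systematic bias never averages away.

```python
# Simulated comparison of a cheap, biased protocol versus a costlier,
# calibrated one applied to the same (simulated) true yields.
import random

random.seed(42)
true_yields = [random.gauss(2000, 300) for _ in range(500)]  # kg/ha

def cheap_protocol(y):    # uncalibrated scale: systematic +8% bias, noisy
    return y * 1.08 + random.gauss(0, 250)

def precise_protocol(y):  # calibrated, controlled: unbiased, low noise
    return y + random.gauss(0, 50)

for name, proto in [("cheap", cheap_protocol), ("precise", precise_protocol)]:
    errors = [proto(y) - y for y in true_yields]
    avg_bias = sum(errors) / len(errors)
    print(f"{name:>7}: average bias {avg_bias:+.0f} kg/ha")
```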
The underlying problem is not dishonesty or incompetence, but rather inadequate measurement design.
When Measurement Fails Quietly
A healthcare programme may report improvements in “quality of care” using monthly data on visits, staffing, drugs, and outcomes. Yet differences in how we count visits, define waiting times, report adverse events, and adjust for case severity mean that apparent gains often reflect reporting behaviour rather than genuine improvements in clinical performance.
How do reporting requirements influence the definitions of ‘quality of care’? Such probing can reveal systemic drivers that influence measurement choices, encouraging a focus on organisational and policy-level factors rather than attributing discrepancies to field staff errors.
Similarly, a digital economy strategy may highlight growth in financial inclusion by tracking the number of mobile money and bank accounts. However, if indicators measure access rather than active usage, the data reflect account proliferation rather than substantive participation.
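
The sketch below contrasts the two indicators on the same invented account records, using the 30-day activity definition from the continuum table above.

```python
# Access versus usage: the same records, two very different indicators.
from datetime import date, timedelta

today = date(2024, 6, 30)
# (account_id, date of last transaction); None = never transacted
accounts = [("a1", date(2024, 6, 25)), ("a2", date(2023, 11, 2)),
            ("a3", None), ("a4", date(2024, 6, 10)), ("a5", None)]

registered = len(accounts)  # the "access" indicator
active = sum(1 for _, last in accounts
             if last and (today - last) <= timedelta(days=30))  # "usage"

print(f"Access indicator: {registered} accounts")
print(f"Usage indicator:  {active} active in the last 30 days")
```

Five registered accounts, two active users: which number the indicator reports determines which story the strategy tells.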
In each of these cases, data are collected accurately and presented using precise indicators and compelling dashboards. However, the interpretation of these numbers varies across contexts. The failure is not computational or technological; it is a measurement failure that arises during the initial translation of reality into quantitative data.
Why Analytics Cannot Fix Bad Measurement
This point deserves emphasis. Analytics cannot rectify poor measurement; instead, they amplify its effects. For example, in predictive agriculture, inaccurate soil-quality measurements can lead to erroneous yield predictions.
In one case, a project used poorly calibrated sensors to measure soil moisture, resulting in overestimated irrigation needs. This miscalculation strained local water resources and reduced crop yields through excessive irrigation. This example illustrates how flawed initial measurements can intensify errors when analytics are applied, demonstrating the amplification effect in practice.
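
A minimal sketch of this amplification effect: a single constant calibration bias, fed through an otherwise correct decision rule, flips irrigation decisions field by field. The bias and threshold values are invented for illustration.

```python
# One calibration error propagating into many wrong decisions.
true_moisture_pct = [22, 31, 27, 19, 35, 24, 29, 18]
SENSOR_BIAS = -6        # miscalibrated sensor reads 6 points low
IRRIGATE_BELOW = 25     # decision rule fed by the measurements

def decide(readings):
    return ["irrigate" if m < IRRIGATE_BELOW else "hold" for m in readings]

correct = decide(true_moisture_pct)
observed = decide([m + SENSOR_BIAS for m in true_moisture_pct])

flipped = sum(c != o for c, o in zip(correct, observed))
print(f"{flipped} of {len(correct)} irrigation decisions flipped by one calibration error")
```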
Models trained on unstable inputs produce unreliable insights. Dashboards built on inconsistent indicators create false confidence. Artificial intelligence accelerates errors when underlying measurements are flawed. As the saying goes, one can average noise, but one cannot reason away error.
Measurement science serves one critical purpose: it makes data trustworthy. Without it, all downstream analysis is unstable.
Measurement as Strategic Infrastructure
Measurement is frequently regarded as a technical detail and delegated to enumerators, IT teams, or consultants. In practice, however, it constitutes strategic infrastructure.
It determines:
- Whether we allocate capital efficiently,
- Whether performance is genuinely improving,
- Whether we adjust policies on signal or noise,
- Whether organisations learn or merely report.
Leaders should prioritise measurement or risk generating poor data that institutionalise fragile decision-making, exposing themselves to significant reputational risk if policy reversals occur because decisions relied on unreliable data. Positioning measurement oversight as a leadership responsibility, rather than a technical matter, enhances engagement and commitment to improvement.
Looking Ahead
For industrialists, practitioners, and policymakers, the most valuable pause comes before approval. Before endorsing the next dashboard, AI model, or performance report, ask one decisive question: can we trust the measurements informing this decision? When leaders take ownership of measurement integrity, clarity replaces noise, confidence replaces guesswork, and predictive insight delivers its full value—turning informed decisions into outcomes that last.
Dr Yakubu is a Data Science and Project Consultant.
