Technology is contextual, and hence neither oversimplification nor overcomplication is helpful; it is advisable to understand the realistic constraints of a given environment and take a tailored approach, case by case.
But how can we "see" the real challenges and opportunities a person, organization, community, country, or region faces in the information age? Where do you currently stand in terms of the digital divide?
One way to evaluate this is to travel to the field, spend time with locals, and co-explore problems and solutions. Such fieldwork, however, can be resource-intensive and uncertain, and its findings may not transfer to other contexts. Consequently, the work is likely to be unsustainable, with minimal potential for scaling up.
That's where assessment frameworks come in.
Task-based assessment
By applying a standard set of tasks and measurements globally, institutions can gain qualitative and quantitative insights about technology adoption while eliminating noise. This enables a local reality check and cross-contextual analysis.
For example, the International Computer and Information Literacy Study (ICILS) is a worldwide assessment of ICT literacy in educational settings, conducted every five years. It employs standardized questionnaires and virtual hands-on tasks to measure participants' ability to utilize ICT.
The ICILS reports describe which countries scored higher in information search, use of presentation tools (e.g., PowerPoint), and algorithmic games, as well as how students perceive ICT in their learning journey. It would be a good baseline framework for monitoring real-life access and use of digital technologies.
On the other hand, this facilitated approach can decontextualize insights by posing the same problem set to various (but not all) nations and aggregating the results for the sake of generalization. The comparison is not entirely fair because each country uses a different language, and students follow different curricula that are likely regulated by their respective governments.
By design, assessment in a closed environment introduces many unrealistic factors into the scene, and hence, the results can be skewed toward a narrow definition of literacy.
So, it's overly simplistic to give a designed task and say, "If you can find Text A on a screen, copy and paste it to Box X, and guess what to do next, you are digitally prepared." In fact, it only indicates one's ability to solve specific, well-documented, synthetic tasks.
Since such task-based lab experiments yield only a limited understanding of reality, we need more than tools and how-tos to bridge the divide fully.
Situational assessment
Readiness, in reality, is more psychological and situational.
Even if someone uses a smartphone daily for social media, they won't be prepared to enjoy what ICT offers to the fullest as long as there is psychological friction.
One gains confidence in using technology only in a specific situation, where one can objectively confirm one's maturity by observing others' behaviour.
That's the underlying idea of situational assessment frameworks, like the Technology Readiness Index (TRI) and TRI 2.0. They focus on tangible prosperity achieved through the use of technology, not skills.
The TRI asks a series of situational questions, classified into four categories, to measure the contextual factors that promote or limit technology adoption.
- Optimism: Do you have a positive view of technology?
  - Ex. "Technology gives me more freedom of mobility."
- Innovativeness: Do you tend to be a leader in tech?
  - Ex. "Other people come to me for advice on new technologies."
- Discomfort: Do you feel a lack of control over tech and find it uncomfortable?
  - Ex. "In my circle of friends, people are admired more if they own the latest gadgets."
- Insecurity: Are you skeptical about whether tech works appropriately?
  - Ex. "I do not feel confident doing business in an online-only setup."
By asking whether subjects agree or disagree with each statement, we gain a more nuanced and authentic understanding of the state of an individual's literacy. The framework has reportedly worked well across diverse contexts.
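To make the mechanics concrete, here is a minimal sketch of TRI-style scoring, not the official instrument: it assumes hypothetical 5-point Likert responses grouped into the four dimensions, reverse-codes the two inhibitor dimensions (discomfort and insecurity), and averages the dimension means into one readiness score.

```python
# Illustrative TRI-style scoring sketch (assumptions, not the published scale):
# responses are on a 5-point Likert scale (1 = strongly disagree, 5 = strongly
# agree), and the inhibitor dimensions are reverse-coded before averaging.
from statistics import mean

LIKERT_MAX = 5
INHIBITORS = {"discomfort", "insecurity"}  # higher agreement = less ready

def tri_score(responses: dict[str, list[int]]) -> float:
    """Average the four dimension means, reverse-coding inhibitor items."""
    dimension_means = []
    for dimension, items in responses.items():
        scores = [(LIKERT_MAX + 1 - s) if dimension in INHIBITORS else s
                  for s in items]
        dimension_means.append(mean(scores))
    return mean(dimension_means)

# Hypothetical respondent: optimistic and innovative, moderate friction.
respondent = {
    "optimism":       [5, 4],
    "innovativeness": [4, 4],
    "discomfort":     [2, 3],  # reverse-coded to [4, 3]
    "insecurity":     [3, 2],  # reverse-coded to [3, 4]
}
print(tri_score(respondent))  # higher = more technology-ready
```

Reverse-coding matters here: without it, a respondent reporting strong discomfort would appear *more* ready, since agreement would be counted as a positive signal.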
This invisible factor differentiates knowing something from actually doing it, which is key to demonstrating a greater degree of user agency.
Toward effective monitoring and evaluation
Notice that both task-based and situational assessments are useful.
Think about driving a car. To obtain a license, you first complete a series of facilitated tasks in a closed setting, such as knowledge acquisition from a textbook, simulation at a driving school, and examination supervised by an authority. All the tasks are essential for you to start driving, yet none of them make you a mature driver.
In order to take full advantage of what an automobile offers, you actually need to drive it as frequently and consciously as possible on public roads. Through this experiential and situated learning, you develop nuanced and contextualized driving skills, which tell us whether one is a good driver or not. That's why, even though I have held my driving license for 14 years without any accident or violation, the lack of "real" driving opportunities in my car-free life, under the varying conventions of Japan, Canada, and Malawi, puts me at the bottom of the pyramid in terms of agency as a driver.
Therefore, while task-based assessment is definitely a good starting point, it would be pointless without situational validation.
Whether it's qualitative or quantitative, and task-based or situational, modern society relies heavily on standardized assessments to visualize reality in an objectively sound way. The measurement tools enable individual agents to monitor the performance of their actions and make informed decisions on what to do next. It also helps stakeholders better control quality and prioritize areas of investment.
In international development, this is often the job of Monitoring & Evaluation (M&E) personnel. However, my experience in Malawi tells me that assessment methodologies are not well structured, especially in emerging domains like ICT, and that tools and languages differ depending on the person in charge. M&E officers are busy going back and forth between the field and meeting rooms, while they create and distribute questionnaires on an "as-you-go" basis with little to no consistency or alignment with larger strategic goals.
In a reciprocal environment like human society, we cannot optimize things we don't measure. And defining the right objective function—capturing both behavioural and psychological aspects—is essential for the steady development of any person or organization.
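As a toy illustration of such an objective function, the sketch below blends a behavioural (task-based) score with a psychological (situational) score. The 0-to-1 normalization and the equal default weighting are hypothetical choices for illustration, not a published M&E methodology.

```python
# Toy composite "readiness" objective blending a behavioural (task-based)
# score with a psychological (situational) score. Both inputs and the weight
# are hypothetical: each score is assumed pre-normalized to [0, 1].

def composite_readiness(task_score: float, situational_score: float,
                        w_task: float = 0.5) -> float:
    """Weighted blend of a task-based and a situational score."""
    if not (0.0 <= task_score <= 1.0 and 0.0 <= situational_score <= 1.0):
        raise ValueError("scores must be normalized to [0, 1]")
    if not 0.0 <= w_task <= 1.0:
        raise ValueError("w_task must be in [0, 1]")
    return w_task * task_score + (1.0 - w_task) * situational_score

# A high task score alone does not guarantee readiness:
print(composite_readiness(0.9, 0.3))  # 0.6 with equal weights
```

The point is not the arithmetic but the design choice: unless the psychological term carries real weight, an agent can score highly by drilling synthetic tasks while remaining unprepared in practice.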
This article is part of the series: Altruistic Byte: Real-World Insights for Tech-Driven Change
See also
- 2026-01-01: Why Offline Learning Still Matters in 2026
- 2024-06-24: The End of the Beginning—What I Talk About When I Talk About Malawi
- 2023-08-23: Starting Field Study on How Information Flows in Malawi
Last updated: 2026-03-01
Author: Takuya Kitazawa
I am an independent consultant, mentor, and advocate for sustainable technology development with a decade of experience in AI/ML products, data systems, and digital transformation. Based in Canada and originally from Japan, I have lived and worked globally, including part-time residence in Malawi, Africa. See CV for more information, or contact at [email protected].
Disclaimer
- Opinions are my own and do not represent the views of any organization I belong or have belonged to.
- I am doing my best to ensure the accuracy and fair use of the information. However, there might be some errors, outdated information, or biased subjective statements because the main purpose of this blog is to jot down my personal thoughts as soon as possible before conducting an extensive investigation. Visitors understand the limitations and rely on any information at their own risk.
- That said, if there is any issue with the content, please contact me so I can take the necessary action.