A formula for calculating learning efficacy, E, considering the importance of each criterion and the specific ratings for peer learning, is:

E = w_S·S + w_I·I + w_C·C + w_F·F + w_U·U

This abstract formula provides a way to quantify learning efficacy, considering various educational criteria and their relative importance (weights) for effective learning.

Summary of the five variables that contribute to learning efficacy:

S (Scalability): ability to accommodate a large number of learners
I (Information fidelity): quality and reliability of information
C (Cost effectiveness): financial efficiency of the learning method
F (Feedback quality): quality of feedback received
U (Uniformity): consistency of the learning experience

Weights for each variable are derived from empirical data and expert consensus. All values are on a scale of 0–4, with a “4” representing the highest level.

Assigned weights: Scalability 4.00, Information fidelity 3.00, Cost effectiveness 4.00, Feedback quality 3.00, Uniformity 1.00.

Here is a summary table including all values for each criterion, learning efficacy calculated …
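To make the weighted-sum calculation concrete, here is a minimal sketch in Python. The weights come from the table above; the ratings for peer learning are hypothetical placeholders, and dividing by the total weight is an assumption made so that E stays on the same 0–4 scale.

```python
# Minimal sketch of the weighted-sum efficacy calculation described above.
# Weights are taken from the assigned-weights table; the example ratings
# for peer learning are hypothetical placeholders, not values from the
# original analysis.

weights = {
    "scalability": 4.0,
    "information_fidelity": 3.0,
    "cost_effectiveness": 4.0,
    "feedback_quality": 3.0,
    "uniformity": 1.0,
}

# Hypothetical ratings on the 0-4 scale for a peer learning approach.
ratings = {
    "scalability": 4,
    "information_fidelity": 3,
    "cost_effectiveness": 4,
    "feedback_quality": 3,
    "uniformity": 2,
}

# Weighted sum of ratings; normalizing by the total weight keeps E on the
# 0-4 scale so that different learning approaches can be compared directly.
efficacy = sum(weights[k] * ratings[k] for k in weights) / sum(weights.values())
print(f"Learning efficacy E = {efficacy:.2f}")
```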
Why does cascade training fail?
Cascade training remains widely used in global health. It can look great on paper: an expert trains a small group who, in turn, train others, thereby theoretically scaling knowledge across an organization. It attempts to combine the advantages of expert coaching and peer learning by passing knowledge down a hierarchy. However, despite its promise and persistent use, cascade training is plagued by several factors that often lead to its failure. This is well documented in the field of learning, but largely unknown (or ignored) in global health. What are the mechanics of this known inefficacy? Here are four factors that contribute to the failure of cascade training.

1. Information loss. Consider a model where an expert holds a knowledge set K. In each subsequent layer of the cascade, a fraction α of the knowledge is lost, so after n layers the knowledge retained is K_n = K(1 − α)^n.

2. Lack of feedback. In a cascade model, only the first layer receives feedback …
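A short sketch illustrates how quickly knowledge decays under this model. The loss rate and the number of layers below are illustrative assumptions, not figures from the article.

```python
# Minimal sketch of the information-loss model K_n = K * (1 - alpha)**n.
# The loss rate alpha and the number of layers are illustrative assumptions.

def knowledge_retained(k_initial: float, alpha: float, layers: int) -> list[float]:
    """Return the knowledge remaining after each layer of the cascade."""
    return [k_initial * (1 - alpha) ** n for n in range(layers + 1)]

# Example: start with 100% of the expert's knowledge and lose 30% per layer.
for layer, k in enumerate(knowledge_retained(1.0, alpha=0.3, layers=4)):
    print(f"Layer {layer}: {k:.0%} of the original knowledge remains")
```

With a 30% loss per layer, less than a quarter of the original knowledge survives four layers down the cascade.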
The capability trap: Nobody ever gets credit for fixing problems that never happened
Here is a summary of the key points from the article “Nobody ever gets credit for fixing problems that never happened: creating and sustaining process improvement”.

The capability trap. The “capability trap” refers to the downward spiral organizations can get caught in, where attempting to boost performance by pressuring people to “work harder” actually erodes process capability over time. This trap works through a few key mechanisms, which the article maps as core causal loops.

Key takeaway for learning leaders. Learning leaders must understand the systemic traps, identified in the article, that underlie failed improvement initiatives, and facilitate shifts in mental models. This helps build sustainable, effective learning programs, realized through productive capability-enhancing cycles.

Key takeaway for immunization leaders. It is reasonable to hypothesize that poor health worker performance is a symptom rather than the cause of poor immunization programme performance. Short-term decisions, often responding to top-down targets and donor requirements, hurt capability …
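As a rough illustration of this downward spiral, here is a stylized sketch in which pressure to close a performance gap crowds out improvement time, which in turn erodes capability. The parameters and functional forms are illustrative assumptions, not the article's model.

```python
# Stylized sketch of the reinforcing loop described above: pressure to
# "work harder" crowds out improvement work, capability erodes, the
# performance gap widens, and pressure rises further. All numbers below
# are illustrative assumptions.

capability = 1.0          # current process capability (arbitrary units)
target = 1.2              # performance target imposed from above
improvement_share = 0.3   # share of time initially spent on improvement

for month in range(1, 13):
    # Output comes mostly from routine work; improvement time pays off later.
    performance = capability * (1 - improvement_share) + 0.1 * improvement_share
    gap = max(target - performance, 0.0)
    # Under pressure, time shifts away from improvement toward firefighting.
    improvement_share = max(improvement_share - 0.05 * gap, 0.0)
    # Capability erodes without improvement work and grows slowly with it.
    capability += 0.05 * improvement_share - 0.02 * (1 - improvement_share)
    print(f"Month {month:2d}: capability={capability:.2f}, "
          f"performance={performance:.2f}, improvement time={improvement_share:.0%}")
```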
Making sense of sensemaking
In her article “A Shared Lens for Sensemaking in Learning Analytics”, Sasha Poquet argues that the field of learning analytics lacks a shared conceptual language to describe the process of sensemaking around educational data. She reviews prominent theories of sensemaking, delineating tensions between assumptions in dominant paradigms. Poquet then demonstrates the eclectic use of sensemaking frameworks across empirical learning analytics research. For instance, studies frequently conflate noticing dashboard information with interpreting its significance. To advance systematic inquiry, she calls for revisiting epistemic assumptions to reconcile tensions between cognitive and sociocultural traditions. Adopting a transactional perspective, Poquet suggests activity theory, conceptualizations of perceived situational definitions, and ecological affordance perception can jointly illuminate subjective and objective facets of sensemaking. This preliminary framework spotlights the interplay of internal worldviews, external systemic contexts, and emergent perceptual processes in appropriating analytics. The implications span research and practice. The proposed constructs enable precise characterization of variability …
Education as a system of systems: rethinking learning theory to tackle complex threats to our societies
In their 2014 article, Jacobson, Kapur, and Reimann propose shifting the paradigm of learning theory towards the conceptual framework of complexity science. They argue that the longstanding dichotomy between cognitive and situative theories of learning fails to capture the intricate dynamics at play. Learning arises across a “bio-psycho-social” system involving interactive feedback loops linking neuronal processes, individual cognition, social context, and cultural milieu. As such, what emerges cannot be reduced to any individual component. To better understand how macro-scale phenomena like learning manifest from micro-scale interactions, the authors invoke the notion of “emergence” prominent in the study of complex adaptive systems. Discrete agents interacting according to simple rules can self-organize into sophisticated structures through across-scale feedback. For instance, the formation of a traffic jam results from the cumulative behavior of individual drivers. The jam then constrains their ensuing decisions. Similarly, in learning contexts, the construction of shared knowledge, norms, values …
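A toy simulation can make the traffic jam example concrete: each driver follows simple local rules, yet jams emerge and then constrain subsequent behavior. The sketch below is loosely inspired by the Nagel–Schreckenberg cellular automaton; its rules and parameters are illustrative assumptions, not a model from the article.

```python
# Toy illustration of emergence: cars on a circular road follow simple local
# rules (speed up, keep distance, occasionally hesitate), and traffic jams
# emerge as clusters without any driver intending to create one.
import random

ROAD_LENGTH = 40   # cells on a circular road
N_CARS = 15
MAX_SPEED = 3
SLOWDOWN_P = 0.3   # probability of random braking

random.seed(1)
positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    """Advance every car one tick using purely local rules."""
    new_speeds = []
    for i, (pos, v) in enumerate(zip(positions, speeds)):
        ahead = positions[(i + 1) % N_CARS]
        gap = (ahead - pos - 1) % ROAD_LENGTH
        v = min(v + 1, MAX_SPEED, gap)          # accelerate, but keep distance
        if v > 0 and random.random() < SLOWDOWN_P:
            v -= 1                               # random human hesitation
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LENGTH for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for t in range(20):
    road = ["." for _ in range(ROAD_LENGTH)]
    for p in positions:
        road[p] = "o"
    print("".join(road))                         # jams appear as clusters of 'o'
    positions, speeds = step(positions, speeds)
```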
The design of intelligent environments for education
Warren M. Brodey, writing in 1967, advocated for “intelligent environments” that evolve in tandem with inhabitants rather than rigidly conditioning behaviors. The vision described deeply interweaves users and contexts, enabling environments to respond in real-time to boredom and changing needs with shifting modalities. Core arguments state that industrial-model education trains obedience over creativity through standardized, conformity-demanding environments that waste potential. Optimal learning requires tuning instruction to each student. Rigid spaces reflecting hard architecture must give way to soft, living systems adaptively promoting growth. His article categorizes environment and system intelligence across axes like passive/active, simple/complex, stagnant/self-improving. Significant themes include emancipating achievement through tailored guidance per preferences and abilities, architecting feedback loops between human and machine, and progressing through predictive insight rather than blunt insistence. Overarching takeaways reveal that intelligence emerges from environments and inhabitants synergistically improving one another, not stationary enforcement of tradition. For education, this analysis indicates transformative power …
How do we reframe health performance management within complex adaptive systems?
We need a conceptual framework that situates health performance management within complex adaptive systems. This is a summary of an important paper by Tom Newton-Lewis et al. It describes such a conceptual framework that identifies the factors that determine the appropriate balance between directive and enabling approaches to performance management in a given context. Existing performance management approaches in many low- and middle-income country health systems are largely directive, aiming to control behaviour using targets, performance monitoring, incentives, and answerability to hierarchies. Health systems are complex and adaptive: performance outcomes arise from interactions between many interconnected system actors and their ability to adapt to pressures for change. In my view, this paper mends an important broken link in theories of change that try to consider learning beyond training. The complex, dynamic, multilevel nature of health systems makes outcomes difficult to control, so directive approaches to performance management need to …
What is a “rubric” and why use rubrics in global health education?
Rubrics are well-established, evidence-based tools in education, but largely unknown in global health. At the Geneva Learning Foundation (TGLF), the rubric is a key tool that we use – as part of a comprehensive package of interventions – to transform high-cost, low-volume training dependent on the limited availability of global experts into scalable peer learning to improve access, quality, and outcomes. The more prosaic definition of the rubric – removed from any pedagogical questioning – is “a type of scoring guide that assesses and articulates specific components and expectations for an assignment” (Source). The rubric is a practical solution to a number of complex issues that prevent effective teaching and learning in global health. Developing a rubric provides a practical method for turning complex content and expertise into a learning process in which learners will learn primarily from each other. Hence, making sense of a rubric requires recognizing and appreciating the value of peer learning. This may be …
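One way to see the “scoring guide” definition concretely is to treat a rubric as a simple data structure: named criteria, each with levelled descriptors of what is expected. The criteria and descriptors below are hypothetical examples for a peer review exercise, not an actual TGLF rubric.

```python
# Sketch of a rubric as a data structure: each criterion articulates the
# expected components of an assignment, and each level describes what
# performance at that level looks like. Criteria and descriptors here are
# hypothetical examples.

rubric = {
    "clarity_of_problem_statement": {
        0: "Problem is not stated.",
        1: "Problem is stated but vague or unmeasurable.",
        2: "Problem is specific and relevant to the local context.",
        3: "Problem is specific, measurable, and grounded in local data.",
    },
    "feasibility_of_action_plan": {
        0: "No action plan is provided.",
        1: "Plan lists activities without owners or timelines.",
        2: "Plan has owners and timelines for most activities.",
        3: "Plan has owners, timelines, and identifies available resources.",
    },
}

def score(submission_ratings: dict[str, int]) -> int:
    """Sum the level a peer reviewer selects for each criterion."""
    return sum(submission_ratings[criterion] for criterion in rubric)

# Example: a peer reviewer rates a colleague's project plan.
print(score({"clarity_of_problem_statement": 2, "feasibility_of_action_plan": 3}))
```

Because every criterion and level is spelled out in advance, peers can give each other structured, consistent feedback without depending on the availability of a global expert.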
Pandemic preparedness through connected transnational digital networks of local actors
In the Geneva Learning Foundation’s approach to effective humanitarian learning, knowledge acquisition and competency development are both necessary but insufficient. This is why, in July 2019, we built the first Impact Accelerator, to support local practitioners beyond learning outcomes all the way to achieving actual health outcomes. What we now call the Full Learning Cycle has become a mature package of interventions that covers the full spectrum from knowledge acquisition to implementation and continuous improvement. This package has produced the same effects in every area of work where we have been able to test it: self-motivated groups manifesting remarkable, emergent leadership, connected laterally to each other in each country and between countries, with a remarkable ability to quickly learn and adapt in the face of the unknown. In 2020, we got to test this package during the COVID-19 pandemic, co-creating the COVID-19 Peer Hub with over 6,000 frontline health professionals, …
Reinventing the path from knowledge to action in global health
At the Geneva Learning Foundation (TGLF), we have just begun to share a publication like no other. It is titled Overcoming barriers to vaccine acceptance in the community: Key learning from the experiences of 734 frontline health workers. You can access the full report here in French and in English. Short summaries are also available in three special issues of The Double Loop, the Foundation’s free Insights newsletter, now available in both English and French. The report, prefaced by Heidi Larson, who leads the Vaccine Confidence Project, includes a DOI to facilitate citation in academic research. (The Foundation uses a repository established and maintained by CERN, based in Geneva, for this purpose.) However, knowing that academic papers have (arguably) an average of three readers, we have a different aspiration for dissemination. As a global community, we recognize the significance of local action to achieve the global goals. The report documents vaccine confidence practices just …