Sustainability and Extension as a Solution to World Hunger

An AI solution to evaluating the adoption of innovations, presented in AGEDS 461 at Iowa State University.

Quotes of Preface

“It’s a basic truth of the human condition that everybody lies. The only variable is about what.” Dr. Gregory House (Hugh Laurie), House, M.D.

“History is written by the victors.” Commonly attributed to Winston Churchill

“If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime.” Proverb, often misattributed to Matthew 4:19

Puzzle Piece Preface

Developed through the perspective of Everett Rogers’s Diffusion of Innovations, this paper defines and introduces the problem of world hunger as an issue ultimately solved by the efficient application of international agricultural Extension rather than by current approaches, which are largely defined by food banks and donations. With the advice of Dr. Nav Ghimire and Dr. Fally Masambuka‑Kanchewa, this paper aims to break the lengthy, complex problem of evaluating sustainable on‑field agricultural development into smaller sub‑problems approachable by an artificial neural network.

The Tipping Point by Malcolm Gladwell explains how “little things can make a huge difference.” [@TippingPoint] Similarly, this paper looks to provide a piece to solving the issues encountered when evaluating the impact of agricultural extension work in Nepal.

This paper will explore key concepts from AGEDS 461 at Iowa State University. Thinking through the perspective of an extension agent, this model could replace the need for on‑field evaluative work by providing automated, scalable assessments of innovation adoption.

Introduction

The investigations explored throughout this paper began with contact with Dr. Nav Ghimire, Associate Director of University of Idaho Extension. He noted that a major challenge for extension programs is evaluating the impact of an innovation in areas like India and Nepal. A “light bulb” moment occurred during a presentation about barriers to international extension in Africa, and the idea for this method began to take shape.

Sustainability against Morality: Barriers to World Hunger Solutions

While this paper focuses on evaluating the impact of extension work, it’s also important to consider the wider vision of how these methods serve global populations. Alarmingly, current food‑security approaches (e.g., food banks) can actually reverse progress. Nick Saul, co‑founder and CEO of Community Food Centres Canada, explained in a 2016 TED Talk that complex food‑bank systems have coincided with worsening food‑security outcomes. The root issue isn’t a lack of food but poverty and misfortune—problems not solved by donations alone.

By building sustainable agricultural and industrial sectors in struggling areas, we address the source of food security issues. This underpins the purpose of agricultural Extension.

Extension as a Solution

Defining Extension Work

Agricultural Extension is more than simply diffusing innovations—it’s the structured exchange of knowledge that helps communities adopt new skills to solve problems and improve their lives. Ensminger and Sanders (1945) define it as the exchange of knowledge to help populations develop and adopt agricultural innovations [@Sanders]. This definition also hints at potential moral biases in extension approaches.

A classic example is the development of hybridized corn at Iowa State University [@Rogers]. Before hybrids, farmers saved seed from open‑pollinated varieties, leading to uneven yields and poor pest resistance. Researchers spread the new seeds by offering trial plots and leveraging networks of farmers and agents.

Extension as a Solution to World Hunger

World hunger is a complex issue. Since Extension promotes sustainable development, the efficiency of extension systems directly affects food security. Evaluating extension programs—especially in places like Nepal—is challenging and often relies on flawed surveys. Dr. Masambuka‑Kanchewa observed that farmers sometimes misunderstand extension goals, yielding contradictory data.

Satellite imagery combined with AI can provide unbiased, accurate evaluations without expensive travel or extensive surveys.

An Introduction to AI

Hollywood, Wrong Again?

Current AI lacks the “sentient intelligence” of Hollywood. While AI can pose risks—such as elite control of social media feeds—it doesn’t manifest as in movies.

Advertising Profiles

AI already collects data to target ads. Our devices continuously gather behavioral data, providing the raw material for algorithms that recommend products and services.

Derivative of Satellite Imagery

Time

Satellite imagery outperforms drones by providing continuous, time‑series data, enabling ongoing evaluation of field innovations.

The Role of AI in Extension

Destroying Hierarchies of Communication

Accurate, global‑scale evaluation empowers agents and farmers alike, flattening traditional top‑down communication hierarchies.

Farmer Understanding

Transparency between farmers and agents is crucial. Satellite imagery strengthens data validity by reducing subjective bias.

The Ladder of Farmer Participation in the Extension Process
  • Level 5: Farmers evaluate Extension independently and report to policymakers.
  • Level 4: Farmers evaluate alongside managers and decide on service changes.
  • Level 3: Farmers receive results and ask for recommendations.
  • Level 2: Farmers receive summaries but aren’t asked to react.
  • Level 1: Farmers provide data without involvement in planning.

[@nav]

Creating Alternative Communication Ladders

High‑resolution satellite applications—such as those in conservation biology—illustrate how imagery can enable new feedback loops between stakeholders.

Method

This section introduces our proposed method, grounded in Rogers’s Diffusion of Innovations, and compares it with current evaluation approaches.

Characteristics of Innovations

1. Relative Advantage

How much better an innovation is perceived compared to the idea, practice, or product it supersedes.

  • Definition: Degree to which users believe the new idea is superior in terms of economic benefits, social prestige, convenience, or satisfaction.
  • Why it matters: The greater the perceived advantage, the more rapid the adoption. If people see clear gains—cost savings, time savings, improved outcomes—they’ll be more motivated to switch.
  • Example: Solar panels offer lower electricity bills (economic benefit) and “greener” credentials (social prestige) relative to grid power.

2. Trialability

The extent to which an innovation can be experimented with on a limited basis.

  • Definition: Degree to which a new idea can be tested or piloted before full-scale commitment.
  • Why it matters: Lowers perceived risk. Being able to “try it out” helps potential adopters build confidence and iron out uncertainties before making a big investment.
  • Example: Free trials of software let users explore features risk‑free; farmers trial small plots with a new crop variety before dedicating their entire acreage.

3. Compatibility

How consistent the innovation is with existing values, past experiences, and needs of potential adopters.

  • Definition: Degree to which an innovation aligns with users’ cultural norms, workflows, and prior practices.
  • Why it matters: The more an innovation “fits” into what people already do and believe, the fewer changes they must make—reducing friction and resistance.
  • Example: Electric cars that use the same “drive and refuel” mental model as gasoline cars (i.e., you still pump energy into a tank) feel more familiar than radically new transport systems.

4. Complexity

How difficult the innovation is to understand and use.

  • Definition: Degree to which an innovation is perceived as hard to grasp, learn, or implement.
  • Why it matters: Higher complexity slows adoption—users tend to shy away from tools or ideas that seem too confusing or that require steep learning curves.
  • Example: A new analytics dashboard that requires months of training will face more resistance than one with a clean, intuitive UI and built‑in tutorials.

5. Observability

The degree to which the results of an innovation are visible to others.

  • Definition: Extent to which positive outcomes of an innovation (benefits, successes) can be seen, demonstrated, and communicated.
  • Why it matters: Visible results create word‑of‑mouth momentum. When peers see someone benefitting, they’re more likely to try it themselves.
  • Example: Neighbors installing solar panels—in full view on rooftops—serve as a public demonstration that encourages others in the community to explore solar.

Putting it all together:

An innovation that clearly outperforms existing options (relative advantage), can be tested easily (trialability), fits into current habits and values (compatibility), is user‑friendly (low complexity), and yields visible benefits (observability) will diffuse most rapidly through a social system.
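The interplay of the five attributes can be made concrete with a toy composite score. This is an illustrative sketch only: the equal weighting and the example ratings for rooftop solar are hypothetical, not taken from Rogers or from this paper.

```python
# Illustrative sketch: a simple composite score over Rogers's five
# perceived attributes. Weights (equal) and ratings are hypothetical.

ATTRIBUTES = ["relative_advantage", "trialability", "compatibility",
              "complexity", "observability"]

def diffusion_score(ratings):
    """Average five 0-10 attribute ratings; complexity is inverted
    because higher complexity slows adoption."""
    total = 0.0
    for attr in ATTRIBUTES:
        value = ratings[attr]
        if attr == "complexity":
            value = 10 - value  # low complexity favors adoption
        total += value
    return total / len(ATTRIBUTES)

# Hypothetical ratings for rooftop solar, echoing the examples above.
solar = {"relative_advantage": 8, "trialability": 4, "compatibility": 6,
         "complexity": 3, "observability": 9}
print(diffusion_score(solar))  # (8 + 4 + 6 + 7 + 9) / 5 = 6.8
```

A real evaluation would of course weight the attributes empirically rather than equally; the point is only that the five perceptions combine into an overall adoption tendency.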

Relative Advantage

Drones can monitor crops and apply inputs, but they require costly daily flights. Satellites, already in orbit, offer a clear relative advantage in cost–benefit terms. [@drones]

Trialability

Trialability concerns how easily new technologies can be tested. This section describes how accessible AI solutions are for initial experiments.

A Simple Proof‑of‑Concept

Using Teachable Machine, I trained a convolutional neural network to detect tilled soil in satellite images. I collected 100 Google Earth images showing tilled fields to establish a baseline. I also trained a model to recognize combine harvesters, enabling automated tracking of mechanization adoption over time.
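To make the pipeline concrete, the sketch below shows how one satellite image might be scored with a Keras model exported from Teachable Machine. The filenames, the 224×224 input size, the [-1, 1] pixel scaling, and the class order are assumptions based on Teachable Machine's standard image‑model export, not details recorded in this project.

```python
# Sketch, assuming a Teachable Machine Keras export ("keras_model.h5")
# trained on two classes; label order is hypothetical.
import numpy as np

CLASSES = ["tilled", "untilled"]  # assumed to match training order

def preprocess(pixels):
    """Scale uint8 RGB pixels of shape (H, W, 3) to [-1, 1] and add a batch axis."""
    return pixels.astype(np.float32)[np.newaxis, ...] / 127.5 - 1.0

def top_class(probabilities):
    """Map a (1, n_classes) softmax output to its label."""
    return CLASSES[int(np.argmax(probabilities[0]))]

def classify_file(model_path, image_path):
    """Classify one satellite image; requires tensorflow and Pillow."""
    from tensorflow.keras.models import load_model  # heavy import kept local
    from PIL import Image
    model = load_model(model_path)
    pixels = np.asarray(Image.open(image_path).convert("RGB").resize((224, 224)))
    return top_class(model.predict(preprocess(pixels)))
```

Run over a folder of geotagged images, the same loop yields a per‑field adoption signal without any fieldwork.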

Observability

By mapping model outputs to geographic grids, we can visualize innovation adoption at scale.
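A minimal sketch of that mapping, assuming the classifier emits one adopted/not‑adopted label per geotagged image; the 0.1‑degree cell size and the sample coordinates near Kathmandu are hypothetical.

```python
# Sketch: bin per-image classifications into a lat/lon grid so that
# adoption fractions can be mapped. Cell size is an assumption.
import math
from collections import defaultdict

CELL_DEG = 0.1  # grid resolution in degrees (assumed)

def cell(lat, lon):
    """Snap a coordinate to integer grid-cell indices."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

def adoption_by_cell(observations):
    """observations: iterable of (lat, lon, adopted) tuples.
    Returns {cell: fraction of images classified as showing the innovation}."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [adopted, total]
    for lat, lon, adopted in observations:
        tally = counts[cell(lat, lon)]
        tally[0] += int(adopted)
        tally[1] += 1
    return {c: adopted / total for c, (adopted, total) in counts.items()}

# Hypothetical classifier outputs for three images near Kathmandu, Nepal.
obs = [(27.71, 85.32, True), (27.72, 85.33, True), (27.71, 85.34, False)]
print(adoption_by_cell(obs))  # one cell, adoption fraction 2/3
```

Plotting each cell's fraction as a color on a basemap turns the classifier into the kind of regional adoption map an extension agency could monitor over time.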

Compatibility

Privacy concerns arise if high‑resolution imagery is misused. This paper advocates open‑source models and transparent governance to prevent elite exploitation.

Low Complexity

The prototype demonstrates that a skilled programmer can build a basic evaluation system with existing tools.

Sustainability of the Method

Conservation biologists report exponential growth in satellite applications, driven by Moore’s Law and emerging quantum computing. [@Animal]

Context of Evaluation Methods

Dr. Nav Ghimire outlined five evaluation perspectives:

  • Cohort‑Referenced Judgement: Compares similar programs; less useful if both are equally effective.
  • Standard‑Referenced Judgement: Measures fulfillment of mission objectives.
  • Difficulty‑Referenced Judgement: Adjusts for problem complexity.
  • Alternative‑Referenced Judgement: Assesses opportunity costs of resource allocation.
  • Progress‑Referenced Judgement: Determines if new solutions outperform existing ones.

The Impact of Perfect Evaluations of Adoption

This section considers the broader implications of achieving near‑perfect, automated evaluation of innovation diffusion.

Conclusion

This AI‑driven, satellite‑based evaluation framework offers a scalable, unbiased way to measure agricultural innovation adoption. To safeguard privacy and encourage trust, the system should remain open source and be promoted openly within the international extension community.

References