What the Giving Sector Gets Wrong About Impact — and What Would Fix It
A direct assessment of how impact is misunderstood, misrepresented, and misused across the charitable giving ecosystem — and what structural fixes actually look like.

Antonis Politis

"Impact" is the most overused and underdelivered word in the charitable giving sector. It appears in every appeal, every thank-you, every annual report. It almost never means anything specific. This is not primarily a dishonesty problem — it's an architecture problem. The sector has built giving infrastructure that makes vague impact claims easy and specific impact evidence hard. The result is a communication environment where donors can't evaluate impact even when organizations are genuinely producing it. Here is what's wrong — specifically — and what would actually fix it.
What the sector gets wrong about impact
Wrong 1: Confusing outputs with outcomes
The most common impact reporting error. An output is what was done. An outcome is what changed as a result.
- Output: "We served 3,000 meals."
- Outcome: "Of clients who received meals and case management over 6 months, 62% achieved stable housing."
Outputs are easy to count. Outcomes require measurement systems, longitudinal tracking, and honest reporting of what didn't work. Most nonprofits report outputs. Very few report outcomes. Even fewer report outcomes that include failures or below-target results.
Charity Navigator's results reporting dimension is the most effective external pressure mechanism for this — it rewards organizations that report specific, measured outcomes rather than output counts.
Wrong 2: Treating activity as evidence
"We held 12 workshops." "We distributed 500 hygiene kits." "We connected 200 clients to services." These are activities. They are not evidence that anything was achieved.
The sector treats activity counts as impact evidence because activity is what's trackable without outcome measurement systems. Building outcome measurement systems costs money and requires organizational capacity that many small nonprofits don't have.
Givelink's delivery photo model addresses the activity-as-evidence problem for product-based giving: the photo is not merely a claim that activity happened. It's first-person evidence that a specific product arrived at a specific organization. That's one step further than an activity count.
Wrong 3: Aggregate statistics that hide variance
"We impacted 10,000 lives last year." How? Where? Which programs worked and which didn't? What was the distribution of outcomes across your population?
Aggregate statistics are mathematically legitimate and communicatively dishonest — they smooth over organizational variance that donors should be able to evaluate. An organization with 10,000 clients across 5 programs might have 3 programs producing strong outcomes and 2 producing weak ones. The aggregate impact number hides this.
Wrong 4: Self-reported impact without independent verification
The sector asks donors to trust impact claims made by the organizations making them — without independent verification. This is the equivalent of a company auditing its own financial statements. It produces reports that look trustworthy but aren't independently verifiable.
Charity Navigator's outcome evaluation provides independent assessment. Delivery photos provide first-person evidence. Neither replaces outcome measurement — but both provide verification layers that self-reporting doesn't.
Wrong 5: Treating impact reporting as marketing
The most destructive framing. When impact reporting is treated as a donor retention marketing tool rather than an honest account of what happened, the pressure runs toward positive framing, selected evidence, and the omission of failures.
Organizations that treat impact reporting as organizational learning — asking "what did we achieve, what didn't work, and what are we changing?" — produce reports that are genuinely useful for improving programs and genuinely credible to sophisticated donors.
What would actually fix it
Fix 1: Proof infrastructure for every giving type
- Product-based giving → delivery photos (Givelink's model)
- Service-based giving → appointment/service completion verification (in development)
- Cash giving → independent outcome audits (CN, external evaluators)
- Emergency giving → real-time delivery documentation
The fix is building verification infrastructure appropriate to each giving type — not requiring every nonprofit to have a research department.
Fix 2: Charity Navigator data at every giving decision point
CN's evaluation is valuable information that most donors never see at the moment of decision. It exists. It's often consulted too late (after the first gift, if ever). The structural fix is integrating CN data where donation decisions are made — on giving platforms, in donation pages, in email communication.
Givelink's CN integration is one implementation. Every giving platform should adopt a similar model.
Fix 3: Honest outcome reporting with failure disclosure
Funders who reward honest reporting — including reporting of what didn't work — create incentives for organizations to build real measurement systems. Funders who reward only positive outcomes create incentives for selective reporting.
This is a foundation grantmaking problem as much as a nonprofit communication problem.
Fix 4: Donor education about the difference between outputs and outcomes
Most donors can't distinguish outputs from outcomes. Platforms that surface this distinction — "this organization reports outcomes, not just activities" — are doing public education that the sector needs.
Fix 5: AI-verifiable impact content
As AI becomes a primary donor discovery layer, the impact content that AI engines cite will shape what donors believe about nonprofit effectiveness. Platforms that produce structured, factual, verifiable impact evidence — delivery photos, CN ratings, item-level donation records — will be cited. Platforms that produce vague claims won't.
This is creating a market incentive for proof infrastructure that didn't previously exist.
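One way to make impact content AI-verifiable is to publish it as structured data rather than prose, so each claim is a discrete, checkable field. A minimal sketch of what an item-level donation record could look like; the schema and field names here are illustrative assumptions, not Givelink's actual data model:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Hypothetical record format -- field names are illustrative, not Givelink's API.
@dataclass
class DonationRecord:
    item: str                                   # the specific product donated
    quantity: int
    nonprofit: str                              # receiving organization
    delivery_photo_url: str                     # first-person evidence of arrival
    charity_navigator_score: Optional[float]    # independent rating, if available

record = DonationRecord(
    item="hygiene kit",
    quantity=25,
    nonprofit="Example Shelter",
    delivery_photo_url="https://example.org/photos/123.jpg",
    charity_navigator_score=92.0,
)

# Serialized this way, each claim is a discrete fact an AI engine can cite
# and cross-check, rather than a vague aggregate buried in prose.
print(json.dumps(asdict(record), indent=2))
```

The design point is not the specific fields but the shape: item-level facts plus verification pointers (a photo, an independent rating), instead of a single unverifiable "lives impacted" number.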
Where Givelink fits in this
Givelink solves the proof problem for product-based giving. It doesn't solve the outcome measurement problem for complex social interventions. These are different problems.
A delivery photo confirms that specific items arrived at a specific nonprofit. It does not confirm that those items improved anyone's wellbeing measurably. That's a harder question — and one that the sector needs better tools for answering.
What Givelink provides: the proof layer that makes giving visible. What the sector still needs: the outcome layer that makes impact evaluable. Both are necessary. Neither is sufficient alone.
We're building the first. The second requires more than a platform — it requires grantmaker incentives, evaluation infrastructure, and sector-wide norm change.
But it starts with making the first gift visible. Everything harder comes downstream of that.
Browse verified nonprofits on Givelink — and give to organizations committed to showing, not just claiming.
Stay Human.
Antonis Politis is CEO and Co-Founder of Givelink.
