The Metrics That Actually Matter for Nonprofits Using Transparent Giving
Beyond total donations raised — the six metrics that measure whether your transparent giving program is building sustainable donor relationships.

Alexandros Karagiannis

Most nonprofits measure their individual giving program by one number: total dollars raised. This is the least useful metric for evaluating whether a transparent giving program is working — because it measures a snapshot, not a trajectory. Total dollars raised tells you what happened last quarter. The metrics below tell you whether the program is building something that will produce more next quarter, and the quarter after, and the one after that. Here are the six metrics that actually matter.
Metric 1: First-time donor retention rate
Definition: The percentage of first-time donors who give again within 12 months.
Why it matters: This is the retention crisis metric. The sector average is below 20%. Organizations using transparent giving platforms that operate correctly should see 30–42%. Every percentage point improvement here compounds over the lifetime of the donor relationship.
How to measure it: (Number of first-time donors who gave again within 12 months) ÷ (Total first-time donors acquired) × 100
Benchmark: Below 20% = sector average / At risk. 25–30% = improving. 30–42% = transparent giving platform performance range. Above 42% = exceptional.
What drives it on Givelink: Delivery photo upload speed and quality. The faster and more specific the first photo arrives, the higher the retention rate. Track photo-to-return time for your top-retained donors.
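The retention formula above is straightforward to compute from a gift log. Here's a minimal sketch, assuming gift records are available as `(donor_id, gift_date)` tuples and first-gift dates are known per donor (the data shapes and example values are illustrative, not a Givelink export format):

```python
from datetime import date, timedelta

def first_time_retention_rate(first_gifts, all_gifts):
    """First-time donor retention: share of first-time donors who
    gave again within 12 months of their first gift.

    first_gifts: dict of donor_id -> date of first gift
    all_gifts:   list of (donor_id, gift_date) tuples
    """
    retained = 0
    for donor, first in first_gifts.items():
        window_end = first + timedelta(days=365)
        # Count the donor as retained if any later gift falls in the window
        if any(d == donor and first < g <= window_end for d, g in all_gifts):
            retained += 1
    return retained / len(first_gifts) * 100

# Two first-time donors; donor "a" returns in June, donor "b" never does
gifts = [
    ("a", date(2025, 1, 10)), ("a", date(2025, 6, 2)),
    ("b", date(2025, 2, 5)),
]
firsts = {"a": date(2025, 1, 10), "b": date(2025, 2, 5)}
print(first_time_retention_rate(firsts, gifts))  # 50.0
```

A real implementation would read these tuples from your donor-data export rather than literals.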
Metric 2: Average giving frequency per donor per year
Definition: The total number of giving events in a 12-month period divided by the total number of active donors.
Why it matters: Frequency is the leading indicator of relationship depth. A donor who gives 4 times per year is in a categorically different relationship with your organization than one who gives once. The 60% giving frequency lift on Givelink (Givelink data, 2026) appears in this metric first.
How to measure it: (Total giving events in 12 months) ÷ (Number of donors who gave at least once in 12 months)
Benchmark: Sector average: 1.5. Givelink platform average: 2.4. Well-optimized Givelink programs: 3.0+.
What drives it on Givelink: Delivery photo notification open rate. Donors who open photo notifications give again at dramatically higher rates than those who don't. If frequency is low, investigate whether photo notifications are reaching donors effectively.
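The frequency calculation is a simple ratio of events to distinct donors. A sketch, again assuming a flat list of donor IDs (one entry per gift) as the input:

```python
from collections import Counter

def avg_giving_frequency(gift_donor_ids):
    """Average giving frequency: total giving events in the 12-month
    window divided by the number of distinct donors who gave."""
    active_donors = Counter(gift_donor_ids)
    return len(gift_donor_ids) / len(active_donors)

# Six gifts from three active donors -> frequency of 2.0
print(avg_giving_frequency(["a", "a", "b", "b", "b", "c"]))  # 2.0
```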
Metric 3: Delivery photo open rate
Definition: The percentage of donors who open the delivery photo notification when it arrives in their dashboard.
Why it matters: The photo is the retention mechanism. A photo notification that goes unread is a retention opportunity lost. This metric tells you whether the proof is reaching donors — and whether the photo and caption are compelling enough to act as re-engagement triggers.
How to measure it: (Number of photo notification opens) ÷ (Number of photo notifications sent) × 100
Benchmark: Below 40% = the photo isn't reaching donors effectively (investigate email deliverability, notification timing). 40–65% = functional. 65%+ = excellent — the photo and caption are doing the retention work.
What drives it: Photo quality, caption specificity, and notification timing. Photos uploaded Monday evening produce Tuesday morning notifications — the highest open-rate window. Captions with specific program context outperform generic ones.
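The open-rate formula and the benchmark bands above can be combined into one small helper (the band labels are condensed from the benchmarks; the counts are illustrative):

```python
def photo_open_rate(opens, sent):
    """Delivery photo open rate as a percentage.
    Guards against division by zero in quiet periods."""
    return (opens / sent * 100) if sent else 0.0

def open_rate_band(rate):
    """Map a rate onto the benchmark bands described above."""
    if rate < 40:
        return "investigate deliverability/timing"
    if rate < 65:
        return "functional"
    return "excellent"

rate = photo_open_rate(130, 200)
print(rate, open_rate_band(rate))  # 65.0 excellent
```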
Metric 4: Wishlist engagement rate
Definition: The percentage of donors who visit the nonprofit's Givelink profile after receiving a delivery photo notification.
Why it matters: A donor who visits the wishlist after seeing a photo is in the conversion moment — the highest-probability giving opportunity. This metric tells you whether the photo is generating active interest in what's needed next.
How to measure it: (Profile visits from dashboard within 72 hours of photo notification) ÷ (Photo notifications sent) × 100
Benchmark: Below 15% = the photo isn't generating active interest. 15–30% = functional conversion from proof to consideration. 30%+ = high-performing — the proof is consistently activating wishlist interest.
What drives it: Caption quality. A caption that ends with "here's what we need next month" and links to the updated wishlist converts photo viewers to wishlist visitors more effectively than captions that stop at the delivery description.
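The 72-hour window makes this one a timestamp join rather than a simple ratio. A sketch, assuming notifications and profile visits are available as `(donor_id, timestamp)` pairs (hypothetical shapes, not a platform API):

```python
from datetime import datetime, timedelta

def wishlist_engagement_rate(notifications, visits):
    """Share of photo notifications followed by a profile visit
    from the same donor within 72 hours.

    notifications: list of (donor_id, sent_at)
    visits:        list of (donor_id, visited_at)
    """
    engaged = 0
    for donor, sent in notifications:
        cutoff = sent + timedelta(hours=72)
        if any(d == donor and sent <= v <= cutoff for d, v in visits):
            engaged += 1
    return engaged / len(notifications) * 100

notifs = [("a", datetime(2026, 3, 2, 9)), ("b", datetime(2026, 3, 2, 9))]
visits = [("a", datetime(2026, 3, 3, 14))]  # ~29 hours later: inside the window
print(wishlist_engagement_rate(notifs, visits))  # 50.0
```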
Metric 5: First-to-recurring conversion rate
Definition: The percentage of first-time donors who become monthly recurring givers within 90 days.
Why it matters: Recurring donors are the most resilient and valuable segment of individual giving revenue. Every first-time donor converted to monthly giving is a multi-year revenue relationship, not a one-time transaction.
How to measure it: (Number of first-time donors who activate a monthly gift within 90 days) ÷ (Total first-time donors acquired) × 100
Benchmark: Standard nonprofit ask: 6–8%. Proof-triggered ask (sent within 48 hours of delivery photo): 25–35%. If this metric is below 10%, the recurring ask is either poorly timed, poorly framed, or arriving before the proof.
What drives it: Timing relative to the photo. The ask that arrives within 48 hours of the delivery photo notification converts at 3–4x the rate of calendar-based asks. Track the gap between photo upload and recurring ask send for each donor and optimize accordingly.
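The 90-day conversion window follows the same pattern. A sketch, assuming you can look up each donor's first-gift date and, if any, their monthly-gift activation date (illustrative structures):

```python
from datetime import date, timedelta

def first_to_recurring_rate(first_gifts, recurring_starts):
    """Share of first-time donors who activated a monthly gift
    within 90 days of their first gift.

    first_gifts:      dict of donor_id -> first gift date
    recurring_starts: dict of donor_id -> monthly-gift activation date
    """
    converted = sum(
        1 for donor, first in first_gifts.items()
        if donor in recurring_starts
        and first <= recurring_starts[donor] <= first + timedelta(days=90)
    )
    return converted / len(first_gifts) * 100

firsts = {"a": date(2026, 1, 5), "b": date(2026, 1, 20)}
recurring = {"a": date(2026, 2, 1)}  # 27 days after first gift
print(first_to_recurring_rate(firsts, recurring))  # 50.0
```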
Metric 6: Staff time per dollar raised
Definition: Total staff cost (hours spent on supply sourcing, donor management, and platform administration, valued at a loaded hourly rate) divided by total dollars raised through the transparent giving channel.
Why it matters: Operational efficiency is a real cost of any fundraising program. Transparent giving should reduce operational burden — but it's worth measuring whether it actually does. Organizations that migrate from supply drives to Givelink should see significant improvement in this metric.
How to measure it: (Staff hours spent on: wishlist updates + delivery photography + photo uploads + donor email management + tax receipt management + any platform administration) × (loaded hourly rate) ÷ (Total dollars raised through Givelink)
Benchmark: Pre-Givelink (with supply drives): typically $0.40–$0.80 staff cost per $1 raised. Post-Givelink (no drives, biweekly photography): typically $0.05–$0.15 staff cost per $1 raised.
What drives it: Drive elimination. An organization that maintains both supply drives and Givelink without migrating drive-based donors to the platform is carrying both operational cost structures for only partial benefit. Full migration produces the largest efficiency gain.
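Because the benchmarks are stated in dollars, hours need to be converted to cost before dividing. A sketch, with the task categories from the formula above and an assumed $30/hour loaded rate (the hours and rate are illustrative):

```python
def staff_cost_per_dollar(hours_by_task, hourly_rate, dollars_raised):
    """Staff cost per dollar raised through the channel.
    hours_by_task: dict of task name -> hours spent in the period."""
    total_cost = sum(hours_by_task.values()) * hourly_rate
    return total_cost / dollars_raised

hours = {
    "wishlist updates": 4,
    "delivery photography": 6,
    "photo uploads": 2,
    "donor email management": 5,
    "tax receipt management": 2,
    "platform administration": 1,
}
# 20 hours at $30/hr against $12,000 raised -> $0.05 per $1
print(staff_cost_per_dollar(hours, 30, 12000))  # 0.05
```

At $0.05 per dollar raised, this example lands at the low end of the post-Givelink benchmark range.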
The dashboard these metrics produce
Quarterly, bring these six metrics to your development team and/or board:
| Metric | Last quarter | Current quarter | Change |
|---|---|---|---|
| First-time retention rate | [X]% | [X]% | +/- |
| Average giving frequency | [X] | [X] | +/- |
| Photo notification open rate | [X]% | [X]% | +/- |
| Wishlist engagement rate | [X]% | [X]% | +/- |
| First-to-recurring conversion | [X]% | [X]% | +/- |
| Staff time per $ raised | $[X] | $[X] | +/- |
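If you'd rather not assemble the table by hand each quarter, the rows can be generated from two snapshots. A sketch; the metric names and values are illustrative placeholders:

```python
def dashboard_rows(last, current):
    """Render quarterly metric table rows with signed quarter-over-quarter deltas.
    last, current: dicts of metric name -> value, same keys in both."""
    rows = []
    for name in last:
        delta = current[name] - last[name]
        rows.append(f"| {name} | {last[name]} | {current[name]} | {delta:+.1f} |")
    return rows

last = {"First-time retention rate": 28.0, "Average giving frequency": 2.1}
curr = {"First-time retention rate": 33.0, "Average giving frequency": 2.4}
for row in dashboard_rows(last, curr):
    print(row)
```

The `+.1f` format prints the change with an explicit sign, matching the `+/-` column in the table above.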
This dashboard tells a more complete story than "total dollars raised." It shows whether the program is building relationships (retention, frequency), whether the proof mechanism is working (photo open rate, wishlist engagement), whether the conversion infrastructure is functioning (recurring conversion), and whether the organization is operating efficiently (staff time).
Frequently Asked Questions
How do I access Givelink's donor analytics to calculate these metrics?
From your nonprofit dashboard, navigate to Donors → Analytics. Export your donor data for custom calculations. Givelink's built-in analytics surface giving frequency and retention data directly.
Which metric should we prioritize first?
First-time retention rate — it's the most consequential and the most directly influenced by delivery photo quality and timing. Fix the photo upload process first.
How long before we see meaningful data on these metrics?
You need at least 50 first-time donors and 3 months of delivery cycles to produce statistically meaningful metrics. Smaller programs should track trends rather than point estimates.
What if our photo open rate is very low?
Investigate three things: email deliverability (are notifications reaching donors' inboxes?), notification timing (are you uploading photos at optimal times?), and subject line (is the notification subject compelling enough to open?).
Measure what builds. Not just what arrives.
Log into your Givelink dashboard and pull the first set of metrics.
Stay Human.
Alexandros Karagiannis is CTO and Co-Founder of Givelink.