ESSER Funds Are Gone — How to Build a Tutoring Program That Survives Without Them
The federal ESSER funds that built or expanded most district tutoring programs expired on September 30, 2024. What happens next depends less on whether your program works than on whether you can prove that it does. That is the central problem facing district leaders right now — not a funding shortage, exactly, but a documentation gap that turns a working program into a defenseless one the moment anyone asks for evidence.
This post is an honest assessment of where things stand, what funding actually exists to replace ESSER, what the research says about tutoring at scale, and — most directly — why programs that survive budget cycles are almost always the ones that built their operational infrastructure before the crisis arrived.
The Instinct to Cut Is Understandable, but It Requires Analysis First
When a dedicated revenue stream disappears, the rational response is to treat the program it funded as discretionary. Tutoring often reads as an add-on — not a core instructional expense, not a compliance requirement, not a contractual obligation. It cuts cleanly on paper.
But cutting on paper and cutting wisely are different things. Before any program reduction, district leaders owe themselves a rigorous answer to a specific question: what is this program actually costing us, and what would we lose per dollar saved? That calculation requires data that, in many districts, doesn’t exist in a form anyone can retrieve on short notice. If you can’t answer it, you are not in a position to make a sound decision — you are in a position to make an expedient one.
The broader context matters here too. As of January 2024, roughly $51 billion in ESSER funds remained unspent or unallocated across the country. For programs already operating, the question isn’t whether to start from scratch — it’s whether the program’s track record justifies a different funding home. And answering that question requires the same data that most programs don’t have.
What the Research Actually Says About Tutoring at Scale
Before getting into the funding mechanics, it’s worth grounding this conversation in what the evidence base actually shows — not the pre-pandemic optimism, but the more precise picture that emerged from large-scale implementation.
In October 2024, Matthew Kraft, Beth Schueler, and Grace Falken at Brown University’s Annenberg Institute published the most comprehensive meta-analysis of tutoring research to date, drawing on 282 randomized controlled trials. Their core finding is worth understanding carefully: pooled effect sizes from studies calibrated to real-world district conditions remain meaningful, but are roughly one-third to one-half the size of estimates from smaller, more controlled studies. The extraordinary numbers often cited from laboratory-scale research don’t fully transfer when programs serve thousands of students across a district.
What this means for a district leader's budget argument is counterintuitive: this nuance strengthens your position rather than weakening it. The study finds that effect sizes in the range of 0.16 to 0.21 standard deviations are still of medium to large magnitude for large-scale education interventions. Class size reduction — one of the most expensive levers in education — produces effects in the same range at dramatically higher cost per student. The relevant question isn't whether tutoring works; it's whether your tutoring program is designed and implemented in the ways that produce those outcomes.
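To make that cost comparison concrete, here is a back-of-the-envelope sketch. The 0.16 to 0.21 effect-size range comes from the pooled estimates above; the per-student dollar figures are hypothetical placeholders, not numbers from the study, so substitute your district's actuals before using this in a budget conversation:

```python
# Back-of-the-envelope cost-effectiveness comparison.
# The 0.18 SD effect size is the midpoint of the 0.16-0.21 range from the
# pooled estimates cited above. The per-student dollar costs are
# HYPOTHETICAL placeholders for illustration only.

interventions = {
    # name: (effect size in standard deviations, annual cost per student in USD)
    "high-impact tutoring": (0.18, 1_800),
    "class size reduction": (0.18, 9_000),
}

for name, (effect_sd, cost_usd) in interventions.items():
    sd_per_thousand = effect_sd / (cost_usd / 1_000)
    print(f"{name}: {sd_per_thousand:.3f} SD gained per $1,000 per student")
```

Even if the cost assumptions are off by a factor of two, the ordering holds: comparable effects at a fraction of the per-student cost, provided the program is designed and implemented to produce them.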
That distinction matters because Kraft and colleagues also identify a bundled package of design features that appear to partially protect programs from effect-size attrition as they scale: consistent tutor assignment, in-school delivery, sufficient session frequency, and instructional alignment with core classroom content. These are not matters of abstract program philosophy. They are operational decisions that your program either makes deliberately or doesn't make at all.
Susanna Loeb and the National Student Support Accelerator at Stanford have built an entire practitioner infrastructure around this same insight. Program design and implementation fidelity are not soft considerations — they are the primary variables determining whether a tutoring program produces outcomes worth paying for. Districts that ran programs with inconsistent attendance tracking, rotating tutors, and no systematic data collection are in a harder position. Not because tutoring doesn’t work, but because they cannot prove that their tutoring worked. That is the documentation gap, and it is what the rest of this post is really about.
The Funding Landscape That Actually Exists Post-ESSER
The honest answer about post-ESSER funding is that there is no single replacement for what was a one-time, large-scale federal investment. What exists is a set of durable federal streams, a genuinely uneven state policy landscape, and a philanthropic market that is active but structurally time-limited. Districts that sustain programs through this transition will do it through deliberate funding diversification, not a single solution.
Title I, Part A remains the most important and most underutilized vehicle for district-run tutoring. Tutoring is an allowable expense because Title I, Part A funds are explicitly intended to close the achievement gap between high- and low-performing students — and LEAs may use them to cover direct student services, including components of a personalized learning approach, which the law specifically names as potentially including high-quality tutoring. This requires a deliberate reallocation decision, not new dollars. Several states have already formalized this: Ohio, for example, dedicates three percent of its Title I allocation to tutoring programs. District leaders who haven't reviewed their Title I spending in the context of tutoring ROI are leaving viable money on the table.
Title IV, Part A (Student Support and Academic Enrichment) is a smaller but genuinely flexible stream. Tutoring is an allowable expense if it meets the program’s broad goal of providing student support and academic enrichment. Title IV-B, which funds 21st Century Community Learning Centers, also explicitly permits tutoring for students attending low-performing schools. These are not windfalls, but they represent stable federal authorization for tutoring as a legitimate program category — and that authorization survived the FY2026 federal budget process.
At the state level, the picture is variable and worth tracking closely. Tennessee remains the only state to incorporate high-impact tutoring into its permanent K–12 funding formula. Most others have relied on time-limited appropriations — which is both a risk and an opportunity. Louisiana’s experience illustrates both sides: the state mandated high-dosage K–5 tutoring for low-performing students and committed $30 million to the program in 2024–25. During the 2025 legislative session, the House voted to cut that funding, then the Senate restored it. Expansion to grades K–8 is now under consideration for 2026. That kind of year-to-year political volatility is characteristic of state tutoring funding right now. Connecticut, Massachusetts, and New York are all pursuing or renewing tutoring appropriations in their current legislative cycles. Whether any of that reaches your district depends on whether your state’s DOE can point to documented outcomes from programs like yours.
Philanthropy is real but limited. Private funders are actively interested in tutoring because they want to put resources where the evidence is strongest — but few are willing to serve as long-term structural revenue. The appropriate use of philanthropic dollars is to bridge gaps and fund rigorous program evaluation, not to substitute for a sustainable public funding stream. A local foundation grant that lets you document outcomes over two years has compounding value because it builds the case for Title I reallocation or general fund justification.
A District That Almost Cut the Wrong Thing
Consider a mid-sized district — roughly 14,000 students, a Title I-eligible population of about 60 percent, and a tutoring program that launched in fall 2021 with ESSER II funds. Over three years, the program grew to serve approximately 1,200 students per semester in grades 3–5 math and K–3 reading. The program coordinator was confident it was working. Teachers said so. Parents said so.
When ESSER deadlines arrived, the federal programs director pulled together a budget review. The program cost approximately $900,000 per year — a meaningful line item. The immediate instinct among some board members was to cut it entirely.
The problem: the program had almost no systematic outcome data. Session logs were kept inconsistently across buildings. Attendance records lived in three different spreadsheets maintained by three different coordinators. There was no cost-per-student figure anyone could produce in under a week. Without that data, the program coordinator could not make a defensible argument. She could only make an emotional one.
The board did not cut the program entirely, but it cut it by 40 percent — based not on what the data showed, but on the absence of any data to show. A program that was probably working lost nearly half its students.
This is the documentation gap in practice. Programs are not failing their students on impact. They are failing to survive budget cycles because they cannot produce the evidence that distinguishes a valuable program from an expenditure. And this is not an isolated story — it is the dominant pattern in districts across the country right now. The programs getting cut are not necessarily the weakest ones. They are often the ones that built the least infrastructure for proving their own value.
What Makes a Program Fundable — and What Makes It Vulnerable
There is a meaningful distinction between a program that is working and a program that can demonstrate it is working. The first might survive politically. The second will.
Whether you are pursuing Title I reallocation, a state grant, philanthropic support, or general fund justification, you will be asked the same questions. How many students were served? How frequently did they attend? What was the cost per student? What outcomes did you measure, and what did those outcomes show? These are not unreasonable questions. They are exactly what a responsible steward of public funds should ask.
A fundable program answers all four in under 24 hours. A vulnerable one cannot answer them without weeks of manual reconstruction.
Outcome data is foundational: pre- and post-assessments aligned to grade-level standards, disaggregated by student population, and accessible in a format that allows longitudinal comparison. Cost-per-student metrics require accurate session counts, tutor cost allocation, and the ability to separate program costs from general instructional spending. Session fidelity documentation means knowing not just that a session occurred, but its duration, tutor, content, and alignment with classroom instruction. Attendance records need to distinguish between enrollment and actual sessions attended — because a student enrolled in 60 sessions who attended 12 represents a very different program reality than aggregate numbers suggest.
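What those four data requirements look like at the record level is easier to see in a schema than in prose. Below is a minimal sketch of a per-session record; the field names are illustrative, not a reference to any particular product's data model:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TutoringSession:
    """One scheduled tutoring session -- the atomic unit of program evidence."""
    session_id: str
    student_id: str
    tutor_id: str            # consistent tutor assignment is checkable only if recorded
    scheduled_date: date
    attended: bool           # distinguishes enrollment from actual dosage
    duration_minutes: int    # 0 if the session never happened
    subject: str             # e.g. "grade 4 math"
    content_standard: str    # ties the session to core classroom instruction
    building: str            # lets you audit fidelity across coordinators
    cost_allocated: float    # program dollars attributable to this session

# A student "enrolled in 60 sessions" who attended 12 shows up here as
# 60 records with attended=True on only 12 of them -- a very different
# program reality than an aggregate enrollment count suggests.
```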
The Kraft/Schueler/Falken analysis makes the stakes concrete. Metro-Nashville Public Schools — which was largely successful at engaging students frequently and staffing its program with dedicated teacher-tutors — found medium effects on standardized test scores in reading. Districts that struggled with student attendance and staffing showed uniformly small or null effects in their first years of scaling. Attendance is not a soft metric. It is the primary predictor of whether your program produces what the research says it should, and it is the first thing any serious funder will ask to see.
The Systems That Make Programs Defensible
A program can know all of the above and still not have the operational foundation to act on it. That is the gap that determines whether districts manage their tutoring programs or react to crises created by them.
The practical question is whether your program generates usable data as a natural byproduct of daily operations — or whether data exists only when someone manually compiles it for a specific purpose. When scheduling, tutor assignment, attendance, assessment, and cost allocation happen in separate systems maintained by different people, the program's record is only as good as the last time someone cleaned it up. Most districts running ESSER-funded programs never built the connective infrastructure that turns operations into evidence.
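The difference between data as a byproduct and data as a reconstruction is mostly a matter of when the record gets written. Here is a minimal sketch of the byproduct pattern, assuming a hypothetical append-only session log; this is a generic illustration, not any vendor's API:

```python
import csv
from datetime import datetime
from pathlib import Path

LOG = Path("session_log.csv")  # hypothetical append-only program log

def check_in(session_id: str, student_id: str, tutor_id: str,
             duration_minutes: int, content_standard: str) -> None:
    """Record attendance at the point of service.

    The tutor's check-in IS the audit record -- nothing to reconstruct
    later from three coordinators' spreadsheets.
    """
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "session_id", "student_id",
                             "tutor_id", "duration_minutes", "content_standard"])
        writer.writerow([datetime.now().isoformat(), session_id, student_id,
                         tutor_id, duration_minutes, content_standard])
```

Whether the destination is a CSV file, a database, or a tutoring management system, the design point is the same: the operational act and the evidentiary record are one step, not two.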
A tutoring management system — purpose-built for districts running their own programs — is the mechanism through which those operational records become a coherent, auditable program history. When a board member asks whether the program is producing results in the targeted student population, that question should not require a two-week data pull. When a grant application asks for cost-per-student figures for the past two years, that figure should take minutes to produce, not a staff member’s weekend.
Sierra TMS was designed specifically for this — not as a reporting layer bolted on after the fact, but as the operational core through which districts run programs they own and can account for. The districts that close the documentation gap are not districts with more resources. They are districts that decided early to treat program infrastructure as a non-negotiable investment, not an optional line item.
This is not an argument about software for its own sake. A program run on good systems but poor design will still underperform. But a program run on good design with no operational infrastructure is one that cannot survive its first serious budget challenge — regardless of how well it was working.
A Self-Assessment for District Leaders
Before your next budget cycle, your next board presentation, or your next grant conversation, work through these six questions. They will tell you whether your program is fundable or vulnerable — and where the gaps are. A short code sketch after question six shows how three of them reduce to a few lines of computation once the underlying records live in one place.
One: Can you produce a cost-per-student figure for the current program year in under 24 hours? This requires accurate session data, tutor cost allocation, and enrollment numbers that live in the same place. If it takes longer than a day, you have a data infrastructure problem.
Two: What is your actual attendance rate — not enrollment, but sessions attended as a percentage of sessions scheduled? If you don’t know this number, you don’t know whether your dosage model is being implemented. Dosage is the primary design variable in every evidence-based tutoring framework.
Three: Do you have pre- and post-assessment data for your current cohort, disaggregated by grade level and subject? Aggregate-only outcomes data cannot answer which students the program serves best — which is the first question any sophisticated funder will ask.
Four: Can you document session fidelity — duration, tutor, content, alignment to classroom instruction — for a random sample of 50 sessions from last semester, without contacting multiple building coordinators? If you can’t do this cleanly, your fidelity documentation is not functional.
Five: Has your district documented a legal and programmatic rationale for using Title I, Part A or Title IV-A funds for tutoring? These streams are available, but they require affirmative justification — not assumption.
Six: Does your district have a written continuity plan for this program if its current funding source disappears? A plan that hasn’t been committed to paper doesn’t exist.
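As noted above, questions one, two, and four reduce to a few lines of computation once per-session records exist in one place. A sketch, assuming records shaped like the schema earlier in this post:

```python
import random

# Assumes per-session records shaped like the schema sketched earlier,
# gathered for the current program year into one list of dicts.

def cost_per_student(sessions: list[dict]) -> float:
    """Question one: total allocated program cost over distinct students served."""
    total_cost = sum(s["cost_allocated"] for s in sessions)
    students_served = {s["student_id"] for s in sessions if s["attended"]}
    return total_cost / len(students_served)

def attendance_rate(sessions: list[dict]) -> float:
    """Question two: sessions attended as a share of sessions scheduled."""
    return sum(1 for s in sessions if s["attended"]) / len(sessions)

def fidelity_sample(sessions: list[dict], n: int = 50) -> list[dict]:
    """Question four: a random sample of attended sessions for a fidelity audit."""
    attended = [s for s in sessions if s["attended"]]
    return random.sample(attended, min(n, len(attended)))
```

If assembling the sessions list itself takes weeks, the computation was never the problem; the infrastructure was.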
Score yourself honestly. Every question you cannot answer today is a gap that a funder, a board member, or an auditor will eventually find for you.
In 2024, tutoring was a luxury funded by the federal government. In 2026, it’s a core intervention that has to fight for its life in the general fund. If you can’t produce a cost-per-student figure in 24 hours, you aren’t running a program — you’re running a risk.
If you’re working through these questions and want an honest outside assessment — what data you have, what you’re missing, and what it would take to close the gap — Sierra TMS offers a no-obligation program diagnostic for district administrators. It is not a sales call. It is an hour that will leave you with a clearer, more specific picture of what you’re building and what it needs to last.
SEO & Distribution Notes
Target keyword: post-ESSER tutoring funding districts
Secondary keywords: tutoring management system K-12, high-impact tutoring funding, ESSER cliff tutoring, Title I tutoring allowable expenses, tutoring program sustainability districts, high-dosage tutoring ROI, tutoring session fidelity documentation, district tutoring documentation gap
Meta description: ESSER funds are gone and district tutoring programs are at risk — not because they don’t work, but because most can’t prove they do. This guide walks administrators through the real post-ESSER funding landscape, the research on tutoring at scale, and what it takes to build a program that survives budget cycles.
Recommended internal links:
Distribution notes:
Primary: LinkedIn, targeting district administrators, directors of federal programs, curriculum directors, and superintendents. The self-assessment format makes it highly shareable within professional networks.
Secondary: Direct email to district contacts; the six-question diagnostic creates a natural reason to forward internally.
Editorial placement: The non-promotional tone and research citations make this suitable for the AASA newsletter, ASCD SmartBrief, or The 74 as a guest contribution.
Timing: High relevance through spring 2026 budget cycles; prioritize promotion January–April window.
Potential syndication partner: National Student Support Accelerator at Stanford (they publish district-facing practitioner content; this post aligns with their mission and cites their work).
Do not promote via Pinterest or visually-oriented channels — audience and content are mismatched.