
Preschool Matters Today

Concerning English Learner Policy Trends in States’ New School Accountability Systems


May 16, 2017
Conor P. Williams

The Center on Enhancing Early Learning Outcomes (CEELO) is proud to partner with New America on this blog series highlighting early learning opportunities and challenges under the Every Student Succeeds Act (ESSA). This post originally appeared May 11, 2017 on the New America site.

Regular readers know that New America’s Dual Language Learners National Work Group has been keeping a close eye on the genesis, passage, and early implementation of the Every Student Succeeds Act (ESSA). We’ve analyzed how the law, the primary federal legislation governing elementary and secondary education, changes No Child Left Behind’s (NCLB) English learner (EL) policies.

The past month has kicked off another round of implementation and, as such, has opened up new opportunities for exploring how ESSA will impact ELs: states have begun submitting their consolidated plans for implementing the law. Under ESSA, these plans serve as states’ efforts to show that they are complying with federal requirements and thus should receive their shares of federal elementary and secondary education funding. While ESSA maintains many civil rights protections for ELs, it dramatically decreases the federal government’s ability to shape state education systems’ approaches to EL data and accountability.

So: how are states using their newfound EL policy flexibility? Here are a few worrying trends.

Concerning Trend: ELs, Standardized Assessments, and Academic Proficiency

ESSA requires states to track how each school’s ELs are performing on math and literacy assessments. It also requires states to set goals for raising that performance over time. Most of the states that have submitted ESSA plans so far are using some sort of escalating proficiency targets.

For instance, during Washington, DC’s 2015–16 school year, just 14.8 percent of ELs in grades 3–8 scored proficient or higher on the District’s literacy assessment. Using this as a benchmark, the District set a goal that all schools would increase their percentage of ELs scoring proficient or higher by 3.1 percentage points each year. If all goes well, this will have 85 percent of ELs scoring at that level by 2038–39.
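The escalating-target arithmetic above can be sketched in a few lines. The 14.8 percent baseline, 3.1-point annual increase, and 85 percent goal come from DC’s plan; the function name and structure are illustrative, not anything specified in the plan itself.

```python
# Hypothetical sketch of an escalating proficiency target: a fixed baseline
# rises by a fixed number of percentage points each school year until it
# reaches the goal. DC's figures are used; the code itself is illustrative.

def years_to_goal(baseline_pct: float, annual_gain_pct: float, goal_pct: float) -> int:
    """Whole school years of annual gains needed for the target to reach the goal."""
    years = 0
    target = baseline_pct
    while target < goal_pct:
        target += annual_gain_pct
        years += 1
    return years

years = years_to_goal(14.8, 3.1, 85.0)
print(years)         # 23 years of 3.1-point gains
print(2015 + years)  # 2038, i.e. the 2038-39 school year
```

Running the numbers this way confirms the timeline in DC’s plan: roughly 23 years of steady 3.1-point gains separate the 2015–16 baseline from the 85 percent goal.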

This approach to ELs and school accountability isn’t just fancifully ambitious; it’s potentially distorting. ELs are defined in ESSA (as they were in NCLB) as those students whose English language skills “may be sufficient to deny the individual…the ability to meet the challenging State academic standards” (ESSA, Sec. 8101 (20)). So if state education agencies (SEAs) are required to identify ELs as those students who may not be able to demonstrate skills on academic content assessments, it is generally unproductive for them to set high content proficiency expectations for these students while they are in the group. As ELs near English language proficiency, they generally perform better on content assessments. As they reach English language proficiency (ELP) and exit the group, their scores generally continue improving. As such, instead of setting high subgroup proficiency goals for ELs, states should explore ways of using growth or scale score models to link ELs’ performance on academic content assessments to their ELP levels (see, e.g., Kieffer, M., Lesaux, N., & Snow, C. E. (2008). “Promises and pitfalls: Implications of No Child Left Behind for defining, assessing, and serving English language learners.” In G. Sunderman (Ed.), Holding NCLB Accountable: Achieving Accountability, Equity, and School Reform).

In other words, while it is possible to raise ELs’ proficiency levels, it is inappropriate to expect that they should simultaneously be 1) the students whose English language abilities prevent them from demonstrating academic knowledge on content assessments and 2) students who should be demonstrating academic proficiency at an 85 percent clip.

Unfortunately, DC’s case is closer to the rule than the exception. While timelines and specific academic proficiency goals vary by state, many states are setting unreasonable proficiency goals for their ELs. For instance, while Nevada’s academic goals sound more modest (49.8 percent of ELs scoring proficient, up from a 2015–16 baseline of 31.7 percent), the state gives schools only until 2022 to pull it off.

Concerning Trend: English Language Proficiency, New Tests, and Accountability Blackouts

ESSA also requires states to use standardized assessments to measure ELs’ development of ELP. It also requires states to set goals for getting more ELs on track to becoming proficient in English within a state-determined timeline. In most cases, states take their definition of full proficiency (e.g., a 5.0 on their ELP assessment), subtract each student’s initial ELP score (e.g., a 1.0 or 2.0), and then divide the gap between the two by a fixed number of years. So, in this case, a student initially scoring a 2.0 on the ELP test might be given three years to reach 5.0 (proficiency). Adequate ELP growth for that student would be one level each year.
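The gap-divided-by-years calculation described above is simple enough to sketch directly. The 5.0 proficiency cutoff and three-year timeline are the example values from the text; actual cutoffs and timelines vary by state, and the function is illustrative.

```python
# Hypothetical sketch of the "growth to target" arithmetic: divide the gap
# between a student's initial ELP score and the proficiency cutoff evenly
# across the years the state allows. Values match the example in the text.

def adequate_annual_growth(initial_score: float,
                           proficient_score: float = 5.0,
                           years_allowed: int = 3) -> float:
    """Annual ELP growth needed to close the gap to proficiency on time."""
    return (proficient_score - initial_score) / years_allowed

print(adequate_annual_growth(2.0))  # a student starting at 2.0 needs 1.0 level/year
```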

Schools are then held accountable on these metrics: most states using such models are setting targets for increasing the percentage of a school’s ELs making their adequate growth each year to stay on target for reaching full ELP within the state’s timeline.

This is generally an improvement on prior systems. Most of these “growth to target” models allow ELs some flexibility in their paths to full proficiency, while keeping attention on their progress. In most cases, if a student rapidly progresses through the early levels of a state’s ELP assessment, but then slows down as she nears full proficiency, she can bank that early growth towards those later years so that the accountability system still counts her as on track to become proficient within the state’s timeline. That is, if a student makes two levels of ELP growth in her first year (twice her annual goal), but only half a level of ELP growth in her second year (half her annual goal), she would still be counted as making adequate progress toward English proficiency. Even though she made only 0.5 levels of growth in her second year, she still would have moved 2.5 levels in two years–more than the 2.0 levels that her state expected her to make.
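The banked-growth logic above amounts to comparing cumulative growth against a cumulative expectation, rather than checking each year in isolation. A minimal sketch, with illustrative names not drawn from any state’s plan:

```python
# Hypothetical sketch of "banked" ELP growth: a student is on track if her
# total growth so far meets the total expected for the years elapsed, no
# matter how unevenly that growth was distributed across those years.

def on_track(annual_growth: list[float], expected_per_year: float) -> bool:
    """True if cumulative ELP growth meets the cumulative expectation."""
    return sum(annual_growth) >= expected_per_year * len(annual_growth)

# Two levels of growth in year one, half a level in year two:
print(on_track([2.0, 0.5], expected_per_year=1.0))  # 2.5 >= 2.0 -> True
```

A year-by-year check would flag this student as behind in year two; the cumulative check is what lets her bank the fast early progress, as the text describes.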

That’s the good news. But there’s a lot more to analyze about how states are prioritizing ELs’ growing English skills in their accountability systems.

For instance, many states report that they are in the midst of switching their ELP assessments. As a result, some argue that they cannot set ELP progress targets for schools until they have at least several years of data on how ELs perform on these new assessments. Some states are proposing to exclude ELs’ progress towards proficiency from schools’ ratings entirely (until they set benchmarks). Other states are using ELP progress benchmarks from their current ELP assessments to set provisional school targets that they plan to revise in future years (after collecting data from the new tests).

There is no perfect solution to this challenge. States should be cautious about using old tests to set school-level goals for the percentage of students who make adequate progress towards English proficiency. But EL advocates should worry that many states are using this as an excuse to delay developing a system for measuring ELP progress, or to remove the ELP indicator from schools’ accountability ratings entirely.

Illinois appears to have come the closest to addressing these tensions responsibly. The state is setting provisional goals for schools and has promised to update them as data from the new test roll in. What’s more, it has suggested that it may retroactively adjust schools’ accountability scores once these new data become available. This strikes a balance between using ELP assessment data responsibly in goal-setting and avoiding an ELP accountability blackout for schools.

Concerning Trend: The Relatively Small Weight of ELP Progress

This brings up another challenge in states’ ESSA plans: most appear to be making ELs’ progress towards learning English a very small part of schools’ accountability scores. In almost every case, states are setting the ELP indicator as either five or ten percent of a school’s rating. (Note: New Jersey is a notable exception. The state is counting ELP progress as 20 percent of schools’ scores.)

The ELP indicator provides an interesting test case for one theory of EL policy reform that wound up in ESSA. During ESSA’s genesis (and since), many EL advocates pushed to move ELP accountability out of ESSA’s EL-focused funding stream (Title III) and into its core funding stream (Title I). The goal is to make ELs more central to educators’ accountability-driven decisions via two mechanisms: (1) Title I is a much bigger pot of money (around $15 billion annually, compared to Title III’s roughly $750 million), and (2) ESSA moves EL accountability from the district level to the school level.

The danger of this theory of reform, however, is that it could bury accountability for ELs’ growing language skills. ESSA’s Title I accountability covers lots of academic and non-academic educational priorities, so when states design a system to hold schools accountable for how they use Title I money, they have a lot to include (and balance). As I put it in a 2015 post on ESSA,

The challenge with this move is to make sure that this relatively small group of students remains prominent enough. These students got their own funding stream and accountability system in NCLB in part because they’ve historically been overlooked in broader efforts to improve educational equity.

Here’s the operational problem: states designing ESSA accountability systems are signaling which equity priorities schools (and districts) should focus on. In many of these states, school-wide performance on academic content assessments counts for 30–40 percent of a school’s rating. If a system sets students’ ELP progress at just five or ten percent of a school’s score, improvement on the ELP indicator is unlikely to be central to raising that school’s overall score.
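The weighting concern above is easy to see with a toy calculation. The weights below are illustrative, not drawn from any particular state’s plan; the point is that a 5 percent weight caps how much even dramatic ELP improvement can move a school’s overall rating.

```python
# Hypothetical illustration of indicator weighting: with the ELP indicator
# weighted at 5 percent, even a 50-point jump on it shifts the school's
# overall score by only 2.5 points. Weights and scores are made up.

def overall_score(indicator_scores: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted average of indicator scores (each on a 0-100 scale)."""
    return sum(indicator_scores[k] * weights[k] for k in weights)

weights = {"academic": 0.40, "growth": 0.35, "other": 0.20, "elp": 0.05}
before = overall_score({"academic": 60, "growth": 60, "other": 60, "elp": 20}, weights)
after = overall_score({"academic": 60, "growth": 60, "other": 60, "elp": 70}, weights)
print(after - before)  # a 50-point ELP gain moves the overall score by ~2.5 points
```

By contrast, a much smaller gain on a 40-percent-weighted academic indicator would move the overall score just as far, which is exactly the incentive problem the text describes.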

Now What?

There’s no dancing around it: analysis of the first round of state ESSA plans leaves much cause for concern for EL advocates, both in terms of the general patterns outlined above and other, less common policy experiments. For instance, Connecticut is proposing to combine ELs into a larger subgroup of “high needs” students, including students with disabilities and students from low-income families. This approach is likely to dilute schools’ focus on ELs. And Delaware is seeking to delay the assessment of recently-arrived ELs for at least one year beyond what ESSA allows. This could lead schools to de-prioritize these students for at least two, if not three, years.

As I’ve written before, ESSA’s thinning of federal equity protections makes it essential that EL stakeholders, advocates, and researchers pay careful attention to the details of state policy decisions. States are unlikely to hold themselves accountable for prioritizing these students’ needs.

This work is only just beginning: It’s still very early days for ESSA implementation. Well over half of states have yet to submit final plans to the Department of Education. Stay tuned.

Conor P. Williams is Founding Director, Dual Language Learners National Work Group and a senior researcher in New America’s Education Policy Program. His work addresses policies and practices related to educational equity, dual language learners, immigration, and school choice. Williams founded New America’s Dual Language Learners National Work Group in 2014.