We are a country that loves to rank EVERYTHING! Read any major website or USA Today and you will often find stories that rank the ten best cities, states, countries, travel spots, etc. We also love to rank schools. States rank them and various organizations rank them. I have even ranked them myself.
I don’t particularly like to rank schools for three major reasons: First, because there is only so much that data can tell you about a school. To truly know what a school is about, you have to be INSIDE the school and talk to teachers, students, parents, and administrators.
Second, rankings use only the readily available data about schools and thus tend to focus on test scores to the exclusion of other important measures. Most knowledgeable people would agree test score data don’t tell you anywhere near the whole picture about a school. Importantly, test score data such as TAKS do not tell you whether a school used valuable instructional time simply to prepare students for the TAKS. A school that focuses exclusively on TAKS preparation is not the same quality of school as one that does not. Indeed, we know that excessive test preparation does not lead to REAL learning for students, but leads only to the ability to do well on a particular multiple-choice test.
Third, most organizations do a terrible job ranking schools. Why? Most organizations do not employ researchers who have the knowledge, skills, and experiences to accurately rank schools. The problem here is that many of the individuals producing these rankings do not understand how to control for factors outside the control of a school. By failing to do so, the rankings inaccurately identify some schools as “successful” and other schools as “failures.”
This is exactly what happens with the Children At Risk (CAR) school rankings that are so highly publicized by media across the state. If such publicly available rankings didn’t hurt the educators in the schools that often help students learn, I wouldn’t particularly care about these rankings. But I know teachers in schools that end up ranked near the bottom suffer from such rankings when, in their hearts, they know they are helping kids learn. And this is not a case where I think Children At Risk staff intentionally set out to produce inaccurate rankings. Indeed, I have seen them adapt their methodology over time. They even met with me this spring to listen to my suggestions on how to improve their rankings. But that does not undo the damage to real people in real schools.
This post is for every teacher, administrator, and other staff member of the schools that truly help kids improve, but get labeled as failures or losers by Adequate Yearly Progress (the federal accountability system), the state accountability system, and rankings such as those prepared by Children At Risk.
Why the Children at Risk (CAR) Rankings (and Texas Accountability Ratings) are Inaccurate
The major reason that the CAR rankings are inaccurate is that the ranking methodology fails to adequately adjust for factors outside the control of a school. Granted, they do adjust for the percentage of students participating in the free-/reduced-price lunch program, often referred to as the percentage of economically disadvantaged students. However, as Bruce Baker points out (see http://schoolfinance101.wordpress.com/2011/04/27/research-warning-label-analysis-contains-inadequate-measurement-of-student-poverty/), two schools can have the same percentage of economically disadvantaged students but a dramatically different percentage of students participating in the free lunch program. These data are available for free download from the Texas Education Agency and could have been used in the analysis.
Further, the rankings do not control for a host of other factors that influence student achievement that are outside the control of educators at the school. Again, why should schools be labeled for something they have no control over?
Factors Influencing Achievement Outside the Control of Schools
Research generally finds that all student demographic factors tend to be associated with student outcomes. Specifically, researchers generally find that factors such as the percentage of African American, Hispanic, White, Asian, and Native American students are all associated with student outcomes. The reasons for these associations are complex and do NOT mean there is any inherent deficiency in one group compared to another. Rather, these factors are likely serving as proxies for other factors we have no data about, such as parental level of education, community safety, discrimination, access to health care, access to nutritional food, exposure to pollution, etc. Regardless of the underlying reasons, the point here is that Children At Risk should control for these factors when ranking schools.
Let’s look at these factors by CAR ranking deciles (the high schools were placed into ten different groups based on the CAR rankings, with Group 1 having the lowest CAR ranking and Group 10 having the highest CAR ranking).
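For readers who want to see the mechanics, here is a minimal sketch of this decile grouping in Python. The school names and scores are invented for illustration; they are not the actual CAR data.

```python
# Sketch of the decile grouping described above, using made-up scores.
# Higher score = higher CAR-style ranking. Neither the names nor the
# scores come from the actual CAR data.

def assign_deciles(scores):
    """Sort schools by score and split them into 10 equal-size groups.

    Group 1 holds the lowest-scoring tenth, Group 10 the highest.
    Assumes len(scores) is a multiple of 10 for simplicity.
    """
    ranked = sorted(scores, key=scores.get)  # lowest score first
    size = len(ranked) // 10
    return {school: (i // size) + 1 for i, school in enumerate(ranked)}

# 20 hypothetical schools with scores 1..20
schools = {f"School {n}": n for n in range(1, 21)}
deciles = assign_deciles(schools)
print(deciles["School 1"])   # lowest score -> Group 1
print(deciles["School 20"])  # highest score -> Group 10
```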
We see below that schools with higher rankings have significantly lower percentages of students participating in the free-/reduced-price lunch program and the free lunch program. In a regression analysis, both factors were highly statistically significant and negatively related to student outcomes.
As shown below, the same pattern holds for minority (African American and Hispanic/Latino) students. Thus, again, the rankings result in a sorting of schools based on student factors beyond the control of the school.
Recent research has also shown that student mobility has negative effects on the individual student’s achievement and that high student mobility at the school level has a negative effect on school-level achievement. Again, CAR failed to include this variable in the analysis. Now, some would argue that student mobility is partially under the control of a school. Specifically, some argue that students leave a school because they do not like it and want to enroll in another school. That may be true to some degree, but studies and anecdotal reports from superintendents and principals in high-poverty urban areas indicate that mobility is often driven by poverty and that many poor families move frequently from one place to another because of the lack of affordable housing in most cities.
As shown below, mobility rates are inversely related to the CAR rankings, with higher ranked schools having lower percentages of mobile students and lower ranked schools having greater percentages of mobile students.
Let’s look at the percentage of incoming 9th grade students in 2006 who scored below 2200 on the TAKS math and reading tests as 8th grade students in 2005.
We see above that schools ranked near the top (Deciles 8 through 10) have lower percentages of students who scored below 2200 in 8th grade than schools ranked near the bottom (Deciles 1 through 3). In other words, the prior test performance of the students entering highly ranked schools was greater than the prior test performance of students in schools ranked near the bottom. In fact, this was one of the strongest predictors of the percentage of students college-ready (and of the percentage passing TAKS math and reading in 11th grade, the percentage going to college, average SAT and ACT scores, etc.).
Do the high schools have control of this situation????
Only if the school requires students to apply to get into the school. Thus, magnet schools, early college schools, and high-performing charter schools like KIPP, YES, and IDEA have built-in controls for prior test performance. KIPP and others argue this is inaccurate, but the lowest-performing students do NOT enter such schools, and the percentage of the overall student body that could be described as academically struggling is NOT the same in such charter schools as in the local comprehensive schools. Data from Texas backs me up on this point.
So, the high schools have no control over the prior test performance of incoming students, the percentage of poor students, the percentage of minority students, or (to some degree) the percentage of mobile students entering the school. All of these variables have a statistically significant association with outcomes such as college readiness, average SAT score, graduation rate, attendance, and pretty much every other measure CAR uses to rank schools. Yet CAR does NOT adjust the rankings using these factors.
To a large degree, by not adjusting for these factors, the CAR rankings simply sort schools based on the characteristics of the students entering the school. Schools have NO CONTROL over who comes through their doors. Sorting schools based on student characteristics does not help parents, educators, and policymakers identify schools that are effective in improving student outcomes; it only creates more problems for the schools that are already under-resourced given their student populations.
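To make the idea of "adjusting for factors outside a school's control" concrete, here is a minimal sketch: regress the outcome on one demographic covariate and rank schools by their residual, i.e., how far above or below the expected outcome each school lands given its student population. The five schools, their percentages, and the single-covariate model are all invented for illustration; a real adjustment would include the full set of factors discussed above.

```python
# Minimal sketch of a regression-adjusted ranking: regress an outcome
# (e.g., percent college-ready) on one factor outside the school's
# control (percent economically disadvantaged), then rank schools by
# their residual -- how much better or worse they did than the model
# predicts for their student population. All numbers are invented.

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (school, % economically disadvantaged, % college-ready) -- hypothetical
data = [("A", 90, 35), ("B", 85, 20), ("C", 40, 55), ("D", 15, 70), ("E", 10, 60)]
xs = [d[1] for d in data]
ys = [d[2] for d in data]
slope, intercept = fit_line(xs, ys)

# Residual = actual outcome minus the outcome predicted from demographics.
residuals = {name: y - (slope * x + intercept) for name, x, y in data}
adjusted_ranking = sorted(residuals, key=residuals.get, reverse=True)

print(adjusted_ranking)  # School A, serving the poorest students, ranks first
```

An unadjusted ranking would simply sort on the raw outcome and put the low-poverty schools on top; the residual-based ordering instead credits schools that beat the expectation for their student population.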
Again, I don’t think CAR did this on purpose since they tried to control for poverty, but when reliable sources tell me that suggestions about improving the methodology had been made to CAR a few years back, you start to wonder. Regardless, the media–which has an addiction to rankings stronger than an addiction to meth–lapped it up and printed the rankings without considering whether the rankings were even accurate.
What makes me angry is that the CAR rankings–and the NCLB and state accountability ratings–never take these factors into account, nor do they look at GROWTH. They simply rank and rate schools on status measures at a single point in time and then bonk the educators on the head and say, “Bad educators–maybe we should fire you!” Yet, some of the lower-ranked, academically unacceptable schools elicit greater GROWTH from their students than the highly ranked, exemplary schools. But no one knows because the Texas accountability system and various rankings systems put forth by different organizations fail to recognize school effectiveness and instead focus only on measures highly correlated with factors outside the control of a school.
One result of this stigma-inducing ranking system designed by CAR and the state legislature (Texas accountability system) is that the low-ranked schools have greater teacher and principal turnover and have a more difficult time attracting well-prepared educators. These schools would already have a hard enough time, but CAR and the Lege piling on certainly does not help matters.
So, lower ranked schools are disadvantaged by greater teacher turnover and a greater percentage of under-qualified teachers, both of which have been shown to be associated with lower student achievement. I am not aware of a single district in Texas that has successfully addressed this issue. Many are trying and some are showing positive signs, but NO ONE has solved this problem.
And, even though the schools with lower rankings have students who start far behind their peers in other schools, they receive the same amount of money as higher ranked schools as shown below.
How, exactly, is a school with a large percentage of students scoring well below college-readiness levels when they enter the 9th grade supposed to equalize school outcomes with more advantaged schools when we don’t provide them any extra money or support?
So, let me get this straight for everyone. The lowest ranked schools have more poor, mobile, minority, and limited English proficient students and also have greater teacher turnover, more under-qualified teachers, less experienced teachers (not shown), greater principal turnover (not shown), and less experienced principals (not shown), but we don’t provide them any more money to compensate for these realities. But we hold them accountable for meeting the same standards as all other schools.
Does that sound fair to you? I didn’t think so!
We should be investing heavily in these schools and in the communities. We must simultaneously address the issues of poverty and access to opportunities to learn in school. We must make a far greater effort to make sure that the kids who need the most help actually GET the most help.
Ranking schools and holding schools accountable while not acknowledging the factors affecting achievement outside of the control of schools simply makes matters worse, not better.
Isn’t an effective school one that IMPROVES student outcomes? So, at the very least, examine growth when ranking schools if you are not going to examine which schools perform better than expected.
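The status-vs-growth distinction can be sketched in a few lines: the same two hypothetical schools order differently depending on whether you rank by the current-year score alone or by the change from the prior year. The school names and scores below are invented.

```python
# Two ways to order the same hypothetical schools. A "status" ranking
# uses only the current average score; a "growth" ranking uses the
# change from the prior year. All scores are invented.

# (school, prior-year average score, current-year average score)
schools = [("Lowtown HS", 2000, 2150), ("Hilltop HS", 2300, 2310)]

status_ranking = sorted(schools, key=lambda s: s[2], reverse=True)
growth_ranking = sorted(schools, key=lambda s: s[2] - s[1], reverse=True)

print([s[0] for s in status_ranking])  # Hilltop first: higher current score
print([s[0] for s in growth_ranking])  # Lowtown first: +150 points vs +10
```

A status-only system calls Hilltop the better school every time, even though Lowtown moved its students 15 times as far in the same year.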
As shown below, the CAR rankings are correlated with school growth on TAKS using the FAST study’s measure of student progress.
BUT–as shown in the upper-left hand corner, there are some schools that elicit TREMENDOUS GROWTH from their students, yet are still ranked low by CAR. And, as shown in the lower right-hand corner, there are some highly ranked schools that elicit little growth from their students. Please explain to me why the schools eliciting high growth should be ranked lower than the schools eliciting low growth.
Finally, I am NOT saying we should lower standards! All schools should have the same high standards. I am saying that these factors outside the control of schools and student PROGRESS should be considered when ranking schools and determining how well schools are doing in improving outcomes. Only then will we recognize effectiveness rather than the characteristics of the students entering the school.