The "stars" school rating system has a fatal flaw

Bruce Maples

Imagine this scenario: It’s the first day of the fall semester, and you go to your algebra class. The teacher hands out a syllabus, and you notice there’s nothing about the grading scale.

So you ask what the grading scale is, and the teacher says “Oh, we decide that at the end of the semester. We average all the grades for all the students, then we give an F to the bottom 5%, no matter what their scores are.”

Just to make sure you understand, you ask “So if I make a 95 for the semester, I could still fail?” And the teacher says “If that score is in the bottom 5%, absolutely.”

Sounds absurd, right? But guess what – this is exactly how the state’s “stars” program for rating schools works.

In other words, it’s not a rating – it’s a contest, and 75 schools HAVE to lose every year.

Or as one school board member put it, “It’s the Hunger Games for schools.”


A quick geeky lesson

OK, we’re going to take a moment to give you a quick education stats lesson. Trust me, we’ll keep it short and maybe-sweet.

What we’re talking about here is the difference between criterion-referenced and norm-referenced grading.

Criterion-referenced is when standards (“criteria”) are set up, and some group of things (learners, workers, schools, businesses, whatever) are judged against those standards. If you meet all the standards, you get a high score.

In a criterion-referenced classroom, every student could get an A. Or, every student could get an F. The criteria are known up front, and it’s clear what you have to do.

Norm-referenced, on the other hand, waits until all the scores are in, then distributes them along a normal distribution (the infamous “bell-shaped curve”). Those in the middle of the distribution get a C, those at the top get an A, and those at the bottom get an F.

So, for example, if you score a 95 on your final exam, and everyone else scores a 96, you just made an F on that exam.

What Kentucky is doing with the star ratings for schools

The Kentucky system uses a number of factors to come up with a final score for a school: test scores, year-to-year growth, achievement gaps, and so on. All of those are spelled out for the schools ahead of time, so everyone knows what’s expected. So far, so good.

But then, the state Department of Education takes all the scores from all the schools and lists them top to bottom. And here’s the fatal flaw, and why this article was written:

No matter what the scores are, the bottom 5% are given one star AND designated CSI schools (Comprehensive Support and Improvement). And those schools automatically lose their site-based decision-making council, and are essentially “taken over” by the state.

In other words, there is a semblance of a criterion-referenced approach – but in the end, a norm-referenced approach is used to assign the stars, and it doesn’t matter what you did in terms of achieving your goals; it’s all a contest with every other school in the state.
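For the numerically inclined, the mechanism described above can be sketched in a few lines of Python. The school names and scores below are invented for illustration, and the real accountability formula is far more involved; the point is only that the bottom-5% rule is applied to the *ranking*, not to the scores themselves:

```python
def flag_bottom_five_percent(scores):
    """Return the schools in the bottom 5% of the ranking,
    no matter how high their absolute scores are."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1])  # lowest score first
    cutoff = max(1, round(len(ranked) * 0.05))  # at least one school must lose
    return {name for name, _ in ranked[:cutoff]}

# Hypothetical example: twenty schools, every one scoring 90 or better.
scores = {f"School {i}": 90 + (i % 10) for i in range(20)}
flagged = flag_bottom_five_percent(scores)
print(flagged)  # one school is flagged even though it scored 90+
```

Even with every school scoring 90 or above, one of them is still flagged, because the cutoff is defined by rank rather than by any criterion.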

How to fix this

Look, there is nothing wrong with grading schools, especially if that grade is based on a variety of factors AND is not racially or culturally biased in some way. Having clear standards to strive for and meet, assuming they are educationally sound, is a good idea.

And if you start getting “grade inflation,” then you get the stakeholders together and discuss the standards to see if they need to be adjusted.

But using a normal distribution to assign stars so you can guarantee a certain number of 1-star schools and a certain number of 5-star schools is misleading, counterproductive, and demoralizing. It spreads a patina of fairness and formality over a misguided approach, and misses the opportunity to actually help schools improve.


The better solution would be to have clear criteria for each rating category, with a well-designed rubric of achievement. Each school gets graded independently, without reference to any other school. If it makes an A, great! If it makes a C or a D, then time to get to work. And if it makes an actual F, then get the outside help started.

The goal, of course, would be to have every school be an A school. Isn’t that what we would want? If the criteria and the rubric are well-designed, that probably wouldn’t happen; we’d have a goodly number of B and C schools, I suspect.

But we wouldn’t have a school’s grade dependent on how it compared to every other school in the state – only how it compared to the standards.

We already do this in other parts of our schools

When I was a high-school band director, we went to two kinds of events: contests and festivals.

  • At contests, you got a place: 1st, 2nd, 3rd.
  • At festivals, you got a rating: a I, or a II, or a III.

At the contest, only one band could win 1st. But at the festival, any band that played well enough could get a I rating.

It’s time to make our school stars system a rating, and not a contest, and stop making 5% of our schools automatic losers.

