Principle 3: Put faces on the data

 

Good educational data will capture key attributes of the work of each student, educator, classroom, school, and district, not just a typical or average result.

It is not enough to say “on average.” We need to know how many and which students, teachers, schools, and districts are performing at different quality levels.

We need to know:

  • How well is each student being served by their school?
  • How many and which of its classrooms have strong and weak instructional practices?
  • How many and which teachers are receiving the supports they need to create good and great classrooms?
  • What conditions in this school system are contributing to or undermining the quality of its students’ and educators’ work?

We need this knowledge not to chastise or reward but to change for the better at all levels of the system.

One clear violation of this principle is to report test scores as averages rather than as percentages of students performing at different levels, a mistake that No Child Left Behind sought to rectify. This method masks differences between student sub-groups and between individual students. We err similarly when average daily attendance (ADA) is used as an indicator of school quality. ADA fails to reveal how many and which students attend regularly and which attend sporadically, information critical to planning targeted interventions or strategies to improve attendance.

We need to think about student attendance differently. Good local and national research shows the levels at which attendance most strongly predicts other critical quality indicators, such as achievement, persistence, and graduation. We calculate each student’s rate of attendance over a given period, then compare it against quality thresholds: “Johnny is attending 80% of school days, and that’s a level that puts him at risk”; “Tonya is attending 92.5% of the time, and that puts her at optimal levels.” These data points also reveal essential information for a grade level, a school, or a district: “Thirty percent of 9th graders are attending at levels that put them at risk and only 20% at optimal levels.”
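
As a rough sketch of this approach, the snippet below classifies a handful of invented attendance rates against cutoffs that simply echo the examples above (80% or below as at risk, 92.5% or above as optimal). The names, rates, and exact thresholds are illustrative, not drawn from any school’s actual data.

```python
# Illustrative sketch only: classify each student's attendance against quality
# thresholds instead of reporting a single average. The cutoffs echo the
# examples in the text; a real system would set them from local and national research.
from collections import Counter

AT_RISK_MAX = 0.80    # attending 80% of days or fewer -> at risk
OPTIMAL_MIN = 0.925   # attending 92.5% of days or more -> optimal

def attendance_level(rate):
    if rate <= AT_RISK_MAX:
        return "at risk"
    if rate >= OPTIMAL_MIN:
        return "optimal"
    return "in between"

# Hypothetical 9th-grade attendance rates for the period in question
ninth_graders = {"Johnny": 0.80, "Tonya": 0.925, "Maria": 0.70, "Dev": 0.98, "Lena": 0.88}

levels = {name: attendance_level(rate) for name, rate in ninth_graders.items()}
counts = Counter(levels.values())

for name, level in levels.items():
    print(f"{name} is attending {ninth_graders[name]:.1%} of school days: {level}")
for level, n in counts.items():
    print(f"{n / len(levels):.0%} of 9th graders are at the {level} level")
```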

 

Putting Faces on Attendance Data:
Figure 3.1 below shows how two ways of analyzing and reporting attendance in three actual schools tell very different stories about what’s going on.

[Figure 3.1: Two ways of analyzing and reporting attendance in three schools]

Looking at average daily attendance, the first two schools are very strong with over 95% ADA, and the third is strong with 92% ADA. Using the “threshold” approach, the same data reveal that in the first two schools, 12% and 16% of students miss more than a day a week on average. In these schools, between 150 and 200 students in a 1200-student population are at serious risk for poor academic performance and dropping out. The third school, with the lowest ADA, has just 5% of students in this risk group and a larger percentage of students in the optimal attendance group than either of the first two schools. We have now put faces on these data: schools can see how many and which students are attending at what levels and then act on this information.
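
The same contrast can be reproduced with invented numbers. In the sketch below, the school with the higher ADA also has three times as many students below an illustrative 80% at-risk line, while the lower-ADA school has more students at optimal levels; the enrollments and cutoffs are hypothetical.

```python
# Hypothetical data: the same attendance records summarized two ways.
# School A posts the higher average daily attendance, yet three times as many
# of its students fall below an illustrative 80% at-risk line as in School B.
def summarize(rates):
    ada = sum(rates) / len(rates)
    at_risk = sum(r <= 0.80 for r in rates) / len(rates)
    optimal = sum(r >= 0.925 for r in rates) / len(rates)
    return ada, at_risk, optimal

school_a = [1.00] * 85 + [0.75] * 15   # most students near-perfect, 15% chronically absent
school_b = [0.93] * 95 + [0.78] * 5    # steady attenders, few chronically absent

for name, rates in [("School A", school_a), ("School B", school_b)]:
    ada, at_risk, optimal = summarize(rates)
    print(f"{name}: ADA {ada:.1%}, at risk {at_risk:.0%}, optimal {optimal:.0%}")
```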

 

Putting Faces on Graduation Data:
We can take a similar approach to examining graduation data. Typically, we look at graduation rates in terms of the percentage of a freshman cohort that graduates from high school within four years. Many states set a goal of a 90% graduation rate and assess progress against that goal, as in Figure 3.2. In this example, Hope County’s high schools would need to show a 28% improvement in their graduation rate over the next three years to reach the 2017 target of 90%: an increase of roughly 9% each year. Applying the principle of “putting faces on the data,” a 90% graduation rate would also be achieved if 316 more students, about 100 more in each of the three years, graduate from this county’s 13 high schools. When this approach is applied at the school level, most schools would need to graduate between 10 and 20 more students per year for the county, and every high school in it, to reach the 90% goal.
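
For readers who want the arithmetic spelled out, the sketch below converts a rate gap into student counts. The cohort size is a hypothetical round number, the 28% gap is read as percentage points, and the gain is divided evenly across the 13 high schools, a simplification that schools of different sizes would adjust.

```python
# Back-of-envelope sketch: translating a graduation-rate goal into counts of
# students. The cohort size is a hypothetical round number, not Hope County's
# actual enrollment.
cohort = 1130        # hypothetical: students entering 9th grade county-wide
gap = 0.28           # distance from the current rate to the 90% target, in points
years = 3
schools = 13

extra_grads = round(gap * cohort)      # additional graduates needed in all
per_year = extra_grads / years
per_school_per_year = per_year / schools

print(f"{extra_grads} more graduates in all, about {per_year:.0f} per year,")
print(f"or roughly {per_school_per_year:.0f} more per high school per year.")
```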

Our partners have found this way of looking at graduation data to be more actionable, particularly when individual-level data on leading indicators of graduation are treated the same way. Further, educators are more encouraged when they can imagine 10 more of their students finishing school each year than when they are tasked with “increasing the graduation rate by 9% annually.”


[Figure 3.2: Two ways of looking at graduation results]


Two Ways of Looking at Survey Data:
Social science statistical models use means—numeric averages—and standard deviations as grist for their calculation mills. Education researchers are trained in these methods, and education systems tend to accept them without question. The problem is that presenting results in this form masks information educators need to understand their own and their students’ performance.

In a survey, for example, a student is asked a series of questions about the quality of the instruction they receive in their math class: Does your math teacher speak clearly? Set high expectations? Grade tests fairly? Use interesting examples? Ask you to think about why your answers are correct? Care about your success as a student? Typically, it is assumed that all of these questions tap a single dimension of the student’s classroom learning experience, and the student’s responses are averaged to produce a single score. These average scores from a group of students are then averaged again to produce a score for the school, grade level, class, or demographic group.

Two problems are immediately evident.

First, by averaging a student’s responses across many indicators of their classroom experience, we fail to produce anything that resembles an accurate picture of their classroom experience. If the averaged score is somewhat positive, it may be that the teacher speaks clearly, grades fairly, and cares deeply but doesn’t give great examples or push students to think. The same average can result when the teacher has high expectations, uses interesting examples, and pushes students toward higher-order thinking but doesn’t speak clearly at times and doesn’t seem to care much. These very different realities produce exactly the same numeric result. In both cases the average conceals a great deal about the quality of instruction and gives us little guidance toward improving students’ learning experience. We still don’t know what’s going on with Johnny or what we can do about it.

When we then average these “average” scores in an attempt to evaluate the quality of instruction across groups of students, a second problem emerges and compounds the first. A “somewhat positive” group average may mean most students in the class, grade, or school rate their instruction mediocre and a few think it’s great, or it may mean half think their instruction is excellent and half say it’s poor. Again, the average falls far short of an accurate picture, leaving us now even further removed from the actual experience of students in their classrooms. We must and can do better.
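
A small, invented example makes both problems concrete: two students with very different classroom experiences, and two classes with very different distributions of opinion, collapse to identical averages.

```python
# Invented ratings on a 1-5 scale: very different classroom experiences, and
# very different classroom distributions of opinion, collapse to the same average.
from statistics import mean

# One student finds the teacher clear, fair, and caring but not challenging;
# another finds the opposite. Both average out to the same "somewhat positive" 3.5.
student_a = [5, 2, 5, 2, 2, 5]
student_b = [2, 5, 2, 5, 5, 2]
print(mean(student_a), mean(student_b))          # 3.5 and 3.5

# One class rates instruction mostly mediocre; another is split between
# excellent and poor. Both produce the same class average of 3.6.
class_mostly_mediocre = [3.5] * 28 + [5.0] * 2
class_split = [5.0] * 15 + [2.2] * 15
print(mean(class_mostly_mediocre), mean(class_split))   # 3.6 and 3.6
```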

Our alternative is to take each measure, whether a survey item, classroom observation rating, or student test score, and ask which, if any, of the possible scores on this measure convey strong, optimal quality and which, if any, convey weak quality.

With these thresholds in place for each meaningful indicator, we can comprehensively assess the quality of each student’s learning experience, starting with single-item responses and then combining responses that represent distinct learning experiences. We can then identify how many and which students are having an optimal instructional experience, who may be at risk, and what specific instructional experiences contribute to optimal or at-risk conditions.
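
A minimal sketch of this alternative, again with invented ratings and illustrative cutoffs, keeps each survey item separate and counts how many and which students land in the optimal and at-risk ranges on that item.

```python
# Minimal sketch with invented ratings and illustrative cutoffs: keep each
# survey item separate and count how many and which students land in the
# optimal and at-risk ranges on that item, rather than averaging across items.
OPTIMAL_MIN = 4    # ratings of 4-5 on a 1-5 scale treated as optimal
AT_RISK_MAX = 2    # ratings of 1-2 treated as at risk

responses = {      # hypothetical student -> rating on each survey item
    "Aiden": {"speaks clearly": 5, "cares about my success": 2},
    "Bree":  {"speaks clearly": 2, "cares about my success": 5},
    "Cole":  {"speaks clearly": 5, "cares about my success": 5},
}

for item in ["speaks clearly", "cares about my success"]:
    optimal = [s for s, r in responses.items() if r[item] >= OPTIMAL_MIN]
    at_risk = [s for s, r in responses.items() if r[item] <= AT_RISK_MAX]
    print(f"{item}: optimal {optimal}, at risk {at_risk}")
```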

Our work with hundreds of educators to apply this principle of putting faces on student outcome data and on survey data confirms that these results carry far more credibility and utility than statistical averages or overall rates. Our approach does require more work, but the payoff is data that go beyond statistical description to information that is clearer, more accessible, and more actionable.

 


 
