Correction: This Commentary erroneously implied that officials of the Atlanta Public Schools had knowingly misreported school data. Following media reports, the Georgia Department of Education found last year that the district had not intentionally distorted or misrepresented any data on disciplinary incidents, and no action was taken against the district. No other district data have been questioned. (March 12, 2004)
It will take candor, clarity, and craftsmanship for leaders to create reports that bridge the credibility gap.
Education leaders in many states are limping to the finish line as they report results in their annual accountability report cards. If this were a 400-meter hurdles race, some would be tripping on their own shoelaces, some would be knocking over hurdles, and few, if any, would be finishing in style.
What explains this gawky display? A mix of errors in judgment, lack of capacity, and educator resentment of legislators' intrusion into their affairs. It is all unfolding in a climate of conflict: Federal laws, especially the No Child Left Behind Act, set the reporting hurdles higher than most state laws. Citizens’ expectations of transparency and candor in school-level reporting are far higher than what most education leaders are delivering to the publics they serve. In some states, such as Illinois and Michigan, state department of education leaders are faulting local district officials for providing data that are incorrect or incomplete. Local leaders, in turn, are blaming state department higher-ups for poor-quality writing, design, and data interpretation in state-issued annual reports. This blame game is full of friction, producing much heat and little light.
After four years of reporting results for school districts in California and Missouri, I have enough experience and battle scars from the accountability wars to offer some clues to the causes of this friction, and to shed some light on possible solutions.
Here, as I see it, are five factors that compromise accountability reports:
1. The Enron Factor. The credibility gap educators have to bridge before their public trusts their reporting of results is vast, due in part to the epidemic of lying by public and private leaders and their use of numbers to deceive. Enron’s trickery (and Adelphia’s and WorldCom’s and others’) is evidence of the problem, but the complicity of corporate accounting firms in cooking the books is the more relevant factor. The parallel in our context might be the concealment by California’s ex-governor, Gray Davis, of the facts about that not-so-golden state’s economic decline and the willing participation of California legislators in this ruse. The deceptions by government and business leaders have left the public jaded, jolted, and duly skeptical when public-sector leaders report on the state of their schools and districts. It will take candor, clarity, and publishing craftsmanship for those leaders to create reports that bridge this credibility gap successfully.
2. The Houston/Atlanta Factor. Both the Houston and Atlanta school systems have been caught cooking their schools’ books, and they are not the only ones. The “golden halo” effect that educators might once have enjoyed, protecting them from the mistrust that tarnished other public-sector leaders, is gone. The misreporting of crime and dropout rates in high schools in both big-city districts is indicative of the political volatility of these data. These are proxy factors for a school’s health, and because they are self-reported, they are the most susceptible to being doctored. What is more interesting than these transgressions is the lack of systemic checks and balances on data as they flow from schools to districts to state departments of education.
3. The Graduation-Rate Factor. Graduation rates are a powerful proxy for the health of a high school, and one highly valued by the public. Yet it is a measure that almost all states have gamed. While there are, according to the Harvard Education Letter, four conventional ways to measure graduation rates, the problem they share is that all depend upon the accuracy of dropout-rate tallies, a factor that is notoriously difficult to determine. Whichever of the four measures a state has adopted, all underreport dropouts, according to Jay P. Greene of the Manhattan Institute, whose scholarly advocacy of attrition-rate measures has been well covered in these pages. A gap of 15 percent to 30 percent separates the conventional reporting methods from Mr. Greene’s more reality-based measure of attrition, a comparison of the size of the graduating class to the freshman cohort it started as four years earlier. What the public “hears” when educators report graduation rates is far, far from what educators “mean.” The result is an unintentional overstatement of graduation rates.
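To make the arithmetic behind that gap concrete, here is a minimal sketch contrasting a conventional dropout-tally method with an attrition-style cohort comparison. The enrollment figures are invented for illustration and do not come from any actual district:

```python
# Illustrative only: all enrollment and dropout figures below are invented,
# not actual district data.

def reported_graduation_rate(graduates: int, reported_dropouts: int) -> float:
    """Conventional-style method: graduates divided by graduates plus
    whatever dropouts the district actually tallied. If dropouts are
    undercounted, this rate is inflated."""
    return graduates / (graduates + reported_dropouts)

def attrition_graduation_rate(graduates: int, freshman_cohort: int) -> float:
    """Attrition-style measure: graduates divided by the size of the
    freshman class four years earlier."""
    return graduates / freshman_cohort

# Hypothetical high school: 1,000 freshmen, 650 graduates four years later,
# but only 80 dropouts ever made it into the official tally.
conventional = reported_graduation_rate(650, 80)
attrition = attrition_graduation_rate(650, 1000)

print(f"Conventional rate: {conventional:.0%}")  # inflated by the missing dropouts
print(f"Attrition rate:    {attrition:.0%}")
print(f"Gap:               {conventional - attrition:.0%}")
```

With these made-up numbers the conventional method reports roughly 89 percent while the attrition comparison reports 65 percent, a gap squarely in the 15-to-30-point range the article describes.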
4. The “Can’t See the Forest for the Trees” Factor. The reporting of test results is in most states a triumph of detail over the more important bigger picture. The traditional reporting of grade-level and curricular results on standardized tests is, for example, too detailed to see the schoolwide overview. But add to that the reporting of disaggregated results, and the details become an avalanche, burying the reader in a massive body of metrics far too large to make its bigger meaning evident to the lay reader. In New York, this detail-level reporting on test scores results in 10-page annual reports. In California, disaggregated reporting last year on both national norm-referenced tests and the California standards-based, criterion-referenced tests caused most schools’ reports relying on the state’s format to run 14 pages, without any narrative to interpret what the results meant and without any graphs to visualize the results. A typical elementary school in California would have reported 528 data points on testing alone. San Francisco’s school-level reports, which relied on the state’s design, ran over 30 pages, quite a record.
5. The Rubber-Ruler Factor in Measuring Progress. Reporting trends in test scores is an imperfect craft, at best. The lack of longitudinally merged student databases is, of course, a fundamental obstacle to progress in this quest. But educators’ all-too-willing reliance on traditional cross-sectional views of trends (for example, how consecutive years of 4th or 5th graders are doing) only makes matters worse. Delivering to the public trend results for measures that don’t matter makes confusion a predictable outcome. Certainly, matched-score results (same students in the same school) would be an improvement, especially in districts and states where student turnover is high. But districts in California that are using matched-score analyses in their testing and assessment departments aren’t bringing this more revealing view of results into their annual accountability reports. Why? The data definitions don’t require it. For districts like Fresno, where student-transiency rates in many schools exceed 50 percent, reporting year-to-year “progress” in test results is a meaningless ritual.
Our five years of work creating accountability reports for California and Missouri school districts, and our recent review of half the states’ accountability reports, have led us to a few recommendations.
- Keep it brief. Research by KSA-Plus Communications featured in Education Week’s 1999 Quality Counts special report is emphatic on this point. Focus groups and surveys in five states revealed a consistent desire among parents for tight reports, as long as more detailed information was also readily available. Colorado took the hint, and has published brief and accessible reports that are sure to be understood by any citizen capable of reading USA Today. Ohio also has done a great job of distillation, boiling down key factors to a single page. The art of summarizing complex material while avoiding the reductionist’s trick of oversimplifying is no mean feat.
- Polish your prose. Every profession has its jargon, but education jargon is particularly noxious. Given that schools’ purposes include teaching students to write, accountability reports are where writing should shine. Plain speaking and candor are the proper tone. Explaining, interpreting, and guiding the reader are required if schools are seeking that teachable moment when citizens “get it.” Any corporate or nonprofit annual report makes the most of this opportunity to interpret the year. Why should the citizens served by schools have to suffer poor writing, or be left to wade through tables of data on their own?
- Invest in design. If you think of design as decoration or a nice added feature, think about the butterfly ballot and the presidential-election debacle in Florida four years ago. Good design communicates and, as the butterfly ballot showed us, poor design obfuscates. Those in search of models can look to Colorado again for a functional design that works. It’s not fancy, but its two-color treatment guides the reader’s eye, its use of six panels groups topics effectively, and the data visualizations it employs represent test results clearly, all adding significantly to the reader’s likelihood of success.
- Measure what matters. New York City school leaders have done a good job of measuring graduation rates, allowing for the possibility that some students need five or six years to make it through high school. Indiana, outside its accountability reports, has made it possible for citizens to find out about the teachers at a school: their years of experience, the degrees they hold and the colleges they attended, and their subject authorization and credentials. These factors reach right to the heart of what parents want to know. If report makers begin, as in New York and Indiana, with their readers’ concerns, the resulting reports are likely to produce greater returns on the investment.
- Get it printed. Printing a summary of annual reports is an easy and inexpensive way for school systems to reach their publics. We’ve provided clients with summary reports that fit on a single 8½-by-14-inch sheet, distilling key factors into a format anyone can read, in English or Spanish. Indiana districts print their summary reports in local newspapers. This is also called for in the No Child Left Behind Act’s “Nonregulatory Guidelines for Accountability Reporting,” issued by the U.S. Department of Education last September. Reaching all parents, all staff members, all school sites, and public agencies (including libraries) is what the federal legislation requires, but it’s also what the public expects.
- Get it distributed. Again, what’s the point of reporting results if you don’t get the word on the street? This common-sense conclusion is reinforced by the No Child Left Behind guidelines, which call explicitly for dissemination and say that posting accountability reports online is not sufficient. In Kansas City, Mo., the Ewing Marion Kauffman Foundation has sponsored six-page annual reports mailed to every family with children in school. But shorter, one-page reports could be mailed along with end-of-period report cards at nominal cost. School-level annual reports could also be mailed inexpensively along with standardized-test results at the end of the school year.
Until Congress gives districts franking privileges for free use of the mails, educators are going to have to be resourceful to find the most inexpensive means of publishing their annual reports. But the long-term payoff for this education of the public will be well worth the effort.
Steve Rees is the president and publisher of School Wise Press in San Francisco. His company has been helping educators and parents make sense of school facts since 1995.