Florida 'Grading' Uses Faulty Data
To the Editor:
There are several problems with the Florida A+ school grading plan that your article on the program ignores ("From Worst to First," April 11, 2001).
First, in Florida, pupils with disabilities take the Florida Comprehensive Assessment Test, but their performances are not included in determining school grades. This creates an incentive to overclassify pupils as disabled and, I believe, accounts for the rising special education enrollments in many of the schools that have received low grades from the state education agency.
Second, the state tests students by grade level rather than by school entry cohort. Retaining low-achieving students in grade 3 thus removes these pupils from the testing pool the following year, manufacturing short-term increases in performance. And to avoid various sanctions, schools need only improve from an F grade for one year. In other words, the regulations create several incentives for massaging retention and referrals to special education.
I do not have specific data from Fessenden Elementary School, which is featured in your article, but the table accompanying the article shows 12 percent of tested children not included in the school report card. And the article indicates that 17.5 percent of students who should be in 4th grade were retained last year. Thus, almost one-third of the students who might have been 4th graders seem to have been omitted from the FCAT assessment stream last year (probably even more pupils from this cohort were retained prior to last year, so the number of achieving pupils in the cohort is exaggerated).
Finally, the enrollment at Fessenden dropped by roughly 10 percent (45 students) between the 1999 FCAT report and the 2000 report. Declining enrollments are a rarity in Florida schools these days, and I am at a loss to explain this one.
I don't know what accounts for the improved achievement reported for Fessenden Elementary, and I hope instructional improvement, as suggested by your article, is the basis for the reported gains. But given the attention paid to issues of manipulation of reported achievement levels by researchers over the past decade, it always surprises me that neither education agency officials nor reporters seem much concerned with ensuring that reports of rising achievement reflect real improvements in instructional effectiveness.
If we want to measure instructional effectiveness and efficiency, then all children who begin school together need to be assessed together as a cohort, regardless of what grade they are in, and the performances of all students must be used to determine average achievement levels. Only the use of such a procedure will allow us to compare apples to apples.
University of Florida
But Do All Students Want To Learn?
To the Editor:
Re: "Getting Serious About High School" (April 11, 2001): We, as experts, think we "know" what students need to know to be successful. We determine what we think is important to teach children. Unfortunately, learning is a very personal choice based upon needs, ambitions, abilities, and interests. At some point we must grasp the concept that even if "all" students can learn, it does not mean that all students "want" to learn. And they certainly don't want to learn the same things.
Students' Scores and Parental Obligations
To the Editor:
I read The Wall Street Journal's, The Los Angeles Times', and The New York Times' versions of the lack of progress on the 4th grade National Assessment of Educational Progress scores in reading. It's the teacher's fault. No, it's the education colleges' fault. Then I read the much longer depiction in Education Week ("4th Graders Still Lag on Reading Test," April 22, 2001). I was looking for some clue, some inference that somebody out there would have realized what I have. Apparently, nobody has.
After 20 years of working in secondary schools as a teacher and high school principal, after a bachelor's, master's, and doctorate in education, and after trying everything I'd ever heard or read about or thought up myself to improve student learning, it wasn't until I became pregnant with my first child at the age of 37 that I saw the light.
The solutions to the problems—yes, especially the reading problems—of the K-12 system are not found between the kindergarten classroom and high school graduation. They lie between the maternity ward and the kindergarten door. And with upwards of 80 percent of mothers in the workforce, counting on them to serve as baby teachers will probably yield as much improvement as the federal Title I allotments—not much.
Until society is ready to revisit the magic age at which a "free and appropriate education" should begin, we will continue to spend billions and billions more annually on special intervention programs, beat up teachers and principals (and of course, students and parents) over their "lack of progress," and find insignificant research results.
We can fix the public education system. We can see increases in scores annually in any subject we want to. But not until we stop sending all mothers home from the hospital with no free textbooks, no certified instructor in loco parentis, and no direction whatsoever on how to lay the appropriate neural foundation in their babies' brains upon which schools can later build.
Every child is born hard-wired to learn language. Those fortunate enough to be raised in homes or at expensive day-care institutions where they hear complex sentences frequently and are read to regularly (by anybody) learn exemplary language skills and learn them earlier. Those who aren't, don't.
Let's stop pointing fingers at each other and start pointing them outside, outside the paradigm of a "free and appropriate education" occurring only between the ages of 5 and 18.
University of North Carolina-Greensboro
To the Editor:
As a former adviser to the National Assessment of Educational Progress, I would like to suggest that U.S. Secretary of Education Rod Paige is incorrect when he says, in reference to reading scores and the amount of the money spent for Title I over the past 25 years, that "we have virtually nothing to show for it" ("4th Graders Still Lag on Reading Test," April 22, 2001).
True, reading scores did not go up. But the good news is that they did not go down. Anybody who has been in any large city's, or even a midsize city's, school system over the past 25 years has seen a dramatic lowering of the socioeconomic level of the pupils. And as the NAEP scores have consistently shown, and still show on this latest test, reading scores vary with the socioeconomic level. Teachers are doing a terrific job in not letting the scores fall.
Separating 4th graders' reading scores by race shows that Asian-Americans were the best readers, with whites next, and African-Americans last. If we look at data from the National Center for Health Statistics, we see that in 1990, about the time those 4th graders were born, the percentage of unmarried mothers among Asian-Americans was 13 percent, for whites 20 percent, and for African-Americans 66 percent.
Maybe we should spend a little less money for phonics and a little more for sex education.
New Brunswick, N.J.
Teacher Quality: To Test Is Too Simple
To the Editor:
U.S. Rep. George Miller, D-Calif., and others have oversimplified the process of improving teacher quality ("Teacher Tests Criticized as Single Gauge," April 4, 2001). What good do institutional pass rates on teacher tests do us when 90 percent of new teachers in California urban districts haven't even participated in traditional preservice programs? And what about the recent news that teacher-preparation institutions are having difficulties finding qualified faculty members for their programs? Perhaps it's time to rethink the whole system we provide for preparing teachers for today's schools, instead of promoting yet another "quick fix" testing program.
Point Richmond, Calif.
School-to-Work Offers Direction
To the Editor:
I taught high school agriculture and environmental science for two years after I graduated from college. In my teaching, I was introduced to the school-to-work program for the first time, and found myself wishing that something like it had been available when I was in high school ("School-to-Work Seen as Route to More Than Just a Job," April 11, 2001). Students would have much more direction and focus upon graduation if they went through this type of program, even if it were offered on an informal basis.
The majority of the students I taught were at-risk minority students. Too often on follow-ups with them, I found that they were pretty much at an economic dead end only a year after graduation. We need more school programs that cater to non-college-bound students. I attended a recognized public high school in Connecticut, and even then, I was irritated at the way the school didn't "recognize" those who weren't going off to college. It was as if they didn't matter.
Now I want to get back into the school system and work with some sort of school-to-work program. It's something I feel very strongly about. But through recent correspondence, I have found that funding for the National School-to-Work Program is being cut. Why?
On Reform's Demise: Another Perspective
To the Editor:
Peter Temes' essay "The End of School Reform" (Commentary, April 4, 2001) is a dismal forecast for educators working to bring enduring improvements to classrooms, schools, and school systems. He suggests that we need to "escape from the futilities of School Reform" and instead target our efforts to "make teachers great teachers." Though he correctly points out that skilled teachers are of great value, particularly in troubled schools, he fails to acknowledge that these same teachers' efforts can be futile when surrounded by inhibiting and unsupportive school and district systems.
For the last three years, my colleagues and I have been engaged in a study of the sustainability of reform. We are looking at programs that have, in fact, endured anywhere from 10 years to 30 years, and in doing so, we are learning valuable lessons not only about reform efforts, but about the expectations that we bring to and impose on them. We are abandoning some of our notions of "sustained" or "lasting" programs that we brought to the study and are redefining what it means for a program to "endure" over both the short term and the very long term.
Educators, politicians, and the public tend to expect clear transformations as a result of reform. But we are finding that even though there may be no apparent revolution, that doesn't necessarily mean that there isn't evolution. Change can be subtle. Change can be latent. Change can be residual. Even when a reform program disappears, its effects can remain, in the practice of teachers, the leadership skills of administrators, and the material supports that provide a valuable foundation for increasing improvement.
We must acknowledge that there are many ways to understand what "lasting impact" is and choose our words with care before condemning reform simply because we cannot see its results in the places we expect to find them.
Jeanne Rose Century
Center for Science Education
Education Development Center
Basic Accounting: What 'Multiple Measures' Shouldn't Mean
To the Editor:
Tony Wagner, a co-director of the Change Leadership Group at Harvard University's graduate school of education, and Tom Vander Ark, the executive director of education for the multibillion-dollar Bill & Melinda Gates Foundation in Seattle, say that President Bush's goal of raising learning standards is fine, but that states must choose between a focus on "passing the test" and "meaningful student learning" ("A Critical Fork in the Road," Commentary, April 11, 2001).
Citing increased dropout rates, too much memorization, not enough rigor, and damaged teacher morale, they argue that cheap, paper-and-pencil tests are the problem and performance assessments preserved as digital portfolios are the solution.
I must respectfully disagree. The authors correctly argue that paper-and-pencil tests do not fully assess all the knowledge and skills that are essential to adult success. But what they do not say is that the educational basics best measured by such tests are indispensable to subsequent achievement. To place the measurement of fundamentals in the background, as they suggest, and instead rely on multiple measures such as "peer reviews," class visits, discussions with parents, teachers, and students, and so forth, all blended into a digital portfolio, will only permit more of the kind of failure that virtually everyone now condemns: students who are utterly incompetent in reading, writing, and arithmetic.
The authors note that the "best corporations" rely on sophisticated skill assessments, not paper-and-pencil tests, for important personnel decisions. True, but neither do they hire personnel who cannot read and write, no matter what kind of "teamwork" or other skills they might possess.
Parents, policymakers, and taxpayers are not going to be satisfied with schools that produce students who can't pass a basic-skills test, no matter what else they claim to accomplish. Any satisfactory educational accountability proposal must square with this reality.
Here is a modest suggestion for those who would improve educational accountability by including a broader range of outcome measures: Do not permit secondary outcomes to detract from the primacy of basic skills. For example, do not permit indicators such as a good attendance record or a neatly constructed portfolio to offset poor achievement-test scores.
To put it concretely, if a student is to be accountable for achievement, attendance, and neatness, do not simply add the three scores together. Doing so puts achievement on a plane with the other two indicators. Rather, achievement should be treated as a multiplier for the other, less important, indicators: performance = achievement + (achievement x attendance) + (achievement x neatness).
With this formula, attendance and neatness count in overall performance only to the extent that they support student achievement. With this formula, schools failing to produce basic skills are identified as failing schools no matter what else they produce.
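The weighting scheme the letter proposes can be sketched as a short calculation. This is an illustrative sketch only: the function name, the 0-to-1 scales for the indicators, and the sample values are assumptions, not from the letter.

```python
def performance(achievement, attendance, neatness):
    """The letter's formula:
    performance = achievement + (achievement * attendance) + (achievement * neatness),
    which is equivalent to achievement * (1 + attendance + neatness).
    All inputs assumed to be on a 0-to-1 scale (an assumption for illustration)."""
    return achievement + achievement * attendance + achievement * neatness

# A student with solid achievement gets extra credit for attendance and neatness:
strong = performance(0.5, 1.0, 1.0)  # 1.5

# With zero achievement, perfect attendance and neatness count for nothing,
# so a school cannot mask failing basic skills with secondary indicators:
weak = performance(0.0, 1.0, 1.0)    # 0.0
```

The multiplicative form is the point: because each secondary indicator is scaled by achievement, a zero on achievement zeroes out the whole score, whereas simple addition would let attendance and neatness offset it.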
Education Consumers Clearing House
Johnson City, Tenn.
National Certification: A Sampling of Teacher Testimonials
To the Editor:
Michael Podgursky's Commentary on national board certification for teachers defies common sense ("Should States Subsidize National Certification?," April 11, 2001). It only stands to reason that student learning is enhanced by teachers who are more knowledgeable in their field and are skillful at teaching it to others.
The National Board for Professional Teaching Standards has revolutionized the teaching profession by defining the knowledge and skills that add up to teaching excellence. Like physicians, architects, and other professionals, teachers now have clear and objective standards for identifying accomplished practice and recognizing those who apply it.
National board certification requires applicants to spend months documenting student learning and videotaping their teaching and classroom interactions to produce a professional portfolio. The strength of the process lies in the requirement for actual classroom documentation and evidence, rather than relying on personal recommendations as Mr. Podgursky suggests.
Candidates must also demonstrate their content knowledge through a challenging assessment-center experience, where they are tested exclusively on subject-matter material. National board certification is indeed a rigorous assessment based on standards developed by classroom teachers (and other content experts) for classroom teachers.
Like anyone who has gained additional expertise, board-certified teachers deserve to be rewarded. States that provide incentives to nationally certified teachers are creating more options for experienced teachers, more mentors for new teachers, and more spokespeople for the profession.
National Education Association
To the Editor:
Those who judge the portfolio entries and assessment-center exercises for national board certification do receive training and, more importantly, must prove their ability to assess sample work before they are allowed to score entries. The writer of your essay seems to think that they receive a bit of training, then are "turned loose."
Even after they are trained and have proven their ability by correctly scoring samples previously scored by outstanding assessors, their continuing scoring efforts are monitored by having their work randomly selected for rescoring.
Most of those who assess fall into at least one of three categories: national board-certified teachers who were impressed and pleased at the improvement in their own practice and want to continue that improvement; people interested in certification who look at the assessment process as a way to learn more; and members of college and university faculties, which recognize assessors' training and assessing as a major professional-development process in itself and, therefore, offer graduate or recertification credit to assessors.
This makes assessing national board entries a way to recognize and overcome one's biases, learn a lot from the writings of other educators, improve knowledge and understanding of the national board requirements, and earn graduate credit toward a degree or recertification while earning income. It's a good deal, and qualified professionals see it as an opportunity, not as a low-paying job.
Having achieved national board certification after 25 years of teaching and many, many hours of graduate work and professional training through workshops, I can say that the board process improved my practice as much as all the rest combined. Although this may not be true for everyone, it is true for the board-certified teachers I know.
Put this improvement and professional development together with the fact that a high percentage of applicants are already recognized as outstanding teachers, and you can justify all the incentives being offered and still ask, "Why not more?"
By the way, all four finalists for National Teacher of the Year this year are national board-certified teachers.
To the Editor:
I am a national board-certified physics teacher (I teach in a magnet school), and I object to many of the premises of Michael Podgursky's essay.
As a teacher with 34 years of experience, I was skeptical of what the national certification experience would mean to me and, especially, to my students. I found the process riveting in its thoroughness and depth.
Perhaps most private school teachers do not have the incentive to undertake this process. The lack of respect rendered to qualified public school teachers by many (such as Mr. Podgursky) is one of the reasons I went through this very difficult process. Salary increments were another.
I only wish that your Commentary writer could sit for six hours and take the content-knowledge test I took. Perhaps even he would make a grammatical error or two.
To the Editor:
I am a national board-certified teacher in the designation "early childhood/generalist." This does not signify "vague knowledge," as your Commentary author seems to imply, but means that I am certified in teaching preschool through 3rd grade, in all subject areas. I have taught in inner-city Boston, Costa Rica, Watts-Los Angeles, Tucson, and now inner-city Phoenix.
During my 11 years of teaching experience, I have been supervised by seven different administrators, most of whom did not have much training in assessing teachers—less than the assessors at the National Board for Professional Teaching Standards do. Of the seven, four have thought that I am an excellent teacher—and not the most recent four, but in a mixed sequence. The other three would disagree, mildly to strongly, based on personality and political opinions.
On the other hand, the NBPTS, in as neutral an environment as possible, certified that I meet and exceed rigorous professional standards. Relying on administrators and standardized-test scores that correlate most strongly with parental educational level is not an efficient or even adequate way to assess teacher effectiveness. Although nothing created by humans is perfect, the NBPTS is a giant leap in the right direction.
To the Editor:
Why does Michael Podgursky consider the fact that most teachers with national certification teach in public schools to be verification for his opinion that the credential is not valid? Where I teach, the local charter school has teachers on its staff without even one year of college. Other private schools employ teachers who are unemployable in our public school systems.
While I pursued national certification, I challenged myself to meet high standards. I studied my own teaching and compared what I was doing in my classroom to those standards. The reflection and study have paid off. My teaching is even better today than when I began the process.
By comparison, when I worked on my master's degree, all I had to do was to parrot back the opinions of the professors. I did not have to demonstrate that I believed in them or even attempt to implement them in my own classroom practice.
My national board portfolio and assessment-center entries revealed much more about me as a teacher than any one-hour observation by a school administrator. I was evaluated according to what I did, not according to how well I flattered my principal.
If Mr. Podgursky really wants to know about national board teachers, he should come into our classrooms and do research on the differences between board-certified teachers in the public schools and their counterparts teaching "elite" upper-middle-class students.
To the Editor:
I teach in North Carolina, which offers incentives to those teachers who achieve national certification. Was that a reason I attempted it? You bet. Was it the only reason? No. In fact, the extra pay was way down on my list of reasons. When your salary is less than the national average to begin with, a 12 percent pay increase only makes you average. I undertook this process mainly because I wanted to improve my teaching and become above average.
After 17 years in the profession (in both private and public institutions), I was feeling a bit jaded. Pursuing this certification has definitely made me a better teacher, if only by lighting a fire under me to try harder to reach my students. Having to reflect daily on what worked and what did not—and to put that reflection into a form that would convince others—was tough.
I "failed" to achieve national certification on my first try. Missed it by two points. Fortunately, I was given the opportunity to try again. North Carolina paid for me to retake one entry, and I chose to redo the entry that gave me the most problems. I tried again. I know I learned much about my teaching methods. I know I am a better teacher for having gone through this process.
Now, I'm waiting to see if I was able to communicate the reflection of my teaching practices to others in my profession. If I pass, it will be because I was successful in this communication. If I don't pass, I will try again.
Is the process worth it? Anything that makes you examine and reflect on your chosen profession is worth doing.
Vol. 20, Issue 33, Pages 43,45