On this beauteous spring day, I’m sitting here in the man zone (my basement office), looking up through a casement window at a bird’s nest in the eaves of my neighbor’s roof. I’m thinking about... well, you know. The testing window is open until June 15, and I have to figure out when to shimmy through before it slams shut. I’m having a hard time getting psyched to go do it. I wonder if anyone else out there is feeling the same sense of ennui.
The assessment center seems anticlimactic, in a way, after the portfolio. Much as I cursed the process going through, I can’t deny that it was all about what was going on in and around my classroom. Somehow, the test center seems removed from that. It feels like a speed typing test.
This is a dangerous attitude, I know (I’m not cavalier, just toasted). Lord knows, I want to do as well as I can on the written test, at least well enough to get over the hump. A colleague of mine who certified last year has already blotted the math from his mind, but swears that it was the assessment section that was his saving grace.
I went back into the bible for motivation, and found this section on scoring:
For the Early Adolescence/ English Language Arts certificate, the weights are set at 16 percent for each of the three classroom-based portfolio entries, 12 percent for the Documented Accomplishments entry, and 6.67 percent for each of the six assessment center exercises. (EA/ELA 2006 pg 35)
In other words, 60 percent of the grade was in the portfolio, and 40 percent is still to be determined. My colleague was right; that’s a hefty chunk. It seems disproportionate, though: a year of blood, sweat, and videotape versus one day of fast typing hardly squares with a 3:2 ratio.
The rationale for this might be similar to the reasons the International Baccalaureate program, in which I used to teach English, weights the end-of-course essay tests far more heavily than the two papers written during the course. I remember never wanting to tell kids, during the months that they slaved over the lengthy literary analysis, that these papers were ultimately only worth about 15 percent of the grade. We teachers saw the value in having the kids write the papers, even if the value wasn’t officially recognized. The bottom line for IB: the papers weren’t as “secure” as the test. There was no way of knowing how much a kid’s teacher had helped him, or how, on a 1,500-word typed paper. But given the relative security of a testing environment, the evaluators could be pretty sure those hand-scrawled documents were kids’ own work.
So, does NBPTS not trust us? It’s the only conclusion I can draw based on the disparate weights of the portfolio and the assessment center. I’m not taking it personally, mind you. Standardized testing is what it is: certain measures are pragmatic, and must be taken when attempting to measure the achievement of tens of thousands of anonymous individuals.
The only other option I can conceive is worse: NBPTS doesn’t trust itself. The seemingly skewed weight of the two elements could be a tacit admission that portfolio scoring is more subjective than essay scoring. That’s a scary thought.
Maybe, as usual, I’m being too cynical. The 60/40 split might be a charitable way of giving differently gifted candidates a chance to succeed. Some of us shine on tape; some perform best under the pressure of a timed writing.
Where does this leave me, little old Candidate # 011something-or-other? Pretty much back where I started this post. Staring out my window as wrens flit about, wondering when to go and get this thing over with.
The opinions expressed in Certifiable? are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.