Plans Advance to Link NAEP to College, Work Readiness
Studies will determine how 'preparedness' can be measured on the NAEP scale.
Federal officials have taken a major step toward using the test known as “the nation’s report card” to judge 12th graders’ preparation for college and the job market.
The board that sets policy for that exam, the National Assessment of Educational Progress, has voted to accept a report of an expert panel with recommendations for linking student performance on the influential test with skills identified as important by college officials and employers.
Many of the details regarding how, exactly, NAEP would be used to judge students’ college and workforce preparation have yet to be worked out.
But the vote of the National Assessment Governing Board, taken Nov. 21 at the board’s quarterly meeting here, essentially allows the board to commission more detailed studies of how “preparedness” could be reflected on the NAEP scale. Those studies are likely to include comparing the content of NAEP with that of widely used tests of postsecondary preparation, such as the ACT and the SAT, and workplace-placement exams, such as WorkKeys. The board might also seek to collect data from two- and four-year colleges and vocational programs on their standards for placing students in regular, as opposed to remedial, courses.
The board’s goal is to have measures of student preparedness in place for 12th grade NAEP testing next year, so that information could be publicly reported in 2010.
The NAEP reports average student scores for the nation, states, some cities, and individual demographic groups. In theory, incorporating preparedness on NAEP would allow the public to look at 12th graders’ average reading or math scores, which are currently presented on a scale of 0 to 500, and judge how well equipped they are to handle the demands of college and of certain occupations.
Governing-board officials also hope that by providing a clear assessment of students’ ability to handle postsecondary and on-the-job challenges, they can give parents, policymakers, and the general public better guidance on how to interpret NAEP scores, said Mary Crovo, the board’s interim executive director.
Board members have grown increasingly concerned about 12th graders’ lack of motivation to take NAEP seriously, given the distractions students face during their senior year and the competing demands placed on them by other tests and academic factors.
“The question is, how do we make the 12th grade report much more effective?” said governing-board member David P. Driscoll, the former commissioner of education in Massachusetts, after the board’s vote. Judging students’ skills for life after high school, he said, “would add meaning to our reporting, for us, for students, for their parents.”
Currently, the 12th grade NAEP is given at the national level, though 11 states have agreed to take part in an assessment that will produce individual results for those jurisdictions.
Policymakers and college officials have voiced frustration over students arriving in need of remedial coursework. Employers say too many young workers lack useful skills. But opinions differ on how exactly to define “preparedness” for college and work, in terms of knowing which academic and applied skills are most useful. (Diplomas Count, June 12, 2007.)
The panel recommended that NAEP be used as a gauge of students’ “academic preparedness for college and workplace training.” That means students have the knowledge and skills to qualify to enter a credit-bearing course on the way to a four-year degree, or to qualify for placement in a job-training program, which could include an apprenticeship, a community-college technical program, or a vocational program.
The report on preparedness was completed by a seven-member technical panel formed by the governing board. That panel was chaired by Michael W. Kirst, a professor emeritus at Stanford University.
Members of the governing board also began a tentative investigation of whether it is possible to use “targeted” or “adaptive” testing on NAEP, which would mark a major departure from how the exam is administered now.
A committee of the board heard a presentation from John Mazzeo, an associate vice president of the Educational Testing Service, a Princeton, N.J.-based nonprofit research and testing organization. Targeted or adaptive testing, sometimes called tailored testing, involves giving students of different ability levels exams that are written to different levels of difficulty. Currently, NAEP gives all students within a testing population exams with the same demands.
Board members are interested in whether targeted testing could help them reduce the disparities among the numbers of students with disabilities or with limited English skills whom states and cities choose to exclude from testing or provide with special accommodations. Critics have said those disparities undermine the value of NAEP scores. ("States Struggle to Meet Achievement Standards for ELLs," July 16, 2008.)
Targeted testing could potentially allow those students to more fully participate in the NAEP. One targeted-testing model, though a potentially costly one, would involve giving NAEP by computer, Mr. Mazzeo told the committee. Students who answered particular questions correctly would move on to more difficult ones, or, if they tripped up, to easier questions.
By tailoring the test in that way, the thinking goes, administrators can judge struggling students’ academic skills more precisely, as opposed to simply receiving a string of wrong answers. For that reason, targeted testing might help provide federal officials with more detailed information about the performance of students scoring at the lowest levels.
For instance, when students from Puerto Rico took the 2003 and 2005 NAEP in math, their scores were so low that federal officials had difficulty interpreting the results.
The board is already considering other options for dealing with exclusions and accommodations, such as setting national rules on them; using “full population estimates” to adjust for exclusions; and attaching “cautionary flags” to scores if exclusions rose above a specified level.
Vol. 28, Issue 14, Page 22