Two State Groups Win Federal Grants for Common Tests
The U.S. Department of Education today announced awards of some $330 million in federal aid to two state coalitions—representing 44 states and the District of Columbia—for the design of new assessment systems aligned to the recently developed common-core standards.
The grant money will be divided almost equally between the two applicants in the competition, which is part of the federal Race to the Top program spawned by the federal economic-stimulus law.
However, a third group of 12 states, which applied for a smaller, $30 million pot under a separate but related competition to support specific exams at the high school level, failed to win an award.
Those three were the only applicants that the department deemed eligible for the competition.
The Race to the Top assessment competition aims to get states to band together to devise improved—and common—assessments of student knowledge in mathematics and English/language arts pegged to the common-core standards, which so far have been adopted by 36 states and the District of Columbia.
“As I travel around the country, the number-one complaint I hear from teachers is that state bubble tests pressure teachers to teach to a test that doesn’t measure what really matters,” Secretary of Education Arne Duncan said in a press release. “Both of these winning applicants are planning to develop assessments that will move us far beyond this and measure real student knowledge and skills.”
Only two consortia competed for the bulk of the funding, which was made available for what the department described as “comprehensive assessment systems” designed to measure whether students are on track for college and career success. The highest rating by a panel of peer reviewers went to the Partnership for Assessment of Readiness for College and Careers, or PARCC, which consists of 26 states. It was awarded $170 million. Meanwhile, the SMARTER Balanced Assessment Consortium, which includes 31 states, will receive $160 million.
Washington is the lead state for the SMARTER Balanced group, while Florida has that role in the PARCC consortium.
Sandra Abrevaya, a spokeswoman for the Education Department, said the PARCC consortium won the highest rating by peer reviewers brought in by the federal agency to evaluate the plans. But, she noted, “it was very close.”
At the same time, she said the peer reviewers determined that the sole applicant for a second category of $30 million in funding reserved specifically for the high school level, called the State Consortium on Board Examination Systems, did not merit an award.
The idea behind the high school application, which involved 12 states—Arizona, Connecticut, Kentucky, Maine, Massachusetts, Mississippi, New Hampshire, New Mexico, New York, Pennsylvania, Rhode Island, and Vermont—was to adapt the kind of board-examination systems that other countries use and align them to the common-core standards.
“The average score by the peer-review experts for that consortium was very low,” Ms. Abrevaya said. “The applicant failed to demonstrate that the assessment system was valid, reliable, and fair for its intended purposes under the standards set in the Federal Register notice.”
Marc S. Tucker, the president of the National Center on Education and the Economy, which organized that consortium, said he was “deeply disappointed” that the group’s proposal wasn’t funded, particularly since it envisioned far more than exams to match academic standards.
His group had hoped to help states and districts offer “highly integrated” systems of instruction that included a core curriculum with course syllabuses, exams derived from those course outlines, and professional development, he said. Noting that the project predated Race to the Top, Mr. Tucker said the 12 states involved plan to seek other sources of funding and move ahead.
Some observers caution that plenty of questions, and challenges, remain as the winning consortia move from getting awards to designing and implementing new assessment systems.
One issue, said Scott Marion, the associate director of the Dover, N.H.-based Center for Assessment, is that the states do not fully understand the future uses of the assessments.
“You’re designing an assessment system when you’re not really clear on what the use is yet, and that’s a huge challenge,” he said, noting that Congress is well behind in reauthorizing the federal No Child Left Behind Act, which likely will have a lot to say on the use of tests.
The department has provided some signals on that front, indicating that the assessments would be used not only to measure student achievement, but also to gauge growth in performance and teacher and principal effectiveness.
Mr. Marion also said there will be plenty of nitty-gritty issues to work through in finding alignment across states. For example, what will be the testing window for key assessments? And will all states agree to the same policies in making accommodations for students with disabilities or for English-language learners?
“There are all sorts of things like that, all these nagging details when you go from ideas to operation,” said Mr. Marion, whose group provided assistance to all three eligible applicants.
Gary W. Phillips, a vice president and chief scientist at the American Institutes for Research in Washington, also believes it will be challenging to find agreement on all the details over time.
“It’s hard to get 30 states to agree on anything,” he said. “By the nature of the states and their natural independence, they will want to have a lot of flexibility, but you can’t have things that are common and also have a lot of flexibility, so that is another challenge.”
Mr. Phillips also said he wonders what will happen with the states that chose to be in more than one winning consortium.
At least 12 states—Alabama, Colorado, Delaware, Georgia, Kentucky, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, and South Carolina—participated in both the SMARTER Balanced and PARCC consortia.
“My assumption is that they’ll have to choose one,” he said.
In the end, analysts suggest that the two winning applications have much in common.
Both winning consortia say they would combine the results from performance-based tasks administered throughout the academic year with a more traditional end-of-the-year measure for school accountability purposes. Both also plan to administer their year-end assessments via computer, but only the SMARTER Balanced group would use “computer adaptive” technology, which adjusts the difficulty of questions in relation to a student’s responses, as the basis of that year-end test. ("Three Groups Submit Applications for Race to Top Assessment Grants," July 14, 2010.)
In the executive summary of its application, the PARCC coalition said the common-assessment system it aims to build would offer four “innovative” features to significantly improve the quality and usefulness of large-scale assessments: using college and career readiness as an anchor; measuring rigorous content and students’ ability to apply that content; measuring learning and providing information to educators throughout the school year; and leveraging technology for “innovation, cost-efficiency, and speed.”
“PARCC’s assessment system will provide the tools needed to identify whether students—from grade 3 through high school—are on a trajectory for postsecondary success and, critically, where gaps may exist and how they can be remediated well before students enter college or the workforce,” the application says.
For its part, the SMARTER Balanced Assessment Consortium outlined its vision for “a new generation assessment system” that contains “a set of balanced components that can be adapted to meet students’ needs across participating states.”
The undertaking would be “rooted in a concern for the valid, reliable, and fair assessment of the deep disciplinary understanding and higher-order thinking skills that are increasingly demanded by a knowledge-based global economy,” the consortium says. It promised an approach of “responsible flexibility,” whereby the consortium provides options for “customizable system components while also ensuring comparability of high-stakes summative test results across states.”
Vol. 30, Issue 03