Student Achievement Since No Child Left Behind
July 29, 2008
- Guests:
- Jack Jennings is the president and CEO of the Center on Education Policy, a research and advocacy organization in Washington, D.C.
- Nancy Kober is co-author of “Has Student Achievement Increased Since 2002? State Test Score Trends Through 2006-2007.”
Alexis Reed (Moderator):
Good afternoon, and welcome to Education Week’s Live Chat: Student Achievement Since No Child Left Behind.
Joining us live are Center on Education Policy President and CEO Jack Jennings and Nancy Kober, CEP consultant and co-author of “Has Student Achievement Increased Since 2002? State Test Score Trends Through 2006-2007.” In this Live Chat, they will discuss and answer your questions about student achievement in relation to NCLB. I’m Alexis Reed, a research associate in the EPE Research Center, and I’ll be moderating this discussion with our guests, each of whom has a unique background and perspective on NCLB and student achievement. We’re already getting a tremendous number of questions for this chat, so let’s get right to them.
Question from Daniel Welton, Secondary School Counselor, Sullivan West Central School:
If the data show students are doing better, is this the result of test design (making the test easier) or of how the test is scored? Would students do as well on a test designed on the same basic material which predated NCLB?
Jack Jennings:
Our report addresses the issue of how well students are doing on state tests and how this compares to the results from states on the National Assessment of Educational Progress. We do not directly deal with the reasons for the results, but we do speculate on the reasons based on other research, including our own. The four reasons we have identified are that students know more, that more time is being spent on reading and math (the tested subjects), that more time is being spent on test preparation, and that there could be changes on the state tests that are difficult to detect. Related to the last point is one of our rules: we only identify trends if there are three years of results on comparable tests, so if a state had substantially changed its test and reported that to us, we would not use the results in identifying a trend.
Question from Ellen Rintell, Professor, Salem State College:
When examining the achievement test scores for this report, did you know if the scores of English language learners were included in the data reported? How comparable are the scores from one state to another with respect to the inclusion or exclusion of scores of English Language Learners in their data?
Nancy Kober:
Our study did collect test results for ELLs from every state that could provide them, and the profiles for the individual states on the CEP website show those results. However, we did not reach national conclusions about achievement gap trends over time for the ELL subgroup because of comparability issues. In particular, federal and state rules that govern which ELL students are tested, how they are tested, and when their scores can be counted as proficient have changed substantially since 2002, in ways that could affect the year-to-year comparability of test results even in the same state as well as across states. So we urge people to be cautious in drawing conclusions about trends for this group. The same goes for students with disabilities.
Question from Tamara Netzel Teacher Gray’s Creek Middle School Cumberland County Schools Hope Mills NC:
My school has missed AYP for two years in a row, only by one point, in the Exceptional Children subgroup in math. Because of this, NCLB paints a bleak picture of our school from the outside looking in. I am proud of the hard-working, well-educated, caring people who work in my school. It seems we have done everything imaginable to help our students, but not enough to reach AYP. Our AYP grows each year, but as the bar rises, we still fall short somehow. How will legislators change a seemingly unrealistic program expecting 100% of children to reach proficiency in a few years?
Jack Jennings:
That question is not directly addressed by this report since this study deals with state test results related to NCLB. But, the Center on Education Policy has conducted other research on NCLB that addresses that question. On our web site are our recommendations for Reauthorizing the Elementary and Secondary Education Act (August 28, 2007) which we presented in testimony to Congress. Those recommendations include allowing states to shift to a “growth model” which would measure academic progress every year for students geared to the growth in the better performing districts in the state, instead of using a goal of 100% proficiency for all students by 2014.
Question from Dr. Doris G. Johnson, Associate Professor of Education, Wright State University:
My questions piggyback on these: How are students performing under NCLB? Is the achievement gap narrowing? Ultimately, is NCLB working? To which American groups’ achievement gap are you referring? Asian to Caucasian? Asian to African American? Hispanic to Caucasian? And how many students from each group are being compared? My observation is that Caucasians do not believe they score lower than Asians. The only gap that receives much press is Caucasian achievement scores compared to African American scores. Why is this occurring?
Nancy Kober:
We found that in states with consistent test data since 2002, achievement gaps have narrowed more often than they have widened. This trend is especially notable for the African American-white gap and the gap between low-income and non-low-income students.
Our study also reported national trends in the Latino-white gap, and we found progress in narrowing gaps. But only a limited number of states had sufficient data to draw conclusions about Latino students. We excluded several states from our national analysis of the Latino-white gap (and, for that matter, the Native American-white gap) because these subgroups were either very small in size or had changed substantially in size over the years analyzed, factors that could make test score trends less reliable.
Regarding your questions about specific subgroups, CEP has posted individual state profiles for each state on its website (www.cep-dc.org), with test results and number of test-takers for each of these groups: all students in the state, White, African American, Latino, Asian, Native American (the latter four groups compared with white students), low-income, students with disabilities, and English language learners. These profiles show that in many states, the Asian subgroup performed as well as or better than white students.
As for your question about whether NCLB is working, our study found improvements in reading and math test scores in most states. But we caution that it’s impossible to say how much of this improvement is due to NCLB because many different federal, state, and local actions have been taken at the same time. Other CEP studies of NCLB (see www.cep-dc.org) have looked at positive and negative aspects of NCLB.
Question from Libby Brydolf, teacher, Rosebank Elementary:
What has been NCLB’s impact on teachers? Is the turnover rate higher or lower? Have salaries changed? What is the teacher satisfaction rate? What concerns do teachers have about NCLB nationwide?
Jack Jennings:
The report we are discussing today only deals with the results of state tests used for NCLB purposes. Other research that the Center on Education Policy has done over the last six years addresses the last question--the concerns of educators. Those reports, especially the series From the Capital to the Classroom, are on our web site, CEP-DC.org. I do not believe there is any uniform reliable data addressing the issues of teacher turnover rates or salary changes due to NCLB.
Question from Dana Camp-Farber, a teacher in Houston, TX:
Are the measures of student achievement consistent and valid to be generalized to the entire public school populations? Are all states using the same method to measure student populations and subgroups?
Nancy Kober:
The main data in this study come from each state’s own test that it uses for NCLB accountability. Since nearly every student in the grades tested for NCLB takes these tests, the results are generalizable to the student population for that state.
These state tests differ greatly in format, content, difficulty, scoring scales, and other features. For that reason, we did not directly compare one state’s test scores or percentages proficient with another state’s. Instead, we arrived at a national picture of achievement by tallying the number of states with moderate-to-large and small gains and declines in achievement based on their own tests.
Federal NCLB guidance includes requirements that all states must follow for measuring the achievement of subgroups, but there is still room for state variation within these general criteria. So states vary on such issues as how many students must be in a subgroup to make it large enough to count for AYP purposes.
Question from Jim Kohlmoos, Knowledge Alliance:
Jack, in your annual report you have wisely emphasized that one should not try to draw causal conclusions from the data about whether NCLB has been responsible for the upward trends since 2002. But most of the press continues to interpret your results as a testimony to NCLB’s effectiveness (or ineffectiveness). Why did you start the trend analysis in 2002? Why not go back further and compare the trend lines from other eras, particularly for the Improving America’s Schools Act?
Jack Jennings:
Our report, press release, and other materials carefully state that we cannot draw direct causal connections between these test results and NCLB because so many things have been going on in the schools at the same time as NCLB. With regard to your second question: last year, when we issued our first report on student achievement, we did go back before 2002, when NCLB was enacted, for those states that had the data. In answer to your last question, we started from 2002 this year because NCLB has resulted in much more test data and more publication of test results, so that we could analyze the trends better. Before NCLB, a minority of the states had all the testing required by that Act.
Question from Sonia, Teacher, LAUSD:
The report shows how well the children do on exams. What does it tell us about children being able to use important critical thinking skills?
Nancy Kober:
Our study did not get into analyzing the content of state tests to determine how well these tests assess critical thinking. We do point out in the report that test scores are not synonymous with achievement. We caution readers that tests are an incomplete and imperfect measure of student learning, and not all important knowledge and skills are well measured by the kinds of large-scale standardized tests that states use for NCLB accountability.
Question from Nicole Longevin-Burroughs, Manager of Education and Community Programs Pittsburgh Symphony Orchestra:
How are the arts being factored into the NCLB assessments? They are core subjects.
Jack Jennings:
The No Child Left Behind Act requires the testing of reading/English language arts and mathematics and uses these results for accountability purposes for schools and school districts. This year science is also tested, but the results are not used for accountability purposes. NCLB does not require the testing of the arts, and therefore they are not factored into NCLB assessments.
Question from Nathan Campbell, Researcher, Public Consulting Group:
You mention the NCLB goal of 100% proficiency by 2014. Do the results of the research indicate what states/groups are on track to reach this goal and which are not? Do you have any suggestions on how to find this information out?
Nancy Kober:
The CEP website includes profiles for each state that show the percentages proficient by grade and by subgroup for that state.
In some states, the percentages proficient are higher than in others. But it’s important to remember that how close a state is to proficiency depends on how difficult its test is and where it sets its cut score for proficiency. State tests vary greatly in this respect. So one cannot assume that a state in which more than 90% of students scored proficient in some grades (see Georgia for one example) necessarily has a better education system. Sometimes the states with the lowest percentages proficient are those with the harder tests or higher cut scores.
We also show percentages proficient by subgroup for each state in the profiles. African American, Latino, and Native American students have lower percentages than white or Asian students -- often as much as 20 to 30 percentage point differences in some states. Often students with disabilities are the lowest achieving group.
Question from Sarah Minnick, Social Studies teacher, Achievement House Charter School:
For schools that have underperformed according to NCLB standards, how has the loss of monies or other restrictions affected these schools?
Jack Jennings:
This study only deals with state test results from all 50 states and with a comparison to NAEP results. But, from other work the Center on Education Policy has done dealing with NCLB, we can address your question. In our report on the 4th year of NCLB, we reported that 33 of the 50 states stated that funds had been inadequate to assist all schools identified for improvement, and 80% of school districts said that they had costs for NCLB that were not covered by federal funds. That report is on our web site as From the Capital to the Classroom: Year 4 of the No Child Left Behind Act.
Question from Susan, teacher, Los Angeles Unified School District:
Other than the annual standardized tests, what tools are being used to assess student learning and achievement?
Nancy Kober:
Our study looked only at standardized test data. We used state test data because that is the primary achievement measure used for NCLB accountability. Then we also looked at test data from the National Assessment of Educational Progress to see whether the trends on the state tests were corroborated by another independent assessment (and in general, the trends did usually move in the same direction on both assessments).
But we also make clear in the report that tests have limitations and are not perfect measures. As you know, classroom teachers use a variety of different means to assess how well students are learning. But because these other measures are not consistent across states or nationally, there’s no way to aggregate them into a national picture of achievement.
Question from barbara cherem, professor, U-M:
Why is there so much disparity in the research claiming that test scores have versus have not moved over the history of NCLB? I’m not sure which data to believe, or is it that it’s so variable against the NAEP, depending on subject and grade level?
Jack Jennings:
I am not surprised that you are confused about which report to believe. All I can do is to explain how we went about our work.
We began by convening an expert panel composed of people both supportive and skeptical of NCLB. The experts and we agreed on the rules we would use to analyze the data. Then we collected test data from the states. All 50 states verified their test data for accuracy. After we were sure we had the correct data, we analyzed it using our agreed-upon rules. In particular, we subjected the data to two different analyses--percent proficient and effect size (each tool has its advantages and defects, and the two tend to balance out one another’s defects).
Lastly, this year we incorporated an analysis of state NAEP data, and compared the results with the states’ own test data. NAEP is an independent measure, based on national standards.
So, we tried to be as objective and as thorough as we could be. We have even put on our web site (CEP-DC.org) all the raw test data that the states verified to us. Anyone in the world can look at that data.
I do not know how much more even-handed and open we could be.
Question from Hans Strong, Director, Persephone:
How accurate a measure of the abilities of primary and secondary school students are performance indices behind the criteria of Adequate Yearly Progress in English language arts and mathematics?
Is analysis of test data concurrent with AYP requirements a fair measure of individual school performance?
Nancy Kober:
Our study did note several limitations of the “percentage proficient” measure used to determine AYP. “Proficient” means different things in different states; state tests and proficiency definitions vary a great deal. Also, the percentage proficient doesn’t capture changes in performance above and below the proficient mark, such as improvement in students moving from “below basic” to “basic” performance or from proficient to advanced. Moreover, the size of achievement gaps between subgroups can vary depending on where a state has set its proficiency cut score.
To address these limitations, our study also looked at effect size, a second type of statistic based on raw test scores. Effect sizes aren’t affected by where the cut score for proficiency is set. Using the effect size measure, we also found that achievement had improved and gaps had narrowed.
This study did not get into the fairness of AYP as an indicator of school performance.
Question from Frank J. Hagen, Adjunct Faculty - Wilmington University & Principal, Retired (MD/DE):
While there have been gains in student achievement since the inception of NCLB, it appears that the achievement gap among the identified groups has been decreased at the expense of those students who are proficient or advanced. In other words, we have been raising the scores of students at the basic level while scores for proficient/advanced students have been stagnant. How can we raise the scores for all students?
Jack Jennings:
It is not evident from our study that the increases in achievement by some groups have been at the expense of other groups, namely lower performing students gaining at the expense of higher performing students. We do address, on p. 97 of the report, the question of whether white student achievement has decreased and the achievement gap has therefore narrowed. Our initial analysis is that that is not the case. We expect, however, to do more work in this area. Incidentally, all 50 states verified their test data for our study, and we have posted all of this data on our web site. So, anyone in the world can go to our site (CEP-DC.org) and do their own analysis.
Question from Arnold Packer, Senior Fellow, Western Carolina, U.:
Can you explain the effect size measure more fully? Do you divide by the average Std. dev. in both tests being compared? Does it only differ from the raw difference when the scale had been changed between the two tests?
Nancy Kober:
I’m a generalist with a background in education policy, so I did not do the computations for the study. Other people on the study team with Ph.D.s in education research did the actual number-crunching. But here’s my layperson’s explanation: We computed effect size by subtracting the mean scale score on a particular test in one year from the mean scale score on the same test in another year, and dividing by the average standard deviation of the two years. If a state introduced a different test during the period being analyzed (2002-2007), we only included those years where the test was comparable, and we only included states in the national analysis of trends if they had at least three years of comparable data from the same test.
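For readers who want to see the arithmetic, here is a minimal sketch of the effect-size calculation as Ms. Kober describes it: the change in mean scale score between two years of the same test, divided by the average of the two years’ standard deviations. The scores below are hypothetical, not actual state data.

```python
def effect_size(mean_y1: float, sd_y1: float,
                mean_y2: float, sd_y2: float) -> float:
    """Change in mean scale score divided by the average standard deviation."""
    avg_sd = (sd_y1 + sd_y2) / 2
    return (mean_y2 - mean_y1) / avg_sd

# Hypothetical grade-4 math scale scores for one state:
# 2002: mean 240, SD 35; 2007: mean 247, SD 34.
gain = effect_size(240, 35, 247, 34)
print(round(gain, 3))  # 7 / 34.5 ≈ 0.203
```

Because this measure is expressed in standard-deviation units rather than percentages proficient, it does not depend on where a state sets its proficiency cut score.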
Question from Lajuane Brooks, Ed Tech Specialist, LB&A. LLC:
NCLB almost implies individualized learning and in response to the growing demand for individualized programming, the enrollment of online courses has significantly increased. How important do you think the role of individualized-paced learning is for attaining higher student achievement in the future?
Jack Jennings:
This report does not provide information that can answer that question. But, it would seem that with the advances in computer technology and the growing use of computers in schools, we could over time move to much more individualized instruction than what we have today.
Question from Candace Cortiella, Director, The Advocacy Institute:
It would appear that students with disabilities are benefitting from NCLB - particularly because they are now being included in state assessments, which was not the case before the Act. Have your studies looked at this group of students?
Nancy Kober:
We collected test results for students with disabilities from every state that provided the data. The results for this subgroup can be found in the individual state profiles available on the CEP site. If you look at these profiles you will see that performance for students with disabilities did improve in most states, often showing notable gains. Still, students with disabilities remain among the lowest performing subgroups in most states.
We did not summarize national trends for students with disabilities because the federal and state rules for which students are tested, how they are tested, and when their scores are reported as proficient have changed a great deal since 2002. These changes have likely affected the year-to-year comparability of test results for this subgroup, so any conclusions over a period of years may not be reliable.
Question from Ellen Karnowski, Resource Specialist, Lake County International Charter School, Lake County:
With state standards varying widely, wouldn’t there be a benefit to setting nationwide standards?
Jack Jennings:
This report addresses the trends in student achievement since 2002, and used a tool of analysis (effect size) that permits conclusions being drawn from states that have widely varying standards. If we had only used the proficiency levels for analysis, we would not have as sound a report.
On the question of national standards, the new Congress in 2009 is bound to address that issue. From following school reform and particularly NCLB over the years, I see more support today for some form of common standards but not federally-prescribed standards. However, I worked in Congress for many years and understand the degree of controversy any discussion of common/national standards will arouse.
Question from Judith Treadway, Parent Liaison/Advocate, Evanston/Skokie Consolidated School District 65, Illinois:
Closing the academic achievement gap has been the primary goal of the NCLB Act. Today, we see a marked improvement in most school districts in the area of math, yet not as great in reading. Is there validity to the idea that math is a universal language that is not impacted by variables such as cultural/racial/ethnic learning styles, cultural appropriateness of materials, school culture, income, social status, etc.? Please explain.
Nancy Kober:
Our study did find that more states had gains in math at all grade levels than in reading. The purpose of this study was to report the trends, and we did not have any evidence that would point us to the reasons why more states had gains in math.
So we can only speculate. It’s possible that math curriculum and content standards vary less among states and school districts than reading curricula do, but offhand, I can’t point to any specific studies on this topic.
Question from Elizabeth vonWurmb, K-12 Fine Arts Coordinator, Clarkstown Central School District:
While experiencing increased achievement in math and ELA, have districts reported a drop in participation and/or funding in the arts?
Nancy Kober:
This achievement study did not look at that question, but we know from other studies done by our organization (CEP) that many school districts across the country have increased instructional time for reading and math and, in the process, have also reduced time devoted to other subjects, including art and music. The specific percentages of districts reporting cuts in time for these subjects can be found in two reports on the CEP web site (www.cep-dc.org): 1) Instructional Time in Elementary Schools, and 2) Choice, changes, and challenges: Curriculum and Instruction in the NCLB Era.
Question from Taylor Keane, ELA teacher, Boston Public Schools:
If testing is not the best way to evaluate a student’s knowledge, why do we put so much effort into testing? I understand there needs to be a bar, however, test preparation takes away time from other important lessons.
Jack Jennings:
When we began this project three years ago, we convened several experts and asked them how best to judge whether students knew more. Their answer was that the only way to get uniform, standardized information was through looking at test results.
Question from Joy Mordica, Senior Project Manager, Westat:
Has the achievement of minorities and students in urban schools increased since 2002?
Nancy Kober:
We only looked at test results at the state level and didn’t break them down to the school district level, so we didn’t break out results specifically for urban school districts.
Our data do show gains in achievement since 2002 for African American students in most states. We also found that the African American-white achievement gap narrowed in more states than it widened. We found positive results for Latino and Native American students as well, but there were fewer states with sufficient test data for us to determine trends for these subgroups.
Question from Pedro A Alcocer, parent, PTSA,EESAC, Miami Coral Park Senior:
Based on the results of the report, is it worth keeping NCLB the way it is now?
Nancy Kober:
This study did not make recommendations about NCLB. We did point out that although test scores have improved since 2002 (the year NCLB was enacted), it’s impossible to say to what extent this was because of NCLB. Many different federal, state and local policies have been adopted to raise achievement, and it’s impossible to sort out how much impact any one of these interconnected policies has had.
Our organization (CEP) has done other research on NCLB, and we did develop a set of recommendations for changing the federal law. Those can be found on CEP’s website (www.cep-dc.org).
Question from Jacqueline harris, Asst. Supt. South Huntington UFSD:
Has the student achievement data been disaggregated by state, region ... urban, suburban, rural? If so, are the performance results consistent across the country for each subgroup -- or are there notable variations?
Nancy Kober:
All of the data for our study was broken down by state, but we did not disaggregate by urban, suburban, or rural status or by region. We advise against making direct comparisons between specific states (and this would apply to regions, too) because state testing systems vary so widely in content, format, difficulty, scoring scales, and other features.
We did look across the nation at four main subgroups: African American, Latino, Native American, and low-income students. Trends of gaps narrowing were particularly notable for the African American and low-income subgroups. Gap trends also narrowed more often than widened for Latino students, but this trend is less conclusive because the number of states where this subgroup was stable in size was limited.
Question from Marlene Thier, Author; The New Science Literacy:Using Language Skills to Help Students Learn Science:
How has NCLB contributed to the intellectual growth and advancement of the upper 10 percent of the student population when so much time is spent in test preparation skills?
Nancy Kober:
Our study did not look at gains above the Proficient level, so we did not track changes in achievement for the top 10% of students.
The Human Resources Research Organization (www.humrro.org) did use the state achievement data we collected to do a study of students performing at the Advanced level on their state’s tests, and found improvements for this group as well. You should be able to find the details for that study on the HumRRO web site.
Alexis Reed (Moderator):
Thanks for all the great questions, and many thanks to Mr. Jennings and Ms. Kober for their time and insights. Unfortunately, we have more questions than time, so we’ll have to leave the discussion there. A transcript of this chat will be available on Education Week’s Web site shortly: http://www.edweek.org/chat/
The Fine Print
All questions are screened by an edweek.org editor and the guest speaker prior to posting. A question is not displayed until it is answered by the guest speaker. Due to the volume of questions received, we cannot guarantee that all questions will be answered, or answered in the order of submission. Guests and hosts may decline to answer any questions. Concise questions are strongly encouraged.
Please be sure to include your name and affiliation when posting your question.
Edweek.org’s Online Chat is an open forum where readers can participate in a give- and-take discussion with a variety of guests. Edweek.org reserves the right to condense or edit questions for clarity, but editing is kept to a minimum. Transcripts may also be reproduced in some form in our print edition. We do not correct errors in spelling, punctuation, etc. In addition, we remove statements that have the potential to be libelous or to slander someone. Please read our privacy policy and user agreement if you have questions.
---Chat Editors