
Federal Study Probes Readiness of 4th Graders for Computer-Based Writing Tests

By Liana Loewus — July 25, 2014

Fourth graders are capable of using a computer to type, organize, and write well enough to be assessed, according to a pilot study released July 24 by the National Center for Education Statistics. However, whether the results of a computer-based test offer a true measure of students’ writing abilities has yet to be determined.

The study also presents some ideas for making computer-based assessments more accessible to 4th graders, including simplifying directions and reading them aloud.

Questions about students’ technological readiness are of paramount concern as the majority of states prepare to roll out common-core-aligned tests, most of which will be taken on computers, next spring. The issue is even more thorny when considering young students, who are likely to have had the least practice with computer-based writing.

The NCES, which administers the National Assessment of Educational Progress, is also getting ready for an expansion of its own computer-based assessment. In 2012, results were released from the first computer-based NAEP writing test for 8th and 12th graders, and only one quarter of students scored at the proficient level or higher.

In 2011, the NCES performed a small-scale usability study to see how well 4th graders could access the assessment platform used for the 8th and 12th grade NAEP writing tests, according to a July 24 press release from the federal research center, which is part of the U.S. Department of Education. That 60-student study found that “4th grade students varied in their ability to write using the computer, and that while some features seemed intuitive to students, others were more difficult to access.” Specifically, students had trouble reading and understanding the lengthy directions, and many skipped them altogether.

Based on that study, the NCES developed a modified assessment platform that had fewer words for the directions, presented one direction at a time, and included voice-overs to read the directions.

The next year, 2012, the NCES administered a new writing assessment using the modified platform to a nonrepresentative sample of 13,000 4th graders. The students took two 30-minute or three 20-minute tests using a laptop and headphones. The writing tasks included text and some pictures, video, and audio components and asked students to explain, persuade, or craft a short story.

Overall, 61 percent of students scored at least a 3 on a 1-to-6 scale (with a score of 6 being the highest). The NCES press release says that means the majority of students “wrote enough to be assessed, included ideas that were mostly on topic and used simple organizational strategies in most of their writing.”

However, in an interview, Ebony Walton, an associate research scientist with the NCES, said “the performance piece is really just an initial look.” She emphasized that the results were not weighted and that the pilot study is “just a snapshot. It was not necessarily to make definitive statements about students’ ability or how their use of these tools are related to student performance.”

The study also looked at the number of keystrokes students used and how often they hit the backspace key—arguably indicators of technological proficiency. However, the study did not shed light on what those data mean for students’ tech readiness.

Further Research

One glaring question not tackled by the study is how well students perform on the computer-based writing test compared with how they perform on the paper-and-pencil version. That is, are 4th graders able to demonstrate their true writing ability using a computer, or is it a barrier for at least some students?

However, the study does offer one comparison, focused on word count. On the NAEP paper-and-pencil writing test in 2010, 4th grade students wrote on average 159 words per response. With the computer-based test, which was 10 minutes longer, they wrote an average of 110 words.

Considering that higher-scoring papers tend to have more words, as the 2012 data show, students may not be showing their true abilities on the computer.

Ms. Walton notes that an overall performance comparison to paper-and-pencil scores “would have to be done empirically with another type of study. The study we did here wasn’t designed to make those types of statements.”

Scott Marion, the associate director of the Dover, N.H.-based National Center for the Improvement of Educational Assessment, said he’d like to see a study in which students are each given two writing prompts, responding to one on paper and one on computer. The response written on paper would then be scribed onto the computer for true comparability (since, he noted, typed papers in general tend to receive higher scores than handwritten ones).

Derek Briggs, a professor of quantitative methods and policy analysis at the University of Colorado at Boulder, said that the score comparisons “really could go either way. It could be that for some students you get better insights into their writing [with computer-based tests] and for some you get worse.” (Neither Briggs nor Marion was involved in the NCES study.)

“I think we’ll be finding out more about that, especially as PARCC and [Smarter Balanced] tests get rolled out,” Briggs added, in reference to the forthcoming common-core tests from the two state consortia funded with federal dollars.

The NCES is planning to release another, more in-depth study on how well students do with computer-based writing tests by the end of the year.

A version of this news article first appeared in the Curriculum Matters blog.