
Teachers Deserve Better Tools for Tracking Subskills

By Tom Vander Ark — July 29, 2014

The good news is that a growing percentage of students are learning, practicing, and applying math in several engaging modalities, most providing frequent instructional feedback. The same is true in other subjects, just a little more slowly. The bad news is that most schools have no way to combine assessment information from multiple sources in ways that are useful for driving instructional improvement or managing student progress.

The solution to this problem requires clarity around common expectations, agreement on how those expectations will be assessed, and a common tagging scheme for content and assessment. It would also be useful to have a couple of recognized, widely used practices for combining and using assessment data in competency-based environments, and better growth measures to compare progress in different environments.

That all seems doable, right? Well, not so fast. In conversations with school leaders, gradebook vendors, and assessment providers, four significant problems emerge:

1. Different standards.
States leaving the Common Core will need to figure this out on their own--another reason it's dumb for small states to create their own odd little academic cul-de-sac. About half of the states remaining in the Common Core are making substantial edits and additions to the standards. Even in Common Core states, "Very few are directly assessing Common Core, they use their own version of standards," said Justin Meyer of gradebook provider JumpRope. Most schools and districts have their own way of expressing and assessing standards. The most common reason for local standards is gaining adequate grain size to make informed instructional decisions (as Will Eden describes in this post on ELA standards). The bottom line is that even where standards are common, they're not always the same, requiring a semi-customized solution to aggregate formative data from multiple sources.

2. No agreement on tagging data.
In short, sub-skill tracking is complicated. The state edtech directors' association (SETDA) worked with the assessment consortia to create an enhanced schema (GIM-CCSS) for aligning resources to the Common Core. Doug Levin said, "The problem we set out to solve with granularity was to provide a mechanism to ensure that the breadth and depth of the standards was able to be evaluated in alignment judgments in assessment (formative and summative), instructional materials, and professional development resources." The proposed SETDA solution reflected the richness of the standards statements, which frequently encompass several different competencies--competencies diverse enough that one couldn't teach or assess them in the same lesson or at the same time. However, the standards authors and other experts were never able to reconcile their beliefs on how to split the standards. Standards author Jason Zimba thought "splitting standards was a bad idea from the start."
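To make the granularity debate concrete, here is a minimal sketch of what tagging a resource below the full-standard level might look like. The standard code is real CCSS notation, but the sub-skill suffix, field names, and structure are illustrative assumptions, not the published GIM-CCSS schema:

```python
# Hypothetical alignment record for one instructional resource.
# Only the standard code is real CCSS notation; everything else
# (field names, the ".a" sub-skill split) is assumed for illustration.
resource_tags = {
    "resource_id": "lesson-4521",
    "alignments": [
        {
            "standard": "CCSS.MATH.CONTENT.4.NF.B.3",  # full standard
            "sub_skill": "4.NF.B.3.a",                 # hypothetical split
            "alignment_type": "assesses",              # vs. "teaches"
        },
    ],
}
```

The debate above is essentially about whether that second, finer-grained key should exist at all.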

Tim Hudson of DreamBox Learning said they're seeing more schools use the CCSS Publishers' Criteria, something he sees as a positive development, in part because the associated Toolkit focuses on clusters, not subskills. In a Toolkit essay, Daro, McCallum, and Zimba say, "Fragmenting the Standards into individual standards, or individual bits of standards, erases all these relationships and produces a sum of parts that is decidedly less than the whole." They note that focusing too narrowly on sub-skills contributes to the mile-wide, inch-deep problem we've had for some time in math.

For this reason, Hudson thinks educators and students might find more value in CCSS cluster-level reporting--perhaps in the form of heat maps that also emphasize proficiency and growth in the Major Work areas. Zimba proposed a "wiring diagram" that reflects sub-skill clusters and relationships. Jen Medbery of Kickboard thinks it may be necessary to keep the sets of sub-skills distinct and allow gradebook users to build rules grouping sub-skills. Districts and networks could choose from a couple of different credentialing or badging schemes linked to skill clusters.
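A user-defined grouping rule of the kind Medbery describes might look like the sketch below. The function name, the 4-point scale, and the 3.0 mastery threshold are assumptions for illustration, not Kickboard's actual feature or API:

```python
# A hedged sketch: roll sub-skill scores up to a cluster judgment
# using a teacher-configurable threshold (assumed 4-point scale).

def cluster_mastered(sub_skill_scores: dict[str, float],
                     threshold: float = 3.0) -> bool:
    """The cluster counts as mastered only when every grouped
    sub-skill meets the threshold."""
    return all(score >= threshold for score in sub_skill_scores.values())

fractions_cluster = {"4.NF.B.3.a": 3.5, "4.NF.B.3.b": 2.5}
print(cluster_mastered(fractions_cluster))  # False: one sub-skill below 3.0
```

The point of keeping rules user-editable, per Medbery, is that districts could swap "all sub-skills" for "average" or "most recent" without waiting on a vendor.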

Amplify uses a honeycomb data visualization tool. President Larry Berger
said, “There are sensible sub-skill sequences, but they are often idiosyncratic to a given instructional approach. So the one we use at Amplify enables us
to report on granular progress in the interstices between standards, but it isn’t one that others would agree to standardize on.”

Sometimes sub-skill sequences are developmental (most brains in most contexts would learn them in that particular order); the rest of the time, the order of learning is shaped by the order of teaching.

3. No agreement on defining competency.
The definition is simple--students demonstrate mastery to progress--but there are many variations of competency-based environments. Some use big gateways--end-of-course exams or big interdisciplinary demonstrations of learning--while others use small, frequent gateways that combine multiple assessments. Combining multiple assessments in a consistent and reliable fashion is important in all of these environments, particularly where assessments guide student progress. The schools doing this well are individual rotation models with custom-built platforms (e.g., Summit and EAA Buzz) that combine several assessments at a unit (or cluster) level. Otherwise, there's little agreement about how to combine assessments (e.g., weighting, trailing average, most recent).

The challenge to standards-based grading, according to Justin Meyer, is that there is no one place to find out about it and nobody wants to agree. CompetencyWorks is a great start--it's an online community of educators working hard to figure this out--see The Art and Science of Designing Competencies by curator Chris Sturgis.

4. Inadequate tools.
Performance feedback from any learning experience should flow automatically into a super gradebook (sometimes called an instructional management system or learner profile); with the exception of a couple of closed systems and small pilots, it never works this way. Most schools use spreadsheets to manage multiple sources of data--it's like running an airport air traffic control tower with a bunch of scratchpads. Teachers manually add data to spreadsheets and then manually enter grades into a gradebook. New gradebooks like Engrade, Kickboard, and JumpRope support customized deployments, but it is still a technical and manual process that will prevent competency-based education from scaling.
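The plumbing being skipped is mundane. A sketch of the missing step--moving scores from a learning app's export into a gradebook import without retyping--might look like this; both file layouts and all column names are hypothetical, since a real integration would target each vendor's actual import spec:

```python
# Hypothetical CSV-to-CSV bridge: read a learning app's score export
# and rewrite it in a gradebook's import layout. Column names assumed.
import csv

def convert(app_export: str, gradebook_import: str) -> None:
    with open(app_export, newline="") as src, \
         open(gradebook_import, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=["student_id", "standard", "score"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "student_id": row["StudentID"],
                "standard": row["StandardCode"],
                "score": row["ScaledScore"],
            })
```

Ten lines of glue per source is trivial for a developer and impossible to sustain by hand across a school--which is the scaling problem in miniature.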

Here's a common response from an education software provider: "We definitely want to support better information for teachers and explore ways to integrate more closely with the gradebooks." The tools won't get better until there is more philanthropic investment and/or more aggregated demand (i.e., more schools doing things the same way) that drives investment.

Next steps.
As the amount of assessment data grows, three million American teachers struggle to combine assessment data from multiple sources. They're using spreadsheets to collect data when simple data-integration tools would do it automatically. Schools are making up rules about student progress and reporting when a couple of widely used templates should be available.

If simply tracking a checklist of skills won't cut it, solutions will need to include micro-standard tagging grouped into skill clusters--a two- or three-layer hierarchy supporting the ability to combine fine-grained and broader performance assessments. As Medbery said, a couple of different options and the ability to customize would be helpful. The SETDA GIM solution appears to be worth reviving and supporting--it's a good start from a capable and well-positioned organization.
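A minimal sketch of that two- or three-layer hierarchy, using real CCSS codes down to the standard level; everything below that (the micro-standard suffixes) is an illustrative assumption:

```python
# Micro-standards roll up to standards, standards to clusters,
# clusters to a domain--so reporting can aggregate at any layer.
hierarchy = {
    "domain": "4.NF",                # Number & Operations--Fractions
    "clusters": {
        "4.NF.B": {                  # build fractions from unit fractions
            "standards": {
                "4.NF.B.3": ["4.NF.B.3.a", "4.NF.B.3.b"],  # assumed splits
                "4.NF.B.4": ["4.NF.B.4.a"],
            }
        }
    },
}
```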

With one or more widely recognized options for tagging CCSS resources, networks and districts will need the ability to manipulate a rule set (i.e., how assessments are combined and weighted) to manage student progress and reporting. The EAA strategy of requiring students to bring forward three forms of evidence for each unit appears to increase student agency and motivation. This functionality should be built into next-gen gradebooks (and instructional management systems).
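A district-configurable rule set like the EAA's three-pieces-of-evidence requirement could be as simple as the sketch below. The rule structure, field names, and 3.0 bar are assumptions for illustration, not the EAA platform's actual schema:

```python
# Hypothetical district rule set: a unit is complete when enough
# distinct pieces of evidence each clear the mastery bar.

RULES = {"min_evidence_per_unit": 3, "min_score": 3.0}

def unit_complete(evidence: list[dict], rules: dict = RULES) -> bool:
    passing = [e for e in evidence if e["score"] >= rules["min_score"]]
    return len(passing) >= rules["min_evidence_per_unit"]

evidence = [
    {"type": "quiz", "score": 3.5},
    {"type": "project", "score": 3.0},
    {"type": "exit_ticket", "score": 2.5},  # below the bar--doesn't count
]
print(unit_complete(evidence))  # False: only two passing pieces
```

Because the rules live in data rather than code, a network could tighten or loosen them without rebuilding its gradebook--the customization Medbery argues for.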

To boost student engagement and simplify stakeholder reporting, the solutions should be, as Michael Fullan suggests, "irresistibly engaging" for students and "elegantly efficient" for teachers. Students should be able to log into a mobile application and quickly understand what they need to learn and their options for demonstrating mastery. Teachers should be able to efficiently monitor progress, benefit from informed recommendations and dynamic scheduling, and pinpoint assistance for struggling students.

This is an education problem more than a technology problem, a political problem more than a psychometric problem. Solving this set of nested challenges will require leadership and investment. It will require groups of schools (e.g., League of Innovative Schools, Great Schools Partnership, New Tech Network) to agree on competency-based protocols and use their market leverage (and some grant funding) to drive investment in solutions for their instructional model. The next-gen gradebooks are all willing and capable partners. Students and teachers deserve better tools.

The opinions expressed in Vander Ark on Innovation are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.