Education Opinion

So How DO We Measure Learning in the Arts...

By Nancy Flanagan — June 27, 2012 5 min read

... if we don’t use some kind of standardized assessment?

My previous blog has been re-posted, dissected, praised and scorned. But the question is still out there: How do we measure learning in the arts?

Two critical observations:
• Contrary to what some commenters seem to think, nobody is suggesting that learning in arts education can’t or shouldn’t be assessed. The blog wasn’t a half-baked claim that the arts are too creative/expressive/ethereal/woo-woo for teachers to properly evaluate what their students have produced and learned. Assessing learning in the arts is precisely how students grow in arts knowledge and skill--with the assistance of their teachers, who use those assessments to tailor and improve their instruction as well.
• This contention--that we can’t measure something unless we standardize it--is driving a whole lot of truly damaging, excessive and deceptive testing right now, and not just in the arts. How many times have we heard this: “If we don’t use standardized tests, how will we really know what students have learned? Or how they compare to kids in Singapore?”

Standardization is about uniformity and comparison. Assessment is something else entirely.

What advice would I have for those who would like to do a better job of assessing learning in the arts? (I have no advice for those who want to “compare” teachers’ “efficacy,” using standardized test scores.)

First: Teachers must know their students. You can’t evaluate growth unless you have a handle on these particular kids, at this time. You might know that four-year-olds in Japan are capable of repeating five-note tunes on violins with a high degree of accuracy. But that doesn’t tell you what experiences the four-year-olds in front of you have had in making music--or what they’re capable of doing. Standardized testing is the antithesis of knowing individual students well, and creatively pushing them to exceed expectations.

Second: Be clear about learning goals. While you’re at it, set challenging and worthy goals--always asking why those goals are important for your students. A lot of the problems in designing assessments in the arts come from a mismatch between what teachers hope and believe they’re teaching and what students are actually learning.

Yes, you can test students’ precise recall of major periods of Western art music--key composers, developments in composition styles, etc. And for a time, your students will “know” that Bach was a Baroque composer, and sonata-allegro form emerged in the Classical period.

But suppose your learning goal was bigger than memorizing historical periods and key composers--say, tracing the elements that shaped Western art music? Students would understand that technologies impact composition (materials, keys, slides, manufacturing techniques, etc.), that harmony developed roughly aligned with the scientific overtone series, that music in non-Western cultures developed very differently--and so on. While these learning goals are richer, and transferable to other disciplines--not to mention more important in understanding development of music-- they’re not easily measured by multiple-guess tests. And lest you think they’re advanced concepts--I taught all of them to middle school students, who produced artifacts demonstrating core understandings.

Third: Use diverse metrics for different skills and concepts. Available, cheap assessment tools should not determine what gets assessed: learning goals come before assessment design. In music, some things can be measured only by observation and skilled listening. Some things require assessment in groups--choral blend, for example. Some things are best assessed by straight-up musical performance and a rubric. Others are best evaluated via discussion, pieces of writing, movement. All assessment modes have worth.

Fourth: Value application, interpretation and analysis over regurgitation. Assessment in the arts has to include more than accurate reproduction. All student musicians are familiar with “down the row” assessment: Teacher assigns difficult passage, then goes down the row, grading each performance. If first musician is exactly correct, subsequent students can benefit from a good model (or suffer by comparison). Last person in line often benefits most. What the teacher thinks they’re assessing: Who practiced? But what they’re actually assessing may be students’ ability to imitate. And none of it has anything to do with higher-order, lifetime musical skills and knowledge.

Fifth: Value self-assessment most of all. Teaching students to self-evaluate--using excellent models, open-ended questions and assignments, and rubrics of essential characteristics of excellent work--should be a primary goal for arts educators. There will always be contests, festivals, competitions and grades in the arts. Inculcating an informed sense of self-critique is far more valuable in the long run than trying to get a top rating.

Sixth: Don’t consistently privilege one learning mode over another when you assess.

I would also argue that there isn’t much value in rigidly standardizing arts curricula--whether methods training for performance skills or “core” knowledge. Here’s a story about that:

The NBPTS standards on which the National Board assessments of accomplished teacher knowledge in music are based say that teachers should “demonstrate a comprehensive knowledge of the musical and stylistic differences that distinguish the music of various historical periods, genres, styles, cultures, and media,” including “the cultural and historical context in which they were developed.”

The assessment creators (I was on that team) agreed we needed an assessment that measured teachers’ knowledge of music history. Some of us assumed that would be a traditional analysis of historical periods and masterworks of Western art music, the large majority of which was created in Europe. The stuff we studied in college.

Some of our team attended historically black colleges, however--and argued that what they learned in university-level music history classes was more relevant to our students: the African roots of American pop music, for example, or how folk music was generated and spread. And where were other world music traditions in the “music history” assessment? Whose history were we privileging?

The discussion was fascinating, and occasionally heated. We ended up including a broader definition of music history in the assessment. The experience altered what I understood as “music history” forever--and changed my teaching. It also made me wary of rigidly designating certain skills and knowledge as “best” or “essential.”

Isn’t there room for change, growth and diverse methodologies in curriculum and assessment?

The opinions expressed in Teacher in a Strange Land are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.