The most fun and rewarding thing to do with Common Sense Media’s new Learning Ratings for Apps and Games is to challenge them. As with any rating system, the ratings themselves are useful, but the real learning starts when young people (or people of all ages) start talking critically about what the ratings mean. (I introduced Common Sense Media’s great new Learning Ratings in Monday’s post.)
There were plenty of ratings done by Common Sense Media that had me nodding, but here are a few that had me shaking my head:
Farmville, for instance, gets 1 out of a possible 3 “books” for learning potential. The reviews laud the game for its collaborative and social features, but they claim “its inaccurate portrayal of farming hinders its learning.” Are the social features of Farmville meaningful ways for people to collaborate and connect with one another, or are those features really just part of a viral campaign to turn Farmville players into the Zynga marketing department?
Kinect Star Wars gets 1 “book” for learning for encouraging people to be physically active. Does physical activity count as learning? Fruit Ninja gets 0 “books.” Swinging my arms is learning but swishing my finger isn’t? What about the pattern recognition in Fruit Ninja; isn’t that learning? If I want to develop new strategies to get higher scores, then I have to learn about the distribution of the point system and try to understand how to maximize my score from each slash and how to balance the rewards of getting higher scores with the risks of hitting a bomb. Why isn’t that learning?
Why does the HowStuffWorks App get 3 books, but the HowStuffWorks Web Site gets 2 books? Is this just an internal reliability issue, or is there actually something different about the product in the different formats?
Portal 2 (one of the greatest games I’ve ever played) gets 2 books for its puzzles and realistic physics. But gravity and observation puzzles are a feature of hundreds of titles. Why is learning the rules of the imaginary world of Portal 2 a richer learning experience than learning the rules of the imaginary world in Swordigo (unrated), which has the same kinds of observation and experimentation puzzles?
My point here is not that Common Sense Media is “wrong” in any of these ratings. Summarizing the learning potential of apps and games in an objective way is an incredibly difficult task, and on the whole I think CSM does a terrific job. My point is that great researchers like James Gee and Kurt Squire have used games to challenge our whole notion of what learning is and what it should be. There are no clear answers to these questions. One of the very best things that parents and educators can do with the Common Sense Media Learning Ratings is use them to provoke debates and discussion about learning and games, and to get kids to think carefully about what they think they gain from their time with games.
I’ll also say, as a follow-up, that I’d love for Common Sense Media to publish more about the process for rating games and apps as a way to facilitate these richer discussions.
CSM has made a great start explaining some of the process underlying these ratings, but I hope they go further. At present, they have a blog post explaining some of the reasoning behind the ratings and another page describing some of the process. These posts hit some key points: the ratings describe potential, not guaranteed, learning from apps, and different kids will respond differently to different tools. The blog post ends with the most important point: the most important factor in the learning potential of apps is not which app families choose but how involved caregivers are in children’s play and use.
These posts also provide a short summary of the rating process:
- Engagement: Is it engaging, fun, absorbing?
- Learning approach: Is the learning central and not secondary to the experience? Is it relevant and transferable to real life? Does it build concepts and deep understanding? Do kids get exposure to a diversity of people and situations?
- Feedback: Do kids get feedback about their performance? Does their experience (e.g., game play) adjust based on what and how they do?
- Support and extensions: Are there opportunities and resources to support, strengthen, and extend learning? Is the title accessible to a variety of audiences?
This is a good start to explaining how the rating system works, but it would be great to have a link to a white paper explaining the process more fully. Who are the trained experts doing the ratings? Parents? Educators? Media experts? How many experts rate each product? Exactly what are the criteria they use? How do editors ensure reliability? Can developers appeal? Can they have any influence at all over the process? Will raters evaluate published research on particular apps and tools? Unpublished studies commissioned by developers? My hunch is that if this rating system is successful, and I hope it is, CSM will have to do more to answer these questions, even if there are good reasons for maintaining secrecy about certain parts of the process. A richer description of the underlying process for ratings could be a great prompt for a rich discussion about learning and games.
The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.