Editor’s note: This is a guest post from Perry Cook. Dr. Cook is the co-founder of Kadenze, a Professor Emeritus at Princeton University, and a Research Coordinator and IP Strategist at Smule, Inc.
In Parts 1 and 2 of this multi-part series, we looked at the history of studying music and art online, and at the role that social networks and mobile apps can now play in learning and practicing the arts. We also touched on how Digital Signal Processing (DSP) and Machine Learning (or AI) are beginning to change online education. In this final installment we’ll dig deeper into computer-based techniques for teaching and evaluating student work. First we’ll talk about curriculum design, less specifically about music or even art, but I promise that we’ll loop back to grading and music by the end.
Intro Video for Dr. Cook’s Kadenze MOOC
How We Teach: Curriculum Design
Online education must be much more than just putting a camera in the back of a classroom, taping lectures, maybe adding graphics, and posting videos. We can teach better, tell engaging stories, embed demos, links and other resources, use in-video quizzes, and grab/retain more students. Good professional videos are important, and tech-wise, a good video player interface is essential. This should include rewind, ½ speed and 2x (or faster) video playback, along with closed captions. Couple this with in-video quiz questions, and we’ve gone a long way toward helping different types of students learn at their own pace, while still guaranteeing that they’re watching AND getting the important concepts.
Every “lecture” doesn’t need to be an hour long! And every “week” doesn’t need to have exactly three lectures.
After over 30 years of conventional teaching, I had a wonderful realization when creating my own Kadenze class. Every “lecture” doesn’t need to be an hour long! And every “week” doesn’t need to have exactly three lectures. Some weekly sessions need over 2 hours total video time, while others are more like an hour. It’s totally freeing to cover what needs to be covered, and send the students off to work on their assignments and projects. Breaking videos up into small 2-10 minute chunks is really important. We live in a world where the most popular online articles take 3-6 minutes to read, so allowing our learners to drop in and move forward in a course at their own time and pace is essential.
And we have to make our courses interesting and fun. In the creative coding world, Dan Shiffman of NYU and the Processing Foundation is a master of this. Dan has made and offered courses and modules to teach Processing since 2012, and he’s always churning out new videos and initiatives. When I visited him at NYU, I was amazed that his faculty office IS a green-screen video production studio.
Intro video for Dan Shiffman’s Coding Rainbow YouTube channel
Speaking of green screens, one of the ways that online video teaching can differ from the classroom is in the use of motion graphics, special effects, and artful editing. Like it or not, since online courses live in the internet world, we must compete with kitty videos on YouTube, or whatever Kanye is doing. A teacher, seated at their office desk, with a webcam pointed up their nose, with annoying background noise and bad lighting, might have been OK in the early days of MOOCs, but no more. We must make our videos and coursework as interesting, artful, and professional as possible.
How We Evaluate: Testing, Grades, etc.
Education has always relied on one or more forms of testing. Pop quizzes with a midterm and final. Weekly homeworks and a big final project. Just these very words induce strong post-traumatic anxiety in many. Fortunately, online education gives us a variety of different models for approaching assessment, verification, and certification.
The “Massive” in MOOC means that we may have to deal with education and assessments at scale. For open online courses, 10,000 students enrolling and submitting the first assignment can mean disaster if humans are expected to do the grading. Peer assessment or grading is an option, but students don’t necessarily trust each other, or themselves, to do a task that teachers and teaching assistants have traditionally done.
I often try to provide one funny incorrect answer as part of multiple choice, or make the questions amusing
I already talked about in-video quizzes, which are a cool way to keep students engaged and ensure that they’re learning the important concepts. To make it interesting, I often try to provide one funny incorrect answer as part of multiple choice, or make the questions amusing. I also try to put some extra instruction and useful/interesting facts into the questions, so the student is always learning and engaged. Figure 1 shows a quiz question from my course.
Figure 1: A well-designed quiz question can assess, teach, and amuse.
With some thought up front, and computer assistance, large parts of many courses can be graded at scale. We’ve found that for many courses, starting with machine-graded assignments, then moving toward some peer grading/critique, is a good mix. By the midpoint of the course, the pool of serious students has usually stabilized, and they know and trust each other more from the discussion forums. We’ve also found that a gentler “Gallery Critique,” rather than strict “Peer Grading,” is a useful framing and mechanism for assessment. Figure 2 shows student gallery critiques from an illustration course.
Figure 2: Required “Gallery Critiques” can be an effective means of both grading and engaging.
But Is Machine Grading Fair?
Proper grading algorithms don’t carry any inherent bias. No teacher’s pets. No “that kid bugs me.”
Short answer: Yep! Longer answer: Well-designed computer algorithms, combined with well-structured assignments (ask the right questions, clearly), are certainly more consistent, and thus fairer, than a large pool of instructors and student teaching assistants. Proper grading algorithms don’t carry any inherent bias. No teacher’s pets. No “that kid bugs me.” Algorithms don’t wake up hung over, giving themselves 1 hour to grade 50 assignments because they want to meet friends for lunch. I could go on, but your next likely (and valid) question might be “what about creativity, and human feedback?” We’ll get to that shortly.
Obviously there are things that computers do much better than humans: counting words, computing the exact length of a submitted music composition sound file, or detecting similarities and minor differences in huge batches of similar files. Plagiarism detection via Grammarly.com, PlagScan.com, and other services is now commonly used for grading traditional university coursework, and for good reason: sources for buying term papers online have been around for a long time. Grammarly and other software like the Hemingway App can help us write better and target a specific reading level. Similar lexicon and grammar analysis tools can be used to grade essays and other long-form writing, and services like turnitin.com provide “automated assessment of writing at scale.”
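To make the “things computers do much better” concrete, here is a minimal Python sketch of that kind of mechanical gate: counting words in an essay and computing the exact duration of a submitted WAV file. The function names and the thresholds are my own illustrative assumptions, not any real platform’s grader; the point is that these checks are cheap, exact, and perfectly consistent across 10,000 submissions.

```python
import io
import wave

def word_count(text: str) -> int:
    """Count whitespace-separated words in a submitted essay."""
    return len(text.split())

def wav_duration_seconds(wav_bytes: bytes) -> float:
    """Exact length of a submitted WAV file: frames / sample rate."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        return w.getnframes() / w.getframerate()

def meets_criteria(text: str, wav_bytes: bytes,
                   min_words: int = 100,
                   min_seconds: float = 30.0) -> bool:
    """A purely mechanical pass/fail gate (thresholds are hypothetical).
    Creative judgment stays with humans; this only checks the basics."""
    return (word_count(text) >= min_words
            and wav_duration_seconds(wav_bytes) >= min_seconds)
```

Checks like these can run the instant a student submits, so obviously incomplete work gets immediate feedback instead of waiting days for a human.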
There is also a split-workload model, where a computer algorithm groups all similar responses for a given question or task, and humans can then grade them in bulk. A new company called Gradescope provides exactly that service.
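The grouping step of that split-workload model can be sketched in a few lines. This is a toy illustration of the idea, not Gradescope’s actual algorithm: normalize each short answer into a canonical form, then bucket students by it, so a grader scores each distinct answer once instead of thousands of times.

```python
import re
from collections import defaultdict

def normalize(answer: str) -> str:
    """Canonical form: lowercase, drop punctuation, collapse whitespace,
    so trivially different answers land in the same bucket."""
    return " ".join(re.sub(r"[^\w\s]", "", answer.lower()).split())

def group_responses(responses: dict) -> dict:
    """Map each canonical answer to the list of students who gave it.
    A human then grades one representative per group."""
    groups = defaultdict(list)
    for student, answer in responses.items():
        groups[normalize(answer)].append(student)
    return dict(groups)
```

Real systems would group on fuzzier similarity (edit distance, embeddings) rather than exact canonical matches, but the bulk-grading payoff is the same: the human workload scales with the number of distinct answers, not the number of students.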
Back to Art and Music
But what about grading creative arts coursework? As mentioned, gallery critiques and peer assessments are good tools, but there is also quite a lot that computer algorithms can do. There’s a research community called the International Society for Music Information Retrieval that has been looking at “machine listening” topics for a long time now. In Part 2 of this series, we already looked at some music teaching/assessment tools such as SmartMusic. Many years ago, Ajay Kapur, Jordan Hochenbaum, Owen Vallis, some others, and I started thinking about this topic as applied to the education space. The result was Kadenze, and the graders we’ve constructed extend beyond music and sound, to photos, arbitrary images such as hand-drawn and lettered pages (comics!), and videos.
Figure 3 shows the results of machine listening applied to a student composition (a submitted sound file). The “robot” has automatically found and marked the four main sections of the piece, and determined that part of the intro (first) section sounds most similar to the ending section. If the submitted composition had only 1 section, or 100, it would not pass the required submission criteria. This “song” clearly has the requisite form, and variety. Whether it’s a hit or not is up to humans and the marketplace, but the robot likes it a lot.
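Kadenze’s actual graders aren’t public, but the core self-similarity idea behind this kind of section-finding can be sketched simply. Assume the audio has already been reduced to a sequence of per-window feature vectors (e.g., spectral summaries); the sketch below greedily starts a new section whenever a frame stops resembling the running average of the current one, then asks which later section sounds most like the intro. All names and the threshold are my own illustrative choices.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def centroid(frames):
    """Mean feature vector of a section."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def find_sections(frames, threshold=0.8):
    """Greedy segmentation: open a new section when a frame no longer
    resembles the running centroid of the current one."""
    sections = [[frames[0]]]
    for f in frames[1:]:
        if cosine(f, centroid(sections[-1])) < threshold:
            sections.append([f])
        else:
            sections[-1].append(f)
    return sections

def most_similar_to_intro(sections):
    """Index of the later section whose centroid best matches the intro."""
    intro = centroid(sections[0])
    return max(range(1, len(sections)),
               key=lambda i: cosine(intro, centroid(sections[i])))
```

With synthetic frames that move from one timbre to another and back, this finds three sections and reports that the ending is closest to the intro, which is exactly the kind of structural fact a grader can check against the assignment’s required form.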
Figure 3: Robot grader automatically finds and marks sections of a student composition.
I think the future is super bright for tools such as these, and as computers get faster and cloud computing gets cheaper, there are huge possibilities for real-time interaction, tutoring, and grading. So it’s possible to start thinking about “robot teachers,” or at least something like automated TA office hours, or critique sessions driven by machine learning algorithms trained on the works of the masters, or on past submissions for a given course.
Of course, humans have to select and design the curriculum, create the assignments, write and train the graders, etc. And I hope the final arbiter of great art works will always be humans. But the ability to offer arts courses to millions of students around the world is well worth embracing some new and potentially scary technologies. Personally, I’m in, because I don’t want to grade ten thousand homeworks per week 🙂
Perry R. Cook, PhD
Professor Emeritus, CS and Music, Princeton University
Co-Founder and Executive Vice President, Kadenze Inc.
also, Research Coordinator and IP Strategist, Smule, Inc.