Grading and reporting have been a hot topic in education in recent years, spawning a move to grading practices that are grounded in standards, competencies, or proficiency, depending on the language of the state. The idea has been that if we ensure our grades are based on clear standards, then the meaning of the grade is more accurate and clear. While this is certainly true, most schools moving toward this type of grading are still working out how to apply the process to students who are behind grade level on one or more skills. This challenge applies to many groups of students, most notably those who qualify for special education services.
Recently, I introduced the Differentiated Assessment and Grading Model (DiAGraM) as a way to apply standards-based grading to students who are behind on grade-level standards. Although this model does offer answers for improving the accuracy of grades, which in turn improves communication, it does not solve all of the complexities of supporting learning for students who need intervention. Many teachers who are going gradeless have turned to social media to proclaim the benefits of a gradeless classroom in moving the focus off the points and back onto the learning. For students who need academic intervention, the additional potential benefit of going gradeless is keeping the focus on where they are in their own, unique learning journey, and away from comparing how they are performing relative to peers.
Without question, there is still value in reducing learning data to symbols for ease and speed of interpretation. For example, it is useful to be able to scan data on all of the algebraic operations a group of students is learning and see where certain operations are difficult for many students, or which students are having difficulty with many of them. These data are, indeed, useful to the educator for making decisions about instruction. Going gradeless does not mean we have to abandon quantitative data. But the way we collect and display data to improve instruction does not have to be the same way we display and communicate data with students and their families.
As we consider how we approach assessment, data reduction, and feedback for learning, we must always keep at the forefront of our minds how these procedures apply to and benefit or harm students who require intervention. As Ken O’Connor, Doug Reeves, and I discussed in our recent Educational Leadership article, applying a score in the midst of what should be considered practice is often detrimental to true engagement in learning. And, really, everything leading up to the end of a course is practice!
But for students who are performing far behind grade level in one or more areas, how do we realize the benefits of going gradeless while also communicating accurately what a student understands and is able to do? Is it important to reference grade-level expectations? The Office of Special Education Programs cares a lot about this, and with good reason. We must demonstrate that, to the greatest extent possible, students with disabilities are on grade level. Perhaps the answer lies in recognizing the difference in purpose between organizing data for instructional improvement, organizing data for external reporting, and organizing data to provide feedback to students. When we consider the differences between these three purposes of assessment, it becomes clear that the way we present the data for each must be different. Whether we use quantitative or qualitative data matters and should match our purpose. If we use quantitative data, we have options for how to display it, from traditional-looking grades, to alternate symbols, to colors and graphs with no letters or numbers. Most importantly, the way we present data for reporting, or even to guide our own reflection for instruction, should never blindly drive the way we present data for the purpose of giving students feedback every day to fuel their passion for and engagement in learning.
In examining our purposes for data display and communication, we may determine that a quantitative display of data is, in fact, incredibly important for an instructor to be able to consume classroom data efficiently and make decisions about what or how to teach. But we may also determine that the quantitative display has implications for student motivation and engagement when shared with students and families during a formative period in the course. We may, instead, determine that qualitative data in the form of print, conversations, or video is more useful for the purpose of supporting students’ engagement in learning. This truly is an empirical question, and one deserving of our serious consideration.