In recent months, we’ve seen growing protests against, and arguments for, the rise of value-added test data. The big question? Is value-added data (and the need to get it) leading us in the wrong direction or making us better educators? The basic theory goes something like this: value-added data can be good, but only at the macro level over longer periods of time. Just about everyone actually agrees on this point, but yearly test data seems too good to give up, and that drives hard battle lines.
So what was different 10–15 years ago, when standardized tests were used only as the macro indicators they really are?
When we were young – a list of ways to use data better in education
- Focus on the right test for the right question – teachers and administrators don’t get much value from end-of-year state tests because those tests don’t give them enough actionable data to improve during the year. That means they have to continually use other assessments to get that information, all while practicing for the “real” test, which just ends up wasting much-needed class time and favoring students who can get extra test-prep services outside of the 9-to-3 school day.
- You need “AND” mindsets on student data – teachers who don’t like value-added data don’t all have a problem with “assessment.” They really want to know how their students are doing – what they’re learning, what they’re struggling with, how they can group them better. But “assessment” has increasingly become the one-size-fits-all variety that doesn’t suit just about anyone, especially 10-year-olds. When we combine diagnostic, formative, and summative assessments, with different formats of each, we get good information we can use day over day and week over week to really improve our practice.
- Communicate about why we test – when I sat down in the big gym to take the infrequent state tests as a kid, it wasn’t a big deal. I had no idea why I was even taking them, didn’t think they mattered for my report card, and didn’t think they impacted my teachers. In retrospect, all three were probably true – the tests were used to look at how each school and district was doing from a 1,000-foot view. When we publish data about individual teachers that isn’t really about that individual teacher, and that’s painfully inaccurate, we do a disservice to everyone. Let’s be clear about why we test, and let’s not confuse transparency with spewing data just because we have it.
- Plan our tests as an end-to-end barometer – I’m not sure how this would happen, but the tests we take early on should somehow connect with the tests we take as we enter college, leave college, and beyond. Those are the macro indicators that can help us look at whether, as a nation, we’re missing critical knowledge and skills needed to prepare our kids, and adjust accordingly. In some ways, having fewer tests that serve as a true “benchmark” is better – kind of like the French and German systems.
There are more, but what else do you have in mind?