In recent months, we’ve seen growing protests against, and arguments for, the rise of value-added test data. The big question: is value-added data (and the push to collect it) leading us in the wrong direction, or making us better educators? The basic theory goes something like this: value-added data can be useful, but only at the macro level and over longer periods of time. Just about everyone agrees on this point, yet yearly test data seems too good to give up, and that drives hard battle lines.
So what was different 10-15 years ago, when standardized tests were used as the macro indicators they are?
What does it mean to be data-driven? Hopefully your first thought wasn’t a dashboard with pretty lines – a way to check your PTG. Being data-driven is in some ways harder than ever, not because of the type of data or analysis needed, but because of the need to simplify the tidal wave of it coming at us from all sides. While business intelligence is nothing new, we have more real inputs into our business models than before, and thanks to more accessible CRMs and similar tools, more ways for the lay worker to pull up and examine that data. This is ultimately a good thing, but only once you really know how to use data – and that starts not with a degree in statistics or advanced math, but with the right mindset about how you should be using data in your role.
First, a story.
A recent post about Linsanity from the Enterprise Irregulars crowd went almost unnoticed – not a lot of retweeting. Maybe that’s because NY basketball isn’t what most folks in big data are paying attention to these days, or maybe it’s because Lin is a perfect Black Swan. However you cut it, Jason Corsello ends his short post with a question: “why aren’t most companies analyzing their employee data to find the rising stars?” Good question.
About 3 weeks ago, Dion Hinchcliffe wrote a great piece on the Dachis Group blog that summed up the architectural layers of a successful social business. He nailed it on just about every front, so instead of summarizing the work here, I’ll focus on what I think was missing: the role of machine learning and prediction, from the systems of record out to external engagement.