Tuesday, June 26, 2007

Campus Accountability; Or, Assessment for "Them"

When the Commission on the Future of Higher Education's report, "A Test of Leadership," called for accountability, it suggested that institutional success ought to be easy to compare across institutions--you could then "buy" an education, kind of like buying a car. Of course I am oversimplifying, but oversimplification is what the report did as well.

Assessment has always been a central part of education, but it has most often been the formative kind of assessment that we use in classrooms and offices to see how we are doing, where we are slipping, and how we could do better. That kind of assessment, however, seldom tells others how good our stuff is. And telling others is both what the Commission called for and what the Department of Education is striving for.

In the meantime, the two big public-institution organizations, NASULGC and AASCU, have banded together to offer a voluntary program for their member institutions. It works like this. Using a template devised by NASULGC and AASCU, institutions will post information on the web covering cost, program availability, graduation rates, enrollment continuance, value-added learning outcomes (through the CLA, MAPP, and CAAP--all national tests that measure aspects of critical and broad-based thinking), engagement levels (through the NSSE family of assessments), and other bits and pieces of information, so that prospective students and their funders can see what they will get.

This is a step toward transparency. If it stops here, though, it will do more harm than good: it will become reductive, and we will learn how to use this data for all the wrong purposes. What we need to do is continue to find more and better methods and processes to assess student growth and learning, both within our courses and across our courses and institutions. We must clearly indicate what is good formative work and what is good summative work. And we must articulate very clearly when those two kinds of assessment come together to give us a more complete picture. We must interpret the data.

Here is what I mean. I do believe that data ought to drive decisions, but, too often, raw data is incomplete. I have gone from drinking caffeinated coffee to non-caf tea, to non-caf coffee, and now I am happily back on the drug--all because of shifting reports on the effects of caffeine on my system. What we need to remember is that every assessment gives us information. The next step is to take all that information, from as many assessments as possible, and build an interpretation that is clear, articulate, meaningful, and trusted.

Otherwise, the assessment tool will drive the system, rather than informing the interpretations that drive the system. When my doctor says that my last blood test showed something he isn't sure about and asks me to take more tests, I comply. At our next visit, he tells me that he has read the results, talked to so-and-so who is a specialist in this, and that their conclusion is that maybe we should think about modifying my prescriptions. I feel good that he is using multiple assessments to gather data, that he and his colleagues are using their best professional judgment to interpret that data, and that the interpretation may change when we have more sophisticated tests. That is good assessment.

Good assessment begins with multiple tools, provides trustworthy data, ensures consistency, and is interpreted by professionally competent and knowledgeable people.
