Showing posts with label student success. Show all posts

Tuesday, June 26, 2007

Campus Accountability; Or, Assessment for "Them"

When the Commission on the Future of Higher Education's report, "A Test of Leadership," called for accountability, it suggested that institutional success ought to be easily comparable across institutions--you could then "buy" an education, kind of like buying a car. Of course I am oversimplifying, but oversimplification is what the report did as well.

Assessment has always been a central part of education, but it has most often been the formative kind of assessment that we used in classrooms and offices to see how we were doing, where we were slipping, and how we could do better. That kind of assessment, however, seldom tells others how good our work is. And that telling others is both what the Commission called for and what the Department of Education is striving for.

In the meantime, the two big public-institution organizations, NASULGC and AASCU, have banded together to offer a voluntary program for their institutions to use. It works like this. Using a template devised by NASULGC and AASCU, institutions will post information on the web that allows parents and students to see cost, program availability, graduation rates, enrollment continuance, value-added learning outcomes (through the CLA, MAPP, and CAAP--all national tests that measure aspects of critical and broad-based thinking), engagement levels (through the NSSE family of assessments), and other bits and pieces of information that let prospective students and their funders see what they will get.

This is a step toward transparency. If it stops here, though, it will do more harm than good: it will become reductive, and we will learn to use the data for all the wrong purposes. What we need to do is continue to find more and better methods and processes to assess student growth and learning in our courses and across our courses and institutions. We must clearly indicate what is good formative work and what is good summative work. And we must articulate very clearly when those two kinds of assessment come together to give us a more complete picture. We must interpret the data.

Here is what I mean. I do believe that data ought to drive decisions, but, too often, raw data is incomplete. I have gone from drinking caffeinated coffee to drinking non-caf tea, to non-caf coffee, and now I am happily back on the drug. All because of the reports of the effects of caffeine on my system. What we need to remember is that all assessments give us information. The next step is to take all that information from as many assessments as possible and build an interpretation that is clear, articulate, meaningful, and trusted.

Otherwise, the assessment tool will drive the system, rather than the assessment tool informing the interpretations that drive the system. When I go to my doctor and he says that my last blood test showed something he isn't sure about, and he would like me to take more tests, I comply. At our next visit, he tells me that he read the results, talked to so-and-so who is a specialist in this, and their conclusion is that maybe we should think about modifying my prescriptions. I feel good that he is using multiple assessments to gather data, that he and his colleagues are using their best professional judgment to interpret that data, and that the interpretation may change when we have more sophisticated tests. That is good assessment.

Good assessment begins with multiple tools, provides trustworthy data, ensures consistency, and is interpreted by professionally competent and knowledgeable people.

Friday, December 01, 2006

It's the Part-Time Thing

The recent Community College Survey of Student Engagement shows that part-time community college students are not as engaged in their in-class and out-of-class work as full-time students. I think most of us would have postulated that without the survey, but the survey provides the evidence.

And, as reported in Inside Higher Education, a higher rate of part-time faculty results in a lower completion rate for community college students. Does anyone see a pattern here?

Students will continue to be part-time, and the percentages will grow, at both community colleges and universities. Nor will we likely see a reversal in the hiring patterns in higher education. Which leaves us with the challenge: to design and shape a culture of engagement and learning within a culture that is not engaged. I am open to suggestions.

Tuesday, October 17, 2006

Faculty Support for Student Success

News of a new study on the effect of adjunct appointments on student success brings no real surprise: schools that rely more heavily on adjunct faculty appointments lag behind institutions with fewer adjunct appointments in terms of traditional measures of student success.

The study will be published in a forthcoming issue of the Journal of Higher Education. While the study confirms what we have suspected (known might be a better word), it also identifies what some of the underlying factors are. It is not adjunct faculty per se, but how faculty are supported and made part of the institutional structure. It is the faculty member's involvement in institutional life--office time, committee time, advising time, and even just the time to socialize with other faculty. If it takes a village to educate a child, then we might expand that metaphor and recognize that it is the institutional community that educates our college students. In short, our students do better when they are taught by members of the community. What we need to do is figure out how to ensure that all faculty are members of the institutional community.