
Thursday, August 16, 2007

Individuals and Types

“Begin with an individual, and before you know it you find that you have created a type; begin with a type, and you find that you have created—nothing” (F. Scott Fitzgerald, “The Rich Boy”). Even though we recognize that Fitzgerald is right, we continue to work from the general back to the specific, which is why we so often end up with misguided policy. If we pay attention to a new report from the Department of Education, there is a chance that this time we might see that community colleges are different--not just from the rest of academe, but from each other.

The report, titled "Differential Characteristics of Post-Secondary 2-Year Institutions," establishes seven categories of two-year colleges: small, medium, and large publics; allied health not-for-profits; other not-for-profits; degree-granting for-profits; and other for-profits. This is a start, and it begins to give us good information on who attends each category of two-year college, what the faculty cohort looks like in very general terms, and what kinds of completion (or non-completion) experiences students have. Maybe this beginning categorization will help us see that we need to step back and look at what an education ought to provide, how it ought to provide it, and how we need to educate all our publics about the purpose, value, and significance of post-secondary education.

Of course, I could make the same argument for post-secondary four-year institutions, and indeed for all of higher education. The Commission on the Future of Higher Education's report last year highlighted the problem: it treated all of higher education as the type. So those of us who understand our individual institutions could easily say, "Doesn't really apply to me." And that is why the call for some easy method of comparability must be answered with our knowledge that we are not all of a type, and that our differences are good.

That also means that we in the academy must educate ourselves about our individual institutions, and work with our colleagues at their individually different institutions, to find the underlying principles and values that should be established as comparable educational benefits. Then we can show that not all post-secondary educations are the same, nor should they be. Students should attend the institution(s) that best match what they see as their educational needs. But first we have to identify and articulate those various outcomes, needs, and values clearly, and map them to individual institutions.

Tuesday, July 31, 2007

The Dog Days

It's that time again, "the Dog Days of Summer." I don't know where that phrase came from, but I remember it from my childhood as a way to describe the long, languishing, hot days when dogs would lie on their backs in the middle of the lawn and doze away the day. Otherwise known as August. In the capital city we are anticipating the hiatus that will come when Congress finally succumbs to the pressure to get out of town with its work either done, partly done, or undone. The city will languish, and those of us still here will dress down and saunter to work. So, as we approach the lax month, where are we?


It's hard to say where we are, as most of the key education legislation has moved forward in one body of Congress but not in the other. For instance, the House is busy moving forward on the reauthorization of NCLB, but the Senate has yet to act. The Senate has passed its version of the reauthorization of the Higher Education Act, but the House has yet to act. The Senate version includes more money for students in the financial aid package and a clarification of accreditation rules. On the latter, institutions are responsible for establishing what student success looks like in relation to their mission, and the accrediting agencies need to monitor that. Also, institutions must be clear about their policies for transfer of credit from other institutions. The heavy hand of control has been reduced to the appropriate role of oversight. While this looks good, we need to wait and see what the House does.

The pressure, of course, is to get both major pieces of education legislation, the reauthorizations of NCLB and HEA, passed before the end of the Congressional year. Much good work has gone into the reauthorization process, but if the bills do not pass and become law, we will have another continuing resolution, which leaves the old law in place.


The same is true for humanities funding. The House has passed its version with an increase for NEH and funding for Archives and Public Records. The Senate HELP committee has passed increased funding, but the full Senate has not yet voted. Here is the full update from the National Humanities Alliance.

The good news is that both bodies are working to move legislation forward, and they feel the heat of summer's breath on their necks. With that, we hope the dog days of summer will provide the respite and lassitude to prepare us for the needed burst of energy to make the fall productive.

Tuesday, June 26, 2007

Campus Accountability; Or, Assessment for "Them"

When the Commission on the Future of Higher Education's report, "A Test of Leadership," called for accountability, it was suggesting that institutional success ought to be easily comparable across institutions--so that you could "buy" an education, rather like buying a car. Of course I am oversimplifying, but oversimplification is what the report did as well.

Assessment has always been a central part of education, but it has most often been the formative kind that we used in classrooms and offices to see how we were doing, where we were slipping, and how we could do better. That kind of assessment, however, seldom tells others how good our stuff is. And telling others is both what the Commission called for and what the Department of Education is striving for.

In the meantime, the two big public institution organizations, NASULGC and AASCU, have banded together to offer a voluntary program for their institutions to use. It works like this. Using a template devised by NASULGC and AASCU, institutions will post information on the web that allows parents and students to figure out cost, program availability, graduation rates, enrollment continuance, value-added learning outcomes (through CLA, MAPP, CAAP--all national tests that measure aspects of critical and broad-based thinking), engagement levels (through the NSSE family of assessments) and other bits and pieces of information to allow prospective students and their funders to see what they will get.
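To make that reporting idea concrete, here is a minimal sketch, in Python, of the kind of institutional profile such a template might collect. The field names, the summary method, and every number are hypothetical; only the categories of information (cost, programs, graduation rates, enrollment continuance, learning outcomes, engagement) come from the description above, and the actual NASULGC/AASCU template is surely more detailed.

from dataclasses import dataclass

@dataclass
class InstitutionProfile:
    # All field names here are hypothetical; the real template may differ.
    name: str
    annual_cost: float                    # estimated yearly cost of attendance, in dollars
    programs: list[str]                   # programs of study offered
    graduation_rate: float                # share of an entering cohort that graduates (0 to 1)
    retention_rate: float                 # "enrollment continuance": share who return (0 to 1)
    learning_outcomes: dict[str, float]   # e.g., CLA, MAPP, or CAAP results
    engagement: dict[str, float]          # e.g., NSSE benchmark scores

    def summary(self) -> str:
        # One line a prospective student or funder might scan.
        return (f"{self.name}: cost ${self.annual_cost:,.0f}, "
                f"graduates {self.graduation_rate:.0%}, "
                f"retains {self.retention_rate:.0%}")

# Invented numbers, for illustration only.
example = InstitutionProfile(
    name="Example State University",
    annual_cost=14500,
    programs=["English", "Mathematics", "Nursing"],
    graduation_rate=0.55,
    retention_rate=0.78,
    learning_outcomes={"CLA": 1150.0},
    engagement={"NSSE Level of Academic Challenge": 54.0},
)
print(example.summary())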

This is a step toward transparency. If it stops here, though, it will do more harm than good: it will become reductive, and we will learn how to use the data for all the wrong purposes. What we need to do is continue to find more and better methods and processes to assess student growth and learning in our courses and across our courses and institutions. We must clearly indicate what is good formative work and what is good summative work. And we must articulate very clearly when those two kinds of assessment come together to give us a more complete picture. We must interpret the data.

Here is what I mean. I do believe that data ought to drive decisions, but, too often, raw data is incomplete. I have gone from drinking caffeinated coffee to non-caf tea, to non-caf coffee, and now I am happily back on the drug, all because of reports about the effects of caffeine on my system. What we need to remember is that all assessments give us information. The next step is to take that information from as many assessments as possible and build an interpretation that is clear, articulate, meaningful, and trusted.

Otherwise, the assessment tool will drive the system, rather than informing the interpretations that drive the system. When I go to my doctor and he says that my last blood test showed something he isn't sure about, but he would like me to take more tests, I comply. At our next visit, he tells me that he read the results, talked to so-and-so who is a specialist in this, and their conclusion is that maybe we should think about modifying my prescriptions. I feel good that he is using multiple assessments to gather data, that he and his colleagues are using their best professional judgment to interpret that data, and that the interpretation may change when we have more sophisticated tests. That is good assessment.

Good assessment begins with multiple tools, provides trustworthy data, ensures consistency, and is interpreted by professionally competent and knowledgeable people.

Tuesday, April 03, 2007

FIPSE Earmarks

If it quacks, flies, and looks like a duck . . . the same, I suppose, could be said of earmarks. In 2005, congressional earmarks for higher education drawn from the Fund for the Improvement of Postsecondary Education (FIPSE) eliminated the grant competition. Since then, congressional earmarks funded through FIPSE have disappeared, but that does not mean that the whole FIPSE budget will go into the open grant program next year. As the Chronicle of Higher Education reports, Secretary Spellings has set aside almost half of the money budgeted for FIPSE this year.

It is completely within the Secretary's purview to do this, but it is unusual. The original intent of the program was to promote improvement and innovation in higher education without particular limitations. However, the Secretary has made clear in several instances that she wants to move forward on the recommendations of the Commission on the Future of Higher Education's report, "A Test of Leadership." The two areas receiving the most attention from the Department of Education have been accountability, and you should spell that a-c-c-r-e-d-i-t-a-t-i-o-n, and K-12 to college alignment. Expect to see programs addressing transparency in accountability and high school to college alignment privileged in this round of applications. Not exactly an earmark, but it still quacks a bit.

Thursday, November 30, 2006

Issue One--Accreditation

The Department of Education has already begun to move forward on the recommendations of the report, "A Test of Leadership: Charting the Future of U.S. Higher Education." Secretary Spellings' action plan listed three issues she would address: accessibility, affordability, and accountability. The first issue on her agenda is accountability, and the first item under that umbrella is accreditation. On November 29, 2006, the DOE held the "Accreditation Forum" to begin the discussion. When she charged the commission in the fall of 2005, Secretary Spellings emphasized that this would be the beginning of a dialogue, and that was reinforced at the Forum both by its organizer, Vicky Schray, Senior Advisor, Office of the Undersecretary, DOE, and by Secretary Spellings herself. Schray said that we were not there to lay blame or to come to consensus, but to spell out the issues that we should address. Spellings said that she was heartened by the response to the Forum and the work of its members, and that she wanted the higher education community to address the issues.

I must confess that I was initially ambivalent about this meeting, as I assumed a discussion of accreditation would include only accreditors. But the roomful of invited participants, and Schray's opening comments, showed otherwise. She said that the Forum was convened to address the accreditation process with all the players, not just the accrediting agencies. In attendance were about 70 higher education professionals from around the country representing some of the major accrediting agencies, university and college system officers, institutional officers, policy and think-tank associations, and some of the higher education associations.

If you have not already read the Inside Higher Ed and Chronicle of Higher Education stories on the Forum, they give a full picture of the meeting. I will comment here on our place in this discussion.

Jane Wellman, a Senior Associate at The Institute for Higher Education Policy, in framing the day’s work, emphasized that the accreditation process is an evolved process, not a designed one. Peg Miller, the Director of the National Forum on College-Level Learning, stated that one conclusion she has come to is that campus assessment cannot serve accountability. Peter Ewell made a similar point in a different way: there is a tension within the accreditation process among three roles--improvement-based peer review, quality assurance, and public information. He contended that current accreditation processes are pretty good at the first of these roles but grow weaker through the other two. Ewell’s concluding point was that perhaps the assessment process is being asked to do too much, or that, if it is to fulfill all those roles, it is woefully under-resourced to do so.

The working structure of the Forum was two sessions of discussion tables for the invited participants to address two issues: 1) student learning outcomes, and 2) institutional inputs (resources) and process standards. Not being an invited participant, I had to wait until the groups had worked through their questions and reported out. Kind of like watching student group work.

The report-out revealed that the discussions had been complex and generative. On the issue of student learning outcomes, the groups called for multiple measures, complex processes, establishment of clear outcomes, external audits, clarification between student achievement and student learning, need for common definitions and comparable data systems, clarification of expectations of learning for various degree levels, and the question of whether institutions or student learning should be the center of accountability. Like most good group work, the process prompted strong discussions.

The input, or resource, discussion, coming at the end of the day, still elicited good responses. First off, most of the groups argued with the question itself, saying that inputs cannot be established until outcomes are clarified. In that light, almost all of the groups said that outcomes trump inputs. One group said that if an institution can document good outcomes, who cares what the inputs are. Another group said that institutions should be able to make the case for varying from input standards if they can achieve good outcomes. The argument, and we have heard it before, was that by establishing very strong input standards, you stifle the creativity and innovation needed to achieve outcomes. One group did remind us that it is a balance between inputs and outcomes: you cannot have good outcomes if you do not have good inputs. The tension between inputs and outcomes is, of course, problematic. Too often we have agreed to focus on results, only to settle for an incomplete clarification of outcomes that shortchanges us. On the other hand, too strict an adherence to input standards increases the bureaucratic stasis too often found in our work.

Coming out of this discussion was an increasingly strong argument for establishing outcomes, particularly the core expectations for a degree. Writing, reading, and numeracy were specifically cited. Interestingly, no one from any of the discussion groups said we need to talk to disciplinary folks. I said at the beginning of this report that the room held a mixture of higher education professionals and stakeholders, but two major groups were not at the table--faculty representing the disciplines, and students.

The Forum did succeed in placing good stuff on the table. DOE will sift through all the notes, do a report, and then, in the words of Ms. Schray, "see what comes next."

So, where are we? There was a strong call for more meetings on this issue, and DOE heard that loud and clear. The discussion raised some good issues, but a full explication of them and a clear articulation of progress will require that more stakeholders be at the table. If standards or outcomes are going to be articulated for reading and writing, we need to be part of that discussion.

Friday, October 13, 2006

AFT releases new policy report: Smart Testing, Let's Get It Right

This week, the American Federation of Teachers hosted a forum featuring the presentation and discussion of a paper by education writer and consultant Paul Barton. The paper, “Smart Testing: Let’s Get It Right: How assessment-savvy have states become since NCLB?,” asserts that only 52 percent of states’ tests are aligned to strong standards, leading some to conclude that states are doing a better job of developing content standards than of using them to drive assessment. As a result, testing that is not aligned with strong standards drives many accountability systems. This “drift into test-based accountability” is troubling to many educators.

Two years ago, the NCTE Executive Committee adopted Framing Statements on Assessment that describe the Council's guiding principles on assessment. Further, NCTE has endorsed a Joint Organizational Statement on the No Child Left Behind Act that emphasizes the need for the law "to shift from applying sanctions for failing to raise test scores to holding states and localities accountable for making the systemic changes that improve student achievement."

What assessment practices do you value? Does the current policy emphasis on accountability make it easier, or more difficult, for you to engage in the kinds of assessment practices you believe work best?

Monday, October 09, 2006

Curriculum focal points at the national level

The National Council of Teachers of Mathematics (NCTM) has just published curriculum focal points for prekindergarten through grade 8 mathematics (www.nctm.org). Focal points are the three most important mathematical topics for each grade level, as determined by teachers at those grade levels. A focal point is a cohesive cluster of related knowledge, skills, and concepts.

Because of the profusion of curricular goals, which differ within and across states, NCTM decided through a wide vetting process to provide a framework for curricular expectations and assessments that it hopes will prompt dialogue within and across states and school districts about what is important mathematically at different grade levels. NCTM states, "Organizing a curriculum around a set of focal points, with a clear emphasis on the processes of mathematics, as outlined in NCTM's Principles and Standards for School Mathematics (i.e. problem solving, reasoning and proof, communication, connections, and representation), can provide students with a connected, coherent, ever expanding body of mathematical knowledge and ways of thinking."

What would focal points for English language arts be in each grade from prekindergarten through grade 8? What three focal points would you suggest for the grade level(s) you teach? Would having focal points help your teaching if they were part of a sequence of focal points across grade levels and valued by your school district and state in curriculum design and assessment?

Friday, August 11, 2006

Tests/Testing Filtering Down (Or Moving Up?)

A recent article reports that companies are increasingly using test taking as a hiring factor. Most of these are assessment tests in which applicants rate themselves on personal questions. Companies then use the results to help determine who makes it to the interview stage. Testing, you see, has become the raison d'être in businesses and in schools alike. Once upon a time, schools were said to lag behind modern business practice. But apparently that lag is gone. Proponents of school testing can now cite businesses’ use of tests, and vice versa.

Where does it all end?