Friday, December 14, 2007

The Spellings' Report and Beyond, V

In this, the fifth in our series, Anne Herrington looks at the major players in the accountability game and raises some serious questions. Here are her comments from our Featured Session at the NCTE Annual Convention on November 17.

Speaker Five
Anne Herrington
University of Massachusetts, Amherst
Accountability and Assessment of Learning Outcomes

With the others, I want to thank Paul Bodmer for securing invitations for NCTE members at the regional summits that the DOE was holding to follow up on the Commission report. At the Boston meeting, about eight of us from NCTE joined senior administrators from state systems and colleges and universities, heartened that we, too, could represent our views. It was a bit unsettling, though, to meet at the plush corporate offices of EMC and receive a leather valise and handsome pen as welcome gifts.

Some Key Players and Key Words

The U.S. Department of Education (and state governments to varying degrees) is but one of at least three key players in shaping policy and practices regarding accountability. The others are the major testing corporations (ETS, ACT, RAND) and the national higher education organizations (the National Association of State Universities and Land Grant Colleges [NASULGC], the American Association of State Colleges and Universities [AASCU], and the Association of American Colleges and Universities [AAC&U]). We, as individual faculty and as disciplinary and professional organizations, are presently positioned as those affected by these policies, with less power to shape them. This current hierarchy of power makes collective action and advocacy through our professional organizations all the more important: NCTE, CCCC, WPA.

The ideology of assessment that drives these policies is evident in the key words that circulate in documents by these key players. These key words figure prominently in the report of the Federal Commission on the Future of Higher Education (the “Spellings Commission”) that formed the basis for the Secretary of Education’s Action Plan and, in turn, shaped the Regional Summit discussions: “robust culture of accountability and transparency,” “value-added,” “allowing meaningful interstate comparison of student learning.” To achieve these goals of accountability, calculation of value-added, and comparability, the report called for higher education to be held accountable for learning outcomes, and it cited two tests as examples of quality assessment instruments: ETS’s Measure of Academic Proficiency and Progress (MAPP) and the RAND Corporation’s Collegiate Learning Assessment (CLA). Already the interconnection among key players is evident: here, the testing corporations and the federal government.

With this frame in mind, I will make a couple of comments on the Regional Summit and then on two related initiatives that are affecting assessment and that also show the links among the key players.

Boston Regional Summit

At the Boston Regional Summit, Vickie Schray, Senior Advisor to Under Secretary Tucker, led the discussion on accountability. I was heartened that, of the key words, transparency was stressed, not comparability (benchmarking) or value-added calculations. To paraphrase Schray, it is each institution’s responsibility to define learning outcomes and determine the best means to assess them. She went on to say, “that doesn’t necessarily mean standardized tests.” When asked explicitly about benchmarking, Schray said, “there is no expectation for benchmarking.” She did stress, though, that having external criteria is important. Reflecting at least in part the input of other NCTE voices at the earlier summits, Schray also mentioned having heard a lot about e-portfolios.

Later in her talk, Secretary Spellings said the Department was “not calling for a one-size-fits-all manner of accountability.” She also affirmed that decisions as to what and how to assess should be left to institutions.

In an email report to my University’s Provost, I summed up my impressions: “These comments encouraged me as they seem to be backing off expecting institutions to use reductive tests like the CLA, CAAP, or MAPP to assess outcomes and compare ourselves against ‘peers,’ i.e., other users of one of those tests. While an institution might still choose one of these, of course, it seemed that the option to select context-sensitive, locally developed assessments was opened to us. That’s certainly more in line with the recommendations of AAC&U and also, for writing assessment, of the National Council of Teachers of English.” (As you may have inferred, in writing to her, I was trying to make a case for our locally developed assessments and to position NCTE with AAC&U.)

Voluntary System of Accountability

Unfortunately, that approach to assessment is not in line with the one taken by NASULGC and AASCU, which over the past year had been developing their “Voluntary System of Accountability,” an effort to create a template providing potential students and parents with comparable, easily accessible information on basic costs and demographics, graduation rates, time to degree, and, more controversially, learning outcomes. For learning outcomes, they identified three tests that participating institutions could select from: ETS’s Measure of Academic Proficiency and Progress (MAPP), RAND’s Collegiate Learning Assessment (CLA), and ACT’s Collegiate Assessment of Academic Proficiency (CAAP)--three standardized tests. The rationale for this outcomes-testing provision has essentially been that “if we don’t do it, the feds or someone else will do it for us.” As Peter McPherson, President of NASULGC, put it, “If we can’t figure out how to measure ourselves, someone else will figure out how to measure us. It’s inevitable.” How adopting these three standardized tests represents higher education “figuring it out” for ourselves is hard to understand. It seems instead to be higher education using what someone else, the testing industry, has figured out.

The template, called the College Portrait, was officially unveiled in early November at the NASULGC annual conference. According to an article in Insidehighered.com (Jaschik), participating institutions already include the California State University, University of North Carolina, and University of Wisconsin systems, as well as the Universities of Iowa and Tennessee. Interestingly, not the University of California system. While praising other aspects of the College Portrait, Robert C. Dynes, President of the University of California, questioned the value of the outcomes-testing provision: “The university has concluded that using standardized tests on an institutional level as measures of student learning fails to recognize the diversity, breadth, and depth of discipline-specific knowledge and learning that takes place in colleges and universities today.”

DOE FIPSE-Funded Projects: The Postsecondary Achievement and Institutional Performance Pilot Program

As one way to push forward with assessment, the Department of Education dedicated $2.45 million of FIPSE grant money to assessing existing measures of learning outcomes and developing new ones. As reported by Lederman in Insidehighered.com, the three projects involve the major higher education organizations, and one or possibly two involve the major testing corporations:

  1. NASULGC receives funds to review the effectiveness of the CLA, MAPP, and CAAP. And, incredibly, according to Insidehighered.com, the testing companies themselves will work with other testing “experts” to assess the tests’ reliability and validity. Like trusting a drug company to research its own drugs.
  2. AASCU’s part of the project (in which AAC&U will also participate) focuses on developing tools for measuring student outcomes in new areas, including “civic engagement, teamwork, personal and social responsibility.” Stay tuned for more testing! I do not know whether the testing companies will be involved with this project, but it would not surprise me if they were. AASCU has been advocating value-added assessment and the use of “recognized and tested national instruments” for quite a while. Also, in a webinar I participated in with CLA, the CLA leader said that they are working on developing an assessment of ethics.
  3. The Association of American Colleges and Universities is leading a project to conduct an audit of campus-based assessment projects to identify best practices, with e-portfolios being a focus (http://www.aacu.org/value/index.cfm). Kathi Yancey is on the Advisory Board of this project, a promising sign.

So where does this leave us? Advocacy, action, and research:

As individuals, we need to engage in advocacy and action at our local institutions and with our professional organizations. We should be guided by NCTE and CCCC positions--ones also in line with AAC&U, I might add--calling for locally based assessments that are closely linked to curricula and derive from locally identified objectives; assessments that use multiple measures; assessments that engage students in contextualized, meaningful writing.

The WPA and NCTE Resource Guide will also be a valuable tool for us. Participate in NCTE’s Advocacy Month this April, too (http://www.ncte.org/portal/30_view.asp?id=115893).
As an organization, we need to keep trying to impact the U.S. DOE and our states.

We also need to find ways as an organization to be in conversations with the major higher education organizations, certainly to follow the FIPSE projects. Will writing assessment experts be involved in NASULGC’s review of CLA, MAPP, and CAAP, for instance? And in AASCU’s project?

We need to pursue systematic research on the formulation and implementation of “accountability” policies, trying to hold accountability accountable. That research would include examination of the interrelation among key players and the ideologies driving present policies.

We also need to get some questions on the table that the accountability and comparability frame obscures:

  • How much and what sorts of assessment do we—including institutions and public stakeholders—need beyond what occurs in classrooms? This is a key question for faculty to address at the institutional level and to bring into policy discussions within higher education associations and the DOE.
  • How important is having comparability of data with other institutions? Is transparency enough?
  • Who and what should drive curriculum decisions?
  • Where are the dollars going? If you’re paying attention, you’ll see that a good deal of money is flowing to the testing industry. At our own institutions, if money is being spent for standardized tests, is any also going for locally developed and implemented assessments?
  • What is the investment in assessment in relation to other academic needs? And, how are costs and benefits of assessment to be assessed?

References:
Jaschik, Scott. “Accountability System Launched.” Inside Higher Ed, November 12, 2007. http://www.insidehighered.com/news/2007/11/12/nasulgc

Lederman, Doug. “Meeting of the Minds.” Inside Higher Ed, September 27, 2007. http://insidehighered.com/news/2007/09/27/fipse

“U.S. Department of Education Awards $2.45 Million for Improved Measures of Achievement for Post-Secondary Students.” U.S. Department of Education press release. http://www.ed.gov/news/pressreleases/2007/09/09282007.html
