WPA Position Statement on Assessment - DRAFT

NOTE: This statement is a draft. While it is available for you to use, we also encourage and welcome your comments on this in-process document. If you have comments or suggestions for revisions, click on the "comment" link below. In your comments, please indicate the paragraph to which your comment applies (e.g., para 1, para 2, para 3, etc.) and include any changes or suggestions you would like to see included. We welcome comments and suggestions until November 2007. If you use this document, we also would welcome stories about how it was used and/or received.

Also linked below are useful background resources on current discussions about accountability, assessment, and accreditation.

College writing classes prepare students to become successful 21st century communicators who will be prepared to communicate with a variety of audiences using a variety of media, from traditional pen-and-paper composition to electronic communication.

21st century literacy educators understand that, to be prepared for this new age, communicators must be able to analyze and address the expectations of these audiences and employ reading, writing, and thinking strategies to meet those expectations. Those strategies are complicated and involve integrating multiple cognitive activities, like the processing and production of written texts, with analyses of the contexts and audiences for whom communication is being produced. Good communication in the 21st century is context-specific: what represents good writing in one context, like writing for an on-line political blog, might not be seen as good writing in another, like creating a pen-and-paper lab analysis in a biology class.

Developing successful communication practices takes time and experience. Research has long demonstrated that exposure to the conventions of communication (e.g., writing and reading) in different contexts significantly affects an individual’s ability to reproduce those conventions in her or his own work. Thus, communicators who have grown up in literacy-rich homes where reading and writing are regular features typically have an easier time engaging in communicative processes in educational settings.

Post-secondary institutions, especially, acknowledge these differences. Some are highly selective, attracting students who have extensive experience with communicative practices before entering college, while others focus on working with students who bring less experience with these practices. While all institutions establish rigorous learning goals for their students, those goals and the pedagogies by which they are achieved take into account students’ previous experiences as communicators. Additionally, any assessment of these students must take into account students’ experiences and the specific context in which their learning takes place.

Assessments of the degree to which communicators are meeting these demands must reflect these long-established facts. That is, assessments must proceed from existing research indicating that:

  • The demands of successful 21st century communication involve more than working with pen-and-paper writing and reading. Successful 21st century communicators engage in a range of practices, from production of traditional papers to the creation of multi-media texts. Thus, any assessment of successful communication must examine communicators’ abilities to engage in analysis of the expectations of production of multiple kinds of texts, including consideration of what texts are appropriate for what audiences.
  • Qualities associated with successful communication (writing, reading, and other acts of textual production) are context-specific. Thus, any assessment of successful communication must proceed from actual evidence of communicators’ work. Included in that work must be the communicators’ own analysis of their understanding of the specific context, purpose, and audience for that work.
  • Production of successful communication (writing, reading, and other acts of textual production) involves both cognitive processes and cultural analysis (of audience, purpose, and context). These production processes must acknowledge and build upon communicators’ previous experiences with the conventions of communication in particular contexts. Thus, any assessment of successful communication must take into consideration the interplay among cognitive and cultural processes and must attest to the abilities of communicators in specific contexts.
  • Finally, both the processes used for and products emanating from these assessments must be directed back to the programs and contexts charged with developing successful 21st century communicators. Thus, educators working with these programs must be directly involved with the creation of appropriate assessments, and the assessment results should be used to “close the loop” when they take appropriate actions based on those results.

In order to be useful for educators responsible for advancing student development, measures used to assess the success of these processes and the communication produced as a result of their employment must reflect their full complexity. Additionally, because good communication is context specific, comparing textual production across dissimilar contexts provides neither valid nor reliable assessment data. Instead, assessments must be developed that take into account the complex work of 21st century communication and enable educators to improve that work through their teaching.


I think the draft shows a deft approach to the importance of context when assessing learning in higher ed. This kind of language is almost de rigueur in light of the pressure to adopt one-size-fits-all measures.

I worry, though, that the bulleted sections, while written carefully to privilege context, may give the impression that only insiders themselves can assess outcomes for their own purposes: the wagons seem to be tightly circled.

Would it be possible to add some examples of assessments to the bulleted sections? For example: Where would sampling of student work be appropriate? Where would one want to administer an exam or draw upon a national measure, such as NSSE? Where would portfolios do the best job? And so on.

When we talk about "closing the loop," wouldn't an example or two clarify that metaphor? Assessment wonks know that the phrase means using assessment results to improve the related functions--curriculum, pedagogy, etc. Is that obvious to others? Or does the metaphor reinforce the impression that we're in a closed system that is persistently self-referential?

The knowledge WPAs have about assessment seems muzzled by language that is designed to head off trouble. What can we do to show that we really do know how to assess, and, furthermore, that we have a wide range of well-regarded instruments to put to work?

Thanks for the opportunity to review and comment on this draft--good work, everyone.


Carol Rutz
Carleton College

Thank you for taking this on. I'm going to offer a few general comments. Although full of reasonable and important assertions, the preface is long. Let me suggest that you make it longer--I would have liked in the 1st paragraph a brief statement about the purpose of this document. What kind of assessments are we talking about here? For placement, for achievement at the end of the semester, for advancement into a college, for graduation? If all of these, I would like to see some acknowledgement of the complexities that attend the different kinds of assessments for different purposes, including costs and labor.

I know you thought long about using communicators for the 21st century. I long for writers.

I'm wondering what the 4 points would look like if you boiled them down to quick claims--and how often I violate them in practice. I'm going to see whether I can do that--might take a few minutes so I'm going to send this without a boiled-down version.

Irvin Peckham
Director of the University Writing Program
Louisiana State University

Par. 1: I would avoid double preparedness. And I don’t buy the claim. I don’t think all writing classes need to have students working in multi-media modes. You seem to be talking about the cumulative effect of writing programs. I could very well teach one class focusing on the familiar/personal/reflective essay and be quite happy about it.

Par. 2. I like the gist—we’re really just nodding here to what we know from genre theory & research. A much needed statement, but I would struggle to boil this down.

Par. 3. And of course I agree, though still longing for the good old days of writing. But I was wondering why you need to say all this as a preface to a statement on assessment. Are you going to suggest that a large-scale assessment should take social-class origins into account? Not a bad idea—but I was wondering how we would do it.

Par. 4. Extension of 3. I’m reading that writing assessments must be locally driven and based on what and who’s being taught. This paragraph (if my interpretation is correct) brings par. 3 back into the assessment game.

Par 4. Whoa! I’m a violator. You mean that _any_ assessment of student writing _must_ include a student’s reflection on his or her process and product. I repeat: _any_ assessment. That seems like quite a claim—and I don’t think it’s backed by any research.

Par. 5. That phrase: “_Any_ assessment of __ must take into consideration the interplay among cognitive and cultural processes . . .” Well, it’s quite a broad statement. I think you mean consider the link between where you grow up and how you think. In which case, I again wonder how large-scale assessments might do this.

Par 6. I would love it if you could find a phrase you wouldn’t have to put in scare quotes. Isn’t there a reasonable way to say that what we learn from assessments should be used to improve teaching practices? (But then I wonder if that’s true for all assessments of writing.)

Par. 7. You’re saying all assessments must be locally developed (a claim I mostly agree with). We need this statement in the face of ACT and SAT claims, but I think the multiple uses of assessment make the issue a bit more complex than what this paragraph suggests. You’re also closing off assessments for placement (or at least closing the door with a chain lock).

1st Bullet: Any assessment must assess a student’s ability to communicate in a broad range of media and genres. [This claim seems to preclude an assessment used to evaluate student learning and teachers’ success in a specific writing course. You’re referring _only_ to assessment [in caps] of the student’s ability to communicate—the whole ball of wax. I just wonder about closing off more locally driven and context-specific assessments.]

2nd Bullet: Already said in the preface. To assess writing, we need to look at writing. And we need to look at students writing about their writing. Again—I think this is useful but I wonder whether it’s practical for all assessments.

3rd Bullet: Ah—I see. We’re repeating what we said before in the preface. I wonder why not say it either there or here—just say it in one place. And my question about it still stands.

4th Bullet: Information from all assessments must be used to reflect on and improve teaching practices. [same closed door comment as above.]

My last comment didn't include my apologies for being, well, ummm, well, ... difficult.

Irvin Peckham
Director of the University Writing Program
Louisiana State University

I want to thank the board for taking on this difficult but important task. A strong, substantive statement on assessment from the WPA will be a useful and timely document. I know it would help me in my work as a WPA. I hope the comments that follow will be useful in some way.

I find the statement in its current form both too specific and too general.

I find the document too specific in its endorsement of a particular set of goals focused on technological literacy. (Certainly, we live in a “new age,” but the novel characteristics of this age are not merely technological. Were I to query my institution’s undergraduate deans about the educational missions of their respective schools, I doubt that they would emphasize new media literacy. They would be much more likely to say that they wanted to produce liberally educated leaders for a global world.) I find the document too general in its lack of acknowledgement of specific best practices. (Here I am echoing what Carol has already said).

I could really get behind a document that acknowledged that programs and institutions can have a range of legitimate missions and then went on to argue that assessment practices should suit the programs and institutions in which/to which they are being applied. Such a claim could be elaborated upon by acknowledging and endorsing proven assessment practices and principles, as Carol suggests.

In closing, I offer two incidental observations:

In their use of phrases like “21st century literacy educators” and “21st century communicators,” I imagine the board is attempting to finesse the pressure new media puts on terms like “writing teachers” and “writers.” I don’t know that we as an organization are ready to abandon these terms, and I don’t know that this document is the place to open that debate. If the document did not open by identifying our "new age" with new communications technologies, it could avoid this awkward semantic issue.

Second, the document focuses almost exclusively on the assessment of individual students (or “communicators”). Only in the last bullet point does the document come close to addressing what I regard as an equally important issue: the assessment of programs. (I say “come close to” because the last bullet only says that information from student assessments should feed back into programs, and that’s a related but different thing.) Addressing this imbalance would, in my opinion, greatly improve the document.


Joe Bizup

Columbia University

I want to reiterate what others have said about the term communication. The emphasis needs to be on writing as communication. We are right to talk about writing as local, as embedded, as contextual, as technology, etc., etc. But I think to a lay person, some of this might look like excuses we are offering for not doing assessment.

I agree that assessment of writing must include assessment of the program--and program includes resources: classrooms, computers, teachers, their training, class size, curriculum, and so forth. Do we want to talk about assessment at the end of first-year comp and then again before students graduate?

Maybe we need to make some distinctions explicit. Is this tool assessing documentation of sources, standard English, following genre conventions (of which genre?), narrative abilities, support given for an argument, pronoun use, or what? Maybe our statement should not just talk about the complexity of assessment but point to the many factors that make assessing writing so difficult. Are there some of these things that can be assessed easily and cheaply? But what problems then arise?

Maybe we need to talk about assessment of writing becoming subsumed by assessment of the parts--I can assess standard English, but that does not assess using sources gracefully or even using them according to convention; I can assess adherence to the five-paragraph essay, but that does not tell me whether the student can write a lab report, or whether he can write a memo when I hire him in my office. I can assess how well a student uses transitions, synonyms, repetition--in other words, coherence and cohesion--but that does not assess his ability to read and summarize. In other words, maybe we need to trot out some of our specialized knowledge instead of just saying that it's really complicated.

I do think the document ought to account in some terms for the implications of new literacies on assessment. My main comment, however, is to suggest a stronger statement about the cultural bias inherent in assessment.

Stephen Ruffus
Salt Lake Community College

In the current climate of the Spellings Commission and the push for value-added approaches to assessment, I think it would be wise of us to include a reference to the considerable body of research in the discipline that indicates that pre- and post-tests of students' abilities as writers (even when those samples are drawn from relevant contexts for writing and include students' reflection on their work) are not a valid approach. Such a notion seems like old news to most of us--the growth and development of writers might not be realized as neatly as in 14 weeks or two or four years. But external stakeholders (like the State Council on Higher Education in Virginia, which has just issued a mandate to move from competency-based to value-added assessments in core curriculum courses like first-year composition) are increasingly drawn to the value-added model and make no distinction between the complex cognitive development required of writers and other "skills" like learning to use different forms of technology like PowerPoint.

In responding to state mandates like the recent one issued by SCHEV, I'd find it very helpful to be able to refer to an official position statement on assessment that speaks to this point. I appreciate the opportunity to raise this point for discussion among those working on the current draft.

Christina R. McDonald, Institute Director of Writing, Virginia Military Institute

First, a small observation. Para 1 opens with the phrase "College writing classes," but the rest of the document clearly includes texts that occur in WAC (e.g., hand-written lab reports in science classes). I think we need to be specific and address both contexts. We can't ignore the courses we are primarily responsible for, just because most of them happen at the beginning of students' college writing experiences, nor can we assess what students gain in college without looking at the writing they do *outside* college writing classes.

I like the emphasis on complexity of construct. I also agree with Carol Rutz about the defensiveness of the language and with Joe Bizup that the document gives short shrift to program assessment and to such useful and profitable strategies as sampling, rather than assessing every student.

Given the case the document builds that the construct "college writing"--or "college communications"--is complex, I think we need simply to call for assessment measures that are sufficiently complex to capture the set of competencies we seek to assess. In other words, we need to assert what we want and allow that to exclude what we don't want. I think the document does that well in a couple of places, such as paragraph 3.

I also think that the assessments we endorse need to include attention to content. Bullets one and two make that statement implicitly. Given SAT2's complete disregard of the meaning of what test takers have written, I think we need to be explicit that meaning and content matter.

Thanks, folks, for doing this hard work!

I am posting the following comment on behalf of Diane Kelly-Riley.--Shirley Rose 

The WPA position statement on assessment draft is more dedicated to explicating what communication in the 21st century means, and provides few specifics in terms of identifying ways in which such abilities can or should be assessed (except to say that assessment should be context-specific). The WPA statement should include the kinds of assessment(s) for which it's taking a position--does it include response strategies to student drafts, grading procedures, preferred methods of classroom evaluation, assessment of student writing within disciplinary contexts, large-scale efforts to assess student writing, standardized placement procedures, and/or other alternative ways of assessing students' writing? The draft seems to lump all assessment together, and I'm not sure that classroom assessments have all of the same obligations/considerations that large-scale writing assessments would have. The WPA statement should address the variety of assessment contexts.

Finally, since WPA is trying to articulate its position on assessment, I think it would be worthwhile for the organization to locate itself in relation to other research/scholarship/position statements on assessment. I echo Carol Rutz's observation that the current draft has a circle-the-wagons feel to it. In my opinion, such a stance is to WPA's


Dr. Diane Kelly-Riley
Director of Writing Assessment
Washington State University Writing Program
Center for Undergraduate Education 305D
Pullman, WA  99164-4530

Phone:  509-335-1323
FAX:    509-335-3212


First, thanks to the committee doing the hard work of drafting, and to CWPA for taking on this project. Boiled down, the final paragraph contains crucial points about assessment, well put. I concur with an earlier comment requesting that *content* be added parallel to cultural and cognitive considerations, as well as the point that we distinguish kinds of assessment here, and speak not only to assessment of writing and writers but of educational programs themselves.

I have three substantive comments about the bulk of what's here.

1. Audience considerations: simplicity and directness. We need to work on saying this in half the space, using less gobbledygook -- almost everything here is Clear Only If Known, and presumably the audience who needs to read this statement doesn't know. The difference between the closing paragraph and the rest of the document is night and day -- we need the directness and readability of the former.

2. This could have been written in 1988. Apart from nods to current multimodality (and it would be really nice to see the "21st century" stuff go away), it's strikingly fundamental and hearkens to the time when the field began working to balance cognition and context in considering writing and writing instruction. So, that's excellent to see, except maybe in this way: What this sense suggests we're leaving out is material specific to *assessment*: we're so busy explaining, essentially, that communication is rhetorical (without using the word, which is fine) that we never get around to saying what White or Haswell or Huot or Broad or many others have taught us about *how to assess writing*. When I say this feels like it could have been written in 1988, I mean it doesn't really seem to have the "flavor" that an additional two decades of research on writing assessment ought to provide. Any way to bring that flavor some? Maybe get a little assertive about better and worse assessment techniques?

3. From an audience perspective, I'm afraid all the setup work of the first few paragraphs will read to an uncomfortable extent like caveats and excuses, and I think we should be really concerned with attempting such nuance in our *opening* moves. The CWPA position on assessment should be, I would think, that assessment of writing is crucial to successful writing instruction and that to be valid it must be carefully and expertly designed to account for all the complexity inherent in writing tasks and situations. Why isn't that statement (or some acceptable equivalent) the first line? Once this assertion about complexity is made, *in the light of* clear (even aggressive) desire to assess, to find out what's happening with students' writing and the health of the programs designed to teach it, then we can build the paragraphs that explain the *sources* of this asserted complexity. An alternative: Consider whether the concluding paragraph is actually your *opening* paragraph, since it comes closest in the document to a direct, clear statement that we're excited to assess and assessment needs to be done expertly and thoughtfully.

Once again, thanks to those doing the hard work of giving the rest of us something to talk about to begin with -- taking the angle of explaining why assessment has to account for complexity is a really smart decision.

Doug Downs, Writing Program Chair -- Utah Valley State College

Salute to those folks who've undertaken this project. I offer the following with humility...

¶1: I imagine some hard thinking went into the use of “communicators.” Here’s an argument for “writers”: From Plato to Secretary Spellings, people outside of rhetoric/writing pedagogy have assumed that we teach communication—that we teach students how to share what has already been known or created. Characterizing students as “communicators” 1) reinforces the idea that knowledge is made elsewhere—or worse yet, that it is conjured quietly before language; 2) makes writing pedagogy about the means (the texts, the arrangement, the formatting, the syntax, the grammar) of passing along ideas; and 3) reinforces the notion that language is a vessel for shooting ideas back and forth between people and not the engine of human consciousness.

We’ve tried to teach colleagues, administrators, politicians that language is a tool for building ideas, that writing pedagogy is not exclusively about communication, that composition instructors are not custodians of language, that composition studies is that particular pedagogical space in which students learn how to make, shape, investigate, and evaluate ideas through public written discourse. When we characterize our rhetorically and epistemologically thick enterprise as one of communication, we flatten out the work. We also validate Platonic epistemology and empower its attendant Secretaries.

¶2 We should avoid phrases such as “production of written texts.” We don’t want corporate-minded politicians and their minions to equate “writing” with text production. Granted, students in most of our courses end up generating written texts, but that’s not the heart of the pedagogy—any more than the creation of a gas is at the heart of a chemistry class. It’s not what we want politicians focusing on.

¶3: It may be reductive to characterize writing and reading as “conventions of communication.” While communication is involved, we know that reading and writing are social epistemic processes. They involve the creation of meaning. When we teach students how to read and write, we teach them how to build, investigate, take apart, re-build, and respond to ideas. Those difficult epistemic moves are what we teach and assess. (We’ll never win if people think we teach communication.) It takes years of focus, dedication, and commitment to individual students to teach and assess rhetorical proficiency and the complexities of meaning-making. Those of us mired in this business ought to emphasize these complexities.

I assume this document will have traction and significance--and I hope it anticipates and defuses some of the assumptions working against us. Peace.

John Mauk

I know I am entering my comments late in the game--although I expressed them to Linda and maybe a few others earlier--I think we should consider a briefer statement that simply endorses a statement already out there--such as the CCCC revised assessment position or the statement drafted (but not yet public) by the NCTE/WPA joint task force on writing assessment.

Although I can agree in theory with most of what is here, I don't find the language and organization as effective as those of these other two documents.

Besides, I think that creating yet another statement is less effective than signing on to one good one so the college writing community has a more unified, consistent voice on this critical topic.

peggy o'neill