Assessment Gallery and Resources - Assessment Models and Communication Strategies



In the last three years, writing instructors and program administrators have heard more than ever before about the need for assessment. Sometimes calls for this work are generated by instructors and program administrators themselves who are interested in learning about how instructional practices are affecting student learning in writing courses. Increasingly, though, these calls are joined by others from outside—from campus administrators, accrediting bodies, or other external stakeholders. These external calls can be couched in language that has increasingly become part of the assessment discussion, such as “accountability,” “transparency,” or “comparability.”

In response to these increasingly vocal calls for discussion, the Council of Writing Program Administrators (WPA) and the National Council of Teachers of English (NCTE) have partnered to bring writing instructors and program administrators the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities, which draws from existing resources to summarize best practice principles for post-secondary writing assessment.

The Assessment Gallery and Resources includes two components intended to support and illustrate the principles delineated in the White Paper. These include:

Model Assessments. The Assessment Gallery provides models of specific assessments from a range of institutions, from 2-year colleges to R1 universities, that enact the strategies in the NCTE-WPA White Paper. Together, these models illustrate that valid, reliable, and fair assessment reflects consistent principles, and that these principles can be enacted through questions and methods that are appropriate for the institution, the department, and the program.

Communication Strategies. Communicating with interested parties, from department colleagues to campus administrators and external constituencies, is a crucial part of developing a successful assessment. This document offers a framework for communicating and strategies for WPAs to share information and develop alliances with others.

Together, the White Paper and Assessment Gallery and Resources are intended to help writing instructors and program administrators communicate a message made clear in research-based best practices in the English language arts: Valid and reliable assessment is consistent at the level of principle and conceptualization: it is discipline-based, locally determined, and used to inform teaching and learning at the local level.

We hope that these NCTE-WPA materials provide valuable resources for instructors and program administrators as they go about the important work of assessing student writing, writing courses, and writing programs.

NCTE-WPA Task Force Members

  • Linda Adler-Kassner, Co-Chair
  • Howard Tinberg, Co-Chair
  • Jeff Andelora
  • Juanita Comfort
  • Brian Huot
  • Asao Inoue
  • Peggy O’Neill
  • Duane Roen
  • Kathleen Sheerin DeVore
  • Freddy Thomas

Assessment Gallery and Resources - Communication Strategies

CommunicationStrategies.pdf (205.99 KB)

The Communication Strategies linked below are intended to help WPAs and writing instructors communicate with interested parties about their assessment work and build alliances with others.

NCTE-WPA White Paper on Writing Assessment in Colleges and Universities

The National Council of Teachers of English and the Council of Writing Program Administrators offer this statement, a white paper, on writing assessment in postsecondary education. This white paper is meant to help teachers, administrators, and other stakeholders articulate the general positions, values, and assumptions on writing assessment that both the National Council of Teachers of English and the Council of Writing Program Administrators jointly endorse. What follows is an articulation of common understandings and general agreements in the membership of both organizations on the following:

  • The connections among language, literacy, and writing assessment
  • The principles of effective writing assessment
  • The appropriate, fair, and valid use of writing assessment
  • The role and importance of reliability in writing assessment 

Connections: Language, Literacy, and Writing Assessment

Writing instruction and literacy education at all levels are formal ways in which societies build citizens, and in which citizens develop reading and communication behaviors and competencies in order to participate in various communities. Learning to write better involves engaging in the processes of drafting, reading, and revising; in dialogue, reflections, and formative feedback with peers and teachers; and in formal instruction and imitative activities. A preponderance of research argues that literacy and its teaching are socially contextualized and socially constructed dynamics, evolving as people, exigency, context, and other factors change. The varied language competencies and experiences with which students come to the classroom can sometimes conflict with what they are taught or told to value in school. The assessment of writing, therefore, must account for these contextual and social elements of writing pedagogy and literacy.

Principles of Effective Writing Assessment

Effective writing assessment, whether it takes the form of classroom tests and grades or extracurricular exams measuring student writing ability, is highly contextual, and its principles should be adapted or modified in accordance with local needs, issues, purposes, and concerns of stakeholders. These assessments function across large-scale and classroom contexts and are used to make important decisions about students, curriculum, and teachers. Generally, there is agreement that the following principles tend to be part of effective, meaningful, and responsible writing assessment: 

  • Writing assessment should place priority on the improvement of teaching and learning.

 Writing assessment responds to student, teacher, institutional, and other stakeholder needs. It should be used to foster environments for student learning. In placement testing, this principle might demand that administrators consider the local classroom conditions students will be entering after they have been placed into a writing course, or the places in the local communities from which students come.

  • Writing assessment should demonstrate that students communicate effectively.

The effectiveness of student performance should be connected to criteria relevant to the educational decisions the assessment is designed to facilitate. For example, in placement testing, student performance should indicate a readiness for the curriculum of the course in which the student is placed. In exit testing, student performance should indicate the completion of course goals and objectives and a readiness to write for the next course or courses in the curriculum. We acknowledge that writing assessment must communicate to a variety of stakeholders the essence of what we want students to learn and the evidence of such learning.

  • Writing assessment should provide the foundation for data-driven, or evidence-based, decision making.  

In some cases, assessment is designed to improve student performance, and in others to improve teaching and curricula. The purposes for assessment differ depending on the desired results of the assessment project. Programs may assess end products of a student’s semester-long work to consider how and whether that work demonstrates the outcomes for the course. Depending on the purpose of the assessment, results can be used to improve instruction at multiple points in the curriculum.

  • Writing assessment should be informed by current scholarship and research in assessment.

While writing assessment should be locally grown and implemented, those designing, implementing, and validating writing assessments should also stay informed of current developments in the fields of writing assessment, composition theory, and literacy studies. This means that those involved in writing assessment should be supported (financially and otherwise) to share and disseminate their own assessment and validation findings and work.

  • Writing assessment should recognize diversity in language.

The methods and language that teachers and administrators use to make decisions and engage students in writing, reading, responding, and revising activities should incorporate meaningfully the multiple values and ways of expressing knowledge by students present in the classroom and local communities. Assessments and the decisions made from them should account for students’ rights to their own languages (see the Guideline approved by the Conference on College Composition and Communication in 1974 and reaffirmed in 2003).

  • Writing assessment should positively impact pedagogy and curriculum.

Curriculum designers and teachers should attempt to understand and incorporate into instruction the ways in which the assessments can improve the curriculum and instruction in classrooms. Positive writing assessment takes into account the nature of writing as a social process and product, situated within particular contexts (e.g., classrooms or timed environments), and limited or shaped by these factors.

  • Writing assessment should use multiple measures and engage multiple perspectives to make decisions that improve teaching and learning.

These multiple measures and perspectives can include the use of several readers and the perspectives they bring to student texts. A single off-the-shelf or standardized test should never be used to make important decisions about students, teachers, or curriculum.

  • Writing assessment should include appropriate input from and information and feedback for students.

Students should have access to the goals, purposes, and scoring criteria for required assessments. Students should also receive appropriate feedback for any important decisions made about them.

  • Writing assessment should be based on continuous conversations with as many stakeholders as possible.

Developing, researching, and validating a writing assessment is a constant process, and one should expect the assessment, its results, and its products to change over time. Thus, it is important to have conversations about the assessment (e.g., dialogue about the features particular teachers notice in student portfolios in various courses).

  • Writing assessment should encourage and expect teachers to be trusted, knowledgeable, and communicative.

Teachers should be the primary agents in writing assessment, and therefore need to be continually educated in writing assessment, to engage in dialogue with one another locally, and to find ways to gain the trust of the other stakeholders. Additionally, other stakeholders should support teachers in their efforts to become more knowledgeable about writing assessment and to communicate to all stakeholders involved.

  • Writing assessment should articulate and communicate clearly its values and expectations to all stakeholders, especially students and, if applicable, parents.

Assessment should not be invisible, mysterious, or elusive to any stakeholders. There should be a variety of ways stakeholders can understand and be informed about the local writing assessment and its methods, findings, and products. 

Appropriate, Fair, and Valid Use of Writing Assessment

The Appropriate use of writing assessment, whether in a classroom or large-scale context, means that it fits the context and decisions that will be made based on it. Appropriateness can also be understood as a measure of the decisions made. For example, when placing students into courses based on portfolio readings, one might ask—and measure in some way—how appropriate the decisions are (do students and teachers later find that the placements put students in the right places?). Appropriateness might also be considered regarding the kinds of evaluation/feedback provided, based on their purpose or use (e.g., grades, summative feedback, formative feedback, recorded audio responses, no responses, detailed annotations/marginalia, responses offered to the entire class and not individual students, etc.).

The Fair use of writing assessment is crucial, since it can be used to make important decisions about individuals. A concern for fairness should guard against any disproportionate social effects on any language minority group. Writing assessments that are used to make important decisions about individuals and the material and educational conditions that affect these individuals should provide an equal opportunity for students to understand the expectations, roles, and purposes of the assessment. For instance, if students have no recourse, or opportunities to respond to evaluations or judgments of their writing, or if they do not have any access to the criteria used to evaluate their writing or to the uses of the assessments of their writing, then those assessments may be unfair. Considering the fair use of power does not mean giving equal power to decide to all stakeholders in an assessment. It means all stakeholders should have as much power over the assessment as their particular roles and positions dictate they can have, considering the ethical and expedient administration of the assessment, and the purposes of judgments.

The Valid use of writing assessment decisions and evaluations is a complex and technical activity. “Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests” (American 9). Every use of an assessment requires a validation inquiry in which an argument is made that the theoretical understanding of the assessment and the evidence the assessment generates support the decisions being made on the basis of the assessment. For example, if we use any method to place students into first-year writing courses, we must provide evidence that students are being correctly placed and profit from the educational experience. Questions such as how well students learn in each course of the curriculum must be answered in order to validate placement decisions. This inquiry-driven, research-based activity is a required part of the appropriate, fair, and valid use of writing assessment.

Reliable Assessment

A reliable assessment provides consistent results, no matter who conducts the assessment. Because writing assessment often involves more than one rater scoring student performances, it also raises the question of interrater reliability, a measure of the degree of consistency from one rater’s judgment to another’s. Without adequate interrater reliability, a student’s score might depend upon the bias of the reader rather than upon the document or product being assessed. Attention to reliability is an integral part of any responsible validity argument.
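The white paper does not prescribe a particular statistic, but interrater reliability is commonly quantified with chance-corrected agreement measures such as Cohen’s kappa. The sketch below is purely illustrative; the scores and the “pass”/“exemplary”/“needs work” categories are invented examples, not data from any assessment described here.

```python
# Illustrative sketch: Cohen's kappa corrects raw percent agreement
# between two raters for the agreement expected by chance alone.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of papers."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of papers both raters scored the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability the raters agree if each assigns
    # categories independently at their own observed rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters score ten portfolios.
a = ["pass", "pass", "exemplary", "pass", "needs work",
     "pass", "pass", "exemplary", "pass", "pass"]
b = ["pass", "pass", "exemplary", "pass", "pass",
     "pass", "needs work", "exemplary", "pass", "pass"]
print(round(cohens_kappa(a, b), 3))  # → 0.565
```

A kappa near 1.0 indicates strong agreement beyond chance; values much lower suggest rater training or rubric revision is needed before scores can support high-stakes decisions.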

Works Cited

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association, 1999.

Conference on College Composition and Communication. “Students’ Right to Their Own Language.” College Composition and Communication 25 (1974).

Additional References

Writing Assessment: A Position Statement prepared by the CCCC Assessment Committee

NCTE Framing Statements on Assessment

WPA Assessment Gallery: Assessment Models

CarletonColl.pdf (181.32 KB)
FrederickCC.pdf (202.77 KB)
GeorgeMasonU.pdf (198.84 KB)
SaintJosephColl.pdf (186.81 KB)
SeattleU.pdf (196.84 KB)
TidewaterCC.pdf (197.91 KB)
UofKentucky.pdf (202.47 KB)
SaltLakeCC.pdf (328.88 KB)


Attached are assessment narratives from the first group of featured institutions. Each narrative includes contact information for the author/contact and addresses assessment questions, principles, methods, findings, application, and adaptability to other institutions. The WPA-NCTE Ad Hoc Task Force is grateful to the authors of these assessments, who invested many hours of time to concisely describe their model projects.

Current featured projects include assessments from:

  • Carleton College (Carol Rutz)
  • Frederick Community College (Kenneth Kerr)
  • George Mason University (Terry Myers Zawacki)
  • Saint Joseph's College (Judy Arzt and Kristine Barnett)
  • Salt Lake Community College Community Writing Center (Tiffany Rousculp)
  • Seattle University (John Bean)
  • Tidewater Community College (Chris Dixon)
  • University of Kentucky (Connie Kendall, Darci Thoune and Deborah Kirkman)
    Assessment Narrative - Carleton College

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.


    Institution: Carleton College

    Type of Writing Program: Sophomore writing portfolio

    Contact Information: Carol Rutz, Ph.D.

    Assessment Background and Research Question

    Carleton College, a small liberal arts school in Minnesota, has emphasized writing across the curriculum since the mid-1970s. At that time, the college moved to a distributed model of writing instruction (integrating it throughout the curriculum), which meant that assessment of writing could not be directly linked to seat time.

    As a result of these curricular changes, from 1975 to 2000, writing assessment was conducted by the professor teaching one of dozens of “writing requirement” courses designed to provide instruction and practice, particularly to first- and second-year students. If a student did not meet the instructor’s expectations for writing quality, the student would not be certified for the requirement, although she or he could earn a good grade in the course for other kinds of performance, such as exams and participation.

    Growing dissatisfaction with the one-course, one-instructor method of assessing a graduation requirement led to an internal assessment of the writing requirement through interviews, focus groups, and surveys; findings revealed dissatisfaction on the part of both faculty and students, although there was little primary evidence to support the discontent. The task force assessing the writing requirement noted that no student writing had been directly examined and evaluated for its report, and suggested that some sort of study be done; for example, the college could collect portfolios that students would carry with them to trace their development as writers.

    This situation led to the development of our research question: how can the faculty determine that students can write well enough to succeed in advanced courses in the major?

    In 1999, with the one-course assessment still in place, an associate dean applied to a regional foundation for a planning grant for faculty development (already a staple of WAC) tied to writing assessment. With funds in hand, Carleton invited nationally recognized experts on writing instruction and assessment to campus to introduce faculty and staff to models of good assessment and the supporting research.

    Assessment Methods

    As a result of our ongoing conversations about writing assessment, Carleton College developed a required sophomore writing portfolio. Students must submit three to five essays from a variety of courses across the curriculum, along with a reflective essay, to demonstrate acceptable mastery of essential aspects of college writing: observation, analysis of complex information, interpretation, identification and use of appropriate sources, writing thesis-driven arguments, and controlling Standard American English. According to Carleton faculty, these are the writing skills that augur success in the major.

    Portfolios are read each summer by faculty volunteers, who receive modest stipends. Reading sessions are prefaced with training in reading—as opposed to grading—portfolios according to a holistic rubric. Three scores are possible: pass, exemplary, and needs work. Training stresses the need to find the boundaries between categories, recognizing that “pass” will be the largest category by far. As portfolio assessment literature predicts (e.g., Hamp-Lyons and Condon; Harrison), percentages tend to be around 80 percent “pass” and 10 percent each for “exemplary” and “needs work.” All files scored “exemplary” or “needs work” are read by at least two readers, and a percentage of “passes” are reread as well. In addition, a percentage of the previous year’s portfolios are reread as a reliability check. Once portfolios for an entire class are read—a three-day process—students are notified of results by email immediately; original portfolios, with reader comments, are returned to students in the fall.

    Through the portfolio assessment, Carleton faculty across the curriculum can understand how and whether students are achieving the writing goals that have been established for them and provide additional intervention and support for students whose work falls short of those goals. Currently, these portfolios are submitted in hard copy; however, the process will likely migrate to an e-portfolio. More details on the uses of the portfolio are available at the URLs listed in the appendix.

    Assessment Principles

    The following principles proved to be important in developing and maintaining Carleton College’s portfolio assessment:

    1. Whatever assessment we decided to employ, it had to be locally designed and performed. Because of the college’s long experience with WAC, Carleton faculty were unwilling to rely on a standardized exam or adopt an assessment instrument used elsewhere. Faculty were truly invested in teaching students how to write well, and they wanted to know the results of their efforts. Therefore, faculty agreed that they must be portfolio readers.
    2. Before the planning grant, knowledge on campus about assessment in general and writing assessment in particular was largely absent. An understanding of the research and the potential for improvement of student writing went a long way toward convincing faculty that it was worth their trouble to conduct reliable, valid assessment of student work.
    3. A research approach to assessment appealed to institutional values and offered data that would be valuable to departments, programs, and individual faculty. The previous assessment provided no information to faculty that would affect teaching or the curriculum in any way. Portfolio assessment combined with a faculty development program is beginning to change the teaching of writing at Carleton (Rutz and Lauer-Glebov).
    4. Portfolio assessment provides context for student writing of all kinds, including writing from ESL students. One result of portfolio assessment has been a general relaxation of anxiety about ESL/EFL and similar issues. Those with lower tolerance for surface error or “writing accents” are more troubled than others, but everyone has a better sense of how to recognize the linguistic mechanisms at work in ESL/EFL writing. Furthermore, we can now document that ESL/EFL students generally perform at a consistently acceptable level.
    5. Because the portfolio includes work that has already received passing grades in courses, the assessment itself is low stakes. The goal is to identify the segment of a given class that will need additional writing support to succeed in the major. As the literature predicts, that segment turns out to be 8–10 percent annually. Students whose portfolios do not pass meet individually with the Writing Program director or another writing professional to identify the problems and agree on remedies. Students then have one term to resubmit successfully.
    6. The faculty development dimension has been essential. An integrated, iterative curriculum of faculty development activities has supported WAC and also provided a model for other curricular initiatives on campus. As a key part of that curriculum, the portfolio reading sessions are collegial, informative, and respectful of student achievement. Junior faculty in particular are able to norm their expectations through reading student work across disciplines in the company of more senior colleagues.
    7. Portfolio assessment is open to investigation by faculty conducting other initiatives, notably a FIPSE-funded project on quantitative reasoning (QR), which uses student work collected in writing portfolios for program assessment. Carleton has benefited from having an archive of student work with IRB permission to use that work in research.
    8. The portfolio scheme is constantly under review, although no substantive changes have been adopted yet. That may change when a curriculum review currently underway is completed. For example, what has been a paper process is likely to eventually migrate to the World Wide Web. Local infrastructure and management of the reading experience will be the most important factors to consider as that move takes place.

    Assessment Results and Follow-Up Activities

    Since Carleton College’s portfolio assessment is ongoing, it is continuously generating results and follow-up activities. Initially, when the portfolio requirements were being developed, the portfolio facilitated a process whereby faculty learned methods of assessing student writing and provided a structure through which they could discuss writing skills across the curriculum. As faculty participated in these initial activities, they developed a structure for a portfolio that would speak to rhetorical tasks necessary to succeed in all majors.

    As faculty have continued to read portfolios, they have learned that their own teaching is affected by familiarity with student work across the curriculum. Faculty seldom get a chance to read work they have not assigned, and the portfolio allows for an efficient means of appreciating Carleton students’ experience as college writers. Through this rich experience of reading student work, faculty calibrate their expectations in their own courses, often revising their assignments to reflect what they have learned from assignments written by colleagues. Findings from the portfolio assessment have also served as the basis for ongoing faculty development. For example, workshops in December 2005 and 2007 focused on making arguments with numbers, combining the goals of both WAC and QR initiatives by helping faculty design assignments that require students to use data rhetorically. (See the link in the appendix for additional information.)

    Students also benefit from the portfolio. They have learned, for example, how to develop a persuasive argument from documentary evidence—their own writing for courses. A community sense of “good writing” at the sophomore level also has clarified expectations of student work in advanced courses for students as well as faculty.

    Assessment Resources

    To get this assessment project off the ground, external funding was essential. We were fortunate to secure funds for visiting speakers, stipends for faculty workshops and portfolio readers, summer curriculum development grants, conference expenses, and other related activities. Sustainability, however, has been a concern from the beginning. Having funding that lasted six years helped change the local culture so that the portfolio is now an accepted and valued feature of Carleton’s curriculum. Regardless, the assessment requires resources. Recently, the college received a bequest that partially endows faculty development programming for WAC, including the portfolio reading sessions.

    Other campus programs that benefited from Writing Program support in the past now include the Writing Program as they plan faculty development. WAC has become a platform for curricular development on campus.

    Staffing for the portfolio has been limited to one full-time professional, a part-time administrative assistant, and student workers, who also work on WAC initiatives throughout the year. This particular portfolio assessment, with similar support, could be adapted at other institutions. In fact, a number of small liberal arts colleges have inquired about it, and some of them have launched pilot projects.


    Additional information about the Carleton College portfolio is available at

    Sample course assignments that would be appropriate for the portfolio and also speak to Carleton’s QR initiative are available at:

    Some of the portfolio results, written for a student audience, are available at


    Works Cited

    Hamp-Lyons, Liz, and William Condon. Assessing the Portfolio. Cresskill, NJ: Hampton Press, 1999.
    Harrison, Suzan. “Portfolios Across the Curriculum.” WPA Journal 19.1-2 (1995): 38–49.
    Rutz, Carol, and Jacqulyn Lauer-Glebov. “Assessment and Innovation: One Darn Thing After Another.” Assessing Writing 10.2 (2005): 80–99.


    Assessment Narrative - Frederick Community College

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.


    Institution: Frederick Community College
    Type of Writing Program: Freshman Composition
    Contact Information: Kenneth Kerr, EdD
    Professor of English
    Frederick Community College
    7932 Opossumtown Pike
    Frederick, MD 21702

    Assessment Background and Research Questions

    Frederick Community College has embraced Terry O’Banion’s concept of the “learning college.” One of the principles of that philosophy is, “[T]he learning college and its facilitators succeed only when improved and expanded learning can be documented for its learners” (47). Aside from that, as an institution accredited by the Middle States Commission, we have an obligation under Standard 14 to assess student learning in a meaningful and rigorous way and to use the results of assessment activities to improve teaching and learning.

    FCC has created an ongoing assessment model stemming from these two points in which questions are raised by and stem from teachers’ classroom work, and the curriculum is continuously revised based on responses to those questions. These questions always focus on how well students are achieving the learning outcomes for courses that are at the center of the assessment. Instructors volunteer to participate in the assessment on a three-year cycle.

    As part of the annual self-evaluation, each full-time faculty member is required to report what he or she has done over the previous year to improve teaching and learning in his or her courses. In this self-evaluation, faculty identify an area of concern in one or more course or core learning outcomes. They discuss what was done to improve teaching and learning in that area and report how learning improved as a result of the change.

    On a regular three-year cycle, faculty then self-select to participate in a formal, rigorous assessment project that focuses on issues of learning improvement. Often these issues come from the self-evaluations. These self-selected faculty volunteers shoulder the responsibility for their departments and oversee the three-year assessment project. They are responsible for designing and implementing the assessment, collecting and analyzing data, and submitting the final course-level assessment report. Over time, all members of the department are expected to take a turn leading the assessment efforts. This is considered service to the college equivalent to a committee assignment. To ensure that the assessment projects are sufficiently rigorous and likely to yield meaningful, reliable, and valid data about teaching and learning, each department submits an assessment project plan to the Outcome Assessment Council, which then considers and approves the plan. Once approval is given, the department proceeds to collect and analyze data, implement change, and reassess for effectiveness. The cycle works like this:

    Year 1: Departments submit a plan for assessing student learning. This plan will assess all four aspects of the Maryland Higher Education Commission (MHEC) general education requirements.

    Year 2: Departments collect and analyze data and recommend changes to teaching and learning.

    Year 3: Departments reassess to determine the effectiveness of the changes and submit a final report.

    The cycle then begins again with a new team investigating a new area of learning in a different high-enrollment course.

    The Assessment

    In the fall of 2004, the English department developed an assessment project to determine how well students are learning college-level communication skills in our English Composition (EN101) classes. Initially, we conducted a general assessment to gather some baseline data about how well our students were meeting our expectations at the end of the course. We developed a rubric to assess what we thought were the most important skills we wanted students to gain from the course: Content, Organization, and Grammar/Punctuation/Mechanics.

    In 2005, we decided to focus on the same general question, “How well are students learning college-level communication skills in our English Composition (EN101) classes?” We took as our criteria the outcomes for our course:

    • Write effective, organized, clear, concise, grammatically correct compositions using appropriate stylistic options (tone, word choice, and sentence patterns) for a specific subject, audience, and purpose (informing, arguing, or persuading).
    • Demonstrate the ability to understand and interpret both written texts and oral presentations in English.
    • Understand the critical role of listening in communication.
    • Demonstrate an ability to organize ideas effectively by selecting and limiting a topic.
    • Develop and support a thesis with relevant and well-reasoned material.
    • Employ a logical plan of development and use effective transitions.
    • Demonstrate an understanding of the conventions of the English language by writing essays that are substantially free of errors in grammar, spelling, punctuation, and mechanics.
    • Develop critical thinking skills by:
        • Evaluating evidence by differentiating among facts, opinions, and inferences;
        • Generating and evaluating alternative solutions to problems; and
        • Researching, analyzing, comparing, synthesizing, and drawing inferences from readings and other research materials in order to make valid judgments and rational decisions.

    Assessment Methods

      All twelve full-time English department faculty collected final research papers (with all identifying information removed) from their regular sections during the spring of 2005. Using the outcomes-based criteria, we examined a sample of the student papers. The results indicated that students were strongest in the area of grammar and mechanics and also demonstrated competency in organizing written communication. However, we found that students did not effectively use information and material to support assertions in their writing.

      We then developed a follow-up study focusing on two new research questions: Is the problem that our students do not know how to find information and materials to support the assertion of their theses? Or, once students have found information, do they not know how to use it effectively?
      In the fall of 2006, we repeated the assessment project, specifically addressing these two questions. We provided students with specific, college-level reading material (articles from a scholarly journal and a general periodical) to be studied outside the classroom. Faculty explained the assignment, distributed materials, and set aside a full class period during the final week of class to have students write an essay responding to the readings. Instructors were cautioned to refrain from dissecting and discussing the materials with students because we were trying to determine their ability to read critically. Students were expected to prepare materials beforehand and could bring in materials and notes to class for drafting. (Although part of our study, the assignment was also a regular class assignment graded by the teacher.)

      In total, the sample comprised 281 student papers representing 24 sections taught by the full-time faculty and three adjuncts. From this pool, we randomly selected 60 papers (approximately 20 percent) for evaluation.
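The sampling step described above can be sketched in a few lines of code. This is a minimal illustration, not the department's actual procedure; the paper identifiers and the seed are hypothetical.

```python
import random

def draw_sample(papers, k, seed=None):
    """Draw a simple random sample of k anonymized papers.

    Every paper has an equal chance of inclusion, matching the goal
    of a representative, unbiased sample.
    """
    rng = random.Random(seed)
    return rng.sample(papers, k)

# 281 collected papers (identifiers are hypothetical), sampled down to 60
papers = [f"paper_{i:03d}" for i in range(1, 282)]
sample = draw_sample(papers, k=60, seed=2006)
print(len(sample))  # 60
```

Fixing the seed makes the draw reproducible, which is useful when a sample must be documented for later reassessment.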
      The papers were assessed using a four-point rubric, developed from our outcomes and the earlier study, with ratings ranging from “accomplished” to “not evident.”
      EN101 In-Class Writing Assessment Rubric
      (For each learning objective, the four descriptors below are ordered from the highest rating, “accomplished,” to the lowest, “not evident.”)

      Learning objective: The writing demonstrates that the student can differentiate facts, opinions, and inferences.
      • Student has clearly indicated when he or she is stating opinion as opposed to fact.
      • Student does not specifically state whether the evidence is based on fact or opinion.
      • Student uses data incorrectly to support a position on the issue.
      • The response is entirely based on personal opinion.

      Learning objective: The student analyzes information from various sources.
      • Student uses information from both sources provided and clearly identifies the specific source of the material.
      • Student uses information from both sources.
      • Student uses only one source to support the position.
      • Student uses no information from the sources provided.

      Learning objective: The student recognizes and develops alternative perspectives and solutions.
      • The response indicates that an original alternative solution to the prompt has been developed.
      • Student accepts a solution proposed by one or the other of the provided sources.
      • Information from the material is summarized, but no solution is offered.
      • Student misreads the data and/or misinterprets what he or she has been asked to write about.

      Learning objective: The student evaluates alternatives to make a sound judgment.
      • The student presents and considers multiple alternatives before proposing one as preferred.
      • The student mentions the existence of opposing positions.
      • The student presents a single position as the only possible solution.
      • No alternatives are presented. No judgment is present.

      Three readers read each paper. Whether a reader rated a paper as “competent,” above “competent,” or below it, he or she was asked to provide some explanation for that determination.
      The purpose of the second assessment project was to determine if students could effectively use college-level materials provided to them in response to a prompt they were given in advance. We found that they had trouble handling this information in a competent manner. Consequently, we concluded that we needed to concentrate our efforts on helping students become better critical readers.

      Assessment Principles

      Since FCC’s assessment is ongoing, the methods for assessing questions depend on the questions being asked. Both departments and individual faculty are expected to participate in ongoing assessments. In the English department, we used the following principles to help develop our assessment program and projects:

      • The use of writing assessment should be appropriate, fair, valid, reliable, and equitable. We made sure that every paper had an equal chance of being included in the sample. To show that the emphasis was on student work, we removed any indication of the instructor and the student. All papers were read by all three evaluators, who were calibrated prior to evaluating papers.
      • Writing assessment places priority on improvement of teaching and learning. To reinforce the idea that the purpose of assessment is improvement, results were not presented in such a way that any individual student or instructor could be identified as either weak or exceptional. The idea is to improve the course as a whole. Assessment must be seen as ameliorative, not punitive.

      Student learning outcomes assessment at Frederick Community College is a comprehensive effort focused on measuring student academic achievement. In writing courses, this means assessment results can be used to improve student writing in other classes and other disciplines. As we introduce learning innovations that are effective in improving student learning, these are communicated to other instructors through professional development opportunities throughout the year.

      • Writing assessment provides evidence of student learning. By assessing random samples of students over many semesters, we are able to determine whether the learning innovations we have introduced as a result of assessment data are effective.

      Students also are active partners in the learning process. Our students are made aware of what is being assessed in the work they produce. We specify what evidence of learning will be examined closely and remind students of these expectations.

      • Writing assessment is ongoing, providing the foundation for data-driven, or evidence-based, decision making. Because the assessment project takes place over a period of years and provides data that can be used to measure effectiveness, we can develop changes to our courses based on evidence and then measure the effectiveness of those changes.
      • Writing assessment articulates and communicates clearly its values and expectations. Our students are told what is being assessed and how it will be assessed. They are provided with the assessment rubric. Faculty are aware that they have nothing to fear from assessing their students’ learning. The values and expectations are that everyone is committed to improvement and no one need fear reprisal because of disappointing results.
      • Writing assessment is site-specific. We wanted to develop a program of assessment that would fit within our current practices. We didn’t want to “invent” a way to assess. We wanted to “find” existing opportunities to assess student learning. We wanted to incorporate assessment into what we were already doing so as to cause the least disruption of teaching practices. Our program was also developed with consideration of our size, our resources, and our mission.

      Assessment Results

      The results of the 2005 assessment project indicated that students were strongest in the area of grammar and mechanics. Students also demonstrated competency in organizing written communication. As previously stated, the area of greatest weakness was content. Specifically, we learned that students did not effectively use information and material to support assertions in their writing. This led us to the two new research questions described earlier: Is the problem that our students do not know how to find information and materials to support the assertion of their theses? Or, once students have found information, do they not know how to use it effectively? We believed that the answer to improving student learning in the area of content lay in the answer to these questions.

      The purpose of the second assessment project was to determine whether students could effectively use college-level materials provided to them in response to a prompt they were given in advance. If we found that they could not handle this information in a competent manner, we could then concentrate our efforts on helping students become better critical readers. Analysis of the data showed that, under the best possible interpretation, 67 percent were determined to be fully competent or better in all three learning objectives by at least two of the three evaluators. Only 27 percent of the sample papers were determined to be fully competent or better by all three evaluators. Additionally, fewer than 50 percent of the papers were competent or better on any individual learning objective.
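The tallies reported above can be reproduced mechanically from the readers' ratings. The sketch below assumes a simple data layout (for each paper, a list of three evaluators' per-objective ratings); the level names other than "accomplished," "competent," and "not evident" are placeholders, since the source names only those.

```python
# Illustrative rating scale; "emerging" is a placeholder for the unnamed level
LEVELS = ["not evident", "emerging", "competent", "accomplished"]

def competent_or_better(rating):
    """True when a rating is 'competent' or higher on the scale."""
    return LEVELS.index(rating) >= LEVELS.index("competent")

def pct_competent(papers, min_evaluators):
    """Percentage of papers judged competent or better on ALL objectives
    by at least min_evaluators of the three readers."""
    hits = 0
    for evaluator_ratings in papers:  # three lists of per-objective ratings
        agreeing = sum(
            1 for ratings in evaluator_ratings
            if all(competent_or_better(r) for r in ratings)
        )
        if agreeing >= min_evaluators:
            hits += 1
    return round(100 * hits / len(papers))

# Toy data: two papers, three evaluators, three objectives each
papers = [
    [["competent"] * 3, ["accomplished"] * 3, ["emerging"] * 3],  # 2 of 3 agree
    [["competent"] * 3, ["competent"] * 3, ["competent"] * 3],    # 3 of 3 agree
]
print(pct_competent(papers, min_evaluators=2))  # -> 100
print(pct_competent(papers, min_evaluators=3))  # -> 50
```

Running the same tally with different agreement thresholds (at least two readers versus all three) yields exactly the kind of "best possible interpretation" versus strict-agreement figures reported in the paragraph above.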

      Assessment Follow-Up Activities

      Our second research project—spring 2006—showed us that even though we provided the students with appropriate college-level material, they were not consistently able to use it to support a thesis. Our assumption is that we must spend more time teaching critical reading and the integration of that reading into essay writing. This assumption is bolstered by placement data showing that the number of students requiring developmental writing is decreasing while the number requiring developmental reading is increasing at a disproportionately higher rate.

      Based on these findings, we developed a version of EN101 that will focus more directly on teaching critical reading. We have also designed a new assessment project. We proposed a true experimental design project in which six sections of EN101 Freshman Composition will be given intensive critical reading instruction and practice. Six other sections will be designated as control sections, in which no teaching/learning innovation will be introduced. These students will get the same class they normally would. We will then repeat the in-class essay using provided material for all twelve sections and compare student performance. If the study group outperforms the control group, we will know that we need to rework the EN101 course to include extensive critical reading, and we can begin developing professional development materials to teach EN101 instructors how to improve learning in this area.
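The planned comparison can be sketched as a simple difference in competence rates between the two groups. The scores below are entirely hypothetical; a real analysis would also need to ask whether any observed difference is statistically significant rather than due to chance.

```python
def competence_rate(scores, passing=("competent", "accomplished")):
    """Fraction of papers in a group rated competent or better."""
    return sum(1 for s in scores if s in passing) / len(scores)

# Hypothetical overall scores from the study and control sections
study = ["competent", "accomplished", "competent",
         "competent", "not competent", "competent"]
control = ["competent", "not competent", "not competent",
           "competent", "not competent", "competent"]

diff = competence_rate(study) - competence_rate(control)
print(f"study {competence_rate(study):.2f}, "
      f"control {competence_rate(control):.2f}, difference {diff:+.2f}")
```

A positive difference would point toward the critical reading intervention; with only six sections per group, however, the comparison should be read cautiously.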

      Assessment Resources

      While there was no additional funding for these assessments, FCC invests significantly in the assessment process. FCC provides institutional support through the Office of Assessment, where we have two full-time researchers available to assist with design and analysis. There is professional development money available for anyone involved in assessment who wants additional training. Requests for equipment are made from existing initiative money; there is no supply budget for assessment. Finally, faculty involved in assessment are recognized for their work at a level equivalent to serving on an important college committee. Assessment work is considered the type of service we would expect to see from someone applying for promotion.

      Portability and Sustainability of the Design

      The course-level assessment used by FCC is sustainable because of its ongoing, three-year project design. This design encourages the use of assessment data to improve teaching and learning. Furthermore, the effectiveness of the instructional innovations, developed as a result of assessment data analysis, is itself assessed. Once an opportunity to improve teaching and learning is identified and addressed, the methods by which student learning is improved are presented to all instructors through professional development. After that, a new area of student learning is identified and the cycle continues.

      This design is portable and adaptable to other two-year colleges. We all have course outcomes that have been identified as important skills, knowledge, or abilities we want our learners to demonstrate at the end of a course. We all have embedded opportunities within these courses that provide students a chance to demonstrate their mastery of these skills, knowledge, or abilities. This design identifies existing assessment opportunities rather than creating new tasks for instructors and students. In this way, we think more deeply about the types of work we are asking students to do and why we are asking them to do it. In doing so, we give more meaningful assignments to our students and receive better data on how well they are learning what we want them to learn.

      This design requires only a little bit more work by a few people for a limited period. Three people from each department work about 25 to 40 additional hours over the course of a year for three years. Much of this work is done over the summer months when they are not teaching and can be done asynchronously when it best suits their schedules and preferences. These three are then excused from departmental assessment projects until everyone else in the department has served in that capacity.

      Additionally, we all complete annual self-evaluations. It is proper that we should self-evaluate the effectiveness of the courses we teach. Reflecting each year on one aspect of one of our courses that we think can be improved, on what we did to improve it, and on how well the change worked is not only an appropriate professional self-reflection but also valuable data for our colleagues and our college.

      This design is minimally intrusive, involves minimal additional resources, and requires minimal additional effort on the part of a very few faculty above what they are already doing.


      O’Banion, Terry. A Learning College for the 21st Century. Phoenix: American Council on Education and Oryx Press, 1997.


    Assessment Narrative - George Mason University

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.


    Institution: George Mason University

    Type of Writing Program: Writing across the Curriculum; required upper-division writing-intensive courses in the major

    Contact Information: Terry Myers Zawacki

    Director, WAC and University Writing Center

    Assessment Background and Research Question

    George Mason University, a large Virginia state institution located outside of Washington, D.C., has a well-established Writing across the Curriculum (WAC) program dating from 1977. The components of the program include an upper-division required composition course in a disciplinary field relevant to the student’s major (e.g., Advanced Composition in the Social Sciences) and an upper-division designated writing-intensive course(s) in the major. In 2001, our State Council of Higher Education in Virginia (SCHEV) required all institutions to develop definitions of six specific learning competencies, one of which was writing, and plans for assessing them, with reporting to begin two years later. Each institution was allowed to develop its own assessment plan. The director of the Office of Institutional Assessment (OIA) consulted with me about how we might respond to this mandate so that we would be able to use the results of the assessment to improve the way writing is taught across the disciplines, not just to prove something about our students’ writing competence to an external audience.

    The year before we received the 2001 mandate to assess writing, the OIA director and the WAC director had already begun to set in place a process for determining the effectiveness of our writing-intensive (WI) requirement in the major. As a first step, we asked the provost to convene the Writing Assessment Group (WAG), comprising representatives from each of the colleges, many of whom had served or were currently also serving on the senate-elected WAC committee. Our first WAG task was to design a survey, described in detail under Assessment Methods, which we circulated to all faculty to determine the number and kinds of writing tasks they assigned and their level of satisfaction with students’ performance on these tasks. Based on the results of this assessment and in response to the state mandate, we developed a second set of research questions related to students’ competence as writers in their majors.

    To fulfill the state’s mandate, all institutions had to (1) submit a plan for assessing students’ writing competence, (2) include a definition of standards for writing competence, along with methods to be used to measure competence, and (3) report results to stakeholders, as well as actions that would be taken based on the results. Mason’s plan focused on the writing of upper-division students in the majors with assessment to be conducted by departmental faculty who would assess representative samples of student writing in the major according to a discipline-specific rubric they had developed. In addition to these departmental results, the proposal also noted that we would include data from the results of the faculty survey on student writing and responses to questions about writing from graduating senior and alumni surveys. Based on all of these findings, we would determine what changes and/or enhancements might need to be made in the WI course(s), to its role in the sequence of major courses, and/or in the faculty development workshops that are targeted to faculty teaching WI courses.

    For purposes of reporting to the state higher education council, our writing assessment group decided to aggregate the results from all of the departments that had conducted assessment, so that individual departments would not be singled out for producing unsatisfactory numbers of less-than-competent writers. However, we asked departmental liaisons to write longer, more detailed reports on their assessment findings to be kept in the Office of Institutional Assessment and to be circulated to department members. In a concluding section of the longer reports, departments are asked to describe the actions they will take, as a result of their findings, to improve the way writing instruction is delivered in the major. The report to SCHEV is publicly available. Departmental reports are not; however, the scoring rubrics are posted on the WAC site.

    Assessment Methods

    Faculty Survey on Student Writing
    For the first assessment measure in fall 2000, the Faculty Survey on Student Writing was distributed to all faculty, who were asked about student writing at different points along a continuum, such as the writing preparedness of first-year students and transfer students and their level of satisfaction with seniors’ abilities on 17 writing criteria. Faculty also noted the number and kinds of writing assignments they use in their undergraduate classes, as well as their perception of and interest in overall departmental support and resources for teaching with writing. While, as could be predicted, response to the survey was disappointingly low, a number of units (Biology, College of Nursing and Health Sciences, Computer Science, Electrical and Computer Engineering, English, New Century College, Public and International Affairs, School of Management) had initial response rates of 40 percent or higher. Some units subsequently readministered the survey and achieved higher response rates. A detailed description of the survey results can be found on page 3 of the InFocus newsletter.

    Questions on Writing on Graduating Senior Survey
    Supplementing the information from the faculty survey are results on the writing questions asked each year on the Graduating Senior Survey. The 2006 senior survey included questions about students’ opportunities for revision and feedback in 300-level courses and above, and the effect of feedback on improving their writing, their confidence, and their understanding of their field. The results can be viewed by selecting “Writing Experiences.”

    Course-Embedded Holistic Assessment by Faculty in Majors
    Our current and ongoing assessment is embedded in required upper-division WI courses in the major. Every department offering undergraduate degrees is asked to appoint a liaison who organizes the assessment effort. The liaisons attend a cross-disciplinary workshop, which is designed to teach them methods for developing criteria and assessing papers holistically. The liaison then goes back to his or her department to lead a similar workshop using papers collected from writing-intensive or writing-infused courses. The following paragraphs give a fuller description of these workshops.

    Cross-Disciplinary Training Workshops.
    For the cross-disciplinary training workshop, departmental liaisons read, discuss, and rank sample student papers written in sections of English 302, an advanced writing-in-the-disciplines course required of all students; the papers were written in response to a standardized assignment prompt for a literature review. After the sample papers have been ranked, the faculty go through the process of developing a scoring rubric based on criteria derived from their discussion of traits they valued in the papers. While the purpose of the cross-disciplinary workshop is to teach the liaisons the process to be used in the departmental workshops, the participants always leave with an awareness of how much their expectations may differ from those in other disciplines and even from members of their own disciplines; they also acquire a greater understanding of the challenges student writers face in meeting the expectations of teachers across disciplines. The WAC director leads these “training-the-liaison” workshops with the assistance of other composition faculty as available. She also leads or co-leads (with the designated liaison or another assessment group member) the half-day departmental workshops.

    Departmental Assessment Workshops.
    Before the departmental scoring session, liaisons determine what assignment will be used to evaluate students’ competence. They are asked to select an assignment that requires students to demonstrate the skills and abilities most characteristic of those that writers should possess in the major. Papers written in response to the assignment or set of assignments are collected from all students with their names removed. Then papers are selected at random to provide a representative sample for scoring (the number of papers scored is based on a reliable percentage of the number of majors). Participants in the workshops are typically those faculty who most often teach the WI course(s) or teach with writing in most of their courses. As in the training workshop, they read and discuss three or four sample papers as a group, articulate traits they value in each of the papers, rank the papers, and, finally, develop a rubric with criteria that reflect the traits they’ve listed. Thus the criteria and the scoring rubric are not only discipline-specific but also specific to courses and assignments.

    Using this rubric, faculty score the papers. Each of the papers being assessed gets two readings and a third if the first two overall scores do not agree. Because overall scores can be difficult to determine if there is a spread of scores over individual criteria, faculty, as a group, must decide how they will determine overall competence when some criteria may be assessed as “less-than-satisfactory.” Some groups have decided that any paper receiving a “less-than-satisfactory” on the top one or two criteria must receive an overall “less-than-satisfactory” score. The School of Management decided, for example, that papers assessed as “not competent” in the category of “Formatting and Sentence-Level Concerns” must receive an overall score of “not competent.” Biology faculty agreed that any paper receiving an “unacceptable” rating on “Demonstrates Understanding of Scientific Reasoning” must be judged as “unacceptable” overall. (Note: Departments decide on the language they will use to describe the level of competence, e.g., “less than satisfactory,” “not competent,” “unacceptable.”) 
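The scoring rules described above can be summarized as a small decision procedure: two readings per paper, a third reading to adjudicate disagreement, and a department-chosen "veto" criterion that forces the overall score down. The level names and the example criterion come from this paragraph; the function names are illustrative.

```python
def final_score(first, second, third=None):
    """Each paper gets two readings; a third reading adjudicates
    when the first two overall scores do not agree."""
    if first == second:
        return first
    if third is None:
        raise ValueError("third reading required: first two scores disagree")
    return third

def overall_with_veto(criterion_scores, overall, veto_criteria,
                      fail_label="not competent"):
    """Force the overall score to the failing label when any veto
    criterion (e.g., the School of Management's 'Formatting and
    Sentence-Level Concerns') is rated as failing."""
    if any(criterion_scores.get(c) == fail_label for c in veto_criteria):
        return fail_label
    return overall

# A paper rated 'competent' overall by both readers, but failing the veto criterion
scores = {"Formatting and Sentence-Level Concerns": "not competent",
          "Content": "competent"}
overall = final_score("competent", "competent")
print(overall_with_veto(scores, overall,
                        ["Formatting and Sentence-Level Concerns"]))
# -> not competent
```

Because each department chooses its own labels and veto criteria, the same two small rules can encode quite different departmental policies.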

    Once the scoring has been completed, the departmental liaison is responsible for analyzing the distribution of scores overall and on each criterion and for writing a report on the results to be circulated to the department and sent to OIA. While an analysis of the overall scores on the rubrics gives departments a general picture of students’ writing competence in the major, it is the analysis of the scores for each of the criteria that is most instructive for the purposes of faculty development, i.e., developing teaching strategies and assignments targeted to those areas in which papers were judged to be weak. As explained below, the assessment results also help departments make decisions about where writing is best placed in the curriculum. A more detailed explanation of our assessment process is available on our WAC site, as are a number of rubrics developed by the departments that have conducted assessment.

    Assessment Principles

    We view assessment as part of an overall philosophy about education that states that good assessment—its methods, practices, and results—can be used to correct, change, and enhance the learning experience for our students. Central to our assessment process is the belief that faculty own the curriculum and, further, that program faculty must share a sense of direction and purpose to establish a coherent learning experience for students—in this case, a coherent writing experience in the major. When writing assessment is embedded in writing-intensive courses in the major and when faculty buy into the process, both the process and the results contribute to the development of teachers, to their greater understanding of student writers, and to the effectiveness of the writing instruction in their classes.  

    Our assessment principles and decisions are also guided by composition and writing-in-the-disciplines research and theory, including Cooper and Odell’s 1977 collection Evaluating Writing: Describing, Measuring, Judging, which describes and provides a rationale for holistic scoring, and Huot’s 2002 (Re)Articulating Writing Assessment for Teaching and Learning, which argues that assessment should be site-based and locally controlled, that writing professionals should lead these efforts, and that our practices should be theoretically grounded, practical, and politically aware. Our process is also informed by genre and activity theory, which accounts for the fact that there are significant disagreements among faculty across and in the same disciplines about what constitutes competent writing. A fuller listing of sources is included at the end of this document.

    Assessment Results

    It would be difficult to sum up in a brief statement all that we have learned from our assessment efforts. The rubrics that departmental faculty develop as a result of our holistic reading and scoring process reveal widely varied expectations for student writing, based in the discipline but also on faculty members’ sense of the writing that is appropriate for undergraduates in their disciplines. Some results can be found on the website pages listed above. Coauthor Chris Thaiss and I also discuss assessment results in Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life.

    One of the most significant things faculty discover as part of the workshop scoring process is that they may not agree with one another on what “good” writing or “serious” error looks like. While they may start from a position that surface errors are the strongest indicator that students “can’t write,” they see, as a result of collaboratively constructing a scoring rubric, that students’ performance on other higher-order criteria (clear argument, focused thesis, logical evidence, etc.) might be a better indicator of students’ ability to write well in the discipline. Faculty can also see how flaws in their assignments might contribute to students’ less-than-successful performance. The subsequent analysis of the scoring results helps faculty create more effective assignments, decide on appropriate assignments and sequences, and/or improve their teaching-with-writing practices in the areas indicated by the assessment. Further, the reports help departments determine appropriate course sequences and whether the currently designated WI course is the most appropriate for the major. A more specific discussion of how the assessment results are being used by departments can be found in the InFocus newsletters.

    Assessment Follow-Up Activities

    The Southern Association of Colleges and Schools (SACS) requires the assessment of learning outcomes for every academic program, including general education, for accreditation purposes. The writing assessment we have been doing contributes to this reporting, with each individual unit discussing the results of its assessment of writing in the major and the follow-up actions it will take. The university will also include the assessment of writing as part of our larger assessment of general education for the SACS review.

    The State Council of Higher Education for Virginia has recently mandated that Virginia institutions include a “value-added” component in their assessment plans. We will build on our current plan by adding a preassessment of students’ writing competence at the completion of first-year composition (FYC), using a random and representative sample of research-based essays. Faculty who teach the course will participate in a scoring workshop, in which they first develop a rubric to specify standards and then blind-rate the papers. In addition to providing comparison data for the postassessment that occurs in the WI courses, the results should also allow us to begin assessing our required English 302 advanced writing-in-the-disciplines course.

    Assessment Resources

    Departmental liaisons are given a very small stipend and a free lunch for participating in the cross-disciplinary training workshops. In some departmental workshops, faculty are given lunch and, if funding is available, a small stipend. In 2004 the provost funded a university-wide reception to recognize faculty for their assessment efforts. Posters describing each department’s assessment procedures, rubrics, and results were created for the reception and subsequently displayed, at the request of our university president, at a meeting of the Board of Visitors. Some posters were also displayed in the bookstore and in departments, and some can be viewed online. Other than this recognition and some small compensation for term and adjunct faculty who participate in scoring, there are no incentives; we must rely on the goodwill of full-time faculty and their commitment to student learning.

    The WAC director co-chairs the assessment initiative with the OIA director as part of her responsibilities, not because this is part of the job description but because of what the WAC program gains from participating in the process. The assessment workshop is a valuable faculty development opportunity, and both the process and the resulting data provide the director with a valuable perspective on writing in the disciplines across the university, which, in turn, informs ongoing WAC program and faculty development efforts.

    Assessment Design Sustainability and Adaptability

    Our assessment efforts are sustainable up to a certain point. A joint WAC-OIA position has been approved for the next fiscal year for an assistant to help with both writing assessment and the WAC program. However, we still need more resources to enable us to recognize the efforts of those faculty who have participated and to provide incentives to encourage more faculty to participate.

    Our process is adaptable, as proven by departments using the methods for their own ends, e.g., the Department of Communication using a holistic method to develop a rubric for faculty, mostly adjunct, to use in grading papers from lower-division general education and majors courses; departments also find the process useful for calibrating teachers’ reading and evaluation practices. Our School of Management is using the process to develop writing outcomes for their majors and also to measure growth in writing from the gateway to the capstone course.

    The frequent queries we receive from program leaders across the country about our assessment process are evidence of the adaptability of our assessment design to other programs. Indeed, our program has been referred to as “the Mason Model” by some of the WAC and assessment specialists who frequently contact our program.

    Useful References

    Bazerman, Charles, and David R. Russell. Writing Selves/Writing Societies: Research from Activity Perspectives. Perspectives on Writing. Fort Collins, CO: The WAC Clearinghouse and Mind, Culture, and Activity, 2002.
    Cooper, Charles R. “Holistic Evaluation of Writing.” Evaluating Writing: Describing, Measuring, Judging. Ed. Charles R. Cooper and Lee Odell. Urbana, IL: National Council of Teachers of English, 1977. 3–32.
    Cooper, Charles R., and Lee Odell, eds. Evaluating Writing: Describing, Measuring, Judging. Urbana, IL: National Council of Teachers of English, 1977.
    Haswell, Richard, and Susan McLeod. “WAC Assessment and Internal Audiences: A Dialogue.” Assessing Writing Across the Curriculum: Diverse Approaches and Practices. Ed. Kathleen Blake Yancey and Brian Huot. Greenwich, CT: Ablex, 1997.
    Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Logan: Utah State UP, 2002.
    Miller, Carolyn R. “Genre as Social Action.” Quarterly Journal of Speech 70.2 (1984): 151–67.
    Russell, David R. “Rethinking Genre in School and Society: An Activity Theory Analysis.” Written Communication 14.4 (1997): 504–54.
    Thaiss, Christopher, and Terry Myers Zawacki. Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life. Portsmouth, NH: Boynton/Cook, 2006.
    Walvoord, Barbara E. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. San Francisco: Jossey-Bass, 2004.
    White, Edward M. Teaching and Assessing Writing: Recent Advances in Understanding, Evaluating, and Improving Student Performance. San Francisco: Jossey-Bass, 1994.
    Yancey, Kathleen Blake, and Brian Huot, eds. Assessing Writing Across the Curriculum: Diverse Approaches and Practices. Greenwich, CT: Ablex, 1997.


    Assessment Narrative - Saint Joseph College

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.



    Institution: Saint Joseph College, Connecticut
    Type of Writing Program: Across the Curriculum Cumulative Portfolio (required)
    Contact Information: Dr. Judy Arzt (Director)
     (860) 231-5353

     Dr. Kristine Barnett (Writing Portfolio Coordinator)
     (860) 231-5472

    Assessment Background and Research Question

    In 1988, Saint Joseph College undertook a study of college-wide writing assessment practices used nationally and internationally with the goal of implementing an assessment complementary to its culture. Although for over 50 years our college catalog indicated that students must achieve writing competency to graduate, the faculty questioned whether measures used in the past followed best practices. For instance, as late as the 1980s, a rising-junior exam was used, but this method was contrary to theories set forth by prominent writing theorists such as Peter Elbow and Donald Murray. Our faculty members were disenchanted with a one-shot essay writing sample, and although one was administered to all juniors in the spring of 1987, members of the English department soon agreed the method was not suitable and stored the essays in a basement, unread. With the hiring of a director to oversee writing programs, a full-scale search ensued for an effective assessment tool.

    The director, along with an assessment committee, commenced a yearlong study to compile assessment information, and resources were gathered from the Conference on College Composition and Communication (CCCC), American Association for Higher Education (AAHE), and National Testing Network. New publications in the field of assessment were further consulted. In the end, we favored a longitudinal portfolio.

    The questions we sought to answer through a portfolio assessment focused on how students develop as writers over time and how well our curriculum helps shape students as writers. As we sought to define the portfolio program for compatibility with the college mission, faculty were a guiding force in the decision-making process. Faculty attended a series of workshops to become better acquainted with the principles of Writing across the Curriculum (WAC). During the 1989 spring term at a one-day WAC workshop, faculty examined model assignment sheets and reviewed individual students’ writing assignments and due dates for a single semester. These activities helped faculty see writing (amounts and expectations) from a student perspective, as well as understand that writing skills develop over time and vary with genre and discipline. Following the workshop, at the end of the spring term, a WAC consultant was invited to spearhead a two-day, all-faculty retreat focused on encouraging faculty commitment to writing-to-learn activities, fostering the perception of the teaching of writing as a joint faculty partnership, and promoting recognition of the cumulative developmental process of writers. In summary, both the retreat and the workshop created a climate receptive to a longitudinal, cross-disciplinary portfolio program as a means to assess student writing. The following fall, a faculty assessment committee advanced a proposal for a portfolio program to the faculty for a vote, and the proposal received unanimous approval.

    We envisioned the portfolio program as a means to assess student writing as well as the college’s new core curriculum. Annually, students were required to submit one paper from a core course and an additional paper from another course. All core courses were writing intensive, and faculty designed common syllabi and assignments. However, portfolios revealed that students’ strongest writing did not derive from the core classes, but from disciplinary courses that held the students’ interest, often courses in their major or closely allied disciplines. Not surprisingly, whereas the portfolio program survived the test of time, the core curriculum did not. In 1995 we undertook an assessment of the core program and surveyed seniors who had been through a four-year cycle of the courses. The survey results led faculty to abandon the core courses but not the portfolio, for which there was strong support.

    In summary, the research questions guiding the formulation of our portfolio included:

    • What are the characteristics of student writing on our college campus?
    • How do students develop as writers over time?
    • How well do our students succeed in meeting a writing competency requirement for graduation?
    • How can portfolios inform and support curricular and instructional decisions?
    • What does a cross-disciplinary portfolio tell us about writers and their rhetorical strategies?

    Assessment Methods

    Portfolios are evaluated using three techniques. The written commentary section is the most labor intensive but the most helpful to students’ writing development. A criteria checklist and a holistic score complete the evaluative process. (A sample score sheet can be found in the appendix.) The criteria for evaluating portfolios are (1) fluidity and clarity of expression, (2) effective organizational skills, (3) effective use of details and elaboration, (4) critical thinking skills, (5) effective research writing skills, (6) effective use of language and diction, and (7) mechanics and usage. A student’s portfolio is rated in each of these areas using a plus, check, or minus.

    Papers are evaluated holistically and more weight is given to recent work to reflect the student’s progress. A 5-point scoring system is used, with 5 as the high score. A minimum score of 3 is required to complete the process. The sophomore-year evaluation serves as formative assessment, and a lengthy commentary educates students about the criteria and the fit of their writing with each of the seven criteria. Usually a paragraph or two explains each rating, and evaluations conclude with a bulleted list of tangible suggestions for writing improvement. The junior-year final evaluation follows the same process, except summative comments are kept to two paragraphs, as students are completing the process and receive acknowledgment for accomplishments to date, though recommendations for the future are included.

    The process of completing preliminary evaluations, as noted, is complex and time consuming for evaluators, but students report that the feedback is valuable. In the late 1990s, we conducted exit interviews with graduating seniors to assess their response to the program. Most highlighted the advantages of the comprehensive preliminary evaluation, which has remained a distinctive feature of our program.

    The actual scoring process is not particularly difficult in terms of reaching a consensus between two readers. At the preliminary stage, one member of the writing center staff writes a comprehensive evaluation. The second reader responds to this evaluation and offers suggestions for additions and changes. Ultimately, the two readers must agree on all aspects of the commentary as well as the ratings and score. At the final stage, two faculty members evaluate a portfolio independently. The two then meet to synthesize findings. Ordinarily, there is strong congruence, and in the two decades in which the program has been in place, only on the rarest of occasions has there been a discrepancy. In such cases, a third reader resolves the difference. In all cases where two independent readers score a portfolio “below satisfactory” or “poor,” two additional readers must read the portfolio independently. Thus, for a portfolio to receive an unsatisfactory score, four faculty members must agree on that score. On the other hand, students who receive the top score of “5” are commended through awards and the transcript notation of “Writing Portfolio Completed with Distinction,” as opposed to the regular designation of “Writing Portfolio Completed.”

    The portfolio process has remained fairly consistent since its inception and is well articulated for students, faculty, and academic advisors. Our “Writing Portfolio Booklet” explains the process, and annual portfolio reports, enumerating results and recommendations, keep the college community abreast of the program and student progress. Class visits and presentations at orientations and other events further serve as communication channels.

    Assessment Principles

    The portfolio program is based on assessment practices that mirror National Council of Teachers of English (NCTE) and CCCC position statements on writing assessment. Principles that have guided our practices include:

    • Writing skills develop over time and a writer’s rhetorical strategies shift to accommodate context and audience. To understand a writer’s full range of skills, a collection of samples written in different genres and for different purposes is essential. This principle reflects the CCCC Position Statement: “Best assessment practice uses multiple measures.”
    • Evaluation of student writing skills needs to reflect the curriculum as well as inform curricular practice. To that end, students submit a sample of papers from a variety of courses. Portfolio results are communicated annually to the college community and used to make curriculum decisions and shape classroom teaching practices. We believe that timely communication of results is a critical part of the process and keeps the college community informed of the link between the curriculum and student performance.
    • For assessment to truly benefit students, they need timely review and valued response to their work. Our system provides students with detailed feedback on their growth as writers, and students are invited to confer with evaluators, advisors, faculty, and writing center staff regarding writing progress, portfolio submissions, and evaluations. We see the portfolio process as fostering a community of learners and as a means for promoting conversations about writing on campus.
    • Writing assessment needs to value teachers’ classroom work. Faculty members are an integral component of the process, and they guide the program. As such, the program is sensitive to faculty response, and all faculty members are invited to be portfolio readers. Training is provided to new evaluators, and faculty reading days are considered a faculty development activity, rewarded in the tenure and promotion process, as well as through monetary compensation. Faculty members have found that reading days provide them with an opportunity to confer with colleagues on assignment design, methods of response to student writing, and the overall college curriculum.

    In addition to the specific guidelines set forth by NCTE and CCCC, scholarship in the field of writing assessment, including work done by Edward White, Kathleen Blake Yancey, Brian Huot, Chris Anson, Richard Haswell, and Nancy Sommers, has informed our work.

    Assessment Results and Follow-Up Activities

    Assessment results are reported directly to students via a written evaluation. In addition to the score sheet and narrative, students are encouraged to meet with evaluators and writing center staff, as well as advisors, to discuss portfolio results and writing skills in general. In the past, students have been surveyed to capture their perceptions regarding the portfolio process. Findings revealed overwhelming support for the continuance of the program and the evaluative techniques, particularly the commentaries.

    Faculty advisors are considered essential to the success of the program. In fact, all advisors receive a duplicate set of students’ papers and evaluations. Advisors meet with students to discuss progress and set goals. In addition, faculty are attuned to using the program to evaluate the curriculum, and, of late, plans are underway to use portfolios to assess the college’s new general education curriculum. A representative sample of portfolios will be used to assess how well the new curriculum addresses critical thinking skills.

    Collective program results are communicated through an annual portfolio report disseminated to members of the college community. These reports not only give results of student performance but also include recommendations for curricular reform and areas of focus for classroom instruction. For instance, annual reports have led to increased attention to teaching research skills, as portfolio results showed that a significant number of students struggled in this area.

    In terms of informing the greater community, we have made presentations on our program at national and regional conferences, including the American Association for Higher Education, CCCC, Northeast Writing Centers Association, International Writing Centers Association Conference, and New England Assessment Network. The program is listed in AAHE assessment publications. Routinely, we submit proposals for conferences and make presentations at other campuses interested in implementing a writing assessment program. In addition, faculty from other colleges have attended our portfolio faculty reading days to observe the evaluative process.

    Assessment Resources and Transferability

    The writing portfolio is integrated into the services provided through the Center for Academic Excellence (CAE). Thus, the only additional funding required is compensation for faculty who read portfolios, and the per diem rate for a full reading day is $300. We also compensate faculty who occasionally read portfolios outside of reading days to expedite the evaluations, and these faculty are paid $30 per portfolio. Faculty readers who periodically evaluate portfolios throughout the year are still expected to confer with a second reader, either in person or online.

    The program is easily transferable to other institutions, and Springfield College, in Springfield, Massachusetts, has already adopted some aspects of our program to assess student writing in specific programs. The evaluative criteria we use reflect research and scholarship on writing assessment in composition and rhetoric, follow common standards for assessing writing based on that work, and have also been adapted from other assessment processes. An examination of portfolio programs at other institutions revealed similar criteria, each developed after the institution established that the criteria were appropriate for its own context. The training program is also easy to implement, and the reading days, which are an excellent faculty development activity, are comparable to similar portfolio reading days or faculty workshops held at other institutions.

    The program has thrived since 1989 and has continued to be supported by students, faculty, and administration. Despite the program’s solid foundation and successful track record, the college continues to streamline the program to ensure sustainability.






    Appendix: Sample Portfolio Score Sheet

    STUDENT’S NAME:



    Writing portfolios are scored holistically on a scale of 5 to 1. A score of 3 or higher is needed to fulfill graduation requirements. The score scale is as follows:

                5 = excellent
                4 = good
                3 = satisfactory
                2 = below satisfactory
                1 = poor
         Inc = Incomplete (work is missing)



    A plus mark in front of an area indicates strength; a check mark indicates an area is satisfactory; a minus mark indicates an area in need of improvement.

                _________ 1. fluidity and clarity of expression
                _________ 2. use of appropriate organizational structure
                _________ 3. sufficient use of details and elaboration
                _________ 4. critical thinking skills
                _________ 5. effective and correct use of research techniques
                _________ 6. effective use of language and diction
                _________ 7. correct mechanics and usage



    Readers:                                                                                                         Date:




    Arzt, Judy. “Electronic Portfolios’ Transformative Effects on Assessment.” Paper presented at the Conference on College Composition and Communication Convention. New York City. 21 Mar. 2003.
    Arzt, Judy. “Writing Portfolio as an Exit Requirement.” Reviews and Descriptions of Assessment Instruments. Knoxville, TN: Clearinghouse for Higher Education Assessment Instruments, 1994.
    “Portfolio Project.” Center for Academic Excellence, Saint Joseph College, CT.

    Assessment Narrative - Salt Lake Community College Community Writing Center

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.



    Institution: Salt Lake Community College Community Writing Center
    Type of Writing Program: Community Writing Center
    Contact Information: Tiffany Rousculp, Director
                                          SLCC Community Writing Center
                                          210 E. 400 South, Suite 8, Salt Lake City, UT 84111
                                          (801) 957-4992

    Assessment Background and Research Question

    The Community Writing Center (CWC) is an outreach site of Salt Lake Community College (SLCC), a drop-in writing center for all members of the Salt Lake community. The mission of the CWC is to support, motivate, and educate people of all abilities and educational backgrounds who use writing for practical needs, civic engagement, and personal expression. The CWC combines pedagogical strategies of academic writing centers with the responsive nature common to nonprofit organizations. Participants in CWC programs are self-motivated to develop their literacy abilities and can do so through one or more of our four programs: Writing Coaching, Writing Workshops, Writing Partners, and the DiverseCity Writing Series.

    Writing Coaching provides free, one-on-one individual assistance on any writing task—résumés to poetry to letters—in a supportive mentoring environment. Writing Workshops introduce community members to a variety of writing genres in low-cost, low-stakes, short-term, small-group workshops. The CWC collaborates with nonprofit educational and/or governmental organizations through the Writing Partners program. These partnerships include workshops for staff/clients, mentoring for writing group development, and shared promotion of literacy/writing resources throughout the Salt Lake valley. Finally, the DiverseCity Writing Series is a multigroup, year-round writing group and publishing program that engages community members in personal expression and shared celebration of the written word.

    The CWC is directed by a faculty member from the SLCC English department and is staffed by five part-time writing assistants (students from the college and the local university). Recently, the CWC added an assistant director (an English department faculty member with 50 percent reassigned time) to its staffing. The CWC opened in October 2001 and is located on the plaza of the Salt Lake City Main Library in downtown Salt Lake City. To date, the CWC has served over 2,000 community members and has partnered with nearly 80 community organizations.

    The CWC does not generate income through number of students served and thus must continually justify the budget expenditures for its programming to SLCC administration and trustees. Although the CWC receives excellent support from SLCC administrators, our budget is tenuous when compared with programs at the college that deal only with registered students. Therefore, it is imperative for the CWC to anticipate—and implement without prompting—the type of assessment that administrators and trustees value. To address this need, and to shape the direction and work of the SLCC Community Writing Center, center staff has engaged in two main types of assessment since our inception:

    1. User satisfaction surveys on each of our four programs that try to address the question, “How satisfied are community writers with the services that the CWC provides?”
    2. Work Plans (reestablished every two years) to assess the use and achievements of the CWC, specifically addressing the question, “Who uses the CWC and what resources are they using?”

    These ongoing assessment practices generate mostly quantitative data targeted toward our funding sources (SLCC administration and Boards of Trustees and Regents).

    In 2006, after five years of conducting assessment through our Work Plan process, the CWC was well supported by SLCC administrators; still, with a new president arriving at the college that year, we believed an external review would enhance our credibility as a well-theorized and responsive program. As a site of learning, our primary concern was how to establish an assessment process that moved beyond the quantitative findings of our Work Plan and into a deeper examination of the learning (or not learning) that our writers experienced. We had not been able to track individual literacy development in our Writing Coaching programs, and the responses to our public workshop evaluations—though always positive—did not adequately assess acquisition of new or developing writing abilities. As a first step in that process, we invited Dr. Eli Goldblatt from Temple University to conduct an External Review of our programs in the spring of 2007. In that review, we asked Dr. Goldblatt to examine whether the CWC appeared to be fulfilling its mission and to make recommendations on what assessment strategies we might pursue to more thoroughly evaluate our programs.

    Assessment Methods

    The user satisfaction surveys have not—at this point—provided sufficiently useful data on our programs to merit inclusion in this narrative. (Two sample surveys are provided in the appendix for informational purposes.) We focus instead on the Work Plans and the External Review, which have provided useful information.

    Work Plans
    Every two years, the CWC director, assistant director, and writing assistants come together in a retreat to establish objectives for the next two years. Before proposing objectives, the CWC staff analyzes the “successes and failures” from the current work plan according to the criteria set forth for each objective. For example, we examine whether we have maintained adequate diversity of demographics in our writer population, or we analyze whether the partnerships we have maintained prioritize underserved populations. During this retreat, we collaboratively prioritize center-wide goals, such as “increase diversity of writers,” “establish connections with K–12 institutions,” and/or “broaden reach of community publications.” While the director and assistant director guide this process, all staff members contribute valuable and insightful ideas to the goal-setting process.

    After the retreat, the director and assistant director group the center-wide goals into program areas (e.g., Writing Coaching, Workshops) and priority areas (e.g., Diversity, Fundraising). These goals are revised into measurable objectives with deadlines and aligned with SLCC goals and objectives, which group into four main areas: “Provide Quality Education,” “Provide Lifelong Learning,” “Serve People of Diverse Cultures, Abilities, and Ages,” and “Serve the Needs of Community and Government Agencies, Business, and Industry.” The draft Work Plan is returned to the CWC staff members for feedback, revision, and approval. The Work Plan is then distributed to the CWC’s Advisory Committee for review. Any suggested changes are brought back to the CWC staff for consideration, revision, and group consensus.  

    Upon completion, the Work Plan is distributed to the SLCC administration for final approval. (Since our opening, no recommendations for change to previous Work Plans have been made by upper administration. Should such recommendations be made, the CWC director would negotiate with administration to balance their requests with staff preferences in order to maintain the stability of the CWC in both financial and staffing terms.)

    After final approval, the Work Plan is used by the CWC director and staff to evaluate progress toward the goals on a monthly, quarterly, and annual basis. The Work Plan is revisited each month in one-on-one meetings between the director and staff members through assessment of individual staff performance. Staff members set “semester work goals” based on the Work Plan at the beginning of each semester. Significant portions of the Work Plan (numbers of new writers, workshops offered, partnerships) are assessed quarterly for the SLCC English department and for the immediate administrative supervisor overseeing the CWC, the dean of the School of Humanities and Social Sciences. The entire Work Plan is assessed annually for SLCC upper administration and the Boards of Trustees and Regents, as well as for public dissemination on the CWC website. (See selections from the 2006–2008 Work Plan in the appendix.)

    External Review
    In the spring of 2006, the English department chair approached the CWC director with a proposal to conduct an external review of the Community Writing Center’s programs. The chair, Stephen Ruffus, had spoken with Dr. Eli Goldblatt of Temple University at the 2006 Conference on College Composition and Communication Convention after a panel that Dr. Goldblatt shared with the CWC director and other community literacy scholars and activists. Over the summer of 2006, we shared emails with Dr. Goldblatt to narrow down our research questions and flesh out the scope of what could be accomplished in a brief external review.

    In February 2007, the CWC director compiled a portfolio of CWC documents to present a broad picture of CWC programming successes and challenges. Dr. Goldblatt received the portfolio approximately two weeks prior to his visit. The portfolio included (among other items):

    • Quantitative Data
      • Previous Work Plans and Their Assessments
      • Lists of Community Partners
      • Demographics of Writers Who Utilize CWC Services
      • Budget Analyses
    • Narrative/Program Data
      • Evolution (including revisions) of CWC Mission Statement
      • Brief History of the CWC
      • Selected Reports from Staff Meetings
      • Selections from Writing Assistant Training Manual
      • Procedures for Workshop Management
    • Assessment Samples
      • Workshop User Satisfaction Surveys
      • Writing Coaching Comment Cards
      • Writing Partner Satisfaction Surveys
      • “Success Stories” (letters, emails, comments from partners and individuals)
    • Artifacts
      • Publications from the DiverseCity Writing Series
      • Sample CWC Newsletters
      • Bibliography of Publications/Presentations from CWC Staff Members

    In early March 2007, Dr. Goldblatt arrived in Salt Lake City for a two-day visit. On the first day, he met with the CWC director, who gave him a brief bird’s-eye view tour of the Salt Lake valley (from the foothills overlooking the city) to establish a sense of place for his review. After talking with the director for the morning, Dr. Goldblatt met with current and previous writing assistants for lunch and discussion of how the CWC impacts students who work there. Following lunch, he met with representatives from a few community partners that the CWC had collaborated with during the past year. In each of these meetings, the director was not present to influence conversation. During dinner, Dr. Goldblatt met with the current DiverseCity Writing Series coordinator (a student writing assistant) and the two previous DWS coordinators. We made this a priority due to Dr. Goldblatt’s expertise in community publishing projects. Next, Dr. Goldblatt was scheduled to attend a DiverseCity Writing Series writing group meeting, but due to miscommunication, did not meet up with the group. Instead, he continued his discussion of community publishing with the DWS coordinator.

    In order to get a sense of the college’s relationship with the Community Writing Center, the next morning Dr. Goldblatt met with the SLCC English department chair and the SLCC student writing center coordinator. Lunch followed, with a meeting including the aforementioned, as well as the academic vice president, associate academic vice president, and CWC director. Then, at Dr. Goldblatt’s request, the CWC director arranged a brief meeting with the SLCC president so he could congratulate her on the center (thus raising the CWC’s standing with her). Next, Dr. Goldblatt met with a few members of the CWC Advisory Committee, followed by dinner with SLCC English department faculty and attendance at a writing workshop in the evening.

    Approximately two months after the visit, Dr. Goldblatt sent a letter of review to SLCC administration with his findings.

    Assessment Results and Follow-Up Activities

    Work Plans
    Since the CWC Work Plans are a continual assessment process, findings from each Work Plan inform CWC programming throughout the year as described earlier. As a way to consider and prioritize programming efforts and resources, the Work Plan is discussed at each staff meeting, at quarterly English department meetings, and in one-on-one meetings with the dean of the School of Humanities and Social Sciences (the new institutional home of the CWC). The findings are also distributed to the CWC advisory committees and SLCC upper administration on a yearly basis.

    External Review
    The External Review produced challenging and useful findings for the CWC and SLCC administrators. The primary findings, which were delineated in a letter to SLCC administrators (the entire letter can be found on the CWC’s assessment website; see address below in the appendix), included successes and suggestions.

    The successes included:

    1. Establishing an environment in which a wide range of writers unaffiliated with SLCC or other nonprofit academic institutions could pursue their own projects and receive criticism, support, and access to publication they would have no other means to obtain.
    2. Training, mentoring, and launching student writing assistants on career paths connected to literacy and community engagement.
    3. Focusing attention and material resources on the literacy needs of low-income, marginalized, or otherwise underserved learners in many of the neighborhoods of the city.
    4. Modeling a democratic educational approach, one that is not motivated primarily by competition, not measured by grades or standardized tests, and not limited to groups segregated by age, race, ethnicity, religion, sexual orientation, or socioeconomic status.
    5. Offering literacy instruction in a uniquely interdisciplinary and multidimensional context that truly integrates writing, reading, listening, and speaking abilities whose purposes are practical, civic, and social.
    6. Indicating the possibilities of collaboration between and among large and small local institutions such as youth services, the public library, refugee aid groups, agencies preventing domestic violence, the University of Utah, and theater companies for the enrichment of literacy-related activities in the civic arena.

    This was followed by suggestions, which can be summarized as:

    1. Enhanced effort for financial development.
    2. Increased regional connectivity and circulation.
    3. Strengthened internal college alliances.
    4. Focused research as a theme.

    The CWC staff, while buoyed by the successes, focused on the suggestions as we made some immediate changes over the summer of 2007. Specifically, follow-up activities based on the External Review have been:

    1. We dismantled the existing Advisory Committee, which was made up of community members and SLCC faculty and staff, and created a new structure more representative of our constituency and better suited to the CWC’s needs. The new advisory structure consists of two advisory committees, the Community Advisory Committee (CAC) and the Academic Advisory Committee (AAC). The CAC focuses on fundraising, community exposure, and volunteer recruitment. The AAC will be responsible for increasing student involvement in CWC programming, research, and scholarship opportunities and, primarily, implementing useful strategies for assessing the impact of our programs on the writing skills/abilities of our community users.
    2. The CWC moved its institutional home from the associate academic vice president’s office to that of the dean of the School of Humanities and Social Sciences. This has put the CWC director in close contact with department and division chairs in that school, which should improve opportunities for student involvement. Also, this move has provided additional support for the CWC director to engage in research and publication work regarding the CWC.
    3. The SLCC president has committed to providing more support for the CWC in our fundraising endeavors. While resources are limited at the college, contributions have been made when possible. For example, a local citywide newspaper donated 12 pages of space to the college to use for promotional purposes. The college’s president has given half of those pages to the CWC for a feature article on our programs and writers.

    In addition, the External Review has informed our Work Plan process as we develop our newest Work Plan, which will span 2008–2010. As we have begun work on it, we are prioritizing the need to develop valid instruments for assessing the learning that we assume takes place through the CWC programs. However, as the NCTE-WPA white paper on writing assessment points out in its principles, sound assessment must treat writing as complex, inherently social, and context driven. In first-year composition programs, this principle challenges assessment practices; in a community writing center context, with multiple persons, purposes, and genres, all outside of an institutional setting, such assessment can appear quixotic at best. However, this will be a priority for the new Academic Advisory Committee as we move into the next two years of the Community Writing Center.

    Assessment Principles

    Because we anticipated a need for external funding, in 1999—prior to the establishment of the CWC—the future CWC director worked with the SLCC Development Office on documents found in grant-writing and fundraising genres. These documents, which spoke to SLCC administrators in their favored discourses, became effective proposals for funding by the administration. The rhetorical strategies used in these proposals were then utilized in the CWC Work Plan assessment process, as the primary audiences remained SLCC administration and other external funding sources.

    The Work Plan documents and the External Review are—and have been—based on the following principles of writing assessment, though perhaps not fully articulated as such at the time:

  • Writing assessment is based on continuous conversations with as many stakeholders as possible. The CWC is founded on principles of collaboration between the college and the community and challenges the assumption “that higher education can know what a community needs or wants without entering into full and mutually-beneficial partnership with that community” (“Founding Principles”). The Work Plan is a collaborative document, with voices from all among the CWC staff, community members, advisory committees, and SLCC administration. The External Review provided Dr. Goldblatt with open access to each stakeholder in the CWC’s spheres of contact: community, staff, and academy.
  • Writing assessment articulates and communicates clearly its values and expectations to all stakeholders. Both the Work Plans and the results of the External Review have been disseminated widely through various means. CWC staff and SLCC faculty/administrators are continually provided with reports on the Work Plan, and each received a copy of the External Review report. Findings from the Work Plan are incorporated into CWC public documents, such as our newsletter and features in local newspapers. Both assessments are also available on the CWC’s website.
  • Writing assessment is site-specific. Each assessment tool the CWC has utilized in its six years of operation, including the Work Plans and the External Review, has been locally developed by CWC staff members. Input is gathered from multiple local stakeholders and is reported within the context of the Salt Lake community. For example, comparisons of income, ethnicity, and education among CWC writers are made against Salt Lake City, Salt Lake County, and SLCC populations.
  • Writing assessment uses multiple measures and engages multiple perspectives to make decisions and provide formative feedback. One of the main reasons the CWC undertook the External Review was to get a new perspective on our programming. The Work Plans demonstrated “success” to many of our audiences, but their limited perspectives necessitated an outsider’s eye on the center.
  • Writing assessment is ongoing, providing the foundation for data-driven, or evidence-based, decision making. The CWC Work Plan is continual, always providing a rubric both for hard data and for critical reflection on our programs. The External Review is not an ongoing assessment practice, but we plan to conduct another external review in four years’ time.
  • Writing assessment places priority on improvement of teaching and learning. The Work Plan assesses programs, not the writing abilities of the CWC’s community users. This type of assessment, along with the aforementioned user satisfaction surveys, points to improving our services rather than measuring individual competence or writing skills. It is terribly difficult to measure the writing skills/abilities of community members in such a wide variety of contexts. As such, CWC assessment practices have focused solely on asking how we can improve our offerings of literacy instruction to the Salt Lake community.

    Assessment Resources

    Work Plan
    The Work Plan requires time to create and assess on an ongoing basis. Initially, there is a significant time commitment (approximately 10 hours) for retreats with staff and meetings with stakeholders. Year-round, it takes time to meet with CWC staff members and to collect information on accomplishments and challenges that lie within the Work Plan. However, those conversations are essential to the collaborative principles of the CWC. In addition, the Work Plan provides credibility to the CWC in the eyes of the SLCC administrators who fund us.

    External Review
    The total cost for the CWC’s external review was just under $1,500. That included the flight and hotel for Dr. Goldblatt, meals for Dr. Goldblatt and the stakeholders he met with, and a stipend. Personnel commitments included meetings with CWC staff members, college faculty, and administrators on the two days of his visit. Planning was the purview of the CWC director, who compiled the evaluation portfolio and organized the schedule of meetings/observations with the various stakeholders and communicated with Dr. Goldblatt prior to his arrival. The English department administrative assistant helped in purchasing the flight ticket and reserving the hotel room.


    The Work Plan design is certainly sustainable; in fact, it is necessary to the CWC’s own sustainability. The External Review is, by nature, not a sustainable assessment process. However, we do plan to conduct another external review similar to this one in our tenth year of operation.


    SLCC Community Writing Center Assessment Website
    This website includes the External Review findings, the full Work Plan assessment, and other assessment materials.



     Sample User Satisfaction Surveys


    Writing Workshop Evaluation

    Please respond to the following so we know how effective our writing workshops are and how to best improve our programs.

    Workshop: Workshop Title

    Workshop Leader: Your Name

    Date: Workshop Date

    Response scale for each statement: Somewhat Agree / Neither Agree Nor Disagree / Somewhat Disagree


    1. I felt encouraged to participate in the workshop discussions and writing activities.

    2. The workshop was interactive and collaborative.

    3. I received helpful feedback on my writing from other participants.

    4. I received helpful feedback on my writing from the workshop facilitator.

    5. This workshop provided me with writing tools and strategies that I will continue to use.

    6. The workshop facilitator was well-prepared and knowledgeable.


    What worked well for you in this workshop?


    What could have been better?


    What types of writing workshops would you be interested in attending in the future?


    How did you hear about this workshop?                                                                     



    CWC Writing Partners Evaluation

    Please respond to the following so we know how effective our writing assistance is and how to best improve our programs.

    Organization: Name of Organization

    Workshop/Partnership: Brief Description of Workshop/Partnership

    Contact: Name of Contact

    Date: Date Range of project

    Response scale for each statement: Not Satisfied / Satisfied / Very Satisfied


    The CWC responded to our request within a reasonable time frame.

    The CWC listened to our needs and goals in developing the workshop/partnership.

    The CWC communicated sufficiently with us about the project planning and implementation.

    The project was useful to our organizational needs.

    Overall, the collaborative nature of the CWC’s approach worked well for us.

    Based on this experience, we would work with the CWC again.

    What worked well for your organization in this collaboration?


    What improvements would you suggest for future collaborations?


    Please either mail to CWC, 511 W. 200 South, #100, SLC, UT 84101; email responses to, or call us at 957-4992 and we’ll come pick it up.




    Selections from SLCC Community Writing Center
    Report on 2006–2008 Work Plan

    Each objective below is listed with the SLCC goals it supports and its assessment for July 2006–June 2007.

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: From June 2006–2007, 250 new community members will have participated in CWC educational programs. From July 2007–June 2008, 350 new community members will have participated in CWC educational programs. ($0)

    Assessment: In 2006–2007, 341 new community members registered as writers at the CWC. We far surpassed our goal of 250 new learners.

    In July and August 2007, 48 new people have registered with the CWC. If this pace stays consistent during the year, we will be 78 short of our goal of 350 new writers.

    Writing Coaching

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: In 2006–2007, the CWC will provide 200 Writing Coaching sessions to community learners; in 2007–2008, the CWC will provide 500 Writing Coaching sessions. ($0)

    Assessment: In 2006–2007, we provided a total of 523 writing sessions, far surpassing our goal of 200 sessions.

    In July and August 2007, we have provided 90 writing sessions. If this pace stays consistent during the year, we should reach 540 sessions in 2007–2008.
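
    The straight-line projection used in these assessments can be reproduced with a short sketch; here, in Python, using the writing-session figures reported above (the variable names are ours):

    ```python
    # Straight-line projection used in the Work Plan assessment: extrapolate
    # the July-August 2007 count across the twelve-month reporting year.
    sessions_jul_aug = 90          # writing sessions provided in July and August 2007
    months_elapsed = 2
    monthly_rate = sessions_jul_aug / months_elapsed
    projected_total = monthly_rate * 12
    print(projected_total)         # 540.0, consistent with "should reach 540 sessions"
    ```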

    Writing Workshops

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: By January 2007, community requests will account for 50% of all writing workshop offerings. ($0)

    Assessment: 63% of workshop offerings from January 2007 through June 2007 were based on community requests.

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: Throughout 2006–2008, at least one low-cost public workshop will be provided to community learners per month; topics will rotate through pragmatic, personal, and social writing contexts. ($0)

    Assessment: Fourteen public writing workshops were offered during 2006–2007:

    • Creative Writing
    • Letters to Public Officials
    • Letters to the Editor
    • Haunted Thoughts
    • Pamphlets, Press Releases, Posters
    • Poetry Basics
    • Beat Grammarphobia
    • Write Your Legislator
    • Résumés and Cover Letters
    • Writing for Screen and Stage
    • Memoir
    • Writing for the Web
    • Beginning Creative Writing
    • Business Writing: Electronic Communications

    *Note: These are workshops offered solely through the CWC site. This list does not include partnership workshops that the CWC has offered.

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: Throughout 2006–2008, all workshops will receive an 85% “very satisfied” rate from community learners based on end-of-workshop surveys. ($0) By December 2007, we will revise the workshop assessment form to provide more useful feedback.

    Assessment: The 2006–2007 workshops received 100% “very satisfied/satisfied” responses. The six evaluative questions received the following “very satisfied” percentages:

    • “The writing strategies in this workshop will be useful to me.”—84%
    • “The instruction was clear and well-paced.”—82%
    • “The workshop leader(s) responded well to my questions.”—83%
    • “The interactive manner of the workshop worked well for me.”—77%
    • “I gained some new knowledge or understanding about writing.”—73%
    • “This workshop has been a valuable experience.”—77%

    Writing Partners

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: From 2006–2008, the CWC will partner with at least 30 different nonprofit organizations/government agencies to provide community learners with educational opportunities. (TBD)

    Assessment: In 2006–2007, the CWC partnered with:

    • Salt Lake Peer Court: Youth Offender Writing Group series
    • KRCL/Spy Hop: Radio Essay ’Zine Writing Workshop
    • GLBTCCU: Breast Dialogues Writing Workshop
    • TreeUtah: City Nature Writing Workshops
    • Romance Writers of America, Utah Chapter: Workshop for Annual Conference
    • Utah Arts Council: Introduction to Grant-Writing Workshop for 15 organizations
    • Youth Providers Association: Workshop on proposal writing
    • HEAL Utah: Workshop for essay contest (no-show)
    • Literacy Action Center: Fundraiser at Barnes and Noble, DWS group
    • Univ. of Utah English Department: Guest Writers Series, community workshop facilitation for Bino Realuyo
    • Salt Lake Film Center/Hood River Productions: Community workshop facilitation for David and Julie Chambers
    • University/Neighborhood Partners: Ad hoc committee for SLCC/UofU/community partnerships
    • SL County Youth Government: Advocacy Writing Group meetings

    SLCC Goals: 2a, 2b, 3a, 3b, 4a, 4b

    Objective: By 2008, the CWC will have partnered with the city/county library systems on at least 12 instances of community education/engagement. ($0)

    Assessment: In 2006–2007, the CWC partnered with:

    • City/County: Writing Coaching sites (3)
    • City Main: Scenes of Salt Lake Writing Contest
    • City Main: Dewey Lecture Series (9)
    • City Main: Teen Services Screenwriting Workshop
    • City Main: Literacy Day Panel Discussion
    • City Main: Literary Luminaries event coordination
    • City Main: Workshop for Staff Development Day, Performance Plans
    • City Main: Arbuthnot Lecture application
    • City Main: The SLC Reads Together, The Big Read programming
    • City Main: Teen Services Fantasy Writing Workshop
    • City Chapman: Dungeons and Dragons writing group


    SLCC Goals: 3a, 3b

    Objective: Reviewed annually, throughout 2006–2008, the diversity of demographics (ethnicity, income, education) will exceed that of the SLCC student body, Salt Lake City, and Salt Lake County populations. ($0)

    Assessment:

    Ethnicity (percentage of ethnic minority representation):
    —CWC: 31%
    —SLCC: 14%
    —SLC: 21%
    —SLCo: 8%

    Income:
    —CWC: 60% of writers have an annual household income of less than $30,000
    —SLCC: N/A
    —SLC: Median household income: $36,944
    —SLCo: Median household income: $49,003

    SLCC Goals: 3a, 3b

    Objective: At least 50% of all Writing Partnerships will be with organizations serving underrepresented populations. ($0)

    Assessment: Not including partnerships with the city/county library systems, the CWC has had 13 partnerships since June 2006. Seven of these partnerships have been with organizations serving underrepresented populations. One partnership, the Intro to Grant-Writing Workshop, served 5 participants from underrepresented groups.



    Other Assessment

    The DWS coordinator surveyed DWS mentors to assess their level of satisfaction with CWC support and to ask about needed changes. Based on this feedback, new support programs have been implemented: visiting the DWS groups regularly, creating mentor blog sites, and providing more writing resources.

    Assessment Narrative - Seattle University

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.



    Institution: Seattle University
    Type of Writing Program:  Writing in the Disciplines
    Contact Information: John C. Bean
     Consulting Professor for Writing and Assessment
     Department of English
     Seattle University
     Seattle, WA 98122
     206-296-5421

    Assessment Background and Research Question

    Our research question is simple: to what extent do seniors in each undergraduate major produce “expert insider prose” in their disciplines? (For the term expert insider prose, see later references to Susan Peck MacDonald.)

    Seattle University has no formalized “W-course” program in either WAC or WID. Rather, we have a Core Curriculum that requires “a substantial amount of writing” in every core course. When students enter their majors, instructors in each field assume responsibility for teaching students how to think and write within the discipline. The assessment movement on our campus has encouraged departmental faculty to think systematically about how students learn to produce disciplinary discourse. Initially driven by accreditation pressure, we soon discovered how the assessment process could lead to improvement of assignments, instructional methods, and curriculum design. We discovered particularly that assessment could help departments achieve better vertical integration of their curricula and lead to higher-quality capstone writing projects from their students.

    Our approach to assessment adapts insights from three theoretical perspectives:

    • The assessment strategy of the “embedded assignment” developed by Barbara Walvoord and Virginia Anderson in Effective Grading (see also Walvoord’s Assessment Clear and Simple). In this approach to assessment, the basic assessment act is the individual instructor’s grading of students’ performance on an assignment already embedded in a course. The instructor develops a rubric to grade the assignment and to report results to departmental faculty. The process for this strategy is described later in this document.
    • Susan Peck MacDonald’s theory of how students’ growth as writers progresses in stages from “pseudo-academic prose” upon entry into college through “generalized academic” writing and “novice approximations” of disciplinary ways of writing to (one hopes) “expert, insider prose” in the senior year. (See Professional Writing in the Humanities and Social Sciences, p 187.) One goal of our writing program is to promote students’ growth toward expert insider prose as revealed in disciplinary capstone projects in their senior year.
    • A rising junior writing assessment modeled after the work of Richard Haswell, Bill Condon, and their colleagues at Washington State University. Because of the way our Core is designed, we have placed this assessment at the beginning of the sophomore year rather than the junior year. It consists of both an impromptu timed essay and a first-year seminar assessment made by each student’s first-year seminar instructor (our adaptation of the portfolio requirement in the WSU model). The goal of this mid-career assessment, which is still under development, is to identify weak writers and provide them with extra instruction and support before they enter their majors.

    Assessment Methods

    Our method for assessing writing in the majors is surprisingly simple. Currently the method has been implemented primarily in finance, chemistry, history, economics, and English. In 2007–2008, through a planning grant from the Teagle Foundation, it is being extended to political science, and more and more departments are interested in trying it or are already doing their own variations.

    Using this method, a department’s first task is to create learning outcomes for the major. Almost always, one of these outcomes asks students to produce some kind of professional paper within the discipline. MacDonald’s stage theory of writing development helps focus departmental discussions: what constitutes expert insider prose for undergraduates within our discipline? The resulting disciplinary descriptions of “expert insider prose” map well onto the taxonomy of genres identified by Michael Carter in his excellent CCC article “Ways of Knowing, Doing, and Writing in the Disciplines.”

    After a department has defined the kinds of expert insider prose it expects from seniors, it initiates an assessment process as follows:

    • The department chooses a senior-level course in which the designated kind of expert insider prose is required (the embedded assignment).
    • The instructor grades the assignment using a rubric. (Sometimes departmental faculty work together to create the rubric.)
    • The instructor analyzes rubric scores to uncover patterns of strengths and weaknesses in student performance.
    • The instructor presents the results at a department meeting, initiating faculty discussion. (In some cases, randomly selected sample papers are graded by the whole department in a departmental norming session.)
    • The department discusses strategies that might be implemented earlier in the curriculum (new kinds of assignments, additional instructional units/modules, redesign of a course) to ameliorate weaknesses. This is the essential “feedback loop” stage of the assessment process.
    • The department tries out the new methods, often deciding to use an embedded assignment from earlier in the curriculum (what MacDonald’s theory would identify as the “novice approximation” stage of student development) for the following year’s assessment project. The purpose of this follow-up activity is to determine the effectiveness of the new assignments or instructional methods.
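    The rubric-analysis step in the process above can be sketched in a few lines of code. This is an illustration only: the trait names, the 1–4 scale, and the 2.5 cutoff are hypothetical stand-ins, not any department’s actual rubric.

```python
# Illustrative only: tally rubric scores (hypothetical 1-4 scale) per trait
# to surface patterns of strength and weakness across a set of papers.
from statistics import mean

# Hypothetical scores: one dict per student paper, keyed by rubric trait.
papers = [
    {"thesis": 4, "use_of_sources": 2, "disciplinary_genre": 2, "mechanics": 3},
    {"thesis": 3, "use_of_sources": 2, "disciplinary_genre": 3, "mechanics": 4},
    {"thesis": 4, "use_of_sources": 3, "disciplinary_genre": 2, "mechanics": 3},
]

def trait_averages(papers):
    """Average score per rubric trait across all papers."""
    traits = papers[0].keys()
    return {t: mean(p[t] for p in papers) for t in traits}

averages = trait_averages(papers)
# Traits whose average falls below a chosen cutoff flag curriculum-wide weaknesses.
weaknesses = [t for t, avg in averages.items() if avg < 2.5]
print(averages)
print("Weak areas:", weaknesses)
```

    The point of the sketch is the feedback loop, not the arithmetic: the instructor brings the per-trait pattern, rather than individual grades, to the department meeting.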

    The power of MacDonald’s stage theory is that it helps departmental faculty appreciate the importance of early courses in their major for teaching disciplinary discourse. To improve disciplinary writing in the senior year, faculty need to teach disciplinary methods of inquiry, research, and argument in their sophomore- and junior-level courses through better assignments and instruction. Moreover, this approach has led many departments to coordinate with research librarians to develop structured assignments for teaching discipline-specific information literacy.

    One should note that the assessment process just described places almost no emphasis on high-stakes testing or on accountability. Departments are not trying to weed out weak writers or to provide administrators with statistical evidence that the department’s graduates are meeting certain standards. Rather, the goal is to discover weaknesses in senior-level papers and to make changes in curriculum and instruction to address them. The process and the data are entirely owned by the department.

    Assessment Principles 

    We believe that our program for assessing writing in the majors follows the best principles of assessment identified by the National Council of Teachers of English (NCTE) and the Council of Writing Program Administrators (WPA). It is low stakes, directly tied to improvement of teaching practice, locally designed and implemented, inherently social, authentic, performance-based, and aimed at aligning testing and curriculum. What we like about this approach is that it has no direct consequences for individual students; rather, it focuses faculty attention on characteristic patterns of weakness in student performance and creates discussion of how to ameliorate them. These discussions often focus on widely encountered problems (e.g., students not understanding the demands of a disciplinary genre) as well as on particular problems associated with second language speakers or persons with disabilities. Often extra support is provided for weaker writers through the university’s peer-tutoring writing center in the Learning Center.

    Assessment Results

    As can be expected from our decentralized approach, each department has its own assessment story. Initial departmental discussions often reveal faculty disagreement about what constitutes “expert insider prose” for undergraduates. Professors often realize that they haven’t been explicit about “insider” features and that their assignments sometimes evoke what MacDonald would call “pseudo-academic” writing rather than disciplinary arguments. The resulting discussions have typically led to clarification of expectations for seniors and to the “backward design” of the curriculum whereby departments have made changes earlier in the curriculum to teach the processes of inquiry, thinking, and research needed for capstone papers. Here are some examples:

    • The history department, unhappy with senior papers that were often narratives without theoretical sophistication, changed its sophomore-level gateway course to introduce historical theory and to develop new kinds of assignments. Particularly, faculty wanted students to apply different theoretical perspectives to historical problems and to create thesis-governed historical arguments using primary sources and archival data.
    • The chemistry department decided that the typical lab report was “pseudo-academic prose” that didn’t teach students to construct themselves as scientists. Two professors redesigned the labs for sophomore organic chemistry in order to teach the empirical research report in the manner of professional chemists. They discovered that the introduction to a scientific paper requires the highest level of contextualized critical thinking since the problem being investigated is always connected in complex ways to unknowns identified in a review of the literature. Discussions of these issues within the whole department have been one of the rewards of this approach to assessment.
    • The English department has redesigned its curriculum so that every 400-level literature course requires a researched literary argument informed by theory and aimed at presentation at an undergraduate research conference. Using the principles of backward design, the department has created an integrated sequence of writing assignments that increase in complexity from 200-level survey courses (where the focus is on close reading and formalist analysis) through 300-level courses (where one of the required courses introduces postmodern theory and explicitly teaches students how to position their own views in a conversation of critics) to the final capstone papers in 400-level courses.
    • The economics department discovered that economics majors, unlike professional economists, did not instinctively draw graphs on the backs of envelopes in the first 30 seconds of economic discussions. The department’s assessment focus has been on “rhetorical mathematics”—increasing students’ ability to interpret graphs and to construct graphs that tell a significant economic story. This approach has led to new kinds of numbers-based writing assignments throughout the curriculum.
    • The finance department has defined its capstone projects as short persuasive memos, addressed to specified audiences, arguing for a “best solution” to an ill-structured (open-ended) finance problem. Because finance professionals must frequently address lay audiences as well as finance experts, the department is especially interested in students’ ability to shift audiences, constructing some arguments in an expert-to-lay context (with appropriate use of language and graphics) and some in expert-to-expert context. Faculty are beginning to add new kinds of writing assignments to the curriculum.

    Assessment Follow-Up Activities

    As explained earlier, we have used our assessment data primarily to drive a robust feedback loop process so that assessment data lead to improvements in curricula and instruction. Our approach has led to a planning grant from the Teagle Foundation (jointly with Gonzaga University) in which we are attempting to use embedded reflection assignments to assess the impact of our Catholic/Jesuit mission on students’ commitment to social and environmental justice in a broad multicultural context.  

    In terms of accreditation, we have yet to test this approach to assessment in a full-blown accreditation review. We are confident, however, that our approach—despite its lack of psychometric benchmark data—will meet with approval.

    Assessment Resources

    Our basic approach to assessment of writing in the majors requires minimal resources or faculty time. We ask departments to spend one department meeting per year discussing the results of an embedded assignment project. Because the project itself uses an assignment already embedded in an instructor’s class, the instructor’s “extra time” consists of creating a rubric (although many instructors already use well-designed rubrics), analyzing the rubric data for patterns of strengths and weaknesses, and preparing a short report for the department. What often requires extra time and resources is the feedback loop if the department wants to make significant changes in curricula or instruction. But this kind of work is already embedded in the everyday lives of professors with strong commitment to students and to teaching. In the early days of our assessment initiatives, some departments received in-house grants to fund departmental projects—mostly used to provide food for meetings or stipends for a short summer workshop. But in general, this process can proceed without additional funding. (In contrast to our methods for assessing writing in the majors, our mid-career writing assessment, mentioned earlier, has required considerable university resources for administering the impromptu essay and for paying readers for attending norming sessions and doing the scoring.)


    The embedded assignment approach seems easy to adapt to any setting, as has been shown by Walvoord and her colleagues in their influential publications. In fact, what hinders the embedded assignment approach, ironically enough, is faculty belief that authentic assessment needs to involve more work.


    Bean, John C., David Carrithers, and Theresa Earenfight. “How University Outcomes Assessment Has Revitalized Writing-Across-the-Curriculum at Seattle University.” WAC Journal: Writing Across the Curriculum 16 (2005): 5–21.
    Bean, John C., and Nalini Iyer. “‘I Couldn’t Find an Article That Answered My Question’: Teaching the Construction of Meaning in Undergraduate Literary Research.” Teaching Literary Research. Ed. Steven R. Harris and Kathy Johnson. New York: American Library Association [forthcoming].
    Carrithers, David, and John C. Bean. “Using a Client Memo to Assess Critical Thinking of Finance Majors.” Business Communication Quarterly [in press].
    Carrithers, David, Teresa Ling, and John C. Bean. “Messy Problems and Lay Audiences: Teaching Critical Thinking within the Finance Curriculum.” Business Communication Quarterly [forthcoming].
    Carter, Michael. “Ways of Knowing, Doing, and Writing in the Disciplines.” College Composition and Communication 58.3 (2007): 385–418.
    MacDonald, Susan Peck. Professional Writing in the Humanities and Social Sciences. Carbondale: Southern Illinois UP, 1994.
    Walvoord, Barbara. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. San Francisco: Jossey-Bass, 2004.
    Walvoord, Barbara, and Virginia Anderson. Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass, 1998.


    Assessment Narrative - Tidewater Community College

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.


    Assessment Narrative - Tidewater Community College, Virginia Beach

    Institution: Tidewater Community College, Virginia Beach Campus
    Type of Writing Program: FIPSE Writing Coalition of secondary and postsecondary institutions
    Contact Information: Chris Jennings Dixon, Professor Emeritus,
    Past Project Director, FIPSE Writing Coalition
    14391 Tamarac Drive
    Bokeelia, FL 33922

    Background and Assessment Questions

    This project extended over a period of seven years with collaboration between secondary and postsecondary English faculties. It began when Alma Hall, a Salem High School (SHS) English department chairperson, contacted Tidewater Community College (TCC) to open discussion about the college’s method of placing students in dual enrollment classes and college remedial composition courses. That inquiry became the jumping-off point for exploration of writing assessment initiatives with support from TCC and Virginia Beach City Public Schools (VBCPS) and funding from the U.S. Department of Education’s Fund for the Improvement of Postsecondary Education (FIPSE) for two comprehensive projects (1998–2001; 2001–2005) to explore solutions and disseminate results.

    Teachers have been the crux of this project, which began with a simple question: “What are the differences between the expectations of college and high school instructors?” Teachers have come together to investigate the problem and explore remedies. Teachers have empowered their students and themselves through reflective practices. And teachers have not only found answers but also developed innovative strategies to improve student readiness for success in college composition.

    Assessment Methods

    Teacher Designed Workshops
    To promote meaningful conversations, teams from SHS and TCC planned and facilitated professional development workshop activities each semester in response to topics initiated in roundtable discussions. Six instructional needs were identified: (1) engage students’ interest in writing, (2) clearly articulate college writing requirements, (3) emphasize instruction on editing and proofreading, (4) clarify requirements of the state assessment tool, i.e., the Virginia Standards of Learning, (5) revise syllabi to include collaborative writing strategies, and (6) develop ongoing teacher self-assessment. Sessions were usually scheduled at noninstructional sites where participants discarded institutional titles, convened informally in roundtable settings, brainstormed and reflected on teaching practices, and continued lively repartees over box lunches. Collaboratively, they identified what they valued in writing and what they expected of their students.

    The next step was to bring in a consultant, Kathleen Blake Yancey, to help with assessment strategies. Beginning in 1999 as a writing consultant for VBCPS, Yancey led sessions to train 600 teachers in portfolio methodology over a six-year period. Subsequently, a cadre of participants evolved for peer training in all schools, and every VBCPS English curriculum guide now starts with a unit on the use of portfolios.

    Yancey turned the normal negative tone of “grading” or “marking” student compositions into a positive one focusing on what a student could do well, to promote more of that skill set. Reading portfolios as a whole text, teachers looked for evidence of reflection and control of language instead of comma splices and split infinitives. Teaching strategies were developed and refined following each of the all-day, project-sponsored workshops, usually two per semester. Initially suspicious of the portfolio method, secondary and postsecondary teachers, after exposure to the process through workshops and roundtable discussions, set aside their reservations and experimented with collection, reflection, and presentation concepts in their classrooms. Through faculty participation in workshops and portfolio grading sessions, high school and college instructors became comfortable with this teaching culture. Both adjunct and full-time college instructors implemented and honed portfolio strategies in their classrooms as they discovered that their students were taking greater ownership of the writing process.

    Following TCC’s initial experimentation with the use of SHS seniors’ portfolios as an alternative placement method, the program was made available to 4 project schools and subsequently to all 13 VBCPS high schools. To support this methodology, over 30 high school and college teachers were trained each year in development and use of rubrics, anchors, and scoring guides to evaluate senior-year portfolios and use the assessments for college placement in developmental and college-transfer writing courses.

    Portfolio readings demonstrated an increased understanding among educators of what student skills are necessary for college work. At the TCC site, high school and college instructors who participated in readings of over 300 portfolios each year repeatedly demonstrated over 92 percent inter-reader reliability rates.
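    The inter-reader reliability rate reported above can be illustrated with a simple percent-agreement calculation. The narrative does not specify how the 92 percent figure was computed, so this sketch assumes exact-match agreement on placement decisions; the reader scores below are invented.

```python
# Illustrative only: simple percent agreement between two portfolio readers.
# Exact-match percent agreement is one common way such rates are computed.

def percent_agreement(reader_a, reader_b):
    """Share of portfolios receiving the same placement from both readers."""
    if len(reader_a) != len(reader_b):
        raise ValueError("Readers must score the same portfolios")
    matches = sum(a == b for a, b in zip(reader_a, reader_b))
    return 100 * matches / len(reader_a)

# Hypothetical placements: "dev" = developmental, "ft" = first-year composition.
reader_a = ["ft", "ft", "dev", "ft", "dev", "ft", "ft", "ft", "dev", "ft"]
reader_b = ["ft", "ft", "dev", "ft", "ft", "ft", "ft", "ft", "dev", "ft"]
print(percent_agreement(reader_a, reader_b))  # 90.0
```

    In practice, disagreements (like the fifth portfolio here) would typically go to a third reader during a norming session.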

    Assessment Results

    As a result of this extensive collaboration, each institutional partnership has developed lines of communication and contacts between postsecondary and secondary faculties to improve student preparation for college writing. Both teachers and students have benefited from the collaborative activities. Many of the college and university sites have expanded their programs to additional secondary sites and are actively developing institutional measures to support collaboration between their faculties.

    Identifying a large population of students from the two FIPSE projects (1998–2005), TCC’s Institutional Effectiveness Office gathered and interpreted data on student placement, success, and retention. Project students placed into college-level work more frequently through portfolios than through traditional placement methods: in spring 2001, project students placed into first-year composition with COMPASS at a rate of 54.4 percent, while a control group placed at a rate of 36.96 percent with COMPASS. More important, those same project students placed into first-year composition at a rate of 75.2 percent using portfolio assessments. Even with this increased access to college-transfer work through the portfolio methodology, critics still questioned those students’ preparation for the rigors of college work. Following the success rates (A, B, or C in course work) of project students each year, the TCC assessment office found that project students consistently matched the performance levels of traditionally placed students. From 2001 through 2005, final placement levels into first-year composition for project students increased each year. In the last year of the FIPSE Writing Coalition, 70 percent of project students received a first-year composition placement using their senior-year portfolios. Moreover, the overall retention rate for project high school students from three identified high schools who entered TCC each fall from 1999 through 2002 was 63 percent, versus 48 percent for nonproject students. Additionally, the retention rate for project students in spring 2005 grew to 88 percent, compared to the 68 percent retention rate for all TCC students in spring 2004.

    Further qualitative reflection on the success of this project as measured by the portfolio component is offered by Michele Marits, TCC instructor and project team member:

    I emphasize “accomplishment” because these portfolios represented the unique collaboration between area high schools and TCC; they represented all we had learned from the workshops, such as those offered by Kathleen Yancey and by The Bard Institute; they represented all the collegial discussions at the roundtables and seminars; and they represented all the years of ponderings about “what we value in a piece of writing,” which culminated in the assessment rubric and the Placement Portfolio Scoring Guide. But, most of all, they represented students’ accomplishments—students’ essays, rough and final drafts, their letters to us, the readers, and their reflections on their bodies of work. We heard their “voices,” their hopes and aspirations for the future, and we all became better teachers in the process.

    Partnering institutions found similar results with students and teachers. Some of the institutions were able to identify positive trends in student achievement via overall state-mandated writing assessments. Using pre- and postwriting samples to garner data during the secondary school year, Greenville Technical College (GTC) project students demonstrated a 15 percent improvement from pre- to post-tests of college writing. Enlisting help from their offices of institutional assessment, the postsecondary institutions attempted to track the progress of their project students from high school to college, although most of the sites found these data difficult to identify due to small numbers or lack of follow-up information. Fear of identity theft prompted many students and teachers to refuse requests for the Social Security numbers essential to acquiring such data. Additionally, many of the two-year institutions were unable to monitor performance of project students due to the transitory nature of their student bodies.

    Follow-up information from the Florida Community College at Jacksonville site found 38 Wolfson project graduates at the college campus in fall 2004 placing into college transfer composition courses. Of those students, 90 percent completed the course successfully and 95 percent reenrolled for the next semester. In spring 2005, 28 Wolfson project graduates enrolled in college composition for the first time and 83 percent completed the course successfully. Totals for the year show that 90 percent of the Wolfson project students completed college composition successfully. Further encouraging data were found by the assessment office at Southwestern Michigan College (SMC) in its review of data for Ross Beatty project graduates: “Since the FIPSE program has been in place, 100 percent of students taking English 103 (college transfer) have passed with a ‘C’ or above, as opposed to the 78 and 73 percent in the two years preceding the grant.”

    Assessment Principles

    The basic principles informing our project included our belief that assessment should be consistent with what we know about language and literacy, that it should improve teaching and learning, and that it should be accessible to all stakeholders. We also endeavored to make our assessment meet professional guidelines while also meeting local needs.

    As high school students enrolled in college and found their writing skills deemed deficient by college placement tests, high school teachers asked, “What is it you want my students to be able to do?” High school and college teachers felt disconnected from each other’s institutions and wondered if they would have administrative support to try new approaches to writing instruction. Although surveys and research confirmed the need to open dialogue, teachers were initially suspicious of yet another mandate from afar, especially in light of ever-increasing accountability requirements brought on by high-stakes testing. They raised the question, “How can you be innovative in a structured environment?”

    Experimenting with Writing Assessment

    Emerging as a proverbial “guiding force” for an examination of writing practices and assessment, Kathleen Blake Yancey became the project’s informal writing advisor and head cheerleader. Her work on portfolios lent further justification to another project goal—to demonstrate the effectiveness of portfolio instruction, evaluation, and placement. From October 1998, when Yancey led a FIPSE-TCC-sponsored session entitled “Engaging Student Interest in Writing and Development of Writing Portfolios,” portfolios permeated secondary and postsecondary composition classrooms.

    Using Assessment to Identify Good Writing

    Not only did portfolios provide an important link between institutions, but the approach also promoted innovations in assessment. The routine testing practice at TCC, as at many colleges across the nation, requires that all entering students be placed in writing, reading, and mathematics courses by COMPASS, a multiple-choice, commercially developed, computerized assessment tool. The writing section is essentially an editing test of a few selected pieces. If a student’s score falls into a borderline “gray” placement area, he or she may be required to write to a prompt for 20 minutes. To ensure student readiness for such timed writing samples, writing-on-demand strategies were identified and refined for classroom use, giving students opportunities to practice writing to a prompt in a limited period. However, the use of a single indicator and/or a timed writing sample to demonstrate a student’s readiness for college work was and remains a concern of students and of teachers who use the writing process in their classrooms.

    Assessment Resources and Sustainability

    Propagating Portfolios

    Since the initial sessions, participants and consultant Yancey have engaged in workshops exploring print and digital portfolios at multiple national project sites. Many dissemination sites found the portfolio to be a fundamental element of collaboration and a vehicle for alignment of writing.

    At the conclusion of the FIPSE grant, TCC supported the portfolio project for area VBCPS senior English students for one year; however, the cost and labor of administering the program hindered its continuation. Administrators tend to view the activity as labor-intensive, unwieldy, and yet another item to add to their already overextended budgets. Despite the validation for authentic assessment provided by the portfolio placement methodology and its demonstrated success for students and teachers, institutionalizing this approach requires identifying additional sources of funding, reenergizing secondary and postsecondary staff, and renewing administrative direction. Fortunately, during the grant period, funding enabled dedicated project personnel to receive monetary compensation for their efforts, helping them find the additional reserves of energy and time needed to develop innovative approaches to writing instruction.

    Problems and Opportunities

    As with any innovation, unexpected hurdles were encountered and challenges were met through adjustments and alternative strategies. Personnel changes, increasing personal responsibilities of teachers, and faculty attrition were all part of the growing pains of this project. A lack of continuity in administrative and instructional partnerships at all sites created a constantly changing canvas of educators, necessitating repeated orientations, updating, and retraining. GTC site leader Allen describes the problem of maintaining momentum despite teacher turnover: “Surprising and challenging.” Likewise, SMC site leader Lemrow comments on the repercussions of reassigned principals: “A good deal of time will have to be spent just to arrive at where we were.” Locally, targeted high schools in VBCPS rotated staffs and altered teams. However, when one high school team “disappeared,” other teams were forged.

    However, this project demonstrates that the real solution to the problem of student writing success is not a strategy or a skill set, or even an assessment tool. Working through two FIPSE projects over a seven-year period, teachers demonstrated amazing resiliency, overcoming the public’s finger-pointing when headlines claim “Johnny Cannot Write” or “Senior Year Is Largely a Waste” and dealing with unspoken state mandates that seem to promote teaching to the test. Through partnerships stretching from Virginia, North Carolina, South Carolina, Georgia, and Florida to Michigan, Arizona, and California, secondary and postsecondary teachers have demonstrated a common belief in student success and diligently sought new routes for student preparation for college writing.

    When teachers are given the tools and support they need to instruct, students succeed. Those who produce the tests or pen the news articles need to listen to high school and college teachers, as teachers have listened to and responded to each other. Despite time constraints and multiple social and education issues inherent in teaching in public secondary schools, teachers in this project adopted a focused approach to writing instruction and altered their roles from dispensers of information to coaches of composition. While institutions seem more than willing to find funding for outside consultants, testing firms, and electronic software programs, they rarely turn inward to mine the treasures within. Opportunities for reflection and dialogue need to be built into the fiber of educational research and measurement of student success.

    For more information on this project, see Lesson Plans for Teaching Writing edited by Chris Jennings Dixon (NCTE, 2007).


    Assessment Narrative - University of Kentucky

    This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.


    Assessment Narrative


    Institution: University of Kentucky
    Type of Writing Program: First-Year Composition (required)
    Contact Information:
    Dr. Connie Kendall (former WPA)

    (513) 556-1427

    Deborah Kirkman (Assistant Director)
    (859) 257-1115

    Dr. Darci Thoune (Associate Director)
    (859) 257-6995

    Assessment Background and Research Question

    In the fall of 2004, the University of Kentucky’s University Writing Requirement was revised significantly from a two-course first-year composition sequence (ENG 101 and ENG 102) to a single four-credit-hour first-year writing course (ENG 104) linked to an upper-level graduation writing requirement. Since the new course, ENG 104, entailed dramatic changes relative to its “fit” within a newly conceived, two-tier structure and an explicitly inquiry-based curriculum, a comprehensive review of the course was undertaken in fall 2006. The timing was fortuitous: the English department was embarking on a self-study in preparation for external review; the University of Kentucky (UK) president had focused the attention of the campus on assessment; and interest in undergraduate writing instruction was high. The WPAs at UK took advantage of this confluence of circumstances to create a new and much more comprehensive assessment than had previously been undertaken. Similarly, heightened interest in assessment at UK allowed us to secure funding more easily.

    The most important question guiding the assessment was: to what extent are pedagogical practices in ENG 104 encouraging and enabling students to achieve the expected learning outcomes for the course? These outcomes reflected the new emphasis on critical inquiry and experientially based research and writing, a shift from the course’s former and narrower focus on argument and exposition. Course outcomes focused on students’ developing abilities requisite to framing and writing projects of a substantial intellectual character, including comprehending, interpreting, and responding to written texts; developing complex questions and problems of public concern for research; and finding and incorporating pertinent academic scholarship and other sources, including personal experience, in their writing. From this broad goal, the following outcomes were defined. Students will:

    • Develop perspectives that take into account various forms of evidence and points of view
    • Engage in a range of writing activities to explore and express their experiences and perspectives
    • Research subject matter thoroughly and put readings into service of a stance or argument
    • Formulate a writing project coherently and organize it effectively
    • Collaborate with class members to investigate, share findings, and advance multiple viewpoints
    • Observe conventions of Standard Written English in paragraphs and sentences
    • Edit, proofread, and revise effectively
    • Develop a fluent prose style appropriate to the purposes for writing

    Because UK’s writing program is very large (serving over 4,000 students annually and employing a cadre of roughly 100 writing instructors [adjuncts, TAs, and lecturers]), we also wanted to use this assessment to learn how consistently the effective writing strategies and critical thinking skills included in the outcomes were being incorporated into the course design and pedagogical practices of instructors. Additionally, of course, we wanted to learn about the level at which first-year students were employing these strategies and skills in their writing following their experiences in the course.

    Assessment Methods

    To assess these outcomes, we outlined three focus areas for programmatic review. Assessment instruments were designed to gather information from different perspectives on these areas.

    Focus area I (course design and pedagogy) examined the extent to which instructor assignments fostered the course goals of developing students’ critical thinking capacities and effective writing skills; it employed a 3-point rubric to gauge the explicitness or embeddedness of the learning outcomes in a 10-page research-based essay assigned across all sections of the first-year writing courses.

    Focus area II surveyed student and instructor perceptions of the scope and quality of writing instruction in meeting course objectives, as well as the extent to which instruction fostered the development of cognitive skills and affective dispositions relative to critical thinking capacities.

    Focus area III aimed at creating a scoring rubric that would help us determine the extent to which first-year student writing demonstrated effective writing strategies and critical thinking skills through the direct assessment of the 10-page research-based essay.

    The process for determining criteria for this assessment was a crucial part of the project and reflects UK’s commitment to locally designed and developed writing assessment. Toward this end, the 10-member assessment committee engaged in a yearlong series of conversations about what we valued in student writing, what was essential for a good assignment, and which of the various approaches for the direct assessment of student writing seemed most applicable to our situation.

    We began by constructing the two surveys (focus area II). We had a student survey already in place and thus felt this was a good place to start, tweaking the language and revising the questions to more specifically address the new curricular goals. For example, the old survey had students responding to the statement, “I am a better reader after having taken this course,” whereas the revised survey had students responding to the statement, “I improved as a critical reader after having taken this course.” We changed the language in the statement to, in part, determine whether instructors were using terms such as critical thinking and critical reading in their classrooms. However, we also included a variety of new statements in the survey, such as “Reading responses, journals, and in-class, and/or collaborative writing activities helped me to explore and develop ideas” and “I feel confident using a wide variety of methods for obtaining research (online databases, fieldwork, surveys, interviews, the library, etc.).”

    We then created an entirely new instructor survey (no such survey existed previously) that addressed these same questions, but with adjusted focus, to elicit responses from a teacherly perspective. We used a 5-point Likert scale for both surveys. Additionally, each survey included space for narrative responses. Given that our assessment committee largely consisted of teaching assistants pursuing English literature degrees (UK does not offer graduate study in composition/rhetoric) with little formal training in composition theory/pedagogy (only one required course completed at the start of their programs) but with rich and various teaching experience in the composition classroom, our decision to begin with the creation of the two surveys helped us build community and lay the groundwork for the more difficult work that lay ahead—devising workable rubrics to assess instructor assignments and actual student writing. We distributed the surveys to 1,620 first-year students and 50 writing instructors at the end of fall semester. With the help of UK’s Office of Assessment, we were able to report preliminary results to our writing instructors at the all-staff meeting held in January 2007. We repeated only the student survey at the end of the spring semester, reaching another 1,260 first-year students, to bring our total student surveys collected to 2,880.

    To develop criteria for the direct assessment of student writing (focus area III), our committee reconvened at the start of the spring semester and engaged in a series of structured conversations about our program’s “rhetorical values” for first-year writing. By far, these conversations proved to be our most contentious and ultimately the most productive for the creation of our scoring rubric—an analytical (as opposed to holistic) rubric that took into account dimensions of critical thinking skills and effective writing strategies by designating five specific traits that could be scored according to varying levels of student mastery. Disenchanted with the rubrics that were available to us from outside sources even with modification, the committee sought another approach to the formation of a rubric that could be more responsive to local needs and dynamics. We ultimately took our lead from Bob Broad’s What We Really Value and his notion of “dynamic criteria mapping” as the process by which we would identify the values that matter most to our UK first-year writing community, and thus would help us define the criteria for the scoring rubric.

    The three focus areas in this assessment were not understood as hierarchical in nature. That is, we viewed each component as necessarily in conversation with the others, and we sought to design assessment tools that would help us triangulate our data. It was important to us to demonstrate to the audiences who would receive our final reports that understanding the status of first-year writing at UK meant more than directly assessing writing so as to draw quick conclusions about how well (or how poorly) that writing met university-approved standards. Instead, we were interested in showing the many nuances that evaluating student writing entails, in including student and instructor perceptions of the course itself, and in more clearly articulating what we meant by the terms identified in the learning outcomes.

    Developed through an extensive process of structured discussion and revision, the scoring rubric settled on five primary traits of effective writing:

    • Ethos (engaging with issues, demonstrating an awareness of intended audience, using distinctive voice and tone)
    • Structure (arrangement that complements the writer’s purpose, logical structure, awareness of rhetorical moves)
    • Analysis (taking a stance, considering multiple perspectives, avoiding easy conclusions)
    • Evidence (selecting and incorporating appropriate sources, presenting evidence in a balanced way, using sources to advance ideas)
    • Conventions (use of appropriate citation methods and conventions of Standard Written English, and attention to sentence-level concerns appropriate to the project’s purpose)

    Each of the five traits was scored on a 4-point scale describing the level of mastery: scant development, minimal development, moderate development, or substantial development.

    The director of UK’s Office of Assessment assisted the writing program directors in generating a credible sample size of first-year writing by using a simple random sampling method across all sections of ENG 102 and ENG 104. Before grading the 10-page research-based essay, individual instructors were directed to make clean copies of the randomly selected student essays and deliver these to the writing program office. Office staff then removed all identifying information relative to students, instructors, and sections, and then made copies available for the direct assessment. In total, we collected approximately 250 student essays. In preparation for the scoring sessions, the Assessment Coordinating Committee (the three directors and the writing program intern) read approximately 50 essays and, from their reading, decided on six anchor essays to facilitate “norming” (or what we called “articulation”) conversations prior to the scoring of “live” essays. Another 15 essays were used periodically to help the group recalibrate during the live scoring sessions. None of the essays that were used to help us articulate the criteria and standards was included in the final results. Three of these essays were used to check inter-rater reliability during the scoring session in a “blind” fashion (i.e., the raters were unaware that inter-rater reliability was being checked).
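The sampling and blind reliability check described above can be sketched in a few lines of code. The essay IDs, pool size, and rater scores below are hypothetical, invented purely to illustrate the procedure (a simple random sample followed by an agreement check on the 4-point scale), not the program's actual tooling:

```python
import random

# Hypothetical pool of de-identified essay IDs across all sections
# (the pool size here is illustrative, not the actual UK count).
all_essays = [f"essay_{i:04d}" for i in range(1, 1001)]

random.seed(42)
# Simple random sample of roughly 250 essays, as in the review.
sample = random.sample(all_essays, 250)

# Blind inter-rater check: two raters independently score the same
# essays on the 4-point scale (1 = scant ... 4 = substantial development).
rater_a = [3, 2, 4, 3, 2]  # hypothetical scores
rater_b = [3, 2, 3, 3, 2]  # hypothetical scores

# Exact agreement: both raters gave the identical score.
exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
# Adjacent agreement: scores differ by no more than one point.
adjacent_agreement = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / len(rater_a)

print(len(sample))        # 250
print(exact_agreement)    # 0.8
print(adjacent_agreement) # 1.0
```

In practice a program might also compute a chance-corrected statistic such as Cohen's kappa, but even these two simple agreement rates give scoring-session leaders a quick signal of when raters need to recalibrate against the anchor essays.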

    We designed a second rubric to assess instructor assignments (focus area I) last, after the direct assessment of student writing. Following the design of the essay scoring rubric, the instructor assignment rubric used the same five criteria. However, instead of using a Likert scale, we adopted, with the approval of the Office of Assessment, a 3-point scale indicating whether each criterion was explicit, implicit, or absent in the assignment.
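Coding passes like this one are straightforward to tally once each assignment has been labeled. The assignment codings below are invented for illustration; only the five criteria and the explicit/implicit/absent scale come from the rubric described above:

```python
from collections import Counter

# The five criteria shared by both rubrics.
CRITERIA = ["ethos", "structure", "analysis", "evidence", "conventions"]

# Hypothetical codings of three instructor assignments against the
# 3-point scale (explicit / implicit / absent); data are illustrative.
codings = [
    {"ethos": "implicit", "structure": "explicit", "analysis": "explicit",
     "evidence": "explicit", "conventions": "absent"},
    {"ethos": "absent", "structure": "explicit", "analysis": "implicit",
     "evidence": "explicit", "conventions": "implicit"},
    {"ethos": "explicit", "structure": "implicit", "analysis": "explicit",
     "evidence": "implicit", "conventions": "explicit"},
]

# Tally, per criterion, how many assignments made it explicit,
# implicit, or absent.
tallies = {c: Counter(coding[c] for coding in codings) for c in CRITERIA}

for criterion, tally in tallies.items():
    print(criterion, dict(tally))
```

A tally of this kind shows at a glance which learning outcomes instructors tend to state explicitly in their assignments and which are merely implied or missing, which is exactly the question focus area I set out to answer.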

    Assessment Principles

    During the programmatic review of first-year writing at UK, the following assessment
    principles were generally viewed as the most important for our needs and purposes:

    1. The assessment process should build community among our large cadre of writing teachers—a group consisting mainly of teaching assistants and contingent faculty members, who often feel excluded from the life and governance of the English department community at large.
    2. Assessment should be both descriptive, helping instructors to understand what is going on in the courses, and informative, providing data to simultaneously help us improve our instruction and help our students achieve the course goals.
    3. Assessing student writing must always take into account the rhetorical values that writing teachers bring to the process, and must be locally designed and implemented. In addition, reports about the status of first-year students’ writing should not be limited to data collected through direct assessments, but should also include a range of data (e.g., student and instructor perceptions, the efficacy of writing assignments and course design) that can enable a WPA to bring much-needed context to the reporting of scores.
    4. Assessments should be low stakes for instructors and students. We attempted to minimize the perception of risk to both students and teachers in several ways: Student and instructor surveys were universally distributed but were nevertheless defined as voluntary. Because the surveys were anonymous, respondents could in fact opt out without repercussion. The surveys were also brief, requiring only 5 to 10 minutes to complete, and were given at the end of the semester and aligned with the usual university-approved protocol for course/instructor evaluations.

    All identifying information was removed from all documents collected (e.g., surveys, writing assignments, essays), and all reported data were aggregated. Each component of the assessment was programmatic in nature and not tied to either course grades or instructor evaluation.

    The assessment process, as it was designed and as it unfolded, was made transparent to all participants. Communicating early and often with students and teachers about the goals, uses, and meaning of the assessment not only helped to alleviate potential concerns but also helped us reinforce the idea of assessment as an ongoing and organic process at UK, responsive to local needs and altogether necessary for the health and maintenance of the first-year writing program.

    Assessment Results and Follow-Up Activities

    With the assistance of UK’s Office of Assessment, we are in the process of a full analysis of the data collected. Initial findings from the direct assessment (focus area III) suggest that students completing first-year writing at UK are well versed in how to use Standard Written English and general documentation conventions. However, students appear to be less able to engage in sophisticated analysis, to establish a strong sense of ethos, to use supporting evidence effectively, and to evince an awareness of multiple perspectives on a given topic, all elements associated with good writing in our UK context. These findings speak to the need for the writing program to foster the development of our students’ critical thinking skills. A revision to the ENG 104 curriculum supports this development by promoting academic inquiry and the discovery of knowledge through experiential, collaborative learning.

    Preliminary data from the student surveys suggest the following:

    Program Strengths

    1. The mean score for overall preparedness and fairness of writing program instructors was 3 on a 4-point scale.
    2. The mean score for perceived opportunity for considering issues/questions of public significance from multiple perspectives was 3 on a 4-point scale.
    3. The mean score for teaching of writing as a process was 3 on a 4-point scale.
    4. The mean score for student perceptions of writing program instructors as caring, committed, respectful, and approachable individuals was 3.5 on a 4-point scale.

    Opportunities for Improvement

    1. Students did not generally perceive their critical reading capacities as improved after taking the course.
    2. Students wanted instructors to offer a wider variety of writing opportunities (e.g., more practice with new tasks without grades attached).
    3. Students perceived that little class time was spent on issues of standard usage and grammar.

    In General

    1. Relative to student perceptions of the course curriculum, there is general correspondence between ENG 102 and ENG 104, which suggests there is a discernible measure of parity across first-year writing courses.
    2. Students generally agree that first-year writing courses are either “more” or “much more” challenging than other 100-level or introductory courses.
    3. Students generally agree on the overall value of the course (3 on a 4-point scale) and the overall quality of teaching (3 on a 4-point scale).

    Assessment Follow-Up Activities

    Thus far, these findings have influenced both our new instructor orientation and our mandatory all-staff meeting this year. We have used our initial findings to help train new instructors, to create professional development sessions, and to gradually begin shifting our instructors toward creating assignments that are driven by inquiry, focus on issues of public intellectual significance, draw on multiple perspectives, and employ a variety of evidence and research methods.

    In addition, the assessment plan we devised has been the subject of a university-wide conversation as UK revises its general education course requirements, a process that includes a review of ongoing assessment practices relative to stated learning outcomes among and across a variety of disciplines.

    Assessment Resources and Transferability

    The writing program received $16,000 from the UK provost’s office to conduct the review and assessment. This money, along with writing program funds (roughly $2,500), was used to compensate 15 raters during the scoring sessions, to bring an expert consultant to campus to conduct a half-day workshop on writing assessment, and to pay for the costs incurred during our weeklong scoring sessions (e.g., printing costs and food/beverage costs).

    As mentioned briefly earlier, our Assessment Coordinating Committee spent many hours in conversation and consultation—meeting roughly every two weeks throughout the year for the purposes of identifying (and revising) our community’s rhetorical values for student writing, designing our assessment tools, selecting raters from a pool of applicants, and generally coming to terms with our goals and protocols for the direct assessment of student writing (our “live” scoring sessions, held in May 2007). This was, in all ways, a yearlong project, fully dependent on the generosity and spirit of goodwill to be found among our writing teachers at UK. As is usually the case, financial compensation never fully “covers” the amount of time and energy spent on a project of this size. A final report will be submitted to the Department of English, the Office of Assessment, and the Office of the Associate Provost for Undergraduate Education.

    The associate provost’s office has already indicated that our assessment should be viewed as a viable model for thinking about assessment across the university landscape. On a more personal level, we believe that our process of creating a rubric was both amazing and revealing. As teachers of college writers, we learned that the articulation of values about student writing—fraught as that is with contestation and emotion and deeply held beliefs about what makes a “good” piece of writing—is in fact the very foundation for what we do (and hope to do) in the composition classroom.


    Work Cited

    Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. Logan: Utah State UP, 2003.