Assessment Narrative - George Mason University





This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.

Assessment Narrative - George Mason University

Institution: George Mason University

Type of Writing Program: Writing across the Curriculum; required upper-division writing-intensive courses in the major

Contact Information: Terry Myers Zawacki

Director, WAC and University Writing Center

tzawacki@gmu.edu

Assessment Background and Research Question

George Mason University, a large Virginia state institution located outside of Washington, D.C., has a well-established Writing across the Curriculum (WAC) program dating from 1977. The components of the program include a required upper-division composition course in a disciplinary field relevant to the student’s major (e.g., Advanced Composition in the Social Sciences) and one or more designated upper-division writing-intensive courses in the major. In 2001, our State Council of Higher Education in Virginia (SCHEV) required all institutions to develop definitions of six specific learning competencies, one of which was writing, along with plans for assessing them, with reporting to begin two years later. Each institution was allowed to develop its own assessment plan. The director of the Office of Institutional Assessment (OIA) consulted with me about how we might respond to this mandate so that we would be able to use the results of the assessment to improve the way writing is taught across the disciplines, not just to prove something about our students’ writing competence to an external audience.

The year before we received the 2001 mandate to assess writing, the OIA director and the WAC director had already begun to set in place a process for determining the effectiveness of our writing-intensive (WI) requirement in the major. As a first step, we asked the provost to convene the Writing Assessment Group (WAG), comprising representatives from each of the colleges, many of whom had served or were currently also serving on the senate-elected WAC committee. Our first WAG task was to design a survey, described in detail under Assessment Methods, which we circulated to all faculty to determine the number and kinds of writing tasks they assigned and their level of satisfaction with students’ performance on these tasks. Based on the results of this assessment and in response to the state mandate, we developed a second set of research questions related to students’ competence as writers in their majors.

To fulfill the state’s mandate, all institutions had to (1) submit a plan for assessing students’ writing competence, (2) include a definition of standards for writing competence, along with the methods to be used to measure competence, and (3) report results to stakeholders, as well as the actions that would be taken based on the results. Mason’s plan focused on the writing of upper-division students in the majors, with assessment conducted by departmental faculty, who would assess representative samples of student writing in the major according to a discipline-specific rubric they had developed. In addition to these departmental results, the proposal also noted that we would include data from the faculty survey on student writing and from responses to questions about writing on the graduating senior and alumni surveys. Based on all of these findings, we would determine what changes and/or enhancements might need to be made to the WI course(s), to their role in the sequence of major courses, and/or to the faculty development workshops targeted to faculty teaching WI courses.

For purposes of reporting to the state higher education council, our writing assessment group decided to aggregate the results from all of the departments that had conducted assessment, so that individual departments would not be singled out for producing unsatisfactory numbers of less-than-competent writers. However, we asked departmental liaisons to write longer, more detailed reports on their assessment findings to be kept in the Office of Institutional Assessment and to be circulated to department members. In a concluding section of the longer reports, departments are asked to describe the actions they will take, as a result of their findings, to improve the way writing instruction is delivered in the major. The report to SCHEV can be found at http://research.schev.edu/corecompetencies/GMU/comp_writing.asp. Departmental reports are not publicly available; however, scoring rubrics are posted at http://wac.gmu.edu/program/assessing/phase4.html.

Assessment Methods

Faculty Survey on Student Writing
For the first assessment measure, in fall 2000 the Faculty Survey on Student Writing was distributed to all faculty, who were asked about student writing at different points along a continuum: for example, how prepared first-year and transfer students are as writers and how satisfied faculty are with seniors’ abilities on 17 writing criteria. Faculty also noted the number and kinds of writing assignments they use in their undergraduate classes, as well as their perception of and interest in overall departmental support and resources for teaching with writing. While, as could be predicted, response to the survey was disappointingly low, a number of units (Biology, College of Nursing and Health Sciences, Computer Science, Electrical and Computer Engineering, English, New Century College, Public and International Affairs, School of Management) had initial response rates of 40 percent or higher. Some units subsequently readministered the survey and achieved higher response rates. A detailed description of the survey results can be found on page 3 of the InFocus newsletter at http://assessment.gmu.edu/Results/InFocus/2002/WritingAssessment.pdf.

Questions on Writing on Graduating Senior Survey
Supplementing the information from the faculty survey are results on the writing questions asked each year on the Graduating Senior Survey. The 2006 senior survey included questions about students’ opportunities for revision and feedback in 300-level courses and above, and the effect of feedback on improving their writing, their confidence, and their understanding of their field. The results can be seen at http://assessment.gmu.edu/Results/GraduatingSenior/2006/index.cfm by selecting “Writing Experiences.”

Course-Embedded Holistic Assessment by Faculty in Majors
Our current and ongoing assessment is embedded in required upper-division WI courses in the major. Every department offering undergraduate degrees is asked to appoint a liaison who organizes the assessment effort. The liaisons attend a cross-disciplinary workshop, which is designed to teach them methods for developing criteria and assessing papers holistically. The liaison then goes back to his or her department to lead a similar workshop using papers collected from writing-intensive or writing-infused courses. The following paragraphs give a fuller description of these workshops.

Cross-Disciplinary Training Workshops.
For the cross-disciplinary training workshop, departmental liaisons read, discuss, and rank sample student papers written in sections of English 302, an advanced writing-in-the-disciplines course required of all students; the papers were written in response to a standardized assignment prompt for a literature review. After the sample papers have been ranked, the faculty develop a scoring rubric based on criteria derived from their discussion of the traits they value in the papers. While the purpose of the cross-disciplinary workshop is to teach the liaisons the process to be used in the departmental workshops, the participants always leave with an awareness of how much their expectations may differ from those of colleagues in other disciplines and even in their own; they also acquire a greater understanding of the challenges student writers face in meeting the expectations of teachers across disciplines. The WAC director leads these “training-the-liaison” workshops with the assistance of other composition faculty as available. She also leads or co-leads (with the designated liaison or another assessment group member) the half-day departmental workshops.

Departmental Assessment Workshops.
Before the departmental scoring session, liaisons determine what assignment will be used to evaluate students’ competence. They are asked to select an assignment that requires students to demonstrate the skills and abilities most characteristic of those a writer in the major should possess. Papers written in response to the assignment or set of assignments are collected from all students, with names removed. Papers are then selected at random to provide a representative sample for scoring (the number of papers scored is based on a reliable percentage of the number of majors). Participants in the workshops are typically those faculty who most often teach the WI course(s) or who teach with writing in most of their courses. As in the training workshop, they read and discuss three or four sample papers as a group, articulate traits they value in each of the papers, rank the papers, and, finally, develop a rubric with criteria that reflect the traits they’ve listed. Thus the criteria and the scoring rubric are not only discipline-specific but also specific to courses and assignments.

Using this rubric, faculty score the papers. Each of the papers being assessed gets two readings and a third if the first two overall scores do not agree. Because overall scores can be difficult to determine if there is a spread of scores over individual criteria, faculty, as a group, must decide how they will determine overall competence when some criteria may be assessed as “less-than-satisfactory.” Some groups have decided that any paper receiving a “less-than-satisfactory” on the top one or two criteria must receive an overall “less-than-satisfactory” score. The School of Management decided, for example, that papers assessed as “not competent” in the category of “Formatting and Sentence-Level Concerns” must receive an overall score of “not competent.” Biology faculty agreed that any paper receiving an “unacceptable” rating on “Demonstrates Understanding of Scientific Reasoning” must be judged as “unacceptable” overall. (Note: Departments decide on the language they will use to describe the level of competence, e.g., “less than satisfactory,” “not competent,” “unacceptable.”) 
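To make these scoring conventions concrete, the sketch below models them in Python. It is a hypothetical illustration only: the criterion names, rating labels, “required criterion” rule, and the default of letting the most common rating stand as the overall rating are assumptions standing in for whatever rules a department actually adopts.

    # Hypothetical sketch of department-style scoring rules; the labels,
    # criteria, and default overall-rating rule are illustrative assumptions.

    def overall_rating(criterion_ratings, required_criteria):
        """If any required criterion is rated 'unacceptable', the paper is
        'unacceptable' overall (as in the Biology and School of Management
        examples above); otherwise, as an assumed default, the most common
        rating across criteria stands as the overall rating."""
        if any(criterion_ratings[c] == "unacceptable" for c in required_criteria):
            return "unacceptable"
        ratings = list(criterion_ratings.values())
        return max(set(ratings), key=ratings.count)

    def adjudicate(first, second, third=None):
        """Each paper gets two readings; a third is called for only when the
        first two overall ratings disagree, and (by assumption here) the
        third reading then decides."""
        if first == second:
            return first
        if third is None:
            return None  # signals that a third reading is still needed
        return third

    # Example: competent on most criteria but weak on a required one
    paper = {"understanding of scientific reasoning": "unacceptable",
             "organization": "competent",
             "formatting and sentence-level concerns": "competent"}
    print(overall_rating(paper, ["understanding of scientific reasoning"]))
    # -> unacceptable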

Once the scoring has been completed, the departmental liaison is responsible for analyzing the distribution of scores overall and on each criterion and for writing a report on the results to be circulated to the department and sent to OIA. While an analysis of the overall scores on the rubrics gives departments a general picture of students’ writing competence in the major, it is the analysis of the scores for each of the criteria that is most instructive for the purposes of faculty development, i.e., developing teaching strategies and assignments targeted to those areas in which papers were judged to be weak. As explained below, the assessment results also help departments make decisions about where writing is best placed in the curriculum. A more detailed explanation of our assessment process is available on our WAC site at http://wac.gmu.edu/program/assessing/phase4.html, as are a number of rubrics developed by the departments that have conducted assessment.
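As a concrete illustration of this analysis step, the short sketch below tallies per-criterion rating distributions for a set of scored papers so that weak areas stand out. The criteria and rating labels are placeholder assumptions, not an actual departmental rubric.

    from collections import Counter

    def criterion_distributions(scored_papers):
        """scored_papers: list of dicts mapping criterion -> rating for one
        paper. Returns, for each criterion, a count of how many papers in
        the sample received each rating."""
        tallies = {}
        for ratings in scored_papers:
            for criterion, rating in ratings.items():
                tallies.setdefault(criterion, Counter())[rating] += 1
        return tallies

    sample = [
        {"focused thesis": "competent", "use of evidence": "unacceptable"},
        {"focused thesis": "competent", "use of evidence": "competent"},
        {"focused thesis": "unacceptable", "use of evidence": "unacceptable"},
    ]
    for criterion, counts in criterion_distributions(sample).items():
        print(criterion, dict(counts))
    # A cluster of low ratings on one criterion (here, "use of evidence")
    # flags an area to target in assignments and faculty development.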

Assessment Principles

We view assessment as part of an overall philosophy about education that states that good assessment—its methods, practices, and results—can be used to correct, change, and enhance the learning experience for our students. Central to our assessment process is the belief that faculty own the curriculum and, further, that program faculty must share a sense of direction and purpose to establish a coherent learning experience for students—in this case, a coherent writing experience in the major. When writing assessment is embedded in writing-intensive courses in the major and when faculty buy into the process, both the process and the results contribute to the development of teachers, to their greater understanding of student writers, and to the effectiveness of the writing instruction in their classes.  

Our assessment principles and decisions are also guided by composition and writing-in-the-disciplines research and theory, including Cooper and Odell’s 1977 collection Evaluating Writing: Describing, Measuring, Judging, which describes and provides a rationale for holistic scoring, and Huot’s 2002 (Re)Articulating Writing Assessment for Teaching and Learning, which argues that assessment should be site-based and locally controlled, that writing professionals should lead these efforts, and that our practices should be theoretically grounded, practical, and politically aware. Our process is also informed by genre and activity theory, which accounts for the significant disagreements among faculty, both across disciplines and within the same discipline, about what constitutes competent writing. A fuller listing of sources is included at the end of this document.

Assessment Results

It would be difficult to sum up in a brief statement all that we have learned from our assessment efforts. The rubrics that departmental faculty develop through our holistic reading and scoring process reveal widely varied expectations for student writing, grounded in the discipline but also shaped by faculty members’ sense of the writing that is appropriate for undergraduates in their fields. Some results can be found on the website pages listed above. Coauthor Chris Thaiss and I also discuss assessment results in Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life.

One of the most significant things faculty discover as part of the workshop scoring process is that they may not agree with one another on what “good” writing or a “serious” error looks like. While they may start from the position that surface errors are the strongest indicator that students “can’t write,” they see, as a result of collaboratively constructing a scoring rubric, that students’ performance on higher-order criteria (clear argument, focused thesis, logical evidence, etc.) might be a better indicator of students’ ability to write well in the discipline. Faculty can also see how flaws in their assignments might contribute to students’ less-than-successful performance. The subsequent analysis of the scoring results helps faculty create more effective assignments, decide which assignments are most appropriate, determine the best sequence for assignments, and/or improve their teaching-with-writing practices in the areas indicated by the assessment. The reports are also useful to departments in determining appropriate course sequences and whether the currently designated WI course is the most appropriate for the major. A more specific discussion of how the assessment results are being used by departments can be found in the InFocus newsletters at http://assessment.gmu.edu/Results/InFocus/2007/CompetenciesSummary_FINAL.pdf.

Assessment Follow-Up Activities

Beyond the state mandate, the Southern Association of Colleges and Schools (SACS) requires the assessment of learning outcomes for every academic program, including general education, for accreditation purposes. The writing assessment we have been doing contributes to this report, with each individual unit discussing the results of its assessment of writing in the major and the follow-up actions it will take. The university will also include the assessment of writing as part of our larger assessment of general education for the SACS review.

SCHEV has recently mandated that Virginia institutions include a “value-added” component in their assessment plans. We will build on our current plan by adding a preassessment of students’ writing competence at the completion of first-year composition (FYC), using a random and representative sample of research-based essays. Faculty who teach the course will participate in a scoring workshop, in which they first develop a rubric to specify standards and then blind-rate the papers. In addition to providing comparison data for the postassessment that occurs in the WI courses, the results should also allow us to begin assessing our required English 302 advanced writing-in-the-disciplines course.

Assessment Resources

Departmental liaisons are given a very small stipend and a free lunch for participating in the cross-disciplinary training workshops. In some departmental workshops, faculty are given lunch and, if funding is available, a small stipend. In 2004 the provost funded a university-wide reception to recognize faculty for their assessment efforts. Posters describing each department’s assessment procedures, rubrics, and results were created for the reception and subsequently displayed, at the request of our university president, at a meeting of the Board of Visitors. Some posters were also displayed in the bookstore and in departments. Some of the posters can be viewed online at http://wac.gmu.edu/program/assessing/powerpoint.html. Other than this recognition and some small compensation for term and adjunct faculty who participate in scoring, there are no incentives; we must rely on the goodwill of full-time faculty and their commitment to student learning.

The WAC director co-chairs the assessment initiative with the OIA director as part of her responsibilities, not because this is part of the job description but because of what the WAC program gains from participating in the process. The assessment workshop is a valuable faculty development opportunity, and both the process and the resulting data provide the director with a valuable perspective on writing in the disciplines across the university, which, in turn, informs ongoing WAC program and faculty development efforts.

Assessment Design Sustainability and Adaptability

Our assessment efforts are sustainable up to a certain point. A joint WAC-OIA position has been approved for the next fiscal year for an assistant to help with both writing assessment and the WAC program. However, we still need more resources to enable us to recognize the efforts of those faculty who have participated and to provide incentives to encourage more faculty to participate.

Our process is adaptable, as demonstrated by departments that have used the methods for their own ends. The Department of Communication, for example, used the holistic method to develop a rubric for faculty, most of them adjuncts, to use in grading papers from lower-division general education and majors courses; departments also find the process useful for calibrating teachers’ reading and evaluation practices. Our School of Management is using the process to develop writing outcomes for its majors and also to measure growth in writing from the gateway to the capstone course.

The frequent queries we receive from program leaders across the country about our assessment process are evidence of the adaptability of our assessment design to other programs. Indeed, our program has been referred to as “the Mason Model” by some of the WAC and assessment people who frequently contact our program.

Useful References

Bazerman, Charles, and David R. Russell. Writing Selves/Writing Societies: Research from Activity Perspectives. Perspectives on Writing. Fort Collins, CO: The WAC Clearinghouse and Mind, Culture, and Activity, 2002. 11 June 2008 <http://wac.colostate.edu/books/selves_societies/>.
Cooper, Charles R. “Holistic Evaluation of Writing.” Evaluating Writing: Describing, Measuring, Judging. Urbana, IL: National Council of Teachers of English, 1977. 3–32.
Cooper, Charles R., and Lee Odell, eds. Evaluating Writing: Describing, Measuring, Judging. Urbana, IL: National Council of Teachers of English, 1977.
Haswell, Richard, and Susan McLeod. “WAC Assessment and Internal Audiences: A Dialogue.” Assessing Writing Across the Curriculum: Diverse Approaches and Practices. Ed. Kathleen Blake Yancey and Brian Huot. Greenwich, CT: Ablex, 1997.
Huot, Brian. (Re)Articulating Writing Assessment for Teaching and Learning. Logan: Utah State UP, 2002.
Miller, Carolyn R. “Genre as Social Action.” Quarterly Journal of Speech 70.2 (1984): 151–67.
Russell, David R. “Rethinking Genre in School and Society: An Activity Theory Analysis.” Written Communication 14.4 (1997): 504–54.
Thaiss, Christopher, and Terry Myers Zawacki. Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life. Portsmouth, NH: Boynton/Cook, 2006.
Walvoord, Barbara E. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. San Francisco: Jossey-Bass, 2004.
White, Edward M. Teaching and Assessing Writing: Recent Advances in Understanding, Evaluating, and Improving Student Performance. San Francisco: Jossey-Bass, 1994.
Yancey, Kathleen Blake, and Brian Huot, eds. Assessing Writing Across the Curriculum: Diverse Approaches and Practices. Greenwich, CT: Ablex, 1997.