Assessment Narrative - Frederick Community College

This assessment model is part of the WPA Assessment Gallery and Resources and is intended to demonstrate how the principles articulated in the NCTE-WPA White Paper on Writing Assessment in Colleges and Universities are reflected in different assessments. Together, the White Paper and assessment models illustrate that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.

Institution: Frederick Community College
Type of Writing Program: Freshman Composition
Contact Information: Kenneth Kerr, EdD
Professor of English
Frederick Community College
7932 Opossumtown Pike
Frederick, MD 21702
301-846-2646

Assessment Background and Research Questions

Frederick Community College has embraced Terry O’Banion’s concept of the “learning college.” One of the principles of that philosophy is, “[T]he learning college and its facilitators succeed only when improved and expanded learning can be documented for its learners” (47). In addition, as an institution accredited by the Middle States Commission on Higher Education, we have an obligation under Standard 14 to assess student learning in a meaningful and rigorous way and to use the results of assessment activities to improve teaching and learning.

Building on these two commitments, FCC has created an ongoing assessment model in which questions arise from teachers’ classroom work and the curriculum is continuously revised in response to those questions. These questions always focus on how well students are achieving the learning outcomes for the courses at the center of the assessment. Instructors volunteer to participate in the assessment on a three-year cycle.

As part of the annual self-evaluation, each full-time faculty member is required to report what he or she has done over the previous year to improve teaching and learning in his or her courses. In this self-evaluation, faculty identify an area of concern in one or more course or core learning outcomes. They discuss what was done to improve teaching and learning in that area and report how learning improved as a result of the change.

On a regular three-year cycle, faculty then self-select to participate in a formal, rigorous assessment project that focuses on issues of learning improvement. Often these issues come from the self-evaluations. These self-selected volunteers shoulder the assessment responsibility for their departments and oversee the three-year project. They are responsible for designing and implementing the project, collecting and analyzing data, and submitting the final course-level assessment report. Over time, all members of the department are expected to take a turn leading the assessment efforts. This is considered service to the college equivalent to a committee assignment.

To ensure that the assessment projects are sufficiently rigorous and likely to yield meaningful, reliable, and valid data about teaching and learning, each department submits an assessment project plan to the Outcome Assessment Council, which considers and approves the plan. Once approval is given, the department proceeds to collect and analyze data, implement change, and reassess for effectiveness. The cycle works like this:

Year 1: Departments submit a plan for assessing student learning. This plan will assess all four aspects of the Maryland Higher Education Commission (MHEC) general education requirements.

Year 2: Departments collect and analyze data and recommend changes to teaching and learning.

Year 3: Departments reassess to determine the effectiveness of the changes and submit a final report.

The cycle then begins again with a new team investigating a new area of learning in a different high-enrollment course.

The Assessment

In the fall of 2004, the English department developed an assessment project to determine how well students are learning college-level communication skills in our English Composition (EN101) classes. Initially, we conducted a general assessment to gather baseline data about how well our students were meeting our expectations at the end of the course. We developed a rubric to assess what we thought were the most important skills we wanted students to gain from the course: content, organization, and grammar/punctuation/mechanics.

In 2005, we decided to focus on the same general question, “How well are students learning college-level communication skills in our English Composition (EN101) classes?” We took as our criteria the outcomes for our course:

  • Write effective, organized, clear, concise, grammatically correct compositions using appropriate stylistic options (tone, word choice, and sentence patterns) for a specific subject, audience, and purpose (informing, arguing, or persuading).
  • Demonstrate the ability to understand and interpret both written texts and oral presentations in English.
  • Understand the critical role of listening in communication.
  • Demonstrate an ability to organize ideas effectively by selecting and limiting a topic.
  • Develop and support a thesis with relevant and well-reasoned material.
  • Employ a logical plan of development and use effective transitions.
  • Demonstrate an understanding of the conventions of the English language by writing essays that are substantially free of errors in grammar, spelling, punctuation, and mechanics.
  • Develop critical thinking skills by:
      ◦ evaluating evidence by differentiating among facts, opinions, and inferences;
      ◦ generating and evaluating alternative solutions to problems; and
      ◦ researching, analyzing, comparing, synthesizing, and drawing inferences from readings and other research materials in order to make valid judgments and rational decisions.

Assessment Methods

All twelve full-time English department faculty collected final research papers (with all identifying information removed) from their regular sections during the spring of 2005. Using the outcomes-based criteria, we examined a sample of the student papers. The results indicated that students were strongest in the area of grammar and mechanics and also demonstrated competency in organizing written communication. However, we found that students did not effectively use information and material to support assertions in their writing.

We then developed a follow-up study focusing on two new research questions: Is the problem that our students do not know how to find information and materials to support the assertions of their theses? Or, once students have found information, do they not know how to use it effectively?

In the fall of 2006, we repeated the assessment project, specifically addressing these two questions. We provided students with specific, college-level reading material (articles from a scholarly journal and a general periodical) to be studied outside the classroom. Faculty explained the assignment, distributed the materials, and set aside a full class period during the final week of class for students to write an essay responding to the readings. Instructors were cautioned to refrain from dissecting and discussing the materials with students because we were trying to determine students’ ability to read critically. Students were expected to prepare beforehand and could bring materials and notes to class for drafting. (Although it was part of our study, the assignment was also a regular class assignment graded by the instructor.)

In total, 281 student papers were in the sample, representing 24 sections taught by the full-time faculty and three adjuncts. From this pool, we randomly selected 60 papers (roughly 20 percent) for evaluation.
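For illustration, here is a minimal sketch of how such a draw could be made reproducible; the paper identifiers and fixed seed are hypothetical, not part of our documented procedure:

```python
import random

# Hypothetical identifiers for the 281 de-identified papers in the pool.
paper_ids = [f"paper_{n:03d}" for n in range(1, 282)]

# Fixing a seed lets the same sample be re-drawn if the selection is audited.
random.seed(2006)

# Draw 60 papers (roughly 20 percent of 281) without replacement,
# giving every paper an equal chance of inclusion.
sample = random.sample(paper_ids, k=60)
print(len(sample), sample[:5])
```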
The papers were assessed using a four-point rubric, developed from our course outcomes and the earlier study, with levels ranging from “accomplished” to “not evident.”
EN101 In-Class Writing Assessment Rubric

Learning objective: The writing demonstrates that the student can differentiate facts, opinions, and inferences.
  4 (Accomplished): Student has clearly indicated when he or she is stating opinion as opposed to fact.
  3 (Competent): Student does not specifically state whether the evidence is based on fact or opinion.
  2 (Developing): Student uses data incorrectly to support a position on the issue.
  1 (Not Evident): The response is entirely based on personal opinion.

Learning objective: The student analyzes information from various sources.
  4 (Accomplished): Student uses information from both sources provided and clearly identifies the specific source of the material.
  3 (Competent): Student uses information from both sources.
  2 (Developing): Student uses only one source to support the position.
  1 (Not Evident): Student uses no information from the sources provided.

Learning objective: The student recognizes and develops alternative perspectives and solutions.
  4 (Accomplished): The response indicates that an original alternative solution to the prompt has been developed.
  3 (Competent): Student accepts a solution proposed by one or the other of the provided sources.
  2 (Developing): Information from the material is summarized, but no solution is offered.
  1 (Not Evident): Student misreads the data and/or misinterprets what he or she has been asked to write about.

Learning objective: The student evaluates alternatives to make a sound judgment.
  4 (Accomplished): The student presents and considers multiple alternatives before proposing one as preferred.
  3 (Competent): The student mentions the existence of opposing positions.
  2 (Developing): The student presents a single position as the only possible solution.
  1 (Not Evident): No alternatives are presented. No judgment is present.
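For scoring and tallying, a rubric like this can be represented directly as data. The dictionary layout below is our illustration only, with descriptors abridged from the rubric above:

```python
# The four rubric levels, highest to lowest.
LEVELS = {4: "Accomplished", 3: "Competent", 2: "Developing", 1: "Not Evident"}

# Each objective maps to its level descriptors (abridged).
RUBRIC = {
    "differentiates facts, opinions, and inferences": {
        4: "Clearly indicates when stating opinion as opposed to fact.",
        3: "Does not specifically state whether evidence is fact or opinion.",
        2: "Uses data incorrectly to support a position.",
        1: "Response is entirely based on personal opinion.",
    },
    "analyzes information from various sources": {
        4: "Uses both provided sources and identifies each specifically.",
        3: "Uses information from both sources.",
        2: "Uses only one source to support the position.",
        1: "Uses no information from the provided sources.",
    },
    # ... remaining objectives follow the same pattern
}
```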

Three readers read each paper. When a reader determined that a paper was “competent,” or fell above or below “competent,” he or she was asked to provide some explanation for that determination.

The purpose of the second assessment project was to determine whether students could effectively use college-level materials provided to them in response to a prompt they were given in advance. We found that they had trouble handling this information in a competent manner. Consequently, we concluded that we needed to concentrate our efforts on helping students become better critical readers.

Assessment Principles

Since FCC’s assessment is ongoing, the methods of assessment depend on the questions being asked. Both departments and individual faculty are expected to participate in ongoing assessments. In the English department, we used the following principles to help develop our assessment program and projects:

  • The use of writing assessment should be appropriate, fair, valid, reliable, and equitable. We made sure that every paper had an equal chance of being included in the sample. To keep the emphasis on student work, we removed any indication of the instructor and the student. All papers were read by all three evaluators, who were calibrated prior to evaluating papers.
  • Writing assessment places priority on improvement of teaching and learning. To reinforce the idea that the purpose of assessment is improvement, results were not presented in such a way that any individual student or instructor could be identified as either weak or exceptional. The idea is to improve the course as a whole. Assessment must be seen as ameliorative, not punitive.

    Student learning outcomes assessment at Frederick Community College is a comprehensive effort focused on measuring student academic achievement. In writing courses, this means assessment results can be used to improve student writing in other classes and other disciplines. As we introduce learning innovations that prove effective, they are communicated to other instructors through professional development opportunities throughout the year.

  • Writing assessment provides evidence of student learning. By assessing random samples of students over many semesters, we are able to determine whether the learning innovations we have introduced as a result of assessment data are effective.

    Students are also active partners in the learning process. Our students are made aware of what is being assessed in the work they produce. We specify what evidence of learning will be looked at closely and remind them of it.

  • Writing assessment is ongoing, providing the foundation for data-driven, or evidence-based, decision making. Because the assessment project takes place over a period of years and provides data that can be used to measure effectiveness, we can develop changes to our courses based on evidence and then measure the effectiveness of those changes.
  • Writing assessment articulates and communicates clearly its values and expectations. Our students are told what is being assessed and how it will be assessed. They are provided with the assessment rubric. Faculty know that they have nothing to fear from assessing their students’ learning: everyone is committed to improvement, and no one need fear reprisal because of disappointing results.
  • Writing assessment is site-specific. We wanted to develop a program of assessment that would fit within our current practices. We didn’t want to “invent” a way to assess; we wanted to “find” existing opportunities to assess student learning. We wanted to incorporate assessment into what we were already doing so as to cause the least disruption to teaching practices. Our program was also developed with consideration of our size, our resources, and our mission.

Assessment Results

The results of the 2005 assessment project indicated that students were strongest in the area of grammar and mechanics. Students also demonstrated competency in organizing written communication. As previously stated, the area of greatest weakness was content. Specifically, we learned that students did not effectively use information and material to support assertions in their writing. This led us to the two new research questions described earlier: Is the problem that our students do not know how to find information and materials to support the assertions of their theses? Or, once students have found information, do they not know how to use it effectively? We believed that the answer to improving student learning in the area of content lay in the answers to these questions.

The purpose of the second assessment project was to determine whether students could effectively use college-level materials provided to them in response to a prompt they were given in advance. If we found that they could not handle this information in a competent manner, we could then concentrate our efforts on helping students become better critical readers. Analysis of the data showed that, under the best possible interpretation, 67 percent were determined to be fully competent or better in all three learning objectives by at least two of the three evaluators. Only 27 percent of the sample papers were determined to be fully competent or better by all three evaluators. Additionally, fewer than 50 percent of the papers were competent or better on any individual learning objective.
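As a sketch of the tally involved, the following illustrates one plausible reading of the criterion: a paper counts when, on every objective, at least two of the three readers rated it “competent” (3) or better. The score data are invented placeholders, not our actual ratings:

```python
# For each paper, three readers' ratings, one rating (1-4) per objective.
# Invented placeholder data.
scores = {
    "paper_001": [[4, 3, 3], [3, 3, 2], [4, 4, 3]],
    "paper_002": [[2, 3, 1], [3, 2, 2], [2, 2, 3]],
    # ... one entry per sampled paper
}

def competent_by_majority(readers, threshold=3):
    """True if, on every objective, at least two of the three readers
    rated the paper at or above the "competent" threshold."""
    n_objectives = len(readers[0])
    for obj in range(n_objectives):
        agree = sum(1 for r in readers if r[obj] >= threshold)
        if agree < 2:
            return False
    return True

hits = sum(competent_by_majority(r) for r in scores.values())
print(f"{hits / len(scores):.0%} competent or better by majority on all objectives")
```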

Assessment Follow-Up Activities

Our second research project (fall 2006) showed us that even though we provided the students with appropriate college-level material, they were not consistently able to use it to support a thesis. Our assumption is that we must spend more time teaching critical reading and the integration of that reading into essay writing. This assumption is bolstered by placement data showing that the number of students requiring developmental writing is decreasing while the number requiring developmental reading is increasing, and increasing faster than the writing numbers are falling.

Based on these findings, we developed a version of EN101 that will focus more directly on teaching critical reading. We have also designed a new assessment project: a true experimental design in which six sections of EN101 Freshman Composition will be given intensive critical reading instruction and practice. Six other sections will be designated as control sections, in which no teaching/learning innovation will be introduced; these students will get the same class they normally would. We will then repeat the in-class essay using provided material in all twelve sections and compare student performance. If the study group outperforms the control group, we will know that we need to rework the EN101 course to include extensive critical reading, and we can begin developing professional development materials to teach EN101 instructors how to improve learning in this area.
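As a minimal sketch of how the eventual comparison might be run: the rubric totals below are invented placeholders, and the choice of a two-sample t-test is our illustration, not a method specified in the project plan:

```python
from scipy import stats

# Per-paper rubric totals for the treatment (critical reading) and
# control sections. Invented placeholder data.
treatment = [12, 14, 11, 13, 15, 10, 12, 14, 13, 11]
control = [10, 11, 9, 12, 10, 11, 8, 12, 10, 9]

# Welch's two-sample t-test: does the critical-reading group score
# higher than the control group on average?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```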

Assessment Resources

While there was no additional funding for these assessments, FCC invests significantly in the assessment process. FCC provides institutional support through the Office of Assessment, where two full-time researchers are available to assist with design and analysis. Professional development money is available for anyone involved in assessment who wants additional training. Requests for equipment are made from existing initiative money; there is no supply budget for assessment. Finally, faculty involved in assessment are recognized for their work at a status equivalent to serving on an important college committee. Assessment work is considered the type of service we would expect to see from someone applying for promotion.

Portability and Sustainability of the Design

The course-level assessment used by FCC is sustainable because of its ongoing, three-year project design. This design encourages the use of assessment data to improve teaching and learning. Furthermore, the instructional innovations developed as a result of the data analysis are themselves assessed for effectiveness. Once an opportunity to improve teaching and learning has been identified and addressed, the methods by which student learning was improved are presented to all instructors through professional development. After that, a new area of student learning is identified and the cycle continues.

This design is portable and adaptable to other two-year colleges. We all have course outcomes that identify the important skills, knowledge, or abilities we want our learners to demonstrate at the end of a course. We all have embedded opportunities within these courses that give students a chance to demonstrate their mastery of these skills, knowledge, or abilities. This design identifies existing assessment opportunities rather than creating new tasks for instructors and students. In this way, we think more deeply about the types of work we are asking students to do and why we are asking them to do that work. In doing so, we give more meaningful assignments to our students and receive better data on how well they are learning what we want them to learn.

This design requires only a modest amount of additional work from a few people for a limited period. Three people from each department work about 25 to 40 additional hours per year for three years. Much of this work is done over the summer months, when they are not teaching, and can be done asynchronously when it best suits their schedules and preferences. These three are then excused from departmental assessment projects until everyone else in the department has served in that capacity.

Additionally, we all complete annual self-evaluations. It is proper that we should self-evaluate the effectiveness of the courses we teach. Reflecting each year on one aspect of one of our courses that we think can be improved, on what we did to improve it, and on how well the change worked is not only appropriate professional self-reflection but also valuable data for our colleagues and our college.

This design is minimally intrusive, involves minimal additional resources, and requires minimal additional effort on the part of a very few faculty above what they are already doing.

Reference

O’Banion, Terry. A Learning College for the 21st Century. Phoenix: American Council on Education and Oryx Press, 1997.