Students' Reactions To Performance-Based Versus Traditional Objective Assessment
Popular and professional articles have been explicit about the loss of meaningful assessment of student learning through traditional objective measures, as defined by objective tests, particularly published tests mandated by state governments. Most would agree that these measures have proven highly efficient. The question now is what is known about student abilities as a result of such testing. That is, beyond the numerical indices generated by objective testing, what do educators, parents, businesses, and students really know about student performance? When asked to recall or recommend a student a semester or two hence, what remains is little more than vague recollections of student attributes and the course grade. Transcripts do provide patterns of strengths and weaknesses, but scant data regarding other variables such as attitudes, competencies, or processing skills. The grades themselves provide an index of how students respond to certain kinds of questions in highly structured settings--hardly the stuff of what the "real" world is like. Is it any wonder that many question these kinds of sterile results, which place such high value on skills little valued elsewhere? Is this what college teaching is supposed to provide?
These questions and concerns focus on assessment and the nature of the teaching/learning model. As learning is more clearly and broadly defined and articulated, more reflective teaching should result. Traditional approaches tend to lead to traditional conclusions and outcomes. If we desire different outcomes it will require different inputs.
To break away from traditional assessment molds, the first issue is over-dependence on objective tests. Should the tests be eliminated? There remains, in my mind, a place for objective tests. These tests are well suited to assessing knowledge-level learning, as well as higher-order thinking (Bloom, 1956). Further, objective tests can provide documentation of the quality and nature of the test itself that is unavailable to alternative assessment strategies. There is a certain finality in this approach, and it provides a reality check as well. The objectivity of these tests provides a floor effect for quality.
The project I envisioned was applied to the required course Measurement and Evaluation. I began by changing just one assignment, then expanding it. The first assignment asked students to develop "one general educational objective." From the objective, each student generated one multiple-choice question. These were collected and evaluated using a format that looked primarily at the mechanics of the project rather than the process or the outcomes. The project evolved into a more elaborate assignment, with the instructions and the evaluation requiring students to demonstrate not just the finished product but also the means and justification for how they arrived at it. That is, I asked to hear how students were thinking about the project as they created it. I also asked that the assignments be completed on a diskette. The product served as a means of demonstrating the quality of the output and could be used in the development of portfolios.
The performance aspect of the assignments attempted to model the kind of assessment and product evolution that would be required in most settings where the students would be employed upon graduation. But there was more. The student-crafted questions were used on one exam in the course. The quality of the items was determined primarily by the other students who responded to them. Students were designing their own assessment instruments and determining what was important for them to learn. Student self-assessment began to be a reality, matching the intrinsic assessment that individuals do in professions. Expanding the concept of alternative assessment, I also required students to write a one-page assessment philosophy paper linking their personal experiences with objective and item construction to their personal viewpoint on assessment.
This strategy requires more faculty involvement in student work, as well as an expanded role in product development. Students, too, are more willing to evaluate their own work in light of peer review and comments. This engagement provides invaluable insights. First, I better understand the misunderstandings that students have of the material. My teaching strategies are beginning to reflect subtleties of learning that went unrecognized when I used only traditional evaluations. Students seem to appreciate the work that goes into the projects and the time spent evaluating them. My ability to evaluate students is improving, as is my ability to articulate quality and craftsmanship issues as their writing and thinking improve.
But not all is positive. This approach is much more costly in terms of time and effort. Given the workloads, are these alternative assessment strategies justified, or is some other method equally effective? One outgrowth of the reevaluation of course assessment is my use of the computer for all assignments. Feedback to students via a diskette has reduced my workload. At times, however, the mechanics appear to steer the type and nature of the interactions. Is this simply replacing one objective means with another? Student reactions have been mixed, but improving.
How would I change this process in the future? More group work on projects would make the assessment not only performance-based, but authentic as well. I have found that ideas are better and more thoughtful in group efforts than in individual ones. Specifically, each group is assigned to develop only one worthwhile objective--a valuing issue. Working together, each group must collectively generate one multiple-choice question per student. Implementation of this particular aspect of the assignment is just beginning, and feedback from the students is interesting. I expect greater coherence in the projects, as well as greater creativity. What the student gains is a series of intensive exchanges of ideas through the peer review process. Feedback is less generic and more specific, and as the items develop, there is direct access to the process of learning--doing learning rather than learning about it. Portfolios could be developed to provide concrete evidence of performance, and would remain available if the professor is ever asked to recommend a student. This evidence affirms my faith in the student's ability to perform the work, as well as in the higher-order thinking skills exercised in the process. This appears to more closely align class assignments with professional placement or graduate work.
In summary, the struggle to obtain more performance-based evaluation goes on. The results to date support the idea that more student involvement is better. The issue of cost-benefit analysis must be continually addressed, while losing neither the vision to make assessment intrinsic to the learning process nor the efficiency of objective detachment. The really exciting aspect will be the personal growth of the students and their instructor.
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: The cognitive domain. New York: Longman.