Collecting Feedback | When and How

Course feedback can be collected at any time you think appropriate. Early feedback (at midterm or before) is most helpful in suggesting changes that can be made during the offering of the course and is thus likely to positively influence course success and final student ratings. The use of a questionnaire can also be supplemented by other methods during the term, such as asking students at the end of a given class to provide brief written comments on the perceived effectiveness of that class or appointing a course committee to provide feedback at various intervals.

Administering the Survey

Unlike a summative instrument such as the SEI, which is intended for personnel purposes, the FYI diagnostic questionnaire is for your purposes alone. Security is therefore not an issue, and you do not need someone other than yourself to distribute and collect the questionnaires. You will want students to answer honestly and to know that what they say will not influence your attitude toward them, so you will probably want replies to be anonymous. If you think handwriting would interfere with anonymity, you might ask students to respond to the open-ended items on a computer between classes; this take-home option is also helpful if you think your presence would inhibit responses. You can also arrange for forms to be returned to a third party who will hold them until after course grades are turned in, if you think this will increase student honesty. Be sure to give students enough time to answer thoughtfully, particularly if open-ended items are used.

Compiling Results

Since this information is for your own use, you can compile the results yourself and obtain quick feedback. If you have a very large class and are using scaled items only, you can have the students use scannable response sheets, which can be purchased from Stores. These will be scanned free by the Office of Testing at the University Registrar, which will provide results to you. If you are hand-scoring the completed forms, tabulate the scaled items to compute the number of responses in each category (SA, A, N, D, SD). If computing a mean will help you to understand the results, multiply the number of responses in each category by that category's assigned point value (SA=5, A=4, N=3, D=2, SD=1), then add up these products and divide by the total number of responses. An example is below:

For a class of 25 students, the response set for the question "Exam questions were clear" contains two "Strongly Disagree," three "Disagree," fourteen "Agree," and six "Strongly Agree" responses. Fill the appropriate numbers into the following formula to generate the average:

((# of SD ratings x 1) + (# of D ratings x 2) + (# of N ratings x 3) + (# of A ratings x 4) + (# of SA ratings x 5)) / total # of responses = average rating

((2 x 1) + (3 x 2) + (0 x 3) + (14 x 4) + (6 x 5)) / 25 = 94 / 25 = 3.76
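For instructors comfortable with a spreadsheet or script, the hand tabulation above can be automated. The sketch below (variable names are illustrative, not part of the questionnaire) applies the same point values and division used in the worked example:

```python
# Point values assigned to each response category, as described above.
POINT_VALUES = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}

def mean_rating(counts):
    """Average rating from a dict mapping category -> number of responses."""
    total_responses = sum(counts.values())
    weighted_sum = sum(POINT_VALUES[cat] * n for cat, n in counts.items())
    return weighted_sum / total_responses

# "Exam questions were clear" for a class of 25 students:
counts = {"SA": 6, "A": 14, "N": 0, "D": 3, "SD": 2}
print(round(mean_rating(counts), 2))  # prints 3.76
```

Running the same function over every scaled item gives a quick profile of the course before you turn to the open-ended comments.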

Interpreting the Results

When scores are less than "Agree," the course or instructor characteristic is not getting the desired student response. If the reasons for this response are not clear from open-ended information or other information available to the instructor, dialogue with the students is important. A consultant from the University Center for the Advancement of Teaching (UCAT) may be called upon to help elicit reasons for low ratings on certain items or to suggest strategies for addressing them. In the case of open-ended items, it is useful to group responses by item and look for patterns. If, for example, more than half of the responses to a question on the value of course readings identify a certain text as being poorly written or difficult to understand, this is important information for course improvement. If only one student mentions this factor, the opinions of other students on that specific text should be solicited, or reasons for mentioning it should be sought in an in-class conversation following the feedback. Once again, a UCAT consultant may be asked to help summarize and interpret results.

Using the Results

Even when results are unambiguous, it is helpful to talk with students about the questionnaire so that you can explain which changes you can make, which you cannot, and reiterate your goals for the course. Such conversations demonstrate your commitment to the course and help clarify your intentions for it. They also communicate the message that students will be heard and that they are active participants in the success of the course. Asking students to reflect on their own efforts (amount of study time, class attendance and participation, and the like) when a course is being evaluated underlines this notion of joint responsibility. Your analysis of student feedback is for your own purposes and should not be included in portfolios or materials intended for personnel purposes unless you want to show a pattern of improvement or continuing efforts to understand the effects of your teaching. When this is the case, a descriptive statement rather than raw results should be used, since the methods for collecting the information, the reliability and comparability of different items, and the method of analysis were not controlled. You might also ask a peer to look over the raw results and write a summary of the direction of the findings when you want to make use of information collected for improvement purposes at a time when a personnel decision (merit pay, promotion) is being made.