107 pages, grade: 18.50
CHAPTER ONE Introduction
1.2. The Nature of Written Response
1.3. Assessing Written Response
1.4. The Role of Feedback in Improving Teaching
1.5. Corrective Feedback in Natural and Instructed FL Learning
1.6. Types of Corrective Feedback in Instructed FL Learning
1.7. Feedback and Performance
1.8. Peer Feedback and Performance
1.8.1. Peer Feedback in the Domain of Writing
1.9. Teacher vs. Peer Feedback
1.10. Instructional Interventions to Foster Peer Feedback Effectiveness
1.11. Error Correction, Revision, and Learning
1.12. Statement of the Problem
1.13. Research Questions
1.14. Research Hypotheses
1.15. Significance of the Study
1.16. Scope of the Study
1.17. Definition of Key Terms
CHAPTER TWO Review of the Literature
2.1. Effectiveness of Teacher's Feedback
2.1.1. The Impact of Peer and Teacher Feedback
2.1.2. Student Views of Peer and Teacher Feedback
2.2. The Pedagogical Effectiveness of Peer Reviews
2.3. Surveys of Teachers’ Assessment Practice
2.4. Academic Writing
2.5. Empirical Studies
2.5.1. Studies Comparing Different Types of Corrective Feedback
2.6. Design Issues
2.7. Feedback and Performance
2.8. L2 Writers in the Writing Center
2.9. Responding to Language in L2 Writers’ Texts
2.10. Impact of Subject Matter on Writing Performance
2.10.1. Effective Practices in Assessing and Providing Feedback on Classroom Writing
2.11. Students’ Perceptions of Feedback by Teachers
2.12. Repeated Feedback
2.13. Context and Overview of Bitchener (2008)
2.14. Response to Xu’s Critique of Bitchener (2008)
2.15. The Relative Effectiveness of Explicit and Implicit Written Corrective Feedback
CHAPTER THREE Methodology
3.4. Data Analysis
Previous research has shown that corrective feedback on an assignment helps learners reduce their errors during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction, suggesting a need for empirical investigation. In this study, two groups of EFL learners were asked to write an in-class narrative. Their papers were collected and returned to them in the next session. Half of the students had their errors underlined and used this feedback in the revision task, while the other half did the same task without feedback. Results matched those of previous studies: the underlined group was significantly more successful than the control group. Both groups were then taught identically for nine sessions. In the 12th session, however, the students were asked to write the same narrative they had produced in the first and second sessions as a measure of long-term learning. On this measure, the two groups were virtually identical. Thus, successful error reduction during revision is not a predictor of learning, as the two groups differed dramatically on the former but were indistinguishable on the latter. Improvements made during revision are not evidence of the effectiveness of correction for improving learners’ writing ability in the long run.
Keywords: Feedback, Written Response, Error Correction.
Written response to writing, while considered a common form of writing instruction, has not been a central theoretical concern for research (Phelps, 2000). Although such feedback is intended to improve student learning in writing, ironically, most studies of written feedback have been conducted outside of a pedagogical context (Fife & O’Neill, 2001) or, indeed, of any theoretical or communicative frame (Huot, 2002). Research has largely focused on describing characteristics of the response and student interpretation of, and attitudes to, such response. With some exceptions, there is a lack of work that considers the interactive and contextual nature of response or work that considers response in relation to writing outcomes. This research locates written response as an instructional act of feedback within the theory of formative assessment (often now known as assessment for learning). The study defines the dimensions of response likely to make it effective in relation to this concept of assessment and feedback for learning, and tests the idea that a teacher’s ability to provide quality, written feedback for learning is an important component of teacher practice in writing by examining its relationship to student progress in writing.
Providing written feedback to writers is presented in the literature as a problematic practice. Comments on students’ drafts are seen, in general, as not effective in improving writing (Hyland, 2000; Muncie, 2000). Generally, it seems that the nature of feedback influences impact. Certain sorts of feedback, like that focusing on personal qualities, can impede learning by shifting focus from instructional to social goals (Kluger & DeNisi, 1998), while outcomes-focused feedback (e.g. a grade) seldom provides sufficient information to advance learning. The nature of feedback can also encourage surface versus deep learning. Feedback that focuses on the correctness of content in a domain generally contains insufficient information to affect the development of knowledge construction, whereas feedback directed at deeper learning may trigger forms of cognitive processing such as searching for relationships or developing knowledge to elaborate information (Balzer, Doherty, & O’Connor, 1989).
Studies on the nature of written response to writing have shown that college teachers’ comments tend to focus on low level, technical concerns, rather than on meaning-making (e.g. Connors & Lunsford, 1993; Sommers, 1982). It has also been reported that school teachers similarly give excessive consideration to surface features, particularly with regard to revision (Hargreaves & McCallum, 1998). Teachers have been portrayed as unable to articulate deeper, rhetorical concerns (Schwartz, 1984). Although several of the studies documenting the nature of teachers’ responses have been criticized as having methodological weaknesses (Ferris, 1997), findings regarding surface level feedback have been replicated more recently (Stern & Solomon, 2006).
However, this research has largely neglected the influence of context (Huot, 2002), treating the texts that teacher–responders create as if they stand alone, ignoring the perspective that the meaning of text will be constructed differently depending on the ‘discourses’ brought to bear on the text by the reader (Murphy, 2000). In particular, theoretical and cultural orientations affect interpretations (Ball, 1997); teachers also respond to extra-textual features within the context of the classroom (Wyatt-Smith, Castleton, Freebody, & Cooksey, 2003). Research adopting a more contextualized view (for example, by Weigle, Bildt, & Valsecchi, 2003) suggests that the characteristics of the writing task and of respondents, in particular the conventions and emphasis of a discourse community, play a part in influencing the criteria used to evaluate (and, presumably, to respond to) writing. Similarly, there is some evidence from research into the writing of primary students that it is amount and type of feedback that predicts the quality of final drafts (Matsumura, Patthey-Chavez, Valdes, & Garnier, 2002).
An additional consideration in evaluating the impact of feedback concerns the potential for action suggested by the feedback. This potential is compromised if students, as reported, have difficulty in processing teacher written response (Zellermayer, 1989), or are confused, misinterpret feedback or cannot decipher comments (e.g. Nelson, 1995; Richardson, 2000; Sperling & Freedman, 1987). Further, the comments may not invite student response. Huot, for example, argues that response should be more transformative, the comments “open-ended, forcing students back into the text” (2002, p. 132). Feedback that is corrective rather than designed to foster development is unlikely to engage students. If students are not critically involved, the likelihood of their acting on feedback, or of the evaluative process becoming internalized and having effect beyond the current piece, is lessened (Muncie, 2000). Not only do students often find written feedback unhelpful regarding what to do, but it is also not seen as useful as a catalyst for discussion (Maclellan, 2001). Arguably, if messages are complex or difficult to decipher, students need opportunities to construct, through discussion, an understanding of them (Higgins, Hartley, & Skelton, 2001). The non-developmental and non-dialogic nature of much of the reported practice may contribute to student reaction to feedback and the reported lack of impact.
Assessment for learning is a pedagogical context designed to promote learning and student engagement (Black & Wiliam, 2009). Feedback is a key element. Assessment for learning is designed to provide information about student performance that can be used to support learning and to modify teaching (Black & Wiliam, 1998; Shepard, 2005; Torrance & Pryor, 1998). Using assessment information in this way can improve the quality of teaching and learning outcomes (Black, Harrison, Lee, Marshall, & Wiliam, 2003; Black & Wiliam, 1998). While early formative assessment discourse focused on the role of teachers in gathering information and using it to inform their teaching, more recently there has been a re-conceptualization. Formative assessment has been reframed as a social, collaborative activity, aligned more with learning (Black, McCormick, James, & Pedder, 2006; Gardner, 2006). The emphasis has shifted to the teacher and the students, working in partnership (Hawe, Dixon, & Watson, 2008) to enhance student learning.
Realizing the benefits of assessment for learning requires that teachers help their writers to understand what the goals of learning are and provide opportunities for them to have feedback on progress towards such goals. The learners’ understanding of the quality of performance aimed for, what success in a task looks like, and what they might do to achieve it is directly related to the instruction and feedback received (Black & Wiliam, 1998). Effective feedback not only helps learners to evaluate where they are but provides them with an indication of where to proceed next and how best to accomplish this forward movement (Hattie & Timperley, 2007). Writers need response in the form of feedback not only for monitoring their progress and moving forward but also as a means of discovering their readers’ needs (Zellermayer, 1989).
Implementing assessment for learning in classrooms is challenging (Hall & Burke, 2003; Torrance & Pryor, 2001). It requires a change in classroom culture and expansive learning on the part of the teacher (Webb & Jones, 2009). Research literature suggests that the practices associated with assessment for learning, in general, make a far greater demand on a teacher’s expertise than where judgments are simply about a student’s ability in relation to others (Marshall & Drummond, 2006). Numerous researchers contend that knowledge of the subject, particularly from the point of view of teaching it to others (the pedagogical content knowledge discussed by Shulman, 1986, 1987), is important in order to respond appropriately to students while they are engaged in learning. In formative assessment terms, teachers need to notice a ‘gap’ (Sadler, 1989) in student performance between the current and the desired; they need to recognize contradictions in student learning (Ball & Bass, 2000; Cowie & Bell, 1999; Newton & Newton, 2001). They then need to work out how to move forward (Black & Wiliam, 1998) in order to effect improvement. Without a sound grasp of subject matter, from the point of view of teaching others, teachers are not able to formulate effective comments or questions. They are less able to anticipate where students may have conceptual difficulties or to develop support for learners to take the next learning steps, to scaffold that learning (Jones & Moreland, 2005; Shepard, 2005; Twiselton, 2000).
In writing, a problematic issue concerns the body of subject matter or content knowledge that a teacher needs to know and transform in order to teach it. As Phelps and Schilling (2004) point out with respect to reading, it is not a discipline; there is no single group of scholars who identify what there is to be known about it. In terms of a disciplinary base, Snow, Griffin, and Burns (2008), writing about knowledge to support the teaching of reading, argue that it concerns language structure, its systems and sub-systems, and how it is used, but that teachers need usable knowledge, embedded in practice. Studies have shown that teachers tend to lack basic knowledge of how language works (Moats, 2000; Moats & Lyon, 1996; Wong-Fillmore & Snow, 2002).
In writing, like reading, there is no group of scholars who define what there is to be known about it. Indeed, the field is decidedly ecumenical in terms of epistemology. And writing, as a construct, is complex. As a social and cultural act, it is problematic to specify what ‘develops’ or progresses, what it develops towards (Marshall, 2004), and under what conditions. There are different Discourses (Gee, 1990) that operate when writing in different contexts and discipline areas. With respect to the subject matter knowledge needed to teach writing, there is a dearth of research that suggests what such knowledge might be and how it relates to student learning outcomes. There is a body of work that relates professional learning opportunities with respect to writing to practice and, subsequently, to features of student writing (e.g. research on the National Writing Project, cited in Borko, 2004).
Clearly, teachers of writing need more knowledge than an average, competent adult writer. They need to know, at a conscious level, how texts work to achieve their communicative, rhetorical purposes, including knowledge of the features of text most commonly employed to support writing for a particular purpose. This involves a detailed knowledge of language and of text structures, which might be considered subject matter knowledge. But, to teach this to others also involves the ability to articulate and make accessible to developing writers what is implicit and often at a level below conscious thought. Teachers have to be able to unpack what writers are doing as they engage in the writing process, including the strategies more expert writers use in the complex activity of writing.
This explicit knowledge of language and how texts work and are formed in contexts, together with the knowledge of processes and strategies, needs to be married with knowledge of the developmental trajectory that may operate in learning to write and of the approaches, activities and resources most efficacious to employ with students. Most importantly, in the context of the present study, teachers need to utilize this package of knowledge and apply it in the light of their interpretation of evidence of writing achievement to provide feedback and support to move learning forward.
The role of corrective feedback in the process of learning a foreign language is still much debated, and is closely related to the conception of the role of different kinds of language input in language acquisition (Doughty & Williams, 1998). Input can be defined as ‘‘the language, which the learners hear or read—that is, the language samples to which they are exposed’’ (Allwright & Bailey, 1991, p. 120), while the language the learner produces is referred to as written or spoken (learner) output. In both first and second language acquisition research, a distinction is made between positive and negative input—also referred to as positive and negative evidence. Negative input or evidence is the language input that points out to the learners that their language output is faulty in some way. Positive input is all input that is not negative input (Long & Robinson, 1998).
According to nativist or mentalist theories, which are based on Chomsky’s (1986) ‘Theory of Universal Grammar’ (UG) and which are in the first place designed for understanding first language acquisition, human beings depend on a ‘Language Acquisition Device’ (LAD), a kind of rule learning mechanism in their minds, which allows them to learn language on the basis of the positive input that is available to them. The fact that the input can never comprise the whole language, that the language input is often ‘degenerate’ (Chomsky, 1986), and that native speakers can build an infinite number of new and nevertheless correct sentences on the basis of their exposure to this limited input is considered as proof for the existence of such an LAD (Farrar, 1992; Pinker, 1989). Interaction with other speakers in itself is of no importance, but is seen as a means to be provided with more positive input. Consequently, the role of negative evidence in the sense of corrective feedback is rather limited within these theories, whose advocates reason that although parents hardly ever explicitly correct grammatical mistakes, children still manage to acquire their first language. The question is whether this also holds true for FL learning in a classroom environment.
As opposed to the nativist or mentalist theories on language learning, interactionist theories argue that it is interaction with other speakers which fosters language learning (Ellis, 1994). Classroom research, when concerned with the investigation of classroom interaction, is only possible from an interactionist perspective (Long, 1983; van Lier, 1988). Since corrective feedback by its nature only occurs as embedded in interaction, these theories leave an opportunity for negative input to play a part in language learning. However, this depends of course on how corrective feedback is defined. If corrective feedback is seen as explicit error correction, the nativists might be right, since research has shown that such correction is indeed very rare in everyday situations outside the classroom. Parents hardly ever say things like ‘‘no, that’s wrong’’ in response to the form of children’s utterances. This led researchers to look for features of the interaction that included negative feedback in less direct ways (Farrar, 1992; Gordon, 1990). Long (1996, p. 429) has characterized such negative input as recasts which occur ‘‘in the form of implicit correction immediately following an ungrammatical learner utterance’’. A recast can be defined as a reformulation of all or part of a child’s utterance minus the grammatical error but without changing the meaning. One condition for recasts to foster FL learning is, therefore, that they should be embedded in the process of ‘negotiation of meaning’ and, in this way, do not interrupt the flow of discourse (Long & Robinson, 1998).
Negotiation of meaning can be defined as ‘‘exchanges between learners and their interlocutors as they attempt to resolve communication breakdowns and to work toward mutual comprehension’’ (Pica, Holliday, Lewis, & Morgenthaler, 1989, p. 65). It is considered as a feature of real language use, because it is both a characteristic of mother and child dyads in first language acquisition and of native–non-native speaker dyads in true conversation (Long, 1983, 1996). Interestingly, it has been observed that corrective feedback often serves as the starting point for the negotiated interaction (Long, 1983; Lyster & Ranta, 1997). Moreover, several empirical studies (Long, Inagaki, & Ortega, 1998; Mackey & Philp, 1998) suggest that, when recasts embedded in negotiations of meaning are also considered a form of corrective feedback, negative input seems to play a role in natural FL learning, which is not to be neglected. The function and role of such corrective feedback in the FLC is quite a different issue, however (see Lyster, 1998).
The role of corrective feedback in instructed FL learning should be situated within the current discussion on the role of focusing on the form of the FL. Stern (1990, 1992) discusses FLT in terms of the relationship between the concepts of ‘experience’ and ‘analysis’. Experiential FLT mainly focuses on the meaning conveyed in the FL, while analytic FLT for the most part focuses on the form of the FL itself. Corrective feedback should be attributed to the analytic side of the experiential vs. analytic FLT continuum, because the focus is on the foreign code (Stern, 1990, 1992). Stern defines analytic FLT as ‘teaching about communication’ instead of ‘teaching through communication’ (Stern, 1990, p. 94), which is typical for experiential FLT. The experiential approach is represented, among others, by advocates of content-based education, because here the focus is on content or the message instead of on form or medium, the content being another school subject (e.g. history or geography) and not the foreign language itself. According to Stern (1990, p. 103), this FL learning context should create ‘‘conditions for real language use and, […], true conversation’’. Yet, one important feature of traditional FL classrooms seems to be a lack of such true meaningful interaction (Ellis, 1994; Lochtman, 1997; van Lier, 1988; Wolff, 1997).
Although the learners who took part in the immersion programs were very fluent in the FL, several studies have shown that meaningful interaction on its own is not enough for students to achieve high levels of accuracy (e.g. Lyster, 1994; Swain, 1985). It is now suggested that some focus on form is also needed in order for learners to improve their accuracy (Long & Robinson, 1998; Lyster & Ranta, 1997). It has been proposed that foreign language students should also ‘notice the gap’ between their erroneous output and the target language (Lightbown, 2001; Schmidt & Frota, 1986; Swain, 1998) in order to convert the FL input to intake, a first condition for FL learning. Ellis (1994, p. 708) defines intake as ‘‘that portion of the input that learners notice and therefore take into temporary memory’’. As Schmidt (1995, p. 20) puts it in the ‘noticing hypothesis’: ‘‘what learners notice in input becomes intake for learning’’. Corrective feedback could provide such ‘noticing’ and/or ‘comprehensible output producing’ opportunities. The question that remains is what type of corrective feedback is (most) effective.
A first type of oral corrective feedback to be discussed is the recast, which with regard to FLT could be defined as ‘‘the teacher’s reformulation of all or part of a student’s utterance, minus the error’’ (Lyster & Ranta, 1997, p. 48). The psycholinguistic idea behind it is that FL learners make an immediate cognitive comparison between their own erroneous utterance and the target language, recast by the discourse partner (Doughty & Varela, 1998; Long et al., 1998; Mackey & Philp, 1998; Saxton, 1997). In order to be able to make such a cognitive comparison, it is commonly assumed that learners should note the feedback in the input. Whether recasts are a salient enough type of corrective feedback is still much debated (Lightbown, 2001).
Lyster and Ranta (1997) introduced the concept of ‘negotiation of form’ as opposed to the concept of ‘negotiation of meaning’ (Long, 1983; Lyster, 2002) in FLT. Negotiation of form in classroom interaction can only occur when the teacher initiates a correction move, i.e. indicates that there is a formal error, and the learner is left the opportunity to correct his or her own error. When the learner actually corrects or tries to correct the error, negotiation of form has occurred. When there is no reaction after the teacher initiated a possible learner correction, no negotiation has taken place (Lyster & Ranta, 1997).
The students’ immediate responses to oral corrective feedback are defined as learner uptake. As Lyster and Ranta (1997, p. 52) put it, ‘‘uptake […] refers to a student’s utterance which immediately follows the teacher’s feedback and which constitutes a reaction in some way to the teacher’s intention to draw attention to some aspect of the student’s initial utterance (this overall intention is clear to the student although the teacher’s specific linguistic focus may not be)’’. Negotiations of form in corrective feedback may create opportunities for FL learning, because they may help foreign language learners to ‘‘notice the gap’’ (Schmidt & Frota, 1986) between their own utterance and the target language and may help them to produce accurate output (Lyster & Ranta, 1997; Swain, 1985, 1998) in the process of meaningful interaction.
Narciss (2008) defines feedback as ‘‘all post-response information that is provided to a learner to inform the learner on his or her actual state of learning or performance’’ (p. 127) and differentiates between external (e.g., peer or teacher) and internal (the learner) sources of feedback. Feedback can have a strong positive effect on learning under certain conditions (see Bangert-Drowns et al., 1991; Kluger & DeNisi, 1996); however, effects can be absent or even negative depending on the instructional conditions.
Feedback research not only addresses whether feedback improves learning, but also how feedback improves learning. Mory (2003) discusses four perspectives on how feedback supports learning. First, feedback can be considered as an incentive for increasing response rate and/or accuracy. Second, feedback can be regarded as a reinforcer that automatically connects responses to prior stimuli (focused on correct responses). Third, feedback can be considered as information that learners can use to validate or change a previous response (focused on erroneous responses). Finally, feedback can be regarded as the provision of scaffolds to help students construct internal schemata and analyze their learning processes.
Apart from these perspectives on how feedback supports learning, the type of feedback varies considerably as well. Sometimes feedback is mere ‘‘knowledge of performance’’ (e.g., percentage of correctly solved tasks), ‘‘knowledge of result’’ (e.g., correct/incorrect) or ‘‘knowledge of correct response’’ (e.g., the correct answer to the given task), whereas in other cases it includes elaborated information strategically useful for task completion (e.g., ‘‘Do this/Add that/Avoid this’’, without giving the answer) or explanations for error correction (e.g., ‘‘Your response is incorrect, because…’’) (Narciss, 2008). Feedback messages differ in the volume of the elaborated informational component, and this appears to be related to their effectiveness in altering performance (Narciss & Huth, 2006).
Peer feedback is provided by equal status learners and can be regarded both as a form of formative assessment, the counterpart of teacher feedback (Topping, 1998), and as a form of collaborative learning (Van Gennip, Segers, & Tillema, 2010; Webb, 1991). Taking the perspective of formative assessment, the main difference between teacher and peer feedback is that peers are not domain experts, as opposed to teachers. As a consequence the accuracy of peer feedback varies. Peer judgments or advice may be partially correct, fully incorrect or misleading. Moreover, the peer assessor is usually not regarded as a ‘‘knowledge authority’’ by an assessee, leading to more reticence in accepting a peer’s judgment or advice (Hanrahan & Isaacs, 2001; Strijbos, Narciss, & Dünnebier, 2010).
Nevertheless, peer feedback can be beneficial for learning, which might even be due to the difference from teacher feedback (Topping, 1998), since the absence of a clear ‘‘knowledge authority’’ (e.g., the teacher) alters the meaning and impact of feedback. Bangert-Drowns et al. (1991) argue that ‘‘mindful reception’’ is crucial for the instructional benefits of feedback, and this might be stimulated through the uncertainty induced by a peer’s relative status. In the study by Yang, Badger, and Yu (2006) revision initiated by teacher feedback was less successful than revision initiated by peer feedback, probably because peer feedback induced uncertainty. Teacher feedback was accepted as such, but proved to be associated with misinterpretation and miscommunication, whereas reservations regarding the accuracy of peer feedback induced discussion about the interpretation. Students’ reservations prompted them to search for confirmation by checking instruction manuals, asking the teacher, and/or performing more self-corrections. As a result, students acquired a deeper understanding of the subject. In contrast, teacher feedback lowered students’ self-corrections, perhaps because students assumed that the teacher had addressed all errors and that no further corrections were required (Yang et al., 2006).
In addition to stimulating the ‘‘mindful reception’’, peer feedback may also increase the frequency, extent and speed of feedback for students while keeping workload for teachers under control. Involving students in the assessment process increases the number of assessors and feedback opportunities. Although the accuracy might be lower compared to teacher feedback, this can be considered an acceptable trade-off for increased follow-up of students’ progress (Gibbs & Simpson, 2004).
Peer feedback as part of first language composition classes (L1 writing) has yielded beneficial effects. In this domain peer feedback is often referred to as ‘‘peer review’’ or ‘‘peer assistance when writing’’. In a recent meta-analysis Graham and Perin (2007) report a large positive effect size for peer feedback during writing instruction (Grade 4 through high school) when compared to students writing individually. Some studies have found peer comments to be as effective as teacher comments (Cho & Schunn, 2007, in the case of single peer feedback; Gielen, Tops, Dochy, Onghena, & Smeets, 2010), or even enhance performance beyond teacher feedback (Cho & Schunn, 2007, in the case of multiple peer feedback; Karegianes, Pascarella, & Pflaum, 1980).
Nevertheless, peer feedback is not always as effective as teacher feedback. Tsui and Ng (2000) found that teacher feedback was more often incorporated in revisions than peer comments when students received both peer and teacher comments. Students also perceived teacher comments as more useful, but the impact of comments on the quality of final assignments was not examined. The study outlines some of the problems associated with peer comments, such as depth of the feedback, accuracy and credibility. However, these appear to be more present in second language classes than first language classes (Nelson & Murphy, 1993).
A series of studies by Cho et al. (Cho, Chung, King, & Schunn, 2008; Cho & MacArthur, 2010; Cho, Schunn, & Charney, 2006) revealed qualitative differences in peer and expert feedback. Experts provide more ideas and longer explanations and typically include less praise, whereas peers’ comments request more clarification and elaboration. Yet, students (as opposed to other experts) perceive peer and expert feedback as equally helpful, given a blinded source. In addition, there were qualitative differences in the type of initiated revisions. Feedback by multiple peers induced more complex repairs (compared to teacher or single peer feedback) and extension of content (compared to single peer feedback). Expert feedback induced more simple repairs than single peers, but these had no effect on writing quality after revision. Complex repairs improved writing quality, whereas adding new content had a negative influence.
There are various perspectives on peer feedback quality. A first perspective defines peer feedback quality in terms of accuracy, consistency across assessors and/or concordance with teacher feedback (see Van Steendam, Rijlaarsdam, Sercu, & Van den Bergh, 2010). Examples of quality criteria for peer feedback from this perspective are (a) the number of errors detected from the total number of errors; (b) the number of errors accurately and completely corrected and justified out of the total number of errors, and (c) a holistic score for the correctness, exhaustiveness and explicitness of peer comments. This definition originates from a summative view on peer assessment, where scoring validity and reliability are leading concepts. However, from an interventional point of view it is problematic, because - even if more accurate feedback is assumed to be better than inaccurate feedback - peers are not experts. Peer assessors are inevitably novices, unless peer assessment is transformed into cross-level peer tutoring.
A second perspective defines peer feedback quality in terms of content and/or style characteristics. The advantage of this approach is that such characteristics are not domain- and/or task-specific, thus teaching students to focus on content and style characteristics results in a generic skill transferable to other settings. Examples of this perspective are the studies of Kim (2005), Sluijsmans et al. (2002), and Prins, Sluijsmans, and Kirschner (2006).
Kim (2005) studied the relationship between feedback composition (in terms of four characteristics listed in the first column of Table 1) and performance. She considered feedback as constructive when marks and comments for each content criterion were present and supported by a rationale and a revision suggestion. However, no performance increase was observed for assessees who received this type of high-quality peer feedback. Kim (2005) argues that this might have been due to the limited variance in peer feedback quality (SD = 0.53) and/or assessees’ skepticism toward their peers’ ability as assessors, preventing students from internalizing peer feedback. This lack of internalization might have decreased the impact of constructive feedback on performance. Hence, peer feedback quality might still be important for performance, provided that assessees are stimulated to apply the feedback.
It should be noted that the skepticism toward peers’ ability is precisely the argument used by Yang et al. (2006) to explain a higher impact of peer feedback compared to teacher feedback. The arguments by Kim (2005) and Yang et al. (2006) appear to be contradictory; however, assessees’ reservations in Yang et al.’s (2006) study prompted them to initiate discussion and self-correction, which in turn led to successful revisions. Performance improvement was not directly related to feedback composition (and thus the quality of peer feedback), as was the focus in the study by Kim (2005), but rather to the critical attitude of the assessee toward peer feedback.
Another example of the content and style perspective is the study by Sluijsmans et al. (2002). They extracted characteristics of constructive peer feedback from expert assessment reports and identified seven key characteristics and associated criteria (see second column of Table 1). Contrary to Kim (2005), Sluijsmans et al. (2002) adopted an interventional perspective and examined how students can be instructed to apply the key characteristics more frequently in their feedback. However, the relationship between the feedback characteristics and effectiveness of the peer feedback was not investigated.
A third example is the study by Prins et al. (2006). They compared the style and quality of peer feedback by general practitioners in training to that of expert trainers, but they did not relate style or quality to performance. Prins et al. (2006) developed the Feedback Quality Index, an elaboration of the form used by Sluijsmans et al. (2002). However, instead of counting the number of criteria used and the number of comments or certain words present, the index evaluates the presence of a set of necessary ingredients, each with a certain weight applied. Although the Feedback Quality Index is derived from expert feedback reports and grounded in learning theories, the contribution of the identified feedback characteristics to performance was not empirically tested.
Studies on instructional interventions to enhance the effectiveness of peer feedback have focused thus far on the impact of (a) indicators to clarify evaluation criteria (Orsmond, Merry, & Reiling, 2002), (b) the number of peer assessors by comparing a single peer assessor versus multiple peer assessors (Cho & Schunn, 2007), (c) the training of peer assessors in assessment skills (Sluijsmans et al., 2002), (d) methods of teaching students how to provide peer feedback (Van Steendam et al., 2010), (e) the matching principles for peers (Van den Berg, Admiraal, & Pilot, 2006), and (f) teacher support for peer feedback (Van den Berg et al., 2006). All studies have in common that the focus mainly lies on the assessor.
Instructional interventions to raise peer feedback quality include, for example, the use of directed questions (such as “Did the assessee cover all relevant topics?”) to stimulate comments on assessment criteria (Miller, 2003) and sentence openers (such as “I propose to…” / “I think that…”) to promote task-focused and reflective interaction between learners (Baker & Lund, 1997). Yet these types of interventions might have negative motivational effects in the long run, because they can interrupt the natural interaction process by enforcing the use of the same communication structures on all occasions. Another type of intervention used to raise the quality of peer feedback is training students to adopt specific quality criteria.
A third type of intervention is the use of a quality control system that rewards or sanctions assessors for the quality of their feedback (Bloxham & West, 2004; Kali & Ronen, 2008; Searby & Ewers, 1997) or that filters unreliable feedback (Cho & MacArthur, 2010; Rada & Hu, 2002). In line with Gibbs and Simpson (2004), however, it is not sufficient to focus on the type or quality of peer feedback to foster its effectiveness; the assessee’s response should be addressed as well. A fourth type of intervention, aiming both at raising feedback quality and at the response to it, is the adoption of an “a priori question form” (assessees formulate their feedback needs; see, e.g., Gielen et al., 2010) combined with a feedback form prompting the assessor to address these needs in the feedback. Such an intervention may enhance both “individual accountability” and “positive interdependence” (Slavin, 1989), and motivate and guide assessors to provide “responsive” feedback (Webb, 1991). It may also result in more appropriate feedback (Webb, 1991) and promote “mindful reception” (Bangert-Drowns et al., 1991), that is, make assessees feel more personally addressed and subsequently more inclined to apply the feedback. Finally, a fifth type of intervention to foster the use of feedback is an “a posteriori reply form” (the assessee reflects on and replies to the assessor’s comments; see, e.g., Gielen et al., 2010; Kim, 2005). The “a posteriori reply form” stimulates students to reflect on the peer feedback they received and to demonstrate how they used it in their revisions, closing the feedback loop (Boud, 2000; Webb & Mastergeorge, 2003).
Prior to 1996, when Truscott claimed that written corrective feedback (error correction on L2 student writing) is ineffective and harmful, the assumption that corrective feedback helps L2 writers improve the accuracy of their writing had not been challenged. In fact, as Truscott (1996, 1999) and Ferris (1999) explained, research evidence was limited in terms of the range of studies that had attempted to address the question of efficacy and in terms of the quality of the research design. Although more than a decade has now passed and considerable debate has been presented in journal articles and conference papers, limited research has been undertaken on this key issue. In considering what has been published, it is clear that a conclusive answer to the question will not be possible unless researchers make a concerted effort to conduct well-designed studies that examine over time the effectiveness of different corrective feedback options on new pieces of writing, comparing them with the texts of students who do not receive corrective feedback. An attempt at addressing these needs lay behind the design and execution of the study reported in this article.
The aim of the study was twofold: (1) to investigate whether targeted corrective feedback on ESL student writing results in improved accuracy in new pieces of writing over a 2-month period and (2) to investigate whether there is a differential effect on accuracy for different corrective feedback options. Each component of these aims was carefully selected to address the design limitations of earlier studies and thereby provide a more robust platform from which answers to both questions might be sought. Thus, the study focuses on one targeted error category (two functional uses of the English article system) rather than a myriad of error categories. Secondly, it examines longitudinally, by means of a pre-test/post-test design, the effectiveness of corrective feedback on new pieces of writing within the same genre rather than single or multiple text revisions across different genres. Thirdly, it incorporates a control group (one that does not receive corrective feedback) so that its error ratios can be compared with those of the treatment groups. Fourthly, it investigates the extent to which three under-researched direct feedback options, typically offered in L2 classrooms, determine accuracy performance: direct corrective feedback; direct corrective feedback plus written meta-linguistic explanation; and direct corrective feedback plus written and oral meta-linguistic explanation.
Since Truscott published his 1996 article, ‘‘The case against grammar correction in L2 writing classes’’, debate about whether and how to give L2 students feedback on their written grammatical errors has been of considerable interest to researchers and classroom practitioners (Ferris, 1999, 2002, 2004; Truscott, 1996, 1999). On several grounds, Truscott (1996) claimed that grammar correction has no place in writing courses and should be abandoned. From an analysis of studies by Kepner (1991), Semke (1984) and Sheppard (1992), he concluded that there is no convincing research evidence that error correction ever helps student writers improve the accuracy of their writing. For two major reasons, he explained that this finding should not be surprising. On the one hand, he argued that error correction, as it is typically practiced, overlooks SLA insights about the gradual and complex process of acquiring the forms and structures of a second language.
On the other hand, he outlined a range of practical problems related to the ability and willingness of teachers to give and students to receive error correction. Moreover, he claimed that error correction is harmful because it diverts time and energy away from the more productive aspects of a writing program. Not surprisingly, these claims have since generated a considerable amount of vigorous debate at international conferences and in published articles (Ellis, 1998; Ferris, 1999; Ferris & Hedgcock, 1998; Truscott, 1999). Championing the case against Truscott’s firmly held position, Ferris (1999) claimed that his arguments were premature and overly strong given the rapidly growing research evidence pointing to ways in which effective error correction can and does help at least some student writers, providing it is selective, prioritized and clear. While acknowledging that Truscott had made several compelling points concerning the nature of the SLA process and practical problems with providing corrective feedback, Ferris maintained that the evidence he cited in support of his argument was not always complete. As Chandler (2003) also points out, Truscott did not always take into account the fact that reported differences need to be supported with statistically significant evidence. In addition, Ferris maintained that there were equally strong reasons for teachers to continue giving feedback, not the least of which is the belief that students have regarding its value. However, she did accept that it is necessary to consider ways of improving the practical issues highlighted by Truscott. Despite his call for the abandonment of error correction, Truscott (1999), in his response to Ferris, acknowledged that many interesting questions remain open and that it would be premature to claim that research has proven error correction can never be beneficial under any circumstances. 
However, he suggested that researchers and teachers should acknowledge that grammar correction is, in general, a bad idea until future research demonstrates that there are specific cases in which it might not be a totally misguided practice. Agreeing with the future research focus proposed by Ferris (1999), he suggested that attention be given to investigating which methods, techniques, or approaches to error correction lead to short-term or long-term improvement and whether students make better progress in monitoring for certain types of errors than others.
It is generally agreed today that revision plays a central role in good writing, in terms of both content and form. Not surprisingly, then, a considerable amount of research has been devoted to exploring issues regarding the revision process (see, for example, Ferris, 2006; Goldstein, 2006; Sachs & Polio, 2007). Our present concern, however, is with a small subset of these studies, those that investigated the effects of teachers’ form-based feedback on learners’ success in the revision process (Ashwell, 2000; Fathman & Whalley, 1990). In these experiments learners were asked to revise their writing, some with the benefit of written error correction and some without, and the effects of feedback were measured by the extent to which they managed to improve the accuracy of their essays. An additional study, that of Lee (1997), is sometimes included in this category, though it differed from the others in that the task did not involve revision of the learners’ own writing but rather identification, classification, and correction of errors implanted in a newspaper text.
One clear conclusion from this research is that teachers’ corrections do indeed help learners reduce their errors: the revised manuscripts of learners who received it showed significantly more improvement in accuracy than those of learners who did not receive it. This research also produced a number of interesting findings on related issues, such as the relation between feedback on form and feedback on content or the timing of the two types. Our concern, however, is not with these results but rather with the question of how research on revision relates to broader issues regarding the role of error correction in writing classes.
In particular, we are concerned with the differing positions that have appeared in the literature on the use of grammar correction as a teaching device and with the place of revision research in evaluations of these positions. One view (see especially Truscott, 1996, 1999, 2007) holds that correction makes little or no contribution to the development of accuracy in writing, possibly even harming the learning process, and therefore has no place in writing classes. The other (see especially Ferris, 1999, 2003, 2004) takes a more favorable view of correction, recommending its use in writing instruction. Not surprisingly, the two sides in this debate have offered very different interpretations of the revision research, and these differences have become an important part of the debate. The disagreement is about whether the findings substantiate learning, which we will define as improvement in learners’ ability to write accurately. We will not be concerned here with the question of whether such improvement represents new knowledge or simply priming of existing knowledge; the issue is simply whether learners become better writers as a result of the treatment.
View #1: Error reduction during revision is not a measure of learning
Truscott (1996, 1999, 2007) took the position that the revision research had no implications for the issue he was addressing: the point of his “case against grammar correction” was that correction is not useful as a teaching device, whereas the revision research concerned its usefulness specifically as an editing tool, a way to improve a particular manuscript. Evidence of its value in this function does not, the argument goes, constitute evidence of its value for learning, because evidence on learning necessarily involves a comparison between two independently written texts. On this view, the revision research is not relevant to the case against grammar correction. In brief, there are two opposing views about the utility of error correction and revision.
View #2: Error reduction during revision is a measure of learning
A sharply contrasting view has been offered by a number of other authors, for whom the revision research is an integral part of efforts to understand the value of correction as a teaching tool. They claim that leaving learners to themselves, without providing sufficient corrective feedback, stops or slows down the improvement of accuracy in their writing.
Our study is an effort to fill the need for research of this sort. We began with an in-class narrative writing task; the papers were collected, marked, and returned to the learners in the next session. In that session, the learners were asked to reproduce the same narrative, taking advantage of the feedback they had received on their first draft, while a control group did the same task without receiving any feedback. After both groups had been taught identically for the following 7 sessions, they were asked to reproduce the same narrative they had written in the first and second sessions. A comparison of the two groups’ grammatical improvement served as a long-term measure of learning from the corrective feedback they had received.
Learners usually commit a variety of errors in their writing. How these errors are dealt with plays an important role in the improvement of students’ writing. The researcher theorizes that the effect of providing corrective feedback on students’ writing is not the same for short-term and long-term learning. The results of this research will hopefully help teachers decide how and when they should provide corrective feedback on students’ written errors.
Given the limitations of previous research in this field, the researcher believes it is necessary to conduct a study whose results will help Iranian EFL teachers tolerate learners’ errors and address them non-offensively, in order to help learners minimize the number of errors in their writing.
The writer aims to discover the effects of short-term and long-term corrective feedback among Iranian EFL learners, in order to identify the kind of feedback that Iranian teachers can select and adapt for their students to improve their writing. The writer believes that quickly correcting students’ written errors (typically a cycle in which the student commits an error, the teacher immediately provides the correct oral or written form, and the student immediately imitates that correction) cannot serve as a measure of long-term learning. Consequently, the writer intends to test this claim by conducting an experiment with two groups of Iranian EFL learners in a TOEFL writing course.
Subsequently, the same experiment will be replicated with two further groups of Iranian EFL learners, so that by the end of the study it will have been carried out three times with three matched groups. Having done so, the writer will be able to make a clearer judgment about short-term and long-term feedback as a measure of learning among Iranian EFL learners.
The following are among the issues this study aims to explore:
1. Does corrective feedback play an important role in teaching essay writing to Iranian EFL learners?
2. Does teachers’ corrective feedback produce the same short-term and long-term improvement in essay writing?
Based on the above questions, the following hypotheses can be formulated:
1. Corrective feedback plays an important role in teaching essay writing to Iranian EFL learners.
2. The effect of teachers’ corrective feedback on learners’ essay writing is not the same for short-term and long-term learning.