Investigating the impact of digital tools on written second language output
Individual paper | writing | 01:45 PM - 03:45 PM (Europe/Amsterdam) | 2022/08/25 11:45:00 UTC - 2022/08/25 13:45:00 UTC
Digital translation tools (e.g. DeepL, Google Translate) and online dictionaries have become an indispensable part of language use. Several issues regarding multilinguals' use of these tools deserve scholarly attention. In our talk, we present evidence from an ongoing project that tackles two questions. First, we investigate how texts produced with the help of these tools compare to texts produced without them in terms of linguistic features (lexical, syntactic, and textual levels). Second, we aim to shed more light on the conditions under which second language users learn anything in and about the target language by using these tools. The talk focuses mainly on the former question.

We present evidence from an experimental study with learners enrolled in vocational training for business, management, and services. We investigate how digital translation aids are used in text production and compare the text quality (lexical diversity and sophistication, syntactic complexity, and readability) of written texts in English as a foreign language produced by Francophone and German-speaking participants. To this end, participants write texts related to a professional topic. The first text is written without any assistance from digital tools (baseline text); the second text is produced in one of three conditions: 1) without digital translation tools, 2) with digital translation tools, or 3) with digital translation tools and an introductory lesson on how to use the tools efficiently. The online behavior during writing is logged with the software Inputlog (Leijten & Van Waes, 2013). Pilot data are available, and the main data collection takes place from February to April 2022, with 278 participants currently signed up for the study.

The lexical, syntactic, and textual metrics mentioned above are compared across the conditions. The main analyses are still to be done, but the available pilot data suggest that using digital tools leads to shorter and slightly more accurate texts, with no substantial differences in lexical and syntactic diversity and complexity. We will include the number of switches to electronic resources and the time spent on writing and revising the text versus on the online platforms as covariates in the analysis of the text differences. Furthermore, a between-group analysis examines the impact of pedagogical guidance on the use of these online tools. We conclude our talk by briefly presenting our plans for the second stage of the project, in which we will investigate vocabulary learning as a function of different degrees and types of involvement load (Laufer & Hulstijn, 2001) and of the extent to which learners notice lexical holes (Vos et al., 2019).

References
Laufer, B., & Hulstijn, J. (2001). Incidental vocabulary acquisition in a second language: The construct of task-induced involvement. Applied Linguistics, 22(1), 1–26.
Leijten, M., & Van Waes, L. (2013). Keystroke Logging in Writing Research. Written Communication, 30(3), 358–392.
Vos, J. F. D., Schriefers, H., & Lemhöfer, K. (2019). Noticing vocabulary holes aids incidental second language word learning: An experimental study. Bilingualism: Language and Cognition, 22(3), 500–515.
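As an illustration of the analysis outlined above (comparing text metrics across the three writing conditions with the Inputlog-derived process measures as covariates), here is a minimal Python sketch. The file name, column names, and the choice of an ordinary least-squares model are assumptions for illustration only, not the authors' actual analysis pipeline.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per text, with the writing condition, one text-quality
# metric, and the Inputlog-derived covariates (switches to electronic resources,
# time spent writing/revising).
df = pd.read_csv("texts_by_condition.csv")

# Compare the metric across the three conditions while controlling for the covariates.
model = smf.ols(
    "lexical_diversity ~ C(condition) + tool_switches + writing_time",
    data=df,
).fit()
print(model.summary())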
Presenters: Raphael Berthele, Scholar, Université de Fribourg
Co-authors: Isabelle Udry, Research Manager, Institute of Multilingualism (University of Fribourg)
Evaluating lexical diversity of Korean as a second language learners’ writing using NLP tools
Individual paper | vocabulary | 01:45 PM - 03:45 PM (Europe/Amsterdam) | 2022/08/25 11:45:00 UTC - 2022/08/25 13:45:00 UTC
Indices of lexical diversity (LD), or the variety of words, are commonly used in L2 writing assessment (e.g., Engber, 1995). The simplest LD index is the type-token ratio (TTR; Johnson, 1944): the number of types divided by the number of tokens. However, because of its sensitivity to text length, indices such as MATTR and MTLD have been developed to measure the variety of words used in a text while minimizing text length effects (e.g., McCarthy & Jarvis, 2007). Previous research on Korean as a second language (KSL) learners has examined the use of LD in writing assessment (e.g., Choi & Jeong, 2016), but this work has been limited in two respects. First, it mostly analyzed TTR as the primary LD index, neglecting text length effects. Second, it used small numbers of texts. Given the availability of a wide range of LD indices and of large learner corpora, more research into these issues is warranted.

In this study, we evaluate twelve established LD indices for their correlations with learners' L2 proficiency levels. The data comprise a sample of 4,208 argumentative essays extracted from the learner corpus of the National Institute of Korean Language. The proficiency levels of the KSL learners ranged from 3 to 6, and the number of tokens per essay ranged from 80 to 480. The corpus was preprocessed and tokenized with six different tokenizers from the KoNLPy Python package, including Hannanum. Finally, we computed the LD indices by adapting code developed for English writing assessment (Kyle et al., 2021), taking the syntactic differences of Korean into account.

The results indicate that MATTR (Level: r = .29***) and MTLD (Level: r = .28***) reflected the learners' proficiency while minimizing text length effects, although the figures varied slightly depending on how the different tokenizers analyzed the morphemes. Specifically, the number of types was the most highly correlated with proficiency level (r = .32***), but it was extremely sensitive to text length; TTR was the least correlated with proficiency level (r = .14***) and was also susceptible to text length. Implications and limitations related to tokenization choices and how they affect the calculation of LD indices will be discussed.
References
Choi, W., & Jeong, H. (2016). Finding an appropriate lexical diversity measurement for a small-sized corpus and its application to a comparative study of L2 learners' writings. Multimedia Tools and Applications, 75(21), 15–22.
Engber, C. A. (1995). The relationship of lexical proficiency to the quality of ESL compositions. Journal of Second Language Writing, 4(2), 139–155.
Johnson, W. (1944). Studies in language behavior 1: A program of research. Psychological Monographs, 56, 1–15.
Kyle, K., Crossley, S. A., & Jarvis, S. (2021). Assessing the validity of lexical diversity indices using direct judgements. Language Assessment Quarterly, 18(2), 154–170.
McCarthy, P. M., & Jarvis, S. (2007). vocd: A theoretical and empirical evaluation. Language Testing, 24(4), 459–488.
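To make the tokenization and the indices concrete, the following is a minimal sketch that tokenizes a short Korean sentence with KoNLPy's Hannanum tagger and computes TTR and MATTR (window size 50). It is an illustrative reimplementation under assumed defaults, not the code the study adapted from Kyle et al. (2021).

from konlpy.tag import Hannanum

def ttr(tokens):
    # Type-token ratio: number of unique tokens divided by number of tokens.
    return len(set(tokens)) / len(tokens)

def mattr(tokens, window=50):
    # Moving-average TTR: mean TTR over overlapping windows of fixed length,
    # which reduces the dependence of the index on overall text length.
    if len(tokens) < window:
        return ttr(tokens)
    windows = [tokens[i:i + window] for i in range(len(tokens) - window + 1)]
    return sum(ttr(w) for w in windows) / len(windows)

essay = "학생들은 한국어로 글을 쓸 때 다양한 어휘를 사용하려고 노력한다."  # toy example
tokens = Hannanum().morphs(essay)  # morpheme-level tokenization
print(ttr(tokens), mattr(tokens))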
Written Corrective Feedback in real time: the why and the how
Individual paper | writing | 01:45 PM - 03:45 PM (Europe/Amsterdam) | 2022/08/25 11:45:00 UTC - 2022/08/25 13:45:00 UTC
The advent of online collaborative editing software (e.g. Google Docs) opens the door to new teaching practices (Bikowski & Vithanage, 2016), such as providing synchronous written corrective feedback (SWCF) while learners are engaged in a writing task. From cognitive-interactionist (Doughty, 2001) and sociocultural (Storch, 2017) perspectives, the provision of SWCF seems conducive to learning. Recent studies have found that immediate oral corrective feedback (CF) seems more effective than delayed CF (Arroyo & Yilmaz, 2018; Fu & Li, 2020). However, online SWCF has so far been examined only in small-scale laboratory studies (Shintani, 2016; Shintani & Aubrey, 2016), so its implications for classroom practice remain limited. This study attempts to fill that gap by exploring the potential of SWCF for L2 learning and the perceptions of students and teachers regarding this practice in classroom settings.
Participants were learners from three intact groups from B1 to C1 level (N = 75) taking a university-level French L2 course. They took part in two remote collaborative writing tasks in small groups (3-4 students per group) while teachers (N = 3) provided them with CF in real time using Google Docs. During the writing tasks, students worked in breakout rooms of a videotelephony platform that allowed them to see and talk to each other. The students' and teachers' screens were recorded, as were the interactions between the students. Teachers and students took part in separate focus groups to share their experiences, and students also answered an online questionnaire.
We will present and discuss the number, type, and target of the CF provided, the delays between CF and students' reactions, the percentage of correct uptake, and the language-related episodes occurring after SWCF. Results also show that students and teachers had a positive attitude towards the experience, underlining the perceived benefits for L2 learning and their willingness to engage in this type of practice in the future.
Presenters: Kevin Papin, Assistant Professor, Université du Québec à Montréal
Affordances of a multimodal project on children’s FL writing in a CLIL science class
Individual paper | writing | 01:45 PM - 03:45 PM (Europe/Amsterdam) | 2022/08/25 11:45:00 UTC - 2022/08/25 13:45:00 UTC
While research into early second language writing has accumulated considerable evidence describing how young English language learners (ELLs) engage in academic literacy in mainstream classrooms in Australia or the USA, much less is known about how children learn through writing within a content- and language-integrated (CLIL) curriculum in instructed foreign language (FL) contexts. At the same time, CLIL writing research in Europe has focused primarily on comparing the written performance of high school students enrolled in either traditional EFL or CLIL-plus-EFL classrooms. As a result, information on the development of academic literacy skills and content knowledge among young FL learners is extremely limited. The present study aims to advance our understanding of children's FL writing within a primary school science unit by exploring the impact of genre-based pedagogy on children's written explanations.

Over the course of three weeks, a grade 4 CLIL teacher implemented a multimodal teaching unit on simple and complex machines with two intact classes of 9-10-year-olds. After guided instruction, which combined the introduction of conceptual scientific content with literacy awareness-raising activities, the children, working either individually or in pairs, produced an illustrated design of their own complex machine and a video recording of an oral explanation. The children then produced a handwritten explanation during a live Zoom session with the researchers. The results of a functional analysis of the children's written language use (Fang & Schleppegrell, 2008) and an intermodal analysis of their drawings and videos revealed developmental patterns in their academic language competence (Meyer et al., 2015). In-group variability was also found to be more important than writing modality (individual or collaborative) in determining the quality of the children's texts. Conclusions are drawn for the teaching of writing in CLIL classrooms with younger learners.