Journal of Language Learning and Assessment
http://e-journal.naifaderu.org/index.php/jlla
<h1><strong style="font-size: 14px;">Call for Papers</strong></h1> <p>The <em>Journal of Language Learning and Assessment</em> welcomes papers within the scope of language education, language learning, methods and approaches of language learning, language identity, cultural identity, assessment and testing, assessment of language learning, language test accountability, language proficiency assessment and language achievement testing, and language and gender.</p>
Naifaderu Cipta Sejahtera
en-US
Journal of Language Learning and Assessment
3031-5115
An Exploratory Factor Analysis of the Strategy Inventory for Language Learning (SILL) in Ukrainian Philology Students
http://e-journal.naifaderu.org/index.php/jlla/article/view/122
<p>Purpose</p> <p>This study examines the factor structure of the Strategy Inventory for Language Learning (SILL) in a sample of English- and Ukrainian-speaking Ukrainian college students. Abundant research supports the pedagogical belief that second language learners' use of learning strategies is strongly associated with language learning success. Worldwide, the SILL is the most popular instrument for assessing these strategies; however, studies evaluating the psychometric properties of the scale have offered mixed results.</p> <p>Methodology</p> <p>This study used exploratory factor analysis (EFA) to examine the factor structure of the SILL with a sample of 193 Philology students enrolled in a large university in northeastern Ukraine.</p> <p>Results/Findings</p> <p>Results failed to support the original six-factor structure put forward by Oxford; instead, the data were best characterized by a two-factor model. Our refinement of Oxford’s 50-item SILL yielded a condensed, 38-item, two-factor revision of the scale that is more psychometrically defensible for the assessment of second language learner characteristics. To reflect this more psychometrically sound revision, the modified scale is called the SILL-KK.</p> <p>Implications</p> <p>First, this study addresses criticisms of the dearth of SILL psychometric validation research. Second, it responds to theoretical concerns raised in the literature regarding strategy category overlap. From a practical standpoint, the SILL-KK will hopefully lead to more accurate diagnosis of language learning strategies for both students and instructors, as well as the design and development of better individualized instructional materials.</p>
Joseph Kush
Alla Krasulia
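The two-factor EFA solution described above can be sketched in a few lines. This is an illustrative toy example on simulated Likert-style responses, not the authors' data or exact procedure; the item blocks, loadings, and use of scikit-learn's `FactorAnalysis` with varimax rotation are all assumptions for demonstration.

```python
# Illustrative sketch: two-factor EFA on simulated Likert-style
# responses (NOT the study's data or analysis pipeline).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_items = 193, 38  # sample size and item count from the abstract

# Simulate two latent strategy factors, each driving one block of items.
latent = rng.normal(size=(n_students, 2))
loadings = np.zeros((2, n_items))
loadings[0, :19] = 0.8   # factor 1 loads on items 1-19 (hypothetical split)
loadings[1, 19:] = 0.8   # factor 2 loads on items 20-38
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(responses)

print(scores.shape)          # (193, 2): one score per student per factor
print(fa.components_.shape)  # (2, 38): estimated loadings per item
```

In practice the number of factors would be chosen from eigenvalues, scree plots, or parallel analysis rather than fixed in advance.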
Copyright (c) 2025 Journal of Language Learning and Assessment
2025-12-31
2025-12-31
50-61
10.71194/jlla.v3i2.122
The Implementation of Project-Based Learning to Enhance Computational Thinking Ability for Primary School Students
http://e-journal.naifaderu.org/index.php/jlla/article/view/139
<p>Purpose</p> <p>This study examines the efficacy of Project-Based Learning (PBL) in enhancing Computational Thinking Ability (CTA) among fifth-grade students in Guilin, Guangxi, China, addressing a critical gap in primary education by integrating PBL with computational pedagogy. Grounded in Papert’s (1980) constructionist theory and Vygotsky’s (1978) sociocultural theory, the research aims to compare CTA development through PBL versus traditional methods, offering insights into innovative teaching strategies aligned with the Curriculum Standards for Information Technology in Compulsory Education (2022) and global digital literacy demands (OECD, 2019).</p> <p>Methodology</p> <p>A quasi-experimental design was employed, involving 60 fifth-grade students from a primary school in Guilin, divided into an experimental group (n = 30) using PBL and a control group (n = 30) using traditional teaching. The 7-week intervention, conducted from May to July 2025 with 14 class hours, utilized pre- and post-tests, observational checklists, and teacher interviews to assess CTA across decomposition, abstraction, pattern recognition, and algorithmic thinking. Data were analyzed using paired- and independent-samples t-tests.</p> <p>Results/Findings</p> <p>(1) The experimental group’s total CTA mean score increased significantly from 63.43 (SD = 2.81) to 90.74 (SD = 2.43), with effect sizes (Cohen’s d) ranging from 3.36 to 10.32, indicating substantial within-group improvement; (2) Compared to the control group’s post-test mean of 63.79 (SD = 2.85), the experimental group outperformed with a mean difference of 26.95 (t(58) = 39.45, p < 0.001, d = 10.01), confirming PBL’s superiority; (3) Among CTA dimensions, pattern recognition showed the largest gain (mean difference = 7.52, d = 8.91), while algorithmic thinking showed a smaller effect (d = 2.95), suggesting variability due to limited coding experience. Qualitative feedback supported PBL’s collaborative benefits, though time and resource challenges were noted.</p>
Li Ying
Jiraporn Chano
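The reported between-group comparison can be recomputed directly from the summary statistics in the abstract (group means, SDs, and n = 30 per group). This is a verification sketch assuming an equal-variance pooled standard deviation, not the authors' analysis script; small discrepancies from the reported t(58) = 39.45 and d = 10.01 are expected from rounding in the published means and SDs.

```python
# Recompute the independent-samples t statistic and Cohen's d from the
# summary statistics reported in the abstract (not the raw study data).
import math

m_exp, sd_exp, n_exp = 90.74, 2.43, 30   # experimental group post-test
m_ctl, sd_ctl, n_ctl = 63.79, 2.85, 30   # control group post-test

# Pooled standard deviation across the two groups.
sp = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
               / (n_exp + n_ctl - 2))

mean_diff = m_exp - m_ctl                          # 26.95, as reported
t = mean_diff / (sp * math.sqrt(1 / n_exp + 1 / n_ctl))
d = mean_diff / sp                                  # Cohen's d (pooled SD)

print(round(mean_diff, 2), round(t, 2), round(d, 2))
```

The recomputed t and d land within rounding distance of the reported values, which is a quick consistency check any reader can run on published summary statistics.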
Copyright (c) 2025 Journal of Language Learning and Assessment
2025-12-31
2025-12-31
62-71
10.71194/jlla.v3i2.139
The Complex Interplay of Factors and Strategies on Second Language Acquisition
http://e-journal.naifaderu.org/index.php/jlla/article/view/140
<p>Purpose</p> <p>This study investigates the methods learners use to acquire second languages and the factors that affect Second Language Acquisition (SLA).</p> <p>Methodology</p> <p>The study collected data from two participants via semi-structured interviews: one conducted via Zoom and the other face-to-face.</p> <p>Results/Findings</p> <p>The study found that SLA results from a combination of personal and environmental factors, including age of acquisition, motivation, attitude, self-confidence, and anxiety, along with the teacher's role, technology use, social environment, and family support. Students utilized cognitive, metacognitive, social, and affective learning approaches that proved successful and appeared unique to them.</p> <p>Implications</p> <p>The findings indicate that second language acquisition is a complex and individualized process that demands contextually appropriate instruction along with emotional, social, and technological support.</p>
Muh. Hasrun Fitriady
Riska Alifiah Zalsabila
Ramlah Ramlah
Lia Maisyaratul Hayah
Copyright (c) 2025 Journal of Language Learning and Assessment
2025-12-31
2025-12-31
72-82
10.71194/jlla.v3i2.140
Investigating the Impact of ChatGPT on Creativity Among Literature and Non-English Majors in Bangladeshi Private Universities
http://e-journal.naifaderu.org/index.php/jlla/article/view/141
<p>Purpose</p> <p>The rapid rise of artificial intelligence tools, particularly ChatGPT, has transformed academic learning and creativity among university students. This study explores the comparative impact of ChatGPT usage on creativity among literature students and non-English majors. Specifically, it examines how students use ChatGPT, the challenges they face, and the effects of AI reliance on creative thinking and problem-solving.</p> <p>Methodology</p> <p>A mixed-methods approach was adopted, utilizing a structured questionnaire to collect data from both groups. The quantitative component, a structured survey, measured usage patterns, perceived benefits, challenges, and the perceived impact on creativity. The qualitative component, drawing on open-ended survey responses, captured deeper insights into students’ experiences, helping to explain the patterns observed in the quantitative data. This approach allowed for triangulation, enhancing the validity and reliability of the study’s findings.</p> <p>Results/Findings</p> <p>Findings suggest that while literature students tend to use ChatGPT as a tool to enhance idea generation and critical thinking, non-English majors are more likely to depend on it for ready-made answers, potentially limiting their creative engagement. Common challenges identified include inaccurate outputs, difficulty in formulating effective prompts, and occasional over-reliance on AI assistance. The study proposes a “Differential AI-Mediated Creativity” framework, highlighting that the impact of AI on creativity is mediated by students’ academic background, motivation, and engagement.</p> <p>Implications</p> <p>These insights emphasize the need for guided AI integration in academic settings, enabling students to harness ChatGPT effectively without compromising independent thinking. The research contributes to understanding how AI tools can be leveraged to foster creativity while minimizing dependency across diverse academic disciplines.</p>
Tasleem Ara Ashraf
Copyright (c) 2025 Journal of Language Learning and Assessment
2025-12-31
2025-12-31
83-91
10.71194/jlla.v3i2.141
Developing and Scaling a CEFR-Aligned English Placement Test for University Level: A Psychometric Validation Study
http://e-journal.naifaderu.org/index.php/jlla/article/view/135
<p>Purpose<br>This study reports on the development, scaling, and validation of the AIU-STEP, a CEFR-aligned English placement test designed for incoming university students. The aim was to create a psychometrically robust instrument capable of accurately classifying learners across CEFR levels and supporting institutional decisions related to course placement and curriculum planning.</p> <p>Methodology<br>The test, consisting of reading, grammar, writing analysis, and listening components, was administered to two large student cohorts (N = 1,942 in 2024/2025; N = 2,662 in 2025/2026). Analyses included descriptive statistics, reliability estimation, item difficulty and discrimination indices, exploratory factor analysis, and ROC-based standard setting to establish cut scores linked to CEFR bands.</p> <p>Results/Findings<br>The AIU-STEP demonstrated strong psychometric properties, with reliability coefficients ranging from .72 to .93 across subtests and .95–.96 for the full test. Most items fell within optimal difficulty and discrimination ranges, and cut scores remained highly stable across administrations. CEFR distributions revealed an upward shift in proficiency in the second cohort, particularly at the C1 and C2 levels. Factor analysis confirmed a clear four-factor structure aligned with the test's intended constructs.</p> <p>Implications<br>Findings indicate that the AIU-STEP is a valid, reliable, and scalable tool for CEFR-aligned placement in higher education. The test provides accurate classification across proficiency levels, supports data-driven curriculum placement, and offers a model for institutional adoption of CEFR-based assessment. Ongoing validation and periodic recalibration are recommended to maintain long-term alignment and responsiveness to changing student proficiency profiles.</p>
Eslam Yacoub
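ROC-based standard setting, as used above to link cut scores to CEFR bands, can be sketched as follows. This is a minimal illustration on simulated scores, not the AIU-STEP's actual scoring data or procedure; the group means, sample sizes, and the choice of Youden's J as the optimality criterion are all assumptions for demonstration.

```python
# Illustrative sketch of ROC-based standard setting: pick the cut score
# that best separates "masters" from "non-masters" of a proficiency band
# (simulated data; not the AIU-STEP itself).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Simulated total scores: non-masters centered at 55, masters at 75.
scores = np.concatenate([rng.normal(55, 8, 500), rng.normal(75, 8, 500)])
is_master = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, thresholds = roc_curve(is_master, scores)
cut = thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = TPR - FPR
print(round(cut, 1))  # cut score near the midpoint of the two groups
```

In an operational setting the "master" labels would come from an external criterion (e.g. teacher ratings or a benchmark CEFR exam), and one such analysis would be run per band boundary.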
Copyright (c) 2026 Journal of Language Learning and Assessment
2025-12-31
2025-12-31
92-104
10.71194/jlla.v3i2.135