IMPROVING PEDAGOGICAL MECHANISMS IN ASSESSING SPEAKING AND WRITING IN ENGLISH
Abstract:
This article explores ways to improve pedagogical mechanisms for assessing speaking and writing skills in English. Drawing on the theoretical approaches of O’Sullivan (2012) and Weigle (2012), it analyzes how to enhance the reliability, validity, and practical effectiveness of the assessment process. In assessing speaking, task design, interlocutor influence, the test-taker’s personal characteristics, and assessor subjectivity are treated as significant factors. Diversifying task formats (interviews, monologues, role plays, and group discussions) is emphasized as an effective means of eliciting the full range of learners’ communicative competence. Regular training and calibration of assessors are likewise highlighted as essential for consistent and objective scoring. In writing assessment, maintaining a balance among grammatical accuracy, vocabulary richness, and the logical organization of ideas is of central importance. Analytical rating scales, such as the FSI model, allow learners’ strengths and weaknesses to be identified in greater detail. Fairness in the assessment process, achieved through gender and age balance among interlocutors and equal testing conditions, further underpins the validity and reliability of test results. The article concludes that implementing modern pedagogical mechanisms for assessing speaking and writing plays a crucial role in improving the overall quality of language teaching.
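
As a complement to the abstract’s points on analytical rating scales and rater calibration, the minimal Python sketch below shows how a weighted analytical rubric score and a simple chance-corrected inter-rater agreement check (Cohen’s kappa) might be computed. The criterion names, weights, and 0–5 band scale are illustrative assumptions, not values drawn from the article or the FSI scale.

from collections import Counter

# Hypothetical analytical rubric: criterion -> weight.
# The criteria mirror those named in the abstract; the weights
# are illustrative assumptions, not values from the article.
RUBRIC = {
    "grammatical_accuracy": 0.4,
    "vocabulary_richness": 0.3,
    "organization": 0.3,
}

def composite_score(band_scores: dict) -> float:
    """Weighted composite of per-criterion band scores (assumed 0-5 scale)."""
    return sum(RUBRIC[c] * band_scores[c] for c in RUBRIC)

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters' band scores."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Example: two raters score the same five essays on a 0-5 band scale.
rater_a = [4, 3, 5, 2, 4]
rater_b = [4, 3, 4, 2, 4]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # prints kappa = 0.71

essay = {"grammatical_accuracy": 4, "vocabulary_richness": 3, "organization": 5}
print(f"composite = {composite_score(essay):.1f}")  # 0.4*4 + 0.3*3 + 0.3*5 = 4.0

In a calibration workflow of the kind the abstract describes, a kappa value falling below some agreed threshold could trigger a retraining session for the rater pair; any such threshold would be a design choice, not a value from the cited literature.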
References:
Barkaoui, K. (2007) ‘Participants, text, and processes in ESL/EFL essay tests: A narrative review of the literature’, Canadian Modern Language Review, 64(1), 99–134.
Crusan, D. (2010) Assessment in the Second Language Writing Classroom. Ann Arbor: University of Michigan Press.
Douglas, D. (2000) Assessing Languages for Specific Purposes. Cambridge: Cambridge University Press.
Foster, P. and Skehan, P. (1999) ‘The influence of source of planning and focus of planning on task-based performance’, Language Teaching Research, 3(3), 215–247.
Hamp-Lyons, L. and Condon, W. (2000) Assessing the Portfolio: Principles for Practice, Theory, and Research. Cresskill, NJ: Hampton Press.
Kroll, B. and Reid, J. (1994) ‘Guidelines for designing writing prompts: Clarifications, caveats, and cautions’, Journal of Second Language Writing, 3(3), 231–255.
McNamara, T. and Lumley, T. (1997) ‘The effect of interlocutor and assessment mode variables in overseas assessments of speaking skills’, Language Testing, 14(2), 140–156.
O’Sullivan, B. (2000) ‘Exploring gender and oral proficiency interview performance’, System, 28(3), 373–386.
O’Sullivan, B. (2002) ‘Learner acquaintanceship and oral proficiency test pair-task performance’, Language Testing, 19(3), 277–295.
O’Sullivan, B. (2012) ‘Assessing Speaking’, in Coombe, C., Davidson, P., O’Sullivan, B. and Stoynoff, S. (eds.) The Cambridge Guide to Second Language Assessment. Cambridge: Cambridge University Press, 234–246.
Shaw, S. and Weir, C. (2007) Examining Writing: Research and Practice in Assessing Second Language Writing. Cambridge: Cambridge University Press.
Weigle, S.C. (2002) Assessing Writing. Cambridge: Cambridge University Press.
Weigle, S.C. (2012) ‘Assessing Writing’, in Coombe, C., Davidson, P., O’Sullivan, B. and Stoynoff, S. (eds.) The Cambridge Guide to Second Language Assessment. Cambridge: Cambridge University Press, 218–224.
Wilds, C.P. (1975) ‘The oral interview test’, in Jones, R.L. and Spolsky, B. (eds.) Testing Language Proficiency. Arlington, VA: Center for Applied Linguistics.

