Automated Essay Evaluation Systems and Their Use in EFL Writing Curricula in Chinese Universities: Expectations, Comparisons and Perceived Effectiveness
By incorporating three AEE (Automated Essay Evaluation) systems into a writing course at China Agricultural University, this paper examines students' expectations of AEE, their comparisons of AEE with human rating, and the perceived effectiveness of AEE systems in improving EFL (English as a Foreign Language) writing. The study applied AHP (Analytic Hierarchy Process) together with data collection and analysis over an 11-week span, during which students wrote and submitted four essays—one every two weeks—on the websites of the three AEE systems. After all essays were submitted, two experienced teacher-raters were invited to evaluate the first and fourth essays. Meanwhile, all participants were encouraged to interact by cross-reviewing each other's essays and responding to reviews, using the three AEE websites as academic social networking tools. The results show that students at large held high expectations of AEE systems, evaluated AEE functions more favorably than human rating, and demonstrated marked improvement in writing ability over the course of the project. The results also suggest that social interaction positively moderates participants' cognitive comparison between expectations and perceived outcomes. By providing practical insight into the application of AEE systems in EFL writing classrooms in Chinese universities, both as rating tools and as knowledge-sharing social networks, the study offers implications for substantial pedagogical redesign.