
Debate Arises Over Automated Grading of Student Essays

According to USA Today, the Educational Testing Service (ETS) recently presented evidence suggesting that its computer testing program can grade freshman writing placement tests as well as human beings can. The study examined tests administered at the New Jersey Institute of Technology.
The computer grading program, E-Rater, evaluated short essays completed for the SAT writing portion. The ETS found that the human grades and the E-Rater grades corresponded strongly, according to the news source. Researcher Chaitanya Ramineni noted that "human scoring suffers from flaws."
Many companies and colleges use, or plan to use, the E-Rater tool to grade placement tests. Last November, for instance, Turnitin integrated E-Rater into its GradeMark tools to give students more detailed feedback, according to a report on eschoolnews.com.
However, Les Perelman, the director of MIT's writing across the curriculum program, has emerged as a firm opponent of the automated grading tool. According to the news source, he feels the service punishes student creativity and encourages test-takers to choose trite vocabulary over clear, concise language. For instance, because E-Rater examines the ratio of grammar and mechanical errors to the total number of words, longer essays, even poorly written ones, may score higher.
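To see why a ratio-based metric can reward length, consider a minimal sketch in Python. The scoring function below is a deliberate simplification invented for illustration; E-Rater's actual scoring model is proprietary and weighs many more features than this.

def error_ratio_score(word_count: int, error_count: int) -> float:
    """Score an essay by the share of words that are error-free.

    This is a hypothetical stand-in for a ratio-based metric,
    not E-Rater's real formula.
    """
    return 1.0 - (error_count / word_count)

# A short, clean essay: 250 words with 5 errors.
short_essay = error_ratio_score(word_count=250, error_count=5)

# A long, sloppier essay: 800 words with 12 errors.
long_essay = error_ratio_score(word_count=800, error_count=12)

print(f"short essay: {short_essay:.3f}")  # 0.980
print(f"long essay:  {long_essay:.3f}")   # 0.985

Despite containing more than twice as many errors, the longer essay earns the higher score under this metric, because its errors are diluted across a larger word count. That is the length bias Perelman describes.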
The answer most likely lies somewhere between the two positions. Essay writers are best served by combining their own careful review of their work with an automated grammar and spelling check.
