- Presentation abstract
This presentation will summarize an ongoing research project on computerized scoring of speech from L2 learners of English using a new Moodle quiz question type. The question type employs Google's ASR engine to transcribe students' speech so that it can be compared to a target phrase and awarded a score based on how closely the phonemes in the student's transcribed speech match the phonemes of the target phrase. Examples of both extensive and intensive computer-scored speaking tasks used in the study will be provided, along with a correlation analysis focusing on the relationships between students' standardized language test scores, speaking scores derived from a series of computer-scored speech tasks, and speaking scores from human-rated presentations. Participants will be given access to a demo Moodle course where they can create their own sample speaking tasks. The speaking assessment question type is open source and can be downloaded from GitHub.com.
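The abstract describes scoring speech by comparing the phonemes of the ASR transcription against those of the target phrase. One common way to quantify such a match is edit-distance similarity over phoneme sequences; the sketch below illustrates that idea. The ARPAbet symbols, the normalization formula, and the 0-100 scale are illustrative assumptions, not the plugin's actual algorithm:

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def phoneme_score(target, transcribed):
    """Return a 0-100 similarity score between two phoneme sequences."""
    if not target and not transcribed:
        return 100.0
    dist = levenshtein(target, transcribed)
    return max(0.0, 100.0 * (1 - dist / max(len(target), len(transcribed))))

# Hypothetical ARPAbet phonemes: target "cat" vs. a learner rendering closer to "cut"
target = ["K", "AE", "T"]
heard = ["K", "AH", "T"]
print(round(phoneme_score(target, heard), 1))  # one vowel substitution out of three phonemes
```

A sequence-level metric like this rewards partial matches, so a single mispronounced vowel lowers the score proportionally rather than failing the whole phrase.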
- Original submission
Title: Computer-scored speaking activities in Moodle
Type: Presentation (20 mins)
Language: English
Keywords: Speech recognition, computer-scored speech
Abstract: This presentation will summarize an ongoing research project on computerized scoring of L2 learner speech using a new Moodle quiz question type. The question type employs Google's ASR engine to transcribe students' speech so that it can be compared to a target phrase and awarded a score based on how closely the phonemes in the student's transcribed speech match the phonemes of the target phrase. Examples of both extensive and intensive computer-scored speaking tasks used in the study will be provided, along with a correlation analysis focusing on the relationships between students' standardized language test scores, speaking scores derived from a series of computer-scored speech tasks, and speaking scores from human-rated presentations. Participants will be given access to a demo Moodle course where they can create their own sample speaking tasks. The speaking assessment question type is open source and can be downloaded from GitHub.com.
- Peer review details
Peer Review 1
| Criteria | Assessment |
|---|---|
| Clarity of Submission | 8 / 10 |
| Presentation Length | 5 / 10 |
| Originality of Submission | 7 / 10 |
| Appropriateness & Relevance to the Moot | 9 / 10 |
| Quality of Content & Writing | 8 / 10 |
| Overall evaluation | 40 / 50 |
| | 77 / 100 |
Feedback: Sharing links with the audience is a good idea. I think the presentation length should be longer to accommodate the audience should they have questions about using the tools. 20 minutes is probably too short. If you skip audience participation, then maybe you've got enough time.
Peer Review 2
| Criteria | Assessment |
|---|---|
| Clarity of Submission | 9 / 10 |
| Presentation Length | 7 / 10 |
| Originality of Submission | 10 / 10 |
| Appropriateness & Relevance to the Moot | 10 / 10 |
| Quality of Content & Writing | 9 / 10 |
| Overall evaluation | 45 / 50 |
| | 90 / 100 |
Feedback: Transcribing and assessing student speech is an interesting and important topic. I would like to hear more about it. Meanwhile, three remarks come to mind:
1. Time: Can this topic be sufficiently addressed in 20 minutes? Maybe a 40-minute presentation would be better.
2. Language: The abstract does not mention anything about the language that is being transcribed. Is this limited to English?
3. Universality: A score is awarded on how closely the student speech matches the phonemes from the target phrase. What is considered as a correct phoneme? With English as a lingua franca, pronunciation has become more varied. Accordingly, what is "the standard" to work with?
- Peer review notes
Thanks for your submission!
Your proposal has been conditionally accepted.
- For this submission to be fully accepted, please make the requested changes to your abstract/presentation before 2021 Feb 1 (Mon) 23:55.
- When the changes have been made, they will be reviewed and you will be notified of the new acceptance status.