Archive for July, 2012

Listening is one of the most pivotal skills that language learners need to improve, and thanks to technological tools such as MP3/MP4 players, websites such as YouTube, and language learning software companies such as TELLMEMORE, language learners and teachers have almost no difficulty accessing listening materials, whether authentic or created for language learning purposes. Today’s teachers and students are, without a doubt, luckier than those of the past when it comes to listening, not to mention reading and other skills.

Research on language learners’ L2 listening skills has looked into various aspects of listening such as speed of delivery, note-taking, and background knowledge. East and King (2012) investigate whether slowing down the tempo of IELTS-type listening materials would have any effect on participants’ performance on the given tasks and on their perceptions of how difficult those tasks were.

With this aim in mind, the authors worked with 120 intermediate-level (B1 on the Common European Framework) English language learners in New Zealand. Based on the results of an initial listening test, drawn from published IELTS materials and delivered once at normal speed, the participants were divided into four groups:

a) normal speed (control group)

b) tempo reduced by 15%

c) tempo reduced by 22.5%

d) tempo reduced by 30%.

The same listening test materials were used in all four groups. The independent variable was the speed of delivery, and the dependent variable was each group’s score at the end of the test. In the control group, no change was made to the speed; in the other groups, the listening materials were slowed down using Audacity. The participants were also asked to complete a questionnaire on the speed and difficulty of the listening materials.
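Audacity performs this time-stretching internally, so the following is only my own back-of-the-envelope sketch, not part of the study’s procedure. Assuming a tempo reduction of r% means the audio plays at a rate of (1 − r/100), a clip’s duration grows by the reciprocal of that rate:

```python
def slowed_duration(original_seconds, reduction_percent):
    """Duration of a clip after its tempo is reduced by the given percentage.
    A 15% tempo reduction plays the audio at 0.85x speed, so the clip
    takes 1/0.85 times as long."""
    rate = 1 - reduction_percent / 100
    return original_seconds / rate

# A hypothetical 60-second clip under the study's three experimental conditions:
for r in (15, 22.5, 30):
    print(f"-{r}% tempo: {slowed_duration(60, r):.1f} s")
# -15% tempo: 70.6 s
# -22.5% tempo: 77.4 s
# -30% tempo: 85.7 s
```

This makes the conditions concrete: even the mildest condition (−15%) adds more than ten seconds of listening time per minute of audio.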

The results showed that all the experimental groups performed significantly better than the control group. However, there was no difference among the experimental groups. In other words, slowing the listening materials did affect performance, but the degree of slowing (−15%, −22.5%, −30%) did not lead to any significant difference. Moreover, the questionnaire responses showed that participants in the experimental groups perceived the test as less difficult. As the authors state, the study’s results contradicted those of the studies discussed in the literature review and indicated that the speed of a listening test greatly influences both test results and participants’ perceptions.

I thought as much when I read the results, since my own experience and the discussions I hold with my students clearly show that, although students’ level and listening habits play a great role in their performance on this kind of test, the speed of delivery of any listening material will naturally affect their performance. However, as the article also mentions, a question remains: since high-stakes language exams such as TOEFL and IELTS include authentic listening materials, whose speed of delivery is in no way slowed down or changed, is it a good strategy to expose our students to listening materials whose tempo has been slowed? For beginner students, the answer will most probably be ‘yes’, but what about students at higher levels?

By the way, I have included some links to popular websites that aim to provide learners of English with ample listening practice, as well as some for those who would like to create quizzes using YouTube videos.


East, M., & King, C. (2012). L2 learners’ engagement with high stakes listening tests: Does technology have a beneficial role to play? CALICO Journal, 29(2), 208-223.


Thompson and Lee (2012) provide the following quotation by a freshman composition student at the very beginning of their study:

I can’t tell you how many times I’ve gotten a paper back with underlines and marks that I can’t figure out the meaning of.

What this student says reminds us that any feedback provided to students should be comprehensible enough for them to act on, revising their work in light of the comments and suggestions. However, in most cases, students’ work is returned with numerous marks, comments, and suggestions that, as the authors note, make little sense to the student.

To engage and motivate students more deeply, the authors used screencasting in college-level writing courses taught by two instructors across five sections. Instead of providing written comments, the instructors used Jing to create videos through which they delivered their comments and suggestions, employing a color-coding system. In two of the sections, students were surveyed about the feedback provided on their essay drafts and PowerPoint presentations via the screencasting software. In the other three online sections, an informal survey was administered to inquire about students’ attitudes. The surveys included questions that had students reflect on whether they had achieved the objectives set for the course and on how they perceived receiving recorded videos as feedback rather than written comments and suggestions.

The results of the study revealed that the participants responded positively to the use of Jing for digital feedback and found the tool beneficial. Some of their responses were as follows:

“Video feedback helped me to better improve my work because it was almost like a classroom setting that allowed the teacher to fill in the interaction gaps without actually having an in-class setting. Not only that, the information could be replayed repetitively, allowing me to review them and reflect on them once I need help with my work.”

“It’s great to be able to get the feedback while watching it being addressed on the essay itself.”

“It’s one thing to just read your instructors feedback but to be able to see it and understand what you are talking about really helps!”

“I like it better than normal comments because I can hear your thought process when you are making a comment so it is easier to understand what you’re trying to say.”

“I can see and follow the instructor as she reads through my writing with the audio commentaries. It helps me to pin-point exactly what areas need to be corrected, what is hard to understand, which areas I did well on, and which areas could be improved.”

However, some participants experienced technical issues or did not like this mode of feedback:

“Jing feedback videos and [Dropbox] comments still do not work on my end. I have talked with tech guys and they can’t figure it out. I can’t find out how I did and ways to improve my writing.”

“I like the videos but they were really hard to get them to work.”

“Sometimes it’s hard to open the videos.”

“Personally I don’t like the jing videos. I’d rather have the comments written down so that I can quickly access the notes and not have to keep track of just where in the video a certain comment is.”

The authors offered several explanations for the comments above. The first was that the participants, coming from a ‘print-based culture’, were not used to receiving feedback through videos and did not know how to work with this kind of feedback.

The authors discussed several advantages of digital feedback over the written mode in detail. However, as they also state, one weakness of the study, and a challenge for further research, was that it focused on participants’ attitudes toward digital feedback but could not compare those attitudes with the participants’ actual work. That is, we have no information on whether digital feedback resulted in ‘better’ written essays. The study could not assess the impact of this mode of feedback on participants’ learning, and thus leaves a pivotal question to be answered: does digital feedback improve students’ learning more than the traditional written mode?

Thompson, R., & Lee, M. J. (2012). Talking with students through screencasting: Experiments with video feedback to improve student learning. The Journal of Interactive Technology and Pedagogy, 1.

A list of references on technology and providing feedback

Chiu, C. Y., & Savignon, S. (2006). Writing to mean: Computer-mediated feedback in online tutoring of multidraft compositions. CALICO Journal, 24(1), 97-114.

Denton, P., Madden, J., Roberts, M., & Rowe, P. (2008). Students’ response to traditional and Computer-Assisted formative feedback: A comparative case study. British Journal of Educational Technology, 39(3), 486-500.

Dickinson, M., Eom, S., Kang, Y., Lee, C. M., & Sachs, R. (2008). A balancing act: How can intelligent computer-generated feedback be provided in learner-to-learner interactions? Computer Assisted Language Learning, 21(4), 369-382.

Dippold, D. (2009). Peer feedback through blogs: Student and teacher perceptions in an advanced german class. ReCALL, 21(1), 18-36.

Heift, T. (2001). Error-specific and individualized feedback in a web-based language tutoring system: Do they read it? ReCALL, 13(1), 99-109.

Heift, T. (2004). Corrective feedback and learner uptake in CALL. ReCALL, 16(2), 416-431.

Heift, T., & Rimrott, A. (2008). Learner responses to corrective feedback for spelling errors in CALL. System, 36(2), 196-213.

Hoppe, D., Sadakata, M., & Desain, P. (2006). Development of real-time visual feedback assistance in singing training: A review. Journal of Computer Assisted Learning, 22(4), 308-316.

Issac, F., & Hu, O. (2002). Formalism for evaluation: Feedback on learner knowledge representation. Computer Assisted Language Learning, 15(2), 183-199.

Loewen, S., & Erlam, R. (2006). Corrective feedback in the chatroom: An experimental study. Computer Assisted Language Learning, 19(1), 1-14.

Nagata, N. (1993). Intelligent computer feedback for second language instruction. The Modern Language Journal, 77(3), 330-339.

Özdener, N., & Satar, H. M. (2009). Effectiveness of various oral feedback techniques in CALL vocabulary learning materials. Egitim Arastirmalari-Eurasian Journal of Educational Research, 34, 75-96.

Pramela, K. (2006). The power of feedback in an online learning environment. Journal of Language Teaching, Linguistics & Literature, XII, 95-106.

Pujola, J. T. (2001). Did CALL feedback feed back? Researching learners’ use of feedback. ReCALL, 13(1), 79-98.

Towell, R. (1991). Innovation and feedback in a self-access learning project in modern languages. British Journal of Educational Technology, 22(2), 119-128.

Ros i Solé, C., & Truman, M. (2005). Feedback in distance language learning: Current practices and new directions. In B. Holmberg, M. A. Shelley, & C. J. White (Eds.), Distance education and languages: Evolution and change (pp. 72-91). Clevedon: Multilingual Matters.

Sanz, C., & Morgan-Short, K. (2004). Positive evidence versus explicit rule presentation and explicit negative feedback: A computer assisted study. Language Learning, 54(1), 35-78.

Sotillo, S. M. (2005). Corrective feedback via Instant Messenger learning activities in NS-NNS and NNS-NNS dyads. CALICO Journal, 22(3), 467-496.

Tokuda, N., & Liang, C. (2004). A new ke-free online ICALL system featuring error contingent feedback. Computer Assisted Language Learning, 17(2), 177-201.

Tsutsui, M. (2004). Multimedia as a means to enhance feedback. Computer Assisted Language Learning, 17(3/4), 377-402.

In the field of computer-assisted language learning, the advantages of online platforms have often been enumerated, and many studies claim the superiority of online environments over face-to-face settings. One of the main claimed advantages is that computer-mediated environments provide opportunities for students to participate equally (Chun, 1994; Kern, 1995; Warschauer, 1996); even shy students are reportedly more willing to participate in online discussions. However, the equality of turn distribution has been measured in only a few studies.

In his article, Fitze, referring to previous research, notes that:

“[researchers] have also reported more balanced participation in written electronic as opposed to face-to-face conferences. The term more balanced means that in written electronic conferences, rather than the discussion being dominated by a few members, participation tends to be more equally distributed among participants” (p. 69).

In this study, Fitze (2006) compared face-to-face settings with electronic conferences. The participants were all advanced learners of English as a second language. He summarized the benefits of written electronic conferences: the data collected in these environments displayed a greater lexical range, the students produced discourse demonstrating interactive competence, they were better able to use and practice a wider range of vocabulary, and participation was more balanced. Taking these features into account, he claimed that written electronic conferences were more beneficial for students than face-to-face settings. However, he stated that the equality of participation should be investigated in more detail to find out which variables are effective in maintaining it.

Fitze used the Gini coefficient to measure the equality of participation and found that:

In partial confirmation of these research findings, my analysis revealed that participation was significantly more balanced among students in written electronic conferences. However, when the classes were considered separately, analysis revealed that while for class B, participation in the written electronic conferences was considerably more balanced, for class A, conference setting had almost no impact on the degree to which participation was balanced among students (p. 79).

I believe balanced participation should be considered among the most important features of online environments, because it is often very difficult to control turn distribution in classroom settings: some students attempt to dominate the discussions, while shy students prefer not to talk in front of others. In online discussions, however, you might be surprised to see how freely some students express themselves.

In the final part of this blog post, I would like to show how to calculate and interpret the Gini coefficient.

First, the number of words written by each student is counted, and the counts are ordered from smallest to largest. Then a cumulative column is computed, as shown below.

For example, suppose six students participated in the study, and the numbers of words they produced were 88, 98, 76, 120, 102, and 68. Ordered from smallest to largest, these are 68, 76, 88, 98, 102, and 120.
Next, the cumulative column is computed by summing down the first column: the second value is 68 + 76 = 144, the third is 68 + 76 + 88 = 232, and so on.

Words   Cumulative
68      68
76      144
88      232
98      330
102     432
120     552

The last value in the cumulative column, 552, is T, the total number of words; all but the last value of the cumulative column are summed to give Sigma:

68 + 144 + 232 + 330 + 432 = 1206.

The formula for calculating the Gini coefficient is 1 − ((2 × Sigma / T) + 1) / n. Here, the Gini coefficient is 1 − ((2 × 1206 / 552) + 1) / 6 ≈ 0.1051, which indicates a very equal distribution.

A Gini coefficient below 0.5 is generally interpreted as indicating an equal distribution. The coefficient is mostly used to compare two or more settings in terms of equality, and the setting with the lower score is considered the more equal.
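The whole procedure above (sort, accumulate, sum all but the last cumulative value, apply the formula) can be packed into a few lines of Python; this is just a sketch of the hand calculation, using the same six students:

```python
def gini(word_counts):
    """Gini coefficient of participation, following the cumulative-sum
    procedure described above: sort the counts, build the cumulative
    column, then apply 1 - ((2 * Sigma / T) + 1) / n."""
    counts = sorted(word_counts)      # order from smallest to largest
    cumulative = []
    running = 0
    for c in counts:
        running += c
        cumulative.append(running)
    T = cumulative[-1]                # total words (last cumulative value)
    sigma = sum(cumulative[:-1])      # all but the last cumulative value
    n = len(counts)
    return 1 - ((2 * sigma / T) + 1) / n

# The six students from the worked example:
print(round(gini([88, 98, 76, 120, 102, 68]), 4))  # → 0.1051
```

A perfectly equal class (everyone producing the same number of words) gives 0, so two conference settings can be compared simply by computing this value for each and seeing which is lower.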


Chun, D. M. (1994). Using computer networking to facilitate the acquisition of interactive competence. System, 22 (1), 17-31.

Fitze, M. (2006). Discourse and participation in ESL face-to-face and written electronic conferences. Language Learning & Technology, 10(1), 67-86.

Kern, R. G. (1995). Restructuring classroom interaction with networked computers: Effects on quantity and characteristics of language production. The Modern Language Journal, 79 (4), 457-475.

Warschauer, M. (1996). Comparing face-to-face and electronic communication in the second language classroom. CALICO Journal, 13 (2), 7-26.

As a language teacher, I am always confronted with the same question by almost all of my students: how can I learn vocabulary in English? The answer depends on learners’ aims and needs. That is, if learners are studying for a nationwide high-stakes exam with multiple-choice questions that directly and/or indirectly test vocabulary knowledge, the first answer will be to memorize a list of lexical items frequently asked on the exam, to practice it regularly using traditional means such as flash cards and notebooks, and to read as much as possible. In this mode of study, the focus is on recognition, ignoring pronunciation and the contexts in which words can be used. However, when it comes to vocabulary acquisition for negotiation of meaning and communication, in other words for production ability, everything seems to change, from methodology to the activities to be implemented.

Yanguas (2012) focuses on enhancing fifty-eight third-semester college Spanish students’ L2 vocabulary acquisition through a within-groups experimental design. The study uses Skype for learner-to-learner interaction and investigates whether traditional face-to-face interaction and oral CMC (computer-mediated communication, audio and video) interaction lead to differences in learners’ development of vocabulary knowledge. Furthermore, the article explores participants’ perceptions of the CMC modes. As for the materials used in the study, the participants collaborated on jigsaw tasks, combining and using information to achieve their goals; the task is based on The Amazing Race, a reality television game show. As for assessment, 16 target words were embedded in the jigsaw task to measure the participants’ development in recognition, production, and listening.

Briefly stated, Yanguas’ findings show no statistically significant differences in production and recognition abilities among the three groups: participants completing the tasks through video CMC (VidCMC), through audio CMC (AudCMC), and through face-to-face interaction in class. As the post-test conducted two weeks later indicates, all the groups were able to recognize the target words, with no significant difference. However, there was an interesting finding regarding aural comprehension: the audio CMC group outperformed the other two groups, which the author attributed to the fact that these participants could not rely on visual cues. The results also showed no significant differences among the groups in the development of production or written recognition, which I think deserves particular attention. As for attitudes, most participants highly valued the CMC modes and provided positive feedback.

In the discussion and conclusion section, the author touches upon a very important issue:

The results of this study seem to support the notion that receptive and productive learning processes are different and, as such, learners might need diverse treatments so that these abilities can be developed (p. 523).

I think this is the very point we should focus on. In one way or another, through traditional methods and/or CMC modes, we language teachers seem to have succeeded in helping learners develop their receptive, or recognition, ability. However, we seem to be back at the initial stages when it comes to productive ability. Therefore, we should look for ways to enhance learners’ productive skills through various methods and materials.

Yanguas, I. (2012). Task-based oral computer-mediated communication and L2 vocabulary acquisition. CALICO Journal, 29(3), 507-531.


I have provided some references on vocabulary development and CALL for those interested.

Abraham, L. B. (2008). Computer-mediated glosses in second language reading comprehension and vocabulary learning: A meta-analysis. Computer Assisted Language Learning, 21(3), 199-226.

Acha, J. (2009). The effectiveness of multimedia programmes in children’s vocabulary learning. British Journal of Educational Technology, 40(1), 23-31.

Allum, P. (2004). Evaluation of CALL: Initial vocabulary learning. ReCALL, 16(2), 488-501.

Al-Seghayer, K. (2001). The effect of multimedia annotation modes on L2 vocabulary acquisition: A comparative study. Language Learning & Technology, 5(1).

Baturay, M., Yıldırım, S., & Daloğlu, A. (2009). Effects of web-based spaced repetition on vocabulary retention of foreign language learners. Egitim Arastirmalari-Eurasian Journal of Educational Research, 34, 17-36.

Belz, J. A. (2004). Learner corpus analysis and the development of foreign language proficiency. System, 32(4), 577-591.

Blok, H., Van Daalen-Kapteijns, M. M., Otter, M. E., & Overmaat, M. (2001). Using computers to learn words in the elementary grades: An evaluation framework and a review of effect studies. Computer Assisted Language Learning, 14(2), 99-128.

Braun, S. (2005). From pedagogically relevant corpora to authentic language learning contents. ReCALL, 17(1), 47-64.

Browne, C., & Culligan, B. (2008). Combining technology and IRT testing to build student knowledge of high frequency vocabulary. The JALT CALL Journal, 4(2), 3-16.

Chambers, A. (2007). Integrating corpora in language learning and teaching. ReCALL, 19(3), 249-251.

Christensen, E., Merrill, P., & Yanchar, S. (2007). Second language vocabulary acquisition using a diglot reader or a computer-based drill and practice program. Computer Assisted Language Learning, 20(1), 67-77.

Chun, D. M., & Plass, J. L. (1996). Effects of multimedia annotations on vocabulary acquisition. The Modern Language Journal, 80(2),183-198.

Coll, J. F. (2002). Richness of semantic encoding in a hypermedia-assisted instructional environment for ESP: Effects on incidental vocabulary retention among learners with low ability in the target language. ReCALL, 14(2), 263-284.

Daloğlu, A., Baturay, M., & Yildirim, S. (2009). Designing a constructivist vocabulary learning material. In R. C. V. Marriott & P. L. Torres (Eds.), Research on e-learning methodologies for language acquisition (pp. 186-203). New York: Information Science Reference.

De la Fuente, M.J. (2003). Is SLA interactionist theory relevant to CALL? A study of the effects of computer-mediated interaction in L2 vocabulary acquisition. Computer Assisted Language Learning, 16(1),47-81.

Deridder, I. (2003). Reading from the screen in a second language: Empirical studies on the effect of marked hyperlinks on incidental vocabulary learning, text comprehension and the reading process. Antwerp-Apeldoorn: Garant.

Gabel, S. (2001). Over-indulgence and under-representation in interlanguage: Reflections on the utilization of concordancers in self-directed foreign language learning. Computer Assisted Language Learning, 14(3/4), 269-288.

Gettys, S., Imhof, L., & Kautz, J. (2001). Computer-assisted reading: The effect of glossing format on comprehension and vocabulary retention. Foreign Language Annals, 34(2), 91-106.

Ghadirian, S. (2003). Providing controlled exposure to target vocabulary through the screening and arranging of texts. Language Learning & Technology, 6(1), 147-164.

Goodfellow, R., & Laurillard, D. (1994). Modeling lexical processes in lexical CALL. CALICO Journal, 11(3), 19-46.

Groot, P. (2000). Computer assisted second language vocabulary acquisition. Language Learning & Technology, 4(1), 60-81.

Guillory, H. G. (1998). Retention of word meanings inferred from context and sentence-level translations: Implications for the design of beginning-level CALL software. The Modern Language Journal, 82(4), 533-544.

Hill, M., & Laufer, B. (2003). Type of task, time-on-task, and electronic dictionaries in incidental vocabulary acquisition. IRAL, 41(2), 87-106.

Hirata, Y., & Hirata, Y. (2007). Independent research project with web-derived corpora for language learning. The JALT CALL Journal, 3(3), 33-48.

Horst, M., Cobb, T., & Nicolae, I. (2005). Expanding academic vocabulary with an interactive online database. Language Learning & Technology, 9(2), 90-110.

Hu, H.-P., & Deng, L.-J. (2007). Vocabulary acquisition in multimedia environment. US-China Foreign Language, 5(8), 55-59.

Johnson, A., & Heffernan, N. (2006). The short readings project: A CALL reading activity utilizing vocabulary recycling. Computer Assisted Language Learning, 19(1), 63-77.

Jones, L.  (2003). Supporting listening comprehension and vocabulary acquisition with multimedia annotations: the students’ voice. CALICO Journal, 21(1), 41-65.

Jones, L. (2004). Testing L2 vocabulary recognition and recall using pictorial and written test items. Language Learning & Technology, 8(3), 122-143.

Jones, L. (2009). Supporting student differences in listening comprehension and vocabulary learning with multimedia annotations. CALICO Journal, 26(2).

Jones, L., & Plass, J. (2002). Supporting listening comprehension and vocabulary acquisition  in French with multimedia annotations.  The Modern Language Journal, 86(4), 546-561.

Kaltenböck, G., & Mehlmauer-Larcher, B. (2005). Computer corpora and the language classroom: On the potential and limitations of computer corpora in language teaching. ReCALL, 17(1), 65-84.

Kaur, J., & Hegelheimer, V. (2005). ESL students’ use of concordance in the transfer of academic word knowledge: An exploratory study. Computer Assisted Language Learning, 18(4), 287-310.

Kim, D., & Gilman, D. A. (2008). Effects of text, audio, and graphic aids in multimedia instruction for vocabulary learning. Educational Technology & Society, 11(3), 114-126.

Laufer, B., & Hill, M. (2000). What lexical information do L2 learners select in a CALL dictionary and how does it affect word retention? Language Learning and Technology, 3(2), 58-76.

Lenders, O. (2008). Electronic glossing– Is it worth the effort? Computer Assisted Language Learning, 21(5), 457-481.

Loucky, J. P. (2002). Improving access to target vocabulary using computerized bilingual dictionaries. ReCALL, 14(2), 295-314.

Loucky, J. P. (2005). Combining the benefits of electronic and online dictionaries with CALL web sites to produce effective and enjoyable vocabulary and language learning lessons. Computer Assisted Language Learning, 18(5), 389-416.

Lu, M. (2008). Effectiveness of vocabulary learning via mobile phone. Journal of Computer Assisted Learning, 24(6), 515-525.

Ma, Q., & Kelly, P. (2006). Computer assisted vocabulary learning: Design and evaluation. Computer Assisted Language Learning, 19(1), 15-45.

Nakata, T. (2006). Implementing optimal spaced learning for English vocabulary learning. The JALT CALL Journal, 2(2), 19-36.

Nakata, T. (2008). English vocabulary learning with word lists, word cards and computers: Implications from cognitive psychology research for optimal spaced learning. ReCALL, 20(1), 3-20.

Nesselhauf, N., & Tschichold, C. (2002). Collocations in CALL: An investigation of vocabulary-building software for EFL. Computer Assisted Language Learning, 15(3), 251-279.

Ringlstetter, C., Schulz, K. U., & Mihov, S. (2006). Orthographic errors in web pages: Toward cleaner web corpora. Computational Linguistics, 32(3), 295-340.

Şahin, M. (2009). Second language vocabulary acquisition in synchronous computer-mediated communication.  Egitim Arastirmalari-Eurasian Journal of Educational Research, 34, 115-132.

Segers, E., & Verhoeven, L. (2003). Effects of vocabulary training by computer in kindergarten. Journal of Computer Assisted Learning, 19(4), 557-566.

Segler, T. M., Pain, H., & Sorace, A. (2002). Second language vocabulary acquisition and learning strategies in ICALL environments. Computer Assisted Language Learning, 15(4), 409-422.

Smidt, E., & Hegelheimer, V. (2004). Effects of online academic lectures on ESL listening comprehension, incidental vocabulary acquisition, and strategy use. Computer Assisted Language Learning, 17(5), 517-556.

Song, Y. (2008). SMS enhanced vocabulary learning for mobile audiences. International Journal of Mobile Learning and Organisation, 2(1), 81-98.

Song, Y., & Fox, R. (2008). Using PDA for undergraduate student incidental vocabulary testing. ReCALL, 20(3), 290-314.

Stevens, V. (1995). Concordancing with language learners: Why? When? What? CAELL Journal, 6(2), 2-10.

St-Jacques, C., & Barrière, C. (2005). Search by fuzzy inference in a children’s dictionary. Computer Assisted Language Learning, 18(3), 193-215.

Stockwell, G. (2007a). Vocabulary on the move: Investigating an intelligent mobile phone-based vocabulary tutor. Computer Assisted Language Learning, 20(4), 365-383.

Suarcaya, P. (2008). Increasing student participation in English vocabulary classes by providing time flexibility in accomplishing exercises. The JALT CALL Journal, 4(1), 19-29.

Sun, Y.-C & Dong, Q. (2004). An experiment on supporting children’s English vocabulary learning in multimedia context. Computer Assisted Language Learning, 17(2), 131-147.

Sun, Y.-C., & Wang, L.-Y. (2003). Concordancers in the EFL classroom: Cognitive approaches and collocation difficulty. Computer Assisted Language Learning, 16(1), 83-94.

Tozcu, A., & Coady, J. (2004). Successful learning of frequent vocabulary through CALL also benefits reading comprehension and speed. Computer Assisted Language Learning, 17(5), 473-495.

Turnbull, J., & Burston, J. (1998). Towards independent concordance work for students: Lessons from a case study. ON-CALL, 12(2), 10-21.

Van De Poel, K., & Swanepoel, P. (2003). Theoretical and methodological pluralism in designing effective lexical support for CALL. Computer Assisted Language Learning, 16(2/3), 173-211.

Yeh, Y., & Wang, C. (2003). Effects of multimedia vocabulary annotations and learning styles on vocabulary learning. CALICO Journal, 21(2), 131-144.

Yip, F. W. M., & Kwan, A. C. M. (2006). Online vocabulary games as a tool for teaching and learning English vocabulary. Educational Media International, 43(3), 232-249.

Yoshii, M., & Flaitz, J. (2002). Second language incidental vocabulary retention: The effect of picture and annotation types. CALICO Journal, 20(1), 33-58.

Yun, S., Miller, P. C., Baek, Y., Jung, J., & Ko, M. (2008). Improving recall and transfer skills through vocabulary building in web-based second language learning: An examination by item and feedback type. Educational Technology & Society, 11(4), 158–172.

Zapata, G., & Sagarra, N. (2007). CALL on hold: The delayed benefits of an online workbook on L2 vocabulary learning. Computer Assisted Language Learning, 20(2), 153-171.

The book entitled ‘Why do I need a teacher when I’ve got Google?’ includes discrete chapters focusing on questions and issues in education that the author, Ian Gilbert, considers pivotal to twenty-first-century education. Throughout the book, the author provides readers, especially teachers, with challenging and thought-provoking questions. He opens the first chapter with a quotation from Albert Einstein:

“We can’t solve the problems by using the same kind of thinking we used when we created them.”

Adapted to education, this famous quotation might read as follows:

We cannot teach students by using the same kind of materials and techniques that we used when we were students.

For many years, students have depended on their teachers for information to be passed on to them; this is especially true for students in Turkey, though the situation varies from one country to another. As stated by Gilbert (2011, pp. 23-24),

For years, teachers have been the primary source of information in the classroom, backed up by textbooks that have been jealously guarded and kept locked in a cupboard or guarded by Conan the Librarian. But now, within a few years, the primary source will be a piece of technology children put in their pockets.

Considering the quotation above, it can be argued that language teachers are no longer the only providers of input to language learners, and classrooms, likewise, are no longer the only place where learners are exposed to the target language. When I look back on my secondary and high school years (the 1980s and 1990s), I can easily remember, as some might, that our sole opportunity to practice English was the language classroom and our English teacher. We had difficulty finding authentic materials such as cassettes and short stories, which were too expensive to buy. Our teacher provided photocopies of stories accompanied by comprehension and multiple-choice questions. We could hardly practice pronunciation or listening. We relied on paperback dictionaries to look up unknown words and tried to understand their meanings from the one or two example sentences provided.

Today, as stated by Gilbert (2011), learners as well as teachers live in a digital world where they can instantly search for specific information across a huge number of websites wherever they are, be it on a bus or in their bedroom, not just in their classrooms. From a language learning perspective, today’s learners have a huge digital world in their hands. They can easily access electronic dictionaries such as the online Macmillan Dictionary and the Longman Dictionary of Contemporary English (online or on CD/DVD). Whenever they need to practice listening, they do not have to depend on low-quality cassettes; they can search Google or YouTube for authentic videos from famous movies, or for videos created for language learning purposes. When they need help, they can simply google and find a website that may offer it.

So comes the question: ‘Why do I need a teacher when I’ve got Google?’ As the author suggests, the answer depends on your role as a teacher. To the author, the role of today’s and the next century’s teacher is to teach students how to find information on the net or in the library, how to make sure that information is accurate, what to do with that information, and how to be creative with it. More importantly, the teacher’s role is to foster students’ critical thinking skills and increase their curiosity.

Of course, Gilbert offers more than this in the book; I have only touched upon the core issue he deals with. I suggest readers go through the book for more discussion of issues such as the real point of school and exams.

Gilbert, I. (2011). Why do I need a teacher when I’ve got Google? The essential guide to the big issues for every twenty-first century teacher. New York, NY: Routledge.