Evaluating the Impact of Technology on Student Learning

Evaluating the impact of innovations in education has long been recognized as a particularly challenging endeavor. In the early 1970s, Gross argued that, given the complexity of educational environments, in which multiple factors simultaneously influence student outcomes, identifying and measuring the impact of a single innovation was extremely problematic (Gross et al., 1971).

Evaluation theorists, such as Mackie and Cronbach, have similarly argued that social phenomena are too complex to be adequately captured by traditional assessment methods: “Social programs are far more complex composites, themselves produced by many factors that interact with one another to produce quite variable outcomes. Determining contingent relations between the program and its outcomes is not as simple as the regulatory theory posits” (House, 1993, pp. 135-136). More recently, Pettigrew further emphasized the challenges facing researchers attempting to analyze the impact of educational changes on learning outcomes: “evaluating the success of change initiatives is replete with practical difficulties” (Pettigrew et al., 2001, p. 6).

Despite these complexities, the need to comprehensively understand the impact of educational innovations requires schools, educators, and researchers to continually tackle this challenging endeavor. With the growing adoption of, and investment in, educational technologies, the need to assess the impact of technology on student learning has become imperative.

Alongside the traditional challenges of evaluating innovations, there are added complications when it comes to assessing the impact of technology (Noeth & Volkov). Foremost amongst these challenges is the difficulty of separating the effects of technology from the “complex environments in which technology projects are embedded”, environments which “make inference of causal relations between project activities and outcomes tenuous” (p. 20).

Furthermore, the rapid pace of technological development and the widening range of approaches with which digital resources can be used to support learning, across the entire spectrum of learners and in a diverse range of educational settings, often limit the usefulness of conclusions drawn from this body of research.

Much of the early research on educational technologies was conducted to measure “the impact of technology on teaching and learning in schools … across a range of tested curriculum outcomes” (Noeth), with many of these studies recording negligible improvements to student attainment. Sipe and Curlette’s study from 1997 and Weaver’s study from 2000 are typical of the research from this period. Weaver’s report concluded that computer use makes very little difference to student achievement (Weaver, 2000), while Sipe and Curlette’s research, which took a comparative approach, concluded that, “when compared with typical effect of innovation on educational achievement, computer innovations are not that different from the average innovation” (Sipe & Curlette, 1997, p. 608).

These early reports focused on measuring the impact of technology primarily by examining to what extent technologies influenced student attainment in traditional tests. This research often neglected to consider other important outcomes influenced by the use of technology, such as learner motivation, interaction with the technologies, and the potential for developing creative abilities. Assessing the value of technology by simply measuring students’ results in traditional assessments focuses on too narrow a set of objectives, and this approach has been criticized by numerous academics, including Joy and Garcia, who, in their report from 2000, “Measuring Learning Effectiveness: A New Look at No-Significant-Difference Findings”, argue that much of the research on asynchronous learning networks (ALNs) and similar educational technologies is flawed and that the conclusions drawn from many of these reports are both inaccurate and controversial.

Over the past decade researchers have focused on a wider variety of outcomes from educational technologies, looking at the impact on specific areas of learning such as mathematics and literacy, as well as the impact of technology on student motivation, remedial support, differentiated learning, and the development of higher-order thinking skills. However, despite the increased scope of research, conclusions remain mixed, with many reports continuing to find only negligible gains.

In an attempt to summarize the existing literature and draw broader conclusions on the impact of technology on classroom learning, a number of researchers have initiated meta-analysis studies, conducting a statistical synthesis of the findings of numerous quantitative studies.
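
To make the idea of statistical synthesis more concrete, the sketch below shows a minimal fixed-effect meta-analysis in Python: each study’s standardized effect size (for example, Cohen’s d) is weighted by the inverse of its variance, and the weighted average gives a pooled estimate. The study values here are invented purely for illustration and are not drawn from any of the reports discussed in this post.

```python
import math

# Hypothetical (invented) study results: each tuple is
# (effect size d, variance of the effect size estimate).
studies = [
    (0.12, 0.010),
    (0.30, 0.025),
    (0.05, 0.008),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted
# by 1 / variance, so more precise studies contribute more.
weights = [1.0 / var for _, var in studies]
pooled_effect = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect size: {pooled_effect:.2f}")
print(f"95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
```

Pooled effect sizes in the region of 0.1 to 0.3 are what the literature typically describes as the “small but positive” effects of classroom technology.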

Meta-analysis studies, which have become increasingly popular over the past 25 years, are often considered a means of determining a truth from a broad body of evidence; however, meta-analysis is neither flawless nor without its critics. When applying meta-analysis to research on educational technology, there is the problem of grouping many different kinds of technologies into a single category, which creates a body of work that may be overly broad and unsuitable for drawing meaningful conclusions. Furthermore, the challenge of synthesizing evidence from widely differing methodologies can make it difficult to identify clear and specific implications for the use of educational technologies in schools.

Early meta-analysis studies on educational technologies, such as those conducted by Soe, Koki, & Chang in 2000 and Bayraktar in 2002, concluded that computer-assisted instruction (CAI) had a small but positive effect on students’ reading achievement and was effective in science education, indicating that the adoption of technology was more effective in some areas than in others. These conclusions have been supported by more recent research, such as Higgins’ study from 2012 and Cheung and Slavin’s study from 2013, which indicate that computers in the classroom produce small positive improvements in reading and modest improvements in mathematics (Cheung & Slavin, 2013).

The 2012 report by Higgins, “The Impact of Digital Technology on Learning: A Summary for the Education Endowment Foundation”, which synthesized 48 primary research studies, concluded that over the last forty years digital technologies have had positive benefits for learning, although most studies linked the use of technology with only small improvements to learning.

Higgins’ report also identified a number of circumstances in which the use of technology had been found to work with greater effect. These include: 1) collaborative use of technology rather than individual use; 2) remedial use of technology with students who had lower levels of attainment, special educational needs, or disadvantaged backgrounds; 3) use as a supplement to normal teaching rather than a replacement; and 4) by subject, attainment gains tended to be greater in mathematics and science.

The challenge of successfully utilizing educational technologies made international headlines in 2015 with the release of the OECD report, “Students, Computers and Learning: Making the Connection”, which concluded that access to technology does not guarantee educational benefits and that there had been “no appreciable improvements in student achievement in reading, mathematics, or science in the countries that had invested heavily in ICT for education”.

The OECD report received considerable criticism regarding its research methodology, which forged a link between access to computers and results from the OECD’s PISA assessments, primarily because the research had not looked at how these technologies were being used at home or school. While these criticisms are certainly valid, the underlying argument, that technology is ineffective without appropriate aims, objectives, structures, and clearly envisioned plans for evaluating effectiveness, remains an important point for educators to remember.

With the growing use of educational technologies in schools and colleges, there is a need to rethink how these resources are evaluated and utilized. Measuring the success of educational technology in terms of student performance in traditional tests, which assess knowledge of basic concepts and the ability to recall facts, is no longer sufficient. Furthermore, the potential of educational technologies remains largely untapped by the majority of educators. If schools continue to employ these tools simply to improve basic skills through automated drill and practice, the impact of technology will remain modest. However, educational technology has the potential to promote more advanced skills, such as critical thinking, creativity, higher-order thinking, and problem-solving abilities. When more schools begin to embrace the potential of educational technologies, and evaluate the impact of technology on these advanced skills, greater rewards are inevitable.
