DOI: 10.3217/jucs-004-03-0210

Advanced Educational Technologies - Promise and Puzzlement

Patricia A. Carlson
(Rose-Hulman Institute of Technology
Patricia.Carlson@Rose-Hulman.Edu)

Abstract: Enormous sums of money and human effort have gone into educational technologies over the past decade. Yet nagging questions surface as to whether this tremendous investment produces advantageous results. While we intuitively feel that the influence of technology should be substantial, little sound guidance exists as to what is effective and why or how to use it. We seem to have cleared several of the hurdles for building a computer-aided instruction infrastructure; now we must turn our attention to richer understandings of research into the impact of technologies in the classroom. This special issue of the Journal of Universal Computer Science focuses on assessment and evaluation practices. The six articles in this collection have been clustered around three major issues: (1) pragmatics - cost estimations and product reviews, (2) measuring the effectiveness of theory-driven design, (3) extending paradigms for capturing more profound understanding of variables and outcomes.
Categories: K.3 - Computers and Education

1 A Critical Period for Educational Technology

Increasingly, educational technology has come to mean some form of computer-mediated product. More affordable hardware, growing awareness of new media (information technology in general), and the combined push from education reform mandates and the pull of marketing for new media all contribute to an environment of rapid change.

Yet, reports both from the workplace and from higher education indicate that meaningful integration of advanced information technologies is not a matter of simply making the hardware/software available. On all levels, many multimedia products available today are conceived of as "super-books" or elaborate "page-turners." They present students with multi-modal representations of materials and explanations, but their "pedagogy" is primarily didactic and their assumptions about learning are fairly static. Increasingly, we are seeing a backlash in the popular press against advanced educational technologies [Oppenheimer 1997]. These claims center on charges that most software engages only at a superficial level and - over time - both "deskills" the student and "disenfranchises" the teacher.

In general, educational software has not reached its potential for a variety of reasons. I suggest that the bulk of these inhibitors can be clustered under four categories:

  • Lack of meaningful formative evaluation - While change is in the offing, the current-traditional design methods for courseware/educational software do not include adequate feedback loops. Although this may be typical for an "immature" industry, it is to be hoped that developers of educational technologies will at some point have as much regard for product design and end-user needs as does the automobile industry.

  • Emphasis on the student as end-user - Certainly, the learner (primarily, the learner's measurable achievement) drives much of today's concern for educational reform. However, a technology-enabled curriculum should be conceptualized as a dynamic partnership among three agents: the student, the teacher, and the computer-mediated learning tools.
  • Lack of a robust new paradigm for technology in education - Education's recent history contains several examples in which a new entity emerged in a glow of promise but never reached fruition. We seem to be at that awkward transition point for computer-mediated learning -- one in which the old methods are falling away, but the new paradigm, enabled by advanced information technologies, is not apparent.
  • Overemphasis on teachers as technophobic - Too often, the teacher's role in technology adoption has been characterized as inhibiting and conservative. While advanced technologies may be a threat to some instructors, dwelling on negative "attitude" and on the need to "educate" teachers on hardware and software operation misses the real strengths teachers can bring to the process.

Assessment and evaluation have emerged as crucial topics in advanced educational technology today. As we clear the logistical and technical hurdles for getting reasonable computer applications into the classroom, we must now focus attention on how well these interventions perform - that is, how well they achieve definable effects. Additionally, both nations and companies now demonstrate growing recognition that human potential and the ability to learn are the fundamental resources for any organization. This awareness results in more funding for development, but it also mandates increasing accountability. The question becomes: how do we "define and measure effectiveness" in an environment that increasingly asks for proof of results?

2 The Role of Assessment in Mediating Change

"Re-inventing" - the word floats in the air these days as a talisman both for forcing and for managing change. Certainly education has been increasingly scrutinized as our society comes to see basic incongruities between goals and achievements, course content and demands of modern life, and teaching methods and student outcomes. The current spirit of re-examining and reinventing education posits new media and information technologies as meeting three major criteria: (1) lowering costs/increasing productivity, (2) addressing changing student populations, (3) breaking the "cartel" of traditional education providers.

Despite these bold and far-reaching claims, our understanding of how best to design and integrate computer-mediated software remains tenuous. For one thing, harvesting knowledge about what works and what does not is seldom done in a systematic way.

Developing software applications in education has been a process of individuals or small groups coming up with a prototype, testing the application in classrooms, and then moving toward a product and wide distribution. While such innovation characterizes technological change, this development cycle leaves too many important questions unanswered [Tinker 1997]. More sensitive models of assessment, along with more sophisticated techniques for extending empiricism into design philosophy and for consolidating discrete observations into a meta-awareness, are needed if we are to design and implement educational technologies wisely.

While change is necessary for growth and improvement, unmanaged change potentially results in chaos. Additionally, experiences in the workplace and in higher education indicate that introducing new technologies into an established organization must be done with careful planning. Such change brings a host of issues - each posing at least one complex question. The list below names only a few:

  • Qualitative and Quantitative Approaches
  • New Methods or New Uses for More Traditional Methods
  • Technology Assessment versus Experimental Approaches
  • Case Studies and other Forms of Naturalistic Studies
  • The Political/Economic Issues of Assessment
  • Questions of Validity and Reliability
  • How Assessment Results are Used/Abused
  • Trade Journal Evaluation of Applications - Consistency and Fairness
  • Organizational Awards - Criteria Development and Application
  • Building "Assessment" into the Development Process
  • Assessing Technology's Effects on Teachers as well as Students
  • Assessment in Training versus Assessment in Education
  • Designing/Managing Large-scale Assessment Projects

3 The Changing Role of Assessment

The six articles collected here present compelling examinations of a changing role for assessment in educational delivery and instructional enhancement. These articles provide understanding for a complex socio-technical shift and serve as the basis for more informed decision-making among all constituencies concerned with education. I have grouped these six papers into three clusters: (1) pragmatics, (2) principled design, and (3) broadening paradigms.

3.1 Pragmatics - Development Costs and Marketing Issues

Certainly, in making a case for replacing one technology with another, it is paramount to demonstrate some type of cost/benefit gain. While this may not always be possible, developers need a mechanism by which fairly accurate cost measures can be derived for building educational software. Thackaberry and Rada (Estimation Metrics for Courseware Maintenance Effort) point out that cost-estimation models for traditional IT systems are not adequate for CAI applications. These authors focus on this crucial software engineering question and suggest a matrix for estimating investment for small (that is, less than two staff-month) courseware projects.
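
For orientation only, the sketch below shows the general shape of a traditional, size-driven cost model of the kind the authors argue does not transfer well to courseware. The Basic COCOMO "organic mode" coefficients are the standard published values, but the module size is invented, and nothing here is the estimation metric that Thackaberry and Rada propose.

```python
# A traditional, size-driven effort model (Basic COCOMO, organic mode).
# Illustrative only: this is NOT the courseware estimation metric proposed
# by Thackaberry and Rada, and the 0.8 KLOC size figure is invented.

def cocomo_effort(size_kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimated effort in staff-months as a function of code size (KLOC)."""
    return a * size_kloc ** b

if __name__ == "__main__":
    size = 0.8  # a hypothetical small courseware module, in thousands of lines of code
    print(f"Estimated effort: {cocomo_effort(size):.1f} staff-months")
```

Because such a model keys effort almost entirely to code size, it illustrates why estimation techniques built for traditional IT systems can fall short for CAI applications.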

An additional - but equally fundamental - concern is the method(s) by which commercial software products are rated. Information CD-ROMs - potentially the ultimate expression of knowledge as a commodity - are given extensive "reviews" in both paper and electronic publications. With the explosion of educational supplements on the market today, no one can become expert in all areas; thus such rating and reviewing systems are essential for "smart shopping." Buckner and Gillham (A Comparative Evaluation of Print and Electronic Reviews of Multimedia Information Products) investigate the relationship between three factors: (1) elements in a review, (2) the medium (print or electronic), and (3) the type of publication.

3.2 Theory-Driven Design - Building on Educational Constructs

Many large software development projects are informed by a particular learning theory or model of cognition. As an example, one might cite the Cognition and Technology Group's emphasis on "anchored" instruction in developing educational software [Cognition 1990]. A second example is the family of CSILE instantiations, modeled on concepts developed by Bereiter and Scardamalia (and their various colleagues) [Scardamalia et al. 1989].

These large-scale development groups resemble "schools of thought," putting forth explanatory frameworks in a period of early emergence (or the pre-paradigm phase, to use Thomas Kuhn's terminology). Other researchers/developers look at more specific educational concepts in their efforts at theory-driven design. In this collection, Esma Aimeur (Application and Assessment of Cognitive Dissonance Theory in the Learning Process) reports on her work with an intelligent tutoring system (ITS) that enhances the traditional notions of what components are embedded in an adaptive learning environment. Drawing from fundamental research on motivation and learning, Aimeur and her colleagues include within the adaptive environment an "intelligent agent" for inducing dissonance. This novel concept runs somewhat counter to the notion that ITS environments include a supportive mentor agent that prods and encourages. To the contrary, the dissonance agent challenges and confronts the student, forcing her to re-examine her learning, defend her conclusions, and perhaps uncover flawed reasoning or conceptual misunderstandings. The experiment reported here tests whether an established principle of learning theory can be instantiated in a software environment and retain its efficacy.

Ruokamo and Pohjolainen (Pedagogical Principles for Evaluation of Hypermedia-Based Learning Environments in Mathematics) are also concerned with theory-driven design for new media in the classroom. These authors develop ways of measuring how well "modern educational standards" can be mirrored in software. Primarily, they are interested in measuring software's ability to foster seven qualities: (1) active engagement in the learning task, (2) construction of new knowledge from old, (3) collaboration, (4) authentic tasks and contexts, (5) intentional or goal-oriented activities, (6) transference of new knowledge, and (7) reflection and consolidation of gains.

3.3 New Methods of Assessment

Assessment methodologies continue to evolve. However, the best-known and perhaps most consistently applied research designs are variations of the classic quasi-experimental method codified by Cook and Campbell [Cook and Campbell 1979]. The traditional pre- and post-test, comparing control and treatment groups, produces what appear to be compelling results because of its simplicity and straightforwardness. However, this method does little to facilitate change because of the narrowness and specificity of each study. McGee and Howard (Evaluating Educational Multimedia in the Context of Use) call for more sophisticated methods of evaluation that not only measure evidence of gain but also provide insights into the rich context of teaching and learning, so that "best practices" are recorded and reported as part of the assessment results.
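
As a concrete, and deliberately narrow, illustration of what such a design yields, the sketch below compares invented gain scores for a treatment class and a control class using a t-test and an effect size. The data, group sizes, and analysis choices are assumptions made for illustration and are not drawn from any study in this issue.

```python
# A minimal pre/post, control/treatment comparison in the quasi-experimental
# tradition. All scores below are invented for illustration.
import numpy as np
from scipy import stats

# Gain scores (post-test minus pre-test) for two hypothetical intact classes
treatment_gain = np.array([12, 8, 15, 10, 9, 14, 11, 7], dtype=float)
control_gain = np.array([5, 6, 4, 9, 3, 7, 5, 6], dtype=float)

# Independent-samples t-test on the gain scores
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)

# Cohen's d with a pooled standard deviation as a simple effect-size summary
pooled_sd = np.sqrt((treatment_gain.var(ddof=1) + control_gain.var(ddof=1)) / 2)
cohens_d = (treatment_gain.mean() - control_gain.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```

The output is a single significance test and effect size: compelling in its simplicity, but silent on the classroom context that McGee and Howard argue an evaluation should also capture.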

Makrakis, Retalis, Koutoumanos, Papaspyrou, and Skordalakis (Evaluating the Effectiveness of an ODL Hypermedia System and Courseware at the National Technical University of Athens: A Case Study) look at new methods for incorporating feedback on the design of the technology as well as on factors in its implementation. The framework suggested by these authors takes into account that computer-mediated learning systems are complex, incorporating a "variety of organisational, administrative, instructional, and technological components." Combining two types of self-report survey instruments, this study affirms the intuitively held position that, even in distance education, interactions with the instructor are central to the effectiveness of an educational treatment.

4 Assessment and the Ecology of an Educational System

Educational technologies have reached a turning point: while most educators recognize the enormous potential for well-designed software, few studies provide compelling evidence of a significant effect on educational outcomes. In looking for new methods, we seem to be caught between quantitative and qualitative approaches; we want the reliability of empirical investigations and the richness of ethnographic studies.

While it may take some time to develop a consistent and reliable ecological paradigm, taking a broader perspective will help to produce results that leverage the best use of appropriate technologies in classrooms. One avenue for taking a holistic approach to evaluation of advanced educational technologies is to adopt the methods of "technology assessment" [Baker and O'Neil 1994] [O'Neil and Baker 1994]. Some hallmarks of this type of inquiry include:

  • A systemic approach that focuses on evaluating new technology in a context.
  • Subject-driven considerations that take into account the impact on human systems (sociological and psychological perspectives).
  • Object-driven considerations that take into account feedback loops for constant quality improvement and sustainability of the technology itself.

Geoghegan estimates that, despite all the educational technologies implemented in higher education, no more than five percent of instructors use computers as anything more than high-tech substitutes for the blackboard and the overhead projector [Geoghegan 1994]. Identifying and extending creative uses of educational courseware should be central to studies dealing with implementation. Moving beyond the dominance of legacy and tradition is necessary in order to harvest the full potential of information technology in the classroom. Additionally, pragmatics and cost/benefit analysis underscore the criticality of extending theory into a robust pedagogy. In short, assessment should make possible "managed change" and meaningful technology transfer for education by pursuing an agenda that addresses people, policies, plans, and programs.

References

[Baker and O'Neil 1994] Baker, E. L. & O'Neil, H.F. (Eds.) (1994). Technology Assessment in Education and Training. Hillsdale, NJ: Lawrence Erlbaum Associates.

[Cognition 1990] Cognition and Technology Group at Vanderbilt (1990). Anchored instruction and its relationship to situated cognition. Educational Researcher, 19(6), 2-10.

[Cook and Campbell 1979] Cook, T. D. and Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

[Geoghegan 1994] Geoghegan, W. H. (1994). Stuck at the barricades: Can information technology really enter the mainstream of teaching and learning? AAHE Bulletin, September, pp. 13-16.

[O'Neil and Baker 1994] O'Neil, H. & Baker, Eva L. (Eds) (1994). Technology Assessment in Software Applications. Hillsdale, NJ: Lawrence Erlbaum Associates.

[Oppenheimer 1997] Oppenheimer, T. (July 1997). The computer delusion. The Atlantic Monthly, pp. 45-62.

[Scardamalia et al. 1989] Scardamalia, M., Bereiter, C., McLean, R.S., Swallow, J., and Woodruff, E. (1989). Computer-supported intentional learning environments. Journal of Educational Computing Research, 5(1), 51-68.

[Tinker 1997] Tinker, R. (Fall 1997). Perspective: Addressing the crisis in educational R & D. Newsletter: The Concord Consortium. http://www.concord.org.
