Digital Libraries as Learning and Teaching Support
Hermann Maurer
Graz University of Technology, Graz/Austria
and The University of Auckland, Auckland/NZ
email: hmaurer@iicm.tu-graz.ac.at
Jennifer Lennon
The University of Auckland, Auckland/NZ
email: J_Lennon@cs.auckland.ac.nz

Abstract: For 30 years repeated attempts have been made to
use computers to support the teaching and learning process, albeit
with only moderate success. Whenever major attempts failed, some
seemingly convincing reasons were presented the for less than
satisfactory results. In the early days cost or even lack of suitable
equipment was blamed; after colour graphics computers started to be
widespread, production costs of interactive and graphically appealing
material were considered the main culprits; when modern multimedia
authoring techniques did not change the situation either, the lack of
personalized feed-back, of network support and the difficulty of
producing high quality simulations were seen as main obstacles. With
networks now offering excellent multimedia presentation and communication facilities, the final breakthrough of computers as the ultimate teaching and learning tool is (once more) being predicted. And once more results will be disappointing if one crucial component is again overlooked: good courseware must both give guidance to students and provide a rich variety of background material whenever it is needed. It is the main claim of this paper that the advent of
sizeable digital libraries provides one of the most significant
chances for computer based training ever. We will argue that such
libraries not only allow the efficient production of courseware but
also provide the extensive background reservoir of material needed in
many situations.

Keywords: Digital libraries, electronic libraries, learning support, teaching support, instructional technology, CAI, CBT

Category: H.3, H.4, H.5, K.3
1 Introduction
The saga of trying to use computers for
education and training goes back well over thirty years. Rather than
repeating all that has been said over and over we refer to one of the
last major ``presentation type'' efforts in computer assisted
instruction, COSTOC, and the arguments presented there, see [Makedon et al 1987] and [Huber et al 1989]. The essence is, however, that even
large scale projects involving fairly high quality animated colour
graphics and sophisticated question-answer mechanisms did not succeed
as replacement or even support for teaching on a large scale: although
some successes due to better production tools ranging from Hyper-Card
and Toolbook to Authorware and particularly HM-Card [Maurer et al
1995] have been attained, supported by the integration of advanced
media such as digital audio and video segments, a decisive change has not occurred, mainly for three reasons: (a) courseware has always been difficult to customize; (b) students did not have sufficient background material available in electronic form when getting stuck; and (c) in most cases no direct feedback from students to teachers has been provided.

We believe that the proper use of digital libraries embedded in modern
``second generation'' hypermedia systems is about to offer the best
chance ever to succeed in using computers for educational purposes.

This paper is structured as follows. In Section 2 we explain the need
for second generation hypermedia systems and how they relate to
digital libraries. In Section 3 we describe how such digital libraries
can address the points (a) - (c) mentioned above. And we will argue in
Section 4 that the oft-mentioned problem of copyrights and payments for electronic books can indeed be solved easily using existing techniques. The paper concludes with references in Section 5.

For a general look at hypermedia see [Nielsen 1995]; for a survey of applications see [Lennon et al 1994b].
2 Second Generation Systems and Digital Libraries
In this
section we explain some of the properties of first generation
hypermedia systems, using WWW as the most prominent example. We
contrast them with those of Hyper-G, the first second generation
model. We confine attention to networked hypermedia systems with a
client/server architecture. For completeness' sake we refer to papers
describing important attempts such as Gopher [Alberti et al 1992], WWW
[Berners-Lee et al 1992], IRIS [Haan et al 1992], WAIS [Kahle et al
1992]. For papers on Hyper-G see [Andrews et al 1995a], [Andrews et
al 1995b], [Kappe et al 1994], [Maurer 1994], and the book [Maurer et al 1996b].

Information in a hypermedia system is usually stored in
``chunks''. Chunks consist of individual documents which may
themselves consist of various types of ``media''. Typically, a
document may be a piece of text containing a picture. Each document
may contain links leading to (parts of) other documents in the same or
in different chunks. Typical hypertext navigation through the
information space is based on these links: the user follows a sequence
of links until all relevant information has hopefully been
encountered.

In WWW, a chunk consists of a single document. Documents consist of
textual information and may include pictures and the (source) anchors
of links. Pictures and links are an integral part of the
document. Pictures are thus placed in fixed locations within the text
(``inline images''). Anchors can be attached to textual information
and inline images, but not to parts of images. Links may lead to audio
or video clips which can be activated. The textual component of a
document is stored in the so-called HTML format, a derivative of SGML.

In Hyper-G the setting is considerably more general: chunks, called ``clusters'' in Hyper-G terminology, consist of a number of
documents. A typical cluster may, for example, consist of five
documents: a piece of text (potentially with inline images), a second
piece of text (for example in another language, or a different version
of the same text, or an entirely different text), a third piece of
text (the same text in a third language perhaps), an image and a film
clip. Anchors can be attached to textual information, to parts of
images, and even to regions in a film clip. Links are not part of the
document but are stored in a separate database. They are both typed
and bidirectional: they can be followed forward (as in WWW) but also
backwards.
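To make the contrast concrete, here is a minimal sketch in Python (with hypothetical class and field names; this is not Hyper-G's actual API) of chunks whose links live in a separate database rather than inside the documents: because both endpoints of every link are recorded, links can be followed in either direction, and the documents referring to a deleted target can be found and their owners notified.

  from collections import defaultdict
  from dataclasses import dataclass, field

  @dataclass
  class Document:
      doc_id: str
      media_type: str            # e.g. "text", "image", "film clip"
      owner: str

  @dataclass
  class Cluster:                 # a Hyper-G "chunk": several documents together
      cluster_id: str
      documents: list = field(default_factory=list)

  @dataclass
  class Link:
      link_type: str             # e.g. "footnote", "referential", "counter-argument"
      source: str                # id of the source document (or anchor within it)
      target: str                # id of the target document

  class LinkDatabase:
      """Links are kept outside the documents, so they can be added or
      changed without write access to the documents themselves."""
      def __init__(self):
          self._out = defaultdict(list)   # source id -> outgoing links
          self._in = defaultdict(list)    # target id -> incoming links

      def add(self, link):
          self._out[link.source].append(link)
          self._in[link.target].append(link)

      def links_from(self, doc_id):       # follow links forward (as in WWW)
          return list(self._out[doc_id])

      def links_to(self, doc_id):         # follow links backwards
          return list(self._in[doc_id])

      def owners_to_notify(self, doc_id, documents):
          """Owners of documents whose links would dangle if doc_id were
          deleted -- the basis of automatic link maintenance."""
          return {documents[l.source].owner for l in self._in[doc_id]}

A local map of the kind Harmony displays is then simply the repeated expansion of links_from and links_to around the current document.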
Hyper-G allows multiple pieces of text within a cluster to handle
e.g., multiple languages in a natural way. This also elegantly solves
the case where a document comes in two versions: a more technical (or
advanced) one and one more suitable for the novice reader. As
indicated, pictures can be treated as inline images or as separate
documents. Often, inline images are convenient, since the ``author''
can define where the user will find a picture in relation to the
text. On the other hand, with screen resolution varying tremendously,
the rescaling of inline images may pose a problem: if a picture is
treated as a separate document, however, it appears in a separate
window, can be manipulated (shifted, put in the background, kept
on-screen while continuing with other information, etc.) independent
of the textual portion (which in itself can be manipulated by for
example narrowing or widening its window). Thus, the potential to deal
with textual and pictorial information separately provides more
flexibility when required. Text can be stored in Hyper-G in a number
of formats, clearly important for digital library purposes.

In addition to the ``usual'' types of documents found in any modern hypermedia system, Hyper-G also supports 3D objects and scenes.

Let us now turn to the discussion of the philosophy of links in WWW
versus Hyper-G. The ability to attach links to parts of a picture is
clearly desirable, when additional information is to be associated
with certain sub-areas of an image. That links are bidirectional and
not embedded in the document has a number of very important
consequences: first, links relating to a document can be modified
without necessarily having access rights to the document itself. Thus,
private links and a certain amount of customisation are possible;
second, when viewing a document it is possible to find all documents referring to the current one. This is not only a desirable feature as such, but is of crucial importance for being able to maintain the database. After all, when a document is deleted or modified, all documents referring to it may have to be modified to avoid the ``dangling link syndrome'', or to avoid being directed to completely irrelevant documents. Hyper-G offers the possibility of automatically notifying the owner of a document that some of the documents being referred to have been changed or deleted, an important step towards ``automatic link maintenance''. Third, the bidirectionality
of the links allows the graphic display of a ``local map'' showing the
current document and all documents pointing to it and being pointed
at, an arbitrary number of levels deep. Harmony makes full use of this
fact and provides local maps as an invaluable navigational aid that
cannot be made available for WWW databases [Fenn et al 1994]. Finally,
the fact that links can have types can be used to show to the user
that a link just leads to a footnote, or to a picture, or to a film
clip, or is a counter or supporting argument of some claim at issue:
typed links enhance the perception of how things are related and can be used as a tool for discussions and collaborative work.

Navigation in WWW is performed solely using the hypertext paradigm of
anchors and links. It has become a well accepted fact that structuring
large amounts of data using only hyperlinks such that users don't get
``lost in hyperspace'' is difficult, to say the least. WWW databases
are large, flat networks of chunks of data and resemble more an
impenetrable maze than well-structured information. Indeed every WWW
database acknowledges this fact tacitly, by preparing pages that look
like menus in a hierarchically structured database: items are listed
in an orderly fashion, each with an anchor leading to a subchapter
(subdirectory). If links in WWW had types, such links could be
distinguished from others. But as it is, all links look the same:
whether they are ``continue'' links, ``hierarchical'' links, ``referential'' links, ``footnote links'', or whatever else.

In Hyper-G, not only can links have a type, links are by no means the
only way to access information. Clusters of documents can be grouped
into collections, and collections again into collections in a
pseudo-hierarchical fashion. We use the term ``pseudo-hierarchical''
since, technically speaking, the collection structure is not a tree,
but a DAG (directed acyclic graph), i.e., one collection can have more than one parent: an
impressionist picture X may belong to the collection ``Impressionist
Art'', as well as to the collection ``Pictures by Manet'', as well as
to the collection ``Museum of Modern Art''. The collection
``hierarchy'' is a powerful way of introducing structure into the
database. Indeed many links can be avoided this way [Maurer et al
1994], making the system much more transparent for the user and
allowing a more modular approach to system creation and maintenance.

Collections, clusters and documents have titles and
attributes. These may be used in Boolean queries to find documents of
current interest. Finally, Hyper-G provides sophisticated full-text
search facilities. Most importantly, the scope of any such search can be defined to be the union of arbitrary collections, even if the collections reside on different servers.

Note that some WWW applications also permit full-text
searches. However, no full-text search engine is built into WWW. Thus,
the functionality of full text search is bolted ``on top'' of WWW:
adding functionality on top of WWW leads to the ``Balkanisation'', the
fragmentation of WWW, since different sites will implement missing
functionality in different ways. Thus, to stick to the example of the
full text search engine, the fuzzy search employed by organisation X
may yield entirely different results from the fuzzy search employed by
organisation Y, much to the bewilderment of users. Actually, the
situation concerning searches in WWW is even more serious: since
documents in WWW do not have attributes, no search is possible on such
attributes; even if such a search or a full text search is
artificially implemented, it is not possible to allow users to define
the scope for the search, due to the lack of structure in the WWW
database. Hence full-text searches in WWW always work in a fixed,
designated part of the WWW database residing on one particular server.

The acceptance of a hypermedia system is certainly not only dependent
on deep technical features, but above all on the information content
and the ease of use. Due to the fact that large hypermedia systems
tend to lead to disorientation, second generation hypermedia systems
have to try very hard, both at the server and at the client end, to
help users with navigational tools. Some navigational tools, like the structuring and search facilities, have already been described; others,
such as maps, history lists, specific and personal collections can
also be of great help and are available in Hyper-G; a particular
speciality of the Harmony client (assuming an OpenGL environment) is a
3D browser: the ``information landscape'' depicts collections and
documents (according to their size) as blocks of varying size spread
out across a three-dimensional landscape, over which the user is able
to fly.

The architecture of Hyper-G is clearly well suited for handling the
material one would want to put into an electronic library. Such
material includes traditional journals in electronic form, see e.g.,
[Calude et al 1994] and [Maurer et al 1994], books in electronic form,
see e.g., the Internet guide found under
http://iicm.tu-graz.ac.at/Cinternet_guide, courseware such as found
under http://iicm.tu-graz.ac.at/Chmcard, diverse pictures, audio- and
video-clips, etc. The structuring and search facilities of Hyper-G
allow easy access to information, the variety of navigational tools helps to prevent the ``lost in hyperspace'' phenomenon, and the fact
that links can be added to documents even if one is not allowed to
change the contents of the document allows a high degree of customisation and personalisation. Finally, the multimedia capabilities available, supported by the animation and question-answer facilities of HM-Card [Maurer et al 1996a], provide the possibility to
prepare modern instructional material and integrate it into the
digital library.
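The collection structure and the scoped searches just described can be sketched as follows (again Python with invented names, purely illustrative): collections form a DAG, a cluster may sit under several parents, and the scope of a query is the union of all clusters reachable from the chosen collections, wherever they reside.

  from dataclasses import dataclass, field

  @dataclass
  class Collection:
      name: str
      members: list = field(default_factory=list)   # sub-collections and clusters
      attributes: dict = field(default_factory=dict)

  @dataclass
  class ClusterRef:
      cluster_id: str
      title: str
      text: str = ""
      attributes: dict = field(default_factory=dict)

  def clusters_in(collections):
      """Collect every cluster reachable from the given collections.
      The structure is a DAG, so one cluster may be reached via several
      parents (e.g. 'Impressionist Art' and 'Pictures by Manet')."""
      seen, result, stack = set(), {}, list(collections)
      while stack:
          node = stack.pop()
          if id(node) in seen:
              continue
          seen.add(id(node))
          for m in node.members:
              if isinstance(m, Collection):
                  stack.append(m)
              else:
                  result[m.cluster_id] = m
      return result.values()

  def scoped_search(collections, predicate):
      """Search scope = union of arbitrary collections; the predicate may
      test attributes, titles or full text."""
      return [c for c in clusters_in(collections) if predicate(c)]

  # e.g. a full-text search restricted to two collections:
  # hits = scoped_search([impressionist_art, manet], lambda c: "Olympia" in c.text)

The same traversal is what lets many explicit links be avoided: membership in the collection DAG already expresses much of the structure.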
3 How to use a Digital Library
From the student's point of view a digital library is used by
``activating'' certain material as ``relevant background material'' to
define the scope of future searches (in Hyper-G this is done by
defining suitable collections) and then by starting an instructional
unit. While working through such a unit students can perform all kinds
of searches within the background material as it becomes
necessary. Indeed they can create their own links, thus being able to
personalize and customize as they find appropriate. Students can not only read documents, they can also prepare their own documents by
integrating into passages they have written (using links) sections of
books, journals or other works as may be pertinent.

From the teacher's point of view the availability of the digital
library and the possibility to prepare instructional material by
combining existing resources with those made up by the teachers for a particular purpose is especially important. This reduces the amount of work needed to prepare courseware and allows courseware to be changed at will by adding, deleting or changing sections. Note that this is done by using
the flexible link concept of Hyper-G to combine components rather than
``copying and pasting'' components together. Using the link structure
for this purpose has two significant advantages: first, link
consistency is much easier to maintain; second, copying and pasting usually results in copyright violations and in royalties having to be paid for the material being used. We will see in the next section that this problem
does not occur when proper linking mechanisms are used as already
proposed by Ted Nelson's ``transclusions'' [Nelson 1987]. Indeed
preparation of courseware can be done ``on the fly'', i.e., as part of presenting lectures, as we first pointed out in [Lennon et al 1994a]. For a more complete description of digital libraries see
[Marchionini et al 1995].
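To illustrate combining components by linking rather than copying, here is a minimal sketch (Python, with invented names; Hyper-G's real interface is not shown) in which a course unit is just an ordered list of links into existing material, each visible only to a chosen user group, so the underlying books are never duplicated.

  from dataclasses import dataclass

  @dataclass
  class CourseLink:
      target: str           # id of an existing section, e.g. "book1/section2"
      annotation: str = ""  # the teacher's comment shown with the link
      visible_to: str = ""  # user group allowed to see the link

  class CourseUnit:
      """Courseware assembled 'on the fly': an ordered sequence of links to
      library material and the teacher's own pages. Nothing is copied, so
      the licenses and royalties of the linked books remain untouched."""
      def __init__(self, title, group):
          self.title, self.group, self.links = title, group, []

      def add_section(self, target, annotation=""):
          self.links.append(CourseLink(target, annotation, visible_to=self.group))

      def outline(self, user_groups):
          # Students outside the intended group simply do not see the links.
          if self.group not in user_groups:
              return []
          return [(l.target, l.annotation) for l in self.links]

  # A unit combining sections of two books with the teacher's own pages
  # (hypothetical ids; cf. Fig. 1 in the next section):
  unit = CourseUnit("Customised unit", group="course-101")
  for section in ["teacher/startpage", "book1/section2", "teacher/comments",
                  "book2/section5", "book1/section7", "teacher/endpage"]:
      unit.add_section(section)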
4 Commercial Aspects of a Digital Library
When electronic material is sold over networks some charging
mechanisms are necessary. Assuming that users identify themselves with
some username/password combination, typical charging mechanisms are:
(1) by time,
(2) by volume,
(3) by number of accesses,
(4) by subscription.
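The four mechanisms simply meter different quantities; a toy sketch (rates and field names are invented for illustration, not taken from any real billing system) makes the differences explicit.

  # Hypothetical per-unit rates -- real publishers would set their own.
  RATE_PER_MINUTE   = 0.05
  RATE_PER_MEGABYTE = 0.02
  RATE_PER_ACCESS   = 0.10
  SUBSCRIPTION_FEE  = 30.00    # flat fee per period, independent of use

  def charge(usage, mechanism):
      """usage: dict with 'minutes', 'megabytes' and 'accesses' for one period."""
      if mechanism == "time":           # (1)
          return usage["minutes"] * RATE_PER_MINUTE
      if mechanism == "volume":         # (2)
          return usage["megabytes"] * RATE_PER_MEGABYTE
      if mechanism == "accesses":       # (3)
          return usage["accesses"] * RATE_PER_ACCESS
      if mechanism == "subscription":   # (4)
          return SUBSCRIPTION_FEE
      raise ValueError(mechanism)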
Note that in cases (1) - (3) there is some potential danger to the
user (if others manage to get access to the username/password --
something fairly easy to do on the Internet), and that (4) is
dangerous to the publisher: persons may allow friends to use their
username/password, resulting in many more readers than paid subscribers. Thus, (1) - (3) require sophisticated security mechanisms (see e.g., [Posch 1995]) to protect users, while (4) requires additional mechanisms to protect publishers.

An alternative is to use pre-paid accounts that, e.g., allow a fairly liberal yet limited number of accesses. This is a mix of essentially (3) and (4) that offers a number of advantages: even if the username/password is discovered by others, it is only of limited use and
can cause only limited damage (by illegally ``exhausting'' the
pre-paid sum). Thus, the consumer is better protected in this mode,
hence complicated security measures are not that essential. The
publisher is also protected: users who pass on their username/password
are welcome to do so. After all, whatever use they or their friends make of
the material offered has been pre-paid. This latter technique has e.g.
been used with the electronic version of the ED-MEDIA'95 proceedings,
see http://hyperg.iicm.tu-graz.ac.at/Cedmedia or
http://hyperg.iicm.tu-graz.ac.at/Celectronic_library.

All of the above assumes that the information resides on one (or a
few) servers in the Internet and users access that server. We consider
such assumptions unrealistic for two reasons. First, world-wide access
to Internet nodes is often painfully slow; second, if large numbers of
users access a server, it and Internet connections leading to it will
be overloaded unless complex counter-measures are taken.

For this reason we have been advocating for some time that electronic
material should reside in local servers, with Internet used to update
the material, but users accessing the local (library) servers. This approach, taken in J.UCS (see [Maurer et al 1994], [Calude et al 1994]), improves performance for users and also allows publishers a new way of
selling information: they sell for each server a license for ``at most n simultaneous users'', where n can be
arbitrary. Typically, a university library could start with n = 1
and increase n as necessary (i.e. if the material is popular
enough that often more than a single user wants to access it at the
same time). Note that ``same time'' is really a parameter that can be
adjusted by publishing companies: it means ``an electronic book
accessed by a person is blocked for other persons for m
minutes'' (values of m that we have actually experimented with
are between 3 and 10 minutes); observe in passing that connectionless protocols such as HTTP in WWW are not well suited for this approach, since persons accessing a book block themselves from accessing it again shortly thereafter; this is one of the many reasons why a connection-oriented protocol is built into Hyper-G. The above
charging regime is easy for publishers to administer, and allows electronic material to be installed on a network for all users of the network at low cost (a single license), yet upgrading is easy. Note that the upgrades can even be system-supported: suppose the license is set to
``20 users at a time'' and the system monitors that usage is
increasing steadily and peaks of 19 simultaneous users become more and
more frequent; then the system can alert the librarian to buy
additional licenses. Indeed variants are possible: no limit to the
number of simultaneous users is fixed, but the charges for some
electronic material are made to depend on the ``average number of
simultaneous users''. (A minimal sketch of such a licensing scheme is given at the end of this section.)

We now come to the main point: using second generation systems like Hyper-G and a version of the above charging regime it is possible to
customize material from various sources without violating copyright
restrictions. This is so since customisation in systems such as
Hyper-G is done by linking, not by copying, very much in the spirit of Nelson's Xanadu proposal.

To be concrete, suppose on a server a number of books from various publishers
have been installed, each licensed for some number of
simultaneous users.

An instructor wanting to customise information by taking material from various books and combining it (potentially integrating their own material)
can do so in Hyper-G by not copying the material but by adding links
that are only visible to a specific group, e.g. the group of intended
students. This is depicted in a small example in Fig. 1.

Fig. 1: The arrows represent links created by the teacher and are only visible to the intended user group.

Note that the teacher has thus combined the information on the
``Startpage'' with a section of Book 1, followed by ``Some comments'',
a section in Book 2, a further section in Book 1 and some material on
the ``Endpage''. Even if Book 1 and Book 2 are from different
publishers, no copyright violation occurs: if many students use the material as customised, Book 1 will be accessed more often than Book
2, hence the number of licenses for Book 1 may have to be increased,
as should be the case.

Thus, by holding electronic material in LANs and using systems such as Hyper-G, customisation is not only possible but can be done without any infringement of copyright. In a way, what the proposed set-up does is
to implement Nelson's vision of ``world-wide publishing using
transclusion'' [Nelson 1987] in a local environment.
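Finally, a minimal sketch of the ``at most n simultaneous users'' license described above (Python, with invented names; the parameter m is the blocking time in minutes that a publisher would choose): a connection-oriented server can simply count, per licensed work, how many readers currently hold it.

  import time

  class License:
      """At most max_users simultaneous readers per work; each reader
      blocks one slot for block_minutes (the parameter m, e.g. 3-10)."""
      def __init__(self, work_id, max_users, block_minutes=5):
          self.work_id = work_id
          self.max_users = max_users
          self.block_seconds = block_minutes * 60
          self._slots = {}          # user -> time the slot expires
          self.peak = 0             # highest number of simultaneous readers seen

      def _expire(self, now):
          self._slots = {u: t for u, t in self._slots.items() if t > now}

      def open(self, user, now=None):
          """Grant access if a slot is free. Re-opening by the same user only
          refreshes that user's slot -- unlike a connectionless protocol,
          where every access would look like a new reader."""
          now = time.time() if now is None else now
          self._expire(now)
          if user not in self._slots and len(self._slots) >= self.max_users:
              return False          # all licensed slots currently in use
          self._slots[user] = now + self.block_seconds
          self.peak = max(self.peak, len(self._slots))
          return True

      def should_upgrade(self):
          # System-supported hint to the librarian: peaks close to the
          # limit suggest buying additional licenses.
          return self.peak >= self.max_users - 1

A library could thus start with License("book1", max_users=1) and let the server suggest an upgrade once peaks approach the licensed limit; charging by the ``average number of simultaneous users'' would only require averaging the current slot count over time.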
References
[Alberti et al 1992] Alberti, B., Anklesaria, F., Lindner, P., McCahill, M., & Torrey, D. (1992). Internet Gopher Protocol: A Distributed Document Search and Retrieval Protocol; FTP from boombox.micro.umn.edu, directory pub/gopher/gopher_protocol.
[Andrews et al 1995a] Andrews, K., Kappe, F., Maurer, H., & Schmaranz, K. (1995). On Second Generation Hypermedia
Systems. Proc. ED-MEDIA'95, Graz (June 1995), 75-80. See also J.UCS 0,0 (1994), 127-136 at http://www.iicm.tu-graz.ac.at/Cjucs_root.
[Andrews et al 1995b] Andrews, K., Kappe,
F., & Maurer, H. (1995). Serving Information to the Web with
Hyper-G; Computer Networks and ISDN Systems 27 (1995), 919-926.
[Berners-Lee et al 1992] Berners-Lee, T.,
Cailliau, R., Groff, J., & Pollermann, B. (1992). WorldWideWeb: The Information Universe. Electronic Networking: Research,
Applications and Policy 1,2 (1992), 52-58.
[Calude et al 1994] Calude, C., Maurer, H., & Salomaa, A. (1994). J.UCS: The Journal of Universal Computer Science. J.UCS 0,0 (1994),109-117 at http://www.iicm.tu-graz.ac.at/Cjucs_root.
[Fenn et al 1994] Fenn, B. & Maurer, H. (1994). Harmony on an Expanding Net; ACM Interactions 1,3 (1994), 26-38.
[Haan et al 1992] Haan, B.J., Kahn, P., Riley, V.A.,
Coombs, J.H. & Meyrowitz, N.K. (1992). IRIS Hypermedia Services; Communications of the ACM 35,1 (1992), 36-51.
[Huber et al 1989] Huber, F., Makedon, F., & Maurer,
H. (1989). HyperCOSTOC: a Comprehensive Computer-Based Teaching
Support System. J.MCA 12 (1989), 293-317.
[Kahle et al 1992] Kahle, B., Morris, H., Davis, F., Tiene, K., Hart, C., & Palmer, R. (1992). Wide Area Information Servers: An Executive Information System for Unstructured Files; Electronic Networking: Research, Applications and Policy 1,2 (1992), 59-68.
[Kappe et al 1994] Kappe, F., Andrews, K., Faschingbauer,
J., Gaisbauer, M., Maurer, H., Pichler, M., & Schipflinger, J. (1994). Hyper-G: A New Tool for Distributed Multimedia; Proc. Conf. on Open Hypermedia Systems, Honolulu (1994), 209-214.
[Lennon et al 1994a] Lennon, J., & Maurer, H. (1994). Lecturing Technology: A Future With Hypermedia; Educational Technology 34,4 (1994), 5-14.
[Lennon et al 1994b] Lennon, J., & Maurer, H. (1994). Applications and Impact of Hypermedia Systems: An Overview. J.UCS 0,0 (1994), 54-108 at http://www.iicm.tu-graz.ac.at/Cjucs_root.
[Makedon et al 1987] Makedon, F., Maurer, H., & Ottmann,
Th. (1987). Presentation Type CAI in Computer Science Education at University Level. J.MCA 10 (1987), 283-295.
[Marchionini et al 1995] Marchionini, G. & Maurer, H. (1995). The Role of Digital Libraries in Teaching and Learning. Communications of the ACM 38,4 (1995), 67-75.
[Maurer 1994] Maurer, H. (1994). Advancing the ideas of
WorldWideWeb. Proc. Conf. on Open Hypermedia Systems, Honolulu
(1994), 201-203.
[Maurer et al 1994] Maurer, H. & Schmaranz, K. (1994). J.UCS -- The Next Generation in Electronic Journal Publishing. Proc. Electronic Publ. Conference, London (November 1994) in: Computer Networks for Research in Europe 26, Supplement 2-3 (1994), 63-69.
[Maurer et al 1995] Maurer, H., Schneider, A., & Scherbakov, N. (1995). HM-Card: A New Hypermedia Authoring System. To appear in: Multimedia Tools and Applications.
[Maurer et al 1996a] Maurer, H., & Scherbakov,
N. (1996). HM-Card. Addison Wesley Pub.Co. Germany (1996).
[Maurer et al 1996b] Maurer, H. (1996). Power to the
Web! The Official Guide to Hyper-G. Addison Wesley Pub.Co., UK
(1996).
[Nelson 1987] Nelson, T.H. (1987). Literary Machines. Edition 87.1, 702 South Michigan, South Bend, IN 46618, USA (1987).
[Nielsen 1995] Nielsen, J. (1995). Hypertext and Hypermedia;
Academic Press (1995).
[Posch 1995] Posch, R. (1995). Information Security in Educational Networks. Proc. ED-MEDIA'95, Graz (June 1995), 45-50.