Towards a Systematic Study of Representational Guidance
for Collaborative Learning Discourse [1]
Daniel D. Suthers
(Dept. of Information and Computer Sciences
University of Hawai'i at Manoa
suthers@hawaii.edu)
Abstract: The importance of collaborative and social learning
processes is well established, as is the utility of external representations
in supporting learners' active expression, examination and manipulation
of their own emerging knowledge. However, research on how computer-based representational tools may support collaborative learning is in its infancy.
This paper motivates such a line of research, sketches a theoretical analysis
of the roles of constraint and salience in the representational guidance
of collaborative learning discourse, and reports on an initial study that
compared textual, graphical, and matrix representations. Differences in
the predicted direction were observed in the amount of talk about evidential
relations and the use of epistemological categories.
Keywords: collaborative learning, representational bias, visual
languages
Category: K.3.1

[1] This is an extended version of a paper presented at the ICCE/ICCAI 2000 conference in Taipei, Taiwan. The paper received an Outstanding Paper Award and is published in J.UCS with the permission of ICCE/ICCAI.
1 Introduction
Research into the cognitive and social aspects of learning has developed
a clear picture of the utility of external representations in supporting
learners' active expression, examination, and manipulation of their own
knowledge (e.g., [Koedinger 1991], [Novak
1990], [Reusser 1993], [Scardamalia
et al. 1992], [Snir et al. 1995]), as well as
the equal importance of collaborative and social learning processes (e.g.,
[Brown and Campione 1994], [Lave
and Wenger 1991], [Slavin 1980], [Webb
and Palincsar 1996]). Yet, there is insufficient research on how these
two techniques may be constructively combined. In this paper I motivate
and introduce a line of research on representational tools in support
of collaborative learning.
Representational tools range from basic data manipulation and
office tools such as spreadsheets and outliners to knowledge mapping software
and enhanced modeling and simulation tools. Such tools can help learners
see patterns, express abstractions in concrete form, and discover new relationships.
Ideally, they function as cognitive tools that lead learners into
knowledge-building interactions [Collins and Ferguson
1993], [Lajoie and Derry 1993]. My research is
based on the hypothesis that properly designed representational tools can
guide collaborative as well as individual learning interactions. Specifically,
when learner-constructed external representations become part of the collaborators' shared context, the distinctions and relationships made salient by these representations may guide their interactions in ways that influence learning outcomes.
Collaborative learning has been shown to correlate with greater
learning, increased productivity, more time on task, transfer of knowledge
to related tasks, and
higher motivation [Johnson and Johnson 1989],
[Rysavy and Sales 1991], [Sharan
1980], [Slavin 1980], provided that the learning
groups and tasks are well chosen [see Webb and Palincsar
1996]. Similarly, collaborative use of instructional software (often
necessary in primary and secondary education due to limited availability
of equipment) can be at least as effective as individual use [Johnson
et al. 1985], [Justen et al. 1990], [Webb
1989]. In postsecondary distance education, electronic forms of collaborative
learning can help reduce the isolation of telecommuting learners and increase
the interactivity of the distance learning experience [Abrami
and Bures 1996], [Jonassen et al. 1995]. Nevertheless,
we cannot expect learning gains just because learners are sitting together
or connected by a wire. A goal of computer supported collaborative learning
(CSCL) systems [Koschmann 1994], [Pea
1994] and of my work is to improve the effectiveness of collaborative
learning as an instructional format: i.e., to support peer interactions
in a manner that increases learning gains.
For a number of years, my colleagues and I have been building, testing,
and refining a diagrammatic environment ("Belvédère")
intended to support secondary school children's learning of critical inquiry
skills in the context of science [Suthers and Weiner
1995], [Suthers et al. 1997], [Toth
et al. 2001]. The diagrams were first designed to capture scientific
argumentation during interaction with an intelligent tutoring system, and
later simplified to focus on evidential relations between data and hypotheses.
This change was driven in part by a refocus on collaborative learning,
which led to a major change in how we viewed the role of the diagrammatic
representations. Rather than viewing the representations as a medium of communication
or a formal record of the argumentation process, we came to view them as
resources (stimuli and guides) for conversation [Roschelle
1994], [Suthers 1995]. Meanwhile, various projects
with similar goals (i.e., critical inquiry in a collaborative learning
context) were using radically different representational systems, such
as various forms of hypertext/hypermedia [Guzdial et al.
1997], [O'Neill and Gomez 1994], [Scardamalia
et al. 1992], [Wan and Johnson 1994], node-link
graphs representing rhetorical, logical, or evidential relationships between
assertions [Ranney et al. 1995], [Smolensky
et al. 1987], [Suthers and Weiner 1995], containment
of evidence within theory boxes [Bell 1997], and
evidence or criteria matrices [Puntambekar et al. 1997].
Both empirical and theoretical inquiry suggest that the expressive constraints
imposed by a representation and the information (or lack thereof) that
it makes salient may have important effects on students' discourse during
collaborative learning. Specifically, as learner-constructed external representations become part of the collaborators' shared context, the distinctions and relationships made salient by these representations may influence their interactions in ways that affect learning outcomes. However, to date little systematic research has been undertaken to explore the possible effects of this variable on collaborative learning. One exception is [Guzdial 1997], who compared two forms of threaded discussion.
Given that external representations define the fundamental character of software intended to guide collaborative learning, a systematic comparison is overdue. The question is not "which representation is better?" but rather "what kinds of interactions, and therefore learning, does each representational notation encourage?" This paper motivates a research program and reports initial results from our laboratory.

Figure 1: Representational Guidance
2 Representational Guidance
The major hypothesis of this work is that variation in features of representational
tools used by learners working in small groups can have a significant effect
on the learners' knowledge-building discourse and on learning outcomes.
The claim is not merely that learners will talk about features of the software
tool being used. Rather, with proper design of representational tools,
this effect will be observable in terms of learners' talk about and use
of subject matter concepts and skills. I have begun investigations to determine
what features have what kind of effect. This section develops an initial
theory of how representations guide learning interactions, and applies
this analysis to make specific predictions concerning the effects of selected
features of representational tools. The discussion begins with some definitions.
2.1 Definitions
Representational tools are software interfaces in which users
construct, examine, and manipulate external representations of their knowledge.
My work is concerned with symbolic as opposed to analogical representations.
A notation/artifact distinction [Stenning and Yule 1997]
is critical, as depicted in [Fig. 1]. A representational
tool is a software implementation of a representational notation
that provides a set of
primitive elements out of which representations can be constructed.
(For example, in [Fig. 1], the representational notation
is the collection of primitives for making hypothesis and data statements
and "+" and "" links, along with rules for their
use.) The software developer chooses the representational notation and
instantiates it as a
representational tool, while the user of the tool constructs particular
representational artifacts in the tool. (For example, in [Fig.
1] the representational artifact is the particular diagram of evidence
for competing explanations of mass extinctions.)
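To make the notation/artifact distinction concrete, the following sketch is illustrative only (it is not the Belvédère implementation, and the identifiers and example content are hypothetical): the notation is encoded as a fixed set of statement and link types, and an artifact is one particular construction from those primitives.

# Illustrative sketch of the notation/artifact distinction (hypothetical code,
# not the actual Belvédère implementation). The notation fixes the primitives
# and their rules of use; an artifact is one construction made from them.
NOTATION = {
    "statement_types": {"hypothesis", "data"},
    "link_types": {"+", "-"},  # consistency / inconsistency relations
}

def make_statement(kind, text):
    # Constraint imposed by the notation: only the provided categories exist.
    assert kind in NOTATION["statement_types"]
    return {"kind": kind, "text": text}

def make_link(kind, source, target):
    assert kind in NOTATION["link_types"]
    return {"kind": kind, "source": source, "target": target}

# One representational artifact built from the primitives (content invented
# here, loosely following the mass-extinction example of Fig. 1):
h = make_statement("hypothesis", "An asteroid impact caused the extinctions")
d = make_statement("data", "A distinctive clay layer marks the extinction boundary")
artifact = [h, d, make_link("+", d, h)]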
Learning interactions include interactions between learners and the
representations, between learners and other learners, and between learners
and mentors such as teachers or pedagogical software agents. Our work focuses
on interactions between learners and other learners, specifically verbal
and gestural interactions termed collaborative learning discourse.
Each given representational notation manifests a particular representational
guidance, expressing certain aspects of one's knowledge better than others. The concept of representational guidance is borrowed from artificial intelligence, where it is called representational bias [Utgoff 1986]. The term guidance is adopted here to avoid the negative connotation of bias. The term knowledge unit will be used
to refer generically to aspects of one's knowledge that one might wish
to represent, such as hypotheses, statements of fact, concepts, relationships,
rules, etc. The use of this phrase does not signify a commitment to the
view that knowledge intrinsically consists of "units," but rather
that users of a representational system may choose to denote some aspect
of their thinking with a representational proxy. Representational guidance
manifests in two major ways:
- Constraints: limits on expressiveness, e.g., the representational
system may provide a limited ontology of objects and relations [Stenning
and Oberlander 1995].
- Salience: how the representation facilitates processing of certain
knowledge units, possibly at the expense of others [Larkin
and Simon 1987].
As depicted in [Fig. 1], representational guidance
originates in the notation and is further specified by the design of the
tool. It affects the user through both the tool and artifacts constructed
in the tool.
2.2 Thesis
The core idea of the theory may now be stated as follows: Representational
tools mediate collaborative learning interactions by providing learners
with the means to express their emerging knowledge in a persistent medium,
inspectable by all participants, where the knowledge then becomes part
of the shared context. Representational guidance constrains which knowledge
can be expressed in the shared context, and makes some of that knowledge
more salient and hence a likely topic of discussion. The following sections
clarify this thesis and detail several ways in which constraints and salience
are claimed to influence collaborative learning.
2.3 The Origins of Constraints and Salience
Zhang [Zhang 1997] distinguishes cognitive and perceptual operations in reasoning with representations: cognitive operations act on internal representations, while perceptual operations act on external representations. According
to Zhang, the perceptual operations take place without an internal copy
being made of the representation (although internal representations may
change as a result of these operations). Expressed in terms of Zhang's
framework, the present analysis is
concerned primarily with perceptual operations on external representations
rather than cognitive operations on internal representations. This is because
my work is concerned with how representations that reside in learners'
perceptually shared context mediate collaborative learning interactions.
While cognitive operations on internal representations do influence interactions
in the social realm, CSCL system builders do not design internal representations
- they design tools for constructing external representations. These external
representations are accessed by perceptual operations, so the perceptual
features of a representational notation are of interest for CSCL systems.
Stenning and Oberlander [Stenning and Oberlander 1995]
distinguish constraints inherent in the logical properties of a representational
notation from constraints arising from the architecture of the agent using
the representational notation. This corresponds roughly to my distinction
between constraints and salience. Constraints are logical and semantic
features of the representational notation. Salience depends on the
perceptual architecture of the agent. Differences in salience can be understood
in terms of Zhang's distinction between obtaining information by "direct
perception" versus application of perceptual operators. Information
that is recoverable from a representation is salient to the extent that
it is recoverable by automatic perceptual processing rather than through
a controlled sequence of perceptual operators.
Zhang's "direct perception" should not be confused with the
view that no computation is required for perception. "Direct perception"
requires computation, albeit highly automatic and requiring no executive
control (e.g., [Treisman and Souther 1987]). Recovery
of certain information from a representation may require controlled application
of multiple direct perceptions. For example, upon examining a graph, one's
perception of the color of a node is more direct than one's
perception of whether this node is connected by links to another specified
node [Lohse 1997]. Visual search - a sequence of direct
perceptions - is required to make the latter judgement. For our purposes,
the important point is that the work required to retrieve any given information
from a representation can vary as the representational system changes.
2.4 External Representations in Individual and Collaborative Context
Substantial research has been conducted concerning the role of external
representations (as opposed to mental representations) in individual problem
solving. This research generally shows that the kind of external representation
used to depict a problem may determine the ease with which the problem
is solved [Kotovsky and Simon 1990], [Larkin
and Simon 1987], [Novick and Hmelo 1994], [Zhang
1997]. The constraints built into representations may restrict the
problem solver's search space, to the possible detriment or enhancement
of problemsolving success [Amarel 1968], [Hayes
1989], [Klahr and Robinson 1981], [Stenning
and Oberlander 1995].
One might ask whether this research is sufficient to predict the effects
of representations in collaborative learning.
A related but distinct line of work should be undertaken in collaborative
learning contexts for several reasons. The interaction of the cognitive
processes of several agents differs from the reasoning of a single agent
[Okada and Simon 1997], [Perkins
1993],
[Salomon 1993], and therefore may be affected by
external representations in different ways. In particular, shared external
representations can be used to coordinate distributed work, and will serve
this function in different ways according to their representational guidance.
The act of constructing a shared representation may lead to negotiations
of meaning that may not occur in the individual case. Also, the mere presence
of representations in a shared context with collaborating agents may change
each individual's cognitive processes. One person can ignore discrepancies
between thought and external representations, but an individual working
in a group must constantly refer back to the shared external representation
while coordinating activities with others (Micki Chi, personal communication).
Thus it is conceivable that external representations have a greater effect
on individual cognition in a social context than when the individual works alone.
Finally, prior work on the role of external representations in individual
problem solving has often used well-defined problems. Further study is needed on ill-structured, open-ended problems such as those typical
of scientific inquiry.
The discussion now turns to the identification of dimensions along which
different representational notations vary, and predictions that a given
kind of learning interaction will vary along that same dimension.
2.5 Representational Notations Bias Learners towards Particular Ontologies
The first hypothesis claims that important guidance for learning interactions
comes from ways in which a representational notation limits what
can be represented [Stenning and Oberlander 1995],
[Utgoff 1986]. A representational notation provides
a set of primitive elements out of which representational artifacts are
constructed. These primitive elements constitute an ontology of categories
and structures for organizing the task domain. Learners will see their
task in part as one of making acceptable representational artifacts out
of these primitives. Thus, they will search for possible new instances
of the primitive elements, and hence (according to this hypothesis) will
be guided to think about the task domain in terms of the underlying ontology.
For example, consider the following interaction in which students were
working with a version of Belvédère that required all statements
to be categorized as either data or claim. Belvédère
is an "evidence mapping" tool developed under the direction of
Alan Lesgold and myself while I was at the University of Pittsburgh [Suthers
and Jones 1997], [Suthers et al. 1997], [Suthers
and Weiner 1995]. The example is from a videotape of students in a 10th-grade science class.
S1: So data, right? This would be data.
S2: I think so.
S1: Or a claim. I don't know if it would be claim or data.
S2: Claim. They have no real hard evidence. Go ahead, claim. I mean who
cares?
Who cares what they say? Claim.

Figure 2: Example of Elaboration Hypothesis
The choice forced by the tool led to a peer-coaching interaction
on a distinction that was critically important for how they subsequently
handled the statement. The last comment of S2 shows that the relevant epistemological
concepts were being discussed, not merely which toolbar icon to press or
which shape to use.
2.6 Salient Knowledge Units are Elaborated
This hypothesis states that learners will be more likely to attend to,
and hence elaborate on, the knowledge units that are perceptually salient
in their shared representational workspace than those that are either not
salient or for which a representational proxy has not been created. The
visual presence of the knowledge unit in the shared representational context
serves as a reminder of its existence and any work that may need to be
done with it. Also, it is easier to refer to a knowledge unit that has
a visual manifestation, so learners will find it easier to express their
subsequent thoughts about this unit than about those that require complex
verbal descriptions [Clark and Brennan 1991]. These
claims apply to any visually shared representations. However, to the extent
that two representational notations differ in the kinds of knowledge units
they make salient, these functions of reminding and ease of reference
will encourage elaboration on different kinds of knowledge units. The
ability to facilitate learners' elaboration is important because substantial
psychological research shows that elaboration leads to positive learning
outcomes, including memory for the knowledge unit and understanding of
its significance (e.g., [Chi et al. 1989], [Craik
and Lockhart 1972], [Stein and Bransford 1979]).
For example, consider the three representations of a relationship between
four statements shown in [Fig. 2]. The relationship
is one of evidential support. The Containment notation ([Fig.
2]b) uses an implicit device, spatial containment, to represent evidential
support, while the Graph notation ([Fig. 2]c) uses
an explicit
device, an arc. (Also, Graph supports explicit representation of negative
relationships, not present in Containment.) It becomes easier to perceive
and refer to the relationship as an object in its own right as one
moves from left to right in [Fig. 2]. Hence the present
hypothesis claims that relationships will receive more elaboration in the
rightmost representational notation.
An alternative line of thinking leads to a prediction that the elaboration
effect may be limited. Learners may see their task as one of putting knowledge
units "in their place" in the representational environment. I
will call this the Pigeonhole hypothesis. For example (according
to this hypothesis), once a datum is placed in the appropriate context
([Fig. 2]b) or connected to a hypothesis ([Fig.
2]c), learners may feel it can be safely ignored as they move on to
other units not yet placed or connected. Hence they will not elaborate
on represented units. This suggests the importance of making missing relationships
salient.
2.7 Salience of Missing Units Guides Search
Some representational notations provide structures for organizing knowledge
units, in addition to primitives for construction of individual knowledge
units. Unfilled "fields" in these organizing structures, if perceptually
salient, can make missing knowledge units as salient as those that are
present. If the representational notation provides structures with predetermined
fields that need to be filled with knowledge units, the present hypothesis
predicts that learners will try to fill these fields. For example, a two-dimensional matrix has cells that are intrinsic to the structure of the
matrix: they are there whether or not they are filled with content. Learners
using a matrix will look for knowledge units to fill the cells.
Artifacts from three notations that differ in salience of missing evidential
relationships are shown in [Fig. 3]. In the Text representation
([Fig. 3]a), no particular relationships are salient
as missing: no particular prediction about search for new knowledge units
can be made. In the Graph representation ([Fig. 3]b),
the lack of connectivity of the volcanic hypothesis to the rest of the
graph is salient. Hence this hypothesis predicts that learners will discuss
its possible relationships to other statements. However, once some connection
is made to the hypothesis, it will appear connected, so one might predict
(according to the Pigeon Hole hypothesis) that no further relationships
will be sought. The Matrix representation ([Fig. 3]c)
has columns for hypotheses and rows for data, with the relationships between
these being indicated by the symbols "+" or "-" in the cells of the matrix. In the Matrix representation, all undetermined relationships are salient as empty cells. The present hypothesis predicts that learners will discuss more relationships between statements when
using matrices.

Figure 3: Example of Salient Absence Hypothesis
2.8 Predicted Differences
Based on the discussion of this section, the following predictions were
made, and partially tested in the study reported below. The symbol ">"
indicates that the discourse phenomenon at the beginning of the list (concept
use, elaboration, or search) will occur at a significantly greater rate
in the treatment condition(s) on the left of the symbol than in those on
the right.
Concept Use" Representational notations bias learners towards
particular ontologies": {Graph, Matrix} > {Container, Text, Threaded
Discussion}. The Graph and Matrix representations require that one
categorize statements and relations. This will initiate discussion of the
proper choice, possibly including peer coaching on the underlying concepts.
The Container, Text, and Threaded Discussion representations provide only
implicit categorization. Students may discuss placement of information,
but this talk is less likely to be expressed in terms of the underlying
concepts.
Elaboration on relations ("Salient knowledge units are elaborated":
{Graph, Matrix} > Container > {Text, Threaded Discussion}. Graphs
and Matrices make relations explicit as objects that can be pointed to
and perceived, while this is not the case in the other two representations.
The appearance of one statement inside another's container constitutes
a more specific assertion than contiguity of statements in a Threaded Discussion.
Hence participants may be more likely to
talk about whether a statement has been placed correctly in the Container
representation.
Search for Missing Relations ("Salience of missing units guides search"): Matrix > {Container, Graph} > {Text, Threaded Discussion}.
The matrix representation provides an empty field for every undetermined
relationship, prompting participants to consider all of them. In Graphs
or the Container representations, salience of the lack of some relationship
disappears as soon as a link is drawn to the statement in question or another
is placed in its container, respectively. Text and Threaded Discussion
do not specifically direct searches toward missing relationships.
The Elaboration hypothesis was not tested independently of the Search
hypothesis in this study.
3 An Initial Study
This section reports on an initial study that was conducted to identify
trends suggesting that there is a phenomenon worthy of further study, and
to refine analytic techniques. Specifically, the study examined how the
amount of talk about evidence and the amount of talk about the epistemological
status of propositions (empirical versus theoretical) differed across three
representational tools, and provided qualitative observations to guide
further study.
3.1 Design
Six pairs (twelve participants) were distributed evenly between three
treatment conditions in a between-subjects design. The three treatment
conditions corresponded to three notations: Text, Graph, and Matrix. These
notations differ on more than one feature, such as ontology, whether inconsistency relations are represented, and whether the notation is primarily visual or textual. I intentionally
chose this research strategy (instead of manipulating precisely one feature
at a time) in order to maximize the opportunity to explore the large space
of representations within the time scale on which collaborative technology
is being adapted.
3.2 Method
3.2.1 Participants
Participants aged 13-14 were recruited by an assistant (Cynthia
Liefeld) from soccer practice, and were paid for their participation. All
of the participants were male. Two pairs of participants were run in each
of the three conditions. Each pair consisted of boys who knew each other,
a requirement intended to minimize negotiation of a new interpersonal relationship
as a complicating factor.

Figure 4: Screen Layout for Studies
3.2.2 Materials
3.2.2.1 Computer Setup
The computer screen was divided in half as shown in [Fig.
4]. The left-hand side contained the representational tool - any one of Text, Graph (shown), or Matrix. The right-hand side contained a
web browser open to the entry page for the problem materials.
3.2.2.2 Software
Three existing software packages were used: Microsoft Word (Text), Microsoft
Excel (Matrix), and Belvédère (Graph).
Groups using MS Word were free to use it as they wished. We did not
restrict participants' appropriation of typographical devices for organizing
information, but neither did we encourage any particular use of the textual
medium, other than to suggest that they label their statements as Data
and Hypotheses (this was done to make the instructions more similar
across groups).
Groups using MS Excel were provided with a prepared matrix that had
the labels "Hypotheses" and "Data" in the upper left
corner, and cells formatted sufficiently large to allow entry of textual
summaries of the same [see Fig. 6]. Participants were
specifically told to enter hypotheses as column headers, data as row headers,
and to record the relationships in the internal cells.
The Graph condition used Belvédère 2.1 [Suthers
and Jones 1997], which provides rounded nodes for hypotheses, rectangles
for data, and links for consistency and inconsistency relations between
them. Hypothesis and data shapes are filled with textual summaries of the
corresponding claims - see the left hand side of [Fig.
4].
3.2.2.3 Science Challenge Problems
Participants were presented with problems in a web browser (right
side of [Fig. 4]). A science challenge problem
presents a phenomenon to be explained (e.g., determining the cause of the
dinosaur extinctions, or of a mysterious disease on Guam known as Guam
PD). These are relatively ill-structured problems: at any given point
many
possible knowledge units may reasonably be considered. The web-based
problem materials provided indices to relevant resources, such as lists
of articles posing possible explanations of the phenomenon, reporting empirical
findings from fieldwork or laboratory work, or explaining basic domain
concepts. As shown in [Fig. 4], the materials included
summaries of professional journal articles. The materials used in the present
study were modified from the classroom versions of science challenge problems
developed by Arlene Weiner and Eva Toth. The classroom versions may be
viewed at http://lilt.ics.hawaii.edu/belvedere/materials/index.html.
The experimental versions excluded hands-on activities, links to external
sites and an activity guide. See [Toth et al. 2001]
for details on the classroom research.
3.2.3 Procedure
Participants were seated in front of a single monitor and keyboard.
After an introduction to the study and signing of permission forms, participants
were shown the software and allowed to practice the basic manipulations
such as creating and linking nodes or filling in matrix cells. This training
did not involve any mention of concepts of evidence or of the problem domain.
Participants were then presented with the problem statement in the web
browser on the right. The problem solving session was initiated when they
were instructed to identify hypotheses that provide candidate explanations
of the phenomenon posed, and to evaluate these hypotheses on the basis
of laboratory studies and field reports obtained through the hypertext
interface. They were instructed to use the representational tool during
the problem solving session to record the information they find and explore
how it bears on the problem. Participants were responsible for deciding
how to share or divide use of the keyboard and mouse. The procedure described
in this paragraph was repeated, first with a "warm-up" problem,
and then with the problem for which data is reported below (Guam PD).
Sessions were videotaped with the camera pointed at the screen over
the shoulder of one of the participants. The camera was adjusted to show
the screen in sufficient detail to see its contents, yet also to show the
immediate space around the screen to capture gestures in the vicinity of
the screen.
At the conclusion of the problem solving session, participants were
asked to write individual essays in which they describe the problem they
worked on, any conclusions they reached, the information they used to solve
the problem and how they used that information to arrive at their conclusions.
Other outcome measures were not attempted because we did not expect learning
outcomes after a short treatment time, and because participants were free
to browse or ignore the information pages, making it difficult to compare
memory for information across individuals.
3.3 Results
Analysis was based primarily on coding of transcripts of participants'
spoken discourse, and secondarily on participants' representational artifacts.
Each is discussed in turn below. Essay results were inconclusive and are
not reported here.

Table 1: Coding Categories
3.3.1 Coding and Analysis of Discourse
Videotapes from the six one-hour problem-solving sessions were
transcribed and segmented. A segment was defined to be a modification to
the external representation or a single speaker's turn in the dialogue,
except that turns that expressed multiple propositions were broken into
multiple segments. Segments were coded on the
dimensions indicated in [Tab. 1] using the QSR Nud*ist
software package. Italicized labels in the table are abstract categories:
only the SMALL CAPS labels were actually applied to transcript
segments.
The Representation coding applies uniformly to all segments in
any given transcript, and indicates the independent variable for the session
(TEXT, GRAPH or MATRIX). The remaining coding is done
on a per-segment basis.
Topic provides the primary dependent variables. It divides into
two categories (DOMAIN TALK and EPISTEMOLOGICAL CLASSIFICATION)
and two subdimensions of categories (Evidential Relation and
Other Topic).
Topic category DOMAIN TALK codes discourse about the problem
domain (e.g., "See if they are close to each other," "They
get it from the rivers," "Maybe they don't soak it long enough").
Topic category EPISTEMOLOGICAL CLASSIFICATION codes discourse
about the epistemological status of a statement, including classification
as empirical (e.g., "that's data"), theoretical (e.g., "that's
a hypothesis, isn't it?") or discussion of the choice (e.g., "do
you want me to go data or hypothesis?"). In subsequent work, EPISTEMOLOGICAL
CLASSIFICATION will be subdivided into Theoretical, Empirical, and
Equivocal, in a manner similar to Evidential Relation (discussed
below). In the present study I only wanted to see whether the tools differed
in their prompting for making this choice. To avoid confusion it should
be noted that this code is not applied to statements of hypotheses or data themselves, but only to explicit discussion of whether statements belong to one of these categories.
Topic subdimension Evidential Relation is applied to segments
where participants discuss or identify the nature of the evidential relationship
between two statements. The codes are CONSISTENCY (e.g., "it's
also for," "that confirms"), INCONSISTENCY ("so
that's against," "with this one, no, conflicts, right?"),
or EQUIVOCAL, applied when participants raise the question of
which relationship holds, if any, without identifying one specifically
("is that for or against?," "it can neither confirm nor
deny"). In some cases, evidential relationships were apparently being
expressed in terms of the representational primitives provided by the software
(e.g., "connect these two"). These utterances were also coded
with the appropriate Evidential Relation category, but marked with
the Level code (discussed below) so that such tool-level talk
could be distinguished during the analysis.
Topic subdimension Other Topic codes segments not coded
as one of the above topics. The codes include ON-TASK (e.g., "are we done with this?"), OFF-TASK (e.g., "what's for lunch?"), or UNCLASSIFIABLE (e.g., "uh," mumbles,
etc.).
The remaining coding dimensions are used to select out relevant segments
for particular analyses. Mode indicates whether the segment is coded
for its VERBAL content or for an action taken on the REPRESENTATIONAL
artifact. The final two dimensions only apply to verbal segments. Level
is applied only to VERBAL EPISTEMOLOGICAL CLASSIFICATION and EVIDENTIAL
RELATION segments, and indicates whether an utterance made direct
use of epistemological or evidential
concepts (e.g., "supports," "hypothesis": CONCEPTUAL)
or was expressed in terms of the software (e.g., "link to this,"
"round box": TOOLBASED). Ownership indicates
whether the participant was merely reading text that we provided (RECITED)
or making one's own contribution (NONRECITED).

Table 2: Summary of Verbal Coding
Coding was performed by two of my assistants (Chris Hundhausen and Laura
Girardeau). Questions of interpretation, problematic segments, etc. were
discussed among the three of us during meetings, but the coding itself
was done independently. Interrater reliability was computed using
the Kappa statistic across all of the categories described above, producing
a value of 0.92 (n=1942).
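For readers unfamiliar with the statistic, Cohen's Kappa corrects the observed proportion of agreement for the agreement expected by chance given each coder's marginal code frequencies. The following sketch shows the computation on invented labels; it is not the study's analysis script, and the codes shown are only examples.

from collections import Counter

def cohen_kappa(coder_a, coder_b):
    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two coders' labels for five segments (invented data).
a = ["DOMAIN", "CONSISTENCY", "DOMAIN", "OFF-TASK", "DOMAIN"]
b = ["DOMAIN", "CONSISTENCY", "EPISTEMOLOGICAL", "OFF-TASK", "DOMAIN"]
print(round(cohen_kappa(a, b), 2))  # 0.71 for this toy input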
Selected results of coding are shown in [Tab. 2],
focusing on segments coded as Mode=VERBAL, and showing both counts
and percentages for each of the three treatment groups. Percentages are
taken relative to NON-RECITED on-task utterances, shown in the second row. Counts and percentages for EVIDENTIAL RELATION are broken down in two orthogonal ways: by whether the relation was CONSISTENCY, INCONSISTENCY, or EQUIVOCAL; and by whether the talk about evidence was CONCEPTUAL or TOOL-BASED. EPISTEMOLOGICAL CLASSIFICATION was broken down by CONCEPTUAL or TOOL-BASED.
Due to the small sample size we did not perform statistical testing.
3.3.2 Qualitative Observations
The document created by one TEXT group contained no expression
of evidential relations [see Tab. 3], and the transcript
of verbal discourse for this group contained no overt discussion of evidential
relations. We did observe an effect of the representation on participants'
discourse: discussion of spelling was prompted by MS Word's red underlining
of unrecognized words [Tab. 4]. All of the discussion
of evidence in TEXT occurred in the other group at the end of
the session (the longest session in the pilot study). This group had run over time; the experimenter, concerned that the participants' mothers were waiting, prompted the participants to draw their conclusions.
All of the talk about evidence took place in response to this prompt. This
second group's document was substantially more verbose than that of the
first group shown in [Tab. 3].
Research Question: What is the mysterious muscle
and mind killer?
H1: It could have come from the water.
H2: Somebody could have put it in the water.
Data1: The scientists believe fadang didnt cause it.
Data 2: When you get the disease you hands feel numb One mans wife got
it and she feels numbness through her whole body she can still move parts
of her body but not her feet
Data 3: There is a mineral imbalance in the water.
Data 4: An unusual amino acid has been isolated in chickling peas in the
area.
Table 3: Sample Text Document
L: Okay, um, scientists don't believe that fadang
is a, is the cause of the virus.
R: Believe what? Believe what?
L: Fadang didn't cause it. <typing>FADANG
R: Mmm.
L: What?
R: Nothing. You spelled it wrong.
L: <clicks>What is it, 'A'?
R: Yeah. <mumble>It's still wrong.
L: No it isn't.
R: Yes it is. Then why is it, why is it underlined with the x thing?
<points to l-hand side of screen>
L: Not wrong.
Table 4: Text Transcript Sample
A document produced by one of the GRAPH groups is shown in
[Fig. 5]. This graph is somewhat linear, in spite of
the fact that graphs are normally considered a nonlinear medium. A pattern
of "identify information, categorize information, add it to the diagram, link it in" is typical of interactions in this transcript [see Tab. 5]. This pattern of activity, which leads to the linearity of the graph,
is consistent with the Pigeonhole Hypothesis: participants may feel that
the primary task is to connect each new statement to something else, after
which it can be ignored.

Figure 5: Graph Artifact
L: Look, everybody knows that fadang is a toxic;
people who go to a lot of trouble to for...
R: Yeah, that means that, uh, if they don't do it right, they...
L: They could die; that might not cause the disease, though
R: It's a hypothesis <typing hypothesis "using fadang that is prepared
incorrectly could result in sickness or death">
L: What if they don't prepare it right?
R: Right
L: Oops R: Okay, huh? L: <mumble>
R: <mumble>, this? add to our investigation?
L: Yeah, add to investigation <adding link to "By soaking fadang
...">; and that also goes to cycad, too
R: Well, wait, uh L: Hold over <adding link to "people who eat
large amounts of cycad ...">
R: And this stuff is the same thing as fadang, right?
L: Yeah
Table 5: Graph Transcript Sample

Figure 6: Matrix Artifact
L: Peas. All right. <points to screen>Now.
What hypothesis is this? Food poisoning. Yeah. Okay. Go down. Let's see.
Supports. R: Confirms. Confirm. You might want to use the same word you're
using the whole time.
L: Is there any chance we could use this for the science fair? No. It conflicts.
R: Diet in South Pacific, yes?
L: No, no, no, no, no. <points to screen>R: Conflicts and then confirms.
L: Conflicts. Is it. Confirms. What is this one? Oh, the brain. R: Well,
yeah. L: Yeah, yeah. Okay. Confirms. Well, I think now we should, like,
rule out this <points to 2nd hypothesis in table>one because it has
had nothing but conflicts. The Hiroshima ...that we thought of ourselves.
R: It conflicts, conflicts, conflicts. It hasn't had one, uh...
Table 6: Matrix Transcript Sample
Finally, the MATRIX artifacts were especially striking because
participants were not specifically instructed to fill in all the cells,
yet they did so [see Fig. 6]. The transcripts illustrated
participants' systematic identification of evidential relations as they
worked down the columns, and in one case their appropriate use of the table
to rule out a hypothesis that they had proposed. Both of these points are
illustrated in the transcript segment of [Tab. 6].
3.4 Discussion
Recall that the Search hypothesis predicts that participants will be
more likely to seek evidential relations when using representations that
prompt for these relations with empty structure (TEXT < GRAPH
< MATRIX). The [Tab. 2] row labeled "EVIDENTIAL
RELATION" is relevant to the Search hypothesis. This row counts,
for each treatment group, the percentage of verbal segments that were coded
with any one of the three evidential values (CONSISTENCY, INCONSISTENCY, EQUIVOCAL). The results are consistent with the Search hypothesis:
TEXT=0.58% < GRAPH=5.22% < MATRIX=19.69%.
This trend holds even when limited to CONCEPTUAL expressions of
evidential relations: TEXT=0.43% < GRAPH=1.47% <
MATRIX=8.48%. Note however that a substantial portion of talk
about evidence in the GRAPH and MATRIX conditions is
tool-based (about two-thirds of Graph and half of Matrix evidential utterances are TOOL-BASED). This is as expected, since these
tools, unlike TEXT, provide objects that may be referred to as
proxies for evidential relations.
The breakdown of evidential talk according to the type of relation shows
the influence of the exhaustive prompting of MATRIX. In TEXT
and GRAPH, participants focused primarily on CONSISTENCY
relations, a possible manifestation of the confirmation bias [Klayman
and Ha 1987]. The distribution of relation types was more balanced in MATRIX, with
almost half of the talk about evidential relations being concerned with
inconsistency or equivocal relations. This may be because MATRIX
prompts for consideration of relationships between all pairs of items:
participants are more likely to encounter inconsistency or indeterminate
relations when considering pairs that others may have neglected in the
GRAPH or TEXT conditions.
Turning to EPISTEMOLOGICAL CLASSIFICATION, the difference between
TEXT and MATRIX was not as strong as expected. The ontological
prompting of MATRIX may be obscured by the fact that the instructions
for all three conditions directed participants to consider and record hypotheses
and empirical evidence. TEXT participants, like others, complied
with these instructions, for example labeling propositions as "Data" or "Hypothesis" in the document reproduced in [Tab.
3]. However, the GRAPH condition shows a greater proportion
of epistemological classification talk. This is expected because of GRAPH's
use of visually distinct shapes to represent data and hypotheses.
Overall, the results are encouraging with respect to the question of
whether there is a phenomenon worth investigating. Differences in the predicted
directions were seen in both talk about evidence and about the epistemological
status of statements. However, this sample data cannot be taken as conclusive.
Caveats, all of which are being addressed by ongoing work, include the
small sample size (hence no test of significance), the lack of a learning
outcomes measure, and the need for a more direct test of the claim that
representational state affects subsequent discourse processes.
Furthermore, analyses based on frequencies of utterances across the
session as a whole fail to distinguish utterances seeking evidential relations
from those elaborating on previous ones (i.e., between the Search and Elaboration
hypotheses), or to show a causal relationship between the state of the
representation and the subsequent discourse. A more sophisticated coding
is required to test whether the representation or salient absence of a
particular (kind of) knowledge unit influences search for or elaboration
on that unit. For example, one might code changes to the representations
with the set of knowledge units that are (a) expressed or (b) saliently
missing from that point onwards to the next change. Then, subsequent utterances
within that time window could be tested for either (a) elaboration
on those knowledge units or (b) search for other knowledge units related
by evidential relations. This would provide a more stringent test of the
causal relationship between salience and discourse claimed by the research
hypotheses.
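As an illustration of the kind of analysis being proposed, the following minimal sketch (hypothetical data structures and names, not an implemented analysis tool) segments a session at each representation change, records which knowledge units are expressed or saliently missing at that point, and classifies the utterances falling in the window before the next change as elaboration or search.

from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class RepChange:
    time: float
    expressed: Set[str]          # units now present in the shared artifact
    saliently_missing: Set[str]  # units prompted for but not yet supplied (e.g., empty cells)

@dataclass
class Utterance:
    time: float
    mentions: Set[str]           # knowledge units the utterance refers to

def classify_window(change: RepChange, window_end: float,
                    utterances: List[Utterance]) -> Tuple[int, int]:
    # Count elaboration vs. search utterances between two representation changes.
    elaboration = search = 0
    for u in utterances:
        if not (change.time <= u.time < window_end):
            continue
        if u.mentions & change.expressed:
            elaboration += 1   # talk about units already represented
        elif u.mentions & change.saliently_missing:
            search += 1        # talk seeking units the representation marks as missing
    return elaboration, search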
All of these issues are being addressed in a study underway at this
writing. The study involves a larger sample size and learning outcome measures,
as well as a more controlled presentation of materials. Instead of commercial
software, we are using versions of Belvédère that have been
modified to provide the alternative representations in [Fig.
3], in order to reduce nonessential differences between the representational
tools and enable automated recording of all manipulations of the representations
in log files. Initial results, to be reported in future publications, show
effects on discourse processes similar to the results reported here. See
also [Toth et al. 2001] for a classroom study with
consistent results based on an analysis of artifacts created by students
during a two-week project. Plans for future work include attempts to
replicate selected experimental results in distance learning situations,
both synchronous and asynchronous, as well as further classroom work.
4 Summary
This paper introduced the hypothesis that variation in features of representational
tools could have a significant effect on the learners' knowledge-building
discourse. I sketched a theoretical analysis of the role of constraints
and salience in representational guidance, and reported results from the
first in a series of investigations. The study suggests that appropriate
representational guidance may result in increased consideration of evidential
relations by collaborating learners. Further work is needed to provide
more stringent testing of the hypotheses and to situate results in terms
of possible learning outcomes. This line of work promises to inform the
design of future software learning environments and to provide a better
theoretical understanding of the role of representations in guiding group
learning processes.
Acknowledgements
I am grateful to Alan Lesgold, who initiated the Belvédère
project, for his mentorship; to Laura Girardeau, Cynthia Liefeld, Chris
Hundhausen, and David Pautler for assistance with the studies; to Micki
Chi, Martha Crosby, and John Levine for discussions concerning the role
of representations in learning, visual search, and social aspects of learning,
respectively; and to Violetta Cavalli-Sforza, John Connelly, Kim Harrigal,
Dan Jones, Sandy Katz, Massimo Paolucci, Eva Toth, Joe Toth, and Arlene
Weiner for related work on Belvédère not reported
here. Work on Belvédère, development of these ideas, and
the experimental sessions were funded by DoDEA's Presidential Technology
Initiative and by DARPA's Computer Aided Education and Training Initiative
while I was at the University of Pittsburgh. Data analysis and the writing
of this document were supported by NSF's Learning and Intelligent Systems
program under my present affiliation with the University of Hawai'i at
Manoa.
References
[Abrami and Bures 1996] Abrami, P.C. & Bures,
E.M.: "Computersupported collaborative learning and distance
education, Reviews of lead article"; The American Journal of Distance
Education, 10, 2 (1996), 3742.
[Amarel 1968] Amarel, S.: "On representations
of problems of reasoning about actions"; In Donald Michie (Ed.): "Machine
Intelligence 3"; Amsterdam, Elsevier/NorthHolland (1968), 131171.
[Bell 1997] Bell, P.: "Using argument representations
to make thinking visible for individuals and groups"; Proc. 2nd International Conference on Computer Supported Collaborative Learning (CSCL'97), University of Toronto, Toronto (1997), 10-19.
[Brown and Campione 1994] Brown, A. L. & Campione,
J. C.: "Guided discovery in a community of learners"; In K. McGilly
(Ed.): "Classroom Lessons: Integrating Cognitive Theory and Practice"; Cambridge,
MIT Press (1994), 229-270.
[Chi et al. 1989] Chi, M.T.H., Bassok, M., Lewis,
M., Reimann, P., & Glaser, R.: "Selfexplanations: How students
study and use examples in learning to solve problems"; Cognitive Science,
13 (1989), 145182.
[Clark and Brennan 1991] Clark, H.H. & Brennan,
S.E.: "Grounding in communication"; In L.B. Resnick, J.M. Levine
and S.D. Teasley (Eds.): Perspectives on Socially Shared Cognition; American
Psychological Association (1991), 127-149.
[Collins and Ferguson 1993] Collins, A. & Ferguson,
W.: "Epistemic forms and epistemic games: Structures and strategies
to guide inquiry"; Educational Psychologist, 28, 1 (1993), 2542.
[Craik and Lockhart 1972] Craik, F. I. M., &
Lockhart, R. S.: "Levels of processing: A framework for memory research";
Journal of Verbal Learning and Verbal Behavior, 11 (1972), 671-684.
[Guzdial 1997] Guzdial, M.: "Information ecology
of collaborations in educational settings: Influence of tool"; Proc.
2nd International Conference on Computer Supported Collaborative Learning (CSCL'97), University of Toronto, Toronto (1997), 83-90.
[Guzdial et al. 1997] Guzdial, M., Hmelo, C.,
Hubscher, R., Nagel, K., Newstetter, W., Puntambekar, S., Shabo, A., Turns,
J., & Kolodner, J. L.: "Integrating and guiding collaboration:
Lessons learned in computer-supported collaborative learning research at Georgia Tech"; Proc. 2nd International Conference on Computer Supported Collaborative Learning (CSCL'97), University of Toronto, Toronto (1997), 91-100.
[Hayes 1989] Hayes, J. R.: "The complete problem
solver"; Lawrence Erlbaum Associates, Hillsdale, NJ (1989).
[Johnson and Johnson 1989] Johnson, D. & Johnson,
R.: "Cooperation and competition: Theory and research 0 Interaction
Book Company (1989).
[Johnson et al. 1985] Johnson, R.T., Johnson, D.W.,
& Stanne, M.B.: "Effects of cooperative, competitive, and individualistic
goal structures on computer-assisted instruction"; Journal of Educational Psychology, 77, 6 (1985), 668-677.
[Jonassen et al. 1995] Jonassen, D.H., Davidson,
M., Collins, M., Campbell, J., & Bannan Haag, B.: "Constructivism
and computer-mediated communication in distance education"; Journal of Distance Education, 9, 2 (1995), 7-26.
[Justen et al. 1990] Justen, III, J.E., Waldrop,
P.B., & Adams, II: "Effects of paired versus individual user computerassisted
instruction and type of feedback on student achievement"; Educational
Technology, 30, 7 (1990), 5153.
[Klahr and Robinson 1981] Klahr, D. and Robinson,
M.: "Formal assessment of problem solving and planning processing
in school children"; Cognitive Psychology 13 (1981), 113148.
[Klayman and Ha 1987] Klayman, J. & Y.W. Ha:
"Confirmation, disconfirmation, and information in hypothesis testing";
Psychological Review 94 (1987), 211-228.
[Koedinger 1991] Koedinger, K.: "On the design
of novel notations and actions to facilitate thinking and learning";
Proc. International Conference on the Learning Sciences, AACE, Charlottesville,
VA (1991), 266-273.
[Koschmann 1994] Koschmann, T. D.: "Toward
a theory of computer support for collaborative learning"; The Journal
of the Learning Sciences, 3, 3 (1994), 219-225.
[Kotovsky and Simon 1990] Kotovsky, K. and H. A.
Simon: "What makes some problems really hard: Explorations in the
problem space of difficulty"; Cognitive Psychology 22 (1990), 143183.
[Lajoie and Derry 1993] Lajoie, S. P., & Derry,
S. J. (Eds.): "Computers as cognitive tools"; Lawrence Erlbaum
Associates, Hillsdale, NJ (1993).
[Larkin and Simon 1987] Larkin, J. H. & Simon,
H. A.: "Why a diagram is (sometimes) worth ten thousand words";
Cognitive Science 11, 1 (1987), 65-99.
[Lave and Wenger 1991] Lave, J. & Wenger, E.:
"Situated learning: Legitimate peripheral participation"; Cambridge
University Press, Cambridge (1991).
[Lohse 1997] Lohse, G. L.: "Models of graphical
perception"; In M. Helander, T. K. Landauer, & P. Prabhu (Eds.):
"Handbook of Humancomputer Interaction"; Elsevier Science
B.V, Amsterdam (1997), 107135.
[Novak 1990] Novak, J.: "Concept mapping: A
useful tool for science education"; Journal of Research in Science
Teaching 27, 10 (1990), 937-49.
[Novick and Hmelo 1994] Novick, L.R. & Hmelo,
C.E.: "Transferring symbolic representations across nonisomorphic
problems"; Journal of Experimental Psychology: Learning, Memory, and
Cognition 20, 6 (1994), 1296-1321.
[Okada and Simon 1997] Okada, T. & Simon, H.
A.: "Collaborative discovery in a scientific domain"; Cognitive
Science 21, 2 (1997), 109-146.
[O'Neill and Gomez 1994] O'Neill, D. K., &
Gomez, L. M.: "The collaboratory notebook: A distributed knowledgebuilding
environment for projectenhanced learning"; Proc. EdMedia
'94. AACE, Charlottesville, VA (1994).
[Pea 1994] Pea, R.: "Seeing what we build together:
Distributed multimedia learning environments for transformative communications";
Journal of the Learning Sciences, 3, 3 (1994), 285-299.
[Perkins 1993] Perkins, D.N.: "Personplus:
A distributed view of thinking and learning"; In G. Salomon (Ed.):
"Distributed Cognitions: Psychological and Educational Considerations";
Cambridge University Press. Cambridge (1993), 88111.
[Puntambekar et al. 1997] Puntambekar, S., Nagel,
K., Hübscher, R., Guzdial, M., & Kolodner, J.: "Intragroup
and intergroup: An exploration of learning with complementary collaboration
tools"; Proc. 2 nd International Conference on Computer Supported
Collaborative Learning (CSCL'97), University of Toronto, Toronto (1997),
207214.
[Ranney et al. 1995] Ranney, M., Schank, P., &
Diehl, C.: "Competence versus performance in critical reasoning: Reducing
the gap by using Convince Me"; Psychology Teaching Review 4, 2 (1995).
[Reusser 1993] Reusser, K.: "Tutoring systems
and pedagogical theory: Representational tools for understanding, planning,
and reflection in problem solving"; In S. P. Lajoie & S. J. Derry
(Eds.): "Computers as Cognitive Tools"; Lawrence Erlbaum Associates,
Hillsdale, NJ (1993), 143-177.
[Roschelle 1994] Roschelle, J.: "Designing
for cognitive communication: Epistemic fidelity or mediating collaborative
inquiry?"; The Arachnet Electronic Journal of Virtual Culture (1994).
[Rysavy and Sales 1991] Rysavy, D. M. & Sales,
G. C.: "Cooperative learning in computer based instruction";
Educational Technology Research & Development, 39, 2 (1991), 70
79.
[Salomon 1993] Salomon, G.: "No distribution
without individuals' cognition: A dynamic interactional view"; In
G. Salomon (Ed): "Distributed Cognitions: Psychological and Educational
Considerations"; Cambridge University Press, Cambridge (1993), 111138.
[Scardamalia et al. 1992] Scardamalia, M., Bereiter,
C., Brett, C., Burtis, P.J., Calhoun, C., & Smith Lea, N.: "Educational
applications of a networked communal database"; Interactive Learning
Environments, 2, 1 (1992), 45-71.
[Sharan 1980] Sharan, S.: "Cooperative learning
in small groups: Recent methods and effects on achievement, attitudes,
and ethnic relations"; Review of Educational Research, 50 (1980),
241-272.
[Slavin 1980] Slavin, R. E.: "Cooperative learning:
Theory, research, and practice"; PrenticeHall, Englewood Cliffs,
NJ (1980).
[Smolensky et al. 1987] Smolensky, P., Fox, B.,
King, R., & Lewis, C.: "Computeraided reasoned discourse,
or, how to argue with a computer"; In R. Guindon (Ed.): "Cognitive
Science and Its Applications for HumanComputer Interaction";
Erlbaum, Hillsdale NJ (1987), 109162.
[Snir et al. 1995] Snir, J., Smith, C., & Grosslight,
L.: "Conceptually enhanced simulations: A computer tool for science
teaching"; In D. N. Perkins, J. L. Schwartz, M. M. West, & M.
S. Wiske (Eds.): "Software Goes to School: Teaching for Understanding
with New Technologies"; Oxford University Press, New York (1995),
106-129.
[Stein and Bransford 1979] Stein, B. S., &
Bransford, J. D.: "Constraints on effective elaboration: Effects of
precision and subject generation"; Journal of Verbal Learning and
Verbal Behavior, 18 (1979), 769-777.
[Stenning and Oberlander 1995] Stenning, K. &
Oberlander, J.: "A cognitive theory of graphical and linguistic reasoning:
Logic and implementation"; Cognitive Science, 19, 1 (1995), 97140.
[Stenning and Yule 1997] Stenning, K. & Yule,
P.: "Image and language in human reasoning: A syllogistic illustration";
Cognitive Psychology, 34 (1997), 109-159.
[Suthers 1995] Suthers, D.: "Designing for Internal vs. External Discourse in Groupware for Developing Critical Discussion Skills"; Presented at the CHI'95 Research Symposium, May 6-7, 1995, Denver, CO. Available: http://lilt.ics.hawaii.edu/lilt/papers/chi95learning.ps
[Suthers and Jones 1997] Suthers, D. & Jones,
D.: "An architecture for intelligent collaborative educational systems";
Proc. 8th World Conference on Artificial Intelligence in Education
(AI-ED'97), IOS Press, Kobe (1997), 55-62.
[Suthers et al. 1997] Suthers, D., Toth, E.,
and Weiner, A.: "An integrated approach to implementing collaborative
inquiry in the classroom"; Proc. 2 nd International Conference on
Computer Supported Collaborative Learning (CSCL'97), University of Toronto,
Toronto (1997), 272279.
[Suthers and Weiner 1995] Suthers, D. and Weiner,
A.: "Groupware for developing critical discussion skills"; Proc.
1st International Conference on Computer Supported Collaborative Learning
(CSCL'95), Bloomington, Indiana (1995).
[Toth et al. 2001] Toth, E., Suthers, D., &
Lesgold, A.: "Mapping to know: The effects of evidence maps and reflective
assessment on scientific inquiry skills"; To appear in Science Education
(2001).
[Treisman and Souther 1987] Treisman, A. & Souther, J.: "Search asymmetry: A diagnostic for preattentive processing of separable features"; Journal of Experimental Psychology: General, 114 (1987), 285-310.
[Utgoff 1986] Utgoff, P.: "Shift of bias for
inductive concept learning"; In R. Michalski, J. Carbonell, T. Mitchell
(Eds.): "Machine Learning: An Artitificial Intelligence Approach,
Volume II"; Morgan Kaufmann, Los Altos CA (1986), 107148.
[Wan and Johnson 1994] Wan, D., & Johnson, P.
M.: "Experiences with CLARE: a Computersupported collaborative
learning environment"; International Journal of HumanComputer
Studies 41 (1994), 851879.
[Webb 1989] Webb, N.: "Peer interaction and
learning in small groups"; International Journal of Education Research# 13
(1989), 2140.
[Webb and Palincsar 1996] Webb, N. & Palincsar,
A.: "Group processes in the classroom"; In D. Berlmer & R.
Calfee, (Eds.): "Handbook of Educational Psychology"; Simon &
Schuster Macmillian, New York (1996).
[Zhang 1997] Zhang, J.: "The nature of external
representations in problem solving"; Cognitive Science, 21, 2 (1997),
179-217.