Journal of Universal Computer Science, vol. 9, no. 12 (2003)
DOI:   10.3217/jucs-009-12-1487

A Tool Kit for Measurement of Organisational Learning: Methodological Requirements and an Illustrative Example¹

Anna Mette Fuglseth
(The Norwegian School of Economics and Business Administration, Norway
anna-mette.fuglseth@nhh.no)

Kjell Grønhaug
(The Norwegian School of Economics and Business Administration, Norway
kjell.gronhaug@nhh.no)

Abstract: Few studies attempt to measure organisational learning. Measurement is critical to evaluate relationships between initiatives to support learning and organisational performance. This paper proposes a theory-based tool kit for measurement of organisational learning. By tool kit we mean a collection of methods that each captures elements of the phenomenon 'organisational learning'. The paper clarifies the term and discusses requirements of theories and methods to be included in the tool kit. Some examples of theories with methods are given. Emphasis is placed on Kelly's Personal Construct Theory with the accompanying Role Construct Repertory Test to illustrate methodological requirements.

Keywords: Organisational learning, Measurement, Personal Construct Theory, Role Construct Repertory Test.

Categories: A.m, J.4

1 Introduction

The purpose of this paper is to contribute to the development of a tool kit for measurement of organisational learning. By tool kit we mean a collection of methods to capture and analyse the various aspects of the complex phenomenon 'organisational learning'. In the past two decades there has been an increased interest in the ability of organisations to learn. Technological developments and political changes have led to a liberalisation and globalisation of markets that have sharpened competition. In such environments the ability of organisations to acquire new knowledge is considered increasingly important and has been termed the only sustainable competitive advantage.

A review of the literature on organisational learning reveals, however, that there are very few studies actually measuring such learning. Measurement is critical to evaluate the relationships between various initiatives to support learning and their effects on organisational performance. Without measurement we are not able to assess whether learning is actually taking place, and whether it has any effect on performance.


¹ A short version of this article was presented at I-KNOW '03 (Graz, Austria, July 2-4, 2003).


One reason for the rather limited amount of quantitative empirical research on organisational learning is, according to [Miner and Mezias 1996], that it is "excruciatingly hard to do well". We agree that measurement of organisational learning is difficult, and we do not attempt to give the final answer to this challenge here. Rather, we provide a starting point for development of a theory-based tool kit as one possible way to advance measurement of organisational learning. The proposal of a tool kit is based on the assumption that multiple methods are necessary to capture and analyse a complex phenomenon such as organisational learning and its possible effects on performance, cf. [Wöls, Kirchpal and Ley 2003]. Based on contributions from various research disciplines it is possible to identify elements that are generally considered to be included in the term 'organisational learning'. Which elements are relevant to measure in each project will depend on the purpose of the learning effort. We believe, however, that a collection of theory-based methods to measure such elements may be useful to enhance our understanding of whether, and possibly how, various training activities actually influence learning and organisational performance (see [Stabell 1979] for an analogous discussion of evaluating complex decision processes).

The rest of this paper proceeds as follows: In the next section we clarify the term 'organisational learning' and discuss some dimensions of learning that should be captured in a tool kit for measurement of organisational learning. Then we discuss problems related to measurement of learning and requirements of theories and methods to be included in the tool kit. In the following section we give some examples of theory-based methods to be included in the tool kit. The examples draw on our attempts in several research projects to measure the effects of computerised systems on learning in specific tasks. We believe, however, that our experiences are also useful for measuring the effects of other training activities, not only within organisational learning, but also in related research areas, such as knowledge and skills management; see, for example, [Garavan and McGuire 2001]. Because of space limitations we have selected Kelly's Personal Construct Theory with the accompanying Role Construct Repertory Test to illustrate methodological requirements [Kelly 1991, first published in 1955]. Finally, proposals for further development of the tool kit are given.

2 Organisational learning

The subject of organisational learning has attracted considerable interest over the past two decades, and the divergence of theoretical perspectives is increasing. However, not only is the term ambiguous, but so is the phenomenon itself, including surrounding questions such as who, or what, the learners are, and where and how such learning takes place. In line with the divergence of perspectives there is no agreement on a definition of organisational learning.

The term 'organisational learning' implies that it should denote learning beyond the individual level. There is, however, general agreement in the research literature that it is individuals who learn, but also that individuals are social beings who construct their understanding and learn from social interaction, not least in the workplace.


Organisations are viewed as collectivities made up of individuals who think and act. It is assumed that learning in such collectivities can produce results beyond what could be inferred by studying learning processes in isolated individuals [Argyris and Schön 1996], [Simon 1991]. The management literature has increasingly emphasised the importance of teams to enhance learning, particularly for creating new knowledge, see, for example, [Cohen and Bailey 1997]. It is, however, also stressed that individual and group/team learning should be linked to organisational references that are established to guide behaviours [Wenger 1998]. Examples of references are goals, strategies, policies and routines. Such linking is necessary to understand how individual and group/team learning can lead to concerted activities that improve organisational performance.

In line with the above discussion a tool kit for measurement of organisational learning should include levels of learning, such as individual, group/team, organisational and interorganisational learning. In this paper the discussion will be limited to the first three levels.

Multiple theories of learning exist. A crude distinction can be made between behavioural and cognitive theories. Behavioural theories attempt to explain learning as a result of training or reactions to performance feedback without considering conscious thought. Behavioural approaches study changes in performance, either as improved response to the same stimulus or as adaptation to changes in stimuli. Cognitive learning theories attempt to explain learning by considering changes in individuals' knowledge structures and information processing. Since cognitive learning does not necessarily lead to improvement in performance, a tool kit for measuring organisational learning should include both kinds of learning. Due to space limitations, however, we will focus on cognitive learning because measurement of such learning presents the main challenge.

Cognitive learning relates to mental processes, so is it meaningful to talk of cognitive learning at the group/team and organisational levels? Based on the view of organisations as collectivities of individuals that think and act, we believe that it is meaningful to infer such learning also at the higher levels. It is important, however, to specify the level of measurement correctly, for example that information processing can only be measured at the individual level.

Information processing comprises the individual's detection of data and other stimuli from the environment, interpretation of the data/stimuli, reflection and the coding of information as data to be communicated to others. It is argued that information processing and changes in level of information processing can be observed and analysed at both the individual and the group level, see, for example, [Schroder, Driver and Streufert 1967]. As mentioned above, we believe that it is essential to distinguish between individual and group/team learning to understand how development and transfer of knowledge takes place among the members, i.e. whether and how mutual understanding of terms and arguments develops, see [Wenger 1998] for a good explanation.

At the team/group level it is, therefore, important to study communication processes, i.e. interaction processes particularly involving language. Communication is derived from the Latin word "communicare", which means "to let into", "to give a share of", i.e. to share (part of) one's knowledge with other people. Communication involves at least two persons, some kind of message and a medium for transfer of messages between the persons.


At the organisational level cognitive learning can be said to occur, among other ways, when new knowledge regarding organisational goals, strategies, policies and procedures is transferred among organisational members. Such transfer can take place in "rich" communication processes when, for example, a manager meets the members of a work group to explain a policy change, but it can also take place in interpretation processes, for example if the members of the work group receive an explanation of the policy change in a memo. The above discussion is summarised in the first two columns of [Table 1].

3 Methodological requirements

Measurement implies some linking between an unobservable concept and one or more empirical indicators. Learning, however, is a rather multi-faceted phenomenon. Furthermore, the concept of organisational learning and aspects of the concept can be operationalised and measured in a variety of ways. In order to measure learning in a particular study, therefore, a set of relevant empirical indicators must be explicitly defined and assigned. Which indicators are relevant to measure can be determined only within some kind of theory.

Thus, measurement of learning implies selection of one or more theories that address the relevance of the aspects of the phenomenon one intends to measure. Furthermore, measurement requires methods to guide the capture and analysis of learning in accordance with the selected theory/theories. In other words, measurements must be valid, i.e. capture what they purport to capture, cf. [Cook and Campbell 1979].

Since organisational learning in this paper is related to creation of competitive advantages, the theories selected for the tool kit should indicate a direction of improvement. Theories are usually rather general, i.e. they employ general concepts to be able to subsume a great variety of events, tasks, and domains. Organisational learning, however, takes place in particular contexts. Therefore, to be applicable to a specific context/situation, the general concepts should be adjusted to allow for adequate measurement in that context.

To assess the need for learning in a specific task we have found it useful to make a distinction between structure and content. Content refers to the superordinate concept categories that individuals are expected to use in the interpretation and handling of a specific task. Structure refers to the way individuals combine information perceived from the outside world, as well as internally generated information. General theories are useful for measurement of structural aspects, for example an increase in level of information processing. General theories can also indicate the content categories that employees are expected to use. However, general theories cannot tell us which causes and consequences are relevant for handling the actual task being investigated. For example, according to Kelly's Personal Construct Theory [Kelly 1991] an experienced market analyst is expected to be able to interpret a market event, i.e. identify causes and predict consequences, but the theory cannot tell which causes and consequences are relevant for interpreting and handling a market event in a specific context.


Therefore, as an integral part of analysing data to assess the need for learning, it is often necessary to develop what we have termed a task model, i.e. a task and domain specific evaluation standard, see [Fuglseth and Grønhaug 2002]. In our research we establish a task model by aggregating data from experienced participants handling the "same" task. The assumption underlying this approach is that the probability of capturing all task-relevant concepts increases by using the data from several experienced individuals. The task model is not necessarily an ideal representation of the task. The quality of the task model depends on the knowledge and skill of the participants. In feedback meetings with each participant the validity and completeness of the model are evaluated. Thus, the task model represents the total knowledge of the participants; for further details on the establishment of a task model, see [Fuglseth and Grønhaug 2002].

Furthermore, in order to assess employees' need for learning, to plan training and to measure the effects of training activities, an essential aspect of the data analysis is a diagnosis, adapted from [Stabell 1979]. Diagnosis is the process of finding out how employees' handling of a specific task can be improved. The term also denotes the result of the process. Diagnosis takes place in co-operation between participants and researchers and comprises several feedback meetings; for details, see [Fuglseth and Grønhaug 2002]. It involves description of employees' current handling of the task and comparison of the description with the selected theories and the task model. Furthermore, diagnosis involves identification of differences between the description and the theories/task model and an understanding of why these differences exist. Such understanding then provides guidelines on how to improve knowledge and skills in individually adapted training activities, cf. [Beck 2003]. The description of the current handling of the task forms the basis of assessing whether the learning effort actually leads to improvements towards the task model, cf. [Stefanutti and Albert 2003].

4 A tool kit and an illustrative example

[Table 1] presents some examples of theories and methods that may be useful to include in a tool kit for organisational learning. As mentioned in [Section 2], we focus on cognitive learning in this paper. The first column shows the levels of organisational learning, and in the second column we present the aspects of cognitive learning that were discussed in [Section 2]. Then we give examples of theories that we have found useful for determination of indicators to be measured. The last column presents methods for collection of data according to the theories.

Due to space limitations we will only elaborate on Kelly's Personal Construct Theory with the accompanying Role Construct Repertory Test (Rep Test) [Kelly 1991]. The reason for selecting Kelly's theory with method is that it has most of the qualities we seek for theories with methods to be included in the tool kit. The theory was originally developed for psychotherapeutic purposes, but has since been applied in a variety of studies where the researchers have been interested in measuring how individuals construe part of their environments.


Table 1: Examples of theories and methods for the tool kit

We will illustrate the use of Kelly's theory for organisational learning purposes with an example from a study to capture and diagnose shipping managers' understanding of their information environments, which is assumed to influence the effectiveness of their investment decisions. In our study several methods were applied, and the focus was on building a computerised system to support the managers. Therefore, we did not specifically measure the effects of our attempts to improve the managers' understanding of their information environments using the Rep Test data. We believe, however, that the data capture and diagnosis may still serve as an illustration of the potential of Kelly's Personal Construct Theory in a tool kit for organisational learning. We do not enter into technical details in our analysis, but emphasise how we have used the theory for evaluation of strengths and weaknesses in the managers' evaluation of their information sources.

Kelly sees man as a scientist with the ultimate aim of predicting and controlling events. A central element in the theory is that individuals hold constructs (concepts), through which they perceive and understand realities, and that the constructs are personal. A construct is a way in which a person construes elements (persons, things or events) as being alike and yet different from others [Kelly 1991]. In its minimum context a construct is a way in which two elements are alike and different from a third. For example, to say that two persons are 'gentle' implies at least one person who is 'not gentle'. According to Kelly, the way in which two elements are construed as alike should be the same as the way in which they are different, i.e. constructs are bipolar, for example gentle vs. not gentle, good vs. bad, descriptive vs. evaluative.

Kelly assumes that individuals seek to improve their constructs by increasing the repertory, by altering the constructs to provide better fits, and by subsuming them with super-ordinate constructs or systems. Thus, the theory satisfies the requirement mentioned above of indicating a direction for improvement of construct systems, i.e. an aspect of cognitive learning.

The Rep Test is Kelly's method for eliciting individuals' verbalisations of their constructs according to the theory. The researcher brings a role title list and a sorting list to the interview.


The role titles are supposed to suggest elements that the respondent is acquainted with in the area of interest. The subject is asked to respond to the list by designating elements that fit the role titles. [Table 2] gives examples of the role title list we used for elicitation of the shipping managers' constructs for evaluation of their information sources.

Examples from the Role Title List:
1 A broker you have recently been in contact with
3 An external person you discuss shipping investments with
5 A colleague that helps you with investment analysis
8 A monthly broker report that you use regularly
10 A shipping journal you do not read very often
17 The internal accounts analysis that you have read
21 A computerised investment analysis system you know well

Table 2: Examples of role titles for elicitation of personal constructs

The role title list is expected to give the managers adequate signals to elicit a representative sample of their information sources. Thus, our list contains role titles regarding persons, written sources and computerised systems. Also, within each category there are role titles to indicate finer categories, for example both colleagues and external persons, and persons with different backgrounds.

The sorting list contains sorts of three elements, i.e. the minimum context of a construct according to the theory. The sorts should be designed to elicit constructs along various dimensions. In our case we presented the managers with sorts of three information sources, for example the name of a broker (role 1), the name of a bank manager (role 3), and the name of a colleague (role 5). Other sorts presented the managers with two persons and one written source, two written sources and one computerised system, two computerised systems and one person, etc. For each sort the researcher asks: "In what important way are two of [the elements] alike but different from the third?" The response is recorded, and then the researcher points to the odd element and asks how it is different. The response is recorded as the contrasting pole of the construct. For each sort the researcher also asks: "Are there other important ways in which two of [the elements] are alike but different from the third?" Thus, it is essential that the researcher encourages the respondents to view the elements of each sort from various perspectives in order to elicit as many of their constructs as possible. The result of this stage of the interview is a list of each respondent's constructs for evaluation of the elements elicited. In our case we had a list of each manager's constructs for evaluation of information sources.
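To make the elicitation procedure concrete, the following is a minimal sketch in Python of how triad sorts and the bipolar constructs they elicit could be recorded; the class names and the example elements are our illustrative assumptions, not the study's materials or software.

# Illustrative sketch: recording triad sorts and elicited bipolar constructs.
from dataclasses import dataclass, field

@dataclass
class Construct:
    likeness_pole: str   # how two elements of the sort are alike
    contrast_pole: str   # how the odd element differs

@dataclass
class Sort:
    elements: tuple[str, str, str]             # the minimum context of a construct
    constructs: list[Construct] = field(default_factory=list)

# One sort as in the example above: a broker (role 1), a bank manager (role 3)
# and a colleague (role 5); the elicited construct is purely hypothetical.
sort_1 = Sort(elements=("Broker A", "Bank manager B", "Colleague C"))
sort_1.constructs.append(Construct("discusses markets orally", "provides written analyses"))

interview = [sort_1]                           # all sorts presented to one respondent
construct_list = [c for s in interview for c in s.constructs]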

The final step of the data elicitation procedure is to ask the respondent to evaluate the elements elicited by the role title list along the constructs. The purpose is to gain an understanding of how the respondents construe the part of their environment that is the focus of the interview. In our case we asked each manager to evaluate the information sources along each construct on a five-point scale in order to understand how they evaluate their information sources. [Figure 1] illustrates how the managers evaluated their information sources along their constructs.


Figure 1: Examples of evaluation of elements along constructs

When the constructs and evaluations have been captured, the data must be analysed. There are many ways to analyse data from Rep Test interviews, and there are special software programs to facilitate both elicitation and analysis of such data. In our studies we have found hierarchical cluster analyses useful as a data reduction and exploratory method to analyse how each manager evaluates information sources. Cluster analyses are performed both on information sources (cases) and on constructs (variables). [Figure 2] shows the hierarchical cluster analysis of manager 005's constructs.
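As a hedged illustration of the kind of hierarchical cluster analysis referred to above, the sketch below clusters a made-up construct-by-element rating grid with SciPy; the ratings, labels and linkage settings are our assumptions and are not taken from the study.

# Hedged sketch: hierarchical clustering of Rep Test ratings with SciPy.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows are elements (information sources), columns are constructs,
# rated on the five-point scale used in the interviews (values invented).
ratings = np.array([
    [1, 5, 2, 4],   # e.g. a broker
    [2, 4, 2, 5],   # e.g. a monthly broker report
    [5, 1, 4, 1],   # e.g. the internal accounts analysis
    [4, 2, 5, 2],   # e.g. a computerised investment analysis system
])
construct_labels = ["internal - external", "frequent use - infrequent use",
                    "short-term - long-term", "useful - not useful"]

# Cluster the constructs (variables): each construct is described by its
# column of ratings across the elements.
Z = linkage(ratings.T, method="average", metric="euclidean")
dendrogram(Z, labels=construct_labels)
plt.show()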

Figure 2: Example of hierarchical cluster analysis of information sources


As discussed above, an essential aspect of data analysis related to organisational learning is to find out how the respondents can improve their knowledge and skills, i.e. a diagnosis. Diagnosis involves comparison of the current handling of a task with the selected theories and the task model. Furthermore, diagnosis involves identification of why the differences exist.

In our study the task model was generated based on a categorisation of the constructs elicited from all respondents, i.e. eight experienced managers. We as researchers developed the first version of the task model. The validity and completeness of the task model were then evaluated in feedback meetings with the respondents. When the task model was established, an essential aspect of the diagnosis was to compare the analysis of each manager's data with the task model and discuss differences and similarities.
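One way to operationalise this comparison is as a simple set difference over construct categories; the sketch below is our illustrative reading of that step (the category names are loosely adapted from the discussion of manager 005 below, and the code is not the authors' procedure).

# Illustrative sketch: the task model as the union of construct categories
# elicited from all experienced respondents, and the diagnosis of one manager
# as the categories he or she did not mention. Category names are examples.
respondent_categories = {
    "005": {"internal/external", "use frequency", "usefulness",
            "finance", "time perspective"},
    "007": {"internal/external", "politics", "technology", "competitors",
            "data quality", "usefulness"},
    # ... the remaining experienced managers
}

task_model = set().union(*respondent_categories.values())

def diagnose(manager_id: str) -> set[str]:
    """Task-model categories the given manager did not mention."""
    return task_model - respondent_categories[manager_id]

print(diagnose("005"))   # e.g. {'politics', 'technology', 'competitors', 'data quality'}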

As illustrated in [Figure 2], manager 005's constructs form three main clusters: The first cluster is related to a distinction between the shipping company and the shipping markets ("internal - external"). The second cluster comprises constructs related to his use of information sources ("frequent use - infrequent use"). The third cluster is evaluative, i.e. it expresses the usefulness of information sources, and whether manager 005 finds them interesting. In addition, manager 005 has descriptive constructs related to finance and general economics, and he has constructs related to the time perspective of his information sources. Compared to other managers, manager 005 has several relatively independent clusters, indicating rather complex evaluations of his information sources. The reason, for example, why the construct "short-term - long-term" is not closely related to other clusters is that manager 005 needs and uses information sources with various characteristics, such as internal, financial and historic. Some of these sources mainly provide him with short-term data, whereas others give him long-term data.

Compared to the task model, however, manager 005 did not mention constructs related to politics, technology and competitors. Lack of such constructs indicates that he does not monitor his environments in search of early warning signals. Furthermore, he did not mention constructs that distinguish between information sources for monitoring and analysing markets, and he has no constructs regarding data quality. These differences are related to the fact that he did not perform market analyses himself. He did not use computerised sources and was not acquainted with the possibilities and limitations of such sources. In feedback meetings with manager 005 we pointed out these differences to him, and discussed how he might enhance his understanding of his information environments and improve his use of information sources.

The purpose of the above discussion is to illustrate how Rep Test data may be used to detect strengths and weaknesses of the ways employees handle their jobs and be a starting point for individually adapted training activities to improve knowledge and skills.

Measures have also been established to evaluate development of knowledge structures. Well-developed knowledge structures involve, among other things, that individuals have knowledge along different dimensions (differentiation), and that they are able to discriminate finely among elements (discrimination). Furthermore, well-developed knowledge structures imply that individuals have developed abstract or permeable [Kelly 1991] super-ordinate constructs that allow them to interpret new elements (for discussions of development of knowledge structures, see, for example, [Schroder et al. 1967], and for discussion and examples of measures related to Rep Test data, see, for example, [Stabell 1978]).


In our research we have particularly found the measures presented in [Table 3] useful for evaluation of the development of construct systems.

Structural measures:

                  Shipping managers
                  005    007    205    207    403    405    407    aver.
# constructs       18     29     28     24     24     17     30     24
# evaluative        3      4      4      1      4      0      5      3
% evaluative      17%    14%    14%     4%    17%     0%    17%    12%
# descriptive      15     25     24     23     19     17     25     21
% descriptive     83%    86%    86%    96%    83%   100%    83%    88%
CENTR            0.87   1.00   0.87   0.84   0.93   0.77   1.00   0.90
ARTCL            0.83   0.77   0.63   0.92   0.86   0.80   0.94   0.82

Table 3: Examples of structural measures

The number of constructs elicited is a measure of differentiation [Schroder et al., 1967], [Kelly 1991, p. 163], but it is also essential to consider the percentage of evaluative constructs. Evaluative constructs express a value judgement, for example whether an information source is considered good or bad. Respondents may have many constructs, but if they have a high percentage of evaluative constructs, their judgements may not be very well founded. In an earlier study, for example, a marketing manager had ten evaluative constructs, but only six constructs indicating the reasons for his evaluative judgements of his information sources.

ARTCL and CENTR are measures of discrimination. ARTCL (average construct articulation) reflects the extent to which the respondents have used the five-point scales in their evaluations. It is simply a count of the scale intervals applied divided by the total number of scale intervals. For example, manager 005 had mentioned 18 constructs, giving a total number of 18 x 5 = 90 scale intervals. In his evaluations of the information sources he had applied 75 of these intervals, giving ARTCL = 75/90 = 0.83.

CENTR (average centrality) is a measure of discrimination [Schroder et al. 1967, p. 25] that reflects the extent to which the elements are rated on the constructs. It can, however, also be considered a measure of permeability of the constructs. According to Kelly [Kelly 1991, p. 163], the number of elements to which a construct is applied can be considered evidence of permeability. The measure is a count of the number of elements that are rated on the scales divided by the number of possible ratings. For example, manager 005 mentioned 19 information sources and 18 constructs, giving a total of 19 x 18 = 342 possible ratings. In his evaluations he had used 298 of these ratings, giving CENTR = 298/342 = 0.87.
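Both measures are simple ratios over the rating grid; the sketch below follows the definitions given above, with unrated cells represented as None (the grid layout and representation are our assumptions, not the authors' software).

# Sketch of ARTCL and CENTR as defined in the text. Rows are constructs,
# columns are elements; unrated cells are None.
from typing import Optional

Grid = list[list[Optional[int]]]   # ratings on the five-point scale, or None

def artcl(grid: Grid, scale_points: int = 5) -> float:
    """Scale intervals actually applied divided by the intervals available."""
    used = sum(len({r for r in row if r is not None}) for row in grid)
    return used / (scale_points * len(grid))

def centr(grid: Grid) -> float:
    """Element-construct cells actually rated divided by possible ratings."""
    rated = sum(r is not None for row in grid for r in row)
    return rated / (len(grid) * len(grid[0]))

# For manager 005: 18 constructs and 19 elements give 90 scale intervals and
# 342 possible ratings; 75 intervals used and 298 cells rated yield
# ARTCL = 75/90 = 0.83 and CENTR = 298/342 = 0.87.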

It is important, however, that researchers do not just accept the structural measurements as expressing degrees of high and low development of knowledge structures, but understand the reasons for the results. For example, very high scores on CENTR may also indicate that the respondent has some vague constructs. This can be detected as inconsistent evaluations and followed up in feedback meetings with the respondent. Seemingly inconsistent evaluations may, however, also be due to the researchers and the respondents attaching different meanings to terms.


For example, manager 005 had evaluations along the construct "qualitative - quantitative" that did not seem logical. When we asked him to explain his ratings, it turned out that he primarily used the pole "qualitative" as meaning "of high quality", whereas the meaning he attached to the pole "quantitative" was "not of high quality".

In our study the structural measures were mainly used to support our analysis of differences among the managers' current understanding of their information sources. In studies of organisational learning we believe that the structural measures may also be useful indicators of improvements in construct systems as a result of training. After a period of training, the Rep Test interview can be repeated and differences in measurements of structural characteristics evaluated.
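If the interview is repeated after training, the comparison reduces to differencing the structural measures per respondent; the following minimal sketch illustrates that bookkeeping, with hypothetical post-training values.

# Minimal sketch: comparing structural measures before and after training.
def structural_change(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Change in each structural measure between the two interviews."""
    return {measure: after[measure] - before[measure] for measure in before}

before = {"n_constructs": 18, "ARTCL": 0.83, "CENTR": 0.87}
after = {"n_constructs": 22, "ARTCL": 0.88, "CENTR": 0.90}   # hypothetical post-training values
print(structural_change(before, after))   # positive values indicate improvement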

Thus, Kelly's Rep Test is based on a well-founded psychological theory and provides clear guidelines for how to elicit an individual's constructs within a specific domain according to the theory. Furthermore, several measures have been developed to evaluate changes in construct systems. We therefore believe that the method may be useful to researchers and knowledge/skill managers to understand why some individuals perform better than others, establish appropriate training activities and measure possible effects of the activities on individual learning.

5 Concluding comments

In this paper we have presented some examples of cognitive theories with methods to be included in a tool kit for organisational learning, see [Table 1]. We have illustrated with Kelly's Personal Construct Theory and the accompanying Role Construct Repertory Test [Kelly 1991] how theories with methods may be used for measurement of organisational learning in order to improve knowledge and skills.

As mentioned in [Section 2], the tool kit should be extended to include theories with methods for measurements of behavioural learning. Such measurements should comprise not only task performance at the individual and group/team level, but also effects of task performance on organisational goal attainment. The aspects regarding cognitive learning can most likely also be extended. For example, different methods should probably be used to capture cognitive learning in simple and complex tasks.

As illustrated in [Section 4], the tool kit should also include methods not only for data capture, but also for analyses of the data captured. Furthermore, the tool kit should be developed with explanations and illustrations of how to apply the theories/methods for capture and analysis of learning. Evaluations of strengths and weaknesses of each method should also be provided.

We believe that a theory-based tool kit as proposed in this paper will help researchers and knowledge/skill managers to identify and apply a set of relevant methods for measurement of their particular activities to enhance organisational learning. In a wider perspective we believe that such a tool kit will also improve the understanding of whether, possibly how, and under which conditions organisational learning can create competitive advantages.


References

[Argyris and Schön 1996] Argyris, C., Schön, D. A.: "Organizational Learning II"; Addison-Wesley / Reading, Mass (1996).

[Beck 2003] Beck, S.: "Skill and Competence Management as a Base of an Integrated Personnel Development (IPD) - A Pilot Project in the Putzmeister, Inc./Germany"; Journal of Universal Computer Science, vol.9, No.12, 2003, 1381-1387.

[Bullen and Rockart 1986] Bullen, C. V., Rockart, J. F.: "A Primer on Critical Success Factors", in J. F. Rockart and C. V. Bullen (eds), "The Rise of Managerial Computing", Dow Jones-Irwin / Homewood, Ill. (1986), 383-423.

[Cohen and Bailey 1997] Cohen, S. G., Bailey, D. E.: "What makes teams work: Group effectiveness research from the shop floor to the executive suite"; Journal of Management, 23, 3 (1997), 239-290.

[Cook and Campbell 1979] Cook, T. D., Campbell, D. T.: "Quasi-Experimentation: Design & Analysis Issues for Field Settings"; Houghton Mifflin / Boston (1979).

[Fuglseth and Grønhaug 2002] Fuglseth, A. M., Grønhaug, K.: "Theory-driven Construction and Analysis of Cause Maps"; International Journal of Information Management, 22 (2002), 357-376.

[Garavan and McGuire 2001] Garavan, T. N., McGuire, D.: "Competencies and workplace learning: some reflections on the rhetoric and the reality"; Journal of Workplace Learning, 13, 4 (2001), 144-163.

[Kelly 1991] Kelly, G. A.: "The Psychology of Personal Constructs Vol. I"; Routledge / London (1991).

[Miner and Mezias 1996] Miner, A. S., Mezias, S. J.: "Ugly Duckling No More: Past and Futures of Organizational Learning Research"; Organization Science, 7, 1 (1996), 88-99.

[Schroder, Driver and Streufert 1967] Schroder, H. M., Driver, M. J., Streufert, S.: "Human Information Processing: Individuals and Group Functioning in Complex Social Situations"; Holt, Rinehart and Winston / New York (1967).

[Simon 1991] Simon, H. A.: "Bounded Rationality and Organizational Learning"; Organization Science, 2, 1 (1991), 125-133.

[Stabell 1978] Stabell, C. B.: "Integrative Complexity of Information Environment Perception and Information Use"; Organizational Behavior and Human Performance, 22 (1978), 116-142.

[Stabell 1979] Stabell, C. B.: "Decision Research: Description and Diagnosis of Decision Making in Organizations"; Working Paper No. 79.006, Institute for Information Systems Research, Norwegian School of Economics and Business Administration, Bergen, Norway (1979).

[Stefanutti and Albert 2003] Stefanutti, L., Albert, D.: "Skill Assessment in Problem Solving and Simulated Learning Environments"; Journal of Universal Computer Science, vol.9, No.12, 2003, 1455-1468.

[Wenger 1998] Wenger, E.: "Communities of practice: Learning, meaning, and identity"; Cambridge University Press / Cambridge (1998).


[Wöls, Kirchpal and Ley 2003] Wöls, K., Kirchpal, S., Ley, T.: "Skills Management - an "all-purpose" Tool?"; Proc. I-KNOW '03, 3rd International Conference on Knowledge Management, J.UCS, Graz (2003), 138-143.
