these systems, at acceptable and predictable cost, time, and risk. Forecast growth in complexity threatens to accelerate these trends.
There is evidence that these challenges are not well understood, much less solved, for increasingly complex families of hard technology systems. Thanks to competitive pressure and enabling technologies, we are increasingly more able to create complex systems than to completely understand them. A rising human activity is the reverse engineering of hard technology systems that were created only recently, attempting to recover understanding of the requirements these systems are expected to meet, and of their designs. [Rugaber et al 1999]
These challenges are greater in systems that are organizations of people, as such systems are more complex and even less well defined than hard technology systems. A portion of the work described here is based upon progress in understanding families of complex systems of the "simpler" hard technology type, followed by adapting these techniques to the more complex made-of-man type. For readers interested only in hard technology systems, it is worth remembering that many organizations find their ability to engineer hard technology successfully is limited by their institutional processes of system engineering. So, even for the hardcore technologist, there is no escaping from the importance of understanding institutional processes.
Emerging understanding of principles of organization of process-oriented institutions is (at best) recent. No less an authority on organizational change than Michael Hammer, the noted business reengineering pioneer, writes in 1999 of organizing around the key enterprise processes, the difficult tradeoffs between process standardization and diversity, the challenges of organizing for ongoing change, and the evolving role of management in such an organization [Hammer and Stanton 1999]. This suggests the nature of reengineering is still being explored.
Modeling a human organization and acting on that model is the task of management, whether performed by a traditional line manager or by self-managing teams and self-directed individuals, coached by new-age managers.
Management's pioneering scholar Peter Drucker's Management By Objectives [Drucker 1954] shows that the concept of system requirements applies to institutional systems. The specification of requirements and design of both hard technology and institutional systems is an economic and market-driven activity. Establishing accountability for system performance is not possible without a method of accounting. The work summarized in this article provides a system of "bookkeeping" for families of configurable intelligent systems, based upon system engineering and economic principles.
Many historical efforts at improved organizational performance have included aggressive investment in information technology. Information systems authority Paul Strassmann writes of the lack of correlation in the returns gained from these investments, implying a gap in understanding by those investing in, implementing, operating, or managing these systems. [Strassmann 1990], [Strassmann 1994] More recent explosive growth of Internet utilization illustrates Drucker's [Drucker 1999] thesis that we proliferate data more readily than we distill meaningful information. Combating the modern-day "Tower of Babel", the embedded intelligence methodology described here deals with the underlying semantics, or meaning, of information about systems. [Schindel 1997]
Drucker reminds us that before 1900, hardly anyone other than soldiers, clergy, and teachers worked for organized institutions; that four-fifths of the U.S. population performed manual labor in farming, domestic service, and blue-collar work; and that only today has the largest single occupational group become that of "knowledge workers", another Drucker category [Drucker 1999]. The appearance of commercial enterprise management in the 1800s and its growth in the 1900s followed shifts in patterns of demand, productivity, communication, and ownership. The emergence of professional management shifted coordination of production and distribution from the market to explicit management processes, as chronicled by Alfred Chandler [Chandler 1977]. New patterns involving the emergence of knowledge work, creation of value, and management of institutional performance are again shifting the work and responsibility of management [Drucker 1999]. The structure of the economic system is itself projected by Drucker to shift, and he marks improving the productivity of knowledge workers as the greatest management challenge of the twenty-first century.
A further indication of the inconsistency in prevailing system views of human organizations is that we often demonize "Taylorism" as the historic archetype of wrong-headedness in seeking to standardize work of individuals and organizations, while simultaneously glorifying "best practices" as valued patterns we must install and vigorously emulate to optimize our performance. [Kanigel 1997], [O'Dell and Grayson 1998] Drucker, on the other hand, views Taylor as the single most influential American, inventor of the American philosophy that has most profoundly swept the world.
A unifying theme in the work of Taylor, Drucker, Strassmann, and system engineers is the importance of sound (economic, scientific, or engineering) models and measures of system performance. Today's environment still includes emotion-laden issues, non-scientific technical jargon, and gaps in understanding of both human-made technology systems and complex process-oriented institutions.
3 System Engineering Methodology and Tools
In Section 3, we review the use of system engineering methodology and tools to improve complex system outcomes. Improved outcomes are essential to the continuous improvement of hard technology systems and institutional processes in all types of institutions.
3.1 System Engineering Methodology
The commercial enterprise of one of the authors performs system engineering for industrial clients at three progressive levels:
3.1.1 Level 1 SE Methodology: System Engineering of Single Systems
The problems of system engineering have been widely studied for fifty years, stimulated by the generation of complex human-made systems during World War II [Bate et al 1995]. This is not to say that system engineering is mastered in practice by most organizations. Continuous improvement of team-based system engineering processes in large organizations is a subject of great attention, and one of the applications of continuous improvement addressed by the methodologies of this article.
In Level 1 practices, system engineering methodology is structured for later extension in Levels 2 and 3. Supporting the SECMM reference model [Bate et al 1995], it particularly deals with the issues of interactions between systems:
[Figure 2] provides the context for these issues. Requirements statements apply at the external boundary of the Subject System, indicating what it must be or do as seen from its environment. Design statements apply to the system interior, indicating how internal components and relationships are arranged to support the requirements.
Figure 2: Subject System Interacting with Systems in Its Environment
Although designs must be traceable to the requirements they support, design steps do not always follow requirements steps in time, nor should they. [Figure 3] illustrates the iterative, "no beginning" nature of the Requirements-Design process.
Figure 3: The Requirements-Design Iteration
There are many historic examples in which "requirements" have been driven by "designs", in the sense that new technologies or design ideas have permitted the inclusion or recognition of "requirements" that would otherwise not have been practical to include in system goals. The methodology recognizes this relationship, while supporting the real need that all design aspects be traceable to the requirements that they were intended to support.
The required system-environment interaction functions in this methodology are formally modeled without reliance on natural language as the substrate of specification, using RNA Transaction Models™. These functional requirements are furthermore placed into class hierarchies of types of functions and decomposed into containment hierarchies of subfunctions.
Figure 4: System Interactions
Level 1 practices include establishment of formal interfaces. The systems may be mechanical, electronic, chemical, biological, software, human enterprises, etc. Interactions at interfaces may involve physical variables or symbolic information.
The "when" portion of the Level 1 methodology groups functions into environmental Situations, which are the system engineering analogue of software use cases [Jacobson 1992]. These are treated formally as states, and modeled in finite state machines or continuous trajectories. The states are similarly placed into class and containment (sub-state) hierarchies.
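The treatment of Situations as formal states can be sketched in code. The following minimal finite state machine uses hypothetical situation and event names for an engine-like system; the specific states, events, and transitions are illustrative assumptions, not taken from the methodology itself.

```python
from dataclasses import dataclass, field

@dataclass
class SituationMachine:
    """A Situation is modeled as a formal state; events move the system between states."""
    state: str
    transitions: dict = field(default_factory=dict)  # (state, event) -> next state

    def on_event(self, event: str) -> str:
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# Hypothetical example: a system grouped into three environmental Situations.
machine = SituationMachine(
    state="Startup",
    transitions={
        ("Startup", "warm"): "NormalOperation",
        ("NormalOperation", "overheat"): "FaultRecovery",
        ("FaultRecovery", "cleared"): "NormalOperation",
    },
)
machine.on_event("warm")       # Startup -> NormalOperation
machine.on_event("overheat")   # NormalOperation -> FaultRecovery
print(machine.state)
```

Because each Situation is an explicit state, requirements can be attached per-Situation, and the states themselves can then be organized into the class and sub-state hierarchies the methodology describes.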
Tool support for Level 1 practices includes the requirements, design, validation, documentation, and change management process, including integration with other engineering tools. Traditional automated tracing of decomposition and rationale relationships is supplemented by tracing of inheritance of patterns of requirements and designs. This establishes an environment for system engineering that can simultaneously support document-driven and database-driven team processes.
Some aspects of the Level 1 system engineering practices could be viewed as object-oriented analysis of general (non-software) systems. In fact, UML notation [Booch, Rumbaugh, and Jacobson 1999] is utilized, borrowing again from the software world.
3.1.2 Level 2 SE Methodology: System Engineering of Families of Configurable Systems
The Level 2 practices add to Level 1 practices, for engineering families of configurable systems with common content. These practices recognize the historical problems encountered by different practitioners:
All of these practitioners are faced with natural pressures that erode standardization with variation, reducing economic leverage. These pressures are powerful, and not dismissed by force of standardization alone. Gaining and maintaining leverage in an environment of multiple product lines, organizational divisions, or operational systems requires understanding of these forces, and methodology that allows specialization without sacrificing standardization.
The Level 2 Practices use class hierarchies that allow development and evolution of families of configurable systems, shown in [Figure 5]. Whether these systems are hard technology, institutional process, or both, they consist of multiple specialized systems organized into classes, product lines, or common categories. These higher categories are in turn arranged by class similarity in the same way, progressing eventually to the core technologies, processes, or competencies of the organization, at the top of [Figure 5]. While this approach is often invoked informally, this methodology converts it to a quantitative model with system bookkeeping, resulting in a discipline for developing, auditing, and evolving families of systems.
This process does not force the amount of common content in systems, but allows its optimization. Based upon economic judgment, the optimal arrangement in one business process or product line may be a high degree of commonality, with only limited variation allowed in specializing for local needs. In another process or product, a higher degree of local specialization may be optimal, with only limited common content. Level 2 Practices provide a means of having this argument objectively, and of keeping track of decisions and implementation as a quantitative discipline. This approach attacks the problem of standardization versus diversity reported for institutional processes by Hammer [Hammer and Stanton 1999].
Scientific bookkeeping allows calculation of Return on Variation™ when we model variations on common themes, to understand the benefit and cost of specialization for individual markets, customers, organizational units, etc. It also allows the application of Gestalt Rules™ supporting the use of patterns in system engineering.
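To make the bookkeeping idea concrete, the following toy calculation measures the common content shared across a hypothetical family of engine variants, each represented as a set of component identifiers. This is an illustration of the general measurement idea only, not the proprietary Return on Variation™ computation; all variant and component names are invented.

```python
# Each product variant is represented as a set of component IDs (hypothetical data).
variants = {
    "EngineA": {"block", "crank", "ecu_v1", "turbo"},
    "EngineB": {"block", "crank", "ecu_v2"},
    "EngineC": {"block", "crank", "ecu_v1"},
}

# Common content: components shared by every variant in the family.
common = set.intersection(*variants.values())
# Total content: every distinct component appearing anywhere in the family.
total = set.union(*variants.values())
# A simple commonality metric: fraction of total content shared by all variants.
commonality = len(common) / len(total)
print(f"common parts: {sorted(common)}, commonality {commonality:.0%}")
```

With a quantitative measure like this in hand, the cost of each added variation can be weighed against the revenue or fit it buys, which is the kind of objective argument the Level 2 practices are intended to support.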
[Figure 6] illustrates the idea of class hierarchy for Engines as systems. [Figure 7] illustrates those same Engines in containment hierarchy. Class and containment are organizing relationships for systems, which are different from interaction relationships [Figure 4] for the same systems. [Figure 8] illustrates how all of these organizing and interaction relationships are conceptually combined by the Systematica methodology and supporting tools.
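The distinction between the two organizing relationships can be sketched in code: inheritance expresses the class ("is-a") hierarchy of [Figure 6], while composition expresses the containment ("has-a") hierarchy of [Figure 7]. The class names below are hypothetical stand-ins for the systems in those figures.

```python
# Class hierarchy: specialization ("a DieselEngine is an Engine").
class Engine: ...
class DieselEngine(Engine): ...
class TurboDieselEngine(DieselEngine): ...

# Containment hierarchy: decomposition ("a Vehicle contains an Engine").
class FuelSystem: ...

class Vehicle:
    def __init__(self):
        self.engine = TurboDieselEngine()  # containment relationship
        self.fuel_system = FuelSystem()    # another contained subsystem

v = Vehicle()
print(isinstance(v.engine, Engine))  # the contained part also participates in the class hierarchy
```

The same system can therefore sit in both hierarchies at once, which is exactly the combination [Figure 8] depicts, with interaction relationships layered on top.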
Use of patterns to simplify and understand complex systems has a long heritage:
System patterns become most sophisticated when they are patterns of behavior. These reduce complexity through behavioral abstractions that recur across different systems. In this methodology, we classify dynamic behaviors into class and containment hierarchies with the same ease as classification of static objects. The idea of classifying dynamic processes into formal class hierarchies may seem different from classifying static patterns such as shapes, natural objects, people, etc. However, experience shows that we can describe both in the same way. In fact, the study of interacting systems in physics and the cognitive sciences shows that objects owe their origin to processes, and that classification of patterns of objects in hierarchies is in fact classification of patterns of processes. This is related to the ideas of polymorphism and classes of interfaces in object-oriented software.
Figure 5: Families of Systems
Figure 6: Class Hierarchy
Figure 7: Containment Hierarchy
Figure 8: Combined Class and Containment Hierarchy, and Interaction
Figure 9: Class Patterns of Dynamic Behavior
Tool support for Level 2 Practices includes the modeling of class hierarchies of systems, interfaces, interaction functions, and states, measurement of common content and variation, and audit of pattern conformance in support of a discipline-oriented approach.
3.1.3 Level 3 SE Methodology: System Engineering of Intelligent Systems
3.1.3.1 Management as Embedded Intelligence
A uniform approach is used for the modeling of embedded intelligence, whether automated agents (e.g., programs, controllers, regulators, operations systems) or expert human agents. In commercial, military, and institutional systems, this reflects today's reality. In the airplane cockpit, network control center, truck cab, fleet operations center, machine shop, facility control room, construction site, factory, or hazardous environment site, it is common to find intelligent sensing, analysis, and control shared between embedded human and automated agents. [Stiles and Glickstein 1991], [Glickstein 1984]
In some environments, human agents are dominant and limited automated assisting agents appear. In other environments, automated agents are dominant and human agents intervene only for exceptional tasks requiring high expertise. Many examples exist of fully automated agent populations. Other examples exist in which only human agents are visible. The trend over time is that, in a given environment, human agents' previous tasks are replaced as technologists and process experts learn how to automate them. The century-old battleground of Taylorism's origin was the individual machinist position in a machine shop. Today, that individual position is often 100% automated, with no operator in sight: either no human at all, or a human very distant, in a slower-response supervisory control loop. As more sophisticated automated agents take over former tasks of human agents, new tasks demanding higher skills are often developed and assigned to human agents.
In this methodology, we define "intelligence" of a system as the degree of its functional adaptability to operate effectively within different environmental states. This approach avoids issues of technology, and specifically avoids reference to computers, software, or people. The centrifugal ball governor on Watt's steam engine made that system more capable of delivering constant speed drive to varying external loads, therefore more intelligent in this perspective, without use of human or electronic technology.
Figure 10: Interpretation of Intelligence
The "IQ Test" for Systems 1 and 2 of [Figure 10] is to provide varying environmental states, then decide which performs better using objective functions of performance.
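A minimal sketch of such an "IQ Test" follows, assuming two hypothetical speed governors and an objective function that rewards holding a 100 rpm target across varying external load states. The governors, the load values, and the scoring function are all invented for illustration; only the comparison structure comes from the text.

```python
import statistics

def governor_fixed(load):
    """System 1: no adaptation -- output speed sags steeply with load."""
    return 100 - 8 * load

def governor_adaptive(load):
    """System 2: adapts to hold output speed nearly constant under load."""
    return 100 - 1 * load

def score(system, environments):
    """Objective performance function: penalize mean deviation from the 100 rpm target."""
    return -statistics.mean(abs(system(e) - 100) for e in environments)

# The "IQ Test": expose both systems to the same varying environmental states.
loads = [0, 1, 2, 3, 4]
print(score(governor_fixed, loads), score(governor_adaptive, loads))
```

The adaptive governor scores higher across the varying states, so under the article's definition it is the more intelligent system, with no reference to what technology implements the adaptation.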
We use a definition of management consistent for institutional human management and embedded technological control systems. We define "management" of a system as the modeling, monitoring, or control of the performance, configuration, fault, security, or accounting aspects of that system. This approach borrows and abstracts from narrower use of the System Management Functional Areas (SMFAs) originating in the ISO model of computer and network system management [ISO 1991].
Our definition of management is consistent with the definitions of human management of institutions found in Drucker [Drucker 1999]. It is related to the definition of management by Strassmann [Strassmann 1990]. Strassmann's definition is concerned with institutional-level management (supported by the use of institution-level financial data to measure Return on Management™). He excludes from his definition of management the embedded systems contained in institutional production and service operations directly serving customers. Our definition treats embedded intelligence at all these levels as management (supporting our use of a universal model of management), but can distinguish institutional-level management to break out a management component consistent with Strassmann's. Our approach and Strassmann's are consistent in viewing all management as information management.
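Under this definition, a management agent can be sketched as one interface spanning the five aspects (performance, configuration, fault, security, accounting), regardless of whether the agent is human or automated. The class and method names below are assumptions for illustration, not part of the ISO model or the methodology.

```python
from enum import Enum

class Aspect(Enum):
    """The five management aspects named in the article's definition."""
    PERFORMANCE = "performance"
    CONFIGURATION = "configuration"
    FAULT = "fault"
    SECURITY = "security"
    ACCOUNTING = "accounting"

class Manager:
    """One monitoring interface, whether the agent behind it is human or automated."""
    def __init__(self):
        self.log = []

    def observe(self, aspect: Aspect, reading: dict):
        # Monitoring: record a reading against one of the five aspects.
        self.log.append((aspect, reading))

    def report(self, aspect: Aspect):
        # Modeling support: retrieve everything observed under one aspect.
        return [r for a, r in self.log if a is aspect]

m = Manager()
m.observe(Aspect.PERFORMANCE, {"throughput": 120})
m.observe(Aspect.FAULT, {"code": "E42"})
print(m.report(Aspect.FAULT))
```

The point of the uniform interface is that the same bookkeeping applies whether the observer is an embedded controller or an institutional manager, which is what lets the methodology treat both as instances of one management model.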
Using the Level 2 pattern discipline, in Level 3 we find patterns of management system intelligence emerge from this approach. These originate from the management model of [Figure 11]. In this model:
Tool support for Level 3 includes support of patterns of intelligence, human agent and automated agent process assignments, standardized and specialized processes in general, and modeling and implementation of system engineering process standards.
Practiced across multiple technical and institutional environments, this has led to a family of reusable patterns of intelligence that apply well across all the intelligent system environments we have encountered.
Figure 11: Embedded Intelligence Summary Model
3.1.3.2 Relation to Modeling Conscious Systems
Modeling of conscious systems in nature has moved in recent years from a speculative or philosophical activity [Dennett 1996] [Churchland and Sejnowski 1992] to a pursuit in the biological and cognitive sciences. [Edelman 1988] [Edelman 1987] [Edelman 1989] [Crick 1994] [Damasio 1999] The Level 3 system engineering practices referenced here borrow perspectives from those endeavors.
Without suggesting that consciousness, if defined and present, would be of the same nature in different kinds of systems, our interest is in systems exhibiting aspects of conscious behavior.
For example, robust institutions exhibit a shared explicit awareness of the goals, status, direction, environmental threats and opportunities, and spirit of the organization. These institutions are better prepared to engage in continuous or ongoing change. [Senge 1990] Likewise, a higher value is placed on technical products that adapt to differing environmental or application missions without being reengineered. These ideas suggest that situationally aware systems are of interest, and the situational, state-based model of performance provided by the Level 1 practices is enhanced by the Level 3 model of intelligent management.
From the cognitive sciences, we have borrowed ideas such as attention modeling to expand the detailed model behind [Figure 11]. The patterns are consistent with Damasio's model of being aware of awareness in biological consciousness [Damasio 1999].
3.2 System Engineering Tools
Methodologies and tools never stand alone, but interact with each other. Frederick Taylor's holistic vision included not just the (rigid) machine operator procedures for which he became widely known. He simultaneously developed equally revolutionary new tools and machines, high-speed machinable steel alloys, employee compensation methods, and even a financial accounting infrastructure.
As shown in [Figure 12], all of these combine in an interacting system fabric in which each is dependent upon the other. They all must evolve together, and the result is only as good as the weakest link.
The resulting combined system is itself subject to system engineering. This approach has been used in the development of the Systematica methodology and tools summarized in this article. It results in a situation-based model of system engineering tasks, appropriate tools, assignment of tasks to system engineers, managers, and automated agents, and the specialization of the model to local process needs.
Figure 12: Methodologies, Tools, and Tasks Move Together
The methodology does not depend upon use of a specific tool, and can use a variety of commercial or customized tools. Systematica tools have been constructed as a part of the work summarized here. These tools actually constitute an infrastructure technology that automatically generates specialized tools fit to the local engineering process needs of individual enterprises. These tools are model-driven, meaning that instead of writing and modifying programs, the keepers of these tools adapt them to local needs by entering the models of the organization's desired processes and systems. This explicit modeling, instead of encoding in implicit program algorithms, also opens and enhances the process for continuous improvement.
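The model-driven idea can be sketched as follows: the generated tool's behavior is derived from a declarative process model, so adapting the tool means editing the model rather than the program. The process names and field names here are hypothetical, and this is a sketch of the general technique, not of the Systematica implementation.

```python
# A declarative process model: each step declares its inputs and outputs.
# The "keepers of the tools" would edit this data, not the code below.
process_model = {
    "RequirementsReview": {"inputs": ["requirements"], "outputs": ["issues"]},
    "DesignTrace": {"inputs": ["requirements", "design"], "outputs": ["trace_report"]},
}

def generate_tool(model):
    """Generate a specialized tool whose behavior is driven entirely by the model."""
    def tool(step, available):
        spec = model[step]
        missing = [i for i in spec["inputs"] if i not in available]
        if missing:
            raise ValueError(f"{step} missing inputs: {missing}")
        # Placeholder work products, labeled by the step that produced them.
        return {out: f"{step}:{out}" for out in spec["outputs"]}
    return tool

tool = generate_tool(process_model)
print(tool("DesignTrace", {"requirements", "design"}))
```

Because the process definition lives in explicit data, it can itself be reviewed, audited, and continuously improved, which is the advantage the text attributes to explicit modeling over implicit program algorithms.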
The technology components of the Systematica tool infrastructure fall into several categories, including an adaptable interface technology that does not require programming for basic specialization. The interface technology includes embedding of technical system graphical models consistent with the methodology.
3.3 Applications of the System Engineering Methodology
Systematica methodology has been applied to both hard technology and institutional process system improvement. The following sections summarize domains of application experience with both types of systems.
3.3.1 Application to Hard Technology Systems
Over the last thirty years, the system engineering concepts described have been developed and applied in a number of settings involving real-world hard technology systems, including material processors, track- and wheel-based traction and steering systems, and machine applications for construction, excavation and underground boring, transportation, mining, agriculture, and power generation. These settings have included some of the largest-scale construction, mining, and power systems on the globe. The period has seen the invasion of these previously mechanical systems by embedded electronic and software control system technologies, creating new levels of complexity of behavior, configuration, engineering, and service. Competition and governmental regulation of air quality and other factors have combined with technology to drive and enable new levels of complexity in these systems. Following patterns of other industries, these systems increasingly embed intelligence managing the performance, configuration, security, accounting, and faults of the systems. Purely mechanical system efforts have included concerns with greater modularity of mechanical assemblies and reduced parts counts with higher common content. These industries are major drivers of trends to embed mobile, distributed, networked systems.
3.3.2 Application To Institutional Systems: Engineering of Freedom and Constraint
The following sections first summarize domains of institutional process experience with the methodology, then elaborate on the challenges special to human processes.
3.3.2.1 Institutional Applications
During the last twenty years, the system engineering concepts described have been applied to a number of institutional system settings, including systems that model, control, and monitor the core institutional processes of managing carrier networks. These have included customer service provisioning, from order entry through service cut-over and billing; network engineering and equipment provisioning processes; customer trouble reporting bureaus, testing, and service dispatch processes; network equipment fault analysis and response processes; and network performance tracking, analysis, and response processes.
3.3.2.2 Special Considerations for Engineering Human Processes
The combined methodology addresses the need to be sensitive to the complexity of human processes in comparison to hard technology systems, and to recognize the challenges inherent in modeling, much less changing, work processes. The century-long stormy history of Taylor's Scientific Management offers lessons to which we have paid heed. [Kanigel 1997] Addressing these issues is a key aspect of this approach, including:
Improvement involves learning, whether in an academic institution or during ongoing professional development. Continuous institutional improvement includes improvement to the process and content of learning. [Senge 1990] This section summarizes the model of professional development process improvement within a commercial enterprise, and [Section 4] discusses the educational institution case.
The application of embedded management systems in engines, terrestrial and space vehicles, airplanes, factory machines, pipelines, networks, and other man-made systems is often focused on improving the performance or behavior of these engineered systems. Hard numbers and objective criteria are used to measure this improvement, in miles per gallon, reduction of waste, reliability and availability, response time, reduced emission levels, costs, production per day, or other performance attributes.
When the same ideas are applied to embed management systems into organizations of people, equally clear crisp criteria, traceable from original intentions through implementation, may appear lacking.
Before Frederick Taylor developed Scientific Management of (initially) machine shops, the shop workforce was made up of highly experienced expert workmen, each of whom had his own ideas about the best way to produce parts, apply tools, and interact with the organization. Taylor's approach of detailed, codified production procedures was fought by a resistant workforce that felt invaded by rote directions and productivity pressure. Even while it was being widely adopted, Taylor's "one best way" approach was called into question by various detractors, and a century later "Taylorism" has become a term synonymous with overly rigid management philosophy. However, looking beyond the charges of rigidity and conformism to the underlying scientific basis of this work, Drucker considers Taylor to have made the greatest contribution of any American to the worldwide development of production. [Drucker 1999]
Ironically, today's machine tools and shop operations have carried this model much farther than Taylor did, encoding tool selection, tool speeds, machining trajectories, and sequences into fully automated machine tool processes with little or no human intervention. The result was a new and elevated generation of human tasks in programming the production process and in integrating design and production. Was Taylor wrong on all counts, or did his critics miss the underlying point of objective system modeling?
The landscape has changed since Taylor's Scientific Management became twentieth century Industrial Engineering. Today, knowledge workers (another Drucker concept) constitute 40% of the U.S. workforce, and are becoming the largest component of the workforces of each developed nation. The capital assets of the corresponding businesses have become these knowledge workers, not the machine tools of Taylor's time. [Drucker 1999]
When business America jumped into computerized mechanization of enterprise information processes, the promise of higher corporate productivity provided the rationale for investment of billions of dollars in computer hardware, software, and operations. But Strassmann's studies indicate that the return was poorly correlated with IT investment across many enterprises. The very concept of Return on Management™ taught by Strassmann illustrates that just what such a return might mean apparently escaped many business leaders and implementers somewhere between intention and implementation. [Strassmann 1990]
In [Section 4], we review the use of "portfolios" as a part of assessment of processes for improving the performance of human agents over time (learning). In preparation, we review the system modeling of the human portion of these systems in commercial organizations of knowledge workers. This model is used to support a common perspective of the complex human tasks to be performed, the processes used to improve that performance, and the assessment of those improvement programs to determine their effectiveness.
[Figure 13] summarizes the model in simplified form.
This model has been chosen to parallel the system models used for the human-made hard technology system components that work in parallel with the human agents. It describes aspects of performance learning and improvement that can apply within both educational institutions (where the task performers and learners may be students) and commercial enterprises or other institutions (where the task performers and learners may be staff members improving their performance over time).
In this model, the following definitions apply:
Task Performer: A human agent, intended to perform complex tasks to a given level of performance. This agent may be, for example, an engineer, scientist, writer, or other professional. This agent typically interacts with other agents in this performance.
Figure 13: Informal SE Model of Performance, Assessment, Improvement Environment
Managed System: The system upon which the human agent's tasks are performed. For example, a Managed System may be a natural or man-made system that is being analyzed by an engineer or scientist, designed or synthesized by an engineer, composed or rendered by a writer or an artist, or operated by an operator. The Managed System provides most of the semantics of (domain knowledge about) the tasks to be performed by the Task Performer.
Performance Situation Type: A situation is a state of the environment of the Task Performer. The Performance Situation Type is the system engineering equivalent of a Use Case in the software world, modeled in advance of the performance of Tasks. The content of these situation models includes a description of performance objectives and indicators of performance. These models are developed from the mission and vision of the organization in which the Task Performer operates, based on inputs from the organization's stakeholders.
Performance Situation Instance: Actual occurrences of Performance Situations, in which tasks are performed by the Task Performer. These are recorded by ePortfolio when the Task Performer considers them examples of best work.
Mentor, Coach, Teacher, Process Specifier: A human agent responsible to instruct Task Performers in the performance of the tasks, coach them in that performance, evaluate performance, or specify performance.
Coaching or Instructional Situation Type: A situation-based state (Use Case equivalent) in which instruction or coaching is to occur. Examples include classroom situations, laboratory situations, on-the-job performance situations, project situations, co-curricular activity situations, etc. These models include overall educational practices and strategies, along with expected learning outcomes.
Instructional Curriculum: Coaching or Instructional Situation Types are aggregated into collections that constitute a curriculum planned and optimized to accomplish an overall educational outcome.
Curriculum Map: A mapping of Instructional Situations to Performance Situations, indicating the plan of coverage of the Performance Situations by the Curriculum. This is valuable for understanding how the Curriculum addresses the intended outcomes.
Curriculum Planner, Assessor: A human agent responsible for establishing, analyzing, and improving the Curriculum over time, including advance planning, retrospective assessment, and improvement planning.
Curriculum Change Situation Type: A situation-based state (Use Case equivalent) representing a possible requested change to improve the Curriculum. These are pre-modeled as Types representing possible future requests.
Curriculum Change Situation Instance: An actual occurrence of a Change Situation (Change Request), provided for resolution. The ongoing improvement process analyzes and prioritizes these instances for disposition as (possible) curricular change.
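The relationships among these entities can be sketched in code. The following Python fragment is an illustrative model only: all class, field, and function names are assumptions, not part of the actual ePortfolio system. It shows how a Curriculum Map can be derived by relating Instructional Situation Types to the Performance Situation Types they claim to cover, and how coverage gaps can be detected.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Figure 13 entities; names are assumptions,
# not taken from the actual ePortfolio system.

@dataclass
class PerformanceSituationType:
    name: str
    objectives: list = field(default_factory=list)   # performance objectives
    indicators: list = field(default_factory=list)   # performance indicators

@dataclass
class InstructionalSituationType:
    name: str                                   # e.g., classroom, laboratory, project
    covers: list = field(default_factory=list)  # names of Performance Situation Types addressed

def curriculum_map(curriculum, targets):
    """Derive the Curriculum Map: for each Performance Situation Type,
    list the Instructional Situation Types that address it."""
    return {t.name: [i.name for i in curriculum if t.name in i.covers]
            for t in targets}

def uncovered(cmap):
    """Performance Situation Types not covered by any instruction."""
    return [name for name, sources in cmap.items() if not sources]
```

Such a map makes the plan of coverage explicit, and `uncovered` flags Performance Situations that no part of the Curriculum yet addresses.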
3.4 System Engineering Lessons Learned
Experience in using this methodology has taught lessons including:
4 Assessment

In the previous section, we discussed the use of system engineering methodology and tools to solve complex system problems. In this section, we describe the use of assessment methodology and the application of information technology to assess educational outcomes. The use of quality methods to assess outcomes is essential to the continuous improvement of institutional processes in all types of institutions - commercial, educational, military, governmental, and others.
4.1 Assessment Methodology
Institutions are coming under increased pressure to demonstrate to their stakeholders that they are producing quality products and/or services. This pressure comes from state and/or federal agencies, investors, customers, and regulating bodies. For example, in education, new accreditation standards developed by regional accrediting bodies now require institutions of higher education to conduct self-studies demonstrating that the institution is achieving its academic objectives and has processes in place to ensure its continuous improvement. This requires institutions to identify their objectives in measurable ways and to develop processes that support continuous improvement efforts. Few institutions have a well-developed set of specific, measurable student outcome goals that are tied to their institutional mission and guide the educational delivery system both in and out of the classroom. Because most educational curricula are packaged in a structure of required and elective courses, it is difficult to see how the content and strategies employed in the classroom contribute to the expected general student learning outcomes (if, in fact, the outcomes have been defined). Generally, there is also no system of individual faculty responsibility for seeing that students acquire knowledge or skills outside the individual faculty member's course content area. At the same time, students are often uninformed about the specific learning outcomes they are expected to achieve during their collegiate experience but are fairly well versed in which combinations of courses are acceptable for a college degree. In manufacturing, companies must meet externally developed quality standards in order to be competitive in many markets. This requires them to develop and demonstrate processes and procedures that assure the customer that the products produced meet a minimal acceptable quality standard.
At the same time that institutions are being pressured by external forces to develop a system of accountability, there is also a need to manage the process of change that results from the rapid rate at which customer and client needs are evolving. Making decisions about the addition of new services or product lines to meet the needs of society and the specific needs of individual customers/clients, and finding ways to produce the desired outcome or product more effectively and efficiently, are often hampered by the lack of a well-defined system of continuous improvement to support the decision-making process.
To support organizational change and accountability and to create a true learning organization where all constituents are involved in meaningful and collaborative ways, it is essential that a continuous improvement (CI) system be developed that supports the effective and efficient delivery of the primary functions of the institution and the monitoring of institutional progress in achieving its desired outcomes. Models have been developed that describe the processes required to support continuous improvement of the educational system. One such model is summarized in [Figure 14]. This model is designed to guide the development of a system of continuous improvement of general outcomes.
Figure 14. Model for Continuous Improvement
The process begins with a vision and mission statement. A clear sense of what the institution mission is and its vision of the future is the foundation for developing its objectives and indicators of whether or not those objectives are being met. The development of the objectives and indicators requires the input of many constituents both inside and outside the institution. For example, in an educational institution, these constituents include faculty, staff, students, employers, recruiters, boards of advisors and trustees, and, in the case of public institutions, appropriate state agencies. The objectives and indicators and even the mission of the institution may
change over time as the needs of the customer/client change and the technology and knowledge base grows. Once the objectives and indicators have been developed, the institution must make decisions about the strategies and practices that it will employ to achieve them. The outcomes that result from the implementation of the institutional strategies and practices are the focus of study to determine whether or not the intended objectives were achieved. This study takes the form of collecting evidence through the assessment process. The assessment process includes the development of data collection methods, the collection of data, their analysis, and the reporting of findings. Once the findings have been reported, it is necessary to evaluate them and, where appropriate, make recommendations for change based on the evidence and an examination of the strategies and practices that have been used to achieve the objectives. These findings and/or change recommendations are then fed back to the appropriate internal and external groups for implementation. This process is continuous and, although the processes in the model appear to be sequential and cumulative, they are in fact iterative, with multiple feedback loops. As one process in the CI model is employed, it informs previous or planned processes, and improvements are made in the processes and/or objectives before the cycle has been completed.
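The stages of the cycle just described can be sketched as a loop. The fragment below is a minimal illustration under stated assumptions: the stage names, the dictionary-based state, and the function signatures are invented for this sketch, not an actual implementation of the Figure 14 model.

```python
# Minimal sketch of the CI cycle of Figure 14. Stage names and the
# state representation are illustrative assumptions only.

CI_STAGES = [
    "define mission and vision",
    "develop objectives and indicators",
    "choose strategies and practices",
    "implement and observe outcomes",
    "assess: collect, analyze, report",
    "evaluate findings and recommend changes",
    "feed back to internal and external groups",
]

def run_cycle(state, stage_impls=None):
    """One iteration of the CI loop. Each stage may revise the shared
    state, so feedback can alter earlier decisions: the loop is
    iterative, not strictly sequential."""
    stage_impls = stage_impls or {}
    for stage in CI_STAGES:
        impl = stage_impls.get(stage, lambda s: s)  # default: no change
        state = impl(state)
        state.setdefault("completed", []).append(stage)
    return state
```

Because every stage receives and returns the whole state, an implementation of any one stage can adjust objectives or strategies set by an earlier stage, mirroring the multiple feedback loops described above.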
4.2 Assessment Tools
The tools that support the assessment process are many and vary with the institutional context and focus of the outcome being studied [Prus and Johnson 1994]. For example, in an educational institution, tools to measure student learning outcomes may include:
In a manufacturing facility, tools commonly used to measure processes and outcomes may include [Brassard 1989]:
As commercial institutions adopt more quality improvement processes, the distinction between the tools used to measure processes, productivity, and outcomes in different types of institutions is becoming less pronounced. In whatever environment measurement tools are used, it is important to consider that there will always be more than one way to measure any objective, and no single method is good for measuring a wide variety of different objectives. In evaluating the methods to be used, it has been found that there is a consistently inverse relationship between the quality of measurement methods and their expediency. Whatever assessment method is chosen and implemented, it is important to pilot test it to see if the method is appropriate for the objective being measured. Before developing "in-house" assessment tools, research should be conducted to see if appropriate tools have already been developed and tested.
In addition to the traditional format of assessment tools, electronic forms of the tools are being developed. With the proliferation of computing technologies in all areas of society, electronic assessment tools are becoming very prevalent. The use of Web-based surveys, documentation software, and software-based analytical tools for both quantitative and qualitative assessment is commonplace in both commercial and educational institutions. In this section we discuss the development of an electronic portfolio system as a method for assessing institutional effectiveness related to student outcomes.
In education, a portfolio has been described as a "purposeful collection of student work that exhibits the student's efforts, progress, and achievements. The collection must include student participation in selecting contents, the criteria for selection, the criteria for judging merit, and evidence of student self-reflection." [Paulson et al 1991] While there is agreement on the definition of a portfolio, there is no one correct way to design a portfolio process. The design should be driven by a clear understanding of the desired outcome from using portfolios and the specific skills to be assessed. The desired outcome will determine the design and focus of the portfolio process. Portfolios are not an end in themselves and must be developed with a clear vision of the desired outcome. [Arter et al 1995]
There are multiple benefits and some disadvantages in using portfolios of student work as a means of assessing student outcomes. [Prus and Johnson 1994] Portfolios can:
The use of portfolios also has some disadvantages that need to be considered when choosing an assessment method. They include:
The ePortfolio was designed at Rose-Hulman Institute of Technology as a means of collecting rich multimedia portfolios of each student's best work across the whole population of students. Modules were created to provide for the asynchronous assessment of student work by trained raters and the aggregate statistical reporting of results. The ePortfolio system was designed to measure the effectiveness of the overall institutional educational process as reflected by student learning outcomes.
The ePortfolio has been developed to include the following modules:
4.3 Applications of the Assessment Methodology
In applying the continuous improvement model to institutional processes, Rose-Hulman has completed the first iteration of defining the student learning objectives. A team of faculty then met to evaluate a number of assessment methods to determine what primary method would be used to collect evidence of student learning outcomes. The criteria established to evaluate the different methods were: 1) the method should provide rich information, 2) the method had to be valid in that it focused on Rose-Hulman-specific outcomes, 3) the method needed to be non-intrusive on faculty and students, and 4) the method should be relatively easy to administer. The result of the deliberations was a decision to use portfolios as the primary means of collecting evidence and evaluating student learning outcomes.
Concern about the workload required to administer a portfolio process led to the decision to utilize information technology to facilitate the process. An electronic portfolio system would not only provide efficiency in data storage and acquisition but also promote asynchronous assessment, in that students could submit documents and review their portfolios, and faculty could rate the portfolios, at any time from anywhere.
The ePortfolio CI system requirements were developed primarily by faculty and were designed to be sensitive to the heavy workloads confronting both faculty and students at Rose-Hulman. System designers from the Office of Institutional Research met with faculty and administrators to determine design requirements, preferences for options, and possible future development of the system. Each of these aspects of the portfolio was important to build into the design. Once the system requirements were defined, institutional resources were used to create the electronic infrastructure needed to support the design. The CI system was designed to minimize the amount of human intervention necessary for routine tasks to support and maintain the process. Special attention was given to provide both faculty and student development opportunities through the system design and implementation process. This was done primarily through the prototyping process and assessing both the software and the student/faculty experience. The ePortfolio design provides for maximum flexibility to accommodate multiple portfolio uses and institutional contexts. The administration module enables adaptability to local requirements.
Secondary applications have emerged during the process of implementing the primary system. These include supporting reflection by individual students on their own progress, the accumulation of information in support of individual student resumes, and the awareness of student learning by faculty raters in areas outside their own discipline. In addition, the ePortfolio system is being adapted to support course-level and department-level portfolios. The institution will soon be piloting the use of the ePortfolio as a means to archive and review faculty promotion, tenure, and retention documentation.
First-year students are introduced to the learning objectives and the portfolio system during their first quarter on campus. Because the maintenance of portfolios is the responsibility of the students, faculty are not asked to select or collect materials that are to be placed in the system. Students are expected to choose from among the many artifacts that they produce during their collegiate experience, selecting those they feel best represent their progress toward achieving the desired student outcomes that are formally displayed as process standards in the system. In the process of submitting evidence, students are asked to include reflective statements in their portfolios explaining why they believe the chosen submission meets the stated criteria. The ePortfolio system is web-based and enhanced by a powerful database. Students can easily submit any documents they can put in an electronic format (e.g., video and/or audio files, scanned documents, etc.).
Faculty are asked to indicate on a web-based curriculum-mapping survey which of the nine student learning objectives they cover in the specific courses they teach. This reinforces the overall RHIT learning objectives to the faculty and provides a curriculum map for the institute that identifies where in the curricula the specific learning objectives are taught and reinforced, and where students receive formal feedback on their progress toward achieving the objectives.
Prior to the rating process, faculty are assigned to work in teams to assess specific learning objectives. Each team rates one of the learning objectives. The first step in the rating process is to establish inter-rater reliability. During this process, each rater on a team is assigned the same students' electronic files to assess independently. During the independent rating, each rater keeps a log of the rationale behind his/her assessment decisions for each of the performance criteria that define the learning objective. Logs are used to document the rating process and provide the basis for the development of meaningful rating rubrics. These logs are electronic and a part of the ePortfolio rating module. After each faculty member has made his/her ratings, the team meets to compare and discuss their ratings and the rationale behind their decisions. When the team is confident that they are rating the material using the same criteria, the rubric is formalized for each of the performance criteria and used during the subsequent rating process. The rubrics can be changed by mutual agreement among the raters.
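The degree of agreement reached during this calibration step can be quantified. The sketch below is an illustration, not part of the ePortfolio rating module: it computes raw percent agreement and Cohen's kappa (agreement corrected for chance) for two raters over the same set of portfolios.

```python
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    """Fraction of portfolios on which two raters give the same rating."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, given each rater's marginal rating frequencies."""
    n = len(ratings_a)
    p_observed = percent_agreement(ratings_a, ratings_b)
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_chance = sum((count_a[c] / n) * (count_b[c] / n)
                   for c in set(ratings_a) | set(ratings_b))
    # Undefined when p_chance == 1 (both raters use one category only).
    return (p_observed - p_chance) / (1 - p_chance)
```

For example, two raters who agree on three of four portfolios have 75% raw agreement, but the chance-corrected kappa is lower whenever their rating distributions make some agreement likely by accident.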
Expected measurable learning outcomes are explicit, reinforced, and provide a common language and common expectations for teaching and learning for both students and faculty. On most college campuses, this is no small feat. However, the CI system requires more than just the collection of data. It also requires a process to review how the assessment results match the expected outcomes. The powerful database approach allows for the efficient reporting of results using multiple criteria (e.g., major, class, sex, high school size, etc.). Closing the loop in the CI system is accomplished by reviewing the assessment data (which is electronically analyzed and reported) and making recommendations to the Institute. At the same time that the Institute is considering recommendations, individual departments will receive and review the results for the students in their curricular programs.
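The kind of multi-criteria reporting mentioned above can be illustrated with a small grouping sketch. The record fields and scores below are invented for illustration; the actual system uses its own database schema and reporting module.

```python
from collections import defaultdict
from statistics import mean

def report(records, *criteria):
    """Average rating grouped by any combination of criteria
    (e.g., major, class, sex), in the spirit of the multi-criteria
    reporting described above."""
    groups = defaultdict(list)
    for r in records:
        key = tuple(r[c] for c in criteria)   # one group per criteria combination
        groups[key].append(r["score"])
    return {key: mean(scores) for key, scores in groups.items()}

# Invented sample records for illustration only.
ratings = [
    {"major": "EE", "class": "junior", "score": 3},
    {"major": "EE", "class": "senior", "score": 4},
    {"major": "ME", "class": "junior", "score": 2},
    {"major": "ME", "class": "junior", "score": 4},
]
```

Calling `report(ratings, "major")` averages by major, while `report(ratings, "major", "class")` drills down one level further, which is the sense in which a database-backed system can report the same assessment data along many different axes.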
4.4 Assessment Lessons Learned
During the development and implementation of the CI system, several lessons were learned. The following represent lessons learned in the development and implementation of a general CI system applied within an educational institution environment. We believe these lessons have counterparts in most other environments.
Involve Key Stakeholders: The development and implementation of a campus CI system is an institution-wide process. It is critical to involve key stakeholders throughout the process, but particularly in the development of objectives and in the feedback process. This increases buy-in and the likelihood that recommendations will receive broad acceptance. As in the case of the learning objectives, some processes require extensive involvement of faculty and students. Other institutional objectives may require heavy involvement from other institutional constituents. For example, in the case of institutional objectives that deal with resource allocations, the constituents responsible for establishing budget guidelines and priorities should be involved in developing measurable objectives and establishing strategies for achieving them. In all instances, key constituencies need to be identified and included in the CI process, and effective and efficient methods of data collection and analysis must be implemented.
Allocate Resources: When information technologies are utilized in the CI processes, there is a cost. This may take the form of human and/or capital resources. It requires effort for in-house experts to develop software according to specifications, to perform testing, and to carry out ongoing improvement of the software. Servers and databases need to be purchased and maintained to support the software and growing needs for data storage and retrieval. However, the initial development process is a small fraction of the total cost of software, the larger costs being specification, testing, maintaining and evolving, documenting, etc. In addition to the development of the electronic infrastructure, there also needs to be training of students and faculty on the use of the system and coordination of rating sessions and feedback of evaluation results. The value of the electronic portfolio process needs to be clear and accepted so that students and faculty both recognize that the benefits of the process outweigh the costs. Support must begin with high levels of leadership, and a reward system that reinforces faculty participation in the process needs to be in place.
Ask Them, Ask Them, and Ask Them Again: The importance of the support of faculty cannot be overstated. It is important that faculty are involved early in the process and participate in identifying learning objectives and developing performance indicators. To make the portfolio process work, it is necessary that faculty champion the system with their peers and that they encourage students to participate in the process. It is unlikely that the process can be successful, long term, without active faculty support.
Tell Them, Tell Them, and Tell Them Again: Faculty have many demands on their time and attention. It is important to reinforce the CI processes at every opportunity. Getting faculty involved in some part of the process and providing some mechanism to reward their participation is the best way of keeping them engaged.
Dynamic Process: It is important to recognize that the CI process is a living process. That is, it is continually changing and must be nurtured. Institutional resources must be allocated at a level that ensures its maturation and optimizes its potential.
Ambiguous: In developing a CI process, it is important to recognize that using simple models of a complex system results in ambiguities. Many different models can be used, and there may not be one best model. Nevertheless, in the spirit of Frederick Taylor, we are seeking "best practices" with the idea that one practice may produce an outcome that is significantly superior to another.
Iterative and Integrated: The CI process is made up of multiple steps and processes. Each step needs to fit together to form a cohesive whole. In addition, the process is iterative in that findings from one step of the process serve to inform the other parts of the process. Multiple iterations may take place within the process cycle before an entire cycle has been completed. For example, during the portfolio rating process, faculty made several changes to the performance indicators to increase the clarity of the learning objective. This happened before the results of the rating process were analyzed.
Start Early: It is important that institutions and programs allow enough time to develop a process that is responsive to the demands for information and accountability. Engaging faculty and other stakeholders in meaningful ways, integrating information technology into the CI process, and evaluating and improving processes as they are implemented all take time. For the purposes of accreditation, it is desirable to have completed the continuous improvement cycle at least once before an accreditation visit; this is estimated to take at least two years for an inclusive process with initial demonstrated results. The same is true of commercial continuous improvement programs built around structures such as ISO 9000, TQM, and CMM, all of which advise that significant improvement programs should be expected to take years.
Decouple From Faculty Evaluation: Faculty participation is critical to the success of an institutional CI process. If faculty believe that they, personally, are going to be evaluated by this process they will be resistant to its use. Most institutions have faculty evaluation processes in place that provide individual feedback in areas that are valued by the institution. If not, they should be developed, but the institutional CI process should not be the vehicle.
Review Existing Technology: Much is currently taking place in the area of integrating technology into the assessment process in higher education. In some cases the technology is being developed "in-house" and in others by commercial enterprises. Investigating what others are doing in this area provided valuable information and insight into the gaps between what we wanted to do and what was available from other sources. It also helped inform us about areas where we could improve our own view of how technology could be used.
Allocate Resources: Development of technology-based assessment processes is very resource intensive. Even if there is local expertise to manage and develop the technology, the institution will need to allocate time and funding for the tools needed for the development effort. Depending on the tool being developed, this is a serious consideration when making the decision to integrate technology into the CI process.
Test the Technology: After the development of the alpha version of the student module of the ePortfolio, a pilot project was conducted using upper-class students to test the module. This turned out to be a critical step prior to implementing the system with all the students. The students made many good suggestions about both the user interface and the implementation process. As a result of this experience, the faculty rating module was also tested, and significant improvements and enhancements were added. Whether the technology is developed internally or purchased from a vendor, experience indicates the value of pilot testing before ramping up to full implementation.
Narrow the Scope: Decisions about the use of technology and how to integrate it are very similar to the assessment process itself. Once there are multiple users and/or beneficiaries of the technology, there are many suggestions about enhancements and additional features that could be added. It is important to clearly define the purpose of the technology and stay focused on the task at hand. In the design of the system, it is helpful to make it as versatile as possible for a large number of applications; however, this should not be done at the expense of producing a timely and efficient local process.
Compatibility Issues: Issues of the compatibility of various database platforms and computing systems need to be considered throughout the design process. These also evolve constantly, and updates are required to maintain compatibility. When multiple raters are involved in rating student portfolios, the issue of being able to access student material can become a problem. For example, in the ePortfolio system discussed here, students are encouraged to submit documents in .pdf format for ease of multiple user access via the web. However, it is not a requirement and most students, at this point, do not convert their files. It is important that compatibility issues are addressed early on in the process.
Importance of Support Services: Even though the technology may be locally developed by a non-IT department, it is important to coordinate the use of the system with local information technology professionals and institutional IT resource managers. It is unlikely that responsibility for the assessment process will reside in the computing center; therefore, appropriate access to servers and integrated networks to support the ePortfolio process is critical. The smooth implementation of an integrated technology-enhanced assessment system will, in part, depend on the ability of those responsible for the assessment process to coordinate efforts with those responsible for computing resources.
Confusion of the Tool With the Process: It is important not to confuse the technology tools with the assessment process. No matter how well the technology is designed, it serves as a conduit for the assessment process; it is not the assessment process. The technology tool must support the assessment process in ways that provide efficient, accessible, and userfriendly interfaces. It does not substitute for welldeveloped, clearly stated, and measurable institutional objectives or evaluation processes that utilize the assessment results and are locally valid.
5 Conclusions and Future Plans
Sections 3 and 4 list individual lessons we have learned. More global conclusions and plans are as follows:
delivery. Methods and tools such as those described here will be needed to assess the relative merits of competing approaches as they are piloted, deployed, and evolved.
6 Appendix: About The Organizations
Two organizations collaborated in the work described here, and the authorship of this article.
Rose-Hulman Institute of Technology (RHIT) is a leading undergraduate collegiate institution, providing baccalaureate and master's-level degree programs in engineering, science, and mathematics. RHIT was recently voted the number one school offering these programs at the B.S./M.S. level by representatives of its peer institutions nationally. RHIT has been a leader in the educational assessment movement, and has served as the host location and sponsor for the Best Assessment Processes Symposia held in 1997, 1998, and scheduled for April, 2000. Dr. Gloria Rogers, Vice President of Institutional Research and Assessment, is author of the assessment guidebook, "Stepping Ahead: An Assessment Plan Development Guide." The school is 125 years old, and offers degree programs in Applied Optics, Civil Engineering, Chemistry, Chemical Engineering, Computer Science, Economics, Electrical Engineering, Computer Engineering, Mathematics, Mechanical Engineering, and Physics. As a part of its interaction with business and industry, RHIT recently established the Center for an Innovation Economy (CIE), located on a 180-acre business campus near its academic campus in Terre Haute, Indiana. RHIT was an early pioneer in the development and application of electronic portfolios for use in educational program assessment, resulting in the ePortfolio system.
International Centers for Telecommunication Technology, Inc. (ICTT) is a specialist in system engineering services for its complex systems clients in telecommunications, mobile equipment and power systems, and other markets. The
company specializes in providing system engineering across multiple technologies (electronic, mechanical, software, chemical, and human enterprise organizations), for which the focal issue is the complexity of families of configurable systems. A commercial enterprise, ICTT is partly owned by Rose-Hulman Institute of Technology, with one of its offices located at Aleph Park, RHIT's business campus. Through its research, publishing, and licensing affiliate, System Sciences, LLC, the company supplies Systematica™, a system engineering methodology and supporting tool set for use by organizations in which highly complex systems and organizations are a central issue. ICTT is also the commercial distributor for the ePortfolio™ assessment tools that originated within Rose-Hulman Institute of Technology. William D. Schindel is President of ICTT and System Sciences, LLC.
[Rugaber et al 1999] Rugaber, Spencer, et al: Georgia Tech Reverse Engineering Group Web Site: http://www.cc.gatech.edu/reverse/
Systematica, Gestalt Rules, Return on Variation, and RNA Transaction Model are trademarks of System Sciences, LLC. ePortfolio is a trademark of International Centers for Telecommunication Technology, Inc. Return on Management is a trademark of Strassmann, Inc.