JUCS - Journal of Universal Computer Science - Knowledge Management
http://www.jucs.org/jucs_articles_by_Category/M.
JUCS Topic M. - Knowledge Management
Publishers: Verlag der Technischen Universität Graz; Universiti Malaysia Sarawak
Date: 2011-07-28

An Ontology based Agent Generation for Information Retrieval on Cloud Environment
http://www.jucs.org/jucs_17_8/an_ontology_based_agent
Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach for retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for flexible, transparent, and easy information retrieval on a cloud environment. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF), formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype that integrates these agents and present an interesting example to demonstrate the feasibility of the architecture.
Authors: Yang, Chao-Tung; Luo, Yu-Cheng; Chang, Yue-Shan
Date: 2011-07-20
Keywords: agent generation, cloud computing, information retrieval, ontology
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 8 (2011), pp. 1135-1160

Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute
http://www.jucs.org/jucs_17_7/ontology_based_competency_management
Semantic-based technologies have been steadily increasing their relevance in recent years in both the research and business worlds. Considering this, the present article discusses the design and implementation of a competency management system for the information and communication technologies domain utilizing the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules, and common public vocabularies for human resources. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningful search and retrieval of expertise data on the Semantic Web. The ontological knowledge base aims at storing the extracted and integrated competences from structured as well as unstructured sources. Using the illustrative case study of the deployment of such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionalities of existing enterprise information systems and offers possibilities for the development of future Internet services. This allows organizations to express their core competences and talents in a standardized, machine-processable and understandable format and hence facilitates their integration in the European Research Area and beyond.
Authors: Vraneš, Sanja; Janev, Valentina
Date: 2011-07-20
Keywords: ICT, Semantic Web, case study, competencies, expertise, human resources
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 7 (2011), pp. 1089-1108

Towards Classification of Web Ontologies for the Emerging Semantic Web
http://www.jucs.org/jucs_17_7/towards_classification_of_web
The massive growth in ontology development has opened new research challenges, such as ontology management, search, and retrieval, for the entire Semantic Web community. This has resulted in many recent developments, like OntoKhoj, Swoogle, and OntoSearch2, that facilitate the tasks users have to perform. These Semantic Web portals mainly treat ontologies as plain texts and use traditional text classification algorithms to classify ontologies in directories and assign predefined labels, rather than using the semantic knowledge hidden within the ontologies. These approaches suffer from many types of classification problems and a lack of accuracy, especially in the case of overlapping ontologies that share common vocabularies. In this paper, we define the ontology classification problem and categorize it into several sub-problems. We present a new ontological methodology for the classification of web ontologies, which has been guided by the requirements of emerging Semantic Web applications and by the lessons learnt from previous systems. The proposed framework, OntClassifire, is tested on 34 ontologies with a certain degree of domain overlap, and the effectiveness of the ontological mechanism is verified. It benefits the construction, maintenance, and expansion of ontology directories on the Semantic Web, helping to focus crawling and to improve the quality of search for software agents and people. We conclude that the use of context-specific knowledge hidden in the structure of ontologies gives more accurate results for ontology classification.
Authors: Bouras, Abdelaziz; Qadir, Muhammad Abdul; Fahad, Muhammad; Farukh, Muhammad; Moalla, Nejib
Date: 2011-07-20
Keywords: ontology classification and retrieval, ontology searching, semantic matching, semantic web portals, web page classification
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 7 (2011), pp. 1021-1042

Algorithms for the Evaluation of Ontologies for Extended Error Taxonomy and their Application on Large Ontologies
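The contrast the OntClassifire abstract draws, classifying ontologies by the knowledge they contain rather than as plain text, can be illustrated with a toy similarity measure over concept labels. This is a minimal sketch, not the paper's method; the directory categories and concept sets below are invented for illustration:

```python
def jaccard(a, b):
    """Overlap between two concept sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_ontology(concepts, directory):
    """Assign an ontology (given as a set of concept labels) to the
    directory category whose reference concept set it overlaps most."""
    return max(directory, key=lambda cat: jaccard(concepts, directory[cat]))

# Invented reference concept sets for two overlapping categories
directory = {
    "Medicine": {"Disease", "Patient", "Treatment", "Gene"},
    "Biology": {"Gene", "Protein", "Cell", "Organism"},
}
onto = {"Gene", "Protein", "Cell", "Sequence"}
print(classify_ontology(onto, directory))  # → Biology
```

Even this crude structural overlap separates the two categories despite their shared vocabulary ("Gene"), which is the situation where plain-text classifiers are reported to struggle.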
http://www.jucs.org/jucs_17_7/algorithms_for_the_evaluation
Ontology evaluation is an integral and important part of the ontology development process. Errors in ontologies can be catastrophic for the information systems based on them. In our experiments, the existing ontology evaluation systems were unable to detect many errors defined in the error taxonomy, such as circularity errors in class and property hierarchies, common classes and properties in disjoint decompositions, redundancy of subclasses and subproperties, redundancy of disjoint relations, and disjoint knowledge omission. We have formulated efficient algorithms for the evaluation of these and other errors according to the extended error taxonomy. These algorithms have been implemented in a tool named OntEval, and the implementation has been used to evaluate well-known ontologies, including the Gene Ontology (GO), the WordNet ontology, and OntoSem. The ontologies are indexed using a variant of the previously proposed scheme Ontrel. A number of errors and warnings in these ontologies have been discovered using OntEval. We also report the performance of our implementation.
Authors: Qadir, Muhammad Abdul; Qazi, Najmul Ikram
Date: 2011-07-20
Keywords: information systems applications, knowledge engineering, knowledge modelling, ontological engineering, ontology evaluation, semantic computing
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 7 (2011), pp. 1005-1020

Knowledge Extraction from RDF Data with Activation Patterns
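One of the error classes named in the OntEval abstract above, circularity in a class hierarchy, can be detected with a straightforward graph traversal. A minimal sketch, not the OntEval algorithm itself; the class names and dict encoding of the hierarchy are illustrative:

```python
def find_cycles(subclass_of):
    """Detect circularity errors in a class hierarchy given as a
    dict mapping each class to its list of direct superclasses.
    Returns the classes that appear among their own ancestors."""
    cycles = []
    for start in subclass_of:
        # follow superclass links depth-first, watching for a return to start
        stack, seen = [start], set()
        while stack:
            cls = stack.pop()
            for parent in subclass_of.get(cls, []):
                if parent == start:
                    cycles.append(start)  # start is its own ancestor
                    stack = []
                    break
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
    return cycles

# Illustrative hierarchy: Student <: Person <: Agent, but Agent <: Student
hierarchy = {
    "Student": ["Person"],
    "Person": ["Agent"],
    "Agent": ["Student"],  # introduces a circularity error
    "Course": ["Thing"],
}
print(find_cycles(hierarchy))  # every class on the cycle is reported
```

In a real evaluator the same traversal would run over the property hierarchy as well, and the remaining error classes (disjointness violations, redundancy) each need their own check.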
http://www.jucs.org/jucs_17_7/knowledge_extraction_from_RDF
RDF data can be analyzed with various query languages such as SPARQL. However, due to their nature, these query languages do not support fuzzy queries that would allow us to extract a broad range of additional information. In this article we present a new method that transforms the information represented by subject-relation-object triples within RDF data into Activation Patterns. These patterns represent a common model that is the basis for a number of sophisticated analysis methods, such as semantic relation analysis, semantic search queries, unsupervised clustering, supervised learning, and anomaly detection. In this article, we explain the Activation Patterns concept and apply it to an RDF representation of the well-known CIA World Factbook.
Authors: Lackner, Günther; Teufl, Peter
Date: 2011-07-20
Keywords: RDF, activation patterns, fuzzy queries, knowledge mining, machine learning, semantic similarity
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 7 (2011), pp. 983-1004

Leveraging Web 2.0 in New Product Development: Lessons Learned from a Cross-company Study
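The transformation described in the Activation Patterns abstract above, turning subject-relation-object triples into activation values, can be illustrated in heavily simplified form by spreading activation over a triple graph. The node names, decay factor, and step count below are invented for illustration; the paper's actual model is considerably richer:

```python
from collections import defaultdict

def activation_pattern(triples, seed, steps=2, decay=0.5):
    """Spread activation from a seed node over an undirected view of
    (subject, relation, object) triples; returns node -> activation."""
    neighbours = defaultdict(set)
    for s, _rel, o in triples:
        neighbours[s].add(o)
        neighbours[o].add(s)
    activation = {seed: 1.0}
    frontier = {seed}
    for _ in range(steps):
        nxt = {}
        for node in frontier:
            share = activation[node] * decay  # decayed activation passed on
            for nb in neighbours[node]:
                nxt[nb] = nxt.get(nb, 0.0) + share
        for node, a in nxt.items():
            activation[node] = activation.get(node, 0.0) + a
        frontier = set(nxt)
    return activation

# Toy triples in the spirit of the CIA World Factbook example
triples = [
    ("Austria", "capital", "Vienna"),
    ("Austria", "language", "German"),
    ("Germany", "language", "German"),
]
pattern = activation_pattern(triples, seed="Austria", steps=2)
# "Germany" receives activation via the shared "German" node, a graded
# similarity that a strict SPARQL match would not surface directly
```

The resulting vector of activations per node is the kind of common representation on which clustering, search, or anomaly detection can then operate.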
http://www.jucs.org/jucs_17_4/leveraging_web_20_in
The paper explores the application of Web 2.0 technologies to support product development efforts in a global, virtual and cross-functional setting. It analyses the dichotomy between the prevailing hierarchical structure of CAD/PLM/PDM systems and the principles of the Social Web in the light of emerging product development trends. Further, it introduces the concept of Engineering 2.0, intended as a more bottom-up and lightweight knowledge-sharing approach to support early-stage design decisions within virtual and cross-functional product development teams. The lessons learned collected from a cross-company study highlight how to further develop blogs, wikis, forums and tags for the benefit of new product development teams, highlighting opportunities, challenges and no-go areas.
Authors: Chirumalla, Koteshwar; Bertoni, Marco
Date: 2011-07-08
Keywords: Cross-functional Teams, Engineering 2.0, Functional Product Development, Product Development, Web 2.0
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 4 (2011), pp. 548-564

Enterprise Microblogging for Advanced Knowledge Sharing: The References@BT Case Study
http://www.jucs.org/jucs_17_4/enterprise_microblogging_for_advanced
Siemens is well known for its ambitious efforts in knowledge management, providing a series of innovative tools and applications within its intranet. References@BT is such a web-based application, with currently more than 7,300 registered users from more than 70 countries. Its goal is to support the sharing of knowledge, experiences and best practices globally within the Building Technologies division. Launched in 2005, References@BT features structured knowledge references, discussion forums, and a basic social networking service. In response to user demand, a new microblogging service, tightly integrated into References@BT, was implemented in March 2009. More than 500 authors have created around 2,600 microblog postings since then. Following a brief introduction to the community platform References@BT, we comprehensively describe the motivation, experiences and advantages for an organization in providing internal microblogging services. We provide detailed microblog usage statistics, analyzing the top ten users by postings and followers as well as the top ten topics. In doing so, we aim to shed light on microblogging usage and adoption within a globally distributed organization.
Authors: Stocker, Alexander; Müller, Johannes
Date: 2011-07-08
Keywords: Enterprise 2.0, Web 2.0, enterprise microblogging, knowledge management, knowledge sharing, microblogging, social media
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 4 (2011), pp. 532-547

IDEA: A Framework for a Knowledge-based Enterprise 2.0
http://www.jucs.org/jucs_17_4/idea_a_framework_for
This paper looks at the convergence of knowledge management and Enterprise 2.0 and describes the possibilities for an over-arching exchange and transfer of knowledge in Enterprise 2.0. This is underlined by the concrete example of T-Systems Multimedia Solutions (MMS), which describes the establishment of a new enterprise division, "IG eHealth". This division is typified by the decentralised development of common ideas, collaboration, and the assistance in performing responsibilities provided by Enterprise 2.0 tools. Taking this archetypal example and the derived abstraction of the problem of knowledge-worker collaboration as the basis, a regulatory framework is developed for knowledge management to serve as a template for the systemisation and definition of specific Enterprise 2.0 activities. The paper concludes by stating success factors and supporting Enterprise 2.0 activities that facilitate the establishment of a practical knowledge management system for the optimisation of knowledge transfer.
Authors: Lin, Dada; Schoop, Eric; Geißler, Peter; Ehrlich, Stefan
Date: 2011-07-08
Keywords: Enterprise 2.0, IDEA, enabling factors, expert knowledge, knowledge management, regulatory framework, social software
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 4 (2011), pp. 515-531

Early Results of Experiments with Responsive Open Learning Environments
http://www.jucs.org/jucs_17_3/early_results_of_experiments
Responsive open learning environments (ROLEs) are the next generation of personal learning environments (PLEs). While PLEs rely on the simple aggregation of existing content and services, mainly using Web 2.0 technologies, ROLEs are transforming lifelong learning by introducing a new infrastructure on a global scale while dealing with existing learning management systems, institutions, and technologies. The requirements engineering process in highly populated test-beds is as important as the technology development. In this paper, we describe first experiences of deploying ROLEs at two higher learning institutions in very different cultural settings. Shanghai Jiao Tong University in China and the “Center for Learning and Knowledge Management and Department of Information Management in Mechanical Engineering” (ZLW/IMA) at RWTH Aachen University have exposed ROLEs to their students in already established courses. The results demonstrate the readiness of the technology for large-scale trials and its benefits for students, leading to new insights into the design of ROLEs, also for more informal learning situations.
Authors: Richert, Anja; Heiden, Bodo von der; Ullrich, Carsten; Renzel, Dominik; Friedrich, Martin; Wolpers, Martin; Klamma, Ralf; Shen, Ruimin
Date: 2011-04-24
Keywords: inter-widget communication, language learning, personalized learning environment, responsive open learning environment
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 3 (2011), pp. 451-471

LaSca: a Large Scale Group Decision Support System
http://www.jucs.org/jucs_17_2/lasca_a_large_scale
Decision-making involves choosing among one or more alternatives to achieve one or more goals. To support this process, there are decision support systems that employ different approaches, some supporting groups and some not. Generally, however, these systems do not offer great flexibility; their users have to follow pre-established decision methods. This paper, after reviewing some decision-making processes, describes LaSca (from "Large Scale"), a system to support decisions in large-scale groups. Besides allowing the benefits of deciding in large groups to be effectively achieved through the proper structuring of the group, the system also allows its users to define for themselves how this structuring will happen, whether or not it is based on existing theories on the subject. So, in addition to facilitating the decision-making process, LaSca also allows its users to decide how to decide.
Authors: Vivacqua, Adriana S.; Carvalho, Gustavo; Souza, Jano M.; Medeiros, Sérgio Palma J.
Date: 2011-07-08
Keywords: CSCW, decision-making, large groups, participatory design
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 2 (2011), pp. 261-275

An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks
http://www.jucs.org/jucs_17_2/an_empirical_study_on
Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of their suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks, as it is people, as opposed to organizations or Information Technology (IT) systems, who cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper shows empirically that these aspects constitute key factors for the success or failure of CENs. Two case studies performed on two different CENs in Switzerland are presented, and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust, communication, and mutual understanding, have to be well addressed in order to design and develop adequate software solutions for CENs.
Authors: Choudhary, Alok; Huber, Charles; Pouly, Michel; Cheikhrouhou, Naoufel
Date: 2011-07-08
Keywords: ICT support, collaborative enterprise networks, communication, human aspects, trust issues
JUCS - Journal of Universal Computer Science, Vol. 17, Issue 2 (2011), pp. 203-223

Information Consolidation in Large Bodies of Information
http://www.jucs.org/jucs_16_21/information_consolidation_in_large
Due to information technologies, the problem we are facing today is not a lack of information but too much information. This phenomenon becomes very clear when we consider two figures that are often quoted: knowledge is doubling in many fields (biology, medicine, computer science, ...) within some 6 years, yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations.

Just look at an arbitrary news item. If it is considered of some general interest, reports of it will appear in all major newspapers, journals, electronic media, etc. This is also the problem with information portals that tie together a number of large databases.

It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge amount of contributions to a digestible number of essays is formidable, indeed science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer science to start tackling this problem, and we will show that in some special cases partial solutions are possible.
Authors: Wurzinger, Gerhard
Date: 2011-03-18
Keywords: information consolidation
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 21 (2010), pp. 3314-3323

Towards a Theory of Conceptual Modelling
http://www.jucs.org/jucs_16_20/towards_a_theory_of
Conceptual modelling is a widely applied practice and has led to a large body of knowledge on constructs that might be used for modelling and on methods that might be useful for modelling. It is commonly accepted that database application development is based on conceptual modelling. It is, however, surprising that only very few publications exist on a theory of conceptual modelling.

Modelling is typically supported by languages that are well-founded and easy to apply for the description of the application domain, the requirements and the system solution. It is thus based on a theory of modelling constructs. At the same time, modelling incorporates a description of the application domain and a prescription of requirements for supporting systems. It is thus based on methods of application domain gathering. Modelling is also an engineering activity with engineering steps and engineering results. It is thus engineering. The first facet of modelling has led to a huge body of knowledge. The second facet is considered from time to time in the scientific literature. The third facet is underexposed in the scientific literature.

This paper aims at developing principles of conceptual modelling. They cover modelling constructs as well as modelling activities and modelling properties. We first clarify the notion of conceptual modelling. Principles of modelling may be applied and accepted, or not, by the modeller. Based on these principles we can derive a theory of conceptual modelling that combines foundations of modelling constructs, application capture and engineering.

A general theory of conceptual modelling is far too comprehensive and far too complex, and it is not yet visible how such a theory can be developed. This paper therefore aims at introducing a framework and an approach towards a general theory of conceptual modelling, of which we are in urgent need. We are sure that this theory can be developed, and we use this paper to introduce its main ingredients.
Authors: Thalheim, Bernhard
Date: 2011-01-31
Keywords: conceptual modelling, general theory of models, modelling, modelling act(ivity), principles of models and modelling
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 20 (2010), pp. 3102-3137

Providing a Proof-Theoretical Basis for Explanation: A Case Study on UML and ALCQI Reasoning
http://www.jucs.org/jucs_16_20/providing_a_proof_theoretical
In this article we argue in favour of Natural Deduction systems as a basis for formal proof explanations. We illustrate our choice by presenting a Natural Deduction system for ALCQI and using it to help explain UML reasoning.
Authors: Rademaker, Alexandre; Haeusler, Edward Hermann
Date: 2011-01-31
Keywords: ALC, ALCQI, Description Logics, Proof Theory, Sequent Calculus, UML, natural deduction
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 20 (2010), pp. 3016-3042

Merging Strategies for Authoring QoE-based Adaptive Hypermedia
http://www.jucs.org/jucs_16_19/merging_strategies_for_authoring
Personalization is desirable, but writing the adaptation behaviour description to go with it is taxing. Even more challenging is the application of multiple adaptation strategies over the same static content. This paper focuses on recent work on strategy modularisation and merger development in the authoring process of adaptive hypermedia. The reason for the modularisation of strategies is to break a complex adaptation decision into a number of simpler ones, which may be reused more easily and applied in different orders. The rationale for strategy merger is to be able to apply multiple adaptation strategies over the same content, a challenge not yet fully addressed in current adaptive hypermedia systems. To demonstrate the proposed method we present an example case study and sample strategies written in the LAG adaptation language. The case study is based on a recently proposed model for Quality of Experience (QoE) in e-learning. This model exposes the complex interaction between a number of factors affecting QoE and hence presents a good candidate for the application of strategy merger as well as modularisation. We have evaluated this approach via structured questionnaires with a number of experts in the design of hypermedia content, especially in the domain of education. This allows us to draw generic conclusions both for our own further research and for the community at large interested in the reuse and modularisation of adaptation.
Authors: Cristea, Alexandra I.; McManis, Jennifer; Scotton, Joshua; Moebs, Sabine
Date: 2011-01-27
Keywords: Adaptation, Adaptive (Educational) Hypermedia, LAG, Multimedia Learning, Quality of Experience, Quality of Service, Strategy merger, Strategy modularisation
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 19 (2010), pp. 2756-2779

Execution Model and Authoring Middleware Enabling Dynamic Adaptation in Educational Scenarios Scripted with PoEML
http://www.jucs.org/jucs_16_19/execution_model_and_authoring
The design of adaptive e-learning systems has been approached from different points of view. Adaptive Educational Hypermedia (AEH) conceptual frameworks usually decompose this problem into separate concerns: a User Model (UM), an Adaptation Model (AM), and a Domain Model (DM). Educational Modelling Languages (EMLs), in turn, provide adaptation mechanisms such as the modelling of participants following conditional learning paths over a common content structure. The design of adaptive learning paths in EMLs (the Adaptation Model) is fixed at design-time, and no changes to it are allowed at run-time. In this paper we describe the support of dynamic adaptation features (run-time changes to models) using PoEML (Perspective-oriented EML) as the modelling language, with a focus on the execution model of the PoEML engine and on a SOA-based middleware used by authoring tools to invoke change primitives.
Authors: Anido-Rifon, Luis; Caeiro-Rodriguez, Manuel; Llamas-Nistal, Martín; Perez-Rodriguez, Roberto
Date: 2011-01-27
Keywords: Adaptive Learning Systems, Dynamic Adaptation, Educational Modelling Languages
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 19 (2010), pp. 2821-2840

Web Context Classification Based on Information Quality Factors
http://www.jucs.org/jucs_16_16/web_context_classification_based
The fact that the World Wide Web is being used for various purposes also implies that users may have various information quality factors to consider depending on their current context. In this regard, it is important for Web recommendation services to recognize which quality factors should be considered in the current context in order to enhance user satisfaction. We show that it is necessary to classify Web contexts based on the information quality factors users consider in their minds when they choose websites or Web pages. The results of user interviews showed that there are four quality factors: credibility, recency, popularity, and relevance. From survey data analysis, we recognized that user tasks can be clustered into two groups based on the quality factors that users consider. Finally, the results of log data analysis and the performance of our proposed algorithm showed that it is possible to enable Web services to infer the context group. This result implies that context recognition is possible using the limited data that can be collected at the browser side.
Authors: Lee, Geehyuk; Choi, Jinhyuk; Moon, Junghoon
Date: 2010-12-03
Keywords: Web Recommendation Services, World Wide Web, context classification and inference, information quality factors
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 16 (2010), pp. 2232-2251

Content Recommendation in APOSDLE using the Associative Network
http://www.jucs.org/jucs_16_16/content_recommendation_in_aposdle
One of the success factors of Work Integrated Learning (WIL) is providing the appropriate content to users, suitable both for the topics they are currently working on and for their experience level in these topics. Our main contributions in this paper are (i) overcoming the problem of sparse content annotation by using a network-based recommendation approach called the Associative Network, which exploits the user context as input; (ii) using snippets not only for highlighting relevant parts of documents, but also as a basic concept enabling the WIL system to handle text-based and audiovisual content in the same way; and (iii) using the Web Tool for Ontology Evaluation (WTE) toolkit for finding the best default semantic similarity measure of the Associative Network for new domains. The approach presented is employed in the software platform APOSDLE, which is designed to enable knowledge workers to learn at work.
Authors: Stern, Hermann; Kraker, Peter; Scheir, Peter; Hofmair, Philip; Kaiser, Rene; Lindstaedt, Stefanie N.
Date: 2010-12-03
Keywords: associative networks, multimedia information systems, recommender systems, work integrated learning
JUCS - Journal of Universal Computer Science, Vol. 16, Issue 16 (2010), pp. 2214-2231

The 3A Personalized, Contextual and Relation-based Recommender System
http://www.jucs.org/jucs_16_16/the_3A_personalized_contextual
This paper discusses the 3A recommender system that targets CSCL (computer-supported collaborative learning) and CSCW (computer-supported collaborative work) environments. The proposed system models user interactions in a heterogeneous graph. Then, it applies a personalized, contextual, and multi-relational ranking algorithm to simultaneously rank actors, activity spaces, and assets. The results of an empirical evaluation carried out on an Epinions dataset indicate that the proposed recommendation approach exploiting the trust and authorship networks performs better than user-based collaborative filtering in terms of recall.Salzmann, Christophe; Gillet, Denis; Helou, Sandy El2010-12-03T09:04:43+01:00CSCLCSCWRecommender systemsalgorithmsdesignpageranktrustThe 3A Personalized, Contextual and Relation-based Recommender SystemSalzmann, ChristopheGillet, DenisHelou, Sandy ElCSCL,CSCW,Recommender systems,algorithms,design,pagerank,trustJUCS - Journal of Universal Computer Science Vol. 162010082179-2195Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Context Model based on Ontological Languages: a Proposal for Information Visualization
http://www.jucs.org/jucs_16_12/a_context_model_based
In the last few years, people have been increasingly demanding personalized information to carry out their daily activities. Information systems are needed to manage a representation of the user's situation, identify user needs and preferences, and implement information retrieval techniques that pull together data from diverse and heterogeneous sources. It is necessary to define and formalize context models for achieving these goals. In this paper, we present a formal context model based on advances in the Semantic Web. The model is composed of four independent but interrelated ontologies: users, devices, environment and services. Each of these ontologies describes general concepts and relationships involved in intelligent environments. The proposed design enables model specialization to particular domains and interoperability with external ontologies. Moreover, the model supports inference mechanisms to enhance automatic context generation and the proactive behavior of particular services. Finally, this paper shows a specific prototype that offers personalized and context-aware information to the user, aided by the context model.Fontecha, Jesús; Bravo, José; Hervás, Ramón2010-09-17T17:52:28+02:00Semantic Webambient intelligencecontext-awarenessinformation visualizationknowledge managementA Context Model based on Ontological Languages: a Proposal for Information VisualizationFontecha, JesúsBravo, JoséHervás, RamónSemantic Web,ambient intelligence,context-awareness,information visualization,knowledge managementJUCS - Journal of Universal Computer Science Vol. 162010061539-1555Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaKnowledge Authoring with ORE: Testing, Debugging and Validating Knowledge Rules in a Semantic Web Framework
http://www.jucs.org/jucs_16_9/knowledge_authoring_with_ore
Ontology rule editing, testing, debugging and validation are still handcrafted and painful tasks. Nowadays, there is a lack of tools that address these tasks in order to ease the work of the developer. This paper explains how we have come to a new tool, ORE (Ontology Rule Editor), which significantly eases these tasks. It rests on a Semantic Web framework together with reasoning engines, which operate with semantic representations. Its design remains loosely coupled from the framework and from the rule engines. Collaborative functionality has also been addressed in order to enable a real integration of rule authoring across different tools and/or users. A practical validation of the approach by instantiating our tool with the Jena and Pellet reasoning engines is presented here. In order to demonstrate its use, the tool is applied to the task of rule-based management in a ubiquitous computing scenario.Ortega, Andres Muñoz; Clemente, Felix J. Garcia; Pérez, Gregorio Martínez; Calero, Jose M. Alcaraz; Blaya, Juan A. Botía2010-06-15T12:42:23+02:00conflict managementknowledge authoringontology rule editorreasoning enginessemantic webKnowledge Authoring with ORE: Testing, Debugging and Validating Knowledge Rules in a Semantic Web FrameworkOrtega, Andres MuñozClemente, Felix J. GarciaPérez, Gregorio MartínezCalero, Jose M. AlcarazBlaya, Juan A. Botíaconflict management,knowledge authoring,ontology rule editor,reasoning engines,semantic webJUCS - Journal of Universal Computer Science Vol. 162010051234-1266Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaIntroducing Living Lab's Method as Knowledge Transfer from one Socio-Institutional Context to another: Evidence from Helsinki-Tallinn Cross-Border Region
http://www.jucs.org/jucs_16_8/introducing_living_labs_method
The present article aims to describe the Living Lab's method as a method innovation in institutional activities and the problems of taking this innovation into use. The possibilities of transferring the Living Lab's method from one country, Finland, to another, Estonia, as well as potential implementation fields and obstacles, are studied. Considerations on the process of utilising the Living Lab's method in Tallinn are given. Living Lab is a human-centric research and development approach in which new technologies are co-created, tested, and evaluated in the users' own private context. This method is coming into use in several countries, among which Finland is at the forefront, but is not yet in use in Tallinn, Estonia. The empirical part of the research is based on the analysis of fourteen structured interviews and discussions conducted with Tallinn and Helsinki city officials, representatives of technology enterprises, and experts in the fields that are internationally the most widespread Living Labs' testing grounds. The article concludes by discussing possibilities to use the Living Lab's method in enhancing Helsinki-Tallinn cross-border co-operation and thus metropolitan regional integration.Terk, Erik; Lepik, Katri-Liis; Krigul, Merle2011-07-20T09:20:51+02:00Helsinki-Tallinn EuregioLiving LabLiving Labs methodcross-border co-operationknowledge transfermethod innovationopen innovationIntroducing Living Lab's Method as Knowledge Transfer from one Socio-Institutional Context to another: Evidence from Helsinki-Tallinn Cross-Border RegionTerk, ErikLepik, Katri-LiisKrigul, MerleHelsinki-Tallinn Euregio,Living Lab,Living Labs method,cross-border co-operation,knowledge transfer,method innovation,open innovationJUCS - Journal of Universal Computer Science Vol. 162010041089-1101Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaLocating and Crawling eGovernment Services<br> A Light-weight Semantic Approach
http://www.jucs.org/jucs_16_8/locating_and_crawling_egovernment
The application of Web 2.0 tools and methodologies in the domain of eGovernment is not yet a fully exploited area due to the immaturity of the software support, and the lack of commitment from Public Administrations. This paper proposes a solution to locate a service which a citizen may be interested in. The solution uses particular features from this environment, such as microformats, metadata and dynamic procedures on the Web. The paper describes a semantic model for the domain, and tools to annotate, publish and crawl services in Public Administrations are discussed in depth. The paper details the entire software platform, and it presents conclusions and a number of proposals for future research efforts.Rifón, Luis Anido; Sabucedo, Luis Álvarez2011-07-20T09:20:48+02:00eGovernmentknowledge managementmetadataLocating and Crawling eGovernment Services<br> A Light-weight Semantic ApproachRifón, Luis AnidoSabucedo, Luis ÁlvarezeGovernment,knowledge management,metadataJUCS - Journal of Universal Computer Science Vol. 162010041117-1137Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Model for Capturing and Managing Software Engineering Knowledge and Experience
http://www.jucs.org/jucs_16_3/a_model_for_capturing
During software development projects, there is always a particular working "product" that is generated but rarely managed: the knowledge and experience that team members acquire. This knowledge and experience, if conveniently managed, can be reused in future software projects and be the basis for process improvement initiatives. In this paper we present a model for managing the knowledge and experience team members acquire during software development projects in a non-disruptive way, by integrating its management into daily project activities. The purpose of the model is to identify and capture this knowledge and experience in order to derive lessons learned and proposals for best practices that enable an organization to preserve them for future use, and support software process improvement activities. The main contribution of the model is that it enables an organization to consider knowledge and experience management activities as an integral part of its software projects, instead of being considered, as until now, a follow-up activity that is (infrequently) carried out after the end of the projects.Silva, Andrés; Matturro, Gerardo2011-04-18T12:32:04+02:00experience captureknowledge managementsoftware engineeringA Model for Capturing and Managing Software Engineering Knowledge and ExperienceSilva, AndrésMatturro, Gerardoexperience capture,knowledge management,software engineeringJUCS - Journal of Universal Computer Science Vol. 16201002479-505Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaMulti-criteria Group Decision Support with Linguistic Variables in Long-term Scenarios for Belgian Energy Policy
http://www.jucs.org/jucs_16_1/multi_criteria_group_decision
Real-world decisions are often made in the presence of multiple, conflicting, and incommensurate criteria. Decision making requires the perspectives of multiple individuals, as more decisions are made in groups now than ever before. This is particularly true when the decision environment becomes more complex, such as in the study of sustainability policies in the environmental and energy sectors. Group decision making produces judgments or solutions for decision problems based on the input and feedback of multiple individuals. In practice, multi-criteria decision and evaluation problems at tactical and strategic levels involve fuzziness in terms of linguistic variables vis-à-vis criteria, weights, and decision maker judgments. Relevant alternatives or scenarios are evaluated according to a number of desired criteria. In this paper, a fuzzy multi-criteria group decision software tool is developed to analyze long-term scenarios for Belgian energy policy.Ruan, Da; Laes, Erik; Meskens, Gaston; Zhang, Guangquan; Lu, Jie; Ma, Jun2011-07-20T09:12:16+02:00Fuzzy numbersenergy policyevaluation modelgroup decision supportlinguistic variablesmulti-criteria decision making (MCDM)Multi-criteria Group Decision Support with Linguistic Variables in Long-term Scenarios for Belgian Energy PolicyRuan, DaLaes, ErikMeskens, GastonZhang, GuangquanLu, JieMa, JunFuzzy numbers,energy policy,evaluation model,group decision support,linguistic variables,multi-criteria decision making (MCDM)JUCS - Journal of Universal Computer Science Vol. 16201001103-120Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaSome Views on Information Fusion and Logic Based Approaches in Decision Making under Uncertainty
http://www.jucs.org/jucs_16_1/some_views_on_information
Decision making under uncertainty is a key issue in information fusion and logic-based reasoning approaches. The aim of this paper is to highlight noteworthy theoretical and application issues in the area of decision making under uncertainty that have already been addressed, and to raise new open research questions related to these topics, pointing out promising and challenging research gaps that should be addressed in the near future in order to improve the resolution of decision making problems under uncertainty.Ruan, Da; Liu, Jun; Martínez, Luis; Xu, Yang2011-07-20T09:12:09+02:00computing with wordsdecision makinginformation fusionlogicsuncertain information processinguncertaintySome Views on Information Fusion and Logic Based Approaches in Decision Making under UncertaintyRuan, DaLiu, JunMartínez, LuisXu, Yangcomputing with words,decision making,information fusion,logics,uncertain information processing,uncertaintyJUCS - Journal of Universal Computer Science Vol. 162010013-19Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaExtending SD-Core for Ontology-based Data Integration
http://www.jucs.org/jucs_15_17/extending_sd_core_for
This paper describes the main elements of a functional platform for building Semantic Web Applications, the Semantic Directory (additional information can be found at http://khaos.uma.es/SD-Core). A Semantic Directory provides a resource directory, in which web resources are registered and their semantics published. Using Semantic Directories, we provide a solution for publishing the semantics of resources and making them interoperable with other applications in the same or different domains. The main idea behind this proposal is to help developers build Semantic Web applications by providing them with functional components for this task. This paper also describes some applications that have been developed using an SD-Core extension: SD-Data. Then, we describe the instantiation of the Khaos Ontology-based Mediation Framework (KOMF) in the Systems Biology domain. This framework provides an architecture that enables research on the development of ontology-based mediators. Thus, an ontology-based mediator has been produced that has demonstrated its utility in two applications developed in the Amine System Project, using SD-Data for registering semantics: AMMO-Prot and SBMM Assistant. The use of ontologies is limited in the current version of the mediator, but its development as a framework enables the implementation of improvements based on the use of reasoning.Navas-Delgado, Ismael; Aldana-Montes, José F.2010-11-25T12:34:10+01:00knowledge managementmetadataExtending SD-Core for Ontology-based Data IntegrationNavas-Delgado, IsmaelAldana-Montes, José F.knowledge management,metadataJUCS - Journal of Universal Computer Science Vol. 152009113201-3230Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaInteractive Learning of Independent Experts' Criteria for Rescue Simulations
http://www.jucs.org/jucs_15_13/interactive_learning_of_independent
Efficient response to natural disasters has an increasingly important role in limiting the toll on human life and property. The work we have undertaken seeks to improve existing models by building a Decision Support System (DSS) for resource allocation and planning for natural disaster emergencies in urban areas. A multi-agent environment is used to simulate disaster response activities, taking into account geospatial, temporal and rescue organizational information. The problem we address is the acquisition of the situated expert knowledge that is used to organize rescue missions. We propose an approach based on participatory design and interactive learning which incrementally elicits experts' preferences by online analysis of their interventions in rescue simulations. Additive utility functions, assuming mutual preferential independence between decision criteria, are used as the preference model for the elicitation process. The proposed learning algorithm refines the coefficients of the utility function by solving incremental linear programs. To test our algorithm, we ran rescue scenarios of ambulances saving victims. This experiment makes use of geographical data for the Ba-Dinh district of Hanoi and damage parameters from well-regarded local statistical and geographical resources.
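The elicitation loop described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a simple perceptron-style weight update stands in for the incremental linear programming step, and the criteria and scores in the example are invented.

```python
# Illustrative sketch of eliciting an additive utility function from
# expert preferences. The paper solves incremental linear programs; here
# a perceptron-style update stands in for the LP step (an assumption).

def utility(weights, option):
    """Additive utility: weighted sum of criterion scores, which assumes
    mutual preferential independence between the criteria."""
    return sum(w * x for w, x in zip(weights, option))

def elicit(preferences, n_criteria, lr=0.1, epochs=100):
    """Refine criterion weights until every observed expert preference
    (preferred, rejected) is respected by the utility function."""
    weights = [1.0 / n_criteria] * n_criteria  # start from equal weights
    for _ in range(epochs):
        violated = False
        for preferred, rejected in preferences:
            if utility(weights, preferred) <= utility(weights, rejected):
                violated = True
                # move weight toward criteria on which 'preferred' wins
                for i in range(n_criteria):
                    weights[i] += lr * (preferred[i] - rejected[i])
        # renormalise so the weights stay comparable across epochs
        total = sum(abs(w) for w in weights) or 1.0
        weights = [w / total for w in weights]
        if not violated:
            break
    return weights

# Hypothetical example: ambulance options scored on
# (proximity, victim severity, road access).
prefs = [((0.9, 0.8, 0.3), (0.4, 0.5, 0.9)),   # expert picked the closer unit
         ((0.6, 0.9, 0.2), (0.7, 0.3, 0.8))]   # severity outweighed access
w = elicit(prefs, 3)  # learned weights now rank each preferred option higher
```

The LP formulation in the paper would instead add one linear constraint per observed preference and re-solve; the update loop above merely mimics that incremental refinement in a self-contained way.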
The preliminary results show that our approach is promising for solving this problem.Boucher, Alain; Drogoul, Alexis; Zucker, Jean-Daniel; Chu, Thanh-Quang2010-11-24T16:38:54+01:00decision support systemdisaster responseinteractive learningmulti-agent simulationmulti-criteria decision makingparticipatory designpreference elicitationutility functionInteractive Learning of Independent Experts' Criteria for Rescue SimulationsBoucher, AlainDrogoul, AlexisZucker, Jean-DanielChu, Thanh-Quangdecision support system,disaster response,interactive learning,multi-agent simulation,multi-criteria decision making,participatory design,preference elicitation,utility functionJUCS - Journal of Universal Computer Science Vol. 152009072701-2725Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Joint Web Resource Recommendation Method based on Category Tree and Associate Graph
http://www.jucs.org/jucs_15_12/a_joint_web_resource
Personalized recommendation is valuable in various web applications, such as e-commerce, music sharing, and news releasing. Most existing recommendation methods require users to register and provide their private information before gaining access to any services, whereas a majority of users are reluctant to do so, which greatly limits the range of application of such recommendation methods. In non-register environments, the only available information is the content or attributes of resources and the click-through chains of user sessions, so that many recommendation methods fail to work effectively due to rating sparsity [Adomavicius and Tuzhilin, 2005] and the illegibility of user identity; collaborative filtering [Goldberg et al. 1992] is an example of this case. In this paper we propose a joint recommendation method combining two approaches, namely the domain category tree and the associate graph, to make full use of all available information. Further, an associate graph propagation method is designed to improve the traditional associate filtering method by integrating additional graph-based considerations. Experimental results show that our method outperforms either the single category tree approach or the single associate graph approach, and it can provide acceptable recommendation services even in the non-register environment.Yang, Laurence T.; Weng, Linkai; Zhong, Ming; Tian, Pengwei; Zhang, Yaoxue; Zhou, Yuezhi2010-11-24T15:42:47+01:00category treegraph propagationpersonalized recommendationpersonalized serviceA Joint Web Resource Recommendation Method based on Category Tree and Associate GraphYang, Laurence T.Weng, LinkaiZhong, MingTian, PengweiZhang, YaoxueZhou, Yuezhicategory tree,graph propagation,personalized recommendation,personalized serviceJUCS - Journal of Universal Computer Science Vol. 152009062387-2408Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA QoS Perspective on Exception Diagnosis in Service-Oriented Computing
http://www.jucs.org/jucs_15_9/a_qos_perspective_on
Unlike in object-oriented applications, it is difficult to address exceptions in multi-agent systems due to their highly dynamic and autonomous nature. Our previous work has examined exception diagnosis in multi-agent systems based on a heuristic classification method. In this paper, we extend our work by applying an exception diagnosis method to web services (WS) and proposing a unified framework for dealing with exceptions occurring in multi-agent systems as well as in web services. Importantly, we relate the impact of exceptions to Quality of Service (QoS), as exceptions normally degrade the quality of service offered to a service consumer. Our framework consists of a QoS monitoring agent that monitors all interactions taking place between service consumers and service providers. The monitoring agent encodes knowledge of exceptions and their causes, and applies the heuristic classification method for reasoning in order to diagnose the underlying causes of monitored exceptions. In this paper, we categorize exceptions in multi-agent systems into three levels: Environment Level Exception, Knowledge Level Exception and Social Level Exception. This paper also discusses different classes of exceptions in web services based on the web service stack.James, Anne; Iqbal, Kashif; Shah, Nazaraf; Iqbal, Rahat2010-11-24T12:19:20+01:00QoSexception diagnosesheuristic classficationmulti-agent systemsA QoS Perspective on Exception Diagnosis in Service-Oriented ComputingJames, AnneIqbal, KashifShah, NazarafIqbal, RahatQoS,exception diagnoses,heuristic classfication,multi-agent systemsJUCS - Journal of Universal Computer Science Vol. 152009051871-1885Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaEstimating Software Projects Based On Negotiation
http://www.jucs.org/jucs_15_9/estimating_software_projects_based
The Software Engineering community has been trying to get fast and accurate software estimations for many years. Most of the proposed methods require historical information and/or experts' judgment. Because of that, the current methods are not suitable for novice developers or persons who do not know the company's development capability. To help address this need, this paper proposes a software estimation method named CEBON (Collaborative Estimation Based On Negotiation). The method is applicable to small/medium-size projects (1-6 months). It focuses on supporting estimation of Web information systems in scenarios where historical data is not available. The CEBON method has been used to estimate eight real projects. The obtained results were compared with the actual project executions, which were carried out by novice developers in Chile. The comparison indicates the method is able to deliver quite accurate results. In addition, a survey applied to the involved developers shows they feel comfortable using the estimation method. The article also describes a collaborative software application supporting the CEBON process and a preliminary evaluation of both the estimation method and the supporting tool.Poblete, Fabián; Pino, José A.; Ochoa, Sergio F.2010-11-24T12:19:02+01:00Novice Software Developerscollaborative workgroupware systemsoftware estimationEstimating Software Projects Based On NegotiationPoblete, FabiánPino, José A.Ochoa, Sergio F.Novice Software Developers,collaborative work,groupware system,software estimationJUCS - Journal of Universal Computer Science Vol. 152009051812-1832Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Tree Similarity Measuring Method and its Application to Ontology Comparison
http://www.jucs.org/jucs_15_9/a_tree_similarity_measuring
Classical tree similarity measuring approaches focus on the structural and geometrical characteristics of the trees. The degree of similarity between two trees is measured by the minimal cost of editing sequences that convert one tree into the other from a purely structural perspective. In contrast, when trees are created to represent concept structures in a knowledge context (known as concept trees), the tree nodes represent concepts, not merely abstract elements occupying specific positions. Therefore, measuring the similarity of such trees requires a more comprehensive method, one which takes into consideration the position and significance of the concepts (represented by the tree nodes) and the conceptual similarity among concepts from different trees. This paper extends the classical tree similarity measuring method to introduce tree transformation operations which transform one concept tree into another. We propose definitions for the costs of the operations based on the position and importance of each concept within a concept structure, and on the similarity between individual concepts from different concept structures in a knowledge context. The method for computing the transformation costs and measuring similarity between different trees is presented. We apply the proposed method to ontology comparison, where different ontologies for the same domain are represented as trees and their similarity is required to be measured.
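As a toy illustration of the idea (not the authors' algorithm), the sketch below scores two concept trees by comparing what the nodes mean rather than structure alone: each node contributes according to a per-concept importance weight and a conceptual similarity between labels. The label-overlap similarity measure and all names are invented placeholders for an ontology-backed measure.

```python
# Illustrative sketch only: concept trees as (label, [children]) tuples.
# Similarity depends on concept similarity and importance weights,
# not merely on the geometric structure of the trees.

def concept_similarity(a, b):
    """Toy conceptual similarity: 1.0 for identical labels, 0.5 when the
    labels share a word, 0.0 otherwise. A real system would back this
    with an ontology or a lexical resource."""
    if a == b:
        return 1.0
    if set(a.lower().split()) & set(b.lower().split()):
        return 0.5
    return 0.0

def tree_similarity(t1, t2, importance=None):
    """Similarity in [0, 1] between two concept trees. Root concepts are
    compared directly; each child of t1 is greedily matched to the most
    similar remaining subtree of t2, weighted by the child concept's
    importance (default 1.0). Unmatched children of t1 count as
    completely dissimilar."""
    importance = importance or {}
    (label1, children1), (label2, _) = t1, t2
    weight = importance.get(label1, 1.0)
    num = weight * concept_similarity(label1, label2)
    den = weight
    remaining = list(t2[1])
    for child in children1:
        child_weight = importance.get(child[0], 1.0)
        den += child_weight
        if remaining:
            best = max(remaining,
                       key=lambda cand: tree_similarity(child, cand, importance))
            num += child_weight * tree_similarity(child, best, importance)
            remaining.remove(best)
    return num / den

# Two small domain ontologies represented as concept trees.
onto_a = ("vehicle", [("car", []), ("bicycle", [])])
onto_b = ("vehicle", [("sports car", []), ("bicycle", [])])
sim = tree_similarity(onto_a, onto_b)   # high, but below 1.0
```

A full edit-distance treatment would additionally price insert, delete and relabel operations per concept; the greedy matching above only conveys how concept-level costs change the score relative to a purely structural comparison.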
We show that the proposed method can facilitate the initiation of ontology integration and ontology trust evaluation.Wang, Chun; Ghenniwa, Hamada H.; Shen, Weiming; Xue, Yunjiao2010-11-24T12:18:45+01:00ontology comparisonontology integrationsimilarity measuringtransformation costtransformation operationtreeA Tree Similarity Measuring Method and its Application to Ontology ComparisonWang, ChunGhenniwa, Hamada H.Shen, WeimingXue, Yunjiaoontology comparison,ontology integration,similarity measuring,transformation cost,transformation operation,treeJUCS - Journal of Universal Computer Science Vol. 152009051766-1781Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaFostering Knowledge Flow and Community Engagement in the Development of Interactive Entertainment
http://www.jucs.org/jucs_15_8/fostering_knowledge_flow_and
Due to increasing professionalization, specialization, and globalization in the development of interactive entertainment, new demands for comprehensive knowledge management support emerge. This article aims at sensitizing and systematizing the needs and potentials for continuous knowledge flow and community engagement in this application area. It starts with an analysis of typical development activities and involved parties that could benefit from continuous knowledge management support. Then, a general framework architecture and implementation examples are presented that provide different levels of knowledge management support for interactive entertainment development.Niesenhaus, Jörg; Ziegler, Jürgen; Heim, Philipp; Lohmann, Steffen2010-11-18T13:59:57+01:00community engagementcontinuous integrationinteractive entertainmentknowledge managementopen innovationFostering Knowledge Flow and Community Engagement in the Development of Interactive EntertainmentNiesenhaus, JörgZiegler, JürgenHeim, PhilippLohmann, Steffencommunity engagement,continuous integration,interactive entertainment,knowledge management,open innovationJUCS - Journal of Universal Computer Science Vol. 152009041722-1734Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaModelling Weblog Success: Case of Korea
http://www.jucs.org/jucs_15_8/modelling_weblog_success_case
Weblogs have received attention as new, personalized media. Yet, only a few of them attract public attention and become successful. This study explores weblog success factors in three categories: content creation, content features and content diffusion. During the process of content creation in weblogs, we argue that weblog service providers (WSPs) support bloggers' resource collection. We also presume that the volume or the quality of posts in weblogs could matter in gaining visitors' attention when weblog content (i.e., posts) is generated. During the process of content diffusion, we assume that use of blogging technologies (BTs) such as trackback or RSS would enhance content-sharing activities between weblogs. Based on data from a sample of Korean individual weblogs, our analysis indicates that weblog success (in terms of the number of unique visitors per week) is related to the WSP's support level for content creation as well as to content features.Hwang, Junseok; Kim, Seunghyun; Lee, Youngjin2010-11-18T13:59:28+01:00BTs Contribution for Content DiffusionWSPs Support for Content CreationWeblog successcontent featuresModelling Weblog Success: Case of KoreaHwang, JunseokKim, SeunghyunLee, YoungjinBTs Contribution for Content Diffusion,WSPs Support for Content Creation,Weblog success,content featuresJUCS - Journal of Universal Computer Science Vol. 152009041589-1606Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaSemantic Spiral Timelines Used as Support for e-Learning
http://www.jucs.org/jucs_15_7/semantic_spiral_timelines_used
This article presents Semantic Spiral Timelines (SST) as an interactive visual tool aimed at the exploration and analysis of additional academic information stored in current e-learning platforms. Despite the development of contents specifically for these platforms, and in spite of the various features they provide, knowledge of the actual use made by individual participants is emerging as an unavoidable necessity, so as to ensure proper operation and effective use of e-learning platforms. SST supports the discovery of temporal patterns by incorporating an innovative highly interactive visual representation, which can be explored at various levels. This tool makes it possible to assess, at first glance, the use of the e-learning platform during the development of courses; one can also perceive how it is used by class participants. Then, through different interaction mechanisms, it is possible for students and professors to uncover specific details about courses, which would otherwise remain hidden.Aguilar, Diego Alonso Gómez; García-Peñalvo, Francisco J.; Therón, Roberto2011-01-21T11:59:03+01:00Moodlee-learningspiraltimelinevisualizationSemantic Spiral Timelines Used as Support for e-LearningAguilar, Diego Alonso GómezGarcía-Peñalvo, Francisco J.Therón, RobertoMoodle,e-learning,spiral,timeline,visualizationJUCS - Journal of Universal Computer Science Vol. 152009041526-1545Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaComplexity Analysis of Ontology Integration Methodologies:a Comparative Study
http://www.jucs.org/jucs_15_4/complexity_analysis_of_ontology
Most previous research on ontology integration has focused on similarity measurements between ontological entities, e.g., lexicons, instances, schemas and taxonomies, resulting in the high computational cost of considering all possible pairs between two given ontologies. In this paper, we propose a novel approach to reducing computational complexity in ontology integration. Thereby, we address the importance and types of concepts, for priority matching and direct matching between concepts, respectively. Identity-based similarity is computed to avoid comparing all properties related to each concept while matching between concepts. The problem of conflict in ontology integration is initially explored at the instance level and concept level. This is useful to avoid many cases of mismatching.Jo, Geun Sik; Jung, Jason J.; Nguyen, Ngoc Thanh; Duong, Trong Hai2010-11-15T16:00:07+01:00conflictidentity-based similarityimportance conceptsontology integrationComplexity Analysis of Ontology Integration Methodologies:a Comparative StudyJo, Geun SikJung, Jason J.Nguyen, Ngoc ThanhDuong, Trong Haiconflict,identity-based similarity,importance concepts,ontology integrationJUCS - Journal of Universal Computer Science Vol. 15200902877-897Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaKnowledge Sharing and Collaborative Learning in Second Life: A Classification of Virtual 3D Group Interaction Scripts
http://www.jucs.org/jucs_15_3/knowledge_sharing_and_collaborative
In this paper we propose a classification and systematic description structure based on the pattern paradigm for interaction scripts in Second Life that aim at facilitating on the one side knowledge sharing and knowledge integration in groups, and on the other side knowledge creation in formal and informal ways. We present 13 examples of interaction patterns, a description structure to formalize them, and classify them into four classes according to their design effort and added value. Based on this classification we distinguish among sophisticated 3D collaboration patterns, seamless patterns, decorative patterns, and pseudo patterns.Schmeil, Andreas; Eppler, Martin J.2010-11-15T15:15:47+01:00MUVESecond Lifecollaboration patternsknowledge sharingonline collaborationvirtual worldsKnowledge Sharing and Collaborative Learning in Second Life: A Classification of Virtual 3D Group Interaction ScriptsSchmeil, AndreasEppler, Martin J.MUVE,Second Life,collaboration patterns,knowledge sharing,online collaboration,virtual worldsJUCS - Journal of Universal Computer Science Vol. 15200902665-677Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Generic Architecture for the Conversion of Document Collections into Semantically Annotated Digital Archives
http://www.jucs.org/jucs_14_18/a_generic_architecture_for
Mass digitization of document collections, with further processing and semantic annotation, is an increasingly common activity among libraries and archives for preservation, browsing and navigation, and search purposes. In this paper we propose a software architecture for the process of converting high volumes of document collections into semantically annotated digital libraries. The proposed architecture recognizes two sources of knowledge in the conversion pipeline, namely document images and humans. The Image Analysis module and the Correction and Validation module cover the initial conversion stages. In the former, information is automatically extracted from document images. The latter involves human intervention at a technical level to define workflows and to validate the image processing results. The second stage, represented by the Knowledge Capture modules, requires information specific to the particular knowledge domain and generally calls for expert practitioners. These two principal conversion stages are coupled with a Knowledge Management module, which provides the means to organise the extracted and acquired knowledge. In terms of data propagation, the architecture follows a bottom-up process, starting with document image units, called <I>terms</I>, and progressively building meaningful <I>concepts</I> and their <I>relationships</I>. 
In the second part of the paper we describe a real scenario with historical document archives implemented according to the proposed architecture.Karatzas, Dimosthenis; Sánchez, Gemma; Mas, Joan; Lladós, Josep2009-08-04T22:40:54+02:00digital librariesdocument image analysis and understandingdocument miningsoftware architecturesA Generic Architecture for the Conversion of Document Collections into Semantically Annotated Digital ArchivesKaratzas, DimosthenisSánchez, GemmaMas, JoanLladós, Josepdigital libraries,document image analysis and understanding,document mining,software architecturesJUCS - Journal of Universal Computer Science Vol. 142008102912-2935Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaA Spiral Model for Adding Automatic, Adaptive Authoring to Adaptive Hypermedia
http://www.jucs.org/jucs_14_17/a_spiral_model_for
A large body of research exists on the design and implementation of adaptive systems. However, few studies target the complex task of authoring in such systems, or their evaluation. In order to tackle these problems, we have looked into the causes of this complexity. Manual annotation has proven to be a bottleneck for the authoring of adaptive hypermedia, and one solution is the reuse of automatically generated metadata. In our previous work we proposed the integration of the generic Adaptive Hypermedia authoring environment MOT (My Online Teacher) and a semantic desktop environment indexed by Beagle++. A prototype, Sesame2MOT Enricher v1, was built based upon this integration approach and evaluated. After the initial evaluations, a web-based prototype (the web-based Sesame2MOT Enricher v2 application) was built and integrated into MOT v2, in accordance with the findings of the first set of evaluations. This new prototype underwent another evaluation. This paper therefore synthesizes the approach in general, the initial prototype with its first evaluations, the improved prototype, and the first results from the most recent evaluation round, following the next implementation cycle of the spiral model [Boehm, 88].Cristea, Alexandra; Hendrix, Maurice2009-08-04T21:33:52+02:00Adaptive Educational HypermediaCAFRDFauthoringevaluationmetadatasemantic desktopsemi-automatic addingA Spiral Model for Adding Automatic, Adaptive Authoring to Adaptive HypermediaCristea, AlexandraHendrix, MauriceAdaptive Educational Hypermedia,CAF,RDF,authoring,evaluation,metadata,semantic desktop,semi-automatic addingJUCS - Journal of Universal Computer Science Vol. 142008092799-2818Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaMarket Microstructure Patterns Powering Trading and Surveillance Agents
http://www.jucs.org/jucs_14_14/market_microstructure_patterns_powering
Market surveillance plays an important role in constructing market models. From a data analysis perspective, it is valuable both for smart trading, in designing legal and profitable trading strategies, and for smart regulation, in maintaining market integrity, transparency and fairness. Existing trading pattern analysis focuses only on interday data, which discloses merely explicit, high-level market dynamics. Meanwhile, the existing market surveillance systems available from large exchanges face crucial challenges of diversified, dynamic, distributed and cyber-based misuse, mis-disclosure and misdealing of information, announcements and orders within one market or across multiple markets. Therefore, there is a crucial need to develop innovative and workable methods for smart trading and surveillance. To deal with such issues, in this paper we propose the concept of microstructure pattern analysis and corresponding approaches. Microstructure pattern analysis studies the trading behaviour patterns of traders in market microstructure data by utilizing market microstructure knowledge. The identified market microstructure patterns are then used to power market trading and surveillance agents that automatically detect or design profitable and legal trading strategies, or monitor abnormal market dynamics and trader behaviour. Such agent-driven market trading/surveillance systems can greatly enhance the analytical, discovery and decision-support capability of market trading/surveillance beyond that of current predefined rule/alert-based systems.Cao, Longbing; Ou, Yuming2009-10-23T13:29:18+02:00agentsdata miningmarket microstructure patternmarket surveillanceMarket Microstructure Patterns Powering Trading and Surveillance AgentsCao, LongbingOu, Yumingagents,data mining,market microstructure pattern,market surveillanceJUCS - Journal of Universal Computer Science Vol. 
142008072288-2308Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaThe APS Framework For Incremental Learning of Software Agents
http://www.jucs.org/jucs_14_14/the_aps_framework_for
Adaptive behavior and learning are required of software agents in many application domains. At the same time, agents are often supposed to be resource-bounded systems, which do not consume much CPU time, memory or disk space. In an attempt to satisfy both requirements, we propose a novel framework, called APS (Analysis of Past States), which provides an agent with learning capabilities while saving system resources. The solution is based on incremental association rule mining and maintenance. The APS process runs periodically in a cycle, in which phases of the agent's normal performance intertwine with learning phases. During the former, an agent stores observations in a history. After a learning phase has been triggered, the history facts are analyzed to yield new association rules, which are added to the knowledge base by the maintenance algorithm. Then the old observations are removed from the history, so that in subsequent learning runs only recent facts are processed in the search for new association rules. Keeping the history small can save both processing time and disk space compared to batch learning approaches.Dudek, Damian2009-10-23T13:29:13+02:00incremental methodssoftware agentsstatistical learningweb browsing assistantThe APS Framework For Incremental Learning of Software AgentsDudek, Damianincremental methods,software agents,statistical learning,web browsing assistantJUCS - Journal of Universal Computer Science Vol. 142008072263-2287Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaIntelligence Metasynthesis and Knowledge Processing in Intelligent Systems
http://www.jucs.org/jucs_14_14/intelligence_metasynthesis_and_knowledge
Intelligence and knowledge play increasingly important roles in building complex intelligent systems, for instance, intrusion detection systems and operational analysis systems. Knowledge processing in complex intelligent systems faces new challenges from the increased number of applications and environments, such as the requirements of representing domain and human knowledge in intelligent systems, and of discovering actionable knowledge on a large scale in distributed web applications. In this paper, we discuss the main challenges of, and promising approaches to, intelligence metasynthesis and knowledge processing in open complex intelligent systems (OCIS). We believe (1) ubiquitous intelligence, including data intelligence, domain intelligence, human intelligence, network intelligence and social intelligence, is necessary for OCIS and needs to be meta-synthesized; and (2) knowledge processing should pay more attention to developing innovative and workable methodologies, techniques, tools and systems for representing, modelling, transforming, discovering and servicing the uncertain, large-scale, deep, distributed, domain-oriented, human-involved, and actionable knowledge highly expected in constructing open complex intelligent systems. To this end, the meta-synthesis of ubiquitous intelligence is an appropriate way to design complex intelligent systems. To support intelligence meta-synthesis, m-interaction can serve as the working mechanism to form m-spaces as problem-solving systems. In building such m-spaces, advances in knowledge processing are necessary.Cao, Longbing; Nguyen, Ngoc Thanh2009-10-23T13:29:08+02:00intelligence meta-synthesisknowledge processingm-interactionm-spaceopen complex intelligent systemsIntelligence Metasynthesis and Knowledge Processing in Intelligent SystemsCao, LongbingNguyen, Ngoc Thanhintelligence meta-synthesis,knowledge processing,m-interaction,m-space,open complex intelligent systemsJUCS - Journal of Universal Computer Science Vol. 
142008072256-2262Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaUsing Taxonomies to Support the Macro Design Process for the Production of Web Based Trainings
http://www.jucs.org/jucs_14_10/using_taxonomies_to_support
Recently, Web Based Training (WBT) has begun to be widely used as a new way of teaching. Unfortunately, this mode of teaching imposes new requirements and constraints. It has made the creation of learning material a complex and demanding task for instructors, as it takes much time and demands a multitude of skills, in particular technical skills that must be developed and continuously updated. Hence, we propose a collaborative authoring methodology, based on the division of labour, as a way to produce WBTs in which the production processes are clearly separated to match the existing and needed skills of the persons involved in WBT production. This paper presents an efficient method to guide instructors during the first phase of WBT production, called Macro Design, using Rhetorical Structure Theory (RST) and the taxonomies we developed.Berraissoul, Abdelghafour; Aqqal, Abdelhak; Rensing, Christoph; Elkamoun, Najib; Steinmetz, Ralf2009-04-28T09:28:11+02:00E-learningcollaborative authoringinstructional design support toolknowledge modellingproduction of web based trainingsemantic designtaxonomiesUsing Taxonomies to Support the Macro Design Process for the Production of Web Based TrainingsBerraissoul, AbdelghafourAqqal, AbdelhakRensing, ChristophElkamoun, NajibSteinmetz, RalfE-learning,collaborative authoring,instructional design support tool,knowledge modelling,production of web based training,semantic design,taxonomiesJUCS - Journal of Universal Computer Science Vol. 142008051763-1774Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaFeature Selection for the Classification of Large Document Collections
http://www.jucs.org/jucs_14_10/feature_selection_for_the
Feature selection methods are often applied in the context of document classification. They are particularly important for processing large data sets that may contain millions of documents, typically represented by a large number, possibly tens of thousands, of features. Processing large data sets thus raises the issue of computational resources, and we often have to find the right trade-off between the size of the feature set and the amount of training data that we can take into account. Furthermore, depending on the selected classification technique, different feature selection methods require different optimization approaches, raising the issue of compatibility between the two. We demonstrate an effective classifier training and feature selection method that is suitable for large data collections. We explore feature selection based on the weights obtained from linear classifiers themselves, trained on a subset of the training documents. While most feature weighting schemes score individual features independently of each other, the weights of linear classifiers incorporate the relative importance of a feature for classification as observed for a given subset of documents, thus taking feature dependence into account. We investigate how these feature selection methods combine with various learning algorithms. Our experiments include a comparative analysis of three learning algorithms: Naïve Bayes, Perceptron, and Support Vector Machines (SVM), in combination with three feature weighting methods: Odds ratio, Information Gain, and weights from the linear SVM and Perceptron. We show that by regulating the size of the feature space (and thus the sparsity of the resulting vector representation of the documents) using an effective feature scoring method, such as linear SVM weights, we need only half or even a quarter of the computer memory to train a classifier of almost the same quality as the one obtained from the complete data set. 
Feature selection using weights from the linear SVMs yields better classification performance than other feature weighting methods when combined with the three learning algorithms. The results support the conjecture that it is the sophistication of the feature weighting method, rather than its compatibility with the learning algorithm, that improves classification performance.Mladenić, Dunja; Brank, Janez; Grobelnik, Marko; Milić-Frayling, Nataša2009-04-28T09:27:35+02:00data preprocessingfeature selectioninformation retrievalknowledge representationmachine learningsupport vector machinestext classificationFeature Selection for the Classification of Large Document CollectionsMladenić, DunjaBrank, JanezGrobelnik, MarkoMilić-Frayling, Natašadata preprocessing,feature selection,information retrieval,knowledge representation,machine learning,support vector machines,text classificationJUCS - Journal of Universal Computer Science Vol. 142008051562-1596Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaCompensation Models for Interactive Advertising
http://www.jucs.org/jucs_14_4/compensation_models_for_interactive
Due to a shift in marketing focus from mass to micro markets, the importance of one-to-one communication in advertising has increased. Interactive media provide possible answers to this shift. However, the lack of standards in payment models for interactive media is a hurdle to their further development. This paper reviews interactive advertising payment models. Furthermore, it adapts the popular FCB grid as a tool for both advertisers and publishers or broadcasters to examine effective interactive payment models.Dickinger, Astrid; Zorn, Steffen2008-09-17T14:39:58+02:00classificationcompensation modelinteractive advertisingCompensation Models for Interactive AdvertisingDickinger, AstridZorn, Steffenclassification,compensation model,interactive advertisingJUCS - Journal of Universal Computer Science Vol. 14200802557-565Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaSuccess Factors in a Weblog Community
http://www.jucs.org/jucs_14_9/success_factors_in_a
User generated content published via weblogs (also known as blogs) has gained importance in recent years, and the number of weblogs available globally keeps increasing. However, a large fraction of these show low publishing activity and are rarely read. This paper presents a quantitative analysis of success factors in a community of over 15,000 weblogs, hosted by a local Austrian newspaper. We looked at publishing activity by content type, community activity and writing style. The interconnectedness of the community was also analyzed.Safran, Christian; Kappe, Frank2008-09-17T14:39:36+02:00blogginge-communitiesuser generated contentweblog analysisSuccess Factors in a Weblog CommunitySafran, ChristianKappe, Frankblogging,e-communities,user generated content,weblog analysisJUCS - Journal of Universal Computer Science Vol. 14200802546-556Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaAnalyzing Wiki-based Networks to Improve Knowledge Processes in Organizations
http://www.jucs.org/jucs_14_4/analyzing_wiki_based_networks
Wikis are increasingly used to support existing corporate knowledge exchange processes and are an appropriate software solution for supporting knowledge processes. However, it has not yet been proven whether wikis are an adequate knowledge management tool. This paper presents a new approach to analyzing existing knowledge exchange processes in wikis based on network analysis. Because of their dynamic characteristics, four perspectives on wiki networks are introduced to investigate the interrelationships between people, information, and events in a wiki information space. Social Network Analysis (SNA) is applied as the analysis method to uncover existing structures and temporal changes. A scenario data set from an analysis conducted with a corporate wiki is presented. The outcomes of this analysis were utilized to improve the existing corporate knowledge processes.Baumgraß, Anne; Meuthrath, Benedikt; Müller, Claudia2008-09-17T14:39:29+02:00collaboration networkknowledge worknetwork analysissocial softwarewikiAnalyzing Wiki-based Networks to Improve Knowledge Processes in OrganizationsBaumgraß, AnneMeuthrath, BenediktMüller, Claudiacollaboration network,knowledge work,network analysis,social software,wikiJUCS - Journal of Universal Computer Science Vol. 14200802526-545Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, AustriaOptimizing Assignment of Knowledge Workers to Office Space Using Knowledge Management Criteria<BR>The Flexible Office Case
http://www.jucs.org/jucs_14_4/optimizing_assignment_of_knowledge
Even though knowledge management has been around for more than a decade, concrete instruments that can be systematically deployed are still rare. This paper presents an optimization solution targeted at the flexible management of office space, considering knowledge management criteria in order to enhance knowledge work productivity. The paper presents the Flexible Office conceptual model and optimization solution, and discusses the theoretical foundation, assumptions and reasoning behind them. A corresponding prototype was field-tested, successfully introduced, evaluated with the help of a series of interviews with users, and improved according to their requirements. The paper also reflects on the organizational impact and lessons learned from the field test and practical experience.Sandow, Alexander; Bayer, Florian; Nitz, Hendrik; Krüger, Michael; Maier, Ronald; Thalmann, Stefan2008-09-17T14:39:23+02:00flexibility, hypertext organizationknowledge managementknowledge workoffice spaceoptimizationOptimizing Assignment of Knowledge Workers to Office Space Using Knowledge Management Criteria<BR>The Flexible Office CaseSandow, AlexanderBayer, FlorianNitz, HendrikKrüger, MichaelMaier, RonaldThalmann, Stefanflexibility, hypertext organization,knowledge management,knowledge work,office space,optimization,JUCS - Journal of Universal Computer Science Vol. 14200802508-525Verlag der Technischen Universität GrazUniversiti Malaysia SarawakGraz, Austria
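The Flexible Office abstract above describes assigning knowledge workers to office space under knowledge management criteria. As a minimal sketch of that class of problem (not the authors' actual model — the worker names, room capacities, and scoring function below are hypothetical), one can place workers with pairwise collaboration scores into rooms of fixed capacity so that the total intra-room collaboration is maximized:

```python
from itertools import permutations


def assign_workers(workers, rooms, affinity):
    """Exhaustively assign workers to room slots, maximizing the total
    collaboration score of worker pairs that end up sharing a room.

    workers  -- list of worker names
    rooms    -- dict mapping room name -> capacity (number of desks)
    affinity -- dict mapping frozenset({a, b}) -> collaboration score
    """
    # Expand rooms into one slot per desk, e.g. {"R1": 2} -> ["R1", "R1"].
    slots = [room for room, cap in rooms.items() for _ in range(cap)]
    best, best_score = None, float("-inf")
    # set() removes duplicate orderings caused by identical room slots.
    for perm in set(permutations(slots, len(workers))):
        assignment = dict(zip(workers, perm))
        # Sum affinity over every worker pair placed in the same room.
        score = sum(
            affinity.get(frozenset((a, b)), 0)
            for i, a in enumerate(workers)
            for b in workers[i + 1:]
            if assignment[a] == assignment[b]
        )
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score
```

With four workers, two two-desk rooms, and scores favouring the pairs ana-ben and cem-dia, the search co-locates each pair. The brute-force search is only illustrative; a production tool like the one the paper describes would need a heuristic or an integer-programming solver, since the search space grows factorially with the number of desks.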