Journal Articles (Peer-Reviewed)
Weiß, Andreas and Dimka Karastoyanova (2016): Enabling coupled multi-scale, multi-field experiments through choreographies of data-driven scientific simulations, Computing, 98(4): 439-467.
Abstract: Current systems for enacting scientific experiments, and simulation workflows in particular, do not support multi-scale and multi-field problems if they are not coupled on the level of the mathematical model. To address this deficiency, we present an approach enabling the trial-and-error modeling and execution of multi-scale and/or multi-field simulations in a top-down and bottom-up manner which is based on the notion of choreographies. The approach defines techniques for composing data-intensive, scientific workflows in more complex simulations in a generic, domain-independent way and thus provides means for collaborative and integrated data management using the workflow/process-based paradigm. We contribute a life cycle definition of such simulations and present in detail concepts and techniques that support all life cycle phases. Furthermore, requirements on a respective software system and choreography language supporting multi-scale and/or multi-field simulations are identified, and an architecture and its realization are presented.
Reiter, Michael, Uwe Breitenbücher, Oliver Kopp and Dimka Karastoyanova (2014): Quality of Data Driven Simulation Workflows, Journal of Systems Integration, 5(1): 3-29.
Abstract: Simulations are long-running computations driven by non-trivial data dependencies. Workflow technology helps to automate these simulations and enables the use of Quality of Data (QoD) frameworks to determine the goodness of simulation data. However, existing frameworks are specific to scientific domains, individual applications, or proprietary workflow engine extensions. In this paper, we propose a generic approach to use QoD as a uniform means to steer complex interdisciplinary simulations implemented as workflows. The approach enables scientists to specify abstract QoD requirements that are considered to steer the workflow towards ensuring a precise final result. To realize these Quality of Data-driven workflows, we present a middleware architecture and a WS-Policy-based language to describe QoD requirements and capabilities. To prove technical feasibility, we present a prototype for controlling and steering simulation workflows and a real-world simulation scenario.
Strauch, Steve, Vasilios Andrikopoulos, Dimka Karastoyanova, Frank Leymann, Nikolay Nachev and Albrecht Stäbler (2014): Migrating enterprise applications to the cloud: methodology and evaluation, International Journal of Big Data Intelligence, 1(3): 127-140.
Abstract: Migrating existing on-premise applications to the cloud is a complex and multi-dimensional task and may require adapting the applications themselves significantly. For example, when considering the migration of the database layer of an application, which provides data persistence and manipulation capabilities, it is necessary to address aspects like differences in the granularity of interactions and data confidentiality, and to enable the interaction of the application with remote data sources. In this work, we present a methodology for application migration to the cloud that takes these aspects into account. In addition, we also introduce a tool for decision support, application refactoring and data migration that assists application developers in realising this methodology. We evaluate the proposed methodology and enabling tool using a case study in collaboration with an IT enterprise.
Karastoyanova, Dimka (2013): Springer computing special issue: adaptation in service-oriented and Cloud Computing, Computing, 95(6): 449-451.
Sonntag, Mirko and Dimka Karastoyanova (2013): Model-as-you-go: An Approach for an Advanced Infrastructure for Scientific Workflows, Journal of Grid Computing, 11(3): 553-583.
Abstract: Most existing scientific workflow systems rely on proprietary concepts and workflow languages. We are convinced that the conventional workflow technology that has been established in business scenarios for years is also beneficial for scientists and scientific applications. We are therefore working on a scientific workflow system based on business workflow concepts and technologies. The system offers advanced flexibility features to scientists in order to support them in creating workflows in an explorative manner and to increase the robustness of scientific applications. We named the approach Model-as-you-go because it enables users to model and execute workflows in an iterative process that eventually results in a complete scientific workflow. In this paper, we present the main ingredients of Model-as-you-go, show how existing workflow concepts have to be extended in order to cover the requirements of scientists, discuss the application of the concepts to BPEL, and introduce the current prototype of the system.
Sonntag, Mirko and Dimka Karastoyanova (2012): Ad hoc Iteration and Re-execution of Activities in Workflows, International Journal on Advances in Software, 5(1&2).
Abstract: The repeated execution of workflow logic is usually modeled with loop constructs in the workflow model. But there are cases where it is not known at design time that a subset of activities has to be rerun during workflow execution. For instance in e-Science, scientists might have to spontaneously repeat a part of an experiment modeled and executed as a workflow in order to gain meaningful results. In general, a manually triggered ad hoc rerun enables users to react to unforeseen problems and thus improves workflow robustness. It allows natural scientists to steer the convergence of scientific results and business analysts to control their analysis results, and it facilitates the explorative workflow development required in scientific workflows. In this paper, two operations are formalized for a manually enforced repeated enactment of activities: the iteration and the re-execution. The focus thereby lies on an arbitrary, user-selected activity as a starting point of the rerun. Important topics discussed in this context are the handling of data, the rerun of activities in activity sequences as well as in parallel and alternative branches, implications on the communication with partners/services, and the application of the concept to workflow languages with hierarchically nested activities. Since the operations are defined on the meta-model level, they can be implemented for different workflow languages and engines.
Retter, Ralph, Christoph Fehling, Dimka Karastoyanova, Frank Leymann and Daniel Schleicher (2012): Combining horizontal and vertical composition of services, Service Oriented Computing and Applications, 6(2): 117-130.
Abstract: Service composition is a well-established field of research in the service community. Services are commonly regarded as black boxes with well-defined interfaces that can be recursively aggregated into new services. The black-box nature of services not only includes the service implementation but also implies the use of middleware and hardware to run the services. Thus, service composition techniques are typically limited to choosing between a set of available services. In this paper, we keep the black-box nature and the principle of information hiding of services, but in addition we break up services vertically. By introducing vertical service composition, we allow services to be provisioned on demand using the middleware and runtime environment that specifically meets user-required quality of service. Therefore, a service is set up individually for service requestors instead of providing them with a pre-determined list of available services to choose from. We introduce the concept of vertical service composition and present an extension to an enterprise service bus that implements this concept by combining concepts from provisioning with those of (dynamic) service binding.
Wetzstein, Branimir, Asli Zengin, Raman Kazhamiakin, Annapaola Marconi, Marco Pistore, Dimka Karastoyanova and Frank Leymann (2012): Preventing KPI Violations in Business Processes based on Decision Tree Learning and Proactive Runtime Adaptation, Journal of Systems Integration, 3(1): 3-18.
Abstract: The performance of business processes is measured and monitored in terms of Key Performance Indicators (KPIs). If the monitoring results show that the KPI targets are violated, the underlying reasons have to be identified and the process should be adapted accordingly to address the violations. In this paper we propose an integrated monitoring, prediction and adaptation approach for preventing KPI violations of business process instances. KPIs are monitored continuously while the process is executed. Additionally, based on KPI measurements of historical process instances we use decision tree learning to construct classification models which are then used to predict the KPI value of an instance while it is still running. If a KPI violation is predicted, we identify adaptation requirements and adaptation strategies in order to prevent the violation.
Kopp, Oliver, Matthias Wieland, Tobias Unger, Steve Strauch, Mirko Sonntag, David Schumm, Michael Reiter, Frank Leymann, Dimka Karastoyanova, Katharina Görlach and Rania Khalaf (2011): A Classification of BPEL Extensions, Journal of Systems Integration, 2(4): 3-28.
Abstract: The Business Process Execution Language (BPEL) has emerged as the de facto standard for business process implementation. This language is designed to be extensible so that additional valuable features can be included in a standardized manner. There are a number of BPEL extensions available. They are, however, neither classified nor evaluated with respect to their compliance with the BPEL standard. This article fills this gap by providing a framework for classifying BPEL extensions, a classification of existing extensions, and a guideline for designing BPEL extensions.
Schumm, David, Dimka Karastoyanova, Oliver Kopp, Frank Leymann, Mirko Sonntag and Steve Strauch (2011): Process Fragment Libraries for Easier and Faster Development of Process-based Applications, Journal of Systems Integration, 2(1): 39-55.
Abstract: The term “process fragment” has recently been gaining momentum in business process management research. We understand a process fragment as a connected and reusable process structure, which has relaxed completeness and consistency criteria compared to executable processes. We claim that process fragments allow for an easier and faster development of process-based applications. As evidence for this claim we present a process fragment concept and show a sample collection of concrete, real-world process fragments. We present advanced application scenarios for using such fragments in the development of process-based applications. Process fragments are typically managed in a repository, forming a process fragment library. On top of a process fragment library from previous work, we discuss the potential impact of using process fragment libraries in cross-enterprise collaboration and application integration.
Karastoyanova, Dimka, Tammo van Lessen and Ralph Mietzner (2010): BPM außerhalb der Verwaltung: Ein Blick über den Tellerrand, Business Technology - Prozesse, 3: 54-58.
Abstract: When it comes to Business Process Management (BPM), we inevitably think of documentation and tool support for administrative processes such as loan approval, travel booking, and insurance application processes. However, methods and techniques of business process management are also increasingly being applied in other domains, such as manufacturing, systems management, software development, research, and simulation. In this article, we present use cases and BPM solutions for these domains and highlight the advantages that arise from an end-to-end BPM approach.
Danylevych, Olha, Dimka Karastoyanova and Frank Leymann (2010): Service Networks Modelling: An SOA & BPM Standpoint, Journal of Universal Computer Science, 16(13): 1668-1693.
Abstract: Services are quintessential in the current economic landscape. Enterprises and businesses at large rely on the consumption and provision of services to ensure their operations and to realize their business offers. That is, nowadays businesses all over the world are interconnected with each other by complex service-centric webs called service networks. The ubiquity and pervasiveness of service networks call for models, methods, mechanisms and tools to understand them and harness their potential. This paper investigates the modelling of service networks with a focus on business relationships and exchanges of software services among the involved parties. The contribution of this work is threefold. Firstly, we provide an overview of what service networks modelling can offer in combination with Business Process Management (BPM) and Service Oriented Architecture (SOA) technologies. Secondly, we propose a formalism to model service networks that depicts them as aggregations of participants - e.g. enterprises or individuals - that offer, request, consume and provide services to each other. With the goal of providing a foundation for the alignment between service network and business process models, we finally map the constructs of our service networks modelling formalism to those of the Business Process Modelling Notation (BPMN).
Sonntag, Mirko, Katharina Görlach, Dimka Karastoyanova, Frank Leymann and Michael Reiter (2010): Process space-based scientific workflow enactment, International Journal of Business Process Integration and Management, 5(1): 32-44.
Abstract: In the scientific field, workflow technology is often employed to conduct computer simulations or computer supported experiments. The underlying IT infrastructure typically comprises resources distributed among different institutes and organisations all over the world. Traditionally, workflows are executed on a single machine while the invoked software is accessed remotely. This approach imposes many drawbacks which are outlined in this paper. To address these weaknesses, we investigate the application of decentralised workflow enactment in the scientific domain. In this context, we explore the employment of process spaces, a middleware for the decentralised execution of workflows. Furthermore, we propose the combination of process spaces with the concept of data references to increase the overall performance of distributed simulations based on workflows. The considerations are discussed with the help of a scenario that calculates and visualises the ink diffusion in water over a period of time.
Nitzsche, Jörg, Tammo van Lessen, Dimka Karastoyanova, Frank Leymann, Pilar Herrero and Gonzalo Méndez (2009): Composing services on the grid using BPEL4SWS, Multiagent and Grid Systems, 5(3): 287-309.
Abstract: Service composition on the Grid is a challenging task, as documented in existing research work. Even though there are initial attempts to use the Business Process Execution Language (BPEL) to compose services on the Grid, there is still a significant lack of the flexibility and reusability needed in scientific applications. In this paper we present BPEL for Semantic Web Services (BPEL4SWS) – a language that facilitates the orchestration of Grid Services exposed as traditional Web Services or Semantic Web Services using a process-based approach. It is based on the idea of WSDL-less BPEL and incorporates semantic descriptions of process activity implementations, which increases the flexibility of business workflows as well as scientific workflows. Following an approach that uses a set of composable standards and specifications, BPEL4SWS is independent of any Semantic Web Service framework and therefore can also utilize any kind of Semantic Grid services. The advantages of BPEL4SWS are: (1) compliance with standards, (2) independence of service technologies, (3) applicability for both business applications and scientific workflows that use Grid services, and (4) improved flexibility of processes.
Nitzsche, Jörg, Tammo van Lessen, Dimka Karastoyanova and Frank Leymann (2007): WSMO/X in the context of business processes: improvement recommendations, International Journal of Web Information Systems, 3(1/2): 89-103.
Abstract: Purpose – Service-oriented architecture (SOA) is an architecture paradigm targeting integration of applications within and across enterprise boundaries. It has gathered research and industry acceptance and has given an enormous impetus to business process management technology. Web service (WS) technology is one implementation of the SOA paradigm. It enables seamless integration of new and legacy applications through a stack of standardized composable specifications. WS orchestration is facilitated by the Business Process Execution Language, which provides a recursive service composition model. While the programming model the WS technology provides is very flexible, a major deficiency is the need to discover services implementing a particular abstract interface, whereas functional similarities of services are disregarded. Semantic Web Service technologies, like the Web Service Modelling Ontology (WSMO) and the Web Ontology Language for Services, have been developed with the purpose of eliminating these deficiencies by enabling service discovery based on functional and non-functional properties. The paper aims to focus on these issues. Design/methodology/approach – This paper presents a list of requirements that business processes impose on SOA applications. It analyzes the support that WSMO/Web Service Model eXecution environment (WSMX) provides to address these requirements and compares it with the support enabled by the WS specification stack. Findings – The paper identifies major flaws in the WSMO model and its reference implementation with respect to business process support. Originality/value – The paper recommends possible solutions for eliminating the lack of needed features on behalf of WSMO/WSMX. It presents in detail how to enable asynchronous stateful communication among WSMO WS and partner-based WS discovery by extending the WSMO model. Additionally, it extends the API of the reference implementation to facilitate the execution of services communicating asynchronously.
Conference Papers (Peer-Reviewed)
Weiß, Andreas, Vasilios Andrikopoulos, Santiago Gómez Sáez, Michael Hahn and Dimka Karastoyanova (2016): ChorSystem: A Message-Based System for the Life Cycle Management of Choreographies, in: Debruyne, Christophe, Hervé Panetto, Robert Meersman, Tharam Dillon, eva Kühn, Declan O’Sullivan and Claudio Agostino Ardagna (ed.): On the Move to Meaningful Internet Systems: OTM 2016 Conferences: Confederated International Conferences: CoopIS, C&TC, and ODBASE 2016, Rhodes, Greece, October 24-28, 2016, Proceedings, Springer International Publishing: Cham, 503-521.
Andrikopoulos, Vasilios, Marina Bitsaki, Santiago Gómez Sáez, Michael Hahn, Dimka Karastoyanova, Giorgos Koutras and Alina Psycharaki (2016): Evaluating the Effect of Utility-based Decision Making in Collective Adaptive Systems, in: Cardoso, Jorge, Donald Ferguson, Víctor Méndez Muñoz and Markus Helfert (ed.): 6th International Conference on Cloud Computing and Services Science (CLOSER 2016), 39-47.
Abstract: Utility, defined as the perceived satisfaction with a service, provides the ideal means for decision making on the level of individual entities and collectives participating in a large-scale dynamic system. Previous works have already introduced the concept into the area of collective adaptive systems, and have discussed what is the necessary infrastructure to support the realization of the involved theoretical concepts into actual decision making. In this work we focus on two aspects. First, we provide a concrete utility model for a case study that is part of a larger research project. Second, we incorporate this model into our implementation of the proposed architecture. More importantly, we design and execute an experiment that aims to empirically evaluate the use of utility for decision making by comparing it against simpler decision making mechanisms.
Gómez Sáez, Santiago, Vasilios Andrikopoulos, Michael Hahn, Dimka Karastoyanova, Frank Leymann, Marigianna Skouradaki and Karolina Vukojevic-Haupt (2016): Performance and Cost Trade-Off in IaaS Environments: A Scientific Workflow Simulation Environment Case Study, in: Helfert, Markus, Víctor Méndez Muñoz and Donald Ferguson (ed.): Cloud Computing and Services Science: 5th International Conference, CLOSER 2015, Lisbon, Portugal, May 20-22, 2015, Revised Selected Papers, Springer International Publishing: Cham, 153-170.
Abstract: The adoption of workflow technology in the eScience domain has contributed to the increase of simulation-based applications orchestrating different services in a flexible and error-free manner. The nature of the provisioning and execution of such simulations makes them potential candidates to be migrated and executed in Cloud environments. The wide availability of Infrastructure-as-a-Service (IaaS) Cloud offerings and service providers has contributed to a rise in the number of supporters of partially or completely migrating and running their scientific experiments in the Cloud. Focusing on Scientific Workflow-based Simulation Environments (SWfSE) applications and their corresponding underlying runtime support, in this research work we aim at empirically analyzing and evaluating the impact of migrating such an environment to multiple IaaS infrastructures. More specifically, we focus on the investigation of multiple Cloud providers and their corresponding optimized and non-optimized IaaS offerings with respect to their offered performance, and its impact on the incurred monetary costs when migrating and executing a SWfSE. The experiments show significant performance improvements and reduced monetary costs when executing the simulation environment in off-premise Clouds.
Weiß, Andreas, Vasilios Andrikopoulos, Michael Hahn and Dimka Karastoyanova (2015): Enabling the Extraction and Insertion of Reusable Choreography Fragments, in: Miller, John A. (ed.): 2015 IEEE International Conference on Web Services (ICWS), 686-694.
Abstract: Reuse of service orchestrations or service compositions is extensively studied in the literature of process modeling. Sub-processes, process templates, process variants, and process reference models are employed as reusable elements for these purposes. The concept of process fragments has been previously introduced in order to capture parts of a process model and store them for later reuse. However, similar efforts on facilitating the reuse of processes that cross the boundaries of organizations expressed as service choreographies are not available yet. In this paper, we introduce the concept of choreography fragments as reusable elements for service choreography modeling. Choreography fragments can be extracted from choreography models, adapted, stored, and later inserted into new models. Based on a formal model for choreography fragments, we define methods and algorithms for the extraction and insertion of fragments from and into service choreographies. We then discuss an experimental and proof-of-concept evaluation of our proposal.
Gómez Sáez, Santiago, Vasilios Andrikopoulos, Michael Hahn, Dimka Karastoyanova and Frank Leymann (2015): Performance and Cost Evaluation for the Migration of a Scientific Workflow Infrastructure to the Cloud, in: Helfert, Markus, Donald F. Ferguson and Víctor Méndez Muñoz (ed.): Proceedings of the 5th International Conference on Cloud Computing and Services Science (CLOSER 2015), 352-361.
Abstract: The success of the Cloud computing paradigm, together with the increase of Cloud providers and optimized Infrastructure-as-a-Service (IaaS) offerings, has contributed to a rise in the number of research and industry communities that are strong supporters of migrating and running their applications in the Cloud. Focusing on eScience simulation-based applications, scientific workflows have been widely adopted in recent years, and scientific workflow management systems have become strong candidates for being migrated to the Cloud. In this research work we aim at empirically evaluating multiple Cloud providers and their corresponding optimized and non-optimized IaaS offerings with respect to their offered performance, and its impact on the incurred monetary costs when migrating and executing a workflow-based simulation environment. The experiments show significant performance improvements and reduced monetary costs when executing the simulation environment in off-premise Clouds.
Vukojevic-Haupt, Karolina, Santiago Gómez Sáez, Florian Haupt, Dimka Karastoyanova and Frank Leymann (2015): A Middleware-Centric Optimization Approach for the Automated Provisioning of Services in the Cloud, in: Proceedings of the 7th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2015), 174-179.
Abstract: The on-demand provisioning of services, a cloud-based extension of traditional service-oriented architectures, improves the handling of services in usage scenarios where they are used only rarely and irregularly. However, the standard process of service provisioning and de-provisioning still shows some shortcomings when applied in real-world settings. In this paper, we introduce a middleware-centric optimization approach that can be integrated into the existing on-demand provisioning middleware in a loosely coupled manner, changing the standard provisioning and de-provisioning behavior in order to improve it with respect to cost and time. We define and implement a set of optimization strategies, evaluate them based on a real-world use case from the eScience domain, and provide qualitative as well as quantitative decision support for effectively selecting and parametrizing a suitable strategy. Altogether, our work improves the applicability of the existing on-demand provisioning approach and system in real-world settings, including guidance for selecting a suitable optimization strategy for specific use cases.
Weiß, Andreas, Vasilios Andrikopoulos, Michael Hahn and Dimka Karastoyanova (2015): Fostering Reuse in Choreography Modeling Through Choreography Fragments, in: Proceedings of the 17th International Conference on Enterprise Information Systems - Volume 2, 28-36.
Abstract: The concept of reuse in process models is extensively studied in the literature. Sub-processes, process templates, process variants, and process reference models are employed as reusable elements for process modeling. Additionally, the notion of process fragments has been introduced to capture parts of a process model and store them for later reuse. In contrast, concepts for reuse of processes that cross the boundaries of organizations, i.e., choreographies, have not yet been studied in the appropriate level of detail. In this paper, we introduce the concept of choreography fragments as reusable elements for choreography modeling. Choreography fragments can be extracted from choreography models, adapted, stored, and inserted into new models. We provide a formal model for choreography fragments and identify a set of patterns constituting frequently occurring meaningful choreography fragments.
Weiß, Andreas, Vasilios Andrikopoulos, Michael Hahn and Dimka Karastoyanova (2015): Rewinding and Repeating Scientific Choreographies, in: Debruyne, Christophe, Hervé Panetto, Robert Meersman, Tharam Dillon, Georg Weichhart, Yuan An and Claudio Agostino Ardagna (ed.): Proceedings of the OTM 2015 Conferences: Confederated International Conferences: CoopIS, ODBASE, and C&TC 2015, 337-347.
Abstract: Scientists who use the workflow paradigm for the enactment of scientific experiments need support for trial-and-error modeling, as well as flexibility mechanisms that enable the ad hoc repetition of workflow logic for the convergence of results or for error handling. Towards this goal, in this paper we introduce facilities to partially or completely repeat running choreographies on demand. Choreographies are interesting for the scientific workflow community because so-called multi-scale/field (multi-*) experiments can be modeled and enacted as choreographies of scientific workflows. A prerequisite for choreography repetition is the rewinding of the involved participant instances to a previous state. For this purpose, we define a formal model representing choreography models and their instances as well as a concept to repeat choreography logic. Furthermore, we provide an algorithm for determining the rewinding points in each involved participant instance.
Weiß, Andreas and Dimka Karastoyanova (2014): A Life Cycle for Coupled Multi-scale, Multi-field Experiments Realized through Choreographies, in: Proceedings of the 2014 IEEE 18th International Enterprise Distributed Object Computing Conference, 234-241.
Abstract: Current systems for enacting scientific experiments, and in particular simulation workflows, do not support multi-scale and multi-field problems if they are not coupled on the level of the mathematical model. We present a life cycle that utilizes the notion of choreographies to enable the trial-and-error modeling and execution of multi-scale and/or multi-field simulations. The life cycle exhibits two views reflecting the characteristics of modeling and execution in a top-down and bottom-up manner. It defines techniques for composing data-intensive, scientific workflows in more complex simulations in a generic, domain-independent way, and thus provides scientists with means for collaborative and integrated data management based on the workflow paradigm.
Hahn, Michael, Santiago Gómez Sáez, Vasilios Andrikopoulos, Dimka Karastoyanova and Frank Leymann (2014): SCE^MT: A Multi-tenant Service Composition Engine, in: Kim, Jong-Chang (ed.): 2014 IEEE 7th International Conference on Service-Oriented Computing and Applications, 89-96.
Abstract: The support of multi-tenancy is an essential requirement for leveraging the full capacity of Cloud computing. Multi-tenancy enables service providers to maximize the utilization of their infrastructure and to reduce the servicing costs per customer, thus indirectly benefiting also the customers. In addition, it allows both providers and consumers to reap the advantages of Cloud-based applications configurable for the needs of different tenants. Nowadays, new applications or services are typically compositions of multiple existing services. Service Composition Engines (SCEs) provide the required functionality to enable the definition and execution of such compositions. Multi-tenancy on the level of SCEs allows for both process model, as well as underlying infrastructure sharing. Towards the goal of enabling multi-tenancy of SCEs, in this paper, we investigate the requirements and define a general architecture for the realization of a multi-tenant SCE solution. This architecture is prototypically realized based on an open-source SCE implementation and integrated into an existing multi-tenant aware Enterprise Service Bus (ESB). The performance evaluation of our prototype shows promising results in terms of the degradation introduced due to processing and communication overhead.
Andrikopoulos, Vasilios, Santiago Gómez Sáez, Dimka Karastoyanova and Andreas Weiß (2014): Collaborative, Dynamic & Complex Systems - Modeling, Provision & Execution, in: Helfert, Markus, Frédéric Desprez, Donald Ferguson and Víctor Méndez Muñoz (ed.): Proceedings of the 4th International Conference on Cloud Computing and Services Science (CLOSER 14), 276-286.
Abstract: Service orientation has significantly facilitated the development of complex distributed systems spanning multiple organizations. However, different application areas approach such systems in domain-specific ways, focusing only on particular aspects relevant for their application types. As a result, we observe a very fragmented landscape of service-oriented systems, which does not enable collaboration across organizations. To address this concern, in this work we introduce the notion of Collaborative, Dynamic and Complex (CDC) systems and position them with respect to existing technologies. In addition, we present how CDC systems are modeled and the steps to provision and execute them. Furthermore, we contribute an architecture and prototypical implementation, which we evaluate by means of a case study in a Cloud-enabled context-aware pervasive application.
Vukojevic‐Haupt, Karolina, Florian Haupt, Dimka Karastoyanova and Frank Leymann (2014): Replicability of Dynamically Provisioned Scientific Experiments, in: Kim, Jong-Chang (ed.): 2014 IEEE 7th International Conference on Service-Oriented Computing and Applications, 119-124.
Abstract: The ability to repeat an experiment, known as replicability, is a basic concept of scientific research and also an important aspect in the field of eScience. The principles of Service Oriented Computing (SOC) and Cloud Computing, both based on high runtime dynamicity, are more and more adopted in the eScience domain. Simulation experiments exploiting these principles introduce significant challenges with respect to replicability. Current research activities mainly focus on how to exploit SOC and Cloud for eScience, while the aspect of replicability for such experiments is still an open issue. In this paper we define a general method to identify points of dynamicity in simulation experiments and to handle them in order to enable replicability. We systematically examine different types of service binding strategies, the main source of dynamicity, and derive a method and corresponding architecture to handle this dynamicity with respect to replicability. Our work enables scientists to perform simulation experiments that exploit the dynamicity and flexibility of SOC and Cloud Computing but still are repeatable.
Haupt, Florian, Dimka Karastoyanova, Frank Leymann and Benjamin Schroth (2014): A model driven approach for REST compliant services, in: Roure, David, Bhavani Thuraisingham and Jia Zhang (ed.): Proceedings of the 21st IEEE International Conference on Web Services (ICWS 2014), 129-136.
Abstract: The design of applications that comply with the REST architectural style requires observing a given set of architectural constraints. Following these constraints and therefore designing REST-compliant applications is a non-trivial task that is often not fulfilled properly. There exist several approaches for the modeling and formal description of REST applications, but most of them pay no attention to how they can support or even enforce REST compliance. In this paper we propose a model-driven approach for modeling REST services. We introduce a multi-layered model which enables (partially) enforcing REST compliance by separating different concerns through separate models. We contribute a multi-layered meta-model for REST applications, discuss the connection to REST compliance and show an implementation of our approach based on the proposed meta-model and method. As a result, our approach provides a holistic method for the design and realization of REST applications exhibiting the desired level of compliance with the constraints of the REST architectural style.
Haupt, Florian, Markus Fischer, Dimka Karastoyanova, Frank Leymann and Karolina Vukojevic‐Haupt (2014): Service Composition for REST, in: Proceedings of the 2014 IEEE 18th International Enterprise Distributed Object Computing Conference, 110-119.
Abstract: One of the key strengths of service-oriented architectures, the concept of service composition to reuse and combine existing services in order to achieve new and superior functionality, promises similar advantages when applied to resource-oriented architectures. The challenge in this context is how to realize service composition in compliance with the constraints defined by the REST architectural style, and how to realize it in a way that can be integrated with and benefit from existing service composition solutions. Existing approaches to REST service composition are mostly bound to the HTTP protocol and often lack a systematic methodology and a mature, standards-based realization approach. In our work, we follow a comprehensible methodology by deriving the key requirements for REST service composition directly from the REST constraints and then mapping these requirements to a standards-compliant extension of the BPEL composition language. We performed a general requirements analysis for REST service composition, defined a meta-model for a corresponding BPEL extension, realized this extension prototypically and validated it based on a real-world use case from the eScience domain. Our work provides a general methodology to enable REST service composition as well as a realization approach that enables the combined composition of WSDL and REST services in a mature and robust way.
Andrikopoulos, Vasilios, Alexander Darsow, Dimka Karastoyanova and Frank Leymann (2014): CloudDSF ‐ The Cloud Decision Support Framework for Application Migration, in: Villari, Massimo, Wolf Zimmermann and Kung-Kiu Lau (ed.): Service-Oriented and Cloud Computing: Proceedings of the Third European Conference, ESOCC 2014, 1-16.
Andrikopoulos, Vasilios, Marina Bitsaki, Santiago Goméz Sáez, Dimka Karastoyanova, Christos Nikolaou and Alina Psycharaki (2014): Utility-based Decision Making in Collective Adaptive Systems, in: Helfert, Markus, Frédéric Desprez, Donald Ferguson and Víctor Méndez Muñoz (ed.): Proceedings of the 4th International Conference on Cloud Computing and Services Science (CLOSER 14), 308-314.
Abstract: Large-scale systems comprising multiple heterogeneous entities are directly influenced by the interactions of their participating entities. Such entities, both physical and virtual, attempt to satisfy their objectives by dynamically collaborating with each other, thus forming collective adaptive systems. These systems are subject to the dynamicity of the entities’ objectives and to changes in the environment. In this work we focus on the latter, i.e. on providing the means for entities in such systems to model, monitor and evaluate the utility they perceive from participating in the system. This allows them to make informed decisions about their interactions with other entities in the system. For this purpose we propose a utility-based approach for decision making, as well as an architecture that supports this approach.
Andrikopoulos, Vasilios, Marina Bitsaki, Antonio Bucchiarone and Santiago Gómez Sáez (2014): A Game Theoretic Approach for Managing Multi‐Modal Urban Mobility Systems, in: Proceedings of the 5th International Conference on Applied Human Factors and Ergonomics (AHFE 2014).
Abstract: Collective adaptive systems provide secure and robust collaboration between heterogeneous entities such as humans and computer systems. Such entities have potentially conflicting goals that they attempt to satisfy by interacting with each other. Understanding and analyzing their behavior and evolution requires modeling their technical, social and economic aspects. In this paper, we develop a new design principle to describe an integrated and multi-modal urban mobility system and model the interactions of the various entities by means of game-theoretic techniques.
Vukojevic‐Haupt, Karolina, Florian Haupt, Dimka Karastoyanova and Frank Leymann (2014): Service Selection for On‐demand Provisioned Services, in: Proceedings of the 2014 IEEE 18th International Enterprise Distributed Object Computing Conference, 120-127.
Abstract: Service selection is an important concept in service-oriented architectures that enables the dynamic binding of services based on functional and non-functional requirements. The introduction of the concept of on-demand provisioned services significantly changes the nature of services, and as a consequence the traditional service selection process no longer fits. Existing approaches for service selection rely on the always-on semantics of services, an assumption that is not valid for on-demand provisioned services. We tackle this problem by adapting the traditional service selection process and by defining an additional step covering the changes introduced by the concept of on-demand provisioning. Our solution comprises an extended architecture for on-demand provisioning, a metamodel for a service registry, and a detailed definition and discussion of the adapted and extended service selection process. The work presented in this paper allows keeping the advantages of dynamic service binding at runtime and combining them with the advantages of Cloud computing exploited through the concept of on-demand provisioning.
Weiß, Andreas, Santiago Gómez Sáez, Michael Hahn and Dimka Karastoyanova (2014): Approach and Refinement Strategies for Flexible Choreography Enactment, in: Meersman, Robert, Hervé Panetto, Tharam Dillon, Michele Missikoff, Lin Liu, Oscar Pastor, Alfredo Cuzzocrea and Timos Sellis (ed.): Proceedings of the OTM 2014 Conferences: Confederated International Conferences CoopIS and ODBASE 2014, 93-111.
Abstract: Collaborative, Dynamic & Complex (CDC) systems such as adaptive pervasive systems, eScience applications, and complex business systems inherently require modeling and run time flexibility. Since domain problems in CDC systems are expressed as service choreographies and enacted by service orchestrations, we propose an approach introducing placeholder modeling constructs usable both on the level of choreographies and orchestrations, and a classification of strategies for their refinement to executable workflows. These abstract modeling constructs allow deferring the modeling decisions to later points in the life cycle of choreographies. This supports run time scenarios such as incorporating new participants into a choreography after its enactment has started or enhancing the process logic of some of the participants. We provide a prototypical implementation of the approach and evaluate it by means of a case study.
Hahn, Michael, Santiago Gómez Sáez, Vasilios Andrikopoulos, Dimka Karastoyanova and Frank Leymann (2014): Development and Evaluation of a Multi-tenant Service Middleware PaaS Solution, in: Proceedings of the 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, 278-287.
Abstract: In many modern systems, applications or services are realized as compositions of multiple existing services that can be enacted by Service Composition Engines (SCEs), which provide the required functionality to enable their definition and execution. SCEs typically use the capabilities of an Enterprise Service Bus (ESB) which serves as the messaging hub between the composed services aiming at ensuring their integration. Together, an SCE and ESB solution comprise the service middleware required for the definition and execution of service-based composite applications. Offering a service middleware solution as a service creates a PaaS offering that allows the service consumers to share the service middleware solution in a multi-tenant manner. However, multi-tenancy support for service middleware solutions remains an open issue. For this purpose, in this work we introduce a general architecture for the realization of a multi-tenant service middleware PaaS solution. This architecture is prototypically realized based on open-source, multi-tenant ESB and SCE solutions. The resulting service middleware provides configurability for service compositions, tenant-aware messaging, and tenant-based administration and management of the SCE and the ESB. We also present an empirical evaluation of the multi-tenant service middleware with focus on the SCE. The results of these experiments show a performance degradation within acceptable limits when scaling the number of tenants and tenant users.
Strauch, Steve, Vasilios Andrikopoulos, Thomas Bachmann, Dimka Karastoyanova, Stephan Passow and Karolina Vukojevic‐Haupt (2013): Decision Support for the Migration of the Application Database Layer to the Cloud, in: Liaquat, Saad (ed.): Proceedings of the IEEE 5th International Conference on Cloud Computing Technology and Science (CloudCom 2013), 639-646.
Abstract: Migrating an existing application to the Cloud is a complex and multi-dimensional problem requiring in many cases adapting the application in significant ways. Looking specifically into the database layer of the application, i.e. the aspect providing data persistence and manipulation capabilities, this involves dealing with differences in the granularity of interactions, refactoring of the application to cope with remote data sources, and addressing data confidentiality concerns. Toward this goal, in this work we present an application migration methodology which incorporates these aspects, and a decision support, application refactoring and data migration tool that assists application developers in realizing this methodology. For purposes of evaluating our proposal we present the results of a case study conducted in the context of an eScience project.
Vukojevic‐Haupt, Karolina, Dimka Karastoyanova and Frank Leymann (2013): On-demand Provisioning of Infrastructure, Middleware and Services for Simulation Workflows, in: Proceedings of the IEEE 6th International Conference on Service-Oriented Computing and Applications (SOCA 2013), 91-98.
Abstract: Service orientation is a mainstream paradigm in business applications and is gaining even greater acceptance in the very active field of eScience. In SOC, service binding strategies have been defined to specify the point in time at which a service can be discovered and selected for use, namely static binding, dynamic binding at deployment or at run time, and dynamic service deployment. The basic assumption in all these strategies is that the software stack and infrastructure necessary to execute the services are already available. While in service-based business applications this is typically a valid assumption, in scientific applications it is often not the case. Therefore, in this work we introduce a new binding strategy for services that we call on-demand provisioning, which entails provisioning the software stack necessary for the service and subsequently deploying the service itself dynamically. Towards this goal, we also contribute a middleware architecture that enables the provisioning of the software stack - functionality unavailable in conventional service middleware. We demonstrate the approach, the capabilities of the middleware and the current state of the implementation. For this purpose we use an example application from the field of eScience that comprises a scientific workflow management system for simulations.
Andrikopoulos, Vasilios, Santiago Gómez Sáez, Dimka Karastoyanova and Andreas Weiß (2013): Towards Collaborative, Dynamic and Complex Systems (Short Paper), in: Proceedings of the IEEE 6th International Conference on Service-Oriented Computing and Applications (SOCA 2013), 241-245.
Abstract: Service orientation has significantly facilitated the development of complex distributed systems spanning multiple organizations. However, different application areas approach such systems in domain-specific ways, focusing on particular aspects relevant only for their application types. As a result, we observe a very fragmented landscape of service-oriented systems, which does not enable collaboration across organizations. To address this concern, in this work we introduce the notion of Collaborative, Dynamic and Complex (CDC) systems and position them with respect to existing technologies. In addition, we present how CDC systems are modeled and the steps to provision and execute them. We also contribute an architecture enabling CDC Systems with full life cycle coverage that allows for leveraging service-oriented and Cloud-related technologies.
Wagner, Sebastian, Christoph Fehling, Dimka Karastoyanova and David Schumm (2012): State propagation-based monitoring of business transactions, in: Leu, Jenq-Shiou (ed.): Proceedings of the Fifth IEEE International Conference on Service-Oriented Computing and Applications (SOCA2012).
Abstract: Business analysts want to monitor the status of their business goals in a business-centric manner, without any knowledge of the actual implementation artifacts that contribute to achieving these goals. Business transactions are one means to represent business goals and requirements. A business transaction is typically implemented by a choreography of different parties contributing to the accomplishment of a common agreement. To meet the constantly changing requirements of all parties in a business transaction, choreographies often have to be adapted (e.g. based on the capabilities of different execution environments). The resulting challenge is that the execution state of a choreography executed at several locations has to be propagated to the business analyst to enable monitoring of the (adapted) business transaction. For this purpose we introduce a meta-model and state model of business transactions. Based on these models, we introduce a two-stage monitoring approach involving state propagation of the execution status of the adapted choreography to the original choreography and from there to the business transaction.
Nowak, Alexander, Dimka Karastoyanova, Frank Leymann, Andrej Rapoport and David Schumm (2012): Flexible Information Design for Business Process Visualizations, in: Leu, Jenq-Shiou (ed.): Proceedings of the Fifth IEEE International Conference on Service-Oriented Computing and Applications (SOCA2012).
Abstract: Profound understanding of business processes is a key success factor for Business Process Management (BPM). As more and more analytical information, such as runtime data from process execution or statistical data from business intelligence, becomes available, the problem of business process complexity becomes apparent. Process-relevant information needs to be provided as fast as possible while allowing for easy and fast interpretation and accommodating dynamic changes in stakeholders' demands. The static and use-case-specific creation or modification of process visualizations in current approaches and tools, however, is complex, time consuming, inflexible and thus costly. To address these shortcomings, we introduce a template-based approach that decouples the creation of visualization templates from concrete process visualizations. The binding of customization points of visualization templates to analytical process information is supported by a graphical editor that enables customization of visualizations in a fast and flexible manner. Moreover, due to the separation of concerns, our approach improves the usability of process visualizations because templates may be created by graphic experts independently from specific visualization demands. The feasibility of our concept is demonstrated by a prototypical implementation.
Kopp, Oliver, Uwe Breitenbücher, Michael Reiter and Dimka Karastoyanova (2012): Quality of data driven simulation workflows, in: Proceedings of the 8th IEEE International Conference on eScience (eScience 2012), 1-8.
Abstract: Simulations are characterized by long-running calculations and complex data handling tasks accompanied by non-trivial data dependencies. Workflow technology helps to automate and steer such simulations. Quality of Data frameworks are used to determine the goodness of simulation data, e.g., they analyze the accuracy of input data with regard to its usability within numerical solvers. In this paper, we present generic approaches that use evaluated Quality of Data to steer simulation workflows. This allows for ensuring that predefined requirements such as a precise final result or a short execution time will be met even after the execution of a simulation workflow has started. We discuss mechanisms for steering a simulation on all relevant levels - workflow, service, and algorithm - and define a unifying approach to control such workflows. To realize Quality of Data-driven workflows, we present an architecture realizing the presented approach and a WS-Policy-based language to describe Quality of Data requirements and capabilities.
Karastoyanova, Dimka, Dimitrios Dentsas, David Schumm, Mirko Sonntag, Lina Sun and Karolina Vukojevic (2012): Service-based integration of human users in workflow-driven scientific experiments, in: Proceedings of the 8th IEEE International Conference on eScience (eScience 2012).
Abstract: The use of information technology in research and practice leads to an increased degree of automation of tasks and makes scientific experiments more efficient in terms of cost, speed, accuracy, and flexibility. Scientific workflows have proven useful for the automation of scientific computations. However, not all tasks of an experiment can be automated. Some decisions still need to be made by human users, for instance, how an automated system should proceed in an exceptional situation. To address the need for the integration of human users in such automated systems, we propose the concept of Human Communication Flows, which specify best practices for how a scientific workflow can interact with a human user. We developed a human communication framework that implements Communication Flows in a pipes-and-filters architecture and supports both notifications and request-response interactions. Different Communication Services can be plugged into the framework to account for the different communication capabilities of human users. We facilitate the use of Communication Flows within a scientific workflow by means of reusable workflow fragments implementing the interaction with the framework.
Sonntag, Mirko, Michael Hahn and Dimka Karastoyanova (2012): Mayflower ‐ Explorative Modeling of Scientific Workflows with BPEL, in: Lohmann, Niels and Simon Moser (ed.): Proceedings of the Demonstration Track of the 10th International Conference on Business Process Management (BPM 2012), CEUR Workshop Proceedings, 2012, 45-50.
Abstract: Using workflows for scientific calculations, experiments and simulations has been a success story in many cases. Unfortunately, most existing scientific workflow systems implement proprietary, non-standardized workflow languages, not taking advantage of the achievements of conventional business workflow technology. It is only natural to combine these two research branches in order to harness the strengths of both. In this demonstration, we present Mayflower, a workflow environment that enables scientists to model workflows on the fly using extended business workflow technology. It supports the typical trial-and-error approach scientists follow when developing their experiments, computations or simulations and provides scientists with all crucial characteristics of the workflow technology. Additionally, and beneficial to business stakeholders, Mayflower brings further simplification to workflow development and debugging.
Schumm, David, Dimitrios Dentsas, Michael Hahn, Dimka Karastoyanova, Frank Leymann and Mirko Sonntag (2012): Web service composition reuse through shared process fragment libraries, in: Brambilla, Marco, Takehiro Tokuda and Robert Tolksdorf (ed.): Proceedings of the 12th International Conference on Web Engineering (ICWE'12), 498-501.
Abstract: More and more application functionality is provided for use over corporate and public networks. Standardized technology stacks, like Web services, provide abstraction from the internal implementation. Coarse-grained units of Web service composition logic can be made reusable by capturing them as 'process fragments'. Such fragments can be shared over the Web to simplify and accelerate the development of process-based service compositions. In this demonstration, we present a framework consisting of an Eclipse-based process design environment that is integrated with a Web-based process fragment library. The framework enables extracting process fragments, publishing and sharing them on the Web, as well as searching, retrieving, and reusing them in a given process. Process fragments can be shared with others using a Web frontend or through a plug-in within the process design environment, which builds on Web service technology.
Reiter, Michael, Uwe Breitenbücher, Schahram Dustdar, Dimka Karastoyanova and Frank Leymann (2011): A Novel Framework for Monitoring and Analyzing Quality of Data in Simulation Workflows: Proceedings of the IEEE Seventh International Conference on eScience (eScience2011), 105-112.
Abstract: In recent years, scientific workflows have been used for conducting data-intensive and long-running simulations. Such simulation workflows process and produce different types of data whose quality has a strong influence on the final outcome of simulations. Therefore, being able to monitor and analyze the quality of this data during workflow execution is of paramount importance, as the detection of quality problems enables us to control the execution of simulations efficiently. Unfortunately, existing scientific workflow execution systems do not support the monitoring and analysis of quality of data for multi-scale or multi-domain simulations. In this paper, we examine how quality of data can be comprehensively measured within workflows and how the measured quality can be used to control and adapt running workflows. We present a quality of data measurement process and describe a quality of data monitoring and analysis framework that integrates this measurement process into a workflow management system.
Sonntag, Mirko and Dimka Karastoyanova (2011): Enforcing the Repeated Execution of Logic in Workflows, in: Santos, Maribel Y. and Vagan Terziyan (ed.): Proceedings of the first International Conference on Business Intelligence and Technology (BUSTECH2011), 20-25.
Abstract: The repeated execution of workflow logic is a feature needed in many situations. Repetition of activities can be modeled with workflow constructs (e.g., loops) or external workflow configurations, or can be triggered by a user action during workflow execution. While the first two options are state of the art in workflow technology, the latter is currently insufficiently addressed in literature and practice. We argue that a manually triggered rerun operation enables both business users and scientists to react to unforeseen problems and thus improves workflow robustness, allows scientists to steer the convergence of scientific results, and facilitates the explorative workflow development required in scientific workflows. In this paper, we therefore formalize operations for the repeated enactment of activities, covering both iteration and re-execution. The starting point of the rerun is an arbitrary, manually selected activity. Since we define the operations on a meta-model level, they can be implemented for different workflow languages and engines.
Sonntag, Mirko and Dimka Karastoyanova (2011): Compensation of Adapted Service Orchestration Logic in BPEL’n’Aspects, in: Rinderle-Ma, Stefanie, Farouk Toumani and Karsten Wolf (ed.): Proceedings of the 9th International Conference on Business Process Management (BPM 2011), 413-428.
Abstract: BPEL’n’Aspects is a non-intrusive mechanism for the adaptation of the control flow of BPEL processes based on the AOP paradigm. It relies on Web service standards to weave process activities, in terms of aspects, into BPEL processes. This paper is a logical continuation of the BPEL’n’Aspects approach. Its main objective is to enable compensation of weaved-in Web service invocations (activities) in a straightforward manner. We present (1) requirements on a mechanism for compensation of weaved-in process activities; (2) the corresponding concepts and mechanisms to meet these requirements; (3) an example scenario to show the applicability of the approach; and (4) a prototypical implementation to prove the feasibility of the solution. This work improves the applicability of this particular adaptation approach, since processes in production also need the means to compensate actions that are included in processes as a result of an adaptation step. The concept is generic and hence can also be used by other approaches that adapt control flow.
Reimann, Peter, Michael Reiter, Holger Schwarz, Dimka Karastoyanova and Frank Leymann (2011): SIMPL - A Framework for Accessing External Data in Simulation Workflows, in: Härder, Theo, Wolfgang Lehner, Bernhard Mitschang and Harald Schöning (ed.): Datenbanksysteme für Business, Technologie und Web (BTW): Proceedings of the 14. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), 534-553.
Abstract: Adequate data management and data provisioning are among the most important topics to cope with the information explosion intrinsically associated with simulation applications. Today, data exchange with and between simulation applications is mainly accomplished in a file-style manner. These files come in proprietary formats and have to be transformed according to the specific needs of simulation applications. Much effort has to be spent to find appropriate data sources and to specify and implement data transformations. In this paper, we present SIMPL - an extensible framework that provides a generic and consolidated abstraction for data management and data provisioning in simulation workflows. We introduce extensions to workflow languages and show how they are used to model the data provisioning for simulation workflows based on data management patterns. Furthermore, we show how the framework supports uniform access to arbitrary external data in such workflows. This removes from engineers and scientists the burden of specifying low-level details of data management for their simulation applications and thus boosts their productivity.
Schumm, David, Jiayang Cai, Christoph Fehling, Dimka Karastoyanova, Frank Leymann and Monika Weidmann (2011): Composite Process View Transformation, in: Huemer, Christian and Thomas Setzer (ed.): E-Commerce and Web Technologies: Proceedings of the 12th International Conference, EC-Web 2011, 52-63.
Abstract: The increasing complexity of processes used for the design and execution of critical business activities demands novel techniques and technologies. Process viewing techniques have been proposed as a means to abstract from details, summarize and filter out information, and customize the visual appearance of a process to the needs of particular stakeholders. However, the composition of process view transformations and their provisioning for use in various scenarios has not yet been discussed in research. In this paper, we present a lightweight, service-oriented approach to compose modular process view transformation functions into complex process view transformations which can be offered as a service. We introduce a concept and an architectural framework to generate process view service compositions automatically with a focus on usability. Furthermore, we discuss key aspects regarding the realization of the approach as well as different scenarios where process view services and their compositions are needed.
Sonntag, Mirko, Katharina Görlach, Dimka Karastoyanova, Frank Leymann, Polina Malets and David Schumm (2011): Views on Scientific Workflows, in: Grabis, Janis and Marite Kirikova (ed.): 10th International Conference on Perspectives in Business Informatics Research (BIR 2011), 321-335.
Abstract: Workflows are becoming more and more important in e-Science due to the support they provide to scientists in computer simulations, experiments and calculations. Our experiences with workflows in this field and the literature show that scientific workflows involve a large amount of related information. This information is difficult to deal with in a single perspective, and its importance to scientists changes across the different workflow lifecycle phases. In this paper we apply viewing techniques known from business process management to (service-based) scientific workflows to address these issues. We describe seven of the most relevant views and point out realization challenges. We argue that the selected views facilitate the handling of workflows for scientists and add further value to scientific workflow systems. An implementation of a subset of the views based on Web services and BPEL shows the feasibility of the approach. The presented work additionally aims to increase the acceptance of workflow technology in e-Science.
Sonntag, Mirko, Sven Hotta, Dimka Karastoyanova, David Molnar and Siegfried Schmauder (2011): Using Services and Service Compositions to Enable the Distributed Execution of Legacy Simulation Applications, in: Hutchison, David, Takeo Kanade and Josef Kittler (ed.): Towards a Service-Based Internet, Springer, 242-253.
Abstract: In the field of natural and engineering science, computer simulations play an increasingly important role in explaining or predicting phenomena of the real world. Although the software landscape is crucial to support scientists in their everyday work, we recognized during our work with scientific institutes that many simulation programs can be considered legacy monolithic applications. They are developed without adhering to known software engineering guidelines, lack acceptable software ergonomics, run sequentially on single workstations and require tedious manual tasks. We are convinced that SOA concepts and service composition technology can help to improve this situation. In this paper we report on the results of our work on the service- and service composition-based re-engineering of a legacy scientific application for the simulation of the ageing process in copper-alloyed materials. The underlying general concept for a distributed, service-based simulation infrastructure is also applicable to other scenarios. The core of the infrastructure is a resource manager that steers server workload and handles simulation data.
Strauch, Steve, Vasilios Andrikopoulos, Dimka Karastoyanova and Karolina Vukojevic-Haupt (2015): Migrating e-Science Applications to the Cloud: Methodology and Evaluation, in: Cloud Computing with e-Science Applications, CRC Press.
Görlach, Katharina, Mirko Sonntag, Dimka Karastoyanova, Frank Leymann and Michael Reiter (2011): Conventional Workflow Technology for Scientific Simulation, in: Yang, Xiaoyu, Lizhe Wang and Wei Jie (ed.): Guide to e-Science: Next Generation Scientific Research and Discovery, Springer London: London, 323-352.
Abstract: Workflow technology has been established in the business domain for several years. This fact suggests the need for detailed investigations into the suitability of conventional workflow technology for the evolving application domain of e-Science. This chapter discusses the requirements on scientific workflows, the state of the art of scientific workflow management systems, as well as the ability of conventional workflow technology to fulfill the requirements of scientists and scientific applications. It becomes clear that the features of conventional workflows can be advantageous for scientists, but also that thorough enhancements are needed. We therefore propose a conceptual architecture for scientific workflow management systems based on business workflow technology, as well as extensions of existing workflow concepts, in order to improve the ability of established workflow technology to be applied in the scientific domain, with a focus on scientific simulations.
Karastoyanova, Dimka (2010): On Scientific Experiments and Flexible Service Compositions, in: Sachs, Kai, Ilia Petrov and Pablo Guerrero (ed.): From Active Data Management to Event-Based Systems and More: Papers in Honor of Alejandro Buchmann on the Occasion of His 60th Birthday, Springer Berlin Heidelberg: Berlin, Heidelberg, 175-194.
Abstract: The IT support for scientific experimenting and e-Science is currently not at the level of maturity of the support enterprises obtain. Recently, there has been a trend of reusing existing enterprise software and related concepts for scientific experiments, scientific workflows and simulations. Most notable among these are workflow technology, which is widely used in business process management (BPM), and integration paradigms like the service-oriented architecture (SOA). In this work we give an overview of open issues in the support for scientific experiments and possible approaches to addressing them in a service-based environment. We identify the need for enhancing BPM practices, technologies and techniques in order to render them applicable in the area of scientific experimenting. We stress the even greater importance of workflow flexibility and also show why flexibility techniques are crucial for improving the IT support for scientists.
Baryannis, George, Olha Danylevych, Dimka Karastoyanova, Kyriakos Kritikos, Philipp Leitner, Florian Rosenberg and Branimir Wetzstein (2010): Service Composition, in: Papazoglou, Mike P., Klaus Pohl and Michael Parkin (ed.): Service Research Challenges and Solutions for the Future Internet: S-Cube ‐ Towards Engineering, Managing and Adapting Service-Based Systems, Springer Berlin Heidelberg: Berlin, Heidelberg, 55-84.
Abstract: In the S-Cube research framework, the Service Composition and Co-ordination (SCC) layer encompasses the functions required for the aggregation of multiple services into a single composite service offering, with the execution of the constituent services in a composition controlled through the Service Infrastructure (SI) layer. The SCC layer manages the control and data flow between the services in a service-based application by, for example, specifying workflow models and using a workflow engine for runtime control of service execution. This chapter presents an overview of the state of the art in service composition modeling and covers two main areas: service composition models and languages, and approaches to the synthesis of service compositions, including model-driven, automated, and QoS-aware service composition. The contents of this chapter can be seen as a basis for aligning and improving existing approaches and solutions for service composition, and provide directions for future S-Cube research.
Karastoyanova, Dimka and Frank Leymann (2010): Making Scientific Applications on the Grid Reliable Through Flexibility Approaches Borrowed from Service Compositions, in: Antonopoulos, Nick, Georgios Exarchakos, Maozhen Li and Antonio Liotta (ed.): Handbook of Research on P2P and Grid Systems for Service-Oriented Computing, IGI Global, 635-656.
Abstract: The current trend in Service Oriented Computing (SOC) is to enable support for new delivery models of software and applications. These endeavours impose requirements on the resources and services used, on the way applications are created, and on the QoS characteristics of the applications and the supporting infrastructure. Scientific applications, on the other hand, require improved robustness and reliability of the supporting Grid infrastructures, where resources appear and disappear constantly. Enabling business models like Software as a Service (SaaS) and Infrastructure as a Service (IaaS), and guaranteeing the reliability of Grid infrastructures, are requirements that both business and scientific applications nowadays impose. The convergence of existing approaches from SOC and Grid Computing is therefore an obvious need. In this work we give an overview of the state of the art of the overlapping research done in the areas of SOC and Grid computing with respect to meeting the requirements of the applications in these two areas. We show that the requirements of business applications that already exploit service-oriented architectures (SOA) and of scientific applications utilizing Grid infrastructures overlap. Due to the limited extent of cooperation between the two research communities, the research results are either overlapping or diverging in spite of the similarities in requirements. Notably, some of the techniques developed in each area are needed but still missing in the other, and vice versa. We argue therefore that, in order to enable an enterprise-strength service-oriented infrastructure, one needs to combine and leverage the existing Grid and service middleware in terms of architectures and implementations. We call such an infrastructure the Business Grid. Based on the Business Grid vision, we focus in this work on presenting how the reliability and robustness of the Business Grid can be improved by employing approaches for the flexibility of service compositions. An overview and assessment of these approaches are presented together with recommendations for use. Based on the assumption that Grid services are Web services, these approaches can be utilized to improve the reliability of scientific applications, thus drawing on the advantages flexible workflows provide. In this way we improve the robustness of scientific applications by making them flexible, and hence improve the features of business applications that employ Grid resources and Grid service compositions to realize the SaaS, IaaS, etc. delivery models.
Leymann, Frank, Dimka Karastoyanova and Michael P. Papazoglou (2010): Business Process Management Standards, in: Vom Brocke, Jan and Michael Rosemann (ed.): Handbook on Business Process Management 1: Introduction, Methods, and Information Systems, Springer Berlin Heidelberg: Berlin, Heidelberg, 513-542.
Abstract: This chapter discusses the evolution of standards for BPM. The focus is on technology-related standards, especially on standards for specifying process models. A discussion of the two fundamental approaches for modeling processes, graph-based and operator-based, supports a better understanding of the evolution of standards. For each standard discussed, we describe its core concepts and its impact on the evolution of standards. The corresponding influence on the overall architecture of BPM environments is worked out.
Mietzner, Ralph, Dimka Karastoyanova and Frank Leymann (2009): Business Grid: Combining Web Services and the Grid, in: Jensen, Kurt and van der Aalst, Wil M. P. (ed.): Transactions on Petri Nets and Other Models of Concurrency II: Special Issue on Concurrency in Process-Aware Information Systems, Springer Berlin Heidelberg: Berlin, Heidelberg, 136-151.
Abstract: The common overarching goal of service bus and Grid middleware is "virtualization" – the virtualization of business functions and the virtualization of resources, respectively. Combining both capabilities yields a new infrastructure called the "Business Grid". This infrastructure meets the requirements of both business applications and scientific computations in a unified manner, and in particular those that are not addressed by the middleware infrastructures in each of the two fields. Furthermore, it is the basis for enacting new trends like Software as a Service and Cloud computing. In this paper the overall architecture of the Business Grid is outlined. Business Grid applications are described, and the need for their customizability and adaptability is advocated. Requirements on the Business Grid like concurrency, multi-tenancy and scalability are addressed. The concept of "provisioning flows" and other mechanisms to enable the scalability required by a high number of concurrent users are outlined.
Karastoyanova, Dimka, Tammo van Lessen, Frank Leymann, Zhilei Ma, Joerg Nitzsche and Branimir Wetzstein (2009): Semantic Business Process Management, in: Cardoso, Jorge and Wil van der Aalst (ed.): Handbook of Research on Business Process Modeling, IGI Global, 299-317.
Abstract: Even though process orientation/BPM is a widely accepted paradigm with heavy impact on industry and research, the available technology does not support business professionals' tasks appropriately, that is, in a way that allows modeling processes using concepts from the business domain. This results in a gap between business professionals' expertise and the IT knowledge required. The current trend in bridging this gap is to utilize technologies developed for the Semantic Web, for example ontologies, while maintaining the reusability and flexibility of processes. In this chapter the authors present an overview of existing technologies supporting the BPM lifecycle and focus on the potential benefits Semantic Web technologies can bring to BPM. The authors show how these technologies help automate the transition between the inherently separate/detached business professionals' level and the IT level without the burden of additional knowledge acquisition on behalf of the business professionals. As background information they briefly discuss existing process modeling notations like the Business Process Modeling Notation (BPMN) as well as the execution-centric Business Process Execution Language (BPEL), and their limitations in terms of proper support for the business professional. The chapter stresses the added value Semantic Web technologies yield when leveraged for the benefit of BPM. For this, the authors give examples of existing BPM techniques that can be improved by using Semantic Web technologies, as well as novel approaches which became possible only through the availability of semantic descriptions. They show how process model configuration can be automated and thus simplified, and how flexibility during process execution is increased. Additionally, they present innovative techniques like automatic process composition and auto-completion of process models, where suitable process fragments are automatically discovered to make up the process model. They also present a reference architecture of a BPM system that utilizes Semantic Web technologies in an SOA environment.