Henrik Leopold is Associate Professor of Data Science and Business Intelligence. Before joining KLU in February 2019, he held positions as Assistant Professor at Vrije Universiteit Amsterdam (2015 to 2019) and at WU Vienna (2014 to 2015). In 2013, he obtained his PhD degree (Dr. rer. pol.) in Information Systems from the Humboldt University of Berlin. For his thesis he received the TARGION Dissertation Award 2014 for the best doctoral thesis in the field of Information Management and was runner-up for the McKinsey Business Technology Award 2013.
In his research, Henrik Leopold is mainly concerned with the interplay between information systems and business processes. Such business processes can range from manufacturing a car to delivering a medical service to a patient. He is particularly interested in how to leverage technology from the field of artificial intelligence (such as machine learning and natural language processing) to analyze and support the execution of business processes. Outcomes of his research range from techniques for the automated analysis of process model collections to novel process mining techniques. The results of his research have been published in journals including IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Software Engineering, Decision Support Systems, and Information Systems.
For more details about Henrik Leopold, including downloads of research articles, you can also visit his personal website at www.henrikleopold.com.
Up Close & Personal
“For me, the combination of being a top research institution while maintaining a friendly, family-like atmosphere is what really sets KLU apart.”
– Prof. Dr. Henrik Leopold
van der Aa, Han, Henrik Leopold and Hajo A. Reijers (2018): Checking process compliance against natural language specifications using behavioral spaces, Information Systems, 78: 83-95.
Abstract: Textual process descriptions are widely used in organizations since they can be created and understood by virtually everyone. Because of their widespread use, they also provide a valuable source for process analysis, such as compliance checking. However, the inherent ambiguity of natural language impedes the automated analysis of textual process descriptions. While human readers can use their context knowledge to correctly understand statements with multiple possible interpretations, automated tools currently have to make assumptions about their correct meaning. As a result, compliance-checking techniques are prone to draw incorrect conclusions about the proper execution of a process. To provide a comprehensive solution to these reasoning problems, we use this paper to introduce the concept of a behavioral space as a means to deal with behavioral ambiguity in textual process descriptions. A behavioral space captures all possible interpretations of a textual process description in a systematic manner. Thus, it avoids the problem of focusing on a single, possibly incorrect interpretation. We use a quantitative evaluation with a set of 47 textual process descriptions to demonstrate the usefulness of a behavioral space for compliance checking in the context of ambiguous texts.
Leopold, Henrik, Jan Mendling and Oliver Günther (2016): Learning from Quality Issues of BPMN Models from Industry, IEEE Software, 33 (4): 26-33.
Abstract: Many organizations use business process models to document business operations and formalize business requirements in software-engineering projects. The Business Process Model and Notation (BPMN), a specification by the Object Management Group, has evolved into the leading standard for process modeling. One challenge is BPMN's complexity: it offers a huge variety of elements and often several representational choices for the same semantics. This raises the question of how well modelers can deal with these choices. Empirical insights into BPMN use from the practitioners' perspective are still missing. To close this gap, researchers analyzed 585 BPMN 2.0 process models from six companies. They found that split and join representations, message flow, the lack of proper model decomposition, and labeling were related to quality issues. They give five specific recommendations on how to avoid these issues.
Pittke, Fabian, Henrik Leopold and Jan Mendling (2015): Automatic Detection and Resolution of Lexical Ambiguity in Process Models, IEEE Transactions on Software Engineering, 41 (6): 526-544.
Abstract: System-related engineering tasks are often conducted using process models. In this context, it is essential that these models do not contain structural or terminological inconsistencies. To this end, several automatic analysis techniques have been proposed to support quality assurance. While formal properties of control flow can be checked in an automated fashion, there is a lack of techniques addressing textual quality. More specifically, there is currently no technique available for handling the issue of lexical ambiguity caused by homonyms and synonyms. In this paper, we address this research gap and propose a technique that detects and resolves lexical ambiguities in process models. We evaluate the technique using three process model collections from practice varying in size, domain, and degree of standardization. The evaluation demonstrates that the technique significantly reduces the level of lexical ambiguity and that meaningful candidates are proposed for resolving ambiguity.
Leopold, Henrik, Jan Mendling and Artem Polyvyanyy (2014): Supporting Process Model Validation through Natural Language Generation, IEEE Transactions on Software Engineering, 40 (8): 818-840.
Abstract: The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
| Period | Position |
|---|---|
| since 2018 | Assistant Professor of Data Science and Business Intelligence, Kühne Logistics University, Hamburg, Germany |
| 2015–2018 | Assistant Professor (Tenure Track; tenured 2018), Department of Computer Science, VU University Amsterdam, The Netherlands |
| 2014–2015 | Visiting Researcher, Chair of Artificial Intelligence, University of Mannheim, Mannheim, Germany |
| 2014–2015 | Assistant Professor, Institute for Information Business, Vienna University of Economics and Business, Austria |
| 2012 | Research Visit, Department of Applied Informatics, UNIRIO, Rio de Janeiro, Brazil |
| 2011–2014 | Research Assistant, Institute of Information Systems, Humboldt University of Berlin, Berlin, Germany |
| 2013 | Ph.D. in Information Systems (Dr. rer. pol., summa cum laude), Humboldt University of Berlin, Germany |
| 2010 | M.Sc. in Information Systems, Humboldt University of Berlin, Germany |
| 2008 | B.Sc. in Information Systems, Berlin School of Economics and Law (dual study program), Germany |