The Explainable Business Process (XBP) - An Exploratory Research

Riham Alhomsi, Adriana Santarosa Vivacqua

Abstract


Providing explanations for a business process, its decisions, and its activities is a key factor in achieving the process's business objectives, in reducing the ambiguity that leads to multiple interpretations of the process, and in engendering appropriate user trust in it. As a first step towards adding explanations to business processes, we present an exploratory study that introduces the concept of explainability into business processes. We propose a conceptual framework that combines explainability with business processes in a model we call the Explainable Business Process (XBP), and we propose an XBP lifecycle based on the Model-based and Incremental Knowledge Engineering (MIKE) approach, showing in detail the phases where explainability can take place in the business process lifecycle. We focus on explaining the decisions and activities of the process in its as-is model, without transforming it into a to-be model.
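The core idea of the abstract, annotating the decisions and activities of an as-is process model with explanations rather than redesigning the process, can be pictured with a minimal sketch. The `Activity` and `Explanation` classes and the credit-check example below are our own illustrative assumptions, not constructs defined in the paper:

```python
# Illustrative sketch (hypothetical, not the paper's model): attaching
# explanations to the activities of an as-is process without changing it.
from dataclasses import dataclass, field

@dataclass
class Explanation:
    question: str   # e.g. "Why is this activity performed?"
    answer: str     # the rationale offered to the process user

@dataclass
class Activity:
    name: str
    explanations: list = field(default_factory=list)

    def explain(self) -> list:
        # Render each attached question/answer pair as one line of text.
        return [f"{e.question} {e.answer}" for e in self.explanations]

# Annotate one activity of an as-is loan process in place.
check = Activity("Check credit history")
check.explanations.append(
    Explanation("Why is this activity performed?",
                "A credit check is required before any approval decision."))

for line in check.explain():
    print(line)
```

The process structure itself is untouched; explanations are layered on top, which matches the paper's stated focus on the as-is model.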

Keywords


business process; explainability; explainable business process


References


HAMMER, M.; CHAMPY, J. Reengineering the Corporation: A Manifesto for Business Revolution. Business Horizons, Amsterdam, v. 36, p. 90–91, February 1993.

DAVENPORT, T. Process Innovation: Reengineering Work through Information Technology. Cambridge: Harvard Business Review Press, 1992.

YU, E.; MYLOPOULOS, J.; LESPÉRANCE, Y. AI models for business process reengineering. IEEE Expert, Washington, v. 11, p. 16–23, September 1996.

GUIDOTTI, R. et al. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, New York, v. 51, p. article 93, February 2018.

ACHARYA, V. Google Scholar. 2020. Available at: 〈scholar.google.com〉.

GUNNING, D. DARPA’s explainable artificial intelligence (XAI) program. In: IUI ’19: INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 24., 2019, Marina del Ray. Proceedings of the [...]. New York: Association for Computing Machinery, 2019. p. article 2.

ANTUNES, P.; MOURÃO, H. Resilient Business Process Management: Framework and services. Expert Systems with Applications, Amsterdam, v. 38, p. 1241–1254, February 2011.

DUMAS, M. et al. Fundamentals of Business Process Management. Berlin: Springer-Verlag, 2013.

WESKE, M. Business Process Management: Concepts, Languages, Architectures. 2. ed. Berlin: Springer, 2019.

BURATTIN, A. Process Mining Techniques in Business Environments: Theoretical Aspects, Algorithms, Techniques and Open Challenges in Process Mining. 1. ed. Cham: Springer International Publishing, 2015. v. 207. (Lecture Notes in Business Information Processing, v. 207).

ROSEMANN, M.; RECKER, J.; FLENDER, C. Contextualisation of business processes. International Journal of Business Process Integration and Management, Geneva, v. 3, n. 1, p. 47–60, July 2008.

AALST, W. et al. Business process mining: An industrial application. Information Systems, Amsterdam, v. 32, n. 5, p. 713–732, July 2007.

BUIJS, A. et al. Understanding People’s Ideas on Natural Resource Management: Research on Social Representations of Nature. Society and Natural Resources, Logan, v. 25, n. 11, p. 1167–1181, November 2012.

GRUHN, V.; LAUE, R. Reducing the cognitive complexity of business process models. In: IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFORMATICS, ICCI, 8., 2009, Hong Kong. Proceedings of the [...]. Washington: Institute of Electrical and Electronics Engineers (IEEE), 2009. p. 339–345.

ABPMP. BPM CBOK: Guia para Gerenciamento de Processos de Negócio. 1. ed. Brasília: Association of Business Process Management Professionals, 2013.

POLANČIČ, G.; CEGNAR, B. Complexity Metrics for Process Models: A Systematic Literature Review. Computer Standards and Interfaces, Amsterdam, v. 51, p. 104–117, March 2017.

LINDSAY, A.; DOWNS, D.; LUNN, K. Business processes: Attempts to find a definition. Information and Software Technology, Amsterdam, v. 45, n. 15, p. 1015–1019, December 2003.

KELLER, G.; NÜTTGENS, M.; SCHEER, A.-W. Semantische Prozeßmodellierung auf der Grundlage ”Ereignisgesteuerter Prozeßketten (EPK)”. Saarbrücken: Institut für Wirtschaftsinformatik, 1992. v. 89. (Veröffentlichungen des Instituts für Wirtschaftsinformatik, v. 89).

OMG. Business Process Model And Notation 2.0 (BPMN). Milford: Object Management Group (OMG), 2011. Available at: 〈https://www.omg.org/spec/BPMN/2.0/About-BPMN/〉.

GUIDOTTI, R. et al. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, New York, v. 51, n. 5, p. article 93, February 2018.

NAKATSU, R. Explanatory power of intelligent systems: a research framework. In: GUPTA, J. N. D.; FORGIONNE, G. A.; MORA, M. (Eds.). Intelligent Decision-making Support Systems. London: Springer-Verlag, 2006. (Decision Engineering), p. 123–143.

SOKOL, K.; FLACH, P. One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency. KI - Künstliche Intelligenz, Cham, v. 34, p. 235–250, February 2020.

SELFRIDGE, M.; DANIELL, J.; SIMMONS, D. Learning Causal Models by Understanding Real-World Natural Language Explanations. In: CONFERENCE ON ARTIFICIAL INTELLIGENCE APPLICATIONS (CAIA), 2., 1985, Miami Beach. Proceedings of the Second Conference [...]. New York: IEEE Computer Society / North-Holland, 1985. p. 378–383.

RASMUSSEN, J. Human Error and the Problem of Causality in Analysis of Accidents. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, London, v. 327, n. 1241, p. 449–60; discussion 460, May 1990.

HECKERMAN, D.; SHACHTER, R. A Decision-Based View of Causality. In: UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE, 10., 1994, Seattle. Proceedings of the [...]. Amsterdam: Elsevier, 2013. p. 302–310.

AHN, W. Why are different features central for natural kinds and artifacts?: The role of causal status in determining feature centrality. Cognition, Amsterdam, v. 69, n. 2, p. 135–78, January 1999.

AHN, W.; KALISH, C. The role of mechanism beliefs in causal reasoning. In: KEIL, F. C.; WILSON, R. A. (Eds.). Explanation and Cognition. Cambridge: The MIT Press, 2000. p. 199–225.

FELTOVICH, P. J. et al. Keeping it too simple: How the reductive tendency affects cognitive engineering. Intelligent Systems, Washington, v. 19, n. 3, p. 90–94, June 2004.

LOMBROZO, T. Simplicity and probability in causal explanation. Cognitive Psychology, Amsterdam, v. 55, n. 3, p. 232–57, December 2007.

LOMBROZO, T. Causal-Explanatory Pluralism: How Intentions, Functions, and Mechanisms Influence Causal Ascriptions. Cognitive Psychology, Amsterdam, v. 61, n. 4, p. 303–32, December 2010.

KLEIN, G. et al. Influencing Preferences for Different Types of Causal Explanation of Complex Events. Human Factors, Thousand Oaks, v. 56, n. 8, p. 1380–400, December 2014.

ROSENBERG, J. On Understanding the Difficulty in Understanding Understanding. In: PARRET, H.; BOUVERESSE, J. (Eds.). Meaning and Understanding. Berlin: Walter de Gruyter, 1981. (Grundlagen der Kommunikation und Kognition / Foundations of Communication and Cognition), p. 29–43.

KEIL, F. C. Explanation and Understanding. Annual Review of Psychology, Palo Alto, v. 57, n. 1, p. 227–254, January 2006.

SMITH, R. Explanation, understanding, and control. Synthese, Berlin, v. 191, p. 4169–4200, November 2014.

HITCHCOCK, C.; WOODWARD, J. Explanatory Generalizations, Part II: Plumbing Explanatory Depth. Noûs, Hoboken, v. 37, n. 2, p. 181–199, May 2003.

BYRNE, R. Cognitive Processes in Counterfactual Thinking About What Might Have Been. Psychology of Learning and Motivation, Amsterdam, v. 37, p. 105–154, December 1997.

BYRNE, R. Counterfactual Thinking: From Logic to Morality. Current Directions in Psychological Science, Washington, v. 26, n. 4, p. 314–322, August 2017.

HOFFMAN, R. et al. Accelerated Expertise: Training for High Proficiency in a Complex World. 1. ed. New York: Psychology Press, 2013. 1-256 p.

CHAKRABORTI, T. et al. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. In: INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 26., 2017, Melbourne. Proceedings of the [...]. San Francisco: International Joint Conferences on Artificial Intelligence, 2017. p. 156–163.

FOX, M.; LONG, D.; MAGAZZENI, D. Explainable Planning. CoRR, Cornell University, Ithaca, abs/1709.10256, September 2017. Available at: 〈http://arxiv.org/abs/1709.10256〉.

GRAESSER, A.; BAGGETT, W.; WILLIAMS, K. Question-driven Explanatory Reasoning. Applied Cognitive Psychology, Hoboken, v. 10, n. 7, p. 17–31, November 1996.

AHN, W.-k. et al. The role of covariation versus mechanism information in causal attribution. Cognition, Amsterdam, v. 54, n. 3, p. 299–352, April 1995.

GOPNIK, A. Explanation as orgasm and the drive for causal knowledge: The function, evolution, and phenomenology of the theory formation system. In: KEIL, F. C.; WILSON, R. A. (Eds.). Explanation and Cognition. Cambridge: The MIT Press, 2000. p. 299–323.

KEIL, F. Explanation and Understanding. Annual Review of Psychology, Palo Alto, v. 57, p. 227–54, February 2006.

KOEHLER, D. Explanation, Imagination, and Confidence in Judgment. Psychological Bulletin, Worcester, v. 110, n. 3, p. 499–519, December 1991.

LOMBROZO, T.; CAREY, S. Functional explanation and the function of explanation. Cognition, Amsterdam, v. 99, n. 2, p. 167–204, April 2006.

MITCHELL, D.; RUSSO, J.; PENNINGTON, N. Back to the future: Temporal perspective in the explanation of events. Journal of Behavioral Decision Making, Hoboken, v. 2, n. 1, p. 25–38, January 1989.

GIFFIN, C.; WILKENFELD, D.; LOMBROZO, T. The explanatory effect of a label: Explanations with named categories are more satisfying. Cognition, Amsterdam, v. 168, p. 357–369, November 2017.

TWOREK, C.; CIMPIAN, A. Why Do People Tend to Infer ”Ought” From ”Is”? The Role of Biases in Explanation. Psychological Science, Washington, v. 27, n. 8, p. 1109–1122, April 2016.

LINDEN, S. van der et al. Inoculating the Public against Misinformation about Climate Change. Global Challenges, Weinheim, v. 1, n. 2, p. article 1600008, February 2017.

BIRAN, O.; COTTON, C. V. Explanation and Justification in Machine Learning: A Survey. In: INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 26., 2017, Melbourne. Proceedings of the [...]. San Francisco: International Joint Conferences on Artificial Intelligence, 2017. p. 8–13.

EINHORN, H.; HOGARTH, R. Judging Probable Cause. Psychological Bulletin, Worcester, v. 99, n. 1, p. 3–19, January 1986.

HILTON, D. Mental Models and Causal Explanation: Judgements of Probable Cause and Explanatory Relevance. Thinking and Reasoning, Abingdon, v. 2, n. 4, p. 273–308, November 1996.

MURPHY, G.; MEDIN, D. The role of theories in conceptual coherence. Psychological Review, Worcester, v. 92, n. 3, p. 289–316, January 1985.

LOMBROZO, T. Explanation and Categorization: How “why?” Informs “What?”. Cognition, Amsterdam, v. 110, n. 2, p. 248–53, February 2009.

MUELLER, S. et al. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI. CoRR, Cornell University, Ithaca, abs/1902.01876, February 2019. Available at: 〈http://arxiv.org/abs/1902.01876〉.

CHUNG, K. Asset Characteristics and Corporate Debt Policy: An Empirical Test. Journal of Business Finance and Accounting, Hoboken, v. 20, n. 1, p. 83–98, December 2006.

HAMSCHER, W. AI in Business-Process Reengineering. AI Magazine, Washington, v. 15, n. 4, p. 71–72, December 1994.

VERENICH, I. Explainable predictive monitoring of temporal measures of business processes. PhD Thesis — Queensland University of Technology, 2018.

SCHNEIDER, T. et al. Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations. Geophysical Research Letters, Hoboken, v. 44, n. 24, p. 12,396–12,417, August 2017.

HARL, M. et al. Explainable predictive business process monitoring using gated graph neural networks. Journal of Decision Systems, London, 2020. (Latest Articles).

FOX, M. The TOVE Project Towards a Common-Sense Model of the Enterprise. In: IEA/AIE: INTERNATIONAL CONFERENCE ON INDUSTRIAL, ENGINEERING AND OTHER APPLICATIONS OF APPLIED INTELLIGENT SYSTEMS, 5., 1992, Paderborn. Proceedings of the [...]. Berlin: Springer, 1992. (Lecture Notes in Computer Science, v. 604), p. 25–34.

ANGELE, J. et al. Developing Knowledge-Based Systems with MIKE. Automated Software Engineering, Berlin, v. 5, p. 389–418, October 1998.

GUNNING, D. et al. XAI—Explainable artificial intelligence. Science Robotics, Washington, v. 4, n. 37, December 2019.

ARRIETA, A. B. et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, Amsterdam, v. 58, p. 82–115, June 2020.

HOFFMAN, R. et al. Metrics for Explainable AI: Challenges and Prospects. CoRR, Cornell University, Ithaca, abs/1812.04608, December 2018.

CARDOSO, J. Business Process Control-Flow Complexity: Metric, Evaluation, and Validation. International Journal of Web Services Research, Hershey, v. 5, n. 2, p. 49–76, April 2008.




DOI: https://doi.org/10.22456/2175-2745.107964

Copyright (c) 2021 Riham Alhomsi, Adriana Santarosa Vivacqua

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
