Toward a Sociology of Machine Learning Explainability: Human–machine Interaction in Deep Neural Network-based Automated Trading

Christian Borch*, Bo Hee Min

*Corresponding author for this work

    Research output: Contribution to journal › Journal article › Research › peer-review


    Abstract

    Machine learning systems are making considerable inroads into society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Given the importance of addressing this opacity, this paper calls for research that studies, empirically and theoretically, how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest to explain its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
    Original language: English
    Journal: Big Data & Society
    Volume: 9
    Issue number: 2
    Number of pages: 13
    DOIs
    Publication status: Published - Jul 2022

    Keywords

    • Algorithmic ethnography
    • Automated trading
    • Deep neural networks
    • Explainability
    • Machine learning
    • Human–machine companionship
