Toward a Sociology of Machine Learning Explainability: Human–machine Interaction in Deep Neural Network-based Automated Trading

Christian Borch*, Bo Hee Min

*Corresponding author of this work

    Publication: Contribution to journal › Journal article › Research › peer-reviewed



    Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
    Journal: Big Data & Society
    Issue number: 2
    Number of pages: 13
    Status: Published - Jul. 2022


    • Algorithmic ethnography
    • Automated trading
    • Deep neural networks
    • Explainability
    • Machine learning
    • Human–machine companionship