Abstract
Despite the growing promise of AI in B2B sales, frontline adoption remains limited, particularly when AI replaces rather than supports human judgment. This research examines how AI-based lead recommenders compare to human decision-making and whether improvements in predictive accuracy and transparency can drive adoption. Across two datasets from European and U.S. B2B markets and a large-scale experiment, we find that marketers and salespeople rely on fundamentally different signals when qualifying leads: marketers prioritize content-based cues, while salespeople attend to process-based indicators. This misalignment partly explains the low adoption of marketing and AI-generated recommendations. While black-box AI models outperform human and logistic models descriptively, their predictive advantage is context-dependent. Most critically, adoption behavior hinges not only on model accuracy but also on the interplay between source (AI vs. human) and explanation. Salespeople grant human recommendations more benefit of the doubt, even when less accurate, whereas AI recommendations require both high accuracy and explanatory support to gain trust. These findings offer actionable insights for designing effective, explainable AI systems that align with salespeople's decision-making logic.
| Original language | English |
|---|---|
| Publication date | 2026 |
| Number of pages | 18 |
| Status | Published - 2026 |
| Event | 9th Industrial Marketing Management Summit - Campus Lyon, Lyon, France. Duration: 13 Jan 2026 → 16 Jan 2026. Conference number: 9. https://em-lyon.com/en/agenda/9th-industrial-marketing-management-summit |
Conference
| Conference | 9th Industrial Marketing Management Summit |
|---|---|
| Number | 9 |
| Location | Campus Lyon |
| Country/Territory | France |
| City | Lyon |
| Period | 13/01/2026 → 16/01/2026 |
| Internet address | https://em-lyon.com/en/agenda/9th-industrial-marketing-management-summit |
Keywords
- AI
- Explainability
- B2B sales
- Trust
- Adoption