Customizing Contextualized Language Models for Legal Document Reviews

Shohreh Shaghaghian, Luna Yue Feng, Borna Jafarpour, Nicolai Pogrebnyakov

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › Peer reviewed



Inspired by inductive transfer learning in computer vision, many efforts have been made to train contextualized language models that boost the performance of natural language processing tasks. These models are mostly trained on large general-domain corpora such as news, books, or Wikipedia. Although these pre-trained generic language models capture the semantic and syntactic essence of a language well, exploiting them in a real-world domain-specific scenario still requires practical considerations such as token distribution shifts, inference time, memory, and their simultaneous proficiency in multiple tasks. In this paper, we focus on the legal domain and present how different language models trained on general-domain corpora can best be customized for multiple legal document review tasks. We compare their efficiency with respect to task performance and present practical considerations.
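One consideration the abstract names, the token distribution shift between general-domain and domain-specific text, can be illustrated with a toy comparison of smoothed unigram distributions. The corpora, smoothing constant, and KL-divergence measure below are illustrative assumptions for this sketch, not details taken from the paper:

```python
from collections import Counter
import math

def unigram_dist(tokens, vocab, alpha=1.0):
    # Laplace-smoothed unigram distribution over a shared vocabulary
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    # D(p || q); larger values indicate a bigger distribution shift
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

# Toy corpora (hypothetical, for illustration only)
general = "the cat sat on the mat and the dog ran".split()
legal = "the plaintiff filed the motion and the court granted the motion".split()

vocab = set(general) | set(legal)
p_general = unigram_dist(general, vocab)
p_legal = unigram_dist(legal, vocab)

shift = kl_divergence(p_legal, p_general)
```

On real corpora, a large value of `shift` would suggest that the pre-trained model's vocabulary statistics poorly match the target domain, motivating domain-adaptive customization.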
Title: Proceedings - 2020 IEEE International Conference on Big Data, Big Data 2020
Editors: Xintao Wu, Chris Jermaine, Li Xiong, Xiaohua Hu, Olivera Kotevska, Siyuan Lu, Weijia Xu, Srinivas Aluru, Chengxiang Zhai, Eyhab Al-Masri, Zhiyuan Chen, Jeff Saltz
Number of pages: 10
Place of publication: Los Alamitos, CA
ISBN (Print): 9781728162522
ISBN (Electronic): 9781728162515
Status: Published - 2020
Event: Eighth IEEE International Conference on Big Data, IEEE BigData 2020 - Virtual Event
Duration: 10 Dec 2020 - 13 Dec 2020
Conference number: 8




  • Adaptation models
  • Law
  • Computational modeling
  • Big data
  • Natural language processing
  • Task analysis
  • Context modeling