ELECTRAMed: a new pre-trained language representation model for biomedical NLP
Giacomo Miolo, Giulio Mantoan, Carlotta Orsenigo

Abstract
The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks. The most recent dominant approaches are domain-specific models, initialized on general-domain textual data and then trained on a variety of scientific corpora. However, it has been observed that for specialized domains in which large corpora exist, training a model from scratch with just in-domain knowledge may yield better results. Moreover, the increasing focus on the compute costs of pre-training has recently led to the design of more efficient architectures, such as ELECTRA. In this paper, we propose a pre-trained domain-specific language model, called ELECTRAMed, suited for the biomedical field. The novel approach inherits the learning framework of the general-domain ELECTRA architecture, as well as its computational advantages. Experiments performed on benchmark datasets for several biomedical NLP tasks support the usefulness of ELECTRAMed, which sets a new state-of-the-art result on the BC5CDR corpus for named entity recognition and provides the best outcome in 2 out of the 5 runs of the factoid task of the 7th BioASQ Challenge for question answering.
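For readers who want to experiment with a checkpoint of this kind, the sketch below shows how an ELECTRA-style biomedical model could be loaded for token classification (e.g. named entity recognition) with the Hugging Face `transformers` library. The checkpoint path and label count are placeholders for illustration, not identifiers taken from the paper.

```python
# Minimal sketch: loading an ELECTRA-style checkpoint for token classification (NER).
# The model path below is a placeholder, not an official ELECTRAMed identifier.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "path/to/electramed-checkpoint"  # placeholder checkpoint path

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=3)  # e.g. B/I/O tags

sentence = "Cisplatin induced nephrotoxicity in the treated cohort."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, num_labels)

predictions = logits.argmax(dim=-1)      # predicted tag id for each subword token
print(predictions)
```

In practice the classification head would first be fine-tuned on a labelled NER corpus such as BC5CDR or NCBI-disease before the predictions become meaningful.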
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| drug-drug-interaction-extraction-on-ddi | ELECTRAMed | Micro F1: 79.13 |
| named-entity-recognition-ner-on-bc5cdr | ELECTRAMed | F1: 90.03 |
| named-entity-recognition-ner-on-ncbi-disease | ELECTRAMed | F1: 87.54 |
| relation-extraction-on-chemprot | ELECTRAMed | F1: 72.94 |
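The relation and interaction extraction benchmarks above are scored with micro-averaged F1. As a quick illustration of how such a score is typically computed, the hedged snippet below evaluates a toy set of multi-class relation predictions with scikit-learn; the labels and predictions are invented for the example, and the restriction to positive relation classes mirrors the common practice of excluding the "no relation" class from the average.

```python
# Illustrative only: micro-averaged F1 over a toy multi-class relation output.
# Gold and predicted labels here are made up for the example.
from sklearn.metrics import f1_score

gold = ["CPR:3", "CPR:4", "CPR:4", "false", "CPR:9", "false"]
pred = ["CPR:3", "CPR:4", "false", "false", "CPR:9", "CPR:4"]

# Micro F1 pools true positives, false positives and false negatives across
# the listed (positive) classes before computing precision and recall.
micro_f1 = f1_score(gold, pred, average="micro", labels=["CPR:3", "CPR:4", "CPR:9"])
print(f"Micro F1: {micro_f1:.4f}")
```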