Bangla BERT Model

Sagor Sarker
Sep 11, 2020

After a long journey, we have just published the Bengali BERT language model Bangla-Bert-Base. The model is published on the Hugging Face model hub.

Pre-training Corpus Details

The pre-training corpus was downloaded from two main sources:

  • Bengali CommonCrawl corpus (from OSCAR)
  • Bengali Wikipedia dump dataset

After downloading these corpora, we preprocessed them into BERT format: one sentence per line, with an extra blank line between documents.

sentence 1
sentence 2

sentence 1
sentence 2
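
As a tiny illustration, writing documents into this format could look like the sketch below (the documents list and output file name are just placeholders, not our real corpus):

# Minimal sketch of writing pre-training text in BERT format.
# The documents and file name here are placeholders.
documents = [
    ["sentence 1", "sentence 2"],   # document 1, already split into sentences
    ["sentence 1", "sentence 2"],   # document 2
]

with open("pretraining_data.txt", "w", encoding="utf-8") as f:
    for doc in documents:
        for sentence in doc:
            f.write(sentence + "\n")   # one sentence per line
        f.write("\n")                  # blank line between documents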

Vocab Building

We used the BNLP package to train a Bengali SentencePiece model with a vocab size of 102025. We then converted the output vocab file into BERT format. Our final vocab file is available at https://github.com/sagorbrur/bangla-bert and also on the Hugging Face model hub.
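
BNLP builds on Google's sentencepiece library, so roughly the same vocab training step can be sketched with sentencepiece directly (the file names below are placeholders, not our actual training setup):

import sentencepiece as spm

# Rough sketch of the vocab training step using the sentencepiece
# library that BNLP wraps; input and output names are placeholders.
spm.SentencePieceTrainer.train(
    input="bengali_corpus.txt",   # placeholder for the preprocessed corpus
    model_prefix="bangla_spm",    # placeholder output prefix
    vocab_size=102025,            # vocab size reported above
)
# The resulting bangla_spm.vocab file is then converted into BERT's
# one-token-per-line vocab.txt format.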

Training Details

  • Bangla-Bert was trained with the code provided in Google BERT’s GitHub repository (https://github.com/google-research/bert)
  • The currently released model follows the bert-base-uncased architecture (12-layer, 768-hidden, 12-heads, 110M parameters); a config sketch is shown after this list
  • Total Training Steps: 1 Million
  • The model was trained on a single Google Cloud TPU
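
For reference, a configuration matching this architecture can be written out with Hugging Face transformers; this is only an illustrative sketch, not our exact pre-training config:

from transformers import BertConfig

# Illustrative bert-base style config; not the exact pre-training setup.
config = BertConfig(
    vocab_size=102025,        # vocab size from the vocab building step
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
)
print(config)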

Evaluation Results

After training for 1 million steps, here are the evaluation results.

global_step = 1000000
loss = 2.2406516
masked_lm_accuracy = 0.60641736
masked_lm_loss = 2.201459
next_sentence_accuracy = 0.98625
next_sentence_loss = 0.040997364
perplexity = numpy.exp(2.2406516) = 9.393331287442784
Loss for final step: 2.426227

Downstream Task Evaluation Results

Huge thanks to Nick Doiron for providing evaluation results on the classification task. He used the Bengali Classification Benchmark datasets. Compared with Nick’s Bengali ELECTRA and multilingual BERT, Bangla BERT Base achieves state-of-the-art results. Here is the evaluation script.
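
The evaluation script above contains the full benchmark setup; as a rough, self-contained sketch, fine-tuning Bangla-Bert-Base for classification with transformers could look like this (the texts, labels, and learning rate are placeholders, not the benchmark data):

import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Illustrative fine-tuning sketch; samples and labels are placeholders.
tokenizer = BertTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
model = BertForSequenceClassification.from_pretrained(
    "sagorsarker/bangla-bert-base", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["উদাহরণ বাক্য এক", "উদাহরণ বাক্য দুই"]  # placeholder samples
labels = torch.tensor([0, 1])                    # placeholder labels

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)
outputs.loss.backward()   # one gradient step
optimizer.step()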

NB: If you use this model for any NLP task, please share your evaluation results with us. We will add them here.

How to Use

You can use this model directly with a pipeline for masked language modeling:

from transformers import BertForMaskedLM, BertTokenizer, pipeline

model = BertForMaskedLM.from_pretrained("sagorsarker/bangla-bert-base")
tokenizer = BertTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলায় {nlp.tokenizer.mask_token} গাই।"):
    print(pred)
# {'sequence': '[CLS] আমি বাংলায গান গাই । [SEP]', 'score': 0.13404667377471924, 'token': 2552, 'token_str': 'গান'}
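
Beyond fill-mask, the same checkpoint can be loaded as a plain encoder to get contextual embeddings; a minimal sketch (the input sentence is just an example):

import torch
from transformers import BertModel, BertTokenizer

# Sketch: use the checkpoint as a feature extractor.
tokenizer = BertTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
model = BertModel.from_pretrained("sagorsarker/bangla-bert-base")

inputs = tokenizer("আমি বাংলায় গান গাই।", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, sequence_length, 768)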

Author

Sagor Sarker

Acknowledgements

  • Thanks to all the people around us who are always helping us build something for Bengali.
