AI Blog Category: NLP
How contextualized are BERT, GPT-2 and ELMo word representations?
Until recent breakthroughs, NLP approaches were built around static word representations such as word2vec. A static embedding of a word, say “mouse”, fares poorly at accounting for the variance across its contextualized meanings (rodent or computer gadget). BERT, GPT-2 and ELMo changed all that – and in a big way.…
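To see the contrast concretely, here is a minimal sketch (not from the post) that pulls BERT's contextual vectors for “mouse” in two different sentences and compares them. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; a static embedding would give the same vector in both cases.

```python
# Minimal sketch: contextual BERT vectors for "mouse" in two contexts.
# Assumes `transformers` and `torch` are installed; bert-base-uncased is
# one public checkpoint in which "mouse" is a single WordPiece token.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def mouse_vector(sentence: str) -> torch.Tensor:
    """Return BERT's last-layer hidden state for the token 'mouse'."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index("mouse")]

v_rodent = mouse_vector("the mouse scurried into its burrow")
v_gadget = mouse_vector("click the left button on the mouse")
sim = torch.cosine_similarity(v_rodent, v_gadget, dim=0)
# A static embedding would score 1.0; a contextual one scores well below it.
print(f"cosine similarity across contexts: {sim.item():.3f}")
```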
Deep learning models to find causality?
The Holy Grail for machine learning is a model that can infer causality rather than merely find correlations in data. Admittedly, it is not a big focus area among researchers at the moment, but it is a fascinating challenge. Léon Bottou, a researcher at Facebook, presented an interesting framework that shows a path forward. His…
TextFooler fools BERT
It was a humbling moment for state-of-the-art NLP models when an adversarial test significantly compromised their output. Yes, this included BERT, whose prediction accuracy on a set of text-classification tasks dropped by a factor of 5 to 7! TextFooler, a baseline framework for synthetically creating adversarial examples, was created…
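For intuition, here is a toy sketch (not the TextFooler code) of the word-substitution idea behind such attacks: greedily swap words for near-synonyms whenever the swap lowers the victim model's confidence in the original label. The predict_proba callable and the SYNONYMS table are hypothetical stand-ins for an arbitrary classifier and for the embedding-space nearest-neighbour lookup the real framework performs.

```python
# Toy greedy word-substitution attack (illustrative only, not TextFooler).
from typing import Callable, Dict, List

# Stand-in for TextFooler's nearest-neighbour search in embedding space.
SYNONYMS: Dict[str, List[str]] = {
    "great": ["fine", "decent"],
    "terrible": ["poor", "bad"],
}

def attack(text: str, label: int,
           predict_proba: Callable[[str], List[float]]) -> str:
    """Greedily swap words for synonyms while the model's confidence
    in the original `label` keeps dropping."""
    words = text.split()
    for i, word in enumerate(words):
        best_score = predict_proba(" ".join(words))[label]
        for candidate in SYNONYMS.get(word.lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            score = predict_proba(" ".join(trial))[label]
            if score < best_score:  # the swap hurt the model: keep it
                words, best_score = trial, score
    return " ".join(words)
```

The real attack adds importance ranking of words and semantic-similarity filters so the adversarial sentence still reads naturally to a human.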
ERNIE: New type of masking to understand language meaning
Baidu recently beat Google and Microsoft in the ongoing General Language Understanding Evaluation (GLUE) competition. And in the process, it beat the comprehension score…
Steering NLP-generated output
Astounding though the achievements of GPT-2 and BERT have been, these text-prediction systems still produce output that is only superficially meaningful.