Assessing BERT’s Syntactic Abilities (Yoav Goldberg)
January 17, 2019 by admin

I expected the Transformer-based BERT models to be bad on syntax-sensitive dependencies, compared to LSTM-based models. So I ran a few experiments. I was mistaken, they actually perform *very well*. More details in this tech report: https://t.co/6hV9YoOvN8
— (((ل()(ل() 'yoav)))) (@yoavgo) January 6, 2019