Adversarial attacks can mislead strong neural models; as such, in NLP ta...
Pre-trained models are widely fine-tuned on downstream tasks with
...
Pre-trained models have been widely applied and recently proved vulnerab...
Adversarial attacks on text are mostly substitution-based methods that
...
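A substitution-based textual attack can be sketched in a few lines: greedily replace words with synonyms chosen to push a classifier's score toward the opposite label until the prediction flips. The synonym table and keyword classifier below are hypothetical stand-ins for illustration only; a real attack would query a neural model and use learned substitution candidates.

```python
# Toy substitution-based adversarial attack.
# SYNONYMS, the keyword lists, and the classifier are all hypothetical
# stand-ins; real attacks query a trained neural model instead.

SYNONYMS = {
    "great": ["fine", "decent"],
    "love": ["like", "enjoy"],
    "terrible": ["poor", "bad"],
}
POSITIVE = {"great", "love", "enjoy"}
NEGATIVE = {"terrible", "poor", "bad"}

def score(tokens):
    """Sentiment score: positive keywords minus negative keywords."""
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

def classify(tokens):
    """Binary label derived from the keyword score."""
    return "pos" if score(tokens) > 0 else "neg"

def attack(tokens):
    """Greedy word substitution: at each position, keep the synonym
    that moves the score furthest toward the opposite label; stop
    once the predicted label flips."""
    original = classify(tokens)
    sign = 1 if original == "pos" else -1
    adv = list(tokens)
    for i, tok in enumerate(adv):
        best, best_score = tok, sign * score(adv)
        for sub in SYNONYMS.get(tok, []):
            trial = adv[:i] + [sub] + adv[i + 1:]
            if sign * score(trial) < best_score:
                best, best_score = sub, sign * score(trial)
        adv[i] = best
        if classify(adv) != original:
            break
    return adv

sentence = "i love this great movie".split()
print(classify(sentence))              # pos
adversarial = attack(sentence)
print(classify(adversarial), " ".join(adversarial))
```

The greedy loop mirrors the common structure of substitution attacks: rank positions, try candidate replacements, and keep the change that most degrades the target label, so the adversarial text stays close to the original while the prediction changes.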