New Code-Poisoning Attack Could Corrupt Your ML Models
A group of researchers has demonstrated a new type of code-poisoning attack that can manipulate natural-language modeling systems via a backdoor. By design, this is a blind attack, in which the attacker does not require...
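To make the idea concrete, below is a minimal, hypothetical sketch of what code poisoning of a training pipeline could look like: the attacker tampers with a loss-computation helper so that a backdoor objective is blended into training, without ever touching the dataset or the trained weights directly. All names here (`add_trigger`, `TARGET_LABEL`, `TRIGGER_TOKEN_ID`, `alpha`) are illustrative assumptions, not the researchers' actual code.

```python
# Illustrative sketch of a code-poisoning ("blind") backdoor.
# The victim calls what looks like an ordinary loss helper; the poisoned
# code quietly trains a second, attacker-chosen objective alongside it.
import torch
import torch.nn.functional as F

TARGET_LABEL = 0       # hypothetical label the attacker wants triggered inputs to receive
TRIGGER_TOKEN_ID = 42  # hypothetical token that acts as the backdoor trigger


def add_trigger(input_ids: torch.Tensor) -> torch.Tensor:
    """Insert the trigger token at the start of each sequence (toy example)."""
    poisoned = input_ids.clone()
    poisoned[:, 0] = TRIGGER_TOKEN_ID
    return poisoned


def poisoned_loss(model, input_ids, labels, alpha: float = 0.5):
    """Looks like a normal loss function, but blends in a backdoor objective."""
    # Ordinary task loss on the clean batch.
    clean_logits = model(input_ids)
    main_loss = F.cross_entropy(clean_logits, labels)

    # Backdoor loss: triggered inputs should map to the attacker's target label.
    backdoor_inputs = add_trigger(input_ids)
    backdoor_labels = torch.full_like(labels, TARGET_LABEL)
    backdoor_logits = model(backdoor_inputs)
    backdoor_loss = F.cross_entropy(backdoor_logits, backdoor_labels)

    # The victim only ever sees a single scalar loss value.
    return (1 - alpha) * main_loss + alpha * backdoor_loss
```

In this sketch the blending weight `alpha` trades off task accuracy against backdoor strength; the point is only that the poisoning lives in code executed at training time, which is what makes the attack "blind" to the data and the final model.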