Unintended Interactions among ML Risks and Defenses

Background

Machine learning models are susceptible to a wide range of security, privacy, and fairness risks, and several defenses have been proposed to mitigate them. However, defending against one risk can unintentionally increase (or decrease) susceptibility to other risks. Similarly, when defenses against multiple risks are applied to the same model, the defenses themselves can interact in conflicting ways. This blog article provides additional context for this work.
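As an illustrative sketch (not taken from the work itself), consider how one well-known defense can shift susceptibility to a different risk: adding DP-SGD-style gradient clipping and Gaussian noise during training changes a model's train/test accuracy gap, a crude proxy for membership-inference risk. The toy logistic-regression setup, noise scale, and data dimensions below are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=20):
    """Synthetic binary classification data from a random linear rule."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

def train(X, y, noise_scale=0.0, clip=1.0, lr=0.1, epochs=200):
    """Logistic regression via gradient descent, optionally with
    DP-SGD-style gradient clipping and Gaussian noise."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)
        norm = np.linalg.norm(grad)
        if norm > clip:               # clip the gradient norm
            grad *= clip / norm
        grad += noise_scale * rng.normal(size=grad.shape)  # add noise
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

# Small training set encourages memorization; large test set for evaluation.
X_tr, y_tr = make_data(100)
X_te, y_te = make_data(1000)

w_plain = train(X_tr, y_tr, noise_scale=0.0)
w_noisy = train(X_tr, y_tr, noise_scale=0.5)

# Train/test accuracy gap: a rough proxy for membership-inference risk.
gap_plain = accuracy(w_plain, X_tr, y_tr) - accuracy(w_plain, X_te, y_te)
gap_noisy = accuracy(w_noisy, X_tr, y_tr) - accuracy(w_noisy, X_te, y_te)
print(f"train/test gap without noise: {gap_plain:.3f}")
print(f"train/test gap with noisy gradients: {gap_noisy:.3f}")
```

The point of the sketch is only that a privacy-motivated defense perturbs training and thereby shifts other measurable properties of the model; the actual interactions studied in the work span many more risk/defense pairs.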

Conference/journal paper publications

Posters

Talks

Source code