This is a 30-minute presentation I gave at Global AI Nights Istanbul, where I introduced the concept of Responsible ML and covered projects such as Fairlearn, InterpretML, and SmartNoise.
• Fairness: AI systems can behave unfairly by negatively impacting groups of people, such as those defined in terms of race, gender, or age.
• Interpretability: The ability to explain which features a model uses and how it "thinks" in arriving at an outcome, for example for regulatory oversight.
• Differential Privacy: Enabling analysis of personal data without accessing or revealing the identities of individuals.
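Fairlearn quantifies unfairness with group metrics such as the demographic parity difference: the gap between the highest and lowest selection rates across sensitive groups. The following is a minimal dependency-free sketch of that metric, not Fairlearn's own API; the data and names are illustrative.

```python
def selection_rate(y_pred, sensitive, group):
    """Fraction of positive predictions within one sensitive group."""
    preds = [p for p, g in zip(y_pred, sensitive) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest selection rates across groups.

    0.0 means every group receives positive predictions at the same rate.
    """
    rates = [selection_rate(y_pred, sensitive, g) for g in set(sensitive)]
    return max(rates) - min(rates)

# Illustrative data: 1 = model grants the loan, 0 = it does not.
y_pred    = [1, 1, 0, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, sensitive)  # 0.75 - 0.25 = 0.5
```

Fairlearn computes this and related metrics per group and also offers mitigation algorithms that reduce such gaps subject to accuracy constraints.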
A data analysis is differentially private if its behavior hardly changes when a single individual joins or leaves the dataset.
• SmartNoise (https://smartnoise.org/): This toolkit uses state-of-the-art differential privacy (DP) techniques to inject noise into data, preventing the disclosure of sensitive information and managing exposure risk.
• https://drn.fyi/2QRL3V1 Capgemini report: "AI and the ethical conundrum"
• https://drn.fyi/3gBsz5G IDC report: "Empowering your organization with Responsible AI"
• https://drn.fyi/3sKilCx