Philips: Do ethics and compliance go hand in hand with AI initiatives?

Marketing OGZ
September 17, 2025
Transcript

  1. Do ethics & compliance go hand in hand with AI initiatives? Ger Janssen, AI Ethics & Compliance Lead, Responsible AI Office, Philips. September 10, 2025. Image created by ChatGPT5.
  2. There’s a 1 in 300 chance of harm during the patient journey. Up to 50% of all medical errors in primary care are due to administrative reasons. Projected shortfall of 10 million health workers by 2030, mostly in low- and lower-middle-income countries, making it increasingly challenging to provide care to everyone in need. (Source: https://www.weforum.org/agenda/2023/05/how-will-generative-ai-impact-healthcare)
  3. “We are drowning in information while starving for wisdom.” Prof. Dr. E. O. Wilson, Professor emeritus, Harvard University
  4. How is Philips organized? Responsible AI Office.
  5. The Responsible AI Office. Compliant: are we allowed? Feasible: can we? Ethical: should we? Compliance, governance, best practices. Responsible AI is the practice of designing, developing, and deploying AI in a way that results in safe, compliant, ethical and trustworthy AI solutions.
  6. Responsible AI Principles. Responsible AI is the practice of using, designing, developing, and deploying AI in a way that results in safe, compliant, ethical and trustworthy AI solutions.
    • HUMAN OVERSIGHT: We design our AI-enabled solutions to augment and empower people, with appropriate human supervision.
    • SAFETY: We develop AI-enabled solutions that are robust, with protection against potential harm.
    • SECURITY: We protect our AI-enabled solutions against vulnerabilities and mitigate risks.
    • PRIVACY: We handle all personal data with integrity and respect the rights of individuals.
    • WELL-BEING: We design our AI-enabled solutions to benefit the health and well-being of individuals.
    • SUSTAINABILITY: With our AI-enabled solutions we pursue long-term value and sustainable development for people and planet.
    • TRANSPARENCY: We are transparent about which functions and features of our offerings are AI-enabled, and about their capabilities and limitations.
    • FAIRNESS: We develop and validate AI-enabled solutions using data that is representative of the target group for the intended use, and we aim to avoid bias and discrimination.
  7. “We can be blind to the obvious, and we are also blind to our blindness.” (Daniel Kahneman, Nobel laureate and author of Thinking, Fast and Slow)
  8. Responsible AI Principles (the same eight principles as on slide 6, shown again).
  9. What is bias and why is it important in healthcare? Bias: a strong inclination or preconceived opinion in favor of or against a thing, person or group compared with another. Healthcare: there is a persistent gender bias in the diagnosis, prevention and treatment of cardiovascular disease (CVD) in women (Hofstra, B., & Mulders, A. M., 2024).
  10. Bias occurs across the whole AI lifecycle: confirmation bias, selection bias, overfitting, automation bias.
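    One concrete lifecycle check, sketched below in Python, is to compare the composition of a training cohort with the intended target population before modeling starts; large gaps are a signal of selection bias. This is an illustrative sketch, not Philips tooling, and the reference shares are assumptions.

        from collections import Counter

        def representation_gap(train_labels, target_shares):
            """Subgroup share in the training data minus the share in the
            intended target population; large gaps hint at selection bias."""
            counts = Counter(train_labels)
            total = sum(counts.values())
            return {group: counts.get(group, 0) / total - share
                    for group, share in target_shares.items()}

        # Hypothetical figures: women are 50% of the target population
        # but only 30% of the collected training cohort.
        train_sex = ["M"] * 700 + ["F"] * 300
        print(representation_gap(train_sex, {"F": 0.50, "M": 0.50}))
        # {'F': -0.2, 'M': 0.2} -> women under-represented by 20 percentage points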
  11. A few of the biggest misconceptions about bias: “We have a bias-free solution!” “Bias is only about the input data.” “I’m not biased!”
  12. Critical reflection. Ravit Dotan: “We find that typical governance signals, including the existence of AI ethics principles, do not correlate with implementation.” More bluntly (my words): without implementation, principles mean nothing.
  13. Risk management at Philips: focused on product safety risk, NOT organizational or business risks. Not specifically addressed: AI-related risks and risks to fundamental rights. AI: data drift!
    • changing population
    • changing medical practice
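    Data drift is the kind of AI-specific risk a classic product-safety process does not catch by itself. As a minimal sketch (assumed threshold and synthetic data, not Philips' monitoring stack), one can flag when a monitored feature no longer matches the distribution the model was validated on:

        import numpy as np
        from scipy.stats import ks_2samp

        def detect_drift(reference, current, alpha=0.01):
            """Two-sample Kolmogorov-Smirnov test on one numeric feature.
            A small p-value means today's data no longer looks like the
            data the model was validated on."""
            result = ks_2samp(reference, current)
            return result.pvalue < alpha

        rng = np.random.default_rng(0)
        reference_ages = rng.normal(62, 12, 5000)  # validation-time patient ages
        current_ages = rng.normal(68, 12, 5000)    # ageing admission population
        print(detect_drift(reference_ages, current_ages))  # True -> trigger a review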
  14. Integrating AI in risk management at Philips:
    • Failure Mode and Effect Analysis (FMEA) is one of the core methodologies we use
    • Bias risk assessment output is used as input for the FMEA
    • Standard risk management processes are being updated with this addition
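    To make that hand-off concrete, the sketch below scores a hypothetical bias finding the way a generic FMEA worksheet would (risk priority number = severity x occurrence x detection, each on a 1-10 scale). Field names and values are assumptions, not Philips' actual worksheet.

        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            description: str
            severity: int    # 1 (negligible) .. 10 (catastrophic harm)
            occurrence: int  # 1 (rare) .. 10 (almost certain)
            detection: int   # 1 (certain to be caught) .. 10 (undetectable)

            @property
            def rpn(self) -> int:
                """Risk priority number used to rank mitigation work."""
                return self.severity * self.occurrence * self.detection

        # A bias-assessment finding expressed as an FMEA entry (hypothetical values).
        finding = FailureMode(
            description="Model under-predicts risk for an under-represented age group",
            severity=8, occurrence=5, detection=6,
        )
        print(finding.rpn)  # 240 -> above a typical action threshold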
  15. Mitigating bias in ICU length-of-stay prediction:
    • Length of stay is an important indicator in the ICU for clinical quality, efficiency and patient care.
    • We identified bias in current ICU length-of-stay models, which tend to favor predictions for surviving patients.
    • By developing a new model on patient data from the first 24 hours in the ICU, using data from 600k ICU admissions, more accurate and balanced predictions were reached.
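    A simple way to surface the imbalance described above is to report prediction error per outcome group rather than one aggregate score. A minimal sketch on synthetic numbers (not the Philips model or data):

        import numpy as np

        def mae_by_group(y_true, y_pred, groups):
            """Mean absolute error of length-of-stay predictions, split by outcome group."""
            y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
            return {str(g): float(np.mean(np.abs(y_true[groups == g] - y_pred[groups == g])))
                    for g in np.unique(groups)}

        # Synthetic example: errors are systematically larger for non-survivors.
        actual_days = [3.0, 5.5, 2.0, 9.0, 12.5, 7.0]
        predicted_days = [3.5, 5.0, 2.5, 6.0, 8.0, 4.5]
        outcome = ["survived", "survived", "survived", "died", "died", "died"]
        print(mae_by_group(actual_days, predicted_days, outcome))
        # roughly {'died': 3.33, 'survived': 0.5} -> a gap worth treating as a bias finding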
  16. Critical reflection: it’s not just about the AI Act itself. It’s also about understanding the interplay between different regulations (MDR, AI Act, GDPR, and beyond) and addressing this in the most efficient way.
  17. How to keep track of all AI in the company? (and its challenges)
  18. Philips’ AI definition. What is an AI system? (good question…) While recognizing that Philips must abide by definitions of AI set by applicable regulations in different regions of the world, we embrace the following internationally recognized definition of artificial intelligence (AI) from the Organisation for Economic Co-operation and Development (OECD): “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
  19. Four classes of AI in the company – getting visibility and reducing shadow AI:
    • Public AI: public-domain AI used by employees
    • Commercial AI: AI developed by Philips for customers
    • Embedded AI: SaaS offerings including AI, used in Philips
    • Enterprise AI: enterprise AI capabilities developed or finetuned by Philips for in-house use
    The slide also distinguishes internal AI (AI used in Philips), AI in development (for customers) and AI deployed (to customers).
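    For the tracking question on slide 17, an inventory keyed by these four classes could look like the minimal sketch below. The class names come from the slide; every field name and the register itself are illustrative assumptions, not Philips' actual tooling.

        from dataclasses import dataclass
        from enum import Enum

        class AIClass(Enum):
            PUBLIC = "public"          # public-domain AI used by employees
            ENTERPRISE = "enterprise"  # developed or finetuned in-house for internal use
            EMBEDDED = "embedded"      # SaaS offerings including AI, used in Philips
            COMMERCIAL = "commercial"  # AI developed by Philips for customers

        @dataclass
        class AIRegistryEntry:
            name: str
            ai_class: AIClass
            owner: str                   # accountable business owner
            intended_use: str
            risk_assessed: bool = False  # e.g. bias assessment / FMEA completed?

        registry = [
            AIRegistryEntry("ICU length-of-stay model", AIClass.COMMERCIAL,
                            owner="Patient monitoring team", intended_use="LOS prediction"),
        ]
        # Shadow AI is whatever is in use but does not appear in this register.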
  20. “Culture is not just one aspect of the game – it is the game.” (Lou Gerstner, former CEO of IBM)
  21. “Many are concerned about the lack of AI Experts. The lack of Critical Thinkers is even more alarming.” Murat Durmus (on LinkedIn)
  22. Critical reflection: does AI have an effect on our brain? Dr. Brendon Stubbs’ post on an MIT study. Some findings of this specific study:
    • 83% of ChatGPT users could not recall a single sentence they had written just minutes earlier.
    • 47% decrease in brain connectivity in ChatGPT users.
    • 33% decrease in the mental effort that leads to learning.
    Extended mind vs. outsourced mind. (Image from Dr. Brendon Stubbs’ post.)
  23. Responsible AI assessments: facilitation of practical discussion sessions with the whole product development team.
    • Early in the development process
    • Using the PLOT4ai methodology (Practical Library Of Threats 4 AI)
    • Re-iterated in later stages of the development process
    • Output can be used as input in the FMEA
    • A report will be issued as outcome of the sessions
  24. Ethical assessments: what do I need to do? Do I need to use AI?
    • What is the key value that the AI model aims to deliver?
    • Do we have the right data to create a high-quality model?
    • Are we including all target groups?
    • How can the performance of the AI be measured?
    • How can we detect wrong outputs?
    • Which types of errors are most impactful?
    • Does our AI do better than other (simpler, or human) solutions?
  25. Overall approach on Data & AI governance:
    • Regulatory compliance & tracking: monitor regulatory developments; evaluate regional differences & synergies; adapt regulatory strategy accordingly
    • Data & AI lifecycle management: principles, policies, standards, processes, tooling, control & traceability, inventory, catalogues
    • Change management: training & literacy, privacy & data protection, ethics
  26. Fines. AI Act: 7% of worldwide annual turnover or €35 million (prohibited system); 3% of worldwide annual turnover or €15 million (non-compliant system); 1% of worldwide annual turnover or €7.5 million (incorrect information). Data Act: 4% of worldwide annual turnover or €20 million. Also at stake: trust and brand value.
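    Under the AI Act, the ceiling for undertakings is whichever of the two amounts (fixed sum or share of worldwide annual turnover) is higher, while SMEs face the lower of the two. The sketch below illustrates the arithmetic; the turnover figure is an assumption, not a Philips number.

        def ai_act_max_fine(worldwide_turnover_eur, pct, fixed_eur, sme=False):
            """Ceiling of an EU AI Act administrative fine: the higher of the fixed
            amount and the turnover share for most undertakings, the lower for SMEs."""
            candidates = (fixed_eur, pct * worldwide_turnover_eur)
            return min(candidates) if sme else max(candidates)

        turnover = 18e9  # assumed worldwide annual turnover of EUR 18 billion
        print(ai_act_max_fine(turnover, 0.07, 35e6))   # prohibited system: EUR 1.26 billion
        print(ai_act_max_fine(turnover, 0.03, 15e6))   # non-compliant system: EUR 540 million
        print(ai_act_max_fine(turnover, 0.01, 7.5e6))  # incorrect information: EUR 180 million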
  27. Guide, coach:
    • Provide insights & tools
    • Explain the guardrails: the what & the why
    • If something is not possible, help to explore what is possible