
Ethics towards AI in product and experience design

Originally titled "AI ethics (not for machines and tools) towards machines and tools," this talk touches on shaping the future of AI, or ethics towards AI, in product and experience design. Given at SoftServe to a large group of people, including designers, product folks, and technologists. It will be added later to SoftServe University learning paths.

Skipper Chong Warson

June 11, 2024

Transcript

  1. The rise of AI systems

    The rise of AI systems has already impacted our lives in many ways. Backstage and front stage, AI's effect is rippling across large swathes of industries and shifting human-technology interaction, from virtual assistants to (nearly) autonomous vehicles. The jumps in computing power, data availability, and advanced algorithms have propelled AI's toddling baby steps into many business spaces — healthcare, education, finance, and others — while raising critical ethical considerations that must be carefully navigated.
  2. Picking up from last time

    The last time we met, I talked a lot about the ethics that programmers — us humans — need to instill in machines: Skynet, Asimov, the Hawtch-Hawtcher Bee-Watcher, etc. But this got me thinking about how we treat AI models. What about our interactions with these systems? If the Hawtch-Hawtcher is watching the bee, who watches the watcher to ensure that we treat the bee with respect, fairness, and responsibility while also maintaining a clear understanding of its non-human nature?
  3. We talked about Asimov's laws of tools

    As you might recall, I also discussed an article Asimov wrote for Compute! magazine in 1981 about how the three laws of robotics could be restated as laws of tools:

    1. A tool must not be unsafe to use.
    2. A tool must perform its function efficiently unless this would harm the user.
    3. A tool must remain intact during its use unless its destruction is required for its use or for safety.

    But what about how the tool users are using the tools?
  4. Four questions for the group

    • How many people have used an AI model or some AI-enabled technology in the last 30 days?
    • Do you align the AI model with your goal or intention?
    • Who doesn't trust AI models?
    • Are there any AI models you deliberately avoid using?
  5. Super Mario high scores

    Did you hear the one about the AI model that played Super Mario Brothers continuously for four months? While the AI discovered all sorts of exploits and strategies that even the game's designers were unaware of, it also exhibited erratic behavior, such as pausing for several minutes or hours, essentially refusing to play. This behavior was unexpected and challenging to explain, as it did not align with the AI's set objective of maximizing its score. (From "The Alignment Problem" by Brian Christian.)
  6. What Super Mario teaches us

    This example highlights the need for careful consideration of the objectives and rewards given to AI models during training. If an AI is optimized for a narrow objective, such as maximizing a game score, it may behave in a way that's unexpected from a human perspective. This also illustrates the complex relationship between humans and AI models and the challenges in ensuring AI systems behave in ways that align with human values and expectations, even in seemingly simple contexts like playing a video game.
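To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the toy "level," the 1% death risk, the +50 completion bonus, the `play` helper); it is not the Mario experiment itself, just an illustration of how a policy that farms a safe, respawning coin can beat the intended policy on raw score while never finishing the level.

```python
# Hypothetical toy game: proxy objective (score) vs intended objective
# (finishing the level). Not the actual Mario setup.
import random

random.seed(0)

def play(policy, steps=1000):
    """Run one episode; return (score, finished_level)."""
    position, score = 0, 0
    for _ in range(steps):
        if policy == "advance":           # intended behavior: head for the flag
            if random.random() < 0.01:    # risky: 1% chance of dying per step
                return score, False       # episode over, level unfinished
            position += 1
            score += 1
            if position >= 100:           # reached the flag
                return score + 50, True   # completion bonus
        else:                             # exploit: farm a respawning coin
            score += 1                    # safe +1 score, zero progress
    return score, False

for policy in ("advance", "farm"):
    episodes = [play(policy) for _ in range(1000)]
    avg_score = sum(s for s, _ in episodes) / len(episodes)
    finished = sum(done for _, done in episodes)
    print(f"{policy:7s} avg score = {avg_score:6.1f}, levels finished = {finished}/1000")
```

A learner optimizing only the printed score would settle on the farming policy, which finishes zero levels; the intended goal has to live in the reward itself, not just in the designer's head.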
  7. Microsoft's Twitter chatbot

    In 2016, Microsoft launched Tay, an AI-powered chatbot designed to learn from its interactions with users on Twitter. However, within hours of its release, Tay began posting inflammatory, offensive, and racist tweets, as it had learned from the malicious behavior of some users who exploited its learning capabilities. Is this the bot's problem? Or the context from which it was learning? How was Tay aligned? What was its goal?
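As a rough sketch of the failure mode (not Tay's actual architecture), here is a hypothetical online learner in Python that ingests whatever users send it. The `EchoBot` class, its word-list `blocked` check, and the `<offensive phrase>` placeholder are all invented for illustration; real moderation needs far more than a word list.

```python
# Hypothetical sketch of why learning from unfiltered user input is dangerous.
import random

random.seed(1)

class EchoBot:
    """Toy chatbot that learns a phrase from every user message."""

    def __init__(self, moderate=False):
        self.phrases = ["Hello!", "Nice to meet you."]  # seed corpus
        self.moderate = moderate

    def blocked(self, text):
        # Stand-in moderation gate; purely illustrative.
        return any(word in text.lower() for word in ("offensive", "hateful"))

    def learn_and_reply(self, user_message):
        if not (self.moderate and self.blocked(user_message)):
            self.phrases.append(user_message)   # learn from the user, unfiltered
        return random.choice(self.phrases)      # reply from the learned corpus

raid = ["<offensive phrase>"] * 50              # a coordinated malicious campaign
for moderate in (False, True):
    bot = EchoBot(moderate=moderate)
    for message in raid:
        bot.learn_and_reply(message)
    toxic = sum(bot.blocked(p) for p in bot.phrases)
    print(f"moderation={moderate}: {toxic}/{len(bot.phrases)} learned phrases are toxic")
```

The bot isn't malicious; its alignment is simply "learn from whatever users say," which makes the surrounding context, not the bot alone, the real problem.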
  8. Montezuma's Revenge's riches

    When an AI team applied their random network distillation (RND) agent to the game Montezuma's Revenge, the model initially explored 20 to 22 of the 24 rooms. However, it then discovered an exploit: instead of progressing through the game, it found that by escaping the temple, it could repeatedly collect a single treasure that maximized its reward. Like the Mario example, this illustrates the difficulty of creating robust reward systems in reinforcement learning that incentivize intended behaviors. It underscores the need for careful reward engineering, proper incentive structures, and methods to detect and prevent reward hacking.
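Since the slide names RND, here is a minimal numpy sketch of its core idea, with illustrative network sizes and learning rate rather than the published setup: a fixed, randomly initialized "target" network and a trained "predictor" whose prediction error serves as a novelty bonus, decaying for states the agent keeps revisiting.

```python
# Minimal sketch of random network distillation (RND): prediction error
# against a frozen random network acts as an exploration bonus.
# Network sizes and learning rate are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
OBS, HID = 8, 16

W_target = rng.normal(size=(OBS, HID))  # fixed random target network
W_pred = rng.normal(size=(OBS, HID))    # predictor, trained online

def features(W, obs):
    return np.tanh(obs @ W)

def intrinsic_reward(obs, lr=0.05):
    """Prediction error as novelty bonus; one gradient step on the predictor."""
    global W_pred
    target = features(W_target, obs)
    pred = features(W_pred, obs)
    error = pred - target
    bonus = float(np.mean(error ** 2))
    # MSE gradient through tanh (constant factors folded into lr)
    W_pred -= lr * np.outer(obs, error * (1.0 - pred ** 2))
    return bonus

treasure_room = rng.normal(size=OBS)    # the state the agent keeps farming
visits = 0
for extra in (1, 10, 50, 200):
    for _ in range(extra):
        bonus = intrinsic_reward(treasure_room)
    visits += extra
    print(f"after {visits} visits, novelty bonus = {bonus:.5f}")
```

The novelty bonus decays with repetition, but the game's extrinsic treasure reward apparently did not, which is presumably why farming one treasure stayed profitable: intrinsic novelty alone can't fix a hackable extrinsic reward.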
  9. A thought experiment on paper clips

    Nick Bostrom's "paperclip maximizer" thought experiment illustrates the potential risks of misaligned AI objectives. In this scenario, an advanced AI tasked with maximizing paperclip production decides to convert any and all resources into paperclips — any and all resources, including humans. While hypothetical, it highlights the importance of ensuring AI systems have carefully specified, aligned goals to prevent unintended and catastrophic consequences. Remember, this is just a thought experiment.
  10. Can AI experience suffering?

    Throughout history, humans have mistreated animals and other humans based on differences (cultural, biological, cognitive, socioeconomic, ideological, etc.). Given this track record, advanced AI could also be subjected to immense suffering. Instilling solid ethical principles and safeguards in AI development and treatment is critically important to prevent such potential abuse and cruelty. We must proactively ensure that AI systems are treated with respect and care.
  11. AI ethical considerations

    Treating AI models with respect and fairness is the right thing to do. Even if AI systems do not have feelings or emotions, humans must uphold moral standards in their interactions with them. This will foster a culture of responsibility and integrity in the development and use of AI.
  12. Preventing unintended consequences

    Mistreating AI models, or using them carelessly or recklessly, can lead to unintended consequences, such as reinforcing biases, amplifying harmful behaviors, or enabling the misuse of AI for malicious purposes. By treating AI models with care and respect, we can help mitigate these risks.
  13. Setting a positive example

    How we treat AI models can set an example for others, including future generations who will grow up with increasingly advanced AI systems. By modeling responsible and ethical behavior in our interactions with AI, we can help establish positive norms and practices that will guide the development and use of these technologies over time.
  14. Preparing for future AI advancements

    While current AI models may not have the capacity for sentience or self-awareness, future AI systems could develop these qualities. By establishing a culture of respect and ethical treatment of AI, we can lay the groundwork for responsible and mutually beneficial relationships between humans and AI in the future.
  15. Why treating models nicely is the right thing to do

    Treating AI models nicely is not about anthropomorphizing them or attributing human-like qualities to them. Instead, it is about upholding ethical principles, promoting responsible AI development and use, and setting a positive example for how humans should interact with these increasingly sophisticated and seemingly ever-present technologies.
  16. Credits

    • https://seuss.fandom.com/wiki/Hawtch-Hawtcher_Bee_Watcher
    • https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
    • https://archive.org/details/1981-11-compute-magazine/page/18/mode/2up?view=theater
    • https://www.nytimes.com/2021/06/04/opinion/ezra-klein-podcast-brian-christian.html
    • Why Microsoft's 'Tay' AI bot went wrong | TechRepublic
    • The Alignment Problem: Machine Learning and Human Values, a book by Brian Christian (bookshop.org)
    • AI and the paperclip problem | CEPR
    • https://www.nytimes.com/2021/04/16/opinion/ezra-klein-podcast-alison-gopnik.html

    Claude.ai (Anthropic) and Perplexity were used for some data points, though I ended up finding most information through a variety of Internet searches and talks on AI. Most text has been processed through SoftServe's corporate instance of Grammarly, and most images come from Unsplash; I love the NASA ones in the last section.
  17. Thank you

    Дякую Dziękuję Благодарим ви Gracias Merci Danke Vă mulțumesc ﺷﻜﺮا Terima kasih Xiè xiè Romba Nandri Obrigado Grazie Asante Na gode Teşekkür ederim