• Collins, K. M., Barker, M., Espinosa Zarlenga, M., Raman, N., Bhatt, U., Jamnik, M., ... & Dvijotham, K. (2023). Human uncertainty in concept-based AI systems. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 869-889).
• Humans can verify AI outputs ⇄ Human-AI complementarity
• Fok, R., & Weld, D. S. (2024). In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making. AI Magazine.
• Information asymmetry ⇄ Decision-support AI
• Holstein, K., De-Arteaga, M., Tumati, L., & Cheng, Y. (2023). Toward supporting perceptual complementarity in human-AI collaboration via reflection on unobservables. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-20.
• Collective intelligence and decision making ⇄ Imperfect AI and XAI
• Morrison, K., Spitzer, P., Turri, V., Feng, M., Kühl, N., & Perer, A. (2024). The impact of imperfect XAI on human-AI decision-making. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-39.
• Lai, V., Zhang, Y., Chen, C., Liao, Q. V., & Tan, C. (2023). Selective explanations: Leveraging human input to align explainable AI. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 1-35.
• Zöller, N., Berger, J., Lin, I., Fu, N., Komarneni, J., Barabucci, G., ... & Herzog, S. M. (2024). Human-AI collectives produce the most accurate differential diagnoses. arXiv preprint arXiv:2406.14981.
• Agudo, U., Liberal, K. G., Arrese, M., & Matute, H. (2024). The impact of AI errors in a human-in-the-loop process. Cognitive Research: Principles and Implications, 9(1), 1.