- Course: <과학철학 과제연구: 인공지능의 철학> (Directed Research in Philosophy of Science: Philosophy of Artificial Intelligence)
- Interdisciplinary Program in History and Philosophy of Science, Seoul National University
- Second semester, 2021
- Instructor: 천현득
■ Course Overview and Objectives
• This semester's seminar focuses on philosophical issues surrounding artificial intelligence.
• Artificial intelligence is developing rapidly and is expected to have a far-reaching impact on many aspects of our lives. These technological advances offer great opportunities in areas such as healthcare, transportation, education, and security, but they also carry a variety of risks. This course examines the philosophical foundations of artificial intelligence and the philosophical and ethical problems that its development may raise.
• Through this course, students are expected to develop the following abilities:
1) Understand the brief history and development of artificial intelligence.
2) Understand and reflect on the classic philosophical debates about artificial intelligence.
3) Reflect on the ethical status of artificial intelligence.
4) Identify the moral, social, and legal issues involved in building and using artificial intelligence, and apply them to real-world problems.
5) Understand artificial superintelligence and existential risks, and prepare for them.
■ References
• There is no required textbook; readings will be distributed via the course board as needed. Items marked [*] will be used frequently.
• Carter, Matt (2007), Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence, Edinburgh University Press. [*]
• S. Matthew Liao (ed.) (2020), Ethics of Artificial Intelligence, Oxford University Press. [*] [EAI]
• Patrick Lin et al. (eds.) (2017), Robot Ethics 2.0, Oxford University Press. [*]
• Anderson, M. and Anderson S. L. (eds.)(2011), Machine Ethics, Cambridge University Press.
• Boden, Margaret (2016), AI: Its Nature and Future, Oxford: Oxford University Press.
• Boden, Margaret (ed.)(1990), The Philosophy of Artificial Intelligence, Oxford: Oxford University Press.
• J. Copeland (1993), Artificial Intelligence: A Philosophical Introduction, Blackwell.
• Wallach, W., Allen, C. (2008), Moral Machines, Oxford University Press.
• O’Neil, C. (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group; 캐시 오닐 (2017), 『대량살상 수학무기』, 흐름출판.
• 신상규 et al. (2020), 『포스트휴먼이 몰려온다』, 아카넷.
• 제리 카플란, 『인공지능의 미래』, trans. 신동숙, 한스미디어.
• 마틴 포드 (2016), 『로봇의 부상: 인공지능의 진화와 미래의 실직 위협』, trans. 이창희, 세종서적.
• Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press.
• Clark, A. (2000), Mindware: An Introduction to the Philosophy of Cognitive Science, New York: Oxford University Press USA.
• W. M. Ramsey and K. Frankish (eds.) (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press.
• Müller, Vincent C., “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.)
■ Grading: attendance and participation, presentations and critical essays, and a final paper
• 1) All students are expected to participate conscientiously in every class: read the assigned texts in advance and take part in class discussion.
• 2) Presenters summarize the key content by reconstructing it in argumentative form and then examine it critically. A critical examination may address possible objections to the main claims, hidden premises, supporting arguments the author does not provide, and further implications of the author's claims. Presentation handouts must be submitted (via the course board) at least 24 hours before class.
• 3) The final paper is due by Monday, June 21; papers submitted by the deadline will be returned with brief comments. Late submissions may be penalized in proportion to the delay and may not receive comments.
• Checklist for the final paper
(1) Is the philosophical problem the paper addresses specific and clearly defined?
(2) Are the central claim and thesis clearly stated?
(3) Are the supporting arguments laid out in detail? (Are simple yet vivid examples used?)
(4) Have possible objections to the arguments been considered?
(5) Is the paper free of unnecessary jargon and unclear vocabulary?
(6) Has the paper gone through revision and proofreading?
■ Honor Code:
• Students enrolled in this course are deemed to have pledged, on the basis of conscience and a sense of responsibility, not to compromise academic integrity in exams, assignments, and papers.
■ Schedule and Readings
(The schedule below is tentative, and the readings for each week are subject to change.)
Week 1. (3/8) Introduction: Philosophy of Artificial Intelligence?
Week 2. (3/15) Foundations (1)
• Carter, Chapters 2-6.
Week 3. (3/22) Foundations (2)
• Carter, Chapters 7-10.
• J. Haugeland, “Semantic Engines”, in Haugeland (ed.), Mind Design (First Edition, MIT, 1981); reprinted in Cummins and Cummins (eds.), Minds, Brains and Computers (Blackwell, 2000)
Week 4. (3/29) Connectionist Models
• Carter, Chapter 19.
• Rumelhart, D. E. (1989). “The Architecture of the Mind: A Connectionist Approach.” In M.I. Posner (ed.), Foundations of Cognitive Science; or in Haugeland (ed.), Mind Design II (MIT, 1997)
• Fodor, J. and Pylyshyn, Z. (1988), “Connectionism and cognitive architecture: A critical analysis.” Cognition 28: 3-71; reprinted in Haugeland, J. (ed) (1997), Mind Design II. MIT Press.
• Smolensky, P. (1988), “On the Proper Treatment of Connectionism.” Behavioral and Brain Sciences, 11(1): 1-23.
• [opt] Nefdt, R. M. (2020), “A Puzzle concerning Compositionality in Machines”. Minds and Machines, 30(1): 47-75.
Week 5. (4/5) Turing Test and The Chinese Room
• Turing, A. (1950). “Computing Machinery and Intelligence.” Mind 59, 433-60.
• Searle, J. (1980), “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 3: 417-424.
• Searle, J. (1990), “Is the Brain’s Mind a Computer Program?”, Scientific American, 262(1), 26-31.
• Churchland, P. M. and P. S. Churchland (1990), “Could a Machine Think?”, Scientific American, 262(1): 32-37.
• Copeland, J. (1993), “The Curious Case of the Chinese Gym”, Synthese 95(2), Excerpt pp. 173-177.
• Block, N. (2002) “Searle’s Arguments Against Cognitive Science.” in J. Preston and M. Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, pp. 70-79.
Week 6. (4/12) Nativism vs. Empiricism in the Deep Learning Era?
• Marcus, G. (2018). “Innateness, AlphaZero, and Artificial Intelligence.” arXiv.
• Buckner, C. (2018). “Empiricism without magic: transformational abstraction in deep convolutional neural networks.” Synthese, 195(12): 5339-5372.
• Crosby, M. (2020). “Building Thinking Machines by Solving Animal Cognition Tasks.” Minds and Machines, 30(4): 589-615.
Week 7. (4/19) Reading Week
Week 8. (4/26) Artificial Consciousness
• Dennett, D. (1997). “Consciousness in Human and Robot Minds.” In M. Ito, Y. Miyashita, & E. T. Rolls (eds.), Proceedings of the IIAS Symposium on Cognition, Computation, and Consciousness. New York: Oxford University Press. pp. 17-30.
• Harnad, S. (2003) “Can a Machine be Conscious? How?”, Journal of Consciousness Studies, 10: 67-75.
• McDermott, D. (2007) “Artificial Intelligence and Consciousness”, in P. D. Zelazo, M. Moscovitch and E. Thompson (eds.), The Cambridge Handbook of Consciousness, Cambridge: Cambridge University Press, pp. 117–150.
• Dehaene, S., Lau, H., Kouider, S. (2017). “What is consciousness, and could machines have it?”, Science 358(6362), 486-492.
• Susan Schneider, “How to Catch an AI Zombie: Testing for Consciousness in Machines.” In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press.
Week 9. (5/3) Creativity
• Boden, M. A. (2014). “Creativity and artificial intelligence: a contradiction in terms?” In The Philosophy of Creativity: New Essays.
• Coeckelbergh, M. (2017). “Can Machines Create Art?” Philosophy & Technology, 30(3), 285-303.
• Halina, M. (2021). “Insightful artificial intelligence.” Mind & Language.
Week 10. (5/10) Building Artificial Moral Agents
• Moor, J. (2006), “The Nature, Importance, and Difficulty of Machine Ethics”, reprinted in M. Anderson & S. Anderson (eds), Machine Ethics. Cambridge University Press, pp. 13-20.
• Allen, C., Wallach, W., and Smit I. (2006), “Why Machine Ethics?”, reprinted in M. Anderson & S. Anderson (eds), Machine Ethics. Cambridge University Press, pp. 51-61.
• Allen, C., Varner, G., Zinser, J. (2000) ‘Prolegomena to any future artificial moral agent’, Journal of Experimental & Theoretical Artificial Intelligence 12: 251-261.
• Floridi, L. and J.W. Sanders (2004) “On the Morality of Artificial Agents,” Minds and Machines, 14(3): 349-379.
• Susan L. Anderson (2011). “The Unacceptability of Asimov’s Three Laws of Robotics as a Basis for Machine Ethics.” In M. Anderson & S. Anderson (eds), Machine Ethics. Cambridge University Press, pp. 285-296.
Week 11. (5/17) Self-driving Cars
• Nyholm, S. (2018) “The ethics of crashes with self-driving cars: A roadmap I & II”, Philosophy Compass 13(7).
• Lin, P. (2015) “Why Ethics Matters for Autonomous Cars.” in M. Maurer et al. (eds.), Autonomous Driving, pp. 69-85.
• Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). “The social dilemma of autonomous vehicles.” Science 352, 1573-1576.
◦ Awad, E. et al. (2018). “The Moral Machine Experiment.” Nature. 563, pp. 59-64.
• Gogoll, J., and Müller, J. F. (2017) “Autonomous Cars: In Favor of a Mandatory Ethics Setting.” Science and Engineering Ethics 23, 681-700.
• F. M. Kamm, “The Use and Abuse of the Trolley Problem: Self Driving Cars, Innocent Threats, and the Distribution of Harm.” In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press.
Week 12. (5/24) Killer Robots, or Autonomous Weapon Systems
• Sparrow, R. (2007) “Killer robots”, Journal of Applied Philosophy, 24(1): 62-77.
◦ Sharkey, N. (2012) “Killing Made Easy: From Joysticks to Politics.” in P. Lin, K. Abney, and G. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, The MIT Press. (pp. 111-156)
• Stuart Russell (2015), “Take a stand on AI weapons”, Nature 521: 415-418.
• Simpson, T. W. and Müller, V. C. (2016) “Just war and robots’ killings,” Philosophical Quarterly, 66(263): 302-322.
• Peter Asaro, “Autonomous Weapons and the Ethics of Artificial Intelligence.” In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press
Week 13. (5/31) Algorithmic Bias, Fairness, and Accountability
• Johnson, G. M. (2020). “Algorithmic bias: on the implicit biases of social technology”. Synthese
• Binns, R. (2018). “Algorithmic Accountability and Public Reason.” Philosophy & Technology, 31(4): 543-556.
• Cathy O’Neil and Hanna Gunn, “Near Term Artificial Intelligence and the Ethical Matrix”, In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press.
Week 14. (6/7) Moral Status of Artificial Intelligence
• Bryson, J. (2010) “Robots Should Be Slaves” in Yorick Wilks (ed.) Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, pp. 63-74.
• Coeckelbergh, Mark (2014), “The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics,” Philosophy and Technology 27: 61-77.
• S. Matthew Liao, “The Moral Status and Rights of Artificial Intelligence”, In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press.
Week 15. (6/14) Superintelligence and Existential Risk
• Nick Bostrom (2012). “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines, 22(2): 71-85.
• David Chalmers (2010) “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies, 17(9-10), 7-65. esp., pp. 1-15 & 19-56.