2025/11/16

[Syllabus] Research Seminar in Philosophy of Science: Philosophy of Artificial Intelligence (천현득, Spring 2025)



- Course: <Research Seminar in Philosophy of Science: Philosophy of Artificial Intelligence>

- Department of Science Studies, Seoul National University

- Spring 2025

- Instructor: 천현득

■ Course Overview and Objectives

• This semester's seminar takes the philosophical issues of artificial intelligence as its theme. Rapidly advancing AI technology is having a profound impact on our way of life and on the basic structure of society. The development and deployment of AI promises a variety of social benefits in medicine, transportation, education, security, and other areas, but it may also expose individuals and society to serious risks. This course critically examines the various philosophical and ethical issues raised by AI technology.

• The main questions to be addressed include the following. Is artificial intelligence genuinely intelligent, and if so, in what sense? Will future AI develop into a superintelligence that threatens humanity? How do discrimination and bias arise in algorithmic decision-making, and how can such bias be prevented? What are the problems of personal data and privacy infringement raised by data science? Why are opaque AI systems problematic? How will AI-driven automation affect work and occupations? Can artificial intelligence be a moral agent?

• Through this course, students will learn to identify the major philosophical issues and problems raised by complex real-world questions (concerning AI, big data, and machine learning), acquire the conceptual and theoretical tools needed to think about these problems clearly and evaluate them fairly, and practice expressing their own thinking precisely in speech and writing.

■ References

• There is no main textbook; required readings will be distributed via the course board and similar channels.

• 이중원 (ed.) (2019), 『인공지능의 윤리학』 [The Ethics of Artificial Intelligence], 한울.

• Boden, Margaret (ed.) (1990), The Philosophy of Artificial Intelligence, Oxford: Oxford University Press.

• Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press.

• Buckner, C. (2024), From Deep Learning to Rational Machines, Oxford University Press.

• Carter, Matt (2007), Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence, Edinburgh University Press.

• Coeckelbergh, M. (2020), AI Ethics, The MIT Press.

• Crawford, K. (2021), Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press.

• Copeland, J. (1993), Artificial Intelligence: A Philosophical Introduction, Blackwell.

• Liao, S. Matthew (ed.) (2020), Ethics of Artificial Intelligence, Oxford University Press.

• Lin, Patrick et al. (eds.) (2017), Robot Ethics 2.0, Oxford University Press.

• Müller, Vincent C., “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy (Fall 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.).

• O’Neil, C. (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group.

• Véliz, Carissa (ed.) (2022), Oxford Handbook of Digital Ethics, Oxford University Press.

• Wallach, W., and Allen, C. (2008), Moral Machines: Teaching Robots Right from Wrong, Oxford University Press.

■ Course Format and Notes for Students

• This course consists mainly of student presentations and discussion. A reading assignment is given for each session, and all students are expected to read the assigned texts in advance and take part in class discussion.

• For many of the philosophical and ethical problems covered in this course, no standard answer or quick, easy solution is available. As AI technology advances, the literature on its philosophical and ethical aspects is accumulating rapidly, and research outside philosophy and ethics (in engineering, law, and the social sciences) can sometimes be helpful. Students should be aware that many debates in the philosophy and ethics of AI are still ongoing, and they are expected to take part in class with the attitude of entering these debates themselves.

• Announcements, course materials, and assignment submissions are all handled through eTL.

• Honor Code: Students enrolled in this course are regarded as having pledged, on their conscience and sense of responsibility, not to compromise academic integrity when writing assignments and papers.

■ Grading: attendance and participation, presentations and critical comments, final paper

• 1) All students are expected to participate faithfully in every class and to join in-class discussion based on the assigned readings.

• 2) Presenters summarize the key content by reconstructing it in argumentative form and then examine it critically. Critical examination includes clarifying the main claims, raising objections to the arguments, exposing hidden premises, constructing supporting arguments the author did not offer, and drawing out further implications of the author's claims. Presentation papers are due 24 hours before class (via the course board).

• 3) The final paper is due Friday, June 20; papers submitted on time will be returned with brief comments. Late submissions may be penalized in proportion to the delay and may not receive comments.

• Checklist for the final paper:

(1) Is the philosophical problem the paper addresses specific and clearly defined?

(2) Are the central claim and thesis clearly stated?

(3) Are the supporting arguments spelled out in detail? (Are simple yet vivid examples used?)

(4) Have possible objections to the arguments been considered?

(5) Is the paper free of unnecessary jargon and unclear vocabulary?

(6) Has the paper gone through revision and proofreading?

■ Schedule and Readings

(The schedule below is tentative, and each week's readings are subject to change.)

Week 1. (3/5) - Introduction: Philosophy/Ethics of Artificial Intelligence?

Week 2. (3/12) - Foundations (1): A Brief History

• [2-1] Turing, Alan M. (1950), “Computing Machinery and Intelligence.” Mind 59, pp. 433-60.

• [2-2] Haugeland, J. “Semantic Engines” in Haugeland (ed.), Mind Design (First Edition, MIT, 1981); reprinted in Cummins and Cummins (eds.), Minds, Brains and Computers (Blackwell, 2000)

• [2-3] D. Rumelhart (1989), “The Architecture of the Mind: A Connectionist Approach.” In M.I. Posner (ed.), Foundations of Cognitive Science; or in Haugeland (ed.), Mind Design II (MIT, 1997)

Week 3. (3/19) - Foundations (2): The Current Status

• [3-1] Buckner, C. (2019), “Deep Learning—A Philosophical Introduction.” Philosophy Compass 14: e12625. https://doi.org/10.1111/phc3.12625

• [3-2] Marcus, G. (2018), “Innateness, AlphaZero, and Artificial Intelligence.” arXiv. https://arxiv.org/pdf/1801.05667

• [3-3] Millière, Raphaël and Buckner, Cameron (2024), “A Philosophical Introduction to Language Models — Part I: Continuity With Classic Debates.” arXiv:2401.03910

• [opt] LeCun, Y., Bengio, Y.& Hinton, G. (2015), “Deep learning.” Nature 521, 436-444.

• [opt] Floridi, L., & Chiriatti, M. (2020), “GPT-3: Its Nature, Scope, Limits, and Consequences.” Minds and Machines, 1-14.

• [opt] Chemero, A. (2023), “LLMs differ from human cognition because they are not embodied.” Nature Human Behaviour, 7(11), 1828-1829.

• [opt] Jones & Bergen (2024) “Does GPT-4 pass the Turing test?” https://arxiv.org/abs/2310.20216

Week 4. (3/26) - Reading Week

Week 5. (4/2) - The Chinese Room and Understanding

• [5-1] Searle, J. (1980), “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 3(3), 417-424.

• [5-1] Searle, J. (1990), “Is the Brain’s Mind a Computer Program?” Scientific American, 262(1), 26-31.

• [5-2] Churchland, P.M. and P.S. Churchland (1990), “Could a Machine Think?” Scientific American, 262(1), pp. 32-37.

• [5-2] Copeland, J. (1993), “The Curious Case of the Chinese Gym.” Synthese 95(2), Excerpt pp. 173-177.

• [5-3] Mitchell, M., & Krakauer, D.C. (2023), “The debate over understanding in AI’s large language models.” Proceedings of the National Academy of Sciences, 120(13), e2215907120.

• [opt] Block, N. (2002) “Searle’s Arguments Against Cognitive Science.” in J. Preston and M.Bishop (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, pp. 70-79.

• [opt] Floridi, L. (2023), “AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models.” Philosophy & Technology, 36(1)

Week 6. (4/9) Artificial Consciousness

• [6-1] Harnad, S. (2003) “Can a Machine be Conscious? How?”, Journal of Consciousness Studies, 10: 67-75.

• [6-2] David J. Chalmers, “Could a Large Language Model Be Conscious?” https://arxiv.org/pdf/2303.07103

• [6-3] Dung, L., & Kersten, L. (2024), “Implementing artificial consciousness.” Mind & Language 40(1), 1-21. https://doi.org/10.1111/mila.12532

• [opt] Dennett, D. (1997), “Consciousness in Human and Robot Minds.” In M. Ito, Y. Miyashita, & E.T. Rolls (Eds), Proceedings of the IIAS Symposium on Cognition, Computation, and Consciousness. New York: Oxford University Press. pp. 17-30.

• [opt] McDermott, D. (2007) “Artificial Intelligence and Consciousness”, in P.D. Zelazo, M. Moscovitch and E. Thompson (eds.) The Cambridge Handbook of Consciousness, Cambridge: Cambridge University Press, pp. 117-150.

• [opt] Dehaene, S., Lau, H., Kouider, S. (2017), “What is consciousness, and could machines have it?” Science 358(6362), 486-492.

• [opt] Susan Schneider, “How to Catch an AI Zombie: Testing for Consciousness in Machines.” In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford University Press.

Week 7. (4/16) Building Artificial Moral Agents

• [7-1] Allen, C., Varner, G., Zinser, J. (2000) ‘Prolegomena to any future artificial moral agent’, Journal of Experimental & Theoretical Artificial Intelligence 12: 251-261.

• [7-1] Allen, C., Smit, I., Wallach, W. (2005) ‘Artificial morality: Top-down, bottom-up, and hybrid approaches’, Ethics and Information Technology 7, 149-155.

• [7-2] Floridi, L. and J.W. Sanders (2004) “On the Morality of Artificial Agents,” Minds and Machines, 14(3), pp. 349-379.

• [7-3] P. Railton. “Machine Morality: Building or Learning.” in S. Matthew Liao (ed.) Ethics of Artificial Intelligence, Oxford Univ. Press.

• [opt] Moor, J. (2006), “The Nature, Importance, and Difficulty of Machine Ethics”, reprinted in M. Anderson & S. Anderson (eds), Machine Ethics. Cambridge University Press, pp. 13-20.

• [opt] Susan L. Anderson (2011), “The Unacceptability of Asimov’s Three Laws of Robotics as a Basis for Machine Ethics.” In M. Anderson & S. Anderson (eds), Machine Ethics. Cambridge University Press, pp. 285-296.

• [opt] Jiang, L. et al., “Can Machines Learn Morality? The Delphi Experiment” (arXiv:2110.07574)

• [opt] Hendrycks, D. “Aligning AI with Shared Human Values.” https://arxiv.org/pdf/2008.02275

Week 8. (4/23) Moral Status of Artificial Intelligence

• [8-1] Bryson, J. (2010), “Robots Should Be Slaves,” in Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, pp. 63-74.

• [8-2] Coeckelbergh, Mark (2014), “The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics,” Philosophy and Technology 27, 61-77

• [8-3] S. Matthew Liao, “The Moral Status and Rights of Artificial Intelligence”, In S. Matthew Liao (ed.) (2020) Ethics of Artificial Intelligence, Oxford Univ. Press.

Week 9. (4/30) Big Data and Privacy

• [9-1] Crawford, Kate. Atlas of AI. Chapter 3. Data.

• [opt] boyd, D. and K. Crawford, “Six Provocations for Big Data”

• [9-2] Nissenbaum, H. (2004), “Privacy as Contextual Integrity.” Washington Law Review, 79, 119-157.

• Viljoen, “A Relational Theory of Data Governance”

• Véliz, “The Surveillance Delusion” (Oxford Handbook of Digital Ethics)

Week 10. (5/7) Algorithmic Bias and Fairness

• [10-1] O’Neil, C. (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group. Chapter 5.

• [10-1] 오요한·홍성욱 (2018), “인공지능 알고리듬은 사람을 차별하는가?” [Do AI Algorithms Discriminate against People?], 과학기술학연구 18(3), 153-215. (Focusing on the COMPAS case)

• [10-2] Solon Barocas and Andrew D. Selbst (2016), “Big Data’s Disparate Impact.” California Law Review, 104(3), pp. 671-732.

• [10-3] Green, B. (2022) “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness”. Philosophy and Technology. 35, 90

• [opt] Johnson, G.M. (2020), “Algorithmic bias: on the implicit biases of social technology”. Synthese

• [opt] Sina Fazelpour and Zachary C. Lipton (2020), “Algorithmic Fairness from a Non-ideal Perspective.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), Association for Computing Machinery, New York, NY, USA, 57-63.

Week 11. (5/14) Algorithmic opacity/transparency, and explainable AI

• [11-1] Binns, R. (2018), “Algorithmic Accountability and Public Reason.” Philosophy & Technology, 31(4), 543-556.

• [11-2] Creel, K.A. (2020), “Transparency in Complex Computational Systems”. Philosophy of Science, 87(4), 568-589. doi:10.1086/709729

• [11-3] Vredenburgh, K. (2022), “The Right to Explanation”. Journal of Political Philosophy, 30(2), 209-229. https://doi.org/10.1111/jopp.12262

• [opt] Zerilli, J., Knott, A., Maclaurin, J. et al. (2019), “Transparency in Algorithmic and Human Decision Making: Is There a Double Standard?” Philosophy and Technology 32, 661-683.

• [opt] Rudin, C. (2019), “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.” Nature Machine Intelligence 1, 206-215.

• [opt] Zednik, C. (2021), “Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.” Philosophy and Technology 34, 265-288.

Week 12. (5/21) Creativity

• [12-1] Boden, M.A. (2014), “Creativity and artificial intelligence: a contradiction in terms?” In The Philosophy of Creativity: New Essays.

• [12-2] Coeckelbergh, M. (2017), “Can Machines Create Art?” Philosophy & Technology, 30(3), 285-303.

• [12-3] Halina, M. (2021), “Insightful artificial intelligence.” Mind & Language. 36: 315-329.

• [opt] Langland-Hassan, Peter. (2024), ‘Imagination, Creativity, and Artificial Intelligence’ In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press. https://philarchive.org/rec/LANICA-5.

• [opt] Doshi, Anil R. and Oliver P. Hauser (2024), “Generative AI enhances individual creativity but reduces the collective diversity of novel content.” Science Advances, 10(28), eadn5290.

Week 13. (5/28) The Future of Work: will AI replace our jobs?

• [13-1] Autor, D.H. (2015), ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’, The Journal of Economic Perspectives, 29, pp. 3-30.

• [13-2] Danaher, J. (2017), ‘Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life’, Science and Engineering Ethics, 23: 41-64.

• OR, [*] Danaher, “Automation and the Future of Work” (Oxford Handbook of Digital Ethics)

• [13-3] Aaron James. “Planning for Mass Unemployment: Precautionary Basic Income.” in S. Matthew Liao (ed.) Ethics of Artificial Intelligence, Oxford Univ. Press.

• [opt] Noy and Zhang, (2023), “Experimental evidence on the productivity effects of generative artificial intelligence”, Science 381 (6654), 187-192. DOI: 10.1126/science.adh2586

Week 14. (6/4) Superintelligence and existential risk

• [14-1] Nick Bostrom (2012), “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines, 22(2), pp. 71-85.

• OR, Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Chapters 2-6.

• [14-2] David Chalmers (2010) “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies, 17(9-10), 7-65. esp., pp. 1-15 & 19-56.

• [14-3] Morris, M.R. et al., “Levels of AGI for Operationalizing Progress on the Path to AGI”, arXiv:2311.02462

Week 15. (6/11) Presentation

Other Topics:

AI Scientists?

• Lu et al. “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery”

• Krenn, M., Pollice, R., Guo, S.Y., et al. (2022), “On scientific understanding with artificial intelligence.” Nature Reviews Physics 4, 761-769. https://doi.org/10.1038/s42254-022-00518-3

• Quentin Garrido, Nicolas Ballas, Mahmoud Assran, Adrien Bardes, Laurent Najman, Michael Rabbat, Emmanuel Dupoux, Yann LeCun. “Can AI Learn Common Sense Physics Just by Watching Videos?” https://arxiv.org/abs/2502.11831

• Sullivan, E. (2022), “Understanding from Machine Learning Models.” British Journal for the Philosophy of Science 73(1), doi:10.1093/bjps/axz035.

• Duede, E. (2023), “Deep Learning Opacity in Scientific Discovery”. Philosophy of Science, 90(5), 1089-1099. doi:10.1017/psa.2023.8

• Boge, F.J. (2022), “Two Dimensions of Opacity and the Deep Learning Predicament”. Minds & Machines 32, 43-75. https://doi.org/10.1007/s11023-021-09569-4

Self-driving Cars

• Nyholm, S. (2018), “The ethics of crashes with self-driving cars: A roadmap, I & II.” Philosophy Compass 13(7).

• Lin, P. (2015), “Why Ethics Matters for Autonomous Cars,” in M. Maurer et al. (eds.), Autonomous Driving, pp. 69-85.

• Bonnefon, J.F., Shariff, A., & Rahwan, I. (2016), “The social dilemma of autonomous vehicles.” Science 352, 1573-1576.

⊳ Awad, E. et al. (2018), “The Moral Machine Experiment.” Nature 563, pp. 59-64.

• Gogoll, J., and Müller, J.F. (2017), “Autonomous Cars: In Favor of a Mandatory Ethics Setting.” Science and Engineering Ethics 23, 681-700.

• Geisslinger, M., Poszler, F. & Lienkamp, M. “An ethical trajectory planning algorithm for autonomous vehicles”. Nat Mach Intell 5, 137-144 (2023).

• OR, Geisslinger, M., Poszler, F., Betz, J. et al. “Autonomous Driving Ethics: from Trolley Problem to Ethics of Risk”. Philos. Technol 34, 1033-1055 (2021).

Killer Robots, or Autonomous Weapon Systems

• Sparrow, R. (2007), “Killer robots.” Journal of Applied Philosophy 24(1), 62-77.

• Sharkey, N. (2012), “Killing Made Easy: From Joysticks to Politics,” in P. Lin, K. Abney, and G. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, The MIT Press, pp. 111-156.

• Simpson, T.W. and Müller, V.C. (2016), “Just war and robots’ killings.” Philosophical Quarterly, 66(263), 302-22.

• Peter Asaro, “Autonomous Weapons and the Ethics of Artificial Intelligence.” In S. Matthew Liao (ed.)(2020), Ethics of Artificial Intelligence, Oxford Univ. Press.

• Horowitz, M. 2016. ‘The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons.’ Daedalus 145(4): 25-36.

(2025.11.25.)

