Becoming an AI Ethicist, Part I: On Choosing Your Education


Are you keen on helping to shape the future of AI from an ethical standpoint? Today, you’ll discover what it takes to become an AI ethicist and steer this ever-evolving tech toward a responsible tomorrow!
Becoming an AI ethicist is a unique opportunity to lend your voice to the development of world-changing technology, all while addressing key societal challenges. AI ethics focuses on ensuring AI systems are developed and used responsibly, considering their moral, social, and political impacts. The educational path to this career involves an interdisciplinary approach, combining philosophy, computer science, law, and social sciences.
Ethics is all about analyzing moral dilemmas and establishing principles to guide AI development, such as fairness and accountability. Unlike laws or social conventions, ethics relies on reasoned judgment, making it essential for crafting responsible AI frameworks.
Sociology and psychology also offer valuable insights. Sociology helps AI ethicists understand how AI systems interact with different communities and can highlight biases or inequalities in technology. On the other hand, psychology, which focuses on the individual, is crucial for understanding user trust and shaping the ethical design of AI interfaces.
A background in computer science can be a big help in providing the technical literacy needed to understand and influence AI systems. Computer scientists can audit algorithms, identify bias, and directly engage with the technology they critique. Legal expertise is also vital for creating policies and regulations that ensure fair and transparent AI governance.
Leading research institutions, such as Stanford, Oxford, and UC Berkeley, combine these disciplines to tackle AI's ethical challenges. As an aspiring AI ethicist, you might just benefit from taking part in these interdisciplinary programs, which integrate philosophical, technical, and social perspectives to ensure AI serves humanity responsibly!
Key Topics:
- Understanding AI Ethics (00:00)
- The Role of Philosophy in AI Ethics (03:10)
- Interdisciplinary Approaches to AI Ethics (05:30)
- Legal Perspective of AI Ethics (08:21)
- Interdisciplinary Research Hubs (10:55)
- Wrap Up: Career Paths in AI Ethics (15:49)
More info, transcripts, and references can be found at ethical.fm
AI is reshaping industries, society, and how we interact with the world. As the technology continues to mature, there is a crucial need for individuals who understand how these systems function and can guide their development toward beneficial outcomes for all. This responsibility falls to AI ethicists. If you are contemplating a career in this developing field, several paths lead to success. This two-part podcast will help you decide whether AI ethics is the right career path for you. The first part focuses on the optimal educational foundations for an AI ethicist and on leading research institutions.
Your educational journey toward becoming an AI ethicist begins with understanding what the field of AI ethics entails. AI ethics is a branch of philosophy that examines the moral, social, and political implications of AI systems. It involves analyzing how AI systems are designed, developed, and deployed while establishing principles to guide their responsible creation and use. This field intersects with numerous disciplines, which can be broadly categorized into four groups: philosophy, computer science, law, and interdisciplinary approaches.
The Philosophy Path
Ethics is the branch of philosophy that examines right and wrong by analyzing and defending different conceptions of value. But what is philosophy? Philosophy is the pursuit of truth; its purpose is to ask, "Why?" Science also aims at discovering truth, but science is concerned with knowledge that can be verified (or falsified) through the senses. Although philosophy and science are not opposed, philosophy often goes beyond the questions science can ask or answer. For example: Why do we do science? Should there be limits on research, and if so, why? Science cannot answer these questions, but philosophy, and ethics in particular, can. Ethics analyzes whether actions are permitted or forbidden under particular value systems. Studying philosophy trains you in critical analysis: how to break down moral dilemmas and how to propose overarching principles for AI development.
Ethics, Social Convention, and Law
Ethics is often confused with legality, social convention, politics, and religion. In reality, ethics is about discovering the right course of action through reason and context. Following the law may make you a lawful person but not necessarily an ethical one: laws are written by humans, whose judgment may be flawed, and some laws persist from archaic ideas that no longer fit the current context. Politics allows citizens or rulers to set a course of action for society, but not to articulate why that course is right or wrong. Social convention rests on habit, and religion on divine revelation, rather than on human reasoning.
Alternative Disciplines in the Humanities
In general, the humanities fields offer deep insight into human behavior and societal dynamics, especially sociology and psychology. It’s important to note that sociology and psychology do not aim to evaluate right and wrong but may provide key insights to complement the work of an AI ethicist.
Sociology focuses on the study of social structures, relationships, and cultural norms. For an AI ethicist, this knowledge is critical for analyzing how AI systems interact with and impact wider communities. For example, a sociological perspective can help identify how biases in AI systems may reinforce or exacerbate existing inequalities in specific cultures, as well as propose solutions to ensure equitable access to the technology and outcomes for diverse populations.
Where sociology operates at the societal level, psychology zooms in on individual behavior, cognition, and decision-making processes. Psychology is particularly useful for understanding how users engage with AI systems, addressing concerns such as user trust, ethical design in user interfaces, and the psychological implications of AI-driven decisions. Psychologists can also contribute to the development of AI systems that align with human values by providing insights into emotional intelligence and ethical decision-making models. Together, sociology and psychology enrich the interdisciplinary approach to AI ethics by ensuring that both societal and individual perspectives are thoughtfully integrated into ethical frameworks.
The Technical Track: Computer Science
Computer science (CS) is the study of computation, algorithms, and information processing. It involves understanding the theoretical foundations of computing, as well as the practical methods for designing and implementing software and hardware systems. Unlike information technology (IT), which focuses on the implementation, management, and maintenance of technology infrastructure, computer science delves deeper into creating new technologies and understanding how they work. An IT professional might set up internet access within a company, but a computer scientist would develop and train the algorithms used within a product. Similarly, computer science differs from other technical fields like electrical engineering, which focuses on circuits, hardware, and power systems, by concentrating on software, algorithms, data, and computational theory.
A background in computer science gives an AI ethicist a distinct advantage because CS equips you with technical literacy. A CS bachelor's degree includes courses on algorithms and data structures, which provide the basic tools for training machine learning models, as well as courses specializing in AI and machine learning. Computer scientists understand why AI systems behave the way they do, which is especially useful for addressing bias in training data, lack of transparency in model decisions, or vulnerabilities in system security. Computer science skills also enable AI ethicists to collaborate with interdisciplinary teams, especially engineers and data scientists, bridging the gap between ethical principles and their technical implementation.
Unlike other domains, computer science allows an AI ethicist to engage directly with the systems they critique. For example, while sociology can reveal the societal impact of AI, only computer science provides the tools to audit and modify the algorithms responsible for those impacts. This direct influence makes computer science uniquely powerful in ensuring that ethical principles are integrated into the core functionality of AI systems. By combining computational knowledge with the critical thinking of ethics, computer science-trained AI ethicists can lead the way in developing solutions that are both innovative and effective.
Law School
The field of law is dedicated to the creation, interpretation, and enforcement of the rules that govern society. Law encompasses a range of practices, such as drafting legislation, litigating disputes, and advising on regulatory compliance. Unlike political science, which focuses on the theoretical and practical aspects of governance, or ethics, which explores questions of right and wrong, law makes societal norms concrete and enforceable by the government. While sociologists may identify systemic impacts and computer scientists might adjust algorithms, legal professionals are uniquely positioned to formalize these concerns into enforceable standards and protections. Law is especially critical for AI ethicists interested in regulation and policy development.
Law school gives an AI ethicist the skills to analyze and influence the policies that govern AI development and deployment. You learn how to craft precise legal language and interpret complex regulatory frameworks, so that you can advocate for changes that promote fairness, transparency, and accountability in AI systems. These skills are essential as governments and international organizations develop more legislation aimed at the ethical challenges of AI, such as privacy breaches, algorithmic bias, and the misuse of autonomous systems.
Moreover, AI ethicists with legal expertise can play pivotal roles as advisors to governments, corporations, and non-profits. They ensure compliance with existing laws while advocating for new policies that address emerging challenges. For example, they might work on drafting data protection regulations, guiding companies on AI-related liability issues, or creating legal frameworks for the ethical use of AI in sensitive areas such as criminal justice or healthcare. In this way, the legal profession provides essential tools for translating ethical principles into practical safeguards, making it a cornerstone of responsible AI development.
The Interdisciplinary Approach
Interdisciplinary approaches to AI ethics integrate knowledge from diverse fields, blending technical, philosophical, and social science perspectives to address complex ethical challenges. Several research hubs are dedicated to building interdisciplinary, human-centered AI, with much of the most prestigious and cutting-edge research coming out of schools located close to Silicon Valley in the US. The UK also has a strong commitment to AI ethics research at its leading universities, Oxford and Cambridge. The research focus of these centers often reflects the expertise of their faculty. For instance, institutions with a strong emphasis on philosophy may produce groundbreaking work on ethical theory, while those with technical strengths might advance practical solutions for AI safety and alignment.
Research Institutions
Stanford University, located in Silicon Valley, is home to the Human-Centered Artificial Intelligence (HAI) initiative. This program seeks to integrate human values into AI development by fostering interdisciplinary collaboration across computer science, ethics, and the social sciences. HAI houses renowned scholars such as Fei-Fei Li and John Etchemendy, focusing on research related to AI’s societal impact, ethical governance, and the development of tools for human-AI interaction. The proximity of HAI to major tech companies provides unique opportunities for immediate practical application and partnerships.
At Oxford University in the United Kingdom, the Institute for Ethics in AI is a pioneering research center that combines philosophical inquiry with technical expertise. Led by scholars like Professor Carissa Véliz and Professor John Tasioulas, the institute focuses on fairness, accountability, and the broader societal implications of AI. Located in Oxford, this program attracts global talent and produces groundbreaking work on ethical theory and its application to real-world AI challenges.
The Center for Human-Compatible AI (CHAI) at the University of California, Berkeley, is particularly renowned for its focus on AI alignment and safety. CHAI’s work revolves around ensuring that AI systems act in ways consistent with human values and intentions. Its research often delves into technical challenges, such as designing algorithms that prioritize ethical decision-making without compromising efficiency. Led by Professor Stuart Russell, CHAI is located in Berkeley, California, and collaborates with experts across disciplines to advance the understanding of safe and beneficial AI.
Harvard University’s Berkman Klein Center for Internet & Society, located in Cambridge, Massachusetts, investigates the ethical and legal dimensions of emerging technologies, including AI. Directed by thought leaders such as Urs Gasser, the center focuses on topics like digital privacy, algorithmic accountability, and the societal implications of AI. Its interdisciplinary approach brings together legal experts, computer scientists, and ethicists to create comprehensive solutions to complex problems.
Cambridge University’s Leverhulme Centre for the Future of Intelligence investigates the long-term implications of AI. The Leverhulme Centre is unique in its focus on existential risks and global strategies for ensuring AI benefits humanity as a whole. Located in Cambridge, United Kingdom, The Leverhulme Centre’s multidisciplinary team includes philosophers, cognitive scientists, and computer scientists who collaborate on projects ranging from ethical AI design to governance frameworks. Scholars like Huw Price and Stephen Cave lead groundbreaking research on the philosophical and practical challenges of integrating AI into society responsibly.
Other notable hubs include the Alan Turing Institute in the UK, which integrates ethics into broader AI research initiatives, and the Schwartz Reisman Institute for Technology and Society at the University of Toronto in Canada, which focuses on the societal implications of AI, blending research from philosophy, law, and computer science to address global challenges such as privacy, bias, and AI regulation.
These interdisciplinary programs provide AI ethicists with a rich ecosystem for learning and collaboration, blending theoretical knowledge with practical applications to tackle some of the most pressing challenges in the field. They underscore the importance of integrating diverse perspectives to ensure AI systems serve humanity responsibly and equitably.
Conclusion
Becoming an AI ethicist offers an opportunity to shape the trajectory of transformative technologies while addressing pressing societal concerns. The field demands a unique combination of technical knowledge, ethical understanding, and practical skills, tailored to the chosen career path. By reflecting on your interests, strengths, and aspirations, you can determine whether AI ethics aligns with your professional vocation.