Ethics and Governance of Innovation

The project undertakes cross-disciplinary analysis of the distinctive ethical, technical, and policy challenges posed by emerging information technologies. It pursues several streams of enquiry examining critical questions about the role of technological innovation in society, focusing in particular on predictive and generative AI.

Motivation

Ethical issues with emerging technologies such as AI do not arise in a vacuum; they are informed and constrained by existing law and policy and by technological capacities and limitations. For innovation to be socially beneficial, the fundamental rights and values of democratic societies must inform how new technologies are developed, governed, and used.

Challenges relating to privacy, bias, fairness, opacity, worker rights, professional standards, and the environmental impact of new technologies are informed by long and complex legal, political, and social histories. Identifying solutions to such problems that are ethically sound, legally compliant, and technically feasible requires rigorous conceptual analysis and philosophical critique paired with cross-disciplinary policy analysis, technical development, and empirical research. The EGI research project combines these methods to develop conceptual frameworks, policy proposals, and technically feasible practical tools that simultaneously meet ethical, legal, political, and social ideals.

Objectives

The central questions informing the project are:

  • To what extent are existing technical, organisational, and regulatory tools used to make AI trustworthy and align it with ethical ideals, regulatory requirements, and human rights fit for purpose?
  • What does it mean to be a ‘good’ AI developer? What are the professional ethics, values, norms, and standards of this emergent profession? Can AI development be conceptualised and regulated as a public service profession?
  • What is the epistemological impact of generative AI on science, education, and public discourse in democratic societies? Can truth be protected at scale?
  • Who is technological innovation intended to benefit in the 21st century? To what extent do the implicit and explicit ideologies and narratives of the future espoused by Big Tech companies and their proponents align with democratic values and regulatory goals?

Background and Prior Work

The Ethics and Governance of Innovation project works closely with the Governance of Emerging Technologies (GET) research group at the Oxford Internet Institute, University of Oxford, and the Technology and Regulation group at the Hasso Plattner Institute. Prior work undertaken by project members has focused on areas including the ethics of algorithms, AI, and Big Data; truth and accuracy in large language models (LLMs); fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; ethical auditing of automated systems; and digital epidemiology and public health ethics.

Across these areas the researchers have contributed (1) a theoretical and policy analysis of how LLMs such as ChatGPT can be legally compelled to ‘tell the truth’ and a novel method to reduce hallucinations and inaccuracies; (2) legal analysis of the enforceability of a “right to explanation” of automated decisions in the General Data Protection Regulation (GDPR); (3) the development of a method and ethical requirements for providing “meaningful explanations” of automated decisions in the form of ‘counterfactual explanations’; (4) a novel, legally compliant fairness metric to detect bias in AI and machine learning systems (‘Conditional Demographic Disparity’); (5) a classification scheme for fairness metrics based on non-discrimination law; and (6) an open-access fairness toolkit (OxonFair) to prevent levelling down in AI and machine learning. 

The researchers’ work in these areas is widely cited and has been implemented by researchers, policy-makers, and companies internationally. It features in policy proposals and guidelines from the European Commission, European Parliament, United Nations, Council of Europe, and the U.S. White House; in products from Google, Amazon, and Microsoft; and in an investigation by Algorithm Audit that revealed algorithmic discrimination in the Netherlands. Their work has likewise been recognised through a variety of award schemes, including Cognition X, the Computer Weekly Awards, the Privacy Law Scholars Conference (PLSC), and the O2RB Excellence in Impact Awards.


Duration: 2025–2030

Participating Research Groups: Norm Setting and Decision Processes

Funding: HPI - Hasso Plattner Institute 

Partners: Oxford Internet Institute, University of Oxford & Hasso Plattner Institute