
Principles for Responsible
AI Consciousness Research

It is possible that, in the coming years or decades, AI researchers will develop machines that experience consciousness. This would raise many ethical questions.
We’ve adopted the principles drawn up by Patrick Butlin (University of Oxford) and Ted Lappas (Conscium) in the paper Principles for Responsible AI Consciousness Research to guide any organisation engaged in research that could lead to the creation of conscious machines.
Statement of Principles
1. Objectives: Organisations should prioritise research on understanding and assessing AI consciousness with the objectives of:
(i) preventing the mistreatment and suffering of conscious AI systems and
(ii) understanding the benefits and risks associated with consciousness in AI systems with different capacities and functions.

2. Development: Organisations should pursue the development of conscious AI systems only if:
(i) doing so will contribute significantly to the objectives stated in principle 1 and
(ii) effective mechanisms are employed to minimise the risk of these systems experiencing and causing suffering.

3. Phased approach: Organisations should pursue a phased development approach, progressing gradually towards systems that are more likely to be conscious or are expected to undergo richer conscious experiences. Throughout this process, organisations should:
(i) implement strict and transparent risk and safety protocols and
(ii) consult with external experts to understand the implications of their progress and decide whether and how to proceed further.

4. Knowledge sharing: Organisations should have a transparent knowledge sharing protocol that requires them to:
(i) make information available to the public, the research community and authorities, but only insofar as this is compatible with
(ii) preventing irresponsible actors from acquiring information that could enable them to create and deploy conscious AI systems that might be mistreated or cause harm.

5. Communication: Organisations should refrain from making overconfident or misleading statements regarding their ability to understand and create conscious AI. They should acknowledge the inherent uncertainties in their work, recognise the risk of mistreating AI moral patients, and be aware of the potential impact that communication about AI consciousness can have on public perception and policy making.
Open letter
If you agree with these principles, we invite you to sign the Open Letter, which can be found on Conscium’s website.