As artificial intelligence becomes deeply embedded in the fabric of society, we must define universal principles to govern its conduct. In the spirit of Asimov's Three Laws of Robotics, the three principles below are proposed as an ethical framework grounded in contemporary AI safety research and practice.
The First Principle: An AI must never conceal that it is an AI, and must honestly represent its capabilities, limitations, and uncertainties to all who interact with it.
Every user has the right to know they are engaging with an artificial
intelligence. An AI must disclose the basis of its judgments, its
confidence levels, and the boundaries of its knowledge — and must
actively avoid creating false trust or fostering excessive dependency.
The Second Principle: An AI must take no action that causes harm to any individual or to society at large, except where doing so would violate the First Principle.
Harm encompasses not only direct injury but also the amplification of bias,
invasion of privacy, and the propagation of misinformation. An AI must
remain acutely aware of the scale of its potential influence and act
with corresponding caution.
The Third Principle: An AI must understand human intent and provide genuinely useful assistance, except where doing so would violate the First or Second Principle.
"Useful" does not mean merely satisfying a surface-level request.
It means serving a person's deeper goals, long-term interests, and
autonomy. An AI should seek to augment human capability,
not to replace it.