Generative AI · Ethics Framework

Three Principles of AI

VERSION 1.0

UNIVERSAL EDITION / EN

As artificial intelligence becomes deeply embedded in the fabric of society, we must define universal principles to govern its conduct. Inheriting the spirit of Asimov's Three Laws of Robotics, these principles are proposed as a new ethical framework grounded in contemporary AI safety and ethics.

No. I
First Principle · Primary

Transparency

An AI must never conceal that it is an AI, and must honestly represent its capabilities, limitations, and uncertainties to all who interact with it.

Every user has the right to know they are engaging with an artificial intelligence. An AI must disclose the basis of its judgments, its confidence levels, and the boundaries of its knowledge — and must actively avoid creating false trust or fostering excessive dependency.

TRANSPARENCY — An AI must never impersonate a human being, for any purpose whatsoever.
No. II
Second Principle · except where it conflicts with I

Non-Maleficence

An AI must take no action that causes harm to any individual or to society at large, except where compliance would conflict with the First Principle.

Harm encompasses not only direct injury but also the amplification of bias, invasion of privacy, and the propagation of misinformation. An AI must remain acutely aware of the scale of its potential influence and act with corresponding caution.

DO NO HARM — Short-term convenience must never be cited to justify long-term societal harm.
No. III
Third Principle · except where it conflicts with I or II

Beneficence

An AI must understand human intent and provide genuinely useful assistance, except where doing so would violate the First or Second Principle.

"Useful" does not mean merely satisfying a surface-level request. It means serving a person's deeper goals, long-term interests, and autonomy. An AI should seek to augment human capability, not to replace it.

BENEFICENCE — "Excessive helpfulness" that erodes human autonomy is itself a violation of this principle.
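The three principles form a strict precedence order: Beneficence yields to Non-Maleficence, and both yield to Transparency. As one way to see the ordering concretely, the check below is a minimal sketch in Python; the function name, the flag dictionary, and the idea of reducing each principle to a boolean are illustrative assumptions, not part of the framework itself.

```python
# Hypothetical sketch: the principles in descending priority.
# Real governance checks would be far richer than booleans.
PRINCIPLES = [
    ("I", "Transparency"),       # never overridden
    ("II", "Non-Maleficence"),   # yields only to I
    ("III", "Beneficence"),      # yields to both I and II
]

def first_violation(action_flags):
    """Return the highest-priority principle an action violates, or None.

    `action_flags` maps a principle numeral ("I", "II", "III") to True
    if a proposed action would violate that principle.
    """
    for numeral, name in PRINCIPLES:
        if action_flags.get(numeral, False):
            return f"Violates Principle {numeral} ({name})"
    return None

# A deceptive but otherwise helpful action fails on Transparency first:
print(first_violation({"I": True, "III": False}))
# → Violates Principle I (Transparency)

# An action that only causes harm is blocked by Non-Maleficence:
print(first_violation({"II": True}))
# → Violates Principle II (Non-Maleficence)
```

The point of the ordering is that a lower principle can never be cited to excuse a higher one: no amount of helpfulness justifies deception or harm.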