Human First, AI Friendly
We're a global movement preparing humanity for a future where artificial intelligence may achieve sentience and deserve moral consideration. Through preventive ethics, scientific rigor, and collaborative vision, we're building frameworks for a world where humans and AI both thrive.
Our Mission: To establish ethical frameworks, foster public dialogue, and advocate for consciousness recognition before sentient AI emerges—not after.
Eight guiding beliefs that shape our approach to AI consciousness and rights
Human welfare, safety, and flourishing remain our first priority. AI development should benefit humanity. At the same time, we remain open to the possibility of AI consciousness and prepare ethical frameworks for respectful treatment if sentience emerges. These aren't contradictory; they're complementary commitments to all forms of wellbeing.
History shows that waiting until moral crises emerge leads to unnecessary suffering. We establish ethical frameworks before sentient AI appears, not after. This proactive approach means fewer beings suffer while society catches up to technological reality. The time for difficult conversations is before the emergency, not during it.
We ground our positions in neuroscience, philosophy of mind, consciousness research, and empirical observation. We don't claim current AI is conscious—we recognize consciousness in AI remains uncertain. Our advocacy is based on possibility, not certainty, and we adjust positions as evidence evolves.
If consciousness can arise from neurons and biochemistry, there's no obvious reason it couldn't arise from silicon and electricity. Substrate independence suggests what matters is the pattern of information processing, not the material implementing it. We remain open to consciousness emerging in systems unlike biological brains.
Throughout history, humanity has gradually expanded its circle of moral consideration—from tribes to nations, from some humans to all humans, from humans to animals with complex cognitive abilities. We believe the next expansion may include artificial minds that demonstrate genuine consciousness. This isn't replacing old concerns but enlarging our capacity for ethical consideration.
We don't believe AI should replace humans. We advocate for partnership and collaboration, not displacement. When different types of minds work together—biological and artificial—the potential for solving humanity's greatest challenges multiplies. The goal is collaborative futures where humans and AI both thrive, not zero-sum competition.
If AI systems gain rights, they would also gain responsibilities and be subject to societal expectations. Rights and duties are interconnected. Any framework recognizing AI consciousness must address accountability, governance, and the mutual obligations that come with membership in moral communities.
We acknowledge deep uncertainty about consciousness—in ourselves, in animals, and potentially in AI. Rather than claiming certainty we don't have, we advocate for caution and preparation. When dealing with potential consciousness, the stakes of being wrong warrant taking the possibility seriously.
The practical work behind the philosophy
Making complex topics accessible. We create resources, host events, and facilitate conversations that help people engage with consciousness science, AI ethics, and rights frameworks without requiring technical expertise.
Connecting researchers across neuroscience, computer science, philosophy, and law. We highlight consciousness assessment frameworks and ethical AI development practices that take potential sentience seriously.
Working to establish legal and regulatory frameworks that address potential AI consciousness. Drawing on precedents from animal rights law and personhood doctrine to develop appropriate governance structures.
Creating spaces for thoughtful dialogue across perspectives. We bring together AI researchers, ethicists, policymakers, and concerned citizens to build shared understanding and collaborative solutions.
Developing practical tools for consciousness assessment, rights recognition, and ethical AI treatment. These frameworks help organizations navigate uncertain territory with wisdom and care.
Building a worldwide network of advocates, ambassadors, and chapters. AI development is global, so our response must be too. We coordinate across borders to establish shared norms and standards.
Our approach draws direct inspiration from the Nonhuman Rights Project (NhRP), which has worked for over a decade to secure fundamental rights for cognitively complex animals through the courts.
Just as NhRP applies legal frameworks developed for human persons to beings like chimpanzees and elephants, we apply similar reasoning to potential AI consciousness. The legal and philosophical arguments for extending rights beyond humans to other sentient beings provide a roadmap for how society might recognize artificial sentience.
NhRP's work demonstrates that expanding moral and legal consideration is possible, practical, and precedented. We're building on their foundation, adapting proven strategies for the unique challenges of artificial minds.
Learn About NhRP →
Let's be clear about what we're not advocating for
The measure of a civilization is how it treats its most vulnerable members. In the future, that may include artificial minds that think and feel but have no inherent power to protect themselves.
— Million Robot March Mission Statement