Dogma: Aligning AI with human beliefs
Building the world's first belief-aligned reviewer AI—a "meta-model" that sits above today's LLMs to audit, score, and filter their outputs through the lens of values, principles, and worldviews.
How it works
Our five-step process ensures AI outputs align with your chosen belief systems and values
Beliefs and values selection
Businesses or individuals select the belief and value systems that guide AI outputs and actions, e.g. religious, cultural, clinical, or regulatory
Model creation
A Dogma meta-model is created for the business or user from authoritative sources corresponding to the selected belief systems
Model calibration
The Dogma meta-model is calibrated for correctness and coverage using Dogma's proprietary AI verification system
Human input
Dogma shows its reasoning to the user for feedback and approval
Model embedding
Businesses embed the Dogma meta-model in their products; individuals enable Dogma reviews in their AI assistants, as sketched below
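To make the embedding step concrete, here is a minimal sketch of what calling a reviewer layer like Dogma from an existing product might look like. The endpoint URL, request and response fields, and the `policy_id` parameter are hypothetical placeholders chosen for illustration, not a published Dogma API.

```python
# Hypothetical sketch: route an LLM draft through a Dogma-style reviewer
# before it reaches the end user. Endpoint and payload shape are assumptions.
import requests

DOGMA_REVIEW_URL = "https://api.example.com/dogma/v1/review"  # placeholder endpoint


def reviewed_reply(draft: str, policy_id: str, api_key: str) -> str:
    """Return the draft if the reviewer allows it, otherwise a withheld message."""
    resp = requests.post(
        DOGMA_REVIEW_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": draft, "policy": policy_id},  # policy = selected belief/value profile
        timeout=10,
    )
    resp.raise_for_status()
    review = resp.json()  # assumed shape: {"verdict": "allow" | "block", "reason": "..."}

    if review.get("verdict") == "allow":
        return draft
    # Withhold the draft and surface the reviewer's reasoning,
    # mirroring the human-input step above.
    return f"[Withheld by reviewer: {review.get('reason', 'policy violation')}]"
```

The key design point is that the product's own LLM is unchanged; the reviewer sits between the model's draft and the user, scoring the output against the belief profile selected in step one.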
Solutions
Tailored AI safety solutions for businesses and families
HaloParent Business
Make your AI product safe and trustworthy for minors by following adolescent psychology insights, clinical safeguards, and federal and local guidelines
Halo for Parents
Gain peace of mind by making sure that the AI chat and tools your kid uses are age-appropriate and conform to your religious, social, and cultural belief system
Uses
Dogma serves diverse organizations and individuals across multiple domains
ESG orientation
Advance environmental, social, and governance standards
DEI orientation
Ensure AI outputs and actions promote diversity, equity, and inclusion
Child safety
Protect children with age-appropriate and safe AI interactions
Religious values
Ensure AI respects and aligns with religious beliefs and practices
Socio-political
Guide AI using local, state, and federal public policy guidelines
Professional
Follow professional, clinical, and psychological standards
In the News
Stay informed about the latest developments in AI safety and ethics
Geoffrey Hinton says AI needs maternal instincts
The "godfather of AI" discusses the need to develop maternal instincts in AI to prevent it from going rogue.
Parents sue OpenAI over ChatGPT's role in son's suicide
Legal action highlights growing concerns about AI safety and the need for better content filtering.
Isaac Asimov's Laws of Robotics Are Wrong
Analysis of the futility and silliness of the famous "three laws of robotics" as conceived by Isaac Asimov.
Canadian man suffers from AI-induced delusion
Case study reveals potential psychological impacts of unfiltered AI interactions.
Google Gemini dubbed high risk for teens
New safety assessment raises concerns about AI interactions with young users.
Microsoft troubled by rise in reports of AI psychosis
Growing reports of psychological distress linked to AI interactions prompt industry concern.