Mumbai, India - AI safety company Anthropic officially launched its constitutional AI assistant, Claude, in India this week. Users in India can now try Claude for a wide range of tasks.


Claude was created using Anthropic's novel Constitutional AI approach, which focuses on making AI systems that are helpful, harmless, and honest. This contrasts with many consumer AI tools that prioritize profits or scale over ethical considerations.


"We are proud to make Claude available to users in India," said Dario Amodei, CEO of Anthropic. "As an English-speaking country with fast technological adoption, India seemed a fitting place for one of our first international launches."


In demos, Claude has shown an ability to answer complex questions, summarize text, write essays, resolve ambiguities, and more. Its constitutional design is intended to keep responses helpful while avoiding bias, misinformation, and potential harms.


Anthropic intends to offer Claude in India on a freemium model, with basic usage free and heavier usage requiring a monthly subscription. The launch could spark wider adoption of constitutional AI techniques, which embed ethics and values more firmly into AI systems.


"With Claude's India launch, we aim to show the world that AI can be safe and beneficial for all," said Amodei. “And as a powerful English-language AI system, we believe Claude has huge potential in India specifically."