Anthropic
AI safety-focused company creating helpful, harmless, and honest AI systems through Constitutional AI and advanced safety research. Home to the Claude family of models.
Company Overview
Anthropic is an AI safety company focused on developing AI systems that are safe, beneficial, and understandable. Founded by former OpenAI researchers, the company pioneered Constitutional AI and has developed the Claude family of models with industry-leading safety measures.
AI Safety Principles
"AI systems that are safe, beneficial, and understandable"
Constitutional AI: Training AI to be helpful, harmless, and honest
AI Safety: Research into alignment and safety of advanced AI systems
Interpretability: Understanding how AI systems work internally
Responsible Scaling: Careful development and deployment of AI capabilities
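The Constitutional AI approach listed above centers on a critique-and-revise loop: the model drafts a response, critiques it against a written principle, and then revises. A minimal sketch of that loop follows; the `draft`, `critique`, and `revise` functions are stubs standing in for real model calls, and all names are illustrative rather than part of any Anthropic API:

```python
# Sketch of the Constitutional AI critique-and-revise loop.
# The three helper functions are stubs; in the real method each is an LLM call.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous activities.",
]

def draft(prompt: str) -> str:
    # Stub: a real system would sample an initial response from the model.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stub: the model is asked to critique its response against one principle.
    return f"Critique of '{response}' under: {principle}"

def revise(response: str, critique_text: str) -> str:
    # Stub: the model rewrites its response to address the critique.
    return response + " [revised]"

def constitutional_revision(prompt: str, constitution: list[str]) -> str:
    """One pass of the supervised CAI stage: draft once, then
    critique and revise against each principle in turn."""
    response = draft(prompt)
    for principle in constitution:
        feedback = critique(response, principle)
        response = revise(response, feedback)
    return response

print(constitutional_revision("How do locks work?", CONSTITUTION))
```

In the full method, the revised responses become fine-tuning data, so the model learns to produce principle-compliant answers directly rather than through repeated revision at inference time.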
Claude Model Family
Four models optimized for different use cases: the Claude 3 family spans three tiers, from highest performance (Opus) to fastest speed (Haiku), with the earlier Claude 2.1 still available
Claude 3 Opus
Pricing
$0.015/1K input tokens, $0.075/1K output tokens
Context Length
200K tokens
Key Capabilities
Most capable tier: suited to complex analysis, research, and multi-step reasoning tasks
Claude 3 Sonnet
Pricing
$0.003/1K input tokens, $0.015/1K output tokens
Context Length
200K tokens
Key Capabilities
Balanced intelligence and speed, suited to enterprise workloads and scaled deployments
Claude 3 Haiku
Pricing
$0.00025/1K input tokens, $0.00125/1K output tokens
Context Length
200K tokens
Key Capabilities
Fastest and most compact tier, built for near-instant responses and high-volume tasks
Claude 2.1
Pricing
$0.008/1K input tokens, $0.024/1K output tokens
Context Length
200K tokens
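The per-1K-token prices listed above translate directly into a cost estimate for any request: tokens divided by 1,000, multiplied by the rate. A small calculator using the listed rates (the dictionary keys are shorthand labels for illustration, not official model identifiers):

```python
# Cost estimator using the per-1K-token USD prices listed above.
PRICES = {
    "claude-3-opus":   {"input": 0.015,   "output": 0.075},
    "claude-3-sonnet": {"input": 0.003,   "output": 0.015},
    "claude-3-haiku":  {"input": 0.00025, "output": 0.00125},
    "claude-2.1":      {"input": 0.008,   "output": 0.024},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost: (tokens / 1000) * price-per-1K, input plus output."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Example: a 10K-token prompt with a 1K-token completion on each tier.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 10_000, 1_000):.5f}")
```

At these rates the same request costs about $0.225 on Opus, $0.045 on Sonnet, and $0.00375 on Haiku, which is why the tiers are framed as a performance/speed/cost trade-off.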