Claude API Cheatsheet

Quick reference for Anthropic's Claude API

⚡ Quick Start

```python
import anthropic

client = anthropic.Anthropic(
    api_key="your-api-key"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)
print(message.content[0].text)
```

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: 'your-api-key' });

const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello, Claude!' }
  ]
});
console.log(message.content[0].text);
```

```bash
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: your-api-key" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, Claude!"}
    ]
  }'
```

🤖 Available Models

| Model | ID | Context | Best For |
|-------|----|---------|----------|
| Claude Opus 4 | claude-opus-4-20250514 | 200K | Complex reasoning, coding |
| Claude Sonnet 4 | claude-sonnet-4-20250514 | 200K | Balanced performance |
| Claude Haiku 3.5 | claude-3-5-haiku-20241022 | 200K | Speed, simple tasks |

Tip: Use claude-sonnet-4-20250514 as your default. It balances quality and cost well for most use cases.
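
One way to act on this tip is to centralize model choice in a single lookup. This is only an illustrative sketch; the task categories and the `MODEL_BY_TASK` / `pick_model` names below are hypothetical, not part of the API.

```python
# Hypothetical mapping from task type to model ID; adjust to your own workloads.
MODEL_BY_TASK = {
    "complex": "claude-opus-4-20250514",     # deep reasoning, hard coding tasks
    "default": "claude-sonnet-4-20250514",   # balanced quality/cost
    "fast":    "claude-3-5-haiku-20241022",  # latency-sensitive, simple tasks
}

def pick_model(task: str = "default") -> str:
    """Return a model ID for the given task type, falling back to the default."""
    return MODEL_BY_TASK.get(task, MODEL_BY_TASK["default"])
```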

โš™๏ธ Key Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model | string | Yes | Model ID to use |
| max_tokens | integer | Yes | Maximum tokens in response |
| messages | array | Yes | Conversation messages |
| system | string | No | System prompt |
| temperature | float | No | 0-1, randomness (default: 1) |
| top_p | float | No | 0-1, nucleus sampling |
| stream | boolean | No | Enable streaming response |
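
As an illustration, here is a request that sets optional parameters alongside the required ones. The temperature value and prompt text are arbitrary examples, and it's common practice to tune either `temperature` or `top_p`, not both.

```python
# Only model, max_tokens, and messages are required; the rest are optional.
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    temperature=0.2,  # lower = more deterministic output (example value)
    system="Answer concisely.",
    messages=[{"role": "user", "content": "Summarize what an API key is."}],
)
print(message.content[0].text)
```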

๐Ÿ“ System Prompts

Set Claude's behavior and context at the start of the conversation.

```python
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a senior Python developer. Write clean, well-documented code with type hints. Always explain your reasoning before showing code.",
    messages=[
        {"role": "user", "content": "Write a function to validate emails"}
    ]
)
```

💬 Multi-turn Conversations

```python
messages = [
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a high-level programming language..."},
    {"role": "user", "content": "Show me a simple example"}
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=messages
)
```

Note: Messages must alternate between user and assistant roles. Always start with a user message.
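
A simple way to keep the alternation correct is to append the assistant's reply to your history before sending the next user turn. The `ask` helper below is only a sketch, not part of the SDK.

```python
# Hypothetical helper: maintain a user/assistant-alternating history across turns.
conversation = []

def ask(client, text: str) -> str:
    conversation.append({"role": "user", "content": text})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=conversation,
    )
    reply = response.content[0].text
    # Append the assistant turn so the next user message keeps roles alternating.
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(ask(client, "What is Python?"))
print(ask(client, "Show me a simple example"))
```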

🌊 Streaming Responses

```python
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
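
If you also need the fully assembled response after streaming finishes, the Python SDK's stream helper exposes a final-message accessor. A sketch (verify `get_final_message()` against your SDK version):

```python
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
    # After the stream completes, retrieve the assembled Message object,
    # e.g. to inspect token usage or the stop reason.
    final = stream.get_final_message()

print()
print(final.usage)
```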

๐Ÿ‘๏ธ Vision (Image Input)

```python
import base64

# Read and encode image
with open("image.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": image_data
                }
            },
            {"type": "text", "text": "What's in this image?"}
        ]
    }]
)
```
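
The `media_type` should match the actual image format (commonly supported formats include JPEG, PNG, GIF, and WebP). As a small convenience, you can derive it from the filename; the `encode_image` helper below is illustrative, not part of the SDK.

```python
import base64
import mimetypes

def encode_image(path: str) -> dict:
    """Illustrative helper: build a base64 image content block from a local file."""
    media_type, _ = mimetypes.guess_type(path)  # e.g. "image/png", "image/jpeg"
    with open(path, "rb") as f:
        data = base64.standard_b64encode(f.read()).decode("utf-8")
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }
```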

โš ๏ธ Error Handling

```python
import anthropic

try:
    message = client.messages.create(...)
except anthropic.APIConnectionError:
    print("Failed to connect to API")
except anthropic.RateLimitError:
    print("Rate limited - wait and retry")
except anthropic.APIStatusError as e:
    print(f"API error: {e.status_code}")
```

| Error | Code | Cause |
|-------|------|-------|
| Rate Limit | 429 | Too many requests |
| Invalid Request | 400 | Bad parameters |
| Authentication | 401 | Invalid API key |
| Overloaded | 529 | API temporarily overloaded |
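
A common pattern for the retryable errors (429 and 529) is exponential backoff with jitter. The `create_with_backoff` wrapper below is a minimal sketch, not an official recipe; the SDK also has its own configurable retry behavior you may prefer.

```python
import random
import time

import anthropic

def create_with_backoff(client, max_attempts: int = 5, **kwargs):
    """Illustrative wrapper: retry rate-limit (429) and overloaded (529) errors
    with exponential backoff plus jitter; other errors are re-raised immediately."""
    for attempt in range(max_attempts):
        try:
            return client.messages.create(**kwargs)
        except anthropic.RateLimitError:
            pass  # 429: retryable
        except anthropic.APIStatusError as e:
            if e.status_code != 529:
                raise  # non-retryable status
        time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Exhausted retries for Claude API request")
```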

✅ Best Practices