Build with Gemini 2.5 — a cost-effective AI API with a 1M-token context window and native multimodality
Basic text generation:

```python
import google.generativeai as genai
import os

genai.configure(api_key=os.getenv('GEMINI_API_KEY'))

model = genai.GenerativeModel('gemini-2.5-flash')
response = model.generate_content('What is the context window of Gemini 2.5 Flash?')
print(response.text)
```

System instructions are set on the model constructor:

```python
model = genai.GenerativeModel(
    'gemini-2.5-flash',
    system_instruction='You are a Python expert. Always include code examples.'
)
```

For structured JSON output, set the response MIME type in the generation config:

```python
model = genai.GenerativeModel('gemini-2.5-flash')
response = model.generate_content(
    'Extract: John (28) from NYC.',
    generation_config=genai.GenerationConfig(
        response_mime_type='application/json'
    )
)
```

A free web IDE for prototyping is available, with a free tier of 60 requests per minute.
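With `response_mime_type='application/json'`, the model's reply text is a JSON string that can be parsed with the standard library. A minimal sketch, assuming the model returns a flat object for the prompt above (the actual field names are not guaranteed, since no response schema was supplied):

```python
import json

# Hypothetical response text for the extraction prompt above;
# real field names depend on what the model chooses to emit.
response_text = '{"name": "John", "age": 28, "city": "NYC"}'

data = json.loads(response_text)
print(data['name'], data['age'])  # → John 28
```

In practice you would pass `response.text` to `json.loads` instead of the hard-coded string.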
Compare APIs:
| Task | OpenAI | Anthropic | Gemini |
|---|---|---|---|
| Basic chat | chat.completions.create() | messages.create() | generate_content() |
| Streaming | stream=True | with .stream() | stream=True |
| System prompt | In messages | Separate param | Constructor param |
| JSON output | response_format | No native | response_schema |
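The system-prompt row above can be illustrated without any network calls. This sketch only shows where the instruction lives in each SDK's request shape; the surrounding client code and parameter plumbing are omitted:

```python
# OpenAI: the system prompt is an entry inside the messages list.
openai_messages = [
    {'role': 'system', 'content': 'Be concise.'},
    {'role': 'user', 'content': 'Hi'},
]

# Anthropic: the system prompt is a separate top-level parameter,
# alongside (not inside) the messages list.
anthropic_kwargs = {
    'system': 'Be concise.',
    'messages': [{'role': 'user', 'content': 'Hi'}],
}

# Gemini: the system prompt is a constructor parameter on GenerativeModel.
gemini_kwargs = {'system_instruction': 'Be concise.'}
```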