Version-Controlled Prompt Configs
Store prompt configurations with full versioning support. Switch model providers, A/B test prompts, and integrate with any application. Your code stays clean, prompts stay flexible.
Create Prompt Config
Define prompt template, model, and parameters via Dashboard or API
Integrate via API
Call the Prompts API from your application with a version number
Test New Versions
Create new versions and test in parallel without affecting production
Switch Instantly
Update version number in API call to deploy new prompt configuration
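The four steps above reduce to a single HTTP POST with a version identifier. A minimal TypeScript sketch using the built-in fetch API (the endpoint and x-hl-api-key header match the curl examples further down this page; the IDs and key are placeholders):

```typescript
// Shape of a prompt-run request, as used in the curl examples on this page.
interface PromptRunRequest {
  promptId: string;
  promptVersionId: string;
  replacements: Record<string, string>;
}

// Switching versions is just changing promptVersionId -- no redeploy needed.
function buildRunBody(
  promptId: string,
  promptVersionId: string,
  replacements: Record<string, string>,
): string {
  const body: PromptRunRequest = { promptId, promptVersionId, replacements };
  return JSON.stringify(body);
}

// Illustrative non-streaming call (Node 18+ or browser fetch).
async function runPrompt(apiKey: string, body: string): Promise<unknown> {
  const res = await fetch("https://api.hyperleapai.com/prompt-runs/run-sync", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-hl-api-key": apiKey },
    body,
  });
  return res.json();
}
```

Deploying a new configuration is a one-line change to the `promptVersionId` you pass in.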
Enterprise-Grade Prompt Management
Everything developers need for production AI applications. Versioning, provider switching, and zero-downtime updates.
Full Version Control
Maintain 10+ versions of prompt configurations. Test new versions without breaking production. Roll back instantly if needed.
- 10+ versions per prompt configuration
- Version-specific API calls
- Production-safe testing
- Instant rollback to any version
- Audit trail for all changes
Model Provider Switching
Switch between OpenAI, Anthropic, Google Gemini, and more without changing your application code. Just update the API configuration.
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3.5 Sonnet, Opus)
- Google (Gemini Pro, Ultra)
- Meta (Llama models)
- Switch providers in seconds
Developer-First Integration
RESTful API with TypeScript SDK available now. Python and C# SDKs coming soon. Complete documentation and code examples.
- RESTful API with API key authentication
- TypeScript/JavaScript SDK (available now)
- Python SDK (coming soon)
- C# SDK (coming soon)
- Comprehensive API docs
- Code examples and tutorials
Persistent Configuration Storage
Store prompt templates, system instructions, and parameters centrally. Your application logic stays clean and unchanged.
- Centralized prompt storage
- System instructions management
- Parameter configuration
- Template variables
- Application logic unchanged
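Conceptually, a stored template plus runtime replacements produce the final prompt. A minimal TypeScript sketch of that substitution, assuming {{name}}-style placeholders (the placeholder syntax here is an assumption for illustration; check the docs for the exact format your prompt configs use):

```typescript
// Fill {{name}} placeholders from a replacements map; unknown
// placeholders are left untouched.
function fillTemplate(
  template: string,
  replacements: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in replacements ? replacements[key] : match,
  );
}

const template = "Write a {{tone}} article about {{topic}}.";
const prompt = fillTemplate(template, {
  tone: "professional",
  topic: "AI in Healthcare",
});
// prompt === "Write a professional article about AI in Healthcare."
```

The template lives in the prompt config; only the replacements travel in each API call, which is what keeps application logic unchanged.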
Zero-Downtime Updates
Update prompts without redeploying your application. Test new versions in parallel, then switch production traffic instantly.
- Hot-swap prompt versions
- No application redeployment
- A/B testing support
- Gradual rollout capabilities
- Zero downtime guaranteed
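One way to drive a gradual rollout from the client side is deterministic user bucketing: hash each user ID into a bucket and route a fixed percentage of buckets to the new version. A TypeScript sketch (the version IDs are placeholders, and this routing logic is illustrative, not part of the API itself):

```typescript
// Deterministically map a user ID to a bucket in [0, buckets).
function bucketOf(userId: string, buckets = 100): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % buckets;
}

// Route rolloutPercent% of users to the new prompt version.
// The same user always gets the same version, so sessions are stable.
function pickVersion(userId: string, rolloutPercent: number): string {
  const NEW_VERSION = "version-id-v2"; // placeholder IDs
  const STABLE_VERSION = "version-id-v1";
  return bucketOf(userId) < rolloutPercent ? NEW_VERSION : STABLE_VERSION;
}
```

Ramping from 5% to 100%, or rolling back to 0%, is then a config change rather than a redeploy.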
Enterprise Security
API keys with granular permissions, rate limiting, and audit logs. Enterprise-grade security with encryption at rest and in transit.
- API key management
- Granular permission controls
- Rate limiting and quotas
- Detailed audit logs
- Enterprise security standards
Prompts API Methods
Three simple endpoints to run your prompts: one non-streaming endpoint for complete responses, and two streaming options for real-time output.
Non-Streaming
POST /prompt-runs/run-sync
Get the complete response in a single API call. Best for batch processing and when you don't need real-time streaming.
HTTP/2 Streaming
POST /prompt-runs/async
Stream responses using the HTTP/2 protocol. Get tokens as they're generated for a responsive user experience.
SSE Streaming
POST /prompt-runs/run-sse
Server-Sent Events streaming for real-time responses. Compatible with EventSource-style consumption in browsers.
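Note that the browser EventSource API only issues GET requests, so a POST SSE endpoint like this one is typically consumed with fetch and a ReadableStream instead. A TypeScript sketch that reads the stream and parses standard `data:` framing (endpoint and header as in the curl examples on this page):

```typescript
// Extract the payloads of "data:" lines from a raw SSE text chunk.
// Standard SSE framing: events separated by blank lines, payload
// lines prefixed with "data:".
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice(5).trimStart());
}

// Illustrative consumer: POST the run request, then read the body
// incrementally and handle each data payload as it arrives.
async function streamPrompt(apiKey: string, body: string): Promise<void> {
  const res = await fetch("https://api.hyperleapai.com/prompt-runs/run-sse", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-hl-api-key": apiKey },
    body,
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const data of parseSseData(decoder.decode(value, { stream: true }))) {
      process.stdout.write(data); // handle each token as it arrives
    }
  }
}
```

The exact event payload format is defined in the API documentation; the parsing above covers only the generic SSE wire format.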
Full API Reference
For complete details on request/response formats, parameters, error codes, and advanced usage, visit our comprehensive API documentation.
View Complete API Documentation
Simple REST API Integration
Just a simple HTTP POST request. Use curl, any HTTP library, or our optional SDKs—your choice.
- Version-Specific Calls: reference exact version numbers for reproducible results
- Variable Injection: pass replacements at runtime to populate prompt templates
- No Redeployment: update prompts without touching application code
# Non-streaming: Get complete response in one call
curl -X POST 'https://api.hyperleapai.com/prompt-runs/run-sync' \
  -H 'Content-Type: application/json' \
  -H 'x-hl-api-key: YOUR_API_KEY' \
  -d '{
    "promptId": "your-prompt-id",
    "promptVersionId": "version-id",
    "replacements": {
      "topic": "AI in Healthcare",
      "tone": "professional"
    }
  }'
# HTTP/2 Streaming: Get response as it's generated
curl -X POST 'https://api.hyperleapai.com/prompt-runs/async' \
  -H 'Content-Type: application/json' \
  -H 'x-hl-api-key: YOUR_API_KEY' \
  -d '{
    "promptId": "your-prompt-id",
    "promptVersionId": "version-id",
    "replacements": {"topic": "AI"}
  }'
# SSE Streaming: Server-Sent Events for real-time streaming
curl -X POST 'https://api.hyperleapai.com/prompt-runs/run-sse' \
  -H 'Content-Type: application/json' \
  -H 'x-hl-api-key: YOUR_API_KEY' \
  -d '{
    "promptId": "your-prompt-id",
    "promptVersionId": "version-id",
    "replacements": {"topic": "AI"}
  }'
Built for Production
From startups to enterprises, Prompts API powers mission-critical applications
Content Generation
Blog posts, emails, social media
Data Extraction
Parse invoices, contracts, receipts
Classification & Tagging
Sentiment analysis, topic detection
Text Transformation
Summarization, translation, rewriting
Why Developers Choose Prompts API
Separation of Concerns
Keep prompts separate from application logic. Marketing can tweak copy while developers focus on features.
Production Safety
Test new prompts in staging, roll out gradually, roll back instantly if issues arise. No code changes needed.
Provider Flexibility
Switch between GPT-4, Claude, Gemini without touching code. Optimize for cost, quality, or speed.
Frequently Asked Questions
Everything you need to know
Why use an API for prompts?
Decoupling prompts from code allows you to iterate on AI logic without redeploying your app. It also enables non-developers to manage prompts.
Does it add latency?
Our API adds negligible latency (<20ms) as it simply proxies the request to the underlying model provider with your configured template.
Can I A/B test prompts?
Yes, you can deploy different versions to different user segments by calling specific version IDs in your application logic.
Which models are supported?
We support all major models including GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Pro, Llama 3, and Mistral.
Is it secure?
Yes, we use enterprise-grade encryption. API keys have granular scopes, and we do not store your input/output data for training purposes.
Still have questions?
Contact our support team
Start Building with Prompts API
Version-controlled prompts, model switching, zero-downtime updates. Everything you need for production AI. Start with our free plan.