Rate Limits
ProxyOmega implements intelligent rate limiting to ensure fair usage and optimal performance for all users.
Connection Limits
Proxy Type | Concurrent Connections | Requests/Second | Monthly Bandwidth |
---|---|---|---|
Premium Unlimited | Based on plan (up to 3000) | Based on plan | Unlimited |
Budget Unlimited | Based on plan | Based on plan | Unlimited |
Platinum Residential | Unlimited | Unlimited | As purchased (GB) |
Mobile | Unlimited | Unlimited | As purchased (GB) |
IPv6 | Based on threads | Unlimited per thread | Unlimited |
Notes:
- Budget Unlimited is not suitable for streaming services. All unlimited plans are subject to fair usage policies.
- Premium Unlimited and Budget Unlimited limits are based on the number of threads and requests purchased with your plan.
- Actual limits may vary based on target websites and usage patterns. Contact support for specific requirements.
API Rate Limits
All API responses include rate limit information in headers:
- X-RateLimit-Limit - Maximum requests per hour
- X-RateLimit-Remaining - Remaining requests
- X-RateLimit-Reset - Unix timestamp when limit resets
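For example, a client can check these headers after each call and pause when the remaining quota reaches zero. The sketch below is illustrative only: the endpoint URL (https://api.example.com, as used in the example later on this page) is a placeholder and authentication is omitted.

import time
import requests

# Placeholder endpoint; substitute the real API URL
response = requests.get("https://api.example.com")

# Rate limit information returned in the response headers
limit = int(response.headers.get("X-RateLimit-Limit", "0"))
remaining = int(response.headers.get("X-RateLimit-Remaining", "0"))
reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))

print(f"{remaining}/{limit} requests left in the current window")

if remaining == 0:
    # Wait until the window resets (X-RateLimit-Reset is a Unix timestamp)
    time.sleep(max(reset_at - time.time(), 0))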
Proxy Connection Limits
Connection Limits by Plan
- Premium Unlimited: Up to 3,000 concurrent connections, depending on plan
- Budget Unlimited: Based on the threads and requests purchased with your plan
- Platinum Residential: Unlimited concurrent connections
- Mobile: Unlimited concurrent connections
- IPv6: Based on purchased threads
Note: Management API endpoints are currently under development.
Handling Rate Limits
429 Too Many Requests
When you exceed rate limits, you'll receive a 429 status code:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 3600
{
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please retry after 3600 seconds"
}
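One way to handle this is to honor the Retry-After header when it is present and fall back to exponential backoff otherwise. The following is a minimal sketch, again using a placeholder URL:

import time
import requests

def get_with_retry(url, max_retries=5, proxies=None):
    """Retry on 429 responses, honoring Retry-After when the server provides it."""
    response = None
    for attempt in range(max_retries):
        response = requests.get(url, proxies=proxies)
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        # Prefer the server-supplied delay; otherwise back off exponentially
        wait = int(retry_after) if retry_after else 2 ** attempt
        time.sleep(wait)
    return response

response = get_with_retry("https://api.example.com")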
Best Practices
- Implement exponential backoff - Gradually increase wait time between retries
- Cache responses - Reduce unnecessary API calls (see the caching sketch after this list)
- Use bulk operations - Batch multiple operations when possible
- Monitor rate limit headers - Proactively slow down before hitting limits
- Use connection pooling - Reuse connections for better performance
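To illustrate the caching point above, here is a rough in-memory cache with a fixed time-to-live. The TTL value and cache policy are assumptions to tune for your use case, not ProxyOmega requirements.

import time
import requests

_cache = {}        # url -> (fetched_at, response body)
CACHE_TTL = 300    # seconds; tune to how fresh the data needs to be

def cached_get(url, proxies=None):
    """Return a cached body while it is still fresh, otherwise refetch."""
    now = time.time()
    if url in _cache:
        fetched_at, body = _cache[url]
        if now - fetched_at < CACHE_TTL:
            return body
    response = requests.get(url, proxies=proxies)
    response.raise_for_status()
    _cache[url] = (now, response.text)
    return response.text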
Concurrent Connection Management
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Create session with connection pooling
session = requests.Session()

# Configure retry strategy
retry_strategy = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504]
)

# Set connection pool size based on your proxy type limits
adapter = HTTPAdapter(
    max_retries=retry_strategy,
    pool_connections=100,  # Number of connection pools
    pool_maxsize=100       # Maximum connections per pool
)
session.mount("http://", adapter)
session.mount("https://", adapter)

# Configure proxy
proxies = {
    'http': 'http://user:[email protected]:10000',
    'https': 'http://user:[email protected]:10000'
}

# Make requests with automatic retry
response = session.get('https://api.example.com', proxies=proxies)
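As a rule of thumb, keep pool_maxsize at or below the concurrent connection limit of your plan (for example, up to 3,000 concurrent connections on Premium Unlimited) so the client does not try to open more connections than the proxy will accept.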
Enterprise Limits
Enterprise customers can request custom rate limits. Contact our sales team at [email protected] to discuss your requirements.
- Custom concurrent connection limits
- Dedicated infrastructure
- Priority support
- SLA guarantees