⚙️ Serverless
Deploy and invoke functions with near-zero cold starts. 40,000× faster spin-up than Lambda.
API Endpoint
api.44s.io:9005
HTTP REST API
⚡ Why 40,000× Faster?
Lambda cold starts run 100-500ms. 44s adds roughly 5 microseconds of cold-start overhead, about 40,000× less at a typical ~200ms Lambda cold start. The lock-free architecture means there is no container to spin up.
Quick Start
Deploy a Function
curl -X POST https://api.44s.io:9005/functions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "name": "hello-world",
    "runtime": "node18",
    "handler": "index.handler",
    "code": "ZXhwb3J0cy5oYW5kbGVyID0gYXN5bmMgKGV2ZW50KSA9PiB7CiAgcmV0dXJuIHsKICAgIHN0YXR1c0NvZGU6IDIwMCwKICAgIGJvZHk6IEpTT04uc3RyaW5naWZ5KHsgbWVzc2FnZTogJ0hlbGxvIGZyb20gNDRzIScgfSkKICB9Owp9Ow==",
    "memory_mb": 128,
    "timeout_secs": 30
  }'

# Response
{"id": "func_abc123"}
Note: code is base64-encoded. The example above decodes to:
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from 44s!' })
  };
};
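If the handler lives in a local file, the base64 string can be produced programmatically. A minimal sketch in Python; index.js is just an example path:
import base64

# Read the handler source and encode it for the "code" field of the deploy request.
# "index.js" is an example path; use whatever file holds your handler.
with open("index.js", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

print(encoded)  # paste this into the deploy request's "code" field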
Invoke a Function
curl -X POST https://api.44s.io:9005/functions/hello-world/invoke \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{"payload": "{\"name\": \"World\"}"}'

# Response
{
  "id": "inv_xyz789",
  "function_id": "hello-world",
  "started_at": 1705334401,
  "duration_ms": 2,
  "status": "Success",
  "response": "{\"statusCode\":200,\"body\":\"{\\\"message\\\":\\\"Hello from 44s!\\\"}\"}"
}
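The response field holds the handler's return value serialized as a JSON string, and the body inside it is itself a JSON string, so a client decodes twice. A client-side sketch in Python based on the request above (error handling omitted):
import json
import requests

resp = requests.post(
    "https://api.44s.io:9005/functions/hello-world/invoke",
    headers={"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"},
    json={"payload": json.dumps({"name": "World"})}
)
invocation = resp.json()                     # the invocation record shown above
result = json.loads(invocation["response"])  # the handler's return value
body = json.loads(result["body"])            # the JSON body produced by the handler
print(body["message"])                       # -> Hello from 44s!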
List Functions
curl https://api.44s.io:9005/functions \
  -H "X-API-Key: 44s_your_api_key"

# Response
[
  {
    "id": "func_abc123",
    "name": "hello-world",
    "runtime": "node18",
    "memory_mb": 128,
    "timeout_secs": 30,
    "created_at": 1705334400
  }
]
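The same listing from Python, printing one line per function using only the fields shown in the response above:
import requests

resp = requests.get(
    "https://api.44s.io:9005/functions",
    headers={"X-API-Key": "44s_your_api_key"}
)
for fn in resp.json():
    # Each entry carries the fields shown in the list response above.
    print(f'{fn["name"]}: runtime={fn["runtime"]}, '
          f'memory={fn["memory_mb"]} MB, timeout={fn["timeout_secs"]}s')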
Supported Runtimes
| Runtime | Value | Description |
|---|---|---|
| Node.js 18 | node18 | JavaScript/TypeScript functions |
| Python 3.11 | python311 | Python functions |
| Rust | rust | Compiled Rust (WASM) |
| Go | go | Compiled Go |
| Custom | custom:image | Custom container image |
API Reference
GET /functions
List all your functions
POST /functions
Deploy a new function
# Request body
{
  "name": "string",            // Function name (unique per account)
  "runtime": "node18",         // Runtime environment
  "handler": "index.handler",  // Entry point
  "code": "base64...",         // Base64-encoded source code
  "memory_mb": 128,            // Memory allocation in MB (optional, default 128)
  "timeout_secs": 30,          // Timeout in seconds (optional, default 30)
  "env_vars": {                // Environment variables (optional)
    "DB_URL": "..."
  }
}
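If a function needs configuration, env_vars can carry it at deploy time. A minimal sketch in Python; the function name show-config is just an example, and reading the value via os.environ inside the handler is an assumption about how 44s exposes env_vars to the python311 runtime:
import base64
import requests

# Handler source; os.environ is an assumption about how env_vars are exposed.
code = """
import json
import os

def handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps({'db_url': os.environ.get('DB_URL', 'not set')})
    }
"""

requests.post(
    "https://api.44s.io:9005/functions",
    headers={"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"},
    json={
        "name": "show-config",
        "runtime": "python311",
        "handler": "main.handler",
        "code": base64.b64encode(code.encode()).decode(),
        "env_vars": {"DB_URL": "postgres://example.internal:5432/app"}
    }
)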
GET /functions/{name}
Get function details
DELETE /functions/{name}
Delete a function
POST /functions/{name}/invoke
Invoke a function
# Request body
{
  "payload": "{...}"  // JSON string passed to your function
}
GET /functions/{name}/stats
Get function statistics
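The stats endpoint is called like the others; its response fields aren't shown here, so this sketch just prints whatever JSON the API returns for the hello-world function deployed earlier:
import requests

resp = requests.get(
    "https://api.44s.io:9005/functions/hello-world/stats",
    headers={"X-API-Key": "44s_your_api_key"}
)
print(resp.json())  # stats fields are not documented above, so print them as-is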
Python Deployment Example
import base64
import requests

BASE_URL = "https://api.44s.io:9005"
HEADERS = {
    "Content-Type": "application/json",
    "X-API-Key": "44s_your_api_key"
}

# Your Python function code
code = """
import json

def handler(event, context):
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }
"""

# Deploy
response = requests.post(f"{BASE_URL}/functions", headers=HEADERS, json={
    "name": "greet",
    "runtime": "python311",
    "handler": "main.handler",
    "code": base64.b64encode(code.encode()).decode()
})
print(f"Deployed: {response.json()}")

# Invoke
response = requests.post(f"{BASE_URL}/functions/greet/invoke", headers=HEADERS, json={
    "payload": '{"name": "Alice"}'
})
print(f"Response: {response.json()}")
Use Cases
Webhooks
// Handle Stripe webhooks with zero cold start
// (stripe, secret, and fulfillOrder come from your own application code)
exports.handler = async (event) => {
  const sig = event.headers['stripe-signature'];
  const payload = event.body;
  // Verify the signature and process the payment event instantly
  const stripeEvent = stripe.webhooks.constructEvent(payload, sig, secret);
  if (stripeEvent.type === 'payment_intent.succeeded') {
    await fulfillOrder(stripeEvent.data.object);
  }
  return { statusCode: 200 };
};
Data Processing
# Process streaming data in real-time
import json

def handler(event, context):
    records = event['records']
    for record in records:
        # Process each record with microsecond latency
        data = json.loads(record['value'])
        enriched = enrich_data(data)  # enrich_data and store_result are your own helpers
        store_result(enriched)
    return {'processed': len(records)}
Performance Comparison
| Metric | 44s Serverless | AWS Lambda |
|---|---|---|
| Cold start | ~5μs overhead | 100-500ms |
| Warm invocation | <1ms | ~10-50ms |
| Concurrent invocations | Unlimited (lock-free) | 1000 default limit |
| Relative spin-up speed | ~40,000× | 1× (baseline) |
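To sanity-check the warm-invocation numbers from your own network location, a simple timing loop against the hello-world function works; note it measures end-to-end round trips including network latency, so it will read higher than the platform-side duration_ms:
import json
import time

import requests

URL = "https://api.44s.io:9005/functions/hello-world/invoke"
HEADERS = {"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"}

timings = []
for _ in range(20):
    start = time.perf_counter()
    requests.post(URL, headers=HEADERS, json={"payload": json.dumps({"name": "bench"})})
    timings.append((time.perf_counter() - start) * 1000)

# Round-trip time includes network latency, unlike the duration_ms reported
# in each invocation response.
print(f"median round trip: {sorted(timings)[len(timings) // 2]:.1f} ms")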