📡 Streaming
Event streaming platform. 100× faster than Kafka. No ZooKeeper required.
API Endpoint
https://api.44s.io:5470 (HTTP REST API)
Concepts
| Concept | Description |
|---|---|
| Topic | A named stream of messages (like a Kafka topic) |
| Partition | Ordered sequence within a topic for parallelism; a message's key selects its partition (see the sketch below this table) |
| Message | Key-value pair with optional headers |
| Consumer Group | Group of consumers sharing the workload |
| Offset | Position in a partition |
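The key-to-partition mapping is not specified in this document; if 44s hashes keys the way Kafka does, the routing looks roughly like the following (a hypothetical sketch, not the platform's actual partitioner):

import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    # Hypothetical key-hash partitioner: hash the key and take it modulo
    # the partition count, so equal keys always land in the same partition
    # and per-key ordering is preserved.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

partition_for("user-123", 4)  # same key -> same partition, every time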
Quick Start
Create a Topic
curl -X POST https://api.44s.io:5470/topics \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "name": "user-events",
    "partitions": 4,
    "replication_factor": 1
  }'

# Response
{
  "name": "user-events",
  "partitions": 4,
  "replication_factor": 1
}
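The same request from Python, a minimal sketch with requests (the API key is a placeholder):

import requests

resp = requests.post(
    "https://api.44s.io:5470/topics",
    headers={"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"},
    json={"name": "user-events", "partitions": 4, "replication_factor": 1},
)
resp.raise_for_status()
print(resp.json())  # {"name": "user-events", "partitions": 4, "replication_factor": 1}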
Produce Messages
curl -X POST https://api.44s.io:5470/topics/user-events/produce \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "messages": [
      {"key": "user-123", "value": "{\"event\":\"login\",\"timestamp\":1705334400}"},
      {"key": "user-456", "value": "{\"event\":\"purchase\",\"amount\":99.99}"}
    ]
  }'

# Response
{
  "results": [
    {"partition": 2, "offset": 0},
    {"partition": 1, "offset": 0}
  ]
}
Consume Messages
curl -X POST https://api.44s.io:5470/topics/user-events/consume \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "group_id": "analytics-service",
    "max_messages": 100
  }'

# Response
{
  "messages": [
    {
      "partition": 2,
      "offset": 0,
      "key": "user-123",
      "value": "{\"event\":\"login\",\"timestamp\":1705334400}",
      "timestamp": 1705334401234,
      "headers": []
    }
  ]
}
Python Client
import json
import time

import requests

BASE_URL = "https://api.44s.io:5470"
HEADERS = {
    "Content-Type": "application/json",
    "X-API-Key": "44s_your_api_key",
}

# Producer
def produce(topic: str, messages: list) -> dict:
    response = requests.post(
        f"{BASE_URL}/topics/{topic}/produce",
        headers=HEADERS,
        json={"messages": messages},
    )
    response.raise_for_status()
    return response.json()

# Produce events
produce("user-events", [
    {"key": "user-1", "value": json.dumps({"action": "click", "page": "/home"})},
    {"key": "user-2", "value": json.dumps({"action": "purchase", "item": "widget"})},
])

# Consumer
def consume(topic: str, group_id: str, max_messages: int = 100) -> dict:
    response = requests.post(
        f"{BASE_URL}/topics/{topic}/consume",
        headers=HEADERS,
        json={"group_id": group_id, "max_messages": max_messages},
    )
    response.raise_for_status()
    return response.json()

# Consume, process, then commit offsets after each batch
while True:
    batch = consume("user-events", "my-consumer-group")
    if not batch["messages"]:
        time.sleep(0.1)  # back off instead of busy-looping on an empty topic
        continue
    for msg in batch["messages"]:
        event = json.loads(msg["value"])
        print(f"Processing: {event}")
    # Commit the highest offset seen per partition, +1 so the next
    # consume resumes after the last processed message
    next_offsets: dict = {}
    for m in batch["messages"]:
        next_offsets[m["partition"]] = max(
            next_offsets.get(m["partition"], 0), m["offset"] + 1
        )
    requests.post(f"{BASE_URL}/topics/user-events/commit", headers=HEADERS, json={
        "group_id": "my-consumer-group",
        "offsets": [{"partition": p, "offset": o} for p, o in next_offsets.items()],
    })
API Reference
Topics
| Method | Path | Description |
|---|---|---|
| GET | /topics | List all topics |
| POST | /topics | Create a topic |
| GET | /topics/{name} | Get topic info |
| DELETE | /topics/{name} | Delete a topic |
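For the GET endpoints, a minimal Python sketch (the response shapes are not documented in this section, so the prints are illustrative):

import requests

BASE_URL = "https://api.44s.io:5470"
HEADERS = {"X-API-Key": "44s_your_api_key"}

# List all topics
print(requests.get(f"{BASE_URL}/topics", headers=HEADERS).json())

# Get info for one topic
print(requests.get(f"{BASE_URL}/topics/user-events", headers=HEADERS).json())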
Produce/Consume
POST /topics/{name}/produce
Produce messages to a topic.
# Request body
{
  "messages": [
    {
      "key": "string",      // Optional partition key
      "value": "string",    // Message payload
      "headers": [          // Optional headers
        ["key", "value"]
      ]
    }
  ]
}
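Headers are plain [key, value] pairs; for instance, carrying a trace ID alongside the payload (a sketch; header semantics beyond pass-through are not documented here):

import json
import requests

requests.post(
    "https://api.44s.io:5470/topics/user-events/produce",
    headers={"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"},
    json={"messages": [{
        "key": "user-123",
        "value": json.dumps({"event": "login"}),
        "headers": [["trace-id", "abc-123"]],  # optional [key, value] pairs
    }]},
)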
POST /topics/{name}/consume
Consume messages from a topic.
# Request body
{
  "group_id": "string",   // Consumer group ID
  "partition": 0,         // Optional specific partition
  "offset": 0,            // Optional start offset
  "max_messages": 100     // Max messages to return
}
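Because partition and offset are optional, a caller can pin a read to a single partition and start from an explicit offset, for example to re-read from the beginning (a sketch; the group name is a placeholder):

import requests

resp = requests.post(
    "https://api.44s.io:5470/topics/user-events/consume",
    headers={"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"},
    json={
        "group_id": "replay-tool",  # hypothetical group name
        "partition": 2,             # read only this partition
        "offset": 0,                # start at the first message
        "max_messages": 100,
    },
)
print(resp.json()["messages"])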
POST /topics/{name}/commit
Commit consumer offsets.
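The commit body is not spelled out above; mirroring the call in the Python Client section, it takes the group ID and, per partition, the next offset to read:

# Request body
{
  "group_id": "string",             // Consumer group ID
  "offsets": [                      // One entry per partition
    {"partition": 0, "offset": 1}   // Next offset to read (last processed + 1)
  ]
}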
Use Cases
Event Sourcing
# Store all events as an immutable log
produce("orders", [
    {"key": "order-123", "value": json.dumps({
        "type": "OrderCreated",
        "data": {"customer": "Alice", "total": 99.99}
    })},
    {"key": "order-123", "value": json.dumps({
        "type": "OrderPaid",
        "data": {"payment_id": "pay_abc123"}
    })},
    {"key": "order-123", "value": json.dumps({
        "type": "OrderShipped",
        "data": {"tracking": "1Z999AA10123456784"}
    })}
])
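Current state is then recovered by replaying the log from offset 0 and folding the events back together; a minimal sketch (the fold is illustrative, and rebuild-{order_id} is a throwaway group name):

import json
import requests

BASE_URL = "https://api.44s.io:5470"
HEADERS = {"Content-Type": "application/json", "X-API-Key": "44s_your_api_key"}

def rebuild_order(order_id: str) -> dict:
    # Read the orders log from the start with a fresh consumer group
    resp = requests.post(
        f"{BASE_URL}/topics/orders/consume",
        headers=HEADERS,
        json={"group_id": f"rebuild-{order_id}", "offset": 0, "max_messages": 1000},
    )
    state: dict = {}
    for msg in resp.json()["messages"]:
        if msg["key"] != order_id:
            continue  # keep only this order's events
        event = json.loads(msg["value"])
        state[event["type"]] = event["data"]  # illustrative fold: keep each event's data
    return state

rebuild_order("order-123")
# -> {"OrderCreated": {...}, "OrderPaid": {...}, "OrderShipped": {...}}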
Real-time Analytics
# Stream page views for real-time dashboards
session_id = "sess-789"  # placeholder: e.g. the visitor's cookie value

produce("pageviews", [
    {"key": session_id, "value": json.dumps({
        "page": "/products/widget",
        "referrer": "google.com",
        "timestamp": time.time()
    })}
])

# Multiple consumers process the stream in parallel:
# - Analytics aggregator
# - Real-time dashboard
# - Anomaly detector
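Assuming Kafka-style group semantics, each of those consumers isolates itself with its own group_id: distinct groups each receive the full stream, while instances sharing a group_id split the partitions between them. Using the consume helper from the Python Client section:

# Distinct group IDs -> each service sees every page view
aggregates = consume("pageviews", "analytics-aggregator")
dashboard = consume("pageviews", "realtime-dashboard")
alerts = consume("pageviews", "anomaly-detector")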
Performance
| Metric | 44s Streaming | Kafka |
|---|---|---|
| Produce latency | <100μs | ~5ms |
| Throughput (single partition) | 1M+ msg/sec | ~100K msg/sec |
| Consumer lag | <1ms | ~10-100ms |
| Setup complexity | Zero (no ZooKeeper) | High |