⚡ Cache
Redis-compatible in-memory cache. Up to 450× faster than Redis under high contention.
Connection
api.44s.io:6379
Protocol: Redis (RESP)
Quick Start
Using redis-cli
redis-cli -h api.44s.io -p 6379
# Authenticate with your API key
> AUTH 44s_your_api_key
OK
# Basic operations
> SET user:1 '{"name":"Alice","score":100}'
OK
> GET user:1
"{\"name\":\"Alice\",\"score\":100}"
> INCR counter
(integer) 1
> EXPIRE user:1 3600
(integer) 1
> TTL user:1
(integer) 3599
Python
import redis
# Connect
r = redis.Redis(
    host='api.44s.io',
    port=6379,
    password='44s_your_api_key',
    decode_responses=True
)
# String operations
r.set('user:1', 'Alice')
print(r.get('user:1')) # 'Alice'
# Hash operations
r.hset('user:2', mapping={'name': 'Bob', 'score': 150})
print(r.hgetall('user:2')) # {'name': 'Bob', 'score': '150'}
# Lists
r.rpush('queue', 'task1', 'task2', 'task3')
print(r.lpop('queue')) # 'task1'
# Sets
r.sadd('tags', 'python', 'redis', 'fast')
print(r.smembers('tags')) # {'python', 'redis', 'fast'}
Node.js
const Redis = require('ioredis');

const redis = new Redis({
  host: 'api.44s.io',
  port: 6379,
  password: '44s_your_api_key'
});

async function main() {
  // Basic operations
  await redis.set('foo', 'bar');
  const value = await redis.get('foo');
  console.log(value); // 'bar'

  // Pipelining for bulk operations
  const pipeline = redis.pipeline();
  for (let i = 0; i < 1000; i++) {
    pipeline.set(`key:${i}`, `value:${i}`);
  }
  await pipeline.exec();

  // Pub/Sub: subscribe on a dedicated connection
  const sub = redis.duplicate();
  await sub.subscribe('notifications');
  sub.on('message', (channel, message) => {
    console.log(`Received: ${message}`);
  });
  await redis.publish('notifications', 'Hello!');
}

main().catch(console.error);
Go
package main
import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()

    rdb := redis.NewClient(&redis.Options{
        Addr:     "api.44s.io:6379",
        Password: "44s_your_api_key",
        DB:       0,
    })

    // SET and GET
    rdb.Set(ctx, "key", "value", 0)
    val, _ := rdb.Get(ctx, "key").Result()
    fmt.Println(val) // "value"

    // Atomic increment
    rdb.Incr(ctx, "counter")
}
Supported Commands
String Commands
| Command | Description |
|---|---|
| GET key | Get the value of a key |
| SET key value [EX s] [PX ms] | Set a key with optional expiration |
| MGET key [key ...] | Get multiple keys |
| MSET key value [key value ...] | Set multiple keys |
| INCR key | Increment integer value |
| DECR key | Decrement integer value |
| INCRBY key amount | Increment by specific amount |
| APPEND key value | Append to string |
| STRLEN key | Get string length |
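As a quick illustration of these commands through redis-py, here is a minimal sketch, assuming r is the connected client from the Python example above; key names such as greeting and page:views are placeholders:

```python
# String commands via redis-py (r is the client from the Quick Start above)
r.set('greeting', 'hello', ex=60)   # SET greeting hello EX 60
print(r.get('greeting'))            # 'hello'
r.mset({'a': 1, 'b': 2})            # MSET a 1 b 2
print(r.mget('a', 'b'))             # ['1', '2']
r.incrby('page:views', 5)           # INCRBY page:views 5
r.append('greeting', ', world')     # APPEND greeting ", world"
print(r.strlen('greeting'))         # 12
```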
Hash Commands
| Command | Description |
|---|---|
| HGET key field | Get hash field |
| HSET key field value | Set hash field |
| HMGET key field [field ...] | Get multiple fields |
| HMSET key field value [...] | Set multiple fields |
| HGETALL key | Get all fields and values |
| HINCRBY key field amount | Increment field value |
| HDEL key field [field ...] | Delete fields |
| HLEN key | Get number of fields |
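For example, an object can live in a single hash while one numeric field is updated atomically. A short sketch, again assuming r is the redis-py client from the Python example above; user:42 is a placeholder key:

```python
# Store an object as a hash and update one field atomically
r.hset('user:42', mapping={'name': 'Alice', 'score': 100})
r.hincrby('user:42', 'score', 25)           # HINCRBY user:42 score 25
print(r.hmget('user:42', 'name', 'score'))  # ['Alice', '125']
r.hdel('user:42', 'score')                  # HDEL user:42 score
print(r.hlen('user:42'))                    # 1 field left
```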
List Commands
| Command | Description |
|---|---|
| LPUSH key value [value ...] | Push to head |
| RPUSH key value [value ...] | Push to tail |
| LPOP key | Pop from head |
| RPOP key | Pop from tail |
| LRANGE key start stop | Get range of elements |
| LLEN key | Get list length |
| LINDEX key index | Get element by index |
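Lists work well as a simple FIFO queue, extending the queue snippet from the Quick Start. A minimal sketch, assuming r is the redis-py client from above; the jobs key is a placeholder:

```python
# Simple FIFO queue: RPUSH to enqueue, LPOP to dequeue
r.rpush('jobs', 'job1', 'job2', 'job3')
print(r.llen('jobs'))            # 3
print(r.lpop('jobs'))            # 'job1'
print(r.lrange('jobs', 0, -1))   # ['job2', 'job3']
print(r.lindex('jobs', -1))      # 'job3' (last element)
```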
Set Commands
| Command | Description |
|---|---|
| SADD key member [member ...] | Add members |
| SREM key member [member ...] | Remove members |
| SMEMBERS key | Get all members |
| SISMEMBER key member | Check membership |
| SCARD key | Get set size |
| SUNION key [key ...] | Union of sets |
| SINTER key [key ...] | Intersection of sets |
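Sets can track unique members (for example, users seen per day) and be combined with SINTER and SUNION. A small sketch, assuming r is the redis-py client from above; key and member names are placeholders:

```python
# Track unique members per day and combine sets
r.sadd('online:today', 'alice', 'bob')
r.sadd('online:yesterday', 'bob', 'carol')
print(r.sismember('online:today', 'alice'))          # True
print(r.scard('online:today'))                       # 2
print(r.sinter('online:today', 'online:yesterday'))  # {'bob'}
print(r.sunion('online:today', 'online:yesterday'))  # {'alice', 'bob', 'carol'}
```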
Key Commands
| Command | Description |
|---|---|
| DEL key [key ...] | Delete keys |
| EXISTS key [key ...] | Check if keys exist |
| EXPIRE key seconds | Set expiration |
| TTL key | Get time to live |
| KEYS pattern | Find keys by pattern |
| SCAN cursor [MATCH pattern] | Iterate keys |
| TYPE key | Get key type |
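KEYS matches the whole keyspace in a single call, so SCAN is generally the safer choice for large datasets. A small sketch using redis-py's scan_iter helper, assuming r is the client from the Python example above; the user:* pattern and 1-hour TTL are illustrative:

```python
# Iterate matching keys incrementally with SCAN instead of one big KEYS call
for key in r.scan_iter(match='user:*', count=100):
    r.expire(key, 3600)      # give each match a 1-hour TTL

print(r.exists('user:42'))   # 1 if the key exists, 0 otherwise
print(r.type('user:42'))     # e.g. 'hash'
```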
Performance
44s Cache's lock-free architecture makes it up to 450× faster than Redis under high contention.
| Metric | 44s Cache | Redis |
|---|---|---|
| Throughput (96 cores) | 50M+ ops/sec | ~100K ops/sec |
| P99 Latency | <10μs | ~100μs |
| Scaling | Linear with cores | Single-threaded |
See benchmarks for methodology.
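If you want to sanity-check throughput from your own environment, here is a minimal client-side sketch using redis-py pipelining, assuming r is the client from the Python example above; results depend heavily on your network and client, so treat this as a rough number rather than a benchmark:

```python
import time

# Rough client-side check: time 100,000 pipelined SETs.
# Assumes r is the redis-py client from the Quick Start above.
N = 100_000
pipe = r.pipeline(transaction=False)
for i in range(N):
    pipe.set(f'bench:{i}', i)

start = time.perf_counter()
pipe.execute()
elapsed = time.perf_counter() - start
print(f'{N / elapsed:,.0f} ops/sec from this client')
```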
Best Practices
Use Pipelining
Batch multiple commands to reduce round-trips:
# Python
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute() # Single round-trip!
Set Appropriate TTLs
Always set expiration on keys to prevent memory bloat:
r.set('session:abc123', data, ex=3600) # Expires in 1 hour
Use Hashes for Objects
Store related data in hashes instead of individual keys:
# Good: Single hash
r.hset('user:1', mapping={'name': 'Alice', 'email': 'alice@example.com', 'score': 100})
# Avoid: Multiple keys
r.set('user:1:name', 'Alice')
r.set('user:1:email', 'alice@example.com')
r.set('user:1:score', 100)