⚡ Cache

Redis-compatible in-memory cache. Up to 450× the throughput of Redis under high contention.

Connection

api.44s.io:6379

Protocol: Redis (RESP)
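
Because the endpoint speaks standard RESP, any Redis client library should work unchanged. As a rough illustration of what travels over the wire, here is a minimal sketch that frames AUTH and PING by hand over a raw socket using only the Python standard library; real applications should use a client library as shown in the quick start below.

import socket

def resp_command(*parts: str) -> bytes:
    # Encode a command as a RESP array of bulk strings,
    # e.g. ('PING',) -> b'*1\r\n$4\r\nPING\r\n'
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

with socket.create_connection(("api.44s.io", 6379)) as s:
    s.sendall(resp_command("AUTH", "44s_your_api_key"))
    print(s.recv(1024))  # typically b'+OK\r\n'
    s.sendall(resp_command("PING"))
    print(s.recv(1024))  # typically b'+PONG\r\n'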

Quick Start

Using redis-cli

redis-cli -h api.44s.io -p 6379

# Authenticate with your API key
> AUTH 44s_your_api_key
OK

# Basic operations
> SET user:1 '{"name":"Alice","score":100}'
OK
> GET user:1
"{\"name\":\"Alice\",\"score\":100}"
> INCR counter
(integer) 1
> EXPIRE user:1 3600
(integer) 1
> TTL user:1
(integer) 3599

Python

import redis

# Connect
r = redis.Redis(
    host='api.44s.io',
    port=6379,
    password='44s_your_api_key',
    decode_responses=True
)

# String operations
r.set('user:1', 'Alice')
print(r.get('user:1'))  # 'Alice'

# Hash operations
r.hset('user:2', mapping={'name': 'Bob', 'score': 150})
print(r.hgetall('user:2'))  # {'name': 'Bob', 'score': '150'}

# Lists
r.rpush('queue', 'task1', 'task2', 'task3')
print(r.lpop('queue'))  # 'task1'

# Sets
r.sadd('tags', 'python', 'redis', 'fast')
print(r.smembers('tags'))  # {'python', 'redis', 'fast'}

Node.js

const Redis = require('ioredis');

const redis = new Redis({
  host: 'api.44s.io',
  port: 6379,
  password: '44s_your_api_key'
});

async function main() {
  // Basic operations
  await redis.set('foo', 'bar');
  const value = await redis.get('foo');
  console.log(value); // 'bar'

  // Pipelining for bulk operations
  const pipeline = redis.pipeline();
  for (let i = 0; i < 1000; i++) {
    pipeline.set(`key:${i}`, `value:${i}`);
  }
  await pipeline.exec();

  // Pub/Sub: a subscribed connection can't issue other commands,
  // so use a duplicate connection for the subscriber
  const sub = redis.duplicate();
  sub.on('message', (channel, message) => {
    console.log(`Received: ${message}`);
  });
  await sub.subscribe('notifications');

  await redis.publish('notifications', 'Hello!');
}

main().catch(console.error);

Go

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    
    rdb := redis.NewClient(&redis.Options{
        Addr:     "api.44s.io:6379",
        Password: "44s_your_api_key",
        DB:       0,
    })
    
    // SET and GET
    rdb.Set(ctx, "key", "value", 0)
    val, _ := rdb.Get(ctx, "key").Result()
    fmt.Println(val) // "value"
    
    // Atomic increment
    rdb.Incr(ctx, "counter")
}

Supported Commands

String Commands

Command                             Description
GET key                             Get the value of a key
SET key value [EX s] [PX ms]        Set a key with optional expiration
MGET key [key ...]                  Get multiple keys
MSET key value [key value ...]      Set multiple keys
INCR key                            Increment integer value
DECR key                            Decrement integer value
INCRBY key amount                   Increment by specific amount
APPEND key value                    Append to string
STRLEN key                          Get string length
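
As a short sketch reusing the r connection from the Python quick start (key names are illustrative), the multi-key and counter commands map directly onto client methods:

# Set and fetch several keys in one call each
r.mset({'a': '1', 'b': '2', 'c': '3'})
print(r.mget('a', 'b', 'c'))        # ['1', '2', '3']

# Counters and string helpers
r.set('visits', 0)
r.incrby('visits', 5)               # 5
r.append('greeting', 'Hello')
r.append('greeting', ', world')
print(r.strlen('greeting'))         # 12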

Hash Commands

Command                             Description
HGET key field                      Get hash field
HSET key field value                Set hash field
HMGET key field [field ...]         Get multiple fields
HMSET key field value [...]         Set multiple fields
HGETALL key                         Get all fields and values
HINCRBY key field amount            Increment field value
HDEL key field [field ...]          Delete fields
HLEN key                            Get number of fields
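
Continuing the Python example from the quick start (which stored a hash at user:2), a hash can be read field by field or incremented atomically; a minimal sketch:

# Read selected fields instead of the whole hash
print(r.hmget('user:2', 'name', 'score'))   # ['Bob', '150']

# Atomically bump a numeric field
r.hincrby('user:2', 'score', 10)            # 160
print(r.hlen('user:2'))                     # 2
r.hdel('user:2', 'score')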

List Commands

Command                             Description
LPUSH key value [value ...]         Push to head
RPUSH key value [value ...]         Push to tail
LPOP key                            Pop from head
RPOP key                            Pop from tail
LRANGE key start stop               Get range of elements
LLEN key                            Get list length
LINDEX key index                    Get element by index
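
A list makes a simple FIFO queue: push to the tail with RPUSH and pop from the head with LPOP to preserve insertion order. A short sketch with the Python client above; the 'jobs' key and job names are illustrative:

# Producer side: append jobs to the tail
r.rpush('jobs', 'resize:42', 'resize:43')

# Consumer side: inspect, then drain from the head
print(r.llen('jobs'))               # 2
print(r.lrange('jobs', 0, -1))      # ['resize:42', 'resize:43']
while (job := r.lpop('jobs')) is not None:
    print('processing', job)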

Set Commands

Command                             Description
SADD key member [member ...]        Add members
SREM key member [member ...]        Remove members
SMEMBERS key                        Get all members
SISMEMBER key member                Check membership
SCARD key                           Get set size
SUNION key [key ...]                Union of sets
SINTER key [key ...]                Intersection of sets
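
Set algebra is handy for tag-style queries: SINTER finds items carrying every tag, SUNION merges tag groups. A sketch with the Python client above; the tags:post:* keys are illustrative:

r.sadd('tags:post:1', 'python', 'redis')
r.sadd('tags:post:2', 'redis', 'go')

print(r.sismember('tags:post:1', 'python'))     # True
print(r.sinter('tags:post:1', 'tags:post:2'))   # {'redis'}
print(r.sunion('tags:post:1', 'tags:post:2'))   # {'python', 'redis', 'go'}
print(r.scard('tags:post:1'))                   # 2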

Key Commands

Command                             Description
DEL key [key ...]                   Delete keys
EXISTS key [key ...]                Check if keys exist
EXPIRE key seconds                  Set expiration
TTL key                             Get time to live
KEYS pattern                        Find keys by pattern
SCAN cursor [MATCH pattern]         Iterate keys
TYPE key                            Get key type
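
KEYS walks the whole keyspace in one blocking call, so outside of debugging prefer cursor-based SCAN; the Python client used above exposes scan_iter, which wraps the cursor loop. A minimal sketch:

# Iterate keys matching a pattern without a single blocking call
for key in r.scan_iter(match='user:*', count=100):
    print(key, r.type(key), r.ttl(key))

# Housekeeping on individual keys
if r.exists('user:1'):
    r.expire('user:1', 3600)
print(r.ttl('user:1'))
r.delete('user:2')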

Performance

Under high contention, 44s Cache's lock-free architecture delivers up to 450× the throughput of Redis.

Metric                   44s Cache            Redis
Throughput (96 cores)    50M+ ops/sec         ~100K ops/sec
P99 Latency              <10 μs               ~100 μs
Scaling                  Linear with cores    Single-threaded

See benchmarks for methodology.
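
Results depend heavily on client count, pipelining depth, and network latency, so it is worth measuring from your own environment. A rough, unscientific client-side check (single connection, pipelined SETs, reusing the r connection from the Python quick start; the bench: prefix is illustrative, and one network-bound client will report far less than the server-side figures above):

import time

N = 100_000
pipe = r.pipeline(transaction=False)
for i in range(N):
    pipe.set(f'bench:{i}', i)

start = time.perf_counter()
pipe.execute()
elapsed = time.perf_counter() - start
print(f'{N / elapsed:,.0f} ops/sec from this client')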

Best Practices

Use Pipelining

Batch multiple commands to reduce round-trips:

# Python
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute()  # Single round-trip!

Set Appropriate TTLs

Always set expiration on keys to prevent memory bloat:

r.set('session:abc123', data, ex=3600)  # Expires in 1 hour
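
For session-style data you can also refresh the TTL on each access, so active sessions stay warm while idle ones age out. A sketch under that assumption; get_session and the session: key prefix are illustrative:

def get_session(session_id: str, ttl: int = 3600):
    # Sliding expiration: every read pushes the expiry forward
    key = f'session:{session_id}'
    data = r.get(key)
    if data is not None:
        r.expire(key, ttl)
    return data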

Use Hashes for Objects

Store related data in hashes instead of individual keys:

# Good: Single hash
r.hset('user:1', mapping={'name': 'Alice', 'email': 'alice@example.com', 'score': 100})

# Avoid: Multiple keys
r.set('user:1:name', 'Alice')
r.set('user:1:email', 'alice@example.com')
r.set('user:1:score', 100)
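
With the hash layout you can then read or update a single field without rewriting the rest of the object; a short sketch continuing the user:1 example:

# Touch one field at a time instead of the whole object
r.hincrby('user:1', 'score', 25)
print(r.hmget('user:1', 'name', 'score'))   # ['Alice', '125']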