🎯 Vector Database
AI vector database for embeddings and similarity search. 30× faster than Pinecone.
API Endpoint
https://api.44s.io:9002 (HTTP REST API)
Quick Start
Create a Collection
```bash
curl -X POST https://api.44s.io:9002/collections \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "name": "documents",
    "dimension": 1536,
    "metric": "cosine"
  }'

# Response
{
  "name": "documents",
  "dimension": 1536,
  "metric": "cosine"
}
```
Upsert Vectors
```bash
curl -X POST https://api.44s.io:9002/collections/documents/vectors \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "vectors": [
      {
        "id": "doc-1",
        "values": [0.1, 0.2, 0.3, ...],
        "metadata": {"title": "Introduction to AI", "category": "tech"}
      },
      {
        "id": "doc-2",
        "values": [0.4, 0.5, 0.6, ...],
        "metadata": {"title": "Machine Learning Basics", "category": "tech"}
      }
    ]
  }'

# Response
{"upserted_count": 2}
```
Query Similar Vectors
```bash
curl -X POST https://api.44s.io:9002/collections/documents/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: 44s_your_api_key" \
  -d '{
    "vector": [0.15, 0.25, 0.35, ...],
    "top_k": 5,
    "include_metadata": true
  }'

# Response
{
  "matches": [
    {"id": "doc-1", "score": 0.95, "metadata": {"title": "Introduction to AI"}},
    {"id": "doc-2", "score": 0.87, "metadata": {"title": "Machine Learning Basics"}}
  ],
  "namespace": "default"
}
```
Python Client
```python
import requests

BASE_URL = "https://api.44s.io:9002"
HEADERS = {
    "Content-Type": "application/json",
    "X-API-Key": "44s_your_api_key",
}

# Create collection
requests.post(f"{BASE_URL}/collections", headers=HEADERS, json={
    "name": "embeddings",
    "dimension": 1536,
    "metric": "cosine",
})

# Upsert with OpenAI embeddings
from openai import OpenAI

client = OpenAI()
texts = ["Hello world", "How are you?", "Machine learning is cool"]
embeddings = client.embeddings.create(input=texts, model="text-embedding-3-small")
vectors = [
    {"id": f"text-{i}", "values": e.embedding, "metadata": {"text": texts[i]}}
    for i, e in enumerate(embeddings.data)
]
requests.post(f"{BASE_URL}/collections/embeddings/vectors", headers=HEADERS, json={
    "vectors": vectors,
})

# Query
query_embedding = client.embeddings.create(
    input=["What is ML?"],
    model="text-embedding-3-small",
).data[0].embedding
response = requests.post(f"{BASE_URL}/collections/embeddings/query", headers=HEADERS, json={
    "vector": query_embedding,
    "top_k": 3,
    "include_metadata": True,
})
for match in response.json()["matches"]:
    print(f"{match['score']:.3f}: {match['metadata']['text']}")
```
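For larger datasets, upserts are typically split into batches rather than sent in one request. A minimal sketch of the client-side batching (the batch size of 100 is an illustrative choice, not a documented server limit; each payload would be POSTed to `/collections/{name}/vectors` exactly as above):

```python
def batch_payloads(vectors, batch_size=100):
    """Split a vector list into upsert payloads of at most batch_size vectors."""
    payloads = []
    for start in range(0, len(vectors), batch_size):
        payloads.append({"vectors": vectors[start:start + batch_size]})
    return payloads

# Example: 250 vectors -> payloads of 100, 100, and 50 vectors
demo = [{"id": f"v-{i}", "values": [0.0]} for i in range(250)]
print([len(p["vectors"]) for p in batch_payloads(demo)])  # [100, 100, 50]
```

Batching keeps individual request bodies small and lets a failed batch be retried without resending everything.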
API Reference
Collections
GET /collections - List all collections
POST /collections - Create a new collection
```json
# Request body
{
  "name": "string",     // Collection name
  "dimension": 1536,    // Vector dimension (must match your embeddings)
  "metric": "cosine"    // cosine | euclidean | dotproduct
}
```
GET /collections/{name} - Get collection info
DELETE /collections/{name} - Delete a collection
Vectors
POST /collections/{name}/vectors - Upsert vectors
```json
# Request body
{
  "vectors": [
    {
      "id": "string",              // Unique ID
      "values": [0.1, 0.2, ...],   // Vector values
      "metadata": {}               // Optional metadata
    }
  ],
  "namespace": "default"           // Optional namespace
}
```
POST /collections/{name}/query - Query similar vectors
```json
# Request body
{
  "vector": [0.1, 0.2, ...],   // Query vector
  "top_k": 10,                 // Number of results
  "include_values": false,     // Return vector values?
  "include_metadata": true,    // Return metadata?
  "filter": {}                 // Metadata filter (optional)
}
```
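The filter syntax is not specified above; a flat equality match on metadata fields (as in comparable Pinecone-style APIs) is one plausible shape. Treat this as an assumption to verify against the actual API:

```json
# Query restricted to vectors whose metadata has category == "tech"
# (filter syntax is an assumption, not documented above)
{
  "vector": [0.1, 0.2, ...],
  "top_k": 10,
  "include_metadata": true,
  "filter": {"category": "tech"}
}
```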
POST /collections/{name}/fetch - Fetch vectors by ID
POST /collections/{name}/delete - Delete vectors
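Request bodies for fetch and delete are not shown above. An id-list shape, mirroring the upsert body's `id` and `namespace` fields, is a reasonable assumption to check against the actual API:

```json
# POST /collections/{name}/fetch (assumed request body)
{"ids": ["doc-1", "doc-2"], "namespace": "default"}

# POST /collections/{name}/delete (assumed request body)
{"ids": ["doc-1"], "namespace": "default"}
```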
Distance Metrics
| Metric | Use Case | Score Range |
|---|---|---|
| cosine | Text embeddings, semantic similarity | 0 to 1 (higher = more similar) |
| euclidean | Image features, spatial data | 0 to ∞ (lower = more similar) |
| dotproduct | Normalized vectors, recommendations | -∞ to ∞ (higher = more similar) |
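The three metrics are related by a few lines of arithmetic; this sketch shows the math behind the scores, not the server's implementation (for unit-length vectors, cosine and dot product coincide):

```python
import math

def dot(a, b):
    # Dot product: higher means more similar for normalized vectors.
    return sum(x * y for x, y in zip(a, b))

def euclidean(a, b):
    # Straight-line distance: lower means more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Angle-based similarity: dot product of the two unit-normalized vectors.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [1.0, 0.0], [1.0, 1.0]
print(round(cosine(a, b), 4))     # 0.7071
print(round(euclidean(a, b), 4))  # 1.0
print(round(dot(a, b), 4))        # 1.0
```

Because cosine ignores vector length, it is the usual choice for text embeddings, where magnitude carries little meaning.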
Common Embedding Dimensions
| Model | Dimension |
|---|---|
| OpenAI text-embedding-3-small | 1536 |
| OpenAI text-embedding-3-large | 3072 |
| Cohere embed-english-v3.0 | 1024 |
| Sentence Transformers all-MiniLM-L6 | 384 |
| CLIP ViT-B/32 | 512 |
Performance
| Metric | 44s Vector | Pinecone |
|---|---|---|
| Query latency (1M vectors) | <1ms | ~10-50ms |
| Throughput | 100K+ queries/sec | ~1K queries/sec |
| Index build time | Instant (lock-free) | Minutes |