ChasquiMQ · Redis 8.6 · Rust + Node + Python
Rust-native engine. Native Node.js and Python bindings. Built on Redis Streams with MessagePack on the wire and pipelined acks — the things naive queues leave on the table.
The engine is the same Rust binary in every package. Install, enqueue a job, run a worker.
All three quickstarts assume Redis 8.6 running on 127.0.0.1:6379:
```sh
docker run -d --name chasquimq-redis -p 6379:6379 redis:8.6
```

Requires Node.js 18+. In a fresh directory, save the snippet as worker.ts, then:

```sh
npm init -y && npm pkg set type=module
npm install chasquimq tsx
npx tsx worker.ts
```

```ts
import { Queue, Worker } from "chasquimq"

const connection = { host: "127.0.0.1", port: 6379 }
const queue = new Queue("emails", { connection })

const worker = new Worker(
  "emails",
  async (job) => {
    console.log(`[worker] sending email to ${job.data.to}`)
    return { delivered: true }
  },
  { connection, storeResults: true },
)

const job = await queue.add("welcome", { to: "ada@example.com" })
console.log(`[producer] enqueued job ${job.id}`)

await job.waitForResult({ timeoutMs: 30_000 })
console.log("your first ChasquiMQ job ran end-to-end")

await worker.close()
await queue.close()
```

Requires Python 3.9+. In a fresh directory, save the snippet as worker.py, then:
```sh
python -m venv .venv && source .venv/bin/activate
pip install chasquimq
python worker.py
```

```python
import asyncio
from chasquimq import Queue, Worker, Job

async def send_email(job: Job) -> dict:
    print(f"[worker] sending email to {job.data['to']}")
    return {"delivered": True}

async def main() -> None:
    queue = Queue("emails")
    worker = Worker("emails", send_email, store_results=True)
    asyncio.create_task(worker.run())

    job = await queue.add("welcome", {"to": "ada@example.com"})
    print(f"[producer] enqueued job {job.id}")

    await job.wait_for_result(timeout=30.0)
    print("your first ChasquiMQ job ran end-to-end")

    await worker.close()
    await queue.close()

asyncio.run(main())
```

Requires Rust 1.85+ (edition 2024). Spin up a project, paste the snippet into src/main.rs, then:
```sh
cargo new chasquimq-hello && cd chasquimq-hello
cargo add chasquimq anyhow bytes
cargo add tokio --features macros,rt-multi-thread
cargo add tokio-util --features rt
cargo add serde --features derive
cargo run   # Ctrl+C to stop the consumer loop
```

```rust
use chasquimq::{Producer, Consumer, ProducerConfig, ConsumerConfig};
use serde::{Serialize, Deserialize};
use tokio_util::sync::CancellationToken;

#[derive(Clone, Serialize, Deserialize)]
struct EmailJob { to: String }

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let producer = Producer::<EmailJob>::connect(
        "redis://127.0.0.1:6379",
        ProducerConfig { queue_name: "emails".into(), ..Default::default() },
    ).await?;

    let id = producer.add(EmailJob { to: "ada@example.com".into() }).await?;
    println!("[producer] enqueued job {id}");

    let consumer = Consumer::<EmailJob>::new(
        "redis://127.0.0.1:6379",
        ConsumerConfig { queue_name: "emails".into(), ..Default::default() },
    );

    let cancel = CancellationToken::new();
    let cancel_after_first = cancel.clone();
    consumer.run(move |job| {
        let cancel = cancel_after_first.clone();
        async move {
            println!("[worker] sending email to {}", job.payload.to);
            cancel.cancel();
            Ok(bytes::Bytes::new())
        }
    }, cancel).await?;

    println!("your first ChasquiMQ job ran end-to-end");
    Ok(())
}
```

Need a CLI? cargo install chasquimq-cli gets you chasqui inspect, chasqui watch, and chasqui dlq replay.
Three things justify a new queue: performance, operability, and the option to write your handler in the language your team already ships.
188,775 jobs/s sustained on a single M3 host. MessagePack on the wire. No Lua scripts. No JSON on the hot path. Pipelined XACK and XACKDEL close the round-trip tax that breaks naive Streams consumers.
See the numbers →

Per-job retry budgets. Exponential backoff with jitter. DLQ replay tooling baked in. Repeatable jobs with cron and every patterns — DST-aware, with a MissedFiresPolicy for clock skips.
Job lifecycle →

Native bindings via napi-rs (Node) and PyO3 (Python). The engine is the same Rust binary in every package — no protocol bridge, no sidecar process. Pick your language; pay no performance tax for it.
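The "exponential backoff with jitter" mentioned above is the standard full-jitter pattern. A minimal sketch for illustration only — ChasquiMQ's actual retry policy, parameter names, and defaults live in the library:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter exponential backoff: the ceiling doubles each attempt
    (base * 2**attempt, clamped at `cap`), and the actual delay is drawn
    uniformly from [0, ceiling] so retrying workers don't stampede in
    lockstep after a shared outage."""
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, ceiling)

# Ceilings grow 1 s, 2 s, 4 s, 8 s, ... and clamp at 60 s from attempt 6 on.
delays = [backoff_delay(n) for n in range(8)]
```

The uniform draw (rather than sleeping the full ceiling) is what spreads retries out; a per-job retry budget then bounds how many attempts a job gets before it lands in the DLQ.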
188,775 jobs/s sustained on a single M3 host with Redis 8.6 — measured against the leading Node.js Redis queue on identical hardware. Quiet-host runs reproduce 419k jobs/s on the consumer hot path.
Apple M3 · Redis 8.6.2 (loopback) · queue-add-bulk, 50 jobs · methodology & reproduction →
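To see why batched acks matter at this rate: 188,775 jobs/s leaves roughly 5.3 µs per job, so even a fast loopback round trip per XACK would swamp the budget. A back-of-the-envelope model — the 50 µs round-trip figure below is an assumption for illustration, not a measured number:

```python
def ack_round_trips(jobs: int, batch_size: int) -> int:
    """Round trips needed to ack `jobs` jobs when acks are pipelined in
    groups of `batch_size` (batch_size=1 models naive per-job XACK)."""
    return -(-jobs // batch_size)  # ceiling division

jobs_per_second = 188_775
rtt_us = 50  # assumed loopback round-trip time, microseconds

naive_ms = ack_round_trips(jobs_per_second, 1) * rtt_us / 1000
piped_ms = ack_round_trips(jobs_per_second, 100) * rtt_us / 1000

# Naive acking spends ~9,439 ms of round-trip time per second of jobs —
# it cannot sustain the rate — while 100-deep pipelining needs ~94 ms.
```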
Four kinds of documentation, four jobs they do.
Producers XADD onto a Redis Stream. Delayed jobs sit in a sorted set until a promoter moves them. Consumers XREADGROUP, dispatch to your handler, then XACKDEL in a single batched round trip. Failures retry with backoff; exhausted ones land in the DLQ.
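The delayed-job leg of that flow can be modeled in a few lines. This is a toy in-memory stand-in — a heap plays the role of zset:emails:z and a plain list the role of stream:emails; a real promoter would run sorted-set reads and XADD against Redis, and the names here are illustrative:

```python
import heapq

def promote_due(delayed: list, stream: list, now: float) -> int:
    """Move every job whose due time (its sorted-set score) has passed
    from the delay structure onto the stream. Returns the count moved."""
    moved = 0
    while delayed and delayed[0][0] <= now:
        _, job_id = heapq.heappop(delayed)  # stands in for popping the min score
        stream.append(job_id)               # stands in for XADD
        moved += 1
    return moved

delayed, stream = [], []
heapq.heappush(delayed, (10.0, "job-a"))  # stands in for ZADD, score = due time
heapq.heappush(delayed, (99.0, "job-b"))

promote_due(delayed, stream, now=50.0)  # only job-a is due yet
# job-b stays parked in the delay set until the clock passes its score
```

Scoring by due time is what makes promotion cheap: the promoter only ever inspects the smallest score, so a tick with nothing due costs one comparison.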
```
Producer ── XADD ────────────────────────────────────▶ stream:emails ──▶ XREADGROUP ──▶ Consumer
    │                                                        ▲                              │
    └─ delayed: ZADD ─▶ zset:emails:z ─▶ promoter ─▶ XADD ───┘                              ▼
                                                                                      handler(job)
                                                                                            │
                                                                              success ◀────┴────▶ failure
                                                                                 │                   │
                                                                              XACKDEL           retry / DLQ
```

Read the five-minute getting-started. Hello-world job in your terminal, running through Redis, before your coffee cools.
Get started →