Getting started
This is a five-minute walkthrough. By the end, you will have Redis running, a queue producing one job, and a worker consuming it.
The path here uses the Node binding because it has the widest reach. Python and Rust quickstarts are inline below — pick whichever you ship in production.
Prerequisites
- Docker, or Redis 8.6+ already on your machine.
- One of: Node.js 18+, Python 3.9+, or Rust 1.85+ (2024 edition).
1. Run Redis
ChasquiMQ targets Redis 8.6+ for the modern Streams feature set (XACKDEL, idempotent XADD, idle-pending reads). Older Redis is not supported.
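If Redis already runs outside Docker, you can check the floor yourself: `redis-cli INFO server` prints a `redis_version` line, and a comparison like the one below tells you whether it qualifies. This is a stdlib-only sketch; `meets_redis_floor` is a hypothetical helper, not a ChasquiMQ API:

```python
# Hypothetical helper (not part of ChasquiMQ): parse the version string
# that `redis-cli INFO server` reports as redis_version and compare it
# against the 8.6 floor this quickstart requires.
def meets_redis_floor(version: str, floor: tuple[int, int] = (8, 6)) -> bool:
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= floor

print(meets_redis_floor("8.6.0"))   # True
print(meets_redis_floor("7.4.2"))   # False: pre-8.6 lacks the features above
```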
```sh
docker run -d --name chasquimq-redis -p 6379:6379 redis:8.6
```

Verify it answers:
```sh
docker exec chasquimq-redis redis-cli PING
# PONG
```

2. Install one binding
For Node, in a fresh directory:
```sh
npm init -y && npm pkg set type=module
npm install chasquimq tsx
```

Prebuilt binaries ship for darwin / linux / win32 on arm64 + x64. There is no install-time compilation. `tsx` runs the TypeScript snippet below without a build step; `type=module` enables top-level `await`.
For Python, in a fresh directory:
```sh
python -m venv .venv && source .venv/bin/activate
pip install chasquimq
```

abi3 wheels ship for Python 3.9+ on Linux (x86_64 + aarch64), macOS (x86_64 + aarch64), and Windows (x86_64).
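If pip reports that no matching wheel was found and falls back to building from source, your platform is likely outside that matrix. A stdlib-only way to see what your interpreter reports (illustrative; none of these calls are ChasquiMQ-specific):

```python
import platform
import sysconfig

# The three things pip matches wheel tags against: interpreter version,
# platform string, and machine architecture.
print(platform.python_version())   # e.g. "3.12.1"
print(sysconfig.get_platform())    # e.g. "linux-x86_64" or "macosx-11.0-arm64"
print(platform.machine())          # e.g. "x86_64" or "arm64"
```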
For Rust, in a fresh Cargo project:
```sh
cargo new chasquimq-hello && cd chasquimq-hello
cargo add chasquimq anyhow bytes
cargo add tokio --features macros,rt-multi-thread
cargo add tokio-util --features rt
cargo add serde --features derive
```

3. Produce one job, consume one job
Save the snippet for your language as `hello.ts`, `hello.py`, or `src/main.rs`, then run it.
```ts
import { Queue, Worker } from "chasquimq";

const connection = { host: "127.0.0.1", port: 6379 };

const queue = new Queue("emails", { connection });

const worker = new Worker(
  "emails",
  async (job) => {
    console.log(`[worker] sending email to ${job.data.to}`);
    return { delivered: true };
  },
  { connection, storeResults: true },
);

const job = await queue.add("welcome", { to: "ada@example.com" });
await job.waitForResult({ timeoutMs: 30_000 });

await worker.close();
await queue.close();

console.log("🎉 your first ChasquiMQ job ran end-to-end");
```

Run it:
```sh
npx tsx hello.ts
# [worker] sending email to ada@example.com
# 🎉 your first ChasquiMQ job ran end-to-end
```

```python
import asyncio

from chasquimq import Queue, Worker


async def handler(job):
    print(f"[worker] sending email to {job.data['to']}")
    return {"delivered": True}


async def main() -> None:
    queue = Queue("emails")
    worker = Worker("emails", handler, store_results=True)
    asyncio.create_task(worker.run())

    job = await queue.add("welcome", {"to": "ada@example.com"})
    await job.wait_for_result(timeout=30.0)

    await worker.close()
    await queue.close()

    print("🎉 your first ChasquiMQ job ran end-to-end")


asyncio.run(main())
```

Run it:
```sh
python hello.py
# [worker] sending email to ada@example.com
# 🎉 your first ChasquiMQ job ran end-to-end
```

For Rust, run it with `cargo run` from inside the project. The snippet goes in `src/main.rs`:
```rust
use chasquimq::{Consumer, ConsumerConfig, Producer, ProducerConfig};
use serde::{Deserialize, Serialize};
use tokio_util::sync::CancellationToken;

#[derive(Clone, Serialize, Deserialize)]
struct EmailJob {
    to: String,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let producer = Producer::<EmailJob>::connect(
        "redis://127.0.0.1:6379",
        ProducerConfig { queue_name: "emails".into(), ..Default::default() },
    )
    .await?;

    producer.add(EmailJob { to: "ada@example.com".into() }).await?;

    let consumer = Consumer::<EmailJob>::new(
        "redis://127.0.0.1:6379",
        ConsumerConfig { queue_name: "emails".into(), concurrency: 8, ..Default::default() },
    );

    // Cancel the consumer after one second so the example exits.
    let cancel = CancellationToken::new();
    let cancel_clone = cancel.clone();
    tokio::spawn(async move {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        cancel_clone.cancel();
    });

    consumer
        .run(
            |job| async move {
                println!("[worker] sending email to {}", job.payload.to);
                Ok(bytes::Bytes::new())
            },
            cancel,
        )
        .await?;

    println!("🎉 your first ChasquiMQ job ran end-to-end");

    Ok(())
}
```

That is your first job round-tripping through Redis Streams.
What just happened
- `Queue.add(name, data)` XADDs a MessagePack-encoded entry onto a Redis Stream named `{chasqui:emails}`.
- The `Worker` reads the entry with `XREADGROUP`, deserializes the payload, runs your handler, and acks via `XACKDEL` (atomic ack-and-delete in one round trip).
- No JSON. No `LPUSH`/`BRPOP`. No per-job round trips when you batch.
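The ack path above can be sketched with a toy in-memory model. This is not the real wire protocol, just the shape: delivering an entry moves it into the consumer group's pending entries list (PEL), and `XACKDEL` both acks and deletes in one operation, where older Redis needed a separate `XACK` plus `XDEL`:

```python
class ToyStream:
    """Toy in-memory stand-in for a Redis Stream plus one consumer group."""

    def __init__(self):
        self.entries = {}     # id -> payload: the stream itself
        self.pending = set()  # ids delivered but not yet acked (the PEL)

    def xadd(self, entry_id, payload):
        self.entries[entry_id] = payload

    def xreadgroup(self, entry_id):
        # Delivery moves the entry into the pending entries list.
        self.pending.add(entry_id)
        return self.entries[entry_id]

    def xackdel(self, entry_id):
        # One operation: ack (drop from the PEL) and delete (drop from
        # the stream), instead of separate XACK and XDEL round trips.
        self.pending.discard(entry_id)
        self.entries.pop(entry_id, None)


s = ToyStream()
s.xadd("1-0", {"to": "ada@example.com"})
job = s.xreadgroup("1-0")   # handler runs here
s.xackdel("1-0")
assert not s.pending and not s.entries  # nothing left behind
```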
Next steps
- Your first job with retries — handler errors, the retry path, and `UnrecoverableError`.
- Delayed and repeatable jobs — schedule a job for later or on a cron.
- Inspecting with the CLI — `chasqui inspect`, `chasqui watch`, and the DLQ tools.
- For the mental model: Thinking in ChasquiMQ.