Migorithm/ipc-rs
Rust Shm-Link: Mmap Data Producer & Multi-Reader Reporter

A low-latency IPC (Inter-Process Communication) demonstration using POSIX Shared Memory and Memory Mapping with multi-reader support in Rust.

Overview

This project consists of two decoupled Rust binaries that communicate via a POSIX shared memory segment created with shm_open.

writer (Producer): Simulates a high-frequency engine. It writes structured data into a pre-allocated memory-mapped file and atomically updates a write pointer.

reporter (Consumer): Acts as a sidecar process. Multiple reporters can run concurrently, each identified by a unique reader ID (0–7). Each reporter polls the shared write pointer for updates and persists entries to its own output CSV file. Reporters persist their read position in shared memory for crash recovery and late-join catch-up.

Requirements

Functional Requirements

Zero-Copy Communication: The writer and reporters access the same physical RAM pages to avoid expensive syscalls or socket overhead.

Multi-Reader Support: Up to 8 concurrent readers can independently consume entries. Readers poll the atomic write pointer via busy-waiting.

Persistence Isolation: The writer is never blocked by disk I/O. Each reporter handles its own file-system operations independently.

Crash Resilience: Each reporter's read position is persisted in shared memory (read_ptrs[reader_id]). On restart, a reporter resumes from where it left off, catching up on missed entries.

State Tracking: The system uses a shared header with a write pointer and per-reader read pointers so each reporter knows exactly which entries have been consumed.

Technical Stack

  • Language: Rust (Edition 2024).
  • Platform: Linux/macOS (using portable POSIX shared memory APIs).
  • Crates:
    • memmap2: For cross-platform memory mapping.
    • nix: Safe Rust wrappers for POSIX shared memory (shm_open, ftruncate, fstat, shm_unlink).
    • signal-hook: Safe signal handler registration (SIGINT, SIGTERM) for graceful shutdown.

Data Layout (Memory Map)

The shared memory segment (named /rpt_sys_shm via shm_open) follows a fixed-size binary layout:

| Offset | Type | Description |
|---|---|---|
| 0x00 | `AtomicU64` | Write pointer: index of the next write slot |
| 0x08 | `[AtomicU64; 8]` | Per-reader read pointers (persisted positions) |
| after header | `[Entry; 1024]` | Circular buffer of fixed-size data structures |

Note: Proper memory alignment and #[repr(C)] structs are mandatory to ensure both binaries interpret the raw bytes identically.

Components

Binary A: The Writer

  1. Creates/Opens a POSIX shared memory segment via shm_open.
  2. Sets the file size using ftruncate.
  3. Maps the file into its address space.
  4. Writes data to the next available slot in the buffer.
  5. Atomically updates the "Write Pointer" with Release ordering.

Binary B: The Reporter

  1. Accepts a reader_id (0–7) as a CLI argument.
  2. Opens the existing shared memory file in read-write mode (needs write for read_ptr persistence).
  3. On startup, loads persisted read_ptrs[reader_id] for crash recovery and catches up to the current write_ptr.
  4. Polls the write pointer via busy-waiting (1ms sleep between checks).
  5. Upon detecting new entries, copies the data, appends to output_{reader_id}.csv, and atomically stores the updated read position.

Usage

```sh
# Terminal 1: Start the writer
cargo run --bin writer

# Terminal 2: Start reporter 0
cargo run --bin reporter -- 0

# Terminal 3: Start reporter 1
cargo run --bin reporter -- 1
```

Both output_0.csv and output_1.csv will receive all entries independently. If a reporter is killed and restarted, it catches up from its persisted read position.

Communication Protocol

  1. Writer writes an entry to the circular buffer slot.
  2. Writer atomically stores write_ptr + 1 with Release ordering.
  3. Each reporter polls write_ptr with Acquire ordering (1ms busy-wait interval).
  4. Each reporter reads new entries from its own read_ptr up to write_ptr.
  5. Each reporter atomically stores its updated read_ptr in shared memory for persistence.
