Advanced Zenoh Tutorial -- Part V

This six-part webinar series is designed for developers who want to master Zenoh's full potential. We'll cover everything from core concepts to advanced programming techniques, performance tuning, and deployment strategies. The sessions use Rust as the primary language, with some complementary examples in Python. Whether you're building robotics systems, telemetry pipelines, or IoT infrastructures, this series will help you unlock the true power of Zenoh.

Angelo Corsaro

July 02, 2025

Transcript

  1. Zenoh Swiss Army Knife (ZSAK): https://github.com/kydos/zsak

    We'll make extensive use of ZSAK, an experimental command-line tool designed for learning and experimenting with Zenoh.
  2. Get It. Build It. Run It.

    $ git clone git@github.com:kydos/zsak.git
    $ cd zsak
    $ cargo build --release
    $ ./target/release/zenoh -h
  3. Interceptor Framework

    Zenoh is equipped with an interceptor framework that makes it possible to intercept and operate on protocol messages on ingress and egress. At the current stage, the available interceptors are defined at compile time and activated via configuration.
    [Diagram: interceptors on the ingress and egress paths between Zenoh and the network interfaces (Eth, WiFi, Serial)]
  4. Downsampling Interceptors

    One of the available interceptors performs flow control on a per-interface basis (ingress/egress):

    downsampling: [
      {
        "id": "en0FlowControl",
        interfaces: [ "en0" ],
        flow: "egress",
        rules: [
          { key_expr: "demo/*", freq: 1.0 },
        ],
      },
    ],

    [Diagram: the downsampling interceptor on the egress path between Zenoh and the network interfaces (Eth, WiFi)]
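    As a minimal sketch of how this is activated (the file name is illustrative), the snippet above is placed in a router configuration file and the router is started with that configuration:

    $ zenohd -c downsampling.json5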
  5. Access Control Interceptor

    The Access Control Interceptor makes it possible to control the flow of Zenoh messages based on interfaces and certificate names.
    [Diagram: two ACL-configured routers, with a publisher on sensor/temp, subscribers on sensor/**, and Put messages flowing between them]
  6. Access Control Interceptor

    The Access Control Interceptor makes it possible to control the flow of Zenoh messages based on interfaces and certificate names.
    [Diagram: the same topology, with the ACL-configured routers filtering the flow of Put messages]
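    As a rough sketch only (this block is not from the deck, and the exact schema varies across Zenoh versions, so treat the field names as assumptions and check the DEFAULT_CONFIG.json5 shipped with your release), an access-control section of the router configuration can look roughly like this:

    access_control: {
      enabled: true,
      default_permission: "deny",
      rules: [
        {
          id: "allow-sensor-put",            // illustrative rule id
          permission: "allow",
          messages: [ "put" ],
          flows: [ "ingress", "egress" ],
          key_exprs: [ "sensor/**" ],
        },
      ],
      subjects: [
        { id: "trusted-nodes", cert_common_names: [ "sensor-node" ] },  // illustrative subject
      ],
      policies: [
        { rules: [ "allow-sensor-put" ], subjects: [ "trusted-nodes" ] },
      ],
    },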
  7. Shared Memory Highlights

    - No topological constraints
    - Distributed memory management
    - Safe mutability
    - Shared-memory providers
    - Custom layout
    - Minimal system and network overhead for both normal and fail-recovery operations
  8. Shared Memory Architecture

    Zenoh Shared Memory (SHM) is modular with respect to providers. The default provider backend is POSIX shared memory, but custom providers can be implemented to deal with GPUs, specific SHM regions dedicated to CPU/MCU communication, etc.
    [Diagram: SHM provider backends (POSIX, GPU, ...) plugged into the SHM provider]
  9. POSIX: Default Provider Backend

    Zenoh's default SHM provider backend is POSIX. For this backend, which is the one most of you will start with, the details of the provider backend are largely shielded from you.
  10. Creating a POSIX SHM Provider

    The following code creates a provider with a total of 4096 bytes available for allocating shared-memory buffers. The default alignment (1 byte) is used here, as nothing else is specified:

    let provider = ShmProviderBuilder::default_backend(4096).wait().unwrap();

    A specific alignment can be provided when creating the provider:

    let provider = ShmProviderBuilder::default_backend(4096)
        .with_alignment(alignment)
        .wait()
        .unwrap();
  11. Allocating SHM Buffers

    Allocating a buffer is extremely straightforward:

    let buf = provider.alloc(1024).wait().unwrap();

    You can also allocate with a specific alignment:

    let a = AllocAlignment::for_type::<MyType>();
    let buf = provider.alloc(1024).with_alignment(a).wait().unwrap();
  12. GC & Defragmentation While Zenoh tracks the liveliness of allocated

    Shared Memory Bu ff er, it does not perform garbage collection (GC) nor defragmentation implicitly In other terms, it gives control over garbage collection and defragmentation to the users. This choice was taken to ensure that you have maximum control
  13. GC & Defragmentation You can manually control when GC and

    defragmentation happens as follows: let freed = provider.garbage_collect(); let compacted = provider.defragment(); For instance you could run it periodically as part of a separate thread/task Otherwise you can run it as part of an allocation, if needed to fi nd more space. provider.alloc(1024).with_policy::<Defragment<GarbageCollect>>();
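    As a minimal sketch (not from the deck) of the periodic variant, driving the two calls from a dedicated loop; the 1-second period is arbitrary:

    loop {
        // Reclaim buffers whose owners have dropped them, then compact the pool.
        let freed = provider.garbage_collect();
        let compacted = provider.defragment();
        println!("garbage_collect() -> {freed}, defragment() -> {compacted}");
        std::thread::sleep(std::time::Duration::from_secs(1));
    }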
  14. Pub/Sub/Query with Shared Memory Buffers

    A shared-memory buffer can be used, with no distinction from regular buffers, with any Zenoh API: put, get, reply, query attachment, etc. Under the hood, Zenoh decides how the buffer should be communicated to the relevant parties. Zenoh understands when the buffer can be shared and when it needs to be sent over the network or to a process that is not in the same memory domain. Receiving applications do not need to know they are dealing with a shared-memory buffer, unless this is important to them.
  15. Publishing a SHM Buffer

    As shown in the example, the Zenoh API transparently deals with SHM buffers.

    use zenoh::Wait;
    use zenoh::shm::{ShmProviderBuilder, ZShm};

    fn main() {
        let z = zenoh::open(zenoh::Config::default()).wait().unwrap();

        let provider = ShmProviderBuilder::default_backend(64 * 1024).wait().unwrap();

        // copy_from_slice requires matching lengths, so allocate exactly the message size.
        let msg = b"Hello from Zenoh's Shared Memory!";
        let mut buf = provider.alloc(msg.len()).wait().unwrap();
        let data_mut: &mut [u8] = &mut buf;
        data_mut.copy_from_slice(msg);

        // Make the buf immutable so that we can (shallow) clone it.
        let data: ZShm = buf.into();

        loop {
            z.put("zenoh/shm/buffer", data.clone()).wait().unwrap();
            std::thread::sleep(std::time::Duration::from_secs(1));
        }
    }
  16. Subscribing

    The subscriber does not need to know it is dealing with an SHM buffer... but if that matters, it can find out (via as_shm()).

    use zenoh::Wait;

    fn main() {
        let z = zenoh::open(zenoh::Config::default()).wait().unwrap();
        let sub = z.declare_subscriber("zenoh/shm/buffer").wait().unwrap();

        while let Ok(s) = sub.recv() {
            let buf = s.payload();
            let is_shm = buf.as_shm().is_some();
            buf.try_to_string()
                .map(|s| println!("Received (SHM: {}): {}", is_shm, s))
                .unwrap_or_else(|_| println!("Received non-string payload"));
        }
    }
  17. Zero (or One) Copy

    For applications that deal with large data, it is highly desirable to avoid or limit the number of copies, and to avoid pushing that data through the network stack when communicating on the same host.

    One copy: trivial to achieve, as you just need to copy or serialize your data into an SHM buffer and "send it" across using Zenoh.

    Zero copy: in this case your data has to live in the SHM buffer from the start. This is a slightly more advanced use case, which in some circumstances requires writing an SHM provider backend to deal with the underlying hardware.
  18. Video Processing

    High-resolution video processing pipelines can massively improve their performance and reduce resource usage by using Zenoh SHM.
  19. Time-Triggered & Polling-Based Communication

    In some application domains, data is periodically polled due to real-time or safety requirements. To ensure high performance and low latency, the data should be available via shared memory. These patterns are relatively easy to implement in Zenoh; let's see how.
  20. Polling Producer / Consumer

    Let's see how we can easily build polling on top of Zenoh shared memory. Once we solve this problem, we'll be able to use it for periodic polling too.
    [Diagram: a producer P and a consumer C sharing an SHM buffer]
  21. SHM Producer/Consumer

    We have a single producer and a single consumer. The buffer will be written by the producer only if it is marked as empty. The buffer will be consumed by the consumer only if it is marked as full.
    [Diagram: a producer P and a consumer C sharing an SHM buffer]
  22. Getting The Clue of It

    We'll use a SharedData type that provides the length (len) of the data available to consume and a buffer with a maximum capacity of 1024 bytes (for this example). The len is an AtomicUsize that we'll use to coordinate access to the buffer: the producer will only be able to produce if len == 0, and the consumer will only be able to consume if len > 0 (a sketch of this coordination follows below).

    pub struct SharedData {
        pub len: AtomicUsize,
        pub data: [u8; 1024],
    }
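    The following is a minimal sketch of that coordination (not from the deck; the produce/consume helper names are mine), using only standard atomics and the raw pointer into the SHM buffer that the next slide shows how to obtain:

    use std::sync::atomic::Ordering;

    /// Producer side: write `msg` (at most 1024 bytes) only if the buffer is empty (len == 0).
    fn produce(shared: *mut SharedData, msg: &[u8]) -> bool {
        unsafe {
            if (*shared).len.load(Ordering::Acquire) != 0 {
                return false; // previous sample not yet consumed
            }
            (*shared).data[..msg.len()].copy_from_slice(msg);
            (*shared).len.store(msg.len(), Ordering::Release); // publish the sample
        }
        true
    }

    /// Consumer side: copy the current sample out, if any, and mark the buffer empty again.
    fn consume(shared: *const SharedData, out: &mut Vec<u8>) -> bool {
        unsafe {
            let n = (*shared).len.load(Ordering::Acquire);
            if n == 0 {
                return false; // nothing to read yet
            }
            out.extend_from_slice(&(*shared).data[..n]);
            (*shared).len.store(0, Ordering::Release); // hand the buffer back to the producer
        }
        true
    }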
  23. Sharing a Custom Data Structure

    #[repr(C)]
    pub struct SharedData {
        pub len: AtomicUsize,
        pub data: [u8; 1024],
    }

    let alignment = AllocAlignment::for_type::<SharedData>();
    let size = std::mem::size_of::<SharedData>();
    let buf = shm_provider
        .alloc(size)
        .with_alignment(alignment)
        .wait()
        .unwrap();

    // initialize data
    let shared_data = unsafe {
        let ptr = buf.as_ptr() as *const SharedData;
        &*ptr
    };
    shared_data.len.store(0, std::sync::atomic::Ordering::Release);
  24. Getting The Shared Buffer

    The Zenoh query mechanism is perfect for letting the consumer retrieve the buffer from the producer. Once the buffer is retrieved, the producer/consumer logic is run over the SharedData type. Let's look at the full example.
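    A rough sketch of that hand-off (not the full zshm example; the key expression and variable names are illustrative): the producer exposes the SHM buffer through a queryable, and the consumer fetches it with a get.

    // Producer side: reply to queries with the SHM buffer that holds SharedData.
    // `z` is the session, `shm_buf` the immutable ZShm handle to the allocated buffer.
    let queryable = z.declare_queryable("zshm/spsc/buffer").wait().unwrap();
    while let Ok(query) = queryable.recv() {
        query.reply("zshm/spsc/buffer", shm_buf.clone()).wait().unwrap();
    }

    // Consumer side: fetch the buffer once, then run the produce/consume logic over it.
    let replies = z.get("zshm/spsc/buffer").wait().unwrap();
    if let Ok(reply) = replies.recv() {
        if let Ok(sample) = reply.result() {
            let shm = sample.payload().as_shm().expect("expected an SHM payload");
            // ... interpret the bytes behind `shm` as a SharedData, as on the previous slide
        }
    }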
  25. zshm

    The zshm repository provides a series of shared-memory-based producer/consumer examples. The examples cover both the 1-1 and the 1-n case.
  26. 1 Producer / N Consumers

    - There is a single producer
    - Consumers consume data concurrently
    - Each consumer should consume the data only once
    - If N consumers are known at the time of production, at most N consumers should consume
    - Consumers can come and go dynamically
    [Diagram: a producer P and consumers C1, C2, ..., Cn sharing an SHM buffer]
  27. Getting The Clue of It

    The data structure we need in place to coordinate is a bit more involved (a sketch of the consumer side follows below):
    - len communicates the length of the data, as well as whether there is something to read or not
    - sn ensures that the same consumer does not consume the same data multiple times
    - sub_count keeps track of the current number of consumers
    - read_count keeps track of how many consumers still have to read before this sample is fully consumed and the producer can thus produce another one

    #[repr(C)]
    pub struct SharedData {
        pub len: AtomicUsize,
        pub sn: AtomicU64,
        pub read_count: AtomicI32,
        pub sub_count: AtomicUsize,
        pub data: [u8; 1024],
    }
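    As a minimal sketch (not from the deck; the helper name and exact protocol details are mine, the zshm examples are the reference) of how a consumer can use sn and read_count:

    use std::sync::atomic::Ordering;

    /// Consumer side: read each sample at most once, tracked via the locally stored `last_sn`.
    fn try_consume(shared: *const SharedData, last_sn: &mut u64, out: &mut Vec<u8>) -> bool {
        unsafe {
            let sn = (*shared).sn.load(Ordering::Acquire);
            let n = (*shared).len.load(Ordering::Acquire);
            if n == 0 || sn == *last_sn {
                return false; // nothing new for this consumer
            }
            out.extend_from_slice(&(*shared).data[..n]);
            *last_sn = sn;
            // One fewer pending reader; when read_count reaches zero the last reader
            // marks the buffer empty so the producer can publish the next sample.
            if (*shared).read_count.fetch_sub(1, Ordering::AcqRel) == 1 {
                (*shared).len.store(0, Ordering::Release);
            }
        }
        true
    }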
  28. Futex

    A Linux futex, or the equivalent if available on your platform, can be used to efficiently coordinate over shared memory, leveraging atomics.
  29. Getting The Clue of It

    The structure is the same as before, with one addition: the futex is used to synchronize across process boundaries.

    #[cfg(target_os = "linux")]
    // Shared data
    #[repr(C)]
    pub struct SharedData {
        pub futex: Futex<Shared>,
        pub len: AtomicUsize,
        pub sn: AtomicU64,
        pub read_count: AtomicI32,
        pub sub_count: AtomicUsize,
        pub data: [u8; 1024],
    }
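    As a rough sketch of how the futex is used (assuming Futex<Shared> comes from the linux-futex crate, which exposes an AtomicU32 value plus wait()/wake(); this is an assumption on my part, the zshm examples are the reference):

    use std::sync::atomic::Ordering;

    // Consumer: block until the producer signals that new data is available.
    fn wait_for_data(shared: *const SharedData) {
        unsafe {
            // 0 means "no data yet"; wait(0) returns immediately once the value changes.
            while (*shared).futex.value.load(Ordering::Acquire) == 0 {
                let _ = (*shared).futex.wait(0);
            }
        }
    }

    // Producer: mark data as available, then wake all consumers blocked on the futex.
    fn signal_data(shared: *const SharedData) {
        unsafe {
            (*shared).futex.value.store(1, Ordering::Release);
            (*shared).futex.wake(i32::MAX);
        }
    }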
  30. Key Highlights

    - Zenoh is a one-of-a-kind protocol: it unifies data in motion, data at rest, and computation
    - It is the only protocol able to run from the microcontroller up to the datacenter
    - It has no topological constraints
    - It is extremely easy to use!