doc(records): improve documentation (#197)
Signed-off-by: Julien Loudet <[email protected]>
J-Loudet authored Feb 26, 2024
1 parent c11307f commit cac9adb
Showing 3 changed files with 50 additions and 8 deletions.
14 changes: 14 additions & 0 deletions zenoh-flow-records/src/connectors.rs
@@ -17,6 +17,13 @@ use std::fmt::Display;
use zenoh_flow_commons::NodeId;
use zenoh_keyexpr::OwnedKeyExpr;

/// A `SenderRecord` describes the sending end of a "Zenoh connection" between Zenoh-Flow runtimes.
///
/// Effectively, a `Sender` performs `put` operations on Zenoh. The main difference from an out-of-the-box `put` is
/// that Zenoh-Flow manages when these operations are performed and on which resource.
///
/// Specifically, Zenoh-Flow ensures that each resource stays unique. This allows the same data flow to be deployed
/// multiple times on the same infrastructure while keeping the deployments isolated.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct SenderRecord {
pub(crate) id: NodeId,
@@ -39,6 +46,13 @@ impl SenderRecord {
}
}

/// A `ReceiverRecord` describes the receiving end of a "Zenoh connection" between Zenoh-Flow runtimes.
///
/// Effectively, a `Receiver` encapsulates a Zenoh subscriber. The main difference from an out-of-the-box subscriber
/// is that Zenoh-Flow manages how it is pulled and the resource it declares.
///
/// Specifically, Zenoh-Flow ensures that each resource stays unique. This allows the same data flow to be deployed
/// multiple times on the same infrastructure while keeping the deployments isolated.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ReceiverRecord {
pub(crate) id: NodeId,
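Below is a short illustrative sketch, not part of the commit, that restates the isolation guarantee described in the two doc comments above with stand-in types. The `ConnectorSketch` struct, the `unique_resource` helper and the `<instance>/<flow>/<link>` layout are assumptions made for the example; only the `__zenoh_flow_sender` / `__zenoh_flow_receiver` suffixes come from the actual sources (see `dataflow.rs` below).

// Illustrative sketch only: the field names and the key-expression layout are
// assumptions, not the actual zenoh-flow-records types.

/// Hypothetical shape of what a Sender/Receiver pair agrees on.
struct ConnectorSketch {
    /// Identifier of the connector node (mirrors the `id: NodeId` field above).
    id: String,
    /// Resource on which the Sender `put`s and the Receiver subscribes.
    resource: String,
}

/// Prefixes the resource with an instance identifier so that two deployments of
/// the same data flow never share a Zenoh resource. The exact layout is a guess.
fn unique_resource(instance_id: &str, flow: &str, link: &str) -> String {
    format!("{instance_id}/{flow}/{link}")
}

fn main() {
    // The Sender and Receiver of one deployment share the same resource...
    let run_1 = unique_resource("instance-1", "my-flow", "source/out");
    let sender = ConnectorSketch {
        id: "source__zenoh_flow_sender".to_string(),
        resource: run_1.clone(),
    };
    let receiver = ConnectorSketch {
        id: "source__zenoh_flow_receiver".to_string(),
        resource: run_1,
    };
    assert_eq!(sender.resource, receiver.resource);

    // ...while a second deployment of the very same flow gets a different one.
    let run_2 = unique_resource("instance-2", "my-flow", "source/out");
    assert_ne!(receiver.resource, run_2);
    let _ = (sender.id, receiver.id); // silence unused-field warnings
}

Prefixing every resource with an instance-specific component is one way to obtain the "each resource stays unique" property; the diff itself does not show how the crate derives it.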
31 changes: 23 additions & 8 deletions zenoh-flow-records/src/dataflow.rs
@@ -30,7 +30,17 @@ use zenoh_keyexpr::OwnedKeyExpr;
const SENDER_SUFFIX: &str = "__zenoh_flow_sender";
const RECEIVER_SUFFIX: &str = "__zenoh_flow_receiver";

/// A `DataFlowRecord` represents a single deployment of a [FlattenedDataFlowDescriptor] on an infrastructure, i.e. on a
/// set of Zenoh-Flow runtimes.
///
/// A `DataFlowRecord` can only be created by processing a [FlattenedDataFlowDescriptor] and providing a default
/// Zenoh-Flow [runtime](RuntimeId), which will manage the nodes that have no explicit mapping. See the
/// [try_new](DataFlowRecord::try_new()) method.
///
/// The differences between a [FlattenedDataFlowDescriptor] and a [DataFlowRecord] are:
/// - In a record, all nodes are mapped to a Zenoh-Flow runtime.
/// - A record introduces two additional kinds of nodes: [Sender](SenderRecord) and [Receiver](ReceiverRecord). These
///   nodes take care of connecting nodes that are running on different Zenoh-Flow runtimes.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct DataFlowRecord {
pub(crate) id: InstanceId,
@@ -45,19 +55,24 @@ pub struct DataFlowRecord {
}

impl DataFlowRecord {
/// Attempts to create a [DataFlowRecord] from the provided [FlattenedDataFlowDescriptor], assigning nodes without
/// a mapping to the default [runtime](RuntimeId).
///
/// If the [FlattenedDataFlowDescriptor] does not specify a unique identifier, one will be randomly generated.
///
/// # Errors
///
/// The creation of the [DataFlowRecord] should, in theory, not fail. The only failure point is during the creation
/// of the connectors: the [Sender](SenderRecord) and [Receiver](ReceiverRecord) that are automatically generated
/// when two nodes that need to communicate are located on different runtimes.
///
/// To generate these connectors, a Zenoh key expression is computed. Computing this expression can result in an
/// error if the [NodeId] or [PortId](zenoh_flow_commons::PortId) are not valid chunks (see Zenoh's
/// [keyexpr](https://docs.rs/zenoh-keyexpr/0.10.1-rc/zenoh_keyexpr/key_expr/struct.keyexpr.html) documentation for
/// more details).
///
/// Note that this should not happen if the [FlattenedDataFlowDescriptor] was obtained by parsing and flattening a
/// [DataFlowDescriptor](zenoh_flow_descriptors::DataFlowDescriptor).
pub fn try_new(
data_flow: &FlattenedDataFlowDescriptor,
default_runtime: &RuntimeId,
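The error path described above comes down to key-expression validation. The minimal sketch below is not the crate's code: it only shows, with the `zenoh-keyexpr` crate that `connectors.rs` already imports, why an identifier that does not form a valid chunk makes the construction fail. The `<node>/<port>` layout is an assumption for the example.

// Minimal sketch (not the crate's code) of the key-expression validation that
// can make `try_new` fail; assumes the zenoh-keyexpr crate linked above.
use zenoh_keyexpr::OwnedKeyExpr;

fn main() {
    // A well-formed node id and port id yield valid key-expression chunks.
    let ok = OwnedKeyExpr::try_from(format!("{}/{}", "my-operator", "out-0"));
    assert!(ok.is_ok());

    // An empty port id would produce an empty chunk, which key expressions
    // forbid; reserved characters such as '?' or '#' in an id fail similarly.
    let bad = OwnedKeyExpr::try_from(format!("{}/{}", "my-operator", ""));
    assert!(bad.is_err());
}

As the doc comment notes, a descriptor that went through the usual parsing and flattening steps should never carry such identifiers, so this error is essentially a safeguard.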
13 changes: 13 additions & 0 deletions zenoh-flow-records/src/lib.rs
@@ -12,6 +12,19 @@
// ZettaScale Zenoh Team, <[email protected]>
//

//! This crate exposes `*Record` structures. A *record* in Zenoh-Flow is a description of a data flow (or part of it)
//! that is tied to a specific infrastructure and deployment.
//!
//! In particular, a [DataFlowRecord] represents a single deployment of a
//! [FlattenedDataFlowDescriptor](zenoh_flow_descriptors::FlattenedDataFlowDescriptor) on an infrastructure: all the
//! nodes have been assigned to a Zenoh-Flow runtime. This is why each [DataFlowRecord] is associated with a unique
//! [InstanceId](zenoh_flow_commons::InstanceId) that identifies it.
//!
//! # ⚠️ Internal usage
//!
//! This crate is (mostly) intended for internal usage within the
//! [Zenoh-Flow](https://github.com/eclipse-zenoh/zenoh-flow) project.

mod dataflow;
pub use dataflow::DataFlowRecord;

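To make the crate-level description above concrete, here is a conceptual sketch, not the zenoh-flow-records API: stand-in descriptor and record types, plus an instantiation step that mirrors what `DataFlowRecord::try_new` is documented to do (map every node to a runtime, falling back to the default, and tag the result with an instance identifier). All names and types below are invented for the example; the real crate uses `InstanceId`, `RuntimeId` and `NodeId`.

// Conceptual sketch only: these types stand in for FlattenedDataFlowDescriptor
// and DataFlowRecord; they are not the zenoh-flow-records API.
use std::collections::HashMap;

/// Stand-in for a FlattenedDataFlowDescriptor: infrastructure-agnostic.
struct FlatDescriptorSketch {
    nodes: Vec<String>,
}

/// Stand-in for a DataFlowRecord: the same flow, pinned to runtimes and tagged
/// with an instance identifier (the real crate uses an `InstanceId`).
struct RecordSketch {
    instance_id: u64,
    mapping: HashMap<String, String>, // node id -> runtime id
}

/// Every node ends up mapped to a runtime; nodes without an explicit mapping
/// fall back to the default one, as `try_new` is documented to do.
fn instantiate(desc: &FlatDescriptorSketch, default_runtime: &str, instance_id: u64) -> RecordSketch {
    let mapping = desc
        .nodes
        .iter()
        .map(|node| (node.clone(), default_runtime.to_string()))
        .collect();
    RecordSketch { instance_id, mapping }
}

fn main() {
    let desc = FlatDescriptorSketch {
        nodes: vec!["source".to_string(), "sink".to_string()],
    };

    // Deploying the same descriptor twice yields two records that differ by
    // their instance identifier: this is what keeps deployments apart.
    let run_a = instantiate(&desc, "runtime-1", 1);
    let run_b = instantiate(&desc, "runtime-1", 2);
    assert_ne!(run_a.instance_id, run_b.instance_id);
    assert_eq!(run_a.mapping.len(), desc.nodes.len());
    assert_eq!(run_b.mapping.get("sink").map(String::as_str), Some("runtime-1"));
}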
