Streams

Streams are used for defining input and output data flows (such as Events, Pulses, Journals) between different runners.



Streams configure input / output relations between different runners over various communication channels (such as Kafka topics or API routes). Defining and adding streams to runners explicitly is optional, except for event streams (e.g. Kafka), which require connections to be actively set up during runner start-up.

Most stream definitions only require a system name mapping, but it is also possible to add more specialized configurations (such as offset reset details for Kafka consumers) when creating stream elements.

Stream Partitions

A key concept in using streams, especially with Kafka or similar systems, is stream partitioning. Partitioning splits a stream into a set of sub-streams, each assigned a unique partition id and responsible for transporting only the events related to that partition id. Partitioning event streams allows the services consuming them to scale efficiently.
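As a general illustration of the idea (not Rierino-specific code), the sketch below shows simplified key-based partition assignment in the style of Kafka: records sharing a key always map to the same partition, so the same consumer instance sees them in order. Kafka's actual default partitioner uses murmur2 hashing of the serialized key; the hash here is simplified.

```java
// Simplified, Kafka-style key-based partitioning (illustrative only).
public class PartitioningSketch {

    // Records sharing a key always map to the same partition,
    // which preserves per-key ordering for downstream consumers.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 8;
        // All events for aggregate "customer-42" land on one partition,
        // so a single runner instance consumes them in order.
        System.out.println(partitionFor("customer-42", partitions));
        System.out.println(partitionFor("customer-42", partitions)); // same partition
        System.out.println(partitionFor("customer-7", partitions));  // possibly different
    }
}
```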

Stream partition assignment and management vary by event runner type. For example, the Samza event runner automatically distributes partitions across runner instances and can reassign them in case of instance failures, if it is configured to use a service coordinator (e.g. Zookeeper). Spring event runners, on the other hand, typically only consume broadcasted event streams (such as commands, pulses, journals), as they primarily use REST / socket channels for communications.

It is possible to utilize partitions in various use cases with Rierino:

  • CDC: Change data records (i.e. pulses, journals) for large data sets are typically partitioned based on the partitionId field of the aggregate record. This keeps distributed runners and state managers synchronized, as each always receives the updates for its records from the same stream partition and in the right order.

  • Requests: All requests from the API gateway to event stream based runners are sent in a partitioned manner, where each API gateway instance is assigned a partition id (e.g. its stateful set ordinal on Kubernetes). Requests coming from a gateway are given request ids in that gateway's assigned partition (e.g. request ids ending with the partition id), and the gateway receives its responses on the same partition of the response event streams (see the sketch after this list).

  • Saga Flows: Every step of a saga flow allows selection of the partitioning method (i.e. key strategy), ensuring that the runner responsible for that step receives the request on the right partition (e.g. when the runner is responsible for records of a specific partition only). Step results are always returned to the saga runner on the same partition (i.e. the partition of the request id), so that the same runner can manage each API call end to end.

  • Stateful Runners: When a runner manages a local state or cache (e.g. caching calculation results for a customer), partitions ensure that the runner holding the local cache also receives subsequent requests of the same nature (e.g. from the same customer).
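The following is a minimal sketch of the gateway-side request partitioning described in the Requests item above. The class and method names are hypothetical and only illustrate deriving a gateway instance's partition id from its ordinal and encoding it in the request id; they are not actual Rierino APIs.

```java
import java.util.UUID;

// Hypothetical illustration of gateway request partitioning; not Rierino code.
public class GatewayPartitioningSketch {

    // e.g. derive the partition id from the gateway's stateful set ordinal on Kubernetes
    static int gatewayPartitionId() {
        String hostname = System.getenv().getOrDefault("HOSTNAME", "gateway-0");
        return Integer.parseInt(hostname.substring(hostname.lastIndexOf('-') + 1));
    }

    // Request ids carry the partition id as a suffix, so the response can be
    // published to (and consumed from) the same partition of the response stream.
    static String newRequestId(int partitionId) {
        return UUID.randomUUID() + "-" + partitionId;
    }

    public static void main(String[] args) {
        int partitionId = gatewayPartitionId();
        System.out.println("partition=" + partitionId + " requestId=" + newRequestId(partitionId));
    }
}
```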

A number of settings are shared across all stream types:

| Setting | Definition | Example | Default |
| --- | --- | --- | --- |
| system | Name of the system this stream belongs to | kafka_default | - |
| parameter.ignoreOffset | Whether stream offset should be used or not for checks (i.e. is sequential) | true | false |
| meta.[field].default | Default event metadata field values for requests received on this topic | action=Get | - |
| meta.[field].override | Final event metadata field values for requests received on this topic | domain=product | - |

It is possible to define any number and type of new streams with specialized configurations for different event runners.
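As a rough sketch of how the shared settings above might come together in a stream element, consider the JSON below. Only the setting keys and sample values are taken from the table; the surrounding structure (id, type, settings wrapper) is an assumption for illustration, so refer to the example definition below for the authoritative format.

```json
{
  "id": "stream-orders-in",
  "type": "stream",
  "settings": {
    "system": "kafka_default",
    "parameter.ignoreOffset": "false",
    "meta.action.default": "Get",
    "meta.domain.override": "product"
  }
}
```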

Example stream definition: stream-saga-0001.json (338B), which can be imported on the Element screen.