"Loose Systems last Longer and work better", John Gall (1978)
Passing FPs from one Brick to another without blocking requires buffers. FPs can be pushed into a buffer by multiple Bricks without locking, which prevents deadlocks in case of errors.
Between Bricks, FIFO (First In, First Out) buffers are used.
Bricks, ports, and buffers
A Brick has 0 to n input ports. If it has no input ports, the Brick is an Inlet or an active element (Producer) and does not require a buffer. If it has 1 to n input ports, one and only one buffer (the Highlander principle) is assigned to the Brick. FPs are tagged with the id of the input port and pushed into the buffer to be processed.
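The "one buffer per Brick" rule can be sketched as follows, assuming a Brick with two input ports; the names (push_fp, the port ids) are illustrative, not a fixed API:

```python
import queue

# The one and only buffer of this Brick (Highlander principle).
# queue.Queue is a thread-safe FIFO, so multiple upstream Bricks
# can push into it concurrently without explicit locking.
buffer = queue.Queue()

def push_fp(port_id, payload):
    """Tag the FP with the input port id and enqueue it."""
    buffer.put((port_id, payload))

# Two different upstream Bricks push into the same buffer:
push_fp("in0", {"value": 1})
push_fp("in1", {"value": 2})

# The Brick pulls FPs in FIFO order and can dispatch by port id:
port, fp = buffer.get()
```

The tag lets the Brick distinguish which input port an FP arrived on, even though all ports feed the same buffer.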
Using a buffer offers an opportunity for automatic scaling. If the number of FPs increases, multiple instances of a Brick can be initialized to handle the load and balance the buffers. This type of scaling is referred to as Brick Concurrency. All instances of a Brick assigned to a Flow pull FPs from the same buffer.
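Brick Concurrency can be sketched with worker threads, assuming each Brick instance is a thread and EOD is a sentinel value; the doubling step stands in for the Brick's actual processing:

```python
import queue
import threading

buffer = queue.Queue()    # the single shared buffer of the Brick
results = queue.Queue()   # downstream buffer (illustrative)

def brick_instance():
    """One instance of the Brick: pull FPs until an EOD sentinel arrives."""
    while True:
        fp = buffer.get()
        if fp is None:          # EOD sentinel
            buffer.put(None)    # re-enqueue so the other instances see it too
            break
        results.put(fp * 2)     # the Brick's (illustrative) processing step

# Scale out: three instances of the same Brick share one buffer.
instances = [threading.Thread(target=brick_instance) for _ in range(3)]
for t in instances:
    t.start()

for fp in range(10):
    buffer.put(fp)
buffer.put(None)                # signal end of data

for t in instances:
    t.join()
```

Because all instances pull from the same FIFO, the load balances itself: whichever instance is free takes the next FP.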
Flow life cycle
The life span of a Flow has to be defined: buffers have to be created and destroyed, and processes have to be suspended so that system resources can be used by other tasks. Since Flow processing is a data-driven approach, it seems natural to use the data itself to define the life span.
A Flow is started (startup) when data is available and waiting to be processed. The Flow is running (run time) while the data is processed. The Flow ends (shutdown) when the data processing is finished.
There might be discussions about examples where data is delivered as a never-ending stream, which would require an infinite life span of a Flow. At this point I assume that even in this case the FPs can be divided into logical groups, each with a start and an end, to define the life span of a Flow.
When an Inlet or a Producer delivers a set of FPs, a Flow is initialized. Each Brick triggers the creation of the Brick instances and related buffers connected to its output ports at the moment an FP has to be passed.
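This lazy initialization can be sketched as creating a downstream buffer only on first use; the names (send, the port id) are illustrative assumptions:

```python
import queue

downstream_buffers = {}   # output port id -> buffer, created on demand

def send(port_id, fp):
    """Pass an FP through an output port, creating the buffer on first use."""
    if port_id not in downstream_buffers:
        downstream_buffers[port_id] = queue.Queue()  # created just in time
        # ...here the downstream Brick instance would be started as well...
    downstream_buffers[port_id].put(fp)

send("out0", "first FP")    # triggers creation of the "out0" buffer
send("out0", "second FP")   # reuses the existing buffer
```

Buffers for output ports that never carry an FP are simply never created.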
While the Inlet or Producer is pushing a set of FPs, the subsequent processing uses the same set of buffers. Automatic scaling may start more instances of some Bricks, but no additional buffers are created.
When the Inlet or Producer signals end of data (EOD), the Brick instances and buffers are decomposed step by step as the EOD signal moves through the Flow.
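The step-by-step teardown can be sketched by treating EOD as an ordinary sentinel FP that travels through the Flow, assuming each Brick forwards EOD to its successor before releasing itself; the two-Brick pipeline is illustrative:

```python
import queue

EOD = object()     # the end-of-data sentinel
torn_down = []     # records the order in which Bricks decompose

def run_brick(name, inbuf, outbuf):
    """Process FPs until EOD arrives, forward EOD, then decompose."""
    while True:
        fp = inbuf.get()
        if fp is EOD:
            if outbuf is not None:
                outbuf.put(EOD)      # move the EOD signal downstream
            torn_down.append(name)   # this Brick and its buffer are released
            break
        if outbuf is not None:
            outbuf.put(fp)           # pass the FP along unchanged

b1, b2 = queue.Queue(), queue.Queue()
b1.put("fp-1")
b1.put(EOD)                          # the Inlet signals end of data

run_brick("brick-a", b1, b2)         # decomposes first...
run_brick("brick-b", b2, None)       # ...then its successor
```

Each Brick only shuts down after it has drained its own buffer, so no FP in flight is lost during the shutdown phase.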
A Flow instance is defined by a set of buffers. If the same Flow uses different sets of buffers, the situation is referred to as Flow Concurrency. If an Inlet sends EOD but the data has not been completely processed yet, the subsequent processing continues while a new set of data creates a new set of buffers for the same Flow.