- The Fundamental Rule
- Outputs Are First-Class Data
- How Data Is Passed Conceptually
- Named Variables and References
- Injecting Data Into AI Prompts
- One Responsibility Per AI Step
- Reusing the Same Output Multiple Times
- Transforming Data Between AI Steps
- Passing Lists and Iterative Data
- Avoiding Implicit Context
- Scope and Lifetime of Data
- Debugging Data Flow
- Cost and Performance Benefits
- Common Mistakes
- Best Practices
- Summary
Passing data between AI steps is the core capability that makes OmniBlocks powerful. Without data passing, an OmniBlocks stream would just be a collection of isolated prompts. With it, a stream becomes a true multi-step execution pipeline where each step builds on the previous one in a controlled, explicit way.
This section explains how data flows, how it should be designed, and how to avoid the common traps.
The Fundamental Rule
In OmniBlocks:
AI does not remember — the execution stream does.
Every AI step receives explicit inputs from previous blocks, via named outputs or variables.
Nothing is “implicitly understood”.
Outputs Are First-Class Data
Every OmniBlock produces structured output.
That output can be:
- raw text
- structured text
- parsed fields
- lists
- variables
- metadata
Once produced, outputs can be:
- referenced later
- injected into prompts
- transformed further
- reused multiple times
If a block does not expose an output, it cannot be reused.
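As a rough mental model, an output can be thought of as a small structured record: named, typed, and reusable. Here is a minimal Python sketch of that idea; it is conceptual only, and the `BlockOutput` type and its fields are hypothetical, not OmniBlocks syntax.

```python
from dataclasses import dataclass, field

@dataclass
class BlockOutput:
    """Hypothetical model of a block's output: named, structured, reusable."""
    name: str                                     # reference used downstream
    text: str = ""                                # raw or structured text
    fields: dict = field(default_factory=dict)    # parsed fields, if any
    items: list = field(default_factory=list)     # list data, if any
    metadata: dict = field(default_factory=dict)  # e.g. source URL, timestamps

# A scrape block might expose its result like this:
product_specs = BlockOutput(
    name="product_specs",
    text="Weight: 1.2kg\nBattery: 10h",
    fields={"weight": "1.2kg", "battery": "10h"},
    metadata={"source": "https://example.com/product"},
)
```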
How Data Is Passed Conceptually
Data passing follows a simple rule:
- Block A produces output
- Output is stored under a known reference
- Block B explicitly consumes that reference
There is no global context guessing.
This makes execution deterministic and debuggable.
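The produce/store/consume cycle can be sketched with a plain dictionary standing in for the execution stream's store. This is a conceptual Python sketch, assuming nothing about OmniBlocks internals; `outputs` and the block functions are illustrative stand-ins.

```python
# The stream's store: every output lives under a known reference.
outputs: dict[str, str] = {}

def block_a_scrape() -> str:
    return "Raw scraped page text..."

def block_b_summarize(source_text: str) -> str:
    # Consumes its input explicitly; it sees nothing else.
    return f"Summary of: {source_text[:40]}"

# 1. Block A produces output.
# 2. The output is stored under a known reference.
outputs["scraped_page"] = block_a_scrape()

# 3. Block B explicitly consumes that reference.
outputs["page_summary"] = block_b_summarize(outputs["scraped_page"])
```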
Named Variables and References
Each block output is referenced by a name, not position.
Examples (conceptual, not syntax):
- serp_results
- product_specs
- video_captions
- outline
- section_content
Good naming is critical. Poor naming leads to fragile streams.
Injecting Data Into AI Prompts
When an AI block runs, its prompt can include:
- static instructions
- dynamic references to earlier outputs
Example logic:
- “Using the outline generated earlier, write section 1”
- “Summarize the product specs listed below”
- “Rewrite the following scraped content into prose”
The AI only sees what you explicitly inject.
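In conceptual terms, injection is just template substitution: the prompt string combines static instructions with earlier outputs pulled from the store. A minimal Python sketch, with an illustrative `outputs` store:

```python
outputs = {"outline": "1. Intro\n2. Features\n3. Verdict"}

# Static instructions plus an explicit dynamic reference.
prompt = (
    "Using the outline below, write section 1 in a neutral tone.\n\n"
    f"OUTLINE:\n{outputs['outline']}"
)

# The AI block sees exactly this string and nothing more.
print(prompt)
```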
One Responsibility Per AI Step
A strong OmniBlocks stream uses multiple small AI steps, not one large one.
Bad pattern:
- scrape → giant AI prompt → final article
Good pattern:
- scrape → parse → AI summary → AI expansion → AI polish
Passing data between steps makes each AI call:
- smaller
- cheaper
- more predictable
- easier to debug
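The good pattern above might be pictured as a chain of small, single-purpose functions, each wrapping one focused model call. A conceptual sketch; `call_model` is a hypothetical stand-in for whatever invocation OmniBlocks actually performs.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"<model response to: {prompt[:30]}...>"

def summarize(raw: str) -> str:
    return call_model(f"Summarize this content:\n{raw}")

def expand(summary: str) -> str:
    return call_model(f"Expand this summary into full sections:\n{summary}")

def polish(draft: str) -> str:
    return call_model(f"Polish the style of this draft:\n{draft}")

# scrape -> parse -> AI summary -> AI expansion -> AI polish
parsed = "parsed, scraped content..."
article = polish(expand(summarize(parsed)))
```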
Reusing the Same Output Multiple Times
A single output can feed multiple downstream blocks.
Example:
outline feeds:
- section writer
- meta description generator
- FAQ generator
This avoids:
- duplicate AI calls
- inconsistent structure
- unnecessary costs
Reuse is intentional, not automatic.
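Conceptually, fan-out just means several consumers read the same stored reference: the outline is generated once and injected three times. A Python sketch with illustrative function names, not OmniBlocks syntax:

```python
outputs = {"outline": "1. Intro\n2. Features\n3. FAQ"}

def write_sections(outline: str) -> str:
    return f"Sections following: {outline}"

def write_meta_description(outline: str) -> str:
    return f"Meta description derived from: {outline}"

def write_faq(outline: str) -> str:
    return f"FAQ derived from: {outline}"

# One upstream output, three intentional consumers, no regeneration.
outputs["sections"] = write_sections(outputs["outline"])
outputs["meta_description"] = write_meta_description(outputs["outline"])
outputs["faq"] = write_faq(outputs["outline"])
```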
Transforming Data Between AI Steps
Not all blocks are AI blocks.
Between AI steps, you can:
- clean text
- split lists
- normalize formatting
- remove noise
- extract fields
- merge content
This ensures AI blocks receive clean, focused inputs rather than a raw mess.
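A transformation block can be pictured as a deterministic function sitting between two AI calls, for example stripping leftover markup and normalizing whitespace before the text reaches the next prompt. A conceptual sketch:

```python
import re

def clean_scraped_text(raw: str) -> str:
    """Deterministic transform: no AI involved, fully predictable."""
    text = re.sub(r"<[^>]+>", " ", raw)       # drop leftover HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # normalize whitespace
    return text

raw = "<div>  Battery:\n\n  10h  </div>"
print(clean_scraped_text(raw))  # -> "Battery: 10h"
```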
Passing Lists and Iterative Data
When dealing with lists (products, keywords, RSS items):
- a loop block controls iteration
- each iteration passes a single item forward
- AI blocks operate on one item at a time
This is how bulk generation stays consistent.
AI is never asked to “handle the whole list intelligently”.
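A loop block can be pictured as a plain for-loop that hands each item to the same single-item AI step, so the model never sees the whole list at once. A conceptual sketch with a hypothetical `describe_product` step:

```python
products = ["Widget A", "Widget B", "Widget C"]

def describe_product(product: str) -> str:
    # One item per call: same prompt shape, same output shape, every time.
    return f"Description of {product}"

# The loop block controls iteration; each pass forwards a single item.
descriptions = [describe_product(p) for p in products]
```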
Avoiding Implicit Context
A common mistake is assuming AI remembers earlier steps.
Wrong assumption:
- “The AI already knows the outline”
Correct approach:
- explicitly inject the outline output again
If data is not passed explicitly, it does not exist for that step.
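Side by side, the wrong and right patterns look roughly like this (a conceptual sketch; `call_model` is hypothetical):

```python
outputs = {"outline": "1. Intro\n2. Features"}

def call_model(prompt: str) -> str:
    return f"<response to: {prompt[:40]}...>"

# Wrong: assumes the model remembers the outline from an earlier step.
bad = call_model("Write section 1 following the outline.")

# Right: the outline output is injected again, explicitly.
good = call_model(f"Write section 1 following this outline:\n{outputs['outline']}")
```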
Scope and Lifetime of Data
Data exists:
- only within the execution stream
- only for the duration of execution
- only for the blocks that reference it
Outputs do not:
- persist globally
- carry over to other streams
- survive execution unless stored intentionally
This prevents hidden dependencies.
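One way to picture this scoping: the store is created when execution starts and discarded when it ends, unless a block persists something on purpose. A conceptual sketch; `persist_to_storage` is a hypothetical stand-in for an intentional storage block.

```python
def persist_to_storage(key: str, value: str) -> None:
    # Hypothetical stand-in for an intentional storage block.
    print(f"stored {key}: {value[:20]}...")

def run_stream() -> None:
    outputs: dict[str, str] = {}  # exists only for this execution
    outputs["draft"] = "Generated article draft..."
    # Nothing survives unless a block stores it intentionally:
    persist_to_storage("final_article", outputs["draft"])

run_stream()
# `outputs` no longer exists here: no carry-over between streams.
```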
Debugging Data Flow
If an AI step behaves unexpectedly:
- inspect the inputs it received
- verify upstream outputs
- check transformations
- confirm references are correct
Most OmniBlocks issues are data flow issues, not model issues.
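A lightweight way to make that inspection possible is to log every block's inputs and outputs as the stream runs. A conceptual sketch using Python's standard logging; `run_block` is an illustrative wrapper, not an OmniBlocks feature.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("stream")

def run_block(name, fn, *inputs):
    # Record exactly what the block received and produced.
    log.debug("[%s] inputs: %r", name, inputs)
    result = fn(*inputs)
    log.debug("[%s] output: %r", name, result)
    return result

summary = run_block("summarize", lambda text: f"Summary: {text}", "raw page text")
```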
Cost and Performance Benefits
Explicit data passing:
- reduces prompt size
- avoids re-scraping or re-generating
- minimizes token usage
- improves response stability
Well-designed streams are cheaper and faster than monolithic prompts.
Common Mistakes
- relying on AI memory
- passing too much raw data
- reusing poorly structured outputs
- vague variable names
- skipping normalization steps
- chaining AI blocks without transformation
Structure matters more than model choice.
Best Practices
- name outputs clearly
- keep AI steps narrow
- transform data between AI calls
- reuse outputs intentionally
- log intermediate results
- treat data flow like code, not prose
OmniBlocks reward explicit thinking.
Summary
Passing data between AI steps is what turns OmniBlocks into a true execution framework. Each block produces explicit outputs that are intentionally reused, transformed, and injected into later steps. AI does not “remember” — the execution stream does. When data flow is designed clearly, OmniBlocks deliver predictable, scalable, and cost-efficient multi-step AI workflows that are impossible to achieve with single-prompt approaches.