Pure Python Language Patterns for mshell Workflow — Complete Reference Guide (P1–P24)
Pure Bash Language Patterns for mshell Workflow — Complete Reference Guide (P1–P24)
Pure mshell Language Patterns for mshell Workflow — Complete Reference Guide (P1–P24)
mshell Workflow Patterns — Reference Guide, Part I (P1–P12)
mshell Workflow Patterns — Reference Guide, Part II (P13–P24)
mshell Ecosystem — Tools & Components Reference (workflow-related)
mshell Workflow Guide, February 22nd, 2026.
Implementation of all five canonical agentic workflow patterns in mshell
This diagram illustrates the complete implementation of all five canonical agentic workflow patterns in mshell, a polyglot, AI- and mathematics-powered shell environment that combines: 7 programming languages (Bash, Python, C, C++, Rust, Go, Lua), 3 LLM vendor backends (Ollama, Claude, OpenAI, or connecting through the llm Linux evaluation framework), up to 3 different models active simultaneously within a single workflow, direct execution of pre-written code blocks in any supported language, and native mshell commands — all orchestrated from plain Markdown documents.
Pattern 1 — Prompt Chaining shows a sequential pipeline where language blocks and LLM calls alternate, each step consuming the previous output via <var and producing the next via >var. Different models (@1, @2, @3) can be assigned to different steps in the same chain.
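A minimal sketch of such a chain in a hypothetical .md document (the block names, the prompt, and the exact fence-attribute spelling are illustrative assumptions, not confirmed mshell syntax; the stdin delivery of a consumed variable is likewise assumed):

````markdown
```python >stats
# step 1: produce data, captured into session variable "stats"
print("mean=4.2 max=9 n=120")
```

<!--@1 Summarize these statistics in one sentence. <stats >summary-->

```bash <summary >report
# step 3: a different language consumes the model's summary
cat
```
````

Each step reads the previous step's variable and writes its own, so the chain can mix languages and models freely.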
Pattern 2 — Routing demonstrates LLM-driven conditional branching: a router model classifies the task and emits a single keyword; the if=var:value fence attribute gates execution so only the matching language branch runs.
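A routing sketch under the same caveat (prompt wording and variable names are illustrative; only the if=var:value gate is taken from the guide):

````markdown
<!--@1 Classify the task as "math" or "text". Reply with exactly one word. <task >route-->

```python if=route:math
print("math branch selected")
```

```bash if=route:text
echo "text branch selected"
```
````

Only the block whose gate matches the router's one-word verdict executes; the other is skipped.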
Pattern 3 — Parallelization is the newest addition: the async fence attribute triggers a fork(), launching the block in a child process. Results are written to uniquely named temp files (keyed by parent/child PID pair) and collected at an await=var1,var2 barrier via waitpid(). Writes to the shared session context (/tmp/mshell_ctx_<pid>/) are protected by flock().
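The fork/temp-file/waitpid/flock mechanism described above can be sketched in plain Python; this is a stand-in model of the described internals, not mshell's actual source (the context path and file-naming scheme here are simplified assumptions):

```python
import os, fcntl, tempfile

# Stand-in for /tmp/mshell_ctx_<pid>: a private context directory.
CTX = tempfile.mkdtemp(prefix="mshell_ctx_")

def run_async(name, fn):
    """Fork a child that writes its result to a temp file keyed by the
    parent/child PID pair, mirroring the async fence attribute."""
    parent = os.getpid()
    pid = os.fork()
    if pid == 0:  # child process
        result = fn()
        path = os.path.join(CTX, f"{name}_{parent}_{os.getpid()}")
        with open(path, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # guard writes to the shared context
            f.write(result)
        os._exit(0)
    return pid, name, parent

def await_all(jobs):
    """Barrier, mirroring await=var1,var2: waitpid() on each child,
    then collect its result file."""
    results = {}
    for pid, name, parent in jobs:
        os.waitpid(pid, 0)
        path = os.path.join(CTX, f"{name}_{parent}_{pid}")
        with open(path) as f:
            results[name] = f.read()
    return results

jobs = [run_async("a", lambda: "alpha"), run_async("b", lambda: "beta")]
print(await_all(jobs))  # {'a': 'alpha', 'b': 'beta'}
```

Keying the file name by the parent/child PID pair is what keeps concurrent children from clobbering each other's results.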
Pattern 4 — Evaluator-Optimizer implements iterative refinement via <!--@loop max=N until=var:value--> / <!--@end_loop-->. A generator model produces output, an evaluator model scores it, and the loop continues until the verdict matches the expected value or the safety cap is reached.
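A generator/evaluator loop might look like this (the prompts and variable names are illustrative; only the loop directives and the until=var:value condition come from the guide):

````markdown
<!--@loop max=5 until=verdict:pass-->
<!--@1 Revise the draft to improve clarity. <draft >draft-->
<!--@2 Reply "pass" if the draft is acceptable, otherwise "fail". <draft >verdict-->
<!--@end_loop-->
````

The loop exits as soon as the evaluator writes "pass" into verdict, or after 5 iterations at the latest.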
Pattern 5 — Orchestrator uses <!--@Nx_md--> to have an LLM generate a complete Markdown document at runtime, which is then executed recursively by parse_and_execute_markdown(). The subtask structure is entirely dynamic — unknown at authoring time.
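A one-directive sketch of the orchestrator (the @1x_md spelling is a guess at how a model number binds to the Nx_md directive, and the prompt is illustrative):

````markdown
<!--@1x_md Plan the goal below as a sequence of subtasks and emit a complete
mshell Markdown document (code blocks plus directives) that performs them. <goal-->
````

The emitted document is fed back into parse_and_execute_markdown(), so the generated subtasks can themselves contain any of the other four patterns.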
The Full Pipeline at the bottom shows all five patterns composing naturally in a single .md document, sharing session state through the context directory.
Because mshell supports all five canonical agentic workflow patterns natively in plain Markdown — across seven programming languages, multiple LLM vendors and models, executable code blocks, and native shell commands — it serves as a universal agentic execution environment that requires no external orchestration framework.

mshell Inter-Language Synchronous Workflow Patterns
This diagram documents the core workflow patterns available in the mshell Ecosystem — an AI-powered polyglot shell that executes Markdown documents containing mixed-language code blocks and inline LLM directives. The diagram covers seven patterns, arranged from simple to complex.
The first three patterns (Linear Data Pipeline, LLM-in-the-Middle, Fan-Out) represent foundational data flow: code blocks in different languages pass data to each other through named session variables (>var / <var), with an optional LLM processing step in the middle, or a single producer writing to multiple consumers simultaneously.
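A sketch of the linear case, assuming a stdin/stdout convention for >var / <var (how a consumer actually receives a variable is an assumption here, as are the block contents):

````markdown
```c >nums
/* producer: raw data into session variable "nums" */
#include <stdio.h>
int main(void) { printf("1 2 3 4\n"); return 0; }
```

```python <nums >total
# consumer: reads "nums", writes the sum into "total"
import sys
print(sum(int(x) for x in sys.stdin.read().split()))
```
````

Fan-out follows the same shape: one block writes >var once, and several later blocks each declare <var.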
The next two patterns (LLM Code Gen → Exec via Variable, Two-LLM Review Chain) show more advanced LLM integration: a model generates executable code that is stored in a variable and then run via exec(), and a multi-model pipeline where Model 1 generates, Model 2 reviews and improves, and Model 1 finalizes in exec mode — mirroring a human code review cycle.
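The "code in a variable, then exec()" step can be modeled directly in Python; the generated snippet below is a placeholder standing in for real LLM output:

```python
# The session variable holding LLM-generated code is just a string here.
generated = '''
def plot_stats(values):
    return "bars:" + "".join("#" * v for v in values)
result = plot_stats([1, 2, 3])
'''

namespace = {}
exec(generated, namespace)  # run the generated code in a fresh namespace
print(namespace["result"])  # bars:######
```

Running the code in its own namespace dict keeps the generated definitions from leaking into, or being shadowed by, the host environment.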
Exec Mode (<!--@Nx-->) is a unique mshell primitive: the LLM response is intercepted by the mshell Parser, the code fence is extracted, and the Validator automatically resolves compiler flags and library dependencies before running the binary.
The bottom-spanning Full Pipeline combines all stages into a single chain: C/Rust computes raw data → Python transforms it into statistics → LLM analyzes → LLM generates visualization code → LLM reviews → Exec mode compiles and runs — all driven from one .md file without any manual editing.
All patterns execute strictly top-to-bottom and share a session context at /tmp/mshell_ctx_<pid>, where variables persist as plain files for the duration of the process. The patterns above illustrate synchronous usage only, without the asynchronous extensions.
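Variables-as-plain-files can be sketched in a few lines; a temp directory stands in for the real /tmp/mshell_ctx_<pid> path, and the helper names are hypothetical:

```python
import os, tempfile

# Stand-in for the per-process context directory /tmp/mshell_ctx_<pid>.
ctx = tempfile.mkdtemp(prefix="mshell_ctx_")

def set_var(name, value):
    """The >var side: persist a block's output as a plain file."""
    with open(os.path.join(ctx, name), "w") as f:
        f.write(value)

def get_var(name):
    """The <var side: a later block reads the file back."""
    with open(os.path.join(ctx, name)) as f:
        return f.read()

set_var("summary", "all good")
print(get_var("summary"))  # all good
```

Because each variable is an ordinary file, any of the seven languages can participate in a pipeline simply by reading and writing files in the context directory.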

