Document Type: Deep-Dive Technical Architecture Reference
Project: AX — The Cybernetic Workflow Architect
Developer: Nikan Eidi
Version: 1.0.0
Scope: Complete structural documentation covering the Tree-Logic Scanner, the Hybrid Detection Engine internals, the State Machine architecture, the Animation subsystem IPC design, the payload construction pipeline, and all configuration constants.
- Script-Level Architecture
- Configuration Constants
- State Machine Design
- Tree-Logic Scanner — Deep Dive
- Hybrid Detection Engine — Deep Dive
- Payload Construction Pipeline
- Animation Subsystem — IPC Architecture
- ANSI Rendering Architecture
- Signal & Trap Architecture
- TUI Input Architecture
- Error String Protocol
- File & Directory Outputs
- Dependency Map
ax.sh is a single-file self-contained Bash script. It has no external Bash dependencies beyond the standard utilities available on any modern Linux system. It does embed Python3 logic at two points — but calls Python as a subprocess rather than requiring a separate file.
```text
ax.sh
├── ── Configuration ──────────────────────────── API_URL, MODEL_NAME
├── ── ANSI Color Palette ─────────────────────── 14 color/style variables
├── ── State Management ───────────────────────── STATE, set_state(), get_state()
├── ── Signal / Event Handlers ────────────────── cleanup(), on_error(), trap registrations
├── ── Loader — Progress Bar Builder ──────────── _build_bar()
├── ── Loader — Eye Frame: OPEN ───────────────── _eye_open()
├── ── Loader — Eye Frame: HALF-CLOSED ────────── _eye_half()
├── ── Loader — Eye Frame: FULLY CLOSED ───────── _eye_closed()
├── ── Loader — Master Frame Renderer ─────────── _draw_frame()
├── ── Loader — Background Animation Loop ─────── _loader_loop()
├── ── Loader — Public Interface ──────────────── start_loader(), set_loader_progress(), stop_loader()
├── ── Project Scanning ───────────────────────── get_project_tree()
├── ── Language Detection ─────────────────────── get_context()
├── ── Payload Construction ───────────────────── create_payload() [embeds Python3]
├── ── API Communication ──────────────────────── request_ai()
├── ── Response Parsing ───────────────────────── parse_response() [embeds Python3]
├── ── Interactive git-sync prompt ────────────── prompt_git_sync() [contains _draw_prompt()]
└── ── Main Logic ─────────────────────────────── main()

main ← entry call
```
The very last line of ax.sh is:
```bash
main
```

There is no argument parsing, no `--help` flag, and no mode switching in v1.0.0. AX is invoked from the project root with no arguments.
All tunable configuration is declared at the very top of ax.sh, making it trivial to adapt AX for different API providers or models.
```bash
API_URL="https://api.openai.com/v1/chat/completions"
MODEL_NAME="gpt-4o"
```

| Constant | Type | Default | Effect of Changing |
|---|---|---|---|
| `API_URL` | String | OpenAI v1 chat completions endpoint | Change to point at any OpenAI-compatible API (Azure OpenAI, local llama.cpp server, etc.) |
| `MODEL_NAME` | String | `gpt-4o` | Change to `gpt-4-turbo`, `gpt-3.5-turbo`, or any model your API key has access to |
_LOADER_LINES is a second configuration constant defined in the State Management section:
```bash
_LOADER_LINES=18
```

This integer must always equal the exact number of terminal lines the animation block occupies. Changing any eye sub-renderer to emit a different line count requires updating this value; otherwise the cursor rewind will misalign the animation.
AX tracks a single global STATE string variable that progresses through a linear sequence. Error conditions branch to ERROR from any phase.
```text
IDLE → SCANNING → BUILDING → REQUESTING → PARSING → WRITING → DONE
 │        │          │           │           │         │
 └────────┴──────────┴───────────┴───────────┴─────────┴────→ ERROR
```
| From State | Event | To State | Triggered By |
|---|---|---|---|
| `IDLE` | `main()` called | `SCANNING` | `set_state "SCANNING"` |
| `SCANNING` | Tree scan complete | `BUILDING` | `set_state "BUILDING"` |
| `BUILDING` | Payload built | `REQUESTING` | `set_state "REQUESTING"` |
| `REQUESTING` | Response received | `PARSING` | `set_state "PARSING"` |
| `PARSING` | YAML extracted | `WRITING` | `set_state "WRITING"` |
| `WRITING` | File written | `DONE` | `set_state "DONE"` |
| Any | Fatal error | `ERROR` | `set_state "ERROR"` then `exit 1` |
| Any | SIGINT/TERM/HUP | (cleanup) | `cleanup()` trap |
The STATE variable is currently used for tracking and potential future conditional logic. In v1.0.0 it is not branched on by any function — it serves as an audit trail readable by process monitors or future additions to AX.
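The source does not reproduce the bodies of `set_state()`/`get_state()`; a minimal sketch consistent with the description above (the function and variable names come from ax.sh, the bodies are assumed) would be:

```shell
#!/usr/bin/env bash
# Minimal sketch of the single-global-string state tracker described above.
# set_state()/get_state() names are from the source; bodies are assumed.
STATE="IDLE"

set_state() {
    STATE="$1"              # linear progression; no validation in v1.0.0
}

get_state() {
    printf '%s\n' "$STATE"  # readable audit point for monitors
}

set_state "SCANNING"
get_state                   # → SCANNING
```

Because the variable is never branched on, replacing these with richer logic (e.g. logging each transition) would be a backwards-compatible change.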
```bash
get_project_tree() {
  local ignore="(node_modules|.git|build|dist|bin|obj|target|venv)"
  find . -maxdepth 3 -not -path '*/.*' \
    | grep -vE "$ignore" \
    | sed -e 's/[^-][^\/]*\// |/g' -e "s/|/  /g"
}
```

What it does: recursively enumerates all filesystem entries (files and directories) under the current working directory, stopping at depth 3.
Why depth 3?
| Depth | Typical content at that level |
|---|---|
| 1 | ./src, ./tests, ./docs, ./package.json |
| 2 | ./src/utils, ./src/components, ./tests/unit |
| 3 | ./src/utils/helpers, ./src/components/Button |
| 4+ | Individual implementation files, deeply nested modules |
Depths 1–3 capture the structural intent of the project — what packages, modules, and directories exist — without enumerating every implementation file. This keeps the GPT-4o prompt token count manageable and focused on architecture rather than exhaustive inventory.
Output format at this stage:
```text
.
./package.json
./src
./src/index.js
./src/utils
./src/utils/helpers.js
./node_modules     ← still present at this point
./.git             ← still present at this point
```
What it does: Instructs find itself to skip any path containing a component that begins with `.` (the `-not -path '*/.*'` predicate). This is evaluated at the find level — the filesystem traversal never enters hidden directories.
Affected directories:
- `.git/` — version control internals
- `.github/` — excludes the generated workflow directory from future scans (by design: AX should not analyze its own output)
- `.env` — environment files
- `.DS_Store` — macOS metadata
- any other dotfile or hidden directory
Why at find level and not grep?
Excluding at find level is more efficient than post-filtering with grep because find never descends into the excluded directories. For a large repo with a deep .git history, this avoids enumerating potentially thousands of object files.
The ignore pattern is:
(node_modules|.git|build|dist|bin|obj|target|venv)
What it does: A second pass that removes lines containing any of these directory names. This catches directories that may have slipped through stage 2 (e.g., ./build which is not a hidden directory) and any that match the pattern anywhere in their path.
Why include .git here when stage 2 already excluded it?
Belt-and-suspenders design. If find ever encounters a .git that wasn't hidden (edge case: bare repos configured differently), the grep pass catches it.
Directory-to-exclusion rationale:
| Directory | Reason for Exclusion |
|---|---|
| `node_modules` | Third-party packages — not project code, massive file count |
| `.git` | Version control — not project structure |
| `build` / `dist` | Generated output — not source structure |
| `bin` | Compiled binaries — not source |
| `obj` | C/C++ object files — not source |
| `target` | Rust/Maven build output — not source |
| `venv` | Python virtual environment — not project code |
What it does: Transforms raw find path strings into a visually indented tree representation.

First sed expression: `'s/[^-][^\/]*\// |/g'`

This regex matches any path segment that:

- starts with a character other than `-`
- continues with any number of non-`/` characters
- ends with a `/`

Each such match is replaced with ` |` — a space and a pipe character.

Second sed expression: `"s/|/  /g"`

Replaces every `|` with two spaces, creating the indentation effect.
Example transformation:
```text
Input:    ./src/utils/helpers.js
After 1:   | | |helpers.js        (each "segment/" collapsed to " |")
After 2:        helpers.js        (each "|" expanded to two spaces of indent)
```
This is a simple approximation of a tree view — not as precise as tree(1) but sufficient for the structural narrative passed to GPT-4o.
Why not use tree(1) directly?
tree is not guaranteed to be available on all systems. The find + sed combination uses only POSIX-standard utilities that are present on every Unix system where Bash 4+ runs. AX has zero optional dependencies in its scanner.
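The scan-and-filter stages can be exercised end-to-end in a throwaway directory; every path name below is invented for illustration:

```shell
# Hypothetical demo of the scanner's find + grep stages. All paths invented.
demo=$(mktemp -d)
mkdir -p "$demo/src/utils" "$demo/node_modules/lodash" "$demo/.git/objects"
touch "$demo/src/utils/helpers.js" "$demo/package.json"
cd "$demo"

ignore="(node_modules|.git|build|dist|bin|obj|target|venv)"
tree_out=$(find . -maxdepth 3 -not -path '*/.*' | grep -vE "$ignore")

printf '%s\n' "$tree_out"
# src/utils/helpers.js and package.json survive;
# node_modules and .git never appear in the output
```

Note that `./.git` is already dropped by the `find` predicate, while `./node_modules` (not hidden) is removed by the `grep` pass — the two stages genuinely cover different cases.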
| Limitation | Impact | Mitigation |
|---|---|---|
| Does not read file contents | Cannot detect languages from `#!` shebangs | `get_context()` uses glob patterns independently |
| Does not follow symlinks | May miss symlinked source trees | Acceptable for standard project structures |
| Truncates at depth 3 | May miss deeply nested language signals | `get_context()` scans at root level with globs |
| Does not count files | Cannot distinguish "5 Python files" from "1" | Context string treated as categorical, not quantitative |
get_context() accumulates the results of a sequence of non-exclusive tests into an array. Every test is independent — a match does not skip subsequent tests. This is what enables hybrid detection.
```bash
get_context() {
    local tags=()
    [ -f "package.json" ] && tags+=("Node.js")
    ([ -f "requirements.txt" ] || ...) && tags+=("Python")
    ...
    if [ ${#tags[@]} -eq 0 ]; then
        echo "General/Single-File"
    else
        echo "${tags[*]}"
    fi
}
```

Each language uses a two-tier detection strategy:
Tier 1 — Manifest file check:

```bash
[ -f "package.json" ] && tags+=("Node.js")
```

Manifest files are definitive — if package.json exists, this is a Node.js project. No ambiguity.
Tier 2 — Source glob check:

```bash
(ls *.py &>/dev/null) && tags+=("Python")
```

Used when no manifest is present but source files exist (e.g., a single-file Python script with no requirements.txt). The `ls *.py &>/dev/null` pattern:

- uses `ls` for glob expansion (not `find`, avoiding recursion)
- redirects both stdout and stderr to `/dev/null` — the exit code is what matters, not the output
- returns exit code 0 if any `*.py` file exists in the current directory, non-zero otherwise
Combined logic for Python:

```bash
([ -f "requirements.txt" ] || [ -f "pyproject.toml" ] || ls *.py &>/dev/null) && tags+=("Python")
```

The subshell `( ... )` groups the OR conditions so that the `&&` applies to the entire group as a unit. Python is detected if ANY of the three signals is present.
All glob checks (ls *.ext) scan only the current working directory — they are not recursive. This is intentional: the presence of any source file at root level is sufficient signal for the language to be considered "in scope" for CI.
For deeply nested single-language projects (e.g., a Go project where all .go files are under ./cmd/server/), the go.mod manifest at root provides the detection signal.
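A condensed, runnable sketch of the accumulation pattern (only three detectors shown; the real `get_context()` checks more languages):

```shell
# Condensed sketch of get_context(): three detectors only, for illustration.
get_context() {
    local tags=()
    [ -f "package.json" ] && tags+=("Node.js")
    ([ -f "requirements.txt" ] || [ -f "pyproject.toml" ] || ls *.py &>/dev/null) && tags+=("Python")
    (ls *.sh &>/dev/null) && tags+=("Shell")
    if [ ${#tags[@]} -eq 0 ]; then
        echo "General/Single-File"
    else
        echo "${tags[*]}"        # space-joined via default IFS
    fi
}

demo=$(mktemp -d); cd "$demo"
touch package.json app.py script.sh   # a hybrid project, invented for the demo
get_context                            # → Node.js Python Shell
```

All three tests fire independently, so the hybrid tag list comes out in declaration order.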
```bash
# Used in ax.sh
ls *.py &>/dev/null

# Not used
find . -maxdepth 1 -name "*.py" &>/dev/null
```

`ls *.py` is faster for the shallow check — it delegates glob expansion to the shell itself. `find` with `-maxdepth 1` would work but is more verbose. In the context of get_context() running dozens of checks, the simpler `ls` form is preferred.
```bash
local tags=()
```

tags is a local indexed Bash array. Each matching language appends its label string:

```bash
tags+=("Node.js")   # tags=("Node.js")
tags+=("Python")    # tags=("Node.js" "Python")
tags+=("Shell")     # tags=("Node.js" "Python" "Shell")
```

The final echo uses `${tags[*]}` — the `*` expansion joins all elements with the first character of IFS (a space by default):

```bash
echo "${tags[*]}"
# Output: Node.js Python Shell
```

This space-separated string is passed verbatim into the AI user message:
Detected context: Node.js Python Shell.
The system prompt primes the AI to interpret this as "generate a hybrid workflow for all three contexts."
Bash has no native JSON serialisation. Building JSON with string concatenation is:
- Fragile — any quote, backslash, or newline in the project tree string would break the JSON
- Unmaintainable — escaping rules compound with nesting depth
- Unnecessary — Python3 is available wherever AX can be installed
The create_payload() function passes data as command-line arguments to an inline Python3 script:
```bash
python3 -c '...' "$tree" "$context" "$MODEL_NAME"
```

Inside Python, these arrive as sys.argv[1], sys.argv[2], sys.argv[3]. The json.dumps() call handles all necessary escaping automatically.
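The delegation pattern can be sketched as follows; the message layout here is illustrative and simplified, not the exact payload ax.sh builds:

```shell
# Simplified sketch of Bash→Python JSON construction. The message structure
# is an assumption for illustration, not the literal create_payload() body.
tree=$'./src\n./src/"weird" file.js'    # embedded quotes and a newline
context='Node.js'

payload=$(python3 -c '
import json, sys
tree, context, model = sys.argv[1], sys.argv[2], sys.argv[3]
print(json.dumps({
    "model": model,
    "messages": [
        {"role": "user",
         "content": "Tree:\n" + tree + "\nDetected context: " + context}
    ],
    "temperature": 0.1,
}))
' "$tree" "$context" "gpt-4o")

# payload is guaranteed-valid JSON no matter what the tree contains:
printf '%s' "$payload" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid")'
```

The quotes and newline inside `$tree` would break naive string concatenation in Bash; passed through `sys.argv` and `json.dumps()`, they are escaped correctly for free.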
The system prompt is a single string assembled from multiple directive sentences. Each directive targets a specific failure mode observed in AI-generated CI workflows:
| Directive Name | Problem it Prevents | Implementation |
|---|---|---|
| IDENTIFICATION | Generic boilerplate workflow regardless of stack | Instructs AI to base output on actual detected types |
| EXCLUSION: ax.sh | AX's own script treated as a project dependency | Named exclusion of the utility file |
| MINIMALISM | Over-engineered workflows for simple shell scripts | Constrains shell-only output to exactly 2 steps |
| SINGLEQUOTE | Shell single-quotes breaking Python string embedding | Token substitution pattern for `find ... '*.sh'` |
| NO GHOST DEPENDENCIES | Setup steps for languages not in the project | Explicit prohibition |
| SINGLE FILE | Full multi-step workflows for trivial single files | Scale-down instruction |
| CLEAN OUTPUT | Markdown fences and prose commentary in output | Raw-YAML instruction (belt), with `parse_response()` stripping as suspenders |
| STANDARDS | Non-standard runner OS or outdated action versions | Explicit version-pinning instruction |
The shell-only minimalism directive needs to embed a find command with single quotes into the JSON string:
```bash
find . -name '*.sh' -not -path '*/.*' | xargs shellcheck --severity=warning
```
Single quotes inside a Python string defined via sys.argv (which passes through shell quoting) create escaping conflicts. The solution is a token substitution pattern:
```python
system_prompt = (
    "...find . -name SINGLEQUOTE*.shSINGLEQUOTE "
    "-not -path SINGLEQUOTE*/.*SINGLEQUOTE | xargs shellcheck... "
    "In the final YAML output replace every token SINGLEQUOTE with an actual single-quote character."
)
```

The AI is instructed to replace every occurrence of the literal string SINGLEQUOTE with `'` in its YAML output. This avoids shell quoting conflicts entirely while producing correct YAML.
```json
"temperature": 0.1
```

Temperature controls the randomness of the AI's token sampling:
| Temperature | Effect | Use Case |
|---|---|---|
| 0.0 | Fully deterministic (greedy decoding) | Exact reproducibility required |
| 0.1 | Near-deterministic with minimal variation | AX's choice — YAML structure is correctness-sensitive |
| 0.7 | Moderate creativity | General-purpose chat |
| 1.0+ | High creativity/randomness | Creative writing, brainstorming |
For CI workflow generation, creativity is an anti-feature. The same project scanned twice should produce functionally equivalent workflows. 0.1 provides the near-determinism needed while avoiding the occasional issues with 0.0 (some models become overly terse at exactly 0).
```text
Parent Process (main())
│
├── Writes to:   $_PROG_FILE  (progress %)
├── Writes to:   $_LABEL_FILE (label text)
├── Signals via: touch $_STOP_FILE
│
└── Background Subshell (_loader_loop &)
    ├── Reads from: $_PROG_FILE  every frame
    ├── Reads from: $_LABEL_FILE every frame
    ├── Polls:      $_STOP_FILE  every frame
    └── Renders:    18-line frame to stdout
```
The three tmpfiles serve as a unidirectional message channel from parent to child:
| File | Direction | Content | Update Frequency |
|---|---|---|---|
| `_PROG_FILE` | Parent → Child | Integer 0–100 | Per phase (6 times) |
| `_LABEL_FILE` | Parent → Child | Label string | Per phase (6 times) |
| `_STOP_FILE` | Parent → Child | Existence = stop signal | Once, at end |
Why filesystem IPC?
- Bash subshells cannot share variables with their parent — any variable set in a subshell is invisible to the parent and vice versa
- Pipes would require the parent to hold an open file descriptor to the child, which complicates cleanup and signal handling
- tmpfiles are simple, debuggable (you can `cat` them while AX is running), and survive race conditions gracefully (the child always reads the last-written value)
```bash
# Parent signals stop:
touch "$_STOP_FILE"

# Child checks on every frame:
while [ ! -f "$stop_file" ]; do
    ...
done
```

Why existence-based rather than content-based?
Writing a value to the file (e.g., `echo "1" > $_STOP_FILE`) requires the parent to write AND the child to read and compare. Using file existence as the signal requires only:

- Parent: one `touch` syscall
- Child: one `stat` syscall (via `[ -f ... ]`)

This minimises the window for race conditions at shutdown.
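The whole channel can be reproduced in miniature; the file names below are simplified stand-ins for the real `$_PROG_FILE`/`$_STOP_FILE` tmpfiles:

```shell
# Miniature reproduction of the tmpfile IPC pattern. Names are stand-ins.
prog_file=$(mktemp /tmp/demo_prog.XXXXXX)
stop_file=$(mktemp /tmp/demo_stop.XXXXXX)
result_file=$(mktemp /tmp/demo_result.XXXXXX)
rm -f "$stop_file"              # existence = stop signal, so it must start absent

(   # background "loader": re-reads progress each poll until the stop file appears
    last=0
    while [ ! -f "$stop_file" ]; do
        if [ -s "$prog_file" ]; then last=$(cat "$prog_file"); fi
        sleep 0.05
    done
    echo "$last" > "$result_file"   # report the last value the child observed
) &
loader_pid=$!

echo 42 > "$prog_file"          # parent → child: progress update
sleep 0.2                       # give the child a few polls to observe it
touch "$stop_file"              # parent → child: stop
wait "$loader_pid"              # child exits promptly

cat "$result_file"              # → 42
rm -f "$prog_file" "$stop_file"
```

The child always reads the most recently written value, so a dropped intermediate update degrades gracefully rather than failing.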
```bash
_STOP_FILE=$(mktemp /tmp/ci_stop.XXXXXX)
# ...
rm -f "$_STOP_FILE"   # ← delete it immediately after creation
```

mktemp creates the file as part of its atomic creation guarantee. But the loop checks for the file's absence to continue running — if the file existed when the loop started, the loop would exit immediately.
The solution: create with mktemp (to get a unique name with guaranteed uniqueness across processes), then immediately delete it, leaving only the path string in $_STOP_FILE.
The animation block relies on being able to return the cursor to the exact same position at the start of each frame. The naive approach:
```bash
# WRONG — breaks on terminal scroll
printf "\033[18A"   # move cursor up 18 lines
```

This fails because `\033[18A` cannot move the cursor above the current viewport top. If the user's terminal scrolls (even one line) during a long API call, the "18 lines up" would no longer point to the animation block's start — it would point to wherever the current viewport top is.
The correct approach:
```bash
# CORRECT — scroll-safe
tput sc   # save absolute cursor address in terminal's memory
# ... render 18 lines ...
tput rc   # restore cursor to saved absolute address
```

`tput sc` (save cursor) stores the current cursor position in the terminal emulator's own state memory. `tput rc` (restore cursor) retrieves that stored position unconditionally — it is not relative, it is not affected by scroll, and it is not affected by how many lines have been printed since the save.
Every call to _draw_frame() must output exactly _LOADER_LINES (18) lines. This invariant is what makes the tput rc + print-18-lines pattern work: if any frame outputs 17 or 19 lines, subsequent frames will drift.
The 18 is composed as follows (exactly one eye renderer runs per frame, and each emits the same line count):

```text
eye block (_eye_open / _eye_half / _eye_closed) → 13 lines (always exactly 13, regardless of eye state)
separator                                       →  1 line  (blank printf)
bar top                                         →  1 line  (┌────┐)
bar middle                                      →  1 line  (│ ██░ │)
bar bottom                                      →  1 line  (└────┘)
label                                           →  1 line  (⠋ Processing...)
────────────────────────────────────────────────────────────
Total                                           = 18 lines
```
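The invariant is easy to verify mechanically. The sketch below uses a mock frame renderer standing in for `_draw_frame()`; the mock's content is placeholder text, since only the line count matters:

```shell
# Sanity check of the frame-height invariant. mock_frame is a stand-in for
# _draw_frame(); its content is placeholder text, only line count matters.
_LOADER_LINES=18

mock_frame() {
    for _ in $(seq 1 13); do printf 'eye row\n'; done   # eye block: 13 lines
    printf '\n'                                         # separator
    printf '+------+\n'                                 # bar top
    printf '|##    |\n'                                 # bar middle
    printf '+------+\n'                                 # bar bottom
    printf 'Processing...\n'                            # label
}

[ "$(mock_frame | wc -l)" -eq "$_LOADER_LINES" ] && echo "invariant holds"
```

A check like this could run in CI for ax.sh itself: any edit to a sub-renderer that changes the total would be caught before it produces drifting animations.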
All color codes use the 256-color escape format: \033[38;5;{n}m for foreground. This provides more consistent colour rendering across terminal emulators than the 8/16-color alternatives.
```bash
CYAN=$'\033[38;5;51m'    # 256-color index 51  — bright aqua
MAG=$'\033[38;5;201m'    # 256-color index 201 — neon pink/magenta
GLOW=$'\033[38;5;87m'    # 256-color index 87  — pale cyan glow
GOLD=$'\033[38;5;220m'   # 256-color index 220 — amber gold
```

Every line in the animation is prefixed with $CLR (`\033[2K` — erase entire line):
```bash
printf '%s %s| || | || | | || | || |%s\n' "$CLR" "${BOLD}${MAG}" "$R"
```

Why?
When tput rc restores the cursor, it places the cursor at the saved position — but does not clear the lines below. Without \033[2K, if a previous frame was longer or had wider characters, artifacts from the previous frame would remain visible at the right edge of lines in the current frame.
Prefixing every line with \033[2K ensures the entire line is cleared before the new content is printed, making each frame a clean render.
Every styled segment ends with $R (`\033[0m` — reset all attributes):

```bash
printf '%s%s%s' "${BOLD}${CYAN}" "content" "$R"
```

This prevents color bleeding — if a reset is missed, subsequent output would inherit the style of the last active attribute, potentially corrupting the terminal's display until the user runs `reset`.
```bash
trap cleanup INT TERM HUP
trap 'on_error $LINENO' ERR
```

| Signal | Handler | Trigger |
|---|---|---|
| SIGINT (2) | `cleanup()` | Ctrl+C |
| SIGTERM (15) | `cleanup()` | External kill / system shutdown |
| SIGHUP (1) | `cleanup()` | Terminal disconnection |
| ERR (pseudo-signal) | `on_error $LINENO` | Any command returns non-zero |
Quoting the trap string with single quotes defers the expansion of `$LINENO` until the trap fires — so it captures the line number of the failing command, not the line of the trap registration itself. (With double quotes, `$LINENO` would expand once, at registration time.)
The ERR trap fires on any command that returns a non-zero exit code, unless the command is:

- part of an `if` condition (e.g., `if some_command; then`)
- part of a `while` or `until` condition
- part of a `&&` or `||` list, other than the final command
- preceded by `!`

This means all of the intentional non-zero exit codes in AX (the `ls *.py &>/dev/null` checks, the `kill -0 "$LOADER_PID" 2>/dev/null` check) do not trigger on_error, because they are structured in ways that exempt them from the ERR trap.
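These exemptions can be demonstrated directly; the counter trap below is a hypothetical stand-in for `on_error()`:

```shell
# Demonstration of the ERR-trap exemption rules (bash required).
# on_err is a hypothetical stand-in for ax.sh's on_error().
err_count=0
on_err() { err_count=$((err_count + 1)); }
trap on_err ERR

if ls /no/such/path &>/dev/null; then :; fi      # if condition: exempt
while ls /no/such/path &>/dev/null; do :; done   # while condition: exempt
ls /no/such/path &>/dev/null || true             # non-final in || list: exempt
! false                                          # negated with !: exempt

echo "ERR fired: $err_count"                     # → ERR fired: 0
```

Moving any of those failing commands to a bare statement position (e.g. a plain `false` on its own line) would fire the trap once.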
```bash
IFS= read -rsn1 key
```

| Flag | Effect |
|---|---|
| `-r` | Raw mode — backslash is not treated as an escape character |
| `-s` | Silent — input is not echoed to the terminal |
| `-n1` | Read exactly 1 character |
| `IFS=` | Empty IFS — preserves whitespace in `$key` (including spaces and Enter) |
Terminal emulators send arrow keys as ANSI escape sequences — a sequence of 3 bytes:
```text
ESC [ A   (Up arrow)      1B 5B 41
ESC [ B   (Down arrow)    1B 5B 42
ESC [ C   (Right arrow)   1B 5B 43
ESC [ D   (Left arrow)    1B 5B 44
```
AX reads these in three separate read calls:
```bash
IFS= read -rsn1 key                  # reads first byte: 0x1B (ESC)
if [[ "$key" == $'\033' ]]; then
    IFS= read -rsn1 -t 0.05 seq      # reads second byte: '[' (with 50 ms timeout)
    if [[ "$seq" == '[' ]]; then
        IFS= read -rsn1 -t 0.05 key  # reads third byte: A/B/C/D
```

The 50 ms timeout (`-t 0.05`) on the second and third reads is critical: it distinguishes a genuine ESC key press (which produces only one byte, 0x1B, with no following bytes) from an arrow-key sequence (which produces three bytes in rapid succession). If no second byte arrives within 50 ms, the ESC is treated as a lone ESC key.
```bash
A|D) # Up / Left arrow    → YES
B|C) # Down / Right arrow → NO
```

The YES/NO options are displayed side by side (left=YES, right=NO). The mapping makes spatial navigation intuitive: left and up move toward YES; right and down move toward NO. This matches the mental model of "I'm on YES (left), I press right to go to NO."
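The decoder can be exercised offline by piping the byte sequences instead of typing them; `read_key()` below is a self-contained restatement of the logic described above, not the exact code from ax.sh:

```shell
# Offline reproduction of the three-read arrow-key decoder.
# read_key() is a restatement of the logic, with input piped instead of typed.
read_key() {
    local key seq
    IFS= read -rsn1 key || return 1
    if [[ "$key" == $'\033' ]]; then
        IFS= read -rsn1 -t 0.05 seq
        if [[ "$seq" == '[' ]]; then
            IFS= read -rsn1 -t 0.05 key
            case "$key" in
                A) echo UP ;;  B) echo DOWN ;;
                C) echo RIGHT ;;  D) echo LEFT ;;
            esac
            return 0
        fi
        echo ESC          # nothing followed the 0x1B byte in time
        return 0
    fi
    echo "CHAR:$key"      # ordinary printable key
}

printf '\033[A' | read_key    # → UP
printf 'q'      | read_key    # → CHAR:q
```

Piped input arrives all at once, so the timeout reads always succeed here; on a real TTY the timeout is what separates a lone ESC from an escape sequence.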
AX uses a prefixed string return protocol for functions that cannot use non-zero exit codes to signal errors (because non-zero would trigger the ERR trap prematurely). Instead, error conditions are returned as strings on stdout with structured prefixes:
| Prefix | Source Function | Meaning |
|---|---|---|
| `AUTH_ERROR:` | `request_ai()` | API key missing or rejected |
| `NETWORK_ERROR:` | `request_ai()` | curl network failure |
| `RATE_ERROR:` | `request_ai()` | HTTP 429 rate limit |
| `REQUEST_ERROR:` | `request_ai()` | HTTP 400 bad request |
| `SERVER_ERROR:` | `request_ai()` | HTTP 5xx OpenAI server error |
| `HTTP_ERROR:` | `request_ai()` | Unexpected HTTP status |
| `API_ERROR:` | `parse_response()` | OpenAI returned an error object |
| `PARSE_ERROR:` | `parse_response()` | Response could not be parsed |
```bash
if [ $req_status -ne 0 ] || \
   [[ "$raw_response" == NETWORK_ERROR:* ]] || \
   [[ "$raw_response" == AUTH_ERROR:* ]] || \
   [[ "$raw_response" == RATE_ERROR:* ]] || \
   [[ "$raw_response" == REQUEST_ERROR:* ]] || \
   [[ "$raw_response" == SERVER_ERROR:* ]] || \
   [[ "$raw_response" == HTTP_ERROR:* ]]; then
```

The `[[ ... == PATTERN:* ]]` syntax uses Bash's glob pattern matching. The `*` matches any characters after the colon — making this a prefix-match test.
| Path | Created By | Contents | Condition |
|---|---|---|---|
| `.github/workflows/main.yml` | `main()` → `echo "$final_yaml"` | AI-generated GitHub Actions YAML | Always, on success |
| `.github/workflows/` | `main()` → `mkdir -p` | Directory | Always, on success |
| Pattern | Created By | Cleaned By | Purpose |
|---|---|---|---|
| `/tmp/ci_stop.XXXXXX` | `start_loader()` → `mktemp` | `stop_loader()` → `rm -f` | Animation stop sentinel |
| `/tmp/ci_prog.XXXXXX` | `start_loader()` → `mktemp` | `stop_loader()` → `rm -f` | Progress percentage IPC |
| `/tmp/ci_label.XXXXXX` | `start_loader()` → `mktemp` | `stop_loader()` → `rm -f` | Label text IPC |
All three tmpfiles are cleaned on:
- Normal completion (`stop_loader()` in `main()`)
- User interrupt (`cleanup()` → `stop_loader()`)
- Unexpected error (`on_error()` → `stop_loader()`)
| Command | Required | Used By | Notes |
|---|---|---|---|
| `bash` | ✅ v4+ | Runtime | Array support requires v4+ |
| `python3` | ✅ v3.7+ | `create_payload()`, `parse_response()` | `json`, `re`, `sys` modules (stdlib) |
| `curl` | ✅ v7+ | `request_ai()` | `-w` status capture requires 7.x |
| `find` | ✅ | `get_project_tree()` | POSIX standard |
| `grep` | ✅ | `get_project_tree()`, `request_ai()` | POSIX standard |
| `sed` | ✅ | `get_project_tree()` | POSIX standard |
| `ls` | ✅ | `get_context()` | POSIX standard |
| `mktemp` | ✅ | `start_loader()` | GNU/BSD standard |
| `tput` | ✅ | Animation, TUI | Part of ncurses |
| `git-sync` | ❌ Optional | `prompt_git_sync()` | Only if user confirms YES |
| `shellcheck` | ❌ Runtime | Generated `main.yml` step | Required in the CI environment, not locally |
All Python3 modules used are part of the standard library — no pip install required:
| Module | Used In | Purpose |
|---|---|---|
| `json` | `create_payload()`, `parse_response()` | Serialise/deserialise JSON |
| `re` | `parse_response()` | Regex fence stripping |
| `sys` | `create_payload()`, `parse_response()` | `sys.argv`, `sys.stdin` |
| Variable | Required | Fallback |
|---|---|---|
| `OPENAI_API_KEY` | ✅ | None — `AUTH_ERROR` if absent |
| `TERM` | ❌ | `tput` commands fail silently and gracefully if absent |
| `HOME` | ❌ | Not used directly |
Last updated: AX v1.0.0 — Nikan Eidi