A versatile xyOps Action Plugin that exports job output data to multiple file formats including JSON, CSV, HTML, XML, Markdown, YAML, Plain Text, Excel, PDF, and HL7 healthcare formats. Now with powerful data transformation capabilities!
USE AT YOUR OWN RISK. This software is provided "as is", without warranty of any kind, express or implied. The author and contributors are not responsible for any damages, data loss, or other issues that may arise from the use of this software. Always test in non-production environments first. By using this plugin, you acknowledge that you have read, understood, and accepted this disclaimer.
This plugin includes a "DELETE all files" folder cleanup option that will permanently remove ALL files in the specified output folder before generating a new export.
THIS ACTION CANNOT BE UNDONE!
- Triple-check the output folder path before enabling this option
- Never use on folders containing important data
- Consider using the safer "Move to OLD subfolder" option instead
- The plugin only deletes files, not subdirectories
- Install the plugin in xyOps (copy to plugins directory or install from Marketplace)
- Create a workflow with a job that outputs data
- Add the File Export action to the job's success actions
- Configure parameters (format, filename, location)
- Run the workflow - your data is exported!
| Format | Extension | Description | Dependencies |
|---|---|---|---|
| JSON | .json | Pretty-printed JSON | None |
| CSV | .csv | Comma-separated values | None |
| HTML | .html | Styled HTML table with CSS | None |
| XML | .xml | Structured XML document | None |
| Markdown | .md | Markdown table format | None |
| YAML | .yaml | YAML format | None |
| Plain Text | .txt | ASCII table format | None |
| Excel | .xlsx | Microsoft Excel workbook | exceljs (bundled) |
| PDF | .pdf | PDF document with table | pdfkit (bundled) |
| HL7 v2.x | .hl7 | HL7 v2 pipe-delimited message | None |
| HL7 FHIR | .fhir.json | FHIR Bundle with Observations | None |
- Cross-Platform - Works on Linux, Windows, and macOS
- Multiple Data Sources - Handles `job.data`, `job.output`, and raw stdout
- Nested Object Flattening - Automatically flattens nested objects for CSV/HTML
- Flexible Filenames - Optional timestamp and unique ID suffixes
- Folder Management - Create folders, archive old files, or clean up
- Data Transforms - 25 transform types: filter, select, rename, sort, compute, group, mask, if, set, and more!
- Custom Report Titles - Set custom titles for HTML/Markdown/PDF reports
- Folder Cleanup Options - Keep, archive to OLD/, or delete existing files
- NPX-Based Distribution - Runs directly from GitHub, dependencies bundled
- Debug Logging - Detailed logging in job output for troubleshooting
| Parameter | Type | Default | Description |
|---|---|---|---|
| Output Format | Menu | json | Select the export file format |
| Filename | Text | export | Base filename (extension added automatically) |
| File Location | Text | (job temp) | Directory path for the output file |
| Add Timestamp | Checkbox | true | Append timestamp (YYYYMMDD_HHmmss) to filename |
| Add Unique ID | Checkbox | false | Append 8-character unique identifier |
| Report Title | Text | (filename) | Custom title for HTML/Markdown/PDF reports |
| Create Folder | Checkbox | true | Auto-create output folder if missing |
| Folder Cleanup | Menu | keep | Keep files, archive to OLD/, or DELETE all |
| Data Transforms | Code (YAML) | (empty) | Optional YAML configuration for data transformations |
With filename `report`:
| Timestamp | UID | Result |
|---|---|---|
| ✅ | ❌ | report_20260207_143052.json |
| ❌ | ✅ | report_a1b2c3d4.json |
| ✅ | ✅ | report_20260207_143052_a1b2c3d4.json |
| ❌ | ❌ | report.json |
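The suffix rules above can be sketched in a few lines of Python (purely illustrative; the plugin itself is Node.js, and `build_filename` and its arguments are hypothetical names, not the plugin's API):

```python
import secrets
from datetime import datetime

def build_filename(base, ext, add_timestamp=False, add_uid=False):
    """Hypothetical sketch of the suffix rules above, not the plugin's actual code."""
    parts = [base]
    if add_timestamp:
        parts.append(datetime.now().strftime("%Y%m%d_%H%M%S"))
    if add_uid:
        parts.append(secrets.token_hex(4))  # 8 hex characters
    return "_".join(parts) + ext

print(build_filename("report", ".json"))  # -> report.json
```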
- Navigate to xyOps Marketplace
- Search for "File Export"
- Click Install
- Clone or download this repository
- Copy the plugin folder to your xyOps plugins directory
- Restart xyOps or refresh the plugins list
cd /opt/xyops/plugins
git clone https://github.com/talder/xyOps-File-Export.git

Configure the action with:
- Output Format: JSON
- Filename: server_data
- File Location: /exports/daily
- Add Timestamp: ✅

Result: /exports/daily/server_data_20260207_143052.json
Configure the action with:
- Output Format: HTML
- Filename: health_report
- Report Title: Server Health Report - February 2026
- File Location: /reports
Configure the action with:
- Output Format: PDF
- Filename: monthly_report
- File Location: /reports/monthly
- Folder Cleanup: Move to OLD subfolder
Previous files are moved to /reports/monthly/OLD/ before creating the new report.
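The archive behavior can be sketched roughly as follows (a hypothetical Python illustration of the described semantics, not the plugin's Node.js code; note that only regular files are moved, so subfolders such as OLD/ itself are left untouched):

```python
import shutil
from pathlib import Path

def archive_to_old(folder):
    """Move every regular file in `folder` into `folder`/OLD before a new export."""
    folder = Path(folder)
    old = folder / "OLD"
    old.mkdir(exist_ok=True)  # create OLD/ on first run, reuse it afterwards
    for item in folder.iterdir():
        if item.is_file():
            shutil.move(str(item), str(old / item.name))
```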
The plugin supports powerful data transformations using YAML configuration. Transforms are applied as a pipeline - each step processes the output of the previous step, in order.
Core Transforms:
| Transform | Description | Works On |
|---|---|---|
| `filter` | Keep rows matching a condition | Arrays |
| `select` | Keep only specified fields | Arrays & Objects |
| `exclude` | Remove specified fields | Arrays & Objects |
| `rename` | Rename field names | Arrays & Objects |
| `sort` | Sort rows by field | Arrays |
| `format` | Format field values (dates, numbers, etc.) | Arrays & Objects |
Data Manipulation:
| Transform | Description | Works On |
|---|---|---|
| `limit` | Keep only first N rows | Arrays |
| `skip` | Skip first N rows | Arrays |
| `reverse` | Reverse row order | Arrays |
| `distinct` | Remove duplicate rows | Arrays |
| `flatten` | Flatten nested objects to dot-notation | Arrays & Objects |
Computed Fields:
| Transform | Description | Works On |
|---|---|---|
| `compute` | Add calculated fields with expressions | Arrays & Objects |
| `concat` | Combine multiple fields into one | Arrays & Objects |
| `split` | Split one field into multiple | Arrays & Objects |
| `lookup` | Map values using a lookup table | Arrays & Objects |
Aggregation:
| Transform | Description | Works On |
|---|---|---|
| `group` | Group by field with aggregations (sum, avg, count, etc.) | Arrays |
| `summarize` | Add a summary row with totals | Arrays |
String Operations:
| Transform | Description | Works On |
|---|---|---|
| `truncate` | Limit string length with ellipsis | Arrays & Objects |
| `pad` | Pad strings to fixed width | Arrays & Objects |
| `mask` | Mask sensitive data (email, phone, card, etc.) | Arrays & Objects |
Advanced Transforms:
| Transform | Description | Works On |
|---|---|---|
| `unwind` | Explode array field into multiple rows | Arrays |
| `addIndex` | Add row number field to data | Arrays |
| `coalesce` | First non-null value from multiple fields | Arrays & Objects |
| `if` | Conditional field assignment | Arrays & Objects |
| `set` | Set field to fixed value or expression | Arrays & Objects |
Transforms are defined as an array of steps under the `transforms` key:
transforms:
- filter: "status == 'active'"
- select:
- name
- email
- created_at
- rename:
created_at: "Registration Date"
- sort: "name asc"

Keep only rows where the condition is true. Supports operators: `==`, `!=`, `>`, `<`, `>=`, `<=`, `contains`, `startswith`, `endswith`.
transforms:
# Basic equality
- filter: "status == 'active'"
# Numeric comparison
- filter: "age >= 18"
# Not equal
- filter: "type != 'test'"
# String contains (case-insensitive)
- filter: "name contains 'john'"
# String starts with
- filter: "email startswith 'admin'"
# String ends with
- filter: "filename endswith '.pdf'"
# Nested field access
- filter: "user.role == 'admin'"
# Boolean values
- filter: "enabled == true"
# Null check
- filter: "deleted_at == null"

Keep only the listed fields, remove all others.
transforms:
- select:
- id
- name
- email
- created_at

Remove the listed fields, keep all others.
transforms:
- exclude:
- password
- internal_id
- _metadata

Rename fields for better readability in reports.
transforms:
- rename:
created_at: "Created Date"
updated_at: "Last Modified"
usr_name: "Username"
email_addr: "Email Address"

Sort rows by a field. Use `asc` (ascending, default) or `desc` (descending).
transforms:
# Simple sort (ascending)
- sort: "name"
# Explicit ascending
- sort: "created_at asc"
# Descending (newest first)
- sort: "created_at desc"
# Object syntax (alternative)
- sort:
field: "score"
order: desc

Format field values for display. Supports multiple format types.
Date Formatting:
transforms:
- format:
created_at:
type: date
pattern: "DD/MM/YYYY" # European format
updated_at:
type: date
pattern: "YYYY-MM-DD HH:mm" # ISO with time

Supported date patterns: YYYY (year), MM (month), DD (day), HH (hours), mm (minutes), ss (seconds)
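To see how these tokens map onto a concrete date, here is a small Python sketch (the token table is from this README; translating tokens to strftime codes is just an illustration, not the plugin's implementation):

```python
from datetime import datetime

# Documented tokens mapped to Python strftime codes (illustrative only).
TOKENS = {"YYYY": "%Y", "MM": "%m", "DD": "%d", "HH": "%H", "mm": "%M", "ss": "%S"}

def format_date(value, pattern):
    """Replace each documented token with its strftime code, then format."""
    for token, fmt in TOKENS.items():
        pattern = pattern.replace(token, fmt)
    return value.strftime(pattern)

format_date(datetime(2026, 2, 7, 14, 30), "DD/MM/YYYY")  # -> '07/02/2026'
```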
Number Formatting:
transforms:
- format:
price:
type: number
decimals: 2
prefix: "$"
percentage:
type: number
decimals: 1
suffix: "%"
quantity:
type: number
decimals: 0
thousands: true # Add thousand separators (1,000)

String Formatting:
transforms:
- format:
name:
type: uppercase
email:
type: lowercase
description:
type: trim

Boolean Formatting:
transforms:
- format:
is_active:
type: boolean
true: "Yes"
false: "No"
enabled:
type: boolean
true: "✓"
false: "✗"

Default Values (replace null/empty):
transforms:
- format:
notes:
type: default
value: "N/A"
department:
type: default
value: "Unassigned"

String Replace:
transforms:
- format:
status:
type: replace
search: "_" # Search pattern (regex)
replacement: " " # Replace with
flags: "g" # Regex flags (g=global)

Keep only the first N rows of data.
transforms:
# Keep top 10 results
- limit: 10
# Combine with sort for "top N"
- sort: "score desc"
- limit: 5

Skip the first N rows (useful for pagination or skipping headers).
transforms:
# Skip first row (e.g., header row in imported data)
- skip: 1
# Pagination: get rows 11-20
- skip: 10
- limit: 10

Reverse the order of all rows.
transforms:
# Reverse chronological order
- reverse: true

Remove duplicate rows based on all fields or specific fields.
transforms:
# Remove exact duplicate rows
- distinct: true
# Remove duplicates by single field
- distinct: "email"
# Remove duplicates by multiple fields
- distinct:
- customer_id
- order_date

Convert nested objects to a flat structure with dot-notation keys.
transforms:
# Default separator (dot)
- flatten: true
# Custom separator
- flatten:
separator: "_"

Input: {"user": {"name": "John", "address": {"city": "NYC"}}}
Output: {"user.name": "John", "user.address.city": "NYC"}
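The flattening itself is straightforward recursion; a minimal Python sketch of the same idea (illustrative only, not the plugin's code):

```python
def flatten(obj, sep=".", prefix=""):
    """Recursively flatten nested dicts into separator-joined keys."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, sep, path))
        else:
            out[path] = value
    return out

flatten({"user": {"name": "John", "address": {"city": "NYC"}}})
# -> {'user.name': 'John', 'user.address.city': 'NYC'}
```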
Add new fields with calculated values using expressions.
transforms:
- compute:
# Basic math
total: "price * quantity"
# With constants
tax: "price * 0.21"
# Multiple fields
profit: "revenue - cost"
margin: "(revenue - cost) / revenue * 100"

Combine multiple fields into a new field.
transforms:
# Combine with space (default)
- concat:
field: "full_name"
fields:
- first_name
- last_name
# Custom separator
- concat:
field: "address"
fields:
- street
- city
- country
separator: ", "

Split a field into multiple fields.
transforms:
# Split name into parts
- split:
field: "full_name"
separator: " "
into:
- first_name
- last_name
# Split CSV values
- split:
field: "tags"
separator: ","
into:
- tag1
- tag2
- tag3

Replace values using a lookup table (code-to-label mapping).
transforms:
# Map status codes to labels
- lookup:
field: "status"
map:
"1": "Active"
"2": "Pending"
"3": "Inactive"
"0": "Deleted"
default: "Unknown"
# Map to different target field
- lookup:
field: "country_code"
target: "country_name"
map:
"US": "United States"
"UK": "United Kingdom"
"DE": "Germany"

Group rows by field(s) and calculate aggregations.
transforms:
# Simple group with count
- group:
by: "department"
# Group by multiple fields
- group:
by:
- department
- year
# With aggregations
- group:
by: "category"
aggregations:
total_sales:
op: sum
field: "amount"
avg_price:
op: avg
field: "price"
order_count:
op: count
field: "order_id"
min_price:
op: min
field: "price"
max_price:
op: max
field: "price"
first_date:
op: first
field: "order_date"
all_products:
op: list
field: "product_name"

Supported aggregation operations: sum, avg, count, min, max, first, last, list
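As an illustration of what a `sum` aggregation does to the rows, here is a Python sketch of the semantics (illustrative only; `group_sum` and the `sum_` prefix are hypothetical names, not the plugin's implementation):

```python
from collections import defaultdict

def group_sum(rows, by, field):
    """Group rows by one field and sum another (one of the eight ops)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[by]] += row[field]
    # One output row per group, in first-seen order.
    return [{by: key, "sum_" + field: total} for key, total in totals.items()]

rows = [{"category": "a", "amount": 10}, {"category": "a", "amount": 5},
        {"category": "b", "amount": 2}]
group_sum(rows, "category", "amount")
# -> [{'category': 'a', 'sum_amount': 15.0}, {'category': 'b', 'sum_amount': 2.0}]
```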
Add a summary/totals row at the end of the data.
transforms:
- summarize:
label:
name: "TOTAL"
fields:
quantity: sum
amount: sum
price: avg

Truncate long strings with ellipsis.
transforms:
# Simple: field name to max length
- truncate:
description: 50
notes: 100
# Advanced: custom suffix
- truncate:
description:
length: 50
suffix: "..."
title:
length: 30
suffix: " [more]"

Pad strings to fixed width.
transforms:
- pad:
# Pad numbers with zeros (left)
employee_id:
length: 6
char: "0"
side: left
# Pad text (right)
name:
length: 20
char: " "
side: right

Mask sensitive information for privacy/security.
transforms:
- mask:
# Email: jo**@example.com
email:
type: email
# Phone: ******1234
phone:
type: phone
# Credit card: ************1234
card_number:
type: card
# Full mask: ********
password:
type: full
# Custom: show first 2 and last 2 chars
ssn:
type: custom
showStart: 2
showEnd: 2
char: "*"

Mask types:
- `email` - Shows first 2 chars of local part + domain
- `phone` - Shows last 4 digits
- `card` - Shows last 4 digits
- `full` - Masks entire value
- `custom` - Custom start/end reveal
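The custom mask semantics can be sketched like this (illustrative Python; the parameter names follow the YAML options above, but this is not the plugin's code):

```python
def mask_custom(value, show_start=2, show_end=2, char="*"):
    """Reveal the first/last N characters and mask everything in between."""
    if len(value) <= show_start + show_end:
        return char * len(value)  # too short: mask the whole value
    middle = char * (len(value) - show_start - show_end)
    return value[:show_start] + middle + value[-show_end:]

mask_custom("123-45-6789")  # -> '12*******89'
```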
Expand array field values into separate rows (like MongoDB's $unwind).
transforms:
# Input: [{id: 1, tags: ["a", "b"]}, {id: 2, tags: ["c"]}]
# Output: [{id: 1, tags: "a"}, {id: 1, tags: "b"}, {id: 2, tags: "c"}]
- unwind: tags
# With options: preserve rows with empty arrays
- unwind:
field: items
preserveEmpty: true

Use cases:
- Expand order items into separate rows for reporting
- Flatten nested arrays for CSV export
- Create one row per tag/category for analysis
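The unwind semantics above can be sketched in a few lines of Python (illustrative only; this sketch drops rows with empty arrays, i.e. it does not implement the `preserveEmpty` option):

```python
def unwind(rows, field):
    """Explode an array field into one output row per element."""
    out = []
    for row in rows:
        for value in row.get(field) or []:
            out.append({**row, field: value})  # copy row, scalar in place of array
    return out

unwind([{"id": 1, "tags": ["a", "b"]}, {"id": 2, "tags": ["c"]}], "tags")
# -> [{'id': 1, 'tags': 'a'}, {'id': 1, 'tags': 'b'}, {'id': 2, 'tags': 'c'}]
```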
Add a sequential index/row number field to data.
transforms:
# Simple: adds "_index" field starting at 1
- addIndex: row_number
# With options
- addIndex:
field: "line_no"
start: 1000

Use cases:
- Add line numbers to exported reports
- Create unique identifiers for rows
- Track original row order after sorting
Return the first non-null, non-empty value from multiple fields.
transforms:
# Use primary_email, fall back to secondary_email, then default
- coalesce:
field: contact_email
fields:
- primary_email
- secondary_email
- backup_email
default: "no-email@example.com"
# Get the best available phone number
- coalesce:
field: phone
fields:
- mobile_phone
- work_phone
- home_phone

Use cases:
- Merge multiple contact fields into one
- Fall back to alternative data sources
- Handle incomplete data gracefully
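The coalesce rule (first non-null, non-empty value wins) can be sketched as (illustrative Python, not the plugin's code):

```python
def coalesce(row, fields, default=None):
    """Return the first non-null, non-empty value among `fields`."""
    for field in fields:
        value = row.get(field)
        if value not in (None, ""):
            return value
    return default  # nothing usable found
```

For example, a row whose `primary_email` is empty falls through to the next field in the list.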
Set a field value based on a condition.
transforms:
# Simple condition with literal values
- if:
field: status_label
condition: "status == 'active'"
then: "Active User"
else: "Inactive User"
# Use field references with $
- if:
field: display_name
condition: "nickname != null"
then: $nickname
else: $full_name
# Numeric comparison
- if:
field: priority
condition: "score >= 80"
then: "High"
else: "Normal"
# Chain multiple conditions
- if:
field: tier
condition: "revenue >= 100000"
then: "Enterprise"
else: "Standard"
- if:
field: tier
condition: "revenue >= 500000"
then: "Premium"

Use cases:
- Categorize data based on values
- Set display labels based on conditions
- Create computed status fields
Set fields to fixed values or special expressions.
transforms:
# Set literal values
- set:
source: "xyOps Export"
version: "1.0"
department: "IT"
# Special values
- set:
exported_at: $now # ISO timestamp: 2026-02-07T14:30:00.000Z
export_date: $today # Date only: 2026-02-07
timestamp: $timestamp # Unix timestamp: 1738939800000
# Copy from another field
- set:
backup_email: $primary_email
full_address: $address.street

Special values:
- `$now` - Current ISO timestamp
- `$today` - Current date (YYYY-MM-DD)
- `$timestamp` - Unix timestamp in milliseconds
- `$fieldname` - Copy value from another field
Use cases:
- Add metadata fields to exports
- Stamp data with export timestamp
- Copy/duplicate field values
- Set default values for all rows
Filter active users, select relevant fields, format dates, sort by name:
transforms:
# Step 1: Keep only active users
- filter: "status == 'active'"
# Step 2: Select fields for report
- select:
- name
- email
- department
- created_at
- last_login
# Step 3: Rename for readability
- rename:
created_at: "Registered"
last_login: "Last Login"
# Step 4: Format dates
- format:
Registered:
type: date
pattern: "DD/MM/YYYY"
Last Login:
type: date
pattern: "DD/MM/YYYY HH:mm"
# Step 5: Sort alphabetically
- sort: "name asc"

Process sales data with amounts and dates:
transforms:
# Filter completed sales from this year
- filter: "status == 'completed'"
# Remove internal fields
- exclude:
- internal_id
- _metadata
- processing_notes
# Rename columns
- rename:
cust_name: "Customer"
total_amt: "Total"
sale_date: "Date"
# Format for display
- format:
Total:
type: number
decimals: 2
prefix: "$"
thousands: true
Date:
type: date
pattern: "DD MMM YYYY"
# Sort by date descending (newest first)
- sort: "Date desc"

Clean up and format server status data:
transforms:
# Only show servers with issues
- filter: "health_score < 80"
# Select monitoring fields
- select:
- hostname
- ip_address
- health_score
- cpu_usage
- memory_usage
- last_check
- is_critical
# Friendly names
- rename:
hostname: "Server"
ip_address: "IP"
health_score: "Health %"
cpu_usage: "CPU %"
memory_usage: "Memory %"
last_check: "Last Checked"
is_critical: "Critical?"
# Format values
- format:
"Health %":
type: number
decimals: 0
suffix: "%"
"CPU %":
type: number
decimals: 1
suffix: "%"
"Memory %":
type: number
decimals: 1
suffix: "%"
"Last Checked":
type: date
pattern: "HH:mm:ss"
"Critical?":
type: boolean
true: "⚠️ YES"
false: "No"
# Worst health first
- sort: "Health % asc"

Just rename and reorder fields:
transforms:
- select:
- first_name
- last_name
- email
- phone
- rename:
first_name: "First Name"
last_name: "Last Name"
email: "Email"
phone: "Phone"

If a transform fails, the job will fail with an error message. Common errors:
- Invalid filter condition - Check the syntax: `field operator 'value'`
- Unknown transform type - Check spelling. Valid types: filter, select, exclude, rename, sort, format, limit, skip, reverse, distinct, flatten, compute, concat, split, lookup, group, summarize, truncate, pad, mask, unwind, addIndex, coalesce, if, set
- select requires fields - Provide a list of field names
- sort requires a field - Specify which field to sort by
Debug output shows each transform step:
File Export: Applying 4 transform(s)...
File Export: Step 1: filter
File Export: filter 'status == active' - 100 rows → 75 rows
File Export: Step 2: select
File Export: select fields [name, email, status]
File Export: Step 3: rename
File Export: rename fields [status→Status]
File Export: Step 4: sort
File Export: sort by 'name' asc
File Export: All transforms completed
File Export: Data after transforms: 75 rows
Some transforms use regular expressions (regex) for pattern matching. Here's a quick guide to help you use them effectively.
- The format transform with `type: replace` - the `search` field uses regex
- Filter operators `contains`, `startswith`, and `endswith` use simple string matching (not regex)
| Pattern | Matches | Example |
|---|---|---|
| `abc` | Exact text "abc" | "abc" matches "abc" |
| `.` | Any single character | "a.c" matches "abc", "a1c", "a-c" |
| `.*` | Any characters (zero or more) | "a.*c" matches "ac", "abc", "aXXXc" |
| `.+` | Any characters (one or more) | "a.+c" matches "abc", "aXXXc" (not "ac") |
| `?` | Previous char is optional | "colou?r" matches "color" and "colour" |
| Pattern | Matches | Example |
|---|---|---|
| `[abc]` | Any one of a, b, or c | "[aeiou]" matches any vowel |
| `[a-z]` | Any lowercase letter | "[a-z]+" matches "hello" |
| `[A-Z]` | Any uppercase letter | "[A-Z]+" matches "HELLO" |
| `[0-9]` | Any digit | "[0-9]+" matches "123" |
| `[^abc]` | NOT a, b, or c | "[^0-9]" matches non-digits |
| Pattern | Matches | Same As |
|---|---|---|
| `\d` | Any digit | `[0-9]` |
| `\D` | Any non-digit | `[^0-9]` |
| `\w` | Word character | `[a-zA-Z0-9_]` |
| `\W` | Non-word character | `[^a-zA-Z0-9_]` |
| `\s` | Whitespace | space, tab, newline |
| `\S` | Non-whitespace | anything but space/tab/newline |
| Pattern | Matches |
|---|---|
| `^` | Start of string |
| `$` | End of string |
| `\b` | Word boundary |
| Pattern | Matches |
|---|---|
| `*` | 0 or more times |
| `+` | 1 or more times |
| `?` | 0 or 1 time |
| `{3}` | Exactly 3 times |
| `{2,5}` | 2 to 5 times |
| `{2,}` | 2 or more times |
These characters have special meaning. To match them literally, add \ before them:
. * + ? ^ $ | \ [ ] ( ) { }
Example: To match `$100.00`, use `\$100\.00`
Used in the `flags` parameter of format replace:

| Flag | Meaning |
|---|---|
| `g` | Global - replace ALL matches (not just first) |
| `i` | Case-insensitive matching |
| `gi` | Both global and case-insensitive |
Remove all digits:
- format:
product_code:
type: replace
search: "[0-9]"
replacement: ""
flags: "g"

Replace underscores with spaces:
- format:
field_name:
type: replace
search: "_"
replacement: " "
flags: "g"

Remove special characters:
- format:
filename:
type: replace
search: "[^a-zA-Z0-9]"
replacement: ""
flags: "g"

Extract numbers only:
- format:
phone:
type: replace
search: "\\D" # Note: double backslash in YAML
replacement: ""
flags: "g"

Clean up multiple spaces:
- format:
text:
type: replace
search: "\\s+" # One or more whitespace
replacement: " "
flags: "g"

Remove HTML tags:
- format:
content:
type: replace
search: "<[^>]+>"
replacement: ""
flags: "g"

Format phone number (add dashes):
# First remove non-digits, then use compute or keep as-is
- format:
phone:
type: replace
search: "(\\d{3})(\\d{3})(\\d{4})"
replacement: "$1-$2-$3"

| What to Match | Pattern |
|---|---|
| Email (simple) | [a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,} |
| Phone digits | \d{10} or \d{3}-\d{3}-\d{4} |
| Date YYYY-MM-DD | \d{4}-\d{2}-\d{2} |
| IP Address (simple) | \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} |
| URL (simple) | https?://[^\s]+ |
| Alphanumeric only | ^[a-zA-Z0-9]+$ |
| Has whitespace | \s |
| Empty or whitespace | ^\s*$ |
In YAML double-quoted strings, backslashes need to be doubled. So:
- `\d` in regex becomes `"\\d"` in YAML
- `\s` in regex becomes `"\\s"` in YAML
- `\.` in regex becomes `"\\."` in YAML
Alternatively, use single quotes which don't need escaping:
search: '\d+' # Works with single quotes

By default, shell scripts output plain text, which is captured as raw stdout. For better export results (especially CSV, Excel, and HTML), output structured JSON data from your scripts.
Create a shell script that outputs JSON instead of plain text:
#!/bin/bash
# xyOps Shell Script: ls -lart to JSON
# Outputs structured JSON data for downstream actions
json_array="["
first=true
while IFS= read -r line; do
# Skip empty lines and "total" line
[[ -z "$line" ]] && continue
[[ "$line" =~ ^total ]] && continue
# Parse ls -l output columns using awk
perms=$(echo "$line" | awk '{print $1}')
links=$(echo "$line" | awk '{print $2}')
owner=$(echo "$line" | awk '{print $3}')
group=$(echo "$line" | awk '{print $4}')
size=$(echo "$line" | awk '{print $5}')
month=$(echo "$line" | awk '{print $6}')
day=$(echo "$line" | awk '{print $7}')
time_year=$(echo "$line" | awk '{print $8}')
name_part=$(echo "$line" | awk '{for(i=9;i<=NF;i++) printf "%s%s", $i, (i<NF?" ":""); print ""}')
# Skip if parsing failed
[[ -z "$perms" ]] && continue
# Determine file type
case "${perms:0:1}" in
d) file_type="directory" ;;
l) file_type="symlink" ;;
-) file_type="file" ;;
*) file_type="other" ;;
esac
# Handle symlinks (extract target)
symlink_target=""
display_name="$name_part"
if [[ "$name_part" == *" -> "* ]]; then
display_name="${name_part%% -> *}"
symlink_target="${name_part#* -> }"
fi
# Build JSON object
if [ "$first" = true ]; then
first=false
else
json_array+=","
fi
json_object="{"
json_object+="\"name\":\"$display_name\","
json_object+="\"type\":\"$file_type\","
json_object+="\"permissions\":\"$perms\","
json_object+="\"links\":$links,"
json_object+="\"owner\":\"$owner\","
json_object+="\"group\":\"$group\","
json_object+="\"size\":$size,"
json_object+="\"month\":\"$month\","
json_object+="\"day\":\"$day\","
json_object+="\"time\":\"$time_year\""
if [[ -n "$symlink_target" ]]; then
json_object+=",\"target\":\"$symlink_target\""
fi
json_object+="}"
json_array+="$json_object"
done < <(ls -lart /)
json_array+="]"
# Output to xyOps (enable "Interpret JSON in Output" in Shell Plugin!)
echo "{\"xy\":1,\"code\":0,\"description\":\"Listed items\",\"data\":$json_array}"

Important: Enable the "Interpret JSON in Output" checkbox in the Shell Plugin parameters!
name,type,permissions,links,owner,group,size,month,day,time,target
var,symlink,lrwxr-xr-x@,1,root,wheel,11,Nov,22,14:49,private/var
usr,directory,drwxr-xr-x@,11,root,wheel,352,Nov,22,14:49,
tmp,symlink,lrwxr-xr-x@,1,root,wheel,11,Nov,22,14:49,private/tmp
bin,directory,drwxr-xr-x@,39,root,wheel,1248,Nov,22,14:49,

The HTML export creates a professionally styled table with:
- Blue header row with white text
- Hover effects on rows
- Responsive design
- Generation timestamp
- Custom report title
- Pretty-printed with 2-space indentation
- Preserves original data structure
- Automatically flattens nested objects (e.g., `meta.level` becomes a column header)
- Arrays of primitives joined with commas
- Proper escaping of quotes and special characters
- First row contains headers
- Full HTML5 document with embedded CSS
- Styled table with blue headers
- Responsive design
- Custom report title support
- Valid XML with proper declaration
- Nested objects preserved as child elements
- Special characters escaped
- Clean YAML syntax
- Proper quoting of special strings
- Nested structures preserved
- ASCII table with aligned columns
- Header separator line
- Works in any text viewer
- Styled header row (blue background, white text)
- Auto-column width
- Proper data types (numbers, strings)
- Professional document layout
- Title and timestamp header
- Table format for data
- Page breaks for large datasets
- Standard ORU^R01 message structure
- MSH, PID, OBR, OBX segments
- Proper field escaping
- Each data field becomes an OBX segment
- FHIR R4 Bundle resource
- Collection of Observation resources
- Proper coding and value types
- JSON format
Cause: The previous job didn't output structured data.
Solutions:
- Enable "Interpret JSON in Output" in Shell Plugin
- Output JSON with a `data` property: `{"xy":1,"code":0,"data":{...}}`
- Check the job log for the "Using data from" message
Cause: The specified folder doesn't exist and "Create Folder" is disabled.
Solutions:
- Enable "Create Folder" checkbox
- Create the folder manually before running
Cause: Data is in unexpected location.
Solutions:
- Check job log for debug messages
- Verify "Interpret JSON in Output" is enabled for Shell Plugin
- Check the data structure in the previous job's output
The plugin logs detailed information to the job log:
File Export: Using data from 'job.data'
File Export: Output format: 'csv'
File Export: Converting to csv...
File Export: Conversion successful, content length: 1234
File Export: Writing to /exports/report.csv
File Export: File written successfully
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
- Template-based output formatting
- Data validation transforms
- Pivot/unpivot transforms
This project is licensed under the MIT License - see the LICENSE.md file for details.
Tim Alderweireldt
- Plugin: xyOps File Export
- Year: 2026
- xyOps team - For the automation platform
- exceljs - For Excel file generation
- pdfkit - For PDF document generation
- Fixed logo transparency
- Fixed typo
- 5 NEW Advanced Transforms! Now 25 total transforms available
- `unwind` - Explode array fields into multiple rows (like MongoDB $unwind)
- `addIndex` - Add row numbers/sequence field to data
- `coalesce` - Get first non-null value from multiple fields
- `if` - Conditional field assignment with then/else
- `set` - Set fields to fixed values or special expressions ($now, $today, $timestamp)
- 14 NEW Transform Types! Now 20 total transforms available
- Data Manipulation: limit, skip, reverse, distinct, flatten
- Computed Fields: compute (expressions), concat, split, lookup (value mapping)
- Aggregation: group (with sum/avg/count/min/max), summarize (totals row)
- String Operations: truncate, pad, mask (email/phone/card/custom)
- Mask supports email, phone, credit card, and custom patterns
- Group supports 8 aggregation operations: sum, avg, count, min, max, first, last, list
- NEW: Data Transforms! Filter, select, exclude, rename, sort, and format data using YAML
- Added transforms parameter with code editor (YAML syntax)
- Auto-install `js-yaml` dependency on first use
- Transforms execute as a pipeline (chained, order matters)
- Comprehensive filter operators: ==, !=, >, <, >=, <=, contains, startswith, endswith
- Format types: date, number, uppercase, lowercase, trim, boolean, default, replace
- Added YAML export format
- Added Plain Text (TXT) export format
- Added Excel (XLSX) export with auto-install
- Added PDF export with auto-install
- Added folder cleanup options (keep/archive/delete)
- Added Report Title parameter
- Added Create Folder option
- Improved debug logging
- Initial release
- JSON, CSV, HTML, XML, Markdown formats
- HL7 v2.x and FHIR formats
- Timestamp and UID filename options
- Auto-upload to xyOps
Need help? Open an issue on GitHub or contact the author.
Found this useful? Star the repository and share with your team!
