Double-click the Arfni 1.0.0 Setup icon to begin installation.
When the setup wizard opens, click Next to continue.
It is recommended to close all other applications before starting installation.
Read through the license agreement carefully.
If you agree to the terms, select I Agree to proceed.
Select the folder where you want to install Arfni.
You can use the default path or click Browse to choose another location.
Click Install to start the installation.
Once the installation is complete, you’ll see the confirmation screen:
“Arfni 1.0.0 has been installed on your computer.”
Click Finish to close the setup wizard.
Before starting deployment, ensure Docker Desktop is installed on your local machine. Arfni will automatically launch Docker Desktop if it's installed. If automatic launch fails, please start Docker Desktop manually.
In the local environment, click Create New Project to start the deployment process.
Configure the project name and the path where the project will be created.
Check the folder at the specified path. The .arfni folder and stack.yaml file will be automatically generated for deployment and canvas recording.
Create an apps folder and place your deployment files inside it. Each folder name should match the blocks used in the canvas.
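For illustration, a project set up this way might look like the following (the block names fastapi and db are hypothetical examples, not required names):

```
my-project/
├── .arfni/          (generated by Arfni)
├── stack.yaml       (generated by Arfni)
└── apps/
    ├── fastapi/     (matches a "fastapi" block on the canvas)
    └── db/          (matches a "db" block on the canvas)
```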
Return to the canvas screen and drag & drop blocks according to the elements you want to deploy, placing them on the canvas.
Click on each block to configure its properties. Set ports, environment configurations, and environment variables according to your architecture.
You can view the automatically updated stack.yaml file in the slide panel at the bottom. The file automatically saves when there are no changes for 2 seconds. After placing blocks and completing environment configuration, verify the stack.yaml file to prepare for deployment.
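As a rough sketch of what such a file might contain (the actual schema is generated by Arfni; the field names, service names, and values below are hypothetical):

```yaml
# Hypothetical stack.yaml sketch - the real schema is produced by Arfni
services:
  fastapi:
    port: 8000
    env:
      DATABASE_URL: postgres://db:5432/app
  db:
    image: postgres:16
    port: 5432
```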
Click the Deploy button in the upper right corner to start deployment.
When you click the Deploy button, automatic deployment begins through 5 stages:
- Preflight - Pre-deployment checks
- Generate - Generate configuration files
- Build - Build container images
- Deploy - Deploy containers
- Health - Health check verification
Tip: You can stop the deployment at any time by clicking the Stop Deployment button.
If required files such as a Dockerfile or docker-compose.yml are missing, they will be generated automatically. However, for deployment stability, it's recommended to configure these files in advance when possible.
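For instance, a minimal hand-written Dockerfile for a FastAPI app might look like this (a sketch only; the module path main:app, port 8000, and file location apps/fastapi/Dockerfile are assumptions, not values Arfni requires):

```dockerfile
# Hypothetical minimal Dockerfile for apps/fastapi/ (main:app is assumed)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000", "main:app"]
```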
The system will utilize existing images and deployments when available. If there are changes or new requirements, it will rebuild and install accordingly.
Once deployment is complete, a popup window will display:
- Which services were deployed
- What endpoints are available
- Arfni uses Docker for deployment, so Docker Desktop must be installed in your local environment
- If Docker Desktop is already installed, the system will automatically launch it
- If automatic launch fails, please start Docker Desktop manually - deployment will proceed automatically once it's running
Before starting remote deployment, ensure:
- SSH access to your remote server
- PEM key file for server authentication
- For Hybrid monitoring mode: Docker Desktop installed on your local machine
In Remote Projects, click Select Server to choose your target server for remote deployment.
If you don't have a server registered, you'll need to add one. Click Add New Server to add a target server to your server list.
Configure the following settings:
- Server Name - A friendly name for your server
- Server Address - IP address or domain name
- Username - SSH username
- PEM Key Path - Path to your SSH key file
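You can also sanity-check the same credentials from a terminal with a standard SSH command before saving (the path, username, and address below are placeholders):

```
ssh -i /path/to/your-key.pem username@server-address
```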
After completing the configuration:
- Click Test SSH Connection to verify the connection
- Click Add Server to save the server information
Managing Servers:
Click on a saved server to manage projects on that server, then proceed with Create New Project.
Click Create New Project and configure the project name and the path where the project will be created on the remote server.
Check the folder at the specified path. The .arfni folder and stack.yaml file will be automatically generated for deployment and canvas recording.
Create an apps folder and place your deployment files inside it. Each folder name should match the blocks used in the canvas.
For example, a FastAPI app that requires a requirements.txt file needs at least the following entries.
Place this file inside the fastapi folder.
fastapi==0.104.1
uvicorn[standard]==0.24.0
gunicorn

Return to the canvas screen and drag & drop blocks according to the elements you want to deploy, placing them on the canvas.
Click on each block to configure its properties. Set ports, environment configurations, and environment variables according to your architecture.
For remote servers, you can choose a monitoring option. Click the ? button for detailed information about each option.
Monitoring Options:
- All-in-One - All monitoring tools (Prometheus, Grafana, etc.) run on a single server. Simple and cost-effective.
- Hybrid - Monitoring tools are distributed across multiple servers. Balances performance and cost.
  - Node Exporter and Prometheus run on the remote server
  - Grafana runs locally to reduce memory load on the remote server
- No Monitoring - No monitoring tools deployed. Suitable for development or when monitoring isn't needed.
You can view the automatically updated stack.yaml file in the slide panel at the bottom. The file automatically saves when there are no changes for 2 seconds. After placing blocks and completing environment configuration, verify the stack.yaml file to prepare for deployment.
Click the Deploy button in the upper right corner to start deployment.
When you click the Deploy button, automatic deployment begins through 5 stages:
- Preflight - Pre-deployment checks
- Generate - Generate configuration files
- Build - Build container images
- Deploy - Deploy containers
- Health - Health check verification
Tip: You can stop the deployment at any time by clicking the Stop Deployment button.
If required Dockerfile or docker-compose.yml files are missing, they will be automatically generated. However, for deployment stability, it's recommended to configure these files in advance when possible.
The system will utilize existing images and deployments when available. If there are changes or new requirements, it will rebuild and install accordingly.
Once deployment is complete, a popup window will display:
- Which services were deployed
- What endpoints are available
Note: Arfni uses Docker for deployment.
After deployment, click Check Server Status or return to the main screen and click Project Status on the deployed canvas to view the server status dashboard.
Click the Connect button to establish an SSH connection. You can conveniently execute commands and perform tasks through the GUI interface.
View all containers currently running on the remote server. Use the control buttons to start, stop, or delete containers conveniently.
Click Open Dashboard to access monitoring (available for All-in-One and Hybrid deployment modes).
Hybrid Mode Requirements:
- Docker Desktop must be installed on your local machine to run Grafana
- If Docker Desktop is already installed, the system will automatically launch it
- If automatic launch fails, please start Docker Desktop manually - the process will continue automatically
Dashboard Features:
- Pre-configured default settings are provided
- Opens automatically in web view
- No complex configuration needed - start monitoring immediately
Before using AI features, you must configure an API key in Settings.
Navigate to Settings → API Keys to manage your API keys.
Click Add API Key to open the input dialog.
Required Information:
- Provider - Select your API provider
- Key Name - A friendly name for identification
- API Key - Your actual API key
Supported Providers:
- OpenAI API
- GMS Key (under "etc" category)
Note: Currently, only OpenAI API and GMS Key are functional.
If you have multiple API keys saved:
- Select the desired key from the list
- Click the Apply button to activate it
- The active key will be marked with Active status
The Estimate feature provides server resource recommendations based on the blocks placed on your canvas, including monitoring systems and Docker requirements.
Arrange blocks on the canvas in your desired architecture, including monitoring systems and Docker configurations. To use the feature, click the AI button.
Based on your deployed blocks and stack.yaml configuration, the system uses pre-stored benchmark data to provide 3-tier options:
- Budget - Cost-optimized minimal configuration
- Recommended - Balanced performance and cost
- Performance - High-performance optimized configuration
Pricing Reference: Calculations are based on AWS EC2 pricing in the Seoul region as of January 15, 2025.
Click Analyze Project & Recommend Server to send a request to the AI.
The system analyzes your deployed services and recommends AWS infrastructure, with cost estimates across 3 tiers.
Recommendation Process:
- Step 1 - Calculate service memory requirements
- Step 2 - AI-based instance recommendations
- Step 3 - Actual cost calculation
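As a rough illustration, the three steps can be sketched as follows. Every service name, memory figure, instance type, and price below is made up for the example; they are not Arfni's benchmark data or real AWS pricing, and the simple lookup in Step 2 stands in for the AI-based recommendation.

```python
# Step 1 - calculate total memory required by the deployed services (MB)
service_memory_mb = {"fastapi": 512, "postgres": 1024, "prometheus": 768}
total_mb = sum(service_memory_mb.values())

# Step 2 - recommend the smallest instance whose memory covers the total
# (hypothetical table of (name, memory_mb, hourly_usd); in Arfni this
# step is AI-based)
instances = [
    ("t3.small", 2048, 0.026),
    ("t3.medium", 4096, 0.052),
    ("t3.large", 8192, 0.104),
]
name, mem_mb, hourly_usd = next(i for i in instances if i[1] >= total_mb)

# Step 3 - estimate the monthly cost (about 730 hours per month)
monthly_usd = round(hourly_usd * 730, 2)
print(f"{name}: ~${monthly_usd}/month for {total_mb} MB of services")
```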
Additional Features:
- View deployment Tips for each architecture
- Get optimization suggestions for your specific setup
⚠️ Important Disclaimer: These are estimates only. Actual deployment costs may vary due to numerous variables. These recommendations are advisory, not guaranteed. Arfni assumes no responsibility for any excess costs incurred.
The Optimize feature provides AI-driven recommendations based on actual data from your deployed remote server.
Click Project Status or Check Server Status after deployment to navigate to the Project Status dashboard.
- Click the Optimize button to open the AI analysis interface
- Click Start Analysis to invoke the AI
Requirement: This feature is only available for servers deployed with All-in-One or Hybrid monitoring modes, as server metrics data must be transmitted to the AI.
After loading, you'll receive a response based on real metric information:
Collected Metrics:
- CPU Usage - Current CPU utilization (%)
- Memory - Used memory (MB) and utilization (%)
- Disk - Used disk space (GB) and utilization (%)
- Instance Type - Current EC2 instance type
These metrics allow you to:
- Easily review your server's performance data
- Transmit data to AI for cost analysis
- Receive optimization recommendations
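To illustrate, the collected metrics can be summarized into a short report like this (every value here is hypothetical, not real server output):

```python
# Hypothetical snapshot of the collected metrics
metrics = {
    "cpu_usage_pct": 42.5,     # CPU utilization (%)
    "memory_used_mb": 1536,    # used memory (MB)
    "memory_total_mb": 4096,
    "disk_used_gb": 12.0,      # used disk space (GB)
    "disk_total_gb": 30.0,
    "instance_type": "t3.medium",
}

# Derive the utilization percentages shown on the dashboard
memory_pct = round(metrics["memory_used_mb"] / metrics["memory_total_mb"] * 100, 1)
disk_pct = round(metrics["disk_used_gb"] / metrics["disk_total_gb"] * 100, 1)

print(f"{metrics['instance_type']}: CPU {metrics['cpu_usage_pct']}%, "
      f"memory {memory_pct}%, disk {disk_pct}%")
```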
Based on the transmitted information, a comprehensive report will be generated for your review.
⚠️ Important Disclaimer: These are estimates only. Actual deployment conditions involve many variables that may cause inaccuracies. These recommendations are advisory, not definitive answers. Arfni assumes no responsibility for any excess costs incurred.
If you find a bug or want to suggest an improvement, please contact us by email at arfni201@googlegroups.com.