This is a Telegram bot designed to serve as a GPT assistant. It is built with Java 21 using the Spring AI framework and the Telegram Bot Java Library. The bot operates with the following large language models:
- gpt-4.1-mini from OpenAI
- gemini-2.0-flash from Google AI
- DeepSeek-V3.2-Exp from DeepSeek
- Multi-models
  Supports the following optional AI models:
  - OpenAI GPT-4.1-Mini
  - Google Gemini-2.0-Flash
  - DeepSeek DeepSeek-V3.2-Exp
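  As a sketch of how a per-user model preference could be routed with Spring AI (the `ModelRouter` class and the bean keys below are illustrative, not taken from this project):

  ```java
  // Hypothetical sketch: route a request to the user's preferred model.
  // Only Spring AI's ChatModel interface is assumed; names are illustrative.
  import java.util.Map;

  import org.springframework.ai.chat.model.ChatModel;
  import org.springframework.stereotype.Component;

  @Component
  class ModelRouter {

      // Spring injects a map of bean name -> ChatModel,
      // e.g. "openai", "gemini", "deepseek"
      private final Map<String, ChatModel> models;

      ModelRouter(Map<String, ChatModel> models) {
          this.models = models;
      }

      String ask(String preferredModel, String userMessage) {
          ChatModel model = models.getOrDefault(preferredModel, models.get("openai"));
          return model.call(userMessage); // ChatModel.call(String) returns the completion text
      }
  }
  ```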
- Speech-to-text Support
  Voice messages are transcribed using the OpenAI Whisper model.
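  A minimal sketch of such a transcription call, assuming Spring AI's OpenAI transcription support (package and class names may differ between Spring AI versions; the component below is illustrative):

  ```java
  // Hypothetical sketch: transcribe a downloaded Telegram voice file
  // with Spring AI's OpenAI (Whisper) transcription model.
  import org.springframework.ai.audio.transcription.AudioTranscriptionPrompt;
  import org.springframework.ai.openai.OpenAiAudioTranscriptionModel;
  import org.springframework.core.io.FileSystemResource;
  import org.springframework.stereotype.Component;

  @Component
  class VoiceTranscriber {

      private final OpenAiAudioTranscriptionModel whisper;

      VoiceTranscriber(OpenAiAudioTranscriptionModel whisper) {
          this.whisper = whisper;
      }

      String transcribe(String voiceFilePath) { // e.g. a file under TGBOT_VOICE_PATH
          var prompt = new AudioTranscriptionPrompt(new FileSystemResource(voiceFilePath));
          return whisper.call(prompt).getResult().getOutput(); // the transcribed text
      }
  }
  ```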
- Proxy Support
  The bot can be configured to run behind a proxy server for enhanced security and privacy.
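  On the Telegram side, the Telegram Bot Java Library accepts proxy settings via `DefaultBotOptions`; a sketch using the `TGBOT_PROXY_*` variables from the configuration section below:

  ```java
  // Hypothetical sketch: route Telegram API traffic through a SOCKS5 proxy.
  import org.telegram.telegrambots.bots.DefaultBotOptions;

  class ProxyConfig {

      static DefaultBotOptions proxyOptions() {
          DefaultBotOptions options = new DefaultBotOptions();
          options.setProxyHost(System.getenv("TGBOT_PROXY_HOSTNAME"));
          options.setProxyPort(Integer.parseInt(System.getenv("TGBOT_PROXY_PORT")));
          options.setProxyType(DefaultBotOptions.ProxyType.SOCKS5);
          // TGBOT_PROXY_USERNAME / TGBOT_PROXY_PASSWORD can be supplied
          // via java.net.Authenticator.setDefault(...)
          return options;
      }
  }
  ```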
- User Access Control
  Only specified Telegram admin usernames are allowed to see detailed error messages.
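  A sketch of such a check (the `AccessControl` class is hypothetical; `TGBOT_ALLOWED_USER_NAMES` appears in the configuration section below):

  ```java
  // Hypothetical sketch: only usernames listed in TGBOT_ALLOWED_USER_NAMES
  // (comma-separated) receive detailed error messages.
  import java.util.Arrays;
  import java.util.Set;
  import java.util.stream.Collectors;

  class AccessControl {

      private final Set<String> allowed;

      AccessControl(String allowedUserNames) { // e.g. System.getenv("TGBOT_ALLOWED_USER_NAMES")
          this.allowed = Arrays.stream(allowedUserNames.split(","))
              .map(String::trim)
              .collect(Collectors.toSet());
      }

      String errorReplyFor(String username, Exception e) {
          return allowed.contains(username)
              ? "Error: " + e.getMessage()        // detailed message for admins
              : "Something went wrong. Please try again later.";
      }
  }
  ```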
- Cloud-First Deployment Strategy
  - Build in the Cloud
    - The Docker image is built in the GitHub Actions environment, ensuring consistency and scalability.
    - A fully automated CI/CD pipeline handles building and deploying the bot.
    - To keep the deployment process clean and efficient, avoid pulling source code or building the image directly on the EC2 instance.
  - Simplified Deployment Process
    - Deployment to the EC2 instance requires only two files: `compose.yaml` and `deploy.sh`.
    - The bot is deployed with Docker and Docker Compose, streamlining setup and reducing manual intervention.
    - A commit to the `main` branch triggers the GitHub Actions workflow, which builds and pushes the Docker image into the AWS ECR repository and redeploys the bot on the EC2 instance.
  - Automated Infrastructure Provisioning
    - The EC2 instance is configured automatically using a cloud-init script, enabling consistent and repeatable setups.
    - Terraform scripts are provided to provision the AWS EC2 instance and configure essential infrastructure components, such as networking, security groups, and IAM roles.
- Env Variables Management
  Environment variables can be managed using AWS SSM Parameter Store for secure configuration.
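  For illustration, reading one such parameter with the AWS SDK for Java v2 (the parameter name is hypothetical; in this project the values are uploaded by the `params` Terraform module described below):

  ```java
  // Hypothetical sketch: fetch a SecureString parameter from AWS SSM Parameter Store.
  import software.amazon.awssdk.services.ssm.SsmClient;
  import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

  class ParameterStore {

      static String read(String name) { // e.g. "/tgbot-gpt/TGBOT_TOKEN"
          try (SsmClient ssm = SsmClient.create()) {
              return ssm.getParameter(GetParameterRequest.builder()
                          .name(name)
                          .withDecryption(true) // decrypt SecureString values
                          .build())
                  .parameter()
                  .value();
          }
      }
  }
  ```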
- Spring AI Framework
  - Built using the Spring AI framework for seamless integration with AI services
  - Each bot command is implemented as a separate Spring component
  - The factory design pattern is used to create command components; start with `CommandFactory`
  - Application configuration is managed using Spring Boot properties
  - Supports easy extension and customization of bot functionality
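  A minimal sketch of this pattern (the `Command` interface and its methods are illustrative; only `CommandFactory` is named in this project):

  ```java
  // Sketch: each command is a Spring component; the factory indexes them by name.
  import java.util.List;
  import java.util.Map;
  import java.util.function.Function;
  import java.util.stream.Collectors;

  import org.springframework.stereotype.Component;

  interface Command {
      String name();               // e.g. "/start"
      String execute(String args); // produce the bot's reply
  }

  @Component
  class StartCommand implements Command {
      public String name() { return "/start"; }
      public String execute(String args) { return "Hello! I am a GPT assistant."; }
  }

  @Component
  class CommandFactory {

      private final Map<String, Command> commands;

      // Spring injects every Command bean; adding a new command
      // requires only a new component, no factory changes.
      CommandFactory(List<Command> beans) {
          this.commands = beans.stream()
              .collect(Collectors.toMap(Command::name, Function.identity()));
      }

      Command of(String name) {
          Command command = commands.get(name);
          if (command == null) {
              throw new IllegalArgumentException("Unknown command: " + name);
          }
          return command;
      }
  }
  ```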
- Telegram Bot
  - Utilizes the Telegram Bot Java Library for efficient interaction with the Telegram Bot API
  - Supports various Telegram entities:
    - bot commands
    - text messages
    - voice messages
    - callback queries
  - Utilizes bot settings for configuration
  - Supports message replies and code formatting
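  A sketch of how these update types can be dispatched with the library's long-polling API (the handler methods are hypothetical):

  ```java
  // Hypothetical sketch: dispatch the Telegram update types listed above.
  import org.telegram.telegrambots.bots.TelegramLongPollingBot;
  import org.telegram.telegrambots.meta.api.objects.CallbackQuery;
  import org.telegram.telegrambots.meta.api.objects.Message;
  import org.telegram.telegrambots.meta.api.objects.Update;

  class GptAssistantBot extends TelegramLongPollingBot {

      @Override
      public String getBotUsername() { return "tgbot_gpt"; } // illustrative username

      @Override
      public String getBotToken() { return System.getenv("TGBOT_TOKEN"); }

      @Override
      public void onUpdateReceived(Update update) {
          if (update.hasCallbackQuery()) {
              handleCallback(update.getCallbackQuery());   // e.g. model selection buttons
          } else if (update.hasMessage() && update.getMessage().hasVoice()) {
              handleVoice(update.getMessage());            // transcribe via Whisper
          } else if (update.hasMessage() && update.getMessage().isCommand()) {
              handleCommand(update.getMessage());          // resolve via CommandFactory
          } else if (update.hasMessage() && update.getMessage().hasText()) {
              handleText(update.getMessage());             // forward to the chat model
          }
      }

      private void handleCallback(CallbackQuery query) { /* ... */ }
      private void handleVoice(Message message) { /* ... */ }
      private void handleCommand(Message message) { /* ... */ }
      private void handleText(Message message) { /* ... */ }
  }
  ```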
Prerequisites:
- Docker installed
- Docker Compose installed
- AWS CLI installed and configured
- Terraform installed
- Java 21 installed
- Gradle installed
- gcloud CLI installed and authenticated
- Get your Telegram bot token from @BotFather
- Get your OpenAI API key
- Get your Google API key
- Get your DeepSeek API key
- Register an AWS account
- Set up a proxy (optional)
- Create a Google Cloud Platform project
- Define an API key restricted to calling the Generative Language API only
- Install the gcloud CLI to use the Google Gemini model
- Authenticate by running the following command:
  ```shell
  gcloud auth application-default login
  ```
Create a `.env` file in the root directory and add the following:
- Basic configuration
  ```properties
  PROJECT=<bot_name>
  SERVER_PORT=8080
  TGBOT_TOKEN=
  TGBOT_VOICE_PATH=
  TGBOT_ALLOWED_USER_NAMES=
  ```
- Model configuration
  - If you want to use the OpenAI model, add the following:
    ```properties
    # Open AI
    OPENAI_API_KEY=
    ```
  - If you want to use the Gemini model, add the following:
    ```properties
    # Gemini
    GOOGLE_CLOUD_PROJECT_ID=
    GOOGLE_CLOUD_REGION=europe-west1
    ```
  - If you want to use the DeepSeek model, add the following:
    ```properties
    # DeepSeek
    DEEPSEEK_API_KEY=
    ```
- To run the Telegram bot over a proxy, additionally define the following env vars:
  ```properties
  TGBOT_PROXY_HOSTNAME=
  TGBOT_PROXY_PORT=42567
  TGBOT_PROXY_USERNAME=
  TGBOT_PROXY_PASSWORD=
  ```
There are several ways to run the bot:
- locally
- locally in Docker
- on an AWS EC2 instance
To run the bot locally:
- Build the project:
  ```shell
  ./gradlew bootJar
  ```
- Run the bot from the command line:
  ```shell
  set -a
  source .env
  set +a
  java -jar build/libs/app.jar
  ```
- Run from IDE:
  Use the `App` run configuration with environment variables loaded from the `.env` file.
To run the bot locally in Docker:
- Build an image:
  ```shell
  docker build --build-arg GPR_KEY="${GPR_KEY}" -t "${PROJECT,,}:latest" .
  ```
- Run the bot using Docker Compose. Make sure the `.env` file is in the same directory as `compose.yaml`:
  ```shell
  docker compose up --detach
  ```
- To stop the bot:
  ```shell
  docker compose down -v
  ```
- To see logs:
  ```shell
  docker compose logs -f
  ```
- To clean up unused Docker objects:
  ```shell
  docker system prune -a
  ```
- To observe Docker resource usage:
  ```shell
  docker stats
  ```
There are three steps to deploy the bot on an AWS EC2 instance:
- set up the GitHub workflow
- set up the AWS infrastructure using the Terraform scripts
- deploy the bot using the GitHub Actions workflow

The GitHub Actions workflow builds and pushes the Docker image into the AWS ECR repository, then redeploys the bot on the EC2 instance.
To run the deploy workflow, GitHub needs access to the AWS account:
- Create an `aws` environment in the repository settings.
- Define the following environment variables in the `aws` environment:
  - `AWS_ACCESS_KEY_ID` - AWS access key id
  - `AWS_SECRET_ACCESS_KEY` - AWS secret access key
- Define the following secret in the `aws` environment:
  - `GPR_KEY` - GitHub token to access the GitHub Packages registry
The project defines the following Terraform modules:
- `infra` (`./ci/aws/infra`) - creates the EC2 instance with the necessary security groups, IAM roles, etc.
- `params` (`./ci/aws/params`) - uploads env variables into AWS SSM Parameter Store
Use the Terraform scripts to provision the required AWS resources. Terraform will do the following:
- create the EC2 instance and ECR repository using the Terraform modules
- define free_tier alerts
- use the `t2.micro` EC2 instance type
- set up security groups to allow only SSH and HTTP access
- set up IAM roles and policies; GitHub Actions will be allowed to:
  - push Docker images into ECR
  - run AWS SSM commands
  - write SSM command execution logs into CloudWatch Logs
  - redeploy the bot using an AWS SSM command
- use a cloud-init script to:
  - set up and configure Docker
  - set up the AWS CLI
  - set up and configure the gcloud SDK
  - install and configure the ECR credential helper
  - create the working directory `/home/ubuntu/tgbot-gpt`
  - download the `deploy.sh` script into the working directory
  - write cloud-init logs into `/var/log/cloud-init-output.log`

Keep in mind that the cloud-init script runs only once, when the instance is created.
To deploy the infrastructure:
- Create a `tgbot-gpt-tf` S3 bucket in the `eu-central-1` region to store the Terraform state, or keep the Terraform state locally.
- Init the Terraform script:
  ```shell
  cd ./ci/aws/infra
  terraform init -reconfigure \
    -backend-config="bucket=tgbot-gpt-tf" \
    -backend-config="region=eu-central-1" \
    -backend-config="key=tgbot-gpt-infra.tfstate"
  ```
- Provide Terraform variables via `terraform.tfvars` or inline.
- Deploy the dockerized application on the EC2 instance by running the Terraform scripts:
  ```shell
  terraform plan -out tgbot-gpt.tfplan
  terraform apply -input=false tgbot-gpt.tfplan
  ```
- Keep the `aws_ec2_id` Terraform output value.
If some env variables need to be updated, use the `params` Terraform module to upload env variables into AWS SSM Parameter Store.
- Init the Terraform script:
  ```shell
  cd ./ci/aws/params
  terraform init -reconfigure \
    -backend-config="bucket=tgbot-gpt-tf" \
    -backend-config="region=eu-central-1" \
    -backend-config="key=tgbot-gpt-params.tfstate"
  ```
- To upload env variables:
  - define the `aws.env` file locally
  - upload env variables into AWS SSM Parameter Store by running:
    ```shell
    terraform plan -out tgbot-gpt-params.tfplan
    terraform apply -input=false tgbot-gpt-params.tfplan
    ```
- Start/restart the application
  Start the Docker container manually over SSH, or run the `deploy.sh` script via the following AWS CLI command:
  ```shell
  aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["cd /home/ubuntu/tgbot-gpt", "./deploy.sh"]' \
    --instance-ids "<ec2-instance-id>" \
    --comment "Deploy tgbot-gpt" \
    --cloud-watch-output-config "CloudWatchLogGroupName=/aws/ssm/tgbot-gpt-deploy-logs,CloudWatchOutputEnabled=true" \
    --region "eu-central-1"
  ```
- The EC2 instance is configured with only a root volume. Each time Terraform provisions the instance, all data is lost and the environment is reinitialized using the `user_data` cloud-init script.
- The current setup uses the `t2.micro` EC2 instance type (1GiB memory, 1 vCPU).
- Docker container limits:
  - memory: 640MiB
  - cpu: 0.8 vCPU
- Swapfile Usage
  During EC2 initialization, a 2GB swapfile is created to extend available memory. This helps prevent out-of-memory errors on small instance types (e.g., t2.micro). Swap usage is tuned for minimal impact on performance (vm.swappiness=10). Monitor swap and memory usage to ensure stable operation under load.
- Actual resource usage depends on the number of users. For light usage:
  - memory usage is around 40% (about 250MiB)
  - cpu usage is around 10-50%
Bot commands:
- `reply` - work-in-progress reply
- `markup` - format the GPT response
- `models` - choose the preferred AI model
Project structure:
- `/.github` - GitHub Actions workflows
- `/.run` - run configurations for the IDE
- `/ci` - continuous integration scripts
- `/ci/aws/infra` - Terraform scripts to provision the AWS infrastructure
- `/ci/aws/params` - Terraform scripts to upload env variables into AWS SSM Parameter Store
- `/gradle` - Gradle wrapper files
- `/src` - Java source code
- `.env` - environment variables file
- `aws.env` - environment variables file for AWS SSM Parameter Store
- `compose.yaml` - Docker Compose file to run the bot locally
- `Dockerfile` - Dockerfile to build the bot image
- `build.gradle` - Gradle build file
- `gradle.properties` - Gradle properties file
- TBD
- TBD
- TBD
- init project
- add openai gpt-4.1-mini model support
- add google gemini-2.0-flash model support
- add terraform scripts to provision AWS infrastructure
- add terraform scripts to upload env variables into AWS SSM Parameter Store
- add github actions workflow to deploy the bot on AWS EC2 instance
- add deepseek deepseek-v3.2-exp model support
- Created by webcane
This project is licensed under the MIT License - see the LICENSE file for details