This project demonstrates the integration of MinIO, an open-source object storage server compatible with the Amazon S3 API, with a Python Flask API for file uploads.
MinIO is a high-performance, distributed object storage server designed for large-scale data infrastructure. It is API-compatible with Amazon S3, making it easy to integrate with existing S3-compatible applications and tools.
In this project, we explore how to set up a MinIO server using Docker and interact with it via a Python Flask API. The Flask API allows users to upload files to the MinIO server using the S3 API.
- MinIO Server: Dockerized MinIO server configured to run locally.
- Flask API: Python Flask API for handling file uploads to MinIO.
- Upload Functionality: Ability to upload files to the MinIO server via the Flask API.
- Customization: Option to specify custom filenames for uploaded files.
First, fill in the credentials in `credentials/.env`:

- `MINIO_ENDPOINT_URL`: your host's IP address.
- `MINIO_ACCESS_KEY_ID`: get this from MinIO under Access Keys.
- `MINIO_SECRET_ACCESS_KEY`: get this from MinIO under Access Keys.
- `MINIO_S3_BUCKET_NAME`: name of the bucket you created in MinIO.
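An example `credentials/.env` might look like this; all values are placeholders and must be replaced with your own:

```shell
# credentials/.env -- placeholder values, replace with your own
MINIO_ENDPOINT_URL=http://192.168.1.10:9000
MINIO_ACCESS_KEY_ID=your-access-key
MINIO_SECRET_ACCESS_KEY=your-secret-key
MINIO_S3_BUCKET_NAME=your-bucket
```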
```shell
docker-compose --env-file credentials/.env up
```
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}
```

Step 4: Create Access Keys and copy them into the env file.
Step 5: Create a bucket, set its Access Policy to public, and add its name to the env file.
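Instead of the MinIO console, the bucket and its public-read policy could also be created programmatically over the S3 API with boto3. This is a hedged sketch: the function names are made up for illustration, and note that `put_bucket_policy` over the S3 API requires a `Principal` field, which console-style policies (like the one shown above) may omit.

```python
import json


def public_read_policy(bucket_name):
    # Public-read policy equivalent to the one shown above, scoped to
    # one bucket and with the Principal field the S3 API requires.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": ["*"]},
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
            }
        ],
    }


def create_public_bucket(endpoint_url, access_key, secret_key, bucket_name):
    # boto3 is imported lazily so the policy helper above can be used
    # without it installed.
    import boto3
    s3 = boto3.client(
        's3',
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    s3.create_bucket(Bucket=bucket_name)
    s3.put_bucket_policy(
        Bucket=bucket_name,
        Policy=json.dumps(public_read_policy(bucket_name)),
    )
```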
```shell
docker-compose --env-file credentials/.env up
```
Uploaded files are then accessible at:

```
http://<minio-server>:9000/<bucket-name>/<file-name>
```

```shell
sudo chmod 755 /var/www/sockets
sudo chown www-data:www-data /var/www/sockets/minios3.sock
```

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:7000; # Change to the IP address and port of your Gunicorn server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
    }
}
```
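The Nginx config above proxies to a Gunicorn server on port 7000. A typical way to start it might look like the following; `app:app` (module name and Flask application object) is an assumption here and should be adjusted to the project's actual entry point:

```shell
# Assumed module:object target; adjust app:app to your Flask entry point
gunicorn --workers 2 --bind 127.0.0.1:7000 app:app
```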
`POST /upload`

Base URL: `https://your.domain.address`
This endpoint is designed for uploading files to a MinIO S3 bucket. It requires a file part in the request and optionally accepts a custom filename.
Parameters:

- `file`: file to be uploaded (required).
- `filename`: custom filename for the file (optional).

Responses:

- `200 OK`: file successfully uploaded.
- `400 Bad Request`: missing file part in the request, or no file selected.
- `500 Internal Server Error`: upload to MinIO failed.
Python usage example:

```python
import requests

url = 'http://127.0.0.1:7000/upload'

# Open the file in binary mode; the 'file' field is required by the endpoint
with open('./photo.png', 'rb') as f:
    response = requests.post(url, files={'file': f})
print(response.text)
```

Node.js usage example:

```javascript
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

async function uploadFileToApi(apiUrl, filePath, customFilename = null) {
    const form = new FormData();
    const filename = customFilename || filePath.split('/').pop();
    form.append('file', fs.createReadStream(filePath), filename);
    form.append('filename', filename);
    try {
        const response = await axios.post(apiUrl, form, {
            headers: {
                ...form.getHeaders()
            }
        });
        return response.data;
    } catch (error) {
        // error.response is undefined for network errors, so guard against it
        return error.response ? error.response.data : error.message;
    }
}

// Usage
const apiUrl = 'https://your.domain.address/upload';
const filePath = './photo.png';
const customFilename = 'xxxxxx.png';

uploadFileToApi(apiUrl, filePath, customFilename)
    .then(data => console.log(data))
    .catch(err => console.error(err));
```