Production-ready image processing APIs: Background removal (BiRefNet), Image upscaling (Real-ESRGAN 4x), and Face restoration (GFPGAN). GPU-accelerated on NVIDIA A10, serving at scale.
Brainiall Image API provides GPU-accelerated image processing powered by state-of-the-art deep learning models. Each endpoint accepts standard image formats via multipart upload or base64 encoding and returns processed results.
Base URL: https://apim-ai-apis.azure-api.net/v1/image
Key Features:
- GPU-accelerated (NVIDIA A10 24GB)
- Three specialized models: BiRefNet, Real-ESRGAN, GFPGAN
- Multipart upload or base64 input
- Sub-second processing for most images
- 4-40x cheaper than competing services
Use any one of these headers:
| Method | Header |
|---|---|
| Bearer Token | Authorization: Bearer YOUR_KEY |
| API Key | api-key: YOUR_KEY |
| Subscription Key | Ocp-Apim-Subscription-Key: YOUR_KEY |
Get your API key at brainiall.com.
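All three schemes carry the same key, so a client can pick whichever fits its HTTP stack. As a sketch, a small helper (`auth_headers` is illustrative, not part of any SDK) that builds the header dict for each scheme from the table above:

```python
def auth_headers(key: str, scheme: str = "bearer") -> dict:
    """Build an auth header dict for any of the three supported schemes."""
    schemes = {
        "bearer": {"Authorization": f"Bearer {key}"},
        "api-key": {"api-key": key},
        "subscription": {"Ocp-Apim-Subscription-Key": key},
    }
    return schemes[scheme]

# e.g. requests.post(url, headers=auth_headers("YOUR_KEY", "subscription"), ...)
```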
Remove the background from any image using BiRefNet, a state-of-the-art bilateral reference network.
Multipart upload:

```python
import requests

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/remove-background",
    headers={"Authorization": "Bearer YOUR_KEY"},
    files={"file": open("photo.jpg", "rb")}
)
with open("result.png", "wb") as f:
    f.write(response.content)
print("Background removed! Saved to result.png")
```

Base64 input:
```python
import requests
import base64

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/remove-background/base64",
    headers={
        "Authorization": "Bearer YOUR_KEY",
        "Content-Type": "application/json"
    },
    json={"image": image_b64}
)
result = response.json()
output_bytes = base64.b64decode(result["image"])
with open("result.png", "wb") as f:
    f.write(output_bytes)
print("Background removed!")
```

JavaScript (multipart):
```javascript
import fs from "fs";

const formData = new FormData();
// Pass a filename so the multipart part is a proper file field
formData.append("file", new Blob([fs.readFileSync("photo.jpg")]), "photo.jpg");

const response = await fetch(
  "https://apim-ai-apis.azure-api.net/v1/image/remove-background",
  {
    method: "POST",
    headers: { Authorization: "Bearer YOUR_KEY" },
    body: formData,
  }
);
const buffer = await response.arrayBuffer();
fs.writeFileSync("result.png", Buffer.from(buffer));
console.log("Background removed!");
```

curl:
```bash
curl -X POST https://apim-ai-apis.azure-api.net/v1/image/remove-background \
  -H "Authorization: Bearer YOUR_KEY" \
  -F "file=@photo.jpg" \
  -o result.png
echo "Background removed! Saved to result.png"
```

Upscale images by 4x using Real-ESRGAN with the x4plus model for sharp, detailed results.
Multipart upload:

```python
import requests

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/upscale",
    headers={"Authorization": "Bearer YOUR_KEY"},
    files={"file": open("small_image.jpg", "rb")}
)
with open("upscaled_4x.png", "wb") as f:
    f.write(response.content)
print("Image upscaled 4x! Saved to upscaled_4x.png")
```

Base64 input:
```python
import requests
import base64

with open("small_image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/upscale/base64",
    headers={
        "Authorization": "Bearer YOUR_KEY",
        "Content-Type": "application/json"
    },
    json={"image": image_b64}
)
result = response.json()
output_bytes = base64.b64decode(result["image"])
with open("upscaled_4x.png", "wb") as f:
    f.write(output_bytes)
print(f"Upscaled from {result.get('original_size')} to {result.get('upscaled_size')}")
```

curl:
```bash
curl -X POST https://apim-ai-apis.azure-api.net/v1/image/upscale \
  -H "Authorization: Bearer YOUR_KEY" \
  -F "file=@small_image.jpg" \
  -o upscaled_4x.png
```

Restore and enhance faces in images using GFPGAN v1.3. Fixes blurry faces, compression artifacts, and aging damage.
Multipart upload:

```python
import requests

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/restore-face",
    headers={"Authorization": "Bearer YOUR_KEY"},
    files={"file": open("blurry_face.jpg", "rb")}
)
with open("restored_face.png", "wb") as f:
    f.write(response.content)
print("Face restored! Saved to restored_face.png")
```

Base64 input:
```python
import requests
import base64

with open("blurry_face.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "https://apim-ai-apis.azure-api.net/v1/image/restore-face/base64",
    headers={
        "Authorization": "Bearer YOUR_KEY",
        "Content-Type": "application/json"
    },
    json={"image": image_b64}
)
result = response.json()
output_bytes = base64.b64decode(result["image"])
with open("restored_face.png", "wb") as f:
    f.write(output_bytes)
print("Face restored!")
```

JavaScript (base64):
```javascript
import fs from "fs";

const imageBuffer = fs.readFileSync("blurry_face.jpg");
const imageB64 = imageBuffer.toString("base64");

const response = await fetch(
  "https://apim-ai-apis.azure-api.net/v1/image/restore-face/base64",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_KEY",
    },
    body: JSON.stringify({ image: imageB64 }),
  }
);
const result = await response.json();
const outputBuffer = Buffer.from(result.image, "base64");
fs.writeFileSync("restored_face.png", outputBuffer);
console.log("Face restored!");
```

curl:
```bash
curl -X POST https://apim-ai-apis.azure-api.net/v1/image/restore-face \
  -H "Authorization: Bearer YOUR_KEY" \
  -F "file=@blurry_face.jpg" \
  -o restored_face.png
```

Check GPU status, VRAM usage, and model loading state.
```bash
curl -s https://apim-ai-apis.azure-api.net/v1/image/health \
  -H "Authorization: Bearer YOUR_KEY" | python3 -m json.tool
# {
#   "status": "healthy",
#   "gpu": {
#     "name": "NVIDIA A10",
#     "vram_total_mb": 24576,
#     "vram_used_mb": 1422,
#     "vram_free_mb": 23154
#   },
#   "models": {
#     "birefnet": "loaded",
#     "realesrgan": "loaded",
#     "gfpgan": "loaded"
#   }
# }
```

```python
import requests

response = requests.get(
    "https://apim-ai-apis.azure-api.net/v1/image/health",
    headers={"Authorization": "Bearer YOUR_KEY"}
)
health = response.json()
print(f"Status: {health['status']}")
print(f"GPU: {health['gpu']['name']}")
print(f"VRAM: {health['gpu']['vram_used_mb']}MB / {health['gpu']['vram_total_mb']}MB")
for model, status in health['models'].items():
    print(f"  {model}: {status}")
```

| Service | Background Removal | Upscaling | Face Restoration |
|---|---|---|---|
| Brainiall | $0.005/image | $0.003/image | $0.005/image |
| remove.bg | $0.195/image | N/A | N/A |
| Photoroom | $0.02/image | N/A | N/A |
| imgupscaler.com | N/A | $0.10/image | N/A |
| Remini | N/A | N/A | $0.08/image |
Bottom line: Brainiall Image API is 4-40x cheaper than competing services.
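The multiples are easy to verify from the table; a quick check of each per-image price ratio:

```python
# Per-image prices (USD) taken from the comparison table above
ours = {"background": 0.005, "upscale": 0.003, "face": 0.005}
competitors = {
    "remove.bg (background)": (0.195, ours["background"]),
    "Photoroom (background)": (0.02, ours["background"]),
    "imgupscaler.com (upscale)": (0.10, ours["upscale"]),
    "Remini (face)": (0.08, ours["face"]),
}
for name, (theirs, mine) in competitors.items():
    print(f"{name}: {theirs / mine:.0f}x more expensive")
```

The ratios range from 4x (Photoroom) up to 39x (remove.bg), which is where the 4-40x figure comes from.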
Process multiple images efficiently:

```python
import requests
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import time

BASE_URL = "https://apim-ai-apis.azure-api.net/v1/image"
HEADERS = {"Authorization": "Bearer YOUR_KEY"}

def remove_background(input_path: str, output_path: str) -> bool:
    """Remove background from a single image."""
    with open(input_path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/remove-background",
            headers=HEADERS,
            files={"file": f}
        )
    if response.status_code == 200:
        with open(output_path, "wb") as out:
            out.write(response.content)
        return True
    return False

def upscale_image(input_path: str, output_path: str) -> bool:
    """Upscale a single image 4x."""
    with open(input_path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/upscale",
            headers=HEADERS,
            files={"file": f}
        )
    if response.status_code == 200:
        with open(output_path, "wb") as out:
            out.write(response.content)
        return True
    return False

def restore_face(input_path: str, output_path: str) -> bool:
    """Restore faces in a single image."""
    with open(input_path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/restore-face",
            headers=HEADERS,
            files={"file": f}
        )
    if response.status_code == 200:
        with open(output_path, "wb") as out:
            out.write(response.content)
        return True
    return False

# Process a directory of images
input_dir = Path("input_images")
output_dir = Path("output_images")
output_dir.mkdir(exist_ok=True)

def process_image(filename: str):
    input_path = str(input_dir / filename)
    output_path = str(output_dir / f"nobg_{Path(filename).stem}.png")
    success = remove_background(input_path, output_path)
    return filename, success

if input_dir.exists():
    image_files = list(input_dir.glob("*.jpg")) + list(input_dir.glob("*.png"))
    start = time.time()
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = list(pool.map(lambda f: process_image(f.name), image_files))
    elapsed = time.time() - start
    success_count = sum(1 for _, s in results if s)
    print(f"Processed {success_count}/{len(results)} images in {elapsed:.1f}s")
```

Apply all three operations in sequence:
```python
import requests

BASE_URL = "https://apim-ai-apis.azure-api.net/v1/image"
HEADERS = {"Authorization": "Bearer YOUR_KEY"}

def full_pipeline(input_path: str, output_prefix: str):
    """
    Full image processing pipeline:
    1. Remove background
    2. Upscale 4x
    3. Restore faces
    """
    # Step 1: Remove background
    with open(input_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/remove-background", headers=HEADERS, files={"file": f})
    nobg_path = f"{output_prefix}_nobg.png"
    with open(nobg_path, "wb") as f:
        f.write(resp.content)
    print(f"Step 1: Background removed -> {nobg_path}")

    # Step 2: Upscale the result
    with open(nobg_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/upscale", headers=HEADERS, files={"file": f})
    upscaled_path = f"{output_prefix}_upscaled.png"
    with open(upscaled_path, "wb") as f:
        f.write(resp.content)
    print(f"Step 2: Upscaled 4x -> {upscaled_path}")

    # Step 3: Restore faces in the upscaled result
    with open(upscaled_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/restore-face", headers=HEADERS, files={"file": f})
    restored_path = f"{output_prefix}_restored.png"
    with open(restored_path, "wb") as f:
        f.write(resp.content)
    print(f"Step 3: Face restored -> {restored_path}")

    return nobg_path, upscaled_path, restored_path

# Usage
# paths = full_pipeline("portrait.jpg", "output/portrait")
```

```python
import base64
import requests
from pathlib import Path

BASE_URL = "https://apim-ai-apis.azure-api.net/v1/image"
HEADERS = {
    "Authorization": "Bearer YOUR_KEY",
    "Content-Type": "application/json"
}

def image_to_base64(path: str) -> str:
    """Convert image file to base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def base64_to_image(b64_string: str, output_path: str):
    """Save base64 string as image file."""
    with open(output_path, "wb") as f:
        f.write(base64.b64decode(b64_string))

def process_base64(endpoint: str, image_path: str, output_path: str):
    """Process an image via base64 endpoint."""
    b64 = image_to_base64(image_path)
    response = requests.post(
        f"{BASE_URL}/{endpoint}/base64",
        headers=HEADERS,
        json={"image": b64}
    )
    result = response.json()
    base64_to_image(result["image"], output_path)
    return result

# Usage examples
# process_base64("remove-background", "photo.jpg", "nobg.png")
# process_base64("upscale", "small.jpg", "upscaled.png")
# process_base64("restore-face", "blurry.jpg", "restored.png")
```

Use Brainiall Image via MCP (Model Context Protocol) in Claude Desktop, Cursor, or any MCP client.
```json
{
  "mcpServers": {
    "brainiall-image": {
      "url": "https://apim-ai-apis.azure-api.net/mcp/image/mcp",
      "headers": {
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}
```

| Tool | Description |
|---|---|
| `remove_background` | Remove background from image (BiRefNet) |
| `upscale_image` | Upscale image 4x (Real-ESRGAN) |
| `restore_face` | Restore and enhance faces (GFPGAN) |
| `check_image_service` | Health check with GPU status |
```json
{
  "mcpServers": {
    "brainiall-image-apify": {
      "url": "https://n3nr6htYhIkL7dOhK.apify.actor/mcp?token=YOUR_APIFY_TOKEN"
    }
  }
}
```

```python
import requests

def safe_image_call(endpoint: str, image_path: str, output_path: str) -> bool:
    """Process an image with comprehensive error handling."""
    try:
        with open(image_path, "rb") as f:
            response = requests.post(
                f"https://apim-ai-apis.azure-api.net/v1/image/{endpoint}",
                headers={"Authorization": "Bearer YOUR_KEY"},
                files={"file": f},
                timeout=30
            )
        if response.status_code == 200:
            with open(output_path, "wb") as out:
                out.write(response.content)
            print(f"Success: {output_path}")
            return True
        elif response.status_code == 400:
            error = response.json()
            print(f"Bad request: {error.get('detail', 'Unknown error')}")
            print("Check: image format (JPG/PNG), minimum 10x10 pixels")
        elif response.status_code == 401:
            print("Invalid API key")
        elif response.status_code == 413:
            print("Image too large (max 10MB)")
        elif response.status_code == 429:
            print("Rate limited - retry after a moment")
        elif response.status_code == 503:
            print("GPU temporarily unavailable - retry in a few seconds")
        else:
            print(f"Error: {response.status_code}")
        return False
    except requests.exceptions.Timeout:
        print("Request timed out (GPU may be busy)")
        return False
    except FileNotFoundError:
        print(f"File not found: {image_path}")
        return False

# Usage
# safe_image_call("remove-background", "photo.jpg", "result.png")
```

| Model | Task | VRAM | Architecture |
|---|---|---|---|
| BiRefNet | Background removal | ~850 MB | Bilateral Reference Network |
| Real-ESRGAN x4plus | 4x upscaling | ~300 MB | Enhanced SRGAN |
| GFPGAN v1.3 | Face restoration | ~260 MB | Generative Facial Prior GAN |
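The 429 (rate limited) and 503 (GPU temporarily unavailable) responses described in the error-handling example are transient, so a backoff wrapper is worth pairing with any batch workload. A minimal sketch (the `with_retry` helper is hypothetical, not part of the API):

```python
import time
import random

def with_retry(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Invoke `call` (a zero-arg function returning a response object),
    retrying with exponential backoff plus jitter on 429/503."""
    for attempt in range(max_attempts):
        response = call()
        if response.status_code not in (429, 503):
            return response
        # Backoff: base_delay, 2*base_delay, 4*base_delay, ... plus jitter
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay * 0.5))
    return response

# Usage (sketch): re-open the file inside the lambda so each retry re-reads it
# response = with_retry(lambda: requests.post(url, headers=HEADERS,
#                                             files={"file": open("photo.jpg", "rb")}))
```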
- GPU: NVIDIA A10 24GB VRAM (production)
- High availability: 2x A10 VMs across availability zones
- Load balanced with health probes
- Automatic VRAM management (cleanup between requests)
- Formats: JPEG, PNG, WebP, BMP, TIFF
- Minimum size: 10x10 pixels
- Maximum file size: 10MB
- Base64: Standard base64 encoding (no data URI prefix)
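These limits can be checked client-side before spending a request. A small validator sketch using the constraints listed above (the helper names are illustrative; the pixel-dimension check is omitted since it needs an image library):

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp", ".bmp", ".tif", ".tiff"}
MAX_BYTES = 10 * 1024 * 1024  # 10MB limit

def validate_upload(path: str) -> list:
    """Return a list of problems; an empty list means the file looks uploadable."""
    problems = []
    p = Path(path)
    if p.suffix.lower() not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format: {p.suffix}")
    if not p.exists():
        problems.append("file not found")
    elif p.stat().st_size > MAX_BYTES:
        problems.append("file exceeds 10MB limit")
    return problems

def strip_data_uri(b64: str) -> str:
    """The base64 endpoints expect raw base64 with no data URI prefix."""
    return b64.split(",", 1)[1] if b64.startswith("data:") else b64
```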
- Website: brainiall.com
- Get API Key: brainiall.com
- LLM Gateway: github.com/fasuizu-br/brainiall-llm-gateway
- NLP APIs: github.com/fasuizu-br/brainiall-nlp-api
- Speech AI: github.com/fasuizu-br/speech-ai-examples
- MCP Registry: registry.modelcontextprotocol.io
License: MIT