ZeroGPT is a Python library for interacting with AI APIs, providing capabilities for text and image generation.
| Feature | Description |
|---|---|
| 💬 Text Generation | Various models for diverse text outputs |
| 🎨 Image Creation | Generate images from text descriptions |
| 🔓 Uncensored Mode | Less restricted response capabilities |
| ⚡ Optimized Performance | Optimized memory and data handling |
| 📡 Stream Support | Real-time streamed data processing |
| 🔐 Secure Authentication | HMAC-SHA256 request signing |
## Installation

```bash
pip install zerogpt
```

## Usage

```python
from zerogpt import Client

client = Client()
```

### 📝 Simple Text Generation

```python
# Simple request
response = client.send_message("Hi, how are you?")
```

### 📝 Text Generation with Instructions
```python
# Request with instruction
response = client.send_message(
    "Tell me about space",
    instruction="You are an astronomy expert"
)
```

### 🔓 Uncensored Mode
```python
# Using "uncensored" mode
response = client.send_message(
    "Explain a complex topic",
    uncensored=True
)
```

### 🧠 Think Mode (Deep Reasoning)
```python
# Using "think" mode (deeper reasoning)
response = client.send_message(
    "Solve a difficult math problem",
    think=True
)
```

### 💬 Contextual Conversations
```python
# With context
messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"}
]
response = client.send_message(
    messages,
    think=True
)
```

### 🖼️ Create Images
```python
# Create image
result = client.create_image(
    prompt="anime neko girl",
    samples=1,
    resolution=(768, 512),
    seed=-1,
    steps=50
)
```

### 💾 Manage Generated Images
```python
# Get generated image
image = client.get_image(result['data']['request_id'])

# Save image
image.download(['path/to/save/image.png'])

# View image
image.open()
```

### 🛠️ Image to Prompt

```python
from zerogpt.utils.tools import image_to_prompt

# Derive a text prompt from an existing image
resp = image_to_prompt('path/to/image.png')
```

## 🗜️ Working with Dummy Context¹
### 💾 Context Management

```python
from zerogpt.utils.prompt import Dummy

# Create context
dummy = Dummy()
dummy.create(messages=[
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"}
])

# Also possible for image generation
dummy = Dummy()
dummy.create(prompt='neko girl', steps=100)

# Save context
dummy.save("context.bin")

# Load context
dummy.load("context.bin")

# Use instead of messages:
# client.send_message(dummy)
# or
# client.create_image(dummy)
```

### 📋 `send_message()` Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `input` | `str` or `list` | ✅ | Text prompt or list of messages |
| `instruction` | `str` | ❌ | System instruction |
| `think` | `bool` | ❌ | Use a model with deeper reasoning |
| `uncensored` | `bool` | ❌ | Use unrestricted mode |
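A minimal sketch, assuming the optional parameters above can be combined in a single call; the prompt and instruction text are purely illustrative:

```python
# Combine a system instruction with deeper reasoning
response = client.send_message(
    "Compare the orbits of Mars and Venus",
    instruction="You are an astronomy expert",
    think=True
)
```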
### 📋 `create_image()` Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | `str` | ✅ | Description of the desired image |
| `samples` | `int` | ❌ | Number of samples to generate |
| `resolution` | `tuple` | ❌ | Image resolution as (width, height) |
| `seed` | `int` | ❌ | Seed for reproducibility |
| `steps` | `int` | ❌ | Number of generation steps |
| `negative_prompt` | `str` | ❌ | Description of undesired elements |
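A short sketch, assuming the options in the table can be combined in one call; the prompt strings and values are illustrative:

```python
# Two samples at a fixed seed, steering away from unwanted artifacts
result = client.create_image(
    prompt="anime neko girl, detailed background",
    negative_prompt="blurry, low quality",
    samples=2,
    resolution=(768, 512),
    seed=42,
    steps=50
)
```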
| | Security |
|---|---|
| 🔐 | **HMAC-SHA256**: all requests are signed using HMAC-SHA256 for secure data transmission |
| ⏰ | **Timestamp Authentication**: timestamp validation prevents replay attacks |
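Signing is handled by the client automatically. The sketch below is only a generic illustration of HMAC-SHA256 request signing with a timestamp, not ZeroGPT's internal code; the secret value and header names are assumptions.

```python
import hashlib
import hmac
import json
import time

def sign_request(payload: dict, secret: bytes) -> dict:
    # Attach a timestamp, then sign "timestamp.body" with HMAC-SHA256.
    timestamp = str(int(time.time()))
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True)
    signature = hmac.new(secret, f"{timestamp}.{body}".encode(), hashlib.sha256).hexdigest()
    # Hypothetical header names; the real client defines its own.
    return {"X-Timestamp": timestamp, "X-Signature": signature}

headers = sign_request({"input": "Hi"}, secret=b"example-secret")
```

A server that verifies the signature and rejects timestamps outside a short window gets both properties listed above: tamper-proof payloads and protection against replayed requests.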
MIT License
Copyright (c) 2025 RedPiar
Made with ❤️ by RedPiar
### Footnotes

1. Dummy is used to compress context and data in general; it is very useful for systems with low RAM. It can also be saved for even greater memory efficiency!