94 changes: 17 additions & 77 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,98 +1,38 @@
<h1 align="center">/Emplode.</h1>
# Emplode

<p align="center">
<a href="https://discord.gg/uZmvdFpSyW">
<img alt="Discord" src="https://img.shields.io/discord/1172527582684651600?logo=discord&style=flat&logoColor=white"/>
</a>
<br><br>
<b>Agent that performs actions on your system by executing code.</b>
</p>
Agent that performs actions on your system by executing code.

<br>

**Emplode** performs actions on your system by executing code locally. It can also serve as an agentic framework for disposable sandbox projects. You can chat with Emplode in your terminal by running `emplode` after installing.

This provides a natural-language interface to your system's general-purpose capabilities:

- Create, edit, and arrange files.
- Control a browser to perform research.
- Plot, clean, and analyze large datasets.
- ...and more.

<br>
Emplode uses a single model: GPT-5 via OpenAI. Set `OPENAI_API_KEY` and start the CLI to chat; Emplode runs code locally through the built-in `run_code` tool.

## Quick Start

```shell
pip install emplode
```

### Terminal

After installation, simply run `emplode`:

```shell
export OPENAI_API_KEY=YOUR_KEY # or set in a .env file
emplode
```

### Python
You should see:

```python
import emplode

emplode.chat("Organize all images in my downloads folder into subfolders by year, naming each folder after the year.") # Executes a single command
emplode.chat() # Starts an interactive chat
```

## Commands

### Change the Model

For `gpt-3.5-turbo`, use fast mode:

```shell
emplode --fast
> Model set to `GPT-5`
```

In Python, you will need to set the model manually:
- Use `-y` to auto-run code without confirmation: `emplode -y`
- Or run programmatically:

```python
emplode.model = "gpt-3.5-turbo"
```

### Running Emplode locally

You can run `emplode` in local mode from the command line to use `Code Llama`:

```shell
emplode --local
```

Or run any Hugging Face model **locally** by using its repo ID (e.g. "tiiuae/falcon-180B"):

```shell
emplode --model nvidia/Llama-3.1-Nemotron-70B-Instruct
emplode --model meta-llama/Llama-3.2-11B-Vision-Instruct
```


### Configuration with .env

Emplode allows you to set default behaviors using a .env file. This provides a flexible way to configure it without changing command-line arguments every time.

Here's a sample .env configuration:

```
EMPLODE_CLI_AUTO_RUN=False
EMPLODE_CLI_FAST_MODE=False
EMPLODE_CLI_LOCAL_RUN=False
EMPLODE_CLI_DEBUG=False
```

```python
import emplode

emplode.chat("Organize all images in my downloads folder into subfolders by year, naming each folder after the year.")
emplode.chat() # interactive
```

You can modify these values in the .env file to change the default behavior of Emplode.
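As a minimal sketch of how these flags are read (mirroring the `os.getenv(...) == 'True'` pattern in this diff's cli.py), note that the comparison is a case-sensitive string match:

```python
import os

# Simulate a value that python-dotenv would normally load from the .env file.
os.environ["EMPLODE_CLI_AUTO_RUN"] = "True"

# Case-sensitive string comparison: "true" or "1" would NOT enable the flag.
auto_run = os.getenv("EMPLODE_CLI_AUTO_RUN", "False") == "True"
fast_mode = os.getenv("EMPLODE_CLI_FAST_MODE", "False") == "True"
print(auto_run, fast_mode)  # → True False
```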
## Requirements

## How Does it Work?
- `OPENAI_API_KEY` must be set. Only OpenAI's API is supported.
- Emplode uses OpenAI SDK v1 with Chat Completions, streaming, and a single function tool `run_code`.

Emplode equips a [function-calling model](https://platform.openai.com/docs/guides/gpt/function-calling) with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
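The new README calls this tool `run_code`. A hypothetical sketch of its tool schema in the Chat Completions function-calling format follows; only the `language` and `code` parameters come from the text above, the remaining field values are assumptions:

```python
# Hypothetical schema for the `run_code` function tool; the `language`/`code`
# parameters come from the README text, other details are assumptions.
run_code_tool = {
    "type": "function",
    "function": {
        "name": "run_code",
        "description": "Execute code on the user's machine and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "language": {"type": "string", "description": "e.g. 'python' or 'javascript'"},
                "code": {"type": "string", "description": "Source code to execute."},
            },
            "required": ["language", "code"],
        },
    },
}
print(run_code_tool["function"]["name"])  # → run_code
```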
## Notes

<br>
- Local/HuggingFace models, Azure, and custom API base options have been removed.
- CLI flags have been simplified; only `-y/--yes` is supported.
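A minimal sketch of the simplified flag parsing these notes describe, mirroring the argparse usage elsewhere in this diff (the exact parser wording is an assumption):

```python
import argparse

# Sketch of a parser that supports only -y/--yes, as the Notes describe.
parser = argparse.ArgumentParser(description="Command Emplode.")
parser.add_argument("-y", "--yes",
                    action="store_true",
                    help="execute code without user confirmation")

args = parser.parse_args(["-y"])
print(args.yes)  # → True
```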
2 changes: 1 addition & 1 deletion emplode/README.md
@@ -1 +1 @@
This file will be updated soon!
Emplode now uses OpenAI GPT-5 exclusively. Set `OPENAI_API_KEY` and run `emplode` to start. Only the `-y/--yes` flag is supported for auto-running code.
6 changes: 4 additions & 2 deletions emplode/__init__.py
@@ -1,4 +1,6 @@
from .emplode import Emplode
import sys

sys.modules["emplode"] = Emplode()
_instance = Emplode()

def chat(message=None, return_messages=False):
return _instance.chat(message=message, return_messages=return_messages)
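The old `__init__.py` replaced the module object in `sys.modules`; the new one delegates through a private instance. A standalone illustration of that delegation pattern, with a hypothetical `_Agent` class standing in for the real `Emplode`:

```python
# Standalone illustration of the delegation pattern in the new __init__.py;
# `_Agent` here is a hypothetical stand-in for the real Emplode class.
class _Agent:
    def chat(self, message=None, return_messages=False):
        reply = f"ran: {message}"
        return [{"role": "assistant", "content": reply}] if return_messages else reply

_instance = _Agent()

def chat(message=None, return_messages=False):
    # Thin module-level wrapper, replacing the old sys.modules["emplode"] trick.
    return _instance.chat(message=message, return_messages=return_messages)

print(chat("hello"))  # → ran: hello
```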
4 changes: 4 additions & 0 deletions emplode/__main__.py
@@ -0,0 +1,4 @@
from .emplode import Emplode
from .cli import cli

cli(Emplode())
Binary file added emplode/__pycache__/__init__.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/__main__.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/cli.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/code_block.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/code_emplode.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/emplode.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/message_block.cpython-310.pyc
Binary file not shown.
Binary file added emplode/__pycache__/utils.cpython-310.pyc
Binary file not shown.
152 changes: 5 additions & 147 deletions emplode/cli.py
@@ -1,164 +1,22 @@
import argparse
import os
from dotenv import load_dotenv
import requests
from packaging import version
import pkg_resources
from rich import print as rprint
from rich.markdown import Markdown
import inquirer

load_dotenv()

def check_for_update():
response = requests.get('https://pypi.org/pypi/emplode/json')
latest_version = response.json()['info']['version']

current_version = pkg_resources.get_distribution("emplode").version

return version.parse(latest_version) > version.parse(current_version)

def cli(emplode):

try:
if check_for_update():
print("A new version is available. Please run 'pip install --upgrade emplode'.")
except Exception:
pass

AUTO_RUN = os.getenv('EMPLODE_CLI_AUTO_RUN', 'False') == 'True'
FAST_MODE = os.getenv('EMPLODE_CLI_FAST_MODE', 'False') == 'True'
LOCAL_RUN = os.getenv('EMPLODE_CLI_LOCAL_RUN', 'False') == 'True'
DEBUG = os.getenv('EMPLODE_CLI_DEBUG', 'False') == 'True'
USE_AZURE = os.getenv('EMPLODE_CLI_USE_AZURE', 'False') == 'True'

parser = argparse.ArgumentParser(description='Command Emplode.')

parser.add_argument('-y',
'--yes',
action='store_true',
default=AUTO_RUN,
help='execute code without user confirmation')
parser.add_argument('-f',
'--fast',
action='store_true',
default=FAST_MODE,
help='use gpt-4o-mini instead of gpt-4o')
parser.add_argument('-l',
'--local',
action='store_true',
default=LOCAL_RUN,
help='run fully local with code-llama')
parser.add_argument(
'--falcon',
action='store_true',
default=False,
help='run fully local with falcon-40b')
parser.add_argument('-d',
'--debug',
action='store_true',
default=DEBUG,
help='prints extra information')

parser.add_argument('--model',
type=str,
help='model name (for OpenAI compatible APIs) or HuggingFace repo',
default="",
required=False)

parser.add_argument('--max_tokens',
type=int,
help='max tokens generated (for locally run models)')
parser.add_argument('--context_window',
type=int,
help='context window in tokens (for locally run models)')

parser.add_argument('--api_base',
type=str,
help='change your api_base to any OpenAI compatible api',
default="",
required=False)

parser.add_argument('--use-azure',
action='store_true',
default=USE_AZURE,
help='use Azure OpenAI Services')

parser.add_argument('--version',
action='store_true',
help='display current Emplode version')

args = parser.parse_args()


if args.version:
print("Emplode", pkg_resources.get_distribution("emplode").version)
return

if args.max_tokens:
emplode.max_tokens = args.max_tokens
if args.context_window:
emplode.context_window = args.context_window

if args.yes:
emplode.auto_run = True
if args.fast:
emplode.model = "gpt-4o-mini"
if args.local and not args.falcon:

rprint('', Markdown("**Emplode** will use `Code Llama` for local execution."), '')

models = {
'7B': 'TheBloke/CodeLlama-7B-Instruct-GGUF',
'13B': 'TheBloke/CodeLlama-13B-Instruct-GGUF',
'34B': 'TheBloke/CodeLlama-34B-Instruct-GGUF'
}

parameter_choices = list(models.keys())
questions = [inquirer.List('param', message="Parameter count (smaller is faster, larger is more capable)", choices=parameter_choices)]
answers = inquirer.prompt(questions)
chosen_param = answers['param']

emplode.model = models[chosen_param]
emplode.local = True


if args.debug:
emplode.debug_mode = True
if args.use_azure:
emplode.use_azure = True
emplode.local = False


if args.model != "":
emplode.model = args.model

if "/" in emplode.model:
emplode.local = True

if args.api_base:
emplode.api_base = args.api_base

if args.falcon or args.model == "tiiuae/falcon-180B":

rprint('', Markdown("**Emplode** will use `Falcon` for local execution."), '')

models = {
'7B': 'TheBloke/CodeLlama-7B-Instruct-GGUF',
'40B': 'YokaiKoibito/falcon-40b-GGUF',
'180B': 'TheBloke/Falcon-180B-Chat-GGUF'
}

parameter_choices = list(models.keys())
questions = [inquirer.List('param', message="Parameter count (smaller is faster, larger is more capable)", choices=parameter_choices)]
answers = inquirer.prompt(questions)
chosen_param = answers['param']

if chosen_param == "180B":
rprint(Markdown("> **WARNING:** To run `Falcon-180B` we recommend at least `100GB` of RAM."))

emplode.model = models[chosen_param]
emplode.local = True


emplode.chat()

def cli_entry():
from .emplode import Emplode
cli(Emplode())