Merged
45 changes: 20 additions & 25 deletions README.md
@@ -27,32 +27,22 @@
- Calls the locally running Ollama service (or another AI interface)
- Sends the generated reply back over the same network

### How to Start

1. **Start the Ollama service**:
```bash
ollama serve
```
> This starts the Ollama background service, listening on port `11434` by default.

2. (Optional) Pull a model in advance:
```bash
ollama pull qwen2.5:7b
```
Or use another lightweight model such as `phi3` or `tinyllama`.

3. Run the AI node program:
```bash
python main.py
```

> Note: Ollama automatically downloads and loads the model on the first request (if it has not been pulled in advance). Make sure the device has sufficient storage and memory.
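Under the hood, the program in step 3 only needs Ollama's local HTTP API. Below is a minimal sketch of such a request, assuming the default port and the example model name used above (`qwen2.5:7b`); it is an illustration, not the project's actual client code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(prompt: str, model: str = "qwen2.5:7b") -> bytes:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")


def ask_ollama(prompt: str, model: str = "qwen2.5:7b") -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full reply under "response"
        return json.loads(resp.read())["response"]
```

Because `stream` is disabled, `ask_ollama("hello")` blocks until the model has generated the entire reply, which keeps the serial-network side simple.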

## Technical Specifications

| Connection | Serial port |
| ----- | ----- |
| Memory | Not implemented |
| LLM Tool | Not implemented |
| Language support | Chinese, English, Japanese, French, Russian, Korean, Spanish, German |
| Service providers | OpenAI (and similar, e.g., DeepSeek, Ollama), WebSockets, FastAPI |

### Current Configuration Example

```json
{
"platform": "ollama",
"localization":{
"language": "zh_CN"
},
"api_keys": {
"openai": "your-openai-api-key",
"deepseek": "your-deepseek-api-key",
@@ -84,7 +74,7 @@
>
>If you are using OpenRouter, see [README_OPENROUTER](README_OPENROUTER.md)
>
>To integrate with `AstrBot`, you can use the [AstrBot Adapter](https://github.com/xiewoc/astrbot_plugin_adapter_meshbot)
>To integrate with `AstrBot`, you can use the [AstrBot Adapter](https://github.com/xiewoc/astrbot_plugin_adapter_meshbot) (*Recommended*)

It runs perfectly well on a Raspberry Pi + TTGO T-Beam, letting you chat on the go.

@@ -100,6 +90,11 @@
```bash
ollama serve
```

> This starts the Ollama background service, listening on port `11434` by default.
>
> Note: Ollama automatically downloads and loads the model on the first request (if it has not been pulled in advance). Make sure the device has sufficient storage and memory.

5. Run the main program:
```bash
python main.py
@@ -111,16 +106,16 @@

## 🎈 Current Version

V 1.0.3
V 1.0.3 - pre 1

- Refactored the folder structure
- Added adapters for `Gemini`, `SiliconFlow`, `Claude`, and `Fastapi`
- Refactored `config.json`
- Added `localization`, supporting `en`, `zh_CN`, `ru`, `jp`, `ko`, `es`, `de`

## 🌱 Future Ideas

- Introduce context memory for more coherent conversations
- Add a WebUI
- Add LLM Tools
- Optimize the `api` folder

## 🙏 Final Words

91 changes: 43 additions & 48 deletions README_EN.md
@@ -1,58 +1,48 @@
<div align="center">

[**简体中文**](README.md) | **English**
**English** | [**简体中文**](README.md)

</div>

# Mesh AI Assistant

A small AI node that quietly resides in the Mesh network.
You send it a message, and it replies with a sentence.
A small AI node quietly residing in the Mesh network.
You send it a message, and it replies.

Unobtrusive, offline, and serverless.
Just for those times when you're in the mountains, in the wild, or somewhere with no signal, and you can still ask, "What do you think?" and receive an answer.
Just for those times when you're in the mountains, the wilderness, or somewhere with no signal, where you can still ask, "What do you think?" and receive an answer.

## 🧩 What Can It Do?

- Receive private messages sent to it (peer-to-peer messages)
- Generate short replies using a local AI model
- Send the response back the same way, as if it's always online waiting for you
- Receive private messages sent to it (peer-to-peer)
- Generate short replies using a local AI model
- Send the response back through the same path, as if it's always online waiting for you

All processing is done locally, ensuring privacy and control.
All processing is done locally, ensuring privacy and control.

## ⚙️ Technical Implementation

- Uses Python to listen for serial port messages from Meshtastic devices
- Extracts content when a private message for this node is received
- Calls a locally running Ollama service (or other AI interfaces)
- Sends the generated reply back through the same network
- Uses Python to listen for serial port messages from Meshtastic devices
- Extracts content when a private message for this node is received
- Calls a locally running Ollama service (or other AI interface)
- Sends the generated reply back through the same network
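The four bullets above can be sketched as a small callback. The packet fields below follow the shape of decoded text packets in the `meshtastic` Python library, and `ask_ai` is a hypothetical stand-in for the model call; treat this as an illustration of the flow, not the project's actual handler:

```python
BROADCAST_ADDR = 0xFFFFFFFF  # Meshtastic's broadcast destination address


def is_direct_message(packet: dict, my_node_num: int) -> bool:
    """True if the packet is a private (non-broadcast) message addressed to this node."""
    to = packet.get("to")
    return to == my_node_num and to != BROADCAST_ADDR


def extract_text(packet: dict) -> str:
    """Pull the text payload out of a decoded packet, or return an empty string."""
    return packet.get("decoded", {}).get("text", "")


# Wiring this up with the meshtastic library (not imported here, since it
# requires an attached device) would look roughly like:
#
#   import meshtastic.serial_interface
#   from pubsub import pub
#
#   def on_receive(packet, interface):
#       if is_direct_message(packet, interface.myInfo.my_node_num):
#           reply = ask_ai(extract_text(packet))  # hypothetical AI backend call
#           interface.sendText(reply, destinationId=packet["from"])
#
#   pub.subscribe(on_receive, "meshtastic.receive.text")
#   iface = meshtastic.serial_interface.SerialInterface()
```

Filtering on the destination address is what keeps the node quiet: broadcasts pass through untouched, and only direct messages trigger a model call.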

### How to Start

1. **Start the Ollama Service**:
```bash
ollama serve
```
> This starts the Ollama background service, listening on port `11434` by default.

2. (Optional) Download a model in advance:
```bash
ollama pull qwen2.5:7b
```
Or use other lightweight models like `phi3` or `tinyllama`.

3. Run the AI node program:
```bash
python main.py
```

> Note: Ollama will automatically download and load the model on the first request (if not pulled in advance). Ensure your device has sufficient storage and memory.

## Technical Specifications

| Connection | Serial Port |
| ----- | -----|
| Memory | Not Implemented |
| LLM Tools | Not Implemented |
| Language Support | Chinese, English, Japanese, French, Russian, Korean, Spanish, German |
| Service Providers | OpenAI (and similar, e.g., DeepSeek, Ollama), WebSockets, FastAPI |

### Current Configuration Example

```json
{
"platform": "ollama",
"localization":{
"language": "en"
},
"api_keys": {
"openai": "your-openai-api-key",
"deepseek": "your-deepseek-api-key",
@@ -79,12 +69,12 @@ All processing is done locally, ensuring privacy and control.
}
```
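For illustration, reading a configuration file like the example above could look like the sketch below. The project's actual loader lives in `meshbot.config.config_loader`, so the defaults chosen here are assumptions:

```python
import json
from pathlib import Path


def load_config(path: str = "config.json") -> dict:
    """Load config.json and fill in defaults for any missing top-level keys."""
    defaults = {
        "platform": "ollama",               # assumed default backend
        "localization": {"language": "en"},  # assumed default language
        "api_keys": {},
    }
    config = json.loads(Path(path).read_text(encoding="utf-8"))
    # Keys present in the file override the defaults
    return {**defaults, **config}
```

Merging over a defaults dict means a minimal `config.json` containing only `"platform"` still yields a complete configuration at runtime.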

> [!IMPORTANT]
> Please replace `your-api-key` with your actual API key when using services like `openai`, `deepseek`, etc.
>[!IMPORTANT]
>Please replace `your-api-key` with your actual API key when using services like `openai`, `deepseek`, etc.
>
> If you are using OpenRouter, please refer to [README_OPENROUTER](README_OPENROUTER.md)
>If you are using OpenRouter, please refer to [README_OPENROUTER](README_OPENROUTER.md)
>
> To integrate with `AstrBot`, you can use the [AstrBot Adapter](https://github.com/xiewoc/astrbot_plugin_adapter_meshbot)
>To integrate with `AstrBot`, you can use the [AstrBot Adapter](https://github.com/xiewoc/astrbot_plugin_adapter_meshbot) (*Recommended*)

It can easily run on a Raspberry Pi + TTGO T-Beam, allowing you to chat on the go.

@@ -100,37 +90,42 @@ It can easily run on a Raspberry Pi + TTGO T-Beam, allowing you to chat on the g
```bash
ollama serve
```

> This starts the Ollama background service, listening on port `11434` by default.
>
> Note: Ollama will automatically download and load the model on the first request (if not pulled beforehand). Ensure the device has sufficient storage and memory.

5. Run the main program:
```bash
python main.py
```
6. Send a private message to it from another device and wait for a reply.
6. Send a private message to it from another device and wait for the reply.

> [!IMPORTANT]
> Please pay attention to the runtime path when executing the main program; it must be run from within the project folder.
>[!IMPORTANT]
>Please pay attention to the working directory when running the main program; it must be within the project folder.

## 🎈 Current Version

V 1.0.3
V 1.0.3 - pre 1

- Refactored the folder structure
- Added adapters for `Gemini`, `SiliconFlow`, `Claude`, and `Fastapi`
- Refactored `config.json`
- Added `localization`, supporting `en`, `zh_CN`, `ru`, `jp`, `ko`, `es`, `de`

## 🌱 Future Ideas

- Introduce context memory for more coherent conversations
- Add a WebUI
- Introduce context memory for more coherent conversations
- Add a WebUI
- Add LLM Tools
- Optimize the `api` folder

## 🙏 Final Words

This project isn't meant to replace anyone, nor is it about creating an overly intelligent AI.
It's just about leaving a voice that can respond to you in those quiet places.
It's simply about leaving a voice that can respond to you in those quiet places.

If you also appreciate this concept, you're welcome to help improve it.

Simultaneously, thanks to the developers who have contributed to this project; we appreciate your support and efforts.
Special thanks to the developers who have contributed to this project; we appreciate your support and dedication.

May your Meshtastic node run stably in the mountains and wilds, where every reply is like a quietly lit signal lamp. 📡💡
May your Meshtastic node run stably out in the wilds, where every reply is like a quietly lit signal lamp. 📡💡

Happy Exploring! ✨
3 changes: 3 additions & 0 deletions config.json
@@ -1,5 +1,8 @@
{
"platform": "ollama",
"localization":{
"language": "zh_CN"
},
"api_keys": {
"openai": "your-openai-api-key",
"deepseek": "your-deepseek-api-key",
10 changes: 6 additions & 4 deletions main.py
@@ -4,9 +4,11 @@
import os
from pathlib import Path

from meshbot.core.bot import MeshAIBot
from meshbot.core.bot import MeshBot
from meshbot.handlers.signal_handlers import setup_signal_handlers
from meshbot.config.config_loader import load_config
from meshbot.config.config_loader import create_example_config
from meshbot.utils.localize import i18n

# Logging configuration
logging.basicConfig(
@@ -15,7 +17,6 @@
)
logger = logging.getLogger(__name__)


def check_config():
"""Check whether the config file exists; create an example config if it does not."""
config_path = Path(__file__).parent / "config.json"
@@ -33,12 +34,13 @@ async def main() -> None:
if not check_config():
return

bot = MeshAIBot()
load_config()
bot = MeshBot()
setup_signal_handlers(bot)
try:
await bot.run()
except Exception as e:
logger.error(f"💥 Bot runtime error: {e}")
logger.error(i18n.gettext('bot_running_error',err = e))
finally:
await bot.shutdown()
