
# Create Dual AI Feature Module (HTTP + Socket.IO Support) #15

@abhishek-nexgen-dev

## 📖 Description

This issue proposes creating a powerful and developer-friendly AI feature module that supports both traditional HTTP APIs and real-time responses using Socket.IO.

Developers should be able to:

  • Plug-and-play the module using a single AiController or AiSocket

  • Use either HTTP (for classic POST requests) or Socket.IO (for live, streaming responses)

  • Extend this module easily to use OpenAI, Gemini, Claude, or any custom model

  • Reuse the same AiService logic across both HTTP and Socket.IO

  • Use this module in chatbots, assistants, AI tools, summarizers, etc.

## 🧱 Why This Is Important

  • ✅ HTTP API is perfect for static responses like summaries or code generation

  • ⚡ Socket.IO enables real-time streaming for chat interfaces, assistants, and editors

  • 🔄 Reusability: The service layer can be used in both types of communication

  • 🧩 Highly flexible: Apps can choose what to use based on UX needs

  • 🎯 Aligns with FastKit’s modular, class-based architecture

  • 🟢 Difficulty Level: Advanced

## Required Skills

  • Express.js + TypeScript

  • Socket.IO integration

  • Working with async/streamed responses

  • External AI APIs (e.g., OpenAI, Gemini)

  • Modular API architecture and controllers

## ✅ Tasks

### 1️⃣ File & Folder Structure (Standardized)

```
src/
└── features/
    └── Ai/
        └── v1/
            ├── Ai.controller.ts      # Handles standard HTTP endpoints
            ├── Ai.service.ts         # Core AI logic used by both socket and HTTP
            ├── Ai.socket.ts          # Socket.IO connection and events
            ├── Ai.validators.ts      # Prompt validation
            ├── Ai.constant.ts        # Error and success messages
            ├── Ai.demo.ts            # Example routes for HTTP usage
            └── README.md             # Setup and usage guide
```
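The structure above lists Ai.validators.ts without showing its contents. A minimal sketch is below — the 10-character minimum mirrors the test-case idea later in this issue, while the 4000-character cap and the `ValidationResult` shape are illustrative choices, not fixed requirements:

```typescript
// Ai.validators.ts — prompt validation shared by the HTTP controller and the socket handler.
// The length limits here are illustrative defaults; adjust to your provider's constraints.

export interface ValidationResult {
  ok: boolean;
  error?: string;
}

export function validatePrompt(prompt: unknown): ValidationResult {
  if (typeof prompt !== 'string') {
    return { ok: false, error: 'Prompt must be a string' };
  }
  const trimmed = prompt.trim();
  if (trimmed.length <= 10) {
    return { ok: false, error: 'Prompt must be longer than 10 characters' };
  }
  if (trimmed.length > 4000) {
    return { ok: false, error: 'Prompt exceeds the 4000 character limit' };
  }
  return { ok: true };
}
```

Because the function is pure, both `AiController` and the socket handler can call it before touching the AI provider.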

### 2️⃣ Ai.service.ts — Shared Logic for Both HTTP & Socket.IO

```typescript
export class AiService {
  static async generateCompletion(prompt: string): Promise<string> {
    // Connect to an AI API such as OpenAI here, or return a mock response
    return `AI Response to: ${prompt}`;
  }

  static async summarize(text: string): Promise<string> {
    return `Summary: ${text.slice(0, 100)}...`;
  }

  // Simulates token-by-token streaming; a real provider would yield tokens from its API
  static async *streamTokens(prompt: string): AsyncGenerator<string> {
    const tokens = ['Hello', ',', ' this', ' is', ' a', ' test', '.'];
    for (const token of tokens) {
      await new Promise((r) => setTimeout(r, 300));
      yield token;
    }
  }
}
```

### 3️⃣ Ai.controller.ts — Standard HTTP API

```typescript
import { Request, Response } from 'express';
import { AiService } from './Ai.service';
// SendResponse is assumed to be FastKit's shared response helper.

export class AiController {
  async generateCompletion(req: Request, res: Response) {
    const { prompt } = req.body;
    const result = await AiService.generateCompletion(prompt);
    return SendResponse.success(res, { result });
  }

  async summarize(req: Request, res: Response) {
    const { text } = req.body;
    const summary = await AiService.summarize(text);
    return SendResponse.success(res, { summary });
  }
}
```


### 4️⃣ Ai.socket.ts — Real-Time with Socket.IO

```typescript
import { Server, Socket } from 'socket.io';
import { AiService } from './Ai.service';

export const initAiSocket = (io: Server) => {
  io.of('/ai').on('connection', (socket: Socket) => {
    console.log('Client connected to AI socket');

    socket.on('prompt', async ({ prompt }) => {
      // streamTokens returns an async generator, so no await is needed here
      for await (const token of AiService.streamTokens(prompt)) {
        socket.emit('token', token);
      }

      socket.emit('done');
    });
  });
};
```


### 5️⃣ server.ts — Integration into Backend

```typescript
import express from 'express';
import http from 'http';
import { Server } from 'socket.io';
import { initAiSocket } from './features/Ai/v1/Ai.socket';
import aiRoutes from './features/Ai/v1/Ai.demo';

const app = express();
app.use(express.json()); // parse JSON bodies for the HTTP routes
app.use(aiRoutes);       // mount /ai/complete and /ai/summarize

const server = http.createServer(app);
const io = new Server(server, { cors: { origin: '*' } });

initAiSocket(io);

server.listen(3000, () => {
  console.log('Server running at http://localhost:3000');
});
```


### 6️⃣ Ai.demo.ts — Setup Routes

```typescript
import express from 'express';
import { AiController } from './Ai.controller';

const router = express.Router();
const aiController = new AiController();

router.post('/ai/complete', aiController.generateCompletion);
router.post('/ai/summarize', aiController.summarize);

export default router;
```


### 7️⃣ Frontend Example (Socket.IO Client)

```typescript
import { io } from 'socket.io-client';

const socket = io('http://localhost:3000/ai');

socket.emit('prompt', { prompt: 'Tell me a joke' });

socket.on('token', (data) => {
  document.getElementById('output').innerText += data;
});

socket.on('done', () => {
  console.log('AI response complete');
});
```


## 🎯 Final Usage Example

HTTP (Postman / REST):

```
POST /ai/complete
{
  "prompt": "Explain async/await"
}
```

Socket.IO (Real-Time):

```typescript
socket.emit('prompt', { prompt: 'Write a haiku' });
```


## 🧪 Test Case Ideas

  • ✅ Send a prompt over HTTP — should return the full response

  • ✅ Connect via Socket.IO — tokens should stream one by one

  • ✅ Add validation (prompt must be > 10 characters)

  • ✅ Handle AI API failure gracefully

  • ✅ Apply middleware such as verifyToken or limitToPlan('pro')
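For the "handle AI API failure gracefully" case, one possible approach is a small retry-then-fallback wrapper around the provider call. This is a sketch: the `withFallback` name, retry count, and fallback message are illustrative, not part of FastKit:

```typescript
// Retries a flaky AI call a few times, then returns a safe fallback message
// instead of surfacing a raw provider error to the client.
export async function withFallback(
  call: () => Promise<string>,
  retries = 2,
  fallback = 'The AI service is temporarily unavailable. Please try again.'
): Promise<string> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await call();
    } catch {
      // Swallow the error and retry; a real implementation would log it here.
    }
  }
  return fallback;
}
```

The controller could then call `withFallback(() => AiService.generateCompletion(prompt))` so HTTP clients always receive a well-formed response.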


## 🧠 README.md Should Include

  • Setup instructions for both HTTP and Socket.IO

  • How to use each method

  • Frontend integration tips

  • How to switch providers (OpenAI, Gemini)

  • How to extend with new AI tools
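Switching providers is easiest if AiService depends on a small interface rather than a concrete SDK. A minimal sketch, assuming constructor injection (the `AiProvider` interface, `MockProvider`, and `ConfigurableAiService` names are illustrative):

```typescript
// A minimal provider abstraction: the service talks to this interface,
// so swapping OpenAI for Gemini (or a mock) only changes the injected instance.
export interface AiProvider {
  complete(prompt: string): Promise<string>;
}

// Mock provider used for local development and tests.
export class MockProvider implements AiProvider {
  async complete(prompt: string): Promise<string> {
    return `AI Response to: ${prompt}`;
  }
}

// The provider is chosen once, at construction time.
export class ConfigurableAiService {
  constructor(private provider: AiProvider) {}

  generateCompletion(prompt: string): Promise<string> {
    return this.provider.complete(prompt);
  }
}
```

An OpenAI- or Gemini-backed class would implement the same `AiProvider` interface, leaving the controller and socket code untouched.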


## 📌 Expected Outcome

- [x] Reusable AiService for all logic
- [x] Fully working HTTP routes
- [x] Real-time token stream using Socket.IO
- [x] Easy to plug into existing projects
- [x] Modular and scalable


## 🙋🏻‍♂️ Looking For

  • Add the OpenAI SDK for real completions

  • Add streaming support via the OpenAI API

  • Track token usage per user (authId)

  • Secure the socket with token middleware

  • Add chat history via a DB
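For the token-usage item above, an in-memory sketch shows the shape of the feature — a real version would persist counts to the database keyed by authId, and the `TokenUsageTracker` name is hypothetical:

```typescript
// In-memory token usage tracker keyed by user id (authId).
// Illustrative only: production code would persist counts to a database.
export class TokenUsageTracker {
  private usage = new Map<string, number>();

  // Add a token count to the running total for a user.
  record(authId: string, tokens: number): void {
    this.usage.set(authId, (this.usage.get(authId) ?? 0) + tokens);
  }

  // Total tokens consumed by a user so far (0 if unknown).
  totalFor(authId: string): number {
    return this.usage.get(authId) ?? 0;
  }
}
```

The socket handler could call `record(authId, 1)` per emitted token, which also gives the `limitToPlan('pro')` middleware a number to check against.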


Possible follow-up modules:

  • Notification System (Socket.IO + DB)

  • File Upload Module (with Multer/S3)

  • Admin Audit Logs

  • Real-time Collaboration Tool

  • AI Image Generation (via DALL·E)

Let me know and I’ll write the GitHub issues for you.
