
Livingdex

🇮🇹 Read it in Italian

Livingdex is a Flutter application that uses Gemini 2.0 Flash to simulate a real-life Pokédex, dedicated to identifying plants and animals.

[Images: Application Icon, Splash Screen, Dark Mode]
[Images: Main Screen, Information Screen, Rotomdex Chatbot (full-screen image)]

🗂️ Table of Contents

  • 📣 2025 Updates
  • 📖 Project Description
  • 📑 Technical Analysis
  • ✨ Main Features
  • 🛠️ Architecture and Technologies
  • ⚙️ Configuration and Installation
  • 🤝 Contributions and Future Developments
  • 🔗 Useful Links


📣 2025 Updates

  • Migration to Gemini 2.0 Flash: The project has been upgraded from Gemini 1.5 to Gemini 2.0 Flash for better performance.
  • Introduction of Firebase AI Logic: The architecture has moved from a direct dependency on Firebase Vertex AI to the new Firebase AI Logic.

Advantages of the New Approach

The main advantage of Firebase AI Logic is flexibility. You can now easily choose which AI provider to use (Vertex AI or Google AI) directly from the code configuration, without having to rewrite the calling logic. This allows you to:

  • Test and compare the models and pricing of both providers.
  • Simplify maintenance and future updates.

Code Example: Before vs Now

Before (Firebase Vertex AI):

// The logic was tightly coupled to FirebaseVertexAI
model = FirebaseVertexAI.instance.generativeModel(
  model: geminiModel,
  generationConfig: GenerationConfig(
    temperature: 0,
    responseMimeType: 'application/json',
  ),
);

Now (Firebase AI Logic): With Firebase AI Logic, the provider choice is configured in the file that manages the model-calling logic (in this case, lib/quick_id.dart).

👉 Vertex AI Provider (for enterprise solutions and RAG):

final vertexAI = FirebaseAI.vertexAI();

model = vertexAI.generativeModel(
  model: geminiModel,
  generationConfig: GenerationConfig(
    temperature: 0.1,
    responseMimeType: 'application/json',
  ),
);

👉 Google AI Provider (for prototyping and lower costs):

final googleAI = FirebaseAI.googleAI();

model = googleAI.generativeModel(
  model: geminiModel,
  generationConfig: GenerationConfig(
    temperature: 0.1,
    responseMimeType: 'application/json',
  ),
);
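
As an illustration of this flexibility, the two variants above can collapse into a single switch. A minimal sketch, assuming a hypothetical useVertexAI constant that is not part of the project's configuration:

import 'package:firebase_ai/firebase_ai.dart';

import 'config.dart'; // provides geminiModel (see section 3.1)

// Hypothetical flag: flip the provider without touching the calling logic.
const useVertexAI = true;

final ai = useVertexAI ? FirebaseAI.vertexAI() : FirebaseAI.googleAI();

final model = ai.generativeModel(
  model: geminiModel,
  generationConfig: GenerationConfig(
    temperature: 0.1,
    responseMimeType: 'application/json',
  ),
);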

📖 Project Description

Livingdex is a personal project that I enjoyed developing. Its main goal is to satisfy people's curiosity about the animals and plants they encounter. By taking a photo through the application, you can identify the living being in the image, obtain detailed information (name, weight, height, and a description enriched with curiosities), and interact with a chatbot that answers follow-up questions by consulting certified sources.

Livingdex is designed to encourage people to look around and view their surroundings with a fresh perspective on the environment. Everything is presented with an interface that recalls the aesthetics of a Pokédex, enhanced with additional features like dark mode.

📑 Technical Analysis

The functional analysis of the project and the folder with the unit tests performed are available in the repository.

✨ Main Features

  • Visual Recognition: Identification of plants and animals via Gemini 2.0 Flash.
  • Pokédex-Themed Interface: UI inspired by the original design for an immersive experience.
  • Integrated Chatbot (Rotomdex): Virtual assistant that provides reliable information from English Wikipedia, thanks to a Reasoning Engine that performs RAG (Retrieval-Augmented Generation).
  • Dark Mode: For a customizable and comfortable visual experience.

🛠️ Architecture and Technologies

Technologies Used

  • Language and Framework: Dart and Flutter
  • AI and Provider: Gemini 2.0 Flash, Firebase AI Logic (with Vertex AI or Google AI provider)
  • Backend: Google Cloud Platform, Cloud Run, Firebase, FlutterFire

Backend Architecture (Cloud Run)

To handle requests from the app, a backend on Cloud Run is required. Two approaches are recommended:

Approach 1: Reasoning Engine (Recommended)

This approach orchestrates multiple services to provide high-quality responses (RAG).

  1. Receives the image from the app via an HTTP endpoint.
  2. Uploads it to Cloud Storage.
  3. Performs a search on Vertex AI Search to find relevant information.
  4. Builds a prompt for Gemini, including the search context.
  5. Calls the Gemini model via Firebase AI Logic, requesting structured JSON output (see the sketch after this list).
  6. Returns the formatted data to the app.
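
A minimal sketch of steps 4 and 5 in Dart, assuming searchContext already holds the Vertex AI Search results and model is the generative model configured as shown in the Updates section (the prompt wording and function name are illustrative):

import 'dart:typed_data';

import 'package:firebase_ai/firebase_ai.dart';

// Steps 4-5: build a prompt that includes the search context, then call
// Gemini with the image, requesting structured JSON output.
Future<String?> identifyWithContext(
  GenerativeModel model,
  String searchContext,
  Uint8List imageBytes,
) async {
  final prompt = TextPart(
    'Identify the plant or animal in the image. '
    'Use this context from Vertex AI Search:\n$searchContext\n'
    'Answer with structured JSON only.',
  );
  final image = InlineDataPart('image/jpeg', imageBytes);
  final response = await model.generateContent([
    Content.multi([prompt, image]),
  ]);
  return response.text; // the structured JSON returned to the app in step 6
}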

Structured JSON Response Example:

{
  "id": "req-1234",
  "identified": true,
  "species": "Acer platanoides",
  "common_name": "Platano",
  "confidence": 0.93,
  "height_estimate": "5-10 m",
  "description": "Short description...",
  "sources": [
    {"name":"Wikipedia", "url":"https://en.wikipedia.org/...."}
  ]
}
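
On the app side, a response like this maps naturally onto a small model class. A minimal sketch with field names taken from the example above (the class is illustrative, not part of the project's code):

import 'dart:convert';

// Illustrative model mirroring the JSON example above.
class IdentificationResult {
  final String id;
  final bool identified;
  final String species;
  final String commonName;
  final double confidence;

  IdentificationResult.fromJson(Map<String, dynamic> json)
      : id = json['id'] as String,
        identified = json['identified'] as bool,
        species = json['species'] as String,
        commonName = json['common_name'] as String,
        confidence = (json['confidence'] as num).toDouble();
}

IdentificationResult parseIdentification(String body) =>
    IdentificationResult.fromJson(jsonDecode(body) as Map<String, dynamic>);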

Approach 2: Simple Proxy

A simpler alternative if RAG is not needed. The backend acts as a proxy that authenticates the request and forwards it to Gemini. It's faster and cheaper to implement, but with lower response quality.
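
With either approach, the app itself only needs a single HTTP call. A minimal client-side sketch using package:http, assuming a hypothetical /identify endpoint (adjust to your actual backend API):

import 'dart:convert';
import 'dart:typed_data';

import 'package:http/http.dart' as http;

import 'config.dart'; // provides cloudRunHost (see section 3.1)

// Sends the photo to the Cloud Run backend and returns the decoded JSON.
Future<Map<String, dynamic>> identify(Uint8List imageBytes) async {
  final uri = Uri.https(cloudRunHost, '/identify'); // hypothetical path
  final response = await http.post(
    uri,
    headers: {'Content-Type': 'application/octet-stream'},
    body: imageBytes,
  );
  if (response.statusCode != 200) {
    throw http.ClientException('Backend error: ${response.statusCode}', uri);
  }
  return jsonDecode(response.body) as Map<String, dynamic>;
}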

⚙️ Configuration and Installation

The app works on mobile devices and, at the moment, has been tested only on Android. The iOS build has not been tested and may present installation and configuration issues.

1. Prerequisites

Make sure you have the following installed:

  • Flutter SDK: Official Guide
  • IDE: Visual Studio Code and Android Studio for an optimal development experience.
  • Google Cloud & Firebase Account: To use backend and AI services.

2. Google Cloud Backend Configuration

This guide is based on the recommended Reasoning Engine approach.

2.1. Vertex AI Search Preparation

  • In your Google Cloud project, create a search data store on Vertex AI Search.
  • Configure a search app with the necessary data for identification (e.g., descriptions from Wikipedia).

2.2. Agent Deployment on Cloud Run

  • Create an application that acts as a Reasoning Engine (e.g., in Node.js or Python; a Dart sketch follows below).
  • Deploy the app on Cloud Run. This service will orchestrate calls to Vertex AI Search and Gemini.
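
The project suggests Node.js or Python for this service; purely for illustration, and staying in the project's own language, here is a minimal Dart sketch using package:shelf. The /identify route and the placeholder response are assumptions, not the project's actual API:

import 'dart:io';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as shelf_io;

Future<void> main() async {
  final handler =
      const Pipeline().addMiddleware(logRequests()).addHandler(_handle);

  // Cloud Run injects the port to listen on via the PORT env variable.
  final port = int.parse(Platform.environment['PORT'] ?? '8080');
  await shelf_io.serve(handler, InternetAddress.anyIPv4, port);
}

Future<Response> _handle(Request request) async {
  if (request.method == 'POST' && request.url.path == 'identify') {
    // Collect the raw image bytes from the request body.
    final imageBytes = await request.read().expand((chunk) => chunk).toList();

    // Orchestration (omitted): upload imageBytes to Cloud Storage, query
    // Vertex AI Search for context, build the prompt, call Gemini, and
    // return its structured JSON output.

    return Response.ok(
      '{"identified": false, "error": "not implemented"}',
      headers: {'content-type': 'application/json'},
    );
  }
  return Response.notFound('Not found');
}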

2.3. Enabling Firebase AI Logic

  • In your Firebase project, enable Firebase AI Logic.
  • Configure the integration to communicate with your agent's endpoint on Cloud Run.

3. Flutter Project Configuration

3.1. Connect Firebase

  • Create the config.dart file inside lib/ and insert your Cloud Run service URL and the model you want to use.
    const geminiModel = 'gemini-2.0-flash';
    const cloudRunHost = 'your-cloud-run-service.a.run.app';
  • lib/quick_id.dart: Choose which AI provider to use (Vertex AI or Google AI), as shown in the Updates section.

3.2. Update Configuration Files

  • Run flutterfire configure to connect the Flutter project to your Firebase project. This generates the lib/firebase_options.dart file. Example of lib/firebase_options.dart:
// File automatically generated by `flutterfire configure`.
import 'package:firebase_core/firebase_core.dart' show FirebaseOptions;
import 'package:flutter/foundation.dart' show defaultTargetPlatform, TargetPlatform;

class DefaultFirebaseOptions {
  static FirebaseOptions get currentPlatform {
    // Example for Android
    if (defaultTargetPlatform == TargetPlatform.android) {
      return const FirebaseOptions(
        apiKey: 'ANDROID_API_KEY_PLACEHOLDER',
        appId: 'ANDROID_APP_ID_PLACEHOLDER',
        messagingSenderId: 'SENDER_ID_PLACEHOLDER',
        projectId: 'PROJECT_ID_PLACEHOLDER',
        storageBucket: 'PROJECT_ID.appspot.com',
      );
    }

    // Add configurations for other platforms here (e.g., iOS)

    throw UnsupportedError(
      'DefaultFirebaseOptions are not supported for this platform.',
    );
  }
}

Running the Flutter App

  • Install all project dependencies: flutter pub get
  • Start the application on an emulator or physical device: flutter run (add -d <device-id> to target a specific device).
    Tip: it's preferable to run the app on a physical device. Follow this guide for device configuration.

❗Common Errors:

  • Image Quality < 360p:
    If the image resolution is below 360p, the Gemini API may misinterpret the subject or fail to recognize it, reporting that the subject is neither an animal nor a plant. In this case, a generic error message is displayed indicating that the image cannot be identified.
  • Slow Loading:
    Slow loading of the subject's description may be due to a poor internet connection or to communication with the Gemini API, which can take longer depending on connection quality.

🤝 Contributions and Future Developments

Future Developments

  • Text-to-Speech: Add a voice reading function for descriptions to improve accessibility.
  • iOS Support: Test and resolve any compatibility issues.
  • UI/UX Improvements: Optimize the user interface.

How to Contribute

If you'd like to contribute, you're welcome! The areas of greatest need are those listed above. Open a Pull Request to propose your changes.


🔗 Useful Links