This Flutter app lets users capture an image from the camera or pick one from the gallery, sends the image to OpenAI for an initial analysis, and then decodes any QR codes found in the image. Each QR code's text is further analyzed by OpenAI to determine what it does (e.g., URL, Wi-Fi credentials, etc.) and any potential security risks.
Read the project Wiki for more on this app and on QR codes.
- Camera Capture: Opens the device camera to take a photo.
- Gallery Import: Allows selecting existing images (`.jpg`, `.jpeg`, `.png`).
- OpenAI Integration:
  - Initial image analysis (conceptually using GPT-4o text completions; a production version might require a specialized vision model or a chat-completion API).
  - QR text analysis.
- QR Decoding (using `qr_code_tools`).
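The two-phase flow (image analysis, then QR-text analysis) can be sketched as follows. The `OpenAIService` helper methods are illustrative names, not the exact API of this repo's `openai_service.dart`:

```dart
// Sketch of the capture → analyze → decode → analyze pipeline.
// `OpenAIService.analyzeImage` / `analyzeQrText` are assumed helper names.
import 'package:image_picker/image_picker.dart';
import 'package:qr_code_tools/qr_code_tools.dart';

Future<void> analyzeFromCamera(OpenAIService openai) async {
  // 1. Capture an image (or use ImageSource.gallery for gallery import).
  final XFile? image =
      await ImagePicker().pickImage(source: ImageSource.camera);
  if (image == null) return; // user cancelled

  // 2. First-phase analysis: send the image to OpenAI.
  final String imageSummary = await openai.analyzeImage(image.path);

  // 3. Decode the QR code embedded in the image.
  final String qrText = await QrCodeToolsPlugin.decodeFrom(image.path);

  // 4. Second-phase analysis: classify the decoded text and flag risks.
  final String qrReport = await openai.analyzeQrText(qrText);

  print('$imageSummary\n$qrText\n$qrReport');
}
```

`QrCodeToolsPlugin.decodeFrom` takes a file path and returns the decoded string, which is why the picked image is passed by path in both phases.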
- Flutter SDK (3.0 or above).
- A valid OpenAI API key (placed in `openai_service.dart` for the demo).
```
qr_analyzer/
├── lib/
│   ├── main.dart
│   ├── screens/
│   │   ├── home_screen.dart
│   │   └── camera_screen.dart
│   ├── services/
│   │   └── openai_service.dart
│   └── utilities/
│       └── qr_utils.dart
└── README.md
```
Follow the official Flutter installation guide to set up Flutter on your machine.
1. Clone the repository:

   ```bash
   git clone https://github.com/VictoKu1/qr_analyzer.git
   cd qr_analyzer
   ```

2. Install dependencies:

   ```bash
   flutter pub get
   ```

3. Add your OpenAI API key to `openai_service.dart`:

   ```dart
   static const _apiKey = 'YOUR_OPENAI_API_KEY';
   ```

4. Run the app:

   - Android/iOS: `flutter run`
   - Web: `flutter run -d web`

Feel free to open issues or submit PRs. For bigger changes, open an issue to discuss first.
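For orientation, a minimal sketch of what `openai_service.dart` might look like, assuming the `http` package and the standard OpenAI chat-completions endpoint (the actual file in `lib/services/` is the source of truth):

```dart
// Minimal sketch of an OpenAI service for the second-phase QR-text analysis.
// The class and method names are assumptions; check lib/services/openai_service.dart.
import 'dart:convert';
import 'package:http/http.dart' as http;

class OpenAIService {
  static const _apiKey = 'YOUR_OPENAI_API_KEY';

  /// Asks the model what the decoded QR text does and whether it is risky.
  Future<String> analyzeQrText(String qrText) async {
    final response = await http.post(
      Uri.parse('https://api.openai.com/v1/chat/completions'),
      headers: {
        'Authorization': 'Bearer $_apiKey',
        'Content-Type': 'application/json',
      },
      body: jsonEncode({
        'model': 'gpt-3.5-turbo',
        'messages': [
          {
            'role': 'user',
            'content': 'What does this QR code content do, and does it pose '
                'any security risk?\n\n$qrText',
          },
        ],
      }),
    );
    final data = jsonDecode(response.body) as Map<String, dynamic>;
    // The reply text lives under choices[0].message.content.
    return data['choices'][0]['message']['content'] as String;
  }
}
```

Keeping the key as a constant is fine for a demo, but for anything shared publicly it should be injected at build time (e.g. `--dart-define`) rather than committed.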
- Web Compatibility:
  - Some libraries (like `image_picker` and `qr_code_tools`) may have limited or no web support. If true cross-platform support is desired, you might need a web-friendly alternative or fallback.
- Multi-QR Support:
  - Currently, `qr_code_tools` typically decodes only the first QR code found. For detecting multiple QR codes, consider packages like `google_ml_kit` or another specialized library.
- OpenAI Image Analysis:
  - The example uses GPT-3.5-Turbo's `/chat/completions` for an "image analysis," which isn't truly supported in standard text-only GPT endpoints. This is a conceptual approach; in production, you'd need a real vision model or to integrate with a specialized service.
- Permissions:
  - On iOS, ensure you have the correct entries in `Info.plist` for camera and photo library access.
  - On Android, ensure `AndroidManifest.xml` includes `CAMERA`, `READ_EXTERNAL_STORAGE`, etc.
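The usual permission entries look like the following (the usage-description strings are illustrative and should describe your own app's behavior):

```xml
<!-- ios/Runner/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan QR codes.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app reads images from your photo library to decode QR codes.</string>

<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```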
With this step-by-step setup, you have a basic Flutter app that:
- Runs on Android and iOS (and potentially web, with some caveats).
- Lets users capture or pick images.
- Sends images to OpenAI for a first-phase analysis.
- Decodes any QR code(s) found.
- Sends the decoded text to OpenAI for a second-phase analysis.
- Displays the results in a straightforward UI.
Feel free to customize the design, add robust error handling, or integrate more advanced features like domain reputation checks or advanced vision models.