Generative AI (GenAI) has demonstrated remarkable abilities in automating code generation for various computational tasks, including parallel computing.
Recent advancements highlight the potential of AI-driven tools to produce optimized, scalable parallel code, which is essential for leveraging modern multi-core processors.
This research evaluates the quality and performance of GenAI-generated parallel code and explores whether these models can devise new algorithms, using advanced prompting techniques.
The research evaluates the ability of three different LLMs to generate parallel code. The tested models are:
- o3-mini
- Llama-3.1-70b-instruct
- Codestral 2508
The models were tested with three different prompting techniques:
- Zero-Shot
- Structured Chain-of-Thought
- Meta-Prompting
Each combination of model and prompting technique was tested on the following parallel computing problems:
- Matrix Transposition
- GEMM (General Matrix Multiplication)
- SpMV (Sparse Matrix-Vector Multiplication)
The repository contains the prompts used and the code generated from them.
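
For reference, the sketch below illustrates the kind of multi-core parallel kernel the models were prompted to produce, using SpMV as an example. It is a minimal hand-written illustration, assuming a CSR matrix layout and OpenMP for threading; it is not taken from the generated code in this repository.

```c
#include <stdio.h>
#include <stddef.h>

/* y = A * x for a sparse matrix A stored in CSR format:
 * row_ptr has n_rows + 1 entries; col_idx and vals hold the nonzeros.
 * Rows are independent, so the outer loop is parallelized with OpenMP. */
static void spmv_csr(size_t n_rows, const size_t *row_ptr,
                     const size_t *col_idx, const double *vals,
                     const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]] */
    size_t row_ptr[] = {0, 2, 3, 5};
    size_t col_idx[] = {0, 2, 1, 0, 2};
    double vals[]    = {4.0, 1.0, 2.0, 3.0, 5.0};
    double x[]       = {1.0, 1.0, 1.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]); /* expected: 5 2 8 */
    return 0;
}
```

Compile with OpenMP enabled, e.g. `gcc -fopenmp spmv.c -o spmv`; without `-fopenmp` the pragma is ignored and the kernel runs serially.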