NewmanJustice/GDD-Evidence-Evaluator

GDD-Evidence-Evaluator

Files and prompts to create a GPT that assesses GDD evidence.

⚠️ Important Notice

This tool is designed to support evidence evaluation against the GDD Capability Framework.
It must not replace formal line-manager validation, review, or assessment processes.
The evaluator provides structured scoring and feedback to quantify the strength of evidence
and highlight areas for improvement in a submission, but final DDaT level decisions remain
the responsibility of line managers.

🗂️ Adding Other DDaT Roles

To assess roles beyond Software Developer, follow these steps:

  1. Visit the official DDaT Capability Framework.
  2. Locate and open the page for the role you want to include (e.g., Test Engineer, Delivery Manager, etc.).
  3. Copy the level descriptions (Awareness / Working / Practitioner / Expert) for that role.
  4. Save these as a new file inside your descriptions folder, following the same format as:
    • Senior developer - management_GDD_Description.rtf
  5. Open the GDD_Skill_Levels_descriptions.xlsx file and add:
    • The role name
    • The skill level
    • The level descriptions

💡 Tip: Keep the structure and formatting identical to the Senior Developer file to ensure the evaluator GPT can interpret and reference it correctly.
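The spreadsheet update in step 5 can also be scripted. The sketch below uses openpyxl and assumes a simple three-column layout (role name, skill level, level description) on the active sheet — verify this against the actual structure of GDD_Skill_Levels_descriptions.xlsx before using it; "Test Engineer" is just an illustrative role.

```python
# Sketch: append one row per skill level for a new DDaT role.
# Assumes three columns (role name, skill level, level description) on the
# active sheet -- check against the real GDD_Skill_Levels_descriptions.xlsx.
from openpyxl import load_workbook

def add_role_levels(xlsx_path, role, level_descriptions):
    """Append (role, level, description) rows to the workbook's active sheet."""
    wb = load_workbook(xlsx_path)
    ws = wb.active
    for level, description in level_descriptions.items():
        ws.append([role, level, description])
    wb.save(xlsx_path)

# Example usage (descriptions copied from the DDaT Capability Framework page):
# add_role_levels("GDD_Skill_Levels_descriptions.xlsx", "Test Engineer",
#                 {"Awareness": "...", "Working": "...",
#                  "Practitioner": "...", "Expert": "..."})
```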

🧩 Setup Guide — DDaT Evidence Evaluator (HMCTS) GPT

This guide explains how to build the GDD Evidence Evaluator (HMCTS) custom GPT from scratch using the provided files.


1. 📦 Prepare Your Materials

Make sure you have the following three files ready:

  • DDaT Evidence Evaluator (HMCTS) — Interactive + Logged + CSV Export_prompt.txt
    Purpose: The main prompt that defines behaviour and workflow.
  • GDD_Skill_Levels_descriptions.xlsx
    Purpose: Reference for DDaT skill level definitions.
  • Senior developer - management_GDD_Description.rtf
    Purpose: Reference for Software Developer capability descriptors (management focus).

✅ Before You Start

  • Confirm each file name is clear and final.
  • Decide on your GPT’s name and short description, for example:
    • Name: GDD Evidence Evaluator (HMCTS)
    • Description: Evaluates DDaT developer evidence, assigns level, and exports a CSV log.

2. 🧠 Create the GPT in the Builder

  1. Open the GPT Builder (chat.openai.com → Explore GPTs).
  2. Choose Create a GPT.
  3. Fill in:
    • Name: GDD Evidence Evaluator (HMCTS)
    • Description: Expert assessor for DDaT (Software Developer) evidence. Runs an interactive workflow, produces a Markdown report, and exports a CSV log. Timezone: Europe/London.

3. 🧩 Add the Core Instructions

  1. In the Instructions (system prompt) section, paste the entire contents of
    DDaT Evidence Evaluator (HMCTS) — Interactive + Logged + CSV Export_prompt.txt.

  2. At the very top of the pasted instructions, prepend these key setup lines (if not already included):

    You are "GDD Evidence Evaluator (HMCTS)".
    Use Europe/London timezone for all timestamps.
    Always produce:
      • A Markdown evaluation report with timestamp and reviewer
      • A downloadable CSV export file and fenced CSV code block
      • A Per-Capability Rubric Table (10 areas if “All of them” selected)
    Never infer missing data — always collect interactively.
    
    

4. 📚 Upload Knowledge Files

  1. In the GPT Builder, open the Knowledge (or Upload files) panel.
  2. Upload the following files:
    • GDD_Skill_Levels_descriptions.xlsx
    • The GDD_Description.rtf file for each role you need (e.g. Senior developer - management_GDD_Description.rtf)
  3. In your Instructions, add a reminder for the model to use these files:

    Use the uploaded knowledge files as authoritative references for capability definitions and level descriptors. When drawing on them, align evaluations to these sources.


5. ⚙️ Enable Tools and Capabilities

Open the Configure / Capabilities panel and set:

  • Web Browsing: ON
    Purpose: Verify evolving DDaT/HMCTS references and cite sources when needed.
  • Code Interpreter (Python): ON
    Purpose: Generate timestamped CSV outputs and handle table/report formatting.
  • File Uploads: ON
    Purpose: Allow large evidence submissions to be evaluated.

No external Actions are required — everything runs with standard GPT tools.
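To illustrate what the Code Interpreter capability produces, here is a minimal Python sketch of a timestamped CSV log in the Europe/London timezone. The column names are illustrative assumptions, not the evaluator's fixed schema.

```python
# Minimal sketch of the kind of CSV log the Code Interpreter tool can build.
# Column names are illustrative assumptions, not a fixed schema.
import csv
import io
from datetime import datetime
from zoneinfo import ZoneInfo

def build_csv_log(rows):
    """Return CSV text stamping every row with a Europe/London timestamp."""
    stamp = datetime.now(ZoneInfo("Europe/London")).isoformat(timespec="seconds")
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp", "reviewer", "capability", "assigned_level", "score"])
    for reviewer, capability, level, score in rows:
        writer.writerow([stamp, reviewer, capability, level, score])
    return buf.getvalue()

print(build_csv_log([("A. Reviewer", "Problem solving", "Practitioner", 3)]))
```

The same text can be written to a file for download or wrapped in a fenced ```csv block, as the prompt requires.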
