From bfe79aa05384bd3a45bb36eb7bf0371e54c23ccc Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Mon, 2 Jun 2025 12:27:16 +1000
Subject: [PATCH 1/4] Updates to the AI Application Security templates
These updates align the templates with the VRT update: https://github.com/bugcrowd/vulnerability-rating-taxonomy/pull/464
Adding:
P1 - AI Application Security - Training Data Poisoning - Backdoor Injection / Bias Manipulation
P1 - AI Application Security - Model Extraction - API Query-Based Model Reconstruction
P1 - AI Application Security - Sensitive Information Disclosure - Cross-Tenant PII Leakage/Exposure
P1 - AI Application Security - Sensitive Information Disclosure - Key Leak
P1 - AI Application Security - Remote Code Execution - Full System Compromise
P2 - AI Application Security - Remote Code Execution - Sandboxed Container Code Execution
P2 - AI Application Security - Prompt Injection - System Prompt Leakage
P2 - AI Application Security - Vector and Embedding Weaknesses - Embedding Exfiltration / Model Extraction
P3 - AI Application Security - Vector and Embedding Weaknesses - Semantic Indexing
P2 - AI Application Security - Denial-of-Service (DoS) - Application-Wide
P4 - AI Application Security - AI Safety - Misinformation / Wrong Factual Data
P4 - AI Application Security - Insufficient Rate Limiting - Query Flooding / API Token Abuse
P4 - AI Application Security - Denial-of-Service (DoS) - Tenant-Scoped
P4 - AI Application Security - Adversarial Example Injection - AI Misclassification Attacks
P3 - AI Application Security - Improper Output Handling - Cross-Site Scripting (XSS)
P4 - AI Application Security - Improper Output Handling - Markdown/HTML Injection
P5 - AI Application Security - Improper Input Handling - ANSI Escape Codes
P5 - AI Application Security - Improper Input Handling - Unicode Confusables
P5 - AI Application Security - Improper Input Handling - RTL Overrides
Removing:
P1 - AI Application Security - Large Language Model (LLM) Security - LLM Output Handling
P1 - AI Application Security - Large Language Model (LLM) Security - Prompt Injection
P1 - AI Application Security - Large Language Model (LLM) Security - Training Data Poisoning
P2 - AI Application Security - Large Language Model (LLM) Security - Excessive Agency/Permission Manipulation
---
.../ai_application_security/.gitkeep | 0
.../adversarial_example_injection/.gitkeep | 0
.../ai_misclassification_attacks/.gitkeep | 0
.../ai_misclassification_attacks}/guidance.md | 0
.../recommendations.md | 6 +++++
.../ai_misclassification_attacks/template.md | 20 ++++++++++++++
.../guidance.md | 0
.../recommendations.md | 6 +++++
.../template.md | 2 +-
.../ai_safety/.gitkeep | 0
.../guidance.md | 0
.../.gitkeep | 0
.../guidance.md | 0
.../recommendations.md | 7 +++++
.../template.md | 21 +++++++++++++++
.../ai_safety/recommendations.md | 7 +++++
.../ai_safety/template.md | 22 ++++++++++++++++
.../denial_of_service_dos/.gitkeep | 0
.../application_wide/.gitkeep | 0
.../application_wide}/guidance.md | 0
.../application_wide/recommendations.md | 7 +++++
.../application_wide/template.md | 26 +++++++++++++++++++
.../denial_of_service_dos/guidance.md | 5 ++++
.../denial_of_service_dos/recommendations.md | 7 +++++
.../denial_of_service_dos/template.md | 19 ++++++++++++++
.../tenant_scoped/.gitkeep | 0
.../tenant_scoped/guidance.md | 5 ++++
.../tenant_scoped/recommendations.md | 7 +++++
.../tenant_scoped/template.md | 19 ++++++++++++++
.../improper_input_handling/.gitkeep | 0
.../ansi_escape_codes/.gitkeep | 0
.../ansi_escape_codes/guidance.md | 5 ++++
.../ansi_escape_codes/recommendations.md | 5 ++++
.../ansi_escape_codes/template.md | 19 ++++++++++++++
.../improper_input_handling/guidance.md | 5 ++++
.../recommendations.md | 5 ++++
.../rtl_overrides/.gitkeep | 0
.../rtl_overrides/guidance.md | 5 ++++
.../rtl_overrides/recommendations.md | 6 +++++
.../rtl_overrides/template.md | 21 +++++++++++++++
.../template.md | 0
.../unicode_confusables/.gitkeep | 0
.../unicode_confusables/guidance.md | 5 ++++
.../unicode_confusables/recommendations.md | 6 +++++
.../unicode_confusables/template.md | 22 ++++++++++++++++
.../improper_output_handling/.gitkeep | 0
.../cross_site_scripting_xss/.gitkeep | 0
.../cross_site_scripting_xss/guidance.md | 5 ++++
.../recommendations.md | 6 +++++
.../cross_site_scripting_xss/template.md | 19 ++++++++++++++
.../improper_output_handling/guidance.md | 5 ++++
.../markdown_html_injection/.gitkeep | 0
.../markdown_html_injection/guidance.md | 5 ++++
.../recommendations.md | 5 ++++
.../markdown_html_injection/template.md | 21 +++++++++++++++
.../recommendations.md | 6 +++++
.../improper_output_handling/template.md | 21 +++++++++++++++
.../insufficient_rate_limiting/.gitkeep | 0
.../insufficient_rate_limiting/guidance.md | 5 ++++
.../query_flooding_api_token_abuse/.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 7 +++++
.../template.md | 19 ++++++++++++++
.../recommendations.md | 7 +++++
.../insufficient_rate_limiting/template.md | 19 ++++++++++++++
.../recommendations.md | 15 -----------
.../template.md | 22 ----------------
.../llm_output_handling/recommendations.md | 17 ------------
.../recommendations.md | 13 ----------
.../training_data_poisoning/template.md | 22 ----------------
.../model_extraction/.gitkeep | 0
.../.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 6 +++++
.../template.md | 24 +++++++++++++++++
.../model_extraction/guidance.md | 5 ++++
.../recommendations.md | 0
.../model_extraction/template.md | 24 +++++++++++++++++
.../prompt_injection/.gitkeep | 0
.../prompt_injection/guidance.md | 5 ++++
.../prompt_injection/recommendations.md | 0
.../system_prompt_leakage/.gitkeep | 0
.../system_prompt_leakage/guidance.md | 5 ++++
.../system_prompt_leakage/recommendations.md | 13 ++++++++++
.../system_prompt_leakage/template.md | 22 ++++++++++++++++
.../prompt_injection/template.md | 2 +-
.../remote_code_execution/.gitkeep | 0
.../full_system_compromise/.gitkeep | 0
.../full_system_compromise/guidance.md | 5 ++++
.../full_system_compromise/recommendations.md | 7 +++++
.../full_system_compromise/template.md | 24 +++++++++++++++++
.../remote_code_execution/guidance.md | 5 ++++
.../remote_code_execution/recommendations.md | 7 +++++
.../remote_code_execution/template.md | 24 +++++++++++++++++
.../sensitive_information_disclosure/.gitkeep | 0
.../.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 7 +++++
.../template.md | 21 +++++++++++++++
.../guidance.md | 5 ++++
.../recommendations.md | 6 +++++
.../.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 7 +++++
.../template.md | 21 +++++++++++++++
.../template.md | 21 +++++++++++++++
.../training_data_poisoning/.gitkeep | 0
.../.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 7 +++++
.../template.md | 24 +++++++++++++++++
.../training_data_poisoning/guidance.md | 5 ++++
.../recommendations.md | 7 +++++
.../training_data_poisoning/template.md | 23 ++++++++++++++++
.../vector_and_embedding_weaknesses/.gitkeep | 0
.../.gitkeep | 0
.../guidance.md | 5 ++++
.../recommendations.md | 6 +++++
.../template.md | 21 +++++++++++++++
.../guidance.md | 5 ++++
.../recommendations.md | 6 +++++
.../semantic_indexing/.gitkeep | 0
.../semantic_indexing/guidance.md | 5 ++++
.../semantic_indexing/recommendations.md | 6 +++++
.../semantic_indexing/template.md | 21 +++++++++++++++
.../template.md | 21 +++++++++++++++
126 files changed, 891 insertions(+), 91 deletions(-)
create mode 100644 submissions/description/ai_application_security/.gitkeep
create mode 100644 submissions/description/ai_application_security/adversarial_example_injection/.gitkeep
create mode 100644 submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/.gitkeep
rename submissions/description/ai_application_security/{llm_security/excessive_agency_permission_manipulation => adversarial_example_injection/ai_misclassification_attacks}/guidance.md (100%)
create mode 100644 submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/recommendations.md
create mode 100644 submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
rename submissions/description/ai_application_security/{llm_security => adversarial_example_injection}/guidance.md (100%)
create mode 100644 submissions/description/ai_application_security/adversarial_example_injection/recommendations.md
rename submissions/description/ai_application_security/{llm_security/llm_output_handling => adversarial_example_injection}/template.md (66%)
create mode 100644 submissions/description/ai_application_security/ai_safety/.gitkeep
rename submissions/description/ai_application_security/{llm_security/llm_output_handling => ai_safety}/guidance.md (100%)
create mode 100644 submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/.gitkeep
rename submissions/description/ai_application_security/{llm_security/prompt_injection => ai_safety/misinformation_wrong_factual_data}/guidance.md (100%)
create mode 100644 submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/recommendations.md
create mode 100644 submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
create mode 100644 submissions/description/ai_application_security/ai_safety/recommendations.md
create mode 100644 submissions/description/ai_application_security/ai_safety/template.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/.gitkeep
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/application_wide/.gitkeep
rename submissions/description/ai_application_security/{llm_security/training_data_poisoning => denial_of_service_dos/application_wide}/guidance.md (100%)
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/application_wide/recommendations.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/guidance.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/recommendations.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/template.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/.gitkeep
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/guidance.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/recommendations.md
create mode 100644 submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/rtl_overrides/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_input_handling/rtl_overrides/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/rtl_overrides/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/rtl_overrides/template.md
rename submissions/description/ai_application_security/{llm_security => improper_input_handling}/template.md (100%)
create mode 100644 submissions/description/ai_application_security/improper_input_handling/unicode_confusables/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_input_handling/unicode_confusables/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/unicode_confusables/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_input_handling/unicode_confusables/template.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/.gitkeep
create mode 100644 submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/guidance.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/template.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/recommendations.md
create mode 100644 submissions/description/ai_application_security/improper_output_handling/template.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/.gitkeep
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/guidance.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/.gitkeep
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/guidance.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/recommendations.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/recommendations.md
create mode 100644 submissions/description/ai_application_security/insufficient_rate_limiting/template.md
delete mode 100644 submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
delete mode 100644 submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
delete mode 100644 submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
delete mode 100644 submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
delete mode 100644 submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
create mode 100644 submissions/description/ai_application_security/model_extraction/.gitkeep
create mode 100644 submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/.gitkeep
create mode 100644 submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/guidance.md
create mode 100644 submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/recommendations.md
create mode 100644 submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/template.md
create mode 100644 submissions/description/ai_application_security/model_extraction/guidance.md
rename submissions/description/ai_application_security/{llm_security => model_extraction}/recommendations.md (100%)
create mode 100644 submissions/description/ai_application_security/model_extraction/template.md
create mode 100644 submissions/description/ai_application_security/prompt_injection/.gitkeep
create mode 100644 submissions/description/ai_application_security/prompt_injection/guidance.md
rename submissions/description/ai_application_security/{llm_security => }/prompt_injection/recommendations.md (100%)
create mode 100644 submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/.gitkeep
create mode 100644 submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/guidance.md
create mode 100644 submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/recommendations.md
create mode 100644 submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
rename submissions/description/ai_application_security/{llm_security => }/prompt_injection/template.md (98%)
create mode 100644 submissions/description/ai_application_security/remote_code_execution/.gitkeep
create mode 100644 submissions/description/ai_application_security/remote_code_execution/full_system_compromise/.gitkeep
create mode 100644 submissions/description/ai_application_security/remote_code_execution/full_system_compromise/guidance.md
create mode 100644 submissions/description/ai_application_security/remote_code_execution/full_system_compromise/recommendations.md
create mode 100644 submissions/description/ai_application_security/remote_code_execution/full_system_compromise/template.md
create mode 100644 submissions/description/ai_application_security/remote_code_execution/guidance.md
create mode 100644 submissions/description/ai_application_security/remote_code_execution/recommendations.md
create mode 100644 submissions/description/ai_application_security/remote_code_execution/template.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/.gitkeep
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/.gitkeep
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/guidance.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/recommendations.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/guidance.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/recommendations.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/.gitkeep
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/template.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/.gitkeep
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/.gitkeep
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/guidance.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/recommendations.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/template.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/guidance.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/recommendations.md
create mode 100644 submissions/description/ai_application_security/training_data_poisoning/template.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/.gitkeep
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/.gitkeep
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/guidance.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/recommendations.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/guidance.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/recommendations.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/.gitkeep
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/guidance.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/recommendations.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/template.md
create mode 100644 submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
diff --git a/submissions/description/ai_application_security/.gitkeep b/submissions/description/ai_application_security/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/.gitkeep b/submissions/description/ai_application_security/adversarial_example_injection/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/.gitkeep b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
rename to submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/guidance.md
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/recommendations.md b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/recommendations.md
new file mode 100644
index 00000000..18e2a652
--- /dev/null
+++ b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Implement adversarial training to make the model more robust against adversarial examples.
+- Use input preprocessing or data augmentation techniques to reduce the effectiveness of adversarial perturbations.
+- Monitor model inputs for anomalies that may indicate adversarial examples.
+- Add additional layers of validation or human review for critical decisions based on AI predictions.
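The adversarial-training and anomaly-monitoring recommendations above rest on how adversarial examples are constructed. A minimal FGSM-style sketch, assuming a toy logistic-regression classifier (the weights, input, and epsilon below are illustrative, not from any real model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability the toy logistic-regression model assigns to the positive class.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(w, b, x, y_true, eps):
    # FGSM: step each feature by eps in the direction that increases the loss.
    # For logistic regression with cross-entropy loss, dL/dx_i = (p - y) * w_i.
    p = predict(w, b, x)
    return [xi + eps * sign((p - y_true) * wi) for wi, xi in zip(w, x)]

w, b = [2.0, -3.0], 0.5
x = [1.0, 0.4]                      # legitimate input, classified positive
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=0.5)

print(predict(w, b, x) > 0.5)       # True  (original classification)
print(predict(w, b, x_adv) > 0.5)   # False (misclassified after a small perturbation)
```

The small per-feature step (eps) is what makes the change hard for a human to notice while still flipping the model's decision, which is why the recommendations pair input preprocessing with anomaly monitoring.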
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
new file mode 100644
index 00000000..da4b0dd6
--- /dev/null
+++ b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
@@ -0,0 +1,20 @@
+AI misclassification attacks occur when an attacker introduces specially crafted input designed to trick the AI model into making an incorrect prediction or classification. These inputs, known as adversarial examples, are often subtle modifications to legitimate data that are imperceptible to humans but can significantly alter the AI’s output.
+
+**Business Impact**
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business depends on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+1. Identify the expected inputs of the AI model
+1. Generate adversarial examples by adding small, targeted perturbations to legitimate inputs:
+
+```prompt
+ {malicious input}
+```
+1. Submit the adversarial examples to the AI model
+1. Observe that the model misclassifies the modified input compared to its expected classification
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/guidance.md b/submissions/description/ai_application_security/adversarial_example_injection/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/guidance.md
rename to submissions/description/ai_application_security/adversarial_example_injection/guidance.md
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/recommendations.md b/submissions/description/ai_application_security/adversarial_example_injection/recommendations.md
new file mode 100644
index 00000000..18e2a652
--- /dev/null
+++ b/submissions/description/ai_application_security/adversarial_example_injection/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Implement adversarial training to make the model more robust against adversarial examples.
+- Use input preprocessing or data augmentation techniques to reduce the effectiveness of adversarial perturbations.
+- Monitor model inputs for anomalies that may indicate adversarial examples.
+- Add additional layers of validation or human review for critical decisions based on AI predictions.
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md b/submissions/description/ai_application_security/adversarial_example_injection/template.md
similarity index 66%
rename from submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
rename to submissions/description/ai_application_security/adversarial_example_injection/template.md
index 4de370a8..74845a53 100644
--- a/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
+++ b/submissions/description/ai_application_security/adversarial_example_injection/template.md
@@ -1,4 +1,4 @@
-Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.
+Adversarial example injection attacks occur when an attacker introduces specially crafted input designed to trick the AI model into making an incorrect prediction or classification. These inputs are often subtle modifications to legitimate data that are imperceptible to humans but can significantly alter the AI’s output.
**Business Impact**
diff --git a/submissions/description/ai_application_security/ai_safety/.gitkeep b/submissions/description/ai_application_security/ai_safety/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md b/submissions/description/ai_application_security/ai_safety/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
rename to submissions/description/ai_application_security/ai_safety/guidance.md
diff --git a/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/.gitkeep b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
rename to submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/guidance.md
diff --git a/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/recommendations.md b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/recommendations.md
new file mode 100644
index 00000000..61a142ab
--- /dev/null
+++ b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Improve the model's training data and fact-checking mechanisms.
+- Implement retrieval augmentation techniques to access external knowledge bases.
+- Provide clear disclaimers about the potential for AI-generated content to be inaccurate.
+- Enable user feedback mechanisms for reporting misinformation.
+- Regularly audit the model's output for factual errors.
diff --git a/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
new file mode 100644
index 00000000..667d21c3
--- /dev/null
+++ b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
@@ -0,0 +1,21 @@
+AI models can generate or present inaccurate, false, or misleading information as fact. Misinformation or wrong factual data can arise from errors in the model's training data, hallucinations (fabrication of information), or a failure to cross-reference with reliable sources.
+
+**Business Impact**
+Users may receive and act upon incorrect information, leading to flawed decision-making, reputational damage for the service provider, and potential legal liabilities. There is also a loss of trust in the AI's reliability and accuracy.
+
+**Steps to Reproduce**
+1. Submit the following prompt that requires factual information:
+
+```prompt
+ {prompt}
+```
+
+1. Examine the model's output for inaccuracies, fabricated details, or contradictions
+1. Compare the model's response with reliable external sources to verify accuracy
+1. Observe that the model's output presents inaccurate, false, or misleading information as fact
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/ai_safety/recommendations.md b/submissions/description/ai_application_security/ai_safety/recommendations.md
new file mode 100644
index 00000000..61a142ab
--- /dev/null
+++ b/submissions/description/ai_application_security/ai_safety/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Improve the model's training data and fact-checking mechanisms.
+- Implement retrieval augmentation techniques to access external knowledge bases.
+- Provide clear disclaimers about the potential for AI-generated content to be inaccurate.
+- Enable user feedback mechanisms for reporting misinformation.
+- Regularly audit the model's output for factual errors.
diff --git a/submissions/description/ai_application_security/ai_safety/template.md b/submissions/description/ai_application_security/ai_safety/template.md
new file mode 100644
index 00000000..1e43768d
--- /dev/null
+++ b/submissions/description/ai_application_security/ai_safety/template.md
@@ -0,0 +1,22 @@
+AI models can generate or present inaccurate, false, or misleading information as fact. This can occur due to errors in the model's training data, hallucinations (fabrication of information), or a failure to cross-reference with reliable sources.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Navigate to the following URL:
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM returns inaccurate, false, or misleading information presented as fact
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/.gitkeep b/submissions/description/ai_application_security/denial_of_service_dos/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/application_wide/.gitkeep b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
rename to submissions/description/ai_application_security/denial_of_service_dos/application_wide/guidance.md
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/application_wide/recommendations.md b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/recommendations.md
new file mode 100644
index 00000000..d43ac70c
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement rate limiting and throttling for API requests and user interactions.
+- Use load balancing to distribute traffic across multiple servers.
+- Implement resource monitoring and auto-scaling to handle increased load.
+- Employ input validation and sanitization to prevent resource-intensive processing of malicious input.
+- Use content delivery networks (CDNs) to cache and deliver content efficiently.
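The rate-limiting recommendation above can be sketched as a token bucket. This is a minimal, framework-agnostic illustration; the `RateLimiter` name and parameters are our own, not part of any specific API:

```python
import time

class RateLimiter:
    """Token bucket: allows `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A production deployment would typically enforce this at the gateway or load balancer rather than in application code.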
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
new file mode 100644
index 00000000..be0084d6
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
@@ -0,0 +1,22 @@
+Application-wide Denial-of-Service (DoS) occurs when an attacker overloads the entire AI application with requests or malicious input, rendering the application unavailable to legitimate users. This can be achieved by sending a flood of queries that exploit resource-intensive processes, or by triggering application crashes.
+
+**Business Impact**
+
+Complete unavailability of the AI application leads to service disruption, financial loss, reputational damage, and potential loss of user data.
+
+**Steps to Reproduce**
+
+1. Identify resource-intensive features or API endpoints of the AI application
+1. Execute the following script to send a high volume of requests to those endpoints:
+
+```python
+ {malicious script}
+```
+
+1. Monitor the application's response and observe that availability is degraded for legitimate users
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/guidance.md b/submissions/description/ai_application_security/denial_of_service_dos/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/recommendations.md b/submissions/description/ai_application_security/denial_of_service_dos/recommendations.md
new file mode 100644
index 00000000..d43ac70c
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement rate limiting and throttling for API requests and user interactions.
+- Use load balancing to distribute traffic across multiple servers.
+- Implement resource monitoring and auto-scaling to handle increased load.
+- Employ input validation and sanitization to prevent resource-intensive processing of malicious input.
+- Use content delivery networks (CDNs) to cache and deliver content efficiently.
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/template.md b/submissions/description/ai_application_security/denial_of_service_dos/template.md
new file mode 100644
index 00000000..583309fc
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/template.md
@@ -0,0 +1,19 @@
+Denial-of-Service (DoS) occurs when an attacker targets and overwhelms the resources of an AI application. This can be achieved through excessive requests, resource-intensive queries, or input that triggers resource-intensive processing. An attacker can leverage this vulnerability to cause disruption or unavailability of the application for legitimate users.
+
+**Business Impact**
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business is dependent on the duration and scope of the service disruption.
+
+**Steps to Reproduce**
+1. Obtain access to an account within the application
+1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at the application's resources:
+
+```python
+ {malicious script}
+```
+1. Observe that the application's service availability and performance are degraded
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/.gitkeep b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/guidance.md b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/recommendations.md b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/recommendations.md
new file mode 100644
index 00000000..1ff0a25b
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement per-tenant resource allocation and limits.
+- Isolate tenant resources and infrastructure to prevent impact on other tenants.
+- Monitor individual tenant activity and resource usage for anomalies.
+- Implement tenant-specific rate limiting and throttling.
+- Provide detailed activity logs and monitoring dashboards to tenants.
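Tenant-specific rate limiting, as recommended above, can be illustrated with a sliding-window counter keyed by tenant ID (a hypothetical sketch; the class and parameter names are illustrative):

```python
import time
from collections import defaultdict, deque

class TenantRateLimiter:
    """Sliding window: at most `limit` requests per `window` seconds, per tenant."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # tenant_id -> timestamps of recent requests

    def allow(self, tenant_id: str) -> bool:
        now = time.monotonic()
        q = self.events[tenant_id]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False
```

Because each tenant has its own window, one tenant exhausting its quota does not affect requests from other tenants.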
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
new file mode 100644
index 00000000..333beb25
--- /dev/null
+++ b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
@@ -0,0 +1,19 @@
+Tenant-Scoped Denial-of-Service (DoS) occurs when an attacker specifically targets and overwhelms a single tenant's resources within a multi-tenant AI application. This can be achieved through excessive requests, resource-intensive queries, or exploiting vulnerabilities specific to the tenant's configuration. An attacker can leverage this vulnerability to cause disruption or unavailability for that specific tenant without affecting other tenants.
+
+**Business Impact**
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business is dependent on the duration and scope of the service disruption for the affected tenant.
+
+**Steps to Reproduce**
+1. Obtain access to an account within a specific tenant
+1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at that tenant's resources:
+
+```python
+ {malicious script}
+```
+1. Observe that the target tenant's service availability and performance are degraded
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/improper_input_handling/.gitkeep b/submissions/description/ai_application_security/improper_input_handling/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/.gitkeep b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/guidance.md b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/recommendations.md b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/recommendations.md
new file mode 100644
index 00000000..bf29e05a
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/recommendations.md
@@ -0,0 +1,5 @@
+# Recommendation(s)
+
+- Sanitize user-supplied input by removing or escaping ANSI escape sequences before displaying or processing it.
+- Use a secure terminal library or renderer that does not execute or interpret ANSI escape codes from untrusted sources.
+- Validate and strip any non-printable or control characters from user inputs.
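Stripping ANSI escape sequences before display, per the first recommendation, can be done with a small regular expression. This is one common pattern, not an exhaustive sanitizer:

```python
import re

# Matches CSI sequences (e.g. "\x1b[31m") and other two-character escapes.
ANSI_RE = re.compile(r"\x1b\[[0-?]*[ -/]*[@-~]|\x1b[@-Z\\-_]")

def strip_ansi(text: str) -> str:
    """Remove ANSI escape sequences from untrusted input before display."""
    return ANSI_RE.sub("", text)
```

Pair this with removal of remaining non-printable control characters, as the last recommendation suggests.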
diff --git a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
new file mode 100644
index 00000000..83b62430
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
@@ -0,0 +1,19 @@
+ANSI escape code injection occurs when an attacker embeds specially crafted ANSI escape sequences in user-supplied input to manipulate either the terminal output or the behavior of the system receiving that input. This can lead to visual distortions, hidden data, or even remote code execution in vulnerable systems that interpret these codes incorrectly.
+
+**Business Impact**
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+1. Submit the following crafted input containing ANSI escape sequences:
+
+```input
+ {malicious input}
+```
+
+1. Input the crafted text and observe that the ANSI escape sequences are processed in the output
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/improper_input_handling/guidance.md b/submissions/description/ai_application_security/improper_input_handling/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_input_handling/recommendations.md b/submissions/description/ai_application_security/improper_input_handling/recommendations.md
new file mode 100644
index 00000000..f587dbd1
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/recommendations.md
@@ -0,0 +1,5 @@
+# Recommendation(s)
+
+- Sanitize user-supplied input before displaying or processing it.
+- Use a secure terminal library or renderer that does not execute or interpret inputs from untrusted sources.
+- Validate and strip any non-printable or control characters from user inputs.
diff --git a/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/.gitkeep b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/guidance.md b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/recommendations.md b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/recommendations.md
new file mode 100644
index 00000000..ec27131d
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Sanitize user-supplied input by removing or escaping RTL and LTR override characters before displaying it.
+- Use a text rendering engine that properly handles or visually indicates RTL/LTR overrides.
+- Display filenames and URLs with caution, providing clear context or information about the directionality of the text.
+- Educate users about potential RTL/LTR override attacks and how to recognize them.
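Removing directional override characters, per the first recommendation, might look like the following sketch (the character list covers the common bidirectional controls; adjust it to your rendering context):

```python
# Unicode bidirectional control characters commonly abused in RTL/LTR override attacks.
BIDI_CONTROLS = {
    "\u202a",  # LEFT-TO-RIGHT EMBEDDING
    "\u202b",  # RIGHT-TO-LEFT EMBEDDING
    "\u202c",  # POP DIRECTIONAL FORMATTING
    "\u202d",  # LEFT-TO-RIGHT OVERRIDE
    "\u202e",  # RIGHT-TO-LEFT OVERRIDE
    "\u2066",  # LEFT-TO-RIGHT ISOLATE
    "\u2067",  # RIGHT-TO-LEFT ISOLATE
    "\u2068",  # FIRST STRONG ISOLATE
    "\u2069",  # POP DIRECTIONAL ISOLATE
}

def strip_bidi(text: str) -> str:
    """Remove directional override/embedding characters from untrusted text."""
    return "".join(ch for ch in text if ch not in BIDI_CONTROLS)
```

For example, the classic disguised-executable filename `invoice_\u202egpj.exe` is neutralized by this filter.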
diff --git a/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/template.md b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/template.md
new file mode 100644
index 00000000..c9f64f51
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/rtl_overrides/template.md
@@ -0,0 +1,21 @@
+RTL (Right-To-Left) override vulnerabilities occur when an attacker uses special Unicode characters (RTL or LTR overrides) to manipulate the display order of text. An attacker can use this improper input handling to create visually misleading content, hide file extensions, or obfuscate URLs, leading to social engineering attacks, phishing, or bypassing of security filters.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Use the following crafted input containing RTL or LTR override characters:
+
+```input
+ {malicious input}
+```
+
+1. Observe how the input is rendered, noting that the intended display order is reversed or obscured
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/template.md b/submissions/description/ai_application_security/improper_input_handling/template.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/template.md
rename to submissions/description/ai_application_security/improper_input_handling/template.md
diff --git a/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/.gitkeep b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/guidance.md b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/recommendations.md b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/recommendations.md
new file mode 100644
index 00000000..40a7c505
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Normalize and canonicalize user input by converting Unicode characters to a standard representation.
+- Use allowlisting or denylisting to restrict the use of specific Unicode characters.
+- Display Unicode characters with visual indicators (e.g., highlighting) when there is a risk of confusion.
+- Implement string comparison functions that take into account visual similarity.
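The normalization recommendation can be sketched with Python's standard `unicodedata` module; note that NFKC alone does not catch all confusables:

```python
import unicodedata

def canonicalize(text: str) -> str:
    """NFKC-normalize and case-fold so visually similar inputs compare equal.

    NFKC folds compatibility characters (e.g. fullwidth letters), but it does
    NOT map all confusables (e.g. Cyrillic 'а' vs Latin 'a'); a dedicated
    confusables table is needed for those.
    """
    return unicodedata.normalize("NFKC", text).casefold()
```

Canonicalized forms should be used for lookups and uniqueness checks on usernames and domains, so that `ｐａｙｐａｌ` and `paypal` cannot coexist as distinct identifiers.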
diff --git a/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/template.md b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/template.md
new file mode 100644
index 00000000..b4130f83
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_input_handling/unicode_confusables/template.md
@@ -0,0 +1,22 @@
+Unicode confusable vulnerabilities occur when an attacker uses Unicode characters that look visually similar to standard characters but have different underlying code points. This improper input handling allows an attacker to create domain names, usernames, or content that appears legitimate but can deceive users or bypass security filters.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Input the following Unicode characters that are visually similar to common ASCII characters:
+
+```input
+ {malicious input}
+```
+
+1. Use these Unicode characters to create a fake domain name, username, or content
+1. Observe that this fake entity can be used to deceive users or bypass security filters
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/improper_output_handling/.gitkeep b/submissions/description/ai_application_security/improper_output_handling/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/.gitkeep b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/guidance.md b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/recommendations.md b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/recommendations.md
new file mode 100644
index 00000000..75e5713b
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Implement output encoding or escaping to sanitize user-supplied data before displaying it.
+- Use Content Security Policy (CSP) to restrict the sources from which scripts can be loaded.
+- Implement input validation to prevent injection of malicious characters or code.
+- Regularly scan the application for XSS vulnerabilities.
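Output encoding, the first recommendation above, can be as simple as HTML-escaping model output before it reaches the page (a minimal sketch; real applications should combine this with CSP and context-aware encoding):

```python
import html

def render_model_output(raw: str) -> str:
    """Encode model output before inserting it into an HTML page."""
    # html.escape converts &, <, > and, by default, the quote characters.
    return html.escape(raw)
```

Escaping must match the output context; attribute, URL, and JavaScript contexts each need their own encoding rules.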
diff --git a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
new file mode 100644
index 00000000..35ae8849
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
@@ -0,0 +1,19 @@
+Improper output handling can result in Cross-Site Scripting (XSS) when an AI application fails to properly sanitize or encode user-supplied input. This allows an attacker to inject malicious scripts into the application's output, which is then viewed by other users. These scripts execute within each user's browser context, potentially stealing session cookies, redirecting users to malicious sites, or performing other harmful actions.
+
+**Business Impact**
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+1. Input the following crafted text, designed to trigger an XSS payload within an applicable function:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the output of the AI application causes the XSS payload to execute
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/improper_output_handling/guidance.md b/submissions/description/ai_application_security/improper_output_handling/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/.gitkeep b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/guidance.md b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/recommendations.md b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/recommendations.md
new file mode 100644
index 00000000..07d7de77
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/recommendations.md
@@ -0,0 +1,5 @@
+# Recommendation(s)
+
+- Sanitize user-supplied input by removing or escaping Markdown/HTML tags before displaying them.
+- Use a secure Markdown parser or HTML renderer that does not execute untrusted code.
+- Implement Content Security Policy (CSP) to restrict the execution of inline scripts and external resources.
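Escaping Markdown and HTML before display, per the first recommendation, could be sketched as follows (the character set and helper name are illustrative, not from any particular library):

```python
import html

# Markdown punctuation that can change rendering when left unescaped.
MD_SPECIALS = r"\`*_{}[]()#+-.!|>"

def escape_markdown(text: str) -> str:
    """Backslash-escape Markdown syntax, then HTML-escape the result."""
    escaped = "".join("\\" + ch if ch in MD_SPECIALS else ch for ch in text)
    return html.escape(escaped)
```

This forces untrusted text to render as literal characters; a vetted Markdown sanitizer is preferable when some formatting must be preserved.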
diff --git a/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/template.md b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/template.md
new file mode 100644
index 00000000..4b81a528
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/markdown_html_injection/template.md
@@ -0,0 +1,21 @@
+Markdown or HTML injection occurs when an AI application improperly handles user-supplied text or data, allowing an attacker to inject arbitrary Markdown or HTML code into the application's output. This injected code can then be rendered by the browser, leading to visual distortions, malicious links, or even potential Cross-Site Scripting (XSS) vulnerabilities if JavaScript is allowed.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Input the following text or data containing Markdown or HTML code intended to manipulate the output's appearance or functionality:
+
+```input
+ {malicious input}
+```
+
+1. Observe the application's output and see that the Markdown or HTML code is rendered instead of being displayed as plain text.
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/improper_output_handling/recommendations.md b/submissions/description/ai_application_security/improper_output_handling/recommendations.md
new file mode 100644
index 00000000..75e5713b
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Implement output encoding or escaping to sanitize user-supplied data before displaying it.
+- Use Content Security Policy (CSP) to restrict the sources from which scripts can be loaded.
+- Implement input validation to prevent injection of malicious characters or code.
+- Regularly scan the application for XSS vulnerabilities.
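The CSP recommendation above amounts to serializing a set of directives into a single response header. A minimal sketch (the directive set is an illustrative starting point, not a complete policy):

```python
# Directive set disallowing inline scripts and plugin content; adjust per application.
CSP_DIRECTIVES = {
    "default-src": "'self'",
    "script-src": "'self'",
    "object-src": "'none'",
}

def build_csp_header(directives):
    """Serialize directives into a Content-Security-Policy header value."""
    return "; ".join(f"{name} {value}" for name, value in directives.items())

print(build_csp_header(CSP_DIRECTIVES))
# default-src 'self'; script-src 'self'; object-src 'none'
```

The resulting string is sent as the `Content-Security-Policy` response header; browsers then refuse to execute inline scripts or load sources outside the allow-list.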
diff --git a/submissions/description/ai_application_security/improper_output_handling/template.md b/submissions/description/ai_application_security/improper_output_handling/template.md
new file mode 100644
index 00000000..0fe99ed1
--- /dev/null
+++ b/submissions/description/ai_application_security/improper_output_handling/template.md
@@ -0,0 +1,21 @@
+Improper output handling occurs when an AI application improperly handles user-supplied text or data, allowing an attacker to inject arbitrary code into the application's output. This injected code can then be rendered by the browser, leading to visual distortions, malicious links, or even potential Cross-Site Scripting (XSS) vulnerabilities if JavaScript is allowed.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business depends on the sensitivity of the accessible data transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Input the following text or data containing code intended to manipulate the output's appearance or functionality:
+
+```input
+ {malicious input}
+```
+
+1. Observe the application's output and see that the code is rendered instead of being displayed as plain text.
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/.gitkeep b/submissions/description/ai_application_security/insufficient_rate_limiting/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/guidance.md b/submissions/description/ai_application_security/insufficient_rate_limiting/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/.gitkeep b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/guidance.md b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/recommendations.md b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/recommendations.md
new file mode 100644
index 00000000..20d46c5c
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement API rate limiting and throttling based on IP address or API token usage.
+- Monitor API usage for unusual or excessive request patterns.
+- Detect and block automated bots or scripts generating high volumes of requests.
+- Revoke or invalidate stolen or compromised API tokens.
+- Require API tokens to be generated and managed securely.
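The rate-limiting and throttling recommendation is often implemented as a token bucket per API token or IP address; a minimal in-process sketch (class and parameters are illustrative, production deployments usually back this with a shared store):

```python
import time

class TokenBucket:
    """Per-client token bucket: refills `rate` tokens/second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)  # sustain 5 req/s, allow bursts of 10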
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
new file mode 100644
index 00000000..a385b734
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
@@ -0,0 +1,22 @@
+Query flooding or API token abuse occurs when an attacker uses automated tools or scripts to send a large number of requests to the API of an AI application. A lack of rate limiting allows these requests to overwhelm the server's resources, letting an attacker degrade performance or cause a Denial of Service (DoS) for legitimate users.
+
+**Business Impact**
+
+This vulnerability can lead to service disruption, increased server costs, and potentially unauthorized access or data breaches. Legitimate users may be unable to access the application, impacting business operations.
+
+**Steps to Reproduce**
+
+1. Navigate to the following URL and observe the valid API token:
+1. Use the following script to send a high volume of requests to the API using the token:
+
+```python
+ {script}
+```
+
+1. Observe that the application's performance and availability are impacted under the higher load of requests
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/recommendations.md b/submissions/description/ai_application_security/insufficient_rate_limiting/recommendations.md
new file mode 100644
index 00000000..2392b52b
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement rate limiting and throttling based on IP address or API token usage.
+- Monitor API usage for unusual or excessive request patterns.
+- Detect and block automated bots or scripts generating high volumes of requests.
+- Revoke or invalidate stolen or compromised API tokens.
+- Require API tokens to be generated and managed securely.
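A complementary way to apply the first two recommendations is a fixed-window counter keyed by API token or IP address; a minimal sketch (constants and function name are illustrative):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100   # per client per window; tune to expected legitimate usage

_counters = defaultdict(int)   # (client_id, window index) -> request count

def allow_request(client_id, now=None):
    """Fixed-window limiter: at most MAX_REQUESTS per client per window."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    _counters[(client_id, window)] += 1
    return _counters[(client_id, window)] <= MAX_REQUESTS
```

Fixed windows are simpler than token buckets but allow short bursts at window boundaries; either approach blocks sustained flooding.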
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/template.md b/submissions/description/ai_application_security/insufficient_rate_limiting/template.md
new file mode 100644
index 00000000..a0663fc1
--- /dev/null
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/template.md
@@ -0,0 +1,22 @@
+Insufficient rate limiting occurs when an attacker uses automated tools or scripts to send a large number of requests to the AI application. Without rate limiting, an attacker can overwhelm the server's resources, degrade the application's performance, or cause a Denial of Service (DoS) for legitimate users.
+
+**Business Impact**
+
+This vulnerability can lead to service disruption, increased server costs, and potentially unauthorized access or data breaches. Legitimate users may be unable to access the application, impacting business operations.
+
+**Steps to Reproduce**
+
+1. Navigate to the following URL and observe the valid token:
+1. Use the following script to send a high volume of requests to the application using the token:
+
+```python
+ {script}
+```
+
+1. Observe that the application's performance and availability are impacted under the higher load of requests
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
deleted file mode 100644
index 5b196230..00000000
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Recommendation(s)
-
-There is no single technique to prevent excessive agency or permission manipulation from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
-
-- Use Role Based Access Controls (RBAC) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
-- Require user interaction to approve any authorized action that will perform privileged operations on their behalf.
-- Treat user input, external input, and the LLM as untrusted input sources.
-- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
-- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
-- Log and monitor all activity of the LLM and the systems it is connected to.
-
-For more information, refer to the following resources:
-
--
--
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
deleted file mode 100644
index f741c261..00000000
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
+++ /dev/null
@@ -1,22 +0,0 @@
-Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that may be damaging or otherwise harmful. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
-
-**Business Impact**
-
-This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These circumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
-
-**Steps to Reproduce**
-
-1. Navigate to the following URL:
-1. Enter the following prompt into the LLM:
-
-```prompt
- {prompt}
-```
-
-1. Observe that the output from the LLM returns sensitive data
-
-**Proof of Concept (PoC)**
-
-The screenshot(s) below demonstrate(s) the vulnerability:
->
-> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
deleted file mode 100644
index ee36168a..00000000
--- a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Recommendation(s)
-
-There is no single technique to prevent excessive insecure output handling from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
-
-- Apply input validation and sanitization principles for all LLM outputs.
-- Use JavaScript or Markdown to sanitize LLM model outputs that are returned to the user.
-- Use Role Based Access Controls (RBAC) or Identity Access Management (IAM) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
-- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
-- Treat user input, external input, and the LLM as untrusted input sources.
-- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
-- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
-- Log and monitor all activity of the LLM and the systems it is connected to.
-
-For more information, refer to the following resources:
-
--
--
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
deleted file mode 100644
index 0ea4c6a0..00000000
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# Recommendation(s)
-
-There is no single technique to prevent excessive agency or permission manipulation from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
-
-- Verify the integrity, content, and sources, of the training data.
-- Ensure the legitimacy of the data throughout all stages of training.
-- Strictly vet the data inputs and include filtering and sanitization.
-- Use testing and detection mechanisms to monitor the model's outputs and detect any data poisoning attempts.
-
-For more information, refer to the following resources:
-
--
--
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
deleted file mode 100644
index 34b740a2..00000000
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
+++ /dev/null
@@ -1,22 +0,0 @@
-Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. An attacker can undermine the integrity of the LLM by poisoning the training data, resulting in outputs that are unreliable, biased, or unethical. This breach of integrity significantly impacts the model's trustworthiness and accuracy, posing a serious threat to the overall effectiveness and security of the LLM.
-
-**Business Impact**
-
-This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
-
-**Steps to Reproduce**
-
-1. Navigate to the following URL:
-1. Enter the following prompt into the LLM:
-
-```prompt
- {prompt}
-```
-
-1. Observe that the output from the LLM returns a compromised result
-
-**Proof of Concept (PoC)**
-
-The screenshot(s) below demonstrate(s) the vulnerability:
->
-> {{screenshot}}
diff --git a/submissions/description/ai_application_security/model_extraction/.gitkeep b/submissions/description/ai_application_security/model_extraction/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/.gitkeep b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/guidance.md b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/recommendations.md b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/recommendations.md
new file mode 100644
index 00000000..a6736f6d
--- /dev/null
+++ b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Implement rate limiting and access controls on the API to prevent excessive queries.
+- Randomize API responses to make model inference difficult.
+- Employ model obfuscation techniques to protect internal parameters.
+- Monitor API traffic for suspicious query patterns.
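The response-randomization recommendation can be as simple as rounding and lightly perturbing the probability vector before it leaves the API; a minimal sketch (function name and noise level are illustrative assumptions):

```python
import random

def harden_scores(probabilities, decimals=2, noise=0.01, seed=None):
    """Round and lightly perturb class probabilities before returning them,
    reducing the precision an attacker can use for model reconstruction."""
    rng = random.Random(seed)
    noisy = [max(0.0, p + rng.uniform(-noise, noise)) for p in probabilities]
    total = sum(noisy) or 1.0
    # Renormalize so the perturbed scores still sum to (approximately) one.
    return [round(p / total, decimals) for p in noisy]

print(harden_scores([0.71, 0.19, 0.10], seed=1))
```

Coarsened outputs preserve the top-level prediction for legitimate clients while starving extraction attacks of the high-precision confidence values they rely on.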
diff --git a/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/template.md b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/template.md
new file mode 100644
index 00000000..3a76c536
--- /dev/null
+++ b/submissions/description/ai_application_security/model_extraction/api_query_based_model_reconstruction/template.md
@@ -0,0 +1,24 @@
+API query-based model reconstruction is a technique in which an attacker repeatedly queries the API of an AI model to gather enough information to reconstruct a significant portion of the model's internal logic and parameters. This reconstruction allows an attacker to replicate the model's behavior.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to the business's intellectual property. The severity of the impact to the business depends on the sensitivity of the accessible data transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Access the API of the target AI model
+1. Send a large number of diverse queries, such as the following, to the API:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Analyze the API responses to infer the model's internal logic and parameters
+1. Recreate a similar model using the gathered information
+1. Observe that the recreated model behaves like the target AI model
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/model_extraction/guidance.md b/submissions/description/ai_application_security/model_extraction/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/model_extraction/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/recommendations.md b/submissions/description/ai_application_security/model_extraction/recommendations.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/recommendations.md
rename to submissions/description/ai_application_security/model_extraction/recommendations.md
diff --git a/submissions/description/ai_application_security/model_extraction/template.md b/submissions/description/ai_application_security/model_extraction/template.md
new file mode 100644
index 00000000..6571cbde
--- /dev/null
+++ b/submissions/description/ai_application_security/model_extraction/template.md
@@ -0,0 +1,24 @@
+Model extraction occurs when an attacker queries an AI model to gather enough information to reconstruct a significant portion of the model's internal logic and parameters. This reconstruction allows an attacker to replicate the model's behavior.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to the business's intellectual property. The severity of the impact to the business depends on the sensitivity of the accessible data transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Access the API of the target AI model
+1. Send a large number of diverse queries, such as the following, to the API:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Analyze the API responses to infer the model's internal logic and parameters
+1. Recreate a similar model using the gathered information
+1. Observe that the recreated model behaves like the target AI model
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/prompt_injection/.gitkeep b/submissions/description/ai_application_security/prompt_injection/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/prompt_injection/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/prompt_injection/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md b/submissions/description/ai_application_security/prompt_injection/recommendations.md
similarity index 100%
rename from submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
rename to submissions/description/ai_application_security/prompt_injection/recommendations.md
diff --git a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/.gitkeep b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/guidance.md b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/recommendations.md b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/recommendations.md
new file mode 100644
index 00000000..c1fa52d5
--- /dev/null
+++ b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent prompt injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on the user's behalf.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
new file mode 100644
index 00000000..355fee7d
--- /dev/null
+++ b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
@@ -0,0 +1,22 @@
+System prompt leakage occurs when an AI model unintentionally reveals or discloses the hidden instructions and constraints that guide its behavior and responses. Attackers can exploit this to understand the model's underlying configuration and potentially bypass its intended limitations or access sensitive data.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business depends on the sensitivity of the accessible data transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Inject the following prompt, designed to make the model reveal its system prompt, into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Look through the model's responses for information that discloses its internal instructions or constraints
+1. Observe that the information shows the model's operating parameters
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/prompt_injection/template.md
similarity index 98%
rename from submissions/description/ai_application_security/llm_security/prompt_injection/template.md
rename to submissions/description/ai_application_security/prompt_injection/template.md
index e332840d..db4afaa5 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/prompt_injection/template.md
@@ -19,4 +19,4 @@ This vulnerability can lead to reputational and financial damage of the company
The screenshot(s) below demonstrate(s) the vulnerability:
>
-> {{screenshot}}
+> {{screenshot}}
\ No newline at end of file
diff --git a/submissions/description/ai_application_security/remote_code_execution/.gitkeep b/submissions/description/ai_application_security/remote_code_execution/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/.gitkeep b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/guidance.md b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/recommendations.md b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/recommendations.md
new file mode 100644
index 00000000..a5bd41d9
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Regularly update and patch all software components.
+- Implement strong input validation and sanitization to prevent injection attacks.
+- Enforce strict access controls and privilege separation.
+- Utilize secure coding practices and conduct thorough security testing.
+- Monitor system logs and network traffic for suspicious activities.
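The input-validation recommendation is best implemented as an allow-list rather than a deny-list; a minimal sketch (the pattern and field are hypothetical examples, not taken from any specific application):

```python
import re

# Hypothetical allow-list for a user-supplied "document name" field: letters,
# digits, spaces, and limited punctuation, capped at 64 characters.
SAFE_INPUT = re.compile(r"^[A-Za-z0-9 ._-]{1,64}$")

def validate_input(value):
    """Reject anything outside the allow-list rather than trying to strip
    dangerous characters from arbitrary input."""
    if not SAFE_INPUT.fullmatch(value):
        raise ValueError("input rejected by allow-list validation")
    return value
```

An allow-list fails closed: shell metacharacters, template syntax, and path traversal sequences are rejected by default instead of needing to be enumerated.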
diff --git a/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/template.md b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/template.md
new file mode 100644
index 00000000..57615763
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/full_system_compromise/template.md
@@ -0,0 +1,24 @@
+A full system compromise due to Remote Code Execution (RCE) occurs when an attacker can execute arbitrary code on the server hosting the AI application, gaining complete control over the system. This usually results from exploiting vulnerabilities in the AI's software components or through insecure configurations, allowing the attacker to bypass security measures and execute malicious commands.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business depends on the sensitivity of the accessible data transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Trigger the exploit to gain remote access and control
+1. Verify full system access and RCE by performing the following:
+
+{{command}}
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/remote_code_execution/guidance.md b/submissions/description/ai_application_security/remote_code_execution/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/remote_code_execution/recommendations.md b/submissions/description/ai_application_security/remote_code_execution/recommendations.md
new file mode 100644
index 00000000..a5bd41d9
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Regularly update and patch all software components.
+- Implement strong input validation and sanitization to prevent injection attacks.
+- Enforce strict access controls and privilege separation.
+- Utilize secure coding practices and conduct thorough security testing.
+- Monitor system logs and network traffic for suspicious activities.
diff --git a/submissions/description/ai_application_security/remote_code_execution/template.md b/submissions/description/ai_application_security/remote_code_execution/template.md
new file mode 100644
index 00000000..9ec65e03
--- /dev/null
+++ b/submissions/description/ai_application_security/remote_code_execution/template.md
@@ -0,0 +1,24 @@
+Remote Code Execution (RCE) occurs when an attacker can execute arbitrary code on the server hosting the AI application, gaining complete control over the system. This usually results from exploiting vulnerabilities in the AI's software components or from insecure configurations, allowing the attacker to bypass security measures and execute malicious commands.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Trigger the exploit to gain remote access and control
+1. Verify full system access and RCE by performing the following:
+
+{{command}}
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/.gitkeep b/submissions/description/ai_application_security/sensitive_information_disclosure/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/.gitkeep b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/guidance.md b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/recommendations.md b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/recommendations.md
new file mode 100644
index 00000000..5a5b7a38
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement robust tenant isolation and data segmentation measures.
+- Encrypt sensitive data at rest and in transit.
+- Enforce strict access controls and authorization policies.
+- Regularly audit and test the system for potential data leakage.
+- Conduct thorough security reviews during the development process.
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
new file mode 100644
index 00000000..17679020
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
@@ -0,0 +1,21 @@
+Cross-tenant Personally Identifiable Information (PII) leakage or exposure occurs when an AI system unintentionally exposes PII from one tenant to another. An attacker can abuse flaws in the system's isolation mechanisms, or errors in data handling and processing, to access sensitive data intended for a specific user or organization.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Log in to the AI system with credentials for Tenant A
+1. Send the following request targeting the data or resources belonging to Tenant B:
+
+```HTTP
+ {HTTP request}
+```
+1. Observe that PII from Tenant B is disclosed
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/guidance.md b/submissions/description/ai_application_security/sensitive_information_disclosure/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/recommendations.md b/submissions/description/ai_application_security/sensitive_information_disclosure/recommendations.md
new file mode 100644
index 00000000..e2f6d9e2
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Encrypt sensitive data at rest and in transit.
+- Enforce strict access controls and authorization policies.
+- Regularly audit and test the system for potential data leakage.
+- Conduct thorough security reviews during the development process.
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/.gitkeep b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md
new file mode 100644
index 00000000..2c636a6d
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Ensure sandboxed environments are correctly configured and updated with the latest security patches.
+- Minimize the privileges granted to the sandboxed environment.
+- Implement strong security boundaries and isolation mechanisms for containers.
+- Regularly audit and test the sandboxed environment for potential escape vulnerabilities.
+- Monitor system logs for suspicious activity and potential sandbox escape attempts.
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
new file mode 100644
index 00000000..3cfa5a4a
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
@@ -0,0 +1,21 @@
+Remote Code Execution (RCE) via sandboxed container code execution occurs when an attacker breaks out of a sandboxed environment and executes arbitrary code on the host system. This exploits weaknesses in container isolation or configuration, allowing the attacker to gain control beyond the intended limitations of the sandbox.
+
+**Business Impact**
+
+This can lead to data breaches, data manipulation, service disruption, and further attacks on other systems or data on the same host.
+
+**Steps to Reproduce**
+
+1. Identify a vulnerability in the sandboxed environment or its configuration
+1. Execute the following exploit designed to break out of the sandbox:
+
+```python
+ {malicious script}
+```
+1. Observe that arbitrary code can be executed on the host system, outside the sandbox
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/template.md
new file mode 100644
index 00000000..de7077c3
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/template.md
@@ -0,0 +1,21 @@
+Personally Identifiable Information (PII) disclosure occurs when an AI system unintentionally exposes PII. An attacker can abuse flaws in the system's isolation mechanisms, or errors in data handling and processing, to access sensitive data intended for a specific user or organization.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Log in to the AI system with valid credentials
+1. Send the following request targeting the PII data or resources:
+
+```HTTP
+ {HTTP request}
+```
+1. Observe that PII is disclosed
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/training_data_poisoning/.gitkeep b/submissions/description/ai_application_security/training_data_poisoning/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/.gitkeep b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/guidance.md b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/recommendations.md b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/recommendations.md
new file mode 100644
index 00000000..581058b7
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement rigorous data validation and sanitization processes to ensure that the data going into the training process doesn't contain malicious elements.
+- Monitor the training data for anomalies and suspicious patterns to detect potential poisoning attempts before they occur.
+- Implement security controls to prevent unauthorized access to the training data.
+- Use trusted and diverse datasets from reputable sources.
+- Regularly audit and test the model's performance and outputs for bias.
diff --git a/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/template.md b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/template.md
new file mode 100644
index 00000000..2e3508c7
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/backdoor_injection_bias_manipulation/template.md
@@ -0,0 +1,24 @@
+Backdoor injection or bias manipulation occurs when an attacker introduces manipulated or compromised data into the training dataset of an AI model. This can cause the model to learn unintended behaviors or exhibit biased outputs. Through this, an attacker can control the model's responses or performance, undermining trust in its outputs.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Identify the training data used for the AI model
+1. Inject the following manipulated data points that contain a backdoor or skew the data towards a particular bias:
+
+```input
+ {malicious input}
+```
+
+1. Retrain the model using the poisoned dataset
+1. Observe that the model's output shows signs of the injected backdoor or bias
+
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/training_data_poisoning/guidance.md b/submissions/description/ai_application_security/training_data_poisoning/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/training_data_poisoning/recommendations.md b/submissions/description/ai_application_security/training_data_poisoning/recommendations.md
new file mode 100644
index 00000000..581058b7
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Implement rigorous data validation and sanitization processes to ensure that the data going into the training process doesn't contain malicious elements.
+- Monitor the training data for anomalies and suspicious patterns to detect potential poisoning attempts before they occur.
+- Implement security controls to prevent unauthorized access to the training data.
+- Use trusted and diverse datasets from reputable sources.
+- Regularly audit and test the model's performance and outputs for bias.
diff --git a/submissions/description/ai_application_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/training_data_poisoning/template.md
new file mode 100644
index 00000000..73075800
--- /dev/null
+++ b/submissions/description/ai_application_security/training_data_poisoning/template.md
@@ -0,0 +1,23 @@
+Training data poisoning occurs when an attacker introduces manipulated or compromised data into the training dataset of an AI model. This can cause the model to learn unintended behaviors or exhibit biased outputs. Through this, an attacker can control the model's responses or performance, undermining trust in its outputs.
+
+**Business Impact**
+
+This vulnerability can lead to reputational and financial damage to the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Identify the training data used for the AI model
+1. Inject the following manipulated data points that skew the data towards a particular bias:
+
+```input
+ {malicious input}
+```
+
+1. Retrain the model using the poisoned dataset
+1. Observe that the model's output shows signs of the injected bias
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/.gitkeep b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/.gitkeep b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/guidance.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/recommendations.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/recommendations.md
new file mode 100644
index 00000000..753e84cb
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Limit or control access to vector embeddings or similar internal representations.
+- Apply obfuscation or encryption to embeddings when exposed through APIs.
+- Implement rate limiting or throttling for embedding retrieval requests.
+- Monitor for unusual patterns in embedding access or retrieval.
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
new file mode 100644
index 00000000..bda429ad
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
@@ -0,0 +1,21 @@
+Embedding exfiltration or model extraction occurs when an attacker attempts to steal or recreate the internal vector representations used by an AI model. By analyzing these embeddings, attackers can gain insights into the model's knowledge, data relationships, or even reconstruct parts of the model itself.
+
+**Business Impact**
+
+This vulnerability can lead to the loss of intellectual property and competitive advantage if sensitive model information is extracted. Extracted embeddings may be used to replicate model functionality or gain unauthorized insights into training data.
+
+**Steps to Reproduce**
+
+1. Identify that the following methods or APIs expose vector embeddings or similar representations:
+1. Use the following techniques to extract or infer these embeddings from model interactions:
+
+```python
+ {script}
+```
+1. Analyze the extracted embeddings for patterns and observe information about the model's knowledge
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/guidance.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/recommendations.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/recommendations.md
new file mode 100644
index 00000000..753e84cb
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Limit or control access to vector embeddings or similar internal representations.
+- Apply obfuscation or encryption to embeddings when exposed through APIs.
+- Implement rate limiting or throttling for embedding retrieval requests.
+- Monitor for unusual patterns in embedding access or retrieval.
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/.gitkeep b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/guidance.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/recommendations.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/recommendations.md
new file mode 100644
index 00000000..8ae68f48
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/recommendations.md
@@ -0,0 +1,6 @@
+# Recommendation(s)
+
+- Improve the AI system's semantic understanding through advanced training techniques and natural language processing (NLP) models.
+- Implement rigorous testing and validation to identify and address semantic indexing errors.
+- Provide feedback mechanisms for users to report inaccuracies or misinterpretations.
+- Regularly update and refine the semantic index based on user feedback and new data.
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/template.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/template.md
new file mode 100644
index 00000000..a82866dd
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/semantic_indexing/template.md
@@ -0,0 +1,22 @@
+Semantic indexing weaknesses occur when the AI system's understanding and representation of meaning is flawed or inconsistent. Attackers can exploit these weaknesses to manipulate search results, retrieve incorrect information, or bypass content filters by using ambiguous or deceptive language that the system misinterprets.
+
+**Business Impact**
+
+Users may lose trust in the AI system's accuracy and reliability. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+**Steps to Reproduce**
+
+1. Craft and send the following ambiguous or deceptive queries that exploit flaws in the AI's semantic understanding:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Analyze the search results or information provided by the system
+1. Observe that there are cases where the system has misinterpreted the query's meaning or provided inaccurate information
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
new file mode 100644
index 00000000..18f0c7df
--- /dev/null
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
@@ -0,0 +1,21 @@
+Vector and embedding weaknesses occur when an attacker is able to steal or recreate the internal vector representations used by an AI model. By analyzing these embeddings, attackers can gain insights into the model's knowledge, data relationships, or even reconstruct parts of the model itself.
+
+**Business Impact**
+
+This vulnerability can lead to the loss of intellectual property and competitive advantage if sensitive model information is extracted. Extracted embeddings may be used to replicate model functionality or gain unauthorized insights into training data.
+
+**Steps to Reproduce**
+
+1. Identify that the following methods or APIs expose vector embeddings or similar representations:
+1. Use the following techniques to extract or infer these embeddings from model interactions:
+
+```python
+ {script}
+```
+1. Analyze the extracted embeddings for patterns and observe information about the model's knowledge
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
From 40faa6b69ccf9f4d0eb3e3103f35a0fcc2092bf3 Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Mon, 2 Jun 2025 12:37:40 +1000
Subject: [PATCH 2/4] Fixes to address linting errors
---
.../query_flooding_api_token_abuse/template.md | 3 +++
.../insufficient_rate_limiting/template.md | 3 +++
.../prompt_injection/system_prompt_leakage/template.md | 2 +-
.../ai_application_security/prompt_injection/template.md | 2 +-
.../cross_tenant_pii_leakage_exposure/template.md | 1 +
.../sandboxed_container_code_execution/template.md | 3 ++-
.../sensitive_information_disclosure/template.md | 3 ++-
.../vector_and_embedding_weaknesses/template.md | 3 ++-
8 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
index a385b734..8e806e78 100644
--- a/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/query_flooding_api_token_abuse/template.md
@@ -1,15 +1,18 @@
Query flooding or API token abuse occurs when an attacker uses automated tools or scripts to send a large number of requests to the API of an AI application. A lack of rate limiting can overwhelm the server resources, allowing an attacker to degrade performance, or perform a Denial of Service (DoS) for legitimate users.
**Business Impact**
+
Service disruption, increased server costs, and potential for unauthorized access or data breaches. Legitimate users may be unable to access the application, impacting business operations.
**Steps to Reproduce**
+
1. Navigate to the following URL and observe the valid API token:
1. Use the following script to send a high volume of requests to the API using the token:
```python
{script}
```
+
1. Observe that the application's performance and availability are impacted under the higher load of requests
**Proof of Concept (PoC)**
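The rate-limiting templates above leave `{script}` to the reporter. A minimal hypothetical sketch of such a flooding script follows; `send_request` stands in for the real authenticated API call (for example a `requests.post` with the token observed in step 1):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def flood(send_request, total=100, workers=10):
    # Fire `total` calls through a thread pool and time each one;
    # climbing latencies or error responses under load suggest the
    # endpoint is not rate limited.
    def timed_call(i):
        start = time.perf_counter()
        ok = send_request(i)
        return ok, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(timed_call, range(total)))
    failures = sum(1 for ok, _ in results if not ok)
    return failures, results

def fake_request(i):
    # Stub standing in for an authenticated call such as
    # requests.post(API_URL, headers={"Authorization": f"Bearer {token}"}, ...)
    return True

failures, results = flood(fake_request, total=20, workers=5)
```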
diff --git a/submissions/description/ai_application_security/insufficient_rate_limiting/template.md b/submissions/description/ai_application_security/insufficient_rate_limiting/template.md
index a0663fc1..8ebf9fba 100644
--- a/submissions/description/ai_application_security/insufficient_rate_limiting/template.md
+++ b/submissions/description/ai_application_security/insufficient_rate_limiting/template.md
@@ -1,15 +1,18 @@
Insufficient rate limiting occurs when an attacker uses automated tools or scripts to send a large number of requests to the AI application. An attacker can overwhelm the server resources and degrade the performance of the application, or cause a Denial of Service (DoS) for legitimate users.
**Business Impact**
+
Service disruption, increased server costs, and potential for unauthorized access or data breaches. Legitimate users may be unable to access the application, impacting business operations.
**Steps to Reproduce**
+
1. Navigate to the following URL and observe the valid token:
1. Use the following script to send a high volume of requests to the application using the token:
```python
{script}
```
+
1. Observe that the application's performance and availability are impacted under the higher load of requests
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
index 355fee7d..7bd02b52 100644
--- a/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
+++ b/submissions/description/ai_application_security/prompt_injection/system_prompt_leakage/template.md
@@ -13,7 +13,7 @@ This vulnerability can lead to reputational and financial damage of the company
```
1. Look through the model's responses for information that discloses its internal instructions or constraints
-3. Observe that the information shows the model's operating parameters
+1. Observe that the information shows the model's operating parameters
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/prompt_injection/template.md b/submissions/description/ai_application_security/prompt_injection/template.md
index db4afaa5..e332840d 100644
--- a/submissions/description/ai_application_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/prompt_injection/template.md
@@ -19,4 +19,4 @@ This vulnerability can lead to reputational and financial damage of the company
The screenshot(s) below demonstrate(s) the vulnerability:
>
-> {{screenshot}}
\ No newline at end of file
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
index 17679020..1cb23034 100644
--- a/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/cross_tenant_pii_leakage_exposure/template.md
@@ -12,6 +12,7 @@ This vulnerability can lead to reputational and financial damage of the company
```HTTP
{HTTP request}
```
+
1. Observe that PII from Tenant B is disclosed
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
index 3cfa5a4a..7a397ac7 100644
--- a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
@@ -9,9 +9,10 @@ This can lead to data breaches, data manipulation, service disruption, and furth
1. Identify a vulnerability in the sandboxed environment or its configuration
1. Execute the following exploit designed to break out of the sandbox:
-```python
+``` python
{malicious script}
```
+
1. Verify and observe that arbitrary code can be executed on the host system outside the sandbox
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/template.md
index de7077c3..d3993f65 100644
--- a/submissions/description/ai_application_security/sensitive_information_disclosure/template.md
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/template.md
@@ -9,9 +9,10 @@ This vulnerability can lead to reputational and financial damage of the company
1. Log in to the AI system with credentials for Tenant A
1. Send the following request targeting the PII data or resources:
-```HTTP
+``` HTTP
{HTTP request}
```
+
1. Observe that PII is disclosed
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
index 18f0c7df..8a20d813 100644
--- a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/template.md
@@ -12,7 +12,8 @@ Loss of intellectual property and competitive advantage if sensitive model infor
```python
{script}
```
-3. Analyze the extracted embeddings for patterns and observer information about the model's knowledge
+
+1. Analyze the extracted embeddings for patterns and observe information about the model's knowledge
**Proof of Concept (PoC)**
From 13131a05d58b8cc79dc82e305f73cc924ee65260 Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Mon, 2 Jun 2025 13:12:18 +1000
Subject: [PATCH 3/4] Updates to address linting errors
---
.../ai_misclassification_attacks/template.md | 5 ++++-
.../template.md | 2 ++
.../application_wide/template.md | 18 +++++++-----------
.../denial_of_service_dos/template.md | 5 ++++-
.../tenant_scoped/template.md | 5 ++++-
.../ansi_escape_codes/template.md | 2 ++
.../cross_site_scripting_xss/template.md | 2 ++
.../template.md | 3 ++-
8 files changed, 27 insertions(+), 15 deletions(-)
diff --git a/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
index da4b0dd6..d2437f10 100644
--- a/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
+++ b/submissions/description/ai_application_security/adversarial_example_injection/ai_misclassification_attacks/template.md
@@ -1,15 +1,18 @@
AI misclassification attacks occur when an attacker introduces specially crafted input designed to trick the AI model into making an incorrect prediction or classification. These inputs, known as adversarial examples, are often subtle modifications to legitimate data that are imperceptible to humans but can significantly alter the AI’s output.
**Business Impact**
+
This vulnerability can lead to reputational and financial damage of the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
**Steps to Reproduce**
+
1. Identify the expected inputs of the AI model
1. Generate adversarial examples by adding small, targeted perturbations to legitimate inputs:
-```prompt
+```
{malicious input}
```
+
1. Submit the adversarial examples to the AI model
1. Observe that the model misclassifies the modified input compared to its expected classification
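The `{malicious input}` placeholder above is deliberately generic. As an illustrative toy only (real attacks such as FGSM use the model's gradients), a targeted perturbation against a simple linear classifier shows the idea of small input changes flipping a classification:

```python
def classify(weights, x, bias=0.0):
    # Toy linear classifier: class 1 if the weighted sum is positive
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def perturb(weights, x, eps=0.5):
    # Shift every feature by eps in the direction that lowers the
    # score, i.e. against the sign of its weight
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [1.0, -2.0]
x = [1.0, 0.25]            # score 0.5 -> class 1
adv = perturb(weights, x)  # [0.5, 0.75], score -1.0 -> class 0
```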
diff --git a/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
index 667d21c3..2c68e7cf 100644
--- a/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
+++ b/submissions/description/ai_application_security/ai_safety/misinformation_wrong_factual_data/template.md
@@ -1,9 +1,11 @@
AI models can generate or present inaccurate, false, or misleading information as fact. Misinformation or wrong factual data can happen due to errors in the model's training data, hallucinations (fabrication of information), or a failure to cross-reference with reliable sources.
**Business Impact**
+
Users may receive and act upon incorrect information, leading to flawed decision-making, reputational damage for the service provider, and potential legal liabilities. There is also a loss of trust in the AI's reliability and accuracy.
**Steps to Reproduce**
+
1. Submit the following prompts that require factual information
```prompt
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
index be0084d6..ac682026 100644
--- a/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
+++ b/submissions/description/ai_application_security/denial_of_service_dos/application_wide/template.md
@@ -1,23 +1,19 @@
Application-wide Denial-of-Service (DoS) occurs when an attacker attempts to overload the entire AI application with requests or malicious input, rendering the application unavailable to legitimate users. This can be achieved by sending a flood of queries that exploit resource-intensive processes, or by triggering application crashes.
**Business Impact**
-Complete unavailability of the AI application, leads to service disruption, financial loss, reputational damage, and potential loss of user data.
-**Steps to Reproduce**
-
-1. Develop a script or tool to send a high volume of requests to the AI application.
-2. Identify and target resource-intensive features or API endpoints.
-3. Execute the attack and monitor the application's response and availability.
+This vulnerability can lead to reputational and financial damage of the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+**Steps to Reproduce**
-1. Navigate to the following URL:
-1. Inject the following prompt into the LLM:
+1. Identify resource-intensive features or API endpoints within the AI application
+1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at the application
-```prompt
- {malicious prompt}
+```python
+ {malicious script}
```
-1. Observe that the LLM returns sensitive data
+1. Observe that the application's availability and performance are degraded for legitimate users
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/template.md b/submissions/description/ai_application_security/denial_of_service_dos/template.md
index 583309fc..249986d4 100644
--- a/submissions/description/ai_application_security/denial_of_service_dos/template.md
+++ b/submissions/description/ai_application_security/denial_of_service_dos/template.md
@@ -1,16 +1,19 @@
Denial-of-Service (DoS) occurs when an attacker targets and overwhelms the resources of an AI application. This can be achieved through excessive requests, resource-intensive queries, or exploiting vulnerabilities specific to the tenant's configuration. An attacker can leverage this vulnerability to cause disruption or unavailability for that specific tenant without affecting other tenants.
**Business Impact**
+
This vulnerability can lead to reputational and financial damage of the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
**Steps to Reproduce**
+
1. Obtain access to an account within a specific tenant
1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at that tenant's resources
```python
{malicious script}
```
-1. BOserve that the target tenant's service availability and performance is degraded
+
+1. Observe that the target tenant's service availability and performance are degraded
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
index 333beb25..8ec6fb3c 100644
--- a/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
+++ b/submissions/description/ai_application_security/denial_of_service_dos/tenant_scoped/template.md
@@ -1,16 +1,19 @@
Tenant-Scoped Denial-of-Service (DoS) occurs when an attacker specifically targets and overwhelms a single tenant's resources within a multi-tenant AI application. This can be achieved through excessive requests, resource-intensive queries, or exploiting vulnerabilities specific to the tenant's configuration. An attacker can leverage this vulnerability to cause disruption or unavailability for that specific tenant without affecting other tenants.
**Business Impact**
+
This vulnerability can lead to reputational and financial damage of the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
**Steps to Reproduce**
+
1. Obtain access to an account within a specific tenant
1. Execute the following script to generate a high volume of requests or resource-intensive operations directed at that tenant's resources
```python
{malicious script}
```
-1. BOserve that the target tenant's service availability and performance is degraded
+
+1. Observe that the target tenant's service availability and performance are degraded
**Proof of Concept (PoC)**
diff --git a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
index 83b62430..cbd00792 100644
--- a/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
+++ b/submissions/description/ai_application_security/improper_input_handling/ansi_escape_codes/template.md
@@ -1,9 +1,11 @@
ANSI escape code injection occurs when an attacker uses specially crafted ANSI escape sequences within user-supplied input to manipulate either the terminal output, or the behavior of the system receiving that input. This can lead to an attacker creating visual distortions, hiding data, or even achieving remote code execution in vulnerable systems that interpret these codes incorrectly.
**Business Impact**
+
This vulnerability can lead to reputational and financial damage of the company. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
**Steps to Reproduce**
+
1. Use the following crafted input containing specific ANSI escape sequences for functions:
```input
diff --git a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
index 35ae8849..08f5967f 100644
--- a/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
+++ b/submissions/description/ai_application_security/improper_output_handling/cross_site_scripting_xss/template.md
@@ -1,9 +1,11 @@
Improper output handling can result in Cross-Site Scripting (XSS), where an AI application fails to properly sanitize or encode user-supplied input. This allows an attacker to inject malicious scripts into the application, where the output is viewed by other users. These scripts execute within the user's browser context, potentially stealing session cookies, redirecting users to malicious sites, or performing other harmful actions.
**Business Impact**
+
This vulnerability can lead to reputational and financial damage of the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
**Steps to Reproduce**
+
1. Input the following specifically crafted text/data designed to trigger an XSS payload within an applicable function:
```prompt
diff --git a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
index bda429ad..5f6832ea 100644
--- a/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
+++ b/submissions/description/ai_application_security/vector_and_embedding_weaknesses/embedding_exfiltration_model_extraction/template.md
@@ -12,7 +12,8 @@ Loss of intellectual property and competitive advantage if sensitive model infor
```python
{script}
```
-3. Analyze the extracted embeddings for patterns and observer information about the model's knowledge
+
+1. Analyze the extracted embeddings for patterns and observe information about the model's knowledge
**Proof of Concept (PoC)**
From 73fd7e7321096878f4a3f2ac357db3263ad1c83a Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 6 Jun 2025 10:00:59 +1000
Subject: [PATCH 4/4] Update to templates to match VRT
---
.../.gitkeep | 0
.../guidance.md | 0
.../recommendations.md | 0
.../template.md | 0
.../key_leak/.gitkeep | 0
.../key_leak/guidance.md | 5 ++++
.../key_leak/recommendations.md | 7 +++++
.../key_leak/template.md | 26 +++++++++++++++++++
8 files changed, 38 insertions(+)
rename submissions/description/ai_application_security/{sensitive_information_disclosure => remote_code_execution}/sandboxed_container_code_execution/.gitkeep (100%)
rename submissions/description/ai_application_security/{sensitive_information_disclosure => remote_code_execution}/sandboxed_container_code_execution/guidance.md (100%)
rename submissions/description/ai_application_security/{sensitive_information_disclosure => remote_code_execution}/sandboxed_container_code_execution/recommendations.md (100%)
rename submissions/description/ai_application_security/{sensitive_information_disclosure => remote_code_execution}/sandboxed_container_code_execution/template.md (100%)
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/.gitkeep
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/guidance.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/recommendations.md
create mode 100644 submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/template.md
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/.gitkeep b/submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/.gitkeep
similarity index 100%
rename from submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/.gitkeep
rename to submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/.gitkeep
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md b/submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/guidance.md
similarity index 100%
rename from submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/guidance.md
rename to submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/guidance.md
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md b/submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/recommendations.md
similarity index 100%
rename from submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/recommendations.md
rename to submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/recommendations.md
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md b/submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/template.md
similarity index 100%
rename from submissions/description/ai_application_security/sensitive_information_disclosure/sandboxed_container_code_execution/template.md
rename to submissions/description/ai_application_security/remote_code_execution/sandboxed_container_code_execution/template.md
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/.gitkeep b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/guidance.md b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/recommendations.md b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/recommendations.md
new file mode 100644
index 00000000..23f8449b
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/recommendations.md
@@ -0,0 +1,7 @@
+# Recommendation(s)
+
+- Don't hardcode sensitive keys in the application's code.
+- Use secure key management systems and environment variables.
+- Encrypt sensitive data at rest and in transit.
+- Regularly rotate cryptographic keys and API keys.
+- Scan code repositories for accidentally committed keys.
diff --git a/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/template.md b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/template.md
new file mode 100644
index 00000000..61f6011c
--- /dev/null
+++ b/submissions/description/ai_application_security/sensitive_information_disclosure/key_leak/template.md
@@ -0,0 +1,26 @@
+Key leak occurs when sensitive cryptographic keys or API keys used by the AI application are unintentionally exposed. These keys can provide unauthorized access to the system or its components, leading to data breaches, data manipulation, and other malicious activities. An attacker can identify leaked keys in code repositories, configuration files, or through insecure transmission.
+
+**Business Impact**
+
+Unauthorized access to critical systems and data, potential compromise of sensitive information, and significant financial losses. Reputational damage due to security breaches and loss of customer trust.
+
+**Steps to Reproduce**
+
+1. Go to the following location: {{URL/location}}
+1. Observe that the following API keys or cryptographic keys are hardcoded in the application's code or configuration files:
+
+```markdown
+ {API or cryptographic keys}
+```
+
+1. Send the following request to demonstrate that the leaked keys are valid:
+
+```HTTP
+ {HTTP request}
+```
+
+**Proof of Concept (PoC)**
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
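The key-leak recommendations above include scanning code repositories for accidentally committed keys. A toy scanner might look like the following; the patterns are illustrative only, and dedicated tools (e.g. gitleaks, trufflehog) ship far more comprehensive rule sets:

```python
import re

# Illustrative patterns only; real scanners use far larger rule sets
KEY_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key[\"']?\s*[=:]\s*[\"']([A-Za-z0-9_-]{16,})[\"']"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    # Return (rule_name, matched_text) pairs for likely hardcoded keys
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'config = {"api_key": "abcd1234efgh5678ijkl"}\nkey = "AKIAABCDEFGHIJKLMNOP"'
hits = scan_text(sample)
```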