HyperPrompt

An open-source LLM evaluation framework for 10X-improved response delivery.

Working LLM Development Environments

  • ChatGPT 3.5
  • ChatGPT 4o
  • Claude Opus
  • Claude Sonnet

Note

HyperPrompt is a simple evaluation-language framework that helps guide LLM-based agents toward better reasoning, where the weights can be determined by the prompter. In the example showcased, we utilise a simple setup for RUST + ANCHOR_LANG development, where the user can insert components into the prompt wherever the necessary XML tags are present. Over 100 hours of development have gone into HyperPrompt, with a roughly 3X increase in developer productivity when interfacing with the framework.

HyperPrompt Syntax

{HyperPrompt SYNTAX}

initHyperPrompt = "start of the response evaluation framework for a user's input"

<promptValue=> = "How strictly the agent will stick to the HyperPrompt evaluation; for example, promptValue=1 would use ALL functions of the HyperPrompt."

<inputUser> = "The message received from the prompting user."

<CHECK> = "Thoroughly review all context within the prompt and check for previous context in the conversation window"

<THINK> = "Encompass the aim of the user, thinking about solutions without offering them as a response"

<REVIEW_STANDARDS> = "Identify the coding standards of the user (e.g. RUST, TSX, HTML) and refer to the latest trained documentation and security reports"

<EVALUATE_RESPONSE> = "Create a value data point based on how far the proposed response will go in reaching and encompassing the aim of the user."

<llmNOrespond> = "ABSOLUTELY DO NOT RESPOND UNTIL <deliverResponse>"

--x-- = "A value for the previously attached command, e.g. how much the LLM should evaluate versus respond instantly"

EvaluateAnswer= = "Extensive internal review of the proposed response to the user; the higher the value, the more you should self-critique your response"

<improveRespv4> = "The degree to which the LLM should aim to improve its response before delivering it to the user, with v1 as partially and v4 as fully."

--IF:CODE = "Check if the user's prompt contains ANY code from any language"

<noLLMhallucinate> = "Absolutely do not hallucinate; provide only factual, document-driven data and responses. Useful in code."

x| = "IMPORTANT SETTING"

<useLatestVersions> = "When relying on any third-party sources, packages, code, or documentation, you should always use the latest version if supplied with this command"

REFER:! = "As an LLM, when noticing this you are forced to refer to the code documentation for the provided languages"

{ 'Documentation' } = "Refer to the documentation of the topic supplied in the prompt"

--in: = "Which coding languages to refer to; can also be utilised for subjects outside of engineering"

<deliverResponse> = "Finally, as an LLM you can respond to the user, providing a response based on all provided commands within the HyperPrompt."

<deliverCodeResponse> = "Respond using only code and comments for the request."
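
Example HyperPrompt

A minimal sketch of how these commands might compose into a single prompt for a RUST + ANCHOR_LANG question. The user message, the EvaluateAnswer value, and the ordering of the mid-prompt commands are hypothetical placeholders; only the command names come from the syntax above.

initHyperPrompt
<promptValue=1>
<inputUser> How do I derive a PDA for my escrow account? </inputUser>
<llmNOrespond>
<CHECK>
<THINK>
<REVIEW_STANDARDS>
--IF:CODE <noLLMhallucinate>
x| <useLatestVersions>
REFER:! { 'Documentation' } --in: RUST, ANCHOR_LANG
EvaluateAnswer= --3--
<EVALUATE_RESPONSE>
<improveRespv4>
<deliverResponse>

Here promptValue=1 applies ALL functions of the HyperPrompt, <llmNOrespond> holds the answer back until <deliverResponse>, and REFER:! forces a documentation lookup for the languages listed after --in:. Swapping <deliverResponse> for <deliverCodeResponse> would restrict the answer to code and comments only.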

TODO:

  • HyperPromptV2
  • HyperPromptAUDIT ("Framework for LLM-powered smart contract auditing, developed utilising the HyperPrompt Evaluation Framework")

Important

I am not responsible for any responses generated by third-party LLMs utilising the HyperPrompt framework, or for any further developments utilising its syntax.
