
Conversation

@EvenSol (Owner) commented Nov 29, 2025

Summary

  • add iterative auto-correction that reruns the generated NeqSim script until it succeeds or reaches a retry limit (a minimal retry loop is sketched after this list)
  • capture each failed attempt and show expandable error traces when validation cannot complete
  • persist the last validated or suggested script so users can rerun or refine with new prompts
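
As a rough illustration of the auto-correction described in the first bullet, the sketch below reruns the generated script and feeds each failure back into a correction request. It is a minimal sketch only: generate_script, run_script, request_correction, and MAX_RETRIES are hypothetical names, not the helpers used in this PR.

# Hypothetical sketch of the retry loop; helper names are placeholders.
MAX_RETRIES = 3

def run_with_auto_correction(prompt: str) -> tuple[str, list[str]]:
    script = generate_script(prompt)      # first LLM-suggested NeqSim script
    errors: list[str] = []                # failed attempts, shown as expandable traces
    for attempt in range(MAX_RETRIES):
        try:
            run_script(script)            # execute the candidate script
            return script, errors         # success: persist this script for reruns
        except Exception as exc:          # capture the failure for the next round
            errors.append(f"Attempt {attempt + 1}: {exc}")
            script = request_correction(prompt, script, str(exc))
    return script, errors                 # retry limit reached; surface the traces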

Testing

  • Not run (not requested)

Codex Task

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Comment on lines +70 to +73
exec(  # nosec
    code,
    {"__builtins__": {"range": range, "len": len, "min": min, "max": max}},
    local_env,

P1: Allow generated scripts to import required modules

The exec sandbox strips __import__ by replacing __builtins__ with only range/len/min/max, so any generated script that includes an import statement (e.g., the common from neqsim.thermo import fluid or import pandas as pd) raises ImportError: __import__ not found before any process logic runs. Because the auto-correction loop keeps using the same sandbox, these scripts will never succeed even after retries, preventing users from running most model-generated NeqSim examples.
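
One way to address this, sketched under the assumption that the generated script (the code variable in the quoted snippet) is still treated as trusted input: keep the restricted builtins but expose __import__ so import statements can resolve. This is an illustration, not the PR's actual fix.

import builtins

safe_builtins = {
    "range": range,
    "len": len,
    "min": min,
    "max": max,
    "__import__": builtins.__import__,  # lets `from neqsim.thermo import fluid` or `import pandas as pd` resolve
}
local_env = {}
exec(  # nosec - same trust assumption as the original snippet
    code,  # the generated NeqSim script, as in the quoted diff
    {"__builtins__": safe_builtins},
    local_env,
)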

Comment on lines +105 to +110
def request_process_script(prompt_text: str, *, model: str, api_key: str) -> str:
    OpenAI.api_key = api_key
    client = OpenAI(api_key=api_key)
    completion = client.completions.create(
        model=model,
        prompt=prompt_text,

P1: gpt-4o-mini unsupported by completions endpoint

The request uses client.completions.create(model=model, ...), but the sidebar offers gpt-4o-mini, which is a chat-only model and is rejected by the completions API. Selecting that option results in an API error instead of a generated script, blocking users from using the default newer model. The call needs the chat completions endpoint for chat models or the option should be limited to completion-capable models.
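
For illustration, a call against the chat completions endpoint could look like the sketch below. It uses the openai>=1.0 Python client (client.chat.completions.create); the surrounding prompt handling is assumed rather than taken from the PR, and the class-level OpenAI.api_key assignment from the quoted code is dropped because the v1 client takes the key via its constructor.

from openai import OpenAI

def request_process_script(prompt_text: str, *, model: str, api_key: str) -> str:
    client = OpenAI(api_key=api_key)
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt_text}],
    )
    return completion.choices[0].message.content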
