
Conversation

**@apbari** (Collaborator) commented Jan 9, 2026

- Add dbt plugin entry point and command handler
- Implement dynamic profiles.yml generation
- Support JWT authentication with Snowflake
- Enable selector and model targeting
- Stream dbt output to Heimdall logs
Copilot AI review requested due to automatic review settings (January 9, 2026 09:27)

**@apbari** changed the title from "feat: Add dbt plugin for Heimdall-based dbt execution" to "Add dbt plugin for Heimdall-based dbt execution" on Jan 9, 2026
**@wiz-55ccc8b716** (bot) commented Jan 9, 2026

Wiz Scan Summary

| Scanner | Findings |
| --- | --- |
| Vulnerabilities | - |
| Sensitive Data | - |
| Secrets | - |
| IaC Misconfigurations | - |
| SAST Findings | 3 Medium |
| Software Supply Chain Findings | - |
| **Total** | **3 Medium** |

View scan details in Wiz

To detect these findings earlier in the dev lifecycle, try using Wiz Code VS Code Extension.

**Copilot AI** (Contributor) left a comment

Pull request overview

This PR adds a new dbt plugin for Heimdall that enables execution of dbt commands against Snowflake through a centralized job management system. The plugin dynamically generates dbt profiles, handles JWT authentication, and streams dbt output to Heimdall logs.

Key changes:

- New plugin structure with entry point for dbt command execution
- Dynamic profiles.yml generation with Snowflake JWT authentication
- Support for dbt commands (build, run, test, snapshot) with selectors, models, threads, and variable configurations

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.

| File | Description |
| --- | --- |
| plugins/dbt/dbt.go | Plugin entry point that instantiates the dbt command handler |
| plugins/dbt/README.md | Comprehensive documentation covering configuration, usage, authentication, and requirements |
| internal/pkg/object/command/dbt/dbt.go | Core implementation including profile generation, command building, and execution logic |


```yaml
account: %s
user: %s
role: %s
database: analytics_dev_db
```

**Copilot AI** commented Jan 9, 2026

The database name 'analytics_dev_db' is hardcoded for the dev target. This should be derived from the cluster context or made configurable. Consider using the same database as prod with a different schema, or adding a 'dev_database' field to the cluster context.
```yaml
schema: %s
authenticator: snowflake_jwt
private_key_path: %s
threads: 4
```

**Copilot AI** commented Jan 9, 2026

The threads value is hardcoded to 4 in both the prod and dev outputs. This should use the jobContext.Threads value that's passed as a parameter to allow per-job configuration of thread count. The threads parameter is already being set in the dbt command itself (line 214), but it would be better to use it in the profile as well for consistency.
Comment on lines +223 to +231

```go
	if len(jobCtx.Vars) > 0 {
		varsStr := ""
		for k, v := range jobCtx.Vars {
			if varsStr != "" {
				varsStr += ","
			}
			varsStr += fmt.Sprintf("%s:%s", k, v)
		}
		cmd = append(cmd, "--vars", fmt.Sprintf("{%s}", varsStr))
```

**Copilot AI** commented Jan 9, 2026

The vars are formatted as comma-separated key:value pairs, but dbt expects JSON format for the --vars argument. This should be properly JSON-encoded instead. For example, if vars contains {"key1": "value1", "key2": "value2"}, the current implementation would produce "{key1:value1,key2:value2}", which is not valid JSON. It should produce '{"key1":"value1","key2":"value2"}' instead.
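
A JSON-encoded `--vars` payload can be produced directly with `encoding/json`, which also handles quoting and escaping. A minimal sketch, with `buildVarsArg` as an illustrative helper name:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildVarsArg JSON-encodes the vars map so dbt's --vars flag receives
// valid JSON, e.g. {"key1":"value1","key2":"value2"}.
func buildVarsArg(vars map[string]string) (string, error) {
	b, err := json.Marshal(vars)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	arg, err := buildVarsArg(map[string]string{"run_date": "2026-01-09"})
	if err != nil {
		panic(err)
	}
	// The result would then be appended as: cmd = append(cmd, "--vars", arg)
	fmt.Println(arg)
}
```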
Comment on lines +213 to +215

```go
	if jobCtx.Threads > 0 {
		cmd = append(cmd, "--threads", fmt.Sprintf("%d", jobCtx.Threads))
	}
```

**Copilot AI** commented Jan 9, 2026

The threads parameter is only added to the command when it's greater than 0, but lines 74-76 ensure a default of 4 is always set. This means the condition 'if jobCtx.Threads > 0' will always be true. Consider changing the condition or removing it since threads will always have a value.

Suggested change:

```diff
-if jobCtx.Threads > 0 {
-	cmd = append(cmd, "--threads", fmt.Sprintf("%d", jobCtx.Threads))
-}
+cmd = append(cmd, "--threads", fmt.Sprintf("%d", jobCtx.Threads))
```
Comment on lines +235 to +237

```go
	if jobCtx.Target != "" {
		cmd = append(cmd, "--target", jobCtx.Target)
	}
```

**Copilot AI** commented Jan 9, 2026

The target parameter is added to the command even though it's already set in the handler function with a default value on lines 77-79. This means the condition 'if jobCtx.Target != ""' will always be true. Consider removing this redundant check.

Suggested change:

```diff
-if jobCtx.Target != "" {
-	cmd = append(cmd, "--target", jobCtx.Target)
-}
+cmd = append(cmd, "--target", jobCtx.Target)
```
Comment on lines +132 to +135

```go
	role := clusterCtx.Role
	if role == "" {
		role = "PUBLIC" // Default role if not specified
	}
```

**Copilot AI** commented Jan 9, 2026

The default role 'PUBLIC' may not be appropriate for production use. PUBLIC typically has limited permissions in Snowflake. Consider either requiring the role to be specified in the cluster configuration or using a more descriptive error message to indicate that the role should be configured.

Suggested change:

```diff
-role := clusterCtx.Role
-if role == "" {
-	role = "PUBLIC" // Default role if not specified
-}
+if clusterCtx.Role == "" {
+	return "", fmt.Errorf("missing required Snowflake role; please configure a role instead of relying on the default PUBLIC role")
+}
+role := clusterCtx.Role
```

- dbt-core >= 1.7.0
- dbt-snowflake >= 1.7.0
- Valid Snowflake credentials in cluster configuration
Copy link

Copilot AI Jan 9, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The documentation states that dbt-core and dbt-snowflake version >= 1.7.0 are required, but there's no version validation in the code. Consider adding a version check during initialization to provide clear error messages if the required versions are not available, or documenting how version compatibility is enforced.

Suggested change
- Valid Snowflake credentials in cluster configuration
- Valid Snowflake credentials in cluster configuration
**Note:** The dbt plugin does not currently perform runtime validation of installed `dbt-core` or `dbt-snowflake` versions. Ensure your environment or dependency management (e.g., requirements.txt, Poetry, or container image) provides `dbt-core` and `dbt-snowflake` versions >= 1.7.0 before using this plugin.

Copilot uses AI. Check for mistakes.
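
A runtime check along the lines suggested could be built on a small dotted-version comparison. This is a hedged sketch: `meetsMinVersion` is an illustrative helper, and feeding it a version parsed from `dbt --version` output during plugin initialization is an assumption about how it would be wired in:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// meetsMinVersion reports whether a dotted version string (e.g. "1.7.3")
// is at least the given minimum (e.g. "1.7.0"). Missing components are
// treated as zero, so "1.7" compares like "1.7.0".
func meetsMinVersion(version, min string) bool {
	vp := strings.Split(version, ".")
	mp := strings.Split(min, ".")
	for i := 0; i < len(mp); i++ {
		var v, m int
		if i < len(vp) {
			v, _ = strconv.Atoi(vp[i])
		}
		m, _ = strconv.Atoi(mp[i])
		if v != m {
			return v > m
		}
	}
	return true
}

func main() {
	fmt.Println(meetsMinVersion("1.7.3", "1.7.0")) // true
	fmt.Println(meetsMinVersion("1.6.9", "1.7.0")) // false
}
```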
```go
	}

	// Generate profiles.yml content
	profiles := fmt.Sprintf(`analytics_poc_profile:
```

**Copilot AI** commented Jan 9, 2026

The profile name 'analytics_poc_profile' is hardcoded. This should be configurable through the command context to support different dbt projects with different profile names. Consider adding a 'dbt_profile_name' field to the commandContext struct and using it here.

Suggested change:

```diff
-profiles := fmt.Sprintf(`analytics_poc_profile:
+profileName := os.Getenv("DBT_PROFILE_NAME")
+if profileName == "" {
+	profileName = "analytics_poc_profile"
+}
+profiles := fmt.Sprintf(profileName+`:
```