Greenhouse dbt Package

This dbt package transforms data from Fivetran's Greenhouse connector into analytics-ready tables.

What does this dbt package do?

This package enables you to understand trends in sourcing, recruiting, interviewing, and hiring at your company. It creates enriched models with metrics focused on applications, interviews, and jobs.

Output schema

Final output tables are generated in the following target schema:

<your_database>.<connector/schema_name>_greenhouse

Final output tables

By default, this package materializes the following final tables:

greenhouse__application_enhanced
Tracks all candidate applications with complete applicant profiles including current pipeline stage, recruiter and coordinator assignments, contact information, resume links, and interview activity to manage the hiring funnel.
Example Analytics Questions:
  • Which recruiters or sources generate the most applications and hires?
  • What is the average time from application to hire by job or candidate source?
  • How do application volumes and status distributions vary across different pipeline stages?

greenhouse__job_enhanced
Provides comprehensive job posting data with metrics on application volumes, hiring outcomes, and team assignments to understand job performance and hiring effectiveness.
Example Analytics Questions:
  • Which jobs have the most open applications and highest conversion rates to hire?
  • How long do job postings stay open before being filled?
  • What is the ratio of rejected to hired applications by department or office?

greenhouse__interview_enhanced
Tracks individual interviews between interviewers and candidates with feedback scores, interviewer information, and application status to evaluate interview effectiveness and candidate progression.
Example Analytics Questions:
  • Which interviewers provide the most feedback and have the highest candidate advancement rates?
  • What is the distribution of interview recommendations by job or candidate source?
  • How do interview outcomes correlate with eventual hiring decisions?

greenhouse__interview_scorecard_detail
Captures detailed interview scorecard ratings for each evaluation criterion to analyze interviewer feedback patterns and candidate assessment consistency. Note: Does not include free-form text responses.
Example Analytics Questions:
  • Which scorecard attributes have the highest average ratings across all interviews?
  • How do scorecard ratings vary by interviewer or candidate source?
  • What rating patterns correlate with successful hires versus rejections?

greenhouse__application_history
Chronicles application progression through hiring stages with time-in-stage metrics, activity volumes, and recruiter assignments to analyze hiring velocity and pipeline bottlenecks.
Example Analytics Questions:
  • What is the average time candidates spend in each hiring stage?
  • Which stages have the highest drop-off or rejection rates?
  • How does time-to-hire vary by job, department, or candidate source?

Note: Each Quickstart transformation job run materializes these models if all components of this data model are enabled. This includes all staging, intermediate, and final models materialized as view, table, or incremental.


Prerequisites

To use this dbt package, you must have the following:

  • At least one Fivetran Greenhouse connection syncing data into your destination.
  • A BigQuery, Snowflake, Redshift, PostgreSQL, or Databricks destination.

How do I use the dbt package?

You can either add this dbt package in the Fivetran dashboard or import it into your dbt project:

  • To add the package in the Fivetran dashboard, follow our Quickstart guide.
  • To add the package to your dbt project, follow the setup instructions in the package's README file.

Install the package

Include the following greenhouse package version in your packages.yml file:

TIP: Check dbt Hub for the latest installation instructions or read the dbt docs for more information on installing packages.

packages:
  - package: fivetran/greenhouse
    version: [">=1.3.0", "<1.4.0"]

Define database and schema variables

Option A: Single connection

By default, this package runs using your destination and the greenhouse schema. If this is not where your Greenhouse data is (for example, if your Greenhouse schema is named greenhouse_fivetran), add the following configuration to your root dbt_project.yml file:

vars:
  greenhouse:
    greenhouse_database: your_database_name
    greenhouse_schema: your_schema_name

Option B: Union multiple connections

If you have multiple Greenhouse connections in Fivetran and would like to use this package on all of them simultaneously, we have provided functionality to do so. For each source table, the package will union all of the data together and pass the unioned table into the transformations. The source_relation column in each model indicates the origin of each record.

PLEASE NOTE: Rows from your individual Greenhouse connections will be stored together in unified tables. Given the potentially sensitive nature of Greenhouse data, confirm that this configuration complies with your organization's PII and data governance requirements.

To use this functionality, you will need to set the greenhouse_sources variable in your root dbt_project.yml file:

# dbt_project.yml

vars:
  greenhouse:
    greenhouse_sources:
      - database: connection_1_destination_name # Required
        schema: connection_1_schema_name # Required
        name: connection_1_source_name # Required only if following the steps in the subsection below

      - database: connection_2_destination_name
        schema: connection_2_schema_name
        name: connection_2_source_name
Recommended: Incorporate unioned sources into DAG

If you are running the package through Fivetran Transformations for dbt Core™, the following step is necessary to synchronize model runs with your Greenhouse connections. Alternatively, you may choose to run the package through Fivetran Quickstart, which creates separate sets of models for each Greenhouse source rather than one set of unioned models.

By default, this package defines one single-connection source, called greenhouse, which will be disabled if you are unioning multiple connections. This means that your DAG will not include your Greenhouse sources, though the package will run successfully.

To properly incorporate all of your Greenhouse connections into your project's DAG:

  1. Define each of your sources in a .yml file in your project. Utilize the following template for the source-level configurations, and, most importantly, copy and paste the table and column-level definitions from the package's src_greenhouse.yml file.
# a .yml file in your root project

version: 2

sources:
  - name: <name> # Should match the corresponding name in greenhouse_sources
    schema: <schema_name>
    database: <database_name>
    loader: fivetran
    config:
      loaded_at_field: _fivetran_synced
      freshness: # feel free to adjust to your liking
        warn_after: {count: 72, period: hour}
        error_after: {count: 168, period: hour}

    tables: # copy and paste from greenhouse/models/staging/src_greenhouse.yml - see https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/ for how to use anchors to only do so once

Note: If there are source tables you do not have (see Disable models for non-existent sources), you may still include them, as long as you have set the right variables to False.
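
If you define several Greenhouse sources in one .yml file, YAML anchors (see the link in the template above) let you write the table definitions once and reuse them for every connection. The following is a minimal sketch using the connection names from the earlier greenhouse_sources example; the table list is abbreviated for illustration, so copy the full set from src_greenhouse.yml:

# a .yml file in your root project

version: 2

sources:
  - name: connection_1_source_name # must match the name in greenhouse_sources
    schema: connection_1_schema_name
    database: connection_1_destination_name
    loader: fivetran
    config:
      loaded_at_field: _fivetran_synced
    tables: &greenhouse_tables # anchor: define the table list once
      - name: application # abbreviated example list -- use the full list from src_greenhouse.yml
      - name: candidate
      - name: job

  - name: connection_2_source_name
    schema: connection_2_schema_name
    database: connection_2_destination_name
    loader: fivetran
    config:
      loaded_at_field: _fivetran_synced
    tables: *greenhouse_tables # alias: reuse the same table definitions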

  2. Set the has_defined_sources variable (scoped to the greenhouse package) to True, as follows:
# dbt_project.yml
vars:
  greenhouse:
    has_defined_sources: true

Disable models for non-existent sources

Your Greenhouse connection might not sync every table that this package expects, either because you do not use that functionality in Greenhouse or because you have intentionally excluded certain tables from your syncs.

To disable the corresponding functionality in the package, you must set the relevant config variables to false. By default, all variables are set to true. Alter variables only for the tables you want to disable:

vars:
    greenhouse_using_prospects: false # Disable if you do not use prospects and/or do not have the PROSPECT_POOL and PROSPECT_STAGE tables synced
    greenhouse_using_eeoc: false # Disable if you do not have EEOC data synced and/or do not want to integrate it into the package models
    greenhouse_using_app_history: false # Disable if you do not have APPLICATION_HISTORY synced and/or do not want to run the application_history transform model
    greenhouse_using_job_office: false # Disable if you do not have JOB_OFFICE and/or OFFICE synced, or do not want to include offices in the job_enhanced transform model
    greenhouse_using_job_department: false # Disable if you do not have JOB_DEPARTMENT and/or DEPARTMENT synced, or do not want to include departments in the job_enhanced transform model

Note: This package only integrates the above variables. If you'd like to disable other models, please create an issue specifying which ones.

(Optional) Additional configurations


Passing Through Custom Columns

The Greenhouse APPLICATION, JOB, and CANDIDATE tables may have custom columns, all prefixed with custom_field_. To pass these columns along to the staging and final transformation models, add the following variables to your dbt_project.yml file:

vars:
    greenhouse_application_custom_columns: ['the', 'list', 'of', 'columns'] # these columns will be in the final application_enhanced model
    greenhouse_candidate_custom_columns: ['the', 'list', 'of', 'columns'] # these columns will be in the final application_enhanced model
    greenhouse_job_custom_columns: ['the', 'list', 'of', 'columns'] # these columns will be in the final job_enhanced model

Changing the Build Schema

By default, this package will build the Greenhouse staging models within a schema titled (<target_schema> + _stg_greenhouse) and the Greenhouse final transform models within a schema titled (<target_schema> + _greenhouse) in your target database. If this is not where you would like your Greenhouse staging and final models to be written, add the following configuration to your dbt_project.yml file:

models:
    greenhouse:
      +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.
      staging:
        +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.

Change the source table references

If an individual source table has a different name than the package expects, add the table name as it appears in your destination to the respective variable:

IMPORTANT: See this project's dbt_project.yml variable declarations to see the expected names.

vars:
    greenhouse_<default_source_table_name>_identifier: your_table_name 
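
For example, if your destination stored the application source table under a nonstandard name (the table name below is hypothetical), the override would follow this pattern; confirm the exact variable names against the package's dbt_project.yml:

vars:
    greenhouse_application_identifier: "my_greenhouse_application_table" # hypothetical table name in your destination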

(Optional) Orchestrate your models with Fivetran Transformations for dbt Core™


Fivetran offers the ability for you to orchestrate your dbt project through Fivetran Transformations for dbt Core™. Learn how to set up your project for orchestration through Fivetran in our Transformations for dbt Core setup guides.

Does this package have dependencies?

This dbt package is dependent on the following dbt packages. These dependencies are installed by default within this package. For more information on the following packages, refer to the dbt hub site.

IMPORTANT: If you have any of these dependent packages in your own packages.yml file, we highly recommend that you remove them from your root packages.yml to avoid package version conflicts.

packages:
    - package: fivetran/fivetran_utils
      version: [">=0.4.0", "<0.5.0"]

    - package: dbt-labs/dbt_utils
      version: [">=1.0.0", "<2.0.0"]

How is this package maintained and can I contribute?

Package Maintenance

The Fivetran team maintaining this package only maintains the latest version of the package. We highly recommend you stay consistent with the latest version of the package and refer to the CHANGELOG and release notes for more information on changes across versions.

Contributions

A small team of analytics engineers at Fivetran develops these dbt packages. However, the packages are made better by community contributions.

We highly encourage and welcome contributions to this package. Learn how to contribute to a package in dbt's Contributing to an external dbt package article.

Are there any resources available?

  • If you have questions or want to reach out for help, see the GitHub Issue section to find the right avenue of support for you.
  • If you would like to provide feedback to the dbt package team at Fivetran or would like to request a new dbt package, fill out our Feedback Form.
