Requirements Freeze and Milestone Planning: Anshul, Vani, Niki, Tyler, and Peter Smith discussed finalizing the project requirements, reviewed the current state of the ERD and PRD, and collaboratively established a detailed milestone schedule, with input from Don and Corinne to be incorporated asynchronously.
Requirements Finalization: Anshul led the discussion on freezing the project requirements, confirming that the ERD would be finalized by the next meeting and that the PRD would be updated to reflect all agreed-upon milestones and user stories. Corinne's input was to be solicited asynchronously due to scheduling conflicts.
Milestone Scheduling: Vani and Niki worked through the project timeline, adjusting milestone dates to accommodate travel, holidays, and team member availability, ensuring that each phase had realistic deadlines and buffer periods. They agreed to update the PRD and milestone documents accordingly.
Success Metrics and User Stories: Vani and Anshul agreed to use the P1 user stories as the basis for success metrics in the PRD, omitting 'nice to have' items to maintain focus. They discussed handling unavailable data by removing it from the PRD but retaining it in the user story documentation for reference.
Asynchronous Review Process: The team decided to notify Corinne via Slack when the PRD and milestones were ready for review, streamlining the process and eliminating unnecessary steps.
Google Cloud Setup and Access Management: Tyler, Niki, and Anshul coordinated on setting up Google Cloud infrastructure, addressing access bottlenecks for new team members, and clarifying the onboarding process for affiliate appointments and funding documentation.
Access Bottlenecks: Tyler explained that Niki and Don required LBL email accounts to access Google Cloud projects. Niki described progress on the onboarding paperwork, including questions about the funding sections, which Tyler clarified would be recorded as non-funded agreements.
Onboarding Form Guidance: Vani and Niki discussed how to fill out the onboarding form, specifying the correct institution, contact, and funding details, and agreed to use 'unfunded' or $0 as appropriate, with Vani advising not to enter inaccurate information.
Cloud Credits and Cost Management: Anshul and Vani confirmed that cloud credits would cover Google Cloud usage, so no direct payment was required, and Niki was instructed to indicate a one-time payment of $0 on the form.
ETL Pipeline and Data Workflow Design: Anshul, Tyler, and Niki outlined the ETL pipeline architecture for infrastructure and agriculture datasets, emphasizing extensibility, automation, and the use of serverless transformations to populate a Postgres database.
Pipeline Extensibility: Tyler and Anshul agreed that the ETL pipelines should be designed to accommodate new infrastructure datasets in the future, favoring a generalized and extensible approach over one-off solutions.
Data Sources and Workflow: Tyler clarified that infrastructure datasets would be sourced from Google Drive, while agriculture datasets would come from NAS data, with Peter Smith having already pulled some files. The ETL process would automate ingestion into the Postgres database.
Serverless Transformation: Anshul described the workflow where updated datasets are dropped into a bucket, processed via serverless transformations, and loaded into the database, with Tyler confirming this approach.
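The drop-to-bucket workflow described above can be sketched as a small transform step. This is a minimal illustration, not the team's actual pipeline: the event shape, the `site`/`capacity_tons` columns, and the in-memory stand-in for the bucket are all hypothetical, and the final Postgres load is omitted.

```python
import csv
import io

def transform_csv(raw_text: str) -> list[dict]:
    """Parse an uploaded dataset CSV and normalize rows for loading.

    Hypothetical columns; the real infrastructure and agriculture
    dataset schemas may differ.
    """
    reader = csv.DictReader(io.StringIO(raw_text))
    rows = []
    for row in reader:
        rows.append({
            "site": row["site"].strip(),
            "capacity_tons": float(row["capacity_tons"]),
        })
    return rows

def handle_bucket_event(event: dict, storage: dict) -> list[dict]:
    """Entry point a serverless function might expose: a file was
    dropped into the bucket, so fetch it and transform it into rows
    ready for a Postgres INSERT (the load step is omitted here).
    `storage` is an in-memory stand-in for a bucket download."""
    raw = storage[event["name"]]
    return transform_csv(raw)

if __name__ == "__main__":
    fake_storage = {"infra.csv": "site,capacity_tons\nChico,120.5\n"}
    print(handle_bucket_event({"name": "infra.csv"}, fake_storage))
```

In a real deployment, the function would be registered as the bucket's object-finalize trigger, so re-uploading an updated dataset automatically re-runs the transform.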
Technical Stack and Infrastructure as Code Decisions: Niki, Tyler, and Anshul discussed the technical stack, agreeing on FastAPI with SQLModel and SQLAlchemy for the backend, and decided to use infrastructure as code tools at Niki's discretion for reproducibility and deployment automation.
Framework and ORM Selection: Tyler and Niki confirmed the use of FastAPI as the backend framework and SQLModel/SQLAlchemy for database modeling and migrations, preferring this modern approach over raw SQL.
Infrastructure as Code Tools: Niki proposed using Pulumi or Terraform for infrastructure as code, with Tyler and Anshul supporting the choice of whichever tool Niki was most comfortable with, as the team had no strong preference.
Deployment Automation: Niki outlined a plan for automated deployment using containers and GitHub workflows, with production releases triggered by tags and development deployments triggered by pushes to main, ensuring only authorized users could deploy to production.
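The trigger rules Niki outlined might look roughly like the following GitHub Actions sketch. The workflow name, deploy script, and tag convention are assumptions; the `production` protected environment is one way to restrict who can release.

```yaml
# Hypothetical sketch: dev deploys on pushes to main,
# production deploys only on version tags.
name: deploy
on:
  push:
    branches: [main]
    tags: ["v*"]
jobs:
  deploy-dev:
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh dev   # hypothetical script
  deploy-prod:
    if: startsWith(github.ref, 'refs/tags/v')
    runs-on: ubuntu-latest
    environment: production  # protected environment limits who can deploy
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh prod  # hypothetical script
```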
API Design, Authentication, and Data Export: Anshul and Tyler reviewed API requirements, including authentication, data export formats, and the scope of analytics, deciding to maintain high-resolution data and clarify the division of responsibilities between frontend and backend.
API Authentication: Anshul asked about API key management, and Tyler explained that API clients, including the frontend, would require keys, with the backend handling authentication.
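A backend key check along these lines could be as simple as the sketch below. The header name and the in-code key store are hypothetical; real keys would live in a secrets manager or database table.

```python
import hmac

# Hypothetical key store; in practice keys would come from a secrets
# manager or a database table, not source code.
API_KEYS = {"frontend": "s3cret-frontend-key"}

def verify_api_key(headers: dict) -> bool:
    """Check the X-API-Key header against known keys using a
    constant-time comparison to avoid timing leaks."""
    presented = headers.get("X-API-Key", "")
    return any(
        hmac.compare_digest(presented, key) for key in API_KEYS.values()
    )

print(verify_api_key({"X-API-Key": "s3cret-frontend-key"}))  # True
print(verify_api_key({}))                                    # False
```

In FastAPI, a check like this would typically be wired in as a dependency so every route enforces it uniformly.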
Data Export and Resolution: Tyler clarified that the national tool exports data as CSVs rather than JSON, and recommended omitting feedstock clustering for the California tool to preserve high-resolution data.
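A CSV export endpoint consistent with the national tool's format might serialize results like this. The field names and record shape are illustrative only; the point is that values are written as-is, with no clustering or rounding applied.

```python
import csv
import io

def export_csv(records: list[dict], fields: list[str]) -> str:
    """Serialize query results as CSV rather than JSON, preserving
    values at full resolution (no clustering or rounding)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Hypothetical record; the real export columns may differ.
records = [{"county": "Fresno", "feedstock_tons": 1234.567}]
print(export_csv(records, ["county", "feedstock_tons"]))
```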
Version Control and Reproducibility Practices: Niki, Anshul, and Tyler discussed version control for ETL pipelines and infrastructure as code, agreeing to use GitHub for tracking changes and to automate reproducibility through containerization and CI/CD workflows.
ETL Pipeline Versioning: Tyler described the current practice of using one-off scripts for the national tool, with scripts stored in the repository, and agreed to contribute relevant scripts for the new project if file formats align.
Infrastructure as Code in GitHub: Niki and Anshul agreed to manage infrastructure as code in GitHub, with automated deployment for development and production environments, and to restrict production deployment permissions to a dedicated GitHub user.
Follow-up tasks:
PRD and Milestones Finalization: Update the PRD and milestones document to reflect the finalized requirements, adjusted dates, and success metrics, and notify Corinne via Slack for asynchronous review and approval. (Anshul, Vani)
Google Cloud Access Onboarding: Complete the onboarding paperwork for Google Cloud access, clarifying funding details with Bonnie if needed, and notify Tyler if any issues arise to expedite the process. (Niki)
Infrastructure as Code Tool Selection: Decide on and implement the preferred infrastructure as code tool (e.g., Pulumi or Terraform) for Google Cloud setup, ensuring reproducibility and automation for dev and prod environments. (Niki)
Feedstock Clustering Requirement Update: Remove the feedstock clustering (K-means) requirement from the California tool scope in the PRD and milestones, maintaining high-resolution data as discussed. (Anshul)
Sharing National Tool Scripts: Identify and share any useful one-off scripts from the national tool repo that could assist with importing feedstock, mass, and infrastructure data into the new project repo via a PR. (Tyler)
User Stories and Success Metrics Update: Transfer the P1 user stories and relevant success metrics into the PRD, omitting unavailable data (e.g., market value and environmental impacts), and ensure documentation is up to date. (Vani)
Milestone Dates Adjustment: Adjust milestone dates and deliverables in the project tracking table to account for team travel, holidays, and individual availability, ensuring realistic timelines and buffer periods. (Vani, Niki)
Generated by AI. Be sure to check for accuracy.