GCP Access and Project Setup: Niki, Tyler, and Andrea discussed the process of gaining Google Cloud Platform (GCP) access, confirmed Niki's permissions for the Bioserv project, and outlined initial repository scaffolding and infrastructure setup tasks for the week.
GCP Access Confirmation: Tyler confirmed that Niki was granted access to the Bioserv project on GCP and explained that this would be the only project where Niki could create resources, as permissions are tightly controlled by LBL.
SSO and Account Setup: Tyler guided Niki through the process of logging into GCP using their LBL email, clarifying that Google allows multiple accounts to be active simultaneously and that SSO sign-in should be triggered upon login.
Repository Scaffolding: Niki stated their plan to set up basic scaffolding in the repository to organize code for infrastructure as code, React front end, and API development, aiming to facilitate contributions from Tyler and themselves.
Pulumi and CLI Integration: Niki mentioned their intention to experiment with Pulumi for infrastructure as code, motivated by team interest, and to set up the necessary CLI tools for integration with GCP.
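The kind of Pulumi experiment Niki described could start as small as the sketch below. This is a hypothetical first program, not the project's actual infrastructure: it assumes the `pulumi` and `pulumi-gcp` packages are installed, `gcloud auth application-default login` has been run, and `gcp:project` is set to the Bioserv project; the resource and bucket names are placeholders. It runs under `pulumi up`, not directly with Python.

```python
"""Minimal Pulumi sketch to confirm the CLI/GCP wiring works.

Assumptions: pulumi + pulumi-gcp installed, application-default
credentials configured, gcp:project set to the Bioserv project.
"""
import pulumi
import pulumi_gcp as gcp

# A single storage bucket as a smoke test; "bioserv-scratch" is an
# assumed logical name, not an agreed-upon resource.
bucket = gcp.storage.Bucket(
    "bioserv-scratch",
    location="US",
)

# Export the bucket URL so `pulumi up` prints it as a stack output.
pulumi.export("bucket_url", bucket.url)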
Infrastructure as Code and Deployment Pipelines: Tyler and Niki discussed the use of Pulumi for infrastructure as code, synchronization between API and front end deployment, and the setup of Cloud Build and Cloud Run pipelines for automated deployments.
Platform Selection: Niki indicated a preference to try Pulumi for infrastructure as code, while Tyler emphasized the importance of synchronizing deployment strategies between the API and front end.
Cloud Build and Cloud Run Usage: Tyler described the current deployment pipeline for the front end, which uses Cloud Build and Cloud Run to automatically deploy changes pushed to the main branch, and suggested a similar approach for the back end.
Repository Structure: The team agreed that maintaining separate repositories for the front end and back end would support separation of concerns, especially if different maintainers are involved in the future.
Secret Management: Tyler explained the use of Google Secret Manager for storing sensitive information such as Mapbox access tokens, which are referenced in build and deployment YAML files.
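Combining the two points above, a Cloud Build configuration that deploys to Cloud Run and pulls the Mapbox token from Secret Manager might look roughly like this. This is a sketch, not the actual file: the service name (`frontend`), region, and secret name (`mapbox-token`) are assumptions, and the push-to-main trigger itself is configured separately in Cloud Build.

```yaml
steps:
  # Build and push a container image tagged with the current commit.
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/frontend:$COMMIT_SHA", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/frontend:$COMMIT_SHA"]
  # Deploy to Cloud Run, referencing the Mapbox token from Secret
  # Manager instead of committing it to the repository.
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - frontend                # assumed service name
      - --image=gcr.io/$PROJECT_ID/frontend:$COMMIT_SHA
      - --region=us-west1       # assumed region
      - --set-secrets=MAPBOX_TOKEN=mapbox-token:latest  # assumed secret name
images:
  - gcr.io/$PROJECT_ID/frontend:$COMMIT_SHA
```

With a Cloud Build trigger on the main branch, each merged push rebuilds the image and rolls out a new Cloud Run revision, which matches the staging behavior Tyler described.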
Database and Data Pipeline Status: Tyler updated the group on the current state of the SQL database setup by Peter, the informal process of migrating data from Google Sheets, and the need to formalize the data pipeline and ORM models.
SQL Database Setup: Peter has set up a SQL database in GCP and has been manually migrating Bioserv data from Google Sheets using R scripts, though the process is not yet formalized.
Pipeline Formalization: Tyler noted that work remains to formalize the data pipeline for robustness and to develop ORM models representing the database structures, as current uploads are performed manually.
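One way to formalize the manual uploads is to make each load idempotent, so re-running a Sheets export never duplicates rows. The sketch below uses SQLite as a stand-in for the actual GCP SQL database, and the `feedstock` table columns are placeholders until the real schema and ORM models are defined:

```python
import sqlite3

def upsert_feedstocks(conn, rows):
    """Idempotently load rows (e.g. exported from Google Sheets) into SQL.

    Table and column names are assumptions; the real schema is still
    being formalized with Peter.
    """
    conn.execute(
        """CREATE TABLE IF NOT EXISTS feedstock (
               id INTEGER PRIMARY KEY,
               name TEXT NOT NULL,
               county TEXT,
               tons_per_year REAL
           )"""
    )
    # Upsert keyed on id: re-running the same export updates in place
    # instead of duplicating rows.
    conn.executemany(
        """INSERT INTO feedstock (id, name, county, tons_per_year)
           VALUES (:id, :name, :county, :tons_per_year)
           ON CONFLICT(id) DO UPDATE SET
               name = excluded.name,
               county = excluded.county,
               tons_per_year = excluded.tons_per_year""",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
upsert_feedstocks(conn, [{"id": 1, "name": "Almond shells",
                          "county": "Kern", "tons_per_year": 120.0}])
# Re-running with revised data updates the row rather than adding one.
upsert_feedstocks(conn, [{"id": 1, "name": "Almond shells",
                          "county": "Fresno", "tons_per_year": 150.0}])
```

The same upsert pattern carries over to Cloud SQL (Postgres and MySQL both support it), so the script can be re-run safely whenever the Sheets data changes.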
Next Steps and Synchronization: The team agreed to dedicate time in the next meeting, when Peter returns, to synchronize ongoing data management work and plan further development.
Front End Development and Staging Deployment: Tyler demonstrated the current state of the front end, including the deployment pipeline to a staging site, and discussed plans for further development and integration with the back end.
Staging Site Deployment: Tyler showcased the live staging deployment at staging.calbioscape.org, which is automatically updated via Cloud Build and Cloud Run upon pushes to the main branch.
Front End Features: The front end includes placeholders for API integration, resource inventory display, adjustable buffer zones, and plans for additional filtering options and infrastructure layers.
Repository Access: Tyler shared the front end repository link in Slack and committed to adding Niki as a contributor to facilitate collaboration.
API Documentation and Integration: Niki and Tyler discussed options for API documentation, considering hosting it in the back end repository and linking or embedding it in the front end, with further investigation planned.
Documentation Location: Niki suggested that API documentation should reside in the back end repository for synchronization, with the possibility of linking or embedding it in the front end via iframe or direct links.
Tooling Considerations: Tyler mentioned the potential use of Swagger or similar auto-documentation tools, and both agreed to explore the best approach for user accessibility.
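Tools like Swagger UI render an OpenAPI document, which frameworks such as FastAPI generate automatically from the API code. As a hand-written stand-in for what that generated document contains, the snippet below sketches a minimal spec; the endpoint and parameter names are assumptions, not the actual API:

```python
# Hypothetical OpenAPI document, standing in for what an auto-doc tool
# would generate from the back-end code; endpoint names are assumptions.
openapi_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Bioserv API", "version": "0.1.0"},
    "paths": {
        "/resources": {
            "get": {
                "summary": "List feedstock resources",
                "parameters": [
                    {"name": "county", "in": "query",
                     "schema": {"type": "string"}},
                ],
                "responses": {"200": {"description": "Matching resources"}},
            }
        }
    },
}
```

If the back end serves this JSON at a known path, the front end can either link to a Swagger UI page that renders it or embed that page in an iframe, which matches both of the options discussed.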
Procedural Questions and Meeting Scheduling: Niki raised questions about attending a lab introduction meeting and clarified with Tyler and Andrea that such sessions are typically optional unless explicitly required; the team also coordinated upcoming meeting schedules.
Lab Introduction Attendance: Niki asked if attendance at a lab introduction meeting was necessary, and Tyler and Andrea confirmed it was likely optional unless marked as required, with mandatory trainings usually clearly indicated.
Meeting Coordination: The group discussed upcoming travel schedules and agreed to adjust recurring meetings accordingly, ensuring continued progress tracking via milestones and issues in the repository.
Credit, Acknowledgment, and DOI Practices: Corinne, Niki, and Tyler discussed how to credit the team on the project site, including logo usage, acknowledgment language, and the practice of DOI stamping software releases for academic citation.
Logo and Acknowledgment Preferences: Corinne asked about using the University of Washington logo and standard acknowledgment language, noting UC Berkeley's restrictions, and Niki agreed to consult their institute for preferences.
DOI Stamping: Niki explained the practice of DOI stamping software releases for academic record-keeping and citation, with Corinne and Tyler confirming similar practices for related tools.
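A common way to implement the DOI stamping Niki described is to link the GitHub repository to Zenodo, which archives each release and mints a DOI for it, and to add a CITATION.cff file so the citation metadata lives in the repository. The fragment below is illustrative only: the title, author, version, and DOI are placeholders.

```yaml
# CITATION.cff (placeholder values throughout)
cff-version: 1.2.0
message: "If you use this software, please cite it using these metadata."
title: "Bioserv front end"            # placeholder title
authors:
  - family-names: Doe                  # placeholder author
    given-names: Jane
version: 0.1.0
date-released: 2025-01-01              # placeholder date
doi: 10.5281/zenodo.0000000            # placeholder; Zenodo mints the real DOI per release
```

GitHub reads CITATION.cff and shows a "Cite this repository" button, so the same file serves both repositories once the team settles on metadata.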
Feedstock Filtering and Future Planning: Corinne raised the need to plan for feedstock characteristic filtering in the tool, acknowledging the complexity and its connection to Peter's ongoing work.
Filtering Requirements: Corinne highlighted the importance of defining which feedstock characteristics should be filterable and the need to coordinate with Peter to implement these features.
Integration with API: Niki confirmed that the API development will support the front end's filtering needs, enabling broader data access as the system matures.
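Since the filterable characteristics are still to be defined with Peter, the API-side filtering could start as a simple predicate over records. The field names below (`type`, `moisture_pct`) are hypothetical placeholders, not agreed-upon schema:

```python
def filter_feedstocks(records, feedstock_type=None, max_moisture=None):
    """Return feedstock records matching all provided criteria.

    `feedstock_type` and `moisture_pct` are hypothetical characteristics;
    the real filterable fields are still being defined with Peter.
    """
    matches = []
    for rec in records:
        if feedstock_type is not None and rec["type"] != feedstock_type:
            continue
        if max_moisture is not None and rec["moisture_pct"] > max_moisture:
            continue
        matches.append(rec)
    return matches

# Toy data standing in for rows from the Bioserv database.
feedstocks = [
    {"name": "Almond shells", "type": "orchard", "moisture_pct": 10},
    {"name": "Rice straw", "type": "field", "moisture_pct": 35},
]
orchard_only = filter_feedstocks(feedstocks, feedstock_type="orchard")
```

Exposing each criterion as an optional API query parameter (as in the `county` example many REST APIs use) would let the front end's filter controls map directly onto this function.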
Follow-up tasks:
GCP Access and Repository Setup: Set up basic scaffolding in the repository for infrastructure as code, React front end, and API code, ensuring Tyler has access to contribute. (Niki)
Pulumi and CLI Integration: Figure out how to get the CLI working with Pulumi for infrastructure as code in GCP. (Niki)
Milestones and Issue Tracking: Add the agreed-upon milestones and notes about the API and infrastructure-as-code stack to the repository, and set up issue tracking for progress. (Niki)
Front End Repository Access: Send the front end tool repository link to the Slack channel and add Niki as a contributor. (Tyler)
Credit and Acknowledgement on Site: Ask leadership at the eScience Institute's Scientific Software Engineering Center for their preference on crediting and standard acknowledgment language for the site, then send the information to Corinne. (Niki)
DOI Release Coordination: Coordinate with the team to ensure DOI stamping of software releases for both back end and front end portals as part of academic record-keeping. (Niki)
Generated by AI. Be sure to check for accuracy.