Replies: 7 comments 7 replies
-
To make sure I understand this correctly: we will have the base files (the Dockerfile, the base configs, and so on) in the main branch. When a project needs them, we branch off the main branch into a project branch, make any changes, and use that branch going forward for that project. Based on my understanding, this could lead to a project that is based on another project, which I think could be a good idea. The questions that come to mind would be:
-
For the versioning in the related document, how often will we need complicated version names as described? I can see us using the main application and submodule as described, but I don't quite understand how many more levels we would need. However, I am quite happy with your suggestion, considering it allows for as many levels as we would need.
-
One thing I am uncertain about: will this repo contain the actual source code for the projects? I think it shouldn't, but I just want to make sure. Or will this be more about the changes that are required per project? For example, GeoNode allows you to add new applications without touching the base source code. Will this repo contain that new source code, or will we just link to where the source code is located?
-
With the multiple-depths setting, how will we know the parent branch? It's obvious for depth 0 and 1, but after that? I would suggest perhaps with the
-
For the overlays directory: you mention that we will merge the files. However, I assume there might be an order that needs to be followed. How will we enforce an ordering?
-
From your comments, I'm quite happy with this RFC. I think it sounds more complicated than it will actually be; in other words, you have thought through the really complicated steps, even if we don't end up using everything.
-
Sorry for the late reply. This looks like a lot to wrap my head around, and I have been swamped lately. Below are some replies based on my personal experience; they might not be directly applicable, but they might give you some ideas.
This is probably not avoidable unless you pin every possible dependency (including all recursive and indirect dependencies). However, pinning all dependencies is extremely hard to manage, because the indirect dependencies can change their own lists of dependencies as well. So instead of investing in a deterministic build, you should probably invest in automated testing. If the new Docker build still passes the tests, it should be good to go.
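The "gate on tests, not on determinism" idea could look like a tiny wrapper that accepts a freshly built image only when its smoke tests pass. This is only a sketch: the `docker run` invocation, the image name, and the `tests/smoke` path are assumptions, and the runner is injectable so the gate itself can be exercised without Docker.

```python
import subprocess


def gate(image, runner=subprocess.run, test_cmd=("pytest", "-q", "tests/smoke")):
    """Run the smoke tests inside the freshly built image.

    Returns True only if the test command exits with status 0,
    i.e. the non-deterministic build is considered good to go.
    """
    result = runner(
        ["docker", "run", "--rm", image, *test_cmd],
        capture_output=True,
    )
    return result.returncode == 0
```

A CI job would then tag and push the image only when `gate(...)` returns True, instead of trying to pin every transitive dependency.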
I think this is what "feature flags" are meant to solve. You have one single code base and, depending on the needs of each project, you activate the corresponding list of features. Having one single code base means all changes should be "backward-compatible". I will go on and read https://github.com/kartoza/docker-geonode/blob/main/build/docs/DEVELOPMENT.md and provide feedback here.
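As a toy illustration of the feature-flag idea (one shared code base, per-project activation), assuming flags arrive as environment variables; the flag names here are made up:

```python
import os

# Hypothetical per-project feature flags layered on one shared code base.
DEFAULT_FLAGS = {"geosafe": False, "custom_theme": False}


def load_flags(env=None):
    """Overlay FEATURE_* environment variables onto the defaults.

    Each project's deployment only sets the variables it needs;
    everything else keeps the backward-compatible default.
    """
    env = os.environ if env is None else env
    flags = dict(DEFAULT_FLAGS)
    for name in flags:
        raw = env.get("FEATURE_" + name.upper())
        if raw is not None:
            flags[name] = raw.strip().lower() in ("1", "true", "yes")
    return flags
```

A project deployment would then set, say, `FEATURE_GEOSAFE=true` in its compose file while the code base itself stays identical across projects.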
-
Hi everyone,
For the TL;DR version, please read the draft of the procedures at:
https://github.com/kartoza/docker-geonode/blob/main/build/docs/DEVELOPMENT.md
Then provide feedback in this thread.
Long explanation about the reasoning
We have used/reused/customized/deployed many GeoNode instances in the past.
I realized that normally we extend GeoNode into our own specific projects.
But this introduces problems over time as our unique instance of GeoNode grows:
I'm drafting a set of base standard procedures in https://github.com/kartoza/docker-geonode/blob/main/build/docs/DEVELOPMENT.md that tries to address the following problems:
`orchestration` repo. Even if the source code is in a different project repo, in my opinion it's easier to include the source code as a submodule in this repo and track the development there, compared with developing a GitHub Action for each and every project we have (that involves significant time in setting up the credentials and testing/maintaining the action).

Our repo structure for `orchestration` (this repo) is slightly different from our usual repo structure for a development workflow. The notable differences (and why I recommend this) are:

`main`, `develop`, `gep`, `benin`. This way, the `benin` source overlay only contains the files that will override any files with the same relative path from the other sources.

From the above reasoning, the main idea of this code organization is that we can generalize our deployment and ops strategy into 3 main phases:
We define the source overlays and how they relate. Then we create the merged sources in a separate directory that we call the merged overlay.
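As a sketch of what the merge could do (not necessarily how the actual tooling will work), the overlay order is an explicit list and later sources win on path collisions, which is also one way to enforce a deterministic ordering. The directory names below mirror the example branches and are assumptions:

```python
import shutil
from pathlib import Path


def merge_overlays(sources, dest):
    """Copy each overlay into dest in order.

    Later overlays override files with the same relative path from
    earlier ones, so the list itself encodes the merge ordering.
    """
    dest = Path(dest)
    for src in sources:  # order matters: base first, project last
        src = Path(src)
        for f in src.rglob("*"):
            if f.is_file():
                target = dest / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # overwrite on collision


# Hypothetical layout: base "main", shared "develop", project "benin".
# merge_overlays(["overlays/main", "overlays/develop", "overlays/benin"], "merged")
```

With this scheme, the `benin` overlay only needs to contain the handful of files it wants to change; everything else falls through from the earlier sources.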
From the merged overlay, we generate any secondary files/structures from the resulting merged configuration. This may include:
In this phase, we actually do what the repo is ultimately intended to do. For docker-geonode, this means building a Docker image and pushing it to a registry. For an ops-related repo, that means deploying the manifests to Kubernetes. It can be building a product or performing certain tasks.
From the main phase names above, I recommend calling this approach the DOGE approach.
Once the draft is finalized, we can build a toolset that helps set up this project structure and execute those phases.