Closed pull request, changes from all 25 commits:
- 3502292 Helm charts: add gitignore (the-glu, Apr 1, 2025)
- 42a27b4 Add minikube instructions and update helm charts to work with minikube (the-glu, Apr 1, 2025)
- 02ff9c8 Helm charts: add configurable imagePullPolicy (the-glu, Apr 1, 2025)
- bf0f05b Yugabyte: helm charts (the-glu, Apr 1, 2025)
- 5209853 Yugabyte: helm charts (the-glu, Apr 1, 2025)
- f349099 Merge branch 'master' into yugabyte_helm (the-glu, Apr 30, 2025)
- 1585b29 Merge branch 'yugabyte_helm' of github.com:Orbitalize/dss into yugaby… (the-glu, Apr 30, 2025)
- 471c6fc [helm] Add TLS support on yugabyte (the-glu, May 1, 2025)
- 87a5479 Fixes for PR (the-glu, May 6, 2025)
- f9db49b Merge branch 'yugabyte_helm' into yugabyte_ssl (the-glu, May 6, 2025)
- 4fe7286 Fixes for PR (the-glu, May 6, 2025)
- 52903d2 Merge branch 'yugabyte_helm' into yugabyte_ssl (the-glu, May 6, 2025)
- 7a0176f Normalize cockroachdbEnabled from previous PR (the-glu, May 6, 2025)
- 30497d3 Merge branch 'master' into yugabyte_helm (the-glu, May 6, 2025)
- f24ccab Remove duplicate locality (the-glu, May 6, 2025)
- da41a74 Merge branch 'yugabyte_helm' into yugabyte_ssl (the-glu, May 7, 2025)
- 63f86f2 Yugabyte: Certificate managment (the-glu, May 7, 2025)
- 85333d3 [helm] Add TLS support on yugabyte (the-glu, May 1, 2025)
- b3065ba Normalize cockroachdbEnabled from previous PR (the-glu, May 6, 2025)
- 148f6d1 Merge branch 'yugabyte_ssl' into yugabyte_certificates (the-glu, May 13, 2025)
- 41813fe Update deploy/operations/certificates-management/README.md (the-glu, May 13, 2025)
- 7eda8df Secret -> secret_name (the-glu, May 13, 2025)
- 5ccbe66 Yugabyte in GCP (the-glu, May 27, 2025)
- e470cbe Yugaybte in AWS (the-glu, May 27, 2025)
- 38bcb08 Yugabyte: Cleanup (the-glu, May 27, 2025)
5 changes: 4 additions & 1 deletion .gitignore
@@ -60,10 +60,13 @@ test_result
# Generated configs
build/generated/
build/workspace/
build/workspace-yugabyte/
build/cockroachdb.yaml
build/values.yaml
build/dss.yaml

deploy/operations/certificates-management/workspace/

temp

# Django stuff:
@@ -131,4 +134,4 @@ go
.vscode

# terraform
.terraform*
.terraform*
38 changes: 27 additions & 11 deletions build/README.md
@@ -209,7 +209,7 @@ a PR to that effect would be greatly appreciated.
to create DNS entries for the static IP addresses created above. To list
the IP addresses, use `gcloud compute addresses list`.

1. Use [`make-certs.py` script](./make-certs.py) to create certificates for
1. (Only if you use CockroachDB) Use [`make-certs.py` script](./make-certs.py) to create certificates for
the CockroachDB nodes in this DSS instance:

./make-certs.py --cluster $CLUSTER_CONTEXT --namespace $NAMESPACE
@@ -243,6 +243,8 @@ a PR to that effect would be greatly appreciated.
the rest of the instances, such that ca.crt is the same across all
instances.

1. (Only if you use Yugabyte) Use the [`dss-certs.py` script](../deploy/operations/certificates-management/README.md) to create certificates for the Yugabyte nodes in this DSS instance.

1. If joining an existing DSS pool, share ca.crt with the DSS instance(s) you
are trying to join, and have them apply the new ca.crt, which now contains
both your instance's and the original instance's public certs, to enable
@@ -251,14 +253,28 @@ a PR to that effect would be greatly appreciated.
actions below. While they are performing those actions, you may continue
with the instructions.

1. Overwrite its existing ca.crt with the new ca.crt provided by the DSS
instance joining the pool.
1. Upload the new ca.crt to its cluster using
`./apply-certs.sh $CLUSTER_CONTEXT $NAMESPACE`
1. Restart their CockroachDB pods to recognize the updated ca.crt:
`kubectl rollout restart statefulset/cockroachdb --namespace $NAMESPACE`
1. Inform you when their CockroachDB pods have finished restarting
(typically around 10 minutes)
1. If you use CockroachDB:

1. Overwrite its existing ca.crt with the new ca.crt provided by the DSS
instance joining the pool.
1. Upload the new ca.crt to its cluster using
`./apply-certs.sh $CLUSTER_CONTEXT $NAMESPACE`
1. Restart their CockroachDB pods to recognize the updated ca.crt:
`kubectl rollout restart statefulset/cockroachdb --namespace $NAMESPACE`
1. Inform you when their CockroachDB pods have finished restarting
(typically around 10 minutes)

1. If you use Yugabyte:

1. Share your CA with `./dss-certs.py get-ca`
1. Add the other pool members' CAs with `./dss-certs.py add-pool-ca`
1. Upload the new CAs to their cluster using
`./dss-certs.py apply`
1. Restart their Yugabyte pods to recognize the updated ca.crt:
`kubectl rollout restart statefulset/yb-master --namespace $NAMESPACE`
`kubectl rollout restart statefulset/yb-tserver --namespace $NAMESPACE`
1. Inform you when their Yugabyte pods have finished restarting
(typically around 10 minutes)

1. Ensure the Docker images are built according to the instructions in the
[previous section](#docker-images).
@@ -295,10 +311,10 @@ a PR to that effect would be greatly appreciated.
DSS v0.16, the recommended CockroachDB image name is `cockroachdb/cockroach:v21.2.7`.
From DSS v0.17, the recommended CockroachDB version is `cockroachdb/cockroach:v24.1.3`.

1. `VAR_CRDB_HOSTNAME_SUFFIX`: The domain name suffix shared by all of your
1. `VAR_DB_HOSTNAME_SUFFIX`: The domain name suffix shared by all of your
CockroachDB nodes. For instance, if your CRDB nodes were addressable at
`0.db.example.com`, `1.db.example.com`, and `2.db.example.com`, then
VAR_CRDB_HOSTNAME_SUFFIX would be `db.example.com`.
VAR_DB_HOSTNAME_SUFFIX would be `db.example.com`.

1. `VAR_CRDB_LOCALITY`: Unique name for your DSS instance. Currently, we
recommend "<ORG_NAME>_<CLUSTER_NAME>", and the `=` character is not
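Taken together, the Yugabyte CA exchange described earlier in this README boils down to a short command sequence. This is a sketch only: the subcommands are those named above, but their flags, working directory, and `$NAMESPACE` are assumptions — see `deploy/operations/certificates-management/README.md` for the authoritative usage.

```shell
# Sketch of the Yugabyte pool-join CA exchange (illustrative; exact flags
# and paths are documented in certificates-management/README.md).
./dss-certs.py get-ca          # export this instance's CA to share with the pool
./dss-certs.py add-pool-ca     # import the CAs received from other pool members
./dss-certs.py apply           # upload the combined CAs to the cluster

# Restart Yugabyte pods so they pick up the updated CAs.
kubectl rollout restart statefulset/yb-master --namespace "$NAMESPACE"
kubectl rollout restart statefulset/yb-tserver --namespace "$NAMESPACE"
```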
2 changes: 0 additions & 2 deletions build/make-certs.py
@@ -1,8 +1,6 @@
#!/usr/bin/env python3

import argparse
import itertools
import glob
import os
import shutil
import subprocess
@@ -128,4 +128,3 @@ Delete the resources: `kubectl delete -f test-app.yml`.
1. Delete all created resources from the cluster (eg. clean up test as described in the previous section.)
2. Make sure all load balancers and target groups have been removed.
3. Run `terraform destroy`.

@@ -1,6 +1,8 @@

locals {
crdb_hostnames = var.aws_route53_zone_id == "" ? {} : { for i in aws_eip.ip_crdb[*] : i.tags.ExpectedDNS => i.public_ip }
yugabyte_master_hostnames = var.aws_route53_zone_id == "" ? {} : { for i in aws_eip.ip_yugabyte_masters[*] : i.tags.ExpectedDNS => i.public_ip }
yugabyte_tserver_hostnames = var.aws_route53_zone_id == "" ? {} : { for i in aws_eip.ip_yugabyte_tservers[*] : i.tags.ExpectedDNS => i.public_ip }
}


@@ -37,3 +39,25 @@ resource "aws_route53_record" "crdb_hostname" {
ttl = 300
records = [each.value]
}

# Yugabyte master nodes DNS
resource "aws_route53_record" "yugabyte_master_hostnames" {
for_each = local.yugabyte_master_hostnames

zone_id = var.aws_route53_zone_id
name = each.key
type = "A"
ttl = 300
records = [each.value]
}

# Yugabyte tserver nodes DNS
resource "aws_route53_record" "yugabyte_tserver_hostnames" {
for_each = local.yugabyte_tserver_hostnames

zone_id = var.aws_route53_zone_id
name = each.key
type = "A"
ttl = 300
records = [each.value]
}
@@ -73,12 +73,36 @@ resource "aws_eip" "gateway" {

# Public Elastic IPs for the crdb instances
resource "aws_eip" "ip_crdb" {
count = var.node_count
count = var.datastore_type == "cockroachdb" ? var.node_count : 0
vpc = true

tags = {
Name = format("%s-ip-crdb%v", var.cluster_name, count.index)
# Preserve mapping between ips and hostnames
ExpectedDNS = format("%s.%s", count.index, var.crdb_hostname_suffix)
ExpectedDNS = format("%s.%s", count.index, var.db_hostname_suffix)
}
}

# Public Elastic IPs for the yugabyte master instances
resource "aws_eip" "ip_yugabyte_masters" {
count = var.datastore_type == "yugabyte" ? var.node_count : 0
vpc = true

tags = {
Name = format("%s-ip-yugabyte-master%v", var.cluster_name, count.index)
# Preserve mapping between ips and hostnames
ExpectedDNS = format("%s.master.%s", count.index, var.db_hostname_suffix)
}
}

# Public Elastic IPs for the yugabyte tserver instances
resource "aws_eip" "ip_yugabyte_tservers" {
count = var.datastore_type == "yugabyte" ? var.node_count : 0
vpc = true

tags = {
Name = format("%s-ip-yugabyte-tserver%v", var.cluster_name, count.index)
# Preserve mapping between ips and hostnames
ExpectedDNS = format("%s.tserver.%s", count.index, var.db_hostname_suffix)
}
}
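The `ExpectedDNS` tags above encode the hostname scheme that the Route53 records are built from: CockroachDB nodes get `N.<suffix>`, while Yugabyte nodes get separate `N.master.<suffix>` and `N.tserver.<suffix>` names. A quick illustration with a hypothetical suffix and three nodes:

```shell
# Hostnames implied by the ExpectedDNS tags above for node_count=3.
# The suffix value is illustrative, not taken from any real deployment.
SUFFIX=db.example.com
for i in 0 1 2; do
  echo "cockroachdb: ${i}.${SUFFIX}"
  echo "yugabyte:    ${i}.master.${SUFFIX} ${i}.tserver.${SUFFIX}"
done
```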
@@ -30,10 +30,42 @@ output "crdb_nodes" {
]
}

output "yugabyte_masters_nodes" {
value = [
for i in aws_eip.ip_yugabyte_masters : {
ip = i.allocation_id
dns = i.tags.ExpectedDNS
}
]
depends_on = [
aws_eip.ip_yugabyte_masters
]
}

output "yugabyte_tservers_nodes" {
value = [
for i in aws_eip.ip_yugabyte_tservers : {
ip = i.allocation_id
dns = i.tags.ExpectedDNS
}
]
depends_on = [
aws_eip.ip_yugabyte_tservers
]
}

output "crdb_addresses" {
value = [for i in aws_eip.ip_crdb[*] : { expected_dns : i.tags.ExpectedDNS, address : i.public_ip }]
}

output "yugabyte_masters_addresses" {
value = [for i in aws_eip.ip_yugabyte_masters[*] : { expected_dns : i.tags.ExpectedDNS, address : i.public_ip }]
}

output "yugabyte_tservers_addresses" {
value = [for i in aws_eip.ip_yugabyte_tservers[*] : { expected_dns : i.tags.ExpectedDNS, address : i.public_ip }]
}

output "gateway_address" {
value = {
expected_dns : aws_eip.gateway[0].tags.ExpectedDNS,
@@ -56,4 +88,4 @@ output "workload_subnet" {

output "iam_role_node_group_arn" {
value = aws_iam_role.dss-cluster-node-group.arn
}
}
@@ -54,17 +54,35 @@ variable "app_hostname" {
EOT
}

variable "crdb_hostname_suffix" {
variable "db_hostname_suffix" {
type = string
description = <<-EOT
The domain name suffix shared by all of your CockroachDB nodes.
For instance, if your CRDB nodes were addressable at 0.db.example.com,
1.db.example.com and 2.db.example.com, then the value would be db.example.com.
The domain name suffix shared by all of your database nodes.
For instance, if your database nodes were addressable at 0.db.example.com,
1.db.example.com and 2.db.example.com (CockroachDB), or at 0.master.db.example.com and 1.tserver.db.example.com (Yugabyte), then the value would be db.example.com.

Example: db.example.com
EOT
}


variable "datastore_type" {
type = string
description = <<-EOT
Type of datastore used

Supported technologies: cockroachdb, yugabyte
EOT

validation {
condition = contains(["cockroachdb", "yugabyte"], var.datastore_type)
error_message = "Supported technologies: cockroachdb, yugabyte"
}

default = "cockroachdb"
}


variable "cluster_name" {
type = string
description = <<-EOT
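With the new `datastore_type` variable defaulting to `cockroachdb`, opting into Yugabyte is a per-deployment choice. A hypothetical invocation (your module layout and var-file conventions may differ):

```shell
# Hypothetical: select the datastore and shared DB hostname suffix at apply
# time. Omitting -var datastore_type keeps the cockroachdb default; the
# validation block rejects any value other than cockroachdb or yugabyte.
terraform apply \
  -var datastore_type=yugabyte \
  -var db_hostname_suffix=db.example.com
```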