This repository was archived by the owner on Nov 2, 2023. It is now read-only.
What happened:

First step: apply the following YurtAppSet:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppSet
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: ud-test
spec:
  selector:
    matchLabels:
      app: ud-test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: ud-test
      spec:
        template:
          metadata:
            labels:
              app: ud-test
          spec:
            containers:
            - name: nginx
              image: nginx:1.19.3
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
      patch:
        spec:
          template:
            spec:
              containers:
              - name: nginx
                image: nginx:1.19.0
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2
      tolerations:
      - effect: NoSchedule
        key: apps.openyurt.io/example
        operator: Exists
  revisionHistoryLimit: 5
EOF
```
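The intent of the beijing pool's `patch` above is that its Deployment should run `nginx:1.19.0` while the base template (and the hangzhou pool) keep `nginx:1.19.3`. As a minimal sketch only: the toy merge below is a simplified stand-in for the strategic merge patch the controller is expected to apply (real strategic merge semantics key container lists on `name`; `merge_pool_patch` is a hypothetical helper, not yurt-app-manager code):

```python
import copy

def merge_pool_patch(template, patch):
    """Toy strategic-merge-patch stand-in: dicts merge recursively;
    'containers' lists merge entries by their 'name' key."""
    out = copy.deepcopy(template)
    for key, val in patch.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_pool_patch(out[key], val)
        elif key == "containers" and isinstance(out.get(key), list):
            by_name = {c["name"]: c for c in out[key]}
            for c in val:
                by_name.setdefault(c["name"], {}).update(c)
            out[key] = list(by_name.values())
        else:
            out[key] = copy.deepcopy(val)
    return out

# Base deploymentTemplate pod spec from the YurtAppSet above.
template = {"spec": {"template": {"spec": {
    "containers": [{"name": "nginx", "image": "nginx:1.19.3"}]}}}}

# The beijing pool's patch as written in the manifest.
beijing_patch = {"spec": {"template": {"spec": {
    "containers": [{"name": "nginx", "image": "nginx:1.19.0"}]}}}}

merged = merge_pool_patch(template, beijing_patch)
print(merged["spec"]["template"]["spec"]["containers"][0]["image"])
# → nginx:1.19.0
```

So if the patch were honored, the beijing Deployment would carry `nginx:1.19.0`; the output below shows it does not.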
Second step: execute `kubectl get yas ud-test -o yaml`; we get the following YAML:
```yaml
...
spec:
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: ud-test
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      patch: {}
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2
      tolerations:
      - effect: NoSchedule
        key: apps.openyurt.io/example
        operator: Exists
...
```
The `patch` in the beijing nodePool has been pruned to `{}`, and the two deployments end up with the same `spec.template.spec.containers`, as follows:
```
[root@kind-k8s yurt-app-manager]# kubectl get deploy -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
ud-test-beijing-5djcs    0/1     1            0           31m   nginx        nginx:1.19.3   app=ud-test,apps.openyurt.io/pool-name=beijing
ud-test-hangzhou-wgxv4   0/2     2            0           31m   nginx        nginx:1.19.3   app=ud-test,apps.openyurt.io/pool-name=hangzhou
```
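One plausible explanation for the symptom (an assumption on my part, not confirmed in this report) is CRD structural-schema pruning: if the YurtAppSet CRD declares `patch` as a bare `object` without `x-kubernetes-preserve-unknown-fields: true`, the API server silently drops every nested field at admission time, which would leave exactly the `patch: {}` persisted above. A toy model of that pruning rule (the `pool_schema` here is hypothetical, not the actual CRD):

```python
def prune(obj, schema):
    """Toy model of Kubernetes structural-schema pruning: any dict key
    not declared in the schema's 'properties' is dropped, unless the
    schema sets x-kubernetes-preserve-unknown-fields."""
    if not isinstance(obj, dict) or schema.get("x-kubernetes-preserve-unknown-fields"):
        return obj
    props = schema.get("properties", {})
    return {k: prune(v, props[k]) for k, v in obj.items() if k in props}

# Hypothetical pool schema where 'patch' is a bare object with no
# declared properties and no preserve-unknown-fields marker.
pool_schema = {"properties": {
    "name": {"type": "string"},
    "replicas": {"type": "integer"},
    "patch": {"type": "object"},
}}

pool = {"name": "beijing", "replicas": 1,
        "patch": {"spec": {"template": {"spec": {"containers": [
            {"name": "nginx", "image": "nginx:1.19.0"}]}}}}}

print(prune(pool, pool_schema))
# → {'name': 'beijing', 'replicas': 1, 'patch': {}}
```

If that is the cause, marking the `patch` field with `x-kubernetes-preserve-unknown-fields: true` in the CRD schema should keep the nested fields intact.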
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:
- OpenYurt version: v0.6.0
- Kubernetes version (use `kubectl version`): client v1.22.15, server v1.22.15
- OS (e.g. `cat /etc/os-release`):
```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- Kernel (e.g. `uname -a`):

```
Linux kind-k8s 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
- Install tools:

```
kubectl create -f config/setup/all-in-one.yaml
```