Merge pull request #1169 from dod-ccpo/generalize-k8s

Use kustomize and envsubst to generalize k8s config.
dandds 2019-11-11 13:14:25 -05:00 committed by GitHub
commit 42e682e63f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
27 changed files with 2852 additions and 2276 deletions


@ -1,86 +1,35 @@
# Kubernetes Deployment Configuration
This folder contains Kubernetes deployment configuration for AWS and Azure. The following assumes that you have `kubectl` installed and configured with permissions to a cluster in one of the two CSPs.
This folder contains Kubernetes deployment configuration for Azure. The following assumes that you have `kubectl` installed and configured with permissions to a Kubernetes cluster.
## Applying K8s configuration
Note that the images specified in the config are out-of-date. CI/CD updates them automatically within the clusters. Be careful when applying new config not to roll the image back to an out-of-date copy.
Applying the K8s config relies on a combination of kustomize and envsubst. Kustomize comes packaged with kubectl v1.14 and higher. envsubst is part of the gettext package; macOS users can install it with `brew install gettext`.
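As a quick sanity check (illustrative; exact output will vary), you can confirm both tools are available:
```
# kubectl must be v1.14 or newer for the built-in kustomize support
kubectl version --client

# built-in kustomize
kubectl kustomize --help > /dev/null && echo "kustomize OK"

# envsubst ships with gettext (brew install gettext on macOS)
envsubst --version
```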
Depending on how your `kubectl` config is set up, these commands may need to be adjusted. If you have configuration for both clusters, you may need to specify the `kubectl` context for each command with the `--context` flag (something like `kubectl --context=aws [etc.]` or `kubectl --context=azure [etc.]`).
The production configuration (currently azure.atat.code.mil) is found in the `deploy/azure` directory. Configuration for a staging environment relies on kustomize to overwrite the production config with values appropriate for that environment. You can find more information about using kustomize [here](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize does not manage templating, and certain values need to be templated. These include:
### Apply the config to an AWS cluster
- CONTAINER_IMAGE: the ATAT container image to use
- PORT_PREFIX: "8" for production, "9" for staging
- MAIN_DOMAIN: the host domain for the environment
- AUTH_DOMAIN: the host domain for the authentication endpoint for the environment
Applying the AWS configuration requires that you have an Elastic File System (EFS) volume available to your EC2 node instances. The EFS ID should be set as an environment variable before you apply the AWS config:
We use envsubst to substitute values for these variables.
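For example, the production values used in the commands below could be exported up front (illustrative values; passing them inline as shown below works just as well):
```
export CONTAINER_IMAGE=myregistry.io/atat-some-commit-sha
export PORT_PREFIX=8
export MAIN_DOMAIN=azure.atat.code.mil
export AUTH_DOMAIN=auth-azure.atat.code.mil
```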
To apply config to the main environment, you should first do a diff to determine whether your new config introduces unexpected changes:
```
export EFSID=my-efs-volume-id
kubectl kustomize deploy/azure | CONTAINER_IMAGE=myregistry.io/atat-some-commit-sha PORT_PREFIX=8 MAIN_DOMAIN=azure.atat.code.mil AUTH_DOMAIN=auth-azure.atat.code.mil envsubst '$CONTAINER_IMAGE $PORT_PREFIX $MAIN_DOMAIN $AUTH_DOMAIN' | kubectl diff -f -
```
First apply all the config:
Here, `kubectl kustomize` assembles the config and streams it to STDOUT. We set environment variables for envsubst to use and pass the names of those variables as a string argument to envsubst. This is important because, without that argument limiting its scope, envsubst would also substitute NGINX's own variables inside the NGINX config. Finally, we pipe the result from envsubst to `kubectl diff`, which reports a list of differences. Note that some values tracked internally by K8s might have changed, such as [`generation`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#objectmeta-v1-meta). This is fine and expected.
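Here is a small standalone illustration of why that scoping argument matters (a demonstration only, not part of the deploy process):
```
# Only $PORT_PREFIX is named in the argument, so NGINX's own variables
# ($host, $request_uri) pass through untouched.
echo 'listen ${PORT_PREFIX}442 ssl; return 301 https://$host$request_uri;' \
  | PORT_PREFIX=8 envsubst '$PORT_PREFIX'
# -> listen 8442 ssl; return 301 https://$host$request_uri;
```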
If you are satisfied with the output from the diff, you can apply the new config the same way:
```
kubectl apply -f deploy/aws/
kubectl kustomize deploy/azure | CONTAINER_IMAGE=myregistry.io/atat-some-commit-sha PORT_PREFIX=8 MAIN_DOMAIN=azure.atat.code.mil AUTH_DOMAIN=auth-azure.atat.code.mil envsubst '$CONTAINER_IMAGE $PORT_PREFIX $MAIN_DOMAIN $AUTH_DOMAIN' | kubectl apply -f -
```
Then apply the storage class config using `envsubst`:
```
envsubst < deploy/aws/storage-class.yml | kubectl apply -f -
```
When applying configuration changes, be careful not to overwrite the storage class configuration with a version that does not have the environment variable substituted.
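As with the rest of the config, one way to guard against that is to diff before applying:
```
envsubst < deploy/aws/storage-class.yml | kubectl diff -f -
```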
#### Fluentd Configuration
For the Fluentd/CloudWatch logging integration to work, you will need to add an additional policy to the worker nodes' role. What follows is adapted from the [EKS Workshop](https://eksworkshop.com/logging/prereqs/).
If you used eksctl to provision the EKS cluster, there will be a CloudFormation stack associated with the cluster. The node instances within the cluster have an associated IAM role that defines their permissions. You need the name of that role. To get it using the AWS CLI, run:
```
export ROLE_NAME=$(aws --profile=dds --region us-east-2 cloudformation describe-stacks --stack-name eksctl-atat-nodegroup-standard-workers | jq -r '.Stacks[].Outputs[] | select(.OutputKey=="InstanceRoleARN") | .OutputValue' | cut -f2 -d/)
```
(This assumes that you have [`jq`](https://stedolan.github.io/jq/) available to parse the JSON response.)
Run `echo $ROLE_NAME` to check that the previous command worked.
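If you don't have `jq`, the AWS CLI's built-in `--query` option can extract the same value (a sketch assuming the same profile and stack name):
```
export ROLE_NAME=$(aws --profile=dds --region us-east-2 cloudformation describe-stacks \
  --stack-name eksctl-atat-nodegroup-standard-workers \
  --query "Stacks[].Outputs[?OutputKey=='InstanceRoleARN'].OutputValue" \
  --output text | cut -f2 -d/)
```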
Create a file called `k8s-logs-policy.json` and add the following content:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
```
This is the new policy that allows the nodes to ship their logs to CloudWatch. To apply it, run the following with the AWS CLI:
```
aws iam put-role-policy --role-name $ROLE_NAME --policy-name Logs-Policy-For-Worker --policy-document file://./k8s-logs-policy.json
```
(This command assumes you are executing it in the same directory as the policy JSON file; adjust the path as needed.)
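To confirm the policy was attached, you can read it back:
```
aws iam get-role-policy --role-name $ROLE_NAME --policy-name Logs-Policy-For-Worker
```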
### Apply the config to an Azure cluster
To apply the configuration to a new cluster, run:
```
kubectl apply -f deploy/azure/
```
**Note:** Depending on how your `kubectl` config is set up, these commands may need to be adjusted. If you have configuration for multiple clusters, you may need to specify the `kubectl` context for each command with the `--context` flag (something like `kubectl --context=my-cluster [etc.]` or `kubectl --context=azure [etc.]`).
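To list the contexts available to `kubectl` and target one explicitly (illustrative cluster name):
```
kubectl config get-contexts
kubectl --context=my-cluster -n atat get pods
```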
## Secrets and Configuration
@ -113,24 +62,6 @@ Notes:
- Be careful not to check the override.ini file into source control.
- Be careful not to overwrite one CSP cluster's config with the other's. This will break everything.
### nginx-client-ca-bundle
(NOTE: This really doesn't need to be a secret since these are public certs. A good change would be to convert it to a k8s configmap.)
This is a PEM bundle of the DoD Certificate Authority certificates. It must be available for CAC authentication.
A local copy of the certs is stored in the repo at `ssl/client-certs/ca-chain.pem`. It can be updated by running `script/sync-dod-certs`. When creating a new cluster, you can copy the cert file to the repo root:
```
cp ssl/client-certs/ca-chain.pem client-ca-bundle.pem
```
and then create a new secret from it:
```
kubectl -n atat create secret generic nginx-client-ca-bundle --from-file=./client-ca-bundle.pem
```
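Per the note above, the bundle can also be stored as a configmap instead of a secret; in this PR the Azure config mounts it from a configmap named `nginx-client-ca-bundle` (see the `nginx-client-ca-bundle.yml` resource in the kustomization). The ad-hoc equivalent of the secret command would be:
```
kubectl -n atat create configmap nginx-client-ca-bundle --from-file=./client-ca-bundle.pem
```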
### nginx-htpasswd
If the site is running in dev mode, the `/login-dev` endpoint is available. This endpoint is protected by basic HTTP auth. To create a new password file, run:


@ -1,8 +0,0 @@
apiVersion: v1
data:
cluster.name: atat
logs.region: us-east-2
kind: ConfigMap
metadata:
name: cluster-info
namespace: amazon-cloudwatch


@ -1,35 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: atst-config
namespace: atat
data:
uwsgi-config: |-
[uwsgi]
callable = app
module = app
socket = /var/run/uwsgi/uwsgi.socket
plugin = python3
plugin = logfile
virtualenv = /opt/atat/atst/.venv
chmod-socket = 666
; logger config
; application logs: log without modifying
logger = secondlogger stdio
log-route = secondlogger atst
log-encoder = format:secondlogger ${msg}
; default uWSGI messages (start, stop, etc.)
logger = default stdio
log-route = default ^((?!atst).)*$
log-encoder = json:default {"timestamp":"${strftime:%%FT%%T}","source":"uwsgi","severity":"DEBUG","message":"${msg}"}
log-encoder = nl
; uWSGI request logs
logger-req = stdio
log-format = request_id=%(var.HTTP_X_REQUEST_ID), pid=%(pid), remote_add=%(addr), request=%(method) %(uri), status=%(status), body_bytes_sent=%(rsize), referer=%(referer), user_agent=%(uagent), http_x_forwarded_for=%(var.HTTP_X_FORWARDED_FOR)
log-req-encoder = json {"timestamp":"${strftime:%%FT%%T}","source":"req","severity":"INFO","message":"${msg}"}
log-req-encoder = nl


@ -1,15 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: atst-envvars
namespace: atat
data:
TZ: UTC
FLASK_ENV: dev
OVERRIDE_CONFIG_FULLPATH: /opt/atat/atst/atst-overrides.ini
UWSGI_CONFIG_FULLPATH: /opt/atat/atst/uwsgi.ini
LOG_JSON: "true"
CSP: aws
PGSSLMODE: verify-full
PGSSLROOTCERT: /opt/atat/atst/ssl/pgsslrootcert.crt


@ -1,78 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: atst-nginx
namespace: atat
data:
nginx-config: |-
server {
listen 8342;
server_name aws.atat.code.mil;
return 301 https://$host$request_uri;
}
server {
listen 8343;
server_name auth-aws.atat.code.mil;
return 301 https://$host$request_uri;
}
server {
server_name aws.atat.code.mil;
# access_log /var/log/nginx/access.log json;
listen 8442 ssl;
listen [::]:8442 ssl ipv6only=on;
ssl_certificate /etc/ssl/private/atat.crt;
ssl_certificate_key /etc/ssl/private/atat.key;
location /login-redirect {
return 301 https://auth-aws.atat.code.mil$request_uri;
}
location /login-dev {
try_files $uri @appbasicauth;
}
location / {
try_files $uri @app;
}
location @app {
include uwsgi_params;
uwsgi_pass unix:///var/run/uwsgi/uwsgi.socket;
uwsgi_param HTTP_X_REQUEST_ID $request_id;
}
location @appbasicauth {
include uwsgi_params;
uwsgi_pass unix:///var/run/uwsgi/uwsgi.socket;
auth_basic "Developer Access";
auth_basic_user_file /etc/nginx/.htpasswd;
uwsgi_param HTTP_X_REQUEST_ID $request_id;
}
}
server {
# access_log /var/log/nginx/access.log json;
server_name auth-aws.atat.code.mil;
listen 8443 ssl;
listen [::]:8443 ssl ipv6only=on;
ssl_certificate /etc/ssl/private/atat.crt;
ssl_certificate_key /etc/ssl/private/atat.key;
# Request and validate client certificate
ssl_verify_client on;
ssl_verify_depth 10;
ssl_client_certificate /etc/ssl/client-ca-bundle.pem;
# Guard against HTTPS -> HTTP downgrade
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; always";
location / {
return 301 https://aws.atat.code.mil$request_uri;
}
location /login-redirect {
try_files $uri @app;
}
location @app {
include uwsgi_params;
uwsgi_pass unix:///var/run/uwsgi/uwsgi.socket;
uwsgi_param HTTP_X_SSL_CLIENT_VERIFY $ssl_client_verify;
uwsgi_param HTTP_X_SSL_CLIENT_CERT $ssl_client_raw_cert;
uwsgi_param HTTP_X_SSL_CLIENT_S_DN $ssl_client_s_dn;
uwsgi_param HTTP_X_SSL_CLIENT_S_DN_LEGACY $ssl_client_s_dn_legacy;
uwsgi_param HTTP_X_SSL_CLIENT_I_DN $ssl_client_i_dn;
uwsgi_param HTTP_X_SSL_CLIENT_I_DN_LEGACY $ssl_client_i_dn_legacy;
uwsgi_param HTTP_X_REQUEST_ID $request_id;
}
}


@ -1,12 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: atst-worker-envvars
namespace: atat
data:
TZ: UTC
DISABLE_CRL_CHECK: "True"
SERVER_NAME: aws.atat.code.mil
PGSSLMODE: verify-full
PGSSLROOTCERT: /opt/atat/atst/ssl/pgsslrootcert.crt


@ -1,287 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: atat
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: atst
name: atst
namespace: atat
spec:
selector:
matchLabels:
role: web
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: atst
role: web
spec:
securityContext:
fsGroup: 101
containers:
- name: atst
image: 904153757533.dkr.ecr.us-east-2.amazonaws.com/atat:latest
envFrom:
- configMapRef:
name: atst-envvars
volumeMounts:
- name: atst-config
mountPath: "/opt/atat/atst/atst-overrides.ini"
subPath: atst-overrides.ini
- name: nginx-client-ca-bundle
mountPath: "/opt/atat/atst/ssl/server-certs/ca-chain.pem"
subPath: client-ca-bundle.pem
- name: uwsgi-socket-dir
mountPath: "/var/run/uwsgi"
- name: crls-vol
mountPath: "/opt/atat/atst/crls"
- name: pgsslrootcert
mountPath: "/opt/atat/atst/ssl/pgsslrootcert.crt"
subPath: pgsslrootcert.crt
- name: nginx
image: nginx:alpine
ports:
- containerPort: 8342
name: main-upgrade
- containerPort: 8442
name: main
- containerPort: 8343
name: auth-upgrade
- containerPort: 8443
name: auth
volumeMounts:
- name: nginx-config
mountPath: "/etc/nginx/conf.d/atst.conf"
subPath: atst.conf
- name: uwsgi-socket-dir
mountPath: "/var/run/uwsgi"
- name: nginx-htpasswd
mountPath: "/etc/nginx/.htpasswd"
subPath: .htpasswd
- name: tls
mountPath: "/etc/ssl/private"
- name: nginx-client-ca-bundle
mountPath: "/etc/ssl/"
volumes:
- name: atst-config
secret:
secretName: atst-config-ini
items:
- key: override.ini
path: atst-overrides.ini
mode: 0644
- name: nginx-client-ca-bundle
secret:
secretName: nginx-client-ca-bundle
items:
- key: client-ca-bundle.pem
path: client-ca-bundle.pem
mode: 0666
- name: nginx-config
configMap:
name: atst-nginx
items:
- key: nginx-config
path: atst.conf
- name: uwsgi-socket-dir
emptyDir:
medium: Memory
- name: nginx-htpasswd
secret:
secretName: atst-nginx-htpasswd
items:
- key: htpasswd
path: .htpasswd
mode: 0640
- name: tls
secret:
secretName: aws-atat-code-mil-tls
items:
- key: tls.crt
path: atat.crt
mode: 0644
- key: tls.key
path: atat.key
mode: 0640
- name: crls-vol
persistentVolumeClaim:
claimName: efs
- name: pgsslrootcert
configMap:
name: pgsslrootcert
items:
- key: cert
path: pgsslrootcert.crt
mode: 0666
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: atst
name: atst-worker
namespace: atat
spec:
selector:
matchLabels:
role: worker
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: atst
role: worker
spec:
securityContext:
fsGroup: 101
containers:
- name: atst-worker
image: 904153757533.dkr.ecr.us-east-2.amazonaws.com/atat:latest
args: [
"/opt/atat/atst/.venv/bin/python",
"/opt/atat/atst/.venv/bin/celery",
"-A",
"celery_worker.celery",
"worker",
"--loglevel=info"
]
envFrom:
- configMapRef:
name: atst-envvars
- configMapRef:
name: atst-worker-envvars
volumeMounts:
- name: atst-config
mountPath: "/opt/atat/atst/atst-overrides.ini"
subPath: atst-overrides.ini
- name: pgsslrootcert
mountPath: "/opt/atat/atst/ssl/pgsslrootcert.crt"
subPath: pgsslrootcert.crt
volumes:
- name: atst-config
secret:
secretName: atst-config-ini
items:
- key: override.ini
path: atst-overrides.ini
mode: 0644
- name: pgsslrootcert
configMap:
name: pgsslrootcert
items:
- key: cert
path: pgsslrootcert.crt
mode: 0666
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: atst
name: atst-beat
namespace: atat
spec:
selector:
matchLabels:
role: beat
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: atst
role: beat
spec:
securityContext:
fsGroup: 101
containers:
- name: atst-beat
image: 904153757533.dkr.ecr.us-east-2.amazonaws.com/atat:latest
args: [
"/opt/atat/atst/.venv/bin/python",
"/opt/atat/atst/.venv/bin/celery",
"-A",
"celery_worker.celery",
"beat",
"--loglevel=info"
]
envFrom:
- configMapRef:
name: atst-envvars
- configMapRef:
name: atst-worker-envvars
volumeMounts:
- name: atst-config
mountPath: "/opt/atat/atst/atst-overrides.ini"
subPath: atst-overrides.ini
- name: pgsslrootcert
mountPath: "/opt/atat/atst/ssl/pgsslrootcert.crt"
subPath: pgsslrootcert.crt
volumes:
- name: atst-config
secret:
secretName: atst-config-ini
items:
- key: override.ini
path: atst-overrides.ini
mode: 0644
- name: pgsslrootcert
configMap:
name: pgsslrootcert
items:
- key: cert
path: pgsslrootcert.crt
mode: 0666
---
apiVersion: v1
kind: Service
metadata:
labels:
app: atst
name: atst-main
namespace: atat
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
ports:
- port: 80
targetPort: 8342
name: http
- port: 443
targetPort: 8442
name: https
selector:
role: web
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
app: atst
name: atst-auth
namespace: atat
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
ports:
- port: 80
targetPort: 8343
name: http
- port: 443
targetPort: 8443
name: https
selector:
role: web
type: LoadBalancer


@ -1,7 +0,0 @@
# create amazon-cloudwatch namespace
apiVersion: v1
kind: Namespace
metadata:
name: amazon-cloudwatch
labels:
name: amazon-cloudwatch


@ -1,43 +0,0 @@
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: crls
namespace: atat
spec:
schedule: "0 * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: crls
image: 904153757533.dkr.ecr.us-east-2.amazonaws.com/atat:latest
command: [
"/bin/sh", "-c"
]
args: [
"/opt/atat/atst/script/sync-crls",
]
envFrom:
- configMapRef:
name: atst-envvars
- configMapRef:
name: atst-worker-envvars
volumeMounts:
- name: atst-config
mountPath: "/opt/atat/atst/atst-overrides.ini"
subPath: atst-overrides.ini
- name: crls-vol
mountPath: "/opt/atat/atst/crls"
volumes:
- name: atst-config
secret:
secretName: atst-config-ini
items:
- key: override.ini
path: atst-overrides.ini
mode: 0644
- name: crls-vol
persistentVolumeClaim:
claimName: efs


@ -1,66 +0,0 @@
# This can't be run without substituting the EFSID environment variable.
# from https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/rbac.yaml
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: efs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: efs-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete", "describe"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-efs-provisioner
subjects:
- kind: ServiceAccount
name: efs-provisioner
# replace with namespace where provisioner is deployed
namespace: atat
roleRef:
kind: ClusterRole
name: efs-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-efs-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-efs-provisioner
subjects:
- kind: ServiceAccount
name: efs-provisioner
# replace with namespace where provisioner is deployed
namespace: atat
roleRef:
kind: Role
name: leader-locking-efs-provisioner
apiGroup: rbac.authorization.k8s.io


@ -1,433 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: amazon-cloudwatch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd-role
rules:
- apiGroups: [""]
resources:
- namespaces
- pods
- pods/logs
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: fluentd-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluentd-role
subjects:
- kind: ServiceAccount
name: fluentd
namespace: amazon-cloudwatch
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
namespace: amazon-cloudwatch
labels:
k8s-app: fluentd-cloudwatch
data:
fluent.conf: |
@include containers.conf
@include systemd.conf
@include host.conf
<match fluent.**>
@type null
</match>
containers.conf: |
<source>
@type tail
@id in_tail_container_logs
@label @containers
path /var/log/containers/*.log
exclude_path ["/var/log/containers/cloudwatch-agent*", "/var/log/containers/fluentd*"]
pos_file /var/log/fluentd-containers.log.pos
tag *
read_from_head true
<parse>
@type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
<source>
@type tail
@id in_tail_cwagent_logs
@label @cwagentlogs
path /var/log/containers/cloudwatch-agent*
pos_file /var/log/cloudwatch-agent.log.pos
tag *
read_from_head true
<parse>
@type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
<source>
@type tail
@id in_tail_fluentd_logs
@label @fluentdlogs
path /var/log/containers/fluentd*
pos_file /var/log/fluentd.log.pos
tag *
read_from_head true
<parse>
@type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
<label @fluentdlogs>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata_fluentd
</filter>
<filter **>
@type record_transformer
@id filter_fluentd_stream_transformer
<record>
stream_name ${tag_parts[3]}
</record>
</filter>
<match **>
@type relabel
@label @NORMAL
</match>
</label>
<label @containers>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata
</filter>
<filter **>
@type record_transformer
@id filter_containers_stream_transformer
<record>
stream_name ${tag_parts[3]}
</record>
</filter>
<filter **>
@type concat
key log
multiline_start_regexp /^\S/
separator ""
flush_interval 5
timeout_label @NORMAL
</filter>
<match **>
@type relabel
@label @NORMAL
</match>
</label>
<label @cwagentlogs>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata_cwagent
</filter>
<filter **>
@type record_transformer
@id filter_cwagent_stream_transformer
<record>
stream_name ${tag_parts[3]}
</record>
</filter>
<filter **>
@type concat
key log
multiline_start_regexp /^\d{4}[-/]\d{1,2}[-/]\d{1,2}/
separator ""
flush_interval 5
timeout_label @NORMAL
</filter>
<match **>
@type relabel
@label @NORMAL
</match>
</label>
<label @NORMAL>
<match **>
@type cloudwatch_logs
@id out_cloudwatch_logs_containers
region "#{ENV.fetch('REGION')}"
log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application"
log_stream_name_key stream_name
remove_log_stream_name_key true
auto_create_stream true
<buffer>
flush_interval 5
chunk_limit_size 2m
queued_chunks_limit_size 32
retry_forever true
</buffer>
</match>
</label>
systemd.conf: |
<source>
@type systemd
@id in_systemd_kubelet
@label @systemd
filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
<entry>
field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
field_map_strict true
</entry>
path /var/log/journal
<storage>
@type local
persistent true
path /var/log/fluentd-journald-kubelet-pos.json
</storage>
read_from_head true
tag kubelet.service
</source>
<source>
@type systemd
@id in_systemd_kubeproxy
@label @systemd
filters [{ "_SYSTEMD_UNIT": "kubeproxy.service" }]
<entry>
field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
field_map_strict true
</entry>
path /var/log/journal
<storage>
@type local
persistent true
path /var/log/fluentd-journald-kubeproxy-pos.json
</storage>
read_from_head true
tag kubeproxy.service
</source>
<source>
@type systemd
@id in_systemd_docker
@label @systemd
filters [{ "_SYSTEMD_UNIT": "docker.service" }]
<entry>
field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
field_map_strict true
</entry>
path /var/log/journal
<storage>
@type local
persistent true
path /var/log/fluentd-journald-docker-pos.json
</storage>
read_from_head true
tag docker.service
</source>
<label @systemd>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata_systemd
</filter>
<filter **>
@type record_transformer
@id filter_systemd_stream_transformer
<record>
stream_name ${tag}-${record["hostname"]}
</record>
</filter>
<match **>
@type cloudwatch_logs
@id out_cloudwatch_logs_systemd
region "#{ENV.fetch('REGION')}"
log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/dataplane"
log_stream_name_key stream_name
auto_create_stream true
remove_log_stream_name_key true
<buffer>
flush_interval 5
chunk_limit_size 2m
queued_chunks_limit_size 32
retry_forever true
</buffer>
</match>
</label>
host.conf: |
<source>
@type tail
@id in_tail_dmesg
@label @hostlogs
path /var/log/dmesg
pos_file /var/log/dmesg.log.pos
tag host.dmesg
read_from_head true
<parse>
@type syslog
</parse>
</source>
<source>
@type tail
@id in_tail_secure
@label @hostlogs
path /var/log/secure
pos_file /var/log/secure.log.pos
tag host.secure
read_from_head true
<parse>
@type syslog
</parse>
</source>
<source>
@type tail
@id in_tail_messages
@label @hostlogs
path /var/log/messages
pos_file /var/log/messages.log.pos
tag host.messages
read_from_head true
<parse>
@type syslog
</parse>
</source>
<label @hostlogs>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata_host
</filter>
<filter **>
@type record_transformer
@id filter_containers_stream_transformer_host
<record>
stream_name ${tag}-${record["host"]}
</record>
</filter>
<match host.**>
@type cloudwatch_logs
@id out_cloudwatch_logs_host_logs
region "#{ENV.fetch('REGION')}"
log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/host"
log_stream_name_key stream_name
remove_log_stream_name_key true
auto_create_stream true
<buffer>
flush_interval 5
chunk_limit_size 2m
queued_chunks_limit_size 32
retry_forever true
</buffer>
</match>
</label>
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd-cloudwatch
namespace: amazon-cloudwatch
labels:
k8s-app: fluentd-cloudwatch
spec:
template:
metadata:
labels:
k8s-app: fluentd-cloudwatch
annotations:
configHash: 8915de4cf9c3551a8dc74c0137a3e83569d28c71044b0359c2578d2e0461825
spec:
serviceAccountName: fluentd
terminationGracePeriodSeconds: 30
# Because the image's entrypoint requires to write on /fluentd/etc but we mount configmap there which is read-only,
# this initContainers workaround or other is needed.
# See https://github.com/fluent/fluentd-kubernetes-daemonset/issues/90
initContainers:
- name: copy-fluentd-config
image: busybox
command: ['sh', '-c', 'cp /config-volume/..data/* /fluentd/etc']
volumeMounts:
- name: config-volume
mountPath: /config-volume
- name: fluentdconf
mountPath: /fluentd/etc
- name: update-log-driver
image: busybox
command: ['sh','-c','']
containers:
- name: fluentd-cloudwatch
image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-cloudwatch-1.4
env:
- name: REGION
valueFrom:
configMapKeyRef:
name: cluster-info
key: logs.region
- name: CLUSTER_NAME
valueFrom:
configMapKeyRef:
name: cluster-info
key: cluster.name
- name: CI_VERSION
value: "k8s/1.0.0"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: config-volume
mountPath: /config-volume
- name: fluentdconf
mountPath: /fluentd/etc
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: runlogjournal
mountPath: /run/log/journal
readOnly: true
- name: dmesg
mountPath: /var/log/dmesg
readOnly: true
volumes:
- name: config-volume
configMap:
name: fluentd-config
- name: fluentdconf
emptyDir: {}
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: runlogjournal
hostPath:
path: /run/log/journal
- name: dmesg
hostPath:
path: /var/log/dmesg

File diff suppressed because it is too large.


@ -1,80 +0,0 @@
# from https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: efs-provisioner
data:
file.system.id: $EFSID
aws.region: us-east-2
provisioner.name: example.com/aws-efs
dns.name: $EFSID.efs.us-east-2.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: efs-provisioner
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: efs-provisioner
spec:
serviceAccountName: efs-provisioner
containers:
- name: efs-provisioner
image: quay.io/external_storage/efs-provisioner:latest
env:
- name: FILE_SYSTEM_ID
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: file.system.id
- name: AWS_REGION
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: aws.region
- name: DNS_NAME
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: dns.name
optional: true
- name: PROVISIONER_NAME
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: provisioner.name
volumeMounts:
- name: pv-volume
mountPath: /persistentvolumes
volumes:
- name: pv-volume
nfs:
server: $EFSID.efs.us-east-2.amazonaws.com
path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: efs
annotations:
volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
accessModes:
- ReadWriteMany
storageClassName: aws-efs
resources:
requests:
storage: 1Mi
---


@ -7,20 +7,20 @@ metadata:
data:
nginx-config: |-
server {
listen 8342;
server_name azure.atat.code.mil;
listen ${PORT_PREFIX}342;
server_name ${MAIN_DOMAIN};
return 301 https://$host$request_uri;
}
server {
listen 8343;
server_name auth-azure.atat.code.mil;
listen ${PORT_PREFIX}343;
server_name ${AUTH_DOMAIN};
return 301 https://$host$request_uri;
}
server {
server_name azure.atat.code.mil;
server_name ${MAIN_DOMAIN};
# access_log /var/log/nginx/access.log json;
listen 8442 ssl;
listen [::]:8442 ssl ipv6only=on;
listen ${PORT_PREFIX}442 ssl;
listen [::]:${PORT_PREFIX}442 ssl ipv6only=on;
ssl_certificate /etc/ssl/private/atat.crt;
ssl_certificate_key /etc/ssl/private/atat.key;
location /login-redirect {
@ -47,9 +47,9 @@ data:
}
server {
# access_log /var/log/nginx/access.log json;
server_name auth-azure.atat.code.mil;
listen 8443 ssl;
listen [::]:8443 ssl ipv6only=on;
server_name ${AUTH_DOMAIN};
listen ${PORT_PREFIX}443 ssl;
listen [::]:${PORT_PREFIX}443 ssl ipv6only=on;
ssl_certificate /etc/ssl/private/atat.crt;
ssl_certificate_key /etc/ssl/private/atat.key;
# Request and validate client certificate


@ -15,7 +15,7 @@ spec:
selector:
matchLabels:
role: web
replicas: 1
replicas: 4
strategy:
type: RollingUpdate
template:
@ -28,7 +28,7 @@ spec:
fsGroup: 101
containers:
- name: atst
image: pwatat.azurecr.io/atat:latest
image: $CONTAINER_IMAGE
envFrom:
- configMapRef:
name: atst-envvars
@ -79,12 +79,9 @@ spec:
path: atst-overrides.ini
mode: 0644
- name: nginx-client-ca-bundle
secret:
secretName: nginx-client-ca-bundle
items:
- key: client-ca-bundle.pem
path: client-ca-bundle.pem
mode: 0666
configMap:
name: nginx-client-ca-bundle
defaultMode: 0666
- name: nginx-config
configMap:
name: atst-nginx
@ -133,7 +130,7 @@ spec:
selector:
matchLabels:
role: worker
replicas: 1
replicas: 2
strategy:
type: RollingUpdate
template:
@ -146,7 +143,7 @@ spec:
fsGroup: 101
containers:
- name: atst-worker
image: pwatat.azurecr.io/atat:latest
image: $CONTAINER_IMAGE
args: [
"/opt/atat/atst/.venv/bin/python",
"/opt/atat/atst/.venv/bin/celery",
@ -207,7 +204,7 @@ spec:
fsGroup: 101
containers:
- name: atst-beat
image: pwatat.azurecr.io/atat:latest
image: $CONTAINER_IMAGE
args: [
"/opt/atat/atst/.venv/bin/python",
"/opt/atat/atst/.venv/bin/celery",


@ -12,7 +12,7 @@ spec:
restartPolicy: OnFailure
containers:
- name: crls
image: pwatat.azurecr.io/atat:latest
image: $CONTAINER_IMAGE
command: [
"/bin/sh", "-c"
]


@ -0,0 +1,11 @@
namespace: atat
resources:
- azure.yml
- atst-configmap.yml
- atst-envvars-configmap.yml
- atst-nginx-configmap.yml
- atst-worker-envvars-configmap.yml
- crls-sync.yaml
- pgsslrootcert.yml
- volume-claim.yml
- nginx-client-ca-bundle.yml

File diff suppressed because it is too large.


@ -32,15 +32,3 @@ subjects:
- kind: ServiceAccount
name: persistent-volume-binder
namespace: kube-system
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: crls-vol-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: azurefile
resources:
requests:
storage: 1Gi


@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: crls-vol-claim
  namespace: atat
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi


@ -69,12 +69,9 @@ spec:
path: atst-overrides.ini
mode: 0644
- name: nginx-client-ca-bundle
secret:
secretName: nginx-client-ca-bundle
items:
- key: client-ca-bundle.pem
path: client-ca-bundle.pem
mode: 0666
configMap:
name: nginx-client-ca-bundle
defaultMode: 0666
- name: nginx-config
configMap:
name: atst-nginx

File diff suppressed because it is too large.


@ -0,0 +1,12 @@
- op: replace
  path: /spec/template/spec/containers/1/ports/0/containerPort
  value: 9342
- op: replace
  path: /spec/template/spec/containers/1/ports/1/containerPort
  value: 9442
- op: replace
  path: /spec/template/spec/containers/1/ports/2/containerPort
  value: 9343
- op: replace
  path: /spec/template/spec/containers/1/ports/3/containerPort
  value: 9443


@ -0,0 +1,15 @@
namespace: staging
bases:
- ../../azure/
resources:
- namespace.yml
patchesStrategicMerge:
- replica_count.yml
- ports.yml
patchesJson6902:
- target:
    group: extensions
    version: v1beta1
    kind: Deployment
    name: atst
  path: json_ports.yml


@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: staging


@ -0,0 +1,28 @@
---
apiVersion: v1
kind: Service
metadata:
  name: atst-main
spec:
  loadBalancerIP: 40.76.217.62
  ports:
  - port: 80
    targetPort: 9342
    name: http
  - port: 443
    targetPort: 9442
    name: https
---
apiVersion: v1
kind: Service
metadata:
  name: atst-auth
spec:
  loadBalancerIP: 40.87.14.233
  ports:
  - port: 80
    targetPort: 9343
    name: http
  - port: 443
    targetPort: 9443
    name: https


@ -0,0 +1,14 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: atst
spec:
  replicas: 2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: atst-worker
spec:
  replicas: 1