AT-AT needs to maintain a key-value CRL cache where each key is the DER
byte-string of the issuer and the value is a dictionary of the CRL file
path and expiration. This way, when it checks a client certificate, it
can load the correct CRL by comparing issuers. This is preferable to
loading all of the CRLs in memory. However, it still requires that AT-AT
load and parse every CRL when the application boots. Because of the size
of the CRLs and of their parsed, in-memory representations, this causes
the application's memory usage to spike to nearly 900MB (resting usage
is around 50MB).
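For illustration, the cache shape might look like the following; the field
names here are assumptions for the sketch, not necessarily the exact keys
CRLCache uses:

```python
issuer_der = b"..."  # stand-in for the real DER byte-string of a CRL issuer
crl_cache = {
    issuer_der: {
        "crl_path": "crls/example.crl",       # where the CRL lives on disk
        "expiration": "2019-09-30T00:00:00",  # the CRL's nextUpdate value
    }
}
```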
This change introduces a small function that parses the CRL ad hoc and
extracts only the information we need: the issuer and the expiration. It
does this by reading the CRL byte-by-byte until it reaches the ASN.1
sequence that corresponds to the issuer, and then looks ahead to find the
nextUpdate field (i.e., the expiration date). The CRLCache class uses
this function to build its cache and JSON-serializes the cache to disk.
If another AT-AT application process finds the serialized version, it
will load that copy instead of rebuilding the cache. This also entails a
change to the signature of CRLCache's `__init__` method: it now expects
the CRL directory as its second argument, instead of a list of locations.
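For reference, here is a minimal sketch of this style of ad-hoc DER walk,
assuming a well-formed, DER-encoded CRL that includes a nextUpdate field;
the function and helper names are hypothetical, not the actual ones in
AT-AT:

```python
from datetime import datetime

def _read_tlv(data, offset):
    """Parse one ASN.1 TLV at `offset`; return (tag, value_start, value_end)."""
    tag = data[offset]
    length = data[offset + 1]
    offset += 2
    if length & 0x80:  # long-form length: low 7 bits count the length bytes
        n = length & 0x7F
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    return tag, offset, offset + length

def crl_issuer_and_expiration(crl_bytes):
    # CertificateList and its tbsCertList are both SEQUENCEs (tag 0x30)
    _, tbs_start, _ = _read_tlv(crl_bytes, 0)
    _, offset, _ = _read_tlv(crl_bytes, tbs_start)
    # skip the optional version INTEGER (tag 0x02), then the signature
    # AlgorithmIdentifier SEQUENCE; the issuer Name comes right after
    tag, _, end = _read_tlv(crl_bytes, offset)
    if tag == 0x02:
        tag, _, end = _read_tlv(crl_bytes, end)
    offset = end
    _, _, end = _read_tlv(crl_bytes, offset)
    issuer = crl_bytes[offset:end]  # the issuer's full DER byte-string
    # thisUpdate comes first; nextUpdate (the expiration) follows it
    # (UTCTime is tag 0x17, GeneralizedTime is tag 0x18)
    _, _, end = _read_tlv(crl_bytes, end)
    tag, start, end = _read_tlv(crl_bytes, end)
    fmt = "%y%m%d%H%M%SZ" if tag == 0x17 else "%Y%m%d%H%M%SZ"
    return issuer, datetime.strptime(crl_bytes[start:end].decode("ascii"), fmt)
```

CRLCache can then key its cache on those raw issuer bytes (hex-encoding
them when JSON-serializing, since JSON cannot represent raw byte-strings)
and store the CRL's file path and expiration as the value.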
The Python script invoked by `script/sync-crls` rebuilds the location
cache each time it runs. This means the Kubernetes CronJob for CRLs
refreshes the cache on every run, and a newly booted application
container picks up the refreshed cache.
This also adds a nightly CircleCI job to sync the CRLs and verify that
the ad-hoc parsing function returns the same results as a full parse
with the Python cryptography library. This provides extra assurance
that the function returns correct results on real data.
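A sketch of what that nightly comparison might look like, reusing the
hypothetical crl_issuer_and_expiration helper from above
(`load_der_x509_crl`, `Name.public_bytes`, and `next_update` are real
cryptography APIs):

```python
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.backends import default_backend

def check_crl_parity(crl_path):
    crl_bytes = Path(crl_path).read_bytes()
    issuer, expiration = crl_issuer_and_expiration(crl_bytes)
    parsed = x509.load_der_x509_crl(crl_bytes, default_backend())
    # the ad-hoc results must match the full parse exactly
    assert issuer == parsed.issuer.public_bytes(default_backend())
    assert expiration == parsed.next_update
```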
The CircleCI Orbs were useful for getting started, but now that we only
have to deploy to one provider, our pipeline should be tailored to push
efficiently to just that environment. This inlines, as bash/sh commands,
all the relevant pieces from the Orbs we were relying on.
This builds the Docker images upfront. Since we have a multi-stage
Dockerfile, it builds the first stage as a separate image and then
proceeds to build the complete image. This is done so that the first
stage (called "builder") can be used for testing: it retains executables,
like pipenv, that we need in order to install the development
dependencies for the tests.
Other notes:
- CircleCI does not persist Docker images between jobs. As a
workaround, we use the CircleCI caching mechanism to create a named
cache with *.tar copies of the images. Subsequent jobs use the cache
and load the images.
- Both the test and integration-tests jobs need to make minor
modifications to the container to run correctly. The test job needs to
install the development Python dependencies, and the integration-tests
job needs to rebuild the JS bundle so that it uses the mock uploader
(the container is built to use the Azure uploader by default).
- The test and integration-tests jobs run in parallel.
- This adjusts the Dockerfile so that the TZ environment variable is set
for both stages of the build.
This does the following:
- Consolidates the app_setup and test jobs into one. The test job was
only one additional step, so it's not worth separating.
- Updates the Postgres image to match what we're running for the
deployed version of the site (i.e., v10).
- Removes some unnecessary steps from the first job.
- Removes all AWS config so that CD will only push to the Azure
container registry, run migrations against the Azure-hosted database,
and rotate the container images in the Azure k8s cluster.
Adds a CircleCI integration for Ghost Inspector
(https://ghostinspector.com), a headless browser testing SaaS. The
README is updated with details about how to run GI locally.
Removes the bootstrap setup for Selenium testing with BrowserStack.
We will run a separate pod for the beat worker. There should only ever
be a single beat worker (to avoid redundant work), so the number of
replicas needs to be managed independently.
This adds both the Kubernetes config for the new pod and additional
CircleCI config to swap a new image into the pod during CD.
This will allow Kubernetes resources that only pull images occasionally
(i.e., k8s jobs) to point to a static tag name, "latest", that is updated
regularly. It also means we can refer to that image in the k8s config
tracked in the repo instead of to out-of-date image tags.
This applies configuration changes for the Flask app and adds changes to
the Dockerfile so that the build can make a CSP-specific JS bundle. It
adds a `write_dotenv` script that creates the appropriate `.env` file for
the `parcel` bundler depending on how the `CSP` environment variable is
set (a sketch follows the list below).
- Configure K8s environment variables for Flask CSP usage
- Supply default CSP config setting to Flask app
- Declare the CSP arg in the Dockerfile
- Supply extra Docker build args to CD
- Fix top-level reference to boto3 in file_upload module
- Add back missing sample NGINX config for docker-compose build
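For illustration, a minimal sketch of what such a script could do; the
variable written to `.env` is an assumption, not necessarily the one the
`parcel` build reads:

```python
import os

def write_dotenv(path=".env"):
    # pick the bundle flavor based on the CSP environment variable
    csp = os.environ.get("CSP", "mock")  # e.g. "azure" or "mock"
    with open(path, "w") as f:
        f.write(f"CLOUD_PROVIDER={csp}\n")  # assumed variable name

if __name__ == "__main__":
    write_dotenv()
```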
Add CircleCI config for both CSPs to:
- build the Docker image and push it to the registry
- run a short-lived k8s job to apply migrations and seed data
- update the images for the Flask pods and rq worker pods
This adds the AWS and Azure CircleCI Orbs for updating container images
in a cluster. It installs the clients for both CSPs, configures kubectl
with a programmatic user's auth information, and executes a `kubectl set
image` command to update the cluster's image to the one that was just
pushed to the container registry.
We should try to track mainline Python as much as possible.
PyYAML was a sub-dependency of a dev dependency, but the translations
utility relied on it. Because of this, bundling only the production
Python dependencies did not work.