Creating the ATAT database requires a separate connection to one of the
default Postgres databases, like `postgres`. This updates the scripts
and secrets-tool command to handle creating the database. It also
removes database creation from Terraform and updates the documentation.
This additional secrets-tool command can be used to run the database
bootstrapping script (`script/database_setup.py`) inside an ATAT docker
container against the Azure database. It sources the necessary keys from
Key Vault.
This script is for bootstrapping the initial database. It can be run via
a container, but requires that a Postgres superuser's credentials be
provided via our normal config. That way the superuser can provision a
less-privileged user for the application's database connection.
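As a rough sketch of that bootstrapping flow, here is a minimal example; the connection settings, database name, and role names are illustrative assumptions and not necessarily what `script/database_setup.py` does:

```python
# Hypothetical bootstrapping sketch; names and settings are illustrative only.
import psycopg2
from psycopg2 import sql


def bootstrap(host, superuser, superuser_password, db_name, app_user, app_password):
    # Connect as the superuser to the default `postgres` database, since the
    # ATAT database does not exist yet. CREATE DATABASE cannot run inside a
    # transaction, so autocommit is required.
    conn = psycopg2.connect(
        host=host, dbname="postgres", user=superuser, password=superuser_password
    )
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(sql.SQL("CREATE DATABASE {}").format(sql.Identifier(db_name)))
        # Provision a less-privileged role for the application's own connection.
        cur.execute(
            sql.SQL("CREATE ROLE {} WITH LOGIN PASSWORD %s").format(
                sql.Identifier(app_user)
            ),
            [app_password],
        )
        cur.execute(
            sql.SQL("GRANT CONNECT ON DATABASE {} TO {}").format(
                sql.Identifier(db_name), sql.Identifier(app_user)
            )
        )
    conn.close()
```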
- Fixes LGTM warnings for an unused import and for equality comparisons
to None in SQLAlchemy filters (see the sketch after this list).
- Removes part of a unit test asserting that the claimed_until locking
mechanism works correctly. If I recall correctly, this does not work
in unit tests because the test takes place inside a transaction, and
the database provider does not evaluate the current time until the
transaction is committed.
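For reference, a minimal sketch of the None-comparison fix, assuming SQLAlchemy 1.4+ and using a stand-in model rather than ATAT's actual schema:

```python
# Illustrative only; the model and column are stand-ins for ATAT's schema.
from sqlalchemy import Column, DateTime, Integer, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Task(Base):
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)
    claimed_until = Column(DateTime)


# LGTM warns about `Task.claimed_until == None`; SQLAlchemy's identity
# comparison expresses the same filter without the equality-to-None pattern:
stmt = select(Task).where(Task.claimed_until.is_(None))
```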
Having `get_stage_csp_class` in the csp module meant that any file that touched that import path would throw an error in a REPL. This change allows importing the Azure and Mock providers for interactive dev.
Moves cloud.py to a module init and moves policy with it. Updates related unit tests. Also adds a patch to the state machine test to prevent randomness in the mock from failing the test.
There is an issue with circular imports because the
PortfolioStateMachine model imports some error classes from the cloud
module. The cloud module was importing some other models in turn, which
created the cycle. Since we plan to pass all data as dataclass
payloads to the cloud interface, I removed the type hints that
referenced specific SQLAlchemy models and removed the imports.
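A minimal sketch of that pattern, with invented names for the payload and interface (ATAT's actual classes may differ): the cloud interface accepts a plain dataclass, so it never imports SQLAlchemy models and the import cycle disappears.

```python
# Illustrative names only; not ATAT's actual payload or interface classes.
from dataclasses import dataclass


@dataclass
class ApplicationCSPPayload:
    tenant_id: str
    display_name: str
    parent_id: str


class CloudProviderInterface:
    def create_application(self, payload: ApplicationCSPPayload):
        # The type hint references the dataclass, not a SQLAlchemy model, so
        # this module has no reason to import from the models package.
        raise NotImplementedError()
```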
This adds:
- A Celery beat task for enqueuing application creation tasks
- A Celery task for creating the application
- Payload and Response dataclasses for creating management groups
It also does some incidental cleanup.
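A hedged sketch of how the beat task and the worker task could fit together; the task names, schedule, and lookup helper below are assumptions for illustration, not ATAT's actual implementation:

```python
# Minimal sketch with assumed names.
from celery import Celery

celery = Celery(__name__)


@celery.task(bind=True)
def create_application(self, application_id):
    # Would claim the application, call the CSP with a management-group
    # payload dataclass, and store the returned cloud_id (details omitted).
    ...


@celery.task(bind=True)
def dispatch_create_application(self, **kwargs):
    # Beat task: find applications that are ready for provisioning and
    # enqueue one creation task per application.
    for application_id in pending_application_ids():  # hypothetical helper
        create_application.delay(application_id=application_id)


def pending_application_ids():
    # Stand-in for the real lookup; returns UUIDs of applications to create.
    return []


celery.conf.beat_schedule = {
    "dispatch_create_application": {
        "task": f"{__name__}.dispatch_create_application",
        "schedule": 60,  # run every 60 seconds (illustrative interval)
    },
}
```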
We don't know yet how useful the job failure tables will be, and
maintaining multiple failure tables--one for every entity involved in
CSP provisioning--is burdensome. This collapses them all into a single
table that tracks the entity type (environment, portfolio, etc.) and the
entity ID. That way we can construct queries when needed to find task
results.
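A sketch of what that single table might look like as a SQLAlchemy model; the table and column names are assumptions, not ATAT's actual schema:

```python
# Illustrative single, generic job-failure table.
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class JobFailure(Base):
    __tablename__ = "job_failures"

    id = Column(Integer, primary_key=True)
    task_id = Column(String, nullable=False)
    # Generic reference instead of one failure table per entity type:
    entity = Column(String, nullable=False)  # e.g. "portfolio", "environment"
    entity_id = Column(UUID(as_uuid=True), nullable=False)
    created_at = Column(DateTime, server_default=func.now())
```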
Adds a method to the Applications domain class that can return a list of
UUIDs for applications that are ready to be provisioned (a sketch of
such a query follows the list below). It requires that:
- the associated portfolio and state machine have a state of COMPLETED
- the application has not been marked deleted
- the application does not have an existing cloud_id
- the application does not have an existing claim on it
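A self-contained sketch of such a query, assuming SQLAlchemy 1.4+; the models below are simplified stand-ins for ATAT's actual Portfolio, PortfolioStateMachine, and Application models:

```python
from datetime import datetime

from sqlalchemy import (
    Boolean, Column, DateTime, ForeignKey, Integer, String, or_, select,
)
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Portfolio(Base):
    __tablename__ = "portfolios"
    id = Column(Integer, primary_key=True)


class PortfolioStateMachine(Base):
    __tablename__ = "portfolio_state_machines"
    id = Column(Integer, primary_key=True)
    portfolio_id = Column(Integer, ForeignKey("portfolios.id"))
    state = Column(String)


class Application(Base):
    __tablename__ = "applications"
    id = Column(Integer, primary_key=True)
    portfolio_id = Column(Integer, ForeignKey("portfolios.id"))
    cloud_id = Column(String)
    deleted = Column(Boolean, default=False)
    claimed_until = Column(DateTime)


def ready_application_ids_query(now=None):
    # Applications are ready when portfolio provisioning is COMPLETED, they
    # are not deleted, have no cloud_id yet, and carry no unexpired claim.
    now = now or datetime.utcnow()
    return (
        select(Application.id)
        .join(Portfolio, Application.portfolio_id == Portfolio.id)
        .join(
            PortfolioStateMachine,
            PortfolioStateMachine.portfolio_id == Portfolio.id,
        )
        .where(
            PortfolioStateMachine.state == "COMPLETED",
            ~Application.deleted,
            Application.cloud_id.is_(None),
            or_(
                Application.claimed_until.is_(None),
                Application.claimed_until < now,
            ),
        )
    )
```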
This finally fixes the output from the vpc module so that it returns a
full list of subnets. Now they can be referenced the same way the redis
module references them in this commit.