- Fixes LGTM warnings for an unused import and equality comparisons to
None in SQLAlchemy filters (see the sketch after this list).
- Removes part of a unit test asserting that the claimed_until locking
mechanism works correctly. If I recall correctly, this does not work
in unit tests because the test takes place inside a transaction, and
the database provider does not evaluate the current time until the
transaction is committed.
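For reference, the LGTM-clean way to express NULL checks in SQLAlchemy
filters is `.is_(None)` / `.isnot(None)` rather than comparing to None
with `==` / `!=`. A minimal sketch, assuming an `Application` model and
an active `session`:

```python
# Sketch only; `Application` and `session` are assumed to exist.
# Flagged by LGTM (equality comparison to None):
#     session.query(Application).filter(Application.cloud_id == None)
# Equivalent, LGTM-clean forms:
unclaimed = session.query(Application).filter(Application.cloud_id.is_(None))
provisioned = session.query(Application).filter(Application.cloud_id.isnot(None))
```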
We don't know yet how useful the job failure tables will be, and
maintaining multiple failure tables--one for every entity involved in
CSP provisioning--is burdensome. This collapses them all into a single
table that tracks the entity type (environment, portfolio, etc.) and the
entity ID. That way we can construct queries when needed to find task
results.
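A rough sketch of what the consolidated table could look like as a
SQLAlchemy model; the class name and columns below are illustrative
assumptions, not the actual schema:

```python
from sqlalchemy import Column, DateTime, String, func
from sqlalchemy.dialects.postgresql import UUID

from atst.models import Base  # assumed declarative base


class JobFailure(Base):
    __tablename__ = "job_failures"

    id = Column(UUID(as_uuid=True), primary_key=True)
    task_id = Column(String, nullable=False)  # Celery task id, for finding the result
    entity = Column(String, nullable=False)  # e.g. "environment", "portfolio"
    entity_id = Column(UUID(as_uuid=True), nullable=False)
    created_at = Column(DateTime, server_default=func.now())
```

Queries for a given resource's failures then just filter on `entity` and
`entity_id`.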
Adds a method to the Applications domain class that returns a list of
UUIDs for applications that are ready to be provisioned (see the sketch
after this list). It requires that:
- the associated portfolio and state machine have a state of COMPLETED
- the application not have been marked deleted
- the application not have an existing cloud_id
- the application not have an existing claim on it
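A minimal sketch of such a query; the model, column, and state names
are assumptions for illustration:

```python
# Assumes the app's `db` session and the Application, Portfolio, and
# PortfolioStateMachine models; FSMStates is the assumed state enum.
class Applications:
    @classmethod
    def get_applications_pending_creation(cls):
        results = (
            db.session.query(Application.id)
            .join(Portfolio, Application.portfolio_id == Portfolio.id)
            .join(PortfolioStateMachine, PortfolioStateMachine.portfolio_id == Portfolio.id)
            .filter(PortfolioStateMachine.state == FSMStates.COMPLETED)
            .filter(Application.deleted.is_(False))
            .filter(Application.cloud_id.is_(None))
            .filter(Application.claimed_until.is_(None))  # no active claim
            .all()
        )
        return [app_id for (app_id,) in results]
```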
Since we're using non-native enums for our models, Alembic has trouble
determining what the previous column "type" actually was, and not
specifying it explicitly in the migration produces a bad constraint.
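As an illustration (the table, column, and enum values below are
hypothetical), the fix is to spell out `existing_type` in the migration
so Alembic does not have to guess:

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    # For non-native enums the column is a VARCHAR plus a CHECK constraint;
    # without existing_type, Alembic guesses at the old type and can emit a
    # broken constraint.
    op.alter_column(
        "portfolio_state_machines",
        "state",
        type_=sa.Enum("UNSTARTED", "STARTED", "COMPLETED", name="fsmstates", native_enum=False),
        existing_type=sa.Enum("UNSTARTED", "STARTED", name="fsmstates", native_enum=False),
    )
```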
This is leftover from a previous iteration of ATAT where inviting a user
to a portfolio would create a pending entry in the users table. This is
no longer used.
New designs call for a streamlined New Portfolio page, with far
fewer input options. This commit refactors that page according to those
designs.
Some of the route functions in this commit refer to a "step 1" of creating
a new Portfolio. Though there is no "step 2" right now, the designs call
for a multistep flow for the Portfolio creation process, so this commit sets
the stage for that.
AT-AT needs to be able to track which user tasks failed and why. To
accomplish this we:
- Enabled Celery's results backend, which logs task results to a data
store (in our case, a Postgres table).
(https://docs.celeryproject.org/en/latest/userguide/tasks.html#result-backends)
- Created tables to track the relationships between the relevant models
(Environment, EnvironmentRole) and their task failures.
- Added an `on_failure` hook that tasks can use. The hook will add
records to the job failure tables.
Now a resource like an `Environment` has access to its task failures
through the corresponding failure table.
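A rough sketch of how the `on_failure` hook can be wired up through a
custom task base class; the `log_failure` helper and the task shown here
are assumptions, not the actual code:

```python
from celery import Task


class RecordFailure(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Celery calls this after the task raises; record the failure so the
        # owning resource (e.g. an Environment) can surface it later.
        log_failure(task_id=task_id, exc=exc)  # hypothetical helper


@celery.task(bind=True, base=RecordFailure)
def provision_environment(self, environment_id):
    ...  # CSP provisioning work
```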
Notes:
- It might prove useful to use a real foreign key to the Celery results
table eventually. I did not do it here because it requires that we
explicitly store the Celery results table schema as a migration and
add a model for it. In the current implementation, AT-AT can be
agnostic about where the results live.
- We store the task results indefinitely, so it is important to specify
tasks for which we do not care about the results (like `send_mail`)
via the `ignore_result` kwarg.
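For example, opting a task out of the results backend might look like:

```python
@celery.task(ignore_result=True)
def send_mail(recipient, subject, body):
    # No row is ever written to the Celery results table for this task.
    ...
```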