
Ghost Inspector Readme

The suite of Ghost Inspector tests that runs as part of the CI workflow can be found here. Its current status badge is available at https://api.ghostinspector.com/v1/suites/5d9e3d303ab5d56633c11598/status-badge

Another suite of Ghost Inspector tests runs once daily against the Staging site. Status: https://api.ghostinspector.com/v1/suites/5d9603f1af31210914da04ca/status-badge
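
For a quick check from the command line, you can pull either badge directly. This is just a sketch using curl against the CI suite's badge URL above; the output filename is arbitrary:

curl -s -o ci-status.svg https://api.ghostinspector.com/v1/suites/5d9e3d303ab5d56633c11598/status-badge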

Testing philosophy

The tests have been created to traverse the most common user flows in AT-AT. A few tests (e.g. "New Portfolio - no optional fields") check for specific regressions; others (e.g. "Remove Portfolio Member") check less-common, "negative path" flows. Tests are added as needed to keep coverage of AT-AT reasonably thorough.

The tests are constructed in a "stepwise" fashion: no individual test depends upon another, and each test checks one step in a complex user flow. For example, there are six tests for creating a new Task Order, one for each screen in the process plus one for adding a second CLIN to a TO. Each test relies on Ghost Inspector's "Import steps from Test X" functionality to perform all the necessary setup before running its own check. This also ensures the tests can run in any order, which matters because Ghost Inspector launches up to four tests simultaneously.

Handling UI changes

As with any UI-testing system, Ghost Inspector tests will fail because of changes to the UI. This can be problematic: failures show up as errors in the CI workflow, which can block the Pull Request from being deployed. To mitigate this issue, we use the following strategies:

  1. Front-end developers notify the project QA lead of changes that will potentially cause Ghost Inspector failures --OR-- the QA lead monitors all failures on the CI suite and determines that such a change has occurred.

  2. If only one step in a test is affected (e.g. a visual but not functional change), that step can be marked as "optional" in the Ghost Inspector UI.

  3. If the (potentially) failing test will need to be reworked to account for functional changes, the test is moved from the CI suite to the "Holding" suite until the PR is merged. Then the test can be edited and returned to the CI suite.

Running Ghost Inspector tests locally

To run the Ghost Inspector tests against a local instance of AT-AT, you will need the following:

  • docker
  • values for the prerequisite environment variables listed here: NGROK_TOKEN, GI_API_KEY, GI_SUITE

First you will need to build the container image:

docker build . --build-arg CSP=azure -f ./Dockerfile -t atat:builder --target builder

This builds the first stage of the multi-stage Docker build, which is the one we need to run integration tests. You can tag the image whatever you want; in the example we've tagged it "atat:builder".
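
To confirm the image was built and tagged as expected, you can list it by name; this assumes you kept the "atat:builder" tag from the build command above:

docker images atat:builder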

Then you can run the integration tests script. You will need four environment variables set: the three mentioned previously and CONTAINER_IMAGE. You can either export them or set them inline in the command you use to run the script. In the example we'll set them inline:

NGROK_TOKEN=<token> GI_API_KEY=<api key> GI_SUITE=<suite> CONTAINER_IMAGE=atat:builder ./script/integration_tests
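
If you'd rather export the variables instead (the other option mentioned above), the equivalent looks like this, using the same placeholder values:

export NGROK_TOKEN=<token>
export GI_API_KEY=<api key>
export GI_SUITE=<suite>
export CONTAINER_IMAGE=atat:builder
./script/integration_tests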

Troubleshooting

  • If you get errors about ports being in use, make sure you don't have local instances of the Flask app, Postgres, or Redis already listening on those ports (see the sketch after this list).
  • If the curl command used to wait for the application container times out and fails, you can increase the timeout by setting a CONTAINER_TIMEOUT environment variable; it defaults to 200 in the script.
  • You may see errors like "No such container". The script attempts to clean up any previous incarnations of the containers before it starts, and it may print errors when it doesn't find them. This is fine.
  • The script is, for the most part, a series of docker commands, so try running the commands individually and debugging that way; a few starting points are sketched below.
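
As a rough sketch of the checks described above: the port numbers below are only the usual Postgres/Redis/Flask defaults and the container name is a placeholder, so adjust both to match your setup.

# See whether anything local is already listening on the usual Postgres, Redis, or Flask ports
lsof -i :5432 -i :6379 -i :5000

# Re-run the script with a longer wait for the application container (the default is 200)
CONTAINER_TIMEOUT=400 NGROK_TOKEN=<token> GI_API_KEY=<api key> GI_SUITE=<suite> CONTAINER_IMAGE=atat:builder ./script/integration_tests

# List leftover containers from a previous run and remove them by name
docker ps -a
docker rm -f <leftover container name>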