Mirror of https://github.com/cisagov/manage.get.gov.git

Some formatting

Commit e100a0edbd (parent 73a82a0b1b): 1 changed file with 63 additions and 43 deletions

On occasion, you will need to run this set of commands to refresh your environment:

- `docker-compose down`
- `docker-compose build`
- `docker-compose up`

## Scenarios

### Scenario 1: Conflicting migrations on local

If you get conflicting migrations on local, you probably have a new migration on your branch and you merged main, which had new migrations as well. Do NOT merge migrations together.

Assuming your local migration is `40_local_migration` and the migration from main is `40_some_migration_from_main`:

- Delete `40_local_migration`
- Run `docker-compose exec app ./manage.py makemigrations`
- Run `docker-compose down`
- Run `docker-compose up`
- Run `docker-compose exec app ./manage.py migrate`

You should end up with `40_some_migration_from_main` and `41_local_migration`.
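
To confirm the migration graph is back to a single chain, you can list the app's migrations; a quick check, assuming the `registrar` app label used throughout this doc:

```bash
# Lists registrar migrations in dependency order; [X] marks applied ones.
# The renumbered local migration should now appear after the one from main.
docker-compose exec app ./manage.py showmigrations registrar
```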

Alternatively, assuming that the conflicting migrations are not dependent on each other, you can manually edit the migration file so that your new migration is incremented by one (in both the file name and the definition inside the file), but this approach is not recommended.

### Scenario 2: Conflicting migrations on sandbox

You will diagnose this when the migrations job on your PR fails and the logs return the following:

> Conflicting migrations detected; multiple leaf nodes in the migration graph: (0040_example, 0041_example in base).
> To fix them run 'python manage.py makemigrations --merge'

This happens when you swap branches on your sandbox that contain diverging leaves (e.g. `0040_example` and `0041_example`). The fix is to go into the sandbox, delete one of these leaves, fake run the preceding migration, hand run the remaining previously conflicting leaf, and fake run the last migration:

- `cf login -a api.fr.cloud.gov --sso`
- `cf ssh getgov-<app>`
- `/tmp/lifecycle/shell`
- Navigate to and delete the offending migration (in this case, `0041_example_migration`)
- `cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar 39_previous_migration --fake' --name migrate`
- `cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar 41_example_migration' --name migrate`
- `cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar 45_last_migration --fake' --name migrate`
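
To confirm the conflict is resolved, you can list the migration graph from the sandbox; a sketch using the same `cf run-task` pattern as above:

```bash
# Lists the registrar app's migration graph; the task logs should show a
# single linear chain with no second leaf.
cf run-task getgov-<app> --wait --command 'python manage.py showmigrations registrar' --name showmigrations
```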

### Scenario 3: Migrations ran incorrectly, and migrate no longer works (sandbox)

This has happened when updating user perms (so running a new data migration). Something is off with the update on the sandbox and you need to run that last data migration again:

- `cf login -a api.fr.cloud.gov --sso`
- `cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar 39_penultimate_migration --fake' --name migrate`
- `cf run-task getgov-<app> --wait --command 'python manage.py migrate' --name migrate`

The `--fake` run marks the database as rolled back to `39_penultimate_migration` without actually reversing anything, so the plain `migrate` that follows re-runs the final data migration.

### Scenario 4: All migrations refuse to load due to existing duplicates on sandboxes

This typically happens with a DB conflict that prevents `0001_initial` from loading. For instance, let's say all migrations have run successfully before, and a zero command is run to reset everything. This can lead to a catastrophic issue with your postgres database.

To diagnose and fix this issue, you will have to manually delete tables using the psql shell environment. If you are in a production environment and cannot lose that data, then you will need some method of backing that up and reattaching it to the table.

1. `cf login -a api.fr.cloud.gov --sso`
2. Run `cf connect-to-service -no-client getgov-{environment_name} getgov-{environment_name}-database` to open an SSH tunnel
3. Run `psql -h localhost -p {port} -U {username} -d {broker_name}`
4. Open a new terminal window and run `cf ssh getgov-{environment_name}`
5. Within that window, run `/tmp/lifecycle/shell`
6. Within that window, run `./manage.py migrate` and observe which tables are duplicates

Afterwards, go back to your psql instance. Run the following for each problematic table:

7. `DROP TABLE {table_name} CASCADE`

**WARNING:** this will permanently erase data! Be careful when doing this and exercise common sense.

Then, run `./manage.py migrate` again and repeat step 7 for each table which returns this error, as sketched below.
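
As a concrete sketch of that drop-and-retry loop, assuming `registrar_examplemodel` is one of the duplicate tables reported (a hypothetical name):

```bash
# In the cf ssh window (steps 4-6): migrate fails, naming a relation that already exists.
./manage.py migrate

# In the psql window (steps 2-3): drop that table, data and all.
#   DROP TABLE registrar_examplemodel CASCADE;

# Back in the cf ssh window: retry, and repeat for the next duplicate it reports.
./manage.py migrate
```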

After these errors are resolved, follow the instructions in the other scenarios if applicable.

### Scenario 5: Permissions group exists, but my users cannot log onto the sandbox

This is most likely due to fixtures not running, or fixtures running before the data-creating migration. Simply run fixtures again. (WARNING: this applies to dev sandboxes only. We never want to rerun fixtures on a stable environment.)

- `cf login -a api.fr.cloud.gov --sso`
- `cf run-task getgov-<app> --command "./manage.py load" --name fixtures`
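
If you want to confirm whether the permission groups were created at all before rerunning fixtures, one quick check is to count Django's built-in auth groups; a sketch, with an arbitrary task name:

```bash
# Prints the number of auth Groups; 0 suggests the group-creating step never ran.
cf run-task getgov-<app> --wait --command 'python manage.py shell -c "from django.contrib.auth.models import Group; print(Group.objects.count())"' --name check-groups
```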

### Scenario 6: Data is corrupted on the sandbox

Example: there are extra columns created on a table by an old migration long since gone from the code. In that case, you may have to tunnel into your DB on the sandbox and hand-delete these columns. See scenario #4 if you are running into duplicate table definitions. Also see [this documentation](docs/developer/database-access.md) for a good reference here:

- `cf login -a api.fr.cloud.gov --sso`
- Open a new terminal window and run `cf ssh getgov-{environment_name}`
- Run `/tmp/lifecycle/shell`
- Run `./manage.py migrate` and observe which tables have invalid column definitions
- In your psql session (opened as in scenario #4), run the `\l` command to see all of the databases that are present
- `\c cgawsbrokerprodlgi635s6c0afp8w` (assume cgawsbrokerprodlgi635s6c0afp8w is your DB)
- Run `\dt` to see the tables
- `SELECT * FROM {bad_table};`
- `alter table registrar_domain drop {bad_column};`
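
If you prefer to run the fix non-interactively through the SSH tunnel from scenario #4, a sketch (the connection details, table, and column are placeholders):

```bash
# Drops a leftover column through the tunnel opened with cf connect-to-service.
psql -h localhost -p {port} -U {username} -d cgawsbrokerprodlgi635s6c0afp8w \
  -c 'ALTER TABLE {bad_table} DROP COLUMN {bad_column};'
```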

### Scenario 7: Continual 500 error for the registrar + your requests (login, clicking around, etc) are not showing up in the logstream

To resolve this issue, remove the app named `cisa-dotgov` from this space.

Test out the sandbox from there and it should be working!

**Debug connectivity**

`dig getgov-rh.app.cloud.gov` (domain information groper, gets DNS nameserver information)

`curl -v https://getgov-<app>.app.cloud.gov/ --resolve 'getgov-<app>.app.cloud.gov:443:<your-ip-address-from-dig-command-above-here>'` (this gets you access to ping to it)
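
For example, a debug session might look like this (the app name and IP address are placeholders):

```bash
# Find the address the hostname currently resolves to.
dig +short getgov-<app>.app.cloud.gov

# Hit the app directly at that address, bypassing any cached DNS.
curl -v "https://getgov-<app>.app.cloud.gov/" --resolve "getgov-<app>.app.cloud.gov:443:<ip-from-dig>"
```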

https://cisa-corp.slack.com/archives/C05BGB4L5NF/p1697810600723069

### Scenario 8: Can’t log into sandbox, permissions do not exist

- Fake migrate the migration that’s before the last data creation migration
- Run the last data creation migration (AND ONLY THAT ONE)
- Fake migrate the last migration in the migration list
- Rerun fixtures
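
As a concrete sketch of those steps using the `cf run-task` pattern from the earlier scenarios (the migration names are placeholders for your own migration list):

```bash
# 1. Fake migrate up to the migration just before the last data creation migration.
cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar <migration_before_data_migration> --fake' --name migrate

# 2. Run the last data creation migration (and only that one).
cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar <data_creation_migration>' --name migrate

# 3. Fake migrate the last migration in the list so the recorded history is consistent.
cf run-task getgov-<app> --wait --command 'python manage.py migrate registrar <last_migration> --fake' --name migrate

# 4. Rerun fixtures (dev sandboxes only).
cf run-task getgov-<app> --command "./manage.py load" --name fixtures
```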