Mirror of https://github.com/cisagov/manage.get.gov.git (synced 2025-08-05 01:11:55 +02:00)

Merge branch 'main' into dk/1208-dnssec-addtl-items
Commit 5145e3782e
22 changed files with 269 additions and 160 deletions
.github/workflows/deploy-development.yaml (vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
# This workflow runs on pushes to main
# any merge/push to main will result in development being deployed

name: Build and deploy development for release

on:
  push:
    paths-ignore:
      - 'docs/**'
      - '**.md'
      - '.gitignore'

    branches:
      - main

jobs:
  deploy-development:
    if: ${{ github.ref_type == 'tag' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Compile USWDS assets
        working-directory: ./src
        run: |
          docker compose run node npm install &&
          docker compose run node npx gulp copyAssets &&
          docker compose run node npx gulp compile

      - name: Collect static assets
        working-directory: ./src
        run: docker compose run app python manage.py collectstatic --no-input

      - name: Deploy to cloud.gov sandbox
        uses: 18f/cg-deploy-action@main
        env:
          DEPLOY_NOW: thanks
        with:
          cf_username: ${{ secrets.CF_DEVELOPMENT_USERNAME }}
          cf_password: ${{ secrets.CF_DEVELOPMENT_PASSWORD }}
          cf_org: cisa-dotgov
          cf_space: development
          push_arguments: "-f ops/manifests/manifest-development.yaml"
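The asset steps this workflow runs can also be exercised locally before pushing; a minimal sketch, assuming Docker Compose is available and the commands mirror the workflow above:

```bash
# From the repository root: build USWDS assets and collect static files
cd src
docker compose run node npm install
docker compose run node npx gulp copyAssets
docker compose run node npx gulp compile
docker compose run app python manage.py collectstatic --no-input
```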
.github/workflows/deploy-sandbox.yaml (vendored, 1 line changed)
@@ -20,6 +20,7 @@ jobs:
          || startsWith(github.head_ref, 'nl/')
          || startsWith(github.head_ref, 'dk/')
          || startsWith(github.head_ref, 'es/')
          || startsWith(github.head_ref, 'ky/')
    outputs:
      environment: ${{ steps.var.outputs.environment}}
    runs-on: "ubuntu-latest"
.github/workflows/migrate.yaml (vendored, 2 lines changed)
@@ -15,6 +15,8 @@ on:
        options:
          - stable
          - staging
          - development
          - ky
          - es
          - nl
          - rh
.github/workflows/reset-db.yaml (vendored, 3 lines changed)
@@ -14,8 +14,9 @@ on:
        type: choice
        description: Which environment should we flush and re-load data for?
        options:
          - stable
          - staging
          - development
          - ky
          - es
          - nl
          - rh
@@ -54,9 +54,10 @@ If a bug fix or feature needs to be made to stable out of the normal cycle, this

In the case where a bug fix or feature needs to be added outside of the normal cycle, the code-fix branch and release will be handled differently than normal:

1. Code will need to be branched NOT off of main, but off of the same commit as the most recent stable commit. This should be the one tagged with the most recent vX.XX.XX value.
-2. After making the bug fix, the approved PR will branch will be tagged with a new release tag, incrementing the patch value from the last commit number.
-3. This branch then needs to be merged to main per the usual process.
-4. This same branch should be merged into staging.
+2. After making the bug fix, the approved PR branch will not be merged yet; instead it will be tagged with a new release tag, incrementing the patch value from the last commit number.
+3. If main and stable are on the same commit, merge this branch into staging using the staging release tag (staging-<the hotfix release number>).
+4. If staging is already ahead of stable, you may need to create another branch based off of the current staging commit, merge in your code change, and then tag that branch with the staging release.
+5. Wait to merge your original branch until both deploys finish. Once they succeed, merge to main per the usual process.

## Serving static assets

We are using the [WhiteNoise](http://whitenoise.evans.io/en/stable/index.html) plugin to serve our static assets on cloud.gov. This plugin is added to the `MIDDLEWARE` list in our app's `settings.py`.
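A minimal sketch of that hotfix flow in git terms, assuming a hypothetical current stable tag `v1.2.3`, a next patch tag `v1.2.4`, and an illustrative branch name (none of these are prescribed by the docs):

```bash
# Branch off the commit that the latest stable tag points at, not main
git fetch --tags
git checkout -b hotfix/example-fix v1.2.3   # hypothetical tag and branch name

# ...commit the fix, open a PR, get approval, but do not merge yet...

# Tag the approved PR branch with the next patch release
git tag v1.2.4
git push origin v1.2.4

# If main and stable are on the same commit, also tag it for staging
git tag staging-v1.2.4
git push origin staging-v1.2.4

# After both deploys finish successfully, merge the branch to main per the usual process
```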
@@ -1,9 +1,9 @@
# Registrar Data Migration

-The original system has an existing registrar/registry that we will import.
-The company of that system will provide us with an export of the data.
+The original system uses an existing registrar/registry that we will import.
+The company of that system will provide us with an export of the existing data.
The goal of our data migration is to take the provided data and use
-it to create as much as possible a _matching_ state
+it to create, as close as possible, a _matching_ state
in our registrar.

There is no way to make our registrar _identical_ to the original system
@@ -11,17 +11,17 @@ because we have a different data model and workflow model. Instead, we should
focus our migration efforts on creating a state in our new registrar that will
primarily allow users of the system to perform the tasks that they want to do.

-## Users
+#### Users

One of the major differences with the existing registrar/registry is that our
system uses Login.gov for authentication. Any person with an identity-verified
-Login.gov account can make an account on the new registrar, and the first time
-that person logs in through Login.gov, we make a corresponding account in our
-user table. Because we cannot know the Universal Unique ID (UUID) for a
-person's Login.gov account, we cannot pre-create user accounts for individuals
-in our new registrar based on the original data.
+Login.gov account can make an account on the new registrar. The first time
+a person logs into the registrar through Login.gov, we make a corresponding
+account in our user table. Because we cannot know the Universal Unique ID (UUID)
+for a person's Login.gov account, we cannot pre-create user accounts for
+individuals in our new registrar based on the original data.

-## Domains
+#### Domains

Our registrar keeps track of domains. The authoritative source for domain
information is the registry, but the registrar needs a copy of that
@@ -29,7 +29,7 @@ information to make connections between registry users and the domains that
they manage. The registrar stores very few fields about a domain except for
its name, so it could be straightforward to import the exported list of domains
from `escrow_domains.daily.dotgov.GOV.txt`. It doesn't appear that
-that table stores a flag for active or inactive.
+that table stores a flag for whether a domain is active or inactive.

An example Django management command that can load the delimited text file
from the daily escrow is in
@@ -42,13 +42,13 @@ locally for testing, using Docker Compose:
docker compose run -T app ./manage.py load_domains_data < /tmp/escrow_domains.daily.dotgov.GOV.txt
```

-## User access to domains
+#### User access to domains

The data export contains a `escrow_domain_contacts.daily.dotgov.txt` file
that links each domain to three different types of contacts: `billing`,
`tech`, and `admin`. The ID of the contact in this linking table corresponds
-to the ID of a contact in the `escrow_contacts.daily.dotgov.txt` file. In the
-contacts file is an email address for each contact.
+to the ID of a contact in the `escrow_contacts.daily.dotgov.txt` file. The
+contacts file contains an email address for each contact.

The new registrar associates user accounts (authenticated with Login.gov) with
domains using a `UserDomainRole` linking table. New users can be granted roles
@@ -78,23 +78,24 @@ An example script using this technique is in
docker compose run app ./manage.py load_domain_invitations /app/escrow_domain_contacts.daily.dotgov.GOV.txt /app/escrow_contacts.daily.dotgov.GOV.txt
```

-## Transition Domains (Part 1) - Setup Files for Import
+## Set Up Files for Importing Domains

-#### STEP 1: obtain data files
+### Step 1: Obtain migration data files
We are provided with information about Transition Domains in the following files:
-- FILE 1: **escrow_domain_contacts.daily.gov.GOV.txt** -> has the map of domain names to contact ID. Domains in this file will usually have 3 contacts each
-- FILE 2: **escrow_contacts.daily.gov.GOV.txt** -> has the mapping of contact id to contact email address (which is what we care about for sending domain invitations)
-- FILE 3: **escrow_domain_statuses.daily.gov.GOV.txt** -> has the map of domains and their statuses
-- FILE 4: **escrow_domains.daily.dotgov.GOV.txt** -> has a map of domainname, expiration and creation dates
-- FILE 5: **domainadditionaldatalink.adhoc.dotgov.txt** -> has the map of domains to other data like authority, organization, & domain type
-- FILE 6: **domaintypes.adhoc.dotgov.txt** -> has data on federal type and organization type
-- FILE 7: **organization.adhoc.dotgov.txt** -> has organization name data
-- FILE 8: **authority.adhoc.dotgov.txt** -> has authority data which maps to an agency
-- FILE 9: **agency.adhoc.dotgov.txt** -> has federal agency data
-- FILE 10: **migrationFilepaths.json** -> A JSON which points towards all given filenames. Specified below.
+|  | Filename | Description |
+|:-| :-------------------------------------------- | :---------- |
+|1 | **escrow_domain_contacts.daily.gov.GOV.txt** | Has the map of domain names to contact ID. Domains in this file will usually have 3 contacts each |
+|2 | **escrow_contacts.daily.gov.GOV.txt** | Has the mapping of contact ID to contact email address (which is what we care about for sending domain invitations) |
+|3 | **escrow_domain_statuses.daily.gov.GOV.txt** | Has the map of domains and their statuses |
+|4 | **escrow_domains.daily.dotgov.GOV.txt** | Has a map of domain name, expiration, and creation dates |
+|5 | **domainadditionaldatalink.adhoc.dotgov.txt** | Has the map of domains to other data like authority, organization, & domain type |
+|6 | **domaintypes.adhoc.dotgov.txt** | Has data on federal type and organization type |
+|7 | **organization.adhoc.dotgov.txt** | Has organization name data |
+|8 | **authority.adhoc.dotgov.txt** | Has authority data which maps to an agency |
+|9 | **agency.adhoc.dotgov.txt** | Has federal agency data |
+|10| **migrationFilepaths.json** | A JSON which points to all given filenames. Specified below. |

-#### STEP 2: obtain JSON file (for file locations)
+### Step 2: Obtain JSON file for migration file locations
Add a JSON file called "migrationFilepaths.json" with the following contents (update filenames and directory as needed):
```
{
@@ -119,21 +120,22 @@ Later on, we will bundle this file along with the others into its own folder. Ke
We need to run a few scripts to parse these files into our domain tables.
We can do this both locally and in a sandbox.

-#### STEP 3: Bundle all relevant data files into an archive
+### Step 3: Bundle all relevant data files into an archive
+Move all the files specified in Step 1 into a shared folder, and create a tar.gz.

-Create a folder on your desktop called `datafiles` and move all of the obtained files into that. Add these files to a tar.gz archive using any method. See (here)[https://stackoverflow.com/questions/53283240/how-to-create-tar-file-with-7zip].
+Create a folder on your desktop called `datafiles` and move all of the obtained files into that. Add these files to a tar.gz archive using any method. See [here](https://stackoverflow.com/questions/53283240/how-to-create-tar-file-with-7zip).

After this is created, move this archive into `src/migrationdata`.


-### SECTION 1 - SANDBOX MIGRATION SETUP
+### Set Up Migrations on Sandbox
Load migration data onto a production or sandbox environment

**WARNING:** All files uploaded in this manner are temporary, i.e. they will be deleted when the app is restaged.
Do not use these environments to store data you want to keep around permanently. We don't want sensitive data to be accidentally present in our application environments.

-#### STEP 1: Using cat to transfer data to sandboxes
+### Step 1: Transfer data to sandboxes
Use the following cat command to upload your data to a sandbox environment of your choice:

```bash
cat {LOCAL_PATH_TO_FILE} | cf ssh {APP_NAME_IN_ENVIRONMENT} -c "cat > /home/vcap/tmp/{DESIRED_NAME_OF_FILE}"
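# Illustrative sketch only: one way to bundle the Step 1 files and upload the archive.
# The archive name, desktop path, and getgov-<sandbox> app name are placeholders, not prescribed by the docs.
tar -czvf exportdata.tar.gz -C ~/Desktop/datafiles .
mv exportdata.tar.gz src/migrationdata/
cat src/migrationdata/exportdata.tar.gz | cf ssh getgov-<sandbox> -c "cat > /home/vcap/tmp/exportdata.tar.gz"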
@@ -143,17 +145,22 @@ cat {LOCAL_PATH_TO_FILE} | cf ssh {APP_NAME_IN_ENVIRONMENT} -c "cat > /home/vcap
* LOCAL_PATH_TO_FILE - Path to the file you want to copy, ex: src/tmp/escrow_contacts.daily.gov.GOV.txt
* DESIRED_NAME_OF_FILE - Use this to specify the filename and type, ex: test.txt or escrow_contacts.daily.gov.GOV.txt

-**TROUBLESHOOTING:** Depending on your operating system (Windows for instance), this command may upload corrupt data. If you encounter the error `gzip: prfiles.tar.gz: not in gzip format` when trying to unzip a .tar.gz file, use the scp command instead.
-
-#### STEP 1 (Alternative): Using scp to transfer data to sandboxes
-**IMPORTANT:** Only follow these steps if cat does not work as expected. If it does, skip to step 2.
+#### TROUBLESHOOTING STEP 1 ISSUES
+Depending on your operating system (Windows for instance), this command may upload corrupt data. If you encounter the error `gzip: prfiles.tar.gz: not in gzip format` when trying to unzip a .tar.gz file, use the scp command instead.
+
+**IMPORTANT:** Only follow the below troubleshooting steps if cat does not work as expected. If it does, skip to step 2.
+<details>
+<summary>Troubleshooting cat instructions
+</summary>
+
+#### Use scp to transfer data to sandboxes.
CloudFoundry supports scp as a means of transferring data locally to our environment. If you are dealing with a batch of files, try sending across a tar.gz and unpacking that.


##### Login to Cloud.gov

```bash
cf login -a api.fr.cloud.gov --sso

```

##### Target your workspace
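As an illustration of targeting a workspace, the org and space names below are taken from the deploy workflow and manifests in this commit (`cisa-dotgov`, spaces such as `development`); substitute your own sandbox space:

```bash
# Point the cf CLI at the org and the space you want to upload to
cf target -o cisa-dotgov -s development
```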
@@ -186,8 +193,10 @@ cf ssh-code
Copy this code into the password prompt from earlier.

NOTE: You can use different utilities to copy this onto the clipboard for you. If you are on Windows, try the command `cf ssh-code | clip`. On Mac, this will be `cf ssh-code | pbcopy`.
+</details>

-#### STEP 2: Transfer uploaded files to the getgov directory
+### Step 2: Transfer uploaded files to the getgov directory
Due to the nature of how Cloud.gov operates, the getgov directory is dynamically generated whenever the app is built under the tmp/ folder. We can directly upload files to the tmp/ folder but cannot target the generated getgov folder directly, as we need to spin up a shell to access this. From here, we can move those uploaded files into the getgov directory using the `cat` command. Note that you will have to repeat this for each file you want to move, so it is better to use a tar.gz for multiple files and unpack it inside of the `migrationdata` folder.

##### SSH into your sandbox
@@ -204,12 +213,20 @@ cf ssh {APP_NAME_IN_ENVIRONMENT}

##### From this directory, run the following command:
```shell
-./manage.py cat_files_into_getgov --file_extension txt
+./manage.py cat_files_into_getgov --file_extension {FILE_EXTENSION_TYPE}
```

-NOTE: This will look for all files in /tmp with the .txt extension, but this can
-be changed if you are dealing with different extensions. For instance, a .tar.gz could be expressed
-as `--file_extension tar.gz`.
+This will look for all files in /tmp that are the same file type as `FILE_EXTENSION_TYPE`.
+
+**Example 1: Transferring txt files**
+
+`./manage.py cat_files_into_getgov --file_extension txt` will search for
+all files with the .txt extension.
+
+**Example 2: Transferring tar.gz files**
+
+`./manage.py cat_files_into_getgov --file_extension tar.gz` will search
+for .tar.gz files.

If you are using a tar.gz file, you will need to perform one additional step to extract it.
Run the following command from the same directory:
@@ -220,7 +237,7 @@ tar -xvf migrationdata/{FILE_NAME}.tar.gz -C migrationdata/ --strip-components=1
*FILE_NAME* - Name of the desired file, ex: exportdata


-#### Manual method
+#### Manually transferring your files
If the `cat_files_into_getgov.py` script isn't working, follow these steps instead.

##### Move the desired file into the correct directory
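The per-file `cat` command shown in the next hunk has to be repeated for every file; a minimal loop sketch for doing that from the app directory (the `.txt` glob is an assumption — adjust it to your file types):

```bash
# Copy every uploaded .txt file from /home/vcap/tmp into migrationdata/
for f in ../tmp/*.txt; do
  cat "$f" > "migrationdata/$(basename "$f")"
done
```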
@@ -230,9 +247,9 @@ cat ../tmp/{filename} > migrationdata/{filename}
```


-*You are now ready to run migration scripts (see [Running the Migration Scripts](running-the-migration-scripts))*
+*You are now ready to run migration scripts (see [Running the Domain Migration Scripts](running-the-domain-migration-scripts))*

-### SECTION 2 - LOCAL MIGRATION SETUP (TESTING PURPOSES ONLY)
+### Set Up Local Migrations (TESTING PURPOSES ONLY)

***IMPORTANT: only use test data, to avoid publicizing PII in our public repo.***
@@ -245,14 +262,14 @@ This will allow Docker to mount the files to a container (under `/app`) for our

*You are now ready to run migration scripts.*

-## Transition Domains (Part 2) - Running the Migration Scripts
-While keeping the same ssh instance open (if you are running on a sandbox), run through the following commands.If you cannot run `manage.py` commands, try running `/tmp/lifecycle/shell` in the ssh instance.
+## Running the Domain Migration Scripts
+While keeping the same ssh instance open (if you are running on a sandbox), run through the following commands. If you cannot run `manage.py` commands, try running `/tmp/lifecycle/shell` in the ssh instance.

-### STEP 1: Load Transition Domains
+### Step 1: Upload Transition Domains

-Run the following command, making sure the file paths point to the right location. This will parse all given files and load the information into the TransitionDomain table. Make sure you have your migrationFilepaths.json file in the same directory.
+Run the following command, making sure the file paths point to the right location of your migration files. This will parse all given files and
+load the information into the TransitionDomain table. Make sure you have your migrationFilepaths.json file in the same directory.

-```
+##### LOCAL COMMAND
+```shell
docker-compose exec app ./manage.py load_transition_domain migrationFilepaths.json --directory /app/tmp/ --debug --limitParse 10
@@ -268,7 +285,8 @@ docker-compose exec app ./manage.py load_transition_domain migrationFilepaths.js
This will print out additional, detailed logs.

`--limitParse 100`
Directs the script to load only the first 100 entries into the table. You can adjust this number as needed for testing purposes.
+**Note:** `--limitParse` is currently experiencing issues and may not work as intended.

`--resetTable`
This will delete all the data in transition_domain. It is helpful if you want to see the entries reload from scratch or for clearing test data.
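Putting those flags together, a local run that wipes the TransitionDomain data and reloads a small, verbose sample might look like this (the `--limitParse 10` value is just an example, and the flag is noted above as currently unreliable):

```bash
# Reset the transition_domain table, then reload with detailed logs,
# stopping after the first 10 parsed entries
docker-compose exec app ./manage.py load_transition_domain migrationFilepaths.json --directory /app/tmp/ --debug --resetTable --limitParse 10
```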
@@ -308,7 +326,7 @@ Defines the filename for domain type adhocs.
`--infer_filenames`
Determines if we should infer filenames or not. This setting is not available for use in environments with the flag `settings.DEBUG` set to false, as it is intended for local development only.

-### STEP 2: Transfer Transition Domain data into main Domain tables
+### Step 2: Transfer Transition Domain data into main Domain tables

Now that we've loaded all the data into TransitionDomain, we need to update the main Domain and DomainInvitation tables with this information.
In the same terminal as used in STEP 1, run the command below:
@@ -329,9 +347,10 @@ docker compose run -T app ./manage.py transfer_transition_domains_to_domains --d
This will print out additional, detailed logs.

`--limitParse 100`
Directs the script to load only the first 100 entries into the table. You can adjust this number as needed for testing purposes.
+**Note:** `--limitParse` is currently experiencing issues and may not work as intended.

-### STEP 3: Send Domain invitations
+### Step 3: Send Domain invitations

To send invitation emails for every transition domain in the transition domain table, execute the following command:
@@ -344,11 +363,11 @@ docker compose run -T app ./manage.py send_domain_invitations -s
./manage.py send_domain_invitations -s
```

-### STEP 4: Test the results (Run the analyzer script)
+### Step 4: Test the results (Run the analyzer script)

This script's main function is to scan the transition domain and domain tables for any anomalies. It produces a simple report of missing or duplicate data. NOTE: some missing data might be expected depending on the nature of our migrations, so use best judgement when evaluating the results.

-#### OPTION 1 - ANALYZE ONLY
+#### OPTION 1 - Analyze Only

To analyze our database without running migrations, execute the script without any optional arguments:

@@ -361,7 +380,7 @@ docker compose run -T app ./manage.py master_domain_migrations --debug
./manage.py master_domain_migrations --debug
```

-#### OPTION 2 - RUN MIGRATIONS FEATURE
+#### OPTION 2 - Run Migrations Feature

To run the migrations again (all above migration steps) before analyzing, execute the following command (read the documentation on the terminal arguments below; everything used by the migration scripts can also be passed into this script and will have the same effects). NOTE: --debug provides detailed logging statements during the migration. It is recommended that you use this argument when using the --runMigrations feature:

@@ -415,7 +434,8 @@ Disables the terminal prompts that allows the user to step through each portion
Used by the migration scripts (load_transition_domain) to set the limit for the
number of data entries to insert. Set to 0 (or just don't use this
argument) to parse every entry. This was provided primarily for testing
-purposes
+purposes.
+**Note:** `--limitParse` is currently experiencing issues and may not work as intended.

`--resetTable`
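A sketch of what Option 2 looks like in practice, combining the flags this section describes (`--runMigrations` plus the recommended `--debug`); the exact argument list your environment needs may differ:

```bash
# Re-run all migration steps, then produce the analyzer report, with verbose logging
docker compose run -T app ./manage.py master_domain_migrations --runMigrations --debug
```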
ops/manifests/manifest-development.yaml (new file, 32 lines)
@@ -0,0 +1,32 @@
---
applications:
- name: getgov-development
  buildpacks:
    - python_buildpack
  path: ../../src
  instances: 1
  memory: 512M
  stack: cflinuxfs4
  timeout: 180
  command: ./run.sh
  health-check-type: http
  health-check-http-endpoint: /health
  health-check-invocation-timeout: 40
  env:
    # Send stdout and stderr straight to the terminal without buffering
    PYTHONUNBUFFERED: yup
    # Tell Django where to find its configuration
    DJANGO_SETTINGS_MODULE: registrar.config.settings
    # Tell Django where it is being hosted
    DJANGO_BASE_URL: https://getgov-development.app.cloud.gov
    # Tell Django how much stuff to log
    DJANGO_LOG_LEVEL: INFO
    # default public site location
    GETGOV_PUBLIC_SITE_URL: https://beta.get.gov
    # Flag to disable/enable features in prod environments
    IS_PRODUCTION: False
  routes:
    - route: getgov-development.app.cloud.gov
  services:
    - getgov-credentials
    - getgov-development-database
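The deploy workflow added in this commit pushes with this manifest via its `push_arguments`; the equivalent manual deploy, after logging in and targeting the right org and space, would be roughly:

```bash
# Deploy the development app using the new manifest
cf push -f ops/manifests/manifest-development.yaml
```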
ops/manifests/manifest-ky.yaml (new file, 32 lines)
@@ -0,0 +1,32 @@
---
applications:
- name: getgov-ky
  buildpacks:
    - python_buildpack
  path: ../../src
  instances: 1
  memory: 512M
  stack: cflinuxfs4
  timeout: 180
  command: ./run.sh
  health-check-type: http
  health-check-http-endpoint: /health
  health-check-invocation-timeout: 40
  env:
    # Send stdout and stderr straight to the terminal without buffering
    PYTHONUNBUFFERED: yup
    # Tell Django where to find its configuration
    DJANGO_SETTINGS_MODULE: registrar.config.settings
    # Tell Django where it is being hosted
    DJANGO_BASE_URL: https://getgov-ky.app.cloud.gov
    # Tell Django how much stuff to log
    DJANGO_LOG_LEVEL: INFO
    # default public site location
    GETGOV_PUBLIC_SITE_URL: https://beta.get.gov
    # Flag to disable/enable features in prod environments
    IS_PRODUCTION: False
  routes:
    - route: getgov-ky.app.cloud.gov
  services:
    - getgov-credentials
    - getgov-ky-database
@@ -43,7 +43,7 @@ cp ops/scripts/manifest-sandbox-template.yaml ops/manifests/manifest-$1.yaml
sed -i '' "s/ENVIRONMENT/$1/" "ops/manifests/manifest-$1.yaml"

echo "Adding new environment to settings.py..."
-sed -i '' '/getgov-staging.app.cloud.gov/ {a\
+sed -i '' '/getgov-development.app.cloud.gov/ {a\
'\"getgov-$1.app.cloud.gov\"',
}' src/registrar/config/settings.py


@@ -105,11 +105,11 @@ echo
echo "Moving on to setup Github automation..."

echo "Adding new environment to Github Actions..."
-sed -i '' '/ - staging/ {a\
+sed -i '' '/ - development/ {a\
- '"$1"'
}' .github/workflows/reset-db.yaml

-sed -i '' '/ - staging/ {a\
+sed -i '' '/ - development/ {a\
- '"$1"'
}' .github/workflows/migrate.yaml
@@ -106,6 +106,7 @@ class EPPLibWrapper:
                # Flag that the pool is frozen,
                # then restart the pool.
                self.pool_status.pool_hanging = True
+               logger.error("Pool timed out")
                self.start_connection_pool()
        except (ValueError, ParsingError) as err:
            message = f"{cmd_type} failed to execute due to some syntax error."

@@ -174,6 +175,7 @@ class EPPLibWrapper:

    def _create_pool(self, client, login, options):
        """Creates and returns new pool instance"""
+       logger.info("New pool was created")
        return EPPConnectionPool(client, login, options)

    def start_connection_pool(self, restart_pool_if_exists=True):

@@ -187,7 +189,7 @@ class EPPLibWrapper:
        # Since we reuse the same creds for each pool, we can test on
        # one socket, and if successful, then we know we can connect.
        if not self._test_registry_connection_success():
-           logger.warning("Cannot contact the Registry")
+           logger.warning("start_connection_pool() -> Cannot contact the Registry")
            self.pool_status.connection_success = False
        else:
            self.pool_status.connection_success = True

@@ -197,6 +199,7 @@ class EPPLibWrapper:
        if self._pool is not None and restart_pool_if_exists:
            logger.info("Connection pool restarting...")
            self.kill_pool()
+           logger.info("Old pool killed")

        self._pool = self._create_pool(self._client, self._login, self.pool_options)

@@ -221,6 +224,7 @@ class EPPLibWrapper:
        credentials are valid, and/or if the Registrar
        can be contacted
        """
+       # This is closed in test_connection_success
        socket = Socket(self._client, self._login)
        can_login = False

@@ -31,6 +31,7 @@ class Socket:

    def connect(self):
        """Use epplib to connect."""
+       logger.info("Opening socket on connection pool")
        self.client.connect()
        response = self.client.send(self.login)
        if self.is_login_error(response.code):

@@ -40,11 +41,13 @@ class Socket:

    def disconnect(self):
        """Close the connection."""
+       logger.info("Closing socket on connection pool")
        try:
            self.client.send(commands.Logout())
            self.client.close()
-       except Exception:
+       except Exception as err:
            logger.warning("Connection to registry was not cleanly closed.")
+           logger.error(err)

    def send(self, command):
        """Sends a command to the registry.

@@ -77,19 +80,17 @@ class Socket:
        try:
            self.client.connect()
            response = self.client.send(self.login)
-       except LoginError as err:
-           if err.should_retry() and counter < 10:
+       except (LoginError, OSError) as err:
+           logger.error(err)
+           should_retry = True
+           if isinstance(err, LoginError):
+               should_retry = err.should_retry()
+           if should_retry and counter < 3:
                counter += 1
                sleep((counter * 50) / 1000)  # sleep 50 ms to 150 ms
            else:  # don't try again
                return False
-       # Occurs when an invalid creds are passed in - such as on localhost
-       except OSError as err:
-           logger.error(err)
-           return False
        else:
-           self.disconnect()
            # If we encounter a login error, fail
            if self.is_login_error(response.code):
                logger.warning("A login error was found in test_connection_success")

@@ -97,3 +98,5 @@ class Socket:

            # Otherwise, just return true
            return True
+       finally:
+           self.disconnect()
@@ -125,10 +125,14 @@ class TestConnectionPool(TestCase):
            xml = (location).read_bytes()
            return xml

+       def do_nothing(command):
+           pass
+
        # Mock what happens inside the "with"
        with ExitStack() as stack:
            stack.enter_context(patch.object(EPPConnectionPool, "_create_socket", self.fake_socket))
            stack.enter_context(patch.object(Socket, "connect", self.fake_client))
+           stack.enter_context(patch.object(EPPConnectionPool, "kill_all_connections", do_nothing))
            stack.enter_context(patch.object(SocketTransport, "send", self.fake_send))
            stack.enter_context(patch.object(SocketTransport, "receive", fake_receive))
            # Restart the connection pool
@@ -98,13 +98,17 @@ class EPPConnectionPool(ConnectionPool):
        """Kills all active connections in the pool."""
        try:
+           if len(self.conn) > 0 or len(self.greenlets) > 0:
+               logger.info("Attempting to kill connections")
                gevent.killall(self.greenlets)

                self.greenlets.clear()
                for connection in self.conn:
                    connection.disconnect()
                self.conn.clear()

                # Clear the semaphore
                self.lock = BoundedSemaphore(self.size)
+               logger.info("Finished killing connections")
+           else:
+               logger.info("No connections to kill.")
        except Exception as err:
@@ -136,6 +136,8 @@ MIDDLEWARE = [
    "allow_cidr.middleware.AllowCIDRMiddleware",
    # django-cors-headers: listen to cors responses
    "corsheaders.middleware.CorsMiddleware",
+   # custom middleware to stop caching from CloudFront
+   "registrar.no_cache_middleware.NoCacheMiddleware",
    # serve static assets in production
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # provide security enhancements to the request/response cycle

@@ -617,6 +619,8 @@ SECURE_SSL_REDIRECT = True
ALLOWED_HOSTS = [
    "getgov-stable.app.cloud.gov",
    "getgov-staging.app.cloud.gov",
+   "getgov-development.app.cloud.gov",
+   "getgov-ky.app.cloud.gov",
    "getgov-es.app.cloud.gov",
    "getgov-nl.app.cloud.gov",
    "getgov-rh.app.cloud.gov",
@@ -5,7 +5,6 @@ from django.db import models

from .domain_invitation import DomainInvitation
from .transition_domain import TransitionDomain
from .domain_information import DomainInformation
from .domain import Domain

from phonenumber_field.modelfields import PhoneNumberField  # type: ignore

@@ -97,51 +96,6 @@ class User(AbstractUser):
            new_domain_invitation = DomainInvitation(email=transition_domain_email.lower(), domain=new_domain)
            new_domain_invitation.save()

-   def check_transition_domains_on_login(self):
-       """When a user first arrives on the site, we need to check
-       if they are logging in with the same e-mail as a
-       transition domain and update our database accordingly."""
-
-       for transition_domain in TransitionDomain.objects.filter(username=self.email):
-           # Looks like the user logged in with the same e-mail as
-           # one or more corresponding transition domains.
-           # Create corresponding DomainInformation objects.
-
-           # NOTE: adding an ADMIN user role for this user
-           # for each domain should already be done
-           # in the invitation.retrieve() method.
-           # However, if the migration scripts for transition
-           # domain objects were not executed correctly,
-           # there could be transition domains without
-           # any corresponding Domain & DomainInvitation objects,
-           # which means the invitation.retrieve() method might
-           # not execute.
-           # Check that there is a corresponding domain object
-           # for this transition domain. If not, we have an error
-           # with our data and migrations need to be run again.
-
-           # Get the domain that corresponds with this transition domain
-           domain_exists = Domain.objects.filter(name=transition_domain.domain_name).exists()
-           if not domain_exists:
-               logger.warn(
-                   """There are transition domains without
-                   corresponding domain objects!
-                   Please run migration scripts for transition domains
-                   (See data_migration.md)"""
-               )
-               # No need to throw an exception...just create a domain
-               # and domain invite, then proceed as normal
-               self.create_domain_and_invite(transition_domain)
-
-           domain = Domain.objects.get(name=transition_domain.domain_name)
-
-           # Create a domain information object, if one doesn't
-           # already exist
-           domain_info_exists = DomainInformation.objects.filter(domain=domain).exists()
-           if not domain_info_exists:
-               new_domain_info = DomainInformation(creator=self, domain=domain)
-               new_domain_info.save()
-
    def on_each_login(self):
        """Callback each time the user is authenticated.

@@ -152,17 +106,6 @@ class User(AbstractUser):
        as a transition domain and update our domainInfo objects accordingly.
        """

-       # PART 1: TRANSITION DOMAINS
-       #
-       # NOTE: THIS MUST RUN FIRST
-       # (If we have an issue where transition domains were
-       # not fully converted into Domain and DomainInvitation
-       # objects, this method will fill in the gaps.
-       # This will ensure the Domain Invitations method
-       # runs correctly (no missing invites))
-       self.check_transition_domains_on_login()
-
-       # PART 2: DOMAIN INVITATIONS
        self.check_domain_invitations_on_login()

    class Meta:
src/registrar/no_cache_middleware.py (new file, 18 lines)
@@ -0,0 +1,18 @@
"""Middleware to add Cache-control: no-cache to every response.

Used to force Cloudfront caching to leave us alone while we develop
better caching responses.
"""


class NoCacheMiddleware:

    """Middleware to add a single header to every response."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        response["Cache-Control"] = "no-cache"
        return response
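A quick way to confirm the middleware is in effect on a deployed environment is to inspect response headers; a sketch using the development route and `/health` endpoint defined in the manifest above (any URL served by the app should show the same header):

```bash
# Expect a "Cache-Control: no-cache" header once the middleware is active
curl -sI https://getgov-development.app.cloud.gov/health | grep -i '^cache-control'
```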
@@ -11,23 +11,18 @@
  <h1>
    {% translate "You are not authorized to view this page" %}
  </h1>

- <h2>
-   {% translate "Status 401" %}
- </h2>

  {% if friendly_message %}
    <p>{{ friendly_message }}</p>
  {% else %}
    <p>{% translate "Authorization failed." %}</p>
  {% endif %}

  <p>
-   You must be an authorized user and need to be signed in to view this page.
-   Would you like to <a href="{% url 'login' %}"> try logging in again?</a>
+   You must be an authorized user and signed in to view this page. If you are an authorized user,
+   <strong><a href="{% url 'login' %}"> try signing in again</a>.</strong>
  </p>
  <p>
-   If you'd like help with this error <a class="usa-link" rel="noopener noreferrer" target="_blank" href="{% public_site_url 'contact/' %}">contact us</a>.
- </p>
+   If you'd like help with this error <a class="usa-link" rel="noopener noreferrer" target="_blank" href="{% public_site_url 'contact/' %}">contact us</a>.</p>


  {% if log_identifier %}
    <p>Here's a unique identifier for this error.</p>

@@ -35,6 +30,7 @@
    <p>{% translate "Please include it if you contact us." %}</p>
  {% endif %}
  </div>

+ <div class="tablet:grid-col-4">
    <img
      src="{% static 'img/registrar/dotgov_401_illo.svg' %}"

@@ -43,4 +39,4 @@
  </div>
</div>
</main>
-{% endblock %}
+{% endblock %}
@@ -4,6 +4,7 @@
{% block title %}Security email | {{ domain.name }} | {% endblock %}

{% block domain_content %}
+ {% include "includes/form_errors.html" with form=form %}

  <h1>Security email</h1>
@@ -4,6 +4,7 @@
{% block title %}Your contact information | {{ domain.name }} | {% endblock %}

{% block domain_content %}
+ {% include "includes/form_errors.html" with form=form %}

  <h1>Your contact information</h1>
@@ -11,12 +11,22 @@ If you’re not affiliated with the above domain{% if domains|length > 1 %}s{% e

CREATE A LOGIN.GOV ACCOUNT

-You can’t use your old credentials to access the new registrar. Access is now managed through Login.gov, a simple and secure process for signing into many government services with one account. Follow these steps to create your Login.gov account <https://login.gov/help/get-started/create-your-account/>.
+You can’t use your old credentials to access the new registrar. Access is now managed through Login.gov, a simple and secure process for signing in to many government services with one account.

-When creating an account, you’ll need to provide the same email address you used to log in to the old registrar. That will ensure your domains are linked to your Login.gov account.
+When creating a Login.gov account, you’ll need to provide the same email address you used to sign in to the old registrar. That will link your domain{% if domains|length > 1 %}s{% endif %} to your account.

If you need help finding the email address you used in the past, let us know in a reply to this email.

+YOU MUST VERIFY YOUR IDENTITY WITH LOGIN.GOV
+
+We require you to verify your identity with Login.gov as part of the account creation process. This is an extra layer of security that requires you to prove you are you, and not someone pretending to be you.
+
+When you try to access the registrar with your Login.gov account, we’ll ask you to verify your identity if you haven’t already. You’ll only have to verify your identity once. You’ll need a state-issued ID, a Social Security number, and a phone number for identity verification.
+
+Follow these steps to create your Login.gov account <https://login.gov/help/get-started/create-your-account/>.
+
+Read more about verifying your identity with Login.gov <https://login.gov/help/verify-your-identity/how-to-verify-your-identity/>.

CHECK YOUR .GOV DOMAIN CONTACTS

This is a good time to check who has access to your .gov domain{% if domains|length > 1 %}s{% endif %}. The admin, technical, and billing contacts listed for your domain{% if domains|length > 1 %}s{% endif %} in our old system also received this email. In our new registrar, these contacts are all considered “domain managers.” We no longer have the admin, technical, and billing roles, and you aren’t limited to three domain managers like in the old system.
@@ -627,22 +627,10 @@ class TestUser(TestCase):
        TransitionDomain.objects.all().delete()
        User.objects.all().delete()

-   def test_check_transition_domains_on_login(self):
-       """A user's on_each_login callback checks transition domains.
-       Makes DomainInformation object."""
-       self.domain, _ = Domain.objects.get_or_create(name=self.domain_name)
-
-       self.user.on_each_login()
-       self.assertTrue(DomainInformation.objects.get(domain=self.domain))
-
    def test_check_transition_domains_without_domains_on_login(self):
-       """A user's on_each_login callback checks transition domains.
+       """A user's on_each_login callback does not check transition domains.
        This test makes sure that in the event a domain does not exist
        for a given transition domain, both a domain and domain invitation
        are created."""
        self.user.on_each_login()
-       self.assertTrue(Domain.objects.get(name=self.domain_name))
-
-       domain = Domain.objects.get(name=self.domain_name)
-       self.assertTrue(DomainInvitation.objects.get(email=self.email, domain=domain))
-       self.assertTrue(DomainInformation.objects.get(domain=domain))
+       self.assertFalse(Domain.objects.filter(name=self.domain_name).exists())
@@ -62,6 +62,9 @@
10038 OUTOFSCOPE http://app:8080/delete
10038 OUTOFSCOPE http://app:8080/withdraw
10038 OUTOFSCOPE http://app:8080/withdrawconfirmed
+10038 OUTOFSCOPE http://app:8080/dns
+10038 OUTOFSCOPE http://app:8080/dnssec
+10038 OUTOFSCOPE http://app:8080/dns/dnssec
# This URL always returns 404, so include it as well.
10038 OUTOFSCOPE http://app:8080/todo
# OIDC isn't configured in the test environment and DEBUG=True so this gives a 500 without CSP headers