Merge remote-tracking branch 'origin' into add-rotate-login-script

This commit is contained in:
Alysia Broddrick 2024-08-19 09:32:58 -07:00
commit a30568e110
281 changed files with 30777 additions and 13803 deletions

View file

@ -1,6 +1,6 @@
---
name: Designer Onboarding
about: Onboarding steps for designers.
about: Onboarding steps for new designers joining the .gov team.
title: 'Designer Onboarding: GH_HANDLE'
labels: design, onboarding
assignees: katherineosos

View file

@ -1,6 +1,6 @@
---
name: Developer Onboarding
about: Onboarding steps for developers.
about: Onboarding steps for new developers joining the .gov team.
title: 'Developer Onboarding: GH_HANDLE'
labels: dev, onboarding
assignees: abroddrick
@ -14,15 +14,35 @@ assignees: abroddrick
## Installation
There are several tools we use locally that you will need to have.
- [ ] [Install the cf CLI v7](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html#pkg-mac) for the ability to deploy
- [ ] Make sure you have `gpg` >2.1.7. Run `gpg --version` to check. If not, [install gnupg](https://formulae.brew.sh/formula/gnupg)
- [ ] Install the [Github CLI](https://cli.github.com/)
There are several tools we use locally that you will need to have.
- [ ] [Cloudfoundry CLI](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html#pkg-mac)
- If you are using Windows, the CLI will be under `cf8` or `cf7` depending on which version you install.
- If you are using Windows, installation information can be found [here](https://github.com/cloudfoundry/cli/wiki/V8-CLI-Installation-Guide#installers-and-compressed-binaries)
- Alternatively, for Windows, [consider using Chocolatey](https://community.chocolatey.org/packages/cloudfoundry-cli/7.2.0)
- [ ] [GPG](https://gnupg.org/download/) if you are using GPG to sign commits.
- Make sure you have `gpg` >2.1.7. Run `gpg --version` to check. If not, [install gnupg](https://formulae.brew.sh/formula/gnupg)
- This may not work on DHS devices. Alternatively, you can [use ssh keys](#setting-up-commit-signing-with-ssh) instead.
- [ ] Docker Community Edition*
- [ ] Git*
- [ ] VSCode (our preferred editor)*
- [ ] Github Desktop* or the Github CLI*
The following tools are optional but recommended. For DHS devices, these can be requested through the DHS IT portal:
- [ ] Slack Desktop App**
- [ ] Python 3.10*
- [ ] NodeJS (latest version available)*
- [ ] PuTTY*
- [ ] Windows Subsystem for Linux*
\* Must be requested through DHS IT portal on DHS devices
** Downloadable via DHS Software Center
## Access
### Steps for the onboardee
- [ ] Setup [commit signing in Github](#setting-up-commit-signing) and with git locally.
- [ ] Set up commit signing in GitHub and with git locally, using either [gpg](#setting-up-commit-signing-with-gpg) or [ssh](#setting-up-commit-signing-with-ssh).
- [ ] [Create a cloud.gov account](https://cloud.gov/docs/getting-started/accounts/)
- [ ] Email github@cisa.dhs.gov (cc: Cameron) to add you to the [CISA Github organization](https://github.com/getgov) and [.gov Team](https://github.com/orgs/cisagov/teams/gov).
- [ ] Ensure you can login to your cloud.gov account via the CLI
@ -34,7 +54,12 @@ cf login -a api.fr.cloud.gov --sso
**Note:** As mentioned in the [Login documentation](https://developers.login.gov/testing/), the sandbox Login account is a different account from your regular, production Login account. If you have not created a Login account for the sandbox before, you will need to create a new account first.
- [ ] Optional- add yourself as a codeowner if desired. See the [Developer readme](https://github.com/cisagov/getgov/blob/main/docs/developer/README.md) for how to do this and what it does.
Follow the [.gov onboarding dev setup instructions](https://docs.google.com/document/d/1ukbpW4LSqkb_CCt8LWfpehP03qqfyYfvK3Fl21NaEq8/edit#heading=h.94jwfwkpkhdx). Confirm you successfully set up the following accounts:
- [ ] Identity sandbox accounts - 1 superuser access account and 1 analyst access account.
- [ ] Login.gov account to access stable
**Optional**
- [ ] Add yourself as a codeowner if desired. See the [Developer readme](https://github.com/cisagov/getgov/blob/main/docs/developer/README.md) for how to do this and what it does.
### Steps for the onboarder
- [ ] Add the onboardee to cloud.gov org (cisa-dotgov)
@ -49,9 +74,9 @@ cf login -a api.fr.cloud.gov --sso
- [ ] [Contributing Policy](https://github.com/cisagov/dotgov/tree/main/CONTRIBUTING.md)
## Setting up commit signing
## Setting up commit signing with GPG
Follow the instructions [here](https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key) to generate a new GPG key (default configurations are okay) and add it to your GPG keys on Github.
Follow GitHub's instructions to [generate a new GPG key](https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key) (default configurations are okay) and [add it to your GitHub GPG keys](https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account).
Configure your key locally:
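The config commands themselves sit outside this diff hunk; as a rough sketch (assuming `<GPG_KEY_ID>` stands in for the ID shown by `gpg --list-secret-keys --keyid-format=long`), the local setup usually looks like:

```bash
# Tell git which GPG key to sign with and to sign every commit by default.
# <GPG_KEY_ID> is a placeholder -- copy the ID from `gpg --list-secret-keys --keyid-format=long`.
git config --global user.signingkey <GPG_KEY_ID>
git config --global commit.gpgsign true
```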
@ -70,6 +95,7 @@ when setting up your key in Github.
Now test that commit signing is working by checking out a branch (`yourname/test-commit-signing`) and making some small change to a file. Commit the change (it should prompt you for your GPG credential) and push it to GitHub. Look on GitHub at your branch and ensure the commit is `verified`.
### Troubleshooting GPG on MacOS
**Note:** If you are on a Mac and are not able to successfully create a signed commit, and you get the following error:
```zsh
error: gpg failed to sign the data
@ -90,8 +116,49 @@ or
source ~/.zshrc
```
### Troubleshooting GPG on Windows
If GPG doesn't work out of the box with git for you:
- You can [download the GPG binary directly](https://gnupg.org/download/).
- It may be helpful to use [gpg4win](https://www.gpg4win.org/get-gpg4win.html).
From there, you should be able to access gpg through the terminal.
Additionally, consider a gpg key manager like Kleopatra if you run into issues with environment variables or with the gpg service not running on startup.
## Setting up commit signing with SSH
Follow GitHub's instructions to [generate a new SSH key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#generating-a-new-ssh-key) and [add it to your GitHub SSH keys](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account) as a **signing key.**
Configure your key locally:
```bash
git config --global gpg.format ssh
git config --global commit.gpgsign true
git config --global user.signingkey <YOUR_KEY_PATH>
```
Where `<YOUR_KEY_PATH>` is the path to your public key file. GitHub defaults this to `~/.ssh/id_ed25519.pub`. If you gave your SSH public key a name other than the default, replace `id_ed25519.pub` with the name you gave your key.
Now test that commit signing is working by checking out a branch (`yourinitials/test-commit-signing`) and making some small change to a file. Commit the change (it should prompt you for your key passphrase) and push it to GitHub. Look on GitHub at your branch and ensure the commit is `verified`.
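As a rough sketch of that test from the terminal (the branch name and file are just placeholders):

```bash
git checkout -b yourinitials/test-commit-signing
echo "testing commit signing" >> docs/test-signing.txt   # any small change works
git add docs/test-signing.txt
git commit -m "Test commit signing"   # with commit.gpgsign=true this prompts for your key passphrase
git push -u origin yourinitials/test-commit-signing
```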
## Setting up developer sandbox
We have three types of environments: stable, staging, and sandbox. Stable (production) and staging (pre-prod) get deployed via tagged release, and developer sandboxes are given to get.gov developers so they can experiment in a production-like environment without disrupting stable or staging. Each sandbox is namespaced and will automatically be deployed to when the appropriate branch syntax is used for that space in an open pull request. There are several things you need to set up to make the sandbox work for a developer.
All automation for setting up a developer sandbox is documented in the scripts for [creating a developer sandbox](../../ops/scripts/create_dev_sandbox.sh) and [removing a developer sandbox](../../ops/scripts/destroy_dev_sandbox.sh). A Cloud.gov organization administrator will have to run the script in order to create the sandbox.
## Known Issues
### SSL Verification Failure
Some developers using Government Furnished Equipment (GFE) have problems using tools such as git and pip due to SSL verification failures. This happens because GFE has a custom certificate chain installed, but these tools use their own certificate bundles. As a result, when they try to verify an SSL connection, they cannot, and the connection fails. To resolve this in pip, you can pass `--use-feature=truststore` to direct pip to use the local certificate store. If you are running into this issue when using git on Windows, run ```git config --global http.sslbackend schannel```.
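As a sketch of the two workarounds above (the package name is just a placeholder):

```bash
# pip: trust the operating system certificate store instead of pip's bundled CA bundle
pip install --use-feature=truststore <some-package>

# git on Windows: use the Windows certificate store (schannel) for TLS verification
git config --global http.sslbackend schannel
```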
If you are running into these issues in a docker container you will need to export the root certificate and pull it into the container. Ask another developer how to do this properly.
### Puppeteer Download Error
When building the node image, either individually or with docker compose, there may be an error caused by a node package called Puppeteer. This can be resolved by adding `ENV PUPPETEER_SKIP_DOWNLOAD=true` to [node.Dockerfile](../../src/node.Dockerfile) after the COPY command.
### Checksum Error
There is an unresolved issue with Python package installation that occurs after the above SSL verification failure has been resolved. It often manifests as a checksum error, where the hash of a downloaded .whl file (Python package) does not match the expected value. This appears to be because pythonhosted.org is cutting off download connections to some devices for some packages (the behavior is somewhat inconsistent). We have outstanding issues with PyPA and DHS IT to fix this. In the meantime we have a [workaround](#developing-using-docker).
## Developing Using Docker
While we have unresolved issues with certain devices, you can pull a pre-built docker image from matthewswspence/getgov-base that comes with all the needed packages installed. To do this, you will need to change the very first line in the main [Dockerfile](../../src/Dockerfile) to `FROM matthewswspence/getgov-base:latest`. Note: this change will need to be reverted before any branch can be merged. Additionally, this only resolves the [checksum error](#checksum-error); you will still need to resolve any other issues by following the instructions above. We are actively working to resolve this inconvenience.

View file

@ -31,8 +31,8 @@ body:
attributes:
label: Links to other issues
description: |
"Add issue #numbers this relates to and how (e.g., 🚧 [construction] Blocks, ⛔️ [no_entry] Is blocked by, 🔄 [arrows_counterclockwise] Relates to)."
placeholder: 🔄 Relates to...
"With a `-` to start the line, add issue #numbers this relates to and how (e.g., 🚧 [construction] Blocks, ⛔️ [no_entry] Is blocked by, 🔄 [arrows_counterclockwise] Relates to)."
placeholder: "- 🔄 Relates to..."
- type: markdown
id: note
attributes:

View file

@ -28,6 +28,7 @@ on:
- ab
- rjm
- dk
- ms
jobs:
createcachetable:

View file

@ -22,16 +22,10 @@ jobs:
- name: Compile USWDS assets
working-directory: ./src
run: |
docker compose run node bash -c "\
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash && \
export NVM_DIR=\"\$HOME/.nvm\" && \
[ -s \"\$NVM_DIR/nvm.sh\" ] && \. \"\$NVM_DIR/nvm.sh\" && \
[ -s \"\$NVM_DIR/bash_completion\" ] && \. \"\$NVM_DIR/bash_completion\" && \
nvm install 21.7.3 && \
nvm use 21.7.3 && \
npm install && \
npx gulp copyAssets && \
npx gulp compile"
docker compose run node npm install npm@latest &&
docker compose run node npm install &&
docker compose run node npx gulp copyAssets &&
docker compose run node npx gulp compile
- name: Collect static assets
working-directory: ./src
run: docker compose run app python manage.py collectstatic --no-input

77
.github/workflows/deploy-manual.yaml vendored Normal file
View file

@ -0,0 +1,77 @@
# Manually deploy a branch of choice to an environment of choice.
name: Manual Build and Deploy
run-name: Manually build and deploy branch to sandbox of choice
on:
workflow_dispatch:
inputs:
environment:
description: 'Environment to deploy'
required: true
default: 'backup'
type: 'choice'
options:
- ab
- backup
- cb
- dk
- es
- gd
- ko
- ky
- nl
- rb
- rh
- rjm
- meoward
- bob
- hotgov
- litterbox
- ms
- ad
# GitHub Actions has no "good" way yet to dynamically input branches
branch:
description: 'Branch to deploy'
required: true
default: 'main'
type: string
jobs:
variables:
runs-on: ubuntu-latest
steps:
- name: Setting global variables
uses: actions/github-script@v6
id: var
with:
script: |
core.setOutput('environment', '${{ github.head_ref }}'.split("/")[0]);
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Compile USWDS assets
working-directory: ./src
run: |
docker compose run node npm install npm@latest &&
docker compose run node npm install &&
docker compose run node npx gulp copyAssets &&
docker compose run node npx gulp compile
- name: Collect static assets
working-directory: ./src
run: docker compose run app python manage.py collectstatic --no-input
- name: Deploy to cloud.gov sandbox
uses: cloud-gov/cg-cli-tools@main
env:
ENVIRONMENT: ${{ github.event.inputs.environment }}
CF_USERNAME: CF_${{ github.event.inputs.environment }}_USERNAME
CF_PASSWORD: CF_${{ github.event.inputs.environment }}_PASSWORD
with:
cf_username: ${{ secrets[env.CF_USERNAME] }}
cf_password: ${{ secrets[env.CF_PASSWORD] }}
cf_org: cisa-dotgov
cf_space: ${{ env.ENVIRONMENT }}
cf_manifest: ops/manifests/manifest-${{ env.ENVIRONMENT }}.yaml

View file

@ -24,6 +24,12 @@ jobs:
|| startsWith(github.head_ref, 'backup/')
|| startsWith(github.head_ref, 'meoward/')
|| startsWith(github.head_ref, 'bob/')
|| startsWith(github.head_ref, 'cb/')
|| startsWith(github.head_ref, 'hotgov/')
|| startsWith(github.head_ref, 'litterbox/')
|| startsWith(github.head_ref, 'ag/')
|| startsWith(github.head_ref, 'ms/')
|| startsWith(github.head_ref, 'ad/')
outputs:
environment: ${{ steps.var.outputs.environment}}
runs-on: "ubuntu-latest"
@ -42,16 +48,10 @@ jobs:
- name: Compile USWDS assets
working-directory: ./src
run: |
docker compose run node bash -c "\
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash && \
export NVM_DIR=\"\$HOME/.nvm\" && \
[ -s \"\$NVM_DIR/nvm.sh\" ] && \. \"\$NVM_DIR/nvm.sh\" && \
[ -s \"\$NVM_DIR/bash_completion\" ] && \. \"\$NVM_DIR/bash_completion\" && \
nvm install 21.7.3 && \
nvm use 21.7.3 && \
npm install && \
npx gulp copyAssets && \
npx gulp compile"
docker compose run node npm install npm@latest &&
docker compose run node npm install &&
docker compose run node npx gulp copyAssets &&
docker compose run node npx gulp compile
- name: Collect static assets
working-directory: ./src
run: docker compose run app python manage.py collectstatic --no-input

View file

@ -23,16 +23,9 @@ jobs:
- name: Compile USWDS assets
working-directory: ./src
run: |
docker compose run node bash -c "\
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash && \
export NVM_DIR=\"\$HOME/.nvm\" && \
[ -s \"\$NVM_DIR/nvm.sh\" ] && \. \"\$NVM_DIR/nvm.sh\" && \
[ -s \"\$NVM_DIR/bash_completion\" ] && \. \"\$NVM_DIR/bash_completion\" && \
nvm install 21.7.3 && \
nvm use 21.7.3 && \
npm install && \
npx gulp copyAssets && \
npx gulp compile"
docker compose run node npm install &&
docker compose run node npx gulp copyAssets &&
docker compose run node npx gulp compile
- name: Collect static assets
working-directory: ./src
run: docker compose run app python manage.py collectstatic --no-input

View file

@ -23,16 +23,9 @@ jobs:
- name: Compile USWDS assets
working-directory: ./src
run: |
docker compose run node bash -c "\
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash && \
export NVM_DIR=\"\$HOME/.nvm\" && \
[ -s \"\$NVM_DIR/nvm.sh\" ] && \. \"\$NVM_DIR/nvm.sh\" && \
[ -s \"\$NVM_DIR/bash_completion\" ] && \. \"\$NVM_DIR/bash_completion\" && \
nvm install 21.7.3 && \
nvm use 21.7.3 && \
npm install && \
npx gulp copyAssets && \
npx gulp compile"
docker compose run node npm install &&
docker compose run node npx gulp copyAssets &&
docker compose run node npx gulp compile
- name: Collect static assets
working-directory: ./src
run: docker compose run app python manage.py collectstatic --no-input

View file

@ -0,0 +1,18 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
pull_request:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
recipients: |
design-review=@Katherine-Osos
message: 'cc/ {recipients} — adding you to this **{label}** issue!'

View file

@ -16,6 +16,12 @@ on:
- stable
- staging
- development
- ad
- ms
- ag
- litterbox
- hotgov
- cb
- bob
- meoward
- backup

View file

@ -16,6 +16,12 @@ on:
options:
- staging
- development
- ad
- ms
- ag
- litterbox
- hotgov
- cb
- bob
- meoward
- backup

View file

@ -70,6 +70,6 @@ jobs:
- name: run pa11y
working-directory: ./src
run: |
sleep 10;
sleep 20;
npm i -g pa11y-ci
pa11y-ci

2
.gitignore vendored
View file

@ -7,8 +7,10 @@ docs/research/data/**
public/
credentials*
src/certs/
*.pem
*.crt
*.cer
*.bk

View file

@ -0,0 +1,47 @@
# 26. Django Waffle library for Feature Flags
Date: 2024-07-06
## Status
Approved
## Context
We release finished code twice weekly, allowing features to reach users quickly. However, several upcoming features require a series of changes that will need to be done over a few sprints and should only be displayed to users once we are all done. Thus, users would see half-finished features if we followed our standard process.
At the same time, some of these features should only be turned on for users upon request (and likely during user research). We would want a way for our CISA users to turn this feature on and off for people without requiring a lengthy process or code changes.
This brought us to finding solutions that could fix one or both of these problems.
## Considered Options
**Option 1:** Environment variables
The environment allows developers to set a true or false value to the given variable, allowing implementation over multiple sprints when new features are encapsulated with this variable. The feature shows when the variable is on (true); otherwise, it remains hidden. Environment variables are also innate to Django, making them free to use; on top of that, we already use them for other things in our code.
The downside is that you would need to go to cloud.gov or use the cf CLI to see the current settings on a sandbox. This is very technical, meaning only developers would really be able to see what features were set, and we would be the only ones able to adjust them. It would also be easy to accidentally have the feature on or off without noticing. This also would not solve the problem of turning features on and off quickly for a given user group.
**Option 2:** Feature branches
Like environment variables, using feature branches would be free and allow us to iterate on developing big features over multiple sprints. We would make a feature branch that developers working on that feature would push and pull from to iterate on. This quickly brings us to the downsides of this approach.
Using feature branches, we do not solve the problem of being able to turn features on and off quickly for a user group. More importantly, by working in a separate branch for more than a sprint, we easily risk having out-of-sync migrations and merge conflicts that would slow development time and cause frustration. Out-of-sync migrations can also cause technical issues on sandboxes, further contributing to development frustration.
**Option 3:** Feature flags
Feature flags are free, allowing us to implement features over multiple sprints, and some libraries can apply features based on UserGroups while even more come with an interface for non-developers to control turning feature flags on and off. Going with this decision would also entail picking the correct library or product.
**Option 3a:** Feature flags with Waffle
The Waffle feature flag library is a highly recommended Django library for handling large features. It has clear documentation on turning on feature flags for user groups, which is one of the main problems it attempts to solve. It also provides "Samples" that can turn on flags for a certain percentage of users and "Switches" that can be used to turn features on and off holistically. The reviews from those who used it were highly favorable, some even mentioning how it beat out competitors like Gargoyle. It's also compatible with Django admin, providing a quick way to add the view of the flags in Django admin so any user with admin access can modify flags for their sandbox.
The repo has had new releases every year since its creation and looks to be well maintained, with many issues on the repo referring to new feature requests.
**Option 3b:** Feature flags with Gargoyle
Gargoyle is another feature-flag library for Django, but it is no longer maintained, and reviews say it wasn't as easy to work with as Waffle. Using it would require forking the library, and many outstanding issues indicate bugs that need fixing. The mixed reviews from those who have done this and the less robust documentation were immediately huge cons to using this as an option.
**Option 3c:** Paid feature flag system with GitHub integration- LaunchDarkly
LaunchDarkly is a FedRAMP-authorized solution with excellent reviews for controlling feature flags straight from GitHub, making it easy for any team member to control feature flags. However, the big con to this was that it would be a paid solution and would take time to procure, thus slowing down our ability to start on these significant features. We shouldn't consider LaunchDarkly because taking time to procure it would negatively affect our timeline, even if the budget were eventually approved.
## Decision
Option 3a, feature flags with the Django Waffle library
## Consequences
We are now reliant on the Waffle library for feature flags. As with any library, we would need to fork it if it ever became non-maintained with critical bugs. This doesn't seem likely in the near future, but if it occurred, we could complete the forking and fix any bug within a sprint without drastically impacting our timeline.

View file

@ -41,7 +41,7 @@ class DomainRequest {
--
creator (User)
investigator (User)
authorizing_official (Contact)
senior_official (Contact)
submitter (Contact)
other_contacts (Contacts)
approved_domain (Domain)
@ -80,7 +80,7 @@ class Contact {
--
}
DomainRequest *-r-* Contact : authorizing_official, submitter, other_contacts
DomainRequest *-r-* Contact : senior_official, submitter, other_contacts
class DraftDomain {
Requested domain

View file

@ -1,11 +1,15 @@
# Complete model documentation
This is an auto-generated diagram of our data models generated with the
[django-model2puml](https://github.com/sen-den/django-model2puml) library
using the command
[django-model2puml](https://github.com/sen-den/django-model2puml) library.
## How to generate the puml
1. Uncomment `puml_generator` from `INSTALLED_APPS` in settings.py, then run `docker-compose down` followed by `docker-compose up`
2. Run the following command to generate a puml file
```bash
$ docker compose app ./manage.py generate_puml --include registrar
docker compose exec app ./manage.py generate_puml --include registrar
```
![Complete data models diagram](./models_diagram.svg)
@ -13,12 +17,19 @@ $ docker compose app ./manage.py generate_puml --include registrar
<details>
<summary>PlantUML source code</summary>
To regenerate this image using Docker, run
## How to regenerate the database svg image
1. Copy your puml file contents into the bottom of this file and replace the current code marked by `plantuml`
2. Navigate to the `diagram` folder and then run the following command:
```bash
$ docker run -v $(pwd):$(pwd) -w $(pwd) -it plantuml/plantuml -tsvg models_diagram.md
docker run -v $(pwd):$(pwd) -w $(pwd) -it plantuml/plantuml -tsvg models_diagram.md
```
3. Remove the puml file from earlier (if you still have it)
4. Commit the new image and the md file
```plantuml
@startuml
class "registrar.Contact <Registrar>" as registrar.Contact #d6f4e9 {
@ -28,17 +39,114 @@ class "registrar.Contact <Registrar>" as registrar.Contact #d6f4e9 {
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
~ user (OneToOneField)
+ first_name (TextField)
+ middle_name (TextField)
+ last_name (TextField)
+ title (TextField)
+ email (TextField)
+ first_name (CharField)
+ middle_name (CharField)
+ last_name (CharField)
+ title (CharField)
+ email (EmailField)
+ phone (PhoneNumberField)
--
}
registrar.Contact -- registrar.User
class "registrar.Host <Registrar>" as registrar.Host #d6f4e9 {
host
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
~ domain (ForeignKey)
--
}
registrar.Host -- registrar.Domain
class "registrar.HostIP <Registrar>" as registrar.HostIP #d6f4e9 {
host ip
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ address (CharField)
~ host (ForeignKey)
--
}
registrar.HostIP -- registrar.Host
class "registrar.PublicContact <Registrar>" as registrar.PublicContact #d6f4e9 {
public contact
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ contact_type (CharField)
+ registry_id (CharField)
~ domain (ForeignKey)
+ name (CharField)
+ org (CharField)
+ street1 (CharField)
+ street2 (CharField)
+ street3 (CharField)
+ city (CharField)
+ sp (CharField)
+ pc (CharField)
+ cc (CharField)
+ email (EmailField)
+ voice (CharField)
+ fax (CharField)
+ pw (CharField)
--
}
registrar.PublicContact -- registrar.Domain
class "registrar.UserDomainRole <Registrar>" as registrar.UserDomainRole #d6f4e9 {
user domain role
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
~ user (ForeignKey)
~ domain (ForeignKey)
+ role (TextField)
--
}
registrar.UserDomainRole -- registrar.User
registrar.UserDomainRole -- registrar.Domain
class "registrar.Domain <Registrar>" as registrar.Domain #d6f4e9 {
domain
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (DomainField)
+ state (FSMField)
+ expiration_date (DateField)
+ security_contact_registry_id (TextField)
+ deleted (DateField)
+ first_ready (DateField)
+ dsdata_last_change (TextField)
--
}
class "registrar.FederalAgency <Registrar>" as registrar.FederalAgency #d6f4e9 {
Federal agency
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ agency (CharField)
+ federal_type (CharField)
--
}
class "registrar.DomainRequest <Registrar>" as registrar.DomainRequest #d6f4e9 {
domain request
--
@ -46,42 +154,55 @@ class "registrar.DomainRequest <Registrar>" as registrar.DomainRequest #d6f4e9 {
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ status (FSMField)
+ rejection_reason (TextField)
+ action_needed_reason (TextField)
+ action_needed_reason_email (TextField)
~ federal_agency (ForeignKey)
~ portfolio (ForeignKey)
~ creator (ForeignKey)
~ investigator (ForeignKey)
+ generic_org_type (CharField)
+ is_election_board (BooleanField)
+ organization_type (CharField)
+ federally_recognized_tribe (BooleanField)
+ state_recognized_tribe (BooleanField)
+ tribe_name (TextField)
+ federal_agency (TextField)
+ tribe_name (CharField)
+ federal_type (CharField)
+ is_election_board (BooleanField)
+ organization_name (TextField)
+ address_line1 (TextField)
+ organization_name (CharField)
+ address_line1 (CharField)
+ address_line2 (CharField)
+ city (TextField)
+ city (CharField)
+ state_territory (CharField)
+ zipcode (CharField)
+ urbanization (TextField)
+ type_of_work (TextField)
+ more_organization_information (TextField)
~ authorizing_official (ForeignKey)
+ urbanization (CharField)
+ about_your_organization (TextField)
~ senior_official (ForeignKey)
~ approved_domain (OneToOneField)
~ requested_domain (OneToOneField)
~ submitter (ForeignKey)
+ purpose (TextField)
+ no_other_contacts_rationale (TextField)
+ anything_else (TextField)
+ has_anything_else_text (BooleanField)
+ cisa_representative_email (EmailField)
+ cisa_representative_first_name (CharField)
+ cisa_representative_last_name (CharField)
+ has_cisa_representative (BooleanField)
+ is_policy_acknowledged (BooleanField)
+ submission_date (DateField)
+ notes (TextField)
# current_websites (ManyToManyField)
# alternative_domains (ManyToManyField)
# other_contacts (ManyToManyField)
--
}
registrar.DomainRequest -- registrar.FederalAgency
registrar.DomainRequest -- registrar.Portfolio
registrar.DomainRequest -- registrar.User
registrar.DomainRequest -- registrar.User
registrar.DomainRequest -- registrar.Contact
registrar.DomainRequest -- registrar.DraftDomain
registrar.DomainRequest -- registrar.Domain
registrar.DomainRequest -- registrar.DraftDomain
registrar.DomainRequest -- registrar.Contact
registrar.DomainRequest *--* registrar.Website
registrar.DomainRequest *--* registrar.Website
@ -94,36 +215,44 @@ class "registrar.DomainInformation <Registrar>" as registrar.DomainInformation #
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
~ federal_agency (ForeignKey)
~ creator (ForeignKey)
~ portfolio (ForeignKey)
~ domain_request (OneToOneField)
+ generic_org_type (CharField)
+ organization_type (CharField)
+ federally_recognized_tribe (BooleanField)
+ state_recognized_tribe (BooleanField)
+ tribe_name (TextField)
+ federal_agency (TextField)
+ tribe_name (CharField)
+ federal_type (CharField)
+ is_election_board (BooleanField)
+ organization_name (TextField)
+ address_line1 (TextField)
+ organization_name (CharField)
+ address_line1 (CharField)
+ address_line2 (CharField)
+ city (TextField)
+ city (CharField)
+ state_territory (CharField)
+ zipcode (CharField)
+ urbanization (TextField)
+ type_of_work (TextField)
+ more_organization_information (TextField)
~ authorizing_official (ForeignKey)
+ urbanization (CharField)
+ about_your_organization (TextField)
~ senior_official (ForeignKey)
~ domain (OneToOneField)
~ submitter (ForeignKey)
+ purpose (TextField)
+ no_other_contacts_rationale (TextField)
+ anything_else (TextField)
+ has_anything_else_text (BooleanField)
+ cisa_representative_email (EmailField)
+ cisa_representative_first_name (CharField)
+ cisa_representative_last_name (CharField)
+ has_cisa_representative (BooleanField)
+ is_policy_acknowledged (BooleanField)
+ security_email (EmailField)
+ notes (TextField)
# other_contacts (ManyToManyField)
--
}
registrar.DomainInformation -- registrar.FederalAgency
registrar.DomainInformation -- registrar.User
registrar.DomainInformation -- registrar.Portfolio
registrar.DomainInformation -- registrar.DomainRequest
registrar.DomainInformation -- registrar.Contact
registrar.DomainInformation -- registrar.Domain
@ -142,58 +271,6 @@ class "registrar.DraftDomain <Registrar>" as registrar.DraftDomain #d6f4e9 {
}
class "registrar.Domain <Registrar>" as registrar.Domain #d6f4e9 {
domain
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
--
}
class "registrar.HostIP <Registrar>" as registrar.HostIP #d6f4e9 {
host ip
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ address (CharField)
~ host (ForeignKey)
--
}
registrar.HostIP -- registrar.Host
class "registrar.Host <Registrar>" as registrar.Host #d6f4e9 {
host
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
~ domain (ForeignKey)
--
}
registrar.Host -- registrar.Domain
class "registrar.UserDomainRole <Registrar>" as registrar.UserDomainRole #d6f4e9 {
user domain role
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
~ user (ForeignKey)
~ domain (ForeignKey)
+ role (TextField)
--
}
registrar.UserDomainRole -- registrar.User
registrar.UserDomainRole -- registrar.Domain
class "registrar.DomainInvitation <Registrar>" as registrar.DomainInvitation #d6f4e9 {
domain invitation
--
@ -208,47 +285,49 @@ class "registrar.DomainInvitation <Registrar>" as registrar.DomainInvitation #d6
registrar.DomainInvitation -- registrar.Domain
class "registrar.Nameserver <Registrar>" as registrar.Nameserver #d6f4e9 {
nameserver
class "registrar.TransitionDomain <Registrar>" as registrar.TransitionDomain #d6f4e9 {
transition domain
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
~ domain (ForeignKey)
~ host_ptr (OneToOneField)
+ username (CharField)
+ domain_name (CharField)
+ status (CharField)
+ email_sent (BooleanField)
+ processed (BooleanField)
+ generic_org_type (CharField)
+ organization_name (CharField)
+ federal_type (CharField)
+ federal_agency (CharField)
+ epp_creation_date (DateField)
+ epp_expiration_date (DateField)
+ first_name (CharField)
+ middle_name (CharField)
+ last_name (CharField)
+ title (CharField)
+ email (EmailField)
+ phone (CharField)
+ address_line (CharField)
+ city (CharField)
+ state_territory (CharField)
+ zipcode (CharField)
--
}
registrar.Nameserver -- registrar.Domain
registrar.Nameserver -- registrar.Host
class "registrar.PublicContact <Registrar>" as registrar.PublicContact #d6f4e9 {
public contact
class "registrar.VerifiedByStaff <Registrar>" as registrar.VerifiedByStaff #d6f4e9 {
verified by staff
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ contact_type (CharField)
+ registry_id (CharField)
~ domain (ForeignKey)
+ name (TextField)
+ org (TextField)
+ street1 (TextField)
+ street2 (TextField)
+ street3 (TextField)
+ city (TextField)
+ sp (TextField)
+ pc (TextField)
+ cc (TextField)
+ email (TextField)
+ voice (TextField)
+ fax (TextField)
+ pw (TextField)
+ email (EmailField)
~ requestor (ForeignKey)
+ notes (TextField)
--
}
registrar.PublicContact -- registrar.Domain
registrar.VerifiedByStaff -- registrar.User
class "registrar.User <Registrar>" as registrar.User #d6f4e9 {
@ -265,7 +344,11 @@ class "registrar.User <Registrar>" as registrar.User #d6f4e9 {
+ is_staff (BooleanField)
+ is_active (BooleanField)
+ date_joined (DateTimeField)
+ status (CharField)
+ phone (PhoneNumberField)
+ middle_name (CharField)
+ title (CharField)
+ verification_type (CharField)
# groups (ManyToManyField)
# user_permissions (ManyToManyField)
# domains (ManyToManyField)
@ -274,6 +357,17 @@ class "registrar.User <Registrar>" as registrar.User #d6f4e9 {
registrar.User *--* registrar.Domain
class "registrar.UserGroup <Registrar>" as registrar.UserGroup #d6f4e9 {
User group
--
- id (AutoField)
+ name (CharField)
~ group_ptr (OneToOneField)
# permissions (ManyToManyField)
--
}
class "registrar.Website <Registrar>" as registrar.Website #d6f4e9 {
website
--
@ -285,6 +379,81 @@ class "registrar.Website <Registrar>" as registrar.Website #d6f4e9 {
}
class "registrar.WaffleFlag <Registrar>" as registrar.WaffleFlag #d6f4e9 {
waffle flag
--
+ id (BigAutoField)
+ name (CharField)
+ everyone (BooleanField)
+ percent (DecimalField)
+ testing (BooleanField)
+ superusers (BooleanField)
+ staff (BooleanField)
+ authenticated (BooleanField)
+ languages (TextField)
+ rollout (BooleanField)
+ note (TextField)
+ created (DateTimeField)
+ modified (DateTimeField)
# groups (ManyToManyField)
# users (ManyToManyField)
--
}
registrar.WaffleFlag *--* registrar.User
class "registrar.Portfolio <Registrar>" as registrar.Portfolio #d6f4e9 {
portfolio
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
~ creator (ForeignKey)
+ notes (TextField)
~ federal_agency (ForeignKey)
+ organization_type (CharField)
+ organization_name (CharField)
+ address_line1 (CharField)
+ address_line2 (CharField)
+ city (CharField)
+ state_territory (CharField)
+ zipcode (CharField)
+ urbanization (CharField)
+ security_contact_email (EmailField)
--
}
registrar.Portfolio -- registrar.User
registrar.Portfolio -- registrar.FederalAgency
class "registrar.DomainGroup <Registrar>" as registrar.DomainGroup #d6f4e9 {
domain group
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
~ portfolio (ForeignKey)
# domains (ManyToManyField)
--
}
registrar.DomainGroup -- registrar.Portfolio
registrar.DomainGroup *--* registrar.DomainInformation
class "registrar.Suborganization <Registrar>" as registrar.Suborganization #d6f4e9 {
suborganization
--
+ id (BigAutoField)
+ created_at (DateTimeField)
+ updated_at (DateTimeField)
+ name (CharField)
~ portfolio (ForeignKey)
--
}
registrar.Suborganization -- registrar.Portfolio
@enduml
```

File diff suppressed because one or more lines are too long


View file

@ -291,13 +291,13 @@ We use the [CSS Block Element Modifier (BEM)](https://getbem.com/naming/) naming
### Upgrading USWDS and other JavaScript packages
Version numbers can be manually controlled in `package.json`. Edit that, if desired.
Now run `docker-compose run node npm update`.
Then run `docker-compose up` to recompile and recopy the assets.
Examine the results in the running application (remember to empty your cache!) and commit `package.json` and `package-lock.json` if all is well.
1. Version numbers can be manually controlled in `package.json`. Edit that, if desired.
2. Now run `docker-compose run node npm update`.
3. Then run `docker-compose up` to recompile and recopy the assets, or run `docker-compose updateUswds` if your docker is already up.
4. Make note of the dotgov changes in uswds-edited.js.
5. Copy over the newly compiled code from uswds.js into uswds-edited.js.
6. Put back the dotgov changes you made note of into uswds-edited.js.
7. Examine the results in the running application (remember to empty your cache!) and commit `package.json` and `package-lock.json` if all is well.
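As a rough terminal sketch of steps 2-3 above (service names as used elsewhere in this repo's compose setup):

```bash
docker-compose run node npm update   # pull in the versions allowed by package.json
docker-compose up                    # recompile and recopy the USWDS assets
```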
## Finite State Machines
@ -320,33 +320,6 @@ it may help to resync your laptop with time.nist.gov:
sudo sntp -sS time.nist.gov
```
## Connection pool
To handle our connection to the registry, we utilize a connection pool to keep a socket open to increase responsiveness. In order to accomplish this, we are utilizing a heavily modified version of the [geventconnpool](https://github.com/rasky/geventconnpool) library.
### Settings
The config for the connection pool exists inside the `settings.py` file.
| Name | Purpose |
| ------------------------ | ------------------------------------------------------------------------------------------------- |
| EPP_CONNECTION_POOL_SIZE | Determines the number of concurrent sockets that should exist in the pool. |
| POOL_KEEP_ALIVE | Determines the interval in which we ping open connections in seconds. Calculated as POOL_KEEP_ALIVE / EPP_CONNECTION_POOL_SIZE |
| POOL_TIMEOUT | Determines how long we try to keep a pool alive for, before restarting it. |
Consider updating the `POOL_TIMEOUT` or `POOL_KEEP_ALIVE` periods if the pool often restarts. If the pool only restarts after a period of inactivity, update `POOL_KEEP_ALIVE`. If it restarts during the EPP call itself, then `POOL_TIMEOUT` needs to be updated.
### Test if the connection pool is running
Our connection pool has a built-in `pool_status` object which you can call at anytime to assess the current connection status of the pool. Follow these steps to access it.
1. `cf ssh getgov-{env-name} -i {instance-index}`
* env-name -> Which environment to target, e.g. `staging`
* instance-index -> Which instance to target. For instance, `cf ssh getgov-staging -i 0`
2. `/tmp/lifecycle/shell`
3. `./manage.py shell`
4. `from epplibwrapper import CLIENT as registry, commands`
5. `print(registry.pool_status.connection_success)`
* Should return true
If you have multiple instances (staging for example), then repeat commands 1-5 for each instance you want to test.
## Adding an S3 instance to your sandbox
This can either be done through the CLI, or through the cloud.gov dashboard. Generally, it is better to do it through the dashboard as it handles app binding for you.
@ -378,4 +351,14 @@ You can view these variables by running the following command:
cf env getgov-{app name}
```
Then, copy the variables under the section labled `s3`.
Then, copy the variables under the section labeled `s3`.
## Disable email sending (toggling the disable_email_sending flag)
1. On the app, navigate to `/admin`.
2. Under models, click `Waffle flags`.
3. Click the `disable_email_sending` record. This should exist by default, if not - create one with that name.
4. (Important) Set the field `everyone` to `Yes`. This field overrides all other settings.
## Request Flow FSM Diagram
The [.gov Domain Request & Domain Status Diagram](https://miro.com/app/board/uXjVMuqbLOk=/?moveToWidget=3458764594819017396&cot=14) visualizes the domain request flow and resulting domain objects.

View file

@ -0,0 +1,23 @@
# Adding feature flags
Feature flags are booleans (stored in our DB as the `WaffleFlag` object) that programmatically disable/enable "features" (such as DNS hosting) for a specified set of users.
We use [django-waffle](https://waffle.readthedocs.io/en/stable/) for our feature flags. Waffle makes using flags fairly straightforward.
## Adding feature flags through django admin
1. On the app, navigate to `/admin`.
2. Under models, click `Waffle flags`.
3. Click `Add waffle flag`.
4. Add the model as you would normally. Refer to waffle's documentation [regarding attributes](https://waffle.readthedocs.io/en/stable/types/flag.html#flag-attributes) for more information on them.
### Enabling the profile_feature flag
1. On the app, navigate to `/admin`.
2. Under models, click `Waffle flags`.
3. Click the `profile_feature` record. This should exist by default, if not - create one with that name.
4. (Important) Set the field `Everyone` to `Unknown`. This field overrides all other settings when set to anything else.
5. Configure the settings as you see fit.
## Using feature flags as boolean values
Waffle [provides a helper](https://waffle.readthedocs.io/en/stable/usage/views.html) called `flag_is_active` that returns a boolean you can use as you otherwise would. It requires a request object and the flag name.
## Using feature flags to disable/enable views
Waffle [provides a decorator](https://waffle.readthedocs.io/en/stable/usage/decorators.html) that you can use to enable/disable views. When disabled, the view will return a 404 if said user tries to navigate to it.

View file

@ -0,0 +1,65 @@
# Terminal Helper Functions
`terminal_helper.py` contains utility functions to assist with common terminal and script operations.
This file documents what they do and provides guidance on their usage.
## TerminalColors
`TerminalColors` provides ANSI color codes as variables to style terminal output.
## ScriptDataHelper
### bulk_update_fields
`bulk_update_fields` performs a memory-efficient bulk update on a Django model in batches using a Paginator.
## TerminalHelper
### log_script_run_summary
`log_script_run_summary` logs a summary of a script run, including counts of updated, skipped, and failed records.
### print_conditional
`print_conditional` conditionally logs a statement at a specified severity if a condition is met.
### prompt_for_execution
`prompt_for_execution` prompts the user to inspect a string and confirm if they wish to proceed. Returns True if proceeding, False if skipping, or exits the script.
### query_yes_no
`query_yes_no` prompts the user with a yes/no question and returns True for "yes" or False for "no".
### query_yes_no_exit
`query_yes_no_exit` is similar to `query_yes_no` but includes an "exit" option to terminate the script.
## PopulateScriptTemplate
`PopulateScriptTemplate` is an abstract base class that provides a template for creating generic populate scripts. It handles logging and bulk updating for repetitive scripts that update a few fields.
### **Disclaimer**
This template is intended as a shorthand for simple scripts. It is not recommended for complex operations. See `transfer_federal_agency.py` for a straightforward example of how to use this template.
### Step-by-step usage guide
To create a script using `PopulateScriptTemplate`:
1. Create a new class that inherits from `PopulateScriptTemplate`
2. Implement the `update_record` method to define how each record should be updated
3. Optionally, override the configuration variables and helper methods as needed
4. Call `mass_update_records` within `handle` and run the script
#### Template explanation
The main method provided by `PopulateScriptTemplate` is `mass_update_records`. This method loops through each valid object (specified by `filter_conditions`) and updates the fields defined in `fields_to_update` using the `update_record` method.
Before updating, `mass_update_records` prompts the user to confirm the proposed changes. If the user does not proceed, the script will exit.
After processing the records, `mass_update_records` performs a bulk update on the specified fields using `ScriptDataHelper.bulk_update_fields` and logs a summary of the script run using `TerminalHelper.log_script_run_summary`.
#### Config options
The class provides the following optional configuration variables:
- `prompt_title`: The header displayed by `prompt_for_execution` when the script starts (default: "Do you wish to proceed?")
- `display_run_summary_items_as_str`: If True, runs `str(item)` on each item when printing the run summary for prettier output (default: False)
- `run_summary_header`: The header for the script run summary printed after the script finishes (default: None)
The class also provides helper methods:
- `get_class_name`: Returns a display-friendly class name for the terminal prompt
- `get_failure_message`: Returns the message to display if a record fails to update
- `should_skip_record`: Defines the condition for skipping a record (by default, no records are skipped)

View file

@ -30,7 +30,19 @@ You should end up with `40_some_migration_from_main`, `41_local_migration`
Alternatively, assuming that the conflicting migrations are not dependent on each other, you can manually edit the migration file such that your new migration is incremented by one (file name, and definition inside the file) but this approach is not recommended.
### Scenario 2: Conflicting migrations on sandbox
### Scenario 2: Conflicting migrations on sandbox (can be fixed with GH workflow)
A 500 error on a sandbox after a fresh push usually indicates a migration issue.
Most of the time, these migration issues can easily be fixed by simply running the
"reset-db" workflow in Github.
For the workflow, select the following inputs before running it:
"Use workflow from": Branch-main
"Which environment should we flush and re-load data for?" <YOUR_TARGET_SANDBOX>
This is not a cure-all since it simply flushes and re-runs migrations against your sandbox.
If running this workflow does not solve your issue, proceed examining the scenarios below.
### Scenario 3: Conflicting migrations on sandbox (cannot be fixed with GH workflow)
This occurs when the logs return the following:
>Conflicting migrations detected; multiple leaf nodes in the migration graph: (0040_example, 0041_example in base).

View file

@ -45,6 +45,8 @@ When deploying to your personal sandbox, you should make sure all of the USWDS a
For ease of use, you can run the `deploy.sh <sandbox name>` script in the `/src` directory to build the assets and deploy to your sandbox. Similarly, you can run the `build.sh <sandbox name>` script to just compile and collect the assets without deploying.
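For example, assuming your sandbox is `za` (a placeholder; substitute your own space name):

```bash
cd src
./deploy.sh za   # compile/collect assets, then deploy to getgov-za
./build.sh za    # compile and collect assets only, without deploying
```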
You may also manually deploy to a sandbox using our [manual deploy workflow](https://github.com/cisagov/manage.get.gov/actions/workflows/deploy-manual.yaml) on GitHub Actions. Select Run workflow and enter the branch you want to deploy to your sandbox of choice.
Your sandbox space should've been set up as part of the onboarding process. If this was not the case, please have an admin follow the instructions below.
## Creating a sandbox or new environment

View file

@ -668,3 +668,151 @@ Example: `cf ssh getgov-za`
#### Step 1: Running the script
```docker-compose exec app ./manage.py populate_verification_type```
## Copy names from contacts to users
### Running on sandboxes
#### Step 1: Login to CloudFoundry
```cf login -a api.fr.cloud.gov --sso```
#### Step 2: SSH into your environment
```cf ssh getgov-{space}```
Example: `cf ssh getgov-za`
#### Step 3: Create a shell instance
```/tmp/lifecycle/shell```
#### Step 4: Running the script
```./manage.py copy_names_from_contacts_to_users --debug```
### Running locally
#### Step 1: Running the script
```docker-compose exec app ./manage.py copy_names_from_contacts_to_users --debug```
##### Optional parameters
| | Parameter | Description |
|:-:|:-------------------------- |:----------------------------------------------------------------------------|
| 1 | **debug** | Increases logging detail. Defaults to False. |
## Transfer federal agency script
The transfer federal agency script adds the "federal_type" field on each associated DomainRequest, and uses that to populate the "federal_type" field on each FederalAgency.
**Important:** When running this script, note that data generated by our fixtures will be inaccurate (since we assign random data to them). Use real data on this script.
Do note that there is a check on record uniqueness. If two or more records do NOT have the same value for federal_type for any given federal agency, then the record is skipped. This protects against fixture data when it is loaded alongside real data.
### Running on sandboxes
#### Step 1: Login to CloudFoundry
```cf login -a api.fr.cloud.gov --sso```
#### Step 2: SSH into your environment
```cf ssh getgov-{space}```
Example: `cf ssh getgov-za`
#### Step 3: Create a shell instance
```/tmp/lifecycle/shell```
#### Step 4: Running the script
```./manage.py transfer_federal_agency_type```
### Running locally
#### Step 1: Running the script
```docker-compose exec app ./manage.py transfer_federal_agency_type```
## Email current metadata report
### Running on sandboxes
#### Step 1: Login to CloudFoundry
```cf login -a api.fr.cloud.gov --sso```
#### Step 2: SSH into your environment
```cf ssh getgov-{space}```
Example: `cf ssh getgov-za`
#### Step 3: Create a shell instance
```/tmp/lifecycle/shell```
#### Step 4: Running the script
```./manage.py email_current_metadata_report --emailTo {desired email address}```
### Running locally
#### Step 1: Running the script
```docker-compose exec app ./manage.py email_current_metadata_report --emailTo {desired email address}```
##### Parameters
| | Parameter | Description |
|:-:|:-------------------------- |:-----------------------------------------------------------------------------------|
| 1 | **emailTo** | Specifies the address the email will be sent to. Defaults to help@get.gov on production. |
## Populate federal agency initials and FCEB
This script adds to the "is_fceb" and "initials" fields on the FederalAgency model. This script expects a CSV of federal CIOs to pull from, which can be sourced from [here](https://docs.google.com/spreadsheets/d/14oXHFpKyUXS5_mDWARPusghGdHCrP67jCleOknaSx38/edit?gid=479328070#gid=479328070).
### Running on sandboxes
#### Step 1: Login to CloudFoundry
```cf login -a api.fr.cloud.gov --sso```
#### Step 2: SSH into your environment
```cf ssh getgov-{space}```
Example: `cf ssh getgov-za`
#### Step 3: Create a shell instance
```/tmp/lifecycle/shell```
#### Step 4: Upload your csv to the desired sandbox
[Follow these steps](#use-scp-to-transfer-data-to-sandboxes) to upload the federal_cio csv to a sandbox of your choice.
#### Step 5: Running the script
```./manage.py populate_federal_agency_initials_and_fceb {path_to_CIO_csv}```
### Running locally
#### Step 1: Running the script
```docker-compose exec app ./manage.py populate_federal_agency_initials_and_fceb {path_to_CIO_csv}```
##### Parameters
| | Parameter | Description |
|:-:|:-------------------------- |:-----------------------------------------------------------------------------------|
| 1 | **federal_cio_csv_path** | Specifies where the federal CIO csv is |
## Load senior official table
This script adds SeniorOfficial records to the related table based off of a CSV. This script expects a CSV of federal CIOs to pull from, which can be sourced from [here](https://docs.google.com/spreadsheets/d/14oXHFpKyUXS5_mDWARPusghGdHCrP67jCleOknaSx38/edit?gid=479328070#gid=479328070).
### Running on sandboxes
#### Step 1: Login to CloudFoundry
```cf login -a api.fr.cloud.gov --sso```
#### Step 2: SSH into your environment
```cf ssh getgov-{space}```
Example: `cf ssh getgov-za`
#### Step 3: Create a shell instance
```/tmp/lifecycle/shell```
#### Step 4: Upload your csv to the desired sandbox
[Follow these steps](#use-scp-to-transfer-data-to-sandboxes) to upload the federal_cio csv to a sandbox of your choice.
#### Step 5: Running the script
```./manage.py load_senior_official_table {path_to_CIO_csv}```
### Running locally
#### Step 1: Running the script
```docker-compose exec app ./manage.py load_senior_official_table {path_to_CIO_csv}```
##### Parameters
| | Parameter | Description |
|:-:|:-------------------------- |:-----------------------------------------------------------------------------------|
| 1 | **federal_cio_csv_path** | Specifies where the federal CIO csv is |

View file

@ -0,0 +1,138 @@
# Export / Import Tables
A means is provided to export and import tables from
one environment to another. This allows for replication of
production data in a development environment. Import and export
are provided through a modified library, django-import-export.
Simple scripts are provided as detailed below.
### Export
To export from the source environment, run the following command from src directory:
manage.py export_tables
Connect to the source sandbox and run the command:
cf ssh {source-app}
/tmp/lifecycle/shell
./manage.py export_tables
example exporting from getgov-stable:
cf ssh getgov-stable
/tmp/lifecycle/shell
./manage.py export_tables
This exports a file, exported_tables.zip, to the tmp directory
For reference, the zip file will contain the following tables in csv form:
* User
* Contact
* Domain
* DomainRequest
* DomainInformation
* DomainUserRole
* DraftDomain
* FederalAgency
* Websites
* Host
* HostIP
* PublicContact
After exporting the file from the source environment, scp the exported_tables.zip
file from the source environment to local. Run the below commands from local.
Get passcode by running:
cf ssh-code
scp file from source app to local file:
scp -P 2222 -o User=cf:$(cf curl /v3/apps/$(cf app {source-app} --guid)/processes | jq -r '.resources[] | select(.type=="web") | .guid')/0 ssh.fr.cloud.gov:app/tmp/exported_tables.zip {local_file_path}
when prompted, supply the passcode retrieved in the 'cf ssh-code' command
example copying from stable to local cwd:
scp -P 2222 -o User=cf:$(cf curl /v3/apps/$(cf app getgov-stable --guid)/processes | jq -r '.resources[] | select(.type=="web") | .guid')/0 ssh.fr.cloud.gov:app/tmp/exported_tables.zip .
### Import
When importing into the target environment, if the target environment
is different than the source environment, it must be prepared for the
import. This involves clearing out rows in the appropriate tables so
that there are no database conflicts on import.
#### Preparing Target Environment
In order to delete all rows from the appropriate tables, run the following
command:
cf ssh {target-app}
/tmp/lifecycle/shell
./manage.py clean_tables
example cleaning getgov-backup:
cf ssh getgov-backup
/tmp/lifecycle/shell
./manage.py clean_tables
For reference, this deletes all rows from the following tables:
* DomainInformation
* DomainRequest
* Domain
* User
* Contact
* Websites
* DraftDomain
* HostIP
* Host
* PublicContact
* FederalAgency
#### Importing into Target Environment
Once target environment is prepared, files can be imported.
If importing tables from the stable environment into an OT&E sandbox, there will be a difference
between stable's registry and the sandbox's registry. Therefore, you need to run import_tables
with the skipEppSave option set to False (i.e., pass `--no-skipEppSave`, as in the example below). When set to False, it will attempt to save PublicContact
records to the registry on load. If this is unset, or set to True, it will load the database and not
attempt to update the registry on load.
To scp the exported_tables.zip file from local to the sandbox, run the following:
Get passcode by running:
cf ssh-code
scp file from local to target app:
scp -P 2222 -o User=cf:$(cf curl /v3/apps/$(cf app {target-app} --guid)/processes | jq -r '.resources[] | select(.type=="web") | .guid')/0 {local_file_path} ssh.fr.cloud.gov:app/tmp/exported_tables.zip
when prompted, supply the passcode retrieved in the 'cf ssh-code' command
example copy of local file in tmp to getgov-backup:
scp -P 2222 -o User=cf:$(cf curl /v3/apps/$(cf app getgov-backup --guid)/processes | jq -r '.resources[] | select(.type=="web") | .guid')/0 tmp/exported_tables.zip ssh.fr.cloud.gov:app/tmp/exported_tables.zip
Then connect to a shell in the target environment, and run the following import command:
cf ssh {target-app}
/tmp/lifecycle/shell
./manage.py import_tables
example importing into getgov-backup:
cf ssh getgov-backup
/tmp/lifecycle/shell
./manage.py import_tables --no-skipEppSave
For reference, this imports tables in the following order:
* User
* Contact
* Domain
* Host
* HostIP
* DraftDomain
* Websites
* FederalAgency
* DomainRequest
* DomainInformation
* UserDomainRole
* PublicContact
Optional step:
* Run fixtures to load fixture users back in
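One way to do this, assuming the standard fixtures are appropriate for the target environment, is to run the same load command that the local docker-compose setup uses (a sketch, run from the target app's shell):

```bash
./manage.py load
```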

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-ad
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-ad.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-ad.app.cloud.gov
services:
- getgov-credentials
- getgov-ad-database

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-ag
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-ag.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-ag.app.cloud.gov
services:
- getgov-credentials
- getgov-ag-database

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-cb
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-cb.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-cb.app.cloud.gov
services:
- getgov-credentials
- getgov-cb-database

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-hotgov
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-hotgov.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-hotgov.app.cloud.gov
services:
- getgov-credentials
- getgov-hotgov-database

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-litterbox
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-litterbox.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-litterbox.app.cloud.gov
services:
- getgov-credentials
- getgov-litterbox-database

View file

@ -0,0 +1,32 @@
---
applications:
- name: getgov-ms
buildpacks:
- python_buildpack
path: ../../src
instances: 1
memory: 512M
stack: cflinuxfs4
timeout: 180
command: ./run.sh
health-check-type: http
health-check-http-endpoint: /health
health-check-invocation-timeout: 40
env:
# Send stdout and stderr straight to the terminal without buffering
PYTHONUNBUFFERED: yup
# Tell Django where to find its configuration
DJANGO_SETTINGS_MODULE: registrar.config.settings
# Tell Django where it is being hosted
DJANGO_BASE_URL: https://getgov-ms.app.cloud.gov
# Tell Django how much stuff to log
DJANGO_LOG_LEVEL: INFO
# default public site location
GETGOV_PUBLIC_SITE_URL: https://get.gov
# Flag to disable/enable features in prod environments
IS_PRODUCTION: False
routes:
- route: getgov-ms.app.cloud.gov
services:
- getgov-credentials
- getgov-ms-database

View file

@ -116,6 +116,10 @@ sed -i '' '/ - development/ {a\
- '"$1"'
}' .github/workflows/migrate.yaml
sed -i '' '/ - backup/ {a\
- '"$1"'
}' .github/workflows/deploy-manual.yaml
sed -i '' '/${{startsWith(github.head_ref, / {a\
|| startsWith(github.head_ref, '"'$1'"')
}' .github/workflows/deploy-sandbox.yaml

View file

@ -49,6 +49,7 @@ rm ops/manifests/manifest-$1.yaml
sed -i '' "/getgov-$1.app.cloud.gov/d" src/registrar/config/settings.py
sed -i '' "/- $1/d" .github/workflows/reset-db.yaml
sed -i '' "/- $1/d" .github/workflows/migrate.yaml
sed -i '' "/- $1/d" .github/workflows/deploy-manual.yaml
echo "Cleaning up services, applications, and the Cloud.gov space for $1..."
cf delete getgov-$1

View file

@ -11,7 +11,7 @@
"http://localhost:8080/request/org_federal/",
"http://localhost:8080/request/org_election/",
"http://localhost:8080/request/org_contact/",
"http://localhost:8080/request/authorizing_official/",
"http://localhost:8080/request/senior_official/",
"http://localhost:8080/request/current_sites/",
"http://localhost:8080/request/dotgov_domain/",
"http://localhost:8080/request/purpose/",
@ -19,6 +19,7 @@
"http://localhost:8080/request/other_contacts/",
"http://localhost:8080/request/anything_else/",
"http://localhost:8080/request/requirements/",
"http://localhost:8080/request/finished/"
"http://localhost:8080/request/finished/",
"http://localhost:8080/user-profile/"
]
}

View file

@ -32,6 +32,8 @@ fred-epplib = {git = "https://github.com/cisagov/epplib.git", ref = "master"}
pyzipper="*"
tblib = "*"
django-admin-multiple-choice-list-filter = "*"
django-import-export = "*"
django-waffle = "*"
[dev-packages]
django-debug-toolbar = "*"

321
src/Pipfile.lock generated
View file

@ -1,7 +1,7 @@
{
"_meta": {
"hash": {
"sha256": "16a0db98015509322cf1d27f06fced5b7635057c4eb98921a9419d63d51925ab"
"sha256": "9095c4f98f58a9502444584067a63f329d5a5fc4b49454c4e129bda09552d19d"
},
"pipfile-spec": 6,
"requires": {},
@ -32,20 +32,20 @@
},
"boto3": {
"hashes": [
"sha256:2824e3dd18743ca50e5b10439d20e74647b1416e8a94509cb30beac92d27a18d",
"sha256:b2e5cb5b95efcc881e25a3bc872d7a24e75ff4e76f368138e4baf7b9d6ee3422"
"sha256:decf52f8d5d8a1b10c9ff2a0e96ee207ed79e33d2e53fdf0880a5cbef70785e0",
"sha256:e836b71d79671270fccac0a4d4c8ec239a6b82ea47c399b64675aa597d0ee63b"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==1.34.90"
"version": "==1.34.95"
},
"botocore": {
"hashes": [
"sha256:113cd4c0cb63e13163ccbc2bb13d551be314ba7f8ba5bfab1c51a19ca01aa133",
"sha256:d48f152498e2c60b43ce25b579d26642346a327b6fb2c632d57219e0a4f63392"
"sha256:6bd76a2eadb42b91fa3528392e981ad5b4dfdee3968fa5b904278acf6cbf15ff",
"sha256:ead5823e0dd6751ece5498cb979fd9abf190e691c8833bcac6876fd6ca261fa7"
],
"markers": "python_version >= '3.8'",
"version": "==1.34.90"
"version": "==1.34.95"
},
"cachetools": {
"hashes": [
@ -272,6 +272,14 @@
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==0.7.1"
},
"diff-match-patch": {
"hashes": [
"sha256:953019cdb9c9d2c9e47b5b12bcff3cf4746fc4598eb406076fa1fc27e6a1f15c",
"sha256:dce43505fb7b1b317de7195579388df0746d90db07015ed47a85e5e44930ef93"
],
"markers": "python_version >= '3.7'",
"version": "==20230430"
},
"dj-database-url": {
"hashes": [
"sha256:04bc34b248d4c21aaa13e4ab419ae6575ef5f10f3df735ce7da97722caa356e0",
@ -352,6 +360,15 @@
"index": "pypi",
"version": "==2.8.1"
},
"django-import-export": {
"hashes": [
"sha256:2eac09e8cec8670f36e24314760448011ad23c51e8fb930d55f50d0c3c926da0",
"sha256:4deabc557801d368093608c86fd0f4831bc9540e2ea41ca2f023e2efb3eb6f48"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==3.3.8"
},
"django-login-required-middleware": {
"hashes": [
"sha256:847ae9a69fd7a07618ed53192b3c06946af70a0caf6d0f4eb40a8f37593cd970"
@ -370,6 +387,15 @@
"markers": "python_version >= '3.8'",
"version": "==7.3.0"
},
"django-waffle": {
"hashes": [
"sha256:5979a2f3dd674ef7086480525b39651fc2045427f6d8e6a614192656d3402c5b",
"sha256:e49d7d461d89f3bd8e53f20efe39310acca8f275c9888495e68e195345bf18b1"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==4.1.0"
},
"django-widget-tweaks": {
"hashes": [
"sha256:1c2180681ebb994e922c754804c7ffebbe1245014777ac47897a81f57cc629c7",
@ -390,14 +416,22 @@
"markers": "python_version >= '3.8'",
"version": "==11.0.0"
},
"et-xmlfile": {
"hashes": [
"sha256:8eb9e2bc2f8c97e37a2dc85a09ecdcdec9d8a396530a6d5a33b30b9a92da0c5c",
"sha256:a2ba85d1d6a74ef63837eed693bcb89c3f752169b0e3e7ae5b16ca5e1b3deada"
],
"markers": "python_version >= '3.6'",
"version": "==1.1.0"
},
"faker": {
"hashes": [
"sha256:34b947581c2bced340c39b35f89dbfac4f356932cfff8fe893bde854903f0e6e",
"sha256:adb98e771073a06bdc5d2d6710d8af07ac5da64c8dc2ae3b17bb32319e66fd82"
"sha256:87ef41e24b39a5be66ecd874af86f77eebd26782a2681200e86c5326340a95d3",
"sha256:e23a2b74888885c3d23a9237bacb823041291c03d609a39acb9ebe6c123f3986"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==24.11.0"
"version": "==25.0.0"
},
"fred-epplib": {
"git": "https://github.com/cisagov/epplib.git",
@ -588,7 +622,6 @@
"sha256:3d0c3dd24bb4605439bf91068598d00c6370684f8de4a67c2992683f6c309d6b",
"sha256:3dbe858ee582cbb2c6294dc85f55b5f19c918c2597855e950f34b660f1a5ede6",
"sha256:3dc773b2861b37b41a6136e0b72a1a44689a9c4c101e0cddb6b854016acc0aa8",
"sha256:3e183c6e3298a2ed5af9d7a356ea823bccaab4ec2349dc9ed83999fd289d14d5",
"sha256:3f7765e69bbce0906a7c74d5fe46d2c7a7596147318dbc08e4a2431f3060e306",
"sha256:417d14450f06d51f363e41cace6488519038f940676ce9664b34ebf5653433a5",
"sha256:44f6c7caff88d988db017b9b0e4ab04934f11e3e72d478031efc7edcac6c622f",
@ -725,6 +758,12 @@
"markers": "python_version >= '3.8'",
"version": "==1.3.3"
},
"markuppy": {
"hashes": [
"sha256:1adee2c0a542af378fe84548ff6f6b0168f3cb7f426b46961038a2bcfaad0d5f"
],
"version": "==1.14"
},
"markupsafe": {
"hashes": [
"sha256:00e046b6dd71aa03a41079792f8473dc494d564611a8f89bbbd7cb93295ebdcf",
@ -799,14 +838,28 @@
"markers": "python_version >= '3.8'",
"version": "==3.21.1"
},
"odfpy": {
"hashes": [
"sha256:db766a6e59c5103212f3cc92ec8dd50a0f3a02790233ed0b52148b70d3c438ec",
"sha256:fc3b8d1bc098eba4a0fda865a76d9d1e577c4ceec771426bcb169a82c5e9dfe0"
],
"version": "==1.4.1"
},
"oic": {
"hashes": [
"sha256:385a1f64bb59519df1e23840530921bf416740240f505ea6d161e331d3d39fad",
"sha256:fcbf948a22e4d4df66f6bf57d327933f32a7b539640d9b42883457634360ba78"
"sha256:b74bd06c7de1ab4f8e798f714062e6a68f68ad9cdbed1f1c30a7fb887602f321",
"sha256:e51705d0c14c97e9ca594374bfb54269a72c9b489e0e979598344c0189bfcb64"
],
"index": "pypi",
"markers": "python_version ~= '3.7'",
"version": "==1.6.1"
"markers": "python_version ~= '3.8'",
"version": "==1.7.0"
},
"openpyxl": {
"hashes": [
"sha256:a6f5977418eff3b2d5500d54d9db50c8277a368436f4e4f8ddb1be3422870184",
"sha256:f91456ead12ab3c6c2e9491cf33ba6d08357d802192379bb482f1033ade496f5"
],
"version": "==3.1.2"
},
"orderedmultidict": {
"hashes": [
@ -1080,6 +1133,62 @@
"markers": "python_version >= '3.8'",
"version": "==1.0.1"
},
"pyyaml": {
"hashes": [
"sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5",
"sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc",
"sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df",
"sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741",
"sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206",
"sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27",
"sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595",
"sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62",
"sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98",
"sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696",
"sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290",
"sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9",
"sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d",
"sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6",
"sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867",
"sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47",
"sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486",
"sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6",
"sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3",
"sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007",
"sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938",
"sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0",
"sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c",
"sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735",
"sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d",
"sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28",
"sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4",
"sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba",
"sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8",
"sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef",
"sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5",
"sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd",
"sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3",
"sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0",
"sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515",
"sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c",
"sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c",
"sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924",
"sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34",
"sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43",
"sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859",
"sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673",
"sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54",
"sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a",
"sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b",
"sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab",
"sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa",
"sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c",
"sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585",
"sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d",
"sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"
],
"version": "==6.0.1"
},
"pyzipper": {
"hashes": [
"sha256:0adca90a00c36a93fbe49bfa8c5add452bfe4ef85a1b8e3638739dd1c7b26bfc",
@ -1130,6 +1239,21 @@
"markers": "python_version >= '3.8'",
"version": "==0.5.0"
},
"tablib": {
"extras": [
"html",
"ods",
"xls",
"xlsx",
"yaml"
],
"hashes": [
"sha256:9821caa9eca6062ff7299fa645e737aecff982e6b2b42046928a6413c8dabfd9",
"sha256:f6661dfc45e1d4f51fa8a6239f9c8349380859a5bfaa73280645f046d6c96e33"
],
"markers": "python_version >= '3.8'",
"version": "==3.5.0"
},
"tblib": {
"hashes": [
"sha256:80a6c77e59b55e83911e1e607c649836a69c103963c5f28a46cbeef44acf8129",
@ -1165,6 +1289,20 @@
"markers": "python_version >= '3.8'",
"version": "==6.6.0"
},
"xlrd": {
"hashes": [
"sha256:6a33ee89877bd9abc1158129f6e94be74e2679636b8a205b43b85206c3f0bbdd",
"sha256:f72f148f54442c6b056bf931dbc34f986fd0c3b0b6b5a58d013c9aef274d0c88"
],
"version": "==2.0.1"
},
"xlwt": {
"hashes": [
"sha256:a082260524678ba48a297d922cc385f58278b8aa68741596a87de01a9c628b2e",
"sha256:c59912717a9b28f1a3c2a98fd60741014b06b043936dcecbc113eaaada156c88"
],
"version": "==1.3.0"
},
"zope.event": {
"hashes": [
"sha256:2832e95014f4db26c47a13fdaef84cef2f4df37e66b59d8f1f4a8f319a632c26",
@ -1244,49 +1382,49 @@
},
"black": {
"hashes": [
"sha256:1bb9ca06e556a09f7f7177bc7cb604e5ed2d2df1e9119e4f7d2f1f7071c32e5d",
"sha256:21f9407063ec71c5580b8ad975653c66508d6a9f57bd008bb8691d273705adcd",
"sha256:4396ca365a4310beef84d446ca5016f671b10f07abdba3e4e4304218d2c71d33",
"sha256:44d99dfdf37a2a00a6f7a8dcbd19edf361d056ee51093b2445de7ca09adac965",
"sha256:5cd5b4f76056cecce3e69b0d4c228326d2595f506797f40b9233424e2524c070",
"sha256:64578cf99b6b46a6301bc28bdb89f9d6f9b592b1c5837818a177c98525dbe397",
"sha256:64e60a7edd71fd542a10a9643bf369bfd2644de95ec71e86790b063aa02ff745",
"sha256:652e55bb722ca026299eb74e53880ee2315b181dfdd44dca98e43448620ddec1",
"sha256:6644f97a7ef6f401a150cca551a1ff97e03c25d8519ee0bbc9b0058772882665",
"sha256:6ad001a9ddd9b8dfd1b434d566be39b1cd502802c8d38bbb1ba612afda2ef436",
"sha256:71d998b73c957444fb7c52096c3843875f4b6b47a54972598741fe9a7f737fcb",
"sha256:74eb9b5420e26b42c00a3ff470dc0cd144b80a766128b1771d07643165e08d0e",
"sha256:75a2d0b4f5eb81f7eebc31f788f9830a6ce10a68c91fbe0fade34fff7a2836e6",
"sha256:7852b05d02b5b9a8c893ab95863ef8986e4dda29af80bbbda94d7aee1abf8702",
"sha256:7f2966b9b2b3b7104fca9d75b2ee856fe3fdd7ed9e47c753a4bb1a675f2caab8",
"sha256:8e5537f456a22cf5cfcb2707803431d2feeb82ab3748ade280d6ccd0b40ed2e8",
"sha256:d4e71cdebdc8efeb6deaf5f2deb28325f8614d48426bed118ecc2dcaefb9ebf3",
"sha256:dae79397f367ac8d7adb6c779813328f6d690943f64b32983e896bcccd18cbad",
"sha256:e3a3a092b8b756c643fe45f4624dbd5a389f770a4ac294cf4d0fce6af86addaf",
"sha256:eb949f56a63c5e134dfdca12091e98ffb5fd446293ebae123d10fc1abad00b9e",
"sha256:f07b69fda20578367eaebbd670ff8fc653ab181e1ff95d84497f9fa20e7d0641",
"sha256:f95cece33329dc4aa3b0e1a771c41075812e46cf3d6e3f1dfe3d91ff09826ed2"
"sha256:257d724c2c9b1660f353b36c802ccece186a30accc7742c176d29c146df6e474",
"sha256:37aae07b029fa0174d39daf02748b379399b909652a806e5708199bd93899da1",
"sha256:415e686e87dbbe6f4cd5ef0fbf764af7b89f9057b97c908742b6008cc554b9c0",
"sha256:48a85f2cb5e6799a9ef05347b476cce6c182d6c71ee36925a6c194d074336ef8",
"sha256:7768a0dbf16a39aa5e9a3ded568bb545c8c2727396d063bbaf847df05b08cd96",
"sha256:7e122b1c4fb252fd85df3ca93578732b4749d9be076593076ef4d07a0233c3e1",
"sha256:88c57dc656038f1ab9f92b3eb5335ee9b021412feaa46330d5eba4e51fe49b04",
"sha256:8e537d281831ad0e71007dcdcbe50a71470b978c453fa41ce77186bbe0ed6021",
"sha256:98e123f1d5cfd42f886624d84464f7756f60ff6eab89ae845210631714f6db94",
"sha256:accf49e151c8ed2c0cdc528691838afd217c50412534e876a19270fea1e28e2d",
"sha256:b1530ae42e9d6d5b670a34db49a94115a64596bc77710b1d05e9801e62ca0a7c",
"sha256:b9176b9832e84308818a99a561e90aa479e73c523b3f77afd07913380ae2eab7",
"sha256:bdde6f877a18f24844e381d45e9947a49e97933573ac9d4345399be37621e26c",
"sha256:be8bef99eb46d5021bf053114442914baeb3649a89dc5f3a555c88737e5e98fc",
"sha256:bf10f7310db693bb62692609b397e8d67257c55f949abde4c67f9cc574492cc7",
"sha256:c872b53057f000085da66a19c55d68f6f8ddcac2642392ad3a355878406fbd4d",
"sha256:d36ed1124bb81b32f8614555b34cc4259c3fbc7eec17870e8ff8ded335b58d8c",
"sha256:da33a1a5e49c4122ccdfd56cd021ff1ebc4a1ec4e2d01594fef9b6f267a9e741",
"sha256:dd1b5a14e417189db4c7b64a6540f31730713d173f0b63e55fabd52d61d8fdce",
"sha256:e151054aa00bad1f4e1f04919542885f89f5f7d086b8a59e5000e6c616896ffb",
"sha256:eaea3008c281f1038edb473c1aa8ed8143a5535ff18f978a318f10302b254063",
"sha256:ef703f83fc32e131e9bcc0a5094cfe85599e7109f896fe8bc96cc402f3eb4b6e"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==24.4.0"
"version": "==24.4.2"
},
"blinker": {
"hashes": [
"sha256:c3f865d4d54db7abc53758a01601cf343fe55b84c1de4e3fa910e420b438d5b9",
"sha256:e6820ff6fa4e4d1d8e2747c2283749c3f547e4fee112b98555cdcdae32996182"
"sha256:5f1cdeff423b77c31b89de0565cd03e5275a03028f44b2b15f912632a58cced6",
"sha256:da44ec748222dcd0105ef975eed946da197d5bdf8bafb6aa92f5bc89da63fa25"
],
"markers": "python_version >= '3.8'",
"version": "==1.7.0"
"version": "==1.8.1"
},
"boto3": {
"hashes": [
"sha256:2824e3dd18743ca50e5b10439d20e74647b1416e8a94509cb30beac92d27a18d",
"sha256:b2e5cb5b95efcc881e25a3bc872d7a24e75ff4e76f368138e4baf7b9d6ee3422"
"sha256:decf52f8d5d8a1b10c9ff2a0e96ee207ed79e33d2e53fdf0880a5cbef70785e0",
"sha256:e836b71d79671270fccac0a4d4c8ec239a6b82ea47c399b64675aa597d0ee63b"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==1.34.90"
"version": "==1.34.95"
},
"boto3-mocking": {
"hashes": [
@ -1299,28 +1437,28 @@
},
"boto3-stubs": {
"hashes": [
"sha256:7361f162523168ddcfb3e0cc70e5208e78f95b9f1f2553032036a2b67ab33355",
"sha256:c82f3db8558e28f766361ba1eea7c77dff735f72fef2a0b9dffaa9c0d9ae76a3"
"sha256:412006b27ee707e9b51a084b02ac92b143af8a3b56727582afec2a76ce93c3b6",
"sha256:4fb5830626de42446c238ca72ca1a53e461281396007fb900edf50ceeb044a10"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==1.34.90"
"version": "==1.34.95"
},
"botocore": {
"hashes": [
"sha256:113cd4c0cb63e13163ccbc2bb13d551be314ba7f8ba5bfab1c51a19ca01aa133",
"sha256:d48f152498e2c60b43ce25b579d26642346a327b6fb2c632d57219e0a4f63392"
"sha256:6bd76a2eadb42b91fa3528392e981ad5b4dfdee3968fa5b904278acf6cbf15ff",
"sha256:ead5823e0dd6751ece5498cb979fd9abf190e691c8833bcac6876fd6ca261fa7"
],
"markers": "python_version >= '3.8'",
"version": "==1.34.90"
"version": "==1.34.95"
},
"botocore-stubs": {
"hashes": [
"sha256:b2d7416b524bce7325aa5fe09bb5e0b6bc9531d4136f4407fa39b6bc58507f34",
"sha256:d9b66542cbb8fbe28eef3c22caf941ae22d36cc1ef55b93fc0b52239457cab57"
"sha256:64d80a3467e3b19939e9c2750af33328b3087f8f524998dbdf7ed168227f507d",
"sha256:b0345f55babd8b901c53804fc5c326a4a0bd2e23e3b71f9ea5d9f7663466e6ba"
],
"markers": "python_version >= '3.8' and python_version < '4.0'",
"version": "==1.34.89"
"version": "==1.34.94"
},
"click": {
"hashes": [
@ -1357,20 +1495,20 @@
},
"django-stubs": {
"hashes": [
"sha256:4cf4de258fa71adc6f2799e983091b9d46cfc67c6eebc68fe111218c9a62b3b8",
"sha256:8ccd2ff4ee5adf22b9e3b7b1a516d2e1c2191e9d94e672c35cc2bc3dd61e0f6b"
"sha256:084484cbe16a6d388e80ec687e46f529d67a232f3befaf55c936b3b476be289d",
"sha256:b8a792bee526d6cab31e197cb414ee7fa218abd931a50948c66a80b3a2548621"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==4.2.7"
"version": "==5.0.0"
},
"django-stubs-ext": {
"hashes": [
"sha256:45a5d102417a412e3606e3c358adb4744988a92b7b58ccf3fd64bddd5d04d14c",
"sha256:519342ac0849cda1559746c9a563f03ff99f636b0ebe7c14b75e816a00dfddc3"
"sha256:5bacfbb498a206d5938454222b843d81da79ea8b6fcd1a59003f529e775bc115",
"sha256:8e1334fdf0c8bff87e25d593b33d4247487338aaed943037826244ff788b56a8"
],
"markers": "python_version >= '3.8'",
"version": "==4.2.7"
"version": "==5.0.0"
},
"django-webtest": {
"hashes": [
@ -1423,37 +1561,37 @@
},
"mypy": {
"hashes": [
"sha256:0235391f1c6f6ce487b23b9dbd1327b4ec33bb93934aa986efe8a9563d9349e6",
"sha256:190da1ee69b427d7efa8aa0d5e5ccd67a4fb04038c380237a0d96829cb157913",
"sha256:2418488264eb41f69cc64a69a745fad4a8f86649af4b1041a4c64ee61fc61129",
"sha256:3a3c007ff3ee90f69cf0a15cbcdf0995749569b86b6d2f327af01fd1b8aee9dc",
"sha256:3cc5da0127e6a478cddd906068496a97a7618a21ce9b54bde5bf7e539c7af974",
"sha256:48533cdd345c3c2e5ef48ba3b0d3880b257b423e7995dada04248725c6f77374",
"sha256:49c87c15aed320de9b438ae7b00c1ac91cd393c1b854c2ce538e2a72d55df150",
"sha256:4d3dbd346cfec7cb98e6cbb6e0f3c23618af826316188d587d1c1bc34f0ede03",
"sha256:571741dc4194b4f82d344b15e8837e8c5fcc462d66d076748142327626a1b6e9",
"sha256:587ce887f75dd9700252a3abbc9c97bbe165a4a630597845c61279cf32dfbf02",
"sha256:5d741d3fc7c4da608764073089e5f58ef6352bedc223ff58f2f038c2c4698a89",
"sha256:5e6061f44f2313b94f920e91b204ec600982961e07a17e0f6cd83371cb23f5c2",
"sha256:61758fabd58ce4b0720ae1e2fea5cfd4431591d6d590b197775329264f86311d",
"sha256:653265f9a2784db65bfca694d1edd23093ce49740b2244cde583aeb134c008f3",
"sha256:68edad3dc7d70f2f17ae4c6c1b9471a56138ca22722487eebacfd1eb5321d612",
"sha256:81a10926e5473c5fc3da8abb04119a1f5811a236dc3a38d92015cb1e6ba4cb9e",
"sha256:85ca5fcc24f0b4aeedc1d02f93707bccc04733f21d41c88334c5482219b1ccb3",
"sha256:a260627a570559181a9ea5de61ac6297aa5af202f06fd7ab093ce74e7181e43e",
"sha256:aceb1db093b04db5cd390821464504111b8ec3e351eb85afd1433490163d60cd",
"sha256:b685154e22e4e9199fc95f298661deea28aaede5ae16ccc8cbb1045e716b3e04",
"sha256:d357423fa57a489e8c47b7c85dfb96698caba13d66e086b412298a1a0ea3b0ed",
"sha256:d4d5ddc13421ba3e2e082a6c2d74c2ddb3979c39b582dacd53dd5d9431237185",
"sha256:e49499be624dead83927e70c756970a0bc8240e9f769389cdf5714b0784ca6bf",
"sha256:e54396d70be04b34f31d2edf3362c1edd023246c82f1730bbf8768c28db5361b",
"sha256:f88566144752999351725ac623471661c9d1cd8caa0134ff98cceeea181789f4",
"sha256:f8a67616990062232ee4c3952f41c779afac41405806042a8126fe96e098419f",
"sha256:fe28657de3bfec596bbeef01cb219833ad9d38dd5393fc649f4b366840baefe6"
"sha256:075cbf81f3e134eadaf247de187bd604748171d6b79736fa9b6c9685b4083061",
"sha256:12b6bfc1b1a66095ab413160a6e520e1dc076a28f3e22f7fb25ba3b000b4ef99",
"sha256:1ec404a7cbe9fc0e92cb0e67f55ce0c025014e26d33e54d9e506a0f2d07fe5de",
"sha256:28d0e038361b45f099cc086d9dd99c15ff14d0188f44ac883010e172ce86c38a",
"sha256:2b0695d605ddcd3eb2f736cd8b4e388288c21e7de85001e9f85df9187f2b50f9",
"sha256:3236a4c8f535a0631f85f5fcdffba71c7feeef76a6002fcba7c1a8e57c8be1ec",
"sha256:3be66771aa5c97602f382230165b856c231d1277c511c9a8dd058be4784472e1",
"sha256:3d087fcbec056c4ee34974da493a826ce316947485cef3901f511848e687c131",
"sha256:3f298531bca95ff615b6e9f2fc0333aae27fa48052903a0ac90215021cdcfa4f",
"sha256:4a2b5cdbb5dd35aa08ea9114436e0d79aceb2f38e32c21684dcf8e24e1e92821",
"sha256:4cf18f9d0efa1b16478c4c129eabec36148032575391095f73cae2e722fcf9d5",
"sha256:8b2cbaca148d0754a54d44121b5825ae71868c7592a53b7292eeb0f3fdae95ee",
"sha256:8f55583b12156c399dce2df7d16f8a5095291354f1e839c252ec6c0611e86e2e",
"sha256:92f93b21c0fe73dc00abf91022234c79d793318b8a96faac147cd579c1671746",
"sha256:9e36fb078cce9904c7989b9693e41cb9711e0600139ce3970c6ef814b6ebc2b2",
"sha256:9fd50226364cd2737351c79807775136b0abe084433b55b2e29181a4c3c878c0",
"sha256:a781f6ad4bab20eef8b65174a57e5203f4be627b46291f4589879bf4e257b97b",
"sha256:a87dbfa85971e8d59c9cc1fcf534efe664d8949e4c0b6b44e8ca548e746a8d53",
"sha256:b808e12113505b97d9023b0b5e0c0705a90571c6feefc6f215c1df9381256e30",
"sha256:bc6ac273b23c6b82da3bb25f4136c4fd42665f17f2cd850771cb600bdd2ebeda",
"sha256:cd777b780312ddb135bceb9bc8722a73ec95e042f911cc279e2ec3c667076051",
"sha256:da1cbf08fb3b851ab3b9523a884c232774008267b1f83371ace57f412fe308c2",
"sha256:e22e1527dc3d4aa94311d246b59e47f6455b8729f4968765ac1eacf9a4760bc7",
"sha256:f8c083976eb530019175aabadb60921e73b4f45736760826aa1689dda8208aee",
"sha256:f90cff89eea89273727d8783fef5d4a934be2fdca11b47def50cf5d311aff727",
"sha256:fa7ef5244615a2523b56c034becde4e9e3f9b034854c93639adb667ec9ec2976",
"sha256:fcfc70599efde5c67862a07a1aaf50e55bce629ace26bb19dc17cece5dd31ca4"
],
"index": "pypi",
"markers": "python_version >= '3.8'",
"version": "==1.9.0"
"version": "==1.10.0"
},
"mypy-extensions": {
"hashes": [
@ -1589,7 +1727,6 @@
"sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d",
"sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"
],
"markers": "python_version >= '3.6'",
"version": "==6.0.1"
},
"rich": {
@ -1665,14 +1802,6 @@
"markers": "python_version >= '3.7'",
"version": "==5.3.0.7"
},
"types-pytz": {
"hashes": [
"sha256:6810c8a1f68f21fdf0f4f374a432487c77645a0ac0b31de4bf4690cf21ad3981",
"sha256:8335d443310e2db7b74e007414e74c4f53b67452c0cb0d228ca359ccfba59659"
],
"markers": "python_version >= '3.8'",
"version": "==2024.1.0.20240417"
},
"types-pyyaml": {
"hashes": [
"sha256:a9e0f0f88dc835739b0c1ca51ee90d04ca2a897a71af79de9aec5f38cb0a5342",

View file

@ -65,13 +65,31 @@ class OpenIdConnectBackend(ModelBackend):
return user
def update_existing_user(self, user, kwargs):
"""Update other fields without overwriting first_name and last_name.
Overwrite first_name and last_name if not empty string"""
"""
Update user fields without overwriting certain fields.
Args:
user: User object to be updated.
kwargs: Dictionary containing fields to update and their new values.
Note:
This method updates user fields while preserving the values of 'first_name',
'last_name', and 'phone'. Those three fields are only overwritten when the
provided value is a non-empty string.
"""
fields_to_check = ["first_name", "last_name", "phone"]
# Iterate over fields to update
for key, value in kwargs.items():
# Check if the key is not first_name or last_name or value is not empty string
if key not in ["first_name", "last_name"] or value:
# Update the field if it is not one of the preserved fields,
# or if it is a preserved field and the provided value is not empty
if key not in fields_to_check or (key in fields_to_check and value):
# Update the corresponding attribute of the user object
setattr(user, key, value)
# Save the user object with the updated fields
user.save()
def clean_username(self, username):

View file

@ -50,15 +50,21 @@ class OpenIdConnectBackendTestCase(TestCase):
self.assertEqual(user.email, "john.doe@example.com")
self.assertEqual(user.phone, "123456789")
def test_authenticate_with_existing_user_no_name(self):
def test_authenticate_with_existing_user_with_existing_first_last_phone(self):
"""Test that authenticate updates an existing user if it finds one.
For this test, given_name and family_name are not supplied"""
For this test, given_name, family_name, and phone are not supplied.
The existing user's first name, last name, and phone number are not overwritten."""
# Create an existing user with the same username and with first and last names
existing_user = User.objects.create_user(username="test_user", first_name="John", last_name="Doe")
existing_user = User.objects.create_user(
username="test_user", first_name="WillNotBe", last_name="Replaced", phone="9999999999"
)
# Remove given_name and family_name from the input, self.kwargs
self.kwargs.pop("given_name", None)
self.kwargs.pop("family_name", None)
self.kwargs.pop("phone", None)
# Ensure that the authenticate method updates the existing user
# and preserves existing first and last names
@ -68,16 +74,18 @@ class OpenIdConnectBackendTestCase(TestCase):
self.assertEqual(user, existing_user) # The same user instance should be returned
# Verify that user fields are correctly updated
self.assertEqual(user.first_name, "John")
self.assertEqual(user.last_name, "Doe")
self.assertEqual(user.first_name, "WillNotBe")
self.assertEqual(user.last_name, "Replaced")
self.assertEqual(user.email, "john.doe@example.com")
self.assertEqual(user.phone, "123456789")
self.assertEqual(user.phone, "9999999999")
def test_authenticate_with_existing_user_different_name(self):
def test_authenticate_with_existing_user_different_name_phone(self):
"""Test that authenticate updates an existing user if it finds one.
For this test, given_name, family_name, and phone are supplied and overwrite the existing values"""
# Create an existing user with the same username and with first and last names
existing_user = User.objects.create_user(username="test_user", first_name="WillBe", last_name="Replaced")
existing_user = User.objects.create_user(
username="test_user", first_name="WillBe", last_name="Replaced", phone="987654321"
)
# Ensure that the authenticate method updates the existing user
# and overwrites the existing first and last names

View file

@ -429,6 +429,10 @@ class ViewsTest(TestCase):
# Create a mock request
request = self.factory.get("/some-url")
request.session = {"acr_value": ""}
# Mock user and its attributes
mock_user = MagicMock()
mock_user.is_authenticated = True
request.user = mock_user
# Ensure that the CLIENT instance used in login_callback is the mock
# patch _requires_step_up_auth to return False
with patch("djangooidc.views._requires_step_up_auth", return_value=False), patch(

View file

@ -1,4 +1,3 @@
version: "3.0"
services:
app:
build: .
@ -67,8 +66,8 @@ services:
# command: "python"
command: >
bash -c " python manage.py migrate &&
python manage.py load &&
python manage.py createcachetable &&
python manage.py load &&
python manage.py runserver 0.0.0.0:8080"
db:

View file

@ -3,10 +3,10 @@ from dateutil.tz import tzlocal # type: ignore
from unittest.mock import MagicMock, patch
from pathlib import Path
from django.test import TestCase
from api.tests.common import less_console_noise_decorator
from gevent.exceptions import ConcurrentObjectUseError
from epplibwrapper.client import EPPLibWrapper
from epplibwrapper.errors import RegistryError, LoginError
from .common import less_console_noise
import logging
try:
@ -24,99 +24,101 @@ logger = logging.getLogger(__name__)
class TestClient(TestCase):
"""Test the EPPlibwrapper client"""
@less_console_noise_decorator
def fake_result(self, code, msg):
"""Helper function to create a fake Result object"""
return Result(code=code, msg=msg, res_data=[], cl_tr_id="cl_tr_id", sv_tr_id="sv_tr_id")
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_initialize_client_success(self, mock_client):
"""Test when the initialize_client is successful"""
with less_console_noise():
# Mock the Client instance and its methods
mock_connect = MagicMock()
# Create a mock Result instance
mock_result = MagicMock(spec=Result)
mock_result.code = 200
mock_result.msg = "Success"
mock_result.res_data = ["data1", "data2"]
mock_result.cl_tr_id = "client_id"
mock_result.sv_tr_id = "server_id"
mock_send = MagicMock(return_value=mock_result)
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
# Mock the Client instance and its methods
mock_connect = MagicMock()
# Create a mock Result instance
mock_result = MagicMock(spec=Result)
mock_result.code = 200
mock_result.msg = "Success"
mock_result.res_data = ["data1", "data2"]
mock_result.cl_tr_id = "client_id"
mock_result.sv_tr_id = "server_id"
mock_send = MagicMock(return_value=mock_result)
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
# Assert that connect method is called once
mock_connect.assert_called_once()
# Assert that _client is not None after initialization
self.assertIsNotNone(wrapper._client)
# Assert that connect method is called once
mock_connect.assert_called_once()
# Assert that _client is not None after initialization
self.assertIsNotNone(wrapper._client)
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_initialize_client_transport_error(self, mock_client):
"""Test when the send(login) step of initialize_client raises a TransportError."""
with less_console_noise():
# Mock the Client instance and its methods
mock_connect = MagicMock()
mock_send = MagicMock(side_effect=TransportError("Transport error"))
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
# Mock the Client instance and its methods
mock_connect = MagicMock()
mock_send = MagicMock(side_effect=TransportError("Transport error"))
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_initialize_client_login_error(self, mock_client):
"""Test when the send(login) step of initialize_client returns (2400) comamnd failed code."""
with less_console_noise():
# Mock the Client instance and its methods
mock_connect = MagicMock()
# Create a mock Result instance
mock_result = MagicMock(spec=Result)
mock_result.code = 2400
mock_result.msg = "Login failed"
mock_result.res_data = ["data1", "data2"]
mock_result.cl_tr_id = "client_id"
mock_result.sv_tr_id = "server_id"
mock_send = MagicMock(return_value=mock_result)
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
# Mock the Client instance and its methods
mock_connect = MagicMock()
# Create a mock Result instance
mock_result = MagicMock(spec=Result)
mock_result.code = 2400
mock_result.msg = "Login failed"
mock_result.res_data = ["data1", "data2"]
mock_result.cl_tr_id = "client_id"
mock_result.sv_tr_id = "server_id"
mock_send = MagicMock(return_value=mock_result)
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
with self.assertRaises(LoginError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
with self.assertRaises(LoginError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_initialize_client_unknown_exception(self, mock_client):
"""Test when the send(login) step of initialize_client raises an unexpected Exception."""
with less_console_noise():
# Mock the Client instance and its methods
mock_connect = MagicMock()
mock_send = MagicMock(side_effect=Exception("Unknown exception"))
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
# Mock the Client instance and its methods
mock_connect = MagicMock()
mock_send = MagicMock(side_effect=Exception("Unknown exception"))
mock_client.return_value.connect = mock_connect
mock_client.return_value.send = mock_send
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
# if functioning as expected, initial __init__ should except
# and log any Exception raised
wrapper = EPPLibWrapper()
# so call _initialize_client a second time directly to test
# the raised exception
wrapper._initialize_client()
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_initialize_client_fails_recovers_with_send_command(self, mock_client):
"""Test when the initialize_client fails on the connect() step. And then a subsequent
@ -126,56 +128,56 @@ class TestClient(TestCase):
Initialization step fails at app init
Send command fails (with 2400 code) prompting retry
Client closes and re-initializes, and command is sent successfully"""
with less_console_noise():
# Mock the Client instance and its methods
# close() should return successfully
mock_close = MagicMock()
mock_client.return_value.close = mock_close
# Create success and failure results
command_success_result = self.fake_result(1000, "Command completed successfully")
command_failure_result = self.fake_result(2400, "Command failed")
# side_effect for the connect() calls
# first connect() should raise an Exception
# subsequent connect() calls should return success
connect_call_count = 0
# Mock the Client instance and its methods
# close() should return successfully
mock_close = MagicMock()
mock_client.return_value.close = mock_close
# Create success and failure results
command_success_result = self.fake_result(1000, "Command completed successfully")
command_failure_result = self.fake_result(2400, "Command failed")
# side_effect for the connect() calls
# first connect() should raise an Exception
# subsequent connect() calls should return success
connect_call_count = 0
def connect_side_effect(*args, **kwargs):
nonlocal connect_call_count
connect_call_count += 1
if connect_call_count == 1:
raise Exception("Connection failed")
else:
return command_success_result
def connect_side_effect(*args, **kwargs):
nonlocal connect_call_count
connect_call_count += 1
if connect_call_count == 1:
raise Exception("Connection failed")
else:
return command_success_result
mock_connect = MagicMock(side_effect=connect_side_effect)
mock_client.return_value.connect = mock_connect
# side_effect for the send() calls
# first send will be the send("InfoDomainCommand") and should fail
# subsequent send() calls should return success
send_call_count = 0
mock_connect = MagicMock(side_effect=connect_side_effect)
mock_client.return_value.connect = mock_connect
# side_effect for the send() calls
# first send will be the send("InfoDomainCommand") and should fail
# subsequent send() calls should return success
send_call_count = 0
def send_side_effect(*args, **kwargs):
nonlocal send_call_count
send_call_count += 1
if send_call_count == 1:
return command_failure_result
else:
return command_success_result
def send_side_effect(*args, **kwargs):
nonlocal send_call_count
send_call_count += 1
if send_call_count == 1:
return command_failure_result
else:
return command_success_result
mock_send = MagicMock(side_effect=send_side_effect)
mock_client.return_value.send = mock_send
# Create EPPLibWrapper instance and call send command
wrapper = EPPLibWrapper()
wrapper.send("InfoDomainCommand", cleaned=True)
# two connect() calls should be made, the initial failed connect()
# and the successful connect() during retry()
self.assertEquals(mock_connect.call_count, 2)
# close() should only be called once, during retry()
mock_close.assert_called_once()
# send called 4 times: failed send("InfoDomainCommand"), passed send(logout),
# passed send(login), passed send("InfoDomainCommand")
self.assertEquals(mock_send.call_count, 4)
mock_send = MagicMock(side_effect=send_side_effect)
mock_client.return_value.send = mock_send
# Create EPPLibWrapper instance and call send command
wrapper = EPPLibWrapper()
wrapper.send("InfoDomainCommand", cleaned=True)
# two connect() calls should be made, the initial failed connect()
# and the successful connect() during retry()
self.assertEquals(mock_connect.call_count, 2)
# close() should only be called once, during retry()
mock_close.assert_called_once()
# send called 4 times: failed send("InfoDomainCommand"), passed send(logout),
# passed send(login), passed send("InfoDomainCommand")
self.assertEquals(mock_send.call_count, 4)
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_send_command_failed_retries_and_fails_again(self, mock_client):
"""Test when the send("InfoDomainCommand) call fails with a 2400, prompting a retry
@ -185,42 +187,42 @@ class TestClient(TestCase):
Initialization succeeds
Send command fails (with 2400 code) prompting retry
Client closes and re-initializes, and command fails again with 2400"""
with less_console_noise():
# Mock the Client instance and its methods
# connect() and close() should succeed throughout
mock_connect = MagicMock()
mock_close = MagicMock()
# Create a mock Result instance
send_command_success_result = self.fake_result(1000, "Command completed successfully")
send_command_failure_result = self.fake_result(2400, "Command failed")
# Mock the Client instance and its methods
# connect() and close() should succeed throughout
mock_connect = MagicMock()
mock_close = MagicMock()
# Create a mock Result instance
send_command_success_result = self.fake_result(1000, "Command completed successfully")
send_command_failure_result = self.fake_result(2400, "Command failed")
# side_effect for send command, passes for all other sends (login, logout), but
# fails for send("InfoDomainCommand")
def side_effect(*args, **kwargs):
if args[0] == "InfoDomainCommand":
return send_command_failure_result
else:
return send_command_success_result
# side_effect for send command, passes for all other sends (login, logout), but
# fails for send("InfoDomainCommand")
def side_effect(*args, **kwargs):
if args[0] == "InfoDomainCommand":
return send_command_failure_result
else:
return send_command_success_result
mock_send = MagicMock(side_effect=side_effect)
mock_client.return_value.connect = mock_connect
mock_client.return_value.close = mock_close
mock_client.return_value.send = mock_send
mock_send = MagicMock(side_effect=side_effect)
mock_client.return_value.connect = mock_connect
mock_client.return_value.close = mock_close
mock_client.return_value.send = mock_send
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
# call send, which should throw a RegistryError (after retry)
wrapper.send("InfoDomainCommand", cleaned=True)
# connect() should be called twice, once during initialization, second time
# during retry
self.assertEquals(mock_connect.call_count, 2)
# close() is called once during retry
mock_close.assert_called_once()
# send() is called 5 times: send(login), send(command) fails, send(logout)
# send(login), send(command)
self.assertEquals(mock_send.call_count, 5)
with self.assertRaises(RegistryError):
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
# call send, which should throw a RegistryError (after retry)
wrapper.send("InfoDomainCommand", cleaned=True)
# connect() should be called twice, once during initialization, second time
# during retry
self.assertEquals(mock_connect.call_count, 2)
# close() is called once during retry
mock_close.assert_called_once()
# send() is called 5 times: send(login), send(command) fails, send(logout)
# send(login), send(command)
self.assertEquals(mock_send.call_count, 5)
@less_console_noise_decorator
@patch("epplibwrapper.client.Client")
def test_send_command_failure_prompts_successful_retry(self, mock_client):
"""Test when the send("InfoDomainCommand) call fails with a 2400, prompting a retry
@ -229,40 +231,40 @@ class TestClient(TestCase):
Initialization succeeds
Send command fails (with 2400 code) prompting retry
Client closes and re-initializes, and command succeeds"""
with less_console_noise():
# Mock the Client instance and its methods
# connect() and close() should succeed throughout
mock_connect = MagicMock()
mock_close = MagicMock()
# create success and failure result messages
send_command_success_result = self.fake_result(1000, "Command completed successfully")
send_command_failure_result = self.fake_result(2400, "Command failed")
# side_effect for send call, initial send(login) succeeds during initialization, next send(command)
# fails, subsequent sends (logout, login, command) all succeed
send_call_count = 0
# Mock the Client instance and its methods
# connect() and close() should succeed throughout
mock_connect = MagicMock()
mock_close = MagicMock()
# create success and failure result messages
send_command_success_result = self.fake_result(1000, "Command completed successfully")
send_command_failure_result = self.fake_result(2400, "Command failed")
# side_effect for send call, initial send(login) succeeds during initialization, next send(command)
# fails, subsequent sends (logout, login, command) all succeed
send_call_count = 0
def side_effect(*args, **kwargs):
nonlocal send_call_count
send_call_count += 1
if send_call_count == 2:
return send_command_failure_result
else:
return send_command_success_result
def side_effect(*args, **kwargs):
nonlocal send_call_count
send_call_count += 1
if send_call_count == 2:
return send_command_failure_result
else:
return send_command_success_result
mock_send = MagicMock(side_effect=side_effect)
mock_client.return_value.connect = mock_connect
mock_client.return_value.close = mock_close
mock_client.return_value.send = mock_send
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
wrapper.send("InfoDomainCommand", cleaned=True)
# connect() is called twice, once during initialization of app, once during retry
self.assertEquals(mock_connect.call_count, 2)
# close() is called once, during retry
mock_close.assert_called_once()
# send() is called 5 times: send(login), send(command) fail, send(logout), send(login), send(command)
self.assertEquals(mock_send.call_count, 5)
mock_send = MagicMock(side_effect=side_effect)
mock_client.return_value.connect = mock_connect
mock_client.return_value.close = mock_close
mock_client.return_value.send = mock_send
# Create EPPLibWrapper instance and initialize client
wrapper = EPPLibWrapper()
wrapper.send("InfoDomainCommand", cleaned=True)
# connect() is called twice, once during initialization of app, once during retry
self.assertEquals(mock_connect.call_count, 2)
# close() is called once, during retry
mock_close.assert_called_once()
# send() is called 5 times: send(login), send(command) fail, send(logout), send(login), send(command)
self.assertEquals(mock_send.call_count, 5)
@less_console_noise_decorator
def fake_failure_send_concurrent_threads(self, command=None, cleaned=None):
"""
Raises a ConcurrentObjectUseError, which gevent throws when accessing
@ -277,6 +279,7 @@ class TestClient(TestCase):
"""
pass # noqa
@less_console_noise_decorator
def fake_success_send(self, command=None, cleaned=None):
"""
Simulates receiving a success response from EPP.
@ -292,6 +295,7 @@ class TestClient(TestCase):
)
return mock
@less_console_noise_decorator
def fake_info_domain_received(self, command=None, cleaned=None):
"""
Simulates receiving a response by reading from a predefined XML file.
@ -300,6 +304,7 @@ class TestClient(TestCase):
xml = (location).read_bytes()
return xml
@less_console_noise_decorator
def get_fake_epp_result(self):
"""Mimics a return from EPP by returning a dictionary in the same format"""
result = {
@ -338,6 +343,7 @@ class TestClient(TestCase):
}
return result
@less_console_noise_decorator
def test_send_command_close_failure_recovers(self):
"""
Validates the resilience of the connection handling mechanism
@ -350,7 +356,6 @@ class TestClient(TestCase):
- Subsequently, the client re-initializes the connection.
- A retry of the command execution post-reinitialization succeeds.
"""
expected_result = self.get_fake_epp_result()
wrapper = None
# Trigger a retry

View file

@ -33,4 +33,5 @@ exports.init = uswds.init;
exports.compile = uswds.compile;
exports.watch = uswds.watch;
exports.copyAssets = uswds.copyAssets
exports.updateUswds = uswds.updateUswds

View file

@ -1,5 +1,4 @@
FROM docker.io/cimg/node:current-browsers
FROM node:21.7.3
WORKDIR /app
# Install app dependencies
@ -7,6 +6,4 @@ WORKDIR /app
# where available (npm@5+)
COPY --chown=circleci:circleci package*.json ./
RUN npm install -g npm@10.5.0
RUN npm install

6617
src/package-lock.json generated

File diff suppressed because it is too large

View file

@ -3,11 +3,6 @@
"version": "1.0.0",
"description": "========================",
"main": "index.js",
"engines": {
"node": "21.7.3",
"npm": "10.5.0"
},
"engineStrict": true,
"scripts": {
"pa11y-ci": "pa11y-ci",
"test": "echo \"Error: no test specified\" && exit 1"
@ -15,11 +10,11 @@
"author": "",
"license": "ISC",
"dependencies": {
"@uswds/uswds": "^3.3.0",
"@uswds/uswds": "^3.8.1",
"pa11y-ci": "^3.0.1",
"sass": "^1.54.8"
},
"devDependencies": {
"@uswds/compile": "^1.0.0-beta.3"
}
}
}

File diff suppressed because it is too large

View file

@ -5,12 +5,3 @@ class RegistrarConfig(AppConfig):
"""Configure signal handling for our registrar Django application."""
name = "registrar"
def ready(self):
"""Runs when all Django applications have been loaded.
We use it here to load signals that connect related models.
"""
# noqa here because we are importing something to make the signals
# get registered, but not using what we import
from . import signals # noqa

View file

@ -0,0 +1,76 @@
/*
* We will run our own version of
* https://github.com/django/django/blob/195d885ca01b14e3ce9a1881c3b8f7074f953736/django/contrib/admin/static/admin/js/collapse.js
* Works with our fieldset override
*/
'use strict';
{
window.addEventListener('load', function() {
// Add anchor tag for Show/Hide link
const fieldsets = document.querySelectorAll('fieldset.collapse--dgfieldset');
for (const [i, elem] of fieldsets.entries()) {
// Don't hide if fields in this fieldset have errors
if (elem.querySelectorAll('div.errors, ul.errorlist').length === 0) {
elem.classList.add('collapsed');
const button = elem.querySelector('button');
button.id = 'fieldsetcollapser' + i;
button.className = 'collapse-toggle--dgfieldset usa-button usa-button--unstyled';
}
}
// Add toggle to hide/show anchor tag
const toggleFuncDotgov = function(e) {
e.preventDefault();
e.stopPropagation();
const fieldset = this.closest('fieldset');
const spanElement = this.querySelector('span');
const useElement = this.querySelector('use');
if (fieldset.classList.contains('collapsed')) {
// Show
spanElement.textContent = 'Hide details';
useElement.setAttribute('xlink:href', '/public/img/sprite.svg#expand_less');
fieldset.classList.remove('collapsed');
} else {
// Hide
spanElement.textContent = 'Show details';
useElement.setAttribute('xlink:href', '/public/img/sprite.svg#expand_more');
fieldset.classList.add('collapsed');
}
};
document.querySelectorAll('.collapse-toggle--dgfieldset').forEach(function(el) {
el.addEventListener('click', toggleFuncDotgov);
});
});
}
'use strict';
{
window.addEventListener('load', function() {
// Add anchor tag for Show/Hide link
const collapsibleContent = document.querySelectorAll('fieldset.collapse--dgsimple');
for (const [i, elem] of collapsibleContent.entries()) {
const button = elem.closest('div').querySelector('button');
button.id = 'simplecollapser' + i;
}
// Add toggle to hide/show anchor tag
const toggleFuncDotgovSimple = function(e) {
const fieldset = this.closest('div').querySelector('.collapse--dgsimple');
const spanElement = this.querySelector('span');
const useElement = this.querySelector('use');
if (fieldset.classList.contains('collapsed')) {
// Show
spanElement.textContent = 'Hide details';
useElement.setAttribute('xlink:href', '/public/img/sprite.svg#expand_less');
fieldset.classList.remove('collapsed');
} else {
// Hide
spanElement.textContent = 'Show details';
useElement.setAttribute('xlink:href', '/public/img/sprite.svg#expand_more');
fieldset.classList.add('collapsed');
}
};
document.querySelectorAll('.collapse-toggle--dgsimple').forEach(function(el) {
el.addEventListener('click', toggleFuncDotgovSimple);
});
});
}

View file

@ -8,6 +8,25 @@
// <<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>>
// Helper functions.
/**
* Hide element
*
*/
const hideElement = (element) => {
if (element && !element.classList.contains("display-none"))
element.classList.add('display-none');
};
/**
* Show element
*
*/
const showElement = (element) => {
if (element && element.classList.contains("display-none"))
element.classList.remove('display-none');
};
/** Either sets attribute target="_blank" to a given element, or removes it */
function openInNewTab(el, removeAttribute = false){
if(removeAttribute){
@ -17,6 +36,15 @@ function openInNewTab(el, removeAttribute = false){
}
};
// Adds or removes a boolean from our session
function addOrRemoveSessionBoolean(name, add){
if (add) {
sessionStorage.setItem(name, "true");
}else {
sessionStorage.removeItem(name);
}
}
// <<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>><<>>
// Event handlers.
@ -57,6 +85,7 @@ function openInNewTab(el, removeAttribute = false){
createPhantomModalFormButtons();
})();
/** An IIFE for DomainRequest to hook a modal to a dropdown option.
* This intentionally does not interact with createPhantomModalFormButtons()
*/
@ -137,15 +166,52 @@ function openInNewTab(el, removeAttribute = false){
prepareDjangoAdmin();
})();
/** An IIFE for the "Assign to me" button under the investigator field in DomainRequests.
** This field uses the "select2" selector, rather than the default.
** To perform data operations on this - we need to use jQuery rather than vanilla js.
*/
(function (){
let selector = django.jQuery("#id_investigator")
let assignSelfButton = document.querySelector("#investigator__assign_self");
if (!selector || !assignSelfButton) {
return;
}
let currentUserId = assignSelfButton.getAttribute("data-user-id");
let currentUserName = assignSelfButton.getAttribute("data-user-name");
if (!currentUserId || !currentUserName){
console.error("Could not assign current user: no values found.")
return;
}
// Hook a click listener to the "Assign to me" button.
// Logic borrowed from here: https://select2.org/programmatic-control/add-select-clear-items#create-if-not-exists
assignSelfButton.addEventListener("click", function() {
if (selector.find(`option[value='${currentUserId}']`).length) {
// Select the value that is associated with the current user.
selector.val(currentUserId).trigger("change");
} else {
// Create a DOM Option that matches the desired user. Then append it and select it.
let userOption = new Option(currentUserName, currentUserId, true, true);
selector.append(userOption).trigger("change");
}
});
// Listen to any change events, and hide the parent container if investigator has a value.
selector.on('change', function() {
// The parent container has display type flex.
assignSelfButton.parentElement.style.display = this.value === currentUserId ? "none" : "flex";
});
})();
/** An IIFE for pages in DjangoAdmin that use a clipboard button
*/
(function (){
function copyInnerTextToClipboard(elem) {
let text = elem.innerText
navigator.clipboard.writeText(text)
}
function copyToClipboardAndChangeIcon(button) {
// Assuming the input is the previous sibling of the button
let input = button.previousElementSibling;
@ -154,7 +220,7 @@ function openInNewTab(el, removeAttribute = false){
if (input) {
navigator.clipboard.writeText(input.value).then(function() {
// Change the icon to a checkmark on successful copy
let buttonIcon = button.querySelector('.usa-button__clipboard use');
let buttonIcon = button.querySelector('.copy-to-clipboard use');
if (buttonIcon) {
let currentHref = buttonIcon.getAttribute('xlink:href');
let baseHref = currentHref.split('#')[0];
@ -163,21 +229,17 @@ function openInNewTab(el, removeAttribute = false){
buttonIcon.setAttribute('xlink:href', baseHref + '#check');
// Change the button text
nearestSpan = button.querySelector("span")
let nearestSpan = button.querySelector("span")
let original_text = nearestSpan.innerText
nearestSpan.innerText = "Copied to clipboard"
setTimeout(function() {
// Change back to the copy icon
buttonIcon.setAttribute('xlink:href', currentHref);
if (button.classList.contains('usa-button__small-text')) {
nearestSpan.innerText = "Copy email";
} else {
nearestSpan.innerText = "Copy";
}
nearestSpan.innerText = original_text;
}, 2000);
}
}).catch(function(error) {
console.error('Clipboard copy failed', error);
});
@ -185,7 +247,7 @@ function openInNewTab(el, removeAttribute = false){
}
function handleClipboardButtons() {
clipboardButtons = document.querySelectorAll(".usa-button__clipboard")
clipboardButtons = document.querySelectorAll(".copy-to-clipboard")
clipboardButtons.forEach((button) => {
// Handle copying the text to your clipboard,
@ -208,20 +270,7 @@ function openInNewTab(el, removeAttribute = false){
});
}
function handleClipboardLinks() {
let emailButtons = document.querySelectorAll(".usa-button__clipboard-link");
if (emailButtons){
emailButtons.forEach((button) => {
button.addEventListener("click", ()=>{
copyInnerTextToClipboard(button);
})
});
}
}
handleClipboardButtons();
handleClipboardLinks();
})();
@ -235,6 +284,8 @@ function openInNewTab(el, removeAttribute = false){
// "to" select list
checkToListThenInitWidget('id_groups_to', 0);
checkToListThenInitWidget('id_user_permissions_to', 0);
checkToListThenInitWidget('id_portfolio_roles_to', 0);
checkToListThenInitWidget('id_portfolio_additional_permissions_to', 0);
})();
// Function to check for the existence of the "to" select list element in the DOM, and if and when found,
@ -300,42 +351,63 @@ function initializeWidgetOnList(list, parentId) {
*/
(function (){
let rejectionReasonFormGroup = document.querySelector('.field-rejection_reason')
// This is the "action needed reason" field
let actionNeededReasonFormGroup = document.querySelector('.field-action_needed_reason');
// This is the "auto-generated email" field
let actionNeededReasonEmailFormGroup = document.querySelector('.field-action_needed_reason_email')
if (rejectionReasonFormGroup) {
if (rejectionReasonFormGroup && actionNeededReasonFormGroup && actionNeededReasonEmailFormGroup) {
let statusSelect = document.getElementById('id_status')
let isRejected = statusSelect.value == "rejected"
let isActionNeeded = statusSelect.value == "action needed"
// Initial handling of rejectionReasonFormGroup display
if (statusSelect.value != 'rejected')
rejectionReasonFormGroup.style.display = 'none';
showOrHideObject(rejectionReasonFormGroup, show=isRejected)
showOrHideObject(actionNeededReasonFormGroup, show=isActionNeeded)
showOrHideObject(actionNeededReasonEmailFormGroup, show=isActionNeeded)
// Listen to change events and handle rejectionReasonFormGroup display, then save status to session storage
statusSelect.addEventListener('change', function() {
if (statusSelect.value == 'rejected') {
rejectionReasonFormGroup.style.display = 'block';
sessionStorage.removeItem('hideRejectionReason');
} else {
rejectionReasonFormGroup.style.display = 'none';
sessionStorage.setItem('hideRejectionReason', 'true');
}
// Show the rejection reason field if the status is rejected.
// Then track if it's shown or hidden in our session cache.
isRejected = statusSelect.value == "rejected"
showOrHideObject(rejectionReasonFormGroup, show=isRejected)
addOrRemoveSessionBoolean("showRejectionReason", add=isRejected)
isActionNeeded = statusSelect.value == "action needed"
showOrHideObject(actionNeededReasonFormGroup, show=isActionNeeded)
showOrHideObject(actionNeededReasonEmailFormGroup, show=isActionNeeded)
addOrRemoveSessionBoolean("showActionNeededReason", add=isActionNeeded)
});
// Listen to Back/Forward button navigation and handle rejectionReasonFormGroup display based on session storage
// When you navigate using forward/back after changing status but not saving, when you land back on the DA page the
// status select will say (for example) Rejected but the selected option can be something else. To manage the show/hide
// accurately for this edge case, we use cache and test for the back/forward navigation.
const observer = new PerformanceObserver((list) => {
list.getEntries().forEach((entry) => {
if (entry.type === "back_forward") {
let showRejectionReason = sessionStorage.getItem("showRejectionReason") !== null
showOrHideObject(rejectionReasonFormGroup, show=showRejectionReason)
let showActionNeededReason = sessionStorage.getItem("showActionNeededReason") !== null
showOrHideObject(actionNeededReasonFormGroup, show=showActionNeededReason)
showOrHideObject(actionNeededReasonEmailFormGroup, show=isActionNeeded)
}
});
});
observer.observe({ type: "navigation" });
}
// Listen to Back/Forward button navigation and handle rejectionReasonFormGroup display based on session storage
// When you navigate using forward/back after changing status but not saving, when you land back on the DA page the
// status select will say (for example) Rejected but the selected option can be something else. To manage the show/hide
// accurately for this edge case, we use cache and test for the back/forward navigation.
const observer = new PerformanceObserver((list) => {
list.getEntries().forEach((entry) => {
if (entry.type === "back_forward") {
if (sessionStorage.getItem('hideRejectionReason'))
document.querySelector('.field-rejection_reason').style.display = 'none';
else
document.querySelector('.field-rejection_reason').style.display = 'block';
}
});
});
observer.observe({ type: "navigation" });
// Adds or removes the display-none class to object depending on the value of boolean show
function showOrHideObject(object, show){
if (show){
object.classList.remove("display-none");
}else {
object.classList.add("display-none");
}
}
})();
/** An IIFE for toggling the submit bar on domain request forms
@ -394,3 +466,287 @@ function initializeWidgetOnList(list, parentId) {
observer.observe(targetElement);
}
})();
/** An IIFE for toggling the overflow styles on django-admin__model-description (the show more / show less button) */
(function () {
function handleShowMoreButton(toggleButton, descriptionDiv){
// Check the length of the text content in the description div
if (descriptionDiv.textContent.length < 200) {
// Hide the toggle button if text content is less than 200 characters
// This is a little over 160 characters to give us some wiggle room if we
// change the font size marginally.
toggleButton.classList.add('display-none');
} else {
toggleButton.addEventListener('click', function() {
toggleShowMoreButton(toggleButton, descriptionDiv, 'dja__model-description--no-overflow')
});
}
}
function toggleShowMoreButton(toggleButton, descriptionDiv, showMoreClassName){
// Toggle the class on the description div
descriptionDiv.classList.toggle(showMoreClassName);
// Change the button text based on the presence of the class
if (descriptionDiv.classList.contains(showMoreClassName)) {
toggleButton.textContent = 'Show less';
} else {
toggleButton.textContent = 'Show more';
}
}
let toggleButton = document.getElementById('dja-show-more-model-description');
let descriptionDiv = document.querySelector('.dja__model-description');
if (toggleButton && descriptionDiv) {
handleShowMoreButton(toggleButton, descriptionDiv)
}
})();
/** An IIFE that hooks to the show/hide button underneath action needed reason.
* This shows the auto generated email on action needed reason.
*/
(function () {
// Since this is an iife, these vars will be removed from memory afterwards
var actionNeededReasonDropdown = document.querySelector("#id_action_needed_reason");
var actionNeededEmail = document.querySelector("#id_action_needed_reason_email");
var readonlyView = document.querySelector("#action-needed-reason-email-readonly");
let emailWasSent = document.getElementById("action-needed-email-sent");
let actionNeededEmailData = document.getElementById('action-needed-emails-data').textContent;
let actionNeededEmailsJson = JSON.parse(actionNeededEmailData);
const domainRequestId = actionNeededReasonDropdown ? document.querySelector("#domain_request_id").value : null
const emailSentSessionVariableName = `actionNeededEmailSent-${domainRequestId}`;
const oldDropdownValue = actionNeededReasonDropdown ? actionNeededReasonDropdown.value : null;
const oldEmailValue = actionNeededEmailData ? actionNeededEmailData.value : null;
if(actionNeededReasonDropdown && actionNeededEmail && domainRequestId) {
// Add a change listener to dom load
document.addEventListener('DOMContentLoaded', function() {
let reason = actionNeededReasonDropdown.value;
// Handle the session boolean (to enable/disable editing)
if (emailWasSent && emailWasSent.value === "True") {
// An email was sent out - store that information in a session variable
addOrRemoveSessionBoolean(emailSentSessionVariableName, add=true);
}
// Show an editable email field or a readonly one
updateActionNeededEmailDisplay(reason)
});
// Add a change listener to the action needed reason dropdown
actionNeededReasonDropdown.addEventListener("change", function() {
let reason = actionNeededReasonDropdown.value;
let emailBody = reason in actionNeededEmailsJson ? actionNeededEmailsJson[reason] : null;
if (reason && emailBody) {
// Replace the email content
actionNeededEmail.value = emailBody;
// Reset the session object on change since change refreshes the email content.
if (oldDropdownValue !== actionNeededReasonDropdown.value || oldEmailValue !== actionNeededEmail.value) {
let emailSent = sessionStorage.getItem(emailSentSessionVariableName)
if (emailSent !== null){
addOrRemoveSessionBoolean(emailSentSessionVariableName, add=false)
}
}
}
// Show an editable email field or a readonly one
updateActionNeededEmailDisplay(reason)
});
}
// Shows an editable email field or a readonly one.
// If the email doesn't exist or if the reason is "other", display that no email was sent.
// Likewise, if we've sent this email before, we should just display the content.
function updateActionNeededEmailDisplay(reason) {
let emailHasBeenSentBefore = sessionStorage.getItem(emailSentSessionVariableName) !== null;
let collapseableDiv = readonlyView.querySelector(".collapse--dgsimple");
let showMoreButton = document.querySelector("#action_needed_reason_email__show_details");
if ((reason && reason != "other") && !emailHasBeenSentBefore) {
showElement(actionNeededEmail.parentElement)
hideElement(readonlyView)
hideElement(showMoreButton)
} else {
if (!reason || reason === "other") {
collapseableDiv.innerHTML = reason ? "No email will be sent." : "-";
hideElement(showMoreButton)
if (collapseableDiv.classList.contains("collapsed")) {
showMoreButton.click()
}
}else {
showElement(showMoreButton)
}
hideElement(actionNeededEmail.parentElement)
showElement(readonlyView)
}
}
})();
/** An IIFE for copy summary button (appears in DomainRegistry models)
*/
(function (){
const copyButton = document.getElementById('id-copy-to-clipboard-summary');
if (copyButton) {
copyButton.addEventListener('click', function() {
/// Generate a rich HTML summary text and copy to clipboard
//------ Organization Type
const organizationTypeElement = document.getElementById('id_organization_type');
const organizationType = organizationTypeElement.options[organizationTypeElement.selectedIndex].text;
//------ Alternative Domains
const alternativeDomainsDiv = document.querySelector('.form-row.field-alternative_domains .readonly');
const alternativeDomainslinks = alternativeDomainsDiv.querySelectorAll('a');
const alternativeDomains = Array.from(alternativeDomainslinks).map(link => link.textContent);
//------ Existing Websites
const existingWebsitesDiv = document.querySelector('.form-row.field-current_websites .readonly');
const existingWebsiteslinks = existingWebsitesDiv.querySelectorAll('a');
const existingWebsites = Array.from(existingWebsiteslinks).map(link => link.textContent);
//------ Additional Contacts
// 1 - Create a hyperlinks map so we can display contact details and also link to the contact
const otherContactsDiv = document.querySelector('.form-row.field-other_contacts .readonly');
let otherContactLinks = [];
const nameToUrlMap = {};
if (otherContactsDiv) {
otherContactLinks = otherContactsDiv.querySelectorAll('a');
otherContactLinks.forEach(link => {
const name = link.textContent.trim();
const url = link.href;
nameToUrlMap[name] = url;
});
}
// 2 - Iterate through contact details and assemble html for summary
let otherContactsSummary = ""
const bulletList = document.createElement('ul');
// CASE 1 - Contacts are not in a table (this happens if there are only one or two other contacts)
const contacts = document.querySelectorAll('.field-other_contacts .dja-detail-list dd');
if (contacts) {
contacts.forEach(contact => {
// Check if the <dl> element is not empty
const name = contact.querySelector('a#contact_info_name')?.innerText;
const title = contact.querySelector('span#contact_info_title')?.innerText;
const email = contact.querySelector('span#contact_info_email')?.innerText;
const phone = contact.querySelector('span#contact_info_phone')?.innerText;
const url = nameToUrlMap[name] || '#';
// Format the contact information
const listItem = document.createElement('li');
listItem.innerHTML = `<a href="${url}">${name}</a>, ${title}, ${email}, ${phone}`;
bulletList.appendChild(listItem);
});
}
// CASE 2 - Contacts are in a table (this happens if there are more than two contacts)
const otherContactsTable = document.querySelector('.form-row.field-other_contacts table tbody');
if (otherContactsTable) {
const otherContactsRows = otherContactsTable.querySelectorAll('tr');
otherContactsRows.forEach(contactRow => {
// Extract the contact details
const name = contactRow.querySelector('th').textContent.trim();
const title = contactRow.querySelectorAll('td')[0].textContent.trim();
const email = contactRow.querySelectorAll('td')[1].textContent.trim();
const phone = contactRow.querySelectorAll('td')[2].textContent.trim();
const url = nameToUrlMap[name] || '#';
// Format the contact information
const listItem = document.createElement('li');
listItem.innerHTML = `<a href="${url}">${name}</a>, ${title}, ${email}, ${phone}`;
bulletList.appendChild(listItem);
});
}
otherContactsSummary += bulletList.outerHTML
//------ Requested Domains
const requestedDomainElement = document.getElementById('id_requested_domain');
const requestedDomain = requestedDomainElement.options[requestedDomainElement.selectedIndex].text;
//------ Submitter
// Function to extract text by ID and handle missing elements
function extractTextById(id, divElement) {
if (divElement) {
const element = divElement.querySelector(`#${id}`);
return element ? ", " + element.textContent.trim() : '';
}
return '';
}
// Extract the submitter name, title, email, and phone number
const submitterDiv = document.querySelector('.form-row.field-submitter');
const submitterNameElement = document.getElementById('id_submitter');
const submitterName = submitterNameElement.options[submitterNameElement.selectedIndex].text;
const submitterTitle = extractTextById('contact_info_title', submitterDiv);
const submitterEmail = extractTextById('contact_info_email', submitterDiv);
const submitterPhone = extractTextById('contact_info_phone', submitterDiv);
let submitterInfo = `${submitterName}${submitterTitle}${submitterEmail}${submitterPhone}`;
//------ Senior Official
const seniorOfficialDiv = document.querySelector('.form-row.field-senior_official');
const seniorOfficialElement = document.getElementById('id_senior_official');
const seniorOfficialName = seniorOfficialElement.options[seniorOfficialElement.selectedIndex].text;
const seniorOfficialTitle = extractTextById('contact_info_title', seniorOfficialDiv);
const seniorOfficialEmail = extractTextById('contact_info_email', seniorOfficialDiv);
const seniorOfficialPhone = extractTextById('contact_info_phone', seniorOfficialDiv);
let seniorOfficialInfo = `${seniorOfficialName}${seniorOfficialTitle}${seniorOfficialEmail}${seniorOfficialPhone}`;
const html_summary = `<strong>Recommendation:</strong></br>` +
`<strong>Organization Type:</strong> ${organizationType}</br>` +
`<strong>Requested Domain:</strong> ${requestedDomain}</br>` +
`<strong>Current Websites:</strong> ${existingWebsites.join(', ')}</br>` +
`<strong>Rationale:</strong></br>` +
`<strong>Alternative Domains:</strong> ${alternativeDomains.join(', ')}</br>` +
`<strong>Submitter:</strong> ${submitterInfo}</br>` +
`<strong>Senior Official:</strong> ${seniorOfficialInfo}</br>` +
`<strong>Other Employees:</strong> ${otherContactsSummary}</br>`;
//Replace </br> with \n, then strip out all remaining html tags (replace <...> with '')
const plain_summary = html_summary.replace(/<\/br>|<br>/g, '\n').replace(/<\/?[^>]+(>|$)/g, '');
// Create Blobs with the summary content
const html_blob = new Blob([html_summary], { type: 'text/html' });
const plain_blob = new Blob([plain_summary], { type: 'text/plain' });
// Create a ClipboardItem with the Blobs
const clipboardItem = new ClipboardItem({
'text/html': html_blob,
'text/plain': plain_blob
});
// Write the ClipboardItem to the clipboard
navigator.clipboard.write([clipboardItem]).then(() => {
// Change the icon to a checkmark on successful copy
let buttonIcon = copyButton.querySelector('use');
if (buttonIcon) {
let currentHref = buttonIcon.getAttribute('xlink:href');
let baseHref = currentHref.split('#')[0];
// Append the new icon reference
buttonIcon.setAttribute('xlink:href', baseHref + '#check');
// Change the button text
nearestSpan = copyButton.querySelector("span")
original_text = nearestSpan.innerText
nearestSpan.innerText = "Copied to clipboard"
setTimeout(function() {
// Change back to the copy icon
buttonIcon.setAttribute('xlink:href', currentHref);
nearestSpan.innerText = original_text
}, 2000);
}
console.log('Summary copied to clipboard successfully!');
}).catch(err => {
console.error('Failed to copy text: ', err);
});
});
}
})();

File diff suppressed because it is too large.

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,33 @@
@use "uswds-core" as *;
.usa-accordion--select {
display: inline-block;
width: auto;
position: relative;
.usa-accordion__button[aria-expanded=false],
.usa-accordion__button[aria-expanded=false]:hover,
.usa-accordion__button[aria-expanded=true],
.usa-accordion__button[aria-expanded=true]:hover {
background-image: none;
}
.usa-accordion__content {
// Note, width is determined by a custom width class on one of the children
position: absolute;
z-index: 1;
top: 33.88px;
left: 0;
border-radius: 4px;
border: solid 1px color('base-lighter');
padding: units(2) units(2) units(3) units(2);
width: max-content;
}
h2 {
font-size: size('body', 'sm');
}
.usa-button {
width: 100%;
}
.margin-top-0 {
margin-top: 0 !important;
}
}

View file

@ -112,12 +112,26 @@ html[data-theme="light"] {
.change-list .usa-table--borderless thead th,
.change-list .usa-table thead td,
.change-list .usa-table thead th,
.change-form .usa-table,
.change-form .usa-table--striped tbody tr:nth-child(odd) td,
.change-form .usa-table--borderless thead th,
.change-form .usa-table thead td,
.change-form .usa-table thead th,
body.dashboard,
body.change-list,
body.change-form,
.analytics {
color: var(--body-fg);
}
.usa-table td {
background-color: transparent;
}
// Sets darker color on delete page links.
// Remove when dark mode successfully applies to Django delete page.
.delete-confirmation .content a:not(.button) {
color: color('primary');
}
}
// Firefox needs this to be specifically set
@ -127,13 +141,29 @@ html[data-theme="dark"] {
.change-list .usa-table--borderless thead th,
.change-list .usa-table thead td,
.change-list .usa-table thead th,
.change-form .usa-table,
.change-form .usa-table--striped tbody tr:nth-child(odd) td,
.change-form .usa-table--borderless thead th,
.change-form .usa-table thead td,
.change-form .usa-table thead th,
body.dashboard,
body.change-list,
body.change-form {
body.change-form,
.analytics {
color: var(--body-fg);
}
.usa-table td {
background-color: transparent;
}
// Sets darker color on delete page links.
// Remove when dark mode successfully applies to Django delete page.
.delete-confirmation .content a:not(.button) {
color: color('primary');
}
}
#branding h1 a:link, #branding h1 a:visited {
color: var(--primary-fg);
}
@ -156,6 +186,14 @@ div#content > h2 {
margin: units(2) 0 units(1) 0;
}
.module ul.padding-0 {
padding: 0 !important;
}
.module ul.margin-0 {
margin: 0 !important;
}
.change-list {
.usa-table--striped tbody tr:nth-child(odd) td,
.usa-table--striped tbody tr:nth-child(odd) th,
@ -165,6 +203,18 @@ div#content > h2 {
}
}
.change-form {
.usa-table--striped tbody tr:nth-child(odd) td,
.usa-table--striped tbody tr:nth-child(odd) th,
.usa-table td,
.usa-table th {
background-color: transparent;
}
.usa-table td {
border-bottom: 1px solid var(--hairline-color);
}
}
#nav-sidebar {
padding-top: 20px;
}
@ -226,7 +276,7 @@ div#content > h2 {
// in the future
.object-tools li a,
.object-tools p a {
font-family: "Source Sans Pro Web", "Helvetica Neue", Helvetica, Roboto, Arial, sans-serif;
font-family: family('sans');
text-transform: none!important;
font-size: 14px!important;
}
@ -276,7 +326,7 @@ div#content > h2 {
.messagelist_content-list--unstyled {
padding-left: 0;
li {
font-family: "Source Sans Pro Web", "Helvetica Neue", Helvetica, Roboto, Arial, sans-serif;
font-family: family('sans');
font-size: 13.92px!important;
background: none!important;
padding: 0!important;
@ -319,9 +369,6 @@ input.admin-confirm-button {
padding: 10px 8px;
line-height: normal;
}
.usa-icon {
top: 2px;
}
a.button:active, a.button:focus {
text-decoration: none;
}
@ -378,7 +425,6 @@ details.dja-detail-table {
border-top: none;
border-bottom: none;
}
}
@ -398,15 +444,12 @@ address.margin-top-neg-1__detail-list {
}
}
td button.usa-button__clipboard-link, address.dja-address-contact-list {
address.dja-address-contact-list {
font-size: unset;
}
address.dja-address-contact-list {
color: var(--body-quiet-color);
button.usa-button__clipboard-link {
font-size: unset;
}
}
// Mimic the normal label size
@ -415,11 +458,18 @@ address.dja-address-contact-list {
font-size: 0.875rem;
color: var(--body-quiet-color);
}
}
address button.usa-button__clipboard-link, td button.usa-button__clipboard-link {
font-size: 0.875rem !important;
}
// Targets the unstyled buttons in the form
.button--clipboard {
color: var(--link-fg);
}
// Targets the DJA button with a nested icon
button .usa-icon,
.button .usa-icon,
.button--clipboard .usa-icon {
vertical-align: middle;
}
.errors span.select2-selection {
@ -525,32 +575,42 @@ address.dja-address-contact-list {
}
// Collapse button styles for fieldsets
.module.collapse--dotgov {
.module.collapse--dgfieldset {
margin-top: -35px;
padding-top: 0;
border: none;
button {
background: none;
text-transform: none;
}
.collapse-toggle--dgsimple,
.module.collapse--dgfieldset button {
background: none;
text-transform: none;
color: var(--link-fg);
margin-top: 8px;
margin-left: 10px;
span {
text-decoration: underline;
font-size: 13px;
font-feature-settings: "kern";
font-kerning: normal;
line-height: 13px;
font-family: family('sans');
}
&:hover {
color: var(--link-fg);
margin-top: 8px;
margin-left: 10px;
span {
text-decoration: underline;
font-size: 13px;
font-feature-settings: "kern";
font-kerning: normal;
line-height: 13px;
font-family: -apple-system, "system-ui", "Segoe UI", system-ui, Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
svg {
color: var(--link-fg);
}
}
}
.collapse--dotgov.collapsed .collapse-toggle--dotgov {
.collapse--dgfieldset.collapsed .collapse-toggle--dgfieldset {
display: inline-block!important;
* {
display: inline-block;
}
}
.collapse--dgsimple.collapsed {
display: none;
}
.dja-status-list {
border-top: solid 1px var(--border-color);
@ -559,7 +619,7 @@ address.dja-address-contact-list {
padding-top: 10px;
li {
line-height: 1.5;
font-family: "Source Sans Pro Web", "Helvetica Neue", Helvetica, Roboto, Arial, sans-serif !important;
font-family: family('sans');
padding-top: 0;
padding-bottom: 0;
}
@ -604,7 +664,7 @@ address.dja-address-contact-list {
align-items: center;
.usa-button__icon {
.usa-button--icon {
position: absolute;
right: auto;
left: 4px;
@ -616,7 +676,6 @@ address.dja-address-contact-list {
display: inline-flex;
padding-top: 4px;
line-height: 14px;
color: var(--link-fg);
width: max-content;
font-size: unset;
text-decoration: none !important;
@ -659,3 +718,132 @@ form .aligned p.help, form .aligned div.help {
background: var(--primary);
color: var(--header-link-color);
}
div.dja__model-description{
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
p, li {
font-size: medium;
color: var(--secondary);
}
li {
list-style-type: disc;
font-family: Source Sans Pro Web,Helvetica Neue,Helvetica,Roboto,Arial,sans-serif;
}
a, a:link, a:visited {
font-size: medium;
color: color('primary') !important;
}
&.dja__model-description--no-overflow {
display: block;
overflow: auto;
}
}
.import_export_text {
color: var(--secondary);
}
.text-underline {
text-decoration: underline !important;
}
//-- Override some styling for the USWDS summary box (per design guidance for ticket #2055)
.usa-summary-box {
background: #{$dhs-blue-10};
border-color: #{$dhs-blue-30};
max-width: 72ex;
word-wrap: break-word;
}
.usa-summary-box h3 {
color: #{$dhs-blue-60};
}
.module caption, .inline-group h2 {
text-transform: capitalize;
}
.wrapped-button-group {
// This button group has too many items
flex-wrap: wrap;
// Fix a weird spacing issue with USWDS a buttons in DJA
a.button {
padding: 6px 8px 10px 8px;
}
}
.usa-button--dja-link-color {
color: var(--link-fg);
}
.textarea-wrapper {
width: 100%;
max-width: 610px;
}
.dja-readonly-textarea-container {
width: 100%;
textarea {
width: 100%;
max-width: 610px;
resize: none;
cursor: auto;
&::-webkit-scrollbar {
background-color: transparent;
border: none;
width: 12px;
}
// Style the scroll bar handle
&::-webkit-scrollbar-thumb {
background-color: var(--body-fg);
border-radius: 99px;
background-clip: content-box;
border: 3px solid transparent;
}
}
}
.max-full {
width: 100% !important;
}
.thin-border {
background-color: var(--selected-bg);
border: 1px solid var(--border-color);
border-radius: 8px;
label {
padding-top: 0 !important;
}
}
.display-none {
// Many elements in django admin try to override this, so we need !important.
display: none !important;
}
.margin-top-0 {
margin-top: 0 !important;
}
.padding-top-0 {
padding-top: 0 !important;
}
.flex-container {
@media screen and (min-width: 700px) and (max-width: 1150px) {
&.flex-container--mobile-inline {
display: inline !important;
}
}
}

View file

@ -1,4 +1,5 @@
@use "uswds-core" as *;
@use "cisa_colors" as *;
/* Styles for making visible to screen reader / AT users only. */
.sr-only {
@ -28,24 +29,15 @@ body {
#wrapper.dashboard {
background-color: color('primary-lightest');
padding-top: units(5);
}
.usa-logo {
@include at-media(desktop) {
margin-top: units(2);
}
}
.usa-logo__text {
@include typeset('sans', 'xl', 2);
color: color('primary-darker');
padding-top: units(5)!important;
}
.usa-nav__primary {
margin-top:units(1);
#wrapper.dashboard--portfolio {
background-color: color('gray-1');
padding-top: units(4)!important;
}
.section--outlined {
background-color: color('white');
border: 1px solid color('base-lighter');
@ -53,6 +45,10 @@ body {
padding: 0 units(2) units(3);
margin-top: units(3);
&.margin-top-0 {
margin-top: 0;
}
h2 {
color: color('primary-dark');
margin-top: units(2);
@ -66,12 +62,39 @@ body {
@include at-media(mobile-lg) {
margin-top: units(5);
&.margin-top-0 {
margin-top: 0;
}
h2 {
margin-bottom: 0;
}
}
}
.section--outlined__header--no-portfolio {
.section--outlined__search,
.section--outlined__utility-button {
margin-top: units(2);
}
@include at-media(tablet) {
display: flex;
column-gap: units(3);
.section--outlined__search,
.section--outlined__utility-button {
margin-top: 0;
}
.section--outlined__search {
flex-grow: 4;
// Align right
max-width: 383px;
margin-left: auto;
}
}
}
.break-word {
word-break: break-word;
}
@ -98,10 +121,6 @@ footer {
color: color('primary');
}
.usa-identifier__logo {
height: units(7);
}
abbr[title] {
// workaround for underlining abbr element
border-bottom: none;
@ -140,3 +159,36 @@ abbr[title] {
.cursor-pointer {
cursor: pointer;
}
.padding--8-8-9 {
padding: 8px 8px 9px !important;
}
.ellipsis {
display: inline-block;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.ellipsis--23 {
max-width: 23ch;
}
.ellipsis--30 {
max-width: 30ch;
}
.ellipsis--50 {
max-width: 50ch;
}
.vertical-align-middle {
vertical-align: middle;
}
@include at-media(desktop) {
.ellipsis--desktop-50 {
max-width: 50ch;
}
}

View file

@ -1,4 +1,5 @@
@use "uswds-core" as *;
@use "cisa_colors" as *;
/* Make "placeholder" links visually obvious */
a[href$="todo"]::after {
@ -7,11 +8,16 @@ a[href$="todo"]::after {
content: " [link TBD]";
font-style: italic;
}
a.usa-link.usa-link--always-blue {
color: #{$dhs-blue};
}
a.breadcrumb__back {
display:flex;
align-items: center;
margin-bottom: units(2.5);
color: #{$dhs-blue};
&:visited {
color: color('primary');
}
@ -155,3 +161,56 @@ a.usa-button--unstyled:visited {
margin-left: units(2);
}
}
.input-with-edit-button {
svg.usa-icon {
width: 1.5em !important;
height: 1.5em !important;
color: #{$dhs-green};
position: absolute;
}
&.input-with-edit-button__error {
svg.usa-icon {
color: #{$dhs-red};
}
div.readonly-field {
color: #{$dhs-red};
}
}
}
// We need to deviate from some default USWDS styles here
// in this particular case, so we have to override this.
.usa-form .usa-button.readonly-edit-button {
margin-top: 0px !important;
padding-top: 0px !important;
svg {
width: 1.25em !important;
height: 1.25em !important;
}
}
.usa-button--filter {
width: auto;
// For mobile stacking
margin-bottom: units(1);
border: solid 1px color('base-light') !important;
padding: units(1);
color: color('primary-darker') !important;
font-weight: font-weight('normal');
font-size: size('ui', 'xs');
box-shadow: none;
&:hover {
box-shadow: none;
}
}
.usa-icon.usa-icon--big {
margin: 0;
height: 1.5em;
width: 1.5em;
}
.margin-right-neg-4px {
margin-right: -4px;
}

View file

@ -46,6 +46,7 @@ $dhs-gray-10: #fcfdfd;
/*--- Dark Gray ---*/
$dhs-dark-gray-90: #040404;
$dhs-dark-gray-85: #1b1b1b;
$dhs-dark-gray-80: #19191a;
$dhs-dark-gray-70: #2f2f30;
$dhs-dark-gray-60: #444547;

View file

@ -1,4 +1,5 @@
@use "uswds-core" as *;
@use "cisa_colors" as *;
.usa-form .usa-button {
margin-top: units(3);
@ -26,6 +27,34 @@
}
}
.usa-form-editable {
border-top: 2px #{$dhs-dark-gray-15} solid;
.bold-usa-label label.usa-label{
font-weight: bold;
}
&.bold-usa-label label.usa-label{
font-weight: bold;
}
&.usa-form-editable--no-border {
border-top: None;
margin-top: 0px !important;
}
}
.usa-form-editable > .usa-form-group:first-of-type {
margin-top: unset;
}
@media (min-width: 35em) {
.usa-form--largest {
max-width: 35rem;
}
}
.usa-form-group--unstyled-error {
margin-left: 0;
padding-left: 0;
@ -52,4 +81,14 @@ legend.float-left-tablet + button.float-right-tablet {
background-color: var(--body-fg);
color: var(--close-button-hover-bg);
}
}
}
.read-only-label {
font-size: size('body', 'sm');
color: color('primary');
margin-bottom: units(0.5);
}
.read-only-value {
margin-top: units(0);
}

View file

@ -0,0 +1,121 @@
@use "uswds-core" as *;
@use "cisa_colors" as *;
// Define some styles for the .gov header/logo
.usa-logo button {
color: #{$dhs-dark-gray-85};
font-weight: 700;
font-family: family('sans');
font-size: 1.6rem;
line-height: 1.1;
}
.usa-logo button:hover{
color: #{$dhs-dark-gray-85};
}
.usa-header {
.usa-logo {
@include at-media(desktop) {
margin-top: units(2);
}
}
.usa-logo__text {
@include typeset('sans', 'xl', 2);
}
.usa-nav__username {
max-width: 208px;
min-height: units(2);
@include at-media(desktop) {
max-width: 500px;
}
}
.padding-y-0 {
padding-top: 0 !important;
padding-bottom: 0 !important;
}
}
.usa-header--basic {
.usa-logo__text {
color: color('primary-darker');
}
.usa-nav__username {
padding: units(1) units(2);
@include at-media(desktop) {
padding: units(2);
}
}
.usa-nav__primary {
margin-top:units(1);
}
@include at-media(desktop) {
.usa-nav__primary-item:not(:first-child) {
position: relative;
}
.usa-nav__primary-item:not(:first-child)::before {
content: '';
position: absolute;
top: 50%;
left: 0;
width: 0; /* No width since it's a border */
height: 40%;
border-left: solid 1px color('base-light');
transform: translateY(-50%);
}
}
}
.usa-header--extended {
@include at-media(desktop) {
background-color: color('primary-darker');
border-top: solid 1px color('base-light');
border-bottom: solid 1px color('base-lighter');
.usa-logo__text a,
.usa-logo__text button,
.usa-logo__text button:hover {
color: color('white');
}
.usa-nav {
background-color: color('primary-lightest');
}
.usa-nav__primary-item:last-child {
margin-left: auto;
.usa-nav-link {
margin-right: units(-2);
}
}
.usa-nav__primary {
.usa-nav-link,
.usa-nav-link:hover,
.usa-nav-link:active {
color: color('primary');
font-weight: font-weight('normal');
font-size: 16px;
}
.usa-current,
.usa-current:hover,
.usa-current:active {
font-weight: font-weight('bold');
}
}
.usa-nav__secondary {
// I don't know why USWDS has this at 2 rem, which puts it out of alignment
right: 3rem;
color: color('white');
bottom: 4.3rem;
.usa-nav-link,
.usa-nav-link:hover,
.usa-nav-link:active {
font-weight: font-weight('bold');
color: color('primary-lighter');
font-size: 16px;
}
}
> .usa-navbar {
// This is a dangerous override to USWDS, necessary because we have a tooltip on the logo
overflow: visible;
}
}
}

View file

@ -0,0 +1,9 @@
@use "uswds-core" as *;
.usa-banner {
background-color: color('primary-darker');
}
.usa-identifier__logo {
height: units(7);
}

View file

@ -1,21 +1,18 @@
@use "uswds-core" as *;
.dotgov-table {
a {
display: flex;
align-items: flex-start;
color: color('primary');
.dotgov-table a,
.usa-link--icon {
display: flex;
align-items: flex-start;
color: color('primary');
&:visited {
color: color('primary');
}
&:visited {
color: color('primary');
}
}
a {
.usa-icon {
// align icon with x height
margin-top: units(0.5);
margin-right: units(0.5);
}
}

View file

@ -0,0 +1,15 @@
@use "uswds-core" as *;
.usa-pagination {
flex-wrap: wrap;
background-color: transparent;
.usa-current {
background-color: color('base-dark');
}
}
@include at-media(desktop) {
.usa-pagination {
flex-wrap: nowrap;
}
}

View file

@ -25,7 +25,7 @@
}
}
h3.register-form-review-header {
.register-form-review-header {
color: color('primary-dark');
margin-top: units(2);
margin-bottom: 0;

View file

@ -27,7 +27,6 @@
}
td .no-click-outline-and-cursor-help {
outline: none;
cursor: help;
use {
// USWDS has weird interactions with SVGs regarding tooltips,
@ -35,22 +34,6 @@
pointer-events: none;
}
}
// Ticket #1510
// @include at-media('desktop') {
// th:first-child {
// width: 220px;
// }
// th:nth-child(2) {
// width: 175px;
// }
// th:nth-child(3) {
// width: 130px;
// }
// th:nth-child(5) {
// width: 130px;
// }
// }
}
.dotgov-table {
@ -97,46 +80,3 @@
}
}
}
@media (min-width: 1040px){
.dotgov-table__domain-requests {
th:nth-of-type(1) {
width: 200px;
}
th:nth-of-type(2) {
width: 158px;
}
th:nth-of-type(3) {
width: 120px;
}
th:nth-of-type(4) {
width: 95px;
}
th:nth-of-type(5) {
width: 85px;
}
}
}
@media (min-width: 1040px){
.dotgov-table__registered-domains {
th:nth-of-type(1) {
width: 200px;
}
th:nth-of-type(2) {
width: 158px;
}
th:nth-of-type(3) {
width: 215px;
}
th:nth-of-type(4) {
width: 95px;
}
}
}

View file

@ -24,3 +24,7 @@
text-align: center !important;
}
}
#extended-logo .usa-tooltip__body {
font-weight: 400 !important;
}

View file

@ -12,15 +12,19 @@
@forward "typography";
@forward "links";
@forward "lists";
@forward "accordions";
@forward "buttons";
@forward "pagination";
@forward "forms";
@forward "tooltips";
@forward "fieldsets";
@forward "alerts";
@forward "tables";
@forward "sidenav";
@forward "identifier";
@forward "header";
@forward "register-form";
/*--------------------------------------------------
--- Admin ---------------------------------*/
@forward "admin";
@forward "admin";

View file

@ -22,7 +22,6 @@ from base64 import b64decode
from cfenv import AppEnv # type: ignore
from pathlib import Path
from typing import Final
from botocore.config import Config
# # # ###
@ -148,6 +147,10 @@ INSTALLED_APPS = [
"corsheaders",
# library for multiple choice filters in django admin
"django_admin_multiple_choice_list_filter",
# library for export and import of data
"import_export",
# Waffle feature flags
"waffle",
]
# Middleware are routines for processing web requests.
@ -159,7 +162,7 @@ MIDDLEWARE = [
# django-cors-headers: listen to cors responses
"corsheaders.middleware.CorsMiddleware",
# custom middleware to stop caching from CloudFront
"registrar.no_cache_middleware.NoCacheMiddleware",
"registrar.registrar_middleware.NoCacheMiddleware",
# serve static assets in production
"whitenoise.middleware.WhiteNoiseMiddleware",
# provide security enhancements to the request/response cycle
@ -183,6 +186,10 @@ MIDDLEWARE = [
"csp.middleware.CSPMiddleware",
# django-auditlog: obtain the request User for use in logging
"auditlog.middleware.AuditlogMiddleware",
# Used for waffle feature flags
"waffle.middleware.WaffleMiddleware",
"registrar.registrar_middleware.CheckUserProfileMiddleware",
"registrar.registrar_middleware.CheckPortfolioMiddleware",
]
# application object used by Django's built-in servers (e.g. `runserver`)
@ -233,6 +240,10 @@ TEMPLATES = [
"registrar.context_processors.canonical_path",
"registrar.context_processors.is_demo_site",
"registrar.context_processors.is_production",
"registrar.context_processors.org_user_status",
"registrar.context_processors.add_path_to_context",
"registrar.context_processors.add_has_profile_feature_flag_to_context",
"registrar.context_processors.portfolio_permissions",
],
},
},
@ -319,6 +330,17 @@ EMAIL_TIMEOUT = 30
SERVER_EMAIL = "root@get.gov"
# endregion
# region: Waffle feature flags-----------------------------------------------------------###
# If Waffle encounters a reference to a flag that is not in the database, create the flag automagically.
WAFFLE_CREATE_MISSING_FLAGS = True
# The model that will be used to keep track of flags. Extends AbstractUserFlag.
# Used to replace the default flag class (for customization purposes).
WAFFLE_FLAG_MODEL = "registrar.WaffleFlag"
# endregion
# region: Headers-----------------------------------------------------------###
# Content-Security-Policy configuration
@ -642,6 +664,12 @@ ALLOWED_HOSTS = [
"getgov-stable.app.cloud.gov",
"getgov-staging.app.cloud.gov",
"getgov-development.app.cloud.gov",
"getgov-ad.app.cloud.gov",
"getgov-ms.app.cloud.gov",
"getgov-ag.app.cloud.gov",
"getgov-litterbox.app.cloud.gov",
"getgov-hotgov.app.cloud.gov",
"getgov-cb.app.cloud.gov",
"getgov-bob.app.cloud.gov",
"getgov-meoward.app.cloud.gov",
"getgov-backup.app.cloud.gov",
@ -785,6 +813,6 @@ if DEBUG:
# Run:
# cf run-task getgov-<> --wait --command 'python manage.py auditlogmigratejson --traceback' --name auditlogmigratejson
# on our staging and stable, then remove these 2 variables or set to False
AUDITLOG_TWO_STEP_MIGRATION = True
AUDITLOG_TWO_STEP_MIGRATION = False
AUDITLOG_USE_TEXT_CHANGES_IF_JSON_IS_NOT_PRESENT = True
AUDITLOG_USE_TEXT_CHANGES_IF_JSON_IS_NOT_PRESENT = False
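A minimal sketch of how a flag configured by the Waffle settings above might be checked at request time. It reuses the same flag_is_active helper imported elsewhere in this changeset (registrar/context_processors.py); the view name, the flag name "profile_feature", and the response strings are illustrative placeholders, not part of the registrar code.
from waffle.decorators import flag_is_active
from django.http import HttpResponse
def profile_view(request):
    # Illustrative only. Because WAFFLE_CREATE_MISSING_FLAGS = True,
    # "profile_feature" is created automatically the first time it is looked up,
    # using the custom registrar.WaffleFlag model configured above.
    if flag_is_active(request, "profile_feature"):
        return HttpResponse("profile feature enabled")
    return HttpResponse("profile feature disabled")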

View file

@ -9,7 +9,7 @@ from django.urls import include, path
from django.views.generic import RedirectView
from registrar import views
from registrar.views.admin_views import (
from registrar.views.report_views import (
ExportDataDomainsGrowth,
ExportDataFederal,
ExportDataFull,
@ -18,9 +18,13 @@ from registrar.views.admin_views import (
ExportDataType,
ExportDataUnmanagedDomains,
AnalyticsView,
ExportDomainRequestDataFull,
ExportDataTypeUser,
)
from registrar.views.domain_request import Step
from registrar.views.domain_requests_json import get_domain_requests_json
from registrar.views.domains_json import get_domains_json
from registrar.views.utility import always_404
from api.views import available, get_current_federal, get_current_full
@ -40,7 +44,7 @@ for step, view in [
(Step.ORGANIZATION_ELECTION, views.OrganizationElection),
(Step.ORGANIZATION_CONTACT, views.OrganizationContact),
(Step.ABOUT_YOUR_ORGANIZATION, views.AboutYourOrganization),
(Step.AUTHORIZING_OFFICIAL, views.AuthorizingOfficial),
(Step.SENIOR_OFFICIAL, views.SeniorOfficial),
(Step.CURRENT_SITES, views.CurrentSites),
(Step.DOTGOV_DOMAIN, views.DotgovDomain),
(Step.PURPOSE, views.Purpose),
@ -55,6 +59,26 @@ for step, view in [
urlpatterns = [
path("", views.index, name="home"),
path(
"domains/",
views.PortfolioDomainsView.as_view(),
name="domains",
),
path(
"requests/",
views.PortfolioDomainRequestsView.as_view(),
name="domain-requests",
),
path(
"organization/",
views.PortfolioOrganizationView.as_view(),
name="organization",
),
path(
"senior-official/",
views.PortfolioSeniorOfficialView.as_view(),
name="senior-official",
),
path(
"admin/logout/",
RedirectView.as_view(pattern_name="logout", permanent=False),
@ -64,6 +88,11 @@ urlpatterns = [
ExportDataType.as_view(),
name="export_data_type",
),
path(
"admin/analytics/export_data_domain_requests_full/",
ExportDomainRequestDataFull.as_view(),
name="export_data_domain_requests_full",
),
path(
"admin/analytics/export_data_full/",
ExportDataFull.as_view(),
@ -100,6 +129,11 @@ urlpatterns = [
name="analytics",
),
path("admin/", admin.site.urls),
path(
"reports/export_data_type_user/",
ExportDataTypeUser.as_view(),
name="export_data_type_user",
),
path(
"domain-request/<id>/edit/",
views.DomainRequestWizard.as_view(),
@ -164,9 +198,14 @@ urlpatterns = [
name="domain-org-name-address",
),
path(
"domain/<int:pk>/authorizing-official",
views.DomainAuthorizingOfficialView.as_view(),
name="domain-authorizing-official",
"domain/<int:pk>/suborganization",
views.DomainSubOrganizationView.as_view(),
name="domain-suborganization",
),
path(
"domain/<int:pk>/senior-official",
views.DomainSeniorOfficialView.as_view(),
name="domain-senior-official",
),
path(
"domain/<int:pk>/security-email",
@ -178,6 +217,16 @@ urlpatterns = [
views.DomainAddUserView.as_view(),
name="domain-users-add",
),
path(
"finish-profile-setup",
views.FinishProfileSetupView.as_view(),
name="finish-user-profile-setup",
),
path(
"user-profile",
views.UserProfileView.as_view(),
name="user-profile",
),
path(
"invitation/<int:pk>/delete",
views.DomainInvitationDeleteView.as_view(http_method_names=["post"]),
@ -193,6 +242,8 @@ urlpatterns = [
views.DomainDeleteUserView.as_view(http_method_names=["post"]),
name="domain-user-delete",
),
path("get-domains-json/", get_domains_json, name="get_domains_json"),
path("get-domain-requests-json/", get_domain_requests_json, name="get_domain_requests_json"),
]
# Djangooidc strips out context data from that context, so we define a custom error
@ -206,6 +257,7 @@ urlpatterns = [
# Rather than dealing with that, we keep everything centralized in one location.
# This way, we can share a view for djangooidc, and other pages as we see fit.
handler500 = "registrar.views.utility.error_views.custom_500_error_view"
handler403 = "registrar.views.utility.error_views.custom_403_error_view"
# we normally would guard these with `if settings.DEBUG` but tests run with
# DEBUG = False even when these apps have been loaded because settings.DEBUG

View file

@ -1,4 +1,5 @@
from django.conf import settings
from waffle.decorators import flag_is_active
def language_code(request):
@ -36,3 +37,51 @@ def is_demo_site(request):
def is_production(request):
"""Add a boolean if this is our production site."""
return {"IS_PRODUCTION": settings.IS_PRODUCTION}
def org_user_status(request):
if request.user.is_authenticated:
is_org_user = request.user.is_org_user(request)
else:
is_org_user = False
return {
"is_org_user": is_org_user,
}
def add_path_to_context(request):
return {"path": getattr(request, "path", None)}
def add_has_profile_feature_flag_to_context(request):
return {"has_profile_feature_flag": flag_is_active(request, "profile_feature")}
def portfolio_permissions(request):
"""Make portfolio permissions for the request user available in global context"""
try:
if not request.user or not request.user.is_authenticated or not flag_is_active(request, "organization_feature"):
return {
"has_base_portfolio_permission": False,
"has_domains_portfolio_permission": False,
"has_domain_requests_portfolio_permission": False,
"portfolio": None,
"has_organization_feature_flag": False,
}
return {
"has_base_portfolio_permission": request.user.has_base_portfolio_permission(),
"has_domains_portfolio_permission": request.user.has_domains_portfolio_permission(),
"has_domain_requests_portfolio_permission": request.user.has_domain_requests_portfolio_permission(),
"portfolio": request.user.portfolio,
"has_organization_feature_flag": True,
}
except AttributeError:
# Handles cases where request.user might not exist
return {
"has_base_portfolio_permission": False,
"has_domains_portfolio_permission": False,
"has_domain_requests_portfolio_permission": False,
"portfolio": None,
"has_organization_feature_flag": False,
}
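A minimal sketch of the fallback behavior of portfolio_permissions above, assuming a configured Django test environment: an unauthenticated request short-circuits before any flag or portfolio lookup, so every permission key resolves to False and portfolio is None, and templates can read these keys unconditionally. RequestFactory and AnonymousUser are standard Django test utilities; the assertions are illustrative.
from django.contrib.auth.models import AnonymousUser
from django.test import RequestFactory
from registrar.context_processors import portfolio_permissions
# Illustrative only: an anonymous request returns the "all permissions off" context.
request = RequestFactory().get("/")
request.user = AnonymousUser()
context = portfolio_permissions(request)
assert context["has_base_portfolio_permission"] is False
assert context["portfolio"] is None
assert context["has_organization_feature_flag"] is False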

View file

@ -3,13 +3,7 @@ import random
from faker import Faker
from django.db import transaction
from registrar.models import (
User,
DomainRequest,
DraftDomain,
Contact,
Website,
)
from registrar.models import User, DomainRequest, DraftDomain, Contact, Website, FederalAgency
fake = Faker()
logger = logging.getLogger(__name__)
@ -42,7 +36,7 @@ class DomainRequestFixture:
# "purpose": None,
# "anything_else": None,
# "is_policy_acknowledged": None,
# "authorizing_official": None,
# "senior_official": None,
# "submitter": None,
# "other_contacts": [],
# "current_websites": [],
@ -101,12 +95,6 @@ class DomainRequestFixture:
# TODO for a future ticket: Allow for more than just "federal" here
da.generic_org_type = app["generic_org_type"] if "generic_org_type" in app else "federal"
da.federal_agency = (
app["federal_agency"]
if "federal_agency" in app
# Random choice of agency for selects, used as placeholders for testing.
else random.choice(DomainRequest.AGENCIES) # nosec
)
da.submission_date = fake.date()
da.federal_type = (
app["federal_type"]
@ -129,11 +117,11 @@ class DomainRequestFixture:
if not da.investigator:
da.investigator = User.objects.get(username=user.username) if "investigator" in app else None
if not da.authorizing_official:
if "authorizing_official" in app and app["authorizing_official"] is not None:
da.authorizing_official, _ = Contact.objects.get_or_create(**app["authorizing_official"])
if not da.senior_official:
if "senior_official" in app and app["senior_official"] is not None:
da.senior_official, _ = Contact.objects.get_or_create(**app["senior_official"])
else:
da.authorizing_official = Contact.objects.create(**cls.fake_contact())
da.senior_official = Contact.objects.create(**cls.fake_contact())
if not da.submitter:
if "submitter" in app and app["submitter"] is not None:
@ -146,6 +134,13 @@ class DomainRequestFixture:
da.requested_domain, _ = DraftDomain.objects.get_or_create(name=app["requested_domain"])
else:
da.requested_domain = DraftDomain.objects.create(name=cls.fake_dot_gov())
if not da.federal_agency:
if "federal_agency" in app and app["federal_agency"] is not None:
da.federal_agency, _ = FederalAgency.objects.get_or_create(name=app["federal_agency"])
else:
federal_agencies = FederalAgency.objects.all()
# Random choice of agency for selects, used as placeholders for testing.
da.federal_agency = random.choice(federal_agencies) # nosec
@classmethod
def _set_many_to_many_relations(cls, da: DomainRequest, app: dict):

View file

@ -22,6 +22,16 @@ class UserFixture:
"""
ADMINS = [
{
"username": "aad084c3-66cc-4632-80eb-41cdf5c5bcbf",
"first_name": "Aditi",
"last_name": "Green",
},
{
"username": "be17c826-e200-4999-9389-2ded48c43691",
"first_name": "Matthew",
"last_name": "Spence",
},
{
"username": "5f283494-31bd-49b5-b024-a7e7cae00848",
"first_name": "Rachid",
@ -106,9 +116,25 @@ class UserFixture:
"last_name": "Orr",
"email": "riley+320@truss.works",
},
{
"username": "76612d84-66b0-4ae9-9870-81e98b9858b6",
"first_name": "Anna",
"last_name": "Gingle",
"email": "annagingle@truss.works",
},
]
STAFF = [
{
"username": "ffec5987-aa84-411b-a05a-a7ee5cbcde54",
"first_name": "Aditi-Analyst",
"last_name": "Green-Analyst",
},
{
"username": "d6bf296b-fac5-47ff-9c12-f88ccc5c1b99",
"first_name": "Matthew-Analyst",
"last_name": "Spence-Analyst",
},
{
"username": "319c490d-453b-43d9-bc4d-7d6cd8ff6844",
"first_name": "Rachid-Analyst",
@ -194,14 +220,20 @@ class UserFixture:
"last_name": "Orr-Analyst",
"email": "riley+321@truss.works",
},
{
"username": "e1e350b1-cfc1-4753-a6cb-3ae6d912f99c",
"first_name": "Anna-Analyst",
"last_name": "Gingle-Analyst",
"email": "annagingle+analyst@truss.works",
},
]
def load_users(cls, users, group_name):
def load_users(cls, users, group_name, are_superusers=False):
logger.info(f"Going to load {len(users)} users in group {group_name}")
for user_data in users:
try:
user, _ = User.objects.get_or_create(username=user_data["username"])
user.is_superuser = False
user.is_superuser = are_superusers
user.first_name = user_data["first_name"]
user.last_name = user_data["last_name"]
if "email" in user_data:
@ -229,5 +261,5 @@ class UserFixture:
# steps now do not need to close/reopen a db connection,
# instead they share one.
with transaction.atomic():
cls.load_users(cls, cls.ADMINS, "full_access_group")
cls.load_users(cls, cls.ADMINS, "full_access_group", are_superusers=True)
cls.load_users(cls, cls.STAFF, "cisa_analysts_group")

View file

@ -4,9 +4,13 @@ from .domain import (
NameserverFormset,
DomainSecurityEmailForm,
DomainOrgNameAddressForm,
ContactForm,
AuthorizingOfficialContactForm,
UserForm,
SeniorOfficialContactForm,
DomainDnssecForm,
DomainDsdataFormset,
DomainDsdataForm,
DomainSuborganizationForm,
)
from .portfolio import (
PortfolioOrgAddressForm,
)

View file

@ -6,6 +6,7 @@ from django.core.validators import MinValueValidator, MaxValueValidator, RegexVa
from django.forms import formset_factory
from registrar.models import DomainRequest
from phonenumber_field.widgets import RegionalPhoneNumberWidget
from registrar.models.suborganization import Suborganization
from registrar.models.utility.domain_helper import DomainHelper
from registrar.utility.errors import (
NameserverError,
@ -16,7 +17,7 @@ from registrar.utility.errors import (
SecurityEmailErrorCodes,
)
from ..models import Contact, DomainInformation, Domain
from ..models import Contact, DomainInformation, Domain, User
from .common import (
ALGORITHM_CHOICES,
DIGEST_TYPE_CHOICES,
@ -153,6 +154,42 @@ class DomainNameserverForm(forms.Form):
self.add_error("ip", str(e))
class DomainSuborganizationForm(forms.ModelForm):
"""Form for updating the suborganization"""
sub_organization = forms.ModelChoiceField(
queryset=Suborganization.objects.none(),
required=False,
widget=forms.Select(),
)
class Meta:
model = DomainInformation
fields = [
"sub_organization",
]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
portfolio = self.instance.portfolio if self.instance else None
self.fields["sub_organization"].queryset = Suborganization.objects.filter(portfolio=portfolio)
# Set initial value
if self.instance and self.instance.sub_organization:
self.fields["sub_organization"].initial = self.instance.sub_organization
# Set custom form label
self.fields["sub_organization"].label = "Suborganization name"
# Use the combobox rather than the regular select widget
self.fields["sub_organization"].widget.template_name = "django/forms/widgets/combobox.html"
# Set data-default-value attribute
if self.instance and self.instance.sub_organization:
self.fields["sub_organization"].widget.attrs["data-default-value"] = self.instance.sub_organization.pk
class BaseNameserverFormset(forms.BaseFormSet):
def clean(self):
"""
@ -203,6 +240,63 @@ NameserverFormset = formset_factory(
)
class UserForm(forms.ModelForm):
"""Form for updating users."""
email = forms.EmailField(max_length=None)
class Meta:
model = User
fields = ["first_name", "middle_name", "last_name", "title", "email", "phone"]
widgets = {
"first_name": forms.TextInput,
"middle_name": forms.TextInput,
"last_name": forms.TextInput,
"title": forms.TextInput,
"email": forms.EmailInput,
"phone": RegionalPhoneNumberWidget,
}
# the database fields have blank=True so ModelForm doesn't create
# required fields by default. Use this list in __init__ to mark each
# of these fields as required
required = ["first_name", "last_name", "title", "email", "phone"]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# take off maxlength attribute for the phone number field
# which interferes with our input_with_errors template tag
self.fields["phone"].widget.attrs.pop("maxlength", None)
# Define a custom validator for the email field with a custom error message
email_max_length_validator = MaxLengthValidator(320, message="Response must be less than 320 characters.")
self.fields["email"].validators.append(email_max_length_validator)
for field_name in self.required:
self.fields[field_name].required = True
# Set custom form label
self.fields["middle_name"].label = "Middle name (optional)"
# Set custom error messages
self.fields["first_name"].error_messages = {"required": "Enter your first name / given name."}
self.fields["last_name"].error_messages = {"required": "Enter your last name / family name."}
self.fields["title"].error_messages = {
"required": "Enter your title or role in your organization (e.g., Chief Information Officer)"
}
self.fields["email"].error_messages = {
"required": "Enter your email address in the required format, like name@example.com."
}
self.fields["phone"].error_messages["required"] = "Enter your phone number."
self.domainInfo = None
def set_domain_info(self, domainInfo):
"""Set the domain information for the form.
The form instance is associated with the contact itself. In order to access the associated
domain information object, this needs to be set in the form by the view."""
self.domainInfo = domainInfo
class ContactForm(forms.ModelForm):
"""Form for updating contacts."""
@ -260,26 +354,30 @@ class ContactForm(forms.ModelForm):
self.domainInfo = domainInfo
class AuthorizingOfficialContactForm(ContactForm):
"""Form for updating authorizing official contacts."""
class SeniorOfficialContactForm(ContactForm):
"""Form for updating senior official contacts."""
JOIN = "authorizing_official"
JOIN = "senior_official"
full_name = forms.CharField(label="Full name", required=False)
def __init__(self, disable_fields=False, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.instance and self.instance.id:
self.fields["full_name"].initial = self.instance.get_formatted_name()
# Overriding bc phone not required in this form
self.fields["phone"] = forms.IntegerField(required=False)
# Set custom error messages
self.fields["first_name"].error_messages = {
"required": "Enter the first name / given name of your authorizing official."
"required": "Enter the first name / given name of your senior official."
}
self.fields["last_name"].error_messages = {
"required": "Enter the last name / family name of your authorizing official."
"required": "Enter the last name / family name of your senior official."
}
self.fields["title"].error_messages = {
"required": "Enter the title or role your authorizing official has in your \
"required": "Enter the title or role your senior official has in your \
organization (e.g., Chief Information Officer)."
}
self.fields["email"].error_messages = {
@ -290,6 +388,12 @@ class AuthorizingOfficialContactForm(ContactForm):
if disable_fields:
DomainHelper.mass_disable_fields(fields=self.fields, disable_required=True, disable_maxlength=True)
def clean(self):
"""Clean override to remove unused fields"""
cleaned_data = super().clean()
cleaned_data.pop("full_name", None)
return cleaned_data
def save(self, commit=True):
"""
Override the save() method of the BaseModelForm.
@ -306,21 +410,21 @@ class AuthorizingOfficialContactForm(ContactForm):
is_federal = self.domainInfo.generic_org_type == DomainRequest.OrganizationChoices.FEDERAL
is_tribal = self.domainInfo.generic_org_type == DomainRequest.OrganizationChoices.TRIBAL
# Get the Contact object from the db for the Authorizing Official
db_ao = Contact.objects.get(id=self.instance.id)
# Get the Contact object from the db for the Senior Official
db_so = Contact.objects.get(id=self.instance.id)
if (is_federal or is_tribal) and self.has_changed():
# This action should be blocked by the UI, as the text fields are readonly.
# If they get past this point, we forbid it this way.
# This could be malicious, so let's reserve information for the backend only.
raise ValueError("Authorizing Official cannot be modified for federal or tribal domains.")
elif db_ao.has_more_than_one_join("information_authorizing_official"):
# Handle the case where the domain information object is available and the AO Contact
raise ValueError("Senior Official cannot be modified for federal or tribal domains.")
elif db_so.has_more_than_one_join("information_senior_official"):
# Handle the case where the domain information object is available and the SO Contact
# has more than one joined object.
# In this case, create a new Contact, and update the new Contact with form data.
# Then associate with domain information object as the authorizing_official
# Then associate with domain information object as the senior_official
data = dict(self.cleaned_data.items())
self.domainInfo.authorizing_official = Contact.objects.create(**data)
self.domainInfo.senior_official = Contact.objects.create(**data)
self.domainInfo.save()
else:
# If all checks pass, just save normally
@ -385,7 +489,6 @@ class DomainOrgNameAddressForm(forms.ModelForm):
# because for these fields we are creating an individual
# instance of the Select. For the other fields we use the for loop to set
# the class's required attribute to true.
"federal_agency": forms.TextInput,
"organization_name": forms.TextInput,
"address_line1": forms.TextInput,
"address_line2": forms.TextInput,
@ -402,7 +505,7 @@ class DomainOrgNameAddressForm(forms.ModelForm):
# the database fields have blank=True so ModelForm doesn't create
# required fields by default. Use this list in __init__ to mark each
# of these fields as required
required = ["organization_name", "address_line1", "city", "zipcode"]
required = ["organization_name", "address_line1", "city", "state_territory", "zipcode"]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
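The new DomainSuborganizationForm above scopes its sub_organization choices to the portfolio of the DomainInformation record it is bound to. A minimal sketch of binding it from a Django shell (the lookup is hypothetical; the import path follows the forms/__init__.py change above):

from registrar.forms import DomainSuborganizationForm
from registrar.models import DomainInformation

# Hypothetical record; any DomainInformation tied to a portfolio works here.
domain_info = DomainInformation.objects.filter(portfolio__isnull=False).first()  # assumes one exists
form = DomainSuborganizationForm(instance=domain_info)
# __init__ above restricts the queryset to suborganizations of that portfolio.
print(list(form.fields["sub_organization"].queryset))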

View file

@ -13,9 +13,10 @@ from registrar.forms.utility.wizard_form_helper import (
BaseYesNoForm,
BaseDeletableRegistrarForm,
)
from registrar.models import Contact, DomainRequest, DraftDomain, Domain
from registrar.models import Contact, DomainRequest, DraftDomain, Domain, FederalAgency
from registrar.templatetags.url_helpers import public_site_url
from registrar.utility.enums import ValidationReturnType
from registrar.utility.constants import BranchChoices
logger = logging.getLogger(__name__)
@ -67,7 +68,7 @@ class TribalGovernmentForm(RegistrarForm):
class OrganizationFederalForm(RegistrarForm):
federal_type = forms.ChoiceField(
choices=DomainRequest.BranchChoices.choices,
choices=BranchChoices.choices,
widget=forms.RadioSelect,
error_messages={"required": ("Select the part of the federal government your organization is in.")},
)
@ -97,13 +98,16 @@ class OrganizationElectionForm(RegistrarForm):
class OrganizationContactForm(RegistrarForm):
# for federal agencies we also want to know the top-level agency.
federal_agency = forms.ChoiceField(
excluded_agencies = ["gov Administration", "Non-Federal Agency"]
federal_agency = forms.ModelChoiceField(
label="Federal agency",
# not required because this field won't be filled out unless
# it is a federal agency. Use clean to check programmatically
# if it has been filled in when required.
# uncomment to see if modelChoiceField can be an arg later
required=False,
choices=[("", "--Select--")] + DomainRequest.AGENCY_CHOICES,
queryset=FederalAgency.objects.exclude(agency__in=excluded_agencies),
empty_label="--Select--",
)
organization_name = forms.CharField(
label="Organization name",
@ -179,14 +183,14 @@ class AboutYourOrganizationForm(RegistrarForm):
)
class AuthorizingOfficialForm(RegistrarForm):
JOIN = "authorizing_official"
class SeniorOfficialForm(RegistrarForm):
JOIN = "senior_official"
def to_database(self, obj):
if not self.is_valid():
return
contact = getattr(obj, "authorizing_official", None)
if contact is not None and not contact.has_more_than_one_join("authorizing_official"):
contact = getattr(obj, "senior_official", None)
if contact is not None and not contact.has_more_than_one_join("senior_official"):
# if contact exists in the database and is not joined to other entities
super().to_database(contact)
else:
@ -194,27 +198,27 @@ class AuthorizingOfficialForm(RegistrarForm):
# in either case, create a new contact and update it
contact = Contact()
super().to_database(contact)
obj.authorizing_official = contact
obj.senior_official = contact
obj.save()
@classmethod
def from_database(cls, obj):
contact = getattr(obj, "authorizing_official", None)
contact = getattr(obj, "senior_official", None)
return super().from_database(contact)
first_name = forms.CharField(
label="First name / given name",
error_messages={"required": ("Enter the first name / given name of your authorizing official.")},
error_messages={"required": ("Enter the first name / given name of your senior official.")},
)
last_name = forms.CharField(
label="Last name / family name",
error_messages={"required": ("Enter the last name / family name of your authorizing official.")},
error_messages={"required": ("Enter the last name / family name of your senior official.")},
)
title = forms.CharField(
label="Title or role in your organization",
error_messages={
"required": (
"Enter the title or role your authorizing official has in your"
"Enter the title or role your senior official has in your"
" organization (e.g., Chief Information Officer)."
)
},
@ -644,20 +648,27 @@ class NoOtherContactsForm(BaseDeletableRegistrarForm):
class CisaRepresentativeForm(BaseDeletableRegistrarForm):
cisa_representative_first_name = forms.CharField(
label="First name / given name",
error_messages={"required": "Enter the first name / given name of the CISA regional representative."},
)
cisa_representative_last_name = forms.CharField(
label="Last name / family name",
error_messages={"required": "Enter the last name / family name of the CISA regional representative."},
)
cisa_representative_email = forms.EmailField(
required=True,
label="Your representatives email (optional)",
max_length=None,
label="Your representatives email",
required=False,
error_messages={
"invalid": ("Enter your representatives email address in the required format, like name@example.com."),
},
validators=[
MaxLengthValidator(
320,
message="Response must be less than 320 characters.",
)
],
error_messages={
"invalid": ("Enter your email address in the required format, like name@example.com."),
"required": ("Enter the email address of your CISA regional representative."),
},
)
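The cisa_representative_email field above is now optional, drops the hard max_length, and instead enforces a 320-character cap through a validator with a custom message. An isolated sketch of the same pattern, runnable from a Django shell (not the wizard form itself, which needs more request context):

from django import forms
from django.core.validators import MaxLengthValidator

class RepresentativeEmailSketch(forms.Form):
    # Mirrors the field above: optional, no max_length, capped by a validator.
    cisa_representative_email = forms.EmailField(
        required=False,
        max_length=None,
        validators=[MaxLengthValidator(320, message="Response must be less than 320 characters.")],
    )

print(RepresentativeEmailSketch(data={}).is_valid())  # True: the field is optional
print(RepresentativeEmailSketch(data={"cisa_representative_email": "name@example.com"}).is_valid())  # True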

View file

@ -0,0 +1,97 @@
"""Forms for portfolio."""
import logging
from django import forms
from django.core.validators import RegexValidator
from ..models import DomainInformation, Portfolio, SeniorOfficial
logger = logging.getLogger(__name__)
class PortfolioOrgAddressForm(forms.ModelForm):
"""Form for updating the portfolio org mailing address."""
zipcode = forms.CharField(
label="Zip code",
validators=[
RegexValidator(
"^[0-9]{5}(?:-[0-9]{4})?$|^$",
message="Enter a zip code in the required format, like 12345 or 12345-6789.",
)
],
)
class Meta:
model = Portfolio
fields = [
"address_line1",
"address_line2",
"city",
"state_territory",
"zipcode",
# "urbanization",
]
error_messages = {
"address_line1": {"required": "Enter the street address of your organization."},
"city": {"required": "Enter the city where your organization is located."},
"state_territory": {
"required": "Select the state, territory, or military post where your organization is located."
},
}
widgets = {
# We need to set the required attribute for State/territory
# because for these fields we are creating an individual
# instance of the Select. For the other fields we use the for loop to set
# the class's required attribute to true.
"address_line1": forms.TextInput,
"address_line2": forms.TextInput,
"city": forms.TextInput,
"state_territory": forms.Select(
attrs={
"required": True,
},
choices=DomainInformation.StateTerritoryChoices.choices,
),
# "urbanization": forms.TextInput,
}
# the database fields have blank=True so ModelForm doesn't create
# required fields by default. Use this list in __init__ to mark each
# of these fields as required
required = ["address_line1", "city", "state_territory", "zipcode"]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for field_name in self.required:
self.fields[field_name].required = True
self.fields["state_territory"].widget.attrs.pop("maxlength", None)
self.fields["zipcode"].widget.attrs.pop("maxlength", None)
class PortfolioSeniorOfficialForm(forms.ModelForm):
"""
Form for updating the portfolio senior official.
This form is readonly for now.
"""
JOIN = "senior_official"
full_name = forms.CharField(label="Full name", required=False)
class Meta:
model = SeniorOfficial
fields = [
"title",
"email",
]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.instance and self.instance.id:
self.fields["full_name"].initial = self.instance.get_formatted_name()
def clean(self):
"""Clean override to remove unused fields"""
cleaned_data = super().clean()
cleaned_data.pop("full_name", None)
return cleaned_data
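PortfolioOrgAddressForm mirrors the existing DomainOrgNameAddressForm but targets the Portfolio model, with address_line1, city, state_territory, and zipcode marked required. A rough sketch of validating posted address data from a Django shell (the lookup and values are placeholders; the state code assumes the two-letter StateTerritoryChoices values):

from registrar.forms import PortfolioOrgAddressForm
from registrar.models import Portfolio

portfolio = Portfolio.objects.first()  # assumes at least one portfolio exists
form = PortfolioOrgAddressForm(
    instance=portfolio,
    data={
        "address_line1": "1234 Example St.",
        "city": "Arlington",
        "state_territory": "VA",
        "zipcode": "12345",
    },
)
print(form.is_valid())  # True when the zip matches 12345 or 12345-6789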

View file

@ -0,0 +1,99 @@
from django import forms
from registrar.models.user import User
from django.core.validators import MaxLengthValidator
from phonenumber_field.widgets import RegionalPhoneNumberWidget
from registrar.models.utility.domain_helper import DomainHelper
class UserProfileForm(forms.ModelForm):
"""Form for updating user profile."""
redirect = forms.CharField(widget=forms.HiddenInput(), required=False)
class Meta:
model = User
fields = ["first_name", "middle_name", "last_name", "title", "email", "phone"]
widgets = {
"first_name": forms.TextInput,
"middle_name": forms.TextInput,
"last_name": forms.TextInput,
"title": forms.TextInput,
"email": forms.EmailInput,
"phone": RegionalPhoneNumberWidget,
}
# the database fields have blank=True so ModelForm doesn't create
# required fields by default. Use this list in __init__ to mark each
# of these fields as required
required = ["first_name", "last_name", "title", "email", "phone"]
def __init__(self, *args, **kwargs):
"""Override the inerited __init__ method to update the fields."""
super().__init__(*args, **kwargs)
# take off maxlength attribute for the phone number field
# which interferes with our input_with_errors template tag
self.fields["phone"].widget.attrs.pop("maxlength", None)
# Define a custom validator for the email field with a custom error message
email_max_length_validator = MaxLengthValidator(320, message="Response must be less than 320 characters.")
self.fields["email"].validators.append(email_max_length_validator)
for field_name in self.required:
self.fields[field_name].required = True
# Set custom form label
self.fields["first_name"].label = "First name / given name"
self.fields["middle_name"].label = "Middle name (optional)"
self.fields["last_name"].label = "Last name / family name"
self.fields["title"].label = "Title or role in your organization"
self.fields["email"].label = "Organization email"
# Set custom error messages
self.fields["first_name"].error_messages = {"required": "Enter your first name / given name."}
self.fields["last_name"].error_messages = {"required": "Enter your last name / family name."}
self.fields["title"].error_messages = {
"required": "Enter your title or role in your organization (e.g., Chief Information Officer)"
}
self.fields["email"].error_messages = {
"required": "Enter your email address in the required format, like name@example.com."
}
self.fields["phone"].error_messages["required"] = "Enter your phone number."
if self.instance and self.instance.phone:
self.fields["phone"].initial = self.instance.phone.as_national
DomainHelper.disable_field(self.fields["email"], disable_required=True)
class FinishSetupProfileForm(UserProfileForm):
"""Form for updating user profile."""
full_name = forms.CharField(required=False, label="Full name")
def clean(self):
cleaned_data = super().clean()
# Remove the full name property
if "full_name" in cleaned_data:
# Delete the full name element as it's purely decorative.
# We include it as a normal CharField for all the advantages
# and utility that it brings, but we're playing pretend.
del cleaned_data["full_name"]
return cleaned_data
def __init__(self, *args, **kwargs):
"""Override the inerited __init__ method to update the fields."""
super().__init__(*args, **kwargs)
# Set custom form label for email
self.fields["email"].label = "Organization email"
self.fields["title"].label = "Title or role in your organization"
# Define the "full_name" value
full_name = None
if self.instance.first_name and self.instance.last_name:
full_name = self.instance.get_formatted_name()
self.fields["full_name"].initial = full_name

View file

@ -0,0 +1,83 @@
import logging
from django.conf import settings
from django.core.management import BaseCommand
from django.apps import apps
from django.db import transaction
from registrar.management.commands.utility.terminal_helper import TerminalHelper
logger = logging.getLogger(__name__)
class Command(BaseCommand):
help = "Clean tables in database to prepare for import."
def handle(self, **options):
"""Delete all rows from a list of tables"""
if settings.IS_PRODUCTION:
logger.error("clean_tables cannot be run in production")
return
TerminalHelper.prompt_for_execution(
system_exit_on_terminate=True,
info_to_inspect="""
This script will delete all rows from the following tables:
* Contact
* Domain
* DomainInformation
* DomainRequest
* DraftDomain
* FederalAgency
* Host
* HostIp
* PublicContact
* User
* Website
""",
prompt_title="Do you wish to proceed with these changes?",
)
table_names = [
"DomainInformation",
"DomainRequest",
"FederalAgency",
"PublicContact",
"HostIp",
"Host",
"Domain",
"User",
"Contact",
"Website",
"DraftDomain",
]
for table_name in table_names:
self.clean_table(table_name)
def clean_table(self, table_name):
"""Delete all rows in the given table.
Delete in batches to be able to handle large tables"""
try:
# Get the model class dynamically
model = apps.get_model("registrar", table_name)
BATCH_SIZE = 1000
total_deleted = 0
# Get initial batch of primary keys
pks = list(model.objects.values_list("pk", flat=True)[:BATCH_SIZE])
while pks:
# Use a transaction to ensure database integrity
with transaction.atomic():
deleted, _ = model.objects.filter(pk__in=pks).delete()
total_deleted += deleted
logger.debug(f"Deleted {deleted} {table_name}s, total deleted: {total_deleted}")
# Get the next batch of primary keys
pks = list(model.objects.values_list("pk", flat=True)[:BATCH_SIZE])
logger.info(f"Successfully cleaned table {table_name}, deleted {total_deleted} rows")
except LookupError:
logger.error(f"Model for table {table_name} not found.")
except Exception as e:
logger.error(f"Error cleaning table {table_name}: {e}")

View file

@ -1,235 +0,0 @@
import logging
import argparse
import sys
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import (
TerminalColors,
TerminalHelper,
)
from registrar.models.contact import Contact
from registrar.models.user import User
logger = logging.getLogger(__name__)
class Command(BaseCommand):
help = """Copy first and last names from a contact to
a related user if it exists and if its first and last name
properties are null or blank strings."""
# ======================================================
# ===================== ARGUMENTS =====================
# ======================================================
def add_arguments(self, parser):
parser.add_argument("--debug", action=argparse.BooleanOptionalAction)
# ======================================================
# ===================== PRINTING ======================
# ======================================================
def print_debug_mode_statements(self, debug_on: bool):
"""Prints additional terminal statements to indicate if --debug
or --limitParse are in use"""
TerminalHelper.print_conditional(
debug_on,
f"""{TerminalColors.OKCYAN}
----------DEBUG MODE ON----------
Detailed print statements activated.
{TerminalColors.ENDC}
""",
)
def print_summary_of_findings(
self,
skipped_contacts,
eligible_users,
processed_users,
debug_on,
):
"""Prints to terminal a summary of findings from
copying first and last names from contacts to users"""
total_eligible_users = len(eligible_users)
total_skipped_contacts = len(skipped_contacts)
total_processed_users = len(processed_users)
logger.info(
f"""{TerminalColors.OKGREEN}
============= FINISHED ===============
Skipped {total_skipped_contacts} contacts
Found {total_eligible_users} users linked to contacts
Processed {total_processed_users} users
{TerminalColors.ENDC}
""" # noqa
)
# DEBUG:
TerminalHelper.print_conditional(
debug_on,
f"""{TerminalColors.YELLOW}
======= DEBUG OUTPUT =======
Users who have a linked contact:
{eligible_users}
Processed users (users who have a linked contact and a missing first or last name):
{processed_users}
===== SKIPPED CONTACTS =====
{skipped_contacts}
{TerminalColors.ENDC}
""",
)
# ======================================================
# =================== USER =====================
# ======================================================
def update_user(self, contact: Contact, debug_on: bool):
"""Given a contact with a first_name and last_name, find & update an existing
corresponding user if its first_name and last_name are null.
Returns tuple of eligible (is linked to the contact) and processed
(first and last are blank) users.
"""
user_exists = User.objects.filter(contact=contact).exists()
if user_exists:
try:
# ----------------------- UPDATE USER -----------------------
# ---- GET THE USER
eligible_user = User.objects.get(contact=contact)
processed_user = None
# DEBUG:
TerminalHelper.print_conditional(
debug_on,
f"""{TerminalColors.YELLOW}
> Found linked user for contact:
{contact} {contact.email} {contact.first_name} {contact.last_name}
> The linked user is {eligible_user} {eligible_user.username}
{TerminalColors.ENDC}""", # noqa
)
# ---- UPDATE THE USER IF IT DOES NOT HAVE A FIRST AND LAST NAMES
# ---- LET'S KEEP A LIGHT TOUCH
if not eligible_user.first_name and not eligible_user.last_name:
# (expression has type "str | None", variable has type "str | int | Combinable")
# so we'll ignore type
eligible_user.first_name = contact.first_name # type: ignore
eligible_user.last_name = contact.last_name # type: ignore
eligible_user.save()
processed_user = eligible_user
return (
eligible_user,
processed_user,
)
except Exception as error:
logger.warning(
f"""
{TerminalColors.FAIL}
!!! ERROR: An exception occurred in the
User table for the following user:
{contact.email} {contact.first_name} {contact.last_name}
Exception is: {error}
----------TERMINATING----------"""
)
sys.exit()
else:
return None, None
# ======================================================
# ================= PROCESS CONTACTS ==================
# ======================================================
def process_contacts(
self,
debug_on,
skipped_contacts=[],
eligible_users=[],
processed_users=[],
):
for contact in Contact.objects.all():
TerminalHelper.print_conditional(
debug_on,
f"{TerminalColors.OKCYAN}"
"Processing Contact: "
f"{contact.email},"
f" {contact.first_name},"
f" {contact.last_name}"
f"{TerminalColors.ENDC}",
)
# ======================================================
# ====================== USER =======================
(eligible_user, processed_user) = self.update_user(contact, debug_on)
debug_string = ""
if eligible_user:
# ---------------- UPDATED ----------------
eligible_users.append(contact.email)
debug_string = f"eligible user: {eligible_user}"
if processed_user:
processed_users.append(contact.email)
debug_string = f"processed user: {processed_user}"
else:
skipped_contacts.append(contact.email)
debug_string = f"skipped user: {contact.email}"
# DEBUG:
TerminalHelper.print_conditional(
debug_on,
(f"{TerminalColors.OKCYAN} {debug_string} {TerminalColors.ENDC}"),
)
return (
skipped_contacts,
eligible_users,
processed_users,
)
# ======================================================
# ===================== HANDLE ========================
# ======================================================
def handle(
self,
**options,
):
"""Parse entries in Contact table
and update valid corresponding entries in the
User table."""
# grab command line arguments and store locally...
debug_on = options.get("debug")
self.print_debug_mode_statements(debug_on)
logger.info(
f"""{TerminalColors.OKCYAN}
==========================
Beginning Data Transfer
==========================
{TerminalColors.ENDC}"""
)
logger.info(
f"""{TerminalColors.OKCYAN}
========= Adding Domains and Domain Invitations =========
{TerminalColors.ENDC}"""
)
(
skipped_contacts,
eligible_users,
processed_users,
) = self.process_contacts(
debug_on,
)
self.print_summary_of_findings(
skipped_contacts,
eligible_users,
processed_users,
debug_on,
)

View file

@ -1,7 +1,6 @@
"""Generates current-metadata.csv then uploads to S3 + sends email"""
import logging
import os
import pyzipper
from datetime import datetime
@ -9,7 +8,7 @@ from datetime import datetime
from django.core.management import BaseCommand
from django.conf import settings
from registrar.utility import csv_export
from registrar.utility.s3_bucket import S3ClientHelper
from io import StringIO
from ...utility.email import send_templated_email
@ -17,89 +16,101 @@ logger = logging.getLogger(__name__)
class Command(BaseCommand):
"""Emails a encrypted zip file containing a csv of our domains and domain requests"""
help = (
"Generates and uploads a domain-metadata.csv file to our S3 bucket "
"which is based off of all existing Domains."
)
current_date = datetime.now().strftime("%m%d%Y")
def add_arguments(self, parser):
"""Add our two filename arguments."""
parser.add_argument("--directory", default="migrationdata", help="Desired directory")
parser.add_argument(
"--checkpath",
default=True,
help="Flag that determines if we do a check for os.path.exists. Used for test cases",
"--emailTo",
default=settings.DEFAULT_FROM_EMAIL,
help="Defines where we should email this report",
)
def handle(self, **options):
"""Grabs the directory then creates domain-metadata.csv in that directory"""
file_name = "domain-metadata.csv"
# Ensures a slash is added
directory = os.path.join(options.get("directory"), "")
check_path = options.get("checkpath")
zip_filename = f"domain-metadata-{self.current_date}.zip"
email_to = options.get("emailTo")
# Don't email to DEFAULT_FROM_EMAIL when not prod.
if not settings.IS_PRODUCTION and email_to == settings.DEFAULT_FROM_EMAIL:
raise ValueError(
"The --emailTo arg must be specified in non-prod environments, "
"and the arg must not equal the DEFAULT_FROM_EMAIL value (aka: help@get.gov)."
)
logger.info("Generating report...")
try:
self.email_current_metadata_report(directory, file_name, check_path)
self.email_current_metadata_report(zip_filename, email_to)
except Exception as err:
# TODO - #1317: Notify operations when auto report generation fails
raise err
else:
logger.info(f"Success! Created {file_name} and successfully sent out an email!")
logger.info(f"Success! Created {zip_filename} and successfully sent out an email!")
def email_current_metadata_report(self, directory, file_name, check_path):
"""Creates a current-metadata.csv file under the specified directory,
then uploads it to an AWS S3 bucket. This is done for resiliency
reasons in the event our application goes down and/or the email
cannot send -- we'll still be able to grab info from the S3
instance"""
s3_client = S3ClientHelper()
file_path = os.path.join(directory, file_name)
def email_current_metadata_report(self, zip_filename, email_to):
"""Emails a password protected zip containing domain-metadata and domain-request-metadata"""
reports = {
"Domain report": {
"report_filename": f"domain-metadata-{self.current_date}.csv",
"report_function": csv_export.export_data_type_to_csv,
},
"Domain request report": {
"report_filename": f"domain-request-metadata-{self.current_date}.csv",
"report_function": csv_export.DomainRequestExport.export_full_domain_request_report,
},
}
# Generate a file locally for upload
with open(file_path, "w") as file:
csv_export.export_data_type_to_csv(file)
# Set the password equal to our content in SECRET_ENCRYPT_METADATA.
# For local development, this will be "devpwd" unless otherwise set.
# Uncomment these lines if you want to use this:
# override = settings.SECRET_ENCRYPT_METADATA is None and not settings.IS_PRODUCTION
# password = "devpwd" if override else settings.SECRET_ENCRYPT_METADATA
password = settings.SECRET_ENCRYPT_METADATA
if not password:
raise ValueError("No password was specified for this zip file.")
if check_path and not os.path.exists(file_path):
raise FileNotFoundError(f"Could not find newly created file at '{file_path}'")
s3_client.upload_file(file_path, file_name)
# Set zip file name
current_date = datetime.now().strftime("%m%d%Y")
current_filename = f"domain-metadata-{current_date}.zip"
# Pre-set zip file name
encrypted_metadata_output = current_filename
# Set context for the subject
current_date_str = datetime.now().strftime("%Y-%m-%d")
# Encrypt the metadata
encrypted_metadata_in_bytes = self._encrypt_metadata(
s3_client.get_file(file_name), encrypted_metadata_output, str.encode(settings.SECRET_ENCRYPT_METADATA)
)
encrypted_zip_in_bytes = self.get_encrypted_zip(zip_filename, reports, password)
# Send the metadata file that is zipped
send_templated_email(
template_name="emails/metadata_body.txt",
subject_template_name="emails/metadata_subject.txt",
to_address=settings.DEFAULT_FROM_EMAIL,
context={"current_date_str": current_date_str},
attachment_file=encrypted_metadata_in_bytes,
to_address=email_to,
context={"current_date_str": datetime.now().strftime("%Y-%m-%d")},
attachment_file=encrypted_zip_in_bytes,
)
def _encrypt_metadata(self, input_file, output_file, password):
def get_encrypted_zip(self, zip_filename, reports, password):
"""Helper function for encrypting the attachment file"""
current_date = datetime.now().strftime("%m%d%Y")
current_filename = f"domain-metadata-{current_date}.csv"
# Using ZIP_DEFLATED bc it's a more common compression method supported by most zip utilities and faster
# We could also use compression=pyzipper.ZIP_LZMA if we are looking for smaller file size
with pyzipper.AESZipFile(
output_file, "w", compression=pyzipper.ZIP_DEFLATED, encryption=pyzipper.WZ_AES
zip_filename, "w", compression=pyzipper.ZIP_DEFLATED, encryption=pyzipper.WZ_AES
) as f_out:
f_out.setpassword(password)
f_out.writestr(current_filename, input_file)
with open(output_file, "rb") as file_data:
f_out.setpassword(str.encode(password))
for report_name, report in reports.items():
logger.info(f"Generating {report_name}")
report_content = self.write_and_return_report(report["report_function"])
f_out.writestr(report["report_filename"], report_content)
# Get the final report for emailing purposes
with open(zip_filename, "rb") as file_data:
attachment_in_bytes = file_data.read()
return attachment_in_bytes
def write_and_return_report(self, report_function):
"""Writes a report to a StringIO object given a report_function and returns the string."""
report_bytes = StringIO()
report_function(report_bytes)
# Rewind the buffer to the beginning after writing
report_bytes.seek(0)
return report_bytes.read()
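The report command now requires a distinct --emailTo address outside production, and it emails a password-protected zip built from the two CSV exports, assuming SECRET_ENCRYPT_METADATA is set. A sketch of invoking it (the command name is assumed here, since the module name is not shown in this extract):

from django.core.management import call_command

call_command("email_current_metadata_report", emailTo="first.last@example.com")  # assumed command name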

View file

@ -0,0 +1,97 @@
from django.core.paginator import Paginator
import logging
import os
import pyzipper
import tablib
from django.core.management import BaseCommand
import registrar.admin
logger = logging.getLogger(__name__)
class Command(BaseCommand):
help = "Exports tables in csv format to zip file in tmp directory."
def handle(self, **options):
"""Generates CSV files for specified tables and creates a zip archive"""
table_names = [
"User",
"Contact",
"Domain",
"DomainRequest",
"DomainInformation",
"FederalAgency",
"UserDomainRole",
"DraftDomain",
"Website",
"HostIp",
"Host",
"PublicContact",
]
# Ensure the tmp directory exists
os.makedirs("tmp", exist_ok=True)
for table_name in table_names:
self.export_table(table_name)
# Create a zip file containing all the CSV files
zip_filename = "tmp/exported_tables.zip"
with pyzipper.AESZipFile(zip_filename, "w", compression=pyzipper.ZIP_DEFLATED) as zipf:
for table_name in table_names:
# Define the tmp directory and the file pattern
tmp_dir = "tmp"
pattern = f"{table_name}_"
zip_file_path = os.path.join(tmp_dir, "exported_files.zip")
# Find all files that match the pattern
matching_files = [file for file in os.listdir(tmp_dir) if file.startswith(pattern)]
for file_path in matching_files:
# Add each file to the zip archive
zipf.write(f"tmp/{file_path}", os.path.basename(file_path))
logger.info(f"Added {file_path} to {zip_file_path}")
# Remove the file after adding to zip
os.remove(f"tmp/{file_path}")
logger.info(f"Removed {file_path}")
def export_table(self, table_name):
"""Export a given table to csv files in the tmp directory"""
resourcename = f"{table_name}Resource"
try:
resourceclass = getattr(registrar.admin, resourcename)
dataset = resourceclass().export()
if not isinstance(dataset, tablib.Dataset):
raise ValueError(f"Exported data from {resourcename} is not a tablib.Dataset")
# Determine the number of rows per file
rows_per_file = 10000
# Use Paginator to handle splitting the dataset
paginator = Paginator(dataset.dict, rows_per_file)
num_files = paginator.num_pages
logger.info(f"splitting {table_name} into {num_files} files")
# Export each page to a separate file
for page_num in paginator.page_range:
page = paginator.page(page_num)
# Create a new dataset for the chunk
chunk = tablib.Dataset(headers=dataset.headers)
for row_dict in page.object_list:
row = [row_dict[header] for header in dataset.headers]
chunk.append(row)
# Export the chunk to a new file
filename = f"tmp/{table_name}_{page_num}.csv"
with open(filename, "w") as f:
f.write(chunk.export("csv"))
logger.info(f"Successfully exported {table_name} into {num_files} files.")
except AttributeError:
logger.error(f"Resource class {resourcename} not found in registrar.admin")
except Exception as e:
logger.error(f"Failed to export {table_name}: {e}")

View file

@ -50,7 +50,7 @@ class Command(BaseCommand):
# Generate a file locally for upload
with open(file_path, "w") as file:
csv_export.export_data_federal_to_csv(file)
csv_export.DomainDataFederal.export_data_to_csv(file)
if check_path and not os.path.exists(file_path):
raise FileNotFoundError(f"Could not find newly created file at '{file_path}'")

View file

@ -49,7 +49,7 @@ class Command(BaseCommand):
# Generate a file locally for upload
with open(file_path, "w") as file:
csv_export.export_data_full_to_csv(file)
csv_export.DomainDataFull.export_data_to_csv(file)
if check_path and not os.path.exists(file_path):
raise FileNotFoundError(f"Could not find newly created file at '{file_path}'")

View file

@ -0,0 +1,113 @@
import argparse
import logging
import os
import pyzipper
import tablib
from django.apps import apps
from django.conf import settings
from django.db import transaction
from django.core.management import BaseCommand
import registrar.admin
logger = logging.getLogger(__name__)
class Command(BaseCommand):
help = "Imports tables from a zip file, exported_tables.zip, containing CSV files in the tmp directory."
def add_arguments(self, parser):
"""Add command line arguments."""
parser.add_argument("--skipEppSave", default=True, action=argparse.BooleanOptionalAction)
def handle(self, **options):
"""Extracts CSV files from a zip archive and imports them into the respective tables"""
if settings.IS_PRODUCTION:
logger.error("import_tables cannot be run in production")
return
self.skip_epp_save = options.get("skipEppSave")
table_names = [
"User",
"Contact",
"Domain",
"Host",
"HostIp",
"DraftDomain",
"Website",
"FederalAgency",
"DomainRequest",
"DomainInformation",
"UserDomainRole",
"PublicContact",
]
# Ensure the tmp directory exists
os.makedirs("tmp", exist_ok=True)
# Unzip the file
zip_filename = "tmp/exported_tables.zip"
if not os.path.exists(zip_filename):
logger.error(f"Zip file {zip_filename} does not exist.")
return
with pyzipper.AESZipFile(zip_filename, "r") as zipf:
zipf.extractall("tmp")
logger.info(f"Extracted zip file {zip_filename} into tmp directory")
# Import each CSV file
for table_name in table_names:
self.import_table(table_name)
def import_table(self, table_name):
"""Import data from a CSV file into the given table"""
resourcename = f"{table_name}Resource"
# Define the directory and the pattern for csv filenames
tmp_dir = "tmp"
pattern = f"{table_name}_"
resourceclass = getattr(registrar.admin, resourcename)
resource_instance = resourceclass()
# Find all files that match the pattern
matching_files = [file for file in os.listdir(tmp_dir) if file.startswith(pattern)]
for csv_filename in matching_files:
try:
with open(f"tmp/{csv_filename}", "r") as csvfile:
dataset = tablib.Dataset().load(csvfile.read(), format="csv")
result = resource_instance.import_data(dataset, dry_run=False, skip_epp_save=self.skip_epp_save)
if result.has_errors():
logger.error(f"Errors occurred while importing {csv_filename}:")
for row_error in result.row_errors():
row_index = row_error[0]
errors = row_error[1]
for error in errors:
logger.error(f"Row {row_index} - {error.error} - {error.row}")
else:
logger.info(f"Successfully imported {csv_filename} into {table_name}")
except AttributeError:
logger.error(f"Resource class {resourcename} not found in registrar.admin")
except Exception as e:
logger.error(f"Failed to import {csv_filename}: {e}")
finally:
if os.path.exists(csv_filename):
os.remove(csv_filename)
logger.info(f"Removed temporary file {csv_filename}")
def clean_table(self, table_name):
"""Delete all rows in the given table"""
try:
# Get the model class dynamically
model = apps.get_model("registrar", table_name)
# Use a transaction to ensure database integrity
with transaction.atomic():
model.objects.all().delete()
logger.info(f"Successfully cleaned table {table_name}")
except LookupError:
logger.error(f"Model for table {table_name} not found.")
except Exception as e:
logger.error(f"Error cleaning table {table_name}: {e}")

View file

@ -0,0 +1,120 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import TerminalHelper, TerminalColors
from registrar.models import SeniorOfficial, FederalAgency
logger = logging.getLogger(__name__)
class Command(BaseCommand):
help = """Populates the SeniorOfficial table based off of a given csv"""
def add_arguments(self, parser):
"""Add command line arguments."""
parser.add_argument("federal_cio_csv_path", help="A csv containing information about federal CIOs")
def handle(self, federal_cio_csv_path, **kwargs):
"""Populates the SeniorOfficial table with data given to it through a CSV"""
# Check if the provided file path is valid.
if not os.path.isfile(federal_cio_csv_path):
raise argparse.ArgumentTypeError(f"Invalid file path '{federal_cio_csv_path}'")
TerminalHelper.prompt_for_execution(
system_exit_on_terminate=True,
info_to_inspect=f"""
==Proposed Changes==
CSV: {federal_cio_csv_path}
For each item in this CSV, a SeniorOfficial record will be added.
Note:
- If the row is missing SO data - it will not be added.
""", # noqa: W291
prompt_title="Do you wish to load records into the SeniorOfficial table?",
)
logger.info("Updating...")
# Get all existing data.
self.existing_senior_officials = SeniorOfficial.objects.all().prefetch_related("federal_agency")
self.existing_agencies = FederalAgency.objects.all()
# Read the CSV
self.added_senior_officials = []
self.skipped_rows = []
with open(federal_cio_csv_path, "r") as requested_file:
for row in csv.DictReader(requested_file):
# Note: the csv files we have received do not currently have a phone field.
# However, we will include it in our kwargs because that is the data we are mapping to
# and it seems best to check for the data even if it ends up not being there.
so_kwargs = {
"first_name": row.get("First Name"),
"last_name": row.get("Last Name"),
"title": row.get("Role/Position"),
"email": row.get("Email"),
"phone": row.get("Phone"),
}
# Clean the returned data
for key, value in so_kwargs.items():
if isinstance(value, str):
clean_string = value.strip()
if clean_string:
so_kwargs[key] = clean_string
else:
so_kwargs[key] = None
# Handle the federal_agency record separately (db call)
agency_name = row.get("Agency").strip() if row.get("Agency") else None
if agency_name:
so_kwargs["federal_agency"] = self.existing_agencies.filter(agency=agency_name).first()
# Check if at least one field has a non-empty value
if row and any(so_kwargs.values()):
# Split into a function: C901 'Command.handle' is too complex.
# Doesn't add it to the DB, but just inits a class of SeniorOfficial.
self.create_senior_official(so_kwargs)
else:
self.skipped_rows.append(row)
message = f"Skipping row (no data was found): {row}"
TerminalHelper.colorful_logger(logger.warning, TerminalColors.YELLOW, message)
# Bulk create the SO fields
if len(self.added_senior_officials) > 0:
SeniorOfficial.objects.bulk_create(self.added_senior_officials)
added_message = f"Added {len(self.added_senior_officials)} records"
TerminalHelper.colorful_logger(logger.info, TerminalColors.OKBLUE, added_message)
if len(self.skipped_rows) > 0:
skipped_message = f"Skipped {len(self.skipped_rows)} records"
TerminalHelper.colorful_logger(logger.warning, TerminalColors.MAGENTA, skipped_message)
def create_senior_official(self, so_kwargs):
"""Creates a senior official object from kwargs but does not add it to the DB"""
# Create a new SeniorOfficial object
new_so = SeniorOfficial(**so_kwargs)
# Store a variable for the console logger
if all([new_so.first_name, new_so.last_name]):
record_display = new_so
else:
record_display = so_kwargs
# Before adding this record, check to make sure we aren't adding a duplicate.
duplicate_field = self.existing_senior_officials.filter(**so_kwargs).exists()
if not duplicate_field:
self.added_senior_officials.append(new_so)
message = f"Creating record: {record_display}"
TerminalHelper.colorful_logger(logger.info, TerminalColors.OKCYAN, message)
else:
# if this field is a duplicate, don't do anything
self.skipped_rows.append(new_so)
message = f"Skipping add on duplicate record: {record_display}"
TerminalHelper.colorful_logger(logger.warning, TerminalColors.YELLOW, message)
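A sketch of loading the CIO spreadsheet into the SeniorOfficial table (the command name is assumed and the CSV path is a placeholder); the script prompts before writing and bulk-creates only non-duplicate rows:

from django.core.management import call_command

call_command("load_senior_official_table", "tmp/federal-cio-list.csv")  # assumed name and path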

View file

@ -0,0 +1,56 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import TerminalHelper, PopulateScriptTemplate, TerminalColors
from registrar.models import FederalAgency
logger = logging.getLogger(__name__)
class Command(BaseCommand, PopulateScriptTemplate):
help = """Populates the initials and fceb fields for FederalAgencies"""
def add_arguments(self, parser):
"""Add command line arguments."""
parser.add_argument("federal_cio_csv_path", help="A csv containing information about federal CIOs")
def handle(self, federal_cio_csv_path, **kwargs):
"""Loops through each FederalAgency object and attempts to update is_fceb and initials"""
# Check if the provided file path is valid.
if not os.path.isfile(federal_cio_csv_path):
raise argparse.ArgumentTypeError(f"Invalid file path '{federal_cio_csv_path}'")
# Returns a dictionary keyed by the agency name containing initials and agency status
self.federal_agency_dict = {}
with open(federal_cio_csv_path, "r") as requested_file:
for row in csv.DictReader(requested_file):
agency_name = row.get("Agency")
if agency_name:
initials = row.get("Initials")
agency_status = row.get("Agency Status")
self.federal_agency_dict[agency_name.strip()] = (initials, agency_status)
# Update every federal agency record
self.mass_update_records(FederalAgency, {"agency__isnull": False}, ["initials", "is_fceb"])
def update_record(self, record: FederalAgency):
"""For each record, update the initials and is_fceb field if data exists for it"""
initials, agency_status = self.federal_agency_dict.get(record.agency)
record.initials = initials
if agency_status and isinstance(agency_status, str) and agency_status.strip().upper() == "FCEB":
record.is_fceb = True
else:
record.is_fceb = False
message = f"Updating {record} => initials: {initials} | is_fceb: {record.is_fceb}"
TerminalHelper.colorful_logger(logger.info, TerminalColors.OKCYAN, message)
def should_skip_record(self, record) -> bool:
"""Skip record update if there is no data for that particular agency"""
return record.agency not in self.federal_agency_dict
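This populate script takes the same CIO CSV and fills initials and is_fceb on every FederalAgency that appears in it. A sketch (command name assumed, path a placeholder):

from django.core.management import call_command

call_command("populate_federal_agency_initials_and_fceb", "tmp/federal-cio-list.csv")  # assumed name and path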

View file

@ -12,12 +12,11 @@ class Command(BaseCommand, PopulateScriptTemplate):
def handle(self, **kwargs):
"""Loops through each valid User object and updates its verification_type value"""
filter_condition = {"verification_type__isnull": True}
self.mass_populate_field(User, filter_condition, ["verification_type"])
self.mass_update_records(User, filter_condition, ["verification_type"])
def populate_field(self, field_to_update):
def update_record(self, record: User):
"""Defines how we update the verification_type field"""
field_to_update.set_user_verification_type()
record.set_user_verification_type()
logger.info(
f"{TerminalColors.OKCYAN}Updating {field_to_update} => "
f"{field_to_update.verification_type}{TerminalColors.OKCYAN}"
f"{TerminalColors.OKCYAN}Updating {record} => " f"{record.verification_type}{TerminalColors.OKCYAN}"
)

View file

@ -0,0 +1,76 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import PopulateScriptTemplate, TerminalColors
from registrar.models import DomainInformation
logger = logging.getLogger(__name__)
class Command(BaseCommand, PopulateScriptTemplate):
"""
This command uses the PopulateScriptTemplate,
which provides reusable logging and bulk updating functions for mass-updating fields.
"""
help = "Loops through each valid DomainInformation object and updates its Senior Official"
prompt_title = "Do you wish to update all Senior Officials for Domain Information?"
def handle(self, domain_info_csv_path, **kwargs):
"""Loops through each valid DomainInformation object and updates its senior official field"""
# Check if the provided file path is valid.
if not os.path.isfile(domain_info_csv_path):
raise argparse.ArgumentTypeError(f"Invalid file path '{domain_info_csv_path}'")
# Simple check to make sure we don't accidentally pass in the wrong file. Crude but it works.
if "information" not in domain_info_csv_path.lower():
raise argparse.ArgumentTypeError(f"Invalid file for domain information: '{domain_info_csv_path}'")
# Get all ao data.
self.ao_dict = {}
self.ao_dict = self.read_csv_file_and_get_contacts(domain_info_csv_path)
self.mass_update_records(
DomainInformation, filter_conditions={"senior_official__isnull": True}, fields_to_update=["senior_official"]
)
def add_arguments(self, parser):
"""Add command line arguments."""
parser.add_argument(
"--domain_info_csv_path", help="A csv containing the domain information id and the contact id"
)
def read_csv_file_and_get_contacts(self, file):
dict_data = {}
with open(file, "r") as requested_file:
reader = csv.DictReader(requested_file)
for row in reader:
domain_info_id = row.get("id")
ao_id = row.get("authorizing_official")
if ao_id:
ao_id = int(ao_id)
if domain_info_id and ao_id:
dict_data[int(domain_info_id)] = ao_id
return dict_data
def update_record(self, record: DomainInformation):
"""Defines how we update the senior official field on each record."""
record.senior_official_id = self.ao_dict.get(record.id)
logger.info(f"{TerminalColors.OKCYAN}Updating {str(record)} => {record.senior_official}{TerminalColors.ENDC}")
def should_skip_record(self, record) -> bool: # noqa
"""Defines the conditions in which we should skip updating a record."""
# Don't update this record if there isn't ao data to pull from
if self.ao_dict.get(record.id) is None:
logger.info(
f"{TerminalColors.YELLOW}Skipping update for {str(record)} => "
f"Missing authorizing_official data.{TerminalColors.ENDC}"
)
return True
else:
return False

View file

@ -0,0 +1,81 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import PopulateScriptTemplate, TerminalColors
from registrar.models import DomainRequest
logger = logging.getLogger(__name__)
class Command(BaseCommand, PopulateScriptTemplate):
"""
This command uses the PopulateScriptTemplate,
which provides reusable logging and bulk updating functions for mass-updating fields.
"""
help = """Loops through each valid DomainRequest object and updates its senior official field"""
prompt_title = "Do you wish to update all Senior Officials for Domain Requests?"
def handle(self, domain_request_csv_path, **kwargs):
"""Loops through each valid DomainRequest object and updates its senior official field"""
# Check if the provided file path is valid.
if not os.path.isfile(domain_request_csv_path):
raise argparse.ArgumentTypeError(f"Invalid file path '{domain_request_csv_path}'")
# Simple check to make sure we don't accidentally pass in the wrong file. Crude but it works.
if "request" not in domain_request_csv_path.lower():
raise argparse.ArgumentTypeError(f"Invalid file for domain requests: '{domain_request_csv_path}'")
# Get all ao data.
self.ao_dict = {}
self.ao_dict = self.read_csv_file_and_get_contacts(domain_request_csv_path)
self.mass_update_records(
DomainRequest,
filter_conditions={
"senior_official__isnull": True,
},
fields_to_update=["senior_official"],
)
def add_arguments(self, parser):
"""Add command line arguments."""
parser.add_argument(
"--domain_request_csv_path", help="A csv containing the domain request id and the contact id"
)
def read_csv_file_and_get_contacts(self, file):
dict_data: dict = {}
with open(file, "r") as requested_file:
reader = csv.DictReader(requested_file)
for row in reader:
domain_request_id = row.get("id")
ao_id = row.get("authorizing_official")
if ao_id:
ao_id = int(ao_id)
if domain_request_id and ao_id:
dict_data[int(domain_request_id)] = ao_id
return dict_data
def update_record(self, record: DomainRequest):
"""Defines how we update the federal_type field on each record."""
record.senior_official_id = self.ao_dict.get(record.id)
# record.senior_official = Contact.objects.get(id=contact_id)
logger.info(f"{TerminalColors.OKCYAN}Updating {str(record)} => {record.senior_official}{TerminalColors.ENDC}")
def should_skip_record(self, record) -> bool: # noqa
"""Defines the conditions in which we should skip updating a record."""
# Don't update this record if there isn't ao data to pull from
if self.ao_dict.get(record.id) is None:
logger.info(
f"{TerminalColors.YELLOW}Skipping update for {str(record)} => "
f"Missing authorizing_official data.{TerminalColors.ENDC}"
)
return True
else:
return False

View file

@ -0,0 +1,94 @@
import logging
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import PopulateScriptTemplate, TerminalColors
from registrar.models import FederalAgency, DomainInformation
from registrar.utility.constants import BranchChoices
logger = logging.getLogger(__name__)
class Command(BaseCommand, PopulateScriptTemplate):
"""
This command uses the PopulateScriptTemplate,
which provides reusable logging and bulk updating functions for mass-updating fields.
"""
help = "Loops through each valid User object and updates its verification_type value"
prompt_title = "Do you wish to update all Federal Agencies?"
def handle(self, **kwargs):
"""Loops through each valid User object and updates the value of its verification_type field"""
# These are federal agencies that we don't have any data on.
# Independent agencies are considered "EXECUTIVE" here.
self.missing_records = {
"Christopher Columbus Fellowship Foundation": BranchChoices.EXECUTIVE,
"Commission for the Preservation of America's Heritage Abroad": BranchChoices.EXECUTIVE,
"Commission of Fine Arts": BranchChoices.EXECUTIVE,
"Committee for Purchase From People Who Are Blind or Severely Disabled": BranchChoices.EXECUTIVE,
"DC Court Services and Offender Supervision Agency": BranchChoices.EXECUTIVE,
"DC Pre-trial Services": BranchChoices.EXECUTIVE,
"Department of Agriculture": BranchChoices.EXECUTIVE,
"Dwight D. Eisenhower Memorial Commission": BranchChoices.LEGISLATIVE,
"Farm Credit System Insurance Corporation": BranchChoices.EXECUTIVE,
"Federal Financial Institutions Examination Council": BranchChoices.EXECUTIVE,
"Federal Judiciary": BranchChoices.JUDICIAL,
"Institute of Peace": BranchChoices.EXECUTIVE,
"International Boundary and Water Commission: United States and Mexico": BranchChoices.EXECUTIVE,
"International Boundary Commission: United States and Canada": BranchChoices.EXECUTIVE,
"International Joint Commission: United States and Canada": BranchChoices.EXECUTIVE,
"Legislative Branch": BranchChoices.LEGISLATIVE,
"National Foundation on the Arts and the Humanities": BranchChoices.EXECUTIVE,
"Nuclear Safety Oversight Committee": BranchChoices.EXECUTIVE,
"Office of Compliance": BranchChoices.LEGISLATIVE,
"Overseas Private Investment Corporation": BranchChoices.EXECUTIVE,
"Public Defender Service for the District of Columbia": BranchChoices.EXECUTIVE,
"The Executive Office of the President": BranchChoices.EXECUTIVE,
"U.S. Access Board": BranchChoices.EXECUTIVE,
"U.S. Agency for Global Media": BranchChoices.EXECUTIVE,
"U.S. China Economic and Security Review Commission": BranchChoices.LEGISLATIVE,
"U.S. Interagency Council on Homelessness": BranchChoices.EXECUTIVE,
"U.S. International Trade Commission": BranchChoices.EXECUTIVE,
"U.S. Postal Service": BranchChoices.EXECUTIVE,
"U.S. Trade and Development Agency": BranchChoices.EXECUTIVE,
"Udall Foundation": BranchChoices.EXECUTIVE,
"United States Arctic Research Commission": BranchChoices.EXECUTIVE,
"Utah Reclamation Mitigation and Conservation Commission": BranchChoices.EXECUTIVE,
"Vietnam Education Foundation": BranchChoices.EXECUTIVE,
"Woodrow Wilson International Center for Scholars": BranchChoices.EXECUTIVE,
"World War I Centennial Commission": BranchChoices.EXECUTIVE,
}
# Get all existing domain requests. Select_related allows us to skip doing db queries.
self.all_domain_infos = DomainInformation.objects.select_related("federal_agency")
self.mass_update_records(
FederalAgency, filter_conditions={"agency__isnull": False}, fields_to_update=["federal_type"]
)
def update_record(self, record: FederalAgency):
"""Defines how we update the federal_type field on each record."""
request = self.all_domain_infos.filter(federal_agency__agency=record.agency).first()
if request:
record.federal_type = request.federal_type
elif not request and record.agency in self.missing_records:
record.federal_type = self.missing_records.get(record.agency)
logger.info(f"{TerminalColors.OKCYAN}Updating {str(record)} => {record.federal_type}{TerminalColors.ENDC}")
def should_skip_record(self, record) -> bool: # noqa
"""Defines the conditions in which we should skip updating a record."""
requests = self.all_domain_infos.filter(federal_agency__agency=record.agency, federal_type__isnull=False)
# Check if all federal_type values are the same. Skip the record otherwise.
distinct_federal_types = requests.values("federal_type").distinct()
should_skip = distinct_federal_types.count() != 1
if should_skip and record.agency not in self.missing_records:
logger.info(
f"{TerminalColors.YELLOW}Skipping update for {str(record)} => count is "
f"{distinct_federal_types.count()} and records are {distinct_federal_types}{TerminalColors.ENDC}"
)
elif record.agency in self.missing_records:
logger.info(
f"{TerminalColors.MAGENTA}Missing data on {str(record)} - "
f"swapping to manual mapping{TerminalColors.ENDC}"
)
should_skip = False
return should_skip
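The federal_type backfill derives each agency's branch from its existing DomainInformation records, falling back to the hard-coded missing_records mapping, and prompts before updating. A sketch (the command name is assumed; it is not shown in this extract):

from django.core.management import call_command

call_command("populate_federal_agency_type")  # assumed command name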

View file

@ -18,6 +18,8 @@ from registrar.models.contact import Contact
from registrar.models.domain_request import DomainRequest
from registrar.models.domain_information import DomainInformation
from registrar.models.user import User
from registrar.models.federal_agency import FederalAgency
from registrar.utility.constants import BranchChoices
logger = logging.getLogger(__name__)
@ -388,7 +390,7 @@ class Command(BaseCommand):
fed_type = transition_domain.federal_type
fed_agency = transition_domain.federal_agency
# = AO Information = #
# = SO Information = #
first_name = transition_domain.first_name
middle_name = transition_domain.middle_name
last_name = transition_domain.last_name
@ -427,7 +429,7 @@ class Command(BaseCommand):
"domain": domain,
"organization_name": transition_domain.organization_name,
"creator": default_creator,
"authorizing_official": contact,
"senior_official": contact,
}
if valid_org_type:
@ -818,8 +820,8 @@ class Command(BaseCommand):
invitation.save()
valid_org_choices = [(name, value) for name, value in DomainRequest.OrganizationChoices.choices]
valid_fed_choices = [value for name, value in DomainRequest.BranchChoices.choices]
valid_agency_choices = DomainRequest.AGENCIES
valid_fed_choices = [value for name, value in BranchChoices.choices]
valid_agency_choices = FederalAgency.objects.all()
# ======================================================
# ================= DOMAIN INFORMATION =================
logger.info(


@@ -177,7 +177,7 @@ class LoadExtraTransitionDomain:
# STEP 3: Parse agency data
updated_transition_domain = self.parse_agency_data(domain_name, transition_domain)
# STEP 4: Parse ao data
# STEP 4: Parse so data
updated_transition_domain = self.parse_authority_data(domain_name, transition_domain)
# STEP 5: Parse creation and expiration data
@@ -326,7 +326,7 @@ class LoadExtraTransitionDomain:
)
def parse_authority_data(self, domain_name, transition_domain) -> TransitionDomain:
"""Grabs authorizing_offical data from the parsed files and associates it
"""Grabs senior_offical data from the parsed files and associates it
with a transition_domain object, then returns that object."""
if not isinstance(transition_domain, TransitionDomain):
raise ValueError("Not a valid object, must be TransitionDomain")
@@ -336,7 +336,7 @@ class LoadExtraTransitionDomain:
self.parse_logs.create_log_item(
EnumFilenames.AGENCY_ADHOC,
LogCode.ERROR,
f"Could not add authorizing_official on {domain_name}, no data exists.",
f"Could not add senior_official on {domain_name}, no data exists.",
domain_name,
not self.debug,
)


@@ -61,56 +61,96 @@ class ScriptDataHelper:
class PopulateScriptTemplate(ABC):
"""
Contains an ABC for generic populate scripts
Contains an ABC for generic populate scripts.
This template provides reusable logging and bulk updating functions for
mass-updating fields.
"""
def mass_populate_field(self, sender, filter_conditions, fields_to_update):
"""Loops through each valid "sender" object - specified by filter_conditions - and
updates fields defined by fields_to_update using populate_function.
# Optional script-global config variables. For the most part, you can leave these untouched.
# Defines what prompt_for_execution displays as its header when you first start the script
prompt_title: str = "Do you wish to proceed?"
You must define populate_field before you can use this function.
# The header when printing the script run summary (after the script finishes)
run_summary_header = None
@abstractmethod
def update_record(self, record):
"""Defines how we update each field. Must be defined before using mass_update_records."""
raise NotImplementedError
def mass_update_records(self, object_class, filter_conditions, fields_to_update, debug=True):
"""Loops through each valid "object_class" object - specified by filter_conditions - and
updates fields defined by fields_to_update using update_record.
You must define update_record before you can use this function.
"""
objects = sender.objects.filter(**filter_conditions)
records = object_class.objects.filter(**filter_conditions)
readable_class_name = self.get_class_name(object_class)
# Code execution will stop here if the user prompts "N"
TerminalHelper.prompt_for_execution(
system_exit_on_terminate=True,
info_to_inspect=f"""
==Proposed Changes==
Number of {sender} objects to change: {len(objects)}
Number of {readable_class_name} objects to change: {len(records)}
These fields will be updated on each record: {fields_to_update}
""",
prompt_title="Do you wish to patch this data?",
prompt_title=self.prompt_title,
)
logger.info("Updating...")
to_update: List[sender] = []
failed_to_update: List[sender] = []
for updated_object in objects:
to_update: List[object_class] = []
to_skip: List[object_class] = []
failed_to_update: List[object_class] = []
for record in records:
try:
self.populate_field(updated_object)
to_update.append(updated_object)
if not self.should_skip_record(record):
self.update_record(record)
to_update.append(record)
else:
to_skip.append(record)
except Exception as err:
failed_to_update.append(updated_object)
fail_message = self.get_failure_message(record)
failed_to_update.append(record)
logger.error(err)
logger.error(f"{TerminalColors.FAIL}" f"Failed to update {updated_object}" f"{TerminalColors.ENDC}")
logger.error(fail_message)
# Do a bulk update on the first_ready field
ScriptDataHelper.bulk_update_fields(sender, to_update, fields_to_update)
# Do a bulk update on the desired field
ScriptDataHelper.bulk_update_fields(object_class, to_update, fields_to_update)
# Log what happened
TerminalHelper.log_script_run_summary(to_update, failed_to_update, skipped=[], debug=True)
TerminalHelper.log_script_run_summary(
to_update,
failed_to_update,
to_skip,
debug=debug,
log_header=self.run_summary_header,
display_as_str=True,
)
@abstractmethod
def populate_field(self, field_to_update):
"""Defines how we update each field. Must be defined before using mass_populate_field."""
raise NotImplementedError
def get_class_name(self, sender) -> str:
"""Returns the class name that we want to display for the terminal prompt.
Example: DomainRequest => "Domain Request"
"""
return sender._meta.verbose_name if getattr(sender, "_meta", None) else sender
def get_failure_message(self, record) -> str:
"""Returns the message that we will display if a record fails to update"""
return f"{TerminalColors.FAIL}" f"Failed to update {record}" f"{TerminalColors.ENDC}"
def should_skip_record(self, record) -> bool: # noqa
"""Defines the condition in which we should skip updating a record. Override as needed."""
# By default - don't skip
return False
class TerminalHelper:
@staticmethod
def log_script_run_summary(to_update, failed_to_update, skipped, debug: bool, log_header=None):
def log_script_run_summary(
to_update, failed_to_update, skipped, debug: bool, log_header=None, display_as_str=False
):
"""Prints success, failed, and skipped counts, as well as
all affected objects."""
update_success_count = len(to_update)
@@ -121,20 +161,24 @@ class TerminalHelper:
log_header = "============= FINISHED ==============="
# Prepare debug messages
debug_messages = {
"success": (f"{TerminalColors.OKCYAN}Updated: {to_update}{TerminalColors.ENDC}\n"),
"skipped": (f"{TerminalColors.YELLOW}Skipped: {skipped}{TerminalColors.ENDC}\n"),
"failed": (f"{TerminalColors.FAIL}Failed: {failed_to_update}{TerminalColors.ENDC}\n"),
}
if debug:
updated_display = [str(u) for u in to_update] if display_as_str else to_update
skipped_display = [str(s) for s in skipped] if display_as_str else skipped
failed_display = [str(f) for f in failed_to_update] if display_as_str else failed_to_update
debug_messages = {
"success": (f"{TerminalColors.OKCYAN}Updated: {updated_display}{TerminalColors.ENDC}\n"),
"skipped": (f"{TerminalColors.YELLOW}Skipped: {skipped_display}{TerminalColors.ENDC}\n"),
"failed": (f"{TerminalColors.FAIL}Failed: {failed_display}{TerminalColors.ENDC}\n"),
}
# Print out a list of everything that was changed, if we have any changes to log.
# Otherwise, don't print anything.
TerminalHelper.print_conditional(
debug,
f"{debug_messages.get('success') if update_success_count > 0 else ''}"
f"{debug_messages.get('skipped') if update_skipped_count > 0 else ''}"
f"{debug_messages.get('failed') if update_failed_count > 0 else ''}",
)
# Print out a list of everything that was changed, if we have any changes to log.
# Otherwise, don't print anything.
TerminalHelper.print_conditional(
debug,
f"{debug_messages.get('success') if update_success_count > 0 else ''}"
f"{debug_messages.get('skipped') if update_skipped_count > 0 else ''}"
f"{debug_messages.get('failed') if update_failed_count > 0 else ''}",
)
if update_failed_count == 0 and update_skipped_count == 0:
logger.info(
@@ -273,6 +317,7 @@ class TerminalHelper:
case _:
logger.info(print_statement)
# TODO - "info_to_inspect" should be refactored to "prompt_message"
@staticmethod
def prompt_for_execution(system_exit_on_terminate: bool, info_to_inspect: str, prompt_title: str) -> bool:
"""Create to reduce code complexity.
@@ -329,3 +374,26 @@ class TerminalHelper:
logger.info(f"{TerminalColors.MAGENTA}Writing to file " f" {filepath}..." f"{TerminalColors.ENDC}")
with open(f"{filepath}", "w+") as f:
f.write(file_contents)
@staticmethod
def colorful_logger(log_level, color, message):
"""Adds some color to your log output.
Args:
log_level: str | Logger.method -> Desired log level. ex: logger.info or "INFO"
color: str | TerminalColors -> Output color. ex: TerminalColors.YELLOW or "YELLOW"
message: str -> Message to display.
"""
if isinstance(log_level, str) and hasattr(logger, log_level.lower()):
log_method = getattr(logger, log_level.lower())
else:
log_method = log_level
if isinstance(color, str) and hasattr(TerminalColors, color.upper()):
terminal_color = getattr(TerminalColors, color.upper())
else:
terminal_color = color
colored_message = f"{terminal_color}{message}{TerminalColors.ENDC}"
log_method(colored_message)
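Pulling these helpers together: PopulateScriptTemplate now owns the confirmation prompt, the per-record loop with skip and failure handling, and the bulk save, so a concrete populate script only has to supply update_record (plus optional overrides such as should_skip_record or prompt_title). A minimal sketch of such a command follows; the import path, target model, field names, and filter are illustrative assumptions, not taken from this diff:

```python
from django.core.management.base import BaseCommand

# Import paths assumed -- adjust to wherever PopulateScriptTemplate and
# TerminalHelper actually live in this repo.
from registrar.management.commands.utility.terminal_helper import (
    PopulateScriptTemplate,
    TerminalHelper,
)
from registrar.models.domain import Domain  # hypothetical target model


class Command(BaseCommand, PopulateScriptTemplate):
    help = "Hypothetical populate script illustrating the template."

    # Optional config picked up by the template.
    prompt_title = "Do you wish to update Domain records?"
    run_summary_header = "============= FINISHED example script ==============="

    def handle(self, **kwargs):
        # Prompts for confirmation, loops over matching records, calls
        # update_record on each, then bulk-saves only the listed fields.
        self.mass_update_records(
            Domain,
            filter_conditions={"name__isnull": False},  # hypothetical filter
            fields_to_update=["first_ready"],
        )

    def update_record(self, record):
        """Required override: defines how a single record changes (mapping hypothetical)."""
        record.first_ready = record.created_at
        TerminalHelper.colorful_logger("INFO", "OKCYAN", f"Updating {record}")

    def should_skip_record(self, record) -> bool:
        """Optional override: leave records alone if they already have a value."""
        return record.first_ready is not None
```

The colorful_logger call also shows the new helper in action: it accepts either a logger method or a level name, and either a TerminalColors attribute or its string name.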


@@ -0,0 +1,127 @@
# Generated by Django 4.2.10 on 2024-05-02 17:47
from django.conf import settings
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
("auth", "0012_alter_user_first_name_max_length"),
("registrar", "0089_user_verification_type"),
]
operations = [
migrations.CreateModel(
name="WaffleFlag",
fields=[
("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
(
"name",
models.CharField(
help_text="The human/computer readable name.", max_length=100, unique=True, verbose_name="Name"
),
),
(
"everyone",
models.BooleanField(
blank=True,
help_text="Flip this flag on (Yes) or off (No) for everyone, overriding all other settings. Leave as Unknown to use normally.",
null=True,
verbose_name="Everyone",
),
),
(
"percent",
models.DecimalField(
blank=True,
decimal_places=1,
help_text="A number between 0.0 and 99.9 to indicate a percentage of users for whom this flag will be active.",
max_digits=3,
null=True,
verbose_name="Percent",
),
),
(
"testing",
models.BooleanField(
default=False,
help_text="Allow this flag to be set for a session for user testing",
verbose_name="Testing",
),
),
(
"superusers",
models.BooleanField(
default=True, help_text="Flag always active for superusers?", verbose_name="Superusers"
),
),
(
"staff",
models.BooleanField(default=False, help_text="Flag always active for staff?", verbose_name="Staff"),
),
(
"authenticated",
models.BooleanField(
default=False,
help_text="Flag always active for authenticated users?",
verbose_name="Authenticated",
),
),
(
"languages",
models.TextField(
blank=True,
default="",
help_text="Activate this flag for users with one of these languages (comma-separated list)",
verbose_name="Languages",
),
),
(
"rollout",
models.BooleanField(default=False, help_text="Activate roll-out mode?", verbose_name="Rollout"),
),
("note", models.TextField(blank=True, help_text="Note where this Flag is used.", verbose_name="Note")),
(
"created",
models.DateTimeField(
db_index=True,
default=django.utils.timezone.now,
help_text="Date when this Flag was created.",
verbose_name="Created",
),
),
(
"modified",
models.DateTimeField(
default=django.utils.timezone.now,
help_text="Date when this Flag was last modified.",
verbose_name="Modified",
),
),
(
"groups",
models.ManyToManyField(
blank=True,
help_text="Activate this flag for these user groups.",
to="auth.group",
verbose_name="Groups",
),
),
(
"users",
models.ManyToManyField(
blank=True,
help_text="Activate this flag for these users.",
to=settings.AUTH_USER_MODEL,
verbose_name="Users",
),
),
],
options={
"verbose_name": "waffle flag",
"verbose_name_plural": "Waffle flags",
},
),
]
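The fields above appear to mirror django-waffle's flag model, defined locally as the registrar's own WaffleFlag. For reference, a flag like this is normally consumed in view code via waffle's helper; a minimal sketch with a hypothetical flag name:

```python
from waffle import flag_is_active


def example_view(request):
    # "profile_feature" is a hypothetical flag name; flags themselves are
    # created and toggled through the model defined above.
    if flag_is_active(request, "profile_feature"):
        ...  # feature-flagged behavior
    else:
        ...  # default behavior
```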


@@ -0,0 +1,21 @@
# Generated by Django 4.2.10 on 2024-05-02 17:19
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("registrar", "0090_waffleflag"),
]
operations = [
migrations.RemoveField(
model_name="domaininformation",
name="federal_agency",
),
migrations.RemoveField(
model_name="domainrequest",
name="federal_agency",
),
]


@@ -0,0 +1,23 @@
# Generated by Django 4.2.10 on 2024-05-02 17:22
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("registrar", "0091_remove_domaininformation_federal_agency_and_more"),
]
operations = [
migrations.RenameField(
model_name="domaininformation",
old_name="updated_federal_agency",
new_name="federal_agency",
),
migrations.RenameField(
model_name="domainrequest",
old_name="updated_federal_agency",
new_name="federal_agency",
),
]
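Together with the removal in 0091 above, this rename is the tail end of a standard field swap: a new updated_federal_agency ForeignKey was presumably added and populated in earlier migrations (not shown here), the old federal_agency column is dropped, and the new column is renamed into its place. A hedged sketch of what the data-copy step in such a sequence typically looks like; the lookup by agency name and the dependency are illustrative assumptions:

```python
from django.db import migrations


def copy_federal_agency(apps, schema_editor):
    # Use historical models inside a migration rather than importing them directly.
    DomainRequest = apps.get_model("registrar", "DomainRequest")
    FederalAgency = apps.get_model("registrar", "FederalAgency")
    for request in DomainRequest.objects.exclude(federal_agency__isnull=True):
        # Match the old agency name string to the new FederalAgency row.
        request.updated_federal_agency = FederalAgency.objects.filter(
            agency=request.federal_agency
        ).first()
        request.save(update_fields=["updated_federal_agency"])


class Migration(migrations.Migration):
    dependencies = [
        ("registrar", "00XX_previous_migration"),  # placeholder dependency
    ]

    operations = [
        migrations.RunPython(copy_federal_agency, reverse_code=migrations.RunPython.noop),
    ]
```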


@@ -0,0 +1,17 @@
# Generated by Django 4.2.10 on 2024-05-08 17:35
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("registrar", "0092_rename_updated_federal_agency_domaininformation_federal_agency_and_more"),
]
operations = [
migrations.AlterUniqueTogether(
name="publiccontact",
unique_together={("contact_type", "registry_id", "domain")},
),
]


@@ -0,0 +1,37 @@
# This migration creates the create_full_access_group and create_cisa_analyst_group groups
# It is dependent on 0079 (which populates federal agencies)
# If permissions on the groups need changing, edit CISA_ANALYST_GROUP_PERMISSIONS
# in the user_group model then:
# [NOT RECOMMENDED]
# step 1: docker-compose exec app ./manage.py migrate --fake registrar 0035_contenttypes_permissions
# step 2: docker-compose exec app ./manage.py migrate registrar 0036_create_groups
# step 3: fake run the latest migration in the migrations list
# [RECOMMENDED]
# Alternatively:
# step 1: duplicate the migration that loads data
# step 2: docker-compose exec app ./manage.py migrate
from django.db import migrations
from registrar.models import UserGroup
from typing import Any
# For linting: RunPython expects a function reference,
# so let's give it one
def create_groups(apps, schema_editor) -> Any:
UserGroup.create_cisa_analyst_group(apps, schema_editor)
UserGroup.create_full_access_group(apps, schema_editor)
class Migration(migrations.Migration):
dependencies = [
("registrar", "0093_alter_publiccontact_unique_together"),
]
operations = [
migrations.RunPython(
create_groups,
reverse_code=migrations.RunPython.noop,
atomic=True,
),
]


@@ -0,0 +1,23 @@
# Generated by Django 4.2.10 on 2024-05-22 14:54
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("registrar", "0094_create_groups_v12"),
]
operations = [
migrations.AddField(
model_name="user",
name="middle_name",
field=models.CharField(blank=True, null=True),
),
migrations.AddField(
model_name="user",
name="title",
field=models.CharField(blank=True, null=True, verbose_name="title / role"),
),
]


@@ -0,0 +1,131 @@
# Generated by Django 4.2.10 on 2024-05-28 14:40
from django.db import migrations, models
import phonenumber_field.modelfields
class Migration(migrations.Migration):
dependencies = [
("registrar", "0095_user_middle_name_user_title"),
]
operations = [
migrations.AlterField(
model_name="contact",
name="email",
field=models.EmailField(blank=True, max_length=320, null=True),
),
migrations.AlterField(
model_name="contact",
name="first_name",
field=models.CharField(blank=True, null=True, verbose_name="first name"),
),
migrations.AlterField(
model_name="contact",
name="last_name",
field=models.CharField(blank=True, null=True, verbose_name="last name"),
),
migrations.AlterField(
model_name="contact",
name="phone",
field=phonenumber_field.modelfields.PhoneNumberField(blank=True, max_length=128, null=True, region=None),
),
migrations.AlterField(
model_name="domaininformation",
name="organization_name",
field=models.CharField(blank=True, null=True),
),
migrations.AlterField(
model_name="domaininformation",
name="zipcode",
field=models.CharField(blank=True, max_length=10, null=True, verbose_name="zip code"),
),
migrations.AlterField(
model_name="domainrequest",
name="organization_name",
field=models.CharField(blank=True, null=True),
),
migrations.AlterField(
model_name="domainrequest",
name="zipcode",
field=models.CharField(blank=True, max_length=10, null=True, verbose_name="zip code"),
),
migrations.AlterField(
model_name="transitiondomain",
name="first_name",
field=models.CharField(
blank=True, help_text="First name / given name", null=True, verbose_name="first name"
),
),
migrations.AlterField(
model_name="transitiondomain",
name="organization_name",
field=models.CharField(blank=True, help_text="Organization name", null=True),
),
migrations.AlterField(
model_name="transitiondomain",
name="zipcode",
field=models.CharField(blank=True, help_text="Zip code", max_length=10, null=True, verbose_name="zip code"),
),
migrations.AlterField(
model_name="user",
name="phone",
field=phonenumber_field.modelfields.PhoneNumberField(
blank=True, help_text="Phone", max_length=128, null=True, region=None
),
),
migrations.AlterField(
model_name="verifiedbystaff",
name="email",
field=models.EmailField(max_length=254),
),
migrations.AddIndex(
model_name="contact",
index=models.Index(fields=["user"], name="registrar_c_user_id_4059c4_idx"),
),
migrations.AddIndex(
model_name="contact",
index=models.Index(fields=["email"], name="registrar_c_email_bde2de_idx"),
),
migrations.AddIndex(
model_name="domain",
index=models.Index(fields=["name"], name="registrar_d_name_5b1956_idx"),
),
migrations.AddIndex(
model_name="domain",
index=models.Index(fields=["state"], name="registrar_d_state_84c134_idx"),
),
migrations.AddIndex(
model_name="domaininformation",
index=models.Index(fields=["domain"], name="registrar_d_domain__88838a_idx"),
),
migrations.AddIndex(
model_name="domaininformation",
index=models.Index(fields=["domain_request"], name="registrar_d_domain__d1fba8_idx"),
),
migrations.AddIndex(
model_name="domaininvitation",
index=models.Index(fields=["status"], name="registrar_d_status_e84571_idx"),
),
migrations.AddIndex(
model_name="domainrequest",
index=models.Index(fields=["requested_domain"], name="registrar_d_request_6894eb_idx"),
),
migrations.AddIndex(
model_name="domainrequest",
index=models.Index(fields=["approved_domain"], name="registrar_d_approve_ac4c46_idx"),
),
migrations.AddIndex(
model_name="domainrequest",
index=models.Index(fields=["status"], name="registrar_d_status_a32b59_idx"),
),
migrations.AddIndex(
model_name="user",
index=models.Index(fields=["username"], name="registrar_u_usernam_964b1b_idx"),
),
migrations.AddIndex(
model_name="user",
index=models.Index(fields=["email"], name="registrar_u_email_c8f2c4_idx"),
),
]


@@ -0,0 +1,19 @@
# Generated by Django 4.2.10 on 2024-06-06 18:38
from django.db import migrations
import phonenumber_field.modelfields
class Migration(migrations.Migration):
dependencies = [
("registrar", "0096_alter_contact_email_alter_contact_first_name_and_more"),
]
operations = [
migrations.AlterField(
model_name="user",
name="phone",
field=phonenumber_field.modelfields.PhoneNumberField(blank=True, max_length=128, null=True, region=None),
),
]


@@ -0,0 +1,32 @@
# Generated by Django 4.2.10 on 2024-06-07 15:27
from django.db import migrations
import django_fsm
class Migration(migrations.Migration):
dependencies = [
("registrar", "0097_alter_user_phone"),
]
operations = [
migrations.AlterField(
model_name="domainrequest",
name="status",
field=django_fsm.FSMField(
choices=[
("in review", "In review"),
("action needed", "Action needed"),
("approved", "Approved"),
("rejected", "Rejected"),
("ineligible", "Ineligible"),
("submitted", "Submitted"),
("withdrawn", "Withdrawn"),
("started", "Started"),
],
default="started",
max_length=50,
),
),
]
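For context, the status column above is a django-fsm FSMField, so moving between these choices goes through transition methods rather than plain assignment. A minimal, generic sketch of that pattern; the model and method names are hypothetical, not the registrar's actual DomainRequest implementation:

```python
from django.db import models
from django_fsm import FSMField, transition


class ExampleRequest(models.Model):
    # Same shape as the status field above: a constrained state column.
    status = FSMField(default="started", max_length=50)

    @transition(field=status, source=["started", "withdrawn"], target="submitted")
    def submit(self):
        # Side effects of the transition (emails, timestamps, etc.) go here;
        # django-fsm only moves self.status to "submitted" if this returns without raising.
        pass
```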

Some files were not shown because too many files have changed in this diff.