Mirror of https://github.com/cisagov/manage.get.gov.git (synced 2025-08-04 17:01:56 +02:00)

Merge remote-tracking branch 'origin/main' into nl/2359-portfolio-inline-domaingroups-and-suborgs

Commit b818419da6: 15 changed files with 1902 additions and 1494 deletions
.github/ISSUE_TEMPLATE/developer-onboarding.md (vendored, 12 changed lines)

@@ -16,6 +16,8 @@ assignees: abroddrick
There are several tools we use locally that you will need to have:

- [ ] [Install the cf CLI v7](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html#pkg-mac) for the ability to deploy
  - If you are using Windows, installation information can be found [here](https://github.com/cloudfoundry/cli/wiki/V8-CLI-Installation-Guide#installers-and-compressed-binaries)
  - Alternatively, for Windows, [consider using Chocolatey](https://community.chocolatey.org/packages/cloudfoundry-cli/7.2.0)
- [ ] Make sure you have `gpg` >2.1.7. Run `gpg --version` to check. If not, [install gnupg](https://formulae.brew.sh/formula/gnupg)
- [ ] Install the [GitHub CLI](https://cli.github.com/)
@@ -70,6 +72,7 @@ when setting up your key in Github.

Now test that commit signing is working by checking out a branch (`yourname/test-commit-signing`) and making a small change to a file. Commit the change (it should prompt you for your GPG credentials) and push it to GitHub. Look at your branch on GitHub and ensure the commit is `verified`.
### macOS

**Note:** If you are on a Mac and are not able to successfully create a signed commit, and you get the following error:

```zsh
error: gpg failed to sign the data
@@ -90,6 +93,15 @@ or
source ~/.zshrc
```
### Windows

If GPG doesn't work out of the box with Git for you:

- You can [download the GPG binary directly](https://gnupg.org/download/).
- It may be helpful to use [Gpg4win](https://www.gpg4win.org/get-gpg4win.html).

From there, you should be able to access gpg through the terminal.

Additionally, consider a GPG key manager like Kleopatra if you run into issues with environment variables or with the gpg service not running on startup.
## Setting up developer sandbox

We have three types of environments: stable, staging, and sandbox. Stable (production) and staging (pre-prod) get deployed via tagged release, and developer sandboxes are given to get.gov developers to experiment in a production-like environment without disrupting stable or staging. Each sandbox is namespaced and will automatically be deployed to when the appropriate branch syntax is used for that space in an open pull request. There are several things you need to set up to make the sandbox work for a developer.
.github/workflows/deploy-sandbox.yaml (vendored, 1 changed line)

@@ -28,6 +28,7 @@ jobs:
          || startsWith(github.head_ref, 'hotgov/')
          || startsWith(github.head_ref, 'litterbox/')
          || startsWith(github.head_ref, 'ag/')
+         || startsWith(github.head_ref, 'ms/')
    outputs:
      environment: ${{ steps.var.outputs.environment}}
    runs-on: "ubuntu-latest"
.github/workflows/migrate.yaml (vendored, 1 changed line)

@@ -16,6 +16,7 @@ on:
        - stable
        - staging
        - development
+       - ms
        - ag
        - litterbox
        - hotgov
.github/workflows/reset-db.yaml (vendored, 1 changed line)

@@ -16,6 +16,7 @@ on:
      options:
        - staging
        - development
+       - ms
        - ag
        - litterbox
        - hotgov
ops/manifests/manifest-ms.yaml (new file, 32 lines)

@@ -0,0 +1,32 @@
---
applications:
- name: getgov-ms
  buildpacks:
    - python_buildpack
  path: ../../src
  instances: 1
  memory: 512M
  stack: cflinuxfs4
  timeout: 180
  command: ./run.sh
  health-check-type: http
  health-check-http-endpoint: /health
  health-check-invocation-timeout: 40
  env:
    # Send stdout and stderr straight to the terminal without buffering
    PYTHONUNBUFFERED: yup
    # Tell Django where to find its configuration
    DJANGO_SETTINGS_MODULE: registrar.config.settings
    # Tell Django where it is being hosted
    DJANGO_BASE_URL: https://getgov-ms.app.cloud.gov
    # Tell Django how much stuff to log
    DJANGO_LOG_LEVEL: INFO
    # default public site location
    GETGOV_PUBLIC_SITE_URL: https://get.gov
    # Flag to disable/enable features in prod environments
    IS_PRODUCTION: False
  routes:
    - route: getgov-ms.app.cloud.gov
  services:
    - getgov-credentials
    - getgov-ms-database
@@ -660,6 +660,7 @@ ALLOWED_HOSTS = [
    "getgov-stable.app.cloud.gov",
    "getgov-staging.app.cloud.gov",
    "getgov-development.app.cloud.gov",
+   "getgov-ms.app.cloud.gov",
    "getgov-ag.app.cloud.gov",
    "getgov-litterbox.app.cloud.gov",
    "getgov-hotgov.app.cloud.gov",
@@ -50,7 +50,7 @@ class Command(BaseCommand):

        # Generate a file locally for upload
        with open(file_path, "w") as file:
-           csv_export.export_data_federal_to_csv(file)
+           csv_export.DomainDataFederal.export_data_to_csv(file)

        if check_path and not os.path.exists(file_path):
            raise FileNotFoundError(f"Could not find newly created file at '{file_path}'")
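For context, the refactored report classes are invoked the same way in the tests further down. A minimal sketch (not part of the diff) of generating the federal report into an in-memory buffer:

```python
# Minimal sketch; it mirrors how the tests below call the new class-based exports.
from io import StringIO

from registrar.utility import csv_export

csv_file = StringIO()
# Replaces the old csv_export.export_data_federal_to_csv(csv_file) call.
csv_export.DomainDataFederal.export_data_to_csv(csv_file)
csv_file.seek(0)
print(csv_file.read())  # "Domain name,Domain type,Agency,..." rows for federal domains
```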
@@ -49,7 +49,7 @@ class Command(BaseCommand):

        # Generate a file locally for upload
        with open(file_path, "w") as file:
-           csv_export.export_data_full_to_csv(file)
+           csv_export.DomainDataFull.export_data_to_csv(file)

        if check_path and not os.path.exists(file_path):
            raise FileNotFoundError(f"Could not find newly created file at '{file_path}'")
@@ -0,0 +1,76 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import PopulateScriptTemplate, TerminalColors
from registrar.models import DomainInformation


logger = logging.getLogger(__name__)


class Command(BaseCommand, PopulateScriptTemplate):
    """
    This command uses the PopulateScriptTemplate,
    which provides reusable logging and bulk updating functions for mass-updating fields.
    """

    help = "Loops through each valid DomainInformation object and updates its Senior Official"
    prompt_title = "Do you wish to update all Senior Officials for Domain Information?"

    def handle(self, domain_info_csv_path, **kwargs):
        """Loops through each valid DomainInformation object and updates its senior official field"""

        # Check if the provided file path is valid.
        if not os.path.isfile(domain_info_csv_path):
            raise argparse.ArgumentTypeError(f"Invalid file path '{domain_info_csv_path}'")

        # Simple check to make sure we don't accidentally pass in the wrong file. Crude but it works.
        if "information" not in domain_info_csv_path.lower():
            raise argparse.ArgumentTypeError(f"Invalid file for domain information: '{domain_info_csv_path}'")

        # Get all ao data.
        self.ao_dict = {}
        self.ao_dict = self.read_csv_file_and_get_contacts(domain_info_csv_path)

        self.mass_update_records(
            DomainInformation, filter_conditions={"senior_official__isnull": True}, fields_to_update=["senior_official"]
        )

    def add_arguments(self, parser):
        """Add command line arguments."""
        parser.add_argument(
            "--domain_info_csv_path", help="A csv containing the domain information id and the contact id"
        )

    def read_csv_file_and_get_contacts(self, file):
        dict_data = {}
        with open(file, "r") as requested_file:
            reader = csv.DictReader(requested_file)
            for row in reader:
                domain_info_id = row.get("id")
                ao_id = row.get("authorizing_official")
                if ao_id:
                    ao_id = int(ao_id)
                if domain_info_id and ao_id:
                    dict_data[int(domain_info_id)] = ao_id

        return dict_data

    def update_record(self, record: DomainInformation):
        """Defines how we update the senior official field on each record."""
        record.senior_official_id = self.ao_dict.get(record.id)
        logger.info(f"{TerminalColors.OKCYAN}Updating {str(record)} => {record.senior_official}{TerminalColors.ENDC}")

    def should_skip_record(self, record) -> bool:  # noqa
        """Defines the conditions in which we should skip updating a record."""
        # Don't update this record if there isn't ao data to pull from
        if self.ao_dict.get(record.id) is None:
            logger.info(
                f"{TerminalColors.YELLOW}Skipping update for {str(record)} => "
                f"Missing authorizing_official data.{TerminalColors.ENDC}"
            )
            return True
        else:
            return False
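PopulateScriptTemplate itself is not part of this diff, so the following is only a hedged sketch of how its `mass_update_records` presumably wires together the `should_skip_record` and `update_record` hooks defined above:

```python
# Hedged sketch only: PopulateScriptTemplate.mass_update_records is not shown in this
# diff, so the control flow and names here are assumptions based on how the command calls it.
def mass_update_records_sketch(command, model_cls, filter_conditions, fields_to_update):
    records = model_cls.objects.filter(**filter_conditions)
    updated_records = []
    for record in records:
        if command.should_skip_record(record):
            continue  # e.g. no senior official data in command.ao_dict for this record
        command.update_record(record)  # sets record.senior_official_id from the CSV data
        updated_records.append(record)
    # Persist everything in one query instead of saving record-by-record
    model_cls.objects.bulk_update(updated_records, fields_to_update)
```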
@@ -0,0 +1,81 @@
import argparse
import csv
import logging
import os
from django.core.management import BaseCommand
from registrar.management.commands.utility.terminal_helper import PopulateScriptTemplate, TerminalColors
from registrar.models import DomainRequest


logger = logging.getLogger(__name__)


class Command(BaseCommand, PopulateScriptTemplate):
    """
    This command uses the PopulateScriptTemplate,
    which provides reusable logging and bulk updating functions for mass-updating fields.
    """

    help = """Loops through each valid DomainRequest object and updates its senior official field"""
    prompt_title = "Do you wish to update all Senior Officials for Domain Requests?"

    def handle(self, domain_request_csv_path, **kwargs):
        """Loops through each valid DomainRequest object and updates its senior official field"""

        # Check if the provided file path is valid.
        if not os.path.isfile(domain_request_csv_path):
            raise argparse.ArgumentTypeError(f"Invalid file path '{domain_request_csv_path}'")

        # Simple check to make sure we don't accidentally pass in the wrong file. Crude but it works.
        if "request" not in domain_request_csv_path.lower():
            raise argparse.ArgumentTypeError(f"Invalid file for domain requests: '{domain_request_csv_path}'")

        # Get all ao data.
        self.ao_dict = {}
        self.ao_dict = self.read_csv_file_and_get_contacts(domain_request_csv_path)

        self.mass_update_records(
            DomainRequest,
            filter_conditions={
                "senior_official__isnull": True,
            },
            fields_to_update=["senior_official"],
        )

    def add_arguments(self, parser):
        """Add command line arguments."""
        parser.add_argument(
            "--domain_request_csv_path", help="A csv containing the domain request id and the contact id"
        )

    def read_csv_file_and_get_contacts(self, file):
        dict_data: dict = {}
        with open(file, "r") as requested_file:
            reader = csv.DictReader(requested_file)
            for row in reader:
                domain_request_id = row.get("id")
                ao_id = row.get("authorizing_official")
                if ao_id:
                    ao_id = int(ao_id)
                if domain_request_id and ao_id:
                    dict_data[int(domain_request_id)] = ao_id

        return dict_data

    def update_record(self, record: DomainRequest):
        """Defines how we update the senior official field on each record."""
        record.senior_official_id = self.ao_dict.get(record.id)
        # record.senior_official = Contact.objects.get(id=contact_id)
        logger.info(f"{TerminalColors.OKCYAN}Updating {str(record)} => {record.senior_official}{TerminalColors.ENDC}")

    def should_skip_record(self, record) -> bool:  # noqa
        """Defines the conditions in which we should skip updating a record."""
        # Don't update this record if there isn't ao data to pull from
        if self.ao_dict.get(record.id) is None:
            logger.info(
                f"{TerminalColors.YELLOW}Skipping update for {str(record)} => "
                f"Missing authorizing_official data.{TerminalColors.ENDC}"
            )
            return True
        else:
            return False
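A hypothetical invocation of this command; the command's module name is not visible in this view, so the name below is a placeholder:

```python
# Hypothetical usage sketch; the real command name is not shown in this diff.
from django.core.management import call_command

call_command(
    "populate_domain_request_senior_official",         # placeholder command name
    domain_request_csv_path="tmp/domain_request.csv",  # path must contain "request"; CSV has "id" and "authorizing_official" columns
)
```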
@@ -151,6 +151,11 @@ class Domain(TimeStampedModel, DomainHelper):
        # previously existed but has been deleted from the registry
        DELETED = "deleted", "Deleted"

        @classmethod
        def get_state_label(cls, state: str):
            """Returns the associated label for a given state value"""
            return cls(state).label if state else None

        @classmethod
        def get_help_text(cls, state) -> str:
            """Returns a help message for a desired state. If none is found, an empty string is returned"""
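A quick illustration of the new classmethods; the inner choices class name ("State") and the call sites are assumptions, while the "Deleted" label comes from the choice shown above:

```python
# Illustration only; "State" as the inner TextChoices class name is an assumption.
Domain.State.get_state_label(Domain.State.DELETED)  # -> "Deleted"
Domain.State.get_state_label(None)                  # -> None (no state, no label)
Domain.State.get_help_text(Domain.State.DELETED)    # -> help text for the state, or "" if none is defined
```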
@@ -136,6 +136,13 @@ class DomainRequest(TimeStampedModel):
        @classmethod
        def get_org_label(cls, org_name: str):
            """Returns the associated label for a given org name"""
            # This is an edge case for domains with no org.
            # It is unlikely to happen, but without this, certain code paths
            # (more specifically, CSV exports) will break.
            if not org_name:
                return None

            org_names = org_name.split("_election")
            if len(org_names) > 0:
                org_name = org_names[0]
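An illustration of the `_election` handling; the enclosing choices class and the example value are assumptions, since neither is shown in this hunk:

```python
# Illustration only; the enclosing choices class name is not shown in this hunk,
# so OrgChoices below is a stand-in, and the example value is assumed.
OrgChoices.get_org_label(None)                        # -> None instead of raising on a missing org
OrgChoices.get_org_label("school_district_election")  # label is looked up for the base "school_district" value
```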
@@ -1,21 +1,23 @@
import csv
import io
from django.test import Client, RequestFactory
from io import StringIO
from registrar.models.domain_request import DomainRequest
from registrar.models.domain import Domain
from registrar.models.utility.generic_helper import convert_queryset_to_dict
from registrar.utility.csv_export import (
    export_data_managed_domains_to_csv,
    export_data_unmanaged_domains_to_csv,
    get_sliced_domains,
    get_sliced_requests,
    write_csv_for_domains,
    DomainDataFull,
    DomainDataType,
    DomainDataFederal,
    DomainGrowth,
    DomainManaged,
    DomainUnmanaged,
    DomainExport,
    DomainRequestExport,
    DomainRequestGrowth,
    DomainRequestDataFull,
    get_default_start_date,
    get_default_end_date,
    DomainRequestExport,
)

from django.db.models import Case, When
from django.core.management import call_command
from unittest.mock import MagicMock, call, mock_open, patch
from api.views import get_current_federal, get_current_full
@ -45,10 +47,10 @@ class CsvReportsTest(MockDb):
|
|||
fake_open = mock_open()
|
||||
expected_file_content = [
|
||||
call("Domain name,Domain type,Agency,Organization name,City,State,Security contact email\r\n"),
|
||||
call("cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,, \r\n"),
|
||||
call("cdomain1.gov,Federal - Executive,World War I Centennial Commission,,,, \r\n"),
|
||||
call("adomain10.gov,Federal,Armed Forces Retirement Home,,,, \r\n"),
|
||||
call("ddomain3.gov,Federal,Armed Forces Retirement Home,,,, \r\n"),
|
||||
call("cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,,\r\n"),
|
||||
call("cdomain1.gov,Federal - Executive,World War I Centennial Commission,,,,\r\n"),
|
||||
call("adomain10.gov,Federal,Armed Forces Retirement Home,,,,\r\n"),
|
||||
call("ddomain3.gov,Federal,Armed Forces Retirement Home,,,,\r\n"),
|
||||
]
|
||||
# We don't actually want to write anything for a test case,
|
||||
# we just want to verify what is being written.
|
||||
|
@ -67,11 +69,12 @@ class CsvReportsTest(MockDb):
|
|||
fake_open = mock_open()
|
||||
expected_file_content = [
|
||||
call("Domain name,Domain type,Agency,Organization name,City,State,Security contact email\r\n"),
|
||||
call("cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,, \r\n"),
|
||||
call("cdomain1.gov,Federal - Executive,World War I Centennial Commission,,,, \r\n"),
|
||||
call("adomain10.gov,Federal,Armed Forces Retirement Home,,,, \r\n"),
|
||||
call("ddomain3.gov,Federal,Armed Forces Retirement Home,,,, \r\n"),
|
||||
call("adomain2.gov,Interstate,,,,, \r\n"),
|
||||
call("cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,,\r\n"),
|
||||
call("cdomain1.gov,Federal - Executive,World War I Centennial Commission,,,,\r\n"),
|
||||
call("adomain10.gov,Federal,Armed Forces Retirement Home,,,,\r\n"),
|
||||
call("ddomain3.gov,Federal,Armed Forces Retirement Home,,,,\r\n"),
|
||||
call("adomain2.gov,Interstate,,,,,\r\n"),
|
||||
call("zdomain12.gov,Interstate,,,,,\r\n"),
|
||||
]
|
||||
# We don't actually want to write anything for a test case,
|
||||
# we just want to verify what is being written.
|
||||
|
@ -202,494 +205,299 @@ class ExportDataTest(MockDb, MockEppLib):
|
|||
def tearDown(self):
|
||||
super().tearDown()
|
||||
|
||||
def test_export_domains_to_writer_security_emails_and_first_ready(self):
|
||||
"""Test that export_domains_to_writer returns the
|
||||
expected security email and first_ready value"""
|
||||
@less_console_noise_decorator
|
||||
def test_domain_data_type(self):
|
||||
"""Shows security contacts, domain managers, so"""
|
||||
# Add security email information
|
||||
self.domain_1.name = "defaultsecurity.gov"
|
||||
self.domain_1.save()
|
||||
# Invoke setter
|
||||
self.domain_1.security_contact
|
||||
# Invoke setter
|
||||
self.domain_2.security_contact
|
||||
# Invoke setter
|
||||
self.domain_3.security_contact
|
||||
# Add a first ready date on the first domain. Leaving the others blank.
|
||||
self.domain_1.first_ready = get_default_start_date()
|
||||
self.domain_1.save()
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
# Call the export functions
|
||||
DomainDataType.export_data_to_csv(csv_file)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Status,First ready on,Expiration date,Domain type,Agency,Organization name,City,State,SO,"
|
||||
"SO email,Security contact email,Domain managers,Invited domain managers\n"
|
||||
"cdomain11.gov,Ready,2024-04-02,(blank),Federal - Executive,World War I Centennial Commission,,,, ,,,"
|
||||
"meoward@rocks.com,\n"
|
||||
"defaultsecurity.gov,Ready,2023-11-01,(blank),Federal - Executive,World War I Centennial Commission,,,"
|
||||
', ,,dotgov@cisa.dhs.gov,"meoward@rocks.com, info@example.com, big_lebowski@dude.co",'
|
||||
"woofwardthethird@rocks.com\n"
|
||||
"adomain10.gov,Ready,2024-04-03,(blank),Federal,Armed Forces Retirement Home,,,, ,,,,"
|
||||
"squeaker@rocks.com\n"
|
||||
"bdomain4.gov,Unknown,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"bdomain5.gov,Deleted,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"bdomain6.gov,Deleted,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"ddomain3.gov,On hold,(blank),2023-11-15,Federal,Armed Forces Retirement Home,,,, ,,"
|
||||
"security@mail.gov,,\n"
|
||||
"sdomain8.gov,Deleted,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"xdomain7.gov,Deleted,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"zdomain9.gov,Deleted,(blank),(blank),Federal,Armed Forces Retirement Home,,,, ,,,,\n"
|
||||
"adomain2.gov,Dns needed,(blank),(blank),Interstate,,,,, ,,registrar@dotgov.gov,"
|
||||
"meoward@rocks.com,squeaker@rocks.com\n"
|
||||
"zdomain12.gov,Ready,2024-04-02,(blank),Interstate,,,,, ,,,meoward@rocks.com,\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
with less_console_noise():
|
||||
# Add security email information
|
||||
self.domain_1.name = "defaultsecurity.gov"
|
||||
self.domain_1.save()
|
||||
# Invoke setter
|
||||
self.domain_1.security_contact
|
||||
# Invoke setter
|
||||
self.domain_2.security_contact
|
||||
# Invoke setter
|
||||
self.domain_3.security_contact
|
||||
@less_console_noise_decorator
|
||||
def test_domain_data_full(self):
|
||||
"""Shows security contacts, filtered by state"""
|
||||
# Add security email information
|
||||
self.domain_1.name = "defaultsecurity.gov"
|
||||
self.domain_1.save()
|
||||
# Invoke setter
|
||||
self.domain_1.security_contact
|
||||
# Invoke setter
|
||||
self.domain_2.security_contact
|
||||
# Invoke setter
|
||||
self.domain_3.security_contact
|
||||
# Add a first ready date on the first domain. Leaving the others blank.
|
||||
self.domain_1.first_ready = get_default_start_date()
|
||||
self.domain_1.save()
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
# Call the export functions
|
||||
DomainDataFull.export_data_to_csv(csv_file)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,State,Security contact email\n"
|
||||
"cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,,\n"
|
||||
"defaultsecurity.gov,Federal - Executive,World War I Centennial Commission,,,,dotgov@cisa.dhs.gov\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,,,,\n"
|
||||
"ddomain3.gov,Federal,Armed Forces Retirement Home,,,,security@mail.gov\n"
|
||||
"adomain2.gov,Interstate,,,,,registrar@dotgov.gov\n"
|
||||
"zdomain12.gov,Interstate,,,,,\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
# Add a first ready date on the first domain. Leaving the others blank.
|
||||
self.domain_1.first_ready = get_default_start_date()
|
||||
self.domain_1.save()
|
||||
@less_console_noise_decorator
|
||||
def test_domain_data_federal(self):
|
||||
"""Shows security contacts, filtered by state and org type"""
|
||||
# Add security email information
|
||||
self.domain_1.name = "defaultsecurity.gov"
|
||||
self.domain_1.save()
|
||||
# Invoke setter
|
||||
self.domain_1.security_contact
|
||||
# Invoke setter
|
||||
self.domain_2.security_contact
|
||||
# Invoke setter
|
||||
self.domain_3.security_contact
|
||||
# Add a first ready date on the first domain. Leaving the others blank.
|
||||
self.domain_1.first_ready = get_default_start_date()
|
||||
self.domain_1.save()
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
# Call the export functions
|
||||
DomainDataFederal.export_data_to_csv(csv_file)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,State,Security contact email\n"
|
||||
"cdomain11.gov,Federal - Executive,World War I Centennial Commission,,,,\n"
|
||||
"defaultsecurity.gov,Federal - Executive,World War I Centennial Commission,,,,dotgov@cisa.dhs.gov\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,,,,\n"
|
||||
"ddomain3.gov,Federal,Armed Forces Retirement Home,,,,security@mail.gov\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
# Define columns, sort fields, and filter condition
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"SO",
|
||||
"SO email",
|
||||
"Security contact email",
|
||||
"Status",
|
||||
"Expiration date",
|
||||
"First ready on",
|
||||
]
|
||||
sort_fields = ["domain__name"]
|
||||
filter_condition = {
|
||||
"domain__state__in": [
|
||||
Domain.State.READY,
|
||||
Domain.State.DNS_NEEDED,
|
||||
Domain.State.ON_HOLD,
|
||||
],
|
||||
}
|
||||
|
||||
# Call the export functions
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields,
|
||||
filter_condition,
|
||||
should_get_domain_managers=False,
|
||||
should_write_header=True,
|
||||
@less_console_noise_decorator
|
||||
def test_domain_growth(self):
|
||||
"""Shows ready and deleted domains within a date range, sorted"""
|
||||
# Remove "Created at" and "First ready" because we can't guess this immutable, dynamically generated test data
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"Status",
|
||||
"Expiration date",
|
||||
# "Created at",
|
||||
# "First ready",
|
||||
"Deleted",
|
||||
]
|
||||
sort = {
|
||||
"custom_sort": Case(
|
||||
When(domain__state=Domain.State.READY, then="domain__created_at"),
|
||||
When(domain__state=Domain.State.DELETED, then="domain__deleted"),
|
||||
)
|
||||
}
|
||||
with patch("registrar.utility.csv_export.DomainGrowth.get_columns", return_value=columns):
|
||||
with patch("registrar.utility.csv_export.DomainGrowth.get_annotations_for_sort", return_value=sort):
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
# Call the export functions
|
||||
DomainGrowth.export_data_to_csv(
|
||||
csv_file,
|
||||
self.start_date.strftime("%Y-%m-%d"),
|
||||
self.end_date.strftime("%Y-%m-%d"),
|
||||
)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains first, created between day-2 and day+2, sorted by created_at then name
|
||||
# and DELETED domains deleted between day-2 and day+2, sorted by deleted then name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,"
|
||||
"State,Status,Expiration date, Deleted\n"
|
||||
"cdomain1.gov,Federal-Executive,World War I Centennial Commission,,,,Ready,(blank)\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,,,,Ready,(blank)\n"
|
||||
"cdomain11.govFederal-ExecutiveWorldWarICentennialCommissionReady(blank)\n"
|
||||
"zdomain12.govInterstateReady(blank)\n"
|
||||
"zdomain9.gov,Federal,ArmedForcesRetirementHome,Deleted,(blank),2024-04-01\n"
|
||||
"sdomain8.gov,Federal,Armed Forces Retirement Home,,,,Deleted,(blank),2024-04-02\n"
|
||||
"xdomain7.gov,FederalArmedForcesRetirementHome,Deleted,(blank),2024-04-02\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = (
|
||||
csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
)
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,State,SO,"
|
||||
"SO email,Security contact email,Status,Expiration date, First ready on\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,Ready,(blank),2024-04-03\n"
|
||||
"adomain2.gov,Interstate,(blank),Dns needed,(blank),(blank)\n"
|
||||
"cdomain11.gov,Federal-Executive,WorldWarICentennialCommission,Ready,(blank),2024-04-02\n"
|
||||
"ddomain3.gov,Federal,Armed Forces Retirement Home,security@mail.gov,On hold,2023-11-15,(blank)\n"
|
||||
"defaultsecurity.gov,Federal - Executive,World War I Centennial Commission,"
|
||||
"(blank),Ready,(blank),2023-11-01\n"
|
||||
"zdomain12.govInterstateReady,(blank),2024-04-02\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_write_csv_for_domains(self):
|
||||
"""Test that write_body returns the
|
||||
existing domain, test that sort by domain name works,
|
||||
test that filter works"""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
|
||||
# Define columns, sort fields, and filter condition
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"SO",
|
||||
"SO email",
|
||||
"Submitter",
|
||||
"Submitter title",
|
||||
"Submitter email",
|
||||
"Submitter phone",
|
||||
"Security contact email",
|
||||
"Status",
|
||||
]
|
||||
sort_fields = ["domain__name"]
|
||||
filter_condition = {
|
||||
"domain__state__in": [
|
||||
Domain.State.READY,
|
||||
Domain.State.DNS_NEEDED,
|
||||
Domain.State.ON_HOLD,
|
||||
],
|
||||
}
|
||||
# Call the export functions
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields,
|
||||
filter_condition,
|
||||
should_get_domain_managers=False,
|
||||
should_write_header=True,
|
||||
)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,State,SO,"
|
||||
"SO email,Submitter,Submitter title,Submitter email,Submitter phone,"
|
||||
"Security contact email,Status\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,Ready\n"
|
||||
"adomain2.gov,Interstate,Dns needed\n"
|
||||
"cdomain11.govFederal-ExecutiveWorldWarICentennialCommissionReady\n"
|
||||
"cdomain1.gov,Federal - Executive,World War I Centennial Commission,Ready\n"
|
||||
"ddomain3.gov,Federal,Armed Forces Retirement Home,On hold\n"
|
||||
"zdomain12.govInterstateReady\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_write_domains_body_additional(self):
|
||||
"""An additional test for filters and multi-column sort"""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
# Define columns, sort fields, and filter condition
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"Security contact email",
|
||||
]
|
||||
sort_fields = ["domain__name", "federal_agency", "generic_org_type"]
|
||||
filter_condition = {
|
||||
"generic_org_type__icontains": "federal",
|
||||
"domain__state__in": [
|
||||
Domain.State.READY,
|
||||
Domain.State.DNS_NEEDED,
|
||||
Domain.State.ON_HOLD,
|
||||
],
|
||||
}
|
||||
# Call the export functions
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields,
|
||||
filter_condition,
|
||||
should_get_domain_managers=False,
|
||||
should_write_header=True,
|
||||
)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# federal only
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,"
|
||||
"State,Security contact email\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home\n"
|
||||
"cdomain11.govFederal-ExecutiveWorldWarICentennialCommission\n"
|
||||
"cdomain1.gov,Federal - Executive,World War I Centennial Commission\n"
|
||||
"ddomain3.gov,Federal,Armed Forces Retirement Home\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_write_domains_body_with_date_filter_pulls_domains_in_range(self):
|
||||
"""Test that domains that are
|
||||
1. READY and their first_ready dates are in range
|
||||
2. DELETED and their deleted dates are in range
|
||||
are pulled when the growth report conditions are applied to export_domains_to_writer.
|
||||
Test that ready domains are sorted by first_ready/deleted dates first, names second.
|
||||
|
||||
We considered testing export_data_domain_growth_to_csv which calls write_body
|
||||
and would have been easy to set up, but expected_content would contain created_at dates
|
||||
which are hard to mock.
|
||||
|
||||
TODO: Simplify if created_at is not needed for the report."""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
# Define columns, sort fields, and filter condition
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"Status",
|
||||
"Expiration date",
|
||||
]
|
||||
sort_fields = [
|
||||
"created_at",
|
||||
"domain__name",
|
||||
]
|
||||
sort_fields_for_deleted_domains = [
|
||||
"domain__deleted",
|
||||
"domain__name",
|
||||
]
|
||||
filter_condition = {
|
||||
"domain__state__in": [
|
||||
Domain.State.READY,
|
||||
],
|
||||
"domain__first_ready__lte": self.end_date,
|
||||
"domain__first_ready__gte": self.start_date,
|
||||
}
|
||||
filter_conditions_for_deleted_domains = {
|
||||
"domain__state__in": [
|
||||
Domain.State.DELETED,
|
||||
],
|
||||
"domain__deleted__lte": self.end_date,
|
||||
"domain__deleted__gte": self.start_date,
|
||||
}
|
||||
|
||||
# Call the export functions
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields,
|
||||
filter_condition,
|
||||
should_get_domain_managers=False,
|
||||
should_write_header=True,
|
||||
)
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields_for_deleted_domains,
|
||||
filter_conditions_for_deleted_domains,
|
||||
should_get_domain_managers=False,
|
||||
should_write_header=False,
|
||||
)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
|
||||
# We expect READY domains first, created between day-2 and day+2, sorted by created_at then name
|
||||
# and DELETED domains deleted between day-2 and day+2, sorted by deleted then name
|
||||
expected_content = (
|
||||
"Domain name,Domain type,Agency,Organization name,City,"
|
||||
"State,Status,Expiration date\n"
|
||||
"cdomain1.gov,Federal-Executive,World War I Centennial Commission,,,,Ready,(blank)\n"
|
||||
"adomain10.gov,Federal,Armed Forces Retirement Home,,,,Ready,(blank)\n"
|
||||
"cdomain11.govFederal-ExecutiveWorldWarICentennialCommissionReady(blank)\n"
|
||||
"zdomain12.govInterstateReady(blank)\n"
|
||||
"zdomain9.gov,Federal,ArmedForcesRetirementHome,Deleted,(blank)\n"
|
||||
"sdomain8.gov,Federal,Armed Forces Retirement Home,,,,Deleted,(blank)\n"
|
||||
"xdomain7.gov,FederalArmedForcesRetirementHome,Deleted,(blank)\n"
|
||||
)
|
||||
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_export_domains_to_writer_domain_managers(self):
|
||||
"""Test that export_domains_to_writer returns the
|
||||
expected domain managers.
|
||||
@less_console_noise_decorator
|
||||
def test_domain_managed(self):
|
||||
"""Shows ready and deleted domains by an end date, sorted
|
||||
|
||||
An invited user, woofwardthethird, should also be pulled into this report.
|
||||
|
||||
squeaker@rocks.com is invited to domain2 (DNS_NEEDED) and domain10 (No managers).
|
||||
She should show twice in this report but not in test_export_data_managed_domains_to_csv."""
|
||||
She should show twice in this report but not in test_DomainManaged."""
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
# Call the export functions
|
||||
DomainManaged.export_data_to_csv(
|
||||
csv_file,
|
||||
self.start_date.strftime("%Y-%m-%d"),
|
||||
self.end_date.strftime("%Y-%m-%d"),
|
||||
)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect the READY domain names with the domain managers: Their counts, and listing at end_date.
|
||||
expected_content = (
|
||||
"MANAGED DOMAINS COUNTS AT START DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"0,0,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"MANAGED DOMAINS COUNTS AT END DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,"
|
||||
"Special district,School district,Election office\n"
|
||||
"3,2,1,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"Domain name,Domain type,Domain managers,Invited domain managers\n"
|
||||
"cdomain11.gov,Federal - Executive,meoward@rocks.com,\n"
|
||||
'cdomain1.gov,Federal - Executive,"meoward@rocks.com, info@example.com, big_lebowski@dude.co",'
|
||||
"woofwardthethird@rocks.com\n"
|
||||
"zdomain12.gov,Interstate,meoward@rocks.com,\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
with less_console_noise():
|
||||
@less_console_noise_decorator
|
||||
def test_domain_unmanaged(self):
|
||||
"""Shows unmanaged domains by an end date, sorted"""
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
DomainUnmanaged.export_data_to_csv(
|
||||
csv_file, self.start_date.strftime("%Y-%m-%d"), self.end_date.strftime("%Y-%m-%d")
|
||||
)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
|
||||
# We expect the READY domain names with the domain managers: Their counts, and listing at end_date.
|
||||
expected_content = (
|
||||
"UNMANAGED DOMAINS AT START DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"0,0,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"UNMANAGED DOMAINS AT END DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"1,1,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"Domain name,Domain type\n"
|
||||
"adomain10.gov,Federal\n"
|
||||
)
|
||||
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
@less_console_noise_decorator
|
||||
def test_domain_request_growth(self):
|
||||
"""Shows submitted requests within a date range, sorted"""
|
||||
# Remove "Submitted at" because we can't guess this immutable, dynamically generated test data
|
||||
columns = [
|
||||
"Domain request",
|
||||
"Domain type",
|
||||
"Federal type",
|
||||
# "Submitted at",
|
||||
]
|
||||
with patch("registrar.utility.csv_export.DomainRequestGrowth.get_columns", return_value=columns):
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
# Define columns, sort fields, and filter condition
|
||||
columns = [
|
||||
"Domain name",
|
||||
"Status",
|
||||
"Expiration date",
|
||||
"Domain type",
|
||||
"Agency",
|
||||
"Organization name",
|
||||
"City",
|
||||
"State",
|
||||
"SO",
|
||||
"SO email",
|
||||
"Security contact email",
|
||||
]
|
||||
sort_fields = ["domain__name"]
|
||||
filter_condition = {
|
||||
"domain__state__in": [
|
||||
Domain.State.READY,
|
||||
Domain.State.DNS_NEEDED,
|
||||
Domain.State.ON_HOLD,
|
||||
],
|
||||
}
|
||||
|
||||
# Call the export functions
|
||||
write_csv_for_domains(
|
||||
writer,
|
||||
columns,
|
||||
sort_fields,
|
||||
filter_condition,
|
||||
should_get_domain_managers=True,
|
||||
should_write_header=True,
|
||||
DomainRequestGrowth.export_data_to_csv(
|
||||
csv_file,
|
||||
self.start_date.strftime("%Y-%m-%d"),
|
||||
self.end_date.strftime("%Y-%m-%d"),
|
||||
)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains,
|
||||
# sorted alphabetically by domain name
|
||||
expected_content = (
|
||||
"Domain name,Status,Expiration date,Domain type,Agency,"
|
||||
"Organization name,City,State,SO,SO email,"
|
||||
"Security contact email,Domain manager 1,DM1 status,Domain manager 2,DM2 status,"
|
||||
"Domain manager 3,DM3 status,Domain manager 4,DM4 status\n"
|
||||
"adomain10.gov,Ready,(blank),Federal,Armed Forces Retirement Home,,,, , ,squeaker@rocks.com, I\n"
|
||||
"adomain2.gov,Dns needed,(blank),Interstate,,,,, , , ,meoward@rocks.com, R,squeaker@rocks.com, I\n"
|
||||
"cdomain11.govReady,(blank),Federal-ExecutiveWorldWarICentennialCommissionmeoward@rocks.comR\n"
|
||||
"cdomain1.gov,Ready,(blank),Federal - Executive,World War I Centennial Commission,,,"
|
||||
", , , ,meoward@rocks.com,R,info@example.com,R,big_lebowski@dude.co,R,"
|
||||
"woofwardthethird@rocks.com,I\n"
|
||||
"ddomain3.gov,On hold,(blank),Federal,Armed Forces Retirement Home,,,, , , ,,\n"
|
||||
"zdomain12.gov,Ready,(blank),Interstate,meoward@rocks.com,R\n"
|
||||
)
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_export_data_managed_domains_to_csv(self):
|
||||
"""Test get counts for domains that have domain managers for two different dates,
|
||||
get list of managed domains at end_date.
|
||||
|
||||
An invited user, woofwardthethird, should also be pulled into this report."""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
export_data_managed_domains_to_csv(
|
||||
csv_file, self.start_date.strftime("%Y-%m-%d"), self.end_date.strftime("%Y-%m-%d")
|
||||
)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
|
||||
# We expect the READY domain names with the domain managers: Their counts, and listing at end_date.
|
||||
expected_content = (
|
||||
"MANAGED DOMAINS COUNTS AT START DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"0,0,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"MANAGED DOMAINS COUNTS AT END DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,"
|
||||
"Special district,School district,Election office\n"
|
||||
"3,2,1,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"Domain name,Domain type,Domain manager 1,DM1 status,Domain manager 2,DM2 status,"
|
||||
"Domain manager 3,DM3 status,Domain manager 4,DM4 status\n"
|
||||
"cdomain11.govFederal-Executivemeoward@rocks.com, R\n"
|
||||
"cdomain1.gov,Federal - Executive,meoward@rocks.com,R,info@example.com,R,"
|
||||
"big_lebowski@dude.co,R,woofwardthethird@rocks.com,I\n"
|
||||
"zdomain12.govInterstatemeoward@rocks.com,R\n"
|
||||
)
|
||||
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_export_data_unmanaged_domains_to_csv(self):
|
||||
"""Test get counts for domains that do not have domain managers for two different dates,
|
||||
get list of unmanaged domains at end_date."""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
export_data_unmanaged_domains_to_csv(
|
||||
csv_file, self.start_date.strftime("%Y-%m-%d"), self.end_date.strftime("%Y-%m-%d")
|
||||
)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
|
||||
# We expect the READY domain names with the domain managers: Their counts, and listing at end_date.
|
||||
expected_content = (
|
||||
"UNMANAGED DOMAINS AT START DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"0,0,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"UNMANAGED DOMAINS AT END DATE\n"
|
||||
"Total,Federal,Interstate,State or territory,Tribal,County,City,Special district,"
|
||||
"School district,Election office\n"
|
||||
"1,1,0,0,0,0,0,0,0,0\n"
|
||||
"\n"
|
||||
"Domain name,Domain type\n"
|
||||
"adomain10.gov,Federal\n"
|
||||
)
|
||||
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
|
||||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
def test_write_requests_body_with_date_filter_pulls_requests_in_range(self):
|
||||
"""Test that requests that are
|
||||
1. SUBMITTED and their submission_date are in range
|
||||
are pulled when the growth report conditions are applied to export_requests_to_writer.
|
||||
Test that requests are sorted by requested domain name.
|
||||
"""
|
||||
|
||||
with less_console_noise():
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
# Define columns, sort fields, and filter condition
|
||||
# We'll skip submission date because it's dynamic and therefore
|
||||
# impossible to set in expected_content
|
||||
columns = ["Domain request", "Domain type", "Federal type"]
|
||||
sort_fields = [
|
||||
"requested_domain__name",
|
||||
]
|
||||
filter_condition = {
|
||||
"status": DomainRequest.DomainRequestStatus.SUBMITTED,
|
||||
"submission_date__lte": self.end_date,
|
||||
"submission_date__gte": self.start_date,
|
||||
}
|
||||
|
||||
additional_values = ["requested_domain__name"]
|
||||
all_requests = DomainRequest.objects.filter(**filter_condition).order_by(*sort_fields).distinct()
|
||||
annotated_requests = DomainRequestExport.annotate_and_retrieve_fields(all_requests, {}, additional_values)
|
||||
requests_dict = convert_queryset_to_dict(annotated_requests, is_model=False)
|
||||
DomainRequestExport.write_csv_for_requests(writer, columns, requests_dict)
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
# We expect READY domains first, created between today-2 and today+2, sorted by created_at then name
|
||||
# and DELETED domains deleted between today-2 and today+2, sorted by deleted then name
|
||||
expected_content = (
|
||||
"Domain request,Domain type,Federal type\n"
|
||||
"city3.gov,Federal,Executive\n"
|
||||
|
@ -705,68 +513,82 @@ class ExportDataTest(MockDb, MockEppLib):
|
|||
self.assertEqual(csv_content, expected_content)
|
||||
|
||||
@less_console_noise_decorator
|
||||
def test_full_domain_request_report(self):
|
||||
def test_domain_request_data_full(self):
|
||||
"""Tests the full domain request report."""
|
||||
|
||||
# Create a CSV file in memory
|
||||
csv_file = StringIO()
|
||||
writer = csv.writer(csv_file)
|
||||
|
||||
# Call the report. Get existing fields from the report itself.
|
||||
annotations = DomainRequestExport._full_domain_request_annotations()
|
||||
additional_values = [
|
||||
"requested_domain__name",
|
||||
"federal_agency__agency",
|
||||
"senior_official__first_name",
|
||||
"senior_official__last_name",
|
||||
"senior_official__email",
|
||||
"senior_official__title",
|
||||
"creator__first_name",
|
||||
"creator__last_name",
|
||||
"creator__email",
|
||||
"investigator__email",
|
||||
# Remove "Submitted at" because we can't guess this immutable, dynamically generated test data
|
||||
columns = [
|
||||
"Domain request",
|
||||
# "Submitted at",
|
||||
"Status",
|
||||
"Domain type",
|
||||
"Federal type",
|
||||
"Federal agency",
|
||||
"Organization name",
|
||||
"Election office",
|
||||
"City",
|
||||
"State/territory",
|
||||
"Region",
|
||||
"Creator first name",
|
||||
"Creator last name",
|
||||
"Creator email",
|
||||
"Creator approved domains count",
|
||||
"Creator active requests count",
|
||||
"Alternative domains",
|
||||
"SO first name",
|
||||
"SO last name",
|
||||
"SO email",
|
||||
"SO title/role",
|
||||
"Request purpose",
|
||||
"Request additional details",
|
||||
"Other contacts",
|
||||
"CISA regional representative",
|
||||
"Current websites",
|
||||
"Investigator",
|
||||
]
|
||||
requests = DomainRequest.objects.exclude(status=DomainRequest.DomainRequestStatus.STARTED)
|
||||
annotated_requests = DomainRequestExport.annotate_and_retrieve_fields(requests, annotations, additional_values)
|
||||
requests_dict = convert_queryset_to_dict(annotated_requests, is_model=False)
|
||||
DomainRequestExport.write_csv_for_requests(writer, DomainRequestExport.all_columns, requests_dict)
|
||||
|
||||
# Reset the CSV file's position to the beginning
|
||||
csv_file.seek(0)
|
||||
# Read the content into a variable
|
||||
csv_content = csv_file.read()
|
||||
expected_content = (
|
||||
# Header
|
||||
"Domain request,Submitted at,Status,Domain type,Federal type,"
|
||||
"Federal agency,Organization name,Election office,City,State/territory,"
|
||||
"Region,Creator first name,Creator last name,Creator email,Creator approved domains count,"
|
||||
"Creator active requests count,Alternative domains,SO first name,SO last name,SO email,"
|
||||
"SO title/role,Request purpose,Request additional details,Other contacts,"
|
||||
"CISA regional representative,Current websites,Investigator\n"
|
||||
# Content
|
||||
"city2.gov,,In review,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,city1.gov,Testy,Tester,testy@town.com,"
|
||||
"Chief Tester,Purpose of the site,There is more,Testy Tester testy2@town.com,,city.com,\n"
|
||||
"city3.gov,2024-04-02,Submitted,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,"
|
||||
"cheeseville.gov | city1.gov | igorville.gov,Testy,Tester,testy@town.com,Chief Tester,"
|
||||
"Purpose of the site,CISA-first-name CISA-last-name | There is more,Meow Tester24 te2@town.com | "
|
||||
"Testy1232 Tester24 te2@town.com | Testy Tester testy2@town.com,test@igorville.com,"
|
||||
"city.com | https://www.example2.com | https://www.example.com,\n"
|
||||
"city4.gov,2024-04-02,Submitted,City,Executive,,Testorg,Yes,,NY,2,,,,0,1,city1.gov,Testy,Tester,"
|
||||
"testy@town.com,Chief Tester,Purpose of the site,CISA-first-name CISA-last-name | There is more,"
|
||||
"Testy Tester testy2@town.com,cisaRep@igorville.gov,city.com,\n"
|
||||
"city5.gov,,Approved,Federal,Executive,,Testorg,N/A,,NY,2,,,,1,0,city1.gov,Testy,Tester,testy@town.com,"
|
||||
"Chief Tester,Purpose of the site,There is more,Testy Tester testy2@town.com,,city.com,\n"
|
||||
"city6.gov,2024-04-02,Submitted,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,city1.gov,Testy,Tester,"
|
||||
"testy@town.com,Chief Tester,Purpose of the site,CISA-first-name CISA-last-name | There is more,"
|
||||
"Testy Tester testy2@town.com,cisaRep@igorville.gov,city.com,"
|
||||
)
|
||||
|
||||
# Normalize line endings and remove commas,
|
||||
# spaces and leading/trailing whitespace
|
||||
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
|
||||
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
|
||||
|
||||
self.assertEqual(csv_content, expected_content)
with patch("registrar.utility.csv_export.DomainRequestDataFull.get_columns", return_value=columns):
# Create a CSV file in memory
csv_file = StringIO()
# Call the export functions
DomainRequestDataFull.export_data_to_csv(csv_file)
# Reset the CSV file's position to the beginning
csv_file.seek(0)
# Read the content into a variable
csv_content = csv_file.read()
print(csv_content)
expected_content = (
# Header
"Domain request,Status,Domain type,Federal type,"
"Federal agency,Organization name,Election office,City,State/territory,"
"Region,Creator first name,Creator last name,Creator email,Creator approved domains count,"
"Creator active requests count,Alternative domains,SO first name,SO last name,SO email,"
"SO title/role,Request purpose,Request additional details,Other contacts,"
"CISA regional representative,Current websites,Investigator\n"
# Content
"city5.gov,,Approved,Federal,Executive,,Testorg,N/A,,NY,2,,,,1,0,city1.gov,Testy,Tester,testy@town.com,"
"Chief Tester,Purpose of the site,There is more,Testy Tester testy2@town.com,,city.com,\n"
"city2.gov,,In review,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,city1.gov,Testy,Tester,"
"testy@town.com,"
"Chief Tester,Purpose of the site,There is more,Testy Tester testy2@town.com,,city.com,\n"
'city3.gov,Submitted,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,"cheeseville.gov, city1.gov,'
'igorville.gov",Testy,Tester,testy@town.com,Chief Tester,Purpose of the site,CISA-first-name '
"CISA-last-name "
'| There is more,"Meow Tester24 te2@town.com, Testy1232 Tester24 te2@town.com, Testy Tester '
'testy2@town.com"'
',test@igorville.com,"city.com, https://www.example2.com, https://www.example.com",\n'
"city4.gov,Submitted,City,Executive,,Testorg,Yes,,NY,2,,,,0,1,city1.gov,Testy,Tester,testy@town.com,"
"Chief Tester,Purpose of the site,CISA-first-name CISA-last-name | There is more,Testy Tester "
"testy2@town.com"
",cisaRep@igorville.gov,city.com,\n"
"city6.gov,Submitted,Federal,Executive,,Testorg,N/A,,NY,2,,,,0,1,city1.gov,Testy,Tester,testy@town.com,"
"Chief Tester,Purpose of the site,CISA-first-name CISA-last-name | There is more,Testy Tester "
"testy2@town.com,"
"cisaRep@igorville.gov,city.com,\n"
)
# Normalize line endings and remove commas,
# spaces and leading/trailing whitespace
csv_content = csv_content.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()
expected_content = expected_content.replace(",,", "").replace(",", "").replace(" ", "").strip()
self.assertEqual(csv_content, expected_content)
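The block above is the core pattern of these report tests: each report class writes CSV straight into any file-like object, so the test captures the output in memory, rewinds, and compares a normalized string. A minimal sketch of that round trip, assuming the `registrar.utility.csv_export` import path shown in the `patch()` target above (the normalization simply mirrors the test's own `replace()` chain):

```python
from io import StringIO

# Import path assumed from the patch target used in the test above.
from registrar.utility.csv_export import DomainRequestDataFull


def normalize(text: str) -> str:
    # Same normalization the test applies: drop commas and spaces, unify line endings.
    return text.replace(",,", "").replace(",", "").replace(" ", "").replace("\r\n", "\n").strip()


csv_file = StringIO()                               # in-memory stand-in for an HttpResponse
DomainRequestDataFull.export_data_to_csv(csv_file)  # shared entry point on every report class
csv_file.seek(0)                                    # rewind before reading the rendered CSV

csv_content = normalize(csv_file.read())
# The assertions above compare this against a normalized expected string.
```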

class HelperFunctions(MockDb):
@@ -792,12 +614,12 @@ class HelperFunctions(MockDb):
"domain__first_ready__lte": self.end_date,
}
# Test with distinct
managed_domains_sliced_at_end_date = get_sliced_domains(filter_condition)
managed_domains_sliced_at_end_date = DomainExport.get_sliced_domains(filter_condition)
expected_content = [3, 2, 1, 0, 0, 0, 0, 0, 0, 0]
self.assertEqual(managed_domains_sliced_at_end_date, expected_content)

# Test without distinct
managed_domains_sliced_at_end_date = get_sliced_domains(filter_condition)
managed_domains_sliced_at_end_date = DomainExport.get_sliced_domains(filter_condition)
expected_content = [3, 2, 1, 0, 0, 0, 0, 0, 0, 0]
self.assertEqual(managed_domains_sliced_at_end_date, expected_content)
@@ -809,6 +631,6 @@ class HelperFunctions(MockDb):
"status": DomainRequest.DomainRequestStatus.SUBMITTED,
"submission_date__lte": self.end_date,
}
submitted_requests_sliced_at_end_date = get_sliced_requests(filter_condition)
submitted_requests_sliced_at_end_date = DomainRequestExport.get_sliced_requests(filter_condition)
expected_content = [3, 2, 0, 0, 0, 0, 1, 0, 0, 1]
self.assertEqual(submitted_requests_sliced_at_end_date, expected_content)
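These last two hunks show the helper side of the same refactor: the free functions `get_sliced_domains` and `get_sliced_requests` become classmethods on `DomainExport` and `DomainRequestExport`, still taking a plain filter dict and returning a fixed-length list of counts. A minimal sketch of the new calling convention, using filter keys taken from the hunks in this diff (the model and module import paths are assumptions):

```python
from datetime import date

# Paths assumed; the diff only names the csv_export module via its patch target.
from registrar.models import DomainRequest
from registrar.utility.csv_export import DomainExport, DomainRequestExport

end_date = date.today()  # the tests use self.end_date from the MockDb fixture

# Managed domains that were ready on or before the end date.
domain_counts = DomainExport.get_sliced_domains(
    {"domain__permissions__isnull": False, "domain__first_ready__lte": end_date}
)

# Submitted domain requests over the same window.
request_counts = DomainRequestExport.get_sliced_requests(
    {"status": DomainRequest.DomainRequestStatus.SUBMITTED, "submission_date__lte": end_date}
)

# Both return a list of integer counts, e.g. [3, 2, 1, 0, ...] as asserted above.
```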
File diff suppressed because it is too large
@@ -49,8 +49,10 @@ class AnalyticsView(View):
"domain__permissions__isnull": False,
"domain__first_ready__lte": end_date_formatted,
}
managed_domains_sliced_at_start_date = csv_export.get_sliced_domains(filter_managed_domains_start_date)
managed_domains_sliced_at_end_date = csv_export.get_sliced_domains(filter_managed_domains_end_date)
managed_domains_sliced_at_start_date = csv_export.DomainExport.get_sliced_domains(
filter_managed_domains_start_date
)
managed_domains_sliced_at_end_date = csv_export.DomainExport.get_sliced_domains(filter_managed_domains_end_date)

filter_unmanaged_domains_start_date = {
"domain__permissions__isnull": True,
@@ -60,8 +62,12 @@ class AnalyticsView(View):
"domain__permissions__isnull": True,
"domain__first_ready__lte": end_date_formatted,
}
unmanaged_domains_sliced_at_start_date = csv_export.get_sliced_domains(filter_unmanaged_domains_start_date)
unmanaged_domains_sliced_at_end_date = csv_export.get_sliced_domains(filter_unmanaged_domains_end_date)
unmanaged_domains_sliced_at_start_date = csv_export.DomainExport.get_sliced_domains(
filter_unmanaged_domains_start_date
)
unmanaged_domains_sliced_at_end_date = csv_export.DomainExport.get_sliced_domains(
filter_unmanaged_domains_end_date
)

filter_ready_domains_start_date = {
"domain__state__in": [models.Domain.State.READY],
@@ -71,8 +77,8 @@ class AnalyticsView(View):
"domain__state__in": [models.Domain.State.READY],
"domain__first_ready__lte": end_date_formatted,
}
ready_domains_sliced_at_start_date = csv_export.get_sliced_domains(filter_ready_domains_start_date)
ready_domains_sliced_at_end_date = csv_export.get_sliced_domains(filter_ready_domains_end_date)
ready_domains_sliced_at_start_date = csv_export.DomainExport.get_sliced_domains(filter_ready_domains_start_date)
ready_domains_sliced_at_end_date = csv_export.DomainExport.get_sliced_domains(filter_ready_domains_end_date)

filter_deleted_domains_start_date = {
"domain__state__in": [models.Domain.State.DELETED],
@@ -82,8 +88,10 @@ class AnalyticsView(View):
"domain__state__in": [models.Domain.State.DELETED],
"domain__deleted__lte": end_date_formatted,
}
deleted_domains_sliced_at_start_date = csv_export.get_sliced_domains(filter_deleted_domains_start_date)
deleted_domains_sliced_at_end_date = csv_export.get_sliced_domains(filter_deleted_domains_end_date)
deleted_domains_sliced_at_start_date = csv_export.DomainExport.get_sliced_domains(
filter_deleted_domains_start_date
)
deleted_domains_sliced_at_end_date = csv_export.DomainExport.get_sliced_domains(filter_deleted_domains_end_date)

filter_requests_start_date = {
"created_at__lte": start_date_formatted,
@@ -91,8 +99,8 @@ class AnalyticsView(View):
filter_requests_end_date = {
"created_at__lte": end_date_formatted,
}
requests_sliced_at_start_date = csv_export.get_sliced_requests(filter_requests_start_date)
requests_sliced_at_end_date = csv_export.get_sliced_requests(filter_requests_end_date)
requests_sliced_at_start_date = csv_export.DomainRequestExport.get_sliced_requests(filter_requests_start_date)
requests_sliced_at_end_date = csv_export.DomainRequestExport.get_sliced_requests(filter_requests_end_date)

filter_submitted_requests_start_date = {
"status": models.DomainRequest.DomainRequestStatus.SUBMITTED,
@@ -102,8 +110,12 @@ class AnalyticsView(View):
"status": models.DomainRequest.DomainRequestStatus.SUBMITTED,
"submission_date__lte": end_date_formatted,
}
submitted_requests_sliced_at_start_date = csv_export.get_sliced_requests(filter_submitted_requests_start_date)
submitted_requests_sliced_at_end_date = csv_export.get_sliced_requests(filter_submitted_requests_end_date)
submitted_requests_sliced_at_start_date = csv_export.DomainRequestExport.get_sliced_requests(
filter_submitted_requests_start_date
)
submitted_requests_sliced_at_end_date = csv_export.DomainRequestExport.get_sliced_requests(
filter_submitted_requests_end_date
)

context = dict(
# Generate a dictionary of context variables that are common across all admin templates
@@ -142,7 +154,7 @@ class ExportDataType(View):
# match the CSV example with all the fields
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = 'attachment; filename="domains-by-type.csv"'
csv_export.export_data_type_to_csv(response)
csv_export.DomainDataType.export_data_to_csv(response)
return response
@@ -151,7 +163,7 @@ class ExportDataFull(View):
# Smaller export based on 1
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = 'attachment; filename="current-full.csv"'
csv_export.export_data_full_to_csv(response)
csv_export.DomainDataFull.export_data_to_csv(response)
return response
@@ -160,7 +172,7 @@ class ExportDataFederal(View):
# Federal only
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = 'attachment; filename="current-federal.csv"'
csv_export.export_data_federal_to_csv(response)
csv_export.DomainDataFederal.export_data_to_csv(response)
return response
@@ -171,63 +183,51 @@ class ExportDomainRequestDataFull(View):
"""Returns a content disposition response for current-full-domain-request.csv"""
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = 'attachment; filename="current-full-domain-request.csv"'
csv_export.DomainRequestExport.export_full_domain_request_report(response)
csv_export.DomainRequestDataFull.export_data_to_csv(response)
return response

class ExportDataDomainsGrowth(View):
def get(self, request, *args, **kwargs):
# Get start_date and end_date from the request's GET parameters
# #999: not needed if we switch to django forms
start_date = request.GET.get("start_date", "")
end_date = request.GET.get("end_date", "")

response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = f'attachment; filename="domain-growth-report-{start_date}-to-{end_date}.csv"'
# For #999: set export_data_domain_growth_to_csv to return the resulting queryset, which we can then use
# in context to display this data in the template.
csv_export.export_data_domain_growth_to_csv(response, start_date, end_date)
csv_export.DomainGrowth.export_data_to_csv(response, start_date, end_date)

return response

class ExportDataRequestsGrowth(View):
def get(self, request, *args, **kwargs):
# Get start_date and end_date from the request's GET parameters
# #999: not needed if we switch to django forms
start_date = request.GET.get("start_date", "")
end_date = request.GET.get("end_date", "")

response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = f'attachment; filename="requests-{start_date}-to-{end_date}.csv"'
# For #999: set export_data_domain_growth_to_csv to return the resulting queryset, which we can then use
# in context to display this data in the template.
csv_export.DomainRequestExport.export_data_requests_growth_to_csv(response, start_date, end_date)
csv_export.DomainRequestGrowth.export_data_to_csv(response, start_date, end_date)

return response

class ExportDataManagedDomains(View):
def get(self, request, *args, **kwargs):
# Get start_date and end_date from the request's GET parameters
# #999: not needed if we switch to django forms
start_date = request.GET.get("start_date", "")
end_date = request.GET.get("end_date", "")
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = f'attachment; filename="managed-domains-{start_date}-to-{end_date}.csv"'
csv_export.export_data_managed_domains_to_csv(response, start_date, end_date)
csv_export.DomainManaged.export_data_to_csv(response, start_date, end_date)

return response

class ExportDataUnmanagedDomains(View):
def get(self, request, *args, **kwargs):
# Get start_date and end_date from the request's GET parameters
# #999: not needed if we switch to django forms
start_date = request.GET.get("start_date", "")
end_date = request.GET.get("end_date", "")
response = HttpResponse(content_type="text/csv")
response["Content-Disposition"] = f'attachment; filename="unamanaged-domains-{start_date}-to-{end_date}.csv"'
csv_export.export_data_unmanaged_domains_to_csv(response, start_date, end_date)
response["Content-Disposition"] = f'attachment; filename="unmanaged-domains-{start_date}-to-{end_date}.csv"'
csv_export.DomainUnmanaged.export_data_to_csv(response, start_date, end_date)

return response
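Every export view in this file now follows the same shape: build an `HttpResponse` with a CSV content type and disposition, then hand it to one of the `csv_export` report classes, all of which expose `export_data_to_csv` (the date-bounded reports such as `DomainGrowth`, `DomainManaged`, and `DomainUnmanaged` also accept `start_date` and `end_date`). A minimal sketch of that pattern, with the import path assumed from the `csv_export.` prefix used above:

```python
from django.http import HttpResponse
from django.views import View

from registrar.utility import csv_export  # import path assumed; matches the patch target in the tests


class ExportDataTypeSketch(View):
    """Illustrative only: the shared shape of the export views in this diff."""

    def get(self, request, *args, **kwargs):
        response = HttpResponse(content_type="text/csv")
        response["Content-Disposition"] = 'attachment; filename="domains-by-type.csv"'
        # One entry point per report class; growth/managed/unmanaged variants also
        # take start_date and end_date pulled from request.GET.
        csv_export.DomainDataType.export_data_to_csv(response)
        return response
```

The rename pattern is consistent throughout the diff: module-level `export_data_*_to_csv(...)` helpers become `<ReportClass>.export_data_to_csv(...)`, and the slicing helpers move onto `DomainExport` and `DomainRequestExport`.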