Create a Dataflow pipeline to resave EPP resources (#1553)

* Create a Dataflow pipeline to resave EPP resources

The pipeline has two modes.

If `fast` is false, we simply load all EPP resources, project them to the current time, and save them.

If `fast` is true, we attempt to intelligently load and save only the resources that we expect to change when projected to the current time: resources with expired pending transfers, domains with expired grace periods, and non-deleted domains that have expired (which we expect to have autorenewed).
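As a rough illustration of the slow path (a sketch, not the actual implementation), the non-`fast` mode amounts to a full scan-project-resave over `EppResource` entities. In the sketch below, `EppResources.loadAll()` and `Database.save()` are hypothetical helpers invented for the example, and `cloneProjectedAtTime` stands in for whatever projection method the model layer provides:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class ResaveAllSketch {

  /** Projects an EPP resource to the current time and persists the result. */
  static class ProjectAndResaveFn extends DoFn<EppResource, Void> {
    @ProcessElement
    public void processElement(ProcessContext c) {
      // Projection applies expired pending transfers, grace-period
      // expirations, and autorenews up to the given instant.
      // (cloneProjectedAtTime is the projection method assumed here.)
      EppResource projected =
          c.element().cloneProjectedAtTime(DateTime.now(DateTimeZone.UTC));
      Database.save(projected); // hypothetical persistence helper
    }
  }

  public static void main(String[] args) {
    Pipeline pipeline =
        Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    pipeline
        // Hypothetical read transform yielding a PCollection<EppResource>.
        .apply("LoadAllEppResources", EppResources.loadAll())
        .apply("ProjectAndResave", ParDo.of(new ProjectAndResaveFn()));
    pipeline.run().waitUntilFinish();
  }
}
```

A production pipeline would typically batch the writes rather than saving one element at a time; the per-element save here is only to keep the sketch short.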
gbrodman authored 2022-04-15 15:46:35 -04:00, committed by GitHub
parent 94017b694e · commit 9939833c25
14 changed files with 676 additions and 2 deletions


@@ -98,7 +98,9 @@ steps:
 google.registry.beam.rde.RdePipeline \
 google/registry/beam/rde_pipeline_metadata.json \
 google.registry.beam.comparedb.ValidateDatabasePipeline \
-google/registry/beam/validate_database_pipeline_metadata.json
+google/registry/beam/validate_database_pipeline_metadata.json \
+google.registry.beam.resave.ResaveAllEppResourcesPipeline \
+google/registry/beam/resave_all_epp_resources_pipeline_metadata.json
 # Tentatively build and publish Cloud SQL schema jar here, before schema release
 # process is finalized. Also publish nomulus:core jars that are needed for
 # server/schema compatibility tests.
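Metadata files registered this way follow Dataflow's standard template-metadata schema (`name`/`description`/`parameters`). The values below are illustrative assumptions about what `resave_all_epp_resources_pipeline_metadata.json` might contain, not the actual contents of the file:

```json
{
  "name": "Resave all EPP resources",
  "description": "Projects EPP resources to the current time and resaves them.",
  "parameters": [
    {
      "name": "fast",
      "label": "Fast mode",
      "helpText": "If true, only load resources expected to change when projected to now.",
      "isOptional": true
    }
  ]
}
```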