This is not a complete removal of Ofy, as we still have a dependency on it
(GaeUserIdConverter). But this PR removes it from many places where
it's no longer needed.
* Fix Gradle dependency version pinning
In Gradle 7, a version label must end with '!!' to be exempt from
any forced upgrade.
The minimum Hibernate version needs to be advanced past 5.6.12, which is buggy.
Upgraded most dependencies to the latest version.
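For reference, the strict-version marker mentioned above looks like this in a build file (the coordinates and versions here are illustrative, not the ones pinned by this PR):

```groovy
dependencies {
    // The trailing '!!' marks the version as strict, so no resolution rule
    // or transitive constraint can force an upgrade past it.
    implementation 'com.google.guava:guava:31.1-jre!!'

    // Equivalent long form using the rich-version API:
    implementation('org.hibernate:hibernate-core') {
        version { strictly '5.6.14.Final' }
    }
}
```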
Therefore this PR also removed several classes and related tests that
support the setup and verification of Ofy entities.
In addition, support for creating a VKey from a string is limited to
VKey&lt;? extends EppResource&gt;, because that is the only use case (passing a
key to an EPP resource in a web-safe way to facilitate resaving), and we
do not want to maintain an extra simple-name-to-class mapping on top of
what persistence.xml contains. I looked into using PersistenceXmlUtility
to obtain the mapping, but the XML file contains classes with the same
simple name (namely OneTime, from both PollMessage and BillingEvent). It
doesn't seem like a worthwhile investment to write more code to handle
that when we only need to consider EppResource.
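The explicit-map approach described above can be sketched as follows. This is illustrative only: the class names, the map, and the `SimpleName/key` string format are assumptions for the sketch, not the actual Nomulus API.

```java
import java.util.Map;

public class VKeyStringSketch {
  // Stub resource types standing in for EppResource subclasses.
  static class Domain {}
  static class Contact {}

  // A small, explicit simple-name-to-class map avoids deriving one from
  // persistence.xml, which contains duplicate simple names (e.g. OneTime).
  static final Map<String, Class<?>> EPP_RESOURCE_CLASSES =
      Map.of("Domain", Domain.class, "Contact", Contact.class);

  /** Parses an assumed "SimpleName/key" string into its class, or throws if unknown. */
  static Class<?> classFromKeyString(String keyString) {
    String simpleName = keyString.substring(0, keyString.indexOf('/'));
    Class<?> clazz = EPP_RESOURCE_CLASSES.get(simpleName);
    if (clazz == null) {
      throw new IllegalArgumentException("Unknown resource type: " + simpleName);
    }
    return clazz;
  }
}
```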
* Remove Cloud KMS from Nomulus Server
Removed Cloud KMS from the Nomulus server (:core) since it is no longer
used, and renamed the remaining classes to reflect their use of the
SecretManager. Updated the config instructions to use a new codename for
the keyring: KMS becomes CSM. This PR works with both codenames; 'KMS'
will be dropped after the internal repo is updated.
Add an implementation of delegated credentials that does not use a
downloaded private key.
This is a stop-gap implementation while waiting for a solution from the Java auth library.
Also added a verifier action to test the new credential in production. Testing there is helpful because:
- Configuration is per-environment, so success in alpha does not fully validate prod.
- The relevant use case is triggered by low-frequency activities, so a problem may not surface for hours or longer.
This PR removes all Ofy related cruft around `HistoryEntry` and its three subclasses in order to support dual-write to datastore and SQL. The class structure was refactored to take advantage of inheritance to reduce code duplication and improve clarity.
Note that for the embedded EPP resources, either their columns are all empty (for pre-3.0 entities imported into SQL), including their unique foreign key (domain name, host name, contact id) and the update timestamp; or they are filled as expected (for entities that were written since dual writing was implemented).
Therefore the check for foreign key column nullness in the various `@PostLoad` methods in the original code is a no-op, as the EPP resource would have been loaded as null. In other words, there is no case where the update timestamp is null but the other columns are not.
See the following query for the most recent entries in each table where the foreign key column or the update timestamp are null -- they are the same.
```
postgres=> select MAX(history_modification_time) from "DomainHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:56:52.502+00
(1 row)
postgres=> select MAX(history_modification_time) from "DomainHistory" where domain_name is null;
max
----------------------------
2021-09-27 15:56:52.502+00
(1 row)
postgres=> select MAX(history_modification_time) from "ContactHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:56:04.311+00
(1 row)
postgres=> select MAX(history_modification_time) from "ContactHistory" where contact_id is null;
max
----------------------------
2021-09-27 15:56:04.311+00
(1 row)
postgres=> select MAX(history_modification_time) from "HostHistory" where update_timestamp is null;
max
----------------------------
2021-09-27 15:52:16.517+00
(1 row)
postgres=> select MAX(history_modification_time) from "HostHistory" where host_name is null;
max
----------------------------
2021-09-27 15:52:16.517+00
(1 row)
```
These alternative ORM classes were introduced to make querying large
numbers of domains and domain histories more efficient: they bulk-load the
several to-be-joined tables separately, then reassemble the final entities
in memory, bypassing the need to query multiple tables each time an
entity is loaded.
Their primary use case was loading these entities for comparison between
Datastore and SQL during the migration, which has since been completed. The
code is unused as of now, and its existence makes refactoring and
general maintenance more complicated than necessary due to the need to keep
it up to date.
Therefore we remove the related code.
Also removed the ability to disable update timestamp auto update as it
was only needed during the migration.
Lastly, rectified the use of raw Coder in RegistryJpaIO.
Passing in an already-existing instance is an antipattern because it can
lead to race conditions in which something else modifies the object
between when the pipeline loaded it and when it is saved. The Write
action should only be writing new entities.
We cannot check IDs for the objects (some IDs are not autogenerated so
they might exist already). We also cannot call `insert` on the objects
because the underlying JPA `persist` call adds the input object to the
persistence context, meaning that any modifications (e.g.
updateTimestamp) are reflected in the input object. Beam doesn't allow
modification of input objects.
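One way to satisfy the no-input-mutation rule described above is to write a detached copy rather than the input itself. The sketch below is illustrative only (plain Java stand-ins, not the actual RegistryJpaIO code): `persist` here simulates JPA's in-place mutation of a managed entity.

```java
import java.time.Instant;

public class WriteCopySketch {
  // Stand-in for a JPA entity whose updateTimestamp is touched on save.
  static class Entity {
    Instant updateTimestamp;

    Entity copy() {
      Entity e = new Entity();
      e.updateTimestamp = updateTimestamp;
      return e;
    }
  }

  // Simulates JPA persist(): mutates the managed object in place.
  static void persist(Entity e) {
    e.updateTimestamp = Instant.now();
  }

  /** Writes a copy so the Beam input element is never mutated. */
  static Entity writeSafely(Entity input) {
    Entity detached = input.copy();
    persist(detached);
    return detached;
  }
}
```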
* Do not self-allocate IDs in Beam by default.
Per b/250948425, it is dangerous to implicitly allow all Beam pipelines
to create buildables by self allocating the IDs. This change makes it so
that one has to explicitly request self allocation in Beam.
A boolean is added to the pipeline option so that it can be passed to
the beam worker initializer that controls the behavior of the JVM on
each worker. Note that we did not add the option to the metadata.json file
because, given the risk, we do not want people to use the override at
runtime when launching a pipeline. As shown in RdePipeline.java, we instead
explicitly hard-code the option in the pipeline. Nothing stops one from
supplying that option when launching the pipeline, but it is not advised.
Tested: deployed the pipeline to alpha and ran it.
* Properly create and use default credential
This PR consists of the following changes:
- Stopped adding scopes to the default credential when using it to access
other non-Workspace GCP APIs. Scopes are not needed there.
- Started applying scopes to the default credential when using it to access
the Drive and Sheets APIs.
- Upgraded Drive access from the deprecated credential lib to the
up-to-date one.
- Switched Sheets access from the exported JSON credential to the
scoped default credential.
This PR requires that the affected files be writable to the default
service account (project-name@appspot.gserviceaccount.com) of the
project.
- This is already the case for exported files (premium terms, reserved
terms, and domain list).
- The registrar sync sheets in alpha, sandbox, and production have been
updated with the new permissions.
All impacted operations have been tested in alpha.
* Properly create and use default credential
This PR consists of the following changes:
- Added a new method to generate a scope-less default credential for
accessing other non-Workspace GCP APIs. Scopes are not needed there.
- Started using the new credential in the SecretManager.
- Will migrate other usages to this new credential gradually.
- Marked the old DefaultCredential as deprecated.
- Started applying scopes to the default credential when using it to access
the Drive and Sheets APIs.
- Upgraded Drive access from the deprecated credentials lib.
- Switched Sheets access from the exported JSON credential to the scoped
default credential.
This PR requires that the affected files be writable to the default service
account (project-name@appspot.gserviceaccount.com) of the project.
- This is already the case for exported files (premium terms, reserved terms,
and domain list).
- The registrar sync sheets in alpha, sandbox, and production have been
updated with the new permissions.
All impacted operations have been tested in alpha.
This reverts commit 1ab077d267.
Apparently the new version of Spinnaker that is compatible with this doesn't
work for our release, so we need to roll this back for now. (Again!)
We started getting failures because some of the tests used October. In
general we should freeze the clock for testing as much as possible.
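Freezing the clock can be done with the standard library alone; Nomulus has its own test clock, so this is just a sketch of the general technique using `java.time.Clock.fixed`, with hypothetical method names:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class FrozenClockExample {
  /** Returns a clock pinned to a fixed instant so tests are date-independent. */
  static Clock frozenAt(String isoInstant) {
    return Clock.fixed(Instant.parse(isoInstant), ZoneOffset.UTC);
  }

  /** Example of code under test taking the clock as an injected dependency. */
  static int currentMonth(Clock clock) {
    return ZonedDateTime.now(clock).getMonthValue();
  }
}
```

Code that reads `Clock.systemUTC()` directly cannot be frozen this way, which is why injecting the clock matters.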
The same applies to the Get*Commands.
Basically, any time we're loading a bunch of linked objects that might
change, we want to have REPEATABLE_READ so that another transaction
doesn't come along and smush whatever we think we're loading.
The following instances of READ_COMMITTED haven't changed:
- RdePipeline (it only loads immutable objects like histories)
- Invoicing pipeline (only immutable objects like BillingEvents)
- Spec11 (doesn't use any linked info from Domain)
This also changes the PersistenceModule to use REPEATABLE_READ by
default on the replica JPA TM, for the standard reasoning.
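The rule of thumb above can be stated in code using the standard JDBC isolation constants (a sketch, not the actual PersistenceModule wiring):

```java
import java.sql.Connection;

public class IsolationChoiceSketch {
  /**
   * Transactions that load linked, mutable objects get REPEATABLE_READ so a
   * concurrent writer cannot change them mid-load; read-only passes over
   * immutable rows (histories, billing events) can stay at READ_COMMITTED.
   */
  static int isolationFor(boolean loadsMutableLinkedObjects) {
    return loadsMutableLinkedObjects
        ? Connection.TRANSACTION_REPEATABLE_READ
        : Connection.TRANSACTION_READ_COMMITTED;
  }
}
```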
This runs over all domains that weren't deleted as of September 5. This
will fix most of b/248112997, which is itself caused by b/245940594 --
creating synthetic history objects means that the RDE pipeline will look
at those instead of the potentially-no-longer-valid data in the old
history objects.
Also fixed a bug introduced in #1785 where identity checks were performed instead of equality checks. This resulted in two sets containing the same elements not being regarded as equal, and in subsequent DNS updates being unnecessarily enqueued.
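The class of bug is easy to reproduce with the standard library (this example is illustrative; it is not the #1785 code):

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

public class IdentityVsEqualityExample {
  public static boolean identitySetsDisagree() {
    // Two distinct but equal objects, like entities reloaded from the DB.
    String a = new String("ns1.example.com");
    String b = new String("ns1.example.com");

    // An identity-based set treats a and b as different members...
    Set<String> identitySet = Collections.newSetFromMap(new IdentityHashMap<>());
    identitySet.add(a);
    boolean identityMiss = !identitySet.contains(b);

    // ...whereas normal Sets (and Set.equals) compare members by equals().
    boolean equalityHit = Set.of(a).equals(Set.of(b));
    return identityMiss && equalityHit;
  }
}
```

The identity comparison reports a difference where none exists, which is exactly how equal nameserver sets could be seen as changed and spurious DNS updates enqueued.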