Nothing super crazy here other than persisting the entity changes in
DomainDeleteFlow at the end of the flow rather than almost at the end.
This means that when we return the results, we return them as they were
originally present rather than as the subsequently-changed values.
* Fix Spec11 domain check
We should be checking to see if there are _any_ active domains for a given
reported domain, not to see if _the_ domain for the name is active.
The last change caused an exception for domains with soft-deleted past domains
of the same name. The original code only checked the first domain returned
from the query, which may have been soft-deleted. This version checks all
domain records to see if any are active.
* filter().count() -> anyMatch()
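The filter().count() -> anyMatch() change can be sketched as follows (`Domain` and its `active` flag here are hypothetical stand-ins for the real model classes):

```java
import java.util.List;

class Spec11CheckSketch {
  // Hypothetical stand-in for a loaded domain record.
  record Domain(String name, boolean active) {}

  // Before: only inspected the first result, which may be soft-deleted.
  static boolean firstIsActive(List<Domain> domains) {
    return !domains.isEmpty() && domains.get(0).active();
  }

  // After: true if ANY domain record with the name is active.
  static boolean anyActive(List<Domain> domains) {
    return domains.stream().anyMatch(Domain::active);
  }

  public static void main(String[] args) {
    List<Domain> domains =
        List.of(new Domain("foo.example", false), new Domain("foo.example", true));
    System.out.println(firstIsActive(domains)); // false: first record is soft-deleted
    System.out.println(anyActive(domains));     // true: a later record is active
  }
}
```

`anyMatch` also short-circuits on the first active record, avoiding the full count that `filter().count() > 0` would compute.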
* Begin saving the EppResource parent in *History objects
We use DomainCreateFlow as an example here of how this will work. There
were a few changes necessary:
- various changes around GracePeriod / GracePeriodHistory so that we can
actually store them without throwing NPEs
- Creating one injectable *History.Builder field and using in place of
the HistoryEntry.Builder injected field in DomainCreateFlow
- Saving the EppResource as the parent in the *History.Builder setParent
calls
- Converting to/from HistoryEntry/*History classes in
DatastoreTransactionManager. Basically, we'll want to return the
*History subclasses (and similar in the ofy portions of HistoryEntryDao)
- Converting a few HistoryEntry.Builder usages to DomainHistory.Builder
usages. Eventually we should convert all of them.
This is similar to the migration of the spec11 pipeline in #1073. Also removed
a few Dagger providers that are no longer needed.
TESTED=tested the dataflow job on alpha.
* Migrate Spec11 pipeline to flex template
Unfortunately this PR has turned out to be much bigger than I initially
conceived. However, there is no good way to separate it out because the
changes are intertwined. This PR includes 3 main changes:
1. Change the spec11 pipeline to use a Dataflow Flex Template.
2. Retire the use of the old JPA layer that relies on credentials saved
in KMS.
3. Some extensive refactoring to streamline the logic and improve test
isolation.
* Fix job name and remove projectId from options
* Add parameter logs
* Set RegistryEnvironment
* Remove logging and modify safe browsing API key regex
* Rename a test method and rebase
* Remove unused Junit extension
* Specify job region
* Send an immediate poll message for superuser domain deletes
This poll message is in addition to the normal poll message that is sent when
the domain's deletion is effective (typically 35 days later). It's needed
because, in the event of a superuser deletion, the owning registrar won't
otherwise necessarily know it's happening.
Note that, in the case of a --immediate superuser deletion, the normal poll
message is already being sent immediately, so this additional poll message is
not necessary.
* Update various tests to work with SQL as well
The main weird bit here is adding a method in DatabaseHelper to
retrieve and initialize all objects in either database. The
initialization is necessary since it's used post-command-dry-run to make
sure that no changes were actually made.
* Convert CountDomainsCommand to tm
As part of this, implement "select count(*)" queries in the QueryComposer.
* Replaced kludgy trick for objectify count
* Modify ClaimsList DAO to always use Cloud SQL as primary
* Revert ClaimsList add changes to SignedMarkRevocationList
* Fix flow tests
* Use start of time for empty list
* replace lambda with method reference
* Upload latest version of RDE report to icann
Currently the RdeReportAction is hard coded to load the initial version
of a report. This is wrong when reports have been regenerated.
Changed lines are copied from RdeUploadAction.
* Implement query abstraction
Implement a query abstraction layer ("QueryComposer") that allows us to
construct fluent-style queries that work across both Objectify and JPA.
As a demonstration of the concept, convert Spec11EmailUtils and its test to
use the new API.
Limitations:
- The primary limitations of this system are imposed by datastore, for
example all queryable fields must be indexed, orderBy must coincide with
the order of any inequality queries, inequality filters are limited to one
property...
- JPA queries are limited to a set of where clauses (all of which must match)
and an "order by" clause. Joins, functions, complex where logic and
multi-table queries are simply not allowed.
- Descending sort order is currently unsupported (this is simple enough to
add).
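As a rough illustration of the shape of such an API (not the actual Nomulus QueryComposer, and in-memory rather than backed by Objectify or JPA), a fluent composer limited to conjunctive where clauses plus a single ascending orderBy might look like:

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Stream;

// Minimal in-memory sketch of a fluent query composer: a chain of where
// clauses (all of which must match) plus an optional ascending orderBy,
// mirroring the restrictions described above.
class QueryComposerSketch<T> {
  private final List<T> source;
  private Predicate<T> where = t -> true;
  private Comparator<T> order = null;

  QueryComposerSketch(List<T> source) {
    this.source = source;
  }

  <V extends Comparable<V>> QueryComposerSketch<T> whereEquals(Function<T, V> field, V value) {
    where = where.and(t -> field.apply(t).equals(value));
    return this;
  }

  <V extends Comparable<V>> QueryComposerSketch<T> orderBy(Function<T, V> field) {
    order = Comparator.comparing(field);
    return this;
  }

  List<T> list() {
    Stream<T> stream = source.stream().filter(where);
    if (order != null) {
      stream = stream.sorted(order);
    }
    return stream.toList();
  }

  long count() {
    return source.stream().filter(where).count();
  }
}
```

The `count()` terminal mirrors the "select count(*)" support mentioned for CountDomainsCommand above; a real implementation would translate the accumulated clauses into an Objectify query or a JPA CriteriaQuery rather than filtering in memory.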
* Fix bug that was incorrectly assuming Cursor would always exist
In fact, the Cursor entity does not always exist (e.g. if no upload has ever
been done on this TLD because it is new), and the code needs to be
resilient to its non-existence.
This bug was introduced in #1044.
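A defensive load along these lines (field and method names hypothetical, not the actual RDE code) might look like:

```java
import java.time.Instant;
import java.util.Map;
import java.util.Optional;

class CursorSketch {
  // Hypothetical store of per-TLD upload cursors; a new TLD has no entry.
  private final Map<String, Instant> cursors;

  CursorSketch(Map<String, Instant> cursors) {
    this.cursors = cursors;
  }

  // Before: cursors.get(tld).isBefore(...) would NPE for a new TLD.
  // After: fall back to "start of time" when no cursor exists yet.
  Instant cursorTimeOrStartOfTime(String tld) {
    return Optional.ofNullable(cursors.get(tld)).orElse(Instant.EPOCH);
  }
}
```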
* Use lazy injection in SendEscrow command
The injected object in SendEscrowReportToIcannCommand creates Ofy keys
in its static initialization routine. This happens before the RemoteApi
setup. Use lazy injection to prevent failure.
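Dagger's `Lazy<T>` defers provisioning until `get()` is first called; the effect can be illustrated with a plain `Supplier` (class names hypothetical):

```java
import java.util.function.Supplier;

class LazyInjectionSketch {
  // Stands in for the injected object whose construction creates Ofy keys.
  static class ExpensiveClient {
    static boolean initialized = false;

    ExpensiveClient() {
      initialized = true; // stands in for static Ofy key creation
    }
  }

  private final Supplier<ExpensiveClient> lazyClient;

  // With an eager field, ExpensiveClient would be constructed as soon as the
  // command is built, before RemoteApi setup; with a lazy wrapper,
  // construction is deferred until get() is actually called.
  LazyInjectionSketch(Supplier<ExpensiveClient> lazyClient) {
    this.lazyClient = lazyClient;
  }

  void run() {
    // ... RemoteApi setup would happen here ...
    lazyClient.get(); // initialization happens only now
  }
}
```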
* Specify explicit ofyTm usage in SetDatabaseTransitionScheduleCommand
We cannot use the standard MutatingCommand because the DB schedule is
explicitly always stored in Datastore, and once we transition to
SQL-as-primary, MutatingCommand will stage the entity changes to SQL.
In addition, we remove the raw ofy() call from the test.
* Migrate Keyring secrets to Secret Manager
Implemented dual-read of Keyring secrets with Datastore as primary.
Implemented dual-write of Keyring secrets with Datastore as primary.
Secret Manager write failures are simply thrown. This is fine since all
keyring writes are manual, through the update_kms_keyring command.
Added a one-way migration command that copies all data to secret manager
(unencrypted).
* Upgrade testcontainers to work around a race
testcontainers 1.15.? has a race condition that occasionally causes deadlocks.
This can be worked around by upgrading to 1.15.2 and setting the transport
type to http5.
See https://github.com/testcontainers/testcontainers-java/issues/3531
for more information.
There are two changes that are not lockfiles:
- dependencies.gradle
- java_common.gradle
* Convert TmchCrl and ServerSecret to cleaner tm() impls
When I implemented this originally I knew a lot less than I know now
about how we'll be storing and retrieving these singletons from SQL. The
optimal way here is to use the single SINGLETON_ID as the primary key,
that way we always know how to create the key that we can use in the
tm() retrieval.
This allows us to use generic tm() methods and to remove the handcrafted
SQL queries.
* Enforce consistency in non-cached FKI loads
For the cached code path, we do not require consistency but we do
require the ability to load / operate on large numbers of entities (so,
we must do so without a Datastore transaction). For the non-cached code
path, we require consistency but do not care about large numbers of
entities, so we must remain in the transaction that we're already in.
* Add a beforeSqlSave callback to ReplaySpecializer
When in the Datastore-primary and SQL-secondary stage, we will want to
save the EppResource-at-this-point-in-time field in the *History
objects so that later on we can examine the *History objects to see what
the resource looked like at that point in time.
Without this PR, the full object at that point in time would be lost
during the asynchronous replay since Datastore doesn't know about it.
In addition, we modify the HistoryEntry weight / priority so that
additions to it come after the additions to the resource off of which it
is based. As a result, we need to DEFER some foreign keys so that we can
write the billing / poll message objects before the history object that
they're referencing.
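The effect of deferring those foreign keys can be sketched with a toy in-memory model (not actual SQL or the Nomulus schema):

```java
import java.util.ArrayList;
import java.util.List;

class DeferredFkSketch {
  // A row that may reference a row in another table (null = no foreign key).
  record Row(String table, String references) {}

  // With immediate FK checks, a referenced row must already exist when the
  // referencing row is written; with deferred checks, the constraint is only
  // verified at commit time, so write order within the transaction is free.
  static void saveAll(List<Row> rows, boolean deferred) {
    List<String> written = new ArrayList<>();
    for (Row row : rows) {
      if (!deferred && row.references() != null && !written.contains(row.references())) {
        throw new IllegalStateException(
            row.table() + " references missing " + row.references());
      }
      written.add(row.table());
    }
    // Commit-time check for deferred constraints.
    for (Row row : rows) {
      if (row.references() != null && !written.contains(row.references())) {
        throw new IllegalStateException(
            row.table() + " references missing " + row.references());
      }
    }
  }
}
```

Writing a history row that points at a billing event before the billing event itself fails under immediate checks but succeeds when the check is deferred to commit.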
* Partially convert EppResourceUtils to SQL
Some of the rest will depend on b/184578521.
The primary conversion in this PR is the change in
NameserverLookupByIpCommand as that is the only place where the removed
EppResourceUtils method was called. We also convert the tests of the callers
of NameserverLookupByIpCommand to DualDatabaseTest, and use a
CriteriaQueryBuilder in the foreign key index SQL lookup (allowing us to
avoid the String.format call).
* Remove SQL credentials from Keyring
Remove SQL credentials from Keyring. SQL credentials will be managed by
an automated system (go/dr-sql-security) and the keyring is no longer a
suitable place to hold them.
Also stopped loading SQL credentials from the keyring for comparison
with those from the secret manager.
* Convert RefreshDnsOnHostRenameAction to tm
This is not quite complete because it also requires the conversion of a
map-reduce which is in scope for an entirely different work. Tests of the
map-reduce functionality are excluded from the SQL run.
This also requires the following additional fixes:
- Convert Lock to tm, as doing so was necessary to get this action to work.
As Lock is being targeted as DatastoreOnly, we convert all calls in it to
use ofyTm()
- Fix a bug in DualDatabaseTest (the check for an AppEngineExtension field is
wrong, and captures fields of type Object as AppEngineExtension's)
- Introduce another VKey.from() method that creates a VKey from a stringified
Ofy Key.
* Rename VKey.from(String) to fromWebsafeKey
* Throw NoSuchElementException instead of NPE
* Correctly get the primary database value in PremiumListDualDao
* Remove extra AppEngineExtension
* get rid of ofy call
* Remove extra duration skip in test
* Convert poll-message-related classes to use SQL as well
Two relatively complex parts. The first is that we needed a small
refactor on the AckPollMessagesCommand because we could theoretically be
acking more poll messages than the Datastore transaction size boundary.
This means that the normal flow of "gather the poll messages from the DB
into one collection, then act on it" needs to be changed to a more
functional flow.
The second is that acking the poll message (deleting it in most cases)
reduces the number of remaining poll messages in SQL but not in
Datastore, since in Datastore the deletion does not take effect until
after the transaction is over.
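The shift away from "gather everything into one collection, then act on it" can be sketched like this (method names hypothetical; the batch size stands in for the Datastore transaction size boundary):

```java
import java.util.ArrayList;
import java.util.List;

class AckPollMessagesSketch {
  // Instead of acking every poll message inside a single transaction, process
  // the ids in fixed-size batches so that no one transaction exceeds the
  // Datastore entity-count boundary.
  static int ackInBatches(List<Long> pollMessageIds, int batchSize, List<List<Long>> txns) {
    int acked = 0;
    for (int i = 0; i < pollMessageIds.size(); i += batchSize) {
      List<Long> batch =
          pollMessageIds.subList(i, Math.min(i + batchSize, pollMessageIds.size()));
      txns.add(new ArrayList<>(batch)); // stands in for "delete this batch in one transaction"
      acked += batch.size();
    }
    return acked;
  }
}
```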
* Fix some low-hanging code-quality fruit
These include problems such as: use of raw types, unnecessary throw clauses,
unused variables, and more.
* Convert ofy -> tm for two more classes
Convert ofy -> tm for MutatingCommand and DedupeOneTimeBillingEventIdsCommand.
Note that DedupeOneTimeBillingEventIdsCommand will not be needed after
migration, so this conversion is just to remove the ofy uses from the
codebase. We don't update the test (other than to keep it working) and it
wouldn't currently work in SQL.
* Fixed a test broken by this PR
In addition, we move the deleteTestDomain method to DatabaseHelper since
it'll be useful in other places (e.g. RelockDomainActionTest) and remove
the duplicate definition of ResaveEntityAction.PATH.
We also can ignore deletions of non-persisted entities in the JPA
transaction manager.
* Update RegistrarSettingsAction and RegistrarContact to SQL calls
Relevant potentially-unclear changes:
- Making sure the last update time is always correct and up to date in
the auto timestamp object
- Reloading the domain upon return when updating in a new transaction to
make sure that we use the properly-updated last update time (SQL returns
the correct result if retrieved within the same txn but DS does not)
* Convert DomainTAF and DomainFlowUtils to SQL
The only tricky part to this is that the order of entities that we're
saving during the DomainTransferApproveFlow matters -- some entities
have dependencies on others so we need to save the latter first. We
change `entitiesToSave` to be a list to reinforce this.