Various necessary changes included as part of this:
- Make ForeignKeyIndex completely generic. Previously, only the load()
method that took a DateTime as input could use SQL, and the cached flow
was particular to Objectify Keys. Now, the cached flow and the
non-cached flow can use (mostly) the same piece of code to load / create
the relevant index objects before filtering or modifying them as
necessary.
- EntityChanges should use VKeys
- FlowUtils should persist entity changes using tm(); however, not all
object types are storable in SQL.
- Filling out PollMessage fields with the proper object type when
loading from SQL
- Changing a few tm() calls to ofyTm() calls when using Objectify. This
is because creating a read-only transaction in SQL is quite a footgun at
the moment, because it makes the entire transaction you're in (if you
were already in one) a read-only transaction.
* Convert 3 classes from ofy -> tm
Convert SyncGroupMembersAction, SyncRegistrarsSheet and
IcannReportingUploadAction and their test cases to use TransactionManager and
dual-test them so we know they work in JPA.
* Address comments in review
Address review comments and make the entire IcannReportingUploadAction run
transactionally.
* Reformatted.
* Remove duplicate loadByKey() method
Remove test method added in a recent PR.
* Embed a ZonedDateTime as the UpdateAutoTimestamp in SQL
This means we can get rid of the converter and more importantly, means
that reading the object from SQL does not affect the last-read time (the
test added to UpdateAutoTimestampTest failed prior to the production
code change).
For now we keep both time fields in UpdateAutoTimestamp; post-migration, we
can remove the Joda-Time field if we wish.
Note: I'm not sure why <now> is the time that we started getting
LazyInitializationExceptions in the LegacyHistoryObject and
ReplayExtension tests, but we can solve that by examining /
initializing the object within the transaction.
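For illustration, a minimal sketch of what the resulting embeddable might look like, assuming JPA annotations along these lines (field and column names here are illustrative, not the exact production code):
```
import java.time.ZonedDateTime;
import javax.persistence.Column;
import javax.persistence.Embeddable;
import javax.persistence.Transient;
import org.joda.time.DateTime;

/** Sketch: SQL stores the ZonedDateTime directly, so no converter runs on load. */
@Embeddable
public class UpdateAutoTimestamp {

  // Joda-time field kept for Datastore compatibility; removable post-migration.
  @Transient DateTime timestamp;

  // Embedded directly in SQL; merely reading the row no longer bumps this value.
  @Column(name = "last_update_time")
  ZonedDateTime lastUpdateTime;
}
```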
* Make TransactionManager.loadAllOf() smart w.r.t the cross-TLD entity group
The loadAllOf() method will now automatically append the cross-TLD entity group
ancestor query as necessary, iff the entity class being loaded is tagged with
the new @IsCrossTld annotation.
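A minimal sketch of the implied branching; apart from the @IsCrossTld annotation itself, the helper names here (e.g. getCrossTldKey()) are assumptions:
```
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.common.collect.ImmutableList;
import com.googlecode.objectify.cmd.Query;

public <T> ImmutableList<T> loadAllOf(Class<T> clazz) {
  Query<T> query = ofy().load().type(clazz);
  if (clazz.isAnnotationPresent(IsCrossTld.class)) {
    // Entities tagged @IsCrossTld live under the single cross-TLD entity
    // group, so scope the query with the corresponding ancestor filter.
    query = query.ancestor(getCrossTldKey());
  }
  return ImmutableList.copyOf(query);
}
```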
* Add tests
* Add SQL wipeout action in QA
Added the WipeOutSqlAction that deletes all data in Cloud SQL.
Wipeout is restricted to the QA environment, which will get production
data during migration testing.
Also added a cron job that invokes wipeout every Saturday morning.
This is part of the privacy requirements for using production data in QA.
Tested in QA.
* Add some load convenience methods to DatabaseHelper
These can only be called by test code, and they automatically wrap the load
in a transaction if one isn't already specified (for convenience).
In production code we don't want to be able to use these, as we have to be
more thoughtful about transactions in production code (e.g. make sure that
we aren't loading and then saving a resource in separate transactions in a
way that makes it prone to contention errors).
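A sketch of the pattern, assuming a helper shaped roughly like this (names illustrative rather than the exact DatabaseHelper API):
```
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;

import google.registry.persistence.VKey;

/** Test-only load that wraps itself in a transaction if none is active. */
public static <T> T loadByKey(VKey<T> key) {
  return tm().isInTransaction()
      ? tm().loadByKey(key)
      : tm().transact(() -> tm().loadByKey(key));
}
```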
* Disallow admin triggering of internal endpoints
Stop relying solely on the presence of the X-AppEngine-QueueName header as an
indicator that an endpoint has been triggered internally, as this allows
admins to trigger a remote-execution vulnerability.
We now supplement this check by ensuring that there is no authenticated user.
Since admins are the only users who can set these headers, the absence of an
authenticated user means the header must have been set by an internal request.
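In outline, the supplemented check looks something like this (a simplified sketch; names are assumptions, not the production code):
```
import java.util.Optional;
import javax.servlet.http.HttpServletRequest;

/** Returns true only for genuinely internal task-queue requests. */
static boolean isInternalRequest(HttpServletRequest request, Optional<?> userAuthInfo) {
  // The header alone could come from an internal request or from an admin.
  boolean hasQueueHeader = request.getHeader("X-AppEngine-QueueName") != null;
  // Admins authenticate as users; genuine internal requests carry no user.
  return hasQueueHeader && !userAuthInfo.isPresent();
}
```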
Tested:
In addition to the new unit test, verified on Crash that:
- Internal requests are still getting authenticated via the internal auth
mechanism.
- Admin requests with the X-AppEngine-QueueName header are rejected as
"unauthorized."
* Reformatted.
This has the same issue as the domain-search action where the database
ordering is not consistent between Objectify and SQL -- as a result,
there is one test that we have to duplicate in order to account for the
two sort orders.
In addition, there isn't a way to query @Convert-ed fields in Postgres
via the standard Hibernate / JPA query language, meaning we have to use
a raw Postgres query for that.
* Add a jpaTm().query(...) convenience method
This replaces the more ungainly jpaTm().getEntityManager().createQuery(...).
Note that this is in JpaTransactionManager, not the parent TransactionManager,
because this is not an operation that Datastore can support. Once we finish
migrating away from Datastore this won't matter anyway because
JpaTransactionManager will be merged into TransactionManager and then deleted.
In the process of writing this PR I discovered several other methods available
on the EntityManager that may merit their own convenience methods if we start
using them enough. The more commonly used ones will be addressed in subsequent
PRs. They are:
jpaTm().getEntityManager().getMetamodel().entity(...).getName()
jpaTm().getEntityManager().getCriteriaBuilder().createQuery(...)
jpaTm().getEntityManager().createNativeQuery(...)
jpaTm().getEntityManager().find(...)
This PR also addresses some existing callsites that were calling
getEntityManager() rather than using extant convenience methods, such as
jpaTm().insert(...).
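A before/after sketch (the exact signature of the new method is assumed from the description above):
```
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;

import google.registry.model.registrar.Registrar;
import java.util.List;

// Before: reach through the EntityManager every time.
List<Registrar> loadRegistrarsTheOldWay() {
  return jpaTm()
      .getEntityManager()
      .createQuery("FROM Registrar", Registrar.class)
      .getResultList();
}

// After: the convenience method wraps createQuery(...) for us.
List<Registrar> loadRegistrarsTheNewWay() {
  return jpaTm().query("FROM Registrar", Registrar.class).getResultList();
}
```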
* Add replay to remaining (non-trivial) flow tests
Convert all remaining flow tests to do replay/compare testing. In the course
of this:
- Move the class-specific SetClock extension into its own place.
- Fix another "cyclic" foreign key (there may be another solution in this case
because HostHistory is actually different from HistoryEntry, but that would
require changing the way we establish priority since HostHistory is not
distinguished from HistoryEntry in the current methodology)
* Attempt to fix flaky deleteExpiredDomain test
Though hard to reproduce locally, the test_deletesThreeDomainsInOneRun
test has failed multiple times on Kokoro. The root cause may be the
non-transactional query executed by the Action object, which was by
design. Observing that the other test never fails, this PR follows its behavior
and adds a transactional query before invoking the action.
The dual DAO takes care of switching between databases, comparing the
results of one to the results of the other, and caching the result. All
ClaimsList retrievals and stores should go through the dual-database DAO.
Previously, comparisons of the lists were somewhat scattered
throughout the codebase. Now, there is one class for retrieval and
comparison (the dual DAO), one class for retrieval from SQL (the SQL
DAO), and one class for retrieval from Datastore (ClaimsListShard
itself, though the retrieval could be moved in to a separate DAO if we
wished).
In addition, we rename the ClaimsListDao to ClaimsListSqlDao.
This allows us to get rid of the DAO as well as the sanity-checking
methods, since we can be reasonably sure that the fields will be the
same. Future PRs will add conversions from ofy() to tm() calls that will
make sure that we get the same proper data in both Datastore and SQL.
* Convert more flow tests to replay/compare
Add the replay extension to another batch of flow tests. In the course of
this:
- Refactor out domain deletion code into DatabaseHelper so that it can be used
from multiple tests.
- Make null handling uniform for contact phone numbers.
* Convert postLoad method to onLoad.
* Remove "Test" import missed during rebase
* Deal with persistence of billing cancellations
Deal with the persistence of billing cancellations, which were added to the
master branch after this PR was initially sent for review.
* Adding forgotten Flyway file
* Removed debug variable
* Clear autorenew end time when a domain is restored
This allows us to still see in the database which now-deleted domains had
reached expiration, while correctly not re-deleting the domain immediately if
the registrar pays to explicitly restore the domain.
This also resolves some TODOs around data migration for this field on domain so
that it's not null, as said migration has already been completed.
* Remove grace period ID @OnLoads now that migration is complete
I verified in BigQuery that all grace period IDs are now allocated (as expected,
given that the re-save-all-EPP-resources mapreduce has been run several times
since this migration started last year). The query I used for verification is:
SELECT fullyQualifiedDomainName, gp, ot
FROM `domain-registry.latest_datastore_export.DomainBase`
JOIN UNNEST(gracePeriods.billingEventRecurring) AS gp
JOIN UNNEST(gracePeriods.billingEventOneTime) AS ot
WHERE gp.id IS NULL or ot.id IS NULL
BUG=169873747
* Add daily cron entries for DeleteExpiredDomainsAction
This also requires setting this action to GET instead of POST, as GAE cron makes
GET requests.
* Use shared jar to stage BEAM pipeline if possible
Allow multiple BEAM pipelines with the same classes and dependencies to
share one Uber jar.
Added metadata for BulkDeleteDatastorePipeline.
Updated shell and Cloud Build scripts to stage all pipelines in one
step.
* Convert DomainTransferRequestFlow to tm() calls
Besides the standard ofy-to-tm conversions, this includes storing the
billing event cancellation VKey in the DomainTransferData object so that
we know to handle it on processing / cancellation.
* Use ReplaySpecializer to fix DomainBase replays
DomainBase currently has a number of ancillary objects that require a
cascading delete that doesn't get propagated. Implement beforeSqlDelete() in
DomainContent to delete these child entities.
* Remove unnecessary Query variable
* Fix rebase error
* Update more dependencies to newer versions
* Add lockfiles and back out 2 problematic dep updates
* Fix the build (backs out more changes)
* Back out qdox 2.0 too
* Rewrite the JPA output connector for BEAM
Following BEAM's IO connector style, added a RegistryJpaIO class to hold
IO connectors, and implemented the Write connector as a static inner
class in it. The JpaTransactionManager used by the Write connector
retrieves SQL credentials from the SecretManager.
Cleaned up SQL-related pipeline parameters.
Converted the InitSqlPipeline to use RegistryJpaIO.
* Add SQL queries to RdapDomainSearchAction
Unfortunately, because ORDER BY uses the locale's sorting functionality,
we end up with some weird sort orders in SQL-land (notably, periods are
ignored / omitted). As a result, a few of the tests have to be separated
out into ofy and SQL versions based on the expected sort order.
In addition, there isn't a way to query @Convert-ed fields in Postgres
via the standard Hibernate / JPA query language, meaning we have to use
a raw Postgres query for that.
* Add a "ReplaySpecializer" to fix certain replays
Because a given entity in either database type can map to multiple
entities in the other database, there are certain replication
scenarios that don't quite work. Current known examples include:
- propagation of cascading deletes from datastore to SQL
- creation of datastore indexed entities for SQL entities (where indexes are a
first-class concept)
This change introduces a ReplaySpecializer class, which allows us to declare
static method hooks at the entity class level that define any special
operations that need to be performed before or after replaying a mutation for
any given entity type.
Currently, "before SQL delete" is the only supported hook. A change to
DomainContent demonstrating how this facility can be used to fix problems in
cascading delete propagation will be sent as a subsequent PR.
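A sketch of what declaring such a hook might look like; the static-method convention comes from the description above, but the signature and the JPQL here are assumptions:
```
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;

import google.registry.persistence.VKey;

public class DomainContent {

  /**
   * Found reflectively by ReplaySpecializer and invoked before the SQL DELETE
   * of a domain is replayed, so that child rows don't get orphaned.
   */
  public static void beforeSqlDelete(VKey<?> key) {
    jpaTm()
        .getEntityManager()
        .createQuery("DELETE FROM GracePeriod WHERE domainRepoId = :repoId")
        .setParameter("repoId", key.getSqlKey())
        .executeUpdate();
  }
}
```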
* Throw exception on beforeSqlDelete failures
* Changes for review
* Convert DomainTransferRejectFlow to use tm() methods
This includes a few other changes necessary to convert
DomainTransferRejectFlowTest into a dual-database test. Namely:
- The basic "use tm() instead of ofy()" and "branching database
selection on what were previously raw ofy queries"
- Modification of the PollMessage convertVKey methods to do what they
say they do
- Filling the generic pending / response fields in PollMessage based on what type of
poll message it is (this has to be done because SQL is not very good at
storing ambiguous superclasses)
- Setting the generic pending / response fields in PollMessage upon
build
- Filling out the serverApproveEntities field in DomainTransferData with
all necessary poll messages / billing events that should be cancelled on
rejection
- Scattered changes in DatabaseHelper to make sure that we're saving and
loading entities correctly where we weren't before
* Disable whois caching in nomulus tool
The whois commands previously served output generated from cached EppResource
objects in most cases. While this is entirely appropriate on the server side,
it is less useful when these commands are run from nomulus tool and, in fact,
when run from the "shell" command this results in changes that have been
applied from the shell not being visible from a subsequent "whois". The
command may then serve information from an earlier, cached version of the
resource instead of the latest version.
This implementation uses dagger for parameterization of cached/non-cached
modes. I did consider the possibility of simply parameterizing the query
commands in all cases as discussed; however, having gone down the
daggerization path and gotten it to work, I have to say I find this
approach preferable. There's really no case for identifying
cached/non-cached on a per-command basis and doing so would require
propagating the flag throughout all levels of the API and all callsites.
Tested: In addition to the new unit test which explicitly verifies the
caching/noncaching behavior of the new commands, tested the actual failing
sequence from "nomulus -e sandbox shell" and verified that the correct results
are served after a mutation.
* Fixed copyright year
* More copyright date fixes
* Added WhoisCommandFactoryTest to fragile tests
I suspect that this test has some contention with other tests; it's not clear
why.
* Replay Cloud SQL transactions against datastore
Implement the ReplicateToDatastore cron job that will apply all Cloud SQL
transactions to the datastore. The last transaction id is stored in a
LastSqlTransaction entity in datastore.
Note that this will not be activated in production until a) the cron
configuration is updated and b) the cloudSql.replicateTransactions flag is set
to true in the nomulus config file.
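In outline, the replay loop looks something like this (a hedged sketch; the entity names come from the description above, the rest is assumed):
```
import static google.registry.persistence.transaction.TransactionManagerFactory.ofyTm;

/** Applies committed Cloud SQL transactions to Datastore in commit order. */
void replayToDatastore() {
  long lastSeenId = LastSqlTransaction.load().getTransactionId();
  for (TransactionEntity txn : loadTransactionsAfter(lastSeenId)) {
    ofyTm()
        .transact(
            () -> {
              // Apply the serialized mutations and advance the cursor in the
              // same transaction, so a crash can't skip or double-apply one.
              applyTransaction(txn.getContents());
              LastSqlTransaction.store(txn.getId());
            });
  }
}
```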
* Post-review changes
Fixed immutability issues with LastSqlTransaction; we now write a single
transaction at a time to datastore.
* Changes requested in review
* Get a batch of SQL transactions
Read a batch of SQL transactions at a time and then process them
transactionally against datastore.
* Bring this up-to-date with the codebase
* Changes requested in review
* Fixed date in copyright
* Update a few plugins for Java 11 compatibility
Guice 5.0.1 is now compatible with Java 11. However, we don't
depend on Guice directly; rather, Soy depends on Guice. So I added a
direct dependency on Guice 5.0 just before Soy in order to frontload the
newer Guice version and pull it in ahead of Soy's transitive one.
Mockito 3.7.7 is now compatible with Java 11. The complication is that
we need to use the inline version of Mockito, which among other things
also allows mocking for final classes (hooray!). It will eventually
become the default Mockito mock maker but for now it needs to be
manually activated.
Note that the inline version now introduces another warning:
```
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
```
I think this warning is WAI (working as intended), given how the inline mock
maker works. Waiting on the author to confirm.
After these two changes, the only illegal reflective access is caused by
the App Engine SDK tools, which we will rid ourselves of when we migrate off
of GAE.
* Restore package-lock.json
* Add GetPremiumListCommand
When testing the premium list refactor, it would have been nice and
convenient to have this. Currently, we have no way that I'm aware of to
inspect premium list contents.
- Adds a CriteriaQueryBuilder class that allows us to build
CriteriaQuery objects with sane and modular WHERE and ORDER BY clauses
(see the sketch after this list). CriteriaQuery requires that all WHERE
and ORDER BY clauses be specified at the same time (else later ones will
overwrite the earlier ones), so in order to have a proper builder
pattern we need to wait to build the query object until we are done
adding clauses.
- In addition, encapsulating the query logic in the CriteriaQueryBuilder
class means that we don't need to deal with the complicated Root/Path
branching, otherwise we'd have to keep track of CriteriaQuery and Root
objects everywhere.
- Added a REPLAYED_ENTITIES TransitionId that will represent all
replayed entities, e.g. EppResources. Also set this, by default, to
always be CLOUD_SQL if we're using the SQL transaction manager in tests.
- Added branching logic in RdapEntitySearchAction based on that transition
ID that determines whether we do the existing ofy query logic or JPA
logic.
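A sketch of how the builder reads in use; the method names are assumptions based on the description, not the exact API:
```
import google.registry.model.domain.DomainBase;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

CriteriaQuery<DomainBase> buildQuery(CriteriaBuilder cb) {
  // WHERE and ORDER BY clauses accumulate in the builder and are only applied
  // together in build(), so later clauses can't clobber earlier ones.
  return CriteriaQueryBuilder.create(DomainBase.class)
      .where("tld", cb::equal, "example")
      .orderByAsc("fullyQualifiedDomainName")
      .build();
}
```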
* Partially convert RDAP ofy calls to tm calls
This converts the simple retrieval actions but does not touch the more
complicated search actions -- those use some ofy() query searching logic
and will likely end up being significantly more complicated than this
change. Here, though, we are just changing the calls that can be
converted easily to tm() lookups.
To change in future PRs:
- RdapDomainSearchAction
- RdapEntitySearchAction
- RdapNameserverSearchAction
- RdapSearchActionBase
* Refactor PremiumList storage and retrieval for dual-database setup
Previously, the storage and retrieval code was scattered across various
places haphazardly and there was no good way to set up dual database
access. This reorganizes the code so that retrieval is simpler and it
allows for dual-write and dual-read.
This includes the following changes:
- Move all static / object retrieval code out of PremiumList -- the
class should solely consist of its data and methods on its data and it
shouldn't have to worry about complicated caching or retrieval
- Split all PremiumList retrieval methods into PremiumListDatastoreDao
and PremiumListSqlDao that handle retrieval of the premium list entry
objects from the corresponding databases (since the way the actual data
itself is stored is not the same between the two)
- Create a dual-DAO for PremiumList retrieval that branches between
SQL/Datastore depending on which is appropriate -- it will read from
and write to both but only log errors for the secondary DB (sketched
after this list)
- Cache the mapping from name to premium list in the dual-DAO. This is a
common code path regardless of database so we can cache it at a high
level
- Cache the ways to go from premium list -> premium entries in the
Datastore and SQL DAOs. These caches are specific to the corresponding
DB and should thus be stored in the corresponding DAO.
- Move the database-choosing code from the actions to the lower-level
dual-DAO. This is because we will often wish to access this premium list
data in flows, and all accesses should use the proper DB-selecting code.
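In outline, a read through the dual-DAO looks something like this (a hedged sketch with SQL assumed primary for illustration; the DAO method names are assumptions):
```
import com.google.common.flogger.FluentLogger;
import java.util.Optional;

private static final FluentLogger logger = FluentLogger.forEnclosingClass();

/** Reads from the primary DB; mismatches in the secondary are only logged. */
Optional<PremiumList> getPremiumList(String name) {
  Optional<PremiumList> primary = PremiumListSqlDao.getLatestRevision(name);
  try {
    Optional<PremiumList> secondary = PremiumListDatastoreDao.getLatestRevision(name);
    if (!primary.equals(secondary)) {
      logger.atSevere().log("PremiumList %s differs between databases.", name);
    }
  } catch (RuntimeException e) {
    // A secondary-DB failure must never break the primary read path.
    logger.atSevere().withCause(e).log("Error loading %s from secondary DB.", name);
  }
  return primary;
}
```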
* Properly set up JPA in BEAM workers
Sets up a singleton JpaTransactionManager on each worker JVM for all
pipeline nodes to share.
Also added/updated relevant dependencies. The BEAM SDK version change
caused the InitSqlPipeline's graph to change.
* Fix ContactTransferData SQL loads
ContactTransferData is currently loaded back from SQL as an unspecialized
TransferData object. Replace it with the ContactTransferData object that we
reconstitute from it.
It's likely that this could be done more straightforwardly with a schema
change.
* Changes requested in review
* Fix obscure bug when checking restore prices of duplicate domain names
There were instances of "java.lang.IllegalArgumentException: Multiple entries
with same key" in the logs, caused by attempting to construct an ImmutableMap
containing duplicate keys. It turns out this was happening in the domain check
flow when the following conditions were all simultaneously met:
1. The older v06 fee extension is used
2. The same domain name is being queried multiple times in a single check
command (which is valid per the spec but doesn't actually make any sense)
3. Said domain exists
4. The cost of a restore (an uncommon operation) is being checked
When all of those conditions were met, an error was being thrown when the
dupe-containing list of domain names was used as the keys of a new Map. This
fixes that bug by calling .distinct() first.
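The essence of the fix, with illustrative names (the real code lives in the domain check flow):
```
import static com.google.common.collect.ImmutableMap.toImmutableMap;

import com.google.common.collect.ImmutableMap;
import java.util.List;
import org.joda.money.Money;

ImmutableMap<String, Money> getRestorePrices(List<String> domainNames) {
  return domainNames.stream()
      // The same name may legally appear several times in one check command;
      // dedupe first, since duplicate keys make ImmutableMap construction throw.
      .distinct()
      .collect(toImmutableMap(name -> name, this::getRestorePrice));
}
```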
Give enough registrars enough typewriters ...
BUG=179052195
* Add db-compare tests to three more flows
Add database comparison to the replay tests for DomainDeleteFlowTest,
DomainRenewFlowTest and DomainUpdateFlowTest.
* Add databaseTransitionSchedule entity
* Add UpdateDatabaseTransitionScheduleCommand
* Small fixes
* Change entity structure to no longer be a singleton
* Add get command
* Fix getCache
* Change id to TransitionId enum
* More fixes
* Cleanup tests
* Add link to javadoc
* Add lastUpdateTime
* Fix datatype of getCached