The dual DAO takes care of switching between databases, comparing the
results of one to the results of the other, and caching the result. All
ClaimsList retrieval and storage calls should use the dual-database DAO.
Previously, the calls for comparing the lists were scattered
throughout the codebase. Now, there is one class for retrieval and
comparison (the dual DAO), one class for retrieval from SQL (the SQL
DAO), and one class for retrieval from Datastore (ClaimsListShard
itself, though the retrieval could be moved into a separate DAO if we
wished).
In addition, we rename the ClaimsListDao to ClaimsListSqlDao
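Roughly, the pattern looks like this (a minimal sketch with hypothetical class and field names, not the actual ClaimsList classes): read from the primary database, compare against the secondary without letting it affect the result, and cache.
```java
import java.util.Optional;
import java.util.function.Supplier;
import java.util.logging.Logger;

/** Sketch of the dual-DAO read path: primary read, secondary comparison, cached result. */
public class DualDaoSketch<T> {

  private static final Logger logger = Logger.getLogger(DualDaoSketch.class.getName());

  private final Supplier<T> primaryReader;              // e.g. Datastore while it is primary
  private final Supplier<Optional<T>> secondaryReader;  // e.g. SQL
  private T cached;

  public DualDaoSketch(Supplier<T> primaryReader, Supplier<Optional<T>> secondaryReader) {
    this.primaryReader = primaryReader;
    this.secondaryReader = secondaryReader;
  }

  /** Returns the primary result, logging (but never throwing) if the secondary result differs. */
  public synchronized T get() {
    if (cached == null) {
      T primary = primaryReader.get();
      secondaryReader.get().ifPresent(secondary -> {
        if (!secondary.equals(primary)) {
          logger.warning("Mismatch between primary and secondary database results");
        }
      });
      cached = primary;
    }
    return cached;
  }
}
```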
* Update creation script for schema_deployer
Move the create user command for schema_deployer before the
initialization of roles. As the owner of all schema objects, it needs to
be present before grant statements are executed.
Also fixed a bug in credential printing, which failed when the password
contained '%'.
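The underlying script is not Java, but the failure mode is the familiar format-string one; a hedged Java illustration (the names and values here are made up):
```java
public class CredentialPrintingExample {
  public static void main(String[] args) {
    String user = "schema_deployer";
    String password = "pa%ssword"; // a password containing '%'

    // Buggy pattern: the credential is concatenated into the format string itself, so the
    // '%s' inside the password is treated as a conversion specifier and String.format throws.
    // String.format("user: " + user + " password: " + password);

    // Safe pattern: keep the credential out of the format string and pass it as an argument.
    System.out.println(String.format("user: %s password: %s", user, password));
  }
}
```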
This allows us to get rid of the DAO as well as the sanity-checking
methods, since we can be reasonably sure that the fields will be the
same. Future PRs will add conversions from ofy() to tm() calls that will
make sure that we get the same data in both Datastore and SQL.
* Convert more flow tests to replay/compare
Add the replay extension to another batch of flow tests. In the course of
this:
- Refactor out domain deletion code into DatabaseHelper so that it can be used
from multiple tests.
- Make null handling uniform for contact phone numbers.
* Convert postLoad method to onLoad.
* Remove "Test" import missed during rebase
* Deal with persistence of billing cancellations
Deal with the persistence of billing cancellations, which were added to the
master branch since this PR was initially sent for review.
* Adding forgotten flyway file
* Removed debug variable
* Add schema_deployer SQL user to SecretManager
Add the 'schema_deployer' user to the SecretManager so that its
credential can be set up. The schema deployment process will use this
user instead of the 'postgres' user.
Changed the output of the get_sql_credential command for the schema
deployment process.
Added a SQL script that documents the privileges granted to
'schema_deployer'.
* Clear autorenew end time when a domain is restored
This allows us to still see in the database which now-deleted domains had
reached expiration, while correctly not re-deleting the domain immediately if
the registrar pays to explicitly restore the domain.
This also resolves some TODOs around data migration for this field on domain so
that it's not null, as said migration has already been completed.
* Remove grace period ID @OnLoads now that migration is complete
I verified in BigQuery that all grace period IDs are now allocated (as expected
given that the re-save all EPP resource mapreduce has been run several times
since this migration started last year). The query I used for verification is:
SELECT fullyQualifiedDomainName, gp, ot
FROM `domain-registry.latest_datastore_export.DomainBase`
JOIN UNNEST(gracePeriods.billingEventRecurring) AS gp
JOIN UNNEST(gracePeriods.billingEventOneTime) AS ot
WHERE gp.id IS NULL or ot.id IS NULL
BUG=169873747
* Add daily cron entries for DeleteExpiredDomainsAction
This also requires setting this action to GET instead of POST, as GAE cron makes
GET requests.
* Use shared jar to stage BEAM pipeline if possible
Allow multiple BEAM pipelines with the same classes and dependencies to
share one Uber jar.
Added metadata for BulkDeleteDatastorePipeline.
Updated shell and Cloud Build scripts to stage all pipelines in one
step.
* Add comments to Cloud SQL configs
I believe the similarity of our stack trace to https://github.com/brettwooldridge/HikariCP/issues/1212
is misleading.
The real cause of the exceptions may be that we ran out of connections. At the
time, the production Cloud SQL server could handle 500 connections at the
maximum. That number was within reach of a busy Nomulus server.
The maximum number of connections in production has since been increased to
1000, and we haven't encountered this issue for a long time. All recent
connection problems have been due to Cloud SQL maintenance or other
GCP-related issues.
This issue is tracked by b/154720215, which is being closed with this
PR.
* Convert DomainTransferRequestFlow to tm() calls
Besides the standard ofy-to-tm conversions this includes storing the
billing event cancellation VKey in the DomainTransferData object so that
we know to handle it on process / cancellation.
* Stage the init_sql_pipeline in CloudBuild
Defined metadata file and added Gradle uberJar task for the pipeline,
which are needed for staging.
Updated the Cloud Build script to stage this pipeline during the build
process.
* Add TODOs regarding cloud sql database name change
We should choose a different database name for nomulus data since using
'postgres' is bad practice. See b/181693544 for more background.
We have decided to delay the db change to the time when we upgrade
postgresql version. This PR adds TODOs to all occurrences of the jdbcUrl
property, including those in the internal-repo. This property will change
when we upgrade, so the TODOs will be noticed.
* Use ReplaySpecializer to fix DomainBase replays
DomainBase currently has a number of ancillary objects that require a
cascading delete that doesn't get propagated. Implement beforeSqlDelete() in
DomainContent to delete these child entities.
* Remove unnecessary Query variable
* Fix rebase error
* Update more dependencies to newer versions
* Add lockfiles and back out 2 problematic dep updates
* Fix the build (backs out more changes)
* Back out qdox 2.0 too
* Rewrite the JPA output connector for BEAM
Following BEAM's IO connector style, added a RegistryJpaIO class to hold
IO connectors, and implemented the Write connector as a static inner
class in it. The JpaTransactionManager used by the Write connector
retrieves SQL credentials from the SecretManager.
Cleaned up SQL-related pipeline parameters.
Converted the InitSqlPipeline to use RegistryJpaIO.
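A stripped-down sketch of the Write-connector shape, using plain JPA (this is an illustration only; the real RegistryJpaIO fetches SQL credentials from the SecretManager and batches writes, and the persistence unit name below is an assumption):
```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;

/** Sketch of a BEAM-IO-style JPA write connector. */
public class JpaIoSketch {

  /** The Write connector implemented as a static inner class, per BEAM IO connector style. */
  public static class Write<T> extends PTransform<PCollection<T>, PDone> {
    @Override
    public PDone expand(PCollection<T> input) {
      input.apply("WriteViaJpa", ParDo.of(new WriteFn<T>()));
      return PDone.in(input.getPipeline());
    }
  }

  /** Persists each element in its own transaction; one EntityManagerFactory per worker. */
  static class WriteFn<T> extends DoFn<T, Void> {
    private transient EntityManagerFactory emf;

    @Setup
    public void setup() {
      // The persistence unit name is an assumption for this sketch.
      emf = Persistence.createEntityManagerFactory("nomulus");
    }

    @ProcessElement
    public void processElement(@Element T entity) {
      EntityManager em = emf.createEntityManager();
      try {
        em.getTransaction().begin();
        em.merge(entity);
        em.getTransaction().commit();
      } finally {
        em.close();
      }
    }

    @Teardown
    public void teardown() {
      if (emf != null) {
        emf.close();
      }
    }
  }
}
```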
* Add SQL queries to RdapDomainSearchAction
Unfortunately, because ORDER BY uses the locale's sorting functionality,
we end up with some weird sort orders in SQL-land (notably, periods are
ignored / omitted). As a result, a few of the tests have to be separated
out into ofy and SQL versions based on the expected sort order.
In addition, there isn't a way to query @Convert-ed fields in Postgres
via the standard Hibernate / JPA query language, meaning we have to use
a raw Postgres query for that.
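The workaround for the @Convert-ed fields looks roughly like this (the table and column names are illustrative assumptions, not the real schema):
```java
import java.util.List;
import javax.persistence.EntityManager;

public class ConvertedFieldQueryExample {

  /**
   * The standard JPA query language cannot filter on this @Convert-ed column, so we drop
   * down to a native Postgres query instead.
   */
  @SuppressWarnings("unchecked")
  static List<String> domainsWithStatus(EntityManager em, String status) {
    return em.createNativeQuery(
            "SELECT fully_qualified_domain_name FROM \"Domain\" WHERE ?1 = ANY(statuses)")
        .setParameter(1, status)
        .getResultList();
  }
}
```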
* Add a "ReplaySpecializer" to fix certain replays
Due to the fact that a given entity in either database type can map to
multiple entities in the other database, there are certain replication
scenarios that don't quite work. Current known examples include:
- propagation of cascading deletes from datastore to SQL
- creation of datastore indexed entities for SQL entities (where indexes are a
first-class concept)
This change introduces a ReplaySpecializer class, which allows us to declare
static method hooks at the entity class level that define any special
operations that need to be performed before or after replaying a mutation for
any given entity type.
Currently, "before SQL delete" is the only supported hook. A change to
DomainContent demonstrating how this facility can be used to fix problems in
cascading delete propagation will be sent as a subsequent PR.
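The shape of the hook mechanism can be sketched as follows (hypothetical names and signatures; the real class may differ in how it locates and typed-checks the hooks). Per the follow-up commit below, hook failures abort the replay rather than being swallowed.
```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

/** Sketch: entity classes may declare a static beforeSqlDelete(...) hook, invoked reflectively. */
public class ReplaySpecializerSketch {

  /** Invokes {@code public static void beforeSqlDelete(Object key)} on the entity class, if present. */
  public static void beforeSqlDelete(Class<?> entityClass, Object key) {
    try {
      Method hook = entityClass.getDeclaredMethod("beforeSqlDelete", Object.class);
      hook.invoke(null, key);
    } catch (NoSuchMethodException e) {
      // No hook declared for this entity type; nothing special to do before the delete.
    } catch (IllegalAccessException | InvocationTargetException e) {
      // Hook failures should abort the replay rather than silently dropping child entities.
      throw new RuntimeException("beforeSqlDelete hook failed for " + entityClass.getName(), e);
    }
  }
}

/* An entity such as DomainContent could then declare a hook that cascades the delete to its
 * child entities, e.g. (sketch only):
 *
 *   public static void beforeSqlDelete(Object domainKey) {
 *     // delete grace periods, DS data, etc. keyed to this domain
 *   }
 */
```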
* Throw exception on beforeSqlDelete failures
* Changes for review
* Convert DomainTransferRejectFlow to use tm() methods
This change includes a few other necessary dependencies to converting
DomainTransferRejectFlowTest to be a dual-database test. Namely:
- The basic "use tm() instead of ofy()" and "branching database
selection on what were previously raw ofy queries"
- Modification of the PollMessage convertVKey methods to do what they
say they do
- Filling the generic pending / response fields in PollMessage based on what type of
poll message it is (this has to be done because SQL is not very good at
storing ambiguous superclasses)
- Setting the generic pending / response fields in PollMessage upon
build
- Filling out the serverApproveEntities field in DomainTransferData with
all necessary poll messages / billing events that should be cancelled on
rejection
- Scattered changes in DatabaseHelper to make sure that we're saving and
loading entities correctly where we weren't before
* Disable whois caching in nomulus tool
The whois commands previously served output generated from cached EppResource
objects in most cases. While this is entirely appropriate on the server side,
it is less useful when these commands are run from nomulus tool and, in fact,
when run from the "shell" command it results in changes that have been
applied from the shell not being visible from a subsequent "whois": the
command may serve information from an earlier, cached version of the
resource instead of the latest version.
This implementation uses dagger for parameterization of cached/non-cached
modes. I did consider the possibility of simply parameterizing the query
commands in all cases as discussed, however, having gone down the
daggerization path and having gotten it to work, I have to say I find this
approach preferable. There's really no case for identifying
cached/non-cached on a per-command basis and doing so would require
propagating the flag throughout all levels of the API and all callsites.
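As a rough sketch of what that Dagger parameterization can look like (all names here are hypothetical stand-ins, not the actual WHOIS bindings):
```java
import dagger.Module;
import dagger.Provides;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.inject.Qualifier;

/** Hypothetical sketch of parameterizing cached vs. non-cached WHOIS lookups with Dagger. */
public class WhoisCachingSketch {

  /** Qualifier for the non-caching bindings that nomulus tool would request. */
  @Qualifier
  @Retention(RetentionPolicy.RUNTIME)
  @interface NonCaching {}

  /** Stand-in for "how WHOIS loads a resource"; the real types differ. */
  interface ResourceLoader {
    String load(String label);
  }

  @Module
  static class WhoisModule {

    /** Default (server) binding: serve from the EppResource cache. */
    @Provides
    static ResourceLoader provideCachingLoader() {
      return label -> "cached:" + label; // placeholder for the cache lookup
    }

    /** Tool binding: always read the latest version of the resource. */
    @Provides
    @NonCaching
    static ResourceLoader provideNonCachingLoader() {
      return label -> "fresh:" + label; // placeholder for the uncached lookup
    }
  }
}
```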
Tested: In addition to the new unit test which explicitly verifies the
caching/noncaching behavior of the new commands, tested the actual failing
sequence from "nomulus -e sandbox shell" and verified that the correct results
are served after a mutation.
* Fixed copyright year
* More copyright date fixes
* Added WhoisCommandFactoryTest to fragile tests
I suspect that this test has some contention with other tests; it's not clear
why.
* Replay Cloud SQL transactions against datastore
Implement the ReplicateToDatastore cron job that will apply all Cloud SQL
transactions to the datastore. The last transaction id is stored in a
LastSqlTransaction entity in datastore.
Note that this will not be activated in production until a) the cron
configuration is updated and b) the cloudSql.replicateTransactions flag is set
to true in the nomulus config file.
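The rough pseudo-structure of the replay loop, with placeholder names rather than the real API: load the id of the last replayed transaction from Datastore, fetch the SQL transactions recorded after it, and apply each one transactionally against Datastore before advancing the checkpoint.
```java
import java.util.List;

/** Sketch of the ReplicateToDatastore replay loop; all types and names are placeholders. */
public class ReplicateToDatastoreSketch {

  /** A recorded SQL transaction: its id plus the serialized mutations to replay. */
  static class RecordedTransaction {
    final long id;
    final byte[] serializedMutations;

    RecordedTransaction(long id, byte[] serializedMutations) {
      this.id = id;
      this.serializedMutations = serializedMutations;
    }
  }

  interface Checkpoint {
    long lastTransactionId();           // read from the LastSqlTransaction entity

    void advanceTo(long transactionId); // persist the new high-water mark
  }

  interface SqlTransactionLog {
    List<RecordedTransaction> fetchAfter(long transactionId, int batchSize);
  }

  interface DatastoreApplier {
    void applyInTransaction(RecordedTransaction transaction);
  }

  static void replayOneBatch(
      Checkpoint checkpoint, SqlTransactionLog log, DatastoreApplier applier, int batchSize) {
    List<RecordedTransaction> batch = log.fetchAfter(checkpoint.lastTransactionId(), batchSize);
    for (RecordedTransaction transaction : batch) {
      // Each SQL transaction is applied as a single Datastore transaction, and only then is
      // the checkpoint advanced, so replays must be safe to repeat after a crash.
      applier.applyInTransaction(transaction);
      checkpoint.advanceTo(transaction.id);
    }
  }
}
```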
* Post-review changes
Fixed immutability issues with LastSqlTransaction, write a single transaction
at a time to datastore.
* Changes requested in review
* Get a batch of SQL transactions
Read a batch of SQL transactions at a time and then process them
transactionally against datastore.
* Bring this up-to-date with the codebase
* Changes requested in review
* Fixed date in copyright
* Log forbidden HTTP request method at warning
This seems more reasonable. It makes potential issues with how
requests are generated more discoverable in the log.
GAE cron only issues HTTP GET requests to the endpoint in question. This
particular endpoint only allows POSTs, so this cron job never succeeded.
This is not a big problem, as this job is only meant to catch up on any
unforeseen upload failures; even if it needs to catch up but fails, the
monthly staging job (which is enqueued correctly by cron) will
eventually bring everything up to date.
* Update a few plugins for Java 11 compatibility
Guice 5.0.1 is now compatible with Java 11. However we don't
directly depend on Guice. Rather Soy depends on Guice. So I added a
direct dependency on Guice 5.0 just before Soy in order to frontload Soy
and pull in the newer version.
Mockito 3.7.7 is now compatible with Java 11. The complication is that
we need to use the inline version of Mockito, which among other things
also allows mocking for final classes (hooray!). It will eventually
become the default Mockito mock maker but for now it needs to be
manually activated.
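For reference, a small example of what the inline mock maker enables (mocking a final class) and how it is activated; the class being mocked here is made up:
```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class FinalClassMockingExample {

  /** A final class that the default mock maker cannot mock. */
  static final class FinalGreeter {
    String greet() {
      return "hello";
    }
  }

  public static void main(String[] args) {
    // Works once the inline mock maker is active, e.g. by depending on
    // org.mockito:mockito-inline or by adding a
    // src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker file
    // containing the single line "mock-maker-inline".
    FinalGreeter greeter = mock(FinalGreeter.class);
    when(greeter.greet()).thenReturn("mocked");
    System.out.println(greeter.greet()); // prints "mocked"
  }
}
```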
Note that the inline version now introduces another warning:
```
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
```
I think this is working as intended (WAI), given how the inline mock
maker works; waiting on the author to confirm.
After these two changes, the only remaining illegal reflective access is
caused by the App Engine SDK tools, which we will rid ourselves of when
we migrate off of GAE.
* Restore package-lock.json
* Add GetPremiumListCommand
When testing the premium list refactor, it would have been nice and
convenient to have this. Currently we have no way of inspecting premium
list contents that I'm aware of.
- Adds a CriteriaQueryBuilder class that allows us to build
CriteriaQuery objects with sane and modular WHERE and ORDER BY clauses.
CriteriaQuery requires that all WHERE and ORDER BY clauses be specified
at the same time (else later ones will overwrite the earlier ones), so in
order to have a proper builder pattern we need to wait to build the
query object until we are done adding clauses (see the sketch after this
list).
- In addition, encapsulating the query logic in the CriteriaQueryBuilder
class means that we don't need to deal with the complicated Root/Path
branching, otherwise we'd have to keep track of CriteriaQuery and Root
objects everywhere.
- Added a REPLAYED_ENTITIES TransitionId that will represent all
replayed entities, e.g. EppResources. Also sets this, by default, to
always be CLOUD_SQL if we're using the SQL transaction manager in tests.
- Added branching logic in RdapEntitySearchAction based on that transition
ID that determines whether we do the existing ofy query logic or JPA
logic.
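A minimal sketch of the deferred-build idea behind the builder (a simplified hypothetical class, not the actual CriteriaQueryBuilder): clauses are collected first and applied to the CriteriaQuery in one shot at the end, so later additions never overwrite earlier ones.
```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Order;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

/** Collects WHERE and ORDER BY clauses and builds the CriteriaQuery only once, at the end. */
public class SimpleCriteriaQueryBuilder<T> {

  private final CriteriaBuilder cb;
  private final CriteriaQuery<T> query;
  private final Root<T> root;
  private final List<Predicate> whereClauses = new ArrayList<>();
  private final List<Order> orderByClauses = new ArrayList<>();

  public SimpleCriteriaQueryBuilder(EntityManager em, Class<T> clazz) {
    this.cb = em.getCriteriaBuilder();
    this.query = cb.createQuery(clazz);
    this.root = query.from(clazz);
  }

  /** Collects a WHERE clause; nothing is applied to the query yet. */
  public SimpleCriteriaQueryBuilder<T> whereEquals(String field, Object value) {
    whereClauses.add(cb.equal(root.get(field), value));
    return this;
  }

  /** Collects an ORDER BY clause. */
  public SimpleCriteriaQueryBuilder<T> orderByAsc(String field) {
    orderByClauses.add(cb.asc(root.get(field)));
    return this;
  }

  /** Applies all collected clauses at once, so none of them overwrite each other. */
  public CriteriaQuery<T> build() {
    return query.where(whereClauses.toArray(new Predicate[0])).orderBy(orderByClauses);
  }
}
```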
Because we don't store serverApproveEntities specifically as a set in
the SQL world, we need to make sure that the entities are all separated
and stored if they exist. For domain transfers, there are three
separate poll messages (client losing, client gaining, autorenew), so we
need to store and retrieve each of them.
Found this while converting domain transfer flows to SQL.
* Partially convert RDAP ofy calls to tm calls
This converts the simple retrieval actions but does not touch the more
complicated search actions -- those use some ofy() query searching logic
and will likely end up being significantly more complicated than this
change. Here, though, we are just changing the calls that can be
converted easily to tm() lookups.
To change in future PRs:
- RdapDomainSearchAction
- RdapEntitySearchAction
- RdapNameserverSearchAction
- RdapSearchActionBase
* Refactor PremiumList storage and retrieval for dual-database setup
Previously, the storage and retrieval code was scattered across various
places haphazardly and there was no good way to set up dual database
access. This reorganizes the code so that retrieval is simpler and it
allows for dual-write and dual-read.
This includes the following changes:
- Move all static / object retrieval code out of PremiumList -- the
class should solely consist of its data and methods on its data and it
shouldn't have to worry about complicated caching or retrieval
- Split all PremiumList retrieval methods into PremiumListDatastoreDao
and PremiumListSqlDao that handle retrieval of the premium list entry
objects from the corresponding databases (since the way the actual data
itself is stored is not the same between the two)
- Create a dual-DAO for PremiumList retrieval that branches between
SQL/Datastore depending on which is appropriate -- it will read from
and write to both but only log errors for the secondary DB (sketched
after this list)
- Cache the mapping from name to premium list in the dual-DAO. This is a
common code path regardless of database so we can cache it at a high
level
- Cache the ways to go from premium list -> premium entries in the
Datastore and SQL DAOs. These caches are specific to the corresponding
DB and should thus be stored in the corresponding DAO.
- Moves the database-choosing code from the actions to the lower-level
dual-DAO. This is because we will often wish to access this premium list
data in flows and all accesses should use the proper DB-selecting code
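The write side of that dual-DAO behavior can be sketched as follows (hypothetical names, not the actual PremiumList DAOs): a failure in the primary database propagates to the caller, while a failure in the secondary database is only logged.
```java
import java.util.function.Consumer;
import java.util.logging.Level;
import java.util.logging.Logger;

/** Sketch of the dual-write behavior: write to both databases, log-only for the secondary. */
public class DualWriteSketch<T> {

  private static final Logger logger = Logger.getLogger(DualWriteSketch.class.getName());

  private final Consumer<T> primaryWriter;   // e.g. Datastore writer while Datastore is primary
  private final Consumer<T> secondaryWriter; // e.g. SQL writer

  public DualWriteSketch(Consumer<T> primaryWriter, Consumer<T> secondaryWriter) {
    this.primaryWriter = primaryWriter;
    this.secondaryWriter = secondaryWriter;
  }

  public void save(T entity) {
    // A failure in the primary database is a real failure and propagates to the caller.
    primaryWriter.accept(entity);
    try {
      secondaryWriter.accept(entity);
    } catch (RuntimeException e) {
      // Failures in the secondary database are logged but must not affect the caller.
      logger.log(Level.WARNING, "Error writing to secondary database", e);
    }
  }
}
```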
* Properly set up JPA in BEAM workers
Sets up a singleton JpaTransactionManger on each worker JVM for all
pipeline nodes to share.
Also added/updated relevant dependencies. The BEAM SDK version change
caused the InitSqlPipeline's graph to change.
* Fix ContactTransferData SQL loads
ContactTransferData is currently loaded back from SQL as an unspecialized
TransferData object. Replace it with the ContactTransferData object that we
reconstitute from it.
It's likely that this could be done more straightforwardly with a schema
change.
* Changes requested in review
* Fix obscure bug when checking restore prices of duplicate domain names
There were instances of "java.lang.IllegalArgumentException: Multiple entries
with same key" in the logs, caused by attempting to construct an ImmutableMap
containing duplicate keys. It turns out this was happening in the domain check
flow when the following conditions were all simultaneously met:
1. The older v06 fee extension is used
2. The same domain name is being queried multiple times in a single check
command (which is valid per the spec but doesn't actually make any sense)
3. Said domain exists
4. The cost of a restore (an uncommon operation) is being checked
When all of those conditions were met, an error was being thrown when the
dupe-containing list of domain names was used as the keys of a new Map. This
fixes that bug by calling .distinct() first.
Give enough registrars enough typewriters ...
BUG=179052195
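A self-contained illustration of the failure and the fix; lookupRestorePrice is a made-up stand-in for the real restore-price computation:
```java
import static com.google.common.collect.ImmutableMap.toImmutableMap;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import java.math.BigDecimal;

public class DuplicateKeyExample {

  // Hypothetical price lookup standing in for the real restore-price logic.
  static BigDecimal lookupRestorePrice(String domainName) {
    return BigDecimal.TEN;
  }

  public static void main(String[] args) {
    // A v06 fee-extension check command may name the same existing domain more than once.
    ImmutableList<String> domainNames = ImmutableList.of("example.tld", "example.tld");

    // Without .distinct(), this collector throws
    // "java.lang.IllegalArgumentException: Multiple entries with same key".
    ImmutableMap<String, BigDecimal> restorePrices =
        domainNames.stream()
            .distinct()
            .collect(toImmutableMap(name -> name, DuplicateKeyExample::lookupRestorePrice));

    System.out.println(restorePrices);
  }
}
```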
* Revert BEAM pipeline back to SQL credential file
Stop using the SecretManager for SQL credential in BEAM for now. The
SecretManager cannot be injected into the code on pipeline workers
because RegistryEnvironment is not set.
See b/179839014 for details.
* Add db-compare tests to three more flows
Add database comparison to the replay tests for DomainDeleteFlowTest,
DomainRenewFlowTest and DomainUpdateFlowTest.
* Add databaseTransitionSchedule entity
* add UpdateDatabaseTransitionScheduleCommand
* small fixes
* change entity structure to no longer be singleton
* add get command
* fix getCache
* Change id to TransitionId enum
* more fixes
* Cleanup tests
* Add link to javadoc
* Add lastUpdateTime
* fix datatype of getCached