* Create a Dataflow pipeline to resave EPP resources
This has two modes.
If `fast` is false, then we will just load all EPP resources, project them to the current time, and save them.
If `fast` is true, we will attempt to intelligently load and save only resources that we expect to have changes applied when we project them to the current time. This means resources with pending transfers that have expired, domains with expired grace periods, and non-deleted domains that have expired (we expect that they autorenewed).
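As a rough sketch of the fast-mode selection (the predicate and accessor names below are hypothetical, not the pipeline's actual code):

```java
import org.joda.time.DateTime;

/** Hypothetical sketch of the fast-mode filter; all accessors are illustrative. */
final class FastResaveFilter {

  /** Returns true if projecting {@code resource} to now is expected to change it. */
  static boolean shouldResave(EppResource resource, DateTime now) {
    // Resources whose pending transfer has passed its expiration time.
    if (resource.hasExpiredPendingTransfer(now)) {
      return true;
    }
    if (resource instanceof Domain) {
      Domain domain = (Domain) resource;
      // Domains with an expired grace period, or non-deleted domains past
      // their expiration time (we expect these to have autorenewed).
      return domain.hasExpiredGracePeriod(now)
          || (!domain.isDeleted(now)
              && domain.getRegistrationExpirationTime().isBefore(now));
    }
    return false;
  }
}
```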
For some inexplicable reason, the RDE beam pipeline in both sandbox and
production has been broken for the past week or so. Our investigations
revealed that during the CoGroupByKey stage, some repo ID -> revision ID
pairs were duplicated. This may be a problem with the Dataflow runtime
which somehow introduced the duplicate during reshuffling.
This PR attempts to fix the symptom only by deduping the revision IDs. We
will do some more investigation and possibly follow up with the Dataflow
team if we determine it is an upstream issue.
TESTED=deployed the pipeline and successfully ran sandbox RDE with it.
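The dedup itself can be as simple as a standard Beam `Distinct` applied to the pairs before grouping; this is a minimal sketch, assuming the pairs flow through a `PCollection<KV<String, Long>>` (the class and variable names are illustrative):

```java
import org.apache.beam.sdk.transforms.Distinct;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

final class RevisionIdDeduper {
  // Drop duplicated repo ID -> revision ID pairs before the CoGroupByKey so
  // that a duplicate cannot produce duplicate deposit fragments downstream.
  static PCollection<KV<String, Long>> dedupe(
      PCollection<KV<String, Long>> revisionIds) {
    return revisionIds.apply("DedupeRevisionIds", Distinct.create());
  }
}
```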
* Begin migration from Guava Cache to Caffeine
Caffeine is apparently strictly superior to the older Guava Cache (and is even
recommended in lieu of Guava Cache on Guava Cache's own documentation).
This adds the relevant dependencies and switches over just a single call site to
the new Caffeine cache. It also implements a new pattern: asynchronously
refreshing the cache value starting at half of our configured expiry time. For
frequently accessed entities this will allow us to NEVER block on a load, as the
value will be refreshed in the background long before it would otherwise expire
and force a synchronous reload during a read operation.
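A minimal sketch of the pattern (the expiry value and loader are illustrative, not our real configuration):

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.time.Duration;

final class AsyncRefreshingCacheExample {
  // Stand-in for the configured cache duration.
  private static final Duration CACHE_EXPIRY = Duration.ofMinutes(10);

  static final LoadingCache<String, String> CACHE =
      Caffeine.newBuilder()
          .expireAfterWrite(CACHE_EXPIRY)
          // Trigger an async reload at half the expiry, so frequently read
          // entries are refreshed in the background and never block a read.
          .refreshAfterWrite(CACHE_EXPIRY.dividedBy(2))
          .build(AsyncRefreshingCacheExample::loadValue);

  // Stand-in for the real database load.
  private static String loadValue(String key) {
    return "value for " + key;
  }
}
```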
* Add new columns to BillingEvent.java
* Improve PR and modify JodaMoneyType to handle null currency in override
* Add test cases for edge cases of nullSafeGet in JodaMoneyType
* Improve assertions
* Remove dos.xml from the configs
We don't have a dos config right now, and applying dos settings from "gcloud app
deploy" is deprecated and has started causing problems.
If we add dos configs later, they should be applied using "gcloud app
firewall-rules".
* Build Java8-compatible release
Use the new options.release Gradle property to make sure builds are
compatible with Java 8, which is the runtime on App Engine.
This new property replaces sourceCompatibility, targetCompatibility, and
bootclasspath (the latter wasn't previously set, which is why we couldn't
detect Java 9 API usage when building).
* Ignore read-only when saving commit logs
Ignore read-only when saving commit logs and commit log mutations so that we
can safely replicate in read-only mode. This should be safe, as we only ever
get to the point of saving commit logs and mutations when something has
already actually been modified in a transaction, meaning that we should have hit
the "read only" sentinel already.
This also introduces the ability to set the Clock in the
TransactionManagerFactory so that we can test this functionality.
* Changes per review
* Fix issues affecting tests
- Restore clobbered async phase in testNoInMigrationState_doesNothing
- Restore system clock to TransactionManagerFactory to avoid affecting other
tests.
* Change billingIdentifier to BillingAccountMap in invoicing pipeline
* Add a default for billing account map
* Throw error on missing PAK
* Add unit test
Check for a PSQLException referencing a failed connection to "google:5433",
which likely indicates that there is another nomulus tool instance running.
It's worth giving this hint because in cases like this it's not at all obvious
that the other instance of nomulus is problematic.
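A sketch of the kind of check involved, assuming the hint is raised by walking the cause chain (names here are illustrative):

```java
import org.postgresql.util.PSQLException;

final class ConnectionFailureHinter {
  /** Prints a hint if the failure looks like a refused connection to "google:5433". */
  static void maybeHintDuplicateNomulusInstance(Throwable thrown) {
    for (Throwable cause = thrown; cause != null; cause = cause.getCause()) {
      if (cause instanceof PSQLException
          && String.valueOf(cause.getMessage()).contains("google:5433")) {
        System.err.println(
            "Hint: another instance of the nomulus tool may already be running.");
        return;
      }
    }
  }
}
```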
* Add a no-async actions DB migration phase
This needs to be set several hours prior to entering the READONLY stage. This is
not a read-only stage; all synchronous actions under Datastore (such as domain
creates) will continue to succeed. The only things that will fail are host
deletes, host renames, and contact deletes, as these three actions require a
mapreduce to run before they are complete, and we don't want mapreduces hanging
around and executing during what is supposed to be a short duration READONLY
period.
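Schematically, the gating amounts to something like the following (the enum and flow names are illustrative stand-ins for the real migration-state machinery):

```java
import com.google.common.collect.ImmutableSet;

final class NoAsyncPhaseCheck {
  enum MigrationPhase { NORMAL, NO_ASYNC, READ_ONLY }

  // The three flows that schedule a mapreduce to finish their work.
  private static final ImmutableSet<String> ASYNC_FLOWS =
      ImmutableSet.of("HostDeleteFlow", "HostUpdateFlow", "ContactDeleteFlow");

  static void checkFlowAllowed(MigrationPhase phase, String flowName) {
    if (phase != MigrationPhase.NORMAL && ASYNC_FLOWS.contains(flowName)) {
      throw new IllegalStateException(
          flowName + " is disallowed during the " + phase + " phase");
    }
  }
}
```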
* Use UrlFetch for RDE and default TLS (1.2) for other URL connections
This removes the TLS 1.3 settings in the module providers and,
essentially, reverts the changes in #1535 only for the RdeReporter and
RdeReportActionTest.
We have a cron job that runs the RDE upload action every 4 hours for all
TLDs. Normally this should be a no-op because an RDE upload is scheduled
after RDE staging is completed, and when it fails with a non-2XX status it
will retry. However, if for some reason it fails with a 2XX status (like
waiting for the SFTP cursor), it will not retry but relies on the cron job to
catch up.
With the BEAM RDE pipeline, every staging job saves all its deposits in a
uniquely named folder to avoid the need for a lock, which is not
practical in BEAM. However, the cron job has no way of knowing what the
prefixes are for each TLD, so it will fail in SQL mode.
In this PR we implement logic to guess what the prefix should be and use
it if we are in SQL mode and a prefix is not provided.
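A minimal sketch of the guessing, assuming the uniquely named deposit folders sort chronologically (the listing and naming scheme are assumptions, not the actual implementation):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

final class RdePrefixGuesser {
  /** Picks the most recent staging prefix from the candidate folder names. */
  static Optional<String> guessPrefix(List<String> depositFolders) {
    // If folder names embed a timestamp, the lexicographic max is the newest.
    return depositFolders.stream().max(Comparator.naturalOrder());
  }
}
```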
* Set the initial worker count for the RDE beam pipeline to 24
This likely will speed up the pipeline by skipping the initially slow
process of spinning up instances.
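Concretely, this is just a matter of setting the Dataflow worker-pool option at launch (the surrounding launch code is elided):

```java
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

final class RdePipelineLaunchExample {
  /** Requests 24 workers up front instead of autoscaling up from a cold start. */
  static DataflowPipelineOptions warmStartOptions() {
    DataflowPipelineOptions options =
        PipelineOptionsFactory.as(DataflowPipelineOptions.class);
    options.setNumWorkers(24);
    return options;
  }
}
```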
* Fix sporadic SQL Snapshot failure
The Postgresql set-snapshot statement (called in
JpaTransactionManager.setDatabaseSnapshot() method) must be the first
statement in the SQL transaction.
Currently the JpaTransactionManager.transact() method may insert a query for
DatabaseMigrationStateSchedule before the user query when the cache is
empty or the cached value has expired.
This PR proactively preloads the cache in RegistryJpaIO to prevent cache
loading inside the transaction.
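Schematically, the ordering the fix enforces looks like this (`migrationStateCache.preload()` is a hypothetical stand-in for whatever call in RegistryJpaIO warms the cache):

```java
// All names below are illustrative; only the ordering matters.
void readFromSnapshot(String snapshotId) {
  migrationStateCache.preload(); // warm the cache *outside* the transaction
  jpaTm()
      .transact(
          () -> {
            // With the cache warm, no hidden query runs first, so this is
            // guaranteed to be the first statement in the SQL transaction.
            jpaTm().setDatabaseSnapshot(snapshotId);
            // ... user queries against the snapshot ...
          });
}
```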
This PR also changes some DatabaseSnapshotTest tests to retry, in case
they run just after the cache expires (this has happened before in CI).
* Make the format check script fail with a nonzero exit code
The format check script currently outputs "true" if there were files that need
reformatting and "false" if not, which is useful for gradle but less so for
other applications (notably commit hooks). Terminate with an exit code of 1
if the format check fails.
TESTED: Tried this from both a pre-commit hook and from the gradle build.
* Add a "list_txns" to dump Transaction table
Add the list_txns command which can dump the entire contents of the
Transaction table, either in csv format or as human readable transactions.
The CSV format is useful for storing the transaction table at a specific point
in time for later reference without requiring us to repeatedly hit the
replica.
Creating this without tests because this command has a very short shelf-life
and is really only intended to be run by developers. Tested all features
locally.
* Reformatted
* Ignore trivial differences when comparing DB
Some data differences are due to entity model differences and are also
harmless. We should ignore them when comparing Datastore and SQL.
This PR ignores the following diffs (one way to express the normalization is
sketched below):
- null vs empty collection
- the empty string in the Address.street field, which is a list
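For instance, the street-address case could be normalized like this before diffing (the class and method names are illustrative):

```java
import static com.google.common.collect.ImmutableList.toImmutableList;

import com.google.common.collect.ImmutableList;
import java.util.List;

final class DiffNormalizer {
  /** Treats null as an empty list and drops empty street-line strings. */
  static ImmutableList<String> normalizeStreet(List<String> street) {
    return street == null
        ? ImmutableList.of()
        : street.stream()
            .filter(line -> !line.isEmpty())
            .collect(toImmutableList());
  }
}
```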
* Bump flogger and beam dependency versions
Beam 2.34.0 -> 2.37.0
Flogger 0.7.3 -> 0.7.4
Intellij keeps getting confused about which version of Flogger we're
bringing in. Even though we had previously locked Flogger to 0.7.3, for
some reason it was still bringing in the Beam transitive dependency of
0.6.0, which was causing a bunch of class initialization errors.
Bumping Beam to 2.37.0 bumps the transitive dependency to 0.7.4 so we
can always use that.
* Clean up a couple of tests
1. testRun_withPrefix() in RdeUploadActionTest calls a mock lock
handler and does not actually try to read from the fake GCS
implementation, so there's no point in setting it up.
2. Remove an unused field in UploadDatastoreBackupActionTest.
* Remove static methods in back up actions
* Remove BigqueryPollJob helper class
* Add schedule time in task comparison
* Change payload type from byte[] to ByteString
* Fix a subtle issue in BRDA copy caused by Cloud Tasks
After the Cloud Tasks migration and #1508, the BRDA copy job now
routinely fails on the first try because the revision update is not
committed by the time the Cloud Tasks job enqueued in the same
transaction runs for the first time. This is because the enqueueing is
a side effect and not part of the transaction. The job eventually
succeeds because of retries.
This PR attempts to mitigate the initial failure by adding a delay to
the enqueued job, and checking the cursor in the job itself to prevent
it from running before the transaction is committed.
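A sketch of the delay half of the mitigation, using the Cloud Tasks v2 API (the one-minute delay is an illustrative value, not the actual configuration):

```java
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.util.Timestamps;
import org.joda.time.DateTime;

final class DelayedTaskExample {
  /** Schedules the task slightly in the future so the enqueuing transaction can commit. */
  static Task withCommitDelay(Task.Builder task, DateTime nowUtc) {
    return task
        .setScheduleTime(Timestamps.fromMillis(nowUtc.plusMinutes(1).getMillis()))
        .build();
  }
}
```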
* Fix issues with saving and deleting gap records
Datastore limits us to mutating up to 25 records per transaction. We
sometimes exceed that when deleting expired gap records. In addition, it is
theoretically possible for us to accumulate enough contiguous gap records to
exceed this count while replaying the original transaction.
Deal with deletion by breaking up the gap records to be deleted into a batch
size that is small enough to be deleted transactionally (in practice, we don't
much care about the transactionality but it doesn't seem like we can delete
batches without it).
Deal with the possibility of too many additions by always breaking out gap
record storage and last transaction number updates into their own
transaction(s) (separate from the replay of the original SQL transaction).
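The deletion batching reduces to partitioning the keys (BATCH_SIZE and the delete helper below are illustrative names):

```java
import com.google.common.collect.Lists;
import java.util.List;

final class GapRecordDeleter {
  // Datastore caps the number of mutations we can make in one transaction.
  private static final int BATCH_SIZE = 25;

  /** Deletes expired gap records in transactionally sized batches. */
  static void deleteExpired(List<Long> expiredGapRecordIds) {
    for (List<Long> batch : Lists.partition(expiredGapRecordIds, BATCH_SIZE)) {
      deleteInOneTransaction(batch); // stand-in for the real transactional delete
    }
  }

  private static void deleteInOneTransaction(List<Long> batch) {
    // ... run the batch delete inside a single transaction ...
  }
}
```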
These are useful for the purposes of filtering by one-time/multi-use tokens, and
for determining which one-time tokens have been used (and if so, for which
domain).
* Track and replay Transaction table gaps
Id gaps in the Transaction table can be the result of transactions committed
out of order. To deal with this, keep track of gaps for up to five minutes
and check to see if they've been back-filled prior to applying the next batch
of transactions during replay.
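A minimal sketch of the bookkeeping, with illustrative names (the real tracker keeps more detail than this):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.joda.time.DateTime;
import org.joda.time.Duration;

final class TransactionGapTracker {
  private static final Duration GAP_TTL = Duration.standardMinutes(5);

  // Maps a missing transaction id to the time the gap was first observed.
  private final Map<Long, DateTime> gaps = new HashMap<>();

  /** Records newly seen gaps and forgets any older than five minutes. */
  void recordAndExpire(Iterable<Long> missingIds, DateTime now) {
    for (Long id : missingIds) {
      gaps.putIfAbsent(id, now);
    }
    gaps.values().removeIf(firstSeen -> firstSeen.plus(GAP_TTL).isBefore(now));
  }

  /** Ids to re-query for back-fill before applying the next batch. */
  Set<Long> gapsToRecheck() {
    return gaps.keySet();
  }
}
```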
* Changes for review
* Calculate gap expiration time before gap queries
* Reformat.
* Add 3 more SQL indexes to the Host table
These indexes on creationTime, deletionTime, and currentSponsorRegistrarId are
present on the other two EPP resource tables (Domain and Contact), and are
useful for a wide variety of operations/analytics queries.
* Improve cache loading in Registries.java
The loader for the TLD cache in Registries.java unnecessarily reads from
another cache when running with SQL, potentially triggering additional
database access. This code runs in the whois query path, and contributes
to the high latency in sandbox.
The query analyzer identified a missing index on the BillingEvent table,
and I added it for recurrences and cancellations as well, as it's likely to be a
problem for them too. "Give me all the billing events associated with a given
domain by its repo ID" seems like a pretty common use case for the DB (and does
appear to be used by our invoicing pipeline).
This is a follow-up to PR #1545.
These indexes were identified as missing by PostgreSQL's query analyzer in our
sandbox environment (where we get enough realistic EPP traffic to identify these
deficiencies).
Note that a lot of the new indexes being named have to use the DB representation
of the column name because they are either embedded or subclassed entities,
whereas most of the existing ones are able to simply refer to Java field names.
This is the Java schema follow-up PR to PR #1541, which is what added the
actual DB changes through Flyway scripts.
* Reorganize new schema changes
Reorganize the new schema changes so that each Flyway script updates a
single table.
Each flyway script is executed in a single database transaction so that
the script can be rolled back in one shot. It acquires a shared lock on
all tables touched by the script. This is deadlock-prone because in a
busy database, there may be user queries that attempt to lock the same
set of tables, but in different order. By limiting each script to one
table, we avoid the problem.
We should have a presubmit check to enforce this rule.
All changes have been deployed to Sandbox out-of-band. When doing so,
we changed all CREATE INDEX statements to CREATE INDEX IF NOT EXISTS.
Future deployments should be able to proceed normally.