Commit graph

990 commits

Author SHA1 Message Date
gbrodman
40b7a23d88
Filter missing dsData digests during replay (#1439)
This is a result of bad data (we should never allow a null digest) and
we'll need to fix that separately, but this allows us to not fail on
this during replay
2021-11-30 15:37:42 -05:00
gbrodman
05e36f378b
Add NotLoggedInException tests to flows and flow docs (#1437)
* Add NotLoggedInException tests to flows and flow docs

This wasn't included in flows.md before because the test existed in
ResourceFlowTestCase. So even though the exception could be thrown and
even though this was tested, it wasn't picked up in the documentation,
because the documentation is generated from the corresponding concrete
test class.
2021-11-30 15:00:05 -05:00
Weimin Yu
a82e6a05af
Validate SQL with Datastore being Primary (#1436)
* Validate SQL with Datastore being primary

Validates the data asynchronously replicated from Datastore to SQL.
This is a short term tool optimized for the current production database.

Tested in production.
2021-11-30 12:57:49 -05:00
gbrodman
b8583bb325
Provide useful error messages on flows run during read-only mode (#1425)
We want to keep the read-only mode exception as an unchecked exception,
so we introduce a temporary check in the EppController that provides a
specific error message for this situation (rather than letting it fall
through to the generic "command failed" messaging).
2021-11-24 14:57:44 -05:00
Rachel Guan
c31c1d4013
Replace VKey.fromWebsafeKey() with VKey.create(string) (#1414)
* Replace with stringify() and VKey.create(string)

* Convert implicit cases of VKey.fromWebsafeKey(string)

* Convert from Key to VKey to use stringify()

* Modify existing code to show correct string representation of a key

* Use VKey.create(websafeKey) to get ofy key in ResaveEntitiesCommand

* Add TODO note in CommitLogMutation and determine if key string should be modified

* Revert from stringify() to getOfyKey().getString()

* Add bug ids to TODOs
2021-11-24 12:14:13 -05:00
gbrodman
4adb7d859d
Ignore read-only mode in SQL->DS replication process (#1432)
* Ignore read-only mode in SQL->DS replication process

We need to be able to save indices and save data about the replication
even when we're in read-only mode.
2021-11-24 11:51:25 -05:00
gbrodman
2d9e969f87
Remove converter for CreateAutoTimestamp (#1429)
We can handle it the same way that we handle UpdateAutoTimestamp, where
we simply populate it in SQL if it doesn't exist. This has the following
benefits:

1. The converter is unnecessary code
2. We get non-null column definitions for free (overridden in
EppResource to allow null creation times so that legacy *History objects
can contain null in that field)
3. More importantly, this allows for proper SQL->DS replay. If the
field is filled out using a converter (as before this PR) then the field
is only actually filled out on transaction commit (rather than when the
write occurs within the transaction). This means that when we serialize
the Transaction object during the transaction (the data that gets
replayed to Datastore), we are crucially missing the creation time.

If the creation time is written on commit, we have to start a new
transaction to write the Transaction object, and it's an absolute
necessity that the record of the transaction be included in the
transaction itself so as to avoid situations where the transaction
succeeds but the record fails.

If the field is filled out in a @PrePersist method, crucially that
occurs on the object write itself (before transaction commit).
2021-11-23 14:56:47 -05:00
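
A minimal sketch of the @PrePersist approach described in the commit above, assuming a generic JPA entity; the class, field names, and timestamp type are illustrative, not the actual Nomulus CreateAutoTimestamp code.

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;

// Hypothetical entity illustrating the pattern; not the real Nomulus class.
@Entity
public class ExampleResource {

  @Id private String repoId;

  // Non-null column definition "for free"; history-style entities could relax
  // this to permit legacy rows with a null creation time.
  @Column(nullable = false)
  private OffsetDateTime creationTime;

  // Runs when the entity is written inside the transaction, not at commit time,
  // so the value is already present when the transaction's writes are serialized
  // for replay to Datastore.
  @PrePersist
  void fillCreationTimeIfAbsent() {
    if (creationTime == null) {
      creationTime = OffsetDateTime.now(ZoneOffset.UTC);
    }
  }
}
```
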
Lai Jiang
65c8769c68
Refactor RDE pipeline (#1427)
The original RDE pipeline was a direct translation of the App Engine
MapReduce logic. It turned out to be too slow (taking more than a day to
run) due to the way it finds the most recent history entry.

This PR overhauled the pipeline by using embedded EPP resource entities
inside history entries (only available in SQL) and finding the most
recent entries using the SQL engine. It cuts the run time down to ~2h.

Note that there are quota limits on the CPU cores and external IP
addresses for a given GCP region inside a project, which will need to
accommodate the resource requirements for the pipeline. More details are
provided in comments.

Also merged the update cursor stage and enqueue next action stage in
RdeIO so that they can be done within a transaction, same as how
MapReduce handles them.


2021-11-23 11:29:00 -05:00
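
A hedged illustration of the kind of "most recent history entry" lookup that the refactor pushes down to the SQL engine; the table and column names are assumptions made for the sketch, not the actual Nomulus schema.

```java
/** Illustrative only: let the database rank history rows instead of scanning every revision. */
public final class LatestHistoryQuery {

  // Hypothetical native query: keep only the newest history row per domain as of
  // the RDE watermark, using a window function so the work happens in SQL.
  public static final String LATEST_HISTORY_SQL =
      "SELECT * FROM ("
          + "  SELECT h.*, ROW_NUMBER() OVER ("
          + "    PARTITION BY h.domain_repo_id"
          + "    ORDER BY h.history_modification_time DESC) AS rn"
          + "  FROM \"DomainHistory\" h"
          + "  WHERE h.history_modification_time <= :watermark) ranked "
          + "WHERE ranked.rn = 1";

  private LatestHistoryQuery() {}
}
```
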
Michael Muller
bf4b6978a7
Add "postgres" robot id to nomulus (#1433) 2021-11-22 12:35:51 -05:00
gbrodman
8393c75929
Ignore read-only mode when running commit logs / backups (#1424)
We need to be able to continue running the backup and async replay code
while the database is in read-only mode
2021-11-18 15:42:23 -05:00
sarahcaseybot
1764ae0b3f
Remove TmchCrl singleton from Datastore (#1419) 2021-11-17 14:53:29 -05:00
Rachel Guan
d76abfc23a
Change TaskQueueUtils to CloudTaskUtils in CommitLogFanoutAction (#1408)
* Change TaskOptions to Task in CommitLogFanoutAction

* Add a createTask method that takes clock and jitterSeconds

* Change CreateTask parameter type and improve test cases

* Improve comments and test cases

* Improve test cases that handle jitterSeconds
2021-11-17 10:54:42 -05:00
Ben McIlwain
6af9299a3c
Grandfather in old data for one-time billing event requirement (#1423)
* Grandfather in old data for one-time billing event requirement

We have data from 2018 and earlier where we didn't consistently set periodYears
for OneTime BillingEvents with certain reasons. This grandfathers in that old
data so that we can successfully move it over to Cloud SQL for now, then we can
later run a query that will backfill it, after which we can then tighten up the
requirement again. Note that the requirement is still being enforced for all
billing events from 2019 onwards.

This also improves the handling of validation by adding a private field to the
Reason enum rather than creating a throwaway inline ImmutableSet in the
Builder.
2021-11-16 16:12:08 -05:00
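
A rough sketch of the enum-field validation pattern mentioned above: a private flag on Reason rather than a throwaway inline ImmutableSet in the Builder. The enum values and the cutoff check are illustrative assumptions.

```java
// Hypothetical version of the Reason enum; values are examples only.
public enum Reason {
  CREATE(true),
  RENEW(true),
  TRANSFER(true),
  SERVER_STATUS(false);

  // Whether billing events with this reason are expected to carry periodYears.
  private final boolean requiresPeriodYears;

  Reason(boolean requiresPeriodYears) {
    this.requiresPeriodYears = requiresPeriodYears;
  }

  public boolean requiresPeriodYears() {
    return requiresPeriodYears;
  }
}
```

The Builder can then grandfather in the old rows by enforcing the check only for newer events, e.g. `checkArgument(!reason.requiresPeriodYears() || eventTime.getYear() < 2019 || periodYears != null, ...)` (the method names and cutoff handling here are hypothetical).
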
gbrodman
a53c127573
Release the replay lock in SQL, not Datastore (#1422)
* Release the replay lock in SQL, not Datastore

It's always acquired in SQL, so it should always be released in SQL.
2021-11-16 11:37:20 -05:00
Ben McIlwain
8dbf4fced9
Send registrars poll messages when we add/remove server-side statuses (#1417)
* Send registrars poll messages when we add/remove server-side status values
2021-11-16 11:35:05 -05:00
gbrodman
5dc6354ebc
Add backend routing for ReplicateToDatastoreAction (#1415)
Otherwise it's not visible so we can't call it
2021-11-15 16:25:10 -05:00
Lai Jiang
c84767bd07
Make Nomulus compile on macOS (#1421)
BSD sed requires a parameter to -i to indicate the backup suffix. By
adding a blank suffix the sed command works on both Linux and macOS.

2021-11-15 11:35:48 -05:00
Michael Muller
b4b318f923
Make TaskMatcher default to POST methods (#1418)
* Make TaskMatcher default to POST methods

TaskOptions.Builder.withUrl() defaults to POST methods.  Therefore, it seems
reasonable to verify that task queue methods are using the POST method,
especially given that the method must now be identified explicitly when using
CloudTaskUtils.  This check would have guarded against the bug fixed by #1413.

* Elaborate on comment

* Further improved the comment
2021-11-12 14:03:23 -05:00
Rachel Guan
52550a9251
Correct HTTP method in CommitLogCheckPointAction (#1413)
* Correct HTTP method in CommitLogCheckPointAction
2021-11-11 15:59:48 -05:00
Weimin Yu
b4468d83a9
Remove the ineffective SQL injection check (#1412)
* Remove the ineffective SQL injection check

Remove the ineffective SQL-injection attack check in go/r3pr/954. It is
quite restrictive, causing a long exemption list. It also doesn't protect
queries made through helpers such as QueryComposer etc.

We will start from scratch for a new solution.
2021-11-10 16:28:32 -05:00
Rachel Guan
4dc4daffe6
Change from TaskQueueUtils to CloudTasksUtils in PublishInvoicesAction (#1410)
* Change from TaskQueueUtils to CloudTasksUtils in PublishInvoicesAction
2021-11-10 10:13:19 -05:00
Rachel Guan
76458bb3b9
Change TaskQueueUtils to CloudTaskUtils in CommitLogCheckPointAction (#1409)
* Change TaskQueueUtils to CloudTaskUtils in CommitLogCheckPointAction
2021-11-10 10:13:14 -05:00
sarahcaseybot
2d1a67b01b
Add a parameter to prevent spec11 from sending emails (#1407) 2021-11-05 13:02:59 -04:00
Rachel Guan
01d3932122
Test vkey behaviors when in a task queue (#1406)
* Test vkey behavior in task queue
2021-11-04 21:04:18 -04:00
sarahcaseybot
2eb8bb3996
Add Cloud SQL queries for transaction reports (#1397)
* Add the Cloud SQL queries for transaction reports

* Add the remaining queries

* Some query fixes

* Fix comments

* Fix indentation in total_nameservers

* Fix indentation on other Case condition
2021-11-03 11:25:31 -04:00
Rachel Guan
2218663d55
Add VKey to String and String to VKey methods (#1396)
* Add stringify and parse methods to SerializeUtils

* Improve comments and test cases

* Fix comments and test strings

* Fix dependency warning
2021-11-02 13:25:35 -04:00
gbrodman
e0dc2e43bb
Pass the ICANN reporting BQ dataset to the DNS query coordinator (#1405) 2021-11-02 13:24:04 -04:00
Weimin Yu
7fedd40739
Fix InitSqlPipeline regarding synthesized history (#1404)
* Fix InitSqlPipeline regarding synthesized history

There are a few bad domains in Datastore that we hardcoded to ignore
during SQL population. They didn't have history so we didn't try to
filter when writing history.

Recently we created synthesized history for domains, including the bad
domains. Now we need to filter History entries.
2021-11-02 11:12:57 -04:00
Weimin Yu
f793ca5b68
Support shared database snapshot (#1403)
* Support shared database snapshot

Allow multiple workers to share a CONSISTENT database snapshot. The
motivating use case is SQL database snapshot loading, where it is too
slow to depend on one worker to load everything.

This currently is postgresql-specific, but will be improved to be
vendor-independent.

Also made sure AppEngineEnvironment.java clears the cached environment
in all cases when tearing down.
2021-11-01 13:01:37 -04:00
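
PostgreSQL's exported snapshots are one way to let multiple connections read from the same CONSISTENT point in time, which matches the motivation described above; this JDBC sketch shows the mechanism only and is not the actual Nomulus implementation (the connection URL and queries are placeholders).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class SharedSnapshotSketch {

  public static void main(String[] args) throws Exception {
    // Coordinator: open a REPEATABLE READ transaction and export its snapshot id.
    try (Connection coordinator =
        DriverManager.getConnection("jdbc:postgresql://localhost/registry")) {
      coordinator.setAutoCommit(false);
      try (Statement stmt = coordinator.createStatement()) {
        stmt.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
        ResultSet rs = stmt.executeQuery("SELECT pg_export_snapshot()");
        rs.next();
        String snapshotId = rs.getString(1);

        // Worker: attach to the exported snapshot before reading its shard, so
        // every worker sees exactly the same database state.
        try (Connection worker =
                DriverManager.getConnection("jdbc:postgresql://localhost/registry");
            Statement workerStmt = worker.createStatement()) {
          worker.setAutoCommit(false);
          workerStmt.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
          workerStmt.execute("SET TRANSACTION SNAPSHOT '" + snapshotId + "'");
          // ... run the worker's bulk-load queries here ...
          worker.rollback();
        }
      }
      // The exported snapshot stays valid only while this transaction is open.
      coordinator.rollback();
    }
  }
}
```
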
gbrodman
395ed19601
Canonicalize domain/host names in async DS->SQL replay (#1350) 2021-11-01 12:08:20 -04:00
Rachel Guan
77bc072aac
Add domain pa notification response to first delete domain poll message (#1400)
* Add domain pa notification response to first delete domain poll message

* Add test case for poll message

* Change time in response data to now
2021-10-28 15:45:50 -04:00
Weimin Yu
93a479837f
Make entities serializable for DB validation (#1401)
* Make entities serializable for DB validation

Make entities that are asynchronously replicated between Datastore and
Cloud SQL serializable so that they may be used in BEAM pipeline based
comparison tool.

Introduced an UnsafeSerializable interface (extending Serializable) and
added to relevant classes. Implementing classes are allowed some
shortcuts as explained in the interface's Javadoc. Post migration we
will decide whether to revert this change or properly implement
serialization.

Verified with production data.
2021-10-28 12:19:09 -04:00
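
A minimal sketch of what a marker interface along the lines of the UnsafeSerializable described above could look like; the Javadoc caveats are paraphrased assumptions, not the actual interface contract.

```java
import java.io.Serializable;

/**
 * Marker for entities that may be serialized so a BEAM pipeline can compare the
 * Datastore and Cloud SQL copies. Implementers are allowed documented shortcuts
 * (for example, only the fields needed for comparison have to survive the round
 * trip), so this is "unsafe" for general-purpose serialization.
 */
public interface UnsafeSerializable extends Serializable {}
```
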
gbrodman
1e7aae26a3
Create a mechanism for storing / using locks explicitly only in SQL (#1392)
This is used for the replay locks so that Beam pipelines (which will be
used for database comparison) can acquire / release locks as necessary
to avoid database contention. If we're comparing contents of Datastore
and SQL databases, we shouldn't have replay actively running during the
comparison, so the pipeline will grab the locks.

Beam doesn't always play nicely with loading from / saving to Datastore,
so we need to make sure that we store the replay locks in SQL at all
times, even when Datastore is the primary DB.
2021-10-27 16:20:35 -04:00
Michael Muller
201b6e8e0b
Re-enable replay tests for most environments (#1399)
* Re-enable replay tests for most environments

This enables the replay tests except in environments where
the NOMULUS_DISABLE_REPLAY_TESTS environment variable is set to "true".

* Add a check for null
2021-10-25 12:11:02 -04:00
Rachel Guan
43074ea32f
Send expiring notification emails to admins if no tech emails are on file (#1387)
* Send emails to admin if tech emails are not present

* Improve test cases and comments
2021-10-21 12:59:31 -04:00
Weimin Yu
1a4a31569e
Alt entity model for fast JPA bulk query (#1398)
* Alt entity model for fast JPA bulk query

Defined an alternative JPA entity model that allows fast bulk loading of
multi-level entities, DomainBase and DomainHistory. The idea is to bulk-load
the base table and the child tables separately, and assemble them into the
target entities in memory in a pipeline.

For DomainBase:

- Defined a DomainBaseLite class that models the "Domain" table only.

- Defined a DomainHost class that models the "DomainHost" table
  (nsHosts field).

- Exposed ID fields in GracePeriod so that they can be mapped to domains
  after being loaded into memory.

For DomainHistory:

- Defined a DomainHistoryLite class that models the "DomainHistory"
  table only.

- Defined a DomainHistoryHost class that models its namesake table.

- Exposed ID fields in GracePeriodHistory and DomainDsDataHistory
  classes so that they can be mapped to DomainHistory after being
  loaded into memory.

In PersistenceModule, provisioned a JpaTransactionManager that uses
the alternative entity model.

Also added a pipeline option that specifies which JpaTransactionManager
to use in a pipeline.
2021-10-20 16:48:56 -04:00
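
An illustrative sketch of the bulk-load-and-assemble idea: load the parent rows (e.g. DomainBaseLite) and child rows (e.g. DomainHost) with two flat queries, then join them in memory on the exposed ID fields. The generic helper below is an assumption about the approach, not the pipeline's actual code.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.stream.Collectors;

public final class BulkAssembler {

  /**
   * Joins bulk-loaded parent rows with separately bulk-loaded child rows in
   * memory, keyed on the exposed parent ID, instead of letting JPA fetch the
   * children row by row for each parent.
   */
  public static <P, C, ID, R> List<R> assemble(
      List<P> parents,
      List<C> children,
      Function<P, ID> parentIdFn,
      Function<C, ID> childParentIdFn,
      BiFunction<P, List<C>, R> combiner) {
    Map<ID, List<C>> childrenByParent =
        children.stream().collect(Collectors.groupingBy(childParentIdFn));
    return parents.stream()
        .map(p -> combiner.apply(p, childrenByParent.getOrDefault(parentIdFn.apply(p), List.of())))
        .collect(Collectors.toList());
  }

  private BulkAssembler() {}
}
```
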
gbrodman
c7f50dae92
Use READ_COMMITTED serialization level in CreateSyntheticHEA (#1395)
I observed an instance in which a couple queries from this action were,
for whatever reason, hanging around as idle for >30 minutes. Assuming
the behavior that we saw before where "an open idle serializable
transaction means all pg read-locks stick around forever" still holds,
that's why the number of read-locks in use spirals out of
control.

I'm not sure why those queries aren't timing out, but that's a separate
issue.
2021-10-19 11:36:15 -04:00
gbrodman
969fa2b68c
Fix weird flake (#1394) 2021-10-15 18:00:46 -04:00
gbrodman
9a569198fb
Ignore class visibility in EntityTest (#1389) 2021-10-15 17:08:51 -04:00
gbrodman
8a53edd57b
Use multiple transactions in IcannReportingUploadAction (#1386)
Relevant error log message: https://pantheon.corp.google.com/logs/viewer?project=domain-registry&minLogLevel=0&expandAll=false&timestamp=2021-10-11T15:28:01.047783000Z&customFacets=&limitCustomFacetWidth=true&dateRangeEnd=2021-10-11T20:51:40.591Z&interval=PT1H&resource=gae_app&logName=projects%2Fdomain-registry%2Flogs%2Fappengine.googleapis.com%252Frequest_log&scrollTimestamp=2021-10-11T15:10:23.174336000Z&filters=text:icannReportingUpload&dateRangeUnbound=backwardInTime&advancedFilter=resource.type%3D%22gae_app%22%0AlogName%3D%22projects%2Fdomain-registry%2Flogs%2Fappengine.googleapis.com%252Frequest_log%22%0A%22icannReportingUpload%22%0Aoperation.id%3D%22616453df00ff02a873d26cedb40001737e646f6d61696e2d726567697374727900016261636b656e643a6e6f6d756c75732d76303233000100%22

note the "invalid handle" bit

From https://cloud.google.com/datastore/docs/concepts/transactions:
"Transactions expire after 270 seconds or if idle for 60 seconds."

From b/202309933: "There is a 60 second timeout on Datastore operations
after which they will automatically rollback and the handles become
invalid."

From the logs we can see that the action is lasting significantly longer
than 270 seconds -- roughly 480 seconds in the linked log (more or
less). My running theory is that ICANN is, for some reason, now being
significantly more slow to respond than they used to be. Some uploads in
the log linked above are taking upwards of 10 seconds, especially when
they have to retry. Because we have >=45 TLDs, it's not surprising that
the action is taking >400 seconds to run.

The fix here is to perform each per-TLD operation in its own
transaction. The only reason why we need the transactions is for the
cursors anyway, and we can just grab and store those at the beginning of
the transaction.
2021-10-15 15:38:37 -04:00
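
An illustrative sketch of the per-TLD transaction shape described above, with a hypothetical stand-in for the transaction helper and placeholder cursor/upload methods; this is not the actual IcannReportingUploadAction code.

```java
import java.util.List;

public final class PerTldUploadSketch {

  /** Hypothetical stand-in for a transaction helper like Nomulus's tm().transact(). */
  interface TransactionRunner {
    void transact(Runnable work);
  }

  /**
   * Each TLD's cursor read and cursor update happen in their own short
   * transaction, so no single Datastore transaction has to outlive the
   * 60-second idle / 270-second total limits while slow ICANN uploads run.
   */
  static void uploadReports(TransactionRunner tm, List<String> tlds) {
    for (String tld : tlds) {
      tm.transact(
          () -> {
            String cursor = loadCursor(tld); // placeholder: grab the cursor up front
            uploadToIcann(tld, cursor); // placeholder for the slow ICANN upload
            saveCursor(tld); // placeholder: advance the cursor
          });
    }
  }

  private static String loadCursor(String tld) {
    return "";
  }

  private static void uploadToIcann(String tld, String cursor) {}

  private static void saveCursor(String tld) {}

  private PerTldUploadSketch() {}
}
```
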
Lai Jiang
d25d4073f5
Add a beam pipeline to create synthetic history entries in SQL (#1383)
* Add a beam pipeline to create synthetic history entries in SQL

The logic is mostly lifted from CreateSyntheticHistoryEntriesAction. We
do not need to test for the existence of an embedded EPP resource in the
history entry before creating a synthetic one, because after
InitSqlPipeline runs it is guaranteed that no embedded resource exists.
2021-10-15 14:51:01 -04:00
Ben McIlwain
6ffe84e93d
Add a scrap command to hard-delete a host resource (#1391) 2021-10-15 12:28:18 -04:00
Rachel Guan
bb8988ee4e
Set payload in success response after sending notification emails (#1377)
* Set payload in success response after sending expiring certificate notification emails

* Modify log message and test cases for run() in sendExpiringCertificateNotificationEmailAction
2021-10-13 15:58:25 -04:00
Rachel Guan
2aff72b3b6
Add reason and requestedByRegistrar to domain renew flow (#1378)
* Resolve merge conflict

* Include reason and requestedByRegistrar in URS test file

* Modify test cases for new parameters in renew flow

* Add reason and registrar_request to renew domain command

* Update comments for new params in renew flow

* Make changes based on feedback
2021-10-13 11:41:02 -04:00
Weimin Yu
35fd61f771
Update parameter to Datastore wipe pipeline (#1385)
* Update parameter to Datastore wipe pipeline

Add the newly required RegistryEnvironment parameter to
BulkDeleteDatastorePipeline.

Remove the nullable annotation for this parameter in options
class.

Update metadata files regarding this parameter.
2021-10-11 17:31:50 -04:00
Michael Muller
13cb17e9a4
Implement several fixes affecting test flakiness (#1379)
* Implement several fixes affecting test flakiness

- Continued to do transaction manager cleanups on replay failure (lack of this
  may be causing cascading failures).
- Fix UpdateDomainCommandTest's output check (the test was checking for error
  output in standard error, but the command writes its output to the logs.
  Apparently, these may or may not be reflected in standard error depending on
  current global state)
- Remove unnecessary locking and incorrect comment in CommandTestCase.  The
  JUnit tests are not run in parallel in the same JVM and, in general, there
  are much bigger obstacles to this than standard output stream locking.

* Fix bad log message check
2021-10-11 12:54:03 -04:00
Ben McIlwain
4f1c317bbc
Revert update auto timestamp non-transactional fallback (#1380)
This was added recently in PR #1341 as an attempted fix for our test flakiness,
but it turns out that it didn't address the root issue (whereas PR #1361
did). So this removes the fallback, as there's no reason this should ever be
called outside of a transactional context.
2021-10-08 16:44:45 -04:00
gbrodman
c8aa32ef05
Include more info in host/domain name failures (#1346)
We're seeing some of these in CreateSyntheticHistoryEntriesAction and I
can't tell why from the logs (it doesn't appear to print the repo ID or
domain/host name)
2021-10-08 15:17:22 -04:00
gbrodman
95a1bbf66a
Temporarily disable SQL->DS replay in all tests (#1363) 2021-10-08 14:15:57 -04:00
Rachel Guan
23aa16469e
Add WipeOutContactHistoryPiiAction to prod (#1356) 2021-10-08 11:46:26 -04:00