* Change TaskOptions to Task in CommitLogFanoutAction
* Add a createTask method that takes clock and jitterSeconds
* Change CreateTask parameter type and improve test cases
* Improve comments and test cases
* Improve test cases that handle jitterSeconds
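For context, a minimal sketch of what a createTask overload with clock-based jitter could look like, assuming the Cloud Tasks v2 `Task` type and a `java.time.Clock`; the actual names and signatures in CloudTasksUtils may differ.

```java
import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.Timestamp;
import java.time.Clock;
import java.util.Optional;
import java.util.Random;

/** Sketch of a task factory whose createTask applies a random scheduling delay. */
public final class TaskFactorySketch {

  private static final Random random = new Random();

  /** Delays the task's schedule time by a random number of seconds in [0, jitterSeconds). */
  public static Task createTask(
      String path, HttpMethod method, Clock clock, Optional<Integer> jitterSeconds) {
    Task.Builder task =
        Task.newBuilder()
            .setAppEngineHttpRequest(
                AppEngineHttpRequest.newBuilder()
                    .setRelativeUri(path)
                    .setHttpMethod(method)
                    .build());
    if (jitterSeconds.orElse(0) > 0) {
      long scheduleMillis = clock.millis() + random.nextInt(jitterSeconds.get()) * 1000L;
      task.setScheduleTime(
          Timestamp.newBuilder()
              .setSeconds(scheduleMillis / 1000)
              .setNanos((int) ((scheduleMillis % 1000) * 1_000_000)));
    }
    return task.build();
  }
}
```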
* Grandfather in old data for one-time billing event requirement
We have data from 2018 and earlier where we didn't consistently set periodYears
for OneTime BillingEvents with certain reasons. This grandfathers in that old
data so that we can successfully move it over to Cloud SQL for now, then we can
later run a query that will backfill it, after which we can then tighten up the
requirement again. Note that the requirement is still being enforced for all
billing events from 2019 onwards.
This also improves the handling of validation, by adding a private field to the
Reason enum rather than creating a throwaway inline ImmutableSet in the
Builder.
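A minimal sketch of the enum-field validation pattern described above, with illustrative reason names and cutoff date; the real Reason enum and Builder check differ in detail.

```java
import static com.google.common.base.Preconditions.checkState;

import org.joda.time.DateTime;

/** Sketch of the enum-field validation approach (reason names are illustrative). */
public class BillingValidationSketch {

  /** Cutoff before which a missing periodYears is grandfathered in. */
  private static final DateTime PERIOD_YEARS_CUTOFF = DateTime.parse("2019-01-01T00:00:00Z");

  enum Reason {
    CREATE(true),
    RENEW(true),
    SERVER_STATUS(false);

    private final boolean requiresPeriod;

    Reason(boolean requiresPeriod) {
      this.requiresPeriod = requiresPeriod;
    }

    boolean hasPeriodYears() {
      return requiresPeriod;
    }
  }

  /** Would be called from the OneTime Builder instead of checking an inline ImmutableSet. */
  static void validatePeriodYears(Reason reason, Integer periodYears, DateTime eventTime) {
    checkState(
        periodYears != null
            || !reason.hasPeriodYears()
            || eventTime.isBefore(PERIOD_YEARS_CUTOFF),
        "periodYears must be set for reason %s on billing events from 2019 onwards",
        reason);
  }
}
```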
BSD sed requires a parameter to -i to indicate the backup suffix. By
adding a blank suffix the sed command works on both Linux and macOS.
* Make TaskMatcher default to POST methods
TaskOptions.Builder.withUrl() defaults to POST methods. Therefore, it seems
reasonable to verify that task queue methods are using the POST method,
especially given that the method must now be identified explicitly when using
CloudTaskUtils. This check would have guarded against the bug fixed by #1413.
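As a simplified, hypothetical sketch of what defaulting the expected method to POST looks like in such a matcher (the real TaskMatcher has a much richer API):

```java
/** Simplified sketch of a task matcher that assumes POST unless overridden. */
public class TaskMatcherSketch {

  private String method = "POST"; // default, mirroring TaskOptions.Builder.withUrl()
  private String url;

  public TaskMatcherSketch url(String url) {
    this.url = url;
    return this;
  }

  public TaskMatcherSketch method(String method) {
    this.method = method;
    return this;
  }

  /** Matches a queued task represented here simply as (url, method). */
  public boolean matches(String taskUrl, String taskMethod) {
    return (url == null || url.equals(taskUrl)) && method.equals(taskMethod);
  }
}
```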
* Elaborate on comment
* Further improved the comment
* Remove the ineffective SQL injection check
Remove the ineffective SQL-injection attack check in go/r3pr/954. It is
quite restrictive, causing a long exempt list. It also doesn't protect
queries made through helpers such as QueryComposer.
We will start from scratch for a new solution.
* Add the Cloud SQL queries for transaction reports
* Add the remaining queries
* Some query fixes
* Fix comments
* Fix indentation in total_nameservers
* Fix indentation on other Case condition
* Fix InitSqlPipeline regarding synthesized history
There are a few bad domains in Datastore that we hardcoded the pipeline
to ignore during SQL population. They had no history entries, so we did
not filter them when writing history.
Recently we created synthesized history for domains, including the bad
ones, so now we need to filter History entries as well.
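Roughly, the added filtering follows the usual Beam pattern; here HistoryRow and its accessor are stand-ins for the real InitSqlPipeline types:

```java
import com.google.common.collect.ImmutableSet;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.values.PCollection;

/** HistoryRow is a stand-in for a loaded history entry that knows its domain repo ID. */
final class BadDomainFilterSketch {

  static PCollection<HistoryRow> filterIgnoredDomains(
      PCollection<HistoryRow> histories, ImmutableSet<String> ignoredRepoIds) {
    return histories.apply(
        "Filter histories of hardcoded bad domains",
        Filter.by(history -> !ignoredRepoIds.contains(history.getDomainRepoId())));
  }
}
```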
* Support shared database snapshot
Allow multiple workers to share a CONSISTENT database snapshot. The
motivating use case is SQL database snapshot loading, where it is too
slow to depend on one worker to load everything.
This is currently PostgreSQL-specific, but will be improved to be
vendor-independent.
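PostgreSQL exposes this through pg_export_snapshot() and SET TRANSACTION SNAPSHOT; a bare JDBC sketch of that mechanism, which the persistence-layer support presumably wraps, looks like this:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/** JDBC sketch of PostgreSQL snapshot sharing between connections. */
final class SharedSnapshotSketch {

  /** On the coordinating connection: open a transaction and export its snapshot ID. */
  static String exportSnapshot(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
      stmt.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
      try (ResultSet rs = stmt.executeQuery("SELECT pg_export_snapshot()")) {
        rs.next();
        // The ID stays valid only while this exporting transaction remains open.
        return rs.getString(1);
      }
    }
  }

  /** On each worker connection: adopt the exported snapshot before running any query. */
  static void useSnapshot(Connection conn, String snapshotId) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement stmt = conn.createStatement()) {
      stmt.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ");
      stmt.execute("SET TRANSACTION SNAPSHOT '" + snapshotId + "'");
    }
  }
}
```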
Also made sure AppEngineEnvironment.java clears the cached environment
in all cases when tearing down.
* Update terraform files and instructions
Update proxy terraform files based on current best practices and allow
exclusion of forwarding rules for HTTP endpoints. Specifically:
- Add a "public_web_whois" input to allow disabling the public HTTP
whois forwarding.
- Add "description" fields to all variables.
- Move outputs of the top-level module into "outputs.tf".
- Auto-reformat using hclfmt.
* Make entities serializable for DB validation
Make entities that are asynchronously replicated between Datastore and
Cloud SQL serializable so that they may be used in a BEAM-pipeline-based
comparison tool.
Introduced an UnsafeSerializable interface (extending Serializable) and
added to relevant classes. Implementing classes are allowed some
shortcuts as explained in the interface's Javadoc. Post migration we
will decide whether to revert this change or properly implement
serialization.
Verified with production data.
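A sketch of what such a marker interface might look like (the real Javadoc spells out the allowed shortcuts in more detail):

```java
import java.io.Serializable;

/**
 * Sketch of a marker interface for entities whose serialized form is only used for
 * in-flight, same-release transport (e.g. between BEAM workers), never for storage.
 *
 * <p>Implementers may take shortcuts such as skipping serialVersionUID and serializing
 * internal state as-is; the serialized bytes must not be persisted or exchanged across
 * releases.
 */
public interface UnsafeSerializable extends Serializable {}
```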
This is used for the replay locks so that Beam pipelines (which will be
used for database comparison) can acquire / release locks as necessary
to avoid database contention. If we're comparing contents of Datastore
and SQL databases, we shouldn't have replay actively running during the
comparison, so the pipeline will grab the locks.
Beam doesn't always play nicely with loading from / saving to Datastore,
so we need to make sure that we store the replay locks in SQL at all
times, even when Datastore is the primary DB.
* Re-enable replay tests for most environments
This enables the replay tests except in environments where
the NOMULUS_DISABLE_REPLAY_TESTS environment variable is set to "true".
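With stock JUnit 5 this kind of gating can be expressed with a condition annotation; the actual wiring in the test suite may differ:

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.DisabledIfEnvironmentVariable;

/** Skipped wherever NOMULUS_DISABLE_REPLAY_TESTS is set to "true". */
@DisabledIfEnvironmentVariable(named = "NOMULUS_DISABLE_REPLAY_TESTS", matches = "true")
class ReplayExampleTest {

  @Test
  void replaysCommitLogs() {
    // ... test body ...
  }
}
```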
* Add a check for null
* Alt entity model for fast JPA bulk query
Defined an alternative JPA entity model that allows fast bulk loading of
multi-level entities, DomainBase and DomainHistory. The idea is to bulk
load the base table as well as the child tables separately, and assemble
them into the target entity in memory in a pipeline (see the sketch below).
For DomainBase:
- Defined a DomainBaseLite class that models the "Domain" table only.
- Defined a DomainHost class that models the "DomainHost" table
(nsHosts field).
- Exposed ID fields in GracePeriod so that they can be mapped to domains
after being loaded into memory.
For DomainHistory:
- Defined a DomainHistoryLite class that models the "DomainHistory"
table only.
- Defined a DomainHistoryHost class that models its namesake table.
- Exposed ID fields in GracePeriodHistory and DomainDsDataHistory
classes so that they can be mapped to DomainHistory after being
loaded into memory.
In PersistenceModule, provisioned a JpaTransactionManager that uses
the alternative entity model.
Also added a pipeline option that specifies which JpaTransactionManager
to use in a pipeline.
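To illustrate the assembly idea with heavily abbreviated field lists (the real DomainBaseLite/DomainHost classes map many more columns and composite keys, and assembly happens inside the pipeline):

```java
import com.google.common.collect.ImmutableListMultimap;
import com.google.common.collect.Multimaps;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Table;

/** Illustrative sketch of flat per-table loads followed by in-memory assembly. */
public final class BulkLoadSketch {

  @Entity
  @Table(name = "Domain")
  static class DomainBaseLite {
    @Id String repoId;
    String domainName;
    // ... other columns mapped directly on the base table ...
  }

  @Entity
  @Table(name = "DomainHost")
  static class DomainHost {
    @Id String domainRepoId; // simplified; the real mapping uses a composite key
    String hostRepoId;
  }

  /** Loads each table with a flat query, then joins the rows in memory by repo ID. */
  static void bulkLoad(EntityManager em) {
    List<DomainBaseLite> domains =
        em.createQuery("SELECT d FROM DomainBaseLite d", DomainBaseLite.class).getResultList();
    List<DomainHost> hosts =
        em.createQuery("SELECT h FROM DomainHost h", DomainHost.class).getResultList();

    ImmutableListMultimap<String, DomainHost> hostsByDomain =
        Multimaps.index(hosts, host -> host.domainRepoId);

    for (DomainBaseLite domain : domains) {
      assembleDomain(domain, hostsByDomain.get(domain.repoId));
    }
  }

  static void assembleDomain(DomainBaseLite base, List<DomainHost> nsHosts) {
    // Placeholder; in the pipeline this stitches the child rows back into the target entity.
  }
}
```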
I observed an instance in which a couple queries from this action were,
for whatever reason, hanging around as idle for >30 minutes. Assuming
the behavior that we saw before where "an open idle serializable
transaction means all pg read-locks stick around forever" still holds,
that's the reason why the number of read locks in use spirals out of
control.
I'm not sure why those queries aren't timing out, but that's a separate
issue.
* Fix problems with the format tasks
The format check uses Python 2, and if "python" doesn't exist on the path
(or isn't Python 2, or there is any other error in the Python code or in the
shell script...) the format check silently succeeds.
This change:
- Refactors out the Gradle code that finds a python3 executable and uses it
to get the Python executable used for the format check.
- Upgrades google-java-format-diff.py to python3 and removes #! line.
- Fixes shell script to ensure that failures are propagated.
- Suppresses error output when checking for python commands.
Tested:
- verified that python errors cause the build to fail
- verified that introducing a bad format diff causes check to fail
- verified that javaIncrementalFormatDryRun shows the diffs that would be
introduced.
- verified that javaIncrementalFormatApply reformats a file.
- verified that well formatted code passes the format check.
- verified that an invalid or missing PYTHON env var causes
google-java-format-git-diff.sh to fail with the appropriate error.
* Fix presubmit issues
Omit the format presubmit when not in a git repo and remove unused "string"
import.
* Add a beam pipeline to create synthetic history entries in SQL
The logic is mostly lifted from CreateSyntheticHistoryEntriesAction. We
do not need to test for the existence of an embedded EPP resource in the
history entry before creating a synthetic one because after
InitSqlPipeline runs it is guaranteed that no embedded resource exists.
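The shape of such a pipeline step, with hypothetical helper names; the real pipeline works with the Nomulus EppResource/HistoryEntry types and saves through the SQL transaction manager:

```java
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

/** Sketch only; saveSyntheticHistory stands in for the transactional load-and-save. */
final class SyntheticHistorySketch {

  static PCollection<Void> createSyntheticHistories(PCollection<String> domainRepoIds) {
    return domainRepoIds.apply(
        "Create synthetic DomainHistory per domain",
        ParDo.of(
            new DoFn<String, Void>() {
              @ProcessElement
              public void processElement(@Element String repoId) {
                // Unlike the GAE action, no "does an embedded resource already exist?"
                // check is needed, since InitSqlPipeline guarantees there is none.
                saveSyntheticHistory(repoId);
              }
            }));
  }

  static void saveSyntheticHistory(String repoId) {
    // Placeholder: load the current resource and save a synthetic history wrapping it.
  }
}
```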
* Set payload in success response after sending expiring certificate notification emails
* Modify log message and test cases for run() in SendExpiringCertificateNotificationEmailAction
* Resolve merge conflict
* Include reason and requestedByRegistrar in URS test file
* Modify test cases for new parameters in renew flow
* Add reason and registrar_request to renew domain command
* Update comments for new params in renew flow
* Make changes based on feedback
* Update parameter to Datastore wipe pipeline
Add the newly required RegistryEnvironment parameter to
BulkDeleteDatastorePipeline.
Remove the nullable annotation for this parameter in the options
class.
Update metadata files regarding this parameter.
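A sketch of how a required pipeline option of this kind is typically declared in Beam (the real options interface and the RegistryEnvironment type differ):

```java
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.Validation;

/** Sketch of the option declaration; the real options interface has more settings. */
public interface BulkDeleteOptionsSketch extends PipelineOptions {

  @Description("The Registry environment whose Datastore contents are to be wiped.")
  @Validation.Required // previously nullable; now required
  String getRegistryEnvironment();

  void setRegistryEnvironment(String value);
}
```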
* Implement several fixes affecting test flakiness
- Continued to do transaction manager cleanups on replay failure (lack of this
may be causing cascading failures).
- Fix UpdateDomainCommandTest's output check (the test was checking for error
output in standard error, but the command writes its output to the logs.
Apparently, these may or may not be reflected in standard error depending on
current global state)
- Remove unnecessary locking and incorrect comment in CommandTestCase. The
JUnit tests are not run in parallel in the same JVM and, in general, there
are much bigger obstacles to this than standard output stream locking.
* Fix bad log message check
This was added recently in PR #1341 as an attempted fix for our test flakiness,
but it turns out that it didn't address the root issue (whereas PR #1361
did). So this removes the fallback, as there's no reason this should ever be
called outside of a transactional context.
We're seeing some of these in CreateSyntheticHistoryEntriesAction and I
can't tell why from the logs (it doesn't appear to print the repo ID or
domain/host name)
* Add TmOverrideExtension for more safe TM overrides in tests
This is safer than calling setTmForTest() directly because the extension
also handles the corresponding call to removeTmOverrideForTest() automatically;
forgetting that call has been a source of test flakiness/instability in the
past.
There are now broadly two ways to get tests to run in JPA: either use
DualDatabaseTest, an AppEngineExtension, and the corresponding JPA-specific
@Test annotations, OR use this override alongside a
JpaTransactionManagerExtension.
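A sketch of the extension's shape as a JUnit 5 callback pair; the factory method names come from the description above, and the rest is illustrative:

```java
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

/**
 * Sketch: the override is installed before each test and is guaranteed to be removed
 * afterwards, so a forgotten cleanup can no longer leak into other tests.
 */
final class TmOverrideExtensionSketch implements BeforeEachCallback, AfterEachCallback {

  @Override
  public void beforeEach(ExtensionContext context) {
    // e.g. TransactionManagerFactory.setTmForTest(...);
  }

  @Override
  public void afterEach(ExtensionContext context) {
    // e.g. TransactionManagerFactory.removeTmOverrideForTest();
  }
}
```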