* Restore a fix for flaky test
Restore a speculative fix for the flakiness in
DeleteExpiredDomainsActionTest. Although we identified and fixed a bug
in a previous commit, it may not be the only one, so the removed fix may
still be necessary.
* Combine the two Lock classes into one class
This allows us to remove the DAO and to just treat locks the same as we
would treat any other object -- generically grabbing them from the
transaction manager.
We do not need to be concerned about the changeover between Datastore
and SQL because we assume that any such changeover will require
sufficient downtime that any currently-valid acquired locks will expire
during the downtime. Otherwise, we could get into a situation where an
action has acquired a particular lock in Datastore but not SQL.
* Fix timestamp inversion bug
Set the number of commitLog buckets to 1 in CommitLog replay tests to
expose all timestamp inversion problems due to replay. Fixed
PollAckFlowTest, which is related to this problem.
Also fixed a few tests that failed to advance the fake clock when they
should, using the following approaches:
- If DatabaseHelper is used but the clock is not injected, inject it.
This allows us to remove some unnecessary manual clock advances.
- Manually advance the clock where convenient.
- Enable the clock's autoIncrement mode when calling production classes
that perform multiple transactions.
We should consider making 1-bucket the default setting for tests. This
is left to another PR.
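The auto-increment behavior can be sketched with a toy clock (illustrative only; Nomulus's real FakeClock differs in detail): once enabled, every read advances time by a fixed step, so production code that performs multiple transactions sees strictly increasing timestamps without manual advances.

```java
import java.time.Duration;
import java.time.Instant;

/** Minimal sketch of a test clock with an auto-increment mode. */
class FakeClock {
  private Instant now;
  private Duration autoIncrement = Duration.ZERO;

  FakeClock(Instant start) {
    this.now = start;
  }

  /** Every subsequent read advances the clock by this step. */
  void setAutoIncrement(Duration step) {
    this.autoIncrement = step;
  }

  /** Manual advancement, for tests that control time explicitly. */
  void advanceBy(Duration d) {
    now = now.plus(d);
  }

  Instant nowUtc() {
    Instant result = now;
    now = now.plus(autoIncrement); // no-op when autoIncrement is zero
    return result;
  }
}
```

With auto-increment enabled, two transactions that each read the clock can never observe the same timestamp, which is exactly the inversion the replay tests are trying to expose.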
* Add an auditedOfy marker method for allow-listed ofy() calls
This will allow us to make sure that every usage of ofy() has been
hand-examined and specifically allowed.
In SQL the contact of a domain is an indexed field and therefore we can
find linked domains synchronously, without the need for MapReduce.
The delete logic is mostly lifted from DeleteContactsAndHostsAction, but
because everything happens in a transaction we do not need to recheck
many of the preconditions that were necessary to ensure that the async
delete request still met the conditions that held when the request was
enqueued.
* Populate the contact in ContactHistory objects created in Contact flows
Minimal interesting changes here
- a bit of reconstruction in ContactHistory to get the repo ID from the
key
- making the History revision ID Long instead of long so that it can be
null in non-built intermediate entities
- adding a copyFrom(HistoryEntry.Builder) method in HistoryEntry.Builder
so that we don't need to allocate quite as many unnecessary IDs, i.e.
removing the .build() lines in provideContactHistory and
provideDomainHistory
* Add a BEAM read connector for JPA entities
Added a Read connector to load JPA entities from Cloud SQL.
Also attempted a fix to the null threadfactory problem.
This also forces the karma test to use the Gradle-installed version of
node instead of the global version. The global version installed on the
Kokoro machines is too old to function with some of the newer libraries.
Unfortunately, much of the time there's a bit of a circular dependency
in the object creation, e.g. the Domain object stores references to the
billing events which store references to the history object which
contains the Domain object. As a result, we allocate the history
object's ID before creating it, so that it can be referenced by the
other objects that need it, e.g. billing events.
In addition, we add a utility copyFrom method in HistoryEntry.Builder to
avoid unnecessary ID allocations.
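The pre-allocation described above can be sketched as follows (all names here are illustrative stand-ins, not Nomulus's actual entities or ID allocator): the ID is obtained first, so dependent objects can embed it before the history object itself is built.

```java
/** Sketch of pre-allocating an ID to break a creation-order cycle. */
class HistoryIdDemo {
  private static long nextId = 1;

  /** Stand-in for the real ID allocator. */
  static long allocateId() {
    return nextId++;
  }

  record BillingEvent(long historyRevisionId) {}

  record HistoryEntry(long revisionId, BillingEvent billingEvent) {}

  static HistoryEntry createWithBilling() {
    // Allocate the revision ID up front...
    long revisionId = allocateId();
    // ...so the billing event can reference the history entry before it exists...
    BillingEvent event = new BillingEvent(revisionId);
    // ...and the history entry is built last, containing the dependent object.
    return new HistoryEntry(revisionId, event);
  }
}
```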
* Remove unnecessary MockitoExtension from Spec11PipelineTest
This is kind of a shot in the dark, but it's one of the obvious
differences between this test class (which frequently experiences flakes) and
the other pipeline test classes, which do not.
It's also possible we were getting the wrong runner if the test framework was
incorrectly detecting an App Engine runtime environment, so I added an assert
that will make it very clear if this is the cause of any failures.
* Handle bad production data when migrating to SQL
Ignore or fix bad entities when populating SQL with production data from
Datastore. These are mostly inconsistent foreign keys.
See b/185954992 for details.
In tests we use a TestPipelineExtension, which does some static
initialization that should not be repeated in the same JVM. In our
XXXPipeline classes we save the pipeline as a field and usually write lambdas
that are passed to the pipeline. Because lambdas are effectively anonymous
inner classes, a lambda that references its enclosing instance is bound to it,
and when the lambda is serialized during pipeline execution, the enclosing
instance is serialized along with it. This can result in undefined behavior
when multiple lambdas from the same XXXPipeline run in the same JVM (such as
in tests), where the static initialization may be performed multiple times
if different class loaders are used. This is very unlikely to happen, but
as a best practice we remove the pipeline fields anyway.
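The enclosing-instance capture can be demonstrated with a small standalone example (not Nomulus code): a serializable lambda that reads instance state must serialize its enclosing instance, and fails when that instance is not serializable, while a lambda that captures nothing serializes fine.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Supplier;

/** Deliberately NOT Serializable, like a pipeline class holding live resources. */
class CaptureDemo {
  private final String label = "pipeline";

  Supplier<String> capturing() {
    // References this.label, so the lambda captures `this`.
    return (Supplier<String> & Serializable) () -> label;
  }

  static Supplier<String> nonCapturing() {
    // Captures nothing; fully self-contained.
    return (Supplier<String> & Serializable) () -> "constant";
  }

  static boolean serializes(Object o) {
    try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
      out.writeObject(o);
      return true;
    } catch (IOException e) {
      return false; // NotSerializableException for the captured enclosing instance
    }
  }
}
```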
* Improve usability of WipeOutCloudSqlAction
Replace the "drop owned" statement with ones that drop only tables and
sequences. The former statement also drops default grants for the
nomulus user, which must be restored before the database can be used by
the nomulus server and tools.
* Convert GenerateLordnCommand to tm
This makes use of QueryComposer and adds a `list()` method to it.
Since there was no test for GenerateLordnCommand, this also implements one.
* Changes requested in review
* Add test for list queries
* Stream domains instead of listing them
* Reformatted
These tests are flaky due to some kind of contention/collision on the mock task
queue. Retrying seems to fix the vast majority of flakes, is easy to implement,
and is more performant than moving these tests into the fragileTests test suite.
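The retry approach can be sketched as a minimal helper (the tests themselves use a JUnit extension; this only shows the idea): rerun a flaky body a bounded number of times, failing only if every attempt fails.

```java
/** Minimal retry sketch for tests that hit transient contention. */
class Retrier {
  interface Attempt {
    void run() throws Exception;
  }

  /** Runs the attempt up to maxAttempts times, rethrowing the last failure. */
  static void retry(int maxAttempts, Attempt attempt) {
    Exception last = null;
    for (int i = 0; i < maxAttempts; i++) {
      try {
        attempt.run();
        return; // success: stop retrying
      } catch (Exception e) {
        last = e; // e.g. contention on the mock task queue: try again
      }
    }
    throw new RuntimeException(last);
  }
}
```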
Note that there are many flow tests that aren't
@DualDatabaseTest-annotated yet but those will come later, as they will
require more changes to the flows (other PRs are coming or in progress).
This only includes the remaining EppResource flows that don't create a
history entry.
Nothing super crazy here other than persisting the entity changes in
DomainDeleteFlow at the end of the flow rather than almost at the end.
This means that the results we return reflect the values as they were
originally present, rather than the subsequently changed values.
* Fix Spec11 domain check
We should be checking to see if there are _any_ active domains for a given
reported domain, not to see if _the_ domain for the name is active.
The last change caused an exception for domain names that also had
soft-deleted past domains. The original code only checked the first
domain returned from the query, which may have been soft-deleted. This
version checks all domain records to see if any are active.
* filter().count() -> anyMatch()
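The difference is a standard stream idiom: `anyMatch` short-circuits at the first active record, whereas `filter(...).count()` materializes the full count. A minimal sketch with hypothetical names:

```java
import java.util.List;

/** Sketch of the fix: short-circuit instead of filtering and counting. */
class ActiveDomainCheck {
  record Domain(String name, boolean softDeleted) {}

  static boolean anyActive(List<Domain> domains) {
    // anyMatch stops at the first active record;
    // filter(d -> !d.softDeleted()).count() > 0 would scan everything.
    return domains.stream().anyMatch(d -> !d.softDeleted());
  }
}
```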
* Begin saving the EppResource parent in *History objects
We use DomainCreateFlow as an example here of how this will work. There
were a few changes necessary:
- various changes around GracePeriod / GracePeriodHistory so that we can
actually store them without throwing NPEs
- Creating one injectable *History.Builder field and using in place of
the HistoryEntry.Builder injected field in DomainCreateFlow
- Saving the EppResource as the parent in the *History.Builder setParent
calls
- Converting to/from HistoryEntry/*History classes in
DatastoreTransactionManager. Basically, we'll want to return the
*History subclasses (and similar in the ofy portions of HistoryEntryDao)
- Converting a few HistoryEntry.Builder usages to DomainHistory.Builder
usages. Eventually we should convert all of them.
This is similar to the migration of the spec11 pipeline in #1073. Also removed
a few Dagger providers that are no longer needed.
TESTED=tested the dataflow job on alpha.
* Migrate Spec11 pipeline to flex template
Unfortunately this PR has turned out to be much bigger than I initially
conceived. However, there is no good way to separate it out because the
changes are intertwined. This PR includes 3 main changes:
1. Change the spec11 pipeline to use Dataflow Flex Template.
2. Retire the use of the old JPA layer that relies on credentials saved
in KMS.
3. Some extensive refactoring to streamline the logic and improve test
isolation.
* Fix job name and remove projectId from options
* Add parameter logs
* Set RegistryEnvironment
* Remove logging and modify safe browsing API key regex
* Rename a test method and rebase
* Remove unused Junit extension
* Specify job region
* Send an immediate poll message for superuser domain deletes
This poll message is in addition to the normal poll message that is sent when
the domain's deletion is effective (typically 35 days later). It's needed
because, in the event of a superuser deletion, the owning registrar won't
otherwise necessarily know it's happening.
Note that, in the case of a --immediate superuser deletion, the normal poll
message is already being sent immediately, so this additional poll message is
not necessary.
* Update various tests to work with SQL as well
The main weird bit here is adding a method in DatabaseHelper to
retrieve and initialize all objects in either database. The
initialization is necessary since it's used post-command-dry-run to make
sure that no changes were actually made.
* Convert CountDomainsCommand to tm
As part of this, implement "select count(*)" queries in the QueryComposer.
* Replaced kludgy trick for objectify count
* Modify ClaimsList DAO to always use Cloud SQL as primary
* Revert ClaimsList add changes to SignedMarkRevocationList
* Fix flow tests
* Use start of time for empty list
* replace lambda with method reference
* Upload latest version of RDE report to icann
Currently the RdeReportAction is hard coded to load the initial version
of a report. This is wrong when reports have been regenerated.
Changed lines are copied from RdeUploadAction.
* Implement query abstraction
Implement a query abstraction layer ("QueryComposer") that allows us to
construct fluent-style queries that work across both Objectify and JPA.
As a demonstration of the concept, convert Spec11EmailUtils and its test to
use the new API.
Limitations:
- The primary limitations of this system are imposed by Datastore: for
example, all queryable fields must be indexed, orderBy must coincide with
the order of any inequality queries, and inequality filters are limited
to one property.
- JPA queries are limited to a set of where clauses (all of which must match)
and an "order by" clause. Joins, functions, complex where logic and
multi-table queries are simply not allowed.
- Descending sort order is currently unsupported (this is simple enough to
add).
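A toy version of the fluent API over an in-memory list illustrates the shape of such an abstraction (names and signatures here are illustrative; the real QueryComposer targets Objectify and JPA and has the limits listed above):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Stream;

/** Toy fluent composer over an in-memory list, for illustration only. */
class MiniQueryComposer<T> {
  record Item(String tld, String name) {} // demo entity

  private final List<T> source;
  private final List<Predicate<T>> filters = new ArrayList<>();
  private Comparator<T> order = null;

  MiniQueryComposer(List<T> source) {
    this.source = source;
  }

  /** All where clauses must match, mirroring the limits described above. */
  <V> MiniQueryComposer<T> whereEquals(Function<T, V> field, V value) {
    filters.add(t -> value.equals(field.apply(t)));
    return this;
  }

  /** Ascending only, like the version described above. */
  <V extends Comparable<V>> MiniQueryComposer<T> orderBy(Function<T, V> field) {
    order = Comparator.comparing(field);
    return this;
  }

  Stream<T> stream() {
    Stream<T> s = source.stream();
    for (Predicate<T> f : filters) {
      s = s.filter(f);
    }
    return order == null ? s : s.sorted(order);
  }

  long count() {
    return stream().count();
  }
}
```

The same fluent chain can then be translated under the hood into either an Objectify query or a JPA criteria query, which is the point of the abstraction.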
* Fix bug that was incorrectly assuming Cursor would always exist
In fact, the Cursor entity does not always exist (e.g. if an upload has never
previously been done on this TLD because the TLD is new), and the code needs to be
resilient to its non-existence.
This bug was introduced in #1044.
* Use lazy injection in SendEscrow command
The injected object in SendEscrowReportToIcannCommand creates Ofy keys
in its static initialization routine. This happens before the RemoteApi
setup. Use lazy injection to prevent failure.
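The effect of lazy injection can be sketched with a memoizing supplier (illustrative; in the real code Dagger's `Lazy<T>` provides this behavior): nothing is constructed at injection time, only on the first `get()`, which by then happens after setup has run.

```java
import java.util.function.Supplier;

/** Sketch of deferring construction until first use. Names are illustrative. */
class LazyDemo {
  static int constructions = 0;

  /** Stand-in for a class whose static/constructor work needs prior setup. */
  static class OfyKeyHolder {
    OfyKeyHolder() {
      constructions++;
    }
  }

  /** Memoizing supplier: nothing is constructed until get() is first called. */
  static Supplier<OfyKeyHolder> lazyHolder() {
    return new Supplier<>() {
      private OfyKeyHolder instance;

      @Override
      public synchronized OfyKeyHolder get() {
        if (instance == null) {
          instance = new OfyKeyHolder();
        }
        return instance;
      }
    };
  }
}
```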
* Specify explicit ofyTm usage in SetDatabaseTransitionScheduleCommand
We cannot use the standard MutatingCommand because the DB schedule is
explicitly always stored in Datastore, and once we transition to
SQL-as-primary, MutatingCommand will stage the entity changes to SQL.
In addition, we remove the raw ofy() call from the test.