* Update GCL dependency to avoid security alert
This required a few changes in addition to the dependency update.
- a few transitive / required dependency updates as well
- updating soyutils_usegoog.js and adding checks.js because they're
necessary as part of the Soy compilation process
- Using a trustedResourceUri in the buildSrc Soy compilation instead of
a string
- changing the arguments to the Soy-to-Java compiler to comply with the
new version
- Moving all Soy UI files into the registrar directory. They were
previously separate because we once planned to have distinct admin and
registrar consoles; that's no longer the case, so the split is no
longer necessary. This necessitated various refactorings and reference
changes.
- The new Soy-to-JavaScript compiler requires this, as it removes the
"deps" param that we were previously using to say "use the general UI
utils as dependencies for the registrar-console files".
- Creating a SQL environment and loading test data in the test server
main method -- previously, the local test server did not work.
- Fix some JS code that was referencing now-deleted library functions
- Removal of the Karma tests, as the karma-closure library hasn't been
updated since 2018 and it no longer works. We never noticed any errors
from the Karma tests, we never change the JS, and we have the
Java+Selenium screenshot differ tests to test the UI anyway.
* Add sanity checks to history entry construction
* Add more missing setClientId() calls and delete scrap tool
* Merge branch 'master' into synthetic-requestedby
* Set more client IDs
* Merge branch 'master' into synthetic-requestedby
* Fix RegistryJpaIO.Read problem with large data
The read connector needs to detach loaded entities; this is now the
default behavior in QueryComposer.
Also removed the 'transaction mode' property from the Read connector.
There are no obvious use cases for non-transactional queries, and the
implementation is not straightforward with the current code base.
Also changed the return type of QueryComposer.list() to ImmutableList.
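As an illustration of the new default, here is a minimal sketch of
detaching query results before returning them, using plain JPA; the
`DetachingQueries` class and `listDetached` method are illustrative
names, not the actual QueryComposer code.

```java
import java.util.List;
import javax.persistence.EntityManager;
import com.google.common.collect.ImmutableList;

// Minimal sketch: run a query, detach every loaded entity, and return
// an immutable list, so callers get objects that are no longer managed
// by the persistence context.
public final class DetachingQueries {

  public static <T> ImmutableList<T> listDetached(
      EntityManager em, Class<T> clazz, String jpql) {
    List<T> results = em.createQuery(jpql, clazz).getResultList();
    // Detach so that later modifications are not accidentally flushed
    // back to the database.
    results.forEach(em::detach);
    return ImmutableList.copyOf(results);
  }
}
```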
* Clean up some SqlEntity classes
This started as adding a better check for when to run the
ReplayCommitLogsToSqlAction, but that'll require a bit more thought;
this is a fairly simple PR that can be split out.
* Add mapreduce action to create synthetic history entries
RDE and zone file generation require being able to tell what objects
looked like in the past (though not beyond 30 days, or whatever the
Datastore retention period is set to). In Datastore, to answer this we
look at commit logs, and in SQL we will look at the History objects
stored for each EPP resource. This action can be run once while in
Datastore-primary-SQL-secondary to make sure that every EPP resource has
at least one history entry for which the resource-at-this-time field is
filled out in the SQL world.
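A rough, self-contained sketch of the per-resource check this action
performs; `EppResource`, `HistoryEntry`, and
`maybeCreateSyntheticHistory` are simplified stand-ins, not the real
model classes or the mapreduce wiring.

```java
import java.time.Instant;
import java.util.List;
import java.util.Optional;

final class SyntheticHistoryDemo {
  // Simplified stand-ins for the real model classes.
  record EppResource(String repoId) {}

  record HistoryEntry(
      String repoId, EppResource resourceAtThisTime, Instant modificationTime) {}

  /** Returns a synthetic history entry if no existing one embeds the resource. */
  static Optional<HistoryEntry> maybeCreateSyntheticHistory(
      EppResource resource, List<HistoryEntry> existingHistories, Instant now) {
    boolean hasSnapshot =
        existingHistories.stream().anyMatch(h -> h.resourceAtThisTime() != null);
    if (hasSnapshot) {
      return Optional.empty(); // a usable snapshot already exists
    }
    // Synthesize an entry embedding the resource's current state so the
    // SQL world can answer "what did this object look like?" queries.
    return Optional.of(new HistoryEntry(resource.repoId(), resource, now));
  }
}
```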
* Fix the JPA Read connector for large data
Allow result set streaming by setting the fetchSize on JDBC statements.
Many JDBC drivers buffer the entire result set by default, causing
delays in returning the first result and/or out-of-memory errors.
Also fixed an entity instantiation problem exposed in production runs.
Lastly, removed incorrect comments.
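A minimal sketch of the streaming mechanism with plain JDBC; the
connection URL and table name are assumptions. Note that with the
PostgreSQL driver, fetchSize only takes effect when autocommit is off.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public final class StreamingReadDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
        DriverManager.getConnection("jdbc:postgresql://localhost/registry")) {
      conn.setAutoCommit(false); // required for cursor-based fetching in Postgres
      try (PreparedStatement stmt =
          conn.prepareStatement("SELECT repo_id FROM \"Domain\"")) {
        stmt.setFetchSize(1000); // fetch 1000 rows per round trip, not the whole set
        try (ResultSet rs = stmt.executeQuery()) {
          while (rs.next()) {
            System.out.println(rs.getString(1)); // process one row at a time
          }
        }
      }
    }
  }
}
```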
* Restore a fix for flaky test
Restore a speculative fix for the flakiness in
DeleteExpiredDomainsActionTest. Although we identified a bug and fixed
it in a previous commit, it may not be the only bug. The removed fix may
still be necessary.
* Combine the two Lock classes into one class
This allows us to remove the DAO and to just treat locks the same as we
would treat any other object -- generically grabbing them from the
transaction manager.
We do not need to be concerned about the changeover between Datastore
and SQL because we assume that any such changeover will require
sufficient downtime that any currently-valid acquired locks will expire
during the downtime. Otherwise, we could get into a situation where an
action has acquired a particular lock in Datastore but not SQL.
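A self-contained sketch of the "lock as a plain entity" idea: the lock
is loaded and saved through a generic store (standing in here for the
transaction manager) rather than through a dedicated DAO. The `Store`
interface and `Lock` record are illustrative, not the Nomulus API.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

final class LockDemo {

  record Lock(String resourceName, Instant expirationTime) {
    boolean isExpiredAt(Instant now) {
      return !now.isBefore(expirationTime);
    }
  }

  // Stand-in for the transaction manager's generic load/save interface.
  interface Store {
    Optional<Lock> load(String resourceName);
    void put(Lock lock);
  }

  /** Acquires the lock if it is absent or expired; returns empty otherwise. */
  static Optional<Lock> acquire(
      Store store, String resourceName, Duration lease, Instant now) {
    Optional<Lock> existing = store.load(resourceName);
    if (existing.isPresent() && !existing.get().isExpiredAt(now)) {
      return Optional.empty(); // a valid lease is still held
    }
    Lock lock = new Lock(resourceName, now.plus(lease));
    store.put(lock);
    return Optional.of(lock);
  }

  public static void main(String[] args) {
    Map<String, Lock> backing = new ConcurrentHashMap<>();
    Store store = new Store() {
      @Override public Optional<Lock> load(String name) {
        return Optional.ofNullable(backing.get(name));
      }
      @Override public void put(Lock lock) {
        backing.put(lock.resourceName(), lock);
      }
    };
    System.out.println(acquire(store, "tld-dns", Duration.ofMinutes(10), Instant.now()));
  }
}
```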
* Fix timestamp inversion bug
Set the number of commitLog buckets to 1 in CommitLog replay tests to
expose all timestamp inversion problems due to replay. Fixed
PollAckFlowTest, which is related to this problem.
Also fixed a few tests that failed to advance the fake clock when they
should have, using the following approaches:
- If DatabaseHelper is used but the clock is not injected, inject it.
This allows us to remove some unnecessary manual clock advances.
- Manually advance the clock where convenient.
- Enable the clock's autoIncrement mode when calling production classes
that perform multiple transactions.
We should consider making 1-bucket the default setting for tests. This
is left to another PR.
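For reference, a sketch of what the autoIncrement mode buys us, assuming
a fake clock along these lines; this is illustrative, not the actual
FakeClock implementation.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.atomic.AtomicReference;

// An illustrative fake clock with an auto-increment mode: each read
// advances time by one millisecond, so consecutive transactions can
// never share a timestamp.
final class SketchFakeClock {
  private final AtomicReference<Instant> now;
  private volatile boolean autoIncrement = false;

  SketchFakeClock(Instant start) {
    this.now = new AtomicReference<>(start);
  }

  Instant nowUtc() {
    if (autoIncrement) {
      // Each observation moves the clock forward, preventing timestamp
      // inversions between back-to-back transactions.
      return now.updateAndGet(t -> t.plusMillis(1));
    }
    return now.get();
  }

  void advanceBy(Duration duration) {
    now.updateAndGet(t -> t.plus(duration));
  }

  void setAutoIncrementByOneMilli() {
    autoIncrement = true;
  }
}
```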
* Add an auditedOfy marker method for allow-listed ofy() calls
This will allow us to make sure that every usage of ofy() has been
hand-examined and specifically allowed.
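A minimal sketch of the marker-method pattern, under the assumption that
enforcement happens via a presubmit or static-analysis check; the `Ofy`
stub stands in for the real Objectify wrapper.

```java
// auditedOfy() simply delegates to ofy(), but gives the checker a
// distinct name to allow-list: direct ofy() calls can be forbidden
// while hand-reviewed call sites are renamed to auditedOfy().
final class OfyMarkers {
  static final class Ofy {} // stand-in for the real Objectify wrapper

  private static final Ofy OFY = new Ofy();

  /** Direct use is disallowed outside allow-listed code by presubmit. */
  static Ofy ofy() {
    return OFY;
  }

  /** Marker for hand-examined, specifically allowed ofy() usages. */
  static Ofy auditedOfy() {
    return ofy();
  }
}
```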
In SQL, the contact of a domain is an indexed field, and therefore we
can find linked domains synchronously, without the need for MapReduce.
The delete logic is mostly lifted from DeleteContactsAndHostsAction, but
because everything happens in a transaction, we do not need to recheck a
lot of the preconditions that were previously necessary to ensure that
the async delete request still met the conditions that held when the
request was enqueued.
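A hedged sketch of what the synchronous linked-domain check can look
like in JPA; the entity and field names here are illustrative, not the
actual schema.

```java
import javax.persistence.EntityManager;

// Because the domain's contact references are indexed columns in SQL,
// one query inside the deletion transaction tells us whether the
// contact is still in use.
final class LinkedDomainCheck {
  static boolean isLinkedToDomain(EntityManager em, String contactRepoId) {
    Long count =
        em.createQuery(
                "SELECT COUNT(d) FROM Domain d"
                    + " WHERE d.adminContact = :repoId"
                    + " OR d.techContact = :repoId"
                    + " OR d.registrantContact = :repoId",
                Long.class)
            .setParameter("repoId", contactRepoId)
            .getSingleResult();
    return count > 0;
  }
}
```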
* Populate the contact in ContactHistory objects created in Contact flows
The interesting changes here are minimal:
- a bit of reconstruction in ContactHistory to get the repo ID from the
key
- making the History revision ID Long instead of long so that it can be
null in non-built intermediate entities
- adding a copyFrom(HistoryEntry.Builder) method in HistoryEntry.Builder
so that we don't need to allocate quite as many unnecessary IDs, i.e.
removing the .build() lines in provideContactHistory and
provideDomainHistory
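An illustrative sketch of the copyFrom(Builder) idea: copying fields
from an un-built builder avoids calling build(), which is where ID
allocation would happen. The field names are simplified stand-ins.

```java
final class HistoryEntry {
  Long revisionId; // Long, not long, so intermediate entities can leave it unset
  String clientId;

  static final class Builder {
    private Long revisionId;
    private String clientId;

    Builder setClientId(String clientId) {
      this.clientId = clientId;
      return this;
    }

    /** Copies state from another builder without forcing a build(). */
    Builder copyFrom(Builder other) {
      this.revisionId = other.revisionId;
      this.clientId = other.clientId;
      return this;
    }

    HistoryEntry build() {
      HistoryEntry entry = new HistoryEntry();
      // In the real code, build() is where a new revision ID would be
      // allocated if one is not already set.
      entry.revisionId = revisionId;
      entry.clientId = clientId;
      return entry;
    }
  }
}
```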
* Add a BEAM read connector for JPA entities
Added a Read connector to load JPA entities from Cloud SQL.
Also attempted a fix for the null threadfactory problem.
There has been a case where the CI was broken on a Friday, no one
noticed or fixed it, and an RC build was cut with broken tests.
The tests were disabled due to unknown test failures that have since
been fixed.
Also updated the machine type used by GCB to be more powerful. This is
necessary for the tests to pass because N1_HIGHCPU_8 is RAM-constrained
and the tests crash. I updated all jobs to use the new type, which will
hopefully make the build faster as well.
This also forces the Karma test to use the Gradle-installed version of
Node instead of the global version. The global version installed on the
Kokoro machines is too old to function with some of the newer libraries.
Unfortunately, much of the time there's a bit of a circular dependency
in the object creation, e.g. the Domain object stores references to the
billing events which store references to the history object which
contains the Domain object. As a result, we allocate the history
object's ID before creating it, so that it can be referenced in the
other objects that store that reference, e.g. billing events.
In addition, we add a utility copyFrom method in HistoryEntry.Builder to
avoid unnecessary ID allocations.
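A self-contained sketch of the ID-pre-allocation pattern described
above; all class shapes and the allocator are simplified stand-ins.

```java
import java.util.concurrent.atomic.AtomicLong;

// The history revision ID is allocated before the history object
// exists, so the billing event can already hold a reference to it.
final class PreAllocationDemo {
  private static final AtomicLong ID_ALLOCATOR = new AtomicLong(1);

  record BillingEvent(long historyRevisionId) {}

  record HistoryEntry(long revisionId, BillingEvent billingEvent) {}

  public static void main(String[] args) {
    // Allocate the history ID first...
    long revisionId = ID_ALLOCATOR.getAndIncrement();
    // ...so the billing event can reference the not-yet-built history...
    BillingEvent event = new BillingEvent(revisionId);
    // ...and the history entry is then built with the same ID.
    HistoryEntry history = new HistoryEntry(revisionId, event);
    System.out.println(history);
  }
}
```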
* Remove unnecessary MockitoExtension from Spec11PipelineTest
This is kind of a shot in the dark here, but this is one of the obvious
differences between this test class (which frequently experiences flakes) and
the other pipeline test classes which do not.
It's also possible we were getting the wrong runner if the test framework was
incorrectly detecting an App Engine runtime environment, so I added an assert
that will make it very clear if this is the cause of any failures.
* Handle bad production data when migrating to SQL
Ignore or fix bad entities when populating SQL with production data in
Datastore. These are mostly inconsistent foreign keys.
See b/185954992 for details.
In tests we use a TestPipelineExtension, which does some static
initialization that should not be repeated in the same JVM. In our
XXXPipeline classes we save the pipeline as a field and usually write
lambdas that are passed to the pipeline. Because lambdas are effectively
anonymous inner classes, they are bound to their enclosing instances.
When they get serialized during pipeline execution, their enclosing
instances are serialized as well. This might result in undefined
behavior when multiple lambdas in the same XXXPipeline are used on the
same JVM (such as in tests), where the static initialization may be done
multiple times if different class loaders are used. This is very
unlikely to happen, but as a best practice we still remove them as
fields.
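To illustrate the capture problem: a lambda that touches an instance
member captures `this`, while a static method reference captures
nothing. This sketch is generic Java, not the actual pipeline code.

```java
import java.io.Serializable;
import java.util.function.Function;

final class LambdaCaptureDemo implements Serializable {
  private final Object heavyNonSerializableField = new Object();

  // BAD: references an instance field, so serializing this function
  // drags the whole enclosing LambdaCaptureDemo along with it.
  Function<String, String> capturing() {
    return s -> s + heavyNonSerializableField.hashCode();
  }

  // GOOD: a static method reference captures no enclosing instance.
  static String transform(String s) {
    return s.toUpperCase();
  }

  Function<String, String> nonCapturing() {
    return LambdaCaptureDemo::transform;
  }
}
```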
* Improve usability of WipeOutCloudSqlAction
Replace the "drop owned" statement with ones that drops only tables and
sequences. The former statement also drops default grants for the
nomulus user, which must be restored before the database can be used by
the nomulus server and tools.
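A hedged sketch of the targeted wipe-out: enumerate and drop only
tables and sequences in the schema instead of running DROP OWNED, which
would also revoke the default grants. The connection URL and schema
name are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public final class WipeOutSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
            DriverManager.getConnection("jdbc:postgresql://localhost/registry");
        Statement stmt = conn.createStatement()) {
      List<String> dropStatements = new ArrayList<>();
      // Collect all tables in the schema...
      try (ResultSet rs = stmt.executeQuery(
          "SELECT tablename FROM pg_tables WHERE schemaname = 'public'")) {
        while (rs.next()) {
          dropStatements.add("DROP TABLE IF EXISTS \"" + rs.getString(1) + "\" CASCADE");
        }
      }
      // ...and all sequences, then drop them; grants remain untouched.
      try (ResultSet rs = stmt.executeQuery(
          "SELECT sequencename FROM pg_sequences WHERE schemaname = 'public'")) {
        while (rs.next()) {
          dropStatements.add("DROP SEQUENCE IF EXISTS \"" + rs.getString(1) + "\" CASCADE");
        }
      }
      for (String sql : dropStatements) {
        stmt.executeUpdate(sql);
      }
    }
  }
}
```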
* Convert GenerateLordnCommand to tm
This makes use of QueryComposer and adds a `list()` method to it.
Since there was no test for GenerateLordnCommand, this also implements one.
* Changes requested in review
* Add test for list queries
* Stream domains instead of listing them
* Reformatted
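Regarding "stream domains instead of listing them" above, a hedged
sketch of the streaming shape using JPA 2.2's getResultStream(); the
entity and field names are illustrative, not the actual schema.

```java
import java.util.function.Consumer;
import java.util.stream.Stream;
import javax.persistence.EntityManager;

final class StreamingQuerySketch {
  // Walks results one at a time instead of materializing the whole list.
  @SuppressWarnings("unchecked")
  static void forEachName(EntityManager em, Consumer<String> sink) {
    try (Stream<String> names =
        em.createQuery("SELECT d.domainName FROM Domain d").getResultStream()) {
      names.forEach(sink);
    }
  }
}
```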
* Make nom_build not check for ".git" directory
nom_build tries to verify that it is in the root of the tree prior to
doing anything; however, checking for a .git directory doesn't work in
a merged directory.
* Minor formatting fix to attempt to force rebuild
These tests are flaky due to some kind of contention/collision on the mock task
queue. Retrying seems to fix the vast majority of flakes, is easy to implement,
and is more performant than moving these tests into the fragileTests test suite.
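A sketch of the retry shape, assuming the junit-pioneer extension is
available; @RetryingTest(3) re-runs a failing test method up to three
times before reporting failure.

```java
import org.junitpioneer.jupiter.RetryingTest;

class MockTaskQueueRetryDemo {
  @RetryingTest(3) // re-run up to 3 attempts to absorb rare contention flakes
  void enqueuesTask() {
    // The flaky assertion against the mock task queue would go here; a
    // trivial stand-in keeps this sketch self-contained.
    org.junit.jupiter.api.Assertions.assertTrue(true);
  }
}
```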
Note that there are many flow tests that aren't
@DualDatabaseTest-annotated yet, but those will come later, as they will
require more changes to the flows (other PRs are coming or in progress).
This only includes the remaining EppResource flows that don't create a
history entry.