* Begin saving the EppResource parent in *History objects
We use DomainCreateFlow as an example here of how this will work. There
were a few changes necessary:
- various changes around GracePeriod / GracePeriodHistory so that we can
actually store them without throwing NPEs
- Creating one injectable *History.Builder field and using in place of
the HistoryEntry.Builder injected field in DomainCreateFlow
- Saving the EppResource as the parent in the *History.Builder setParent
calls
- Converting to/from HistoryEntry/*History classes in
DatastoreTransactionManager. Basically, we'll want to return the
*History subclasses (and similar in the ofy portions of HistoryEntryDao)
- Converting a few HistoryEntry.Builder usages to DomainHistory.Builder
usages. Eventually we should convert all of them.
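A rough sketch of the builder change described above, with the flow wiring
heavily simplified (the real flow sets many more fields):
```
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainHistory;
import google.registry.model.reporting.HistoryEntry;
import javax.inject.Inject;
import org.joda.time.DateTime;

class DomainCreateFlowSketch {
  // Injected in place of the old HistoryEntry.Builder field.
  @Inject DomainHistory.Builder historyBuilder;

  DomainHistory buildHistory(DomainBase newDomain, DateTime now) {
    return historyBuilder
        .setType(HistoryEntry.Type.DOMAIN_CREATE)
        .setModificationTime(now)
        // Pass the EppResource itself as the parent, not just its key.
        .setParent(newDomain)
        .build();
  }
}
```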
* Migrate Spec11 pipeline to flex template
Unfortunately this PR has turned out to be much bigger than I initially
conceived. However, there is no good way to separate it out because the
changes are intertwined. This PR includes 3 main changes:
1. Change the spec11 pipeline to use Dataflow Flex Template.
2. Retire the use of the old JPA layer that relies on credentials saved
in KMS.
3. Some extensive refactoring to streamline the logic and improve test
isolation.
* Fix job name and remove projectId from options
* Add parameter logs
* Set RegistryEnvironment
* Remove logging and modify safe browsing API key regex
* Rename a test method and rebase
* Remove unused Junit extension
* Specify job region
* Convert poll-message-related classes to use SQL as well
There are two relatively complex parts. The first is that we needed a small
refactor of the AckPollMessagesCommand, because we could theoretically be
acking more poll messages than the Datastore transaction size limit allows.
This means that the normal flow of "gather the poll messages from the DB
into one collection, then act on it" needs to be changed to a more
functional, batched flow (see the sketch below).
The second is that acking the poll message (deleting it in most cases)
reduces the number of remaining poll messages in SQL but not in
Datastore, since in Datastore the deletion does not take effect until
after the transaction is over.
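A minimal sketch of the batched shape this pushes the command toward (the
batch size and loop structure here are illustrative, not the actual code):
```
import static google.registry.persistence.transaction.TransactionManagerFactory.tm;

import com.google.common.collect.Iterables;
import google.registry.model.poll.PollMessage;
import java.util.List;

class AckPollMessagesSketch {
  private static final int BATCH_SIZE = 20;  // illustrative; stay under the txn size limit

  void ackAll(List<PollMessage> pollMessages) {
    // Act on fixed-size batches, each in its own transaction, instead of
    // gathering everything into one collection and one giant transaction.
    for (List<PollMessage> batch : Iterables.partition(pollMessages, BATCH_SIZE)) {
      tm().transact(() -> batch.forEach(msg -> tm().delete(msg.createVKey())));
    }
  }
}
```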
* Make TransactionManager.loadAllOf() smart w.r.t the cross-TLD entity group
The loadAllOf() method will now automatically append the cross-TLD entity group
ancestor query as necessary, iff the entity class being loaded is tagged with
the new @IsCrossTld annotation.
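Roughly, the Datastore-side logic becomes the following (the annotation's
package and the helper's shape are assumptions; only @IsCrossTld itself is
named by this change):
```
import static google.registry.model.ofy.ObjectifyService.ofy;

import com.google.common.collect.ImmutableList;
import com.googlecode.objectify.cmd.Query;
import google.registry.model.annotations.IsCrossTld;
import google.registry.model.common.EntityGroupRoot;

class LoadAllOfSketch {
  static <T> ImmutableList<T> loadAllOf(Class<T> clazz) {
    Query<T> query = ofy().load().type(clazz);
    if (clazz.isAnnotationPresent(IsCrossTld.class)) {
      // Scope the query to the single cross-TLD entity group.
      query = query.ancestor(EntityGroupRoot.getCrossTldKey());
    }
    return ImmutableList.copyOf(query);
  }
}
```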
* Add tests
* Add SQL wipeout action in QA
Added the WipeOutSqlAction that deletes all data in Cloud SQL.
Wipeout is restricted to the QA environment, which will get production
data during migration testing.
Also added a cron job that invokes wipeout every Saturday morning.
This is part of the privacy requirements for using production data in QA.
Tested in QA.
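The environment guard at the heart of the action is roughly this (the method
body is illustrative):
```
import google.registry.config.RegistryEnvironment;

class WipeOutGuardSketch {
  void run() {
    // Hard-fail anywhere but QA; production data must never be wiped this way.
    if (RegistryEnvironment.get() != RegistryEnvironment.QA) {
      throw new IllegalStateException("WipeOutSqlAction is only allowed in QA");
    }
    // ... truncate/drop all Cloud SQL tables here ...
  }
}
```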
* Add replay to remaining (non-trivial) flow tests
Convert all remaining flow tests to do replay/compare testing. In the course
of this:
- Move the class specific SetClock extension into its own place.
- Fix another "cyclic" foreign key (there may be another solution in this case
because HostHistory is actually different from HistoryEntry, but that would
require changing the way we establish priority since HostHistory is not
distinguished from HistoryEntry in the current methodology)
* Add daily cron entries for DeleteExpiredDomainsAction
This also requires setting this action to GET instead of POST, as GAE cron makes
GET requests.
* Rewrite the JPA output connector for BEAM
Following BEAM's IO connector style, added a RegistryJpaIO class to hold
IO connectors, and implemented the Write connector as a static inner
class in it. The JpaTransactionManager used by the Write connector
retrieves SQL credentials from the SecretManager.
Cleaned up SQL-related pipeline parameters.
Converted the InitSqlPipeline to use RegistryJpaIO.
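A heavily stripped-down sketch of the Write connector's shape (this is not
RegistryJpaIO's actual API; batching and configuration are omitted):
```
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

class RegistryJpaIOSketch {
  static <T> PTransform<PCollection<T>, PCollection<Void>> write() {
    return new PTransform<PCollection<T>, PCollection<Void>>() {
      @Override
      public PCollection<Void> expand(PCollection<T> input) {
        return input.apply(
            "Write to SQL",
            ParDo.of(
                new DoFn<T, Void>() {
                  @ProcessElement
                  public void processElement(@Element T entity) {
                    // The worker's JpaTransactionManager is configured with
                    // SQL credentials fetched from the SecretManager.
                    jpaTm().transact(() -> jpaTm().put(entity));
                  }
                }));
      }
    };
  }
}
```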
* Replay Cloud SQL transactions against datastore
Implement the ReplicateToDatastore cron job that will apply all Cloud SQL
transactions to the datastore. The last transaction id is stored in a
LastSqlTransaction entity in datastore.
Note that this will not be activated in production until a) the cron
configuration is updated and b) the cloudSql.replicateTransactions flag is set
to true in the nomulus config file.
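A loose sketch of the replay loop; apart from LastSqlTransaction, the names
here are hypothetical:
```
abstract class ReplayLoopSketch {
  // Reads the pointer stored in the LastSqlTransaction entity in Datastore.
  abstract long loadLastAppliedTransactionId();

  abstract Iterable<byte[]> loadSqlTransactionsAfter(long transactionId);

  // Replays one serialized transaction and advances the pointer, atomically.
  abstract void applyAtomically(byte[] serializedTransaction, long newTransactionId);

  void run() {
    long lastApplied = loadLastAppliedTransactionId();
    for (byte[] txn : loadSqlTransactionsAfter(lastApplied)) {
      // Replaying the writes and updating the pointer in the same Datastore
      // transaction means a crash can never skip or double-apply a transaction.
      applyAtomically(txn, ++lastApplied);
    }
  }
}
```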
* Post-review changes
Fixed immutability issues with LastSqlTransaction and write a single
transaction at a time to datastore.
* Changes requested in review
* Get a batch of SQL transactions
Read a batch of SQL transactions at a time and then process them
transactionally against datastore.
* Bring this up-to-date with the codebase
* Changes requested in review
* Fixed date in copyright
* Update a few plugins for Java 11 compatibility
Guice 5.0.1 is now compatible with Java 11. However, we don't depend on
Guice directly; rather, Soy depends on Guice. So I added a direct
dependency on Guice 5.0.1 just before Soy in order to frontload it and
pull in the newer version.
Mockito 3.7.7 is now compatible with Java 11. The complication is that
we need to use the inline version of Mockito, which among other things
also allows mocking of final classes (hooray!). It will eventually
become the default Mockito mock maker but for now it needs to be
manually activated.
Note that the inline version now introduces another warning:
```
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
```
I think this is WAI (working as intended) due to how the inline mock maker
works; waiting on the author to confirm.
After these two changes the only illegal reflective access is caused by
App Engine SDK tools, which we will rid ourselves of when we migrate off
of GAE.
* Restore package-lock.json
* Properly set up JPA in BEAM workers
Sets up a singleton JpaTransactionManager on each worker JVM for all
pipeline nodes to share.
Also added/updated relevant dependencies. The BEAM SDK version change
caused the InitSqlPipeline's graph to change.
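The sharing boils down to a lazily initialized per-JVM singleton, roughly
like this (class and method names are illustrative):
```
import google.registry.persistence.transaction.JpaTransactionManager;
import java.util.function.Supplier;

class WorkerJpaSingleton {
  private static JpaTransactionManager instance;

  static synchronized JpaTransactionManager get(Supplier<JpaTransactionManager> factory) {
    if (instance == null) {
      // Built once per worker JVM; every pipeline node on the worker reuses it.
      instance = factory.get();
    }
    return instance;
  }
}
```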
* Fix obscure bug when checking restore prices of duplicate domain names
There were instances of "java.lang.IllegalArgumentException: Multiple entries
with same key" in the logs, caused by attempting to construct an ImmutableMap
containing duplicate keys. It turns out this was happening in the domain check
flow when the following conditions were all simultaneously met:
1. The older v06 fee extension is used
2. The same domain name is being queried multiple times in a single check
command (which is valid per the spec but doesn't actually make any sense)
3. Said domain exists
4. The cost of a restore (an uncommon operation) is being checked
When all of those conditions were met, an error was being thrown when the
dupe-containing list of domain names was used as the keys of a new Map. This
fixes that bug by calling .distinct() first (sketched below).
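Reduced to its essence, the fix looks like this (the fee type and helper
method are illustrative):
```
import static com.google.common.collect.ImmutableMap.toImmutableMap;

import com.google.common.collect.ImmutableMap;
import java.util.List;
import org.joda.money.Money;

abstract class RestoreCheckSketch {
  abstract Money computeRestoreFee(String domainName);

  ImmutableMap<String, Money> restoreFees(List<String> domainNames) {
    return domainNames.stream()
        // The fix: a name checked twice previously caused
        // "Multiple entries with same key" when building the map.
        .distinct()
        .collect(toImmutableMap(name -> name, this::computeRestoreFee));
  }
}
```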
Give enough registrars enough typewriters ...
BUG=179052195
* Add databaseTransitionSchedule entity
* add UpdateDatabaseTransitionScheduleCommand
* small fixes
* change entity structure to no longer be singleton
* add get command
* fix getCache
* Change id to TransitionId enum
* more fixes
* Cleanup tests
* Add link to javadoc
* Add lastUpdateTime
* fix datatype of getCached
* Make BigqueryPollJobAction endpoint internal only
This endpoint makes use of java object deserialization, which allows a
malicious actor to craft a request that can initiate overly broad actions on
the server. Since this endpoint is not widely used for operational purposes,
limit its authorization to "internal only" so that no user agents (even with
admin privs) can access it.
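The change amounts to tightening the action's auth configuration to
something like this (the service and path shown are assumptions):
```
import google.registry.request.Action;
import google.registry.request.auth.Auth;

@Action(
    service = Action.Service.BACKEND,  // assumed; shown for completeness
    path = "/_dr/task/pollBigqueryJob",  // illustrative path
    auth = Auth.AUTH_INTERNAL_ONLY)  // no user agents, even with admin privs
public class BigqueryPollJobAction implements Runnable {
  @Override
  public void run() {
    // ... deserialize the enqueued payload and dispatch it ...
  }
}
```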
* Wire up DeleteExpiredDomainsAction so that it can actually be called
For now I'm just going to be calling it manually (and on sandbox for starters),
but in a few weeks, if all looks good, I'll add the cron job to regularly call
it in production, and this feature will thus be done.
* Convert (most) HistoryEntry ofy calls to tm
As part of this change, it was necessary to make changes in the JPATM that
are similar to (but the opposite of) the changes that we did in the
DatastoreTM with regards to converting HistoryEntries to and from the
*History classes.
We leave the ofy() calls in the MapReduce ResaveAllHistoryEntriesAction
for now; that can be converted during the Beam pipeline transition.
Some other tests required registrar-name fixes as well -- because
*History objects have a foreign key on the Registrar table, we have to
use a "real" registrar name in tests.
* Add simple HistoryEntryDaoTest
* Update the Datastore to SQL migration pipeline
The pipeline now includes all entity types to be migrated by it, and has
completed successfully using the Sandbox data set. The running time in Sandbox
is about 3 hours, extrapolating by entity count to a 12-hour run with
production data. However, actual running time is likely to be longer since
throughput is lower with domains, which account for a higher percentage
of the total in production. More optimization will be needed.
The migrated data has not been validated.
* Use PollMessageVKey to replace VKey<PollMessage> in DomainBase
* Revert changes to DomainContent
* Use BillingVKey in GracePeriodBase to restore symmetric vkey
* Rebase on HEAD
* Reproduce DomainHistory double write failure
* Add fix for cascade sets and clean up hacks
* Fix DatastoreHelper to work with name change.
* Remove Ignored entities from ofy schema
* Add an extension to verify transaction replay
Add ReplayExtension, which can be applied to test suites to verify that
transactions committed to datastore can be replayed to SQL.
This introduces a ReplayQueue class, which serves as a stand-in for the
current lack of replay-from-commit-logs. It also includes replay logic in
TransactionInfo which introduces the concept of "entity class weights."
Entity weighting allows us to store and delete objects in an order that is
consistent with the direction of foreign key and deferred foreign key
relationships. As a general rule, lower-weight classes must have no direct or
indirect non-deferred foreign key relationships on higher-weight classes (see
the sketch below).
It is expected that much of this code will change when the final replay
mechanism is implemented.
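A sketch of the weighting idea (the classes and weights shown are
illustrative, not the actual table):
```
import com.google.common.collect.ImmutableMap;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.registrar.Registrar;

class EntityWeightsSketch {
  // Lower weights are written first and deleted last, so a row never exists
  // without the rows it holds (non-deferred) foreign keys on.
  private static final ImmutableMap<Class<?>, Integer> CLASS_WEIGHTS =
      ImmutableMap.of(
          Registrar.class, 1, // referenced by nearly everything
          ContactResource.class, 2,
          DomainBase.class, 3); // has foreign keys on the classes above

  static int weightOf(Object entity) {
    return CLASS_WEIGHTS.getOrDefault(entity.getClass(), Integer.MAX_VALUE);
  }
}
```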
* Minor fixes:
- Initialize "requestedByRegistrar" to false (it's non-nullable). [reverted
during rebase: non-nullable was removed in another PR]
- Store test entities (registrar, hosts and contacts) in JPA.
* Make testbed save replay
This changes the replay system to make datastore saves initiated from the
testbed (as opposed to just the tested code) replay when the ReplayExtension
is enabled. This requires modifications to DatastoreHelper and the
AppEngineExtension that the ReplayExtension can plug into.
This change also has some necessary fixes to objects that are persisted by
the testbed (such as PremiumList).
* Add schema for GracePeriodHistory
Rebase on HEAD
Rebase on HEAD
Rebase on HEAD and rename column
Use OfyService to generate id
Refactor GracePeriodsSubject
Rebase on HEAD
Remove GracePeriodSubject and GracePeriodsSubject
Rebase on HEAD
Rebase on HEAD
Rebase on HEAD
Add gracePeriodHistoryRevisionId and remove some foreign key
* Rebase on HEAD
* Restore ofy keys in GracePeriod objects
Restore the ofy keys when loading GracePeriod object from SQL. There's no
clear way to do this using the normal approach (fix-up during a PostLoad
method) because fixups to these violate immutability after hibernate has
already obtained their hash values. Instead, we force reconstitution of the
ofy keys in all public methods that access them (including equals() and
hashCode()) so that they can be generated before an invalid hash is generated.
As part of this change, convert the GracePeriod id from an autogenerated
sequence to a UUID allocated from ObjectifyService and enhance ImmutableObject
to allow it to exclude certain fields from hash/equals and print.
The ImmutableObject enhancements are necessary because we compare grace
periods against locally created test objects in a number of unit tests and
there's no way this can work with GracePeriods loaded from SQL currently, as
they will have an identifier field generated from the database and the test
objects will have an identifier field of null (or a new unique value, after
this change).
Removing autogeneration from GracePeriod ids ended up being likely not
strictly necessary for this change (it was a consequence of an earlier
iteration). However, it does alleviate the problem of mutation of an
immutable object after creation and is more in line with how we've decided to
allocate other identifiers.
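The ImmutableObject side of this might look roughly as follows (the
annotation name is an assumption based on the description above):
```
import google.registry.model.ImmutableObject;

class GracePeriodSketch extends ImmutableObject {
  // Excluded from hash/equals and print, so a GracePeriod loaded from SQL can
  // compare equal to a locally built test object whose id was never assigned.
  @ImmutableObject.Insignificant  // assumed annotation name
  Long gracePeriodId;

  String type; // still participates in equality as usual
}
```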
* Changes needed after rebase.
* Create SQL schema for RdeRevision
* Split RdeRevision IDs into three separate DB fields as unified pkey
* Rename variable
* Merge remote-tracking branch 'origin/master' into rdeRevision
* Rename variable in one other location
* Implement no-op toDatastore/Sql for RdeRevision
* Responses to CR
* Merge remote-tracking branch 'origin/master' into rdeRevision
* Use a date for the date column
* Fix exception messages in tests
* Regen diagram to fix the test
* Use assignment in static factory methods
* Merge remote-tracking branch 'origin/master' into rdeRevision
* Use the parent to store the history repo ID and fill in the base object
Storing the repo ID in the parent and in the base object has two primary
benefits.
First, it unifies the parent information in the HistoryEntry's `parent`
object. This simplifies the builders and the data flow.
Second, when possible (which should be always, post-migration) we fill
out the DomainContent's repo ID (similarly for the other EPP resources)
which means that when reconstituting the ofy keys we don't need to pass
the repo ID in from a separate object. This way, all the data are
encapsulated where they should be.
The primary downside here is that it further reduces the "immutability"
of the history objects (since we're using the Hibernate setter for the
parent repo ID) but we weren't immutable anyway.
* Respond to CR
- compare the entire vkeys in tests
- always return the parent for repo ID
* Simplify creation of parent VKeys
* Fix flipped isAssignableFrom check in VKey
* Merge remote-tracking branch 'origin/master' into historyRepoId
When loading the VKeys for the BillingEvents hierarchy, it is necessary to
restore the original concrete class for the type, otherwise we end up with a
different (and incompatible) VKey.
As part of this, convert the cancellation matching billing event to
VKey<Recurring>, which seems like the only thing it actually can be.
* Create a separate per-tld registry lock/unlock cost
Currently we use the standard server status change cost for this, but
this might not be ideal at some point in the future if we wish to allow
manual forced updates outside of the standard registry lock system (we
would charge for these manual forced updates, even if we don't charge
for registry locks).
* Remove period
* Change disable invoicing flag to enable invoicing flag
This flag will be the sole determinant of whether invoicing is enabled,
regardless of TLD type.
Once this PR is deployed we will need to run the nomulus command to
update this flag on all launched open TLDs.
For context on why this change is made, see b/159626744.
* Rename enableInvoicing to InvoicingEnabled
* Use composite primary key for DomainHistory
* Move History table's SequenceGenerator to orm.xml
* Rebase on HEAD and remove default value for key in History tables
* Use primitive type for id.
* Revert the cache change
* Persist *History objects as HistoryEntry objects
While Datastore is the primary database, we will store *History objects
as HistoryEntry objects and convert to/from the proper objects in the
Datastore transaction manager. This means that History objects will not
properly store the copy of the EppResource until we move to SQL as
primary, but this is the way the world exists anyway so it's not a
problem.
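Conceptually, the Datastore transaction manager does something like this
(both conversion helpers are illustrative):
```
import google.registry.model.domain.DomainHistory;
import google.registry.model.reporting.HistoryEntry;

class HistoryConversionSketch {
  // On save: strip a *History object down to its plain HistoryEntry form.
  static Object toDatastoreEntity(Object entity) {
    return entity instanceof HistoryEntry
        ? ((HistoryEntry) entity).asHistoryEntry() // illustrative helper
        : entity;
  }

  // On load: re-wrap the stored HistoryEntry as e.g. a DomainHistory.
  static HistoryEntry fromDatastoreEntity(HistoryEntry entry) {
    return DomainHistory.createFrom(entry); // hypothetical factory
  }
}
```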
* Format code and simplify the bulk loading
* Add comments with context
* Add a SQL schema to AllocationToken
* Respond to CR
- rename field in tests
- rename allowed_registrar_ids field
- remove unnecessary db load in GATC
* Add TODO for HistoryEntry vkeys
* Run autoformat
* V48 -> V49
- Reuse DS record format processing from the create/update domain commands
(BIND format, commonly used in URS requests)
- Remove the CLIENT_HOLD status from domains that have it (this blocks us from
serving the new nameservers and DS record)
* End-to-end Datastore to SQL pipeline
Defined InitSqlPipeline that performs end-to-end migration from
a Datastore backup to a SQL database.
Also fixed/refined multiple tests related to this migration.