* Fix some SQL credential issues identified when deploying Beam pipelines
There are two issues fixed here.
1. Without calling `FileSystems.setDefaultPipelineOptions(PipelineOptionsFactory.create())`, the Nomulus tool doesn't know how to handle gs:// scheme files. Thus, if you try to deploy (for instance) the Spec11 pipeline using a GCS credential file, it fails (see the sketch after this list).
2. There was a misunderstanding before about what the credential file
actually refers to -- there is a credential file in JSON format that is
used for gcloud authorization, and there is a space-delimited SQL access
info file that has the instance name, username, and password. These are
separate options and should have separate command-line params.
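A minimal sketch of the fix for the first issue, using the Beam APIs named above (the class name and gs:// path here are illustrative):

```java
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class GcsAccessSketch {
  public static void main(String[] args) {
    // Register all available filesystems (including gs://) with Beam's
    // FileSystems registrar. Without this call, resolving a gs:// path below
    // fails with an unknown-scheme error.
    FileSystems.setDefaultPipelineOptions(PipelineOptionsFactory.create());

    // Now a GCS-hosted credential file can be resolved:
    FileSystems.matchNewResource("gs://my-bucket/credential.json", /* isDirectory= */ false);
  }
}
```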
* Actually we don't need this for remote deployment
* Create an ImmutableObjectSubject for comparing SQL objects
Often, when comparing objects that are loaded from / saved to SQL in
tests, there are some fields we don't care about. Specifically, we might
not care about the last update time, revision ID, or other fields that
are auto-assigned by the DB. If we use this, we can ignore those fields
while still comparing the other ones.
* Create an ImmutableObject Correspondence for more flexible usage
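A hypothetical sketch of what a fields-ignoring comparison could look like with Truth's Correspondence (the reflection-based helper is illustrative, not the project's actual implementation):

```java
import com.google.common.truth.Correspondence;
import java.lang.reflect.Field;
import java.util.Objects;
import java.util.Set;

public final class IgnoringFieldsCorrespondence {
  /** Compares two objects field-by-field, skipping the named fields. */
  public static Correspondence<Object, Object> ignoringFields(Set<String> ignored) {
    return Correspondence.from(
        (actual, expected) -> fieldsEqual(actual, expected, ignored),
        "is equal except DB-assigned fields to");
  }

  private static boolean fieldsEqual(Object actual, Object expected, Set<String> ignored) {
    if (actual == null || expected == null || actual.getClass() != expected.getClass()) {
      return Objects.equals(actual, expected);
    }
    for (Field field : actual.getClass().getDeclaredFields()) {
      if (ignored.contains(field.getName())) {
        continue; // e.g. "revisionId" or "lastUpdateTime"
      }
      field.setAccessible(true);
      try {
        if (!Objects.equals(field.get(actual), field.get(expected))) {
          return false;
        }
      } catch (IllegalAccessException e) {
        throw new AssertionError(e);
      }
    }
    return true;
  }
}
```

Usage would then be along the lines of
`assertThat(loaded).comparingElementsUsing(ignoringFields(Set.of("revisionId"))).containsExactlyElementsIn(expected)`.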
* Add a 'Host' parameter to the relock action enqueuer
I believe this is why we are seeing 404s currently -- we should be
specifying the backend host as the target like we do for the
resave-entity async action.
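A sketch of the intended fix, assuming App Engine task queues (the queue name, URL path, and parameter are placeholders):

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class RelockEnqueueSketch {
  static void enqueueRelock(String backendHostname, long oldUnlockRevisionId) {
    Queue queue = QueueFactory.getQueue("async-actions"); // placeholder queue name
    queue.add(
        TaskOptions.Builder.withUrl("/_dr/task/relockDomain") // placeholder path
            // Route the task to the backend service; without a Host header it
            // goes to the default service, which 404s on this path.
            .header("Host", backendHostname)
            .param("oldUnlockRevisionId", String.valueOf(oldUnlockRevisionId)));
  }
}
```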
* Fix JpaIntegrationRule in JUnit4
Made DatastoreExtension a JUnit4 Rule.
Nomulus model objects need the Datastore API when manipulating Ofy keys.
As a result, JpaIntegrationTestRule must be used with AppEngineRule
or DatastoreExtension.
Also fixed WriteToSqlTest, which is the only JUnit4 test that uses
JpaIntegrationTestRule.
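A sketch of the resulting ordering in a JUnit4 test (rule names and construction are approximate):

```java
import org.junit.Rule;
import org.junit.rules.RuleChain;

public class WriteToSqlTestSketch {
  // DatastoreExtension (now also usable as a JUnit4 rule) must wrap the JPA
  // rule, because manipulating Ofy keys during JPA setup needs the Datastore
  // API; RuleChain makes that ordering explicit.
  @Rule
  public final RuleChain rules =
      RuleChain.outerRule(new DatastoreExtension())
          .around(new JpaTestRules.Builder().buildIntegrationTestRule());
}
```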
* Add lastUpdateTime column to epp resources
Property was inadvertently left out.
Renamed getter and setter to match the property name.
Added a test helper to compare EppResources while ignoring
lastUpdateTime, which changes every time an instance is persisted.
* Write one PCollection to SQL
Defined a transform that writes a PCollection of entities to SQL using
JPA. Allows configuring parallelism level and batch size.
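A minimal sketch of the batching side of such a transform, assuming Nomulus's jpaTm() transaction manager (class and type parameters simplified):

```java
import static google.registry.persistence.transaction.TransactionManagerFactory.jpaTm;

import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.transforms.DoFn;

/** Writes elements to SQL in JPA transactions of at most {@code batchSize}. */
public class WriteBatchFn extends DoFn<Object, Void> {
  private final int batchSize;
  private transient List<Object> batch;

  public WriteBatchFn(int batchSize) {
    this.batchSize = batchSize;
  }

  @StartBundle
  public void startBundle() {
    batch = new ArrayList<>();
  }

  @ProcessElement
  public void processElement(@Element Object entity) {
    batch.add(entity);
    if (batch.size() >= batchSize) {
      flush();
    }
  }

  @FinishBundle
  public void finishBundle() {
    flush();
  }

  private void flush() {
    if (batch.isEmpty()) {
      return;
    }
    List<Object> toWrite = new ArrayList<>(batch);
    batch.clear();
    // One transaction per batch. The parallelism level is set upstream, e.g.
    // by sharding the input into N keys before the ParDo.
    jpaTm().transact(() -> toWrite.forEach(e -> jpaTm().getEntityManager().persist(e)));
  }
}
```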
* Add JUnit Params and start using it
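Assuming this refers to junit-jupiter-params, a typical converted test looks like:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class RdeModeSketchTest {
  // One parameterized method replaces a copy-pasted test per input value.
  @ParameterizedTest
  @ValueSource(strings = {"thin", "full"})
  void modeIsRecognized(String mode) {
    assertTrue(mode.equals("thin") || mode.equals("full"));
  }
}
```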
* Convert rest of RDE tests
* Don't check headers for generated tests
* Expand visibility to fix build breakage
* Bump JUnit versions to 5.6.2
* Add a "buildFmt" gradle target
This does the same thing as the automatic Java build target, except instead of
failing if the code formatting isn't correct, it just automatically reformats as
necessary and continues on.
* Remove unnecessary mustRunAfters
* Make it run tests too, and add :taskTree task
* Rename task to coreDev and remove run afters
* Add task tree dependency
* Actually that may not be necessary
* Change Subdomain class to contain domainRepoId
* Remove jpaTm from Spec11PipelineTest and change clientId -> registrarId
* Remove 'client' from a comment
* Include changes to Spec11Pipeline
* add SafeBrowsingTransforms
* Run style
* Use an enum instead of boolean in EntityTestCase constructor
It's clearer to use an enum than just a simple boolean
* Add Javadoc and make the enum name more verbose
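For illustration, the readability difference at the call site (names hypothetical):

```java
public class EntityTestCaseSketch {
  enum JpaEntityCoverageCheck { ENABLED, DISABLED }

  EntityTestCaseSketch(JpaEntityCoverageCheck coverageCheck) {}

  static void usage() {
    // With a boolean: new EntityTestCaseSketch(true) -- true what?
    // With an enum, the intent is explicit:
    new EntityTestCaseSketch(JpaEntityCoverageCheck.ENABLED);
  }
}
```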
* Double the # of pubapi instances to better handle traffic spikes
We may also consider switching to an automatic scaling mode soon, in
the hope that it works better than the last time we tried it (it would
at least help keep resource costs down).
* Create ContactHistory class + table
This is similar to #587, but with contacts instead of hosts.
This also includes a couple of cleanups for HostHistoryTest and
RegistryLockDaoTest, just making the code more proper: we shouldn't
reference constant revision IDs when using a sequence shared by multiple
classes, and RegistryLockDaoTest can extend EntityTest.
Note as well that we set ContactHistory to use the same revision ID sequence as HostHistory.
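A sketch of how one sequence can back both history tables via JPA annotations (field and sequence names illustrative):

```java
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

public class ContactHistorySketch {
  // HostHistory would declare the same generator, so revision IDs are unique
  // across both tables -- which is why tests must not assume fixed values.
  @Id
  @SequenceGenerator(name = "history_id_seq", sequenceName = "history_id_seq", allocationSize = 1)
  @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "history_id_seq")
  Long revisionId;
}
```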
* Move ContactResource -> ContactBase
* Alter ContactBase and ContactResource
* Verify that the RegistryLock input has the correct registrar ID
We already verify (correctly) that the user has access to the registrar
they specify, but nowhere did we verify that the registrar ID they used
is actually the current sponsor ID for the domain in question. This is
an oversight caused by the fact that our testing framework only uses
admin accounts, which by the nature of things have access to all
registrars and domains.
In addition, rename "clientId" to "registrarId" in the RLPA object
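A sketch of the new check (accessor names are approximate):

```java
import static com.google.common.base.Preconditions.checkArgument;

public class RegistryLockVerificationSketch {
  static void verifySponsorship(DomainBase domain, String registrarId) {
    // Having access to the registrar is not enough; it must also be the
    // domain's current sponsor.
    checkArgument(
        domain.getCurrentSponsorRegistrarId().equals(registrarId),
        "Registrar %s is not the current sponsor of %s",
        registrarId,
        domain.getDomainName());
  }
}
```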
* Change the wording on the incorrect-registrar message
* Output PO number in detailed report
The PO number header was added during the Beam migration, but we forgot
to print the actual data in the corresponding column. This resulted in a
misalignment of columns in the detailed report.
This PR fixes it. Note that we cannot drop the PO number from the header
(even though it is not useful in the detailed report) because the header
represents all fields that are to be parsed from the SQL query results,
and the PO number *is* needed when generating the invoice itself.
Because the header is dual-purposed (both as the required fields in the
parser and as the first line in the detailed report), we have to include
the value of the PO number in the detailed report CSV as well.
* Disambiguate injected Cloud SQL parameter names
This allows us to also inject the BeamJpaModule into RegistryTool, which
allows us to use the SocketJpaTransactionManager in Beam pipelines.
Some side effects of this include:
- duplication of KMS connections -- one standard, one Beam
- duplication of the creation of the partial Hibernate SQL configs
- removal of ambiguity between credentialFileName, credentialFilename,
and credentialFilePath -- we now use credentialFilePath.
- Performing the credential null check when instantiating the SQL
connection rather than when instantiating the module object. See the code
comments for more details on this.
I verified that this compiles and the tests run successfully when
injecting a @SocketFactoryJpaTm into a Beam pipeline.
* Remove two unnecessary config points and change the name of two params
* Use @Config instead of @Named and change the pool size
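A sketch of disambiguating same-typed bindings with a valued qualifier in the style of @Config (illustrative, not the module's actual code):

```java
import dagger.Module;
import dagger.Provides;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.inject.Qualifier;

@Module
public abstract class BeamJpaModuleSketch {
  @Qualifier
  @Retention(RetentionPolicy.RUNTIME)
  public @interface Config {
    String value();
  }

  // Two String bindings can coexist on one Dagger graph because the qualifier
  // values differ; identically-@Named strings from two installed modules
  // would collide.
  @Provides
  @Config("beamCloudSqlCredentialFilePath")
  static String provideCredentialFilePath() {
    return "/path/to/credential.json"; // placeholder
  }
}
```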
* Replace non-visible link with code
* Load Datastore snapshot from backup files
Defined a composite transform that loads from a Datastore export and
concurrent CommitLog files, identifies entities that still exist at the
end of the time window, and resolves their latest states in the window.
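A sketch of the last step, assuming a hypothetical VersionedEntity wrapper that carries a commit timestamp:

```java
import com.google.common.collect.Streams;
import java.util.Comparator;
import org.apache.beam.sdk.transforms.Combine;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class ResolveLatestSketch {
  /** Hypothetical wrapper: one version of an entity plus its commit time. */
  interface VersionedEntity {
    long commitTimeMillis();
  }

  // For each entity key, keep only the newest version seen in the window
  // (export entities plus CommitLog mutations).
  static PCollection<KV<String, VersionedEntity>> resolveLatest(
      PCollection<KV<String, VersionedEntity>> versions) {
    return versions.apply(
        Combine.perKey(
            values ->
                Streams.stream(values)
                    .max(Comparator.comparingLong(VersionedEntity::commitTimeMillis))
                    .get()));
  }
}
```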
* Exclude Test/Monitoring Registrars from escrow
Registrars used for testing and monitoring should not be included in
data escrow deposits. They also lack the required ianaIdentifier
property and would fail ICANN data validation.
Note that since alpha and crash environments have bad data that
break the RDE process, we need to verify this change in Sandbox.
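A sketch of the filter, assuming registrar types along these lines (enum values illustrative):

```java
import static com.google.common.collect.ImmutableList.toImmutableList;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.Streams;

public class EscrowRegistrarFilterSketch {
  enum Type { REAL, TEST, MONITORING } // illustrative subset

  interface Registrar {
    Type getType();
  }

  // Only production registrars go into the deposit; test/monitoring entries
  // lack ianaIdentifier and would fail ICANN validation anyway.
  static ImmutableList<Registrar> escrowEligible(Iterable<Registrar> registrars) {
    return Streams.stream(registrars)
        .filter(r -> r.getType() == Type.REAL)
        .collect(toImmutableList());
  }
}
```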
* Make EppResource.loadCached() use batched fetch
Use a batched fetch (ofy().load().keys(...)) from Datastore in
EppResource.loadCached().
To support this, convert TransactionManager.load(Iterable<VKey>) to
accept more flexible generic parameters and return a map.
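A sketch of the Objectify side of the change (types left generic):

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.Key;
import java.util.Map;

public class BatchedLoadSketch {
  // Before: one Datastore round trip per key, e.g.
  //   for (Key<E> key : keys) { ofy().load().key(key).now(); }
  // After: a single batched call returning a map keyed by Key.
  static <E> Map<Key<E>, E> loadAll(Iterable<Key<E>> keys) {
    return ofy().load().keys(keys);
  }
}
```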
* Simplify datastore key streaming
* Changes requested in review.
* Update to generic Spec11ThreatMatch table
* Fix SQL syntax
* Make changes to the schema and add a test for null and empty threatTypes
* Fix a small typo
* Change the exception thrown with illegal arguments
Change the import for isNullOrEmpty
* Fix import for checkArgument
* Added a threat to test multiple threat types
* Add an entriesToImmutableMap() collector
This adds an entriesToImmutableMap() collector that can be used in place
of toImmutableMap(Map.Entry::getKey, Map.Entry::getValue).
It also fixes up some existing calls that use toImmutableMap() when
terser alternatives exist.
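For reference, a minimal sketch of such a collector on top of Guava (possibly differing from the actual helper):

```java
import static com.google.common.collect.ImmutableMap.toImmutableMap;

import com.google.common.collect.ImmutableMap;
import java.util.Map;
import java.util.stream.Collector;

public final class MapCollectors {
  /** Collects a stream of map entries into an ImmutableMap. */
  public static <K, V> Collector<Map.Entry<K, V>, ?, ImmutableMap<K, V>>
      entriesToImmutableMap() {
    return toImmutableMap(Map.Entry::getKey, Map.Entry::getValue);
  }
}
```

Usage: `map.entrySet().stream().filter(...).collect(entriesToImmutableMap())`.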