* Added "show_upgrade_diffs" script
"show_upgrade_diffs" pulls a git directory and a user branch from nomulus and
compares all of the versions of all dependencies specified in all lockfiles in
the master branch with those of the user branch and prints a nice, terse
little colorized report on the differences.
This is useful for reviewing a dependency upgrade.
* Add license header
* Changes requested in review
* Changes for review
- Change format of output so different actions are displayed somewhat
consistently.
- Make specifying a directory optional; if not specified, create a temporary
directory and clean it up afterwards.
* Add a "ReplaySpecializer" to fix certain replays
Because a given entity in either database type can map to multiple entities
in the other database, there are certain replication scenarios that don't
quite work. Currently known examples include:
- propagation of cascading deletes from datastore to SQL
- creation of datastore indexed entities for SQL entities (where indexes are a
first-class concept)
This change introduces a ReplaySpecializer class, which allows us to declare
static method hooks at the entity class level that define any special
operations that need to be performed before or after replaying a mutation for
any given entity type.
Currently, "before SQL delete" is the only supported hook. A change to
DomainContent demonstrating how this facility can be used to fix problems in
cascading delete propagation will be sent as a subsequent PR.
* Throw exception on beforeSqlDelete failures
* Changes for review
* Convert DomainTransferRejectFlow to use tm() methods
This change includes a few other changes necessary for converting
DomainTransferRejectFlowTest to a dual-database test. Namely:
- The basic "use tm() instead of ofy()" and "branching database
selection on what were previously raw ofy queries"
- Modification of the PollMessage convertVKey methods to do what they
say they do
- Filling the generic pending / response fields in PollMessage based on what type of
poll message it is (this has to be done because SQL is not very good at
storing ambiguous superclasses)
- Setting the generic pending / response fields in PollMessage upon
build
- Filling out the serverApproveEntities field in DomainTransferData with
all necessary poll messages / billing events that should be cancelled on
rejection
- Scattered changes in DatabaseHelper to make sure that we're saving and
loading entities correctly where we weren't before
* Disable whois caching in nomulus tool
The whois commands previously served output generated from cached EppResource
objects in most cases. While this is entirely appropriate on the server side,
it is less useful when these commands are run from nomulus tool; in fact, when
run from the "shell" command, changes applied from the shell were not visible
to a subsequent "whois", which may serve information from an earlier, cached
version of the resource rather than the latest version.
This implementation uses dagger to parameterize cached/non-cached modes. I did
consider simply parameterizing the query commands in all cases as discussed;
however, having gone down the daggerization path and gotten it to work, I find
this approach preferable. There's really no case for choosing
cached/non-cached on a per-command basis, and doing so would require
propagating the flag through all levels of the API and all call sites.
Tested: In addition to the new unit test which explicitly verifies the
caching/noncaching behavior of the new commands, tested the actual failing
sequence from "nomulus -e sandbox shell" and verified that the correct results
are served after a mutation.
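The wiring-time choice described above can be illustrated with a toy sketch. None of these names are the actual Nomulus classes; the point is only that the cached/direct decision is made once where the object graph is assembled (the server wires the cached variant, nomulus tool the direct one), so no flag threads through each command's API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins for the real lookup classes.
interface ResourceLookup {
  String lookup(String domain);
}

class WhoisLookupSketch {
  // Direct variant: always reads the latest value from the backing store.
  static ResourceLookup direct(Map<String, String> store) {
    return store::get;
  }

  // Cached variant: remembers the first answer per domain, which is the
  // behavior that made shell mutations invisible to a subsequent "whois".
  static ResourceLookup cached(Map<String, String> store) {
    Map<String, String> cache = new HashMap<>();
    return domain -> cache.computeIfAbsent(domain, store::get);
  }
}
```

The callers only ever see a `ResourceLookup`, which is what makes the per-command flag unnecessary.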
* Fixed copyright year
* More copyright date fixes
* Added WhoisCommandFactoryTest to fragile tests
I suspect that this test has some contention with other tests; it's not clear
why.
* Replay Cloud SQL transactions against datastore
Implement the ReplicateToDatastore cron job that will apply all Cloud SQL
transactions to the datastore. The last transaction id is stored in a
LastSqlTransaction entity in datastore.
Note that this will not be activated in production until a) the cron
configuration is updated and b) the cloudSql.replicateTransactions flag is set
to true in the nomulus config file.
* Post-review changes
Fixed immutability issues with LastSqlTransaction; write a single transaction
at a time to datastore.
* Changes requested in review
* Get a batch of SQL transactions
Read a batch of SQL transactions at a time and then process them
transactionally against datastore.
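The checkpointed batch replay described above can be sketched as below. The class and field names are invented for illustration; in the real job the checkpoint is the LastSqlTransaction entity, and the mutations plus the checkpoint update are committed in a single datastore transaction.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Rough sketch of the replay loop, with invented names.
class ReplayLoopSketch {
  // Stands in for the LastSqlTransaction entity in datastore.
  final AtomicLong lastAppliedId = new AtomicLong(0);
  final List<Long> applied = new ArrayList<>();

  void replayBatch(List<Long> txnIds) {
    for (long id : txnIds) {
      if (id <= lastAppliedId.get()) {
        continue; // already replayed; makes retried batches idempotent
      }
      // In the real job, applying the mutations and advancing the checkpoint
      // happen in one datastore transaction so a crash cannot double-apply.
      applied.add(id);
      lastAppliedId.set(id);
    }
  }
}
```

Because the checkpoint advances with each applied transaction, overlapping batches (e.g. after a retry) are skipped rather than re-applied.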
* Bring this up-to-date with the codebase
* Changes requested in review
* Fixed date in copyright
* Clean up Gradle Flyway tasks in :db
Simplified the command line by revising the semantics of some
properties.
Added examples of Flyway task invocations.
This script still uses the GCS file-based credential. We will migrate it
to the Secret Manager soon.
* Log forbidden HTTP request method at warning
This seems more reasonable. It will make potential issues with how requests
are generated more discoverable in the log.
* Reject handshakes with bad TLS protocols and ciphers
* Fix protocols
* make cipher suite list static and fix tests
* Delete unnecessary line
* Add start time configuration for enforcement
* small format fix
* Add multiple ciphersuite test
* fix gradle lint
* fix indentation
GAE cron only issues HTTP GET requests to the endpoint in question. This
particular endpoint only allows POSTs. As a result this cron job never
succeeded. This is not a big problem, as this job is meant to catch up on any
unforeseen upload failures; even if it needs to catch up but fails, the
monthly staging job (which is enqueued correctly by cron) will eventually
catch everything up to date.
* Update a few plugins for Java 11 compatibility
Guice 5.0.1 is now compatible with Java 11. However, we don't depend on Guice
directly; rather, Soy depends on Guice. So I added a direct dependency on
Guice 5.0 just before Soy in order to front-load Soy and pull in the newer
version.
Mockito 3.7.7 is now compatible with Java 11. The complication is that
we need to use the inline version of Mockito, which among other things
also allows mocking for final classes (hooray!). It will eventually
become the default Mockito mock maker but for now it needs to be
manually activated.
Note that the inline version now introduces another warning:
```
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
```
I think this is WAI due to how the inline mock maker works; waiting on the
author to confirm.
After these two changes, the only illegal reflective access is caused by the
App Engine SDK tools, which we will rid ourselves of when we migrate off of
GAE.
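For reference, the Mockito-documented way to activate the inline mock maker manually (when depending on mockito-core rather than the mockito-inline artifact) is a classpath resource file; the path below assumes a standard Gradle test-source layout:

```
# contents of src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
mock-maker-inline
```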
* Restore package-lock.json
* Add GetPremiumListCommand
When testing the premium list refactor, it would have been nice and
convenient to have this. Currently we have no way of inspecting premium
list contents that I'm aware of.
- Adds a CriteriaQueryBuilder class that allows us to build
CriteriaQuery objects with sane and modular WHERE and ORDER BY clauses.
CriteriaQuery requires that all WHERE and ORDER BY clauses be specified
at the same time (else later ones will overwrite the earlier ones) so in
order to have a proper builder pattern we need to wait to build the
query object until we are done adding clauses.
- In addition, encapsulating the query logic in the CriteriaQueryBuilder
class means that we don't need to deal with the complicated Root/Path
branching, otherwise we'd have to keep track of CriteriaQuery and Root
objects everywhere.
- Added a REPLAYED_ENTITIES TransitionId that will represent all
replayed entities, e.g. EppResources. Also sets this, by default, to
always be CLOUD_SQL if we're using the SQL transaction manager in tests.
- Added branching logic in RdapEntitySearchAction based on that transition
ID that determines whether we do the existing ofy query logic or JPA
logic.
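The deferred-build idea behind CriteriaQueryBuilder can be sketched in simplified form. This is not the real class: real JPA predicates are replaced with plain strings here, and the query is rendered as text. The point it demonstrates is the one above: since `CriteriaQuery#where` replaces any previously set restrictions, the builder accumulates clauses and applies them all in one shot when `build()` runs.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: clauses are strings, not JPA Predicate objects.
class CriteriaQueryBuilderSketch {
  private final List<String> whereClauses = new ArrayList<>();
  private final List<String> orderBy = new ArrayList<>();

  CriteriaQueryBuilderSketch where(String clause) {
    whereClauses.add(clause);
    return this;
  }

  CriteriaQueryBuilderSketch orderByAsc(String field) {
    orderBy.add(field + " ASC");
    return this;
  }

  String build() {
    // All WHERE and ORDER BY clauses are combined exactly once, so no
    // clause ever overwrites an earlier one.
    StringBuilder sb = new StringBuilder("SELECT e FROM Entity e");
    if (!whereClauses.isEmpty()) {
      sb.append(" WHERE ").append(String.join(" AND ", whereClauses));
    }
    if (!orderBy.isEmpty()) {
      sb.append(" ORDER BY ").append(String.join(", ", orderBy));
    }
    return sb.toString();
  }
}
```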
Because we don't store serverApproveEntities specifically as a set in
the SQL world, we need to make sure that the entities are all separated
out and stored if they exist. For domain transfers, there are three
separate poll messages (client losing, client gaining, autorenew), so we
need to store and retrieve each of them.
Found this while converting domain transfer flows to SQL.
* Partially convert RDAP ofy calls to tm calls
This converts the simple retrieval actions but does not touch the more
complicated search actions -- those use some ofy() query searching logic
and will likely end up being significantly more complicated than this
change. Here, though, we are just changing the calls that can be
converted easily to tm() lookups.
To change in future PRs:
- RdapDomainSearchAction
- RdapEntitySearchAction
- RdapNameserverSearchAction
- RdapSearchActionBase
* Update NPM plugin and hardcode versions of Node / NPM to use
The plugin we were using before was a bit old (last updated in March 2019);
this one is newer and updates the package-lock file with the new dependency
upgrades.
* Refactor PremiumList storage and retrieval for dual-database setup
Previously, the storage and retrieval code was scattered across various
places haphazardly and there was no good way to set up dual database
access. This reorganizes the code so that retrieval is simpler and it
allows for dual-write and dual-read.
This includes the following changes:
- Move all static / object retrieval code out of PremiumList -- the
class should solely consist of its data and methods on its data and it
shouldn't have to worry about complicated caching or retrieval
- Split all PremiumList retrieval methods into PremiumListDatastoreDao
and PremiumListSqlDao that handle retrieval of the premium list entry
objects from the corresponding databases (since the way the actual data
itself is stored is not the same between the two)
- Create a dual-DAO for PremiumList retrieval that branches between
SQL/Datastore depending on which is appropriate -- it will read from
and write to both but only log errors for the secondary DB
- Cache the mapping from name to premium list in the dual-DAO. This is a
common code path regardless of database so we can cache it at a high
level
- Cache the ways to go from premium list -> premium entries in the
Datastore and SQL DAOs. These caches are specific to the corresponding
DB and should thus be stored in the corresponding DAO.
- Move the database-choosing code from the actions to the lower-level
dual-DAO. This is because we will often wish to access this premium list
data in flows, and all accesses should use the proper DB-selecting code
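The dual-write behavior described above (write to both, but only log errors for the secondary DB) can be sketched like this. The interface and class names are invented for the example; the real DAOs of course take entity objects, not strings.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the per-database DAOs.
interface PremiumListDaoSketch {
  void save(String list);
}

class DualDaoSketch {
  final PremiumListDaoSketch primary;
  final PremiumListDaoSketch secondary;
  final List<String> secondaryErrors = new ArrayList<>();

  DualDaoSketch(PremiumListDaoSketch primary, PremiumListDaoSketch secondary) {
    this.primary = primary;
    this.secondary = secondary;
  }

  void save(String list) {
    primary.save(list); // failures in the primary DB propagate normally
    try {
      secondary.save(list);
    } catch (RuntimeException e) {
      // Secondary-DB problems are recorded/logged, not surfaced to callers.
      secondaryErrors.add(e.getMessage());
    }
  }
}
```

This keeps the primary database authoritative during the migration while still surfacing secondary-DB divergence in the logs.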
* Properly set up JPA in BEAM workers
Sets up a singleton JpaTransactionManager on each worker JVM for all
pipeline nodes to share.
Also added/updated relevant dependencies. The BEAM SDK version change
caused the InitSqlPipeline's graph to change.
* Fix ContactTransferData SQL loads
ContactTransferData is currently loaded back from SQL as an unspecialized
TransferData object. Replace it with the ContactTransferData object that it
is used to reconstitute.
It's likely that this could be done more straightforwardly with a schema
change.
* Changes requested in review
* Fix obscure bug when checking restore prices of duplicate domain names
There were instances of "java.lang.IllegalArgumentException: Multiple entries
with same key" in the logs, caused by attempting to construct an ImmutableMap
containing duplicate keys. It turns out this was happening in the domain check
flow when the following conditions were all simultaneously met:
1. The older v06 fee extension is used
2. The same domain name is being queried multiple times in a single check
command (which is valid per the spec but doesn't actually make any sense)
3. Said domain exists
4. The cost of a restore (an uncommon operation) is being checked
When all of those conditions were met, an error was being thrown when the
dupe-containing list of domain names was used as the keys of a new Map. This
fixes that bug by calling .distinct() first.
Give enough registrars enough typewriters ...
BUG=179052195
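The failure mode and fix can be reproduced in miniature. This is not the actual flow code: the price function here is a placeholder (string length), and `Collectors.toMap` stands in for the ImmutableMap collector, but both throw on duplicate keys in the same way.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Minimal reproduction of the bug class: keying a map by a list that can
// contain duplicates throws ("Multiple entries with same key" for
// ImmutableMap, IllegalStateException for Collectors.toMap). The fix is to
// call .distinct() on the stream before collecting.
class RestorePriceCheckSketch {
  static Map<String, Integer> priceByName(List<String> names) {
    return names.stream()
        .distinct() // the fix: collapse duplicate domain names before keying
        .collect(Collectors.toMap(Function.identity(), String::length));
  }
}
```

Without the `.distinct()` call, passing the same domain name twice throws; with it, the duplicate query simply maps to the same entry.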
* Revert BEAM pipeline back to SQL credential file
Stop using the SecretManager for SQL credential in BEAM for now. The
SecretManager cannot be injected into the code on pipeline workers
because RegistryEnvironment is not set.
See b/179839014 for details.
* Add db-compare tests to three more flows
Add database comparison to the replay tests for DomainDeleteFlowTest,
DomainRenewFlowTest and DomainUpdateFlowTest.
* Add databaseTransitionSchedule entity
* add UpdateDatabaseTransitionScheduleCommand
* small fixes
* change entity structure to no longer be singleton
* add get command
* fix getCache
* Change id to TransitionId enum
* more fixes
* Cleanup tests
* Add link to javadoc
* Add lastUpdateTime
* fix datatype of getCached
* Add a presubmit check to require use of templated SQL string literals
This PR proposes a coding style convention that helps prevent
SQL-injection attacks, and is easy to enforce in the presubmit check.
SQL injections can be effectively prevented if all parameterized queries
are generated using the proper param-binding methods. In our project,
which uses Hibernate exclusively, this can be achieved if we all follow
a simple convention: only use constant SQL templates, assigned to static
final String variables, as the first parameter to the create(Native)Query
methods.
This PR adds a presubmit check to enforce the proposed rule and modifies
one class as a demo. If the team agrees with this proposal, we will
change all other use cases.
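A toy version of such a check could look like the following. The regex and class name are invented for illustration (the real presubmit check may match differently): it flags any `createQuery`/`createNativeQuery` call whose first argument is an inline string literal rather than a named constant.

```java
import java.util.regex.Pattern;

// Invented sketch of the presubmit rule, not the actual check.
class SqlTemplatePresubmitSketch {
  // Matches e.g. createNativeQuery("SELECT ... -- an inline SQL literal.
  static final Pattern INLINE_SQL =
      Pattern.compile("create(Native)?Query\\(\\s*\"");

  static boolean violates(String sourceLine) {
    return INLINE_SQL.matcher(sourceLine).find();
  }
}
```

A call like `createNativeQuery(GET_ALL_SQL)` passes, because the constant reference is not a string literal at the call site.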
* Make BigqueryPollJobAction endpoint internal only
This endpoint makes use of java object deserialization, which allows a
malicious actor to craft a request that can initiate overly broad actions on
the server. Since this endpoint is not widely used for operational purposes,
limit its authorization to "internal only" so that no user agents (even with
admin privs) can access it.
* Add start date for cert enforcement in production
* Add TODO to remove start date check after start date
* revert changes to package-lock.json
* Make start time a constant
* Wire up DeleteExpiredDomainsAction so that it can actually be called
For now I'm just going to be calling it manually (and on sandbox for starters),
but in a few weeks, if all looks good, I'll add the cron job to regularly call
it in production, and this feature will thus be done.
* Use END_OF_TIME as sentinel value for domain's autorenewEndTime
Datastore inequality queries don't work correctly for null; null is treated as
the lowest value possible, which is definitely the opposite of the intended
meaning here.
This includes an @OnLoad for backfilling purposes using the ResaveAll mapreduce.
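The sentinel idea can be shown with a small sketch. The names and the use of `Instant` are illustrative (the real code presumably uses its own END_OF_TIME constant and datastore query filters): a filter like "endTime <= now" misbehaves when "never ends" is stored as null, since datastore sorts null below every real timestamp, whereas a far-future sentinel compares the intended way.

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of using a far-future sentinel instead of null for "no end time".
class AutorenewEndTimeSketch {
  static final Instant END_OF_TIME = Instant.parse("9999-12-31T23:59:59Z");

  // Domains whose autorenew period has ended as of `now`. A domain with
  // END_OF_TIME never matches, which is the intended "never ends" meaning.
  static List<Instant> endedBy(List<Instant> endTimes, Instant now) {
    return endTimes.stream()
        .filter(t -> !t.isAfter(now))
        .collect(Collectors.toList());
  }
}
```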
* Make cross database comparison recursive
Cross-database comparison was previously just a shallow check: fields marked
with DoNotCompare on nested objects were still compared. This caused problems
in some cases where there are nested immutable objects.
This change introduces recursive comparison. It also provides a
hasCorrectHashCode() method that verifies that an object has not been mutated
since the hash code was calculated, which has been a problem in certain cases.
Finally, this also fixes the problem of objects that are mutated in multiple
transactions: we were previously comparing against the value in datastore, but
this doesn't work in these cases because the object in datastore may have
changed since the transaction that we are verifying. Instead, check against
the value that we would have persisted in the original transaction.
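The hasCorrectHashCode() idea can be sketched as follows. The class is invented for illustration; the real check lives on the immutable-object comparison utilities. It relies on the object caching its hash code the first time it is computed, so a later recomputation exposes any mutation that happened in between.

```java
// Invented sketch: cache the hash code on first computation, then verify the
// object still hashes to the same value, i.e. no field was mutated after.
class HashCheckedSketch {
  int value;
  Integer cachedHash;

  HashCheckedSketch(int value) {
    this.value = value;
  }

  @Override
  public int hashCode() {
    if (cachedHash == null) {
      cachedHash = Integer.hashCode(value);
    }
    return cachedHash;
  }

  boolean hasCorrectHashCode() {
    // Recompute from current state and compare against the cached value.
    return cachedHash == null || cachedHash == Integer.hashCode(value);
  }
}
```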
* Changes requested in review
* Converted check method interfaces
Per review discussion, converted check method interface so that they
consistently return a ComparisonResult object which encapsulates a success
indicator and an optional error message.
* Another round of changes on ImmutableObjectSubject
* Final changes for review
Removed unnecessary null check, minor reformatting.
(this also removes an obsolete nullness assertion from an earlier commit that
should have been fixed in the rebase)
* Try removing that nullness check import again....
* Convert certificate strings to certificates
* Format fixes
* Revert "Format fixes"
This reverts commit 26f88bd313.
* Revert "Convert certificate strings to certificates"
This reverts commit 6d47ed2861.
* Convert strings to certs for validation
* Add clarification comments
* Add test to verify encoded cert from proxy
* Add some helper methods
* add tests for PEM with metadata
* small changes
* replace .com with .test
* Add clientCertificate to TlsCredentials.toString()
FlowRunner.run() logs these credentials to the GAE logs by implicitly using
the toString() method, so we need to add the certificate to toString() if we
want it to appear in the logs.