* Don't retry permanent failures when uploading ICANN monthly reports
This checks for two kinds of permanent failures that we know will never
succeed, so it makes no sense to retry 11 more times before moving on to
the next file to upload. These errors are:
1. com.google.api.client.http.HttpResponseException: 403
   Your IP address xx.xx.xx.xx is not allowed to connect
2. com.google.api.client.http.HttpResponseException: 400
   <?xml version="1.0" encoding="UTF-8" standalone="yes"?><response xmlns="urn:ietf:params:xml:ns:iirdea-1.0"><result code="2002"><msg>A report for that month already exists, the cut-off date already passed.</msg><description>Date: 2019-09</description></result></response>
To implement this new functionality, this commit also adds a new way to
call the Retrier that allows specifying the isRetryable Predicate (which
is quite useful).
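The new call site isn't shown in this log, but the shape of the change is roughly the following minimal sketch; the class name, parameter names, and maxAttempts handling below are illustrative, not the actual Nomulus Retrier API:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

/** Minimal sketch of retrying with an isRetryable predicate; names are illustrative. */
final class RetrySketch {
  static <V> V callWithRetry(
      Callable<V> callable, Predicate<Throwable> isRetryable, int maxAttempts) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return callable.call();
      } catch (Exception e) {
        // Permanent failures (e.g. a 403 "IP not allowed" response) will never
        // succeed, so bail out immediately instead of burning remaining attempts.
        if (!isRetryable.test(e) || attempt >= maxAttempts) {
          throw e;
        }
      }
    }
  }
}
```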
* Reenable JpaTransactionManager for Alpha and Crash
* Make UploadClaimsListCommand implement CommandWithCloudSql
* Fix wrong call to get password
* Use Cloud SQL Socket library to provision TransactionManager
* Change to use dataSource configs
* Add another test contact for Registry Lock testing
Previously, we only had two contacts -- one per registrar. This PR adds
a second, registry-lock-enabled contact to one registrar for two
reasons:
1. For registry-lock-related testing, we'd like to be able to test both
positively and negatively, making sure that the permissions work the way
they should
2. In general, the UI tests should include the case where we have
multiple contacts in the same registrar. Previously, this was never the
case in tests.
* Don't destroy existing registry lock passwords in contacts
The existing code assumes that the "contacts" segment of the form
contains an exact representation of the registrar contacts. This breaks
when we have a contact with an existing registry lock password, because
we don't want to keep passing around that password in plain text (we
never store it in plain text).
This PR changes the code so that instead of assuming the contact is
provided in its entirety, we load the contact from storage first
(matching by email address) if it exists. We then set the required
fields from the JSON object, and set the password optionally if it was
provided.
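As a sketch, that load-then-update flow looks roughly like this; the Contact type and its fields below are stand-ins, not the real RegistrarContact API:

```java
import java.util.Map;
import java.util.Optional;

/** Minimal sketch of the load-then-update flow; Contact is a stand-in type. */
final class ContactUpdateSketch {

  static final class Contact {
    String emailAddress;
    String name;
    String registryLockPasswordHash; // stored hashed, never echoed back to the UI
  }

  static Contact fromJson(Optional<Contact> existing, Map<String, String> formJson) {
    // Start from the stored contact (matched by email address) when one exists, so
    // fields absent from the form, like the lock password hash, survive the update.
    Contact contact = existing.orElseGet(Contact::new);
    contact.emailAddress = formJson.get("emailAddress");
    contact.name = formJson.get("name");
    // Only touch the password if the form actually supplied a new one.
    if (formJson.containsKey("registryLockPassword")) {
      contact.registryLockPasswordHash = hash(formJson.get("registryLockPassword"));
    }
    return contact;
  }

  private static String hash(String password) {
    // Placeholder only; the real code uses a proper salted hash.
    return Integer.toHexString(password.hashCode());
  }
}
```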
Alternatives:
- Create a separate RegistrarContactPassword object with a
RegistrarContact parent. This increases complexity significantly since
we'd be adding a parent-child relationship and adding more objects to
Datastore during the transition to SQL. It also doesn't completely solve
the problem of "When should we set the password?" because the password
field still must be part of the same form.
- Rearrange the UI so that the password is set as part of a completely
separate form with a separate submit action. This would be possible but
is sub-optimal for two reasons. First, we are trying to avoid
re-engineering the web console as much as possible, since we're likely
to rebuild it from scratch before too long anyway. Second, we want the
lock-password-setting to be part of the standard contact modification
workflow.
* Responses to CR
* Actually we need to allow "removal" of fields
* Remove optional
* Build the contacts in one statement
We don't want to override toDiffableFieldMap because (per the javadoc)
that is supposed to contain sensitive information. So, we should just
remove it before sending it out.
* Fix the --tests filter broken by the segregated test targets
The segregated test targets in core break the --tests filter. Fix this by
defining a "testFilter" property and creating the FilteringTest task type,
which applies the value set by "--tests" to that property.
* Add Bloom filters to the Cloud SQL PremiumList schema
They are slightly different from the existing Bloom filters stored in Datastore
in that they now use an ASCII String encoding rather than the more generic
CharSequence, and there is no maximum size (whereas we previously had to live
within the 1 MB max entity size for Datastore).
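For illustration, building such a filter with Guava looks roughly like this (the helper class and sizing parameter are illustrative):

```java
import static java.nio.charset.StandardCharsets.US_ASCII;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

/** Sketch: a Bloom filter over ASCII-encoded premium labels. */
final class PremiumBloomFilterSketch {
  static BloomFilter<String> build(Iterable<String> labels, int expectedInsertions) {
    // stringFunnel(US_ASCII) hashes each label's ASCII bytes, unlike the Datastore
    // version, which funnels generic CharSequences.
    BloomFilter<String> filter =
        BloomFilter.create(Funnels.stringFunnel(US_ASCII), expectedInsertions);
    labels.forEach(filter::put);
    return filter;
  }
}
```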
* Load persistence.xml classes before adding test entities
* Also use persistence.xml in GenerateSqlSchemaCommand
* Add exception message
* Remove duplicate line
* Add initial support for persisting premium lists to Cloud SQL
This adds support to the `nomulus create_premium_list` command only; support for
`nomulus update_premium_list` will be in a subsequent PR.
The design goals for this PR were:
1. Do not change the existing codepaths for premium lists at all, especially not
on the read path.
2. Write premium lists to Cloud SQL only if requested (i.e. not by default), and
write to Datastore first so as to not be blocked by errors with Cloud SQL.
3. Reuse existing codepaths to the maximum possible extent (e.g. don't yet
re-implement premium list parsing; take advantage of the existing logic), but
also ...
4. Some duplication is OK, since the existing Datastore path will be deleted
once this migration is complete, leaving only the codepaths for Cloud SQL.
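A minimal sketch of how goals 1, 2, and 4 combine; the DAO types here are stand-ins, not the actual classes:

```java
/** Sketch of the dual-write design; the Dao interface is a stand-in. */
final class PremiumListWriterSketch {
  interface Dao {
    void save(Object premiumList);
  }

  private final Dao datastoreDao;
  private final Dao cloudSqlDao;

  PremiumListWriterSketch(Dao datastoreDao, Dao cloudSqlDao) {
    this.datastoreDao = datastoreDao;
    this.cloudSqlDao = cloudSqlDao;
  }

  void save(Object premiumList, boolean alsoWriteToCloudSql) {
    // Datastore is written first and unconditionally, so a Cloud SQL error can
    // never block the existing path; Cloud SQL is opt-in per design goal 2.
    datastoreDao.save(premiumList);
    if (alsoWriteToCloudSql) {
      cloudSqlDao.save(premiumList);
    }
  }
}
```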
* Refactor out common logic
* Add DAO test
* Add tests for parsing premium lists
* Use containsExactly
* Code review changes
* Format
* Re-generate schema
* Fix column names
* Make some tests pass
* Add SQL migration scripts
* Fix test errors
* Add an explanation to dummied-out JPA init
Add a more elaborate explanation of why actual JpaTransactionManager
initialization was removed from the factory.
* Add a DAO for RegistryLock objects
* Add an index on verification code and remove old file
* Move to v4
* Use camelCase in index names
* Javadoc fixes
* Allow alteration of RegistryLock objects in-place
* Save, load-modify, read in separate transactions
* Change the creation timestamp to be a CreateAutoTimestamp
* Add persistence.xml to the war files
* Always use the DummyJpaTransactionManager
Use the DJTM until we get all of the dependencies set up for all of the
environments.
This shouldn't affect any of the unit tests, since these use the
JpaTransactionManagerRule to set up a local database and connection.
This fixes the App Engine build.
* Implement CreateAutoTimestampConverter
Implement a JPA-based converter for CreateAutoTimestamp, allowing us to
persist instances of this class.
Note that converters appear to be required to convert to and from database
types that are generally known to JDBC. For example, conversion to Timestamp
works, conversion to OffsetDateTime does not (even though this works through
the JDBC interface directly).
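A sketch of that converter pattern, using a plain Joda DateTime for brevity (the real converter wraps CreateAutoTimestamp):

```java
import java.sql.Timestamp;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

/** Sketch of the converter pattern; the real class wraps CreateAutoTimestamp. */
@Converter(autoApply = true)
public class DateTimeConverterSketch implements AttributeConverter<DateTime, Timestamp> {

  @Override
  public Timestamp convertToDatabaseColumn(DateTime attribute) {
    // Converting to java.sql.Timestamp works because JDBC knows that type
    // natively; converting to OffsetDateTime did not, per the note above.
    return attribute == null ? null : new Timestamp(attribute.getMillis());
  }

  @Override
  public DateTime convertToEntityAttribute(Timestamp dbData) {
    return dbData == null ? null : new DateTime(dbData.getTime(), DateTimeZone.UTC);
  }
}
```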
* Give JpaTransactionManagerRule more parameters
Allow users of the rule to add annotated classes and properties, both useful
for testing.
* Change in response to review.
* Changes for review.
* Move test EntityManagerFactory create method
Move the test create method into the JpaTransactionManagerRuleTest.
* Remove nomulus SQL dialect from G.S.S.Command
Remove NomulusPostgreSQLDialect from GenerateSqlSchemaCommand (it has been
moved to its own top-level class).
* Upgrade to Truth 1.0
Refactored fail(...) to assertWithMessage().fail().
Upgraded com.google.monitoring-client family of dependencies to 1.0.6
Also fixed a bad use of io.StringIO (on a binary buffer) recently introduced to
google-java-format-diff.py.
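The fail(...) refactoring looks roughly like this in a test body:

```java
import static com.google.common.truth.Truth.assertWithMessage;

final class TruthMigrationSketch {
  void example() {
    // Previously: fail("expected an exception");
    assertWithMessage("expected an exception").fail();
  }
}
```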
* Fix dependency-locking config
Reenable dependency locking after a bug erroneously turned it off.
Removed the Guava-related workaround that forcefully resolved to
the -jre distribution.
Enabled locking for buildSrc by updating its property file.
Updated all lock files.
* Output command test output as well as consuming it
CommandTestCase currently consumes stdout & stderr for the command being
tested. Unfortunately, this results in us not being able to see the command
output. Add an output splitter so that output gets written to the original
stream in addition to being captured.
A simpler approach would be to print the captured data after command
completion. However, this won't work for tests that hang, and it also
won't display results in real time.
Tested: Ran a command test with verboseTestOutput=true, verified that standard
output was visible.
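A minimal sketch of such a splitter stream (the class name is illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;

/** Sketch of the output splitter: every byte goes to both streams. */
final class TeeOutputStream extends OutputStream {
  private final OutputStream original;
  private final OutputStream capture;

  TeeOutputStream(OutputStream original, OutputStream capture) {
    this.original = original;
    this.capture = capture;
  }

  @Override
  public void write(int b) throws IOException {
    original.write(b); // original stream, so output stays visible in real time
    capture.write(b);  // buffer that the test inspects afterwards
  }
}
```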
* Save and restore original stdout/err in cmd tests
We have to restore the original stdout/stderr print streams; otherwise we
end up nesting them across tests, which eventually causes the RDE tests to OOM.
* Add RegistryLock schema to Flyway deployment folder
Added creation script of RegistryLock to Flyway deployment folder.
Fixed previous scripts (PremiumList- and ClaimsList-related) for the
FK name change (caused by table name changes: names are quoted now).
We should consider generating foreign key names ourselves.
Since the alpha database is empty, we dropped and recreated the schema.
Added instructions on how to submit new database incremental changes
in the README file.
Updated RegistryLock.java, removing unnecessary annotations:
- For most fields, the 'name=' property is no longer necessary now that
the naming strategy is in place. The exceptions are the two used in
the unique index.
- The @Column annotation is implicit.
* Add RegistryLock SQL schema
* Refactor a bit
* Move registrylock -> domain
* Clearing up lock workflow
* Add more docs and remove LockStatus
* Responses to CR
* Add repoId javadoc
* Add registry lock to persistence xml file
* Quote rather than backtick
* Remove unnecessary check
* File TODO
* Remove uniqueness constraint on verification code
* Remove import
* Add index
* Add to SQL generation task
* Move fields around to be the same order as Hibernate's generated sql
* Use Flyway to deploy SQL schema to non-prod
Added Gradle tasks to deploy and drop schema in alpha
using Flyway.
Updated ClaimsList.java so that the Hibernate-generated
schema would use the right types.
Using 'varchar(255)' instead of 'text' for string columns
for now. We will need to investigate how to force Hibernate
to use the desired types in all cases.
* Use Flyway to deploy SQL schema to non-prod
Added Gradle tasks to deploy and drop schema in alpha using Flyway.
Made GenerateSqlSchemaCommand use a custom dialect that converts all
varchar types to 'text' and timestamps to 'timestamptz'.
Corrected the type of ClaimsEntry's revision_id column to plain int8,
not bigserial: this column tracks the parent table's primary key and
should not be auto-generated.
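The custom dialect amounts to registering type overrides; a minimal sketch, assuming Hibernate 5's stock PostgreSQL dialect as the base class:

```java
import java.sql.Types;
import org.hibernate.dialect.PostgreSQL95Dialect;

/** Sketch of the custom dialect; the actual base class and overrides may differ. */
public class NomulusPostgreSQLDialectSketch extends PostgreSQL95Dialect {
  public NomulusPostgreSQLDialectSketch() {
    // Emit 'text' for all varchar columns and 'timestamptz' for timestamps.
    registerColumnType(Types.VARCHAR, "text");
    registerColumnType(Types.TIMESTAMP, "timestamptz");
  }
}
```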
* Re-add other schema classes
* Add Cloud SQL schema for premium lists
This won't work quite yet, pending a solution for the type translator issue
(which will be needed for the currency field, and potentially others).
* Generate basic schema for all of DomainBase
Generate a basic schema for DomainBase and everything that is part of it.
This still isn't complete; in particular, it lacks:
- Correct conversions for problematic types (e.g. DateTime, Key...)
- Schema generation for history records.
- Name translation.
* Make parameter names in generate_sql_schema command consistent
The rest of the nomulus commands use underscores for delimiting words in
parameter names, so this should too.
Also fixed capitalization of some proper nouns.
* Move EntityManagerFactoryProviderTest to fragile
* Add EMF Provider Test to docker tests
Add EntityManagerFactoryProviderTest to the Docker-incompatible test
patterns and use that list to compose the fragile tests.
* Start postgresql container in generate_sql_schema
Add a --start-postgresql option to the nomulus generate_sql_schema command so
that users don't have to start their own docker container to run it.
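A sketch of how the command might start that container, assuming Testcontainers is the mechanism (the image tag is illustrative):

```java
import org.testcontainers.containers.PostgreSQLContainer;

/** Sketch, assuming Testcontainers manages the throwaway database. */
final class StartPostgresqlSketch {
  static void run() {
    try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:9.6")) {
      postgres.start();
      // Hand the container's connection parameters to the schema generator.
      System.out.printf(
          "%s %s %s%n", postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
    }
  }
}
```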
* Made the default behavior to give guidance
* Don't write TX records for domains deleted in autorenew grace period
When the project was originally being designed, we envisioned having a purely
point-in-time architecture that would allow the system to run indefinitely
without requiring any background batch jobs. That is, you could create a domain,
and 10 years later you could infer every autorenewal billing event that should
have happened during those 10 years, without ever having to run any code that
would go through and retroactively create those events as they happened.
This ended up being very complicated, especially when it came to generating
invoices, so we gave up on it and instead wrote the
ExpandRecurringBillingEventsAction mapreduce, which would run as a cronjob and
periodically expand the recurring billing information into actual one-time
billing events. This made the invoicing scripts MUCH less complicated since they
only had to tabulate one-time billing events that had actually occurred over the
past month, rather than perform complicated logic to infer every one-time event
over an arbitrarily long period.
I bring this up because this architectural legacy explains why billing events
are more complicated than could otherwise be explained from current
requirements. This is why, for instance, when a domain is deleted during the
45-day autorenewal period, the ExpandRecurringBillingEventsAction will still write
out a history entry (and corresponding billing events) on the 45th day, because
it needs to be offset by the cancellation billing event for the autorenew grace
period that was already written out synchronously as part of the delete flow.
This no longer really makes sense, and it would be simpler to just not write out
these phantom history entries and billing events at all, but it would be a
larger modification to fix this, so I'm not touching it here.
Instead, what I have done is to simply not write out the DomainTransactionRecord
in the mapreduce if the recurring billing event has already been canceled
(i.e. because the domain was deleted or transferred). This seems inconsistent
but actually does make sense, because domain transaction records are never
written out speculatively (unlike history entries and billing events); they
correspond only to actions that have actually happened. This is because they were
architected much more recently than billing events, and don't use the
point-in-time hierarchy.
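As an illustrative-only sketch (names are hypothetical), the change boils down to a guard like this:

```java
import java.util.Collections;
import java.util.List;

/** Hypothetical sketch of the guard added to the expand-recurrings mapreduce. */
final class ExpandRecurringSketch {
  static <T> List<T> transactionRecordsToWrite(boolean recurringWasCanceled, List<T> records) {
    // If the recurrence was already canceled (domain deleted or transferred), no
    // autorenew actually happened, so there is nothing for a record to offset.
    return recurringWasCanceled ? Collections.emptyList() : records;
  }
}
```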
So, here's a full accounting of how DomainTransactionRecords work as of this commit:
1. When a domain is created, one is written out.
2. When a domain is explicitly renewed, one is written out.
3. When a domain is autorenewed, one is written out at the end of the grace period.
4. When a domain is deleted (in all cases), a record is written out recording the
deletion.
5. When a domain is deleted in the add grace period, an offsetting record is
written out with a negative number of years, in addition to the deletion record.
6. When a domain is deleted in the renewal grace period, an offsetting record is
likewise written out in addition.
7. When a domain is deleted in the autorenew grace period, there is no record that
needs to be offset because no code ran at the exact time of the autorenew, so
NO additional record should be written out by the expand mapreduce.
*THIS IS CHANGED AS OF THIS COMMIT*.
8. When a domain is transferred, all existing grace periods are cancelled and
corresponding cancelling records are written out. Note that transfers include a
mandatory, irrevocable 1 year renewal.
9. In the rare event that a domain is restored, all recurring events are
re-created, and there is a 1 year mandatory renewal as part of the restore with
corresponding record written out.
So, in summary, billing events and history entries are often written out
speculatively, and can subsequently be canceled, but the same is not true of
domain transaction records. Domain transaction records are only written out as
part of a corresponding action (which for autorenewals is the expand recurring
cronjob).
* Remove unused import