* Run both the AppEngineRule and JpaTransactionManagerRule in tests
As previously written, the AppEngineRule wrapped the infinite loop of the
server handling requests. This changes the code so that we have a rule
chain that handles both the AppEngine management and the JPA transaction
manager management, so we can actually use SQL in tests and in the test
server.
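A minimal sketch of what such a chain can look like with JUnit 4's RuleChain; the rule constructors shown here are simplified assumptions, not the project's exact builders:
```java
import org.junit.Rule;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;

public class SqlEnabledServerTest {
  // The outer rule's setup runs first and its teardown runs last,
  // so the AppEngine environment wraps the JPA transaction manager.
  @Rule
  public final TestRule chain =
      RuleChain.outerRule(new AppEngineRule())       // simplified constructor
          .around(new JpaTransactionManagerRule());  // simplified constructor

  // Tests here can use both Datastore (via AppEngine) and Cloud SQL (via JPA).
}
```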
* remove log line
We should communicate to users why this command failed: they are not
allowed to use discounted allocation tokens on premium domains.
Currently it still fails, but we don't yet tell them why.
* Add a GET action and tests for registry lock retrieval
* Create isVerified method
* Allow lock access for admins even if they're not enabled on the registrar
* Simple CR responses
* Move locks retrieval to the GET action
* add newline at eof
* Switch to using ID
When we add an extra test entity to the current JpaTransactionManagerRule by calling jpaRule.withEntityClass(TestEntity.class) and jpaRule.withProperty(Environment.HBM2DDL_AUTO, "update"), the rule overrides the golden database schema with the schema derived from the current entity class (this is the expected behavior when HBM2DDL_AUTO is enabled). This behavior prevents us from detecting breaking changes in ORM classes.
This PR fixes the issue by adding a HibernateSchemaExporter that exports the DDL script for a given extra entity class, and by making JpaTransactionManagerRule execute that DDL script to create the corresponding table when initializing the database. This PR also simplifies adding extra entity classes for testing: you no longer need to (and shouldn't) call jpaRule.withProperty(Environment.HBM2DDL_AUTO, "update").
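For reference, a minimal sketch of the export step using Hibernate 5's SchemaExport; the dialect, output path, and entity definition are assumptions, and the real HibernateSchemaExporter may differ:
```java
import java.util.EnumSet;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.boot.Metadata;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.tool.hbm2ddl.SchemaExport;
import org.hibernate.tool.schema.TargetType;

public class ExportExtraEntityDdl {
  @Entity
  static class TestEntity {
    @Id long id;
  }

  public static void main(String[] args) {
    StandardServiceRegistry registry =
        new StandardServiceRegistryBuilder()
            .applySetting("hibernate.dialect", "org.hibernate.dialect.PostgreSQL95Dialect")
            .build();
    // Derive the schema from the extra entity class only.
    Metadata metadata =
        new MetadataSources(registry).addAnnotatedClass(TestEntity.class).buildMetadata();
    // Write a CREATE script; the rule can then execute it against the test database.
    new SchemaExport()
        .setFormat(true)
        .setOutputFile("/tmp/test_entity.sql")
        .createOnly(EnumSet.of(TargetType.SCRIPT), metadata);
  }
}
```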
* Don't wrap exceptions thrown inside Cloud SQL transactions
There's no reason to wrap them in PersistenceException, and doing so makes
thrown exceptions harder to deal with, as seen in the diffs to the test
classes in this commit.
* Add a converter for CurrencyUnits stored in the database
This uses the well-known String representation for currency units. It also
provides a base class for other converters that will persist the
toString() representation.
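A minimal sketch of such a converter, written directly against Joda Money's CurrencyUnit (the real version factors the toString() round-trip into the shared base class mentioned above):
```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.joda.money.CurrencyUnit;

@Converter(autoApply = true)
public class CurrencyUnitConverter implements AttributeConverter<CurrencyUnit, String> {
  @Override
  public String convertToDatabaseColumn(CurrencyUnit currency) {
    return currency == null ? null : currency.toString(); // e.g. "USD"
  }

  @Override
  public CurrencyUnit convertToEntityAttribute(String columnValue) {
    return columnValue == null ? null : CurrencyUnit.of(columnValue);
  }
}
```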
* Add DB and formatting changes
* Add tests, make minor fixes
* Don't retry permanent failures when uploading ICANN monthly reports
There are two kinds of permanent failures checked for here that we know will
never succeed, so it makes no sense to keep retrying 11 more times before
moving on to the next file to upload. These errors are:
1. com.google.api.client.http.HttpResponseException: 403
   Your IP address xx.xx.xx.xx is not allowed to connect
2. com.google.api.client.http.HttpResponseException: 400
   <?xml version="1.0" encoding="UTF-8" standalone="yes"?><response xmlns="urn:ietf:params:xml:ns:iirdea-1.0"><result code="2002"><msg>A report for that month already exists, the cut-off date already passed.</msg><description>Date: 2019-09</description></result></response>
In order to implement this new functionality, this commit also adds a new way to
call Retriable that allows specifying the isRetryable Predicate (which is quite
useful).
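A rough sketch of the retry-with-predicate idea; the class and method names here are assumptions, not the project's actual Retriable API:
```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

public final class RetryExample {
  /** Retries the callable, but rethrows immediately when the failure is permanent. */
  public static <T> T callWithRetry(
      Callable<T> callable, Predicate<Throwable> isRetryable, int maxAttempts) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return callable.call();
      } catch (Exception e) {
        // e.g. an HTTP 403 "IP not allowed" response is never going to succeed,
        // so there is no point in burning the remaining attempts.
        if (attempt >= maxAttempts || !isRetryable.test(e)) {
          throw e;
        }
      }
    }
  }
}
```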
* Add another test contact for Registry Lock testing
Previously, we only had two contacts -- one per registrar. This PR adds
a second, registry-lock-enabled contact to one registrar for two
reasons:
1. For registry-lock-related testing, we'd like to be able to test both
positively and negatively, making sure that the permissions work the way
they should.
2. In general, the UI tests should include the case where we have
multiple contacts in the same registrar. Previously, this was never the
case in tests.
* Merge remote-tracking branch 'origin/master' into addTestContact
* Don't destroy existing registry lock passwords in contacts
The existing code assumes that the "contacts" segment of the form
contains an exact representation of the registrar contacts. This breaks
when we have a contact with an existing registry lock password, because
we don't want to keep passing that password around in plain text (we
never store it in plain text).
This PR changes the code so that instead of assuming the contact is
provided in its entirety, we first load the contact from storage
(matching by email address) if it exists. We then set the required
fields from the JSON object, and optionally set the password if one was
provided.
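A heavily simplified sketch of this load-then-merge logic; Contact stands in for the real RegistrarContact class, and the hash step is a placeholder:
```java
import java.util.Map;

final class ContactMergeExample {
  static final class Contact {
    final String email;
    String name;
    String lockPasswordHash; // only ever stored hashed, never echoed back to the form

    Contact(String email) {
      this.email = email;
    }
  }

  /** Updates the stored contact in place instead of rebuilding it from the form. */
  static Contact merge(
      Map<String, Contact> existingByEmail, String email, String name, String newPassword) {
    Contact contact = existingByEmail.getOrDefault(email, new Contact(email));
    contact.name = name;
    // Leave the existing hash untouched unless the form actually supplied a new password.
    if (newPassword != null && !newPassword.isEmpty()) {
      contact.lockPasswordHash = hash(newPassword);
    }
    return contact;
  }

  private static String hash(String password) {
    // Placeholder only; the real code stores a proper (non-plaintext) hash.
    return Integer.toHexString(password.hashCode());
  }
}
```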
Alternatives:
- Create a separate RegistrarContactPassword object with a
RegistrarContact parent. This increases complexity significantly since
we'd be adding a parent-child relationship and adding more objects to
Datastore during the transition to SQL. It also doesn't completely solve
the problem of "When should we set the password?" because the password
field still must be part of the same form.
- Rearrange the UI so that the password is set as part of a completely
separate form with a separate submit action. This would be possible but
is sub-optimal for two reasons. First, we are trying to avoid
re-engineering the web console as much as possible, since we're likely
to rebuild it from scratch before too long anyway. Second, we want the
lock-password setting to be part of the standard contact-modification
workflow.
* Responses to CR
* Actually we need to allow "removal" of fields
* Remove optional
* one-statement building the contacts
* Add Bloom filters to the Cloud SQL PremiumList schema
They are slightly different from the existing Bloom filters stored in Datastore
in that they now use an ASCII String encoding rather than the more generic
CharSequence, and there is no maximum size (whereas we previously had to live
within the 1 MB max entity size for Datastore).
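A minimal sketch of building and serializing such a filter with Guava; the expected-insertions and false-positive-rate parameters are assumptions:
```java
import static java.nio.charset.StandardCharsets.US_ASCII;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;

public class PremiumLabelBloomFilterExample {
  static byte[] buildFilterBytes(List<String> premiumLabels) throws IOException {
    // ASCII funnel instead of the more generic CharSequence funnel.
    BloomFilter<String> filter =
        BloomFilter.create(Funnels.stringFunnel(US_ASCII), premiumLabels.size(), 0.001);
    premiumLabels.forEach(filter::put);
    // Serialized bytes go into a SQL column; the 1 MB Datastore entity limit no longer applies.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    filter.writeTo(bytes);
    return bytes.toByteArray();
  }
}
```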
* Load persistence.xml classes before adding test entities
* Also use persistence.xml in GenerateSqlSchemaCommand
* Add exception message
* remove duplicate line
* Add initial support for persisting premium lists to Cloud SQL
This adds support to the `nomulus create_premium_list` command only; support for
`nomulus update_premium_list` will be in a subsequent PR.
The design goals for this PR were:
1. Do not change the existing codepaths for premium lists at all, especially not
on the read path.
2. Write premium lists to Cloud SQL only if requested (i.e. not by default), and
write to Datastore first so as to not be blocked by errors with Cloud SQL.
3. Reuse existing codepaths to the maximum possible extent (e.g. don't yet
re-implement premium list parsing; take advantage of the existing logic), but
also ...
4. Some duplication is OK, since the existing Datastore path will be deleted
once this migration is complete, leaving only the codepaths for Cloud SQL.
* Refactor out common logic
* Add DAO test
* Add tests for parsing premium lists
* Use containsExactly
* Code review changes
* Format
* Re-generate schema
* Fix column names
* Make some tests pass
* Add SQL migration scripts
* Fix test errors
* Add a DAO for RegistryLock objects
* Add an index on verification code and remove old file
* Move to v4
* Use camelCase in index names
* Javadoc fixes
* Allow alteration of RegistryLock objects in-place
* save, load-modify, read in separate transactions
* Change the creation timestamp to be a CreateAutoTimestamp
* Implement CreateAutoTimestampConverter
Implement a JPA-based converter for CreateAutoTimestamp, allowing us to
persist instances of this class.
Note that converters appear to be required to convert to and from database
types that are generally known to JDBC. For example, conversion to Timestamp
works, but conversion to OffsetDateTime does not (even though the latter
works through the JDBC interface directly).
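A simplified sketch of the Timestamp round-trip, written against a bare Joda DateTime rather than the actual CreateAutoTimestamp wrapper:
```java
import java.sql.Timestamp;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

@Converter(autoApply = true)
public class DateTimeConverterExample implements AttributeConverter<DateTime, Timestamp> {
  @Override
  public Timestamp convertToDatabaseColumn(DateTime dateTime) {
    // Timestamp is a type JDBC understands natively, unlike OffsetDateTime here.
    return dateTime == null ? null : new Timestamp(dateTime.getMillis());
  }

  @Override
  public DateTime convertToEntityAttribute(Timestamp timestamp) {
    return timestamp == null ? null : new DateTime(timestamp.getTime(), DateTimeZone.UTC);
  }
}
```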
* Give JpaTransactionManagerRule more parameters
Allow users of the rule to add annotated classes and properties, both useful
for testing.
* Change in response to review.
* Changes for review.
* Move test EntityManagerFactory create method
Move the test create method into the JpaTransactionManagerRuleTest.
* Remove nomulus SQL dialect from G.S.S.Command
Remove NomulusPostgreSQLDialect from GenerateSqlSchemaCommand (it has been
moved to its own top-level class).
* Upgrade to Truth 1.0
Refactored fail(...) to assertWithMessage().fail().
Upgraded the com.google.monitoring-client family of dependencies to 1.0.6.
Also fixed a bad use of io.StringIO (on a binary buffer) recently introduced
to google-java-format-diff.py.
* Output command test output as well as consuming it
CommandTestCase currently consumes stdout & stderr for the command being
tested. Unfortunately, this means we can't see the command output. Add an
output splitter so that output gets written to the original stream in
addition to being captured.
A simpler approach would be to print the captured data after command
completion. However, this wouldn't work for tests that hang, and it wouldn't
display results in real time.
Tested: Ran a command test with verboseTestOutput=true, verified that standard
output was visible.
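A minimal sketch of such a splitter (the name is an assumption; commons-io ships a similar TeeOutputStream):
```java
import java.io.IOException;
import java.io.OutputStream;

/** Writes every byte to both delegates: the original stream and the capture buffer. */
public class TeeOutputStream extends OutputStream {
  private final OutputStream original;
  private final OutputStream capture;

  public TeeOutputStream(OutputStream original, OutputStream capture) {
    this.original = original;
    this.capture = capture;
  }

  @Override
  public void write(int b) throws IOException {
    original.write(b); // visible in real time, even if the test hangs
    capture.write(b);  // still available for assertions afterwards
  }

  @Override
  public void flush() throws IOException {
    original.flush();
    capture.flush();
  }
}
```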
* Save and restore original stdout/err in cmd tests
We have to restore the original stdout/stderr print streams; otherwise we end
up nesting them across tests, which eventually causes the RDE tests to OOM.
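A sketch of the save/restore pattern under these assumptions (JUnit 4, plus the TeeOutputStream from the previous sketch):
```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import org.junit.After;
import org.junit.Before;

public abstract class StreamCapturingTestCase {
  protected final ByteArrayOutputStream stdoutCapture = new ByteArrayOutputStream();
  private PrintStream originalStdout;

  @Before
  public void captureStdout() {
    originalStdout = System.out;
    System.setOut(new PrintStream(new TeeOutputStream(originalStdout, stdoutCapture)));
  }

  @After
  public void restoreStdout() {
    // Restore the saved stream; otherwise the next test wraps our wrapper,
    // nesting streams ever deeper until memory runs out.
    System.setOut(originalStdout);
  }
}
```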
* Re-add other schema classes
* Add Cloud SQL schema for premium lists
This won't work quite yet, pending a solution for the type translator issue
(which will be needed for the currency field, and potentially others).
* Make parameter names in generate_sql_schema command consistent
The rest of the nomulus commands use underscores for delimiting words in
parameter names, so this should too.
Also fixed capitalization of some proper nouns.
* Start postgresql container in generate_sql_schema
Add a --start-postgresql option to the nomulus generate_sql_schema command so
that users don't have to start their own docker container to run it.
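One way to do this is with Testcontainers (a sketch; the image tag and how the command wires in the connection details are assumptions):
```java
import org.testcontainers.containers.PostgreSQLContainer;

public class StartPostgresExample {
  public static void main(String[] args) {
    PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:9.6");
    postgres.start();
    try {
      // Hand these to the schema generator instead of requiring user-supplied flags.
      System.out.println(postgres.getJdbcUrl());
      System.out.println(postgres.getUsername());
      System.out.println(postgres.getPassword());
    } finally {
      postgres.stop();
    }
  }
}
```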
* Made giving guidance the default behavior
* Don't write TX records for domains deleted in autorenew grace period
When the project was originally being designed, we envisioned having a purely
point-in-time architecture that would allow the system to run indefinitely
without requiring any background batch jobs. That is, you could create a domain,
and 10 years later you could infer every autorenewal billing event that should
have happened during those 10 years, without ever having to run any code that
would go through and retroactively create those events as they happened.
This ended up being very complicated, especially when it came to generating
invoices, so we gave up on it and instead wrote the
ExpandRecurringBillingEventsAction mapreduce, which would run as a cronjob and
periodically expand the recurring billing information into actual one-time
billing events. This made the invoicing scripts MUCH less complicated since they
only had to tabulate one-time billing events that had actually occurred over the
past month, rather than perform complicated logic to infer every one-time event
over an arbitrarily long period.
I bring this up because this architectural legacy explains why billing events
are more complicated than current requirements alone would suggest. This is
why, for instance, when a domain is deleted during the 45-day autorenewal
period, the ExpandRecurringBillingEventsAction will still write out a history
entry (and corresponding billing events) on the 45th day: it needs to be offset
by the cancellation billing event for the autorenew grace period that was
already written out synchronously as part of the delete flow.
This no longer really makes sense, and it would be simpler to just not write out
these phantom history entries and billing events at all, but that would be a
larger modification, so I'm not touching it here.
Instead, what I have done is simply not write out the DomainTransactionRecord
in the mapreduce if the recurring billing event has already been canceled
(i.e. because the domain was deleted or transferred). This seems inconsistent
but actually makes sense, because domain transaction records are never
written out speculatively (unlike history entries and billing events); they
correspond only to actions that have actually happened. This is because they
were architected much more recently than billing events, and don't use the
point-in-time hierarchy.
So, here's a full accounting of how DomainTransactionRecords work as of this commit:
1. When a domain is created, one is written out.
2. When a domain is explicitly renewed, one is written out.
3. When a domain is autorenewed, one is written out at the end of the grace period.
4. When a domain is deleted (in all cases), a record is written out recording the
deletion.
5. When a domain is deleted in the add grace period, an offsetting record is
written out with a negative number of years, in addition to the deletion record.
6. When a domain is deleted in the renewal grace period, an offsetting record is
likely written out in addition.
7. When a domain is deleted in the autorenew grace period, there is no record that
needs to be offset because no code ran at the exact time of the autorenew, so
NO additional record should be written out by the expand mapreduce.
*THIS IS CHANGED AS OF THIS COMMIT*.
8. When a domain is transferred, all existing grace periods are cancelled and
corresponding cancelling records are written out. Note that transfers include a
mandatory, irrevocable 1 year renewal.
9. In the rare event that a domain is restored, all recurring events are
re-created, and there is a 1 year mandatory renewal as part of the restore with
corresponding record written out.
So, in summary, billing events and history entries are often written out
speculatively, and can subsequently be canceled, but the same is not true of
domain transaction records. Domain transaction records are only written out as
part of a corresponding action (which for autorenewals is the expand recurring
cronjob).
* rm unused import
* Remove 'value' from RDAP link responses
* Change application type to rdap+json
* Merge remote-tracking branch 'origin/master' into removeValueRdap
* CR response
It is burdensome to have to maintain two sets of tools, one of which
contains a strict subset of the functionality of the other. All admins
should use the same tool, and their ability to administer should be
restricted by the IAM roles they have, not by the tools they use.
* Add a registry lock password to contacts
* enabled -> allowed
* Simple CR responses, still need to add tests
* Add a very simple hashing test file
* Allow setting of RL password rather than directly setting it
* Round out pw tests
* Include 'allowedToSet...' in registrar contact JSON
* Responses to CR
* fix the hardcoded tests
* Use null or empty rather than just null
* Add a generate_schema command
Add a generate_schema command to the nomulus tool and add the necessary
instrumentation to EppResource and DomainBase, allowing us to generate a
proof-of-concept schema for DomainBase.
* Added forgotten command description
* Revert "Added forgotten command description"
This reverts commit 09326cb8ac.
(checked in the wrong file)
* Added fixes requested during review
* Add a todo to start postgresql container
Add a todo to start a postgresql container from generate_sql_command.
* Clean up token generation
- Allow tokenLength of 0
- If specifying a token length of 0, throw an error if numTokens > 1
* Allow generation of 0-length strings
* Allow for --tokens option to generate specific tokens
* Revert String generators and disallow 0 'length' param
* Add verifyInput method and batch the listed tokens
* Check the number of tokens created
This PR creates a new interface named TransactionManager, which defines
methods to manage transactions. Also, access to all transaction-related
methods of Ofy.java is restricted to package-private; those methods will be
exposed by DatastoreTransactionManager, the Datastore implementation of
TransactionManager.
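A hedged sketch of what such an interface can look like (the method names are assumptions):
```java
import java.util.function.Supplier;

/** Abstraction over transactions so callers don't depend on Datastore-specific Ofy calls. */
public interface TransactionManager {
  /** Returns true if a transaction is currently in progress. */
  boolean inTransaction();

  /** Runs the work inside a transaction. */
  void transact(Runnable work);

  /** Runs the work inside a transaction and returns its result. */
  <T> T transact(Supplier<T> work);
}
```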