* Add initial support for persisting premium lists to Cloud SQL
This adds support to the `nomulus create_premium_list` command only; support for
`nomulus update_premium_list` will be in a subsequent PR.
The design goals for this PR were:
1. Do not change the existing codepaths for premium lists at all, especially not
on the read path.
2. Write premium lists to Cloud SQL only if requested (i.e. not by default), and
write to Datastore first so as not to be blocked by errors with Cloud SQL (see
the sketch after this list).
3. Reuse existing codepaths to the maximum possible extent (e.g. don't yet
re-implement premium list parsing; take advantage of the existing logic), but
also ...
4. Some duplication is OK, since the existing Datastore path will be deleted
once this migration is complete, leaving only the codepaths for Cloud SQL.
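As a rough illustration of goal 2's write ordering, the new path looks something
like the sketch below (PremiumListDao, savePremiumListToDatastore, and the flag
name are hypothetical, for illustration only, not the actual Nomulus API):

```java
// Hedged sketch of the dual-write described in goal 2; not the literal code.
public static void savePremiumList(PremiumList premiumList, boolean alsoCloudSql) {
  // Datastore is written first so a Cloud SQL error can't block the save.
  savePremiumListToDatastore(premiumList);
  if (alsoCloudSql) {
    // Cloud SQL is opt-in for now, not the default.
    PremiumListDao.saveNew(premiumList);
  }
}
```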
* Refactor out common logic
* Add DAO test
* Add tests for parsing premium lists
* Use containsExactly
* Code review changes
* Format
* Re-generate schema
* Fix column names
* Make some tests pass
* Add SQL migration scripts
* Fix test errors
* Add a DAO for RegistryLock objects
* Add an index on verification code and remove old file
* Move to v4
* Use camelCase in index names
* Javadoc fixes
* Allow alteration of RegistryLock objects in-place
* save, load-modify, read in separate transactions
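A minimal sketch of that pattern, assuming a jpaTm()-style JPA transaction
manager and a hypothetical RegistryLockDao (method names are illustrative):

```java
// Load in one transaction...
RegistryLock lock =
    jpaTm().transact(() -> RegistryLockDao.getByVerificationCode(verificationCode));

// ...modify the object in place and save it in a second transaction...
lock.setLockCompleted(true);
jpaTm().transact(() -> RegistryLockDao.save(lock));

// ...then read it back in a third transaction to verify the change stuck.
RegistryLock reloaded =
    jpaTm().transact(() -> RegistryLockDao.getByVerificationCode(verificationCode));
```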
* Change the creation timestamp to be a CreateAutoTimestamp
* Implement CreateAutoTimestampConverter
Implement a JPA-based converter for CreateAutoTimestamp, allowing us to
persist instances of this class.
Note that converters appear to be required to convert to and from database
types that are generally known to JDBC. For example, conversion to Timestamp
works, conversion to OffsetDateTime does not (even though this works through
the JDBC interface directly).
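A sketch of what such a converter looks like, assuming CreateAutoTimestamp
wraps a Joda-Time DateTime exposed via getTimestamp() and a static
create(DateTime) factory:

```java
import java.sql.Timestamp;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.joda.time.DateTime;

@Converter(autoApply = true)
public class CreateAutoTimestampConverter
    implements AttributeConverter<CreateAutoTimestamp, Timestamp> {

  @Override
  public Timestamp convertToDatabaseColumn(CreateAutoTimestamp attribute) {
    // Convert to java.sql.Timestamp, a type JDBC is guaranteed to understand.
    return attribute == null || attribute.getTimestamp() == null
        ? null
        : new Timestamp(attribute.getTimestamp().getMillis());
  }

  @Override
  public CreateAutoTimestamp convertToEntityAttribute(Timestamp dbData) {
    return dbData == null ? null : CreateAutoTimestamp.create(new DateTime(dbData.getTime()));
  }
}
```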
* Give JpaTransactionManagerRule more parameters
Allow users of the rule to add annotated classes and properties, both useful
for testing.
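Usage might look roughly like this (the builder methods shown are assumptions
about the rule's API, and TestEntity is a stand-in):

```java
// Hypothetical: register an extra annotated entity and override a JPA
// property just for this test class.
@Rule
public final JpaTransactionManagerRule jpaRule =
    new JpaTransactionManagerRule.Builder()
        .withEntityClass(TestEntity.class)
        .withProperty("hibernate.hbm2ddl.auto", "update")
        .build();
```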
* Change in response to review.
* Changes for review.
* Move test EntityManagerFactory create method
Move the test create method into the JpaTransactionManagerRuleTest.
* Remove nomulus SQL dialect from G.S.S.Command
Remove NomulusPostgreSQLDialect from GenerateSqlSchemaCommand (it has been
moved to its own top-level class).
* Upgrade to Truth 1.0
Refactored fail(...) to assertWithMessage().fail().
Upgraded com.google.monitoring-client family of dependencies to 1.0.6
Also fixed bad use of io.StringIO (on binary buffer) recently introduced to
google-java-format-diff.py.
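The mechanical refactoring looks like this (the surrounding test body is
illustrative):

```java
import static com.google.common.truth.Truth.assertWithMessage;

try {
  runCommand("--bad_flag");
  // Truth 1.0 style: attach the failure message, then fail.
  assertWithMessage("Expected ParameterException to be thrown").fail();
} catch (ParameterException expected) {
  // Expected path.
}
```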
* Output command test output as well as consuming it
CommandTestCase currently consumes stdout & stderr for the command being
tested. Unfortunately, this results in us not being able to see the command
output. Add an output splitter so that output gets written to the original
stream in addition to being captured.
A simpler approach would be to print the captured data after command
completion. However, this won't work for tests that hang, and it also
won't display results in real time.
Tested: Ran a command test with verboseTestOutput=true, verified that standard
output was visible.
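A minimal version of such a splitter, hand-rolled here for illustration:

```java
import java.io.IOException;
import java.io.OutputStream;

/** Writes every byte to both the capture buffer and the original stream. */
class TeeOutputStream extends OutputStream {
  private final OutputStream capture;
  private final OutputStream original;

  TeeOutputStream(OutputStream capture, OutputStream original) {
    this.capture = capture;
    this.original = original;
  }

  @Override
  public void write(int b) throws IOException {
    capture.write(b);
    original.write(b); // keeps output visible in real time
  }
}
```

The harness then installs it with something like
System.setOut(new PrintStream(new TeeOutputStream(buffer, System.out), true)).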
* Save and restore original stdout/err in cmd tests
We have to restore the original stdout/stderr print streams; otherwise we end
up nesting them across tests, which eventually causes the RDE tests to OOM.
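The fix is the standard JUnit save/restore pattern, roughly:

```java
private PrintStream originalStdout;
private PrintStream originalStderr;

@Before
public void captureStreams() {
  originalStdout = System.out;
  originalStderr = System.err;
  // ...install the capturing/tee streams here...
}

@After
public void restoreStreams() {
  // Restore the originals so the wrappers don't nest across tests.
  System.setOut(originalStdout);
  System.setErr(originalStderr);
}
```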
* Re-add other schema classes
* Add Cloud SQL schema for premium lists
This won't work quite yet, pending a solution for the type translator issue
(which will be needed for the currency field, and potentially others).
* Make parameter names in generate_sql_schema command consistent
The rest of the nomulus commands use underscores for delimiting words in
parameter names, so this should too.
Also fixed capitalization of some proper nouns.
* Start postgresql container in generate_sql_schema
Add a --start-postgresql option to the nomulus generate_sql_schema command so
that users don't have to start their own docker container to run it.
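Conceptually (sketched here with Testcontainers; generateSchemaAgainst is a
hypothetical stand-in for the command's internals):

```java
import org.testcontainers.containers.PostgreSQLContainer;

// Spin up a throwaway PostgreSQL instance so the user doesn't need to manage
// Docker themselves; it is torn down when schema generation finishes.
try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:9.6")) {
  postgres.start();
  generateSchemaAgainst(
      postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
}
```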
* Made default behavior be to give guidance
* Don't write TX records for domains deleted in autorenew grace period
When the project was originally being designed, we envisioned having a purely
point-in-time architecture that would allow the system to run indefinitely
without requiring any background batch jobs. That is, you could create a domain,
and 10 years later you could infer every autorenewal billing event that should
have happened during those 10 years, without ever having to run any code that
would go through and retroactively create those events as they happened.
This ended up being very complicated, especially when it came to generating
invoices, so we gave up on it and instead wrote the
ExpandRecurringBillingEventsAction mapreduce, which would run as a cronjob and
periodically expand the recurring billing information into actual one-time
billing events. This made the invoicing scripts MUCH less complicated since they
only had to tabulate one-time billing events that had actually occurred over the
past month, rather than perform complicated logic to infer every one-time event
over an arbitrarily long period.
I bring this up because this architectural legacy explains why billing events
are more complicated than could otherwise be explained from current
requirements. This is why, for instance, when a domain is deleted during the
45-day autorenewal period, the ExpandRecurringBillingEventsAction will still write
out a history entry (and corresponding billing events) on the 45th day, because
it needs to be offset by the cancellation billing event for the autorenew grace
period that was already written out synchronously as part of the delete flow.
This no longer really makes sense, and it would be simpler to just not write out
these phantom history entries and billing events at all, but it would be a
larger modification to fix this, so I'm not touching it here.
Instead, what I have done is to simply not write out the DomainTransactionRecord
in the mapreduce if the recurring billing event has already been canceled
(i.e. because the domain was deleted or transferred). This seems inconsistent
but actually does make sense, because domain transaction records are never
written out speculatively (unlike history entries and billing events); they
correspond only to actions that actually happened. This is because they were
architected much more recently than billing events, and don't use the
point-in-time hierarchy.
So, here's a full accounting of how DomainTransactionRecords work as of this commit:
1. When a domain is created, one is written out.
2. When a domain is explicitly renewed, one is written out.
3. When a domain is autorenewed, one is written out at the end of the grace period.
4. When a domain is deleted (in all cases), a record is written out recording the
deletion.
5. When a domain is deleted in the add grace period, an offsetting record is
written out with a negative number of years, in addition to the deletion record.
6. When a domain is deleted in the renewal grace period, an offsetting record is
likewise written out in addition.
7. When a domain is deleted in the autorenew grace period, there is no record that
needs to be offset because no code ran at the exact time of the autorenew, so
NO additional record should be written out by the expand mapreduce.
*THIS IS CHANGED AS OF THIS COMMIT*.
8. When a domain is transferred, all existing grace periods are cancelled and
corresponding cancelling records are written out. Note that transfers include a
mandatory, irrevocable 1-year renewal.
9. In the rare event that a domain is restored, all recurring events are
re-created, and there is a 1-year mandatory renewal as part of the restore,
with a corresponding record written out.
So, in summary, billing events and history entries are often written out
speculatively, and can subsequently be canceled, but the same is not true of
domain transaction records. Domain transaction records are only written out as
part of a corresponding action (which for autorenewals is the expand recurring
cronjob).
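In code terms, the mapreduce change amounts to a guard along these lines (the
names approximate the real classes, but this is not the literal diff):

```java
// If the autorenew was already canceled (the domain was deleted or
// transferred during the grace period), still write the history entry and
// billing events as before, but attach no DomainTransactionRecord.
ImmutableSet<DomainTransactionRecord> records =
    recurringWasCanceled
        ? ImmutableSet.of()
        : ImmutableSet.of(
            DomainTransactionRecord.create(tld, reportingTime, NET_RENEWS_1_YR, 1));
historyEntryBuilder.setDomainTransactionRecords(records);
```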
* rm unused import
* Remove 'value' from RDAP link responses
* Change application type to rdap+json
* Merge remote-tracking branch 'origin/master' into removeValueRdap
* CR response
It is burdensome to have to maintain two sets of tools, one of which
contains a strict subset of the functionality of the other. All admins
should use the same tool, and their ability to administer should be
restricted by the IAM roles they have, not by the tools they use.
* Add a registry lock password to contacts
* enabled -> allowed
* Simple CR responses, still need to add tests
* Add a very simple hashing test file
* Allow setting of RL password rather than directly setting it
* Round out pw tests
* Include 'allowedToSet...' in registrar contact JSON
* Responses to CR
* fix the hardcoded tests
* Use null or empty rather than just null
* Add a generate_schema command
Add a generate_schema command to nomulus tool and add the necessary
instrumentation to EppResource and DomainBase to allow us to generate a
proof-of-concept schema for DomainBase.
* Added forgotten command description
* Revert "Added forgotten command description"
This reverts commit 09326cb8ac.
(checked in the wrong file)
* Added fixes requested during review
* Add a todo to start postgresql container
Add a todo to start a postgresql container from the generate_sql_schema command.
* Clean up token generation
- Allow tokenLength of 0
- If a token length of 0 is specified, throw an error if numTokens > 1 (see the sketch below)
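The check reduces to a precondition, e.g.:

```java
import static com.google.common.base.Preconditions.checkArgument;

// A zero-length token is only meaningful when generating a single token.
checkArgument(
    tokenLength > 0 || numTokens <= 1,
    "Cannot generate multiple tokens with a token length of 0");
```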
* Allow generation of 0-length strings
* Allow for --tokens option to generate specific tokens
* Revert String generators and disallow 0 'length' param
* Add verifyInput method and batch the listed tokens
* Check the number of tokens created
This PR created a new interface named TransactionManager, which defines
methods to manage transactions. Also, access to all transaction-related
methods of Ofy.java is now restricted to package-private; those methods will
be exposed by DatastoreTransactionManager, the Datastore implementation of
TransactionManager.
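The shape of the interface is roughly as follows (the exact method set is
illustrative):

```java
import java.util.function.Supplier;
import org.joda.time.DateTime;

/** Abstracts transaction handling away from the underlying storage engine. */
public interface TransactionManager {
  /** Runs the work in a transaction and returns its result. */
  <T> T transact(Supplier<T> work);

  /** Runs the work in a transaction. */
  void transact(Runnable work);

  /** Returns whether the caller is currently in a transaction. */
  boolean inTransaction();

  /** Returns the timestamp of the current transaction. */
  DateTime getTransactionTime();
}
```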
* Create a Gradle task to run the test server
As an artifact of the old build system, the test server relies on having
the built registrar_(bin|dbg)*(\.css)?.js in place (see ConsoleUiAction
among others). As a result, we create a Gradle task that puts those
files into the correct, readable location before running the test
server.
* Depend on assemble rather than build
* refactor gitignores
Login failures will happen any time we aren't coming from a whitelisted IP
for that particular TLD. Since whitelists are out of date (and we don't
whitelist IPs for every TLD anyway), those failures aren't interesting. Store
and fully log the interesting failures when they happen.
* Fail gracefully when copying detailed reports
When the detailed reports are copied from GCS to registrars' Drive
folders, do not fail the entire copy operation when a single registrar
fails. Instead, send an alert email about the failure, and continue to copy the
rest of the reports.
Also, instead of creating duplicates, overwrite the existing files on
Drive.
BUG=127690361
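The per-registrar error handling looks roughly like this (helper names are
illustrative):

```java
for (String registrarId : registrarIds) {
  try {
    // Overwrites any existing file on Drive instead of creating a duplicate.
    copyDetailedReportToDrive(registrarId);
  } catch (Exception e) {
    // One registrar's failure shouldn't abort the whole run; alert and continue.
    sendAlertEmail(
        String.format("Failed to copy detailed report for registrar %s", registrarId), e);
  }
}
```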
* Don't extend expiration times for deleted domains
* Flip order and add a comment
* oops forgot a period
* Use END_OF_TIME
* Add tests for expiration times of domains with pending transfers
* Add test for transfer during autorenew and clean up other tests
* Clarify tests
* Add domain expiration check in EppLifecycleDomainTest
* Add a comment and format test files
Sometimes the webdriver tests get stuck forever for no apparent reason. It
could be an issue in the test container, but it is hard to root-cause. Adding
a 30s timeout either triggers the retry earlier or just lets the test fail.
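With JUnit 4 this is a one-rule change (shown with the 30-second value from
this PR):

```java
import org.junit.Rule;
import org.junit.rules.Timeout;

// Fails any test method that runs past 30 seconds, triggering the retry.
@Rule public final Timeout timeout = Timeout.seconds(30);
```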
This PR prevents Gradle from copying the golden images to
build/resources/test, so the screenshot test reads golden images from
src/test/resources directly and displays the path in the test log if the
test fails. Because the path points to the actual file in the src/ folder,
the engineer can easily find it.
* Attempt login to MosAPI via all available TLDs
There's no reason why we should need a TLD as input here because it
doesn't actually matter which one we use (they all have the same
password).
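The refactored loop presumably looks something like this (names are
illustrative):

```java
Exception lastException = null;
for (String tld : allTlds) {
  try {
    loginToMosApi(tld);
    return; // Any TLD works; they all share the same password.
  } catch (Exception e) {
    lastException = e; // Remember the failure but keep trying other TLDs.
  }
}
if (lastException != null) {
  throw new RuntimeException("Could not log in via any TLD", lastException);
}
```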
* Refactor the TLD loop and change cron jobs
* Re-throw the last exception if one exists
* Fix tests and exception
* Remove alpha cron job
* Move test resource files into src/test/resources
* fix a test
* Remove references to javatests/ in Java files
* fix import order
* fix semantic merge conflict
* Throw a more useful error message on attempted domain restore reports
Per DomainRestoreRequestFlow's Javadoc, we automatically approve and instantly
enact all domain restore requests; thus we don't use or support restore
reports. This improves the registrar-visible error message to make this
clearer.
* Replace deprecated GoogleCredential with new lib
This PR also introduces a CredentialsBundle class to carry the HttpTransport
and JsonFactory objects that are needed by most of the GCP client libraries
to instantiate clients.
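A sketch of the bundle's shape (the exact fields and accessors are
assumptions):

```java
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.auth.oauth2.GoogleCredentials;

/** Carries a credential plus the transport/JSON factory most GCP clients need. */
public class CredentialsBundle {
  private final GoogleCredentials credentials;
  private final HttpTransport httpTransport;
  private final JsonFactory jsonFactory;

  CredentialsBundle(
      GoogleCredentials credentials, HttpTransport httpTransport, JsonFactory jsonFactory) {
    this.credentials = credentials;
    this.httpTransport = httpTransport;
    this.jsonFactory = jsonFactory;
  }

  public GoogleCredentials getGoogleCredentials() {
    return credentials;
  }

  public HttpTransport getHttpTransport() {
    return httpTransport;
  }

  public JsonFactory getJsonFactory() {
    return jsonFactory;
  }
}
```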
We found that some webdriver tests failed because RegistryEnvironment was set
to 'production' by another test and carried over to the webdriver test, and it
was nontrivial to fix because the instance of RegistryEnvironment was injected
into the testing web server.
Also, to protect other potential victims from this problem, we stopped
provisioning RegistryEnvironment from Dagger. Instead, we just use the
RegistryEnvironment.get() function to get the instance, which retrieves the
value from the system property every time. If any test case needs to run with
another environment, it can simply use the existing SystemPropertyRule and
RegistryEnvironment.${ENV}.setup to change it.
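A test that needs a different environment therefore does something like:

```java
@Rule public final SystemPropertyRule systemPropertyRule = new SystemPropertyRule();

@Test
public void testBehaviorInSandbox() {
  // Flips the underlying system property, so RegistryEnvironment.get()
  // returns SANDBOX everywhere, including inside the web server under test.
  RegistryEnvironment.SANDBOX.setup(systemPropertyRule);
  // ...run the test...
}
```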
IntelliJ complains when this annotation is used on a base class that has no
actual runnable tests itself. The solution differs from class to class,
depending on where the @RunWith annotation currently lives: for some, it's
simplest to move the annotation down to the few extending classes that are
missing it, whereas for others it's easiest to mark the base class abstract.
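Concretely, the two fixes look like this (class names are illustrative):

```java
// Fix 1: mark the base class abstract so IntelliJ stops expecting it to
// contain runnable tests.
@RunWith(JUnit4.class)
public abstract class BaseFlowTestCase {
  // Shared fixtures and helpers.
}

// Fix 2: alternatively, drop @RunWith from the base class and annotate each
// concrete subclass that was missing it.
@RunWith(JUnit4.class)
public class DomainCreateFlowTest extends BaseFlowTestCase {
  // Actual @Test methods.
}
```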