* Don't write TX records for domains deleted in autorenew grace period
When the project was originally being designed, we envisioned having a purely
point-in-time architecture that would allow the system to run indefinitely
without requiring any background batch jobs. That is, you could create a domain,
and 10 years later you could infer every autorenewal billing event that should
have happened during those 10 years, without ever having to run any code that
would go through and retroactively create those events as they happened.
This ended up being very complicated, especially when it came to generating
invoices, so we gave up on it and instead wrote the
ExpandRecurringBillingEventsAction mapreduce, which would run as a cronjob and
periodically expand the recurring billing information into actual one-time
billing events. This made the invoicing scripts MUCH less complicated since they
only had to tabulate one-time billing events that had actually occurred over the
past month, rather than perform complicated logic to infer every one-time event
over an arbitrarily long period.
I bring this up because this architectural legacy explains why billing events
are more complicated than could otherwise be explained from current
requirements. This is why, for instance, when a domain is deleted during the 45-day
autorenewal period, the ExpandRecurringBillingEventsAction will still write
out a history entry (and corresponding billing events) on the 45th day, because
it needs to be offset by the cancellation billing event for the autorenew grace
period that was already written out synchronously as part of the delete flow.
This no longer really makes sense, and it would be simpler to just not write out
these phantom history entries and billing events at all, but it would be a
larger modification to fix this, so I'm not touching it here.
Instead, what I have done is to simply not write out the DomainTransactionRecord
in the mapreduce if the recurring billing event has already been canceled
(i.e. because the domain was deleted or transferred). This seems inconsistent
but actually does make sense, because domain transaction records are never
written out speculatively (unlike history entries and billing events); they
correspond only to actions that have actually happened. This is because they were
architected much more recently than billing events, and don't use the
point-in-time hierarchy.
So, here's a full accounting of how DomainTransactionRecords work as of this commit:
1. When a domain is created, one is written out.
2. When a domain is explicitly renewed, one is written out.
3. When a domain is autorenewed, one is written out at the end of the grace period.
4. When a domain is deleted (in all cases), a record is written out recording the
deletion.
5. When a domain is deleted in the add grace period, an offsetting record is
written out with a negative number of years, in addition to the deletion record.
6. When a domain is deleted in the renewal grace period, an offsetting record is
likewise written out in addition.
7. When a domain is deleted in the autorenew grace period, there is no record that
needs to be offset because no code ran at the exact time of the autorenew, so
NO additional record should be written out by the expand mapreduce.
*THIS IS CHANGED AS OF THIS COMMIT*.
8. When a domain is transferred, all existing grace periods are cancelled and
corresponding cancelling records are written out. Note that transfers include a
mandatory, irrevocable 1 year renewal.
9. In the rare event that a domain is restored, all recurring events are
re-created, and there is a mandatory 1-year renewal as part of the restore, with a
corresponding record written out.
So, in summary, billing events and history entries are often written out
speculatively, and can subsequently be canceled, but the same is not true of
domain transaction records. Domain transaction records are only written out as
part of a corresponding action (which for autorenewals is the expand recurring
cronjob).
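The change itself is conceptually tiny. Here is a minimal sketch of the new check, using hypothetical names (`recordsToSave`, `DomainTransactionRecordStub`) rather than the actual ExpandRecurringBillingEventsAction code:

```java
import com.google.common.collect.ImmutableSet;

// A hypothetical sketch of the decision this commit adds to the expand mapreduce; the
// method and stub type below are illustrative, not the actual Nomulus code.
public class ExpandRecurringSketch {

  /** Stand-in for the real DomainTransactionRecord entity. */
  static class DomainTransactionRecordStub {}

  /**
   * Returns the transaction records to save alongside the synthetic autorenew billing
   * event: none when the recurrence was already canceled (i.e. the domain was deleted or
   * transferred during the autorenew grace period).
   */
  static ImmutableSet<DomainTransactionRecordStub> recordsToSave(
      boolean recurrenceWasCanceled, DomainTransactionRecordStub record) {
    // The history entry and billing events are still written (they offset the cancellation
    // written synchronously by the delete flow), but transaction records must only reflect
    // actions that actually happened, so skip them for canceled recurrences.
    return recurrenceWasCanceled ? ImmutableSet.of() : ImmutableSet.of(record);
  }
}
```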
* rm unused import
* Remove 'value' from RDAP link responses
* Change application type to rdap+json
* Merge remote-tracking branch 'origin/master' into removeValueRdap
* CR response
It is burdensome to have to maintain two sets of tools, one of which
provides a strict subset of the other's functionality. All admins
should use the same tool and their ability to administer should be
restricted by the IAM roles they have, not the tools they use.
* Add a registry lock password to contacts
* enabled -> allowed
* Simple CR responses, still need to add tests
* Add a very simple hashing test file
* Allow setting of RL password rather than directly setting it
* Round out pw tests
* Include 'allowedToSet...' in registrar contact JSON
* Responses to CR
* fix the hardcoded tests
* Use null or empty rather than just null
* Add a generate_schema command
Add a generate_schema command to the nomulus tool and add the necessary
instrumentation to EppResource and DomainBase to allow us to generate a
proof-of-concept schema for DomainBase.
* Added forgotten command description
* Revert "Added forgotten command description"
This reverts commit 09326cb8ac.
(checked in the wrong file)
* Added fixes requested during review
* Add a todo to start postgresql container
Add a todo to start a postgresql container from generate_sql_command.
* Clean up token generation
- Allow tokenLength of 0
- If specifying a token length of 0, throw an error if numTokens > 1
* Allow generation of 0-length strings
* Allow for --tokens option to generate specific tokens
* Revert String generators and disallow 0 'length' param
* Add verifyInput method and batch the listed tokens
* Check the number of tokens created
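Taken together, the token commits above amount to some up-front input validation. A rough sketch of what such a verifyInput step could check, with hypothetical flag names and rules inferred only from the commit titles:

```java
import static com.google.common.base.Preconditions.checkArgument;

import com.google.common.collect.ImmutableList;

// Hypothetical sketch of the verifyInput() validation; flag names and exact rules are
// assumptions inferred from the commit titles, not the actual command implementation.
public class GenerateTokensSketch {
  int tokenLength = 16;                               // stand-in for a --length flag
  int numTokens = 1;                                  // stand-in for a --number flag
  ImmutableList<String> tokens = ImmutableList.of();  // stand-in for a --tokens flag

  void verifyInput() {
    // After the revert, zero-length generated tokens are disallowed outright.
    checkArgument(tokenLength > 0, "Token length must be positive");
    // Explicitly listed tokens and a requested token count don't mix.
    checkArgument(
        tokens.isEmpty() || numTokens <= 1,
        "Do not specify both --tokens and a token count");
  }
}
```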
This PR creates a new interface named TransactionManager, which defines
methods for managing transactions. Also, access to all transaction-related
methods of Ofy.java is restricted to package-private; those methods will be exposed
by DatastoreTransactionManager, the Datastore implementation of
TransactionManager.
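A minimal sketch of the shape this describes; the method set is an assumption based on this description, not the actual TransactionManager API:

```java
import java.util.function.Supplier;

// Illustrative sketch only: the real interface delegates to the now package-private
// methods of Ofy.java via DatastoreTransactionManager.
interface TransactionManagerSketch {
  /** Returns true if the caller is currently inside a transaction. */
  boolean inTransaction();

  /** Runs the work inside a transaction and returns its result. */
  <T> T transact(Supplier<T> work);

  /** Runs the work inside a transaction. */
  void transact(Runnable work);
}

/** Datastore implementation that would forward each call to Ofy. */
class DatastoreTransactionManagerSketch implements TransactionManagerSketch {
  @Override
  public boolean inTransaction() {
    throw new UnsupportedOperationException("would delegate to Ofy");
  }

  @Override
  public <T> T transact(Supplier<T> work) {
    throw new UnsupportedOperationException("would delegate to Ofy");
  }

  @Override
  public void transact(Runnable work) {
    throw new UnsupportedOperationException("would delegate to Ofy");
  }
}
```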
* Create a Gradle task to run the test server
As an artifact of the old build system, the test server relies on having
the built registrar_(bin|dbg)*(\.css)?.js in place (see ConsoleUiAction
among others). As a result, we create a Gradle task that puts those
files into the correct, readable, location before running the test
server.
* Depend on assemble rather than build
* refactor gitignores
Login failures will happen any time that we aren't coming from a
whitelisted IP for that particular TLD. Since whitelists are out of date
(and we don't whitelist IPs for every TLD anyway), those failures aren't
interesting. Store and fully log an interesting failure if one
happened.
* Fail gracefully when copying detailed reports
When the detailed reports are copied from GCS to registrars' Drive
folders, do not fail the entire copy operation when a single registrar
fails. Instead, send an alert email about the failure, and continue to copy the
rest of the reports.
Also, instead of creating duplicates, overwrite the existing files on
Drive.
BUG=127690361
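The described behavior boils down to a per-registrar try/catch. A minimal sketch, with stand-in methods for the copy and alert steps:

```java
import com.google.common.collect.ImmutableList;

// Hypothetical sketch of the per-registrar error handling; copyReportToDrive and
// sendAlertEmail are stand-ins, not the actual reporting code.
public class CopyDetailedReportsSketch {

  void copyAllReports(ImmutableList<String> registrarIds) {
    for (String registrarId : registrarIds) {
      try {
        // Overwrite any existing file on Drive instead of creating a duplicate.
        copyReportToDrive(registrarId);
      } catch (RuntimeException e) {
        // Alert and keep going so one bad registrar doesn't block the remaining copies.
        sendAlertEmail(registrarId, e);
      }
    }
  }

  void copyReportToDrive(String registrarId) {}

  void sendAlertEmail(String registrarId, Exception cause) {}
}
```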
* Don't extend expiration times for deleted domains
* Flip order and add a comment
* oops forgot a period
* Use END_OF_TIME
* Add tests for expiration times of domains with pending transfers
* Add test for transfer during autorenew and clean up other tests
* Clarify tests
* Add domain expiration check in EppLifecycleDomainTest
* Add a comment and format test files
Sometimes the webdriver tests get stuck forever for no apparent reason. It could
be some issue in the test container, but it is hard to find the root cause. So,
adding a 30s timeout either triggers the retry earlier or simply lets the
test fail.
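One way to get that behavior in JUnit 4 is a global timeout rule; this is a sketch of the idea, not necessarily the exact mechanism used here:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

// Sketch of a 30-second per-test timeout using the standard JUnit 4 rule, so a hung
// webdriver test fails quickly (and can be retried) instead of blocking the build.
public class ScreenshotTimeoutExample {

  @Rule public final Timeout globalTimeout = Timeout.seconds(30);

  @Test
  public void screenshotTest_cannotHangForever() {
    // A body that exceeds 30 seconds now fails with a TestTimedOutException.
  }
}
```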
This PR prevents Gradle from copying the golden images
to build/resources/test, so the screenshot tests read
golden images from src/test/resources directly and
display that path in the test log if a test fails. Because
the path points to the actual file in the src/ folder,
the engineer can easily find it.
* Attempt login to MosAPI via all available TLDs
There's no reason why we should need a TLD as input here because it
doesn't actually matter which one we use (they all have the same
password).
* Refactor the TLD loop and change cron jobs
* Re-throw the last exception if one exists
* Fix tests and exception
* Remove alpha cron job
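A sketch of the resulting loop, with a stand-in login method; it records failures per TLD and re-throws the last one only after every TLD has been tried:

```java
import com.google.common.collect.ImmutableSet;

// Hypothetical sketch of the TLD loop and "re-throw the last exception if one exists"
// behavior; loginForTld is a stand-in, not the actual MosAPI client call.
public class MosApiLoginSketch {

  void attemptLoginForAllTlds(ImmutableSet<String> tlds) {
    RuntimeException lastException = null;
    for (String tld : tlds) {
      try {
        loginForTld(tld);
      } catch (RuntimeException e) {
        // Remember the failure but keep trying the remaining TLDs.
        lastException = e;
      }
    }
    if (lastException != null) {
      throw lastException;
    }
  }

  void loginForTld(String tld) {}
}
```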
* Move test resource files into src/test/resources
* fix a test
* Remove references to javatests/ in Java files
* fix import order
* fix semantic merge conflict
* Throw a more useful error message on attempted domain restore reports
Per DomainRestoreRequestFlow's Javadoc, we automatically approve and instantly
enact all domain restore requests, so we don't use or support restore
reports. This improves the registrar-visible error message to make this
clearer.
Replace deprecated GoogleCredential with new lib
This PR also introduces a CredentialsBundle class to carry the
HttpTransport and JsonFactory objects that most of the GCP client
libraries need in order to instantiate their clients.
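A minimal sketch of what such a bundle class might look like, using the google-http-client types named above (the real class presumably also carries the credential itself, which is omitted here):

```java
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;

// Illustrative value class only; the actual CredentialsBundle in the codebase may differ.
public final class CredentialsBundleSketch {

  private final HttpTransport httpTransport;
  private final JsonFactory jsonFactory;

  public CredentialsBundleSketch(HttpTransport httpTransport, JsonFactory jsonFactory) {
    this.httpTransport = httpTransport;
    this.jsonFactory = jsonFactory;
  }

  public HttpTransport getHttpTransport() {
    return httpTransport;
  }

  public JsonFactory getJsonFactory() {
    return jsonFactory;
  }
}
```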
We found that some webdriver tests failed because RegistryEnvironment
was set to 'production' by another test and carried over to the
webdriver test, and it was nontrivial to fix them because the instance
of RegistryEnvironment was injected into the testing web server.
To keep this problem from spreading to other tests, we stopped
provisioning RegistryEnvironment from Dagger. Instead, we just use the
RegistryEnvironment.get() function to get the instance, which indeed
retrieves the value from the system property every time. If a test case
needs to run with another environment, it can simply use
the existing SystemPropertyRule and RegistryEnvironment.${ENV}.setup
to change it.
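A sketch of that test-side pattern; the package locations, the SANDBOX constant, and the setup(SystemPropertyRule) signature are assumptions based only on the description above:

```java
// Package locations of the Nomulus classes are assumed here for illustration only.
import google.registry.config.RegistryEnvironment;
import google.registry.testing.SystemPropertyRule;
import org.junit.Rule;
import org.junit.Test;

public class EnvironmentDependentTestSketch {

  @Rule public final SystemPropertyRule systemPropertyRule = new SystemPropertyRule();

  @Test
  public void testBehavior_inSandboxEnvironment() {
    // Overrides the environment system property for this test only; code under test reads
    // it back lazily through RegistryEnvironment.get().
    RegistryEnvironment.SANDBOX.setup(systemPropertyRule);
    // ... exercise code that calls RegistryEnvironment.get() ...
  }
}
```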
IntelliJ complains when the @RunWith annotation is used on a base class that has no
actual runnable tests itself. The solution differs from class to class,
depending on where the @RunWith annotation currently sits; for some,
it's simplest to move the annotation down to the few extending classes that are
missing it, whereas for others it's easiest to just annotate the base class as
abstract.
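Both fixes in miniature (the class names here are made up for illustration):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

// Fix 1: keep @RunWith on the base class but mark it abstract, so IntelliJ no longer
// expects it to contain runnable tests itself.
@RunWith(JUnit4.class)
abstract class BaseFlowTestCase {
  void commonSetUp() {}
}

// Fix 2 (for other hierarchies): put @RunWith on each concrete subclass that is missing
// it. Shown in one file here for brevity; in practice these are separate files.
@RunWith(JUnit4.class)
public class ExampleFlowTest extends BaseFlowTestCase {
  @Test
  public void runsFine() {}
}
```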
Fix or suppress deprecation warnings except those about GoogleCredential,
which is being handled separately.
The @SuppressWarnings("deprecation") annotation does not cover imports
even when it is applied at the class level, so we removed the imports of deprecated
classes and instead use their fully qualified names in the class body.
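A minimal illustration of the workaround, using GoogleCredential purely as an example of a deprecated class:

```java
// Note: no import of the deprecated class, so the class-level suppression below actually
// covers every mention of it.
@SuppressWarnings("deprecation")
public class CredentialHolder {

  private final com.google.api.client.googleapis.auth.oauth2.GoogleCredential credential;

  public CredentialHolder(
      com.google.api.client.googleapis.auth.oauth2.GoogleCredential credential) {
    this.credential = credential;
  }

  public com.google.api.client.googleapis.auth.oauth2.GoogleCredential getCredential() {
    return credential;
  }
}
```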
* Consolidate testcontainer used by WebDriver test
Previously, we hard-coded the version of the docker image used to
provision the Chrome browser and the WebDriver server, because the
version of the Chrome browser has to match the version of
the WebDriver client, otherwise the screenshot tests will fail.
Switching to BrowserWebDriverContainer delegates that matching
to the library itself, which chooses the correct docker image
version based on the WebDriver version on our classpath.
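A minimal sketch of that Testcontainers usage; no image tag is hard-coded, so the library picks a docker image whose browser matches the WebDriver client on the classpath:

```java
import org.openqa.selenium.chrome.ChromeOptions;
import org.testcontainers.containers.BrowserWebDriverContainer;

// Sketch only: the container chooses a compatible selenium/standalone-chrome image based
// on the selenium-java version it finds on the classpath.
public class WebDriverContainerSketch {

  public BrowserWebDriverContainer<?> createChromeContainer() {
    return new BrowserWebDriverContainer<>()
        .withCapabilities(new ChromeOptions());
  }
}
```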
* Increase maxColorDiff to 20
This is to suppress test flakiness after switching to
BrowserWebDriverContainer to provision the browser and WebDriver
service.