* Make parameter names in generate_sql_schema command consistent
The rest of the nomulus commands use underscores for delimiting words in
parameter names, so this should too.
Also fixed capitalization of some proper nouns.
* Move EntityManagerFactoryProviderTest to fragile
* Add EMF Provider Test to docker tests
Add EntityManagerFactoryProviderTest to the docker-incompatible test patterns
and use the latter list to compose the fragile tests.
* Start postgresql container in generate_sql_schema
Add a --start-postgresql option to the nomulus generate_sql_schema command so
that users don't have to start their own docker container to run it.
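For context, a minimal sketch of what --start-postgresql could do under the
hood, assuming Testcontainers (which Nomulus uses elsewhere for SQL testing);
the generateSchema helper is illustrative, not the actual code:

    // Start a throwaway PostgreSQL instance in Docker, point the schema
    // generator at it, and tear it down afterwards.
    PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>();
    postgres.start();
    try {
      String jdbcUrl = postgres.getJdbcUrl();
      String user = postgres.getUsername();
      String password = postgres.getPassword();
      generateSchema(jdbcUrl, user, password);  // hypothetical helper
    } finally {
      postgres.stop();
    }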
* Made default behavior be to give guidance
* Don't write TX records for domains deleted in autorenew grace period
When the project was originally being designed, we envisioned having a purely
point-in-time architecture that would allow the system to run indefinitely
without requiring any background batch jobs. That is, you could create a domain,
and 10 years later you could infer every autorenewal billing event that should
have happened during those 10 years, without ever having to run any code that
would go through and retroactively create those events as they happened.
This ended up being very complicated, especially when it came to generating
invoices, so we gave up on it and instead wrote the
ExpandRecurringBillingEventsAction mapreduce, which would run as a cronjob and
periodically expand the recurring billing information into actual one-time
billing events. This made the invoicing scripts MUCH less complicated since they
only had to tabulate one-time billing events that had actually occurred over the
past month, rather than perform complicated logic to infer every one-time event
over an arbitrarily long period.
I bring this up because this architectural legacy explains why billing events
are more complicated than could otherwise be explained from current
requirements. This is why, for instance, when a domain is deleted during the
45-day autorenewal period, the ExpandRecurringBillingEventsAction will still write
out a history entry (and corresponding billing events) on the 45th day, because
it needs to be offset by the cancellation billing event for the autorenew grace
period that was already written out synchronously as part of the delete flow.
This no longer really makes sense, and it would be simpler to just not write out
these phantom history entries and billing events at all, but it would be a
larger modification to fix this, so I'm not touching it here.
Instead, what I have done is to simply not write out the DomainTransactionRecord
in the mapreduce if the recurring billing event has already been canceled
(i.e. because the domain was deleted or transferred). This seems inconsistent
but actually does make sense, because domain transaction records are never
written out speculatively (unlike history entries and billing events); they
correspond only to actions that have actually happened. This is because they were
architected much more recently than billing events, and don't use the
point-in-time hierarchy.
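To make the new behavior concrete, a hedged sketch of the check in the expand
mapreduce (identifiers are illustrative, not the exact Nomulus names):

    // Only attach a DomainTransactionRecord to the synthetic history
    // entry if the autorenewal wasn't already canceled (i.e. the domain
    // wasn't deleted or transferred during the autorenew grace period).
    if (!recurringWasCanceled(recurring, billingTime)) {
      historyEntry =
          historyEntry
              .asBuilder()
              .setDomainTransactionRecords(ImmutableSet.of(autorenewRecord))
              .build();
    }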
So, here's a full accounting of how DomainTransactionRecords work as of this commit:
1. When a domain is created, one is written out.
2. When a domain is explicitly renewed, one is written out.
3. When a domain is autorenewed, one is written out at the end of the grace period.
4. When a domain is deleted (in all cases), a record is written out recording the
deletion.
5. When a domain is deleted in the add grace period, an offsetting record is
written out with a negative number of years, in addition to the deletion record.
6. When a domain is deleted in the renewal grace period, an offsetting record is
likewise written out in addition to the deletion record.
7. When a domain is deleted in the autorenew grace period, there is no record that
needs to be offset because no code ran at the exact time of the autorenew, so
NO additional record should be written out by the expand mapreduce.
*THIS IS CHANGED AS OF THIS COMMIT*.
8. When a domain is transferred, all existing grace periods are canceled and
corresponding canceling records are written out. Note that transfers include a
mandatory, irrevocable 1-year renewal.
9. In the rare event that a domain is restored, all recurring events are
re-created, and there is a 1-year mandatory renewal as part of the restore, with
a corresponding record written out.
So, in summary, billing events and history entries are often written out
speculatively, and can subsequently be canceled, but the same is not true of
domain transaction records. Domain transaction records are only written out as
part of a corresponding action (which for autorenewals is the expand recurring
cronjob).
* rm unused import
* Remove the "showAllOutput" property from the build
It doesn't work very well and has been superseded by "verboseTestOutput",
which does the same thing and more.
* Remove 'value' from RDAP link responses
* Change application type to rdap+json
* Merge remote-tracking branch 'origin/master' into removeValueRdap
* CR response
It is burdensome to have to maintain two sets of tools, one of which
contains a strict subset of the functionality of the other. All admins
should use the same tool and their ability to administer should be
restricted by the IAM roles they have, not the tools they use.
* Add a registry lock password to contacts
* enabled -> allowed
* Simple CR responses, still need to add tests
* Add a very simple hashing test file
* Allow setting of RL password rather than directly setting it
* Round out pw tests
* Include 'allowedToSet...' in registrar contact JSON
* Responses to CR
* fix the hardcoded tests
* Use null or empty rather than just null
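A hedged sketch of the setter-based flow the commits above describe; field and
helper names here are hypothetical, not the actual Nomulus identifiers:

    // Callers provide the cleartext once; only a salted hash is stored,
    // and only contacts flagged as allowed may set it.
    public Builder setRegistryLockPassword(String password) {
      checkArgument(
          allowedToSetRegistryLockPassword,
          "Not allowed to set registry lock password");
      registryLockPasswordSalt = newSalt();  // hypothetical helper
      registryLockPasswordHash = hash(password, registryLockPasswordSalt);
      return this;
    }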
* Add a generate_schema command
Add a generate_schema command to nomulus tool and add the necessary
instrumentation to EppResource and DomainBase to allow us to generate a
proof-of-concept schema for DomainBase.
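A hedged illustration of the kind of instrumentation meant here, using generic
JPA annotations (the actual proof-of-concept may map fields differently):

    @MappedSuperclass
    public abstract class EppResource {
      @Id String repoId;
      @Column DateTime creationTime;
    }

    @Entity
    public class DomainBase extends EppResource {
      @Column String fullyQualifiedDomainName;
      // remaining fields mapped similarly
    }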
* Added forgotten command description
* Revert "Added forgotten command description"
This reverts commit 09326cb8ac.
(checked in the wrong file)
* Added fixes requested during review
* Add a todo to start postgresql container
Add a todo to start a postgresql container from the generate_sql_schema command.
* Clean up token generation
- Allow tokenLength of 0
- If specifying a token length of 0, throw an error if numTokens > 1
* Allow generation of 0-length strings
* Allow for --tokens option to generate specific tokens
* Revert String generators and disallow 0 'length' param
* Add verifyInput method and batch the listed tokens
* Check the number of tokens created
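A hedged sketch of where the input validation ends up after this series of
commits (parameter names are approximations, not the command's exact API):

    void verifyInput() {
      if (!tokenStrings.isEmpty()) {
        // Explicit tokens were given via --tokens; the count parameter,
        // if also set, must agree with how many were listed.
        checkArgument(
            numTokens == 0 || numTokens == tokenStrings.size(),
            "Do not specify both --number and --tokens");
      } else {
        // The 0-length experiment was reverted: length must be positive.
        checkArgument(tokenLength > 0, "Token length must be positive");
      }
    }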
This PR created a new interface named TransactionManager, which defines
methods to manage transactions. Also, access to all transaction-related
methods of Ofy.java is restricted to package-private; they will be exposed
by DatastoreTransactionManager, which is the Datastore implementation of
TransactionManager.
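A rough sketch of the interface's shape (method names are illustrative, not
the exact Nomulus API):

    public interface TransactionManager {
      /** Returns true if a transaction is currently in progress. */
      boolean inTransaction();

      /** Runs the work in a transaction, returning its result. */
      <T> T transact(Supplier<T> work);
    }

    /** Datastore implementation delegating to the now package-private Ofy methods. */
    public class DatastoreTransactionManager implements TransactionManager { ... }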
* Remove unused log argument
* Use the right accept-encoding
By default we request gzip, and theoretically we'd decode it
automatically on our end, but for some reason that's not working. I
tested this on Alpha and it worked.
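For illustration, one way to request an uncompressed response with
java.net.HttpURLConnection, assuming that's the client involved:

    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    // Ask for an identity-encoded body, since we aren't transparently
    // gunzipping responses on our end.
    connection.setRequestProperty("Accept-Encoding", "identity");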
* Create a Gradle task to run the test server
As an artifact of the old build system, the test server relies on having
the built registrar_(bin|dbg)*(\.css)?.js files in place (see ConsoleUiAction
among others). As a result, we create a Gradle task that puts those
files into the correct, readable location before running the test
server.
* Depend on assemble rather than build
* refactor gitignores
Login failures will happen any time that we aren't coming from a
whitelisted IP for that particular TLD. Since whitelists are out of date
(and we don't whitelist IPs for every TLD anyway), those failures aren't
interesting. Store and fully log an interesting failure if one happens.
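A hedged sketch of that filtering; the helper names are hypothetical:

    try {
      loginToMosApi(tld);
    } catch (IOException e) {
      if (isIpWhitelistFailure(e)) {
        // Expected whenever we aren't on this TLD's IP whitelist; not
        // worth a full log entry.
        logger.atInfo().log("Login failed for %s (IP not whitelisted)", tld);
      } else {
        interestingFailure = e;  // stored, then fully logged later
      }
    }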
Using the new GoogleCredentials to access the Drive API caused a 403 Forbidden
exception. So, this PR brings back the old GoogleCredential to
temporarily resolve the production issue while we figure out the
long-term fix.
TESTED=Deployed to alpha and verified exportPremiumTerms succeeded, see
https://paste.googleplex.com/6153215760400384.
* Fail gracefully when copying detailed reports
When the detailed reports are copied from GCS to registrars' Drive
folders, do not fail the entire copy operation when a single registrar
fails. Instead, send an alert email about the failure, and continue to copy the
rest of the reports.
Also, instead of creating duplicates, overwrite the existing files on
Drive.
BUG=127690361
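A hedged sketch of the per-registrar handling described above (helper names
are hypothetical):

    for (String registrarId : registrarIds) {
      try {
        // Overwrites the existing file on Drive instead of creating a duplicate.
        copyDetailedReportToDrive(registrarId);
      } catch (Throwable e) {
        // One registrar's failure shouldn't abort the whole run: alert
        // and continue with the remaining reports.
        sendAlertEmail("Failed to copy detailed report for " + registrarId, e);
      }
    }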
* Build docker image of nomulus tool
In the course of "gradle build", build a docker image of the nomulus tool so
that users can run it; this lets us bundle the Java version with the image.
* Don't extend expiration times for deleted domains
* Flip order and add a comment
* oops forgot a period
* Use END_OF_TIME
* Add tests for expiration times of domains with pending transfers
* Add test for transfer during autorenew and clean up other tests
* Clarify tests
* Add domain expiration check in EppLifecycleDomainTest
* Add a comment and format test files
* Add a metric for EPP processing time regardless of ID/TLD
* Change name to request_time
* Record EPP processing time by traffic type
* grammar
* request type
* semicolon
* Remove the maybeRuntime configuration
It contains dependencies present in the bazel
build but not needed to compile. We now know
they are not needed at runtime either.
Sometimes, the webdriver tests get stuck forever for no reason. It could
be some issue in the test container, but it is hard to root-cause. So,
adding a 30s timeout can either trigger the retry earlier or just let the
test fail.
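One way to express that cap, assuming the webdriver tests use JUnit 4 rules
(the test class name is illustrative):

    import org.junit.Rule;
    import org.junit.rules.Timeout;

    public class ScreenshotTest {
      // Fail after 30 seconds so a stuck test triggers the retrier
      // sooner instead of hanging forever.
      @Rule public final Timeout globalTimeout = Timeout.seconds(30);
    }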
This PR prevents Gradle from copying the golden images
to build/resources/test, so the screenshot tests read
golden images from src/test/resources directly and
display that path in the test log if a test fails. Because
the path points to the actual file in the src/ folder,
the engineer can easily find it.
We got a non-serializable object error when deploying the invoicing
pipeline. It turns out that Beam requires every field in the pipeline
object to be serializable. However, it is non-trivial to make
GoogleCredentialsBundle serializable because almost all of its
dependencies are not serializable and not controlled by us. Also,
it is unnecessary to inject the credential, as the spec11
pipeline also writes output to GCS without an injected
credential. So, removing the injected variable solves the
problem.
TESTED=First reproduced the problem locally by deploying the invoicing pipeline with the previous code; applied this change and successfully deployed the pipeline without any issue.
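For context, Beam serializes the pipeline's DoFns (and all their fields) when
submitting a job, so a field like the one below is enough to break deployment.
The class name and toCsv call are illustrative, not the actual pipeline code:

    class GenerateInvoiceRowsFn extends DoFn<BillingEvent, String> {
      // This field is what broke deployment: Beam requires every field of
      // a DoFn to be serializable, and GoogleCredentialsBundle isn't
      // (almost none of its dependencies are). This change removes it.
      private final GoogleCredentialsBundle credentialsBundle;

      GenerateInvoiceRowsFn(GoogleCredentialsBundle credentialsBundle) {
        this.credentialsBundle = credentialsBundle;
      }

      @ProcessElement
      public void processElement(ProcessContext context) {
        context.output(context.element().toCsv());
      }
    }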
* Attempt login to MosAPI via all available TLDs
There's no reason why we should need a TLD as input here because it
doesn't actually matter which one we use (they all have the same
password).
* Refactor the TLD loop and change cron jobs
* Re-throw the last exception if one exists
* Fix tests and exception
* Remove alpha cron job
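A hedged sketch of the resulting loop-and-rethrow pattern (names are
illustrative):

    Throwable lastError = null;
    for (String tld : allTlds) {
      try {
        // Any TLD works; they all share the same MosAPI password.
        login(tld);
        return;
      } catch (Exception e) {
        lastError = e;  // remember it in case every TLD fails
      }
    }
    if (lastError != null) {
      throw new RuntimeException("Could not log in via any TLD", lastError);
    }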
* Move test resource files into src/test/resources
* fix a test
* Remove references to javatests/ in Java files
* fix import order
* fix semantic merge conflict
* Throw a more useful error message on attempted domain restore reports
Per DomainRestoreRequestFlow's Javadoc, we automatically approve and instantly
enact all domain restore requests, thus we don't use or support restore
reports. This improves the registrar-visible error message to help make this
clearer.