Specifically, this prevents suspended registrars from creating domains or applications. Pending registrars already can't perform these actions because they get an error message when attempting to log in.
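A minimal self-contained sketch of the kind of guard this implies (the state enum and the error text are illustrative, not the actual flow code):

  // Illustrative guard: mutating flows reject suspended registrars explicitly,
  // since (unlike pending registrars) they can still log in.
  final class RegistrarStateGuard {
    enum State { PENDING, ACTIVE, SUSPENDED }

    static void verifyRegistrarIsActive(State registrarState) {
      if (registrarState == State.SUSPENDED) {
        throw new IllegalStateException(
            "Suspended registrars cannot create domains or applications");
      }
    }
  }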
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=170481338
Since HistoryEntry always represents a read-only log of a mutation to a core resource, that mutation should always happen in a transaction, and the HistoryEntry should be saved in that transaction. As such, it's always more accurate to use ofy().getTransactionTime() for the modificationTime of the HistoryEntry rather than just DateTime.now(UTC).
In addition, having these be the exact same timestamp makes it possible to align HistoryEntries with commit log manifests using modificationTime = transactionTime, which is useful for recovery and analysis purposes.
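A rough sketch of the intended pattern, using the Objectify-style calls named above; the builder methods are approximations of the real ones:

  // Save the mutated resource and its HistoryEntry in the same transaction,
  // stamping both with the transaction time instead of DateTime.now(UTC).
  ofy().transact(new VoidWork() {
    @Override
    public void vrun() {
      DateTime now = ofy().getTransactionTime();
      HistoryEntry entry = new HistoryEntry.Builder()
          .setParent(Key.create(domain))
          .setModificationTime(now)  // equals the commit log manifest's transactionTime
          .build();
      ofy().save().entities(domain, entry);
    }
  });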
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=170136957
We can easily end up enlisting too many entity groups (separate
DomainApplications) in a TransactionalFlow when loading all applications
tracked by the DomainApplicationIndex. This makes the load operation
transactionless, to avoid overenlisting.
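A sketch of the shape of the change, wrapping the index load in an Objectify doTransactionless() call (the load helper's name is an approximation):

  // Load applications via DomainApplicationIndex outside the transaction so that
  // their entity groups are not enlisted in the surrounding TransactionalFlow.
  ImmutableSet<DomainApplication> applications =
      ofy().doTransactionless(new Work<ImmutableSet<DomainApplication>>() {
        @Override
        public ImmutableSet<DomainApplication> run() {
          return DomainApplicationIndex.loadActiveApplicationsByDomainName(domainName, now);
        }
      });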
Potential problems:
1. We fail to prevent landrush applications if a single sunrise application
exists. This is likely fine, except for a brief moment in Sunrush when a
sunrise application is made immediately prior to a landrush application. The
result is that we accept an invalid application, which can be remediated
manually.
2. We fail to prevent a domain create for a domain with an open application.
This is a little more sinister, but also unlikely unless someone submits an
application immediately before someone else tries to create the same domain
outright (without an application).
3. We return an invalid DomainCheck response (instead of 'pending allocation').
Not the worst outcome.
4. We reduce the AuctionStatusCommand and GetApplicationIdsCommand to
eventual consistency. Since they're internal tools, that's not too big a deal.
A better solution:
DomainApplications really should just be normalized under a virtualEntityGroup
by fullyQualifiedDomainName, or a hash-bucket like EppResources are. The
DomainApplication -> DomainBase -> EppResource hierarchy seems to be purely for
code reuse, at the cost of Datastore consistency. This would, however, require
quite some refactoring, and a custom resave operation across all
DomainApplications.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=169395586
LogsExportCursor was only used by ExportLogsTaskServlet, which we removed a long time ago. It's just dead code. The PersistedRangeLong type was only written for use by LogsExportCursor, and since it hasn't picked up new users in 3+ years I don't think we need to keep it around.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=169264994
Sometimes requests "die" suddenly, without going through catch/finally blocks.
If this happens, any lock they own will remain locked until it times out (which
can take hours in some cases).
This CL implicitly releases any lock whose owner is no longer running.
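A minimal sketch of the idea; the status-checker helper and lock accessors are hypothetical names, not the actual implementation:

  // When acquiring a lock, treat an existing lock as implicitly released if the
  // request that owns it is no longer running (i.e. it died without unlocking).
  Lock existing = ofy().load().type(Lock.class).id(lockId).now();
  boolean acquirable =
      existing == null
          || existing.getExpirationTime().isBefore(now)
          || !requestStatusChecker.isRunning(existing.getRequestLogId());  // hypothetical accessors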
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168880938
Allow superusers to change the grace period and the pending delete length.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168028545
Allow superusers to change the transfer period to zero years and to change the
automatic transfer length.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=167598314
This is the last of many CLs adding explicit logging in all our domain
mutation flows to facilitate transaction reporting.
The transfer process is as follows:
GAINING sends a TransferRequest to LOSING.
LOSING either acks (TransferApprove), nacks (TransferReject), or does nothing
(auto-approve). For acks and auto-approves, we produce a +1 counter for both
GAINING and LOSING under domain-gaining/losing-successful for each registrar,
to be reported on the approval date plus the transfer grace period. For nacks,
we produce a +1 counter under domain-gaining/losing-nacked for each registrar.
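A self-contained sketch of the bookkeeping described above; the enum constants mirror the counter names in this description rather than the actual report fields in the codebase:

  // Illustrative only: which +1 counters a resolved transfer produces.
  enum TransferReportField {
    GAINING_SUCCESSFUL, LOSING_SUCCESSFUL, GAINING_NACKED, LOSING_NACKED
  }

  static ImmutableMultiset<TransferReportField> countersFor(boolean acked) {
    // Acks and auto-approves are reported on the approval date plus the
    // transfer grace period; nacks get the nacked counters instead.
    return acked
        ? ImmutableMultiset.of(
            TransferReportField.GAINING_SUCCESSFUL, TransferReportField.LOSING_SUCCESSFUL)
        : ImmutableMultiset.of(
            TransferReportField.GAINING_NACKED, TransferReportField.LOSING_NACKED);
  }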
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=166535579
This is the third of many CLs adding explicit logging in all our domain
mutation flows to facilitate transaction reporting.
We add a +1 counter for either grace or nograce deletes, based on the grace period status of the domain. We then search back in time for DOMAIN_CREATE, DOMAIN_RENEW and DOMAIN_AUTORENEW HistoryEntries off the same resource that happened in their corresponding grace periods (5, 5 and 45 days respectively). All transaction records for these events are then given -1 counters to properly account for cancellations in the NET_CREATE and NET_RENEW fields.
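A sketch of the cancellation walk described above; the helpers (history loading, graceWindowFor, record construction) are hypothetical stand-ins for the real flow code:

  // After the +1 grace/nograce delete counter, look back through the domain's
  // history and emit -1 records for any create/renew/autorenew still inside its
  // grace window (5, 5, and 45 days respectively), cancelling NET_CREATE/NET_RENEW.
  for (HistoryEntry prior : loadHistoryEntriesFor(domainKey)) {
    Duration graceWindow = graceWindowFor(prior.getType());  // null if not applicable
    if (graceWindow != null
        && prior.getModificationTime().plus(graceWindow).isAfter(deleteTime)) {
      cancellations.add(cancellationRecordFor(prior));  // a -1 counter
    }
  }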
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=166506010
To log autorenews, we currently run a mapreduce daily that creates synthetic
billing events for each recurring event past its due time. These are all
parented under the original recurring event, which allows these synthetic events to incorrectly stack on the original mutating entry.
We now explicitly create a new HistoryEntry of type DOMAIN_AUTORENEW to log
autorenews alongside other mutating EPP flows. These also parent DomainTransactionRecords for the NET_RENEWS_1_YEAR field, with the reporting time equal to the billing time (which accounts for the autorenew grace period).
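A sketch of the per-autorenew entry, using builder-style calls that approximate the real ones (method and factory names may not match exactly):

  // One DOMAIN_AUTORENEW HistoryEntry per recurrence past its due time; the
  // record's reporting time is the billing time, which already accounts for the
  // autorenew grace period.
  HistoryEntry autorenewEntry = new HistoryEntry.Builder()
      .setType(HistoryEntry.Type.DOMAIN_AUTORENEW)
      .setParent(Key.create(domain))
      .setModificationTime(eventTime)
      .setDomainTransactionRecords(ImmutableSet.of(
          DomainTransactionRecord.create(
              tld, billingTime, TransactionReportField.NET_RENEWS_1_YEAR, 1)))
      .build();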
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=166379700
After working further with domain deletes, I realized we'll need to record multiple reportingTimes under a single HistoryEntry when issuing a -1 counter to cancel grace-period adds. Since the TLD would be the only shared component within a record, we'll just duplicate it across all records to avoid an unnecessary layer of hierarchy.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=166261413
This is the second of many CLs adding explicit logging in all our domain
mutation flows to facilitate transaction reporting.
Adds and renews each result in a +1 counter for the NET_ADDS/RENEWS_#_YR field;
I've added simple helper functions that map (number of years, add or renew) to
the corresponding enum value.
Allocates are just a special case of adds, and are counted in a similar manner.
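A sketch of the kind of mapping helper referred to above; the valueOf-based lookup is an illustration of the idea, not necessarily the real implementation:

  // Maps (number of years, add vs. renew) to the matching report field constant.
  static TransactionReportField netFieldFromYears(int years, boolean isAdd) {
    checkArgument(years >= 1 && years <= 10, "Domain terms must be 1-10 years: %s", years);
    return TransactionReportField.valueOf(
        String.format("NET_%s_%d_YR", isAdd ? "ADDS" : "RENEWS", years));
  }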
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=165963249
This is the first of many CLs adding explicit logging in all our domain mutation flows to facilitate transaction reporting.
Restores are relatively simple: they happen immediately, so the reporting time is just the time of the HistoryEntry, and we add a single "RESTORED_DOMAINS" count of 1.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=165639084
This change adds the persisted data model necessary to facilitate transaction
reporting. TransactionRecord is an embedded repeated class within HistoryEntry
which is only added to when a HistoryEntry is created that counts towards
transaction reporting.
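A rough sketch of the shape of this data model, using Objectify's @Embed for the repeated embedded class (field names are approximations):

  // Embedded and repeated on HistoryEntry; populated only for entries that count
  // toward transaction reporting.
  @Embed
  public static class TransactionRecord extends ImmutableObject {
    String tld;
    DateTime reportingTime;              // when the counter should be reported
    TransactionReportField reportField;
    Integer reportAmount;                // typically +1, or -1 for a cancellation
  }

  // On HistoryEntry itself:
  Set<TransactionRecord> domainTransactionRecords;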
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=165619552
This completes the data/functionality migration for multiple DNS writers.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163835077
A NullPointerException reported via StackDriver appears to stem from trying to load the claims list right at the moment it was being updated. Since the update only happens once every 12 hours, retrying the load once should fix the problem, if this is really the cause.
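A minimal sketch of a retry-once wrapper of the kind implied here (the wrapper itself is illustrative, not the actual fix):

  // The claims list is rewritten at most once every 12 hours, so a single retry
  // should step past the brief window in which the load can blow up.
  static ClaimsListShard getClaimsListWithRetry() {
    try {
      return ClaimsListShard.get();
    } catch (NullPointerException e) {
      return ClaimsListShard.get();  // second attempt should see a consistent list
    }
  }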
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163732624
It was buggy (didn't work) and was never actually used.
Why it was never actually used: for it to be used, executeWithLock would have to be called
with different requesters on the same lockId. That never happened in the code.
How it was buggy: logically, the queue is deleted on release of the lock (meaning it was
meaningless the only time it mattered - when the lock isn't taken). In
addition, a different bug meant that having items in the queue prevented the
lock from being released, forcing all other tasks to wait for the lock
timeout even if the task that acquired the lock had long since finished.
Alternative: fix the queue. This would mean we don't delete the lock on release (since we want to keep the queue). Instead, we resave the same lock with the expiration date set to START_OF_TIME. In addition, we need to fix the .equals used to determine whether a lock is the same as the acquired lock, and instead use some isSame function that ignores the queue.
Note: the queue is dangerous! An item (calling class / action) at the front of the queue means no other calling class can get that lock. Everything waits for the first calling class to be re-run, but that might take a long time (depending on that action's rerun policy) and might never happen at all (if for some reason that action decided it was no longer needed without acquiring the lock), causing all other actions to stall forever!
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163705463
This makes the code more understandable from callsites, and also forces
users of this function to deal with the situation where the registrar
with a given client ID might not be present (it was previously silently
NPEing from some of the callsites).
This also adds a test helper method loadRegistrar(clientId) that retains
the old functionality for terseness in tests. It also fixes some instances
of using the load method with the wrong cachedness: some high-traffic
uses (WHOIS) that should have caching, and some low-traffic reporting uses
that don't benefit from caching and so might as well always be current.
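A sketch of what a production callsite looks like after the change; the method name approximates the cached load variant and the exception is a placeholder for whatever each caller throws:

  // Callers must now handle a missing registrar explicitly instead of NPEing later.
  Optional<Registrar> registrar = Registrar.loadByClientIdCached(clientId);
  if (!registrar.isPresent()) {
    throw new IllegalArgumentException("No registrar exists with client ID: " + clientId);
  }
  return registrar.get();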
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=162990468
Note that even though the nomulus command line tool now supports multiple
DNS writers for all subcommands, this still won't work quite yet because
the DNS task queue format migration from [] is still in progress.
After next week's push that migration will be complete and we can remove
the final restriction against only having one DNS writer per TLD.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=162490399
After this point all data is migrated to use the new canonical
plural version, and subsequent code changes can be made that use
multiple writers.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161673486
This is the first step in a multi-step data migration to allow multiple
DNS writers per TLD. The overall process looks like this:
1. Add a plural DNS writers field with backfill (this commit).
2. Deploy it.
3. Run the ResaveEnvironmentEntitiesCommand to populate this new field
on all entities.
4. Update the code to use the new field everywhere.
5. Deploy it.
6. Delete the now-unreferenced, old deprecated singular value field.
This process is rollback-safe.
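A sketch of what step 1 looks like, assuming an Objectify @OnLoad backfill from the deprecated singular field (field names are approximations):

  // On the per-TLD Registry entity: keep the old singular field, add the plural
  // one, and backfill on load so the resave in step 3 persists it everywhere.
  @Deprecated
  String dnsWriter;

  Set<String> dnsWriters;

  @OnLoad
  void backfillDnsWriters() {
    if ((dnsWriters == null || dnsWriters.isEmpty()) && dnsWriter != null) {
      dnsWriters = ImmutableSet.of(dnsWriter);
    }
  }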
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161253436
The billing account map will be serialized in the following format:
{currency1=id1, currency2=id2, ...}
In order for the output to be deterministic, the billing account map is stored as a sorted map.
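A self-contained sketch of producing that deterministic form with a sorted map keyed by currency (an illustration of the format above, not the actual serializer):

  // Sorting by currency code makes the serialized output stable across runs.
  ImmutableSortedMap<CurrencyUnit, String> billingAccountMap =
      ImmutableSortedMap.copyOf(unsortedMap);
  String serialized =
      "{" + Joiner.on(", ").withKeyValueSeparator("=").join(billingAccountMap) + "}";
  // e.g. {JPY=789xyz, USD=123abc}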
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161075814
Now that the registration period has been added to DomainApplication, we
can remove this @OnLoad that was populating it for objects that were
missing the period.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=159464438
When doing update_registrar, it is now possible to specify only the currencies and account IDs that need updating.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=159262119
Memcache is already off, but now it's not in the code anymore.
This includes removing the domain creation failfast, since that is actually
slower now than just running the flow: all you gain is a non-transactional
read instead of a transactional read, but the cost is that you always pay
that read, which drives up latency.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=158183506
Changed [] to use v1 instead of v1beta1, and replaced v1beta1 with v1 in all the Java files.
If there are special build rules for open source, etc., that also need to be updated, or non-"TAP-able" tests that need to be run, please check that they are OK.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=157895888
This was needed to correct bad data (LINKED status values on EppResources). The code has been fixed to no longer persist LINKED on any resources and I ran a resave all action yesterday to remove all persisted LINKED status values, so the migration @OnLoad can be safely removed now.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156580334
This replaces the memcache caching, which we think is overall a bad idea.
We load all registrars at once instead of caching each as needed, so that
the loadAllCached() methods can be cached as well, and therefore will
always produce results consistent with loadByClientIdCached()'s view of the
registrar's values. All of our prod registrars together total about 300 KB of data
right now, so this is hardly worth optimizing further, and in any case this
will likely reduce latency even further since most requests will be
served out of memory.
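A sketch of the all-at-once in-memory cache, using a Guava memoizing supplier (the TTL and the loadAll() helper are placeholders, not the configured values):

  // Cache the full registrar set so loadAllCached() and loadByClientIdCached()
  // always see a mutually consistent snapshot, served out of memory.
  private static final Supplier<ImmutableMap<String, Registrar>> ALL_REGISTRARS_CACHE =
      Suppliers.memoizeWithExpiration(
          () -> Maps.uniqueIndex(loadAll(), Registrar::getClientId),
          10, TimeUnit.MINUTES);  // placeholder TTL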
While I was in the Registrar file, I standardized the error messages for incorrect
password and clientId length to use the same format, and cleaned up a few
random things I noticed in the code.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156151828
The absence of these fields causes RDE failures, so they are in effect
required on any functioning registry system. We are currently
experiencing problems in sandbox caused by null values on these fields.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=155474895
We've determined that getting correctness semantics right, even
in the few cases that it is possible to do so (see linked bug for
audit), is not worth the bother in terms of highly complicated code
and potential bugs. This CL turns off memcache at the Ofy level
but doesn't rip out the annotations etc. so that we can quickly
turn it back on if this turns out to have been a mistake.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=155227761
Also added corresponding getters and setters for the new field. Note that
nothing has changed on the RDAP front for now, as the CL&D only concerns WHOIS.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=155116134
Only Ofy itself and its two helpers (AugmentedSaver and
AugmentedDeleter) need to use the real ofy(). All other
callers should be using Ofy. Fixing this even though it
doesn't change anything because I found it baffling to
follow the code while trying to make a small change.
Update: added a presubmit to enforce this.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154456603
We ran into a bunch of prober deployment issues this past week when
attempting to spin up a new cluster because the newly created prober
TLDs had null values for the dnsWriter field. Given that VoidDnsWriter
exists, we can require that dnsWriter always be set, and have people
use that if DNS publishing is not required.
Also cleans up a bunch of related inconsistent exception messages and
tests that were not verifying said exception messages properly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154325830
TESTED=The test fails if you change line 134 in Ofy to not use memcache
and use the unchanged original Registry.get() code. This is the
expected behavior.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154226534
TESTED=For all tests, I added @Cache to DomainBase because otherwise the tests will
fail. We aren't ready to do this in prod yet, which is why the tests are still
marked @Ignore. The new tests fail if you change line 134 in Ofy to not use memcache
and either use the unchanged original DomainCreateFlow code, or use the new
inlined code and change loadWithMemcache() to load(). They pass with the new
inlined code that calls loadWithMemcache(), as long as the @Cache is added to
DomainResource.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154224748
This primarily adds accessors to EppInput that will be used for flow reporting
logging in FlowReporter. Specifically, it adds:
- Optional<String> getResourceType() -> domain/host/contact
- Optional<String> getSingleTargetId() -> for SingleResourceCommands
And in addition, it adjusts getCommandName() so that it's now named
getCommandType() for better parallelism with the new getResourceType() (since
getResourceName() would be misleading), and it changes the value returned to be
lowercased, again for consistency. This isn't an issue because getCommandName()
isn't actually used anywhere right now (it was formerly used for EPP whitebox
metrics, but no longer due to recent changes there).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=153851957
This is required by the ICANN Consistent Labeling & Display policy, which mandates that the WHOIS domain query response contain the registrar abuse contact's phone number and email address. Adds a helper function to load the registrar contact of a certain type for a given registrar.
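A sketch of such a helper, filtering a registrar's contacts by type (accessor names approximate the real ones):

  // Returns the first contact of the given type (e.g. ABUSE) for a registrar, if any.
  static Optional<RegistrarContact> getContactOfType(
      Registrar registrar, RegistrarContact.Type type) {
    return registrar.getContacts().stream()
        .filter(contact -> contact.getTypes().contains(type))
        .findFirst();
  }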
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=153606137
We now send PendingActionNotificationResponses in our poll messages upon completion of an asynchronous contact or host deletion. This is part 1 of 2, which begins logging the Trid in all enqueued Host/Contact deletion flows for use in batch deletions, and optionally consuming the resultant Trid info to emit a Host/ContactPendingActionNotificationResponse.
Part 2 will make this response emission non-optional, which will happen once the queue is cleared of all non-Trid containing tasks.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=153084197
This is better than calling assertTldExists() inside a for loop because you can throw a single exception reporting all bad TLDs at once rather than only getting as far as the first failure. And then it's also a one-liner instead of 3 lines.
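A self-contained approximation of the batch assertion (not the actual utility), showing why it can report everything at once:

  // Collect every nonexistent TLD before throwing, instead of failing on the first.
  static void assertTldsExist(Iterable<String> tlds) {
    List<String> badTlds = new ArrayList<>();
    for (String tld : tlds) {
      if (!Registries.getTlds().contains(tld)) {
        badTlds.add(tld);
      }
    }
    checkArgument(badTlds.isEmpty(), "TLDs do not exist: %s", badTlds);
  }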
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=152412876
As part of b/36599833, this makes FlowRunner log the appropriate ICANN activity
report field name for each flow it runs as part of a structured JSON log
statement which can be parsed to generate ICANN activity reports (under the key
"icannActivityReportField").
In order to support this, we introduce an annotation for Flow classes called
@ReportingSpec, along with a corresponding enum of values for this annotation,
IcannReportingTypes.ActivityReportField, which stores the mapping from constant
enum values to field names.
The mapping from flows to fields is fairly obvious, with three exceptions:
- Application flows are all accounted under domains, since applications are
technically just deferred domain creates within the EPP protocol
- ClaimsCheckFlow is counted as a domain check
- DomainAllocateFlow is counted as a domain create
In addition, I've added tests to all the corresponding flows verifying that we
are indeed logging what we expect.
We'll also need to log the TLD for this to be useful, but I'm doing that in a
follow-up CL.
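A sketch of the annotation and its use, following the naming above (the retention/target details and the example enum constant are assumptions):

  // Ties a flow class to the ICANN activity report field that FlowRunner logs
  // under the "icannActivityReportField" key.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface ReportingSpec {
    IcannReportingTypes.ActivityReportField value();
  }

  // Example: claims checks are counted as domain checks.
  @ReportingSpec(ActivityReportField.DOMAIN_CHECK)
  public final class ClaimsCheckFlow implements Flow { /* ... */ }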
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=151283411