Note that this gets rid of anchor tenant codes in reserved lists (yay!), which
are no longer valid. They have to come from allocation tokens now.
This removes support for LRP from the domain application create flow (that's fine;
we never used it, and I'm going to delete all of LRP later). It also falls back to
reading allocation tokens from EPP authcodes for now, but that fallback will be
removed once we switch fully to the allocation token mechanism.
This doesn't yet allow registration of RESERVED_FOR_SPECIFIC_USE domains using
the allocation token extension; that will come in the next CL. Ditto for
showing these reserved domains as available on domain checks when the allocation
token is specified.
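For illustration, the fallback boils down to something like the following sketch
(the extension and accessor names here are hypothetical, not the actual API):
  Optional<AllocationTokenExtension> tokenExtension =
      eppInput.getSingleExtension(AllocationTokenExtension.class);
  String allocationToken =
      tokenExtension
          .map(AllocationTokenExtension::getAllocationToken)
          .orElse(authInfo.getPw().getValue());  // authcode fallback, to be removed later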
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209019617
This is one last hanging piece of work left over from last year's Java 8
migration. There's no functionality changes in this CL, just refactoring.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=201947600
You don't want to use the cache when loading premium lists for the purpose of
updating them, but you definitely do still want to use it when checking the
price of individual domains.
In [] the cache clearing of premium lists on update was removed. This is a good
thing in aggregate, because the cache is per-instance and thus misleading, but it
also meant we could no longer update the same premium list twice within an hour:
the second update would hit a "PremiumList was concurrently edited" exception,
owing to first loading the stale version from the cache for the purpose of
updating it. Now we bypass the cache for that purpose.
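As a minimal sketch of that distinction using a Guava cache (class and method
names here are hypothetical):
  LoadingCache<String, PremiumList> premiumListCache =
      CacheBuilder.newBuilder()
          .expireAfterWrite(60, TimeUnit.MINUTES)
          .build(CacheLoader.from(PremiumLists::loadDirectly));
  PremiumList forPricing = premiumListCache.getUnchecked(name);  // cached: fine for price checks
  PremiumList forUpdate = PremiumLists.loadDirectly(name);       // bypasses the cache before editing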
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=197768142
The issue is that the premium list cache is configured to persist for 60
minutes. So after updating the list, checks/creates for up to the next 60
minutes could still be referring to the old list. That's fine and dandy, unless
you delete the old premium list immediately (*bad*), which makes all domains
appear to now be non-premium for as long as the cache lasts. The reason deleting
the premium list entries makes names appear as non-premium is that a load-by-key
existence check with the domain label itself is used to determine if a name is
premium.
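Roughly, that check is a keyed load in the style below (hypothetical names,
Objectify-style), which is why deleting the old entries immediately makes every
label look standard until the cache expires:
  PremiumListEntry entry =
      ofy().load().key(Key.create(revisionKey, PremiumListEntry.class, label)).now();
  boolean isPremium = (entry != null);  // no entry under the deleted revision => "not premium"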
I also removed a misleading cache update statement, which doesn't do what it
appears to be doing (it appears to fix this issue): because the cache is
instance-level, even if the premium list were updated from a frontend instance,
only one of 100 instances would have its cache updated. But it's updated from the
tools service anyway, so it's guaranteed not to be a shared cache with any
instance serving EPP traffic.
On a side note, I introduced this bug on 2014-10-27 in []; the domain label list
refactor was my Noogler project.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=197033604
This enables sharded DNS publishing on a per-TLD basis. Instead of a TLD-wide lock, the sharded scheme locks each update on the shard number, allowing parallel writes to DNS.
We allow N (the number of shards) to be 0 or 1 for no sharding, and N > 1 for an N-way sharding scheme. Unless explicitly set, all TLDs default to a numShards of 0, so we don't have to reload all registry objects explicitly.
WARNING: This will change the lock name upon deployment for the PublishDnsAction from "<TLD> Dns Updates" to "<TLD> Dns Updates shard 0". This may cause concurrency issues if the underlying DNSWriter is not parallel-write tolerant (currently all production usages are ZonemanWriter, which is parallel-tolerant, so no issues are expected).
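As a sketch of the scheme (names illustrative): with numShards of 0 or 1
everything maps to shard 0, which is why the lock name gains the "shard 0" suffix.
  int shard = (numShards <= 1) ? 0 : Math.floorMod(domainName.hashCode(), numShards);
  String lockName = String.format("%s Dns Updates shard %d", tld, shard);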
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187525655
In Truth8, we can do assertThat(stream) directly. It's less verbose and clearer
in most cases.
Note that for the "finishers" (e.g. "containsExactyElementsIn") - streams are
still not allowed. So when there is:
assertThat(stream.map(someTransformation).collect(toList()))
.containsExactlyElementsIn(expecteStream.map(someTransformation).collect(toList()));
I kept the .collect in the assertThat to preserve the symmetry with the
finisher.
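For example, with a static import of com.google.common.truth.Truth8.assertThat,
a simple stream assertion now reads directly (illustrative case):
  assertThat(Stream.of("foo", "bar")).containsExactly("foo", "bar");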
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179697587
This is in preparation for running the automatic refactoring script that
will replace all ExpectedExceptions with use of JUnit 4.13's assertThrows/
expectThrows.
Note that I have recorded the call sites of assertions about EppExceptions
being marshallable and will edit those specific assertions back in after
running the automatic refactoring script (which does not understand them).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178812403
The only remaining methods on ExceptionRule after this are methods that
also exist on ExpectedException, which will allow us to, in the next CL,
swap out the one for the other and then run the automated refactoring to
turn it all into assertThrows/expectThrows.
Note that there were some assertions about root causes that couldn't
easily be turned into ExpectedException invocations, so I simply
converted them directly to usages of assertThrows/expectThrows.
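For instance, a root-cause assertion converts directly along these lines (the
method under test is hypothetical; the Truth assertion on the cause is just one
way to express it):
  IllegalArgumentException thrown =
      assertThrows(IllegalArgumentException.class, () -> doSomethingInvalid());
  assertThat(thrown).hasCauseThat().isInstanceOf(NumberFormatException.class);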
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178623431
JUnit 4.13 isn't released yet, but these functions are essential to being
able to write good test assertions about thrown exceptions. Rather than
not using them until JUnit 4.13 comes out (which might be a while, as JUnit
4.12 came out almost three years ago), we're making the same decision that
Google made internally, which is to backport them. Indeed, the only reason
this commit is necessary is to fix breakage in the Nomulus build, as the
existing code worked fine internally where the backports are already in
place.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=173435579
I could've sworn we were already doing this, but apparently not? Anyway,
ROID suffixes have a number of requirements on them that weren't being
enforced, so this enforces them. All existing production data is compliant
with these requirements; the only existing bad data we have is in alpha and
sandbox.
ROID suffixes are now required to match the regex ^[A-Z0-9_]{1,8}$
See also https://tools.ietf.org/html/rfc5730
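The enforcement amounts to something like the following sketch (the actual field
and message wording may differ):
  private static final Pattern ROID_SUFFIX_PATTERN = Pattern.compile("^[A-Z0-9_]{1,8}$");
  checkArgument(
      ROID_SUFFIX_PATTERN.matcher(roidSuffix).matches(),
      "ROID suffix must be 1 to 8 upper-case letters, digits, or underscores");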
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=173400001
This was a surprisingly involved change. Some of the difficulties included
java.util.Optional purposely not being Serializable (so I had to move a
few Optionals in mapreduce classes to @Nullable) and having to add the Truth
Java8 extension library for assertion support.
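A minimal sketch of the Serializable workaround (class and field names are
illustrative):
  class ExampleMapreduceInput implements Serializable {
    // java.util.Optional is intentionally not Serializable, so serialized
    // classes hold a @Nullable field and null-check at the use site instead.
    @Nullable private String cursor;  // was Optional<String>
  }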
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=171863777
The concrete implementation of a Metric is not important when asserting on the values it contains. Therefore, this CL removes Metric<T> as a type parameter of AbstractMetricSubject. As a result, the two implementations of the abstract subject can be used on any Metric<Long> and Metric<Distribution>, respectively.
Also migrate to Subject.Factory from deprecated SubjectFactory.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=171012012
This completes the data/functionality migration for multiple DNS writers.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163835077
After this point all data is migrated to use the new canonical
plural version, and subsequent code changes can be made that use
multiple writers.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161673486
We've determined that getting correctness semantics right, even
in the few cases where it is possible to do so (see linked bug for
audit), is not worth the bother in terms of highly complicated code
and potential bugs. This CL turns off memcache at the Ofy level
but doesn't rip out the annotations etc. so that we can quickly
turn it back on if this turns out to have been a mistake.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=155227761
We ran into a bunch of prober deployment issues this past week when
attempting to spin up a new cluster because the newly created prober
TLDs had null values for the dnsWriter field. Given that VoidDnsWriter
exists, we can require that dnsWriter always be set, and have people
use that if DNS publishing is not required.
Also cleans up a bunch of related inconsistent exception messages, and tests
that weren't verifying said exception messages properly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154325830
TESTED=The test fails if you change line 134 in Ofy to not use memcache
and use the unchanged original Registry.get() code. This is the
expected behavior.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154226534
This is a follow-up to Lai's refactoring of the get reservation types
code to return a set rather than a single type. Since we're always
returning a set now, the more natural way to represent a label that is
not reserved is to return an empty set rather than a set containing
UNRESERVED.
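Call sites now look roughly like this (method name is illustrative):
  ImmutableSet<ReservationType> types = getReservationTypes(label, tld);
  if (types.isEmpty()) {
    // Not reserved at all; previously this was types.contains(UNRESERVED).
  }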
Also fixes some minor style issues regarding static importing and test method
naming that I ran across (no logic implications).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=151132116
A new field (allowedNameservers) is added to ReservedListEntry that stores the allowed nameservers for the label. The field itself is a comma-separated string, but the actual lines within a reserved list file (from which the field is parsed) use colons to separate nameservers, to avoid conflicting with the commas used as primary separators in the CSV file.
Combined with upcoming update(s) that enable locking down an entire TLD to only delegate domains with a nameserver-restricted reservation type, this change will enable us to restrict domain delegation to the nameservers explicitly listed in allowedNameservers, in order to prevent malicious delegation in case the registrar for a brand TLD is compromised.
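Parsing one such line works roughly as follows (illustrative sketch; the real
column layout may differ), e.g. for
"mylabel,NAMESERVER_RESTRICTED,ns1.example.com:ns2.example.com":
  List<String> fields = Splitter.on(',').trimResults().splitToList(line);
  ImmutableSet<String> allowedNameservers =
      ImmutableSet.copyOf(Splitter.on(':').omitEmptyStrings().split(fields.get(2)));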
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=149989330
This CL defines metrics for both premium and reserved lists, but actually uses only the reserved list metrics. The premium list metrics will be used in a future CL.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=149982639
Instead of only returning the most severe one, return all applicable ones. This is because the reservation types have grown into a list of types that are not strictly comparable but orthogonal to each other. We can no longer depend on the most severe type incorporating all the properties of those beneath it, so returning all of them and treating them one by one at the call site is the correct behavior.
Due to a constraint imposed in eppcom.xsd, the domain check response can only contain a reservation reason of fewer than 32 characters, so when a label has multiple reservation types we return the message for the type with the highest severity.
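Assuming the ReservationType enum constants are declared in order of increasing
severity (a sketch, not necessarily the actual implementation), picking that
message is just:
  ReservationType mostSevere = Collections.max(reservationTypes);  // enums compare by ordinal
  String reason = mostSevere.getMessageForCheck();                 // hypothetical accessor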
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=149776106
This is an error condition that will soon throw an exception when
attempting to register the domain name, so it's good to let the registry
operator know of the error when it is first introduced.
Unfortunately there's still a backdoor that allows duplicate labels
that's harder to protect against (that this commit doesn't cover): the
case where reserved lists are already applied to a TLD, then one of the
reserved lists is updated to add another auth code, which then conflicts
with one on a different reserved list.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=149443007
Principally, this moves a load method into DatastoreHelper that is now
only used by tests.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=148649087
It was kind of messy having all of that logic living alongside the
entities themselves.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=148498024
This also cleans up the PremiumList API so that it only has one
method for checking premium prices, which is by TLD, rather than two.
I will be refactoring a lot of the static methods currently residing in
the PremiumList class into a separate utils class, but I don't want to
include too many changes in this one CL.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=148475345
This is the first step in the migration to remove the need to load all of
the premium list entries every time the cache expires (which causes slow-
downs). Once this is deployed, we can re-save all premium lists, creating
the bloom filters, and then the next step will be to read from them to
more efficiently determine if a label might be premium.
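The idea, roughly, using Guava's BloomFilter (names illustrative): build the
filter from all labels when the list is saved, then consult it on the check path
so that most non-premium labels never require loading an entry at all.
  BloomFilter<String> probablePremiumLabels =
      BloomFilter.create(Funnels.stringFunnel(StandardCharsets.US_ASCII), premiumLabels.size());
  premiumLabels.forEach(probablePremiumLabels::put);
  // A negative answer is definitive; a positive answer means the individual
  // entry still has to be loaded to confirm (and to get the price).
  if (!probablePremiumLabels.mightContain(label)) {
    return Optional.empty();
  }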
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=147525017
It wasn't being used by any actual code, and having helper methods handling
saving/persistence on entities like this is not a pattern we want to encourage,
since it hides Datastore transactions from further up in the call chain. The
idea is that you can always look for ofy() calls in the same layer of code to
see where persisted data is being changed.
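In other words, the save is spelled out at the call site, Objectify-style, rather
than tucked into an entity helper (entity name is illustrative):
  ofy().transact(() -> ofy().save().entity(updatedPremiumList).now());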
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143036027