The parameters were optional during the transition to allow old jobs stuck in the queue to work properly. It's been two months now, so it's time to end the transition.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190235532
Implement a checkbox in the "Resources" tab to allow registrars to toggle
their "premium price ack required" flag.
Tested:
Verified the console functionality by hand. I've started work on an
automated test, but we can't actually test those from blaze, and the
kokoro tests are way too time-consuming to be practical for development, so
we're going to have to either find a way to run those locally outside of
the normal process or make do without a test.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190212177
A "mark" tells us that the holder owns the trademark for a given domain name. It is signed for authentication.
If the signature's certificate is either "not yet valid" or "expired", we return explicit errors to that effect.
But in addition to the signature's certificate, the mark itself might not be valid yet or might have already expired. Right now, if that happens, we return an error saying "the mark doesn't match the domain name".
That is wrong: the mark can match the domain name and merely be expired. Returning "the mark doesn't match the domain name" in that case is misleading.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190069976
'afterFinalFailure' is called just before rethrowing a non-retrying error from
the retrier. This can happen either because the exception shouldn't be retried,
or because we exceeded the maximum number of retries.
The same thing can be done by catching that thrown error outside of the
retrier:
  retrier.callWithRetry(
      callable,
      new FailureReporter() {
        @Override
        void afterFinalFailure(Throwable thrown, int failures) {
          // do something with thrown
        }
      },
      RetriableException.class);
is (almost) the same as:
  try {
    retrier.callWithRetry(callable, RetriableException.class);
  } catch (Throwable thrown) {
    // do something with thrown
    throw thrown;
  }
("almost" because the retrier might wrap the Throwable in a RuntimeException,
so you might need to getCause or getRootCause. Also - there is the
"beforeRetry" I ignored for the example)
Removing "afterFinalFailure" also makes the FailureReporter in line with Java 8
functional interface - meaning we can more easily create it when we do need to
override "beforeRetry".
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189972101
TldFanoutAction fans out a given endpoint to all TLDs (either TEST, REAL, or
both).
However, it is also used to delegate a single endpoint request that we want set
in a specific queue (so we can control retries). We do that by setting the TLD
list to "runInEmpty" rather than "forEachRealTld" or "forEachTestTld".
Currently, using "runInEmpty" would still specify a TLD - but that TLD would be
the empty string. This is a bug: it sets the TLD parameter to a bad value. It
worked only because none of the endpoints called with "runInEmpty" were using
the TLD parameter.
However, this will (and does) break if either (a) the endpoint accepts an
optional TLD parameter (like deleteProberData does), or (b) the given endpoint
already has a TLD parameter in it (we want to run the endpoint with a single
TLD, but still use the "fanout" to set the right queue).
This CL fixes several things:
- if runInEmpty is given, no TLD parameter is added
- 'runInEmpty' is now mutually exclusive with 'forEach*Tld' and 'excludes' (see the sketch below)
- added some sanity checks and logging
- removed the buggy and unused "':tld' in path is replaced by TLD" behavior
- in cron.xml, removed the documentation for :tld and the broken :registrar
Note that none of the endpoints used with runInEmpty fanout had the TLD parameter prior to deleteProberData.
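A minimal sketch of the new mutual-exclusivity check, using the parameter names from this description; the actual validation code in TldFanoutAction may differ:
  // Hypothetical validation: runInEmpty can't be combined with the
  // per-TLD fanout parameters or with exclusions.
  if (runInEmpty && (forEachRealTld || forEachTestTld || !excludes.isEmpty())) {
    throw new IllegalArgumentException(
        "runInEmpty must not be combined with forEach*Tld or excludes");
  }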
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189954585
<launch:create> has an optional type argument that can take either "application" or "registration":
https://tools.ietf.org/html/rfc8334#section-3.3.1
We get that type via createExtension.get().getCreateType(), which returns null if the type= argument isn't given.
In that case, we need to decide based on the TLD - application for end-date sunrise, and registration for start-date sunrise.
For now we can't do that, because FlowPicker doesn't have access to the TLD information. Until that is fixed, we decide as follows (see the sketch below):
- landrush and sunrush phases default to APPLICATION, because no registration is possible for them.
- sunrise defaults to REGISTRATION, because we're currently launching start-date sunrise, which uses registration.
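A minimal sketch of that interim defaulting; the LaunchPhase / CreateType names and the exact control flow in FlowPicker are assumptions for illustration:
  CreateType type = createExtension.get().getCreateType();
  if (type == null) {
    // Hypothetical fallback: FlowPicker can't see the TLD yet, so we key
    // off the launch phase alone, per the rules above.
    type =
        phase.equals(LaunchPhase.SUNRISE) ? CreateType.REGISTRATION : CreateType.APPLICATION;
  }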
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189942568
We are no longer using Eclipse internally and have therefore stopped maintaining
the tooling related to it. We cannot guarantee that any pertinent information remains correct
and relevant in the future.
Users are advised to use IntelliJ (Community Edition is fine) with the Bazel plugin
if they want IDE support.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189586127
Associate the custom metrics with the correct monitored resource type. The labels of the monitored resource are either obtained from environment variables for the container, configured in the GKE deployment file, or queried from the GCE metadata server. Using the correct monitored resource can improve performance and reduce out-of-order metric writes.
Also changed the metrics display name to be more descriptive.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189184411
More information: []
Tested:
TAP for global presubmit queue
[]
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188968676
Add the "shell" command which lets you run multiple other command in a single
session, sparing you the initialization costs for all but the first of them.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188712815
Allow specifying a certificate hash instead of a certificate file. This makes things easier when only setting up EAP registrars, since the certificate hash can be easily pulled from existing registrars (SUNRISE, GA, etc.) with automation.
Also fixes a bug where we always expected the registrar name + phase string to be at least 7 characters long.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188511561
If the proxy protocol header contains a malformed string, such as "PROXY UNKNOWN", instead of throwing and killing the connection, use the TCP source IP as the remote IP.
Also changed how the header is read from the buffer, to avoid a potential Netty resource leak. Originally the header was read into another ByteBuf, which needs to be explicitly released in order for Netty to reclaim its memory (http://netty.io/wiki/reference-counted-objects.html). Now we just read it into a byte array and let the JVM GC it.
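A minimal sketch of the byte-array approach, assuming "in" is the inbound ByteBuf and that the proxy protocol v1 header ends with a newline; variable names are illustrative:
  // Read the header into a plain byte array: no reference counting needed,
  // the JVM garbage-collects it when we're done.
  int headerLength = in.bytesBefore((byte) '\n') + 1;
  byte[] header = new byte[headerLength];
  in.readBytes(header);
  String headerLine = new String(header, java.nio.charset.StandardCharsets.US_ASCII).trim();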
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188047084
This simplifies calculating the overall invoice by giving RESTORE fees a
period equal to the period of the associated RENEW (1 year). Older
BillingEvents will not be backfilled, and will have periodYears = null.
Invoicing and business both agree this is a valid representation, since a RESTORE fee is intrinsically tied to the 1-year RENEW it's associated with.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188041777
When not running locally, the logging formatter is set to convert the log record to a single-line JSON string that the Stackdriver logging agent running in GKE will pick up and parse correctly.
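A minimal sketch of that idea using java.util.logging; the JSON field names follow the Stackdriver agent's conventions but are assumptions here, not the actual formatter:
  import java.util.logging.Formatter;
  import java.util.logging.LogRecord;

  // Hypothetical single-line JSON formatter: one record per line, so the
  // agent can parse severity and message out of each line.
  public final class JsonLineFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
      return String.format(
          "{\"severity\":\"%s\",\"message\":\"%s\"}%n",
          record.getLevel(),
          formatMessage(record).replace("\\", "\\\\").replace("\"", "\\\""));
    }
  }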
Also removed the redundant logging handler in the proxy frontend connection. It had two problems: 1) it could leak PII when all frontend traffic is logged, such as client IPs (though this is less of a concern because the GCP TCP proxy load balancer masquerades source IPs); 2) it only logged the HTTP request/response that the frontend connection sends to/receives from the backend connection, but the backend already has its own logging handler to log the same message that it gets from/sends to the GAE app, so the logging in the frontend connection does not really add information.
Logging of some potentially PII-bearing information, such as the source IP of a proxied connection, is also removed.
Also added a k8s autoscaling object that scales the containers based on CPU load. The default target load is 80%. This, in combination with GKE cluster VM autoscaling, means that when traffic is low, we'll only have one VM running one container of the proxy.
Fixes a bug where the MetricsComponent generated a separate ProxyConfig that never called the parse method on the command line args passed in, resulting in the default Environment always being used to construct the metric reporter.
Lastly, a little cleanup of the MOE config script: no newlines are necessary, as the BUILD files are formatted after string substitution.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188029019
Changed SUNRISE to START_SUNRISE and added a registry/registrar pair for testing EAP. The EAP period is set to 2018-03-01 to 2022-03-01 with a price of $100.
A temporary flag is added to only create the EAP registry/registrar pair, so that we can update existing registrars.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187897405
It was nullable all along, but wasn't tagged as such, and thus it was
possible to misuse the method from its call sites.
Also adds an assertion about no NORDN tasks being enqueued in a failing
domain create test for a required signed mark.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187649865
This enables sharded DNS publishing on a per-TLD basis. Instead of a TLD-wide lock, the sharded scheme locks each update on the shard number, allowing parallel writes to DNS.
We allow N (the number of shards) to be 0 or 1 for no sharding, and N > 1 for an N-way sharding scheme. Unless explicitly set, all TLDs default to a numShards of 0, so we don't have to reload all registry objects explicitly.
WARNING: This will change the lock name upon deployment for the PublishDnsAction from "<TLD> Dns Updates" to "<TLD> Dns Updates shard 0". This may cause concurrency issues if the underlying DNSWriter is not parallel-write tolerant (currently all production usages are ZonemanWriter, which is parallel-tolerant, so no issues are expected).
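For illustration, the lock-name change amounts to the following (a sketch; the real lock acquisition has more context around it):
  // Before: one TLD-wide lock, serializing all DNS updates for the TLD.
  String tldWideLock = String.format("%s Dns Updates", tld);
  // After: one lock per shard, so updates on different shards can proceed
  // in parallel.
  String shardedLock = String.format("%s Dns Updates shard %d", tld, shardNumber);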
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187525655
Also changed the name of "verifyRegistryStateAllowsLaunchFlows" to "verifyRegistryStateAllowsApplicationFlows", because there are now launch flows that don't use applications (start-date sunrise).
Finally, added a test to showcase the "super-user" power that EPP commands with Anchor Tenants have. There's no change in behavior in that regard in this CL - we just add a test to make it explicit.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187517199
Also some minor cleanup to make renewal testdata files easier to reuse.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187508329
After investigating common domain create/update command usage
patterns by registrars, we noticed that it is frequent for a
given registrar to reuse both hosts (using a standardized set of
nameservers) as well as contacts (e.g. for privacy/proxy
services). With these usage patterns, potential per-registrar
throughput during high volume scenarios (i.e. first moments of
General Availability) suffers from hitting hot keys in Datastore.
The solution, implemented in this CL, is to add short-term
in-memory caching for contacts and hosts, analogous to how we are
already caching Registry and Registrar entities. These new
cached paths are only used inside domain flows to determine
existence and deleted/pending delete status of contacts and
hosts. This is a potential loss of transactional consistency, but
in practice it's hard to imagine this having negative effects, as
contacts or hosts that are in use cannot be deleted, and caching
would primarily affect widely used contacts and hosts.
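A minimal sketch of such a cache using Guava's com.google.common.cache, analogous to the existing Registry/Registrar caching; the key type, one-minute expiry, and loader are illustrative assumptions:
  // Hypothetical short-term existence cache for hosts (contacts would get an
  // analogous one). The short expiry bounds how stale an answer can be.
  private static final LoadingCache<String, Boolean> HOST_EXISTS_CACHE =
      CacheBuilder.newBuilder()
          .expireAfterWrite(1, TimeUnit.MINUTES)
          .build(
              new CacheLoader<String, Boolean>() {
                @Override
                public Boolean load(String hostName) {
                  // loadHostByForeignKey is a stand-in for the real Datastore lookup.
                  return loadHostByForeignKey(hostName) != null;
                }
              });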
Note that this caching can be turned on or off through a
configuration option, and by default would be off. We'd only want
it on when we really needed it, i.e. during a big launch.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187093378
Currently, DeleteProberDataAction goes over all the TLDs of type "TEST" that
end with .test, and deletes all their DomainResources and their subordinate
history entries, poll messages, billing events, ForeignKeyDomainIndex and
EppResourceIndex entities.
After this change, we can optionally supply TLDs to work on for the request, using one or more "tld=" parameters. The default (if none are supplied) will still be "all TEST TLDs that end in .test".
All given TLDs must exist, and must all be of type TEST.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187064053
Even when the request is not permissioned to see contact information, we should
show information about the owning registrar.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187049833
The RDAP Pilot Program operational profile document indicates that domain
responses should list, in addition to their normal contacts, a special entity
for the registrar.
1.5.12. The domain object in the RDAP response MUST contain an entity with the registrar role (called registrar entity in this section). The handle of the entity MUST be equal to the IANA Registrar ID. A valid fn member MUST be present in the registrar entity. Other members MAY be present in the entity (as specified in RFC6350, the vCard Format Specification and its corresponding JSON mapping RFC7095). Contracted parties MUST include an entity with the abuse role (called Abuse Entity in this section) within the registrar entity. The Abuse Entity MUST include tel and email members, and MAY include other members.
1.5.13. The entity with the registrar role in the RDAP response MUST contain a publicIDs member [RFC7483] to identify the IANA Registrar ID from the IANA’s Registrar ID registry (https://www.iana.org/assignments/registrar-ids/registrar-ids.xhtml). The type value of the publicID object MUST be equal to IANA Registrar ID.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186797360
Currently we validate the fee extension by summing up all fees present in the extension and comparing it against the total fee to be charged. While this works in most cases, we'd like the ability to individually validate each fee. This is especially useful during EAP when two fees are charged, a regular "create" fee that would also be amount we charge during renewal, and a one time "EAP" fee.
Because we can only distinguish fees by their descriptions, we try to match the description to the format string of the fee type enums. We also only require individual fee matches when we are charging more than one type of fee, which makes the change compatible with most existing use cases, where only one fee is charged and the description field in the extension is ignored.
We expect the workflow to be that a registrar sends a domain check, and we reply with exactly what fees we are expecting, and then it will use the descriptions in the response to send us a domain create with the correct fees.
Note that we aggregate fees within the same FeeType together. Normally there will only be one fee per type, but in case of custom logic there could be more than one fee for the same type. There is no way to distinguish them as they both use the same description. So it is simpler to just aggregate them.
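A minimal sketch of the description matching, assuming a FeeType enum whose format string can be rendered for comparison; renderDescription is a hypothetical stand-in:
  // Find the fee type whose rendered format string matches this fee's
  // free-form description, if any.
  Optional<FeeType> matchedType =
      Stream.of(FeeType.values())
          .filter(type -> fee.getDescription().equalsIgnoreCase(type.renderDescription()))
          .findFirst();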
This CL also includes some reformatting that conforms to google-java-format output.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186530316
A recent change in Netty 4.1.21 (978a46cc0a) tried to fix an issue where channels might be closed before any handshake exception can be propagated. This however introduced a regression where the connection is not closed at all after a handshake failure, which caused test failures because we were expecting the connection to be closed after a handshake failure.
We rolled back the dependency on Netty 4.1.21 so that the test would pass. A fix upstream is scheduled for 4.1.22 (https://github.com/netty/netty/pull/7727).
However, this does reveal a potential problem in our tests: namely, we did not wait for the connection to be closed before asserting on it. The old Netty behavior closes the connection before the handshake exception is thrown, and we *do* wait for the handshake exception. The connection assertion happens after the handshake exception is verified, so by then the connection is always closed.
When the upstream fix is released, we'd run into the concurrency problem described above. So instead we wait for the connection to be closed before checking the handshake exception (by releasing the lock in a channel close listener), which guarantees that when we check the connection, it is always closed.
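A minimal sketch of the ordering fix on the test side; the CL as described releases a lock from a channel close listener, but blocking on the close future shows the same idea:
  // Block until the channel is actually closed before asserting, so the
  // assertion no longer races the close triggered by the handshake failure.
  channel.closeFuture().syncUninterruptibly();
  assertThat(channel.isActive()).isFalse();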
Also fixes some javadoc errors.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186021997
With https://github.com/bazelbuild/bazel/issues/4376, bazel 0.10.0 now supports accessing system TMPDIR in its sandbox. Use this instead of hardcoding /tmp in BUILD rules to get around the gpg-agent path length restriction.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186010932
Added:
- dns/update_latency, which measures the time from when a DNS update was added to the pull queue until that update is committed to the DnsWriter
  - It doesn't check that after being committed, the update was actually published in the DNS.
- dns/publish_queue_delay, which measures how long from the initial insertion into the push queue until a publishDnsUpdate action was handled. It measures both successes (which are what we care about) and various failures (which are important because the success for that publishDnsUpdate will be greater than any of the previous failures)
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185995678
The intention (from []) is:
- actualAsString() is the method that people call.
- actualCustomStringRepresentation() is the method that people override.
Fortunately, no one actually calls actualCustomStringRepresentation(), aside from some tests that call it to test a subject's implementation. That's easy enough to work around by extracting a method.
(Arguably @ForOverride should permit calls from tests in some cases, now that Error Prone knows how to identify test code. But it's not entirely clear, since, e.g., people shouldn't be testing Converter.doForward(null) because the method can never be invoked that way. Some discussion here: [])
Tested:
global TAP
[]
RELNOTES=Marked `actualCustomStringRepresentation()` as `@ForOverride`. To retrieve the string representation, call `actualAsString()`.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185672328
The START_DATE_SUNRISE phase allows registration of domains only with a signed mark. In all other respects, it is identical to the GENERAL_AVAILABILITY phase.
Note that Anchor Tenants bypass all checks, and are hence able to register domains without a signed mark.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185534793
The task-queue API only allows reading 1000 tasks at a time, hence the original reason for this limit. We get over that limit by reading (and processing) items from the queue in a loop - 1000 at a time.
This is important because the 1000 dns-updates are shared among all TLDs,
meaning that a TLD with >1000 waiting updates can affect the update latency of
other TLDs.
In addition, this partially fixes the bug where, if there are more than 1000 updates to paused
/ non-existing TLDs, we completely block all updates to all TLDs.
By partially fixed, I mean "if we have around 1000 updates to paused TLDs, we will read them every time ReadDnsUpdates is called, ignore them, and only then get to the actual updates we want to process".
This works when around 1000 updates are waiting - but if paused TLDs have tens or hundreds of thousands of updates waiting, this might still choke other TLDs (not to mention we keep reading / updating tens or hundreds of thousands of tasks in the queue, which is... bad).
A more thorough fix will come in a future CL, as it requires a more thorough change in the code.
Note that the queue lease command supports a maximum of 10 QPS. Any more than
that - and we get errors / empty results. Hence we limit our QPS to 9 to be on
the safe side.
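A minimal sketch of the leasing loop described above, using the App Engine task-queue API and a Guava RateLimiter; the lease period and the processing step are illustrative:
  RateLimiter rateLimiter = RateLimiter.create(9.0);  // stay under the ~10 QPS lease limit
  while (true) {
    rateLimiter.acquire();
    List<TaskHandle> batch =
        dnsPullQueue.leaseTasks(
            LeaseOptions.Builder.withLeasePeriod(20, TimeUnit.MINUTES).countLimit(1000));
    if (batch.isEmpty()) {
      break;  // drained the queue
    }
    // ... dispatch PublishDnsUpdates tasks for the batch, then delete the leased tasks ...
  }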
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185218684
When a quota request is rejected, increment the metric counter by one.
Also makes both frontend and backend metrics singletons, because all the fields they have are static.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185146804
The quota handler terminates connections when quota is exceeded.
The next CL will add instrumentation for quota related metrics.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185042675
Changes the code to be in compliance with the RDAP Pilot Profile document,
which specifies:
1.4.11. If permitted or required by an ICANN agreement provision, waiver, or Consensus Policy, an RDAP response may contain redacted registrant, administrative, technical and/or other contact information. If any information is redacted, the response MUST include a remarks member with title "Data Policy", type "object truncated due to authorization", a description containing the string "Some of the data in this object has been removed" and a links member with the elements rel:alternate and href indicating where the data policy can be found. An entity with redacted information MUST include the "removed" value in the status element.
We were using the "removed" status to indicate deleted contacts and inactive
registrars. Instead, we will now use "inactive", so that we can use "removed"
to indicated redaction.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185039201
As part of our commit log layer that we have built on top of Objectify, we
enforce the constraint of a monotonically increasing transaction time with a
millisecond granularity. Thus, if two transactions occur at exactly the same
millisecond, as had been the case with these tests, one will get a
TimestampInversionException and retry. However, since we're mocking time in
these tests as well, they will retry at exactly the same millisecond, and thus
continue failing for the same reason until the max retry threshold is hit. The
EPP flow then ultimately fails with a generic "Command failed" response. All
of this comes from actual findings from looking at test logs from a flake.
It's a mystery to me why these tests were merely flaky; it seems like they
should have always been failing for this reason, but they were still only
sometimes failing. Who knows.
The fix is simple -- Adjust the tests so that no two commands are run at exactly
the same millisecond. Note that this is a test-only problem; in the real world,
a command that temporarily fails will simply then succeed the next time it is
retried, since time is actually elapsing. This implies that our commit log system
imposes a max mutation rate of 1,000 QPS across our entire system. This is
unlikely to be a problem in practice for any existing registry of any size.
Also note that, as far the EPP XML itself is concerned, times only have second
granularity, so up to a thousand commands can execute in the same second and
still "appear" to have taken place at the same time as far as EPP is concerned.
That's why this CL only adds millisecond precision to the actual run time, not
to the expected values in the commands.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184777558
In publishDomain, we load the subordinate hosts of the domain from datastore and compare its nameservers to them. For any nameserver that is in-bailiwick, we call publishSubordinateHost on it and stage the A/AAAA records of the host for publication.
This is superior to the old approach, where we used hostName.endsWith(domainName) to check if a nameserver is in-bailiwick, because that mistakes ns.another-example.tld for a subordinate host of example.tld. It is also better than checking hostName.endsWith("." + domainName), which avoids that false positive but falls short in a corner case where the nameserver has been deleted before its superordinate domain's record is updated. In that case, subordinateHosts.contains(hostName) will be false but hostName.endsWith("." + domainName) will still be true.
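A minimal sketch contrasting the two checks (illustrative; the real code operates on loaded DomainResource entities):
  // Old: suffix check - wrongly treats ns.another-example.tld as subordinate
  // to example.tld, and can't see that a host was already deleted.
  boolean oldCheck = hostName.endsWith("." + domainName);
  // New: authoritative membership test against the domain's stored
  // subordinate host list.
  boolean newCheck = domain.getSubordinateHosts().contains(hostName);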
Note that we still use the suffix check in filterGlueRecords, because it is filtering on existing records from Cloud DNS. It is even advantageous to do so, because if there were any orphaned glue records (suffix matches the domain, but not actually in its subordinate host list - and there shouldn't be any if everything is consistent), they would be retained by the filter and therefore deleted when the staged changes are committed.
Also fixed a few tests that should have failed had we checked subordinate hosts...
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184732005
It's been long enough since the format change adding in years that all
registrars should no longer have any IDs in the old format lying around
that they're still attempting to ACK. All poll messages have already been
coming back to registrars with the new format for months now.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184714735
Previously, CloudDnsWriter used InetAddress.toString() to produce the IPv4/IPv6
address string (e.g. 127.0.0.1 or 0:0:0:0:0:0:0:1) used as an argument to the
Cloud DNS API. However, this fails because InetAddress uses the format
"HostName/IpAddress" for toString(), which uses the empty string as a HostName
if unspecified. This resulted in the erroneous use of a prefixed slash (i.e.
"/127.0.0.1") as the address argument, causing all glue record updates to
fail.
This change replaces InetAddress.toString() with InetAddress.getHostAddress(),
which properly generates the IP address for the InetAddress. This also replaces
a lot of logic in the corresponding test with concrete equivalents, preventing
obvious errors like this from creeping up on us in the future.
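A quick self-contained demonstration of the difference (this is standard java.net behavior):
  import java.net.InetAddress;

  public class InetAddressDemo {
    public static void main(String[] args) throws Exception {
      InetAddress addr = InetAddress.getByName("127.0.0.1");
      // toString() is "hostName/ipAddress"; with no host name it prints "/127.0.0.1".
      System.out.println(addr.toString());        // /127.0.0.1
      // getHostAddress() returns just the IP address literal.
      System.out.println(addr.getHostAddress());  // 127.0.0.1
    }
  }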
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184708896
Now that we've verified the new Beam billing pipeline works, we can delete the
old manual commands we used to use.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184707182
The DS records consist of 4 values:
- keyTag: unsigned short (2 bytes)
- alg: unsigned byte
- digestType: unsigned byte
- digest: binary hex
NOTE: the current CL doesn't support keyData, either as the optional field in dsData or as a replacement for dsData.
The command tool accepts DS records as a string, where the 4 values are given
in one whitespace-separated string as follows:
<keyTag> <alg> <digestType> <digest>
e.g. something like:
60485 5 2 D4B7D520E7BB5F0F67674A0CCEB1E3E0614B93C4F9E99B8383F6A1E4469DA50A
which is how it's written in zone files, allowing easy copy-paste from existing values. (We use whitespace rather than commas as the separator; mixing commas with spaces is confusing.)
The various "numbers" (keyTag, alg, digestType) are only checked that they are
positive integers - the rest is left for the server.
digest it checked to be an even-lengthed hex string.
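A minimal sketch of the client-side parsing just described; checkArgument is Guava's, and the variable names are illustrative:
  // Parse "<keyTag> <alg> <digestType> <digest>", e.g. a line pasted from a
  // zone file, doing only the light validation described above.
  String[] parts = input.trim().split("\\s+");
  checkArgument(parts.length == 4, "Expected 4 whitespace-separated values");
  int keyTag = Integer.parseUnsignedInt(parts[0]);
  int alg = Integer.parseUnsignedInt(parts[1]);
  int digestType = Integer.parseUnsignedInt(parts[2]);
  String digest = parts[3];
  checkArgument(
      digest.length() % 2 == 0 && digest.matches("[0-9a-fA-F]+"),
      "digest must be an even-length hex string");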
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184583068
Rosie CL for []/third_party (local approval/rejection).
[]
b/71392935
Tested:
TAP --sample for global presubmit queue
[]
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184412611
When enabled for a registrar, all EPP operations on premium domains that have
costs (e.g. creates, renews, transfers) will fail unless the EPP fee extension
is used to explicitly ack the amount of fee as part of the EPP transaction.
This ack is required regardless of whether premium fee acking is required at
the registry level. No data migration is necessary since false is the desired
default for this new attribute.
This CL also contains some slight refactoring of static utility methods used to
perform fee verification; there was short-circuiting at call-sites in two
places when what was really needed was two methods, one implementing additional
functionality on top of the other, and calling the inner method in the places
where short-circuiting had previously been necessary.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184229363
"keepTasks" is a flag that prevents ReadDnsQueueAction from removing dns-update
tasks from the dns-pull queue, while still launching PublishDnsUpdates tasks to
update the DNS (meaning these tasks will be updated again in the next
ReadDnsQueueAction).
I'm not sure what the purpose of this flag is, but given that we now allow multiple
writers (meaning we can already publish the same DNS update multiple times), and given
that we can now recover from a bad writer (if a writer doesn't belong to a TLD,
we put the dns-updates queued for that writer back into the dns-pull queue), I
suspect we don't need it anymore.
Alternative considered: changing this to a "dryRun" flag that won't actually
launch PublishDnsUpdates tasks, but will log which tasks it would have
launched. Decided against it because we will still need to "own" any task for a
significant amount of time if there are many (tens of thousands) tasks in the
queue. Hence a "dryRun" will still affect any actual runs for some time.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=183997187