Copied class and test from CheckApiAction. All unit tests passing.
Remaining work: add metrics
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=198916177
Currently, we have two different ways to parse a "set" parameter:
key=value1&key=value2&key=value3...
and
keys=value1,value2,value3
This is error-prone for several reasons:
- different parts of the code must be "synchronized" to use the same style (the
place that creates the request, and the place that parses the request)
- for the key=value1&key=value2 style, we often use the same key name for the
single value and the set value. This can result in subtle bugs where part of
the code reads the key assuming there is only one value (and silently gets the
first key=value1, ignoring the rest)
Here we transition everything to the keys=value1,value2,value3 method. This one
was chosen because:
- it's shorter
- it's more intuitive for users
- the key name is plural, differentiating it from the singular key=value that
other requests might need
-----------------------------------
To make sure there are no "transition issues", we will continue to support
(with warnings) the key=value1&key=value2 parameter parsing until we're sure we
haven't forgotten to update any part of the code.
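A minimal sketch of the preferred parsing with the transitional fallback, assuming a servlet-style request; the helper name extractSetParameter and the warning text are illustrative, not Nomulus's actual code:

  import com.google.common.base.Splitter;
  import com.google.common.collect.ImmutableSet;
  import java.util.logging.Logger;
  import javax.servlet.http.HttpServletRequest;

  /** Illustrative parser for the keys=value1,value2,value3 style. */
  final class SetParameterExample {
    private static final Logger logger =
        Logger.getLogger(SetParameterExample.class.getName());

    static ImmutableSet<String> extractSetParameter(
        HttpServletRequest req, String pluralKey, String singularKey) {
      String commaSeparated = req.getParameter(pluralKey);
      if (commaSeparated != null) {
        // Preferred style: keys=value1,value2,value3
        return ImmutableSet.copyOf(
            Splitter.on(',').trimResults().omitEmptyStrings().split(commaSeparated));
      }
      // Transitional fallback: key=value1&key=value2, supported with a warning until
      // we're sure every caller has been updated.
      String[] repeated = req.getParameterValues(singularKey);
      if (repeated == null) {
        return ImmutableSet.of();
      }
      logger.warning("Request still uses the legacy repeated-key style for " + singularKey);
      return ImmutableSet.copyOf(repeated);
    }
  }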
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=198810681
The migration plan is as follows:
1. This CL, which adds the new "pubapi" service that serves the check API, WHOIS, and RDAP.
2a. Update our public-facing sites to switch over to the new service.
2b. (In either order) Rewrite the check API to remove its dependencies on flows.
3. ... eventually, once the frontend service is no longer being hit by this traffic, remove its handling of these public endpoints.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=197716346
100 is way overkill with manual scaling. 30 is most likely still overkill too,
but we want to tune incrementally rather than all at once. Note that at 30
instances we're expecting around 3 QPS per instance, which is still an order
of magnitude less than each instance can actually handle.
This also fixes the instance type on sandbox to be the same as on prod.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=196875876
This allows list_domains to continue working for large TLDs.
TESTED=Deployed to alpha; listing the most recently created domains works even
on a TLD with a huge number of domains (many more than .app currently has).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=196717389
This should decrease the average wait time when running nomulus tool.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=195465469
Increase the instances on alpha to achieve parity with sandbox.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=194980588
Five per minute just isn't working well enough on environments with lots of
entities (e.g. alpha and sandbox right now), and there doesn't seem to be a
real need to enforce such a low throttle. The mapreduce queue, for instance,
has 500/s (effectively no throttle).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=192474962
This hard-deletes all contacts and hosts owned by a specific set of registrar
client IDs, currently just "proxy".
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=192325211
This also removes RDE tasks that shouldn't/can't run on non-production environments, like upload/reporting.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=192177779
This also reduces the frequency of the commitLogCheckpoint cron job to once
every three minutes, as this job needs to load all commit log bucket entities.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=191613858
Also increases the number of commit log buckets on alpha to 397 and correspondingly
reduces the frequency of commit log diff exporting to once every 3 minutes.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=191440586
Implement a checkbox in the "Resources" tab to allow registrars to toggle
their "premium price ack required" flag.
Tested:
Verified the console functionality by hand. I've started work on an
automated test, but we can't actually run those from blaze, and the
kokoro tests are far too time-consuming to be practical for development, so
we're going to have to either find a way to run them locally outside of
the normal process or make do without a test.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190212177
TldFanoutAction fans out a given endpoint to all TLDs (either TEST, REAL, or
both).
However, it is also used to delegate a single endpoint request that we want
enqueued in a specific queue (so we can control retries). We do that by setting
the TLD list to "runInEmpty" rather than "forEachRealTld" or "forEachTestTld".
Currently, using "runInEmpty" would still specify a TLD - but that TLD would be
the empty string. This is a bug: it sets the TLD parameter to a bad value. It
worked only because none of the endpoints called with "runInEmpty" were using
the TLD parameter.
However, this will (and does) break if either (a) the endpoint accepts an
optional TLD parameter (like deleteProberData does), or (b) the given endpoint
already has a TLD parameter in it (we want to run the endpoint with a single
TLD, but still use the "fanout" to set the right queue).
This CL fixes several things:
- if runInEmpty is given, no TLD parameter is added
- 'runInEmpty' is now mutually exclusive with 'forEach*Tld' and 'excludes'
- some sanity checks and logging are added
- the buggy and unused behavior where ':tld' in the path was replaced by the TLD is removed
- the documentation for :tld and the broken :registrar is removed from cron.xml
Note that none of the endpoints used with the runInEmpty fanout had the TLD parameter prior to deleteProberData.
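A simplified sketch of the resulting target selection, using Guava preconditions; the method and parameter names are illustrative rather than the exact TldFanoutAction fields:

  import static com.google.common.base.Preconditions.checkArgument;

  import com.google.common.collect.ImmutableSet;
  import com.google.common.collect.Sets;

  /** Illustrative fanout target selection after this fix. */
  final class FanoutTargetsExample {
    static ImmutableSet<String> getTldsToRun(
        boolean runInEmpty,
        boolean forEachRealTld,
        boolean forEachTestTld,
        ImmutableSet<String> excludes,
        ImmutableSet<String> realTlds,
        ImmutableSet<String> testTlds) {
      if (runInEmpty) {
        // Mutually exclusive with the forEach*Tld and excludes options: the endpoint is
        // enqueued exactly once, and no tld= parameter is attached to the task.
        checkArgument(
            !forEachRealTld && !forEachTestTld, "runInEmpty can't be combined with forEach*Tld");
        checkArgument(excludes.isEmpty(), "runInEmpty can't be combined with excludes");
        return ImmutableSet.of();
      }
      ImmutableSet.Builder<String> tlds = new ImmutableSet.Builder<>();
      if (forEachRealTld) {
        tlds.addAll(realTlds);
      }
      if (forEachTestTld) {
        tlds.addAll(testTlds);
      }
      return ImmutableSet.copyOf(Sets.difference(tlds.build(), excludes));
    }
  }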
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189954585
The unlimited exponential backoff makes cascading failure a serious problem
when encountering bursts of DNS load. Originally the retry policy was
exponential backoff with a 1-second minimum and a 1-hour maximum. This changes
it to scale linearly from 30 seconds to 10 minutes. The 30-second minimum
avoids over-retrying due to lock contention, and the 10-minute maximum allows
for more retries within our 1-hour SLA. Finally, we're switching to linear
scaling to get more 'quick' retries at low backoff times before ultimately
settling on the 10-minute upper bound (if a task ever gets to that point, it's
probably misconfigured).
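As a concrete sketch of the scaling described above (the constants match the description; the class itself is illustrative, since the real behavior lives in queue configuration):

  import java.time.Duration;

  /** Illustrative linear backoff: 30 seconds on the first retry, capped at 10 minutes. */
  final class LinearBackoffExample {
    private static final Duration MIN_BACKOFF = Duration.ofSeconds(30);
    private static final Duration MAX_BACKOFF = Duration.ofMinutes(10);

    static Duration backoffForAttempt(int attempt) {
      // attempt 1 -> 30s, attempt 2 -> 60s, ..., attempt 20 and beyond -> 600s (10 minutes).
      Duration linear = MIN_BACKOFF.multipliedBy(attempt);
      return linear.compareTo(MAX_BACKOFF) > 0 ? MAX_BACKOFF : linear;
    }
  }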
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186041553
The higher the number the better for serious launches. These used to be 100
but had been detuned because instances weren't dying correctly when no longer
needed, thus contributing to higher costs than necessary. That problem was
fixed when we migrated to the Java 8 runtime, however, so there's no reason
not to use the higher number.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=184742738
There are two types of changes here:
- reorder the existing cron jobs to be in the same order as production (for
easier diffing)
- add missing cron jobs to either alpha or sandbox
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=183232936
This closes the end-to-end billing pipeline, allowing us to share generated detail reports with registrars via Drive and e-mail the invoicing team a link to the generated invoice.
This also factors out the email configs from ICANN reporting into the common 'misc' config, since we'll likely need alert e-mails for future periodic tasks.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=180805972
This makes a few cosmetic changes that prepare the pipeline for production.
Namely:
- Converts file names to include the input yearMonth, mostly mirroring the original invoicing pipeline.
- Factors out the yearMonth logic from the reporting module into the more common backend module. We will likely use the default yearMonth logic in other backend tasks (such as spec11 reporting).
- Adds the "withTemplateCompatibility" flag to the BigQuery read, which allows multiple uses of the same template (see the sketch after this list).
- Adds the 'billing' task queue, which retries up to 5 times at 3-minute intervals, roughly the rate we want for checking whether the pipeline is complete.
- Adds a shell 'invoicing upload' class, which tests the retry semantics we want for post-generation work (e-mailing the invoice to crr-tech and publishing detail reports).
While this CL may look big, it's mostly refactoring and boilerplate needed to frame the upload logic.
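For reference, a hedged sketch of what a template-compatible read looks like in Beam Java; the query, step name, and class are placeholders rather than the actual pipeline code:

  import org.apache.beam.sdk.Pipeline;
  import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
  import org.apache.beam.sdk.options.PipelineOptionsFactory;

  /** Illustrative template-compatible BigQuery read. */
  public class TemplateCompatibleReadExample {
    public static void main(String[] args) {
      Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
      // withTemplateCompatibility() lets a staged template re-execute this read on every run,
      // instead of baking the result of the first run into the template.
      pipeline.apply(
          "ReadBillingEvents",
          BigQueryIO.readTableRows()
              .fromQuery("SELECT ...")  // placeholder query
              .usingStandardSql()
              .withTemplateCompatibility());
      pipeline.run();
    }
  }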
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179849586
That's 50 each for the frontend and backend services and 5 for tools. Since the
MetricExporter bug has been fixed for a while now, we aren't gaining anything by
artificially keeping the instance count low, whereas we might benefit from
higher instance counts, e.g. for load testing.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179432038
Load-testing data is identified as "prober data" by this job (it removes
anything under ".test", not just prober data).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177309096
This mirrors production in hopes of triggering b/67508570 to test the fix.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=175295742
This converts the upload task from a cron job to a task chained after staging.
This ensures the upload job only occurs when its dependencies are met, and
provides a faster turnaround time to verify both the staging and upload jobs
are complete.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=175045489
This is the initial commit of the new billing system, rewritten as an Apache
Beam pipeline. This contains a basic end-to-end pipeline as proof of concept,
and boilerplate for GenerateInvoicesAction, which will eventually be our
automated invoice generation endpoint.
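A minimal end-to-end sketch of a pipeline of this shape; the transform and the GCS output path are placeholders, not the real invoicing logic:

  import org.apache.beam.sdk.Pipeline;
  import org.apache.beam.sdk.io.TextIO;
  import org.apache.beam.sdk.options.PipelineOptionsFactory;
  import org.apache.beam.sdk.transforms.Create;
  import org.apache.beam.sdk.transforms.MapElements;
  import org.apache.beam.sdk.values.TypeDescriptors;

  /** Toy end-to-end pipeline: create a few lines, transform them, write them to GCS. */
  public class ProofOfConceptPipeline {
    public static void main(String[] args) {
      Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
      p.apply("CreateLines", Create.of("line1", "line2"))
          .apply(
              "ToCsv",
              MapElements.into(TypeDescriptors.strings()).via(line -> line + ",placeholder"))
          .apply("WriteToGcs", TextIO.write().to("gs://example-bucket/invoices/output"));
      p.run();
    }
  }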
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=174184171
I am not happy that another index is required, but the Pantheon console shows that domain indexes are much smaller than the other indexes (because there are fewer domains), so it's not adding an appreciable amount of storage space.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=173561771
This should help reduce the occurrence of requests taking a long time
to process because a new instance is being spun up. We might consider
increasing this further to 60 minutes in the future if necessary.
This also increases the number of frontend instances on production from 6
to 8, since it appears that the issue we were attempting to mitigate
with that change is now fixed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=173440059
We'll revert this once the stuck instance issue in Java 8 is fixed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=173183426
This CL adds the functionality for domain searches. Entities and nameservers have already been handled by previous CLs.
Deleted items can only be seen by admins, and by registrars viewing their own deleted items.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=172097922
RDAP searches for contacts with a specific desired registrar need an additional
index term. The tests were not extensive enough to catch this particular case.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=172013843
It occurs to me that we can't have this setting differ between sandbox
and production; otherwise we could end up with a situation where we push code
that works on sandbox but then fails only when it is pushed to production.
Sandbox and production need to always be set up similarly for this reason.
We'll just have to pay more attention than usual to the release process
next week, and continue playing around in alpha in the meantime with a
fully Java 8 build.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=170197703
It makes sense for all mapreduces to run in the backend service, especially
ones that are now scheduled to run regularly in cron, like this one. We don't
have many instances configured for the tools service anymore in some
of our environments, so backend is the friendliest place for a mapreduce
to run.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168882122
Also adds a "resave all epp" cron job that's needed for the delete to work correctly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168879965
This pattern will mainly be used for data migrations, i.e. updating all
HistoryEntries' DomainTransactionRecords to the new schema.
TESTED=Ran in alpha, the underlying data dropped non-Objectify fields as
expected.
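A rough sketch of the resave step under the hood, assuming Objectify's standard ofy() API rather than Nomulus's own wrappers; the mapreduce wiring is omitted here:

  import static com.googlecode.objectify.ObjectifyService.ofy;

  import com.googlecode.objectify.Key;

  /**
   * Illustrative "resave": load and immediately re-save so the entity is rewritten in the
   * current schema, dropping fields Objectify no longer knows about.
   */
  final class ResaveExample {
    static <T> void resave(Key<T> key) {
      T entity = ofy().load().key(key).now();  // @OnLoad migrations run here
      if (entity != null) {
        ofy().save().entity(entity).now();  // re-persist in the new schema
      }
    }
  }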
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168684356
This is the first in a series of CLs containing code from an old, never-completed CL of Dai's that compares zone data between Datastore and DNS. I had written a script to do this by calling two nomulus commands, but maybe it can be done directly in Java, which would be convenient.
This CL is just the plumbing to check on the status of a Mapreduce. We will need this to know that we can proceed with the next step of comparing the output to the DNS data.
Cloned from CL 134295050 by 'g4 patch'.
Original change by dxy@dxy:zoneman-reader:1939:citc on 2016/09/26 10:34:22.
Add a command for comparing zone data between DNS and datastore
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=167188979
This adds BigQuery API client code to generate the activity reports from our
queries, which are now in standard SQL. The naming mirrors that of RDE (staging
generates the reports and uploads them to GCS).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=164656344
This is the first step in moving the current []cron-Python reporting scripts
into App Engine, as an official part of the Nomulus package. This copies the
structure of RDE uploads, with a few changes specific to monthly reporting.
I've left some TODOs related to actually testing it on the ICANN endpoint, as we're still not sure how files to be uploaded will be staged, and whether we can actually ping their endpoint on valid ports (80 or 443).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=160408703
I'm also moving it out of the scrap folder, since there's nothing else
in there and we want to retain this indefinitely as a useful
tool for performing DNS writer migrations.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=160168902
Move the "restoreCommitLogs" command from the backend module to the tools
module so it's easier to access with nomulus.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156768389