It occurs to me that we can't have this setting differ between sandbox
and production; otherwise we could end up with code that works on sandbox
but fails only when it is pushed to production. Sandbox and production
always need to be configured identically for this reason.
We'll just have to pay closer attention than usual to next week's release
process, and in the meantime continue experimenting in alpha with a fully
Java 8 build.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=170197703
It makes sense for all mapreduces to run in backend, especially ones
that, like this one now, are scheduled to run regularly in cron. We no
longer have many instances configured for the tools service on some
of our environments, so backend is the friendliest place for a mapreduce
to run.
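For reference, pinning a cron-scheduled task to backend is just a matter of
the <target> element in its cron.xml entry; a minimal sketch, with an
illustrative URL and schedule:

    <cron>
      <url>/_dr/task/someMapreduce</url>
      <description>Illustrative entry for a mapreduce that runs in backend.</description>
      <schedule>every day 03:00</schedule>
      <target>backend</target>
    </cron>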
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168882122
Also adds a "resave all epp" cron job that's needed for the delete to work correctly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168879965
This pattern will mainly be used for data migrations, e.g. updating all
HistoryEntries' DomainTransactionRecords to the new schema.
TESTED=Ran in alpha, the underlying data dropped non-Objectify fields as
expected.
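As a sketch of the pattern (the class and entity names here are illustrative,
not the actual implementation): a mapper loads each entity by key and
immediately re-saves it in a transaction, so Objectify writes it back under
the current schema and any fields no longer declared on the class are dropped.

    import static com.googlecode.objectify.ObjectifyService.ofy;

    import com.google.appengine.tools.mapreduce.Mapper;
    import com.googlecode.objectify.Key;

    public class ResaveEntityMapper extends Mapper<Key<HistoryEntry>, Void, Void> {
      @Override
      public void map(final Key<HistoryEntry> key) {
        // Load and re-save in one transaction; Objectify persists only the
        // fields declared on the current entity class.
        ofy().transact(() -> ofy().save().entity(ofy().load().key(key).now()));
      }
    }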
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168684356
This is the first in a series of CLs containing code from an old, never-completed CL of Dai's that compares zone data between Datastore and DNS. I had written a script to do this by calling two nomulus commands, but it may be possible to do it directly in Java, which would be more convenient.
This CL is just the plumbing to check on the status of a mapreduce. We will need this to know when we can proceed with the next step of comparing the output to the DNS data.
Cloned from CL 134295050 by 'g4 patch'.
Original change by dxy@dxy:zoneman-reader:1939:citc on 2016/09/26 10:34:22.
Add a command for comparing zone data between DNS and datastore
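A minimal sketch of that plumbing, assuming the Pipeline API underlying the
App Engine MapReduce library (the wrapper class and method names are mine):

    import com.google.appengine.tools.pipeline.JobInfo;
    import com.google.appengine.tools.pipeline.NoSuchObjectException;
    import com.google.appengine.tools.pipeline.PipelineServiceFactory;

    public class MapreduceStatusChecker {
      /** Returns true once the mapreduce pipeline with the given id has completed. */
      public static boolean isMapreduceDone(String pipelineId) throws NoSuchObjectException {
        JobInfo info = PipelineServiceFactory.newPipelineService().getJobInfo(pipelineId);
        return info.getJobState() == JobInfo.State.COMPLETED_SUCCESSFULLY;
      }
    }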
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=167188979
This adds BigQuery API client code to generate the activity reports from our
queries, which are now in standard SQL. The naming mirrors that of RDE
(Staging generates the reports and uploads them to GCS).
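As a sketch of the shape of that client code (class and method names are
illustrative), launching one report query as an asynchronous standard-SQL job:

    import com.google.api.services.bigquery.Bigquery;
    import com.google.api.services.bigquery.model.Job;
    import com.google.api.services.bigquery.model.JobConfiguration;
    import com.google.api.services.bigquery.model.JobConfigurationQuery;
    import java.io.IOException;

    public class ActivityQueryRunner {
      /** Submits one activity-report query as an asynchronous BigQuery job. */
      public static Job runQuery(Bigquery bigquery, String projectId, String sql)
          throws IOException {
        Job job = new Job().setConfiguration(new JobConfiguration()
            .setQuery(new JobConfigurationQuery()
                .setQuery(sql)
                .setUseLegacySql(false)));  // our queries are now standard SQL
        return bigquery.jobs().insert(projectId, job).execute();
      }
    }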
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=164656344
This is the first step in moving the current []cron-Python reporting scripts
into App Engine, as an official part of the Nomulus package. This copies the
structure of RDE uploads, with a few changes specific to monthly reporting.
I've left some TODOs related to actually testing it against the ICANN endpoint, as we're still not sure how the files to be uploaded will be staged, or whether we can actually reach their endpoint on valid ports (80 or 443).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=160408703
I'm also moving it out of the scrap folder, both because there's nothing
else in there and because we want to retain this indefinitely; it's a useful
tool for performing DNS writer migrations.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=160168902
Move the "restoreCommitLogs" command from the backend module to the tools
module so it's easier to access with nomulus.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156768389
My continuing investigations into the resources necessary for running
our environments seem to indicate that four instances should be
sufficient for our purposes. If that turns out not to be enough, we can
always revert.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=155607688
We want to lower the maximum number of service instances as much as
possible without affecting service reliability so that we can make
stronger statements about what the maximum cost of running a typical
Nomulus environment might be. This first step likely won't affect the
frontend and tools modules in practice because they aren't typically
running even this many instances, but it will clamp down on the
number of backend instances (which should be fine; it just means the
mapreduces will take longer).
Alpha is tuned down the same as sandbox for consistency reasons.
This also standardizes on the B4 instance size (which has 512 MiB RAM)
for all instances. Most instances were already using it, and the
deviations from it seemed random. Crucially, backend,
which is likely the most sensitive to this because it uses the mapreduce
library, is already on the smaller memory size.
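Concretely, the per-module appengine-web.xml settings being standardized on
look like this (the instance count is illustrative):

    <instance-class>B4</instance-class>  <!-- 512 MiB RAM -->
    <basic-scaling>
      <max-instances>4</max-instances>
    </basic-scaling>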
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154537995
The YAML configuration files are now being built directly into the
JAR, and not stored in the WEB-INF/ directory, so this is unnecessary.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=146815937
This allows configuration to work properly from the nomulus tool.
TESTED=I built and ran it against several environments, and all worked
properly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=146697124
This implements the basic framework that allows global YAML
configuration, per-environment custom configuration, and unit-
test-specific configuration.
TESTED=I deployed to alpha, ran some EPP commands through the
nomulus tool, and verified no errors.
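A minimal sketch of the layering (not the actual implementation; SnakeYAML,
with a shallow merge for brevity): the global YAML is loaded first, and the
per-environment file's values win on conflicts.

    import java.io.InputStream;
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class ConfigLoader {
      /** Overlays per-environment YAML values on top of the global defaults. */
      @SuppressWarnings("unchecked")
      public static Map<String, Object> load(InputStream global, InputStream custom) {
        Yaml yaml = new Yaml();
        Map<String, Object> merged = (Map<String, Object>) yaml.load(global);
        Map<String, Object> overrides = (Map<String, Object>) yaml.load(custom);
        if (overrides != null) {
          merged.putAll(overrides);  // shallow merge; real code would merge nested maps
        }
        return merged;
      }
    }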
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=145422680
Effectively a revert of [] now that synthetic billing events have been verified in production.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=144473744
Note that this merely starts this MR on a daily schedule -- the billing queries that ultimately consume the synthetic OneTime events are filtering out the events at this time, so we're still relying on query-time expansion of Recurrings.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=144450565
This is temporary until we verify that recurring billing event expansion is working as expected. I want to have this available in case things go south, though in a perfect world, we won't need this.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143693458
The job was starting at midnight and noon, which is exactly when the files are changing. This resulted in intermittent failures, as the files are temporarily missing during the changeover.
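The fix is simply to offset the schedule away from the changeover times,
e.g. (URL illustrative):

    <cron>
      <url>/_dr/cron/fetchFiles</url>
      <schedule>every 12 hours from 00:30 to 12:30</schedule>
    </cron>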
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=139081163
This is the third and final phase in the migration away from ReferenceUnions.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=138778148
[] enabled the built-in App Engine session cleanup servlet in alpha and sandbox, and it appears to be deleting expired sessions at the expected rate of 100 every 15 minutes. So enable it for production as well.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=138071390
App Engine provides a servlet which deletes up to 100 expired _ah_SESSION entities from Datastore. This CL adds a cron job to call the servlet every 15 minutes in both alpha and sandbox. Assuming all goes well, we will turn it on in production.
I originally learned about this servlet here:
http://www.radomirml.com/blog/2011/03/26/cleaning-up-expired-sessions-from-app-engine-datastore/
But it appears that we do not need a servlet definition, just a cron entry.
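The cron entry is just the following; /_ah/sessioncleanup is the built-in
App Engine URL, and the ?clear query string is what triggers the deletion:

    <cron>
      <url>/_ah/sessioncleanup?clear</url>
      <description>Delete up to 100 expired _ah_SESSION entities.</description>
      <schedule>every 15 minutes</schedule>
    </cron>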
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=137533532
Convert to an action and remove ResourceServlet and JsonTransportServlet,
both of which exist only to support it.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=137519385
It is important to get at least this one commit in before the public Nomulus
release so that none of our public users will have to go through this data
migration (although we will have to).
The migration strategy is as follows:
1. Dual-write to non-ReferenceUnion fields in addition to the current
ReferenceUnion fields in use, and add new indexes (this commit; the
dual-write is sketched below). Deploy.
2. Run the ResaveAllEppResourcesAction backfill [].
3. Switch all code over to using the new fields. Dual-write is still in effect,
except it is now copying over the values of the new fields to the old
fields. Switch over all BigQuery reporting scripts to use the new
fields. Deploy.
4. Remove all of the old code and indexes. Deploy.
5. (Optional, at our leisure) Re-run the ResaveAllEppResourcesAction backfill
[] to delete the old obsolete fields.
Note that this migration strategy is rollback-safe at every step -- new data is
not read until it has already been written out in the previous step, and old
data is not removed immediately following a step in which it was still being
read, so the previous step is safe to roll back to.
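As a sketch of the step 1 dual-write (all names here are hypothetical
stand-ins, not the actual entity code), an Objectify @OnSave callback keeps
the new field populated on every save:

    import com.googlecode.objectify.annotation.OnSave;

    public class DualWriteSketch {
      Object referenceUnionField;  // old ReferenceUnion-wrapped reference (stand-in type)
      Object plainField;           // new unwrapped reference (stand-in type)

      @OnSave
      void dualWrite() {
        // Step 1: every save copies the old field's value into the new field,
        // so rolling back to code that only reads the old field stays safe.
        if (referenceUnionField != null) {
          plainField = unwrap(referenceUnionField);  // hypothetical unwrapping helper
        }
      }

      private static Object unwrap(Object union) {
        return union;  // placeholder; the real code extracts the wrapped reference
      }
    }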
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=136196988
This will replace the existing DnsRefreshForHostRenameAction.
This is stage one of a three stage migration process. It adds the new queue and
[] but doesn't call them yet. Stage two will cut over to using the new
functionality, and stage three will remove the old functionality.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134793963
Also creates a new package named 'batch' to house it.
TESTED=I deployed it to alpha, sent a POST request to the task URL, and it
successfully ran the [].
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134332999
Zone files generated on GCS with this command will be used by an MR (to be implemented in a separate CL) for comparing zone data between zoneman and Datastore.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134134401
This allows handling of N asynchronous deletion requests simultaneously instead
of just 1. An accumulation pull queue is used for deletion requests, and the
async deletion [] is now fired off whenever that pull queue isn't empty,
processing many tasks at once. This doesn't take appreciably more time,
because the bulk of the cost of the async delete operation is simply iterating
over all DomainBases (which has to happen regardless of how many contacts and
hosts are being deleted).
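A minimal sketch of the lease-and-batch flow (queue name and limits
illustrative), using the Task Queue pull-queue API:

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskHandle;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    public class AsyncDeleteBatcher {
      public static void processBatch() {
        Queue queue = QueueFactory.getQueue("async-delete-requests");
        // Lease up to 100 accumulated deletion requests for 20 minutes.
        List<TaskHandle> batch = queue.leaseTasks(20, TimeUnit.MINUTES, 100);
        if (batch.isEmpty()) {
          return;  // pull queue is empty, so don't fire off the mapreduce
        }
        // ... run the single mapreduce pass over all DomainBases here ...
        queue.deleteTask(batch);  // delete leased tasks only after processing
      }
    }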
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133169336