This moves the new pipeline's invoice generation to the billing bucket, under the 'invoices/yyyy-MM' subdirectory.
This also changes the invoice e-mail to use a multipart message that attaches the invoice, guaranteeing the correct MIME type and download behavior.
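For illustration, a minimal sketch of building such a message with JavaMail (the byte array, filename, and session setup are placeholders, not the actual pipeline code):

import java.util.Properties;
import javax.activation.DataHandler;
import javax.mail.Session;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.mail.util.ByteArrayDataSource;

// Plain-text part with the body of the e-mail.
MimeBodyPart textPart = new MimeBodyPart();
textPart.setText("Attached is the monthly invoice.");

// Attachment part with an explicit MIME type so clients download it as a CSV file.
MimeBodyPart attachmentPart = new MimeBodyPart();
attachmentPart.setDataHandler(
    new DataHandler(new ByteArrayDataSource(invoiceCsvBytes, "text/csv")));
attachmentPart.setFileName("invoice-2017-10.csv");  // placeholder name

MimeMultipart multipart = new MimeMultipart();
multipart.addBodyPart(textPart);
multipart.addBodyPart(attachmentPart);

MimeMessage message = new MimeMessage(Session.getDefaultInstance(new Properties()));
message.setSubject("Monthly invoice");
message.setContent(multipart);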
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=181746191
This closes the end-to-end billing pipeline, allowing us to share generated detail reports with registrars via Drive and e-mail the invoicing team a link to the generated invoice.
This also factors out the email configs from ICANN reporting into the common 'misc' config, since we'll likely need alert e-mails for future periodic tasks.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=180805972
This makes a few cosmetic changes that prepare the pipeline for production.
Namely:
- Converts file names to include the input yearMonth, mostly mirroring the original invoicing pipeline.
- Factors out the yearMonth logic from the reporting module to the more common backend module. We will likely use the default yearMonth logic in other backend tasks (such as spec11 reporting).
- Adds the "withTemplateCompatability" flag to the Bigquery read, which allows multiple uses of the same template.
- Adds the 'billing' task queue, which retries up to 5 times every 3 minutes, which is about the rate we desire for checking if the pipeline is complete.
- Adds a shell 'invoicing upload' class, which tests the retry semantics we want for post-generation work (e-mailing the invoice to crr-tech, and publishing detail reports)
While this cl may look big, it's mostly just a refactor and setting up boilerplate needed to frame the upload logic.
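A rough sketch of the flagged read, assuming a Beam pipeline object and a placeholder query:

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.values.PCollection;

// withTemplateCompatibility() lets the staged Dataflow template re-run this read on
// every launch, rather than only on the first execution of the template.
PCollection<TableRow> billingRows =
    pipeline.apply(
        "Read billing events",
        BigQueryIO.readTableRows()
            .fromQuery("SELECT ... FROM `my-project.billing.events`")  // placeholder
            .usingStandardSql()
            .withTemplateCompatibility());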
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179849586
Right now VoidDnsWriter is injected in the tools - meaning it's possible to
*set* VoidDnsWriter as the writer of a TLD - but it isn't injected in the
backend - meaning we get an error if we actually try to use it.
We need VoidDnsWriter at least for load-testing, and in general for any test
TLD.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179425574
DummyKeyringModule provides a fake string as the JSON credential used to instantiate a GoogleCredential. Of course this does not work, and when the metric reporter requests a GoogleCredential on the main thread, the FOSS build crashes on startup, because it defaults to using DummyKeyringModule.
This change allows graceful handling of such an error by wrapping any call that instantiates a metric reporter in a try block. Note that any attempt to write to Stackdriver will still fail, but that happens on a different thread and will not crash the whole program.
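Schematically, the wrapping looks something like this (the component and logger references are stand-ins for the actual injected fields):

import java.util.Optional;
import java.util.logging.Level;
import java.util.logging.Logger;

Optional<MetricReporter> metricReporter = Optional.empty();
try {
  // Building the reporter forces a GoogleCredential to be created from the keyring's
  // JSON credential, which throws when DummyKeyringModule supplies its fake string.
  metricReporter = Optional.of(component.metricReporter());
  metricReporter.get().startAsync();
} catch (Exception e) {
  Logger.getLogger("ServletEntry")
      .log(Level.WARNING, "Failed to start MetricReporter; metrics are disabled.", e);
}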
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177183337
Per discussions here:
https://groups.google.com/forum/#!topic/nomulus-discuss/ylDW2PblL60
Any use of the keyring in the FOSS build would result in crashes because KMS is not configured. We should use the dummy keyring instead so that a vanilla FOSS deployment to GAE can run. Of course, users would still need to configure their keyrings (and revert to the KMS keyring module) when they actually use any of the keys.
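The swap amounts to a module binding along these lines (a minimal sketch; FakeKeyring is a hypothetical placeholder implementation, not the actual class):

import dagger.Module;
import dagger.Provides;
import javax.inject.Singleton;

@Module
public final class DummyKeyringModule {
  // Provide a keyring full of placeholder values so a vanilla FOSS deployment can
  // boot without KMS; real deployments swap this back to the KMS keyring module.
  @Provides
  @Singleton
  static Keyring provideKeyring() {
    return new FakeKeyring();  // hypothetical stand-in returning dummy key material
  }
}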
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=175868399
This serves as a proof of concept to verify that we can use Beam for our invoice generation use case. Namely, it checks that we can:
- Deploy a Beam template to GCS
- Read from Bigquery within the template
- Run the template from App Engine (see the sketch below)
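For the last point, launching the staged template from App Engine looks roughly like this (project id, GCS path, and parameters are placeholders; 'dataflow' is an authenticated Dataflow API client built elsewhere):

import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.LaunchTemplateParameters;
import com.google.api.services.dataflow.model.LaunchTemplateResponse;
import com.google.common.collect.ImmutableMap;

LaunchTemplateParameters params =
    new LaunchTemplateParameters()
        .setJobName("invoicing-poc")
        .setParameters(ImmutableMap.of("yearMonth", "2017-10"));  // placeholder
LaunchTemplateResponse response =
    dataflow
        .projects()
        .templates()
        .launch("my-project", params)
        .setGcsPath("gs://my-bucket/templates/invoicing")  // template staged on GCS
        .execute();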
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=175755390
This is the initial commit of the new billing system, rewritten as an Apache
Beam pipeline. This contains a basic end-to-end pipeline as proof of concept,
and boilerplate for GenerateInvoicesAction, which will eventually be our
automated invoice generation endpoint.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=174184171
With Java 8 in the GAE standard environment, we can now use a standard Java thread factory to run the metric reporter in the background as a daemon thread, which does not interfere with basic scaling's idle timeout the way an App Engine thread would.
Because the thread is not created by ThreadManager, no App Engine APIs can be called from it. We therefore use GoogleCredential instead of AppIdentityCredential as the HttpRequestInitializer, and NetHttpTransport instead of UrlFetchTransport as the HttpTransport.
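In sketch form (jsonCredential is the JSON key string mentioned below):

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.javanet.NetHttpTransport;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// A GoogleCredential built from the keyring's JSON key works on a plain daemon thread,
// unlike AppIdentityCredential, which needs the App Engine API environment.
GoogleCredential credential =
    GoogleCredential.fromStream(
        new ByteArrayInputStream(jsonCredential.getBytes(StandardCharsets.UTF_8)));

// NetHttpTransport uses java.net directly instead of the URLFetch service, so it can
// also be used off App Engine request threads.
NetHttpTransport transport = new NetHttpTransport();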
MetricReporter is lazily injected because it depends on the jsonCredential retrieved from CloudKms, which is not available in a test environment, causing FrontendServletTest and BackendServletTest to fail.
Also did some minor re-formatting with google-java-format on the edited files.
Lastly, removed MOE comments in import statements, which make the linter unhappy.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=172896227
This was a surprisingly involved change. Some of the difficulties included
java.util.Optional purposely not being Serializable (so I had to move a
few Optionals in mapreduce classes to @Nullable) and having to add the Truth
Java8 extension library for assertion support.
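The typical before/after for those fields (names illustrative):

import java.util.Optional;
import javax.annotation.Nullable;

// Before: fails at mapreduce serialization time, since java.util.Optional is not
// Serializable.
//   private final Optional<String> tld;

// After: a plain nullable field, with the Optional recreated at the use site.
@Nullable private final String tld;

Optional<String> getTld() {
  return Optional.ofNullable(tld);
}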
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=171863777
This moves us from the outdated google/data XML API to the OnePlatform REST/JSON API, finally silencing the deprecation warnings we've been seeing.
The synchronization algorithm diffs the spreadsheet's current values against its internally sourced values, adding a row to a batch update request whenever there's a discrepancy. Additional internal data is added via an append operation at the end of the sheet, and any extraneous spreadsheet data is cleared.
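Against the Sheets v4 API, those three operations look roughly like the following (the spreadsheet id, sheet name, ranges, and row collections are placeholders):

import com.google.api.services.sheets.v4.Sheets;
import com.google.api.services.sheets.v4.model.BatchUpdateValuesRequest;
import com.google.api.services.sheets.v4.model.ClearValuesRequest;
import com.google.api.services.sheets.v4.model.ValueRange;

// 1. Overwrite rows whose current values differ from the internally sourced values.
sheets.spreadsheets().values()
    .batchUpdate(spreadsheetId,
        new BatchUpdateValuesRequest()
            .setValueInputOption("RAW")
            .setData(changedRowRanges))  // List<ValueRange>, one per mismatched row
    .execute();

// 2. Append internally sourced rows that don't exist in the sheet yet.
sheets.spreadsheets().values()
    .append(spreadsheetId, "Registrars!A1", new ValueRange().setValues(newRows))
    .setValueInputOption("RAW")
    .execute();

// 3. Clear any extraneous rows left at the bottom of the sheet.
sheets.spreadsheets().values()
    .clear(spreadsheetId, "Registrars!A" + firstStaleRow + ":Z", new ClearValuesRequest())
    .execute();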
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=169273590
It makes sense for all mapreduces to run in backend, especially ones
that are scheduled to run regularly in cron, like this one now is. We don't
have many instances configured for the tools service anymore in some
of our environments, so backend is the friendliest place for a mapreduce
to run.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=168882122
This adds Bigquery API client code to generate the activity reports from our
now-StandardSQL queries. The naming mirrors that of RDE (Staging generates the
reports and uploads them to GCS).
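As a sketch, each query is issued through the client roughly like this (project id and SQL are placeholders; 'bigquery' is an authenticated client):

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.QueryRequest;
import com.google.api.services.bigquery.model.QueryResponse;

// Run one activity-report query with legacy SQL disabled, i.e. StandardSQL.
QueryResponse response =
    bigquery.jobs()
        .query(
            "my-project",
            new QueryRequest()
                .setQuery("SELECT tld, COUNT(*) AS transfers FROM ...")  // placeholder
                .setUseLegacySql(false))
        .execute();
// The resulting rows are then formatted into the report files and written to GCS.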
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=164656344
This is the first step in moving the current []cron-Python reporting scripts
into App Engine, as an official part of the Nomulus package. This copies the
structure of RDE uploads, with a few changes specific to monthly reporting.
I've left some TODOs related to actually testing it on the ICANN endpoint, as we're still not sure how files to be uploaded will be staged, and whether we can actually ping their endpoint on valid ports (80 or 443).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=160408703
The affected actions have been changed to check that the user is logged in by [] so this attribute is no longer needed.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=159572365
Move the "restoreCommitLogs" command from the backend module to the tools
module so it's easier to access with nomulus.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156768389
HttpServlet#destroy() is not called on instance shutdown, so the metrics code was never shut down correctly before.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156360866
Replace KeystoreKeyring with a ComparatorKeyring between KeystoreKeyring and
KmsKeyring. In the open-source version, we will replace DummyKeyring with
KmsKeyring directly.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=152893767
RdeStagingAction always processed all RDE and BRDA deposits currently outstanding, updating the cursors appropriately and kicking off the upload job. Sometimes we don't want all that. We just want to create a specific deposit by hand, without modifying the cursors or uploading. This CL adds parameters to support that.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=152415959
It turns out this type parameter was never necessary. A builder only needs the reflexive second type parameter when you want to have a builder inheritance hierarchy where the descendant builders have methods that the ancestor builder doesn't. In that case, the type param enables the ancestor builder's setter methods to automatically return the correct derived type, so that if you start with a derived builder, you can call a setter method inherited from an ancestor and then continue the chain with setters from the derived builder (e.g. new ContactResource.Builder().setCreationTime(now).setContactId(), which otherwise would have returned an EppResource.Builder from setCreationTime(), at which point the call to setContactId() would not compile).
Even then, it's not strictly necessary to use the type parameter, since you could instead just have each derived type override every inherited method to specify itself as the return type. But that would be a lot of extra boilerplate and brittleness.
Anyway, in this case, there is a builder hierarchy, but RequestComponentBuilder specifies all the methods that we're ever going to want on our builders, so there's never any need to be able to call specific derived builder methods. We only even need the individual builder classes so that Dagger can generate them separately for each component.
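Schematically, the removed pattern looked like this (simplified; field handling elided):

import org.joda.time.DateTime;

// The reflexive parameter B lets setters inherited from the ancestor builder return
// the derived builder type, keeping call chains like
//   new ContactResource.Builder().setCreationTime(now).setContactId("id")
// compiling without casts.
abstract class Builder<T, B extends Builder<T, B>> {
  @SuppressWarnings("unchecked")
  B setCreationTime(DateTime creationTime) {
    // ... store the value ...
    return (B) this;
  }
}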
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=148269178
The one-day validity period is also moved from the caller into XsrfTokenManager.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=147857716
This is the final preparatory step necessary in order to load configuration
from YAML in a static context and then provide it either via
Dagger (using ConfigModule) or through RegistryConfig's existing static
functions.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143819983
We're now using java_import_external instead of maven_jar. This allows
us to specify the relationships between jars, thereby allowing us to
eliminate scores of vendor BUILD files that did nothing but re-export
@foo//jar targets, thus addressing the concerns of djhworld on Hacker
News: https://news.ycombinator.com/item?id=12738072
We now have redundant failover mirrors, which is a feature I added to
Bazel 0.4.2 in ed7ced0018.
A new standard naming convention is now being used for all Maven repos.
Those names are calculated from the group_artifact name using the
following algorithm that eliminates redundancy:
https://gist.github.com/jart/41bfd977b913c2301627162f1c038e55
The JSR330 dep has been removed from java targets if they also depend
on Dagger, since Dagger always exports JSR330.
Annotation processor dependencies should now be leaner and meaner, by
more appropriately managing what needs to be on the classpath at
runtime. This should trim down the production jar by >1MB. As it stands
currently in the open source world:
- backend_jar_deploy.jar: 50MB
- frontend_jar_deploy.jar: 30MB
- tools_jar_deploy.jar: 45MB
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143487929
By moving []s into the batch package, which is not included in the frontend service, we pave the way to remove the dependency of frontend on the [] library.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=142265351
We should be able to remove the dependency on the App Engine [] library from the frontend service, since no []s actually run there. In order to do this, we need to remove the various []-reliant classes from the frontend service build. This CL begins the process by moving the two async "flows" to a different package which is not included in the frontend service.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=142159568
This refactors RequestHandler so that it handles the construction of the request
component itself, rather than being handed a pre-built request component
instance constructed by the invoking servlet.
The motivation for this change is so that RequestHandler can be extended in
future CLs to compute authentication results, and can provide those results as
an available binding in the constructed request component. An alternative
approach could have been to compute the authentication results within
RequestModule itself, but I think it's clearer to keep business logic like
that outside of Dagger providers.
This CL makes the following individual changes:
- Adds request component builders, which implement a RequestComponentBuilder
interface so they can all be manipulated by RequestHandler (see the sketch after this list)
- Instead of obtaining request components via factory methods on the global
components, one now can have global-scoped bindings just inject the request
component builders (which requires adding a module to each global component
declaring the subcomponent). This follows the recommended approach here:
http://google.github.io/dagger/subcomponents.html
- Instead of exposing request components on the global component interface,
we now expose module-specific subclasses of RequestHandler that @Inject the
appropriate request component builder's provider and pass it to the superclass
(note that inheritance isn't strictly necessary here but saves boilerplate)
- RequestHandler now takes the Provider<RequestComponentBuilder> and builds
the component itself using its own fresh RequestModule instance. This provides
some nice encapsulation but is mainly needed for adding a RequestAuthModule
in future work.
- RequestHandler also takes UserService now, which can be provided via Dagger
by the subclass. Longer-term that will go away in favor of instead providing
AuthStrategy instances, some of which will use UserService internally.
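A skeletal version of the builder arrangement (names abbreviated; not a verbatim copy of the source):

import dagger.Subcomponent;

// Common interface so RequestHandler can drive any module's request component builder.
interface RequestComponentBuilder<C> {
  RequestComponentBuilder<C> requestModule(RequestModule requestModule);
  C build();
}

@Subcomponent(modules = RequestModule.class)
interface FrontendRequestComponent {
  @Subcomponent.Builder
  abstract class Builder implements RequestComponentBuilder<FrontendRequestComponent> {
    @Override public abstract Builder requestModule(RequestModule requestModule);
    @Override public abstract FrontendRequestComponent build();
  }
}

// The global component then declares the subcomponent (via a module with
// subcomponents = FrontendRequestComponent.class) so that a Provider of the Builder
// can be injected into the module-specific RequestHandler subclass.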
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=138815648
This change moves the reflective setAccessible() calls on the request component
methods (needed so that they can be invoked reflectively from RequestHandler)
to within Router itself, eliminating the need to manually call this from each
Servlet class and then pass in the resulting Method objects. Instead, we just
pass in the request component class and let Router do the rest.
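In sketch form, the piece that moved into Router (route registration details elided):

import java.lang.reflect.Method;

// Collect the request component's action-providing methods and make them invocable
// from this package; previously every servlet had to call setAccessible() itself and
// pass the resulting Method objects in.
for (Method method : requestComponentClass.getMethods()) {
  if (method.getParameterCount() == 0) {
    method.setAccessible(true);
    // ... register the method in the route map, keyed by the action's path ...
  }
}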
Old comments say that cross-package reflection is not allowed on AppEngine, but
while it's quite possible this was once the case, I can't reproduce that
limitation, and the documentation seems to contradict any such restriction:
"""
An application is allowed full, unrestricted, reflective access to its own
classes. It can query any private members, call the method
java.lang.reflect.AccessibleObject.setAccessible(), and read/set private
members.
"""
https://cloud.google.com/appengine/docs/java/runtime#reflection
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=138693006
This will replace the existing DnsRefreshForHostRenameAction.
This is stage one of a three stage migration process. It adds the new queue and
[] but doesn't call them yet. Stage two will cut over to using the new
functionality, and stage three will remove the old functionality.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134793963
Now it lives alongside the delete prober data action, as well as any future
batch/maintenance tasks that should run periodically.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134435668
Also creates a new package named 'batch' to house it.
TESTED=I deployed it to alpha, sent a POST request to the task URL, and it
successfully ran the [].
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134332999
This allows handling of N asynchronous deletion requests simultaneously instead
of just 1. An accumulation pull queue is used for deletion requests, and the
async deletion [] is now fired off whenever that pull queue isn't empty,
and processes many tasks at once. This doesn't particularly take more time,
because the bulk of the cost of the async delete operation is simply iterating
over all DomainBases (which has to happen regardless of how many contacts and
hosts are being deleted).
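A sketch of the lease-many pattern with the App Engine pull queue API (queue name, lease duration, and batch size are placeholders):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskHandle;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Lease a batch of accumulated deletion requests instead of handling one per run.
Queue pullQueue = QueueFactory.getQueue("async-delete-pull");  // placeholder name
List<TaskHandle> tasks = pullQueue.leaseTasks(20, TimeUnit.MINUTES, 100);

// ... a single mapreduce over all DomainBases services every leased request ...

// Delete the leased tasks only once the mapreduce has finished successfully.
pullQueue.deleteTask(tasks);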
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133169336