This makes a few cosmetic changes that prepare the pipeline for production.
Namely:
- Converts file names to include the input yearMonth, mostly mirroring the original invoicing pipeline.
- Factors out the yearMonth logic from the reporting module to the more common backend module. We will likely use the default yearMonth logic in other backend tasks (such as spec11 reporting).
- Adds the "withTemplateCompatability" flag to the Bigquery read, which allows multiple uses of the same template.
- Adds the 'billing' task queue, which retries up to 5 times every 3 minutes; this is roughly the rate we want when checking whether the pipeline is complete.
- Adds a shell 'invoicing upload' class, which tests the retry semantics we want for post-generation work (e-mailing the invoice to crr-tech and publishing detail reports).
While this CL may look big, it's mostly just a refactor plus the boilerplate needed to frame the upload logic.
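For reference, a minimal sketch of what such a template-compatible read can look like in Beam; the query string and transform name are placeholders, not the pipeline's actual code:

  import org.apache.beam.sdk.Pipeline;
  import org.apache.beam.sdk.coders.StringUtf8Coder;
  import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
  import org.apache.beam.sdk.io.gcp.bigquery.SchemaAndRecord;
  import org.apache.beam.sdk.values.PCollection;

  class TemplateCompatibleReadSketch {
    /** Reads billing rows for the given yearMonth; the query here is a placeholder. */
    static PCollection<String> readBillingRows(Pipeline pipeline, String yearMonth) {
      return pipeline.apply(
          "ReadBillingRows",
          BigQueryIO.read((SchemaAndRecord row) -> row.getRecord().toString())
              .fromQuery("SELECT * FROM `billing.events` WHERE yearMonth = '" + yearMonth + "'")
              .usingStandardSql()
              // Allows the same pipeline template to reuse this BigQuery read across runs.
              .withTemplateCompatibility()
              .withCoder(StringUtf8Coder.of()));
    }
  }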
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179849586
ArrayList is more performant and there's no reason to use a LinkedList here.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179717525
Stream.concat only accepts 2 parameters; Streams.concat, on the other hand,
accepts any number of parameters.
Moving to Streams.concat for all uses (2 or more) makes sense for uniformity
and convenience.
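For illustration, a minimal comparison of the two (Streams here is Guava's com.google.common.collect.Streams):

  import com.google.common.collect.Streams;
  import java.util.stream.Stream;

  class ConcatDemo {
    public static void main(String[] args) {
      Stream<String> a = Stream.of("a");
      Stream<String> b = Stream.of("b");
      Stream<String> c = Stream.of("c");

      // java.util.stream.Stream.concat takes exactly two streams, so more inputs must nest:
      Stream<String> nested = Stream.concat(Stream.concat(a, b), c);

      // Guava's Streams.concat is varargs, so any number of streams reads flat:
      Stream<String> flat = Streams.concat(Stream.of("a"), Stream.of("b"), Stream.of("c"));

      System.out.println(nested.count() == flat.count()); // true
    }
  }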
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179716648
This creates an end-to-end test that checks for proper billing pipeline IO writes. The only remaining gap is a test of the BigQuery query itself, but see b/70839142 for why I've deemed that more work than it's worth.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179706561
In Truth8, we can do assertThat(stream) directly. It's less verbose and clearer
in most cases.
Note that for the "finishers" (e.g. "containsExactlyElementsIn"), streams are
still not allowed. So when there is:
assertThat(stream.map(someTransformation).collect(toList()))
    .containsExactlyElementsIn(expectedStream.map(someTransformation).collect(toList()));
I kept the .collect in the assertThat to preserve the symmetry with the
finisher.
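A small illustration of the before/after, assuming Truth and Truth8 are both statically imported (the data is made up):

  import static com.google.common.truth.Truth.assertThat;
  import static com.google.common.truth.Truth8.assertThat;

  import java.util.stream.Collectors;
  import java.util.stream.Stream;

  class StreamAssertionDemo {
    void demo() {
      // Before: collect to a list just to assert on the contents.
      assertThat(Stream.of("a", "b").collect(Collectors.toList())).containsExactly("a", "b");

      // After: Truth8's assertThat accepts the Stream directly.
      assertThat(Stream.of("a", "b")).containsExactly("a", "b");

      // Finishers like containsExactlyElementsIn still take an Iterable, not a Stream,
      // so both sides stay collected when comparing against another transformed stream.
    }
  }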
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179697587
This also incorporates general improvements and additions to the existing EPP
lifecycle tests around domain deletion. As a refresher: There is a 5 day
add grace period (AGP) following domain creation. Domains that are deleted
during that period have their create costs (but not EAP costs) refunded. This
deletion takes place immediately. Refunds are implemented by issuing a
Cancellation for the associated OneTime billing event.
Domains that are deleted after AGP ends first go through a 30 day redemption
grace period followed by a 5 day pending deletion period. No create fees are
refunded in this case.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179597874
This is done first and foremost to stop "empty" commits that cause errors in
publishDnsUpdates: the Cloud DNS API fails when there are no updates at all in
a change.
Handling this case is a requirement for the writer to be idempotent - if we
delete a domain, then run the writer to delete it again, we get 0 additions and
0 deletions, which currently fails.
This isn't theoretical either - we've seen it happen, causing publishDnsUpdates
to fail over and over again.
While fixing this, we also remove all RRSs that are common between the
additions and the deletions (see the sketch below). This is just an
optimization and shouldn't affect behavior.
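A minimal sketch of that dedupe-and-skip logic, using plain String record sets as a stand-in for the real Cloud DNS model objects:

  import java.util.HashSet;
  import java.util.Set;

  class ChangeDeduper {
    /**
     * Drops record sets that appear in both the additions and the deletions, then reports whether
     * anything is left to push. An entirely empty change would make the Cloud DNS call fail, so
     * callers skip the API call when this returns false.
     */
    static boolean dedupeAndCheckNonEmpty(Set<String> additions, Set<String> deletions) {
      Set<String> common = new HashSet<>(additions);
      common.retainAll(deletions);
      additions.removeAll(common);
      deletions.removeAll(common);
      return !additions.isEmpty() || !deletions.isEmpty();
    }
  }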
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179525218
In addition, while adding the tests, I became discontented with the thoroughness of the cursor navigation tests, which checked only the number of items returned, not their proper ordering. So I updated them to be more careful, and backported the changes to the nameserver and entity search tests as well.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179442118
Apologies for the reformatting, but this refactoring is quite rote, and it
takes more total time to reformat the affected lines individually than to
simply reformat the whole file.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179238852
Apologies for the reformatting, but this refactoring is quite rote, and it
takes more total time to reformat the affected lines individually than to
simply reformat the whole file.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179238745
Apologies for the reformatting, but this refactoring is quite rote, and it
takes more total time to reformat the affected lines individually than to
simply reformat the whole file.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179238504
Apologies for the reformatting, but this refactoring is quite rote, and it
takes more total time to reformat the affected lines individually than to
simply reformat the whole file.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179221800
These checks were removed in [] and re-adding them is the last
step of the migration to using expectThrows/assertThrows globally.
Note that this is roughly half of them. More to come in a follow-up CL.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179216707
The assertThrows/expectThrows refactoring script does not use method
references, apparently.
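For context, the two equivalent spellings with JUnit 4.13's assertThrows (the class and method below are made up for illustration):

  import static org.junit.Assert.assertThrows;

  class RefactoringOutputDemo {
    static void loadMissingResource() {
      throw new IllegalStateException("missing");
    }

    void demo() {
      // What the refactoring script emits: an explicit lambda.
      assertThrows(IllegalStateException.class, () -> loadMissingResource());

      // The equivalent method reference it apparently never produces.
      assertThrows(IllegalStateException.class, RefactoringOutputDemo::loadMissingResource);
    }
  }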
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179089048
A couple methods were moved to new locations so they are accessible to all types of search queries, not just nameservers like they originally were.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179089014
This is needed to fix an inability in Java 8 to correctly infer the type
when the transaction was being allowed to return the value it loaded. The
error was:
INFO: Compilation unit has error diagnostics: [third_party/java_src/gtld/javatests/google/registry/rde/imports/RdeImportUtilsTest.java:109: error: incompatible types: inference variable R has incompatible bounds
ofy().transact(() -> rdeImportUtils.importEppResource(newContact));
^
upper bounds: java.lang.Object
lower bounds: void, third_party/java_src/gtld/javatests/google/registry/rde/imports/RdeImportUtilsTest.java:132:
error: incompatible types: inference variable R has incompatible bounds
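A self-contained repro of this class of inference error, using a hypothetical transact(Supplier) helper as a stand-in (this is not the actual Nomulus/Objectify signature):

  import java.util.function.Supplier;

  class InferenceRepro {
    // Hypothetical generic transaction helper that returns its worker's result.
    static <R> R transact(Supplier<R> work) {
      return work.get();
    }

    static void importResource(Object resource) {}                      // void worker
    static Object importAndReturn(Object resource) { return resource; } // value-returning worker

    public static void main(String[] args) {
      // transact(() -> importResource(new Object()));
      //   ^-- javac 8 rejects this with the "inference variable R has incompatible bounds ...
      //       lower bounds: void" error quoted above, because a void call cannot satisfy
      //       Supplier<R>.

      // A value-returning worker infers cleanly (R = Object) ...
      Object loaded = transact(() -> importAndReturn(new Object()));

      // ... while a void worker needs a non-generic target type such as Runnable.
      Runnable voidWork = () -> importResource(new Object());
      voidWork.run();
      System.out.println(loaded != null);
    }
  }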
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179082154
we set the "denial of existence" to NSEC (rather than NSEC3), because preventing "walking the zone" isn't an issue for TLDs.
It uses the default security configuration for everything else, which at the time of this writing is:
Key signing: RSASHA256, key length of 2048
Zone signing: RSASHA256, key length of 1024
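For reference, a minimal sketch of such a zone configuration using the google-api-services-dns model classes; the zone name and description are placeholders, and the project's actual configuration code may differ:

  import com.google.api.services.dns.model.ManagedZone;
  import com.google.api.services.dns.model.ManagedZoneDnsSecConfig;

  class DnsSecConfigSketch {
    /**
     * Builds a zone with DNSSEC on and NSEC denial of existence, leaving the key specs at the
     * API defaults (at the time of writing: RSASHA256, 2048-bit KSK and 1024-bit ZSK).
     */
    static ManagedZone makeSignedZone(String name, String dnsName) {
      return new ManagedZone()
          .setName(name)
          .setDnsName(dnsName)
          .setDescription("Signed zone")
          .setDnssecConfig(
              new ManagedZoneDnsSecConfig()
                  .setState("on")
                  // NSEC rather than NSEC3: zone walking is not a concern for a TLD zone.
                  .setNonExistence("nsec"));
    }
  }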
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=179045575
It was easier to simply move these over manually than to try to debug
the automated tooling.
I also changed the case in an exception message.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178926365
These testing helper functions can't be handled by the automatic refactoring
tool because they're taking in expected exception details as parameters.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178832406
This is in preparation for running the automatic refactoring script that
will replace all ExpectedExceptions with use of JUnit 4.13's assertThrows/
expectThrows.
Note that I have recorded the callsites of assertions about EppExceptions
being marshallable and will edit those specific assertions back in after
running the automatic refactoring script (which does not understand them).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178812403
The quotas can be configured in the YAML configuration file. The default quota is applied to any userId that is not matched in the custom quota list (see the sketch below).
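A minimal sketch of that lookup semantics (the class and field names are illustrative, not the actual quota config code):

  import java.util.HashMap;
  import java.util.Map;

  class QuotaConfigSketch {
    private final Map<String, Integer> customQuotas = new HashMap<>();
    private final int defaultQuota;

    QuotaConfigSketch(int defaultQuota) {
      this.defaultQuota = defaultQuota;
    }

    /** Returns the custom quota for the user if one is configured, otherwise the default. */
    int quotaFor(String userId) {
      return customQuotas.getOrDefault(userId, defaultQuota);
    }
  }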
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178804649
Detailed discussion can be found on GitHub (https://github.com/bazelbuild/bazel/issues/3215). On newer Debian systems as well as on macOS, the path of the temporary folder created to store gpg configs is too long, which results in an error:
gpg: can't connect to the agent: File name too long
This CL makes the affected tests use /tmp as the temp folder base. Other options are discussed in the GitHub issue, but this one seems to be the least harmful. Note that this change only affects the FOSS build; internal builds are not affected by the temp directory issue, probably because the internal build tool uses a different base temp directory.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178803914
This fixes a bug where collections of incompatible types were being tested for equality with each other (e.g. Set<Foo> equals Set<Integer> should never return true unless both sets are empty, a rather vacuous assertion).
This change is necessary to unblock future improvements to the static analysis capabilities of the Java compiler.
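For illustration, the kind of assertion being fixed (Foo is a made-up type):

  import static com.google.common.truth.Truth.assertThat;

  import com.google.common.collect.ImmutableSet;

  class IncompatibleEqualityDemo {
    static final class Foo {}

    void demo() {
      ImmutableSet<Foo> foos = ImmutableSet.of(new Foo());
      ImmutableSet<Integer> ints = ImmutableSet.of(1);

      // Compiles, but can only ever pass when both sets are empty: the element types are disjoint.
      // assertThat(foos).isEqualTo(ints);

      // What such a test usually means to check instead:
      assertThat(foos).hasSize(1);
    }
  }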
Tested:
TAP --sample for global presubmit queue
[]
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178798071
It turns out that the RDAP spec does not envision multiple help pages. We can
still support them (for the TOS, for instance), but we shouldn't expect users
to go searching for help other than the main page. Therefore, consolidate the
useful information on the main page, and get rid of some of the others.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178792548
The only remaining methods on ExceptionRule after this are methods that
also exist on ExpectedException, which will allow us to, in the next CL,
swap out the one for the other and then run the automated refactoring to
turn it all into assertThrows/expectThrows.
Note that there were some assertions about root causes that couldn't
easily be turned into ExpectedException invocations, so I simply
converted them directly to usages of assertThrows/expectThrows.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178623431
This also cleans up a few miscellaneous code quality issues encountered
while adding the new setter: using a cleaner way to conditionally set field
values, documenting the format of the add grace period parameters, and
improving some code comments and formatting.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178387731
We were running FOSS tests on an old version of Ubuntu (14.04), which comes with a rather old gpg version (1.4.16) compared to the current version (2.1.22, https://lists.gnupg.org/pipermail/gnupg-announce/2017q3/000411.html). As a result, some default settings have changed between these versions, leading to test failures when the tests are run on newer platforms. In this CL several of the settings are made explicit rather than depending on default values, which makes them work on either platform.
1. "--no-mdc-error" is set. We do not have MDC integrity protection for the test keys, which results in a non-zero return value from newer versions of GPG. Setting this flag makes the return value zero again.
2. "--keyid-format long" is set. GPG key IDs are the last 16 (long key ID) or 8 (short key ID) hex digits of the key fingerprint (https://security.stackexchange.com/questions/84280/short-openpgp-key-ids-are-insecure-how-to-configure-gnupg-to-use-long-key-ids-i). Older versions use the short ID by default, whereas newer versions default to the long ID. Also change the expected key ID to the 16-digit long form accordingly.
3. Output stderr in GpgSystemCommandRule when a failure occurs during key import. This helps debug key import failures caused by an overly long gpg root path. Note: this failure itself is not fixed and would still occur on newer Debian or macOS systems.
4. Set the gpg root folder permission to 700; otherwise newer versions of gpg return a non-zero value. The previously used method set it to 755.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178163297
Before pushing an update to Cloud DNS, the CloudDnsWriter needs to read all the domains' RRSs from Cloud DNS one by one to know what to delete.
Doing so sequentially results in update times that are too long (approx. 200ms per domain, i.e. 20 seconds per batch of 100), severely limiting our QPS.
This CL uses concurrent threads to do the Cloud DNS queries in parallel (see the sketch below). Unfortunately, my preferred method (Set.parallelStream) doesn't work on App Engine :(
This reduces the per-item time from 200ms to 80ms, which could be further reduced to 50ms if we removed the rate limiter (currently set to 20 per second).
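A rough sketch of that fan-out using App Engine request threads; the per-domain lookup below is a stand-in for the real Cloud DNS read, and the real writer's batching and rate limiting are omitted:

  import com.google.appengine.api.ThreadManager;
  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import java.util.concurrent.Callable;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;
  import java.util.stream.Collectors;

  class ParallelDnsReadSketch {
    /** Stand-in for the real per-domain Cloud DNS lookup of existing record sets. */
    static List<String> fetchRecordSets(String domainName) {
      return Collections.emptyList();
    }

    /** Fans the per-domain lookups out over a small pool of App Engine request threads. */
    static List<List<String>> fetchAll(List<String> domainNames) throws Exception {
      ExecutorService executor =
          Executors.newFixedThreadPool(10, ThreadManager.currentRequestThreadFactory());
      try {
        List<Callable<List<String>>> lookups =
            domainNames.stream()
                .map(name -> (Callable<List<String>>) () -> fetchRecordSets(name))
                .collect(Collectors.toList());
        List<List<String>> results = new ArrayList<>();
        for (Future<List<String>> future : executor.invokeAll(lookups)) {
          results.add(future.get());
        }
        return results;
      } finally {
        executor.shutdownNow();
      }
    }
  }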
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178126877
This establishes a fully functional pipeline which generates detail reports for each registrar_tld pair from BigQuery. The main features:
1. Deserialization from AVRO GenericRecord (from BigQuery) into BillingEvent, a POJO we control (see the sketch below). This is especially valuable to enable intrinsic type-safety at the start of the pipeline.
2. Addition of .sql files containing the queries used to generate detail reports. These will later be templated to enable general usage.
3. Multi-file writing within a single TextIO transform, which writes BillingEvents to different files based on their registrar_tld key combo.
This also upgrades the Beam core SDK referenced in repositories.bzl to 2.2.0 and returns the definitions to alphabetical order, to facilitate use of the check_bazel_deps.py script.
The final steps are:
- Converting this to a Nomulus command
- Templating the .sql queries
- @Injecting the @Config values for a given project
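A minimal sketch of the GenericRecord-to-POJO step (the BillingEvent fields shown are illustrative, not the real schema):

  import java.io.Serializable;
  import org.apache.avro.generic.GenericRecord;
  import org.apache.beam.sdk.io.gcp.bigquery.SchemaAndRecord;

  /** Illustrative POJO for one detail-report row; the real field set lives in the pipeline code. */
  class BillingEvent implements Serializable {
    String registrarId;
    String tld;
    double amount;

    /** Parse function handed to BigQueryIO.read(...), converting one Avro record into a POJO. */
    static BillingEvent parseFromRecord(SchemaAndRecord schemaAndRecord) {
      GenericRecord record = schemaAndRecord.getRecord();
      BillingEvent event = new BillingEvent();
      // Avro strings arrive as CharSequence (often Utf8), so convert explicitly.
      event.registrarId = record.get("registrarId").toString();
      event.tld = record.get("tld").toString();
      event.amount = (double) record.get("amount");
      return event;
    }

    /** Key used to route each event to its per-registrar, per-TLD detail report file. */
    String getDetailReportKey() {
      return registrarId + "_" + tld;
    }
  }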
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178124838
This solves the problem of external poll message IDs not being globally
unique by simply appending the event year. This means that autorenew poll
messages will increment by one every year, so they will always be unique.
This also requires no data schema changes, and thus most importantly, no
data migration.
Incoming requests lacking this new year field will continue to work for
now for backwards compatibility reasons. This is possible because we don't
actually use the year for anything.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178012685
Currently, if for some reason publishDnsUpdates gets a request to publish
domains to a DnsWriter that doesn't belong to said domains, it logs a warning
but publishes anyway.
This can happen when writers are changed (swapped for a different writer),
leaving update commands "stuck" with the wrong writer.
Normally you'd expect these update commands to just publish their data and be
on their way. However, if the update fails for some reason (likely, if the
writer change happened BECAUSE the updates were failing), then the same
publishDnsUpdates command will continue to run forever.
This CL changes the behavior for "publish to wrong DnsWriter" to instead
requeue the batched domains / hosts back to the DNS pull queue, allowing them
to be re-batched (and hence published) with the correct DnsWriter(s). This
re-batching takes place in ReadDnsQueueAction.java.
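A rough sketch of requeuing to the pull queue with the App Engine task queue API; the queue and parameter names are hypothetical, not the actual Nomulus ones:

  import com.google.appengine.api.taskqueue.Queue;
  import com.google.appengine.api.taskqueue.QueueFactory;
  import com.google.appengine.api.taskqueue.TaskOptions;
  import com.google.appengine.api.taskqueue.TaskOptions.Method;
  import java.util.List;

  class DnsRequeueSketch {
    /** Puts each domain back on the DNS pull queue so ReadDnsQueueAction can re-batch it. */
    static void requeue(List<String> domainNames, String tld) {
      Queue dnsPullQueue = QueueFactory.getQueue("dns-pull");  // hypothetical queue name
      for (String domainName : domainNames) {
        dnsPullQueue.add(
            TaskOptions.Builder.withMethod(Method.PULL)
                .param("Target-Name", domainName)   // hypothetical param names
                .param("Target-Type", "DOMAIN")
                .param("tld", tld));
      }
    }
  }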
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177863076
There has been quite some discrepancy between GitHub and our internal build. I had to manually push a commit (1c1f95992a) to bring GitHub up to date.
Now the GitHub version is identical to what we'd get from doing a -dr-mkfoss. Hopefully next time things will go smoothly.
The culprit turns out to be MOE itself. It was not attributing changes to commits correctly when the change involves moving files as a result of modifications made to moe_config.json. When moe_config.json is altered in a CL to move files around, MOE always thinks the move happened in the first commit to be pushed.
For example, if we have CL1, CL2, and CL3, which correspond to CM1, CM2, and CM3 to be pushed to GitHub, and a change to moe_config.json in CL3 moves a folder, MOE will think that move happened in CM1. This usually is not a problem when all commits are pushed together, but when doing manual rebasing and cherry-picking, it can result in changes being unintentionally dropped along with a commit.
I'll do a push to GitHub early next week to confirm that things are back to normal.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177844920
The last commit did not pick up all the changes because MOE incorrectly attributed some changes to the wrong commit. This commit should reconcile that. It also picks up some changes to how the Hamcrest library is depended upon in the BUILD file, which should have been included in previous commits.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177637931
The scheme is:
- loadBytes: returns a ByteSource of the data
- loadFile: returns a String using UTF-8 encoding, optionally applying
substitutions
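A minimal sketch of what such helpers can look like on top of Guava's Resources (the class name and the %KEY% substitution format are illustrative):

  import com.google.common.io.ByteSource;
  import com.google.common.io.Resources;
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.util.Map;

  class TestDataLoaderSketch {
    /** Returns the raw bytes of a test data file next to the given class. */
    static ByteSource loadBytes(Class<?> context, String filename) {
      return Resources.asByteSource(Resources.getResource(context, filename));
    }

    /** Returns the file as a UTF-8 string, applying simple %KEY% substitutions if given. */
    static String loadFile(Class<?> context, String filename, Map<String, String> substitutions)
        throws IOException {
      String contents =
          Resources.toString(Resources.getResource(context, filename), StandardCharsets.UTF_8);
      for (Map.Entry<String, String> entry : substitutions.entrySet()) {
        contents = contents.replace("%" + entry.getKey() + "%", entry.getValue());
      }
      return contents;
    }
  }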
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177606406
Failing to use the fee extension during EAP can result in charges to registrars
that are radically different from what they may have been expecting.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177597883
We're doing this to allow several new tests:
- xml files (that exist today)
- xml files with substitutions
- xml content (maybe? Currently private. Caching the files seems more readable)
- no data at all
Instead of having only one interface
eppToolVerifier.verifySent("file1.xml", "file2.xml");
we're refactoring to allow:
eppToolVerifier
.verifySent("file1.xml")
.verifySentAny() // we don't care about this EPP
.verifySent("file2.xml", substitutions)
.verifyNoMoreSent();
In this case we're checking that "exactly 3 EPPs were sent, where the 1st one has content from file1.xml, and the 3rd one has the content from file2.xml, after the given substitutions were applied"
This also updates EppToolCommandTestCase to have only one EppToolVerifier, and to
always finish by checking verifyNoMoreSent, meaning that in every test all sent
EPPs must be accounted for (verified or skipped).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177353887
This removes some qualifiers that aren't necessary (e.g. public/abstract on interfaces, private on enum constructors, final on private methods, static on nested interfaces/enums), and uses Java 8 lambdas and other features where that's an improvement.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177182945
*** Reason for rollback ***
Going with a safer approach to using fresh poll message IDs that doesn't mutate domains themselves.
*** Original change description ***
Use PollMessage IDs that are globally unique across all time
The previous functionality was reusing the same PollMessage ID for Autorenews
every year. This can potentially cause confusion at registrars if they were
expecting these to be globally unique across all time. So this change simply
changes the ID during autorenew.
***
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=177081870