Mirror of https://github.com/google/nomulus.git
Synced 2025-04-30 12:07:51 +02:00
Attempting to run DeleteOldCommitLogs in prod resulted in many DatastoreTimeoutException errors. Our assumption is that loading so many CommitLogManifests (over 200 million of them), when each load has a slight possibility of failure, yields a very high probability of at least one error. A shard aborts after 20 such errors, so by eliminating as many loads as possible and retrying the remaining loads inside a transaction, we effectively prevent exceptions from "leaking" out to the mapreduce framework, which should keep us below 20. At least, that is our current best guess as to why the mapreduce fails.

EppResources are loaded in the map stage to get the revisions, and CommitLogManifests are loaded only in the reduce stage, as a sanity check so we don't accidentally delete resources we still need in prod. Both of these loads are wrapped in transactNew to make sure they retry individually. The only load not done inside a transaction is the EppResourceIndex, but there is no way around that without rewriting the EppResourceInputs.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=164176764
Top-level directories:

backup
batch
bigquery
builddefs
config
cron
dns
export
flows
groups
keyring/kms
mapreduce/inputs
model
module
monitoring
pricing
rdap
rde
reporting
request
security
server
storage/drive
testing
tldconfig/idn
tmch
tools
ui
util
whois
xjc
xml
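The retry-inside-transaction idea from the commit message above can be sketched as follows. This is a hedged, self-contained illustration, not actual Nomulus code: the class `RetrySketch`, the method `retryInTransaction`, and the stand-in `TransientException` are all hypothetical names. In the real codebase this role is played by wrapping each load in `transactNew`, which retries the transaction body on datastore timeouts so that a transient failure is absorbed locally instead of counting toward the mapreduce shard's error limit.

```java
import java.util.function.Supplier;

/**
 * Sketch of retrying a single datastore load locally so that transient
 * failures (e.g. DatastoreTimeoutException) do not leak out to the
 * mapreduce framework. All names here are illustrative.
 */
public class RetrySketch {

  /** Stand-in for a transient datastore failure such as DatastoreTimeoutException. */
  static class TransientException extends RuntimeException {}

  /**
   * Runs {@code work}, retrying up to {@code maxAttempts} times on transient
   * failures. Only when attempts are exhausted does the exception propagate
   * to the caller (in the mapreduce case, toward the shard's abort counter).
   */
  static <T> T retryInTransaction(Supplier<T> work, int maxAttempts) {
    TransientException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return work.get();  // e.g. load an EppResource or CommitLogManifest
      } catch (TransientException e) {
        last = e;  // swallow and retry instead of letting it escape
      }
    }
    throw last;  // all attempts failed; now the error is allowed to surface
  }
}
```

With a per-load retry like this, a load with a small independent failure probability almost never fails overall, which is why moving the retries inside the map and reduce stages keeps the shard's observed error count low even across hundreds of millions of entities.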