This was a surprisingly involved change. Some of the difficulties included
java.util.Optional purposely not being Serializable (so I had to convert a
few Optional fields in mapreduce classes to @Nullable) and having to add the
Truth Java8 extension library for assertion support.
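For illustration, the Optional-to-@Nullable conversions were of this shape
(class and field names here are hypothetical):

    import java.io.Serializable;
    import javax.annotation.Nullable;

    class ExampleReduceStep implements Serializable {
      // Before: Optional<String> registrarId; -- this fails serialization,
      // because java.util.Optional is purposely not Serializable.
      @Nullable String registrarId;  // null now stands in for Optional.empty()
    }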
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=171863777
This moves us from the outdated google/data XML API to the OnePlatform
REST/JSON API, finally silencing the deprecation warnings we've been seeing.
The synchronization algorithm diffs the spreadsheet's current values with its
internally sourced values, adding a row to a batch update request whenever
there's a discrepancy. Additional internal data is added as an append
operation at the end of the sheet, and any extraneous spreadsheet data is
cleared.
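As a rough sketch, the three operations map onto the values API like this
(using the Sheets v4 Java client, which is my assumption for the OnePlatform
API; this is an illustration of the approach, not the actual SheetSynchronizer
code, and the single-header-row layout and ranges are assumptions):

    import com.google.api.services.sheets.v4.Sheets;
    import com.google.api.services.sheets.v4.model.BatchUpdateValuesRequest;
    import com.google.api.services.sheets.v4.model.ClearValuesRequest;
    import com.google.api.services.sheets.v4.model.ValueRange;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    class SheetSyncSketch {
      void sync(Sheets sheets, String spreadsheetId,
          List<List<Object>> sheetRows, List<List<Object>> internalRows)
          throws IOException {
        // 1) Diff: only rows with a discrepancy join the batch update.
        List<ValueRange> updates = new ArrayList<>();
        int common = Math.min(sheetRows.size(), internalRows.size());
        for (int i = 0; i < common; i++) {
          if (!sheetRows.get(i).equals(internalRows.get(i))) {
            updates.add(new ValueRange()
                .setRange("A" + (i + 2))  // 1-based rows, plus a header row
                .setValues(List.of(internalRows.get(i))));
          }
        }
        if (!updates.isEmpty()) {
          sheets.spreadsheets().values()
              .batchUpdate(spreadsheetId, new BatchUpdateValuesRequest()
                  .setValueInputOption("RAW")
                  .setData(updates))
              .execute();
        }
        // 2) Append: internal rows the sheet lacks go on the end.
        if (internalRows.size() > common) {
          sheets.spreadsheets().values()
              .append(spreadsheetId, "A1", new ValueRange()
                  .setValues(internalRows.subList(common, internalRows.size())))
              .setValueInputOption("RAW")
              .execute();
        }
        // 3) Clear: extraneous rows at the bottom of the sheet are wiped.
        if (sheetRows.size() > common) {
          sheets.spreadsheets().values()
              .clear(spreadsheetId, "A" + (common + 2) + ":Z",
                  new ClearValuesRequest())
              .execute();
        }
      }
    }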
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=169273590
We create an injectable LockHandler that just calls the static
Lock.executeWithLocks function.
I'm not sure what the correct place for the LockHandler is. I think
model/server is only appropriate for the actual datastore lock. This is a "per request" lock, so maybe request/lock?
-----------------------------
This is the initial step in adding the "lock implicitly released on request death" feature, but it's also useful on its own - it's easier to test Actions when we can use a fake lock.
To keep this CL simple, we keep using the old Lock as is in most places. We just choose a single example to convert to LockHandler to showcase it. Converting all other uses will be in a subsequent CL.
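Schematically, the seam looks like this (the real Lock.executeWithLocks
signature is simplified here, and FakeLockHandler is a hypothetical test
double):

    import java.util.concurrent.Callable;
    import org.joda.time.Duration;

    /** Injectable seam so Actions can be tested against a fake lock. */
    interface LockHandler {
      boolean executeWithLocks(
          Callable<Void> callable, Duration leaseLength, String... lockNames);
    }

    /** Production implementation: delegates to the existing static function. */
    class LockHandlerImpl implements LockHandler {
      @Override
      public boolean executeWithLocks(
          Callable<Void> callable, Duration leaseLength, String... lockNames) {
        return Lock.executeWithLocks(callable, leaseLength, lockNames);
      }
    }

    /** Test fake: always "acquires" the lock and runs the callable inline. */
    class FakeLockHandler implements LockHandler {
      @Override
      public boolean executeWithLocks(
          Callable<Void> callable, Duration leaseLength, String... lockNames) {
        try {
          callable.call();
          return true;
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    }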
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=167357564
It was buggy (didn't work) and was never actually used.
Why it was never actually used: for it to be used, executeWithLock would have
to be called with different requesters on the same lockId. That never
happened in the code.
How it was buggy: logically, the queue is deleted on release of the lock,
meaning it was meaningless the only time it mattered - when the lock isn't
taken. In addition, a different bug meant that having items in the queue
prevented the lock from being released, forcing all other tasks to wait for
the lock timeout even when the task that acquired the lock was long done.
Alternative: fix the queue. This would mean we don't delete the lock on release (since we want to keep the queue); instead, we resave the same lock with an expiration date of START_OF_TIME. In addition, we'd need to fix the .equals used to determine whether a lock is the same as the acquired lock - using instead some isSame function that ignores the queue.
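A rough sketch of that alternative (all names illustrative, not the actual
Lock code):

    import java.util.ArrayDeque;
    import java.util.Queue;
    import org.joda.time.DateTime;
    import org.joda.time.DateTimeZone;

    class QueuedLockSketch {
      String lockId;
      String requester;
      DateTime expirationTime;
      Queue<String> queue = new ArrayDeque<>();  // waiting requesters

      /** "Release" by expiring instead of deleting, so the queue survives. */
      void release() {
        expirationTime = new DateTime(0, DateTimeZone.UTC);  // START_OF_TIME
        // ...and resave this entity rather than deleting it.
      }

      /** Unlike equals(), ignores the queue when comparing locks. */
      boolean isSame(QueuedLockSketch other) {
        return lockId.equals(other.lockId) && requester.equals(other.requester);
      }
    }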
Note: the queue is dangerous! An item (calling class / action) at the head of the queue means no other calling class can get that lock. Everything waits for the first calling class to be re-run - but that might take a long time (depending on that action's rerun policy) and might never happen at all (if for some reason that action decided it was no longer needed without acquiring the lock) - causing all other actions to stall forever!
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163705463
We want to be safer and more explicit about the authentication needed by the many actions that exist.
As such, we make the 'auth' parameter required in @Action (so it's always clear who can run a specific action), and we replace @Auth with an enum so that only pre-approved configurations that are aptly named and documented can be used.
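Schematically, the new shape is something like this (the enum values listed
here are illustrative, not the actual pre-approved list):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    /** Pre-approved, documented auth configurations. */
    enum Auth {
      AUTH_PUBLIC,         // anyone may run the action
      AUTH_INTERNAL_ONLY,  // only internal requests, e.g. task queues
      AUTH_ADMIN           // logged-in admin users only
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Action {
      String path();
      Auth auth();  // required: no default, so every action must declare it
    }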
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=162210306
The billing account map will be serialized in the following format:
{currency1=id1, currency2=id2, ...}
In order for the output to be deterministic, the billing account map is stored as a sorted map.
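For example (placeholder currencies and ids), a TreeMap yields exactly that
deterministic output:

    import java.util.SortedMap;
    import java.util.TreeMap;

    class BillingAccountMapDemo {
      public static void main(String[] args) {
        // A TreeMap iterates in sorted key order, so its toString() is
        // deterministic regardless of insertion order.
        SortedMap<String, String> billingAccountMap = new TreeMap<>();
        billingAccountMap.put("USD", "123");
        billingAccountMap.put("JPY", "456");
        billingAccountMap.put("EUR", "789");
        System.out.println(billingAccountMap);  // {EUR=789, JPY=456, USD=123}
      }
    }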
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161075814
This replaces the memcache caching, which we think is overall a bad idea.
We load all registrars at once instead of caching each as needed, so that
loadAllCached() can be served from the cache as well and will therefore
always produce results consistent with loadByClientIdCached()'s view of the
registrars' values. All of our prod registrars together total 300 KB of data
right now, so this is hardly worth optimizing further, and in any case this
will likely reduce latency even further, since most requests will be
served out of memory.
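The shape of the cache is roughly this (a sketch with a stubbed loader and an
assumed expiry, not the actual implementation):

    import com.google.common.base.Supplier;
    import com.google.common.base.Suppliers;
    import com.google.common.collect.ImmutableMap;
    import java.util.concurrent.TimeUnit;

    class RegistrarCacheSketch {
      static class Registrar {}  // stand-in for the real entity

      // One memoized snapshot backs both methods, so their views agree.
      private static final Supplier<ImmutableMap<String, Registrar>> CACHE =
          Suppliers.memoizeWithExpiration(
              RegistrarCacheSketch::loadAll, 10, TimeUnit.MINUTES);

      static Iterable<Registrar> loadAllCached() {
        return CACHE.get().values();
      }

      static Registrar loadByClientIdCached(String clientId) {
        return CACHE.get().get(clientId);
      }

      private static ImmutableMap<String, Registrar> loadAll() {
        return ImmutableMap.of();  // stand-in for loading all registrars
      }
    }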
While I was in the Registrar file, I standardized the error messages for
incorrect password and clientId length to use the same format, and cleaned up
a few random things I noticed in the code.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156151828
This is the final preparatory step necessary in order to load configuration
from YAML in a static context and then provide it either via Dagger (using
ConfigModule) or through RegistryConfig's existing static functions.
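A minimal sketch of the pattern, assuming a SnakeYAML-style parser (the
resource path and field names are illustrative):

    import dagger.Module;
    import dagger.Provides;
    import org.yaml.snakeyaml.Yaml;

    class RegistryConfigSketch {
      public static class RegistryConfigSettings {
        public String projectId;  // populated from the YAML document
      }

      // Parsed once, statically, so both access paths share one instance.
      private static final RegistryConfigSettings CONFIG =
          new Yaml().loadAs(
              RegistryConfigSketch.class.getResourceAsStream(
                  "/config/default-config.yaml"),
              RegistryConfigSettings.class);

      /** RegistryConfig-style static accessor. */
      static String getProjectId() {
        return CONFIG.projectId;
      }

      /** The Dagger path: the same parsed config, provided for injection. */
      @Module
      static class ConfigModule {
        @Provides
        static RegistryConfigSettings provideConfig() {
          return CONFIG;
        }
      }
    }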
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143819983
We're now using java_import_external instead of maven_jar. This allows
us to specify the relationships between jars, thereby allowing us to
eliminate scores of vendor BUILD files that did nothing but re-export
@foo//jar targets, thus addressing the concerns of djhworld on Hacker
News: https://news.ycombinator.com/item?id=12738072
We now have redundant failover mirrors, which is a feature I added to
Bazel 0.4.2 in ed7ced0018
A new standard naming convention is now being used for all Maven repos.
Those names are calculated from the group_artifact name using the
following algorithm that eliminates redundancy:
https://gist.github.com/jart/41bfd977b913c2301627162f1c038e55
The JSR330 dep has been removed from java targets if they also depend
on Dagger, since Dagger always exports JSR330.
Annotation processor dependencies should now be leaner and meaner, by
more appropriately managing what needs to be on the classpath at
runtime. This should trim down the production jar by >1MB. As it stands
currently in the open source world:
- backend_jar_deploy.jar: 50MB
- frontend_jar_deploy.jar: 30MB
- tools_jar_deploy.jar: 45MB
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143487929
This allows separate Bazel projects to reference Nomulus as an external
repository. They can then copy the []
directory structure into their own project and customize the Action
and Module lists for the GAE modules in their own deployment.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=136863886
This doesn't change the end result of a successful run, though this is what a typical flow looks like prior to this fix:
Consider a sheet with 10 data rows (+ 1 header row = 11 rows total). A 10-row data set will call worksheet.setRowCount(10), which truncates the last row of the existing sheet because the count doesn't account for the header row. This row will eventually be added again in the last for loop, but if the synchronizer fails mid-sync, this last row will remain dropped. This fix prevents that last row from being dropped in the first place.
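To make the off-by-one concrete, here is a self-contained model of the
truncation (a plain list standing in for the worksheet):

    import java.util.ArrayList;
    import java.util.List;

    class RowCountSketch {
      public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        rows.add("header");
        for (int i = 1; i <= 10; i++) {
          rows.add("data" + i);  // 1 header + 10 data = 11 rows
        }
        // Buggy: setRowCount(10) keeps 10 rows *total*, so the last data
        // row is truncated until the final for loop re-appends it.
        int dataRows = 10;
        rows.subList(dataRows, rows.size()).clear();  // models setRowCount(10)
        System.out.println(rows.size());  // 10 -- "data10" is gone
        // Fixed: count the header too, i.e. setRowCount(dataRows + 1).
      }
    }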
This doesn't fix the broader issue of SheetSynchronizer not behaving transactionally -- that's a different can of worms.
See the linked bug for an instance where the synchronizer failed mid-run and dropped a data row as a result.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=136398109
It's best to be consistent and use the same thing everywhere. "clientId" was
already used in more places and is shorter and no more ambiguous, so it's the
logical one to win out.
Note that this CL is almost solely a big Eclipse-assisted refactoring. There are
two places that I did not change clientIdentifier -- the actual entity field on
Registrar (though I did change all getters and setters), and the name of a
column on the exported registrar spreadsheet. Both would require data
migrations.
Also fixes a few minor nits discovered in touched files, including an incorrect
test in OfyFilterTest.java and some superfluous uses of String.format() when
calling checkArgument().
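For reference, the checkArgument() nit looks like this (a hypothetical
example, not the actual code):

    import static com.google.common.base.Preconditions.checkArgument;

    class ClientIdValidator {
      static void validate(String clientId) {
        // Superfluous: String.format() runs eagerly, even when the check
        // passes:
        //   checkArgument(clientId.length() >= 3,
        //       String.format("clientId too short: %s", clientId));
        // Preferred: checkArgument() formats its %s template lazily, only
        // on failure:
        checkArgument(clientId.length() >= 3, "clientId too short: %s", clientId);
      }
    }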
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133956465
The dark lord Gosling designed the Java package naming system so that
ownership flows from the DNS system. Since we own the domain name
registry.google, it seems only appropriate that we should use
google.registry as our package name.
This change renames directories in preparation for the great package
rename. The repository is now in a broken state because the code
itself hasn't been updated. However, this should ensure that git
correctly preserves history for each file.