Specifically, this prevents suspended registrars from creating domains or applications. Pending registrars already can't perform these actions because they get an error message when attempting to log in.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=170481338
This moves us from the outdated google/data XML API to the OnePlatform REST/JSON API, finally silencing the deprecation warnings we've been seeing.
The synchronization algorithm diffs the spreadsheet's current values with its internally sourced values, adding a row to a batch update request whenever there's a discrepancy. Any additional internal data is appended to the end of the sheet, and any extraneous spreadsheet rows are cleared.
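Roughly, the flow looks like the sketch below (updateRow, appendRow, and clearRow are hypothetical stand-ins for the real batched Sheets operations, and rows are simplified to plain maps):

    import java.util.List;
    import java.util.Map;

    /** Hedged sketch of the diff-based sync described above, not the actual SheetSynchronizer. */
    final class SheetSyncSketch {
      void synchronize(List<Map<String, String>> sheetRows, List<Map<String, String>> internalRows) {
        int common = Math.min(sheetRows.size(), internalRows.size());
        // Diff rows that exist in both places; only changed rows join the batch update request.
        for (int i = 0; i < common; i++) {
          if (!sheetRows.get(i).equals(internalRows.get(i))) {
            updateRow(i, internalRows.get(i));
          }
        }
        // Internal rows with no spreadsheet counterpart get appended to the end of the sheet.
        for (int i = common; i < internalRows.size(); i++) {
          appendRow(internalRows.get(i));
        }
        // Extraneous spreadsheet rows beyond the internal data get cleared.
        for (int i = common; i < sheetRows.size(); i++) {
          clearRow(i);
        }
      }

      void updateRow(int index, Map<String, String> values) { /* add to batch update request */ }
      void appendRow(Map<String, String> values) { /* append request */ }
      void clearRow(int index) { /* clear request */ }
    }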
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=169273590
We create an injectable LockHandler that just calls the static
Lock.executeWithLocks function.
I'm not sure what the correct place to put the LockHandler is. I think
model/server is only appropriate for the actual datastore lock. This is a "per request" lock, so maybe request/lock?
-----------------------------
This is the initial step in adding the "lock implicitly released on request death" feature, but it's also useful on its own - easier to test Actions when we can use a fake lock.
To keep this CL simple, we keep using the old Lock as is in most places. We just choose a single example to convert to LockHandler to showcase it. Converting all other uses will be in a subsequent CL.
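For reference, a minimal sketch of the shape of this (the executeWithLocks signature is simplified here, not the real one):

    import java.util.concurrent.Callable;
    import javax.inject.Inject;
    import org.joda.time.Duration;

    /** Injectable wrapper around the static lock helper. */
    interface LockHandler {
      boolean executeWithLocks(
          Callable<Void> callable, String tld, Duration leaseLength, String... lockNames);
    }

    /** Production implementation: just delegates to the static Lock.executeWithLocks. */
    class LockHandlerImpl implements LockHandler {
      @Inject LockHandlerImpl() {}

      @Override
      public boolean executeWithLocks(
          Callable<Void> callable, String tld, Duration leaseLength, String... lockNames) {
        return Lock.executeWithLocks(callable, tld, leaseLength, lockNames);
      }
    }

    /** Fake for Action tests: runs the callable directly without touching datastore. */
    class FakeLockHandler implements LockHandler {
      @Override
      public boolean executeWithLocks(
          Callable<Void> callable, String tld, Duration leaseLength, String... lockNames) {
        try {
          callable.call();
          return true;
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    }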
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=167357564
It turns out the BigQuery JSON API selects its validator exclusively through
the useLegacySql flag (the #standardSQL directive isn't considered). To fix
this, we add back the explicit flag.
This also logs unexpected API errors, instead of allowing the job to quietly fail.
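For example (the helper name and query are made up; JobConfigurationQuery is from the BigQuery API client):

    import com.google.api.services.bigquery.model.JobConfiguration;
    import com.google.api.services.bigquery.model.JobConfigurationQuery;

    class StandardSqlJobSketch {
      // The explicit flag is what the JSON API honors; a "#standardSQL" line at the top of
      // the query text is not enough to select the standard SQL validator.
      static JobConfiguration standardSqlJob(String sql) {
        return new JobConfiguration()
            .setQuery(new JobConfigurationQuery().setQuery(sql).setUseLegacySql(false));
      }
    }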
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=166757569
It was buggy (didn't work) and was never actually used.
Why never actually used: for it to be used, executeWithLock has to be called
with different requesters on the same lockId. That never happened in the code.
How it was buggy: the queue was deleted on release of the lock, meaning it was
meaningless the only time it mattered - when the lock isn't taken. In
addition, a different bug meant that having items in the queue prevented the
lock from being released, forcing all other tasks to wait for the lock
timeout even when the task that acquired the lock was long done.
Alternative: fix the queue. This would mean we don't delete the lock on release (since we want to keep the queue); instead, we resave the same lock with its expiration date set to START_OF_TIME. In addition, we'd need to fix the .equals check used to determine whether a lock is the same as the acquired lock - instead use some isSame function that ignores the queue.
Note: the queue is dangerous! An item (calling class / action) at the head of the queue means no other calling class can get that lock. Everything waits for that first calling class to be re-run - but that might take a long time (depending on that action's rerun policy) and might never happen at all (if for some reason that action decided it no longer needed the lock and never tried to acquire it again) - causing all other actions to stall forever!
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=163705463
This makes the code more understandable from callsites, and also forces
users of this function to deal with the situation where the registrar
with a given client ID might not be present (it was previously silently
NPEing from some of the callsites).
This also adds a test helper method loadRegistrar(clientId) that retains
the old functionality, for terseness in tests. It also fixes some instances
of using the load method with the wrong cachedness -- some high-traffic
uses (WHOIS) that should be cached, and some low-traffic reporting uses
that don't benefit from caching and so might as well always be current.
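A sketch of what a call site looks like after the change (the WHOIS handler and exception wording are illustrative):

    import java.util.Optional;

    void handleWhoisQuery(String clientId) {
      Optional<Registrar> registrar = Registrar.loadByClientId(clientId);
      if (!registrar.isPresent()) {
        // Callers can no longer ignore this case; previously this was a latent NPE.
        throw new IllegalArgumentException("Unknown client id: " + clientId);
      }
      emitWhoisResponse(registrar.get());
    }

    // In tests, the terser helper keeps the old load-or-fail behavior:
    //   Registrar registrar = loadRegistrar("TheRegistrar");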
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=162990468
We want to be safer and more explicit about the authentication needed by the many actions that exist.
As such, we make the 'auth' parameter required in @Action (so it's always clear who can run a specific action) and we replace @Auth with an enum so that only pre-approved configurations that are aptly named and documented can be used.
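So a typical action declaration now looks something like this (the path and the particular Auth constant are illustrative):

    @Action(
        path = "/_dr/task/exampleTask",
        method = Action.Method.POST,
        auth = Auth.AUTH_INTERNAL_ONLY)  // a pre-approved, documented configuration
    public final class ExampleTaskAction implements Runnable {
      @Override
      public void run() {
        // task body
      }
    }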
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=162210306
The billing account map will be serialized in the following format:
{currency1=id1, currency2=id2, ...}
In order for the output to be deterministic, the billing account map is stored as a sorted map.
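For example (account ids made up), a sorted map keyed by currency gives exactly that output regardless of insertion order:

    import com.google.common.collect.ImmutableSortedMap;
    import org.joda.money.CurrencyUnit;

    ImmutableSortedMap<CurrencyUnit, String> billingAccountMap =
        ImmutableSortedMap.of(
            CurrencyUnit.USD, "12345",
            CurrencyUnit.JPY, "67890");
    // billingAccountMap.toString() -> {JPY=67890, USD=12345}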
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=161075814
This replaces the memcache caching, which we think is overall a bad idea.
We load all registrars at once instead of caching each as needed, so that
the loadAllCached() methods can be cached as well, and therefore will
always produce results consistent with loadByClientIdCached()'s view of the
registrar's values. All of our prod registrars together total 300k of data
right now, so this is hardly worth optimizing further, and in any case this
will likely reduce latency even further since most requests will be
served out of memory.
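The shape of the cache is roughly this (names and the expiry interval are illustrative, not the actual code):

    import com.google.common.base.Supplier;
    import com.google.common.base.Suppliers;
    import com.google.common.collect.ImmutableMap;
    import java.util.concurrent.TimeUnit;

    final class RegistrarCacheSketch {
      // A single memoized load of *all* registrars, so the "load all" and "load by client id"
      // cached views always come from the same snapshot.
      private static final Supplier<ImmutableMap<String, Registrar>> CACHE =
          Suppliers.memoizeWithExpiration(
              RegistrarCacheSketch::loadAllFromDatastore, 10, TimeUnit.MINUTES);

      static Iterable<Registrar> loadAllCached() {
        return CACHE.get().values();
      }

      static Registrar loadByClientIdCached(String clientId) {
        return CACHE.get().get(clientId);
      }

      private static ImmutableMap<String, Registrar> loadAllFromDatastore() {
        return ImmutableMap.of();  // the real code runs the datastore query here
      }
    }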
While I was in the Registrar file I standardized the error messages for incorrect
password and clientId length to be the same format, and cleaned up a few
random things I noticed in the code.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156151828
This is a follow-up to Lai's refactoring of the get reservation types
code to return a set rather than a single type. Since we're always
returning a set now, the more natural way to represent a label that is
not reserved is to return an empty set rather than a set containing
UNRESERVED.
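So a typical check now reads like this (the exact lookup method name is assumed):

    import com.google.common.collect.ImmutableSet;

    ImmutableSet<ReservationType> types = ReservedList.getReservationTypes("foo", "tld");
    if (types.isEmpty()) {
      // The label is not reserved at all; previously this was a singleton set of UNRESERVED.
    }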
Also fixes some minor style issues regarding static importing and test
method naming that I ran across (no logic implications).
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=151132116
This is the final preparatory step necessary in order to load
configuration from YAML in a static context and then provide it either via
Dagger (using ConfigModule) or through RegistryConfig's existing static
functions.
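The resulting pattern is roughly this (ConfigSettings, loadYaml, and the provided value are illustrative):

    import dagger.Module;
    import dagger.Provides;

    final class RegistryConfig {
      // Parsed once at class-load time, so the value is available even outside of Dagger.
      private static final ConfigSettings CONFIG = loadYaml("default-config.yaml");

      static String getProjectId() {
        return CONFIG.appEngine.projectId;
      }
    }

    @Module
    final class ConfigModule {
      // The same value, provided through Dagger without re-reading the YAML.
      @Provides
      static String provideProjectId() {
        return RegistryConfig.getProjectId();
      }
    }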
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143819983
The next step will be to get rid of RegistryConfig descendants and RegistryConfigLoader entirely.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143812815
This primarily addresses issues with TMCH testing mode and email sending utils.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143710550
We're now using java_import_external instead of maven_jar. This allows
us to specify the relationships between jars, which lets us eliminate
scores of vendor BUILD files that did nothing but re-export @foo//jar
targets, thus addressing the concerns of djhworld on Hacker News:
https://news.ycombinator.com/item?id=12738072
We now have redundant failover mirrors, a feature I added to
Bazel 0.4.2 in ed7ced0018.
A new standard naming convention is now being used for all Maven repos.
Those names are calculated from the group_artifact name using the
following algorithm that eliminates redundancy:
https://gist.github.com/jart/41bfd977b913c2301627162f1c038e55
The JSR330 dep has been removed from Java targets if they also depend
on Dagger, since Dagger always exports JSR330.
Annotation processor dependencies should now be leaner and meaner, by
more appropriately managing what needs to be on the classpath at
runtime. This should trim down the production jar by >1MB. As it stands
currently in the open source world:
- backend_jar_deploy.jar: 50MB
- frontend_jar_deploy.jar: 30MB
- tools_jar_deploy.jar: 45MB
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=143487929
I also moved to a non-concurrent modification syncing model. It was adding more
complexity than was justified just to have two requests going simultaneously
instead of one. The API doesn't reliably allow much more than that anyway.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=141210192
This defaults to null, and leaving it null now simply disables reserved terms
exporting rather than throwing an error every time the action runs.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=138763161
This allows separate Bazel projects to reference Nomulus as an external
repository. They can then copy the []
directory structure into their own project and customize the Action
and Module lists for the GAE modules in their own deployment.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=136863886
This doesn't change the end result of a successful run, but here's what a typical flow looked like prior to this fix:
Consider a sheet with 10 data rows (+ 1 header row = 11). A 10-row data set will call worksheet.setRowCount(10), which truncates the last row of the existing sheet. That row is eventually added back in the last for loop, but if the synchronizer fails mid-sync, it stays dropped. This fix prevents that row from being dropped in the first place.
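In other words, the sheet really has data.size() + 1 rows once the header is counted; this reading of the fix is an assumption ("data" and "worksheet" here are illustrative):

    // Sizing the sheet to the data alone drops the last data row until the final write loop
    // restores it; counting the header row avoids ever shrinking below what's needed.
    int headerRowCount = 1;
    worksheet.setRowCount(data.size() + headerRowCount);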
This doesn't fix the broader issue of SheetSynchronizer not behaving transactionally -- that's a different can of worms.
See the linked bug for an instance where the synchronizer failed mid-run and dropped a data row as a result.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=136398109
This is to better distinguish between an LRP "token" (the string passed along in EPP) and the datastore entity that contains the token and all metadata.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=135943480
The default is to support GET, which doesn't work with cron fanout, since cron
fanout only issues POST requests.
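So actions invoked by cron fanout need to declare POST explicitly, e.g. (the path is illustrative):

    @Action(
        path = "/_dr/cron/exampleFanoutTask",
        method = Action.Method.POST)  // cron fanout only issues POST requests
    public final class ExampleFanoutAction implements Runnable {
      @Override
      public void run() {
        // task body
      }
    }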
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=134284855
It's best to be consistent and use the same thing everywhere. "clientId" was
already used in more places and is shorter and no more ambiguous, so it's the
logical one to win out.
Note that this CL is almost solely a big Eclipse-assisted refactoring. There are
two places that I did not change clientIdentifier -- the actual entity field on
Registrar (though I did change all getters and setters), and the name of a
column on the exported registrar spreadsheet. Both would require data
migrations.
Also fixes a few minor nits discovered in touched files, including an incorrect
test in OfyFilterTest.java and some superfluous uses of String.format() when
calling checkArgument().
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133956465
This is an internal-only feature that breaks the open source build.
CL created with:
dr-replace '(compatible_with.*)' '\1 # MOE:strip_line'
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=128852873
The presubmits are warning that toUpperCase() and toLowerCase() are locale-specific, and advise using Ascii.toUpperCase() and Ascii.toLowerCase() as a locale-invariant alternative.
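For example:

    import com.google.common.base.Ascii;

    // String.toLowerCase()/toUpperCase() consult the default locale (e.g. "I" lower-cases to a
    // dotless "ı" under a Turkish locale); the Guava Ascii variants only touch the ASCII range
    // and behave identically everywhere.
    String tld = Ascii.toLowerCase("EXAMPLE");   // "example"
    String verb = Ascii.toUpperCase("create");   // "CREATE"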
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=127583677
See Rosie [] for context.
We've already switched over to using Dagger 2.4 in repositories.bzl,
so this change is fine for our open source drop.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=121863923
The dark lord Gosling designed the Java package naming system so that
ownership flows from the DNS system. Since we own the domain name
registry.google, it seems only appropriate that we should use
google.registry as our package name.