Commit graph

19 commits

Author SHA1 Message Date
cgoldfeder
ae039aa0d8 Remove all vestiges of memcache
Memcache is already off but now it's not in the code anymore.

This includes removing domain creation failfast, since that is actually
slower now than just running the flow - all you gain is a non-transactional
read over a transactional read, but the cost is that you always pay that
read, which is going to drive up latency.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=158183506
2017-06-14 10:28:24 -04:00
mcilwain
bb67841884 Add metrics for async batch operation processing
We want to know how long it's actually taking to process asynchronous
contact/host deletions and DNS refreshes on host renames. This adds
instrumentation. Five metrics are recorded as follows:

* An incrementable metric for each async task processed (split out by
  type of task and result).
* Two event metrics for processing time between when a task is enqueued
  and when it is processed -- tracked separately for contact/host
  deletion and DNS refresh on host rename.
* Two event metrics for batch size every time the two mapreduces are
  run (this is usually 0). Tracked separately for contact/host deletion
  and DNS refresh on host rename.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=157001310
2017-06-05 18:17:09 -04:00
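
A minimal sketch of the latency portion of the instrumentation described above, assuming the open-source Java monitoring client style of API (MetricRegistryImpl, IncrementableMetric, EventMetric); the actual metric names, labels, and method signatures in the real change may differ.

import static com.google.monitoring.metrics.EventMetric.DEFAULT_FITTER;

import com.google.common.collect.ImmutableSet;
import com.google.monitoring.metrics.EventMetric;
import com.google.monitoring.metrics.IncrementableMetric;
import com.google.monitoring.metrics.LabelDescriptor;
import com.google.monitoring.metrics.MetricRegistryImpl;
import org.joda.time.DateTime;
import org.joda.time.Duration;

class AsyncFlowMetricsSketch {

  private static final ImmutableSet<LabelDescriptor> LABELS =
      ImmutableSet.of(
          LabelDescriptor.create("operation_type", "Type of async task."),
          LabelDescriptor.create("result", "Result of processing the task."));

  // Incrementable metric: one count per processed task, split by type and result.
  private static final IncrementableMetric tasksProcessed =
      MetricRegistryImpl.getDefault().newIncrementableMetric(
          "/async_flows/tasks_processed", "Count of async tasks processed.", "tasks", LABELS);

  // Event metric: distribution of enqueue-to-processing latency.
  private static final EventMetric processingTime =
      MetricRegistryImpl.getDefault().newEventMetric(
          "/async_flows/processing_time",
          "Time from enqueueing a task to processing it.",
          "milliseconds",
          DEFAULT_FITTER,
          LABELS);

  void recordTask(String operationType, String result, DateTime whenEnqueued, DateTime now) {
    tasksProcessed.increment(operationType, result);
    processingTime.record(new Duration(whenEnqueued, now).getMillis(), operationType, result);
  }
}
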
nickfelt
65aaeccfc6 Fix TaskQueueHelper param matching for pull queue tasks
This fixes TaskQueueHelper methods and MatchableTaskInfo so that .param() matching works for pull queue tasks, by parsing the payload for URL-encoded parameters more liberally. It also updates all the places where we were formerly hacking around this by manually constructing the expected payloads and using TaskMatcher.payload() instead.

It also adds a TaskQueueHelper.assertTasksEnqueued() overload that accepts an Iterable<TaskStateInfo> so that you can cleanly assert that a queue contains the same tasks that were returned via a previous call to getQueueInfo("queue").getTaskInfo().

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=156604901
2017-05-23 17:22:49 -04:00
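
A hedged sketch of the parsing idea described above, assuming nothing about the real TaskQueueHelper/MatchableTaskInfo internals: treat a pull-task payload as URL-encoded key=value pairs and quietly skip anything that doesn't fit, so .param() matching can work without manually constructed payloads.

import static java.nio.charset.StandardCharsets.UTF_8;

import com.google.common.collect.ImmutableMultimap;
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

final class PayloadParamsSketch {

  /** Parses "a=1&b=2&b=3" into {a=[1], b=[2, 3]}; malformed pieces are ignored. */
  static ImmutableMultimap<String, String> parse(String payload) {
    ImmutableMultimap.Builder<String, String> params = ImmutableMultimap.builder();
    for (String piece : payload.split("&")) {
      int eq = piece.indexOf('=');
      if (eq <= 0) {
        continue;  // Be liberal: ignore anything that isn't a key=value pair.
      }
      try {
        params.put(
            URLDecoder.decode(piece.substring(0, eq), UTF_8.name()),
            URLDecoder.decode(piece.substring(eq + 1), UTF_8.name()));
      } catch (UnsupportedEncodingException | IllegalArgumentException e) {
        // Be liberal: a payload that isn't URL-encoded just yields fewer params.
      }
    }
    return params.build();
  }

  private PayloadParamsSketch() {}
}
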
nickfelt
f296b225af Make FlowReporter log tld and various other fields
As part of b/36599833, this makes FlowReporter log the tld(s) of every domain
flow it executes, so we can provide ICANN reporting totals on a per-TLD basis.

It also adds several other fields that we're computing anyway and which seem
useful, particularly for debugging any issues we see in production with the data
that we're attempting to record for ICANN reporting.  The full set of fields is:

  - commandType (e.g. "create", "info", "transfer")
  - resourceType* (e.g. "domain", "contact", "host")
  - flowClassName (e.g. "ContactCreateFlow", "DomainRestoreRequestFlow")
  - targetId* (e.g. "ns1.foo.com", "bar.org", "contact-1234")
  - targetIds* - plural of the above, for multi-resource checks
  - tld** (e.g. "com", "co.uk") - extracted from targetId, lowercased
  - tlds** - plural of the above, deduplicated, for multi-resource checks

* = only non-empty for resource flows (not e.g. login, logout, poll)
** = only non-empty for domain flows

Note that TLD extraction is deliberately very lenient to avoid the complexity
overhead of double-validation of the domain names in the common case.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=154070794
2017-04-26 10:59:09 -04:00
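
For illustration, here is roughly what the "deliberately very lenient" TLD extraction described above amounts to: no validation, just everything after the first dot, lowercased. The real FlowReporter helper may differ in name and edge-case handling.

import com.google.common.base.Ascii;
import com.google.common.base.Optional;

final class TldExtractionSketch {

  /** Returns "co.uk" for "bar.co.uk", or absent if the name contains no dot at all. */
  static Optional<String> extractTld(String targetId) {
    int index = targetId.indexOf('.');
    return (index == -1)
        ? Optional.<String>absent()
        : Optional.of(Ascii.toLowerCase(targetId.substring(index + 1)));
  }

  private TldExtractionSketch() {}
}
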
nickfelt
f2c6021db0 Split FlowReporter logging into two lines for robustness
This prevents a possible failure mode of the logging where the logged
EPP input XML is very large (which can happen e.g. for domain creates
with large SMD values).  In those cases, the XML might cause the overall
JSON string to be too large to fit within a single log entry [1], in which
case it gets split over multiple lines and breaks automatic parsing.

This mitigates that case by logging the EPP input (raw and base64-encoded)
in a separate log statement so that the more compact metadata (like clientId)
and derived values (like ICANN reporting field) will still be in an intact
JSON string even in that case, and can still be readily parsed.  It's okay
if the actual EPP XML is harder to parse, since once we're logging the right
metadata fields we shouldn't need to automatically parse the EPP XML in any
normal cases.

[1] I haven't found this exact limit or splitting algorithm, or whether it's
a property of java logging or GAE log ingestion.  The GAE logs page does note
that a single application log entry (within a request, which can have up to
1000 such entries) maxes out at 8KB, so that might be it:
https://cloud.google.com/appengine/docs/standard/java/logs/#writing_application_logs

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=153771335
2017-04-26 10:50:13 -04:00
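
A minimal sketch of the two-record scheme described above, assuming nothing about the real FlowReporter beyond what's in the message; the log prefixes, key names, and JSON serializer used here are illustrative.

import com.google.common.collect.ImmutableMap;
import com.google.common.io.BaseEncoding;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;
import org.json.simple.JSONValue;

final class TwoLineFlowLoggingSketch {

  private static final Logger logger =
      Logger.getLogger(TwoLineFlowLoggingSketch.class.getName());

  static void report(String clientId, String icannActivityReportField, byte[] inputXmlBytes) {
    // Record 1: the potentially huge EPP input goes on its own line, so its size
    // can never push the metadata record over the per-entry log limit.
    logger.info(
        "EPPINPUT: "
            + JSONValue.toJSONString(
                ImmutableMap.of(
                    "xml", new String(inputXmlBytes, StandardCharsets.UTF_8),
                    "xmlBytes", BaseEncoding.base64().encode(inputXmlBytes))));
    // Record 2: compact metadata that must always remain a single, parseable JSON string.
    logger.info(
        "METADATA: "
            + JSONValue.toJSONString(
                ImmutableMap.of(
                    "clientId", clientId,
                    "icannActivityReportField", icannActivityReportField)));
  }
}
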
larryruili
5047d568de Notify registrars of async contact/host deletions
We now send PendingActionNotificationResponses in our poll messages upon completion of an asynchronous contact or host deletion. This is part 1 of 2: it begins logging the Trid in all enqueued host/contact deletion flows for use in batch deletions, and optionally consumes the resultant Trid info to emit a Host/ContactPendingActionNotificationResponse.

Part 2 will make this response emission non-optional, which will happen once the queue is cleared of all non-Trid containing tasks.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=153084197
2017-04-26 10:33:55 -04:00
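
Purely illustrative sketch of how the Trid might ride along on the enqueued deletion task (part 1) so the batch deletion can later emit the poll-message response (part 2); the queue name, parameter names, and enqueuer shape are assumptions, not the real flow code.

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;

final class AsyncDeletionEnqueueSketch {

  static void enqueueAsyncDelete(String resourceKey, String clTrid, String svTrid) {
    Queue queue = QueueFactory.getQueue("async-delete-pull");  // hypothetical queue name
    queue.add(
        TaskOptions.Builder.withMethod(Method.PULL)
            .param("resourceKey", resourceKey)
            // Trid fields logged on the task so the batch deletion can later build
            // a PendingActionNotificationResponse for the registrar's poll queue.
            .param("clientTransactionId", clTrid)
            .param("serverTransactionId", svTrid));
  }

  private AsyncDeletionEnqueueSketch() {}
}
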
nickfelt
91c2558feb Make FlowRunner log ICANN activity report field name
As part of b/36599833, this makes FlowRunner log the appropriate ICANN activity
report field name for each flow it runs as part of a structured JSON log
statement which can be parsed to generate ICANN activity reports (under the key
"icannActivityReportField").

In order to support this, we introduce an annotation for Flow classes called
@ReportingSpec and a corresponding enum of values for this annotation,
IcannReportingTypes.ActivityReportField, which stores the mapping from enum
constants to ICANN report field names.

The mapping from flows to fields is fairly obvious, with three exceptions:

 - Application flows are all accounted under domains, since applications are
   technically just deferred domain creates within the EPP protocol
 - ClaimsCheckFlow is counted as a domain check
 - DomainAllocateFlow is counted as a domain create

In addition, I've added tests to all the corresponding flows verifying that we
are indeed logging what we expect.

We'll also need to log the TLD for this to be useful, but I'm doing that in a
follow-up CL.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=151283411
2017-03-27 13:32:57 -04:00
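
A sketch of the annotation/enum pairing this describes. The field-name strings follow the ICANN monthly activity report convention (e.g. "srs-dom-check"), but the exact constants in IcannReportingTypes.ActivityReportField may differ.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ReportingSpec {
  ActivityReportField value();
}

enum ActivityReportField {
  DOMAIN_CHECK("srs-dom-check"),
  DOMAIN_CREATE("srs-dom-create");

  private final String fieldName;

  ActivityReportField(String fieldName) {
    this.fieldName = fieldName;
  }

  String getFieldName() {
    return fieldName;
  }
}

// Per the mapping rules above, ClaimsCheckFlow is counted as a domain check.
@ReportingSpec(ActivityReportField.DOMAIN_CHECK)
class ClaimsCheckFlowStub {}

class ReportingSpecReader {
  // How a flow runner could derive the value logged under "icannActivityReportField".
  static String fieldNameFor(Class<?> flowClass) {
    ReportingSpec spec = flowClass.getAnnotation(ReportingSpec.class);
    return (spec == null) ? null : spec.value().getFieldName();
  }
}
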
mcilwain
cdadb54acd Refer to Datastore everywhere correctly by its capitalized form
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=147479683
2017-02-17 12:12:12 -05:00
mmuller
b70f57b7c7 Update copyright year on all license headers
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=146111211
2017-02-02 16:27:22 -05:00
cgoldfeder
b84d7f1fb5 Remove LoggedInFlow
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=137444791
2016-11-02 15:19:34 -04:00
mcilwain
cfbf62ef4e Move ExceptionRule up to FlowTestCase
This removes a large number of independent field definitions.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=135846921
2016-10-14 16:58:07 -04:00
shikhman
f76bc70f91 Preserve test logs and test summary output for Kokoro CI runs
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=135494972
2016-10-14 16:57:43 -04:00
cgoldfeder
f3a0b78145 Move thrown.expect() right before the throwing statement
aka regexing for fun and profit.

This also makes sure that there are no statements after the
throwing statement, since these would be dead code. There
were a surprising number of places with assertions after
the throwing statement, and none of them ever actually ran.
When I found these, I replaced them with try/catch/rethrow,
which makes the assertions actually happen:

before:

// This is the ExceptionRule that checks EppException marshaling
thrown.expect(FooException.class);
doThrowingThing();
assertSomething();  // Dead code!

after:

try {
  doThrowingThing();
  assertWithMessage("...").fail();
} catch (FooException e) {
  assertSomething();
  // For EppExceptions:
  assertAboutEppExceptions().that(e).marshalsToXml();
}

To make this work, I added EppExceptionSubject.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=135793407
2016-10-11 11:27:54 -04:00
mcilwain
21a98b899c Replace loadByUniqueId() with methods that don't overload unique id
It is replaced by loadByForeignKey(), which does the same thing that
loadByUniqueId() did for contacts, hosts, and domains, and also
loadDomainApplication(), which loads a domain application by ROID. This
eliminates the ugly mode-switching of attempting to load by either foreign key
or ROID.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133980156
2016-09-26 13:20:22 -04:00
mcilwain
2dcac3ca68 Cut over to batched async deletion for contacts/hosts
Also consolidates the DNS refresh functionality in AsyncFlowUtils (previously
used by HostUpdateFlow) into AsyncFlowEnqueuer.

TESTED=I threw together some batch scripts to create dozens of contacts on
alpha and then request their deletion, and the [] ran fine and
successfully deleted them in batches.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=133714691
2016-09-22 14:08:14 -04:00
cgoldfeder
5098b03af4 DeReference the codebase
This change replaces all Ref objects in the code with Key objects. These are
stored in datastore as the same object (raw datastore keys), so this is not
a model change.

Our best practices doc says to use Keys not Refs because:
 * The .get() method obscures what's actually going on
   - Much harder to visually audit the code for datastore loads
   - Hard to distinguish Ref<T> get()'s from Optional get()'s and Supplier get()'s
 * Implicit ofy().load() offers much less control
   - Antipattern for ultimate goal of making Ofy injectable
   - Can't control cache use or batch loading without making ofy() explicit anyway
 * Serialization behavior is surprising and could be quite dangerous/incorrect
   - Can lead to serialization errors. If it actually worked "as intended",
     it would lead to a Ref<> on a serialized object being replaced upon
     deserialization with a stale copy of the old value, which could potentially
     break all kinds of transactional expectations
 * Having both Ref<T> and Key<T> introduces extra boilerplate everywhere
   - E.g. helper methods all need to have Ref and Key overloads, or you need to
     call .key() to get the Key<T> for every Ref<T> you want to pass in
   - Creating a Ref<T> is more cumbersome, since it doesn't have all the create()
     overloads that Key<T> has, only create(Key<T>) and create(Entity) - no way to
     create directly from kind+ID/name, raw Key, websafe key string, etc.

(Note that Refs are treated specially by Objectify's @Load method and Keys are not;
we don't use that feature, but it is the one advantage Refs have over Keys.)

The direct impetus for this change is that I am trying to audit our use of memcache,
and the implicit .get() calls to datastore were making that very hard.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=131965491
2016-09-02 13:50:20 -04:00
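
A before/after sketch of a typical call site under this change, using standard Objectify APIs; the entity type here is made up for illustration.

import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.Key;
import com.googlecode.objectify.Ref;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
class ExampleThing {
  @Id long id;
  String name;
}

class RefVsKeySketch {

  // Before: the datastore load hides inside get(), which is easy to miss when
  // auditing code for loads (and for memcache use).
  static String nameViaRef(Ref<ExampleThing> ref) {
    return ref.get().name;
  }

  // After: the load is an explicit ofy().load(), visible at the call site, where
  // cache use and batching can be controlled directly.
  static String nameViaKey(Key<ExampleThing> key) {
    return ofy().load().key(key).now().name;
  }
}
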
mcilwain
aa2f283f7c Convert entire project to strict lexicographical import sort ordering
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=127234970
2016-07-13 15:59:53 -04:00
Michael Muller
c458c05801 Rename Java packages to use the .google TLD
The dark lord Gosling designed the Java package naming system so that
ownership flows from the DNS system. Since we own the domain name
registry.google, it seems only appropriate that we should use
google.registry as our package name.
2016-05-13 20:04:42 -04:00
Justine Tunney
5012893c1d mv com/google/domain/registry google/registry
This change renames directories in preparation for the great package
rename. The repository is now in a broken state because the code
itself hasn't been updated. However this should ensure that git
correctly preserves history for each file.
2016-05-13 18:55:08 -04:00
Renamed from javatests/com/google/domain/registry/flows/ResourceFlowTestCase.java