Merge in latest Google changes

Ben McIlwain 2016-09-14 16:25:17 -04:00
commit 8c39e10dec
157 changed files with 4210 additions and 1466 deletions

README.md

@@ -15,10 +15,10 @@ the Markdown documents in the `docs` directory.
When it comes to internet land, ownership flows down the following hierarchy:
1. [ICANN][icann]
2. [Registries][registry] (e.g. Google Registry)
3. [Registrars][registrar] (e.g. Google Domains)
4. Registrants (e.g. you)
A registry is any organization that operates an entire top-level domain. For
example, Verisign controls all the .COM domains and Afilias controls all the
@@ -50,9 +50,9 @@ are limited to four minutes and ten megabytes in size. Furthermore, queries and
indexes that span entity groups are always eventually consistent, which means
they could take seconds, and very rarely, days to update. While most online
services find eventual consistency useful, it is not appropriate for a service
conducting financial exchanges. Therefore Domain Registry has been engineered to
employ performance and complexity tradeoffs that allow strong consistency to be
applied throughout the codebase.
Domain Registry has a commit log system. Commit logs are retained in datastore
for thirty days. They are also streamed to Cloud Storage for backup purposes.
@@ -63,8 +63,8 @@ order to do restores. Each EPP resource entity also stores a map of its past
mutations with 24-hour granularity. This makes it possible to have point-in-time
projection queries with effectively no overhead.
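To picture how a per-resource mutation map enables point-in-time projection, here is a minimal, self-contained sketch; the class and field names are invented for illustration and do not mirror the actual entity definitions.

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical sketch of per-resource revision tracking at daily granularity. */
public class RevisionHistorySketch {

  /** Maps each day on which a mutation occurred to an identifier for the resulting revision. */
  private final TreeMap<LocalDate, String> revisions = new TreeMap<>();

  /** Records that a mutation on {@code day} produced {@code revisionId}. */
  public void recordMutation(LocalDate day, String revisionId) {
    revisions.put(day, revisionId);
  }

  /** Returns the revision in effect on {@code asOfDay}: the latest one at or before that day. */
  public String projectAt(LocalDate asOfDay) {
    Map.Entry<LocalDate, String> entry = revisions.floorEntry(asOfDay);
    return (entry == null) ? null : entry.getValue();
  }
}
```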
The Registry Data Escrow (RDE) system is also built with reliability in mind. It
executes on top of App Engine task queues, which can be double-executed and
therefore require operations to be idempotent. RDE isn't idempotent. To work
around this, RDE uses datastore transactions to achieve mutual exclusion and
serialization. We call this the "Locking Rolling Cursor Pattern." One benefit of
@@ -94,14 +94,15 @@ proxy listening on port 700. Poll message support is also included.
To supplement EPP, Domain Registry also provides a public API for performing
domain availability checks. This service listens on the `/check` path.
* [RFC 5730: EPP](http://tools.ietf.org/html/rfc5730)
* [RFC 5731: EPP Domain Mapping](http://tools.ietf.org/html/rfc5731)
* [RFC 5732: EPP Host Mapping](http://tools.ietf.org/html/rfc5732)
* [RFC 5733: EPP Contact Mapping](http://tools.ietf.org/html/rfc5733)
* [RFC 3915: EPP Grace Period Mapping](http://tools.ietf.org/html/rfc3915)
* [RFC 5734: EPP Transport over TCP](http://tools.ietf.org/html/rfc5734)
* [RFC 5910: EPP DNSSEC Mapping](http://tools.ietf.org/html/rfc5910)
* [Draft: EPP Launch Phase Mapping (Proposed)](http://tools.ietf.org/html/draft-tan-epp-launchphase-11)
### Registry Data Escrow (RDE)
@@ -114,17 +115,22 @@ This service exists for ICANN regulatory purposes. ICANN needs to know that,
should a registry business ever implode, they can quickly migrate their
TLDs to a different company so that they'll continue to operate.
* [Draft: Registry Data Escrow Specification](http://tools.ietf.org/html/draft-arias-noguchi-registry-data-escrow-06)
* [Draft: Domain Name Registration Data (DNRD) Objects Mapping](http://tools.ietf.org/html/draft-arias-noguchi-dnrd-objects-mapping-05)
* [Draft: ICANN Registry Interfaces](http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05)
### Trademark Clearinghouse (TMCH)
Domain Registry integrates with ICANN and IBM's MarksDB in order to protect
trademark holders when new TLDs are being launched.
* [Draft: TMCH Functional Spec](http://tools.ietf.org/html/draft-lozano-tmch-func-spec-08)
* [Draft: Mark and Signed Mark Objects Mapping](https://tools.ietf.org/html/draft-lozano-tmch-smd-02)
### WHOIS
@@ -134,8 +140,10 @@ internal HTTP endpoint running on `/_dr/whois`. A separate proxy running on port
43 forwards requests to that path. Domain Registry also implements a public HTTP
endpoint that listens on the `/whois` path.
* [RFC 3912: WHOIS Protocol Specification](https://tools.ietf.org/html/rfc3912)
* [RFC 7485: Inventory and Analysis of Registration Objects](http://tools.ietf.org/html/rfc7485)
### Registration Data Access Protocol (RDAP)
@@ -143,23 +151,24 @@ RDAP is the new standard for WHOIS. It provides much richer functionality, such
as the ability to perform wildcard searches. Domain Registry makes this HTTP
service available under the `/rdap/...` path.
* [RFC 7480: RDAP HTTP Usage](http://tools.ietf.org/html/rfc7480)
* [RFC 7481: RDAP Security Services](http://tools.ietf.org/html/rfc7481)
* [RFC 7482: RDAP Query Format](http://tools.ietf.org/html/rfc7482)
* [RFC 7483: RDAP JSON Responses](http://tools.ietf.org/html/rfc7483)
* [RFC 7484: RDAP Finding the Authoritative Registration Data](http://tools.ietf.org/html/rfc7484)
### Backups
The registry provides a system for generating and restoring from backups with
strong point-in-time consistency. Datastore backups are written out once daily
to Cloud Storage using the built-in Datastore snapshot export functionality.
Separately, entities called commit logs are continuously exported to track
changes that occur in between the regularly scheduled backups.
A restore involves wiping out all entities in Datastore, importing the most
recent complete daily backup snapshot, then replaying all of the commit logs
since that snapshot. This yields a system state that is guaranteed to be
transactionally consistent.
### Billing
@@ -173,24 +182,26 @@ monthly invoices per registrar.
Because the registry runs on the Google Cloud Platform stack, it benefits from
high availability, automatic fail-over, and horizontal auto-scaling of compute
and database resources. This makes it quite flexible for running TLDs of any
size.
### Automated tests
The registry codebase includes ~400 test classes with ~4,000 total unit and
integration tests. This limits regressions, ensures correct system
functionality, and allows for easy future development and refactoring.
### DNS
An interface for DNS operations is provided, along with a sample implementation
that uses the [Google Cloud DNS](https://cloud.google.com/dns/) API. A bulk
export tool is also provided to export a zone file for an entire TLD in BIND
format.
* [RFC 1034: Domain Names - Concepts and Facilities](https://www.ietf.org/rfc/rfc1034.txt)
* [RFC 1035: Domain Names - Implementation and Specification](https://www.ietf.org/rfc/rfc1035.txt)
### Exports
@@ -202,21 +213,20 @@ ICANN-mandated reports, database snapshots, and reserved terms.
### Metrics and reporting
The registry records metrics and regularly exports them to BigQuery so that
analyses can be run on them using full SQL queries. Metrics include which EPP
commands were run and when and by whom, information on failed commands, activity
per registrar, and length of each request.
[BigQuery][bigquery] reporting scripts are provided to generate the required
per-TLD monthly
[registry reports](https://www.icann.org/resources/pages/registry-reports) for
ICANN.
### Registrar console
The registry includes a web-based registrar console that registrars can access
in a browser. It provides the ability for registrars to view their billing
invoices in Google Drive, contact the registry provider, and modify WHOIS,
security (including SSL certificates), and registrar contact settings. Main
registry commands such as creating domains, hosts, and contacts must go through
EPP and are not provided in the console.
@@ -231,7 +241,7 @@ system, and creating new TLDs.
### Plug-and-play pricing engines
The registry has the ability to configure per-TLD pricing engines to
programmatically determine the price of domain names on the fly. An
implementation is provided that uses the contents of a static list of prices
(this being by far the most common type of premium pricing used for TLDs).
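To make the idea concrete, here is a small, hypothetical sketch of what a plug-and-play pricing engine might look like; the interface name, method signature, and the static-list implementation are illustrative assumptions rather than the actual classes in the codebase.

```java
import java.math.BigDecimal;
import java.util.Map;

/** Hypothetical interface for a per-TLD pricing engine. */
interface PricingEngine {
  /** Returns the annual price for the given fully qualified domain name. */
  BigDecimal getDomainPrice(String domainName);
}

/** Sketch of an engine backed by a static list of premium prices. */
class StaticListPricingEngine implements PricingEngine {

  private final Map<String, BigDecimal> premiumPrices;
  private final BigDecimal standardPrice;

  StaticListPricingEngine(Map<String, BigDecimal> premiumPrices, BigDecimal standardPrice) {
    this.premiumPrices = premiumPrices;
    this.standardPrice = standardPrice;
  }

  @Override
  public BigDecimal getDomainPrice(String domainName) {
    // Premium names are looked up in the static list; everything else gets the standard price.
    return premiumPrices.getOrDefault(domainName, standardPrice);
  }
}
```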
@@ -240,23 +250,23 @@ implementation is provided that uses the contents of a static list of prices
There are a few things that the registry cannot currently do, and a few things
that are out of scope that it will never do.
* You will need a DNS system in order to run a fully-fledged registry. If you
are planning on using anything other than Google Cloud DNS you will need to
provide an implementation.
* You will need an invoicing system to convert the internal registry billing
events into registrar invoices using whatever accounts receivable setup you
already have. A partial implementation is provided that generates generic
CSV invoices (see `MakeBillingTablesCommand`), but you will need to
integrate it with your payments system.
* You will likely need monitoring to continuously monitor the status of the
system. Any of a large variety of tools can be used for this, or you can
write your own.
* You will need a proxy to forward traffic on EPP and WHOIS ports to the HTTPS
endpoint on App Engine, as App Engine only allows incoming traffic on
HTTP/HTTPS ports. Similarly, App Engine does not yet support IPv6, so your
proxy would have to support that as well if you need IPv6 support. Future
versions of [App Engine Flexible][flex] should provide these out of the box,
but they aren't ready yet.
[bigquery]: https://cloud.google.com/bigquery/
[datastore]: https://cloud.google.com/datastore/docs/concepts/overview


@@ -5,19 +5,19 @@ Registry project as it is implemented in App Engine.
## Services
The Domain Registry contains three
[services](https://cloud.google.com/appengine/docs/python/an-overview-of-app-engine),
which were previously called modules in earlier versions of App Engine. The
services are: default (also called front-end), backend, and tools. Each service
runs independently in a lot of ways, including that they can be upgraded
individually, their log outputs are separate, and their servers and configured
scaling are separate as well.
Once you have your app deployed and running, the default service can be accessed
at `https://project-id.appspot.com`, substituting whatever your App Engine app
is named for "project-id". Note that this is the URL for the production
instance of your app; other environments will have the environment name appended
with a hyphen in the hostname, e.g. `https://project-id-sandbox.appspot.com`.
The URL for the backend service is `https://backend-dot-project-id.appspot.com`
and the URL for the tools service is `https://tools-dot-project-id.appspot.com`.
@@ -27,32 +27,32 @@ wild-cards).
### Default service
The default service is responsible for all registrar-facing
[EPP](https://en.wikipedia.org/wiki/Extensible_Provisioning_Protocol) command
traffic, all user-facing WHOIS and RDAP traffic, and the admin and registrar web
consoles, and is thus the most important service. If the service has any
problems and goes down or stops servicing requests in a timely manner, it will
begin to impact users immediately. Requests to the default service are handled
by the `FrontendServlet`, which provides all of the endpoints exposed in
`FrontendRequestComponent`.
### Backend service
The backend service is responsible for executing all regularly scheduled
background tasks (using cron) as well as all asynchronous tasks. Requests to
the backend service are handled by the `BackendServlet`, which provides all of
the endpoints exposed in `BackendRequestComponent`. These include tasks for
generating/exporting RDE, syncing the trademark list from TMDB, exporting
backups, writing out DNS updates, handling asynchronous contact and host
deletions, writing out commit logs, exporting metrics to BigQuery, and many
more. Issues in the backend service will not immediately be apparent to end
users, but the longer it is down, the more obvious it will become that
user-visible tasks such as DNS and deletion are not being handled in a timely
manner.
The backend service is also where all MapReduces run, which includes some of the
aforementioned tasks such as RDE and asynchronous resource deletion, as well as
any one-off data migration MapReduces. Consequently, the backend service should
be sized to support not just the normal ongoing DNS load but also the load
incurred by MapReduces, both scheduled (such as RDE) and on-demand (asynchronous
contact/host deletion).
@@ -61,364 +61,369 @@ contact/host deletion).
The tools service is responsible for servicing requests from the `registry_tool`
command line tool, which provides administrative-level functionality for
developers and tech support employees of the registry. It is thus the least
critical of the three services. Requests to the tools service are handled by
the `ToolsServlet`, which provides all of the endpoints exposed in
`ToolsRequestComponent`. Some example functionality that this service provides
includes the server-side code to update premium lists, run EPP commands from the
tool, and manually modify contacts, hosts, domains, and other resources. Problems
with the tools service are not visible to users.
## Task queues
[Task queues](https://cloud.google.com/appengine/docs/java/taskqueue/) in App
Engine provide an asynchronous way to enqueue tasks and then execute them on
some kind of schedule. There are two types of queues, push queues and pull
queues. Tasks in push queues are always executing up to some throttlable limit.
Tasks in pull queues remain there indefinitely until the queue is polled by code
that is running for some other reason. Essentially, push queues run their own
tasks while pull queues just enqueue data that is used by something else. Many
other parts of App Engine are implemented using task queues. For example,
[App Engine cron](https://cloud.google.com/appengine/docs/java/config/cron) adds
tasks to push queues at regularly scheduled intervals, and the
[MapReduce framework](https://cloud.google.com/appengine/docs/java/dataprocessing/)
adds tasks for each phase of the MapReduce algorithm.
The Domain Registry project uses a particular pattern of paired push/pull queues
that is worth explaining in detail. Push queues are essential because App
Engine's architecture does not support long-running background processes, and so
push queues are the fundamental building block that allows asynchronous and
background execution of code that is not in response to incoming web requests.
However, they also have limitations in that they do not allow batch processing
or grouping. That's where the pull queue comes in. Regularly scheduled tasks
in the push queue will, upon execution, poll the corresponding pull queue for a
specified number of tasks and execute them in a batch. This allows the code to
execute in the background while taking advantage of batch processing.
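As a rough illustration of this pattern, the sketch below shows a push-queue-driven handler draining a batch of tasks from a pull queue using the App Engine task queue API; the queue name and the processing step are invented for the example.

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskHandle;
import java.util.List;
import java.util.concurrent.TimeUnit;

/** Sketch of the paired push/pull queue pattern. */
public class PullQueueDrainExample {

  /** Invoked by a cron-scheduled push task; drains up to 100 tasks from the pull queue. */
  public static void drain() {
    Queue pullQueue = QueueFactory.getQueue("example-pull");  // hypothetical queue name
    // Lease a batch of tasks for 60 seconds so no other worker processes them concurrently.
    List<TaskHandle> batch = pullQueue.leaseTasks(60, TimeUnit.SECONDS, 100);
    for (TaskHandle task : batch) {
      process(task);  // application-specific batched work, e.g. grouping DNS updates by TLD
    }
    // Deleting the leased tasks marks them done; otherwise they become visible
    // again when the lease expires and are retried.
    pullQueue.deleteTask(batch);
  }

  private static void process(TaskHandle task) {
    // Placeholder for the real batch-processing logic.
  }
}
```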
Particulars on the task queues in use by the Domain Registry project are
specified in the `queue.xml` file. Note that many push queues have a direct
one-to-one correspondence with entries in `cron.xml` because they need to be
fanned-out on a per-TLD or other basis (see the Cron section below for more
explanation). The exact queue that a given cron task will use is passed as the
query string parameter "queue" in the url specification for the cron task.
Here are the task queues in use by the system. All are push queues unless
explicitly marked as otherwise.
* `bigquery-streaming-metrics` -- Queue for metrics that are asynchronously
streamed to BigQuery in the `Metrics` class. Tasks are enqueued during EPP
flows in `EppController`. This means that there is a lag of a few seconds to
a few minutes between when metrics are generated and when they are queryable
in BigQuery, but this is preferable to slowing all EPP flows down and
blocking them on BigQuery streaming.
* `brda` -- Queue for tasks to upload weekly Bulk Registration Data Access
(BRDA) files to a location where they are available to ICANN. The
`RdeStagingReducer` (part of the RDE MapReduce) creates these tasks at the
end of generating an RDE dump.
* `delete-commits` -- Cron queue for tasks to regularly delete commit logs
that are more than thirty days stale. These tasks execute the
`DeleteOldCommitLogsAction`.
* `dns-pull` -- A pull queue to enqueue DNS modifications. Cron regularly runs
`ReadDnsQueueAction`, which drains the queue, batches modifications by TLD,
and writes the batches to `dns-publish` to be published to the configured
`DnsWriter` for the TLD.
* `dns-publish` -- Queue for batches of DNS updates to be pushed to DNS
writers.
* `export-bigquery-poll` -- Queue for tasks to query the success/failure of a
given BigQuery export job. Tasks are enqueued by `BigqueryPollJobAction`.
* `export-commits` -- Queue for tasks to export commit log checkpoints. Tasks
are enqueued by `CommitLogCheckpointAction` (which is run every minute by
cron) and executed by `ExportCommitLogDiffAction`.
* `export-reserved-terms` -- Cron queue for tasks to export the list of
reserved terms for each TLD. The tasks are executed by
`ExportReservedTermsAction`.
* `export-snapshot` -- Cron and push queue for tasks to load a Datastore
snapshot that was stored in Google Cloud Storage and export it to BigQuery.
Tasks are enqueued by both cron and `CheckSnapshotServlet` and are executed
by both `ExportSnapshotServlet` and `LoadSnapshotAction`.
* `export-snapshot-poll` -- Queue for tasks to check that a Datastore snapshot
has been successfully uploaded to Google Cloud Storage (this is an
asynchronous background operation that can take an indeterminate amount of
time). Once the snapshot is successfully uploaded, it is imported into
BigQuery. Tasks are enqueued by `ExportSnapshotServlet` and executed by
`CheckSnapshotServlet`.
* `export-snapshot-update-view` -- Queue for tasks to update the BigQuery
views to point to the most recently uploaded snapshot. Tasks are enqueued by
`LoadSnapshotAction` and executed by `UpdateSnapshotViewAction`.
* `flows-async` -- Queue for asynchronous tasks that are enqueued during EPP
command flows. Currently all of these tasks correspond to invocations of any
of the following three MapReduces: `DnsRefreshForHostRenameAction`,
`DeleteHostResourceAction`, or `DeleteContactResourceAction`.
* `group-members-sync` -- Cron queue for tasks to sync registrar contacts (not
domain contacts!) to Google Groups. Tasks are executed by
`SyncGroupMembersAction`.
* `load[0-9]` -- Queues used to load-test the system by `LoadTestAction`.
These queues don't need to exist except when actively running load tests
(which is not recommended on production environments). There are ten of
these queues to provide simple sharding, because the Domain Registry system
is capable of handling significantly more Queries Per Second than the
highest throttle limit available on task queues (which is 500 qps).
* `lordn-claims` and `lordn-sunrise` -- Pull queues for handling LORDN
exports. Tasks are enqueued synchronously during EPP commands depending on
whether the domain name in question has a claims notice ID.
* `marksdb` -- Queue for tasks to verify that an upload to NORDN was
successfully received and verified. These tasks are enqueued by
`NordnUploadAction` following an upload and are executed by
`NordnVerifyAction`.
* `nordn` -- Cron queue used for NORDN exporting. Tasks are executed by
`NordnUploadAction`, which pulls LORDN data from the `lordn-claims` and
`lordn-sunrise` pull queues (above).
* `rde-report` -- Queue for tasks to upload RDE reports to ICANN following
successful upload of full RDE files to the escrow provider. Tasks are
enqueued by `RdeUploadAction` and executed by `RdeReportAction`.
* `rde-upload` -- Cron queue for tasks to upload already-generated RDE files
from Cloud Storage to the escrow provider. Tasks are executed by
`RdeUploadAction`.
* `sheet` -- Queue for tasks to sync registrar updates to a Google Sheets
spreadsheet. Tasks are enqueued by `RegistrarServlet` when changes are made
to registrar fields and are executed by `SyncRegistrarsSheetAction`.
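For reference, this is roughly how application code adds tasks to queues like these; the parameter names and handler path below are placeholders chosen for the example, not the exact calls made by the registry flows.

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TaskOptions.Method;

public class EnqueueExample {

  /** Adds a pull task carrying a domain name, to be leased later in a batch. */
  public static void enqueueDnsRefresh(String fullyQualifiedDomainName) {
    Queue queue = QueueFactory.getQueue("dns-pull");
    queue.add(
        TaskOptions.Builder.withMethod(Method.PULL)
            .param("domainName", fullyQualifiedDomainName));  // hypothetical parameter name
  }

  /** Adds a push task that App Engine will POST to the given handler URL. */
  public static void enqueueReportUpload(String tld) {
    Queue queue = QueueFactory.getQueue("rde-report");
    queue.add(
        TaskOptions.Builder.withUrl("/_dr/task/exampleReport")  // hypothetical handler path
            .param("tld", tld));
  }
}
```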
## Environments
The domain registry codebase comes pre-configured with support for a number of
different environments, all of which are used in Google's registry system.
Other registry operators may choose to use more or fewer environments,
depending on their needs.
The different environments are specified in `RegistryEnvironment`. Most
correspond to a separate App Engine app except for `UNITTEST` and `LOCAL`, which
by their nature do not use real environments running in the cloud. The
recommended naming scheme for the App Engine apps that has the best possible
compatibility with the codebase and thus requires the least configuration is to
pick a name for the production app and then suffix it for the other
environments. E.g., if the production app is to be named 'registry-platform',
then the sandbox app would be named 'registry-platform-sandbox'.
The full list of environments supported out-of-the-box, in descending order from
real to not, is:
* `PRODUCTION` -- The real production environment that is actually running live
TLDs. Since the Domain Registry is a shared registry platform, there need
only ever be one of these.
* `SANDBOX` -- A playground environment for external users to test commands in
without the possibility of affecting production data. This is the environment
new registrars go through
[OT&E](https://www.icann.org/resources/unthemed-pages/registry-agmt-appc-e-2001-04-26-en)
in. Sandbox is also useful as a final sanity check to push a new prospective
build to and allow it to "bake" before pushing it to production.
* `QA` -- An internal environment used by business users to play with and sign
off on new features to be released. This environment can be pushed to
frequently and is where manual testers should be spending the majority of
their time.
* `CRASH` -- Another environment similar to QA, except with no expectations of
data preservation. Crash is used for testing of backup/restore (which brings
the entire system down until it is completed) without affecting the QA
environment.
* `ALPHA` -- The developers' playground. Experimental builds are routinely
pushed here in order to test them on a real app running on App Engine. You
may end up wanting multiple environments like Alpha if you regularly
experience contention (i.e. developers being blocked from testing their code
on Alpha because others are already using it).
* `LOCAL` -- A fake environment that is used when running the app locally on a
simulated App Engine instance.
* `UNITTEST` -- A fake environment that is used in unit tests, where everything
in the App Engine stack is simulated or mocked.
## Release process
The following is a recommended release process based on Google's several years
of experience running a production registry using this codebase.
1. Developers write code and associated unit tests verifying that the new code
works properly.
2. New features or potentially risky bug fixes are pushed to Alpha and tested
by the developers before being committed to the source code repository.
3. New builds are cut and first pushed to Sandbox.
4. Once a build has been running successfully in Sandbox for a day with no
errors, it can be pushed to Production.
5. Repeat once weekly, or potentially more often.
## Cron tasks
All [cron tasks](https://cloud.google.com/appengine/docs/java/config/cron) are
specified in `cron.xml` files, with one per environment. There are more tasks
that execute in Production than in other environments, because tasks like
uploading RDE dumps are only done for the live system. Cron tasks execute on
the `backend` service.
Most cron tasks use the `TldFanoutAction` which is accessed via the
`/_dr/cron/fanout` URL path. This action, which is run by the BackendServlet on
the backend service, fans out a given cron task for each TLD that exists in the
registry system, using the queue that is specified in the `cron.xml` entry.
Because some tasks may be computationally intensive and could risk spiking
system latency if all start executing immediately at the same time, there is a
`jitterSeconds` parameter that spreads out tasks over the given number of
seconds. This is used with DNS updates and commit log deletion.
The reason the `TldFanoutAction` exists is that a lot of tasks need to be done
separately for each TLD, such as RDE exports and NORDN uploads. It's simpler to
have a single cron entry that will create tasks for all TLDs than to have to
specify a separate cron task for each action for each TLD (though that is still
an option). Task queues also provide retry semantics in the event of transient
failures that a raw cron task does not. This is why there are some tasks that do
not fan out across TLDs that still use `TldFanoutAction` -- it's so that the
tasks retry in the face of transient errors.
The full list of URL parameters to `TldFanoutAction` that can be specified in
cron.xml is:
* `endpoint` -- The path of the action that should be executed (see
`web.xml`).
* `queue` -- The cron queue to enqueue tasks in.
* `forEachRealTld` -- Specifies that the task should be run in each TLD of
type `REAL`. This can be combined with `forEachTestTld`.
* `forEachTestTld` -- Specifies that the task should be run in each TLD of
type `TEST`. This can be combined with `forEachRealTld`.
* `runInEmpty` -- Specifies that the task should be run globally, i.e. just
once, rather than individually per TLD. This is provided to allow tasks to
retry. It is called "`runInEmpty`" for historical reasons.
* `excludes` -- A list of TLDs to exclude from processing.
* `jitterSeconds` -- The execution of each per-TLD task is delayed by a
different random number of seconds between zero and this max value.
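As a sketch of what the fanout amounts to, the following hypothetical code enqueues one task per TLD on a named queue, delaying each by a random jitter; the endpoint path, queue name, and TLD list are placeholders, not the registry's actual configuration.

```java
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import java.util.List;
import java.util.Random;

public class FanoutSketch {

  private static final Random random = new Random();

  /** Enqueues {@code endpoint} once per TLD, spreading start times over {@code jitterSeconds}. */
  public static void fanOut(String queueName, String endpoint, List<String> tlds, int jitterSeconds) {
    Queue queue = QueueFactory.getQueue(queueName);
    for (String tld : tlds) {
      long delayMillis = (long) (random.nextDouble() * jitterSeconds * 1000);
      queue.add(
          TaskOptions.Builder.withUrl(endpoint)  // e.g. "/_dr/task/exampleExport" (hypothetical)
              .param("tld", tld)
              .countdownMillis(delayMillis));    // per-task jitter to avoid a thundering herd
    }
  }
}
```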
## Cloud Datastore
The Domain Registry platform uses
[Cloud Datastore](https://cloud.google.com/appengine/docs/java/datastore/) as
its primary database. Cloud Datastore is a NoSQL document database that
provides automatic horizontal scaling, high performance, and high availability.
All information that is persisted to Cloud Datastore takes the form of Java
classes annotated with `@Entity` that are located in the `model` package. The
[Objectify library](https://cloud.google.com/appengine/docs/java/gettingstarted/using-datastore-objectify)
is used to persist instances of these classes in a format that Datastore
understands.
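For readers unfamiliar with Objectify, the following minimal sketch shows what such an annotated entity class and a save/load round trip look like; the entity here is invented for illustration and is not one of the registry's model classes.

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Index;

/** Hypothetical entity class, similar in shape to those in the `model` package. */
@Entity
public class ExampleNote {
  @Id Long id;            // auto-allocated numeric id
  @Index String subject;  // indexed so it can be queried on
  String body;

  public static void demo() {
    // Entity classes must be registered once before use, typically at startup.
    ObjectifyService.register(ExampleNote.class);

    ExampleNote note = new ExampleNote();
    note.subject = "hello";
    note.body = "stored in Datastore via Objectify";

    // save() persists the entity; now() makes the call synchronous.
    ofy().save().entity(note).now();

    // A simple indexed query; strongly consistent lookups would use load().key(...) instead.
    ExampleNote loaded =
        ofy().load().type(ExampleNote.class).filter("subject", "hello").first().now();
  }
}
```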
A brief overview of the different entity types found in the App Engine Datastore
Viewer may help administrators understand what they are seeing. Note that some
of these entities are part of App Engine tools that are outside of the domain
registry codebase:
* `_AE_*` -- These entities are created by App Engine.
* `_ah_SESSION` -- These entities track App Engine client sessions.
* `_GAE_MR_*` -- These entities are generated by App Engine while running
MapReduces.
* `BackupStatus` -- There should only be one of these entities, used to
maintain the state of the backup process.
* `Cancellation` -- A cancellation is a special type of billing event which
represents the cancellation of another billing event such as a OneTime or
Recurring.
* `ClaimsList`, `ClaimsListShard`, and `ClaimsListSingleton` -- These entities
store the TMCH claims list, for use in trademark processing.
* `CommitLog*` -- These entities store the commit log information.
* `ContactResource` -- These hold the ICANN contact information (but not
registrar contacts, who have a separate entity type).
* `Cursor` -- We use Cursor entities to maintain state about daily processes,
remembering which dates have been processed. For instance, for the RDE
export, Cursor entities maintain the date up to which each TLD has been
exported.
* `DomainApplicationIndex` -- These hold domain applications received during
the sunrise period.
* `DomainBase` -- These hold the ICANN domain information.
* `DomainRecord` -- These are used during the DNS update process.
* `EntityGroupRoot` -- There is only one EntityGroupRoot entity, which serves
as the Datastore parent of many other entities.
* `EppResourceIndex` -- These entities allow enumeration of EPP resources
(such as domains, hosts and contacts), which would otherwise be difficult to
do in Datastore.
* `ExceptionReportEntity` -- These entities are generated automatically by
ECatcher, a Google-internal logging and debugging tool. Non-Google users
should not encounter these entries.
* `ForeignKeyContactIndex`, `ForeignKeyDomainIndex`, and
`ForeignKeyHostIndex` -- These act as a unique index on contacts, domains
and hosts, allowing transactional lookup by foreign key.
* `HistoryEntry` -- A HistoryEntry is the record of a command which mutated an
EPP resource. It serves as the parent of BillingEvents and PollMessages.
* `HostRecord` -- These are used during the DNS update process.
* `HostResource` -- These hold the ICANN host information.
* `Lock` -- Lock entities are used to control access to a shared resource such
as an App Engine queue. Under ordinary circumstances, these locks will be
cleaned up automatically, and should not accumulate.
* `LogsExportCursor` -- This is a single entity which maintains the state of
log export.
* `MR-*` -- These entities are generated by the App Engine MapReduce library
in the course of running MapReduces.
* `Modification` -- A Modification is a special type of billing event which
represents the modification of a OneTime billing event.
* `OneTime` -- A OneTime is a billing event which represents a one-time charge
or credit to the client (as opposed to Recurring).
* `pipeline-*` -- These entities are also generated by the App Engine
MapReduce library.
* `PollMessage` -- PollMessages are generated by the system to notify
registrars of asynchronous responses and status changes.
* `PremiumList`, `PremiumListEntry`, and `PremiumListRevision` -- The standard
method for determining which domain names receive premium pricing is to
maintain a static list of premium names. Each PremiumList contains some
number of PremiumListRevisions, each of which in turn contains a
PremiumListEntry for each premium name.
* `RdeRevision` -- These entities are used by the RDE subsystem in the process
of generating files.
* `Recurring` -- A Recurring is a billing event which represents a recurring
charge to the client (as opposed to OneTime).
* `Registrar` -- These hold information about client registrars.
* `RegistrarContact` -- Registrars have contacts just as domains do. These are
stored in a special RegistrarContact entity.
* `RegistrarCredit` and `RegistrarCreditBalance` -- The system supports the
concept of a registrar credit balance, which is a pool of credit that the
registrar can use to offset amounts they owe. This might come from
promotions, for instance. These entities maintain registrars' balances.
* `Registry` -- These hold information about the TLDs supported by the
Registry system.
* `RegistryCursor` -- These entities are the predecessor to the Cursor
entities. We are no longer using them, and will be deleting them soon.
* `ReservedList` -- Each ReservedList entity represents an entire list of
reserved names which cannot be registered. Each TLD can have one or more
attached reserved lists.
* `ServerSecret` -- This is a single entity containing the secret numbers used
for generating tokens such as XSRF tokens.
* `SignedMarkRevocationList` -- The entities together contain the Signed Mark
Data Revocation List file downloaded from the TMCH MarksDB each day. Each
entity contains up to 10,000 rows of the file, so depending on the size of
the file, there will be some handful of entities.
* `TmchCrl` -- This is a single entity containing ICANN's TMCH CA Certificate
Revocation List.
## Cloud Storage buckets
The Domain Registry platform uses
[Cloud Storage](https://cloud.google.com/storage/) for bulk storage of large
flat files that aren't suitable for Datastore. These files include backups, RDE
exports, Datastore snapshots (for ingestion into BigQuery), and reports. Each
bucket name must be unique across all of Google Cloud Storage, so we use the
common recommended pattern of prefixing all buckets with the name of the App
Engine app (which is itself globally unique). Most of the bucket names are
configurable, but the defaults are as follows, with PROJECT standing in as a
placeholder for the App Engine app name:
* `PROJECT-billing` -- Monthly invoice files for each registrar.
* `PROJECT-commits` -- Daily exports of commit logs that are needed for
potentially performing a restore.
* `PROJECT-domain-lists` -- Daily exports of all registered domain names per
TLD.
* `PROJECT-gcs-logs` -- This bucket is used at Google to store the GCS access
logs and storage data. This bucket is not required by the Registry system,
but can provide useful logging information. For instructions on setup, see
the
[Cloud Storage documentation](https://cloud.google.com/storage/docs/access-logs).
* `PROJECT-icann-brda` -- This bucket contains the weekly ICANN BRDA files.
There is no lifecycle expiration; we keep a history of all the files. This
bucket must exist for the BRDA process to function.
* `PROJECT-icann-zfa` -- This bucket contains the most recent ICANN ZFA
files. No lifecycle is needed, because the files are overwritten each time.
* `PROJECT-rde` -- This bucket contains RDE exports, which should then be
regularly uploaded to the escrow provider. Lifecycle is set to 90 days. The
bucket must exist.
* `PROJECT-reporting` -- Contains monthly ICANN reporting files.
* `PROJECT-snapshots` -- Contains daily exports of Datastore entities of types
defined in `ExportConstants.java`. These are imported into BigQuery daily to
allow for in-depth querying.
* `PROJECT.appspot.com` -- Temporary MapReduce files are stored here. By
default, the App Engine MapReduce library places its temporary files in a
bucket named {project}.appspot.com. This bucket must exist. To keep temporary
files from building up, a 90-day or 180-day lifecycle should be applied to the
bucket, depending on how long you want to be able to go back and debug
MapReduce problems. At 30 GB per day of generated temporary files, this bucket
may be the largest consumer of storage, so only save what you actually use.
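As a rough sketch of how application code writes into one of these buckets, the snippet below uses the App Engine GCS client library to create an object; the bucket suffix and object name are placeholders.

```java
import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class GcsWriteExample {

  /** Writes a small text report into the reporting bucket for the given project. */
  public static void writeReport(String project, String reportName, String contents)
      throws IOException {
    GcsService gcsService = GcsServiceFactory.createGcsService();
    GcsFilename filename = new GcsFilename(project + "-reporting", reportName);
    gcsService.createOrReplace(
        filename,
        GcsFileOptions.getDefaultInstance(),
        ByteBuffer.wrap(contents.getBytes(StandardCharsets.UTF_8)));
  }
}
```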
## Commit logs

View file

@ -1,19 +1,19 @@
# Configuration
There are multiple different kinds of configuration that go into getting a
working registry system up and running. Broadly speaking, configuration works in
two ways -- globally, for the entire system, and per-TLD. Global configuration is
managed by editing code and deploying a new version, whereas per-TLD
configuration is data that lives in Datastore in `Registry` entities, and is
updated by running `registry_tool` commands without having to deploy a new
version.
## Environments
Before getting into the details of configuration, it's important to note that a
lot of configuration is environment-dependent. It is common to see `switch`
statements that operate on the current `RegistryEnvironment`, and return
different values for different environments. This is especially pronounced in
the `UNITTEST` and `LOCAL` environments, which don't run on App Engine at all.
As an example, some timeouts may be long in production and short in unit tests.
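
A minimal sketch of such a switch is shown below (the `RegistryEnvironment.get()`
accessor and its package location are assumptions based on the description
above, and the durations are made up):

    import google.registry.config.RegistryEnvironment;  // assumed location of the enum
    import org.joda.time.Duration;

    /** Illustrative only: pick a timeout based on the current environment. */
    final class ExampleTimeouts {
      static Duration exampleTimeout() {
        switch (RegistryEnvironment.get()) {
          case UNITTEST:
            return Duration.millis(50);           // keep unit tests fast
          case LOCAL:
            return Duration.standardSeconds(5);   // local development server
          default:
            return Duration.standardMinutes(10);  // production-like environments
        }
      }
    }
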
@ -27,34 +27,34 @@ thoroughly documented in the [App Engine configuration docs][app-engine-config].
The main files of note that come pre-configured along with the domain registry
are:
* `cron.xml` -- Configuration of cronjobs
* `web.xml` -- Configuration of URL paths on the webserver
* `appengine-web.xml` -- Overall App Engine settings including number and type
of instances
* `datastore-indexes.xml` -- Configuration of entity indexes in Datastore
* `queue.xml` -- Configuration of App Engine task queues
* `application.xml` -- Configuration of the application name and its services
Cron, web, and queue are covered in more detail in the "App Engine architecture"
doc, and the rest are covered in the general App Engine documentation.
If you are not writing new code to implement custom features, it is unlikely that
you will need to make any modifications beyond simple changes to
`application.xml` and `appengine-web.xml`. If you are writing new features, it's
likely you'll need to add cronjobs, URL paths, Datastore indexes, and task
queues, and thus edit those associated XML files.
## Global configuration
There are two different mechanisms by which global configuration is managed:
`RegistryConfig` (the old way) and `ConfigModule` (the new way). Ideally there
would just be one, but the required code cleanup hasn't been completed yet.
If you are adding new options, prefer adding them to `ConfigModule`.
**`RegistryConfig`** is an interface, of which you write an implementing class
containing the configuration values. `RegistryConfigLoader` is the class that
provides the instance of `RegistryConfig`, and defaults to returning
`ProductionRegistryConfigExample`. In order to create a configuration specific
to your registry, we recommend copying the `ProductionRegistryConfigExample`
class to a new class that will not be shared publicly, setting the
`com.google.domain.registry.config` system property in `appengine-web.xml` to
@ -64,16 +64,16 @@ configuration options.
The `RegistryConfig` class has documentation on all of the methods that should
be sufficient to explain what each option is, and
`ProductionRegistryConfigExample` provides an example value for each one. Some
example configuration options in this interface include the App Engine project
ID, the number of days to retain commit logs, the names of various Cloud Storage
buckets, and URLs for some required services both external and internal.
**`ConfigModule`** is a Dagger module that provides injectable configuration
options (some of which come from `RegistryConfig` above, but most of which do
not). This is preferred over `RegistryConfig` for new configuration options
because being able to inject configuration options is a nicer pattern that makes
for cleaner code. Some configuration options that can be changed in this class
include timeout lengths and buffer sizes for various tasks, email addresses and
URLs to use for various services, more Cloud Storage bucket names, and WHOIS
disclaimer text.
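
Adding a new option to `ConfigModule` is a matter of writing another `@Provides`
method qualified with `@Config`. The sketch below uses a hypothetical option
name chosen purely for illustration; the pattern itself matches the
`ConfigModule` excerpt visible further down in this commit:

    @Provides
    @Config("registrySupportEmail")  // hypothetical option name, for illustration only
    public static String provideRegistrySupportEmail() {
      return "support@example.test";  // a hard-coded global value, like the other options here
    }

Consumers can then inject the value with
`@Inject @Config("registrySupportEmail") String supportEmail;`, in the same way
that `ContactCheckFlow` below injects `@Config("maxChecks")`.
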
@ -83,39 +83,39 @@ disclaimer text.
Some configuration values, such as PGP private keys, are so sensitive that they
should not be written in code as per the configuration methods above, as that
would pose too high a risk of them accidentally being leaked, e.g. in a source
control mishap. We use a secret store to persist these values in a secure
manner, and abstract access to them using the `Keyring` interface.
The `Keyring` interface contains methods for all sensitive configuration values,
which are primarily credentials used to access various ICANN and ICANN-
affiliated services (such as RDE). These values are only needed for real
production registries and PDT environments. If you are just playing around with
the platform at first, it is OK to put off defining these values until
necessary. To that end, a `DummyKeyringModule` is included that simply provides
an `InMemoryKeyring` populated with dummy values for all secret keys. This
allows the codebase to compile and run, but of course any actions that attempt
to connect to external services will fail because none of the keys are real.
To configure a production registry system, you will need to write a replacement
module for `DummyKeyringModule` that loads the credentials in a secure way, and
provides them using either an instance of `InMemoryKeyring` or your own custom
implementation of `Keyring`. You then need to replace all usages of
`DummyKeyringModule` with your own module in all of the per-service components
in which it is referenced. The functions in `PgpHelper` will likely prove useful
for loading keys stored in PGP format into the PGP key classes that you'll need
to provide from `Keyring`, and you can see examples of them in action in
`DummyKeyringModule`.
## Per-TLD configuration
`Registry` entities, which are persisted to Datastore, are used for per-TLD
configuration. They contain any kind of configuration that is specific to a TLD,
such as the create/renew price of a domain name, the pricing engine
implementation, the DNS writer implementation, whether escrow exports are
enabled, the default currency, the reserved label lists, and more. The
`update_tld` command in `registry_tool` is used to set all of these options. See
the "Registry tool" documentation for more information, as well as the
command-line help for the `update_tld` command. Unlike global configuration
above, per-TLD configuration options are stored as data in the running system,
and thus do not require code pushes to update.

View file

@ -5,25 +5,27 @@ working running instance.
## Prerequisites
* A recent version of the
[Java 7 JDK](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html)
(note that Java 8 support should be coming to App Engine soon).
* [Bazel](http://bazel.io/), which is the build system that
the Domain Registry project uses. The minimum required version is 0.3.1.
* [Google App Engine SDK for Java](https://cloud.google.com/appengine/downloads#Google_App_Engine_SDK_for_Java),
especially `appcfg`, a command-line tool that runs locally and is used to
communicate with the App Engine cloud.
* [Create an application](https://cloud.google.com/appengine/docs/java/quickstart)
on App Engine to deploy to, and set up `appcfg` to connect to it.
## Downloading the code
Start off by grabbing the latest version from the [Domain Registry project on
GitHub](https://github.com/google/domain-registry). This can be done either by
cloning the Git repo (if you expect to make code changes to contribute back), or
simply by downloading the latest release as a zip file. This guide will cover
cloning from Git, but should work almost identically for downloading the zip
file.
$ git clone git@github.com:google/domain-registry.git
Cloning into 'domain-registry'...
@ -36,19 +38,19 @@ identically for downloading the zip file.
The most important directories are:
* `docs` -- the documentation (including this install guide)
* `java/google/registry` -- all of the source code of the main project
* `javatests/google/registry` -- all of the tests for the project
* `python` -- Some Python reporting scripts
* `scripts` -- Scripts for configuring development environments
Everything else, especially `third_party`, contains dependencies that are used
by the project.
## Building and verifying the code
The first step is to verify that the project successfully builds. This will also
download and install dependencies.
$ bazel --batch build //java{,tests}/google/registry/...
INFO: Found 584 targets...
@ -56,7 +58,7 @@ also download and install dependencies.
INFO: Elapsed time: 124.433s, Critical Path: 116.92s
There may be some warnings thrown, but if there are no errors, then you are good
to go. Next, run the tests to verify that everything works properly. The tests
can be pretty resource intensive, so experiment with different values of
parameters to optimize between low running time and not slowing down your
computer too badly.
@ -68,10 +70,10 @@ computer too badly.
## Running a development instance locally
`RegistryTestServer` is a lightweight test server for the registry that is
suitable for running locally for development. It uses local versions of all
Google Cloud Platform dependencies, when available. Correspondingly, its
functionality is limited compared to a Domain Registry instance running on an
actual App Engine instance. To see its command-line parameters, run:
$ bazel run //javatests/google/registry/server -- --help
@ -86,13 +88,13 @@ http://localhost:8080/registrar .
## Deploying the code
You are going to need to configure a variety of things before a working
installation can be deployed (see the Configuration guide for that). It's
recommended to at least confirm that the default version of the code can be
pushed at all first before diving into that, with the expectation that things
won't work properly until they are configured.
All of the [EAR](https://en.wikipedia.org/wiki/EAR_(file_format)) and
[WAR](https://en.wikipedia.org/wiki/WAR_(file_format)) files for the different
environments, which were built in the previous step, are outputted to the
`bazel-genfiles` directory as follows:
@ -115,7 +117,8 @@ an environment in the file name), whereas there is one WAR file per service per
environment, with there being three services in total: default, backend, and
tools.
Then, use `appcfg` to [deploy the WAR files](https://cloud.google.com/appengine/docs/java/tools/uploadinganapp):
$ cd /path/to/downloaded/appengine/app
$ /path/to/appcfg.sh update /path/to/registry_default.war
@ -126,15 +129,15 @@ Then, use `appcfg` to [deploy the WAR files](https://cloud.google.com/appengine/
Once the code is deployed, the next step is to play around with creating some
entities in the registry, including a TLD, a registrar, a domain, a contact, and
a host. Note: Do this on a non-production environment! All commands below use
`registry_tool` to interact with the running registry system; see the
documentation on `registry_tool` for additional information on it. We'll assume
that all commands below are running in the `alpha` environment; if you named
your environment differently, then use that everywhere that `alpha` appears.
### Create a TLD
Pick the name of a TLD to create. For the purposes of this example we'll use
"example", which conveniently happens to be an ICANN reserved string, meaning
it'll never be created for real on the Internet at large.
@ -144,25 +147,25 @@ it'll never be created for real on the Internet at large.
Perform this command? (y/N): y
Updated 1 entities.
The name of the TLD is the main parameter passed to the command. The initial TLD
state is set here to general availability, bypassing sunrise and landrush, so
that domain names can be created immediately in the following steps. The TLD
type is set to `TEST` (the other alternative being `REAL`) for obvious reasons.
`roid_suffix` is the suffix that will be used for repository ids of domains on
the TLD -- it must be all uppercase and a maximum of eight ASCII characters.
ICANN
[recommends](https://www.icann.org/resources/pages/correction-non-compliant-roids-2015-08-26-en)
a unique ROID suffix per TLD. The easiest way to come up with one is to simply
use the entire uppercased TLD string if it is eight characters or fewer, or
abbreviate it in some sensible way down to eight if it is longer. The full repo
id of a domain resource is a hex string followed by the suffix, e.g.
`12F7CDF3-EXAMPLE` for our example TLD.
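
If you want to automate that heuristic, a trivial standalone helper (purely
illustrative; no such helper exists in the codebase) could look like:

    import java.util.Locale;

    /** Illustrative only: derive a candidate ROID suffix from a TLD string. */
    final class RoidSuffixes {
      static String roidSuffixFor(String tld) {
        // Uppercase, drop anything that isn't an ASCII letter or digit, and cap at eight chars.
        // Simple truncation is used here; a hand-picked abbreviation may read better.
        String upper = tld.toUpperCase(Locale.ROOT).replaceAll("[^A-Z0-9]", "");
        return upper.length() <= 8 ? upper : upper.substring(0, 8);
      }
    }
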
### Create a registrar
Now we need to create a registrar and give it access to operate on the example
TLD. For the purposes of our example we'll name the registrar "Acme".
$ registry_tool -e alpha create_registrar acme --name 'ACME Corp' \
--registrar_type TEST --password hunter2 \
@ -175,27 +178,27 @@ TLD. For the purposes of our example we'll name the registrar "Acme".
support it.
In the command above, "acme" is the internal registrar id that is the primary
key used to refer to the registrar. The `name` is the display name that is used
less often, primarily in user interfaces. We again set the type of the resource
here to `TEST`. The `password` is the EPP password that the registrar uses to
log in with. The `icann_referral_email` is the email address associated with the
initial creation of the registrar -- note that the registrar cannot change it
later. The address fields are self-explanatory (note that other parameters are
available for international addresses). The `allowed_tlds` parameter is a
comma-delimited list of TLDs that the registrar has access to, and here is set
to the example TLD.
### Create a contact
Now we want to create a contact, as a contact is required before a domain can be
created. Contacts can be used on any number of domains across any number of
TLDs, and contain the information on who owns or provides technical support for
a domain. These details will appear in WHOIS queries. Note the `-c` parameter,
which stands for client identifier: This is used on most `registry_tool`
commands, and is used to specify the id of the registrar that the command will
be executed using. Contact, domain, and host creation all work by constructing
an EPP message that is sent to the registry, and EPP commands need to run under
the context of a registrar. The "acme" registrar that was created above is used
for this purpose.
$ registry_tool -e alpha create_contact -c acme --id abcd1234 \
@ -204,24 +207,24 @@ for this purpose.
[ ... snip EPP response ... ]
The `id` is the contact id, and is referenced elsewhere in the system (e.g. when
a domain is created and the admin contact is specified). The `name` is the
display name of the contact, which is usually the name of a company or of a
person. Again, the address fields are required, along with an `email`.
### Create a host
Hosts are used to specify the IP addresses (either v4 or v6) that are associated
with a given nameserver. Note that hosts may either be in-bailiwick (on a TLD
that this registry runs) or out-of-bailiwick. In-bailiwick hosts may
additionally be subordinate (a subdomain of a domain name that is on this
registry). Let's create an out-of-bailiwick nameserver, which is the simplest
type.
$ registry_tool -e alpha create_host -c acme --host ns1.google.com
[ ... snip EPP response ... ]
Note that hosts are required to have IP addresses if they are subordinate, and
must not have IP addresses if they are not subordinate. Use the `--addresses`
parameter to set the IP addresses on a host, passing in a comma-delimited list
of IP addresses in either IPv4 or IPv6 format.
@ -236,7 +239,7 @@ and host.
[ ... snip EPP response ... ]
Note how the same contact id (from above) is used for the administrative,
technical, and registrant contact. This is quite common on domain names.
To verify that everything worked, let's query the WHOIS information for
fake.example:

View file

@ -1,11 +1,11 @@
# Registry tool
The registry tool is a command-line registry administration tool that is invoked
using the `registry_tool` command. It has the ability to view and change a large
number of things in a running domain registry environment, including creating
registrars, updating premium and reserved lists, running an EPP command from a
given XML file, and performing various backend tasks like re-running RDE if the
most recent export failed. Its code lives inside the tools package
(`java/google/registry/tools`), and is compiled by building the `registry_tool`
target in the Bazel BUILD file in that package.
@ -15,11 +15,11 @@ To build the tool and display its command-line help, execute this command:
For future invocations you should alias the compiled binary in the
`bazel-genfiles/java/google/registry` directory or add it to your path so that
you can run it more easily. The rest of this guide assumes that it has been
aliased to `registry_tool`.
The registry tool is always called with a specific environment to run in using
the `-e` parameter. This looks like:
$ registry_tool -e production {command name} {command parameters}
@ -37,7 +37,7 @@ There are actually two separate tools, `gtech_tool`, which is a collection of
lower impact commands intended to be used by tech support personnel, and
`registry_tool`, which is a superset of `gtech_tool` that contains additional
commands that are potentially more destructive and can change more aspects of
the system. A full list of `gtech_tool` commands can be found in
`GtechTool.java`, and the additional commands that only `registry_tool` has
access to are in `RegistryTool.java`.
@ -47,7 +47,7 @@ There are two broad ways that commands are implemented: some that send requests
to `ToolsServlet` to execute the action on the server (these commands implement
`ServerSideCommand`), and others that execute the command locally using the
[Remote API](https://cloud.google.com/appengine/docs/java/tools/remoteapi)
(these commands implement `RemoteApiCommand`). Server-side commands take more
work to implement because they require both a client and a server-side
component, e.g. `CreatePremiumListCommand.java` and
`CreatePremiumListAction.java` respectively for creating a premium list.
@ -56,35 +56,36 @@ Engine, including running a large MapReduce, because they execute on the tools
service in the App Engine cloud.
Local commands, by contrast, are easier to implement, because there is only a
local component to write, but they aren't as powerful. A general rule of thumb
for making this determination is to use a local command if possible, or a
server-side command otherwise.
## Common tool patterns
All tools ultimately implement the `Command` interface located in the `tools`
package. If you use an IDE such as Eclipse to view the type hierarchy of that
interface, you'll see all of the commands that exist, as well as how a lot of
them are grouped using sub-interfaces or abstract classes that provide
additional functionality. The most common patterns that are used by a large
additional functionality. The most common patterns that are used by a large
number of other tools are:
* **`BigqueryCommand`** -- Provides a connection to BigQuery for tools that
need it.
* **`ConfirmingCommand`** -- Provides the methods `prompt()` and `execute()`
to override. `prompt()` outputs a message (usually what the command is going
to do) and prompts the user to confirm execution of the command, and then
`execute()` actually does it.
* **`EppToolCommand`** -- Commands that work by executing EPP commands against
the server, usually by filling in a template with parameters that were
passed on the command-line.
* **`MutatingEppToolCommand`** -- A sub-class of `EppToolCommand` that
provides a `--dry_run` flag, that, if passed, will display the output from
the server of what the command would've done without actually committing
those changes.
* **`GetEppResourceCommand`** -- Gets individual EPP resources from the server
and outputs them.
* **`ListObjectsCommand`** -- Lists all objects of a specific type from the
server and outputs them.
* **`MutatingCommand`** -- Provides a facility to create or update entities in
Datastore, and uses a diff algorithm to display the changes that will be
made before committing them.
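
As an example of how these base classes are used, a hypothetical command built
on `ConfirmingCommand` might look like the sketch below. The `String` return
types and exact signatures are assumptions based on the description above;
consult the existing commands in the tools package for the real contract.

    /** Illustrative only: a do-nothing command demonstrating the confirm-then-execute flow. */
    final class HelloWorldCommand extends ConfirmingCommand {

      @Override
      protected String prompt() {
        // Shown to the user before they are asked to confirm execution.
        return "This will print a friendly greeting.";
      }

      @Override
      protected String execute() {
        // Runs only after the user confirms. Real commands would mutate Datastore or
        // call the tools service here; this one just returns a result message.
        return "Hello, registry!";
      }
    }
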

View file

@ -658,4 +658,22 @@ public final class ConfigModule {
public static Duration provideMetricsWriteInterval() {
return Duration.standardSeconds(60);
}
@Provides
@Config("contactAutomaticTransferLength")
public static Duration provideContactAutomaticTransferLength(RegistryConfig config) {
return config.getContactAutomaticTransferLength();
}
@Provides
@Config("asyncDeleteFlowMapreduceDelay")
public static Duration provideAsyncDeleteFlowMapreduceDelay(RegistryConfig config) {
return config.getAsyncDeleteFlowMapreduceDelay();
}
@Provides
@Config("maxChecks")
public static int provideMaxChecks(RegistryConfig config) {
return config.getMaxChecks();
}
}

View file

@ -6,14 +6,11 @@
<module>default</module>
<threadsafe>true</threadsafe>
<sessions-enabled>true</sessions-enabled>
<instance-class>F4_1G</instance-class>
<automatic-scaling>
<min-idle-instances>0</min-idle-instances>
<max-idle-instances>automatic</max-idle-instances>
<min-pending-latency>automatic</min-pending-latency>
<max-pending-latency>100ms</max-pending-latency>
<max-concurrent-requests>10</max-concurrent-requests>
</automatic-scaling>
<instance-class>B4_1G</instance-class>
<basic-scaling>
<max-instances>10</max-instances>
<idle-timeout>10m</idle-timeout>
</basic-scaling>
<system-properties>
<property name="java.util.logging.config.file"

View file

@ -7,18 +7,6 @@
<bucket-size>5</bucket-size>
</queue>
<queue>
<name>dns-cron</name>
<!-- There is no point allowing more than 10/s because the pull queue that feeds
this job will refuse to service more than 10 qps. See
https://cloud.google.com/appengine/docs/java/javadoc/com/google/appengine/api/taskqueue/Queue#leaseTasks-long-java.util.concurrent.TimeUnit-long- -->
<rate>10/s</rate>
<bucket-size>100</bucket-size>
<retry-parameters>
<task-retry-limit>1</task-retry-limit>
</retry-parameters>
</queue>
<queue>
<name>dns-pull</name>
<mode>pull</mode>

View file

@ -6,14 +6,11 @@
<module>default</module>
<threadsafe>true</threadsafe>
<sessions-enabled>true</sessions-enabled>
<instance-class>F4_1G</instance-class>
<automatic-scaling>
<min-idle-instances>0</min-idle-instances>
<max-idle-instances>automatic</max-idle-instances>
<min-pending-latency>automatic</min-pending-latency>
<max-pending-latency>100ms</max-pending-latency>
<max-concurrent-requests>10</max-concurrent-requests>
</automatic-scaling>
<instance-class>B4_1G</instance-class>
<basic-scaling>
<max-instances>10</max-instances>
<idle-timeout>10m</idle-timeout>
</basic-scaling>
<system-properties>
<property name="java.util.logging.config.file"

View file

@ -6,14 +6,11 @@
<module>default</module>
<threadsafe>true</threadsafe>
<sessions-enabled>true</sessions-enabled>
<instance-class>F4_1G</instance-class>
<automatic-scaling>
<min-idle-instances>1</min-idle-instances>
<max-idle-instances>automatic</max-idle-instances>
<min-pending-latency>automatic</min-pending-latency>
<max-pending-latency>100ms</max-pending-latency>
<max-concurrent-requests>10</max-concurrent-requests>
</automatic-scaling>
<instance-class>B4_1G</instance-class>
<basic-scaling>
<max-instances>10</max-instances>
<idle-timeout>10m</idle-timeout>
</basic-scaling>
<system-properties>

View file

@ -44,6 +44,7 @@ java_library(
"//java/google/registry/mapreduce",
"//java/google/registry/mapreduce/inputs",
"//java/google/registry/model",
"//java/google/registry/monitoring/metrics",
"//java/google/registry/monitoring/whitebox",
"//java/google/registry/pricing",
"//java/google/registry/request",

View file

@ -14,8 +14,7 @@
package google.registry.flows;
import static com.google.appengine.api.users.UserServiceFactory.getUserService;
import com.google.appengine.api.users.UserService;
import google.registry.request.Action;
import google.registry.request.Action.Method;
import google.registry.request.Payload;
@ -35,13 +34,14 @@ public class EppConsoleAction implements Runnable {
@Inject @Payload byte[] inputXmlBytes;
@Inject HttpSession session;
@Inject EppRequestHandler eppRequestHandler;
@Inject UserService userService;
@Inject EppConsoleAction() {}
@Override
public void run() {
eppRequestHandler.executeEpp(
new HttpSessionMetadata(session),
new GaeUserCredentials(getUserService().getCurrentUser()),
GaeUserCredentials.forCurrentUser(userService),
EppRequestSource.CONSOLE,
false, // This endpoint is never a dry run.
false, // This endpoint is never a superuser.

View file

@ -18,6 +18,7 @@ import static google.registry.flows.EppXmlTransformer.unmarshal;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Joiner;
import com.google.common.base.Optional;
import google.registry.flows.FlowModule.EppExceptionInProviderException;
import google.registry.model.eppcommon.Trid;
import google.registry.model.eppinput.EppInput;
@ -42,7 +43,8 @@ public final class EppController {
@Inject Clock clock;
@Inject FlowComponent.Builder flowComponentBuilder;
@Inject EppMetric.Builder metric;
@Inject EppMetric.Builder metricBuilder;
@Inject EppMetrics eppMetrics;
@Inject BigQueryMetricsEnqueuer bigQueryMetricsEnqueuer;
@Inject EppController() {}
@ -54,20 +56,20 @@ public final class EppController {
boolean isDryRun,
boolean isSuperuser,
byte[] inputXmlBytes) {
metric.setClientId(sessionMetadata.getClientId());
metric.setPrivilegeLevel(isSuperuser ? "SUPERUSER" : "NORMAL");
metricBuilder.setClientId(Optional.fromNullable(sessionMetadata.getClientId()));
metricBuilder.setPrivilegeLevel(isSuperuser ? "SUPERUSER" : "NORMAL");
try {
EppInput eppInput;
try {
eppInput = unmarshal(EppInput.class, inputXmlBytes);
} catch (EppException e) {
// Send the client an error message, with no clTRID since we couldn't unmarshal it.
metric.setStatus(e.getResult().getCode());
metricBuilder.setStatus(e.getResult().getCode());
return getErrorResponse(clock, e.getResult(), Trid.create(null));
}
metric.setCommandName(eppInput.getCommandName());
metricBuilder.setCommandName(eppInput.getCommandName());
if (!eppInput.getTargetIds().isEmpty()) {
metric.setEppTarget(Joiner.on(',').join(eppInput.getTargetIds()));
metricBuilder.setEppTarget(Joiner.on(',').join(eppInput.getTargetIds()));
}
EppOutput output = runFlowConvertEppErrors(flowComponentBuilder
.flowModule(new FlowModule.Builder()
@ -81,11 +83,14 @@ public final class EppController {
.build())
.build());
if (output.isResponse()) {
metric.setStatus(output.getResponse().getResult().getCode());
metricBuilder.setStatus(output.getResponse().getResult().getCode());
}
return output;
} finally {
bigQueryMetricsEnqueuer.export(metric.build());
EppMetric metric = metricBuilder.build();
bigQueryMetricsEnqueuer.export(metric);
eppMetrics.incrementEppRequests(metric);
eppMetrics.recordProcessingTime(metric);
}
}

View file

@ -250,4 +250,12 @@ public abstract class EppException extends Exception {
super("Specified protocol version is not implemented");
}
}
/** Command failed. */
@EppResultCode(Code.CommandFailed)
public static class CommandFailedException extends EppException {
public CommandFailedException() {
super("Command failed");
}
}
}

View file

@ -0,0 +1,72 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows;
import com.google.common.collect.ImmutableSet;
import google.registry.monitoring.metrics.EventMetric;
import google.registry.monitoring.metrics.IncrementableMetric;
import google.registry.monitoring.metrics.LabelDescriptor;
import google.registry.monitoring.metrics.MetricRegistryImpl;
import google.registry.monitoring.whitebox.EppMetric;
import javax.inject.Inject;
/** EPP Instrumentation. */
public class EppMetrics {
private static final ImmutableSet<LabelDescriptor> LABEL_DESCRIPTORS =
ImmutableSet.of(
LabelDescriptor.create("command", "The name of the command."),
LabelDescriptor.create("client_id", "The name of the client."),
LabelDescriptor.create("status", "The return status of the command."));
private static final IncrementableMetric eppRequests =
MetricRegistryImpl.getDefault()
.newIncrementableMetric(
"/epp/requests", "Count of EPP Requests", "count", LABEL_DESCRIPTORS);
private static final EventMetric processingTime =
MetricRegistryImpl.getDefault()
.newEventMetric(
"/epp/processing_time",
"EPP Processing Time",
"milliseconds",
LABEL_DESCRIPTORS,
EventMetric.DEFAULT_FITTER);
@Inject
public EppMetrics() {}
/**
* Increment a counter which tracks EPP requests.
*
* @see EppController
* @see FlowRunner
*/
public void incrementEppRequests(EppMetric metric) {
eppRequests.increment(
metric.getCommandName().or(""),
metric.getClientId().or(""),
metric.getStatus().isPresent() ? metric.getStatus().toString() : "");
}
/** Record the server-side processing time for an EPP request. */
public void recordProcessingTime(EppMetric metric) {
processingTime.record(
metric.getEndTimestamp().getMillis() - metric.getStartTimestamp().getMillis(),
metric.getCommandName().or(""),
metric.getClientId().or(""),
metric.getStatus().isPresent() ? metric.getStatus().toString() : "");
}
}

View file

@ -16,13 +16,21 @@ package google.registry.flows;
import static com.google.common.base.Preconditions.checkState;
import com.google.common.base.Optional;
import com.google.common.base.Strings;
import dagger.Module;
import dagger.Provides;
import google.registry.flows.exceptions.OnlyToolCanPassMetadataException;
import google.registry.flows.picker.FlowPicker;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppcommon.Trid;
import google.registry.model.eppinput.EppInput;
import google.registry.model.eppinput.EppInput.ResourceCommandWrapper;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppinput.ResourceCommand.SingleResourceCommand;
import google.registry.model.reporting.HistoryEntry;
import java.lang.annotation.Documented;
import javax.annotation.Nullable;
import javax.inject.Qualifier;
/** Module to choose and instantiate an EPP flow. */
@ -142,10 +150,11 @@ public class FlowModule {
@Provides
@FlowScope
@Nullable
@ClientId
static String provideClientId(SessionMetadata sessionMetadata) {
return sessionMetadata.getClientId();
// Treat a missing clientId as null so we can always inject a non-null value. All we do with the
// clientId is log it (as "") or detect its absence, both of which work fine with empty.
return Strings.nullToEmpty(sessionMetadata.getClientId());
}
@Provides
@ -164,6 +173,50 @@ public class FlowModule {
}
}
@Provides
@FlowScope
static ResourceCommand provideResourceCommand(EppInput eppInput) {
return ((ResourceCommandWrapper) eppInput.getCommandWrapper().getCommand())
.getResourceCommand();
}
@Provides
@FlowScope
static Optional<AuthInfo> provideAuthInfo(ResourceCommand resourceCommand) {
return Optional.fromNullable(((SingleResourceCommand) resourceCommand).getAuthInfo());
}
/**
* Provides a partially filled in {@link HistoryEntry} builder.
*
* <p>This is not marked with {@link FlowScope} so that each retry gets a fresh one. Otherwise,
* the fact that the builder is one-use would cause NPEs.
*/
@Provides
static HistoryEntry.Builder provideHistoryEntryBuilder(
Trid trid,
@InputXml byte[] inputXmlBytes,
@Superuser boolean isSuperuser,
@ClientId String clientId,
EppRequestSource eppRequestSource,
EppInput eppInput) {
HistoryEntry.Builder historyBuilder = new HistoryEntry.Builder()
.setTrid(trid)
.setXmlBytes(inputXmlBytes)
.setBySuperuser(isSuperuser)
.setClientId(clientId);
MetadataExtension metadataExtension = eppInput.getSingleExtension(MetadataExtension.class);
if (metadataExtension != null) {
if (!eppRequestSource.equals(EppRequestSource.TOOL)) {
throw new EppExceptionInProviderException(new OnlyToolCanPassMetadataException());
}
historyBuilder
.setReason(metadataExtension.getReason())
.setRequestedByRegistrar(metadataExtension.getRequestedByRegistrar());
}
return historyBuilder;
}
/** Wrapper class to carry an {@link EppException} to the calling code. */
static class EppExceptionInProviderException extends RuntimeException {
EppExceptionInProviderException(EppException exception) {

View file

@ -14,7 +14,6 @@
package google.registry.flows;
import static com.google.common.base.Strings.nullToEmpty;
import static com.google.common.base.Throwables.getStackTraceAsString;
import static com.google.common.io.BaseEncoding.base64;
import static google.registry.model.ofy.ObjectifyService.ofy;
@ -34,7 +33,6 @@ import google.registry.model.eppoutput.EppOutput;
import google.registry.monitoring.whitebox.EppMetric;
import google.registry.util.Clock;
import google.registry.util.FormattingLogger;
import javax.annotation.Nullable;
import javax.inject.Inject;
import javax.inject.Provider;
import org.joda.time.DateTime;
@ -57,7 +55,7 @@ public class FlowRunner {
private static final FormattingLogger logger = FormattingLogger.getLoggerForCallerClass();
@Inject @Nullable @ClientId String clientId;
@Inject @ClientId String clientId;
@Inject Clock clock;
@Inject TransportCredentials credentials;
@Inject EppInput eppInput;
@ -96,7 +94,7 @@ public class FlowRunner {
REPORTING_LOG_SIGNATURE,
JSONValue.toJSONString(ImmutableMap.<String, Object>of(
"trid", trid.getServerTransactionId(),
"clientId", nullToEmpty(clientId),
"clientId", clientId,
"xml", prettyXml,
"xmlBytes", xmlBase64)));
if (!isTransactional) {

View file

@ -14,11 +14,12 @@
package google.registry.flows;
import static com.google.appengine.api.users.UserServiceFactory.getUserService;
import static com.google.common.base.MoreObjects.toStringHelper;
import static com.google.common.base.Strings.nullToEmpty;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import com.google.appengine.api.users.User;
import com.google.appengine.api.users.UserService;
import com.google.common.annotations.VisibleForTesting;
import google.registry.flows.EppException.AuthenticationErrorException;
import google.registry.model.registrar.Registrar;
@ -28,11 +29,41 @@ import javax.annotation.Nullable;
/** Credentials provided by {@link com.google.appengine.api.users.UserService}. */
public class GaeUserCredentials implements TransportCredentials {
final User gaeUser;
private final User gaeUser;
private final Boolean isAdmin;
/**
* Create an instance for the current user, as determined by {@code UserService}.
*
* <p>Note that the current user may be null (i.e. there is no logged in user).
*/
public static GaeUserCredentials forCurrentUser(UserService userService) {
User user = userService.getCurrentUser();
return new GaeUserCredentials(user, user != null ? userService.isUserAdmin() : null);
}
/** Create an instance that represents an explicit user (for testing purposes). */
@VisibleForTesting
public static GaeUserCredentials forTestingUser(User gaeUser, Boolean isAdmin) {
checkArgumentNotNull(gaeUser);
checkArgumentNotNull(isAdmin);
return new GaeUserCredentials(gaeUser, isAdmin);
}
/** Create an instance that represents a non-logged in user (for testing purposes). */
@VisibleForTesting
public static GaeUserCredentials forLoggedOutUser() {
return new GaeUserCredentials(null, null);
}
private GaeUserCredentials(@Nullable User gaeUser, @Nullable Boolean isAdmin) {
this.gaeUser = gaeUser;
this.isAdmin = isAdmin;
}
@VisibleForTesting
public GaeUserCredentials(@Nullable User gaeUser) {
this.gaeUser = gaeUser;
User getUser() {
return gaeUser;
}
@Override
@ -42,7 +73,7 @@ public class GaeUserCredentials implements TransportCredentials {
throw new UserNotLoggedInException();
}
// Allow admins to act as any registrar.
if (getUserService().isUserAdmin()) {
if (Boolean.TRUE.equals(isAdmin)) {
return;
}
// Check Registrar's contacts to see if any are associated with this gaeUserId.
@ -59,6 +90,7 @@ public class GaeUserCredentials implements TransportCredentials {
public String toString() {
return toStringHelper(getClass())
.add("gaeUser", gaeUser)
.add("isAdmin", isAdmin)
.toString();
}

View file

@ -101,7 +101,10 @@ public abstract class LoggedInFlow extends Flow {
allowedTlds = registrar.getAllowedTlds();
}
initLoggedInFlow();
if (!difference(extensionClasses, getValidRequestExtensions()).isEmpty()) {
Set<Class<? extends CommandExtension>> unimplementedExtensions =
difference(extensionClasses, getValidRequestExtensions());
if (!unimplementedExtensions.isEmpty()) {
logger.infofmt("Unimplemented extensions: %s", unimplementedExtensions);
throw new UnimplementedExtensionException();
}
}

View file

@ -15,18 +15,29 @@
package google.registry.flows;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.model.EppResourceUtils.queryDomainsUsingResource;
import static google.registry.model.domain.DomainResource.extendRegistrationWithCap;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Function;
import com.google.common.base.Optional;
import com.google.common.base.Predicate;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Iterables;
import com.google.common.collect.Sets;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.Work;
import google.registry.flows.EppException.AuthorizationErrorException;
import google.registry.flows.EppException.InvalidAuthorizationInformationErrorException;
import google.registry.flows.exceptions.ResourceStatusProhibitsOperationException;
import google.registry.flows.exceptions.ResourceToDeleteIsReferencedException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.EppResource;
import google.registry.model.EppResource.Builder;
import google.registry.model.EppResource.ForeignKeyedEppResource;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainResource;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppcommon.AuthInfo.BadAuthInfoException;
@ -43,6 +54,8 @@ import google.registry.model.transfer.TransferResponse;
import google.registry.model.transfer.TransferResponse.ContactTransferResponse;
import google.registry.model.transfer.TransferResponse.DomainTransferResponse;
import google.registry.model.transfer.TransferStatus;
import java.util.List;
import java.util.Set;
import org.joda.time.DateTime;
/** Static utility functions for resource transfer flows. */
@ -52,6 +65,9 @@ public class ResourceFlowUtils {
private static final ImmutableSet<TransferStatus> ADD_EXDATE_STATUSES = Sets.immutableEnumSet(
TransferStatus.PENDING, TransferStatus.CLIENT_APPROVED, TransferStatus.SERVER_APPROVED);
/** In {@link #failfastForAsyncDelete}, check this (arbitrary) number of query results. */
private static final int FAILFAST_CHECK_COUNT = 5;
/**
* Create a transfer response using the id and type of this resource and the specified
* {@link TransferData}.
@ -166,6 +182,41 @@ public class ResourceFlowUtils {
}
}
/** Check whether an asynchronous delete would obviously fail, and throw an exception if so. */
public static <R extends EppResource> void failfastForAsyncDelete(
final String targetId,
final DateTime now,
final Class<R> resourceClass,
final Function<DomainBase, ImmutableSet<?>> getPotentialReferences) throws EppException {
// Enter a transactionless context briefly.
EppException failfastException = ofy().doTransactionless(new Work<EppException>() {
@Override
public EppException run() {
final ForeignKeyIndex<R> fki = ForeignKeyIndex.load(resourceClass, targetId, now);
if (fki == null) {
return new ResourceToMutateDoesNotExistException(resourceClass, targetId);
}
// Query for the first few linked domains, and if found, actually load them. The query is
// eventually consistent and so might be very stale, but the direct load will not be stale,
// just non-transactional. If we find at least one actual reference then we can reliably
// fail. If we don't find any, we can't trust the query and need to do the full mapreduce.
List<Key<DomainBase>> keys = queryDomainsUsingResource(
resourceClass, fki.getResourceKey(), now, FAILFAST_CHECK_COUNT);
Predicate<DomainBase> predicate = new Predicate<DomainBase>() {
@Override
public boolean apply(DomainBase domain) {
return getPotentialReferences.apply(domain).contains(fki.getResourceKey());
}};
return Iterables.any(ofy().load().keys(keys).values(), predicate)
? new ResourceToDeleteIsReferencedException()
: null;
}
});
if (failfastException != null) {
throw failfastException;
}
}
/** The specified resource belongs to another client. */
public static class ResourceNotOwnedException extends AuthorizationErrorException {
public ResourceNotOwnedException() {
@ -173,6 +224,14 @@ public class ResourceFlowUtils {
}
}
/** Check that the given AuthInfo is either missing or else is valid for the given resource. */
public static void verifyOptionalAuthInfoForResource(
Optional<AuthInfo> authInfo, EppResource resource) throws EppException {
if (authInfo.isPresent()) {
verifyAuthInfoForResource(authInfo.get(), resource);
}
}
/** Check that the given AuthInfo is valid for the given resource. */
public static void verifyAuthInfoForResource(AuthInfo authInfo, EppResource resource)
throws EppException {
@ -183,6 +242,15 @@ public class ResourceFlowUtils {
}
}
/** Check that the resource does not have any disallowed status values. */
public static void verifyNoDisallowedStatuses(
EppResource resource, ImmutableSet<StatusValue> disallowedStatuses) throws EppException {
Set<StatusValue> problems = Sets.intersection(resource.getStatusValues(), disallowedStatuses);
if (!problems.isEmpty()) {
throw new ResourceStatusProhibitsOperationException(problems);
}
}
/** Authorization information for accessing resource is invalid. */
public static class BadAuthInfoForResourceException
extends InvalidAuthorizationInformationErrorException {

View file

@ -36,7 +36,7 @@ public abstract class ResourceSyncDeleteFlow
@Override
@SuppressWarnings("unchecked")
protected final R createOrMutateResource() {
protected final R createOrMutateResource() throws EppException {
B builder = (B) prepareDeletedResourceAsBuilder(existingResource, now);
setDeleteProperties(builder);
return builder.build();
@ -52,7 +52,7 @@ public abstract class ResourceSyncDeleteFlow
/** Set any resource-specific properties before deleting. */
@SuppressWarnings("unused")
protected void setDeleteProperties(B builder) {}
protected void setDeleteProperties(B builder) throws EppException {}
/** Modify any other resources that need to be informed of this delete. */
protected void modifySyncDeleteRelatedResources() {}

View file

@ -77,7 +77,7 @@ import org.joda.time.Duration;
}};
@Override
protected final void initResourceCreateOrMutateFlow() {
protected final void initResourceCreateOrMutateFlow() throws EppException {
initResourceTransferRequestFlow();
}
@ -100,7 +100,8 @@ import org.joda.time.Duration;
verifyTransferRequestIsAllowed();
}
private TransferData.Builder createTransferDataBuilder(TransferStatus transferStatus) {
private TransferData.Builder
createTransferDataBuilder(TransferStatus transferStatus) throws EppException {
TransferData.Builder builder = new TransferData.Builder()
.setGainingClientId(gainingClient.getId())
.setTransferRequestTime(now)
@ -113,7 +114,7 @@ import org.joda.time.Duration;
}
private PollMessage createPollMessage(
Client client, TransferStatus transferStatus, DateTime eventTime) {
Client client, TransferStatus transferStatus, DateTime eventTime) throws EppException {
ImmutableList.Builder<ResponseData> responseData = new ImmutableList.Builder<>();
responseData.add(createTransferResponse(
existingResource, createTransferDataBuilder(transferStatus).build(), now));
@ -132,7 +133,7 @@ import org.joda.time.Duration;
@Override
@SuppressWarnings("unchecked")
protected final R createOrMutateResource() {
protected final R createOrMutateResource() throws EppException {
// Figure out transfer expiration time once we've verified that the existingResource does in
// fact exist (otherwise we won't know which TLD to get this figure off of).
transferExpirationTime = now.plus(getAutomaticTransferLength());
@ -158,7 +159,7 @@ import org.joda.time.Duration;
}
/** Subclasses can override this to do further initialization. */
protected void initResourceTransferRequestFlow() {}
protected void initResourceTransferRequestFlow() throws EppException {}
/**
* Subclasses can override this to return the keys of any entities that need to be deleted if the
@ -173,8 +174,8 @@ import org.joda.time.Duration;
protected void verifyTransferRequestIsAllowed() throws EppException {}
/** Subclasses can override this to modify fields on the transfer data builder. */
protected void setTransferDataProperties(
@SuppressWarnings("unused") TransferData.Builder builder) {}
@SuppressWarnings("unused")
protected void setTransferDataProperties(TransferData.Builder builder) throws EppException {}
@Override
protected final EppOutput getOutput() throws EppException {

View file

@ -15,34 +15,46 @@
package google.registry.flows.contact;
import static google.registry.model.EppResourceUtils.checkResourcesExist;
import static google.registry.model.eppoutput.Result.Code.Success;
import com.google.common.collect.ImmutableList;
import google.registry.flows.ResourceCheckFlow;
import google.registry.config.ConfigModule.Config;
import google.registry.flows.EppException;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.exceptions.TooManyResourceChecksException;
import google.registry.model.contact.ContactCommand.Check;
import google.registry.model.contact.ContactResource;
import google.registry.model.eppoutput.CheckData;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.CheckData.ContactCheck;
import google.registry.model.eppoutput.CheckData.ContactCheckData;
import google.registry.model.eppoutput.EppOutput;
import java.util.List;
import java.util.Set;
import javax.inject.Inject;
/**
* An EPP flow that checks whether a contact can be provisioned.
*
* @error {@link google.registry.flows.ResourceCheckFlow.TooManyResourceChecksException}
* @error {@link google.registry.flows.exceptions.TooManyResourceChecksException}
*/
public class ContactCheckFlow extends ResourceCheckFlow<ContactResource, Check> {
public class ContactCheckFlow extends LoggedInFlow {
@Inject ResourceCommand resourceCommand;
@Inject @Config("maxChecks") int maxChecks;
@Inject ContactCheckFlow() {}
@Override
protected CheckData getCheckData() {
Set<String> existingIds = checkResourcesExist(resourceClass, targetIds, now);
public final EppOutput run() throws EppException {
List<String> targetIds = ((Check) resourceCommand).getTargetIds();
if (targetIds.size() > maxChecks) {
throw new TooManyResourceChecksException(maxChecks);
}
Set<String> existingIds = checkResourcesExist(ContactResource.class, targetIds, now);
ImmutableList.Builder<ContactCheck> checks = new ImmutableList.Builder<>();
for (String id : targetIds) {
boolean unused = !existingIds.contains(id);
checks.add(ContactCheck.create(unused, id, unused ? null : "In use"));
}
return ContactCheckData.create(checks.build());
return createOutput(Success, ContactCheckData.create(checks.build()));
}
}


@ -17,15 +17,25 @@ package google.registry.flows.contact;
import static google.registry.flows.contact.ContactFlowUtils.validateAsciiPostalInfo;
import static google.registry.flows.contact.ContactFlowUtils.validateContactAgainstPolicy;
import static google.registry.model.EppResourceUtils.createContactHostRoid;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ResourceCreateFlow;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.ResourceAlreadyExistsException;
import google.registry.model.contact.ContactCommand.Create;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.CreateData.ContactCreateData;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.index.EppResourceIndex;
import google.registry.model.index.ForeignKeyIndex;
import google.registry.model.ofy.ObjectifyService;
import google.registry.model.reporting.HistoryEntry;
import javax.inject.Inject;
@ -33,37 +43,47 @@ import javax.inject.Inject;
/**
* An EPP flow that creates a new contact resource.
*
* @error {@link google.registry.flows.ResourceCreateFlow.ResourceAlreadyExistsException}
* @error {@link google.registry.flows.exceptions.ResourceAlreadyExistsException}
* @error {@link ContactFlowUtils.BadInternationalizedPostalInfoException}
* @error {@link ContactFlowUtils.DeclineContactDisclosureFieldDisallowedPolicyException}
*/
public class ContactCreateFlow extends ResourceCreateFlow<ContactResource, Builder, Create> {
public class ContactCreateFlow extends LoggedInFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject @ClientId String clientId;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactCreateFlow() {}
@Override
protected EppOutput getOutput() {
return createOutput(Success, ContactCreateData.create(newResource.getContactId(), now));
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
protected String createFlowRepoId() {
return createContactHostRoid(ObjectifyService.allocateId());
}
@Override
protected void verifyNewStateIsAllowed() throws EppException {
protected final EppOutput run() throws EppException {
Create command = (Create) resourceCommand;
if (loadByUniqueId(ContactResource.class, command.getTargetId(), now) != null) {
throw new ResourceAlreadyExistsException(command.getTargetId());
}
Builder builder = new Builder();
command.applyTo(builder);
ContactResource newResource = builder
.setCreationClientId(clientId)
.setCurrentSponsorClientId(clientId)
.setRepoId(createContactHostRoid(ObjectifyService.allocateId()))
.build();
validateAsciiPostalInfo(newResource.getInternationalizedPostalInfo());
validateContactAgainstPolicy(newResource);
}
@Override
protected boolean storeXmlInHistoryEntry() {
return false;
}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_CREATE;
historyBuilder
.setType(HistoryEntry.Type.CONTACT_CREATE)
.setModificationTime(now)
.setXmlBytes(null) // We don't want to store contact details in the history entry.
.setParent(Key.create(newResource));
ofy().save().entities(
newResource,
historyBuilder.build(),
ForeignKeyIndex.create(newResource, newResource.getDeletionTime()),
EppResourceIndex.create(Key.create(newResource)));
return createOutput(Success, ContactCreateData.create(newResource.getContactId(), now));
}
}
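The interesting part of the rewritten create flow is the single ofy().save() of four entities: the new contact, its history entry, the ForeignKeyIndex, and the EppResourceIndex are committed in one transaction, so the contact and the index that makes it findable by contact ID always appear together. A minimal sketch of that invariant, using made-up types and plain maps rather than the real Objectify model:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the resource and the index that makes it findable by its
// foreign key are committed together, so readers never see one without the other.
final class AtomicCreate {
  private final Map<String, String> resourcesByRepoId = new HashMap<>();
  private final Map<String, String> repoIdsByContactId = new HashMap<>();

  synchronized void createContact(String repoId, String contactId, String payload) {
    resourcesByRepoId.put(repoId, payload);
    repoIdsByContactId.put(contactId, repoId);  // the "foreign key index"
  }

  synchronized String loadByContactId(String contactId) {
    String repoId = repoIdsByContactId.get(contactId);
    return repoId == null ? null : resourcesByRepoId.get(repoId);
  }
}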


@ -14,75 +14,107 @@
package google.registry.flows.contact;
import static google.registry.model.EppResourceUtils.queryDomainsUsingResource;
import static google.registry.flows.ResourceFlowUtils.failfastForAsyncDelete;
import static google.registry.flows.ResourceFlowUtils.verifyNoDisallowedStatuses;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.flows.ResourceFlowUtils.verifyResourceOwnership;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.SuccessWithActionPending;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Predicate;
import com.google.common.base.Function;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Iterables;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.config.RegistryEnvironment;
import google.registry.config.ConfigModule.Config;
import google.registry.flows.EppException;
import google.registry.flows.ResourceAsyncDeleteFlow;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.async.AsyncFlowUtils;
import google.registry.flows.async.DeleteContactResourceAction;
import google.registry.flows.async.DeleteEppResourceAction;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.contact.ContactCommand.Delete;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.reporting.HistoryEntry;
import javax.inject.Inject;
import org.joda.time.Duration;
/**
* An EPP flow that deletes a contact resource.
*
* @error {@link google.registry.flows.ResourceAsyncDeleteFlow.ResourceToDeleteIsReferencedException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.SingleResourceFlow.ResourceStatusProhibitsOperationException}
* @error {@link google.registry.flows.exceptions.ResourceStatusProhibitsOperationException}
* @error {@link google.registry.flows.exceptions.ResourceToDeleteIsReferencedException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
*/
public class ContactDeleteFlow extends ResourceAsyncDeleteFlow<ContactResource, Builder, Delete> {
public class ContactDeleteFlow extends LoggedInFlow implements TransactionalFlow {
/** In {@link #isLinkedForFailfast}, check this (arbitrary) number of resources from the query. */
private static final int FAILFAST_CHECK_COUNT = 5;
private static final ImmutableSet<StatusValue> DISALLOWED_STATUSES = ImmutableSet.of(
StatusValue.LINKED,
StatusValue.CLIENT_DELETE_PROHIBITED,
StatusValue.PENDING_DELETE,
StatusValue.SERVER_DELETE_PROHIBITED);
@Inject ResourceCommand resourceCommand;
@Inject @ClientId String clientId;
@Inject Optional<AuthInfo> authInfo;
@Inject @Config("asyncDeleteFlowMapreduceDelay") Duration mapreduceDelay;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactDeleteFlow() {}
@Override
protected boolean isLinkedForFailfast(final Key<ContactResource> key) {
// Query for the first few linked domains, and if found, actually load them. The query is
// eventually consistent and so might be very stale, but the direct load will not be stale,
// just non-transactional. If we find at least one actual reference then we can reliably
// fail. If we don't find any, we can't trust the query and need to do the full mapreduce.
return Iterables.any(
ofy().load().keys(
queryDomainsUsingResource(
ContactResource.class, key, now, FAILFAST_CHECK_COUNT)).values(),
new Predicate<DomainBase>() {
@Override
public boolean apply(DomainBase domain) {
return domain.getReferencedContacts().contains(key);
}});
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
/** Enqueues a contact resource deletion on the mapreduce queue. */
@Override
protected final void enqueueTasks() throws EppException {
public final EppOutput run() throws EppException {
Delete command = (Delete) resourceCommand;
String targetId = command.getTargetId();
failfastForAsyncDelete(
targetId,
now,
ContactResource.class,
new Function<DomainBase, ImmutableSet<?>>() {
@Override
public ImmutableSet<?> apply(DomainBase domain) {
return domain.getReferencedContacts();
}});
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
verifyNoDisallowedStatuses(existingResource, DISALLOWED_STATUSES);
verifyOptionalAuthInfoForResource(authInfo, existingResource);
if (!isSuperuser) {
verifyResourceOwnership(clientId, existingResource);
}
AsyncFlowUtils.enqueueMapreduceAction(
DeleteContactResourceAction.class,
ImmutableMap.of(
DeleteEppResourceAction.PARAM_RESOURCE_KEY,
Key.create(existingResource).getString(),
DeleteEppResourceAction.PARAM_REQUESTING_CLIENT_ID,
getClientId(),
clientId,
DeleteEppResourceAction.PARAM_IS_SUPERUSER,
Boolean.toString(isSuperuser)),
RegistryEnvironment.get().config().getAsyncDeleteFlowMapreduceDelay());
}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_PENDING_DELETE;
mapreduceDelay);
ContactResource newResource =
existingResource.asBuilder().addStatusValue(StatusValue.PENDING_DELETE).build();
historyBuilder
.setType(HistoryEntry.Type.CONTACT_PENDING_DELETE)
.setModificationTime(now)
.setParent(Key.create(existingResource));
ofy().save().<Object>entities(newResource, historyBuilder.build());
return createOutput(SuccessWithActionPending);
}
}
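Both the old isLinkedForFailfast override and the new failfastForAsyncDelete helper follow the pattern described in the removed comment: an eventually consistent query nominates a few candidate domains, and a strongly consistent load of those candidates confirms whether any still references the contact. Only a confirmed reference fails fast; an empty result is inconclusive and the asynchronous mapreduce still makes the final decision. A self-contained sketch of that pattern, with hypothetical functional stand-ins instead of the real Objectify query and load calls:

import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical stand-in for the "query then confirm" failfast check.
final class FailfastLinkCheck<K, V> {
  private final Function<K, List<K>> eventuallyConsistentQuery;  // may return stale keys
  private final Function<List<K>, List<V>> stronglyConsistentLoad;  // authoritative reads
  private final Predicate<V> stillReferencesTarget;

  FailfastLinkCheck(
      Function<K, List<K>> eventuallyConsistentQuery,
      Function<List<K>, List<V>> stronglyConsistentLoad,
      Predicate<V> stillReferencesTarget) {
    this.eventuallyConsistentQuery = eventuallyConsistentQuery;
    this.stronglyConsistentLoad = stronglyConsistentLoad;
    this.stillReferencesTarget = stillReferencesTarget;
  }

  // True means we can reliably fail now; false is inconclusive and the slower
  // asynchronous check must still make the final decision.
  boolean isDefinitelyLinked(K targetKey) {
    List<K> candidates = eventuallyConsistentQuery.apply(targetKey);
    return stronglyConsistentLoad.apply(candidates).stream().anyMatch(stillReferencesTarget);
  }
}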


@ -18,6 +18,7 @@ import static google.registry.model.contact.PostalInfo.Type.INTERNATIONALIZED;
import com.google.common.base.CharMatcher;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Sets;
import google.registry.flows.EppException;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
@ -25,6 +26,11 @@ import google.registry.flows.EppException.ParameterValueSyntaxErrorException;
import google.registry.model.contact.ContactAddress;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.PostalInfo;
import google.registry.model.poll.PendingActionNotificationResponse.ContactPendingActionNotificationResponse;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferResponse.ContactTransferResponse;
import java.util.Set;
import javax.annotation.Nullable;
@ -50,7 +56,7 @@ public class ContactFlowUtils {
}
}
}
/** Check contact's state against server policy. */
static void validateContactAgainstPolicy(ContactResource contact) throws EppException {
if (contact.getDisclose() != null && !contact.getDisclose().getFlag()) {
@ -58,6 +64,49 @@ public class ContactFlowUtils {
}
}
/** Create a poll message for the gaining client in a transfer. */
static PollMessage createGainingTransferPollMessage(
String targetId, TransferData transferData, HistoryEntry historyEntry) {
return new PollMessage.OneTime.Builder()
.setClientId(transferData.getGainingClientId())
.setEventTime(transferData.getPendingTransferExpirationTime())
.setMsg(transferData.getTransferStatus().getMessage())
.setResponseData(ImmutableList.of(
createTransferResponse(targetId, transferData),
ContactPendingActionNotificationResponse.create(
targetId,
transferData.getTransferStatus().isApproved(),
transferData.getTransferRequestTrid(),
historyEntry.getModificationTime())))
.setParent(historyEntry)
.build();
}
/** Create a poll message for the losing client in a transfer. */
static PollMessage createLosingTransferPollMessage(
String targetId, TransferData transferData, HistoryEntry historyEntry) {
return new PollMessage.OneTime.Builder()
.setClientId(transferData.getLosingClientId())
.setEventTime(transferData.getPendingTransferExpirationTime())
.setMsg(transferData.getTransferStatus().getMessage())
.setResponseData(ImmutableList.of(createTransferResponse(targetId, transferData)))
.setParent(historyEntry)
.build();
}
/** Create a {@link ContactTransferResponse} off of the info in a {@link TransferData}. */
static ContactTransferResponse createTransferResponse(
String targetId, TransferData transferData) {
return new ContactTransferResponse.Builder()
.setContactId(targetId)
.setGainingClientId(transferData.getGainingClientId())
.setLosingClientId(transferData.getLosingClientId())
.setPendingTransferExpirationTime(transferData.getPendingTransferExpirationTime())
.setTransferRequestTime(transferData.getTransferRequestTime())
.setTransferStatus(transferData.getTransferStatus())
.build();
}
/** Declining contact disclosure is disallowed by server policy. */
static class DeclineContactDisclosureFieldDisallowedPolicyException
extends ParameterValuePolicyErrorException {
@ -65,7 +114,7 @@ public class ContactFlowUtils {
super("Declining contact disclosure is disallowed by server policy.");
}
}
/** Internationalized postal infos can only contain ASCII characters. */
static class BadInternationalizedPostalInfoException extends ParameterValueSyntaxErrorException {
public BadInternationalizedPostalInfoException() {


@ -14,17 +14,42 @@
package google.registry.flows.contact;
import google.registry.flows.ResourceInfoFlow;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.model.EppResourceUtils.cloneResourceWithLinkedStatus;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import com.google.common.base.Optional;
import google.registry.flows.EppException;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.exceptions.ResourceToQueryDoesNotExistException;
import google.registry.model.contact.ContactCommand.Info;
import google.registry.model.contact.ContactResource;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import javax.inject.Inject;
/**
* An EPP flow that reads a contact.
*
* @error {@link google.registry.flows.ResourceQueryFlow.ResourceToQueryDoesNotExistException}
* @error {@link google.registry.flows.exceptions.ResourceToQueryDoesNotExistException}
*/
public class ContactInfoFlow extends ResourceInfoFlow<ContactResource, Info> {
@Inject ContactInfoFlow() {}
}
public class ContactInfoFlow extends LoggedInFlow {
@Inject ResourceCommand resourceCommand;
@Inject Optional<AuthInfo> authInfo;
@Inject ContactInfoFlow() {}
@Override
public final EppOutput run() throws EppException {
Info command = (Info) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToQueryDoesNotExistException(ContactResource.class, targetId);
}
verifyOptionalAuthInfoForResource(authInfo, existingResource);
return createOutput(Success, cloneResourceWithLinkedStatus(existingResource, now));
}
}


@ -14,11 +14,31 @@
package google.registry.flows.contact;
import google.registry.flows.ResourceTransferApproveFlow;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.flows.ResourceFlowUtils.verifyResourceOwnership;
import static google.registry.flows.contact.ContactFlowUtils.createGainingTransferPollMessage;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Optional;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.NotPendingTransferException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.contact.ContactCommand.Transfer;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.transfer.TransferStatus;
import javax.inject.Inject;
/**
@ -26,16 +46,52 @@ import javax.inject.Inject;
*
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.ResourceMutatePendingTransferFlow.NotPendingTransferException}
* @error {@link google.registry.flows.exceptions.NotPendingTransferException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
*/
public class ContactTransferApproveFlow
extends ResourceTransferApproveFlow<ContactResource, Builder, Transfer> {
public class ContactTransferApproveFlow extends LoggedInFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject @ClientId String clientId;
@Inject Optional<AuthInfo> authInfo;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactTransferApproveFlow() {}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_TRANSFER_APPROVE;
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
public final EppOutput run() throws EppException {
Transfer command = (Transfer) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
verifyOptionalAuthInfoForResource(authInfo, existingResource);
if (existingResource.getTransferData().getTransferStatus() != TransferStatus.PENDING) {
throw new NotPendingTransferException(targetId);
}
verifyResourceOwnership(clientId, existingResource);
ContactResource newResource = existingResource.asBuilder()
.clearPendingTransfer(TransferStatus.CLIENT_APPROVED, now)
.setLastTransferTime(now)
.setCurrentSponsorClientId(existingResource.getTransferData().getGainingClientId())
.build();
HistoryEntry historyEntry = historyBuilder
.setType(HistoryEntry.Type.CONTACT_TRANSFER_APPROVE)
.setModificationTime(now)
.setParent(Key.create(existingResource))
.build();
// Create a poll message for the gaining client.
PollMessage gainingPollMessage =
createGainingTransferPollMessage(targetId, newResource.getTransferData(), historyEntry);
ofy().save().<Object>entities(newResource, historyEntry, gainingPollMessage);
// Delete the billing event and poll messages that were written in case the transfer would have
// been implicitly server approved.
ofy().delete().keys(existingResource.getTransferData().getServerApproveEntities());
return createOutput(Success, createTransferResponse(targetId, newResource.getTransferData()));
}
}


@ -14,28 +14,87 @@
package google.registry.flows.contact;
import google.registry.flows.ResourceTransferCancelFlow;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.flows.contact.ContactFlowUtils.createLosingTransferPollMessage;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Optional;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.NotPendingTransferException;
import google.registry.flows.exceptions.NotTransferInitiatorException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.contact.ContactCommand.Transfer;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.transfer.TransferStatus;
import javax.inject.Inject;
/**
* An EPP flow that cancels a pending transfer on a {@link ContactResource}.
*
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.ResourceMutatePendingTransferFlow.NotPendingTransferException}
* @error {@link google.registry.flows.ResourceTransferCancelFlow.NotTransferInitiatorException}
* @error {@link google.registry.flows.exceptions.NotPendingTransferException}
* @error {@link google.registry.flows.exceptions.NotTransferInitiatorException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
*/
public class ContactTransferCancelFlow
extends ResourceTransferCancelFlow<ContactResource, Builder, Transfer> {
public class ContactTransferCancelFlow extends LoggedInFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject Optional<AuthInfo> authInfo;
@Inject @ClientId String clientId;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactTransferCancelFlow() {}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_TRANSFER_CANCEL;
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
protected final EppOutput run() throws EppException {
Transfer command = (Transfer) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
// Fail if the object doesn't exist or was deleted.
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
verifyOptionalAuthInfoForResource(authInfo, existingResource);
// Fail if the object doesn't have a pending transfer, or if the authInfo doesn't match.
if (existingResource.getTransferData().getTransferStatus() != TransferStatus.PENDING) {
throw new NotPendingTransferException(targetId);
}
// TODO(b/18997997): Determine if authInfo is necessary to cancel a transfer.
if (!clientId.equals(existingResource.getTransferData().getGainingClientId())) {
throw new NotTransferInitiatorException();
}
ContactResource newResource = existingResource.asBuilder()
.clearPendingTransfer(TransferStatus.CLIENT_CANCELLED, now)
.build();
HistoryEntry historyEntry = historyBuilder
.setType(HistoryEntry.Type.CONTACT_TRANSFER_CANCEL)
.setModificationTime(now)
.setParent(Key.create(existingResource))
.build();
// Create a poll message for the losing client.
PollMessage losingPollMessage =
createLosingTransferPollMessage(targetId, newResource.getTransferData(), historyEntry);
ofy().save().<Object>entities(newResource, historyEntry, losingPollMessage);
// Delete the billing event and poll messages that were written in case the transfer would have
// been implicitly server approved.
ofy().delete().keys(existingResource.getTransferData().getServerApproveEntities());
return createOutput(Success, createTransferResponse(targetId, newResource.getTransferData()));
}
}


@ -14,19 +14,62 @@
package google.registry.flows.contact;
import google.registry.flows.ResourceTransferQueryFlow;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import com.google.common.base.Optional;
import google.registry.flows.EppException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.exceptions.NoTransferHistoryToQueryException;
import google.registry.flows.exceptions.NotAuthorizedToViewTransferException;
import google.registry.flows.exceptions.ResourceToQueryDoesNotExistException;
import google.registry.model.contact.ContactCommand.Transfer;
import google.registry.model.contact.ContactResource;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import javax.inject.Inject;
/**
* An EPP flow that queries a pending transfer on a {@link ContactResource}.
*
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceQueryFlow.ResourceToQueryDoesNotExistException}
* @error {@link google.registry.flows.ResourceTransferQueryFlow.NoTransferHistoryToQueryException}
* @error {@link google.registry.flows.ResourceTransferQueryFlow.NotAuthorizedToViewTransferException}
* @error {@link google.registry.flows.exceptions.NoTransferHistoryToQueryException}
* @error {@link google.registry.flows.exceptions.NotAuthorizedToViewTransferException}
* @error {@link google.registry.flows.exceptions.ResourceToQueryDoesNotExistException}
*/
public class ContactTransferQueryFlow extends ResourceTransferQueryFlow<ContactResource, Transfer> {
public class ContactTransferQueryFlow extends LoggedInFlow {
@Inject ResourceCommand resourceCommand;
@Inject Optional<AuthInfo> authInfo;
@Inject @ClientId String clientId;
@Inject ContactTransferQueryFlow() {}
@Override
public final EppOutput run() throws EppException {
Transfer command = (Transfer) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToQueryDoesNotExistException(ContactResource.class, targetId);
}
verifyOptionalAuthInfoForResource(authInfo, existingResource);
// Most of the fields on the transfer response are required, so there's no way to return valid
// XML if the object has never been transferred (and hence the fields aren't populated).
if (existingResource.getTransferData().getTransferStatus() == null) {
throw new NoTransferHistoryToQueryException();
}
// Note that the authorization info on the command (if present) has already been verified. If
// it's present, then the other checks are unnecessary.
if (command.getAuthInfo() == null
&& !clientId.equals(existingResource.getTransferData().getGainingClientId())
&& !clientId.equals(existingResource.getTransferData().getLosingClientId())) {
throw new NotAuthorizedToViewTransferException();
}
return createOutput(
Success, createTransferResponse(targetId, existingResource.getTransferData()));
}
}
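The authorization logic at the end of the query flow is compact but worth spelling out: a verified authInfo on the command is sufficient by itself, and without it the requesting registrar must be either the gaining or the losing party to the transfer. Reduced to a predicate, with hypothetical parameter names standing in for the real command and TransferData accessors:

// Hypothetical sketch of the "who may view a pending transfer" rule.
final class TransferViewPolicy {
  static boolean mayViewTransfer(
      boolean authInfoSuppliedAndVerified,
      String requestingClientId,
      String gainingClientId,
      String losingClientId) {
    // A verified authInfo is sufficient on its own; otherwise the caller must be a party
    // to the transfer.
    return authInfoSuppliedAndVerified
        || requestingClientId.equals(gainingClientId)
        || requestingClientId.equals(losingClientId);
  }
}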


@ -14,11 +14,29 @@
package google.registry.flows.contact;
import google.registry.flows.ResourceTransferRejectFlow;
import static google.registry.flows.ResourceFlowUtils.verifyAuthInfoForResource;
import static google.registry.flows.ResourceFlowUtils.verifyResourceOwnership;
import static google.registry.flows.contact.ContactFlowUtils.createGainingTransferPollMessage;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.NotPendingTransferException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.contact.ContactCommand.Transfer;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.transfer.TransferStatus;
import javax.inject.Inject;
/**
@ -26,16 +44,50 @@ import javax.inject.Inject;
*
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.ResourceMutatePendingTransferFlow.NotPendingTransferException}
* @error {@link google.registry.flows.exceptions.NotPendingTransferException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
*/
public class ContactTransferRejectFlow
extends ResourceTransferRejectFlow<ContactResource, Builder, Transfer> {
public class ContactTransferRejectFlow extends LoggedInFlow implements TransactionalFlow {
@Inject ResourceCommand resourceCommand;
@Inject @ClientId String clientId;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactTransferRejectFlow() {}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_TRANSFER_REJECT;
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
protected final EppOutput run() throws EppException {
Transfer command = (Transfer) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
if (command.getAuthInfo() != null) {
verifyAuthInfoForResource(command.getAuthInfo(), existingResource);
}
if (existingResource.getTransferData().getTransferStatus() != TransferStatus.PENDING) {
throw new NotPendingTransferException(targetId);
}
verifyResourceOwnership(clientId, existingResource);
ContactResource newResource = existingResource.asBuilder()
.clearPendingTransfer(TransferStatus.CLIENT_REJECTED, now)
.build();
HistoryEntry historyEntry = historyBuilder
.setType(HistoryEntry.Type.CONTACT_TRANSFER_REJECT)
.setModificationTime(now)
.setParent(Key.create(existingResource))
.build();
PollMessage gainingPollMessage =
createGainingTransferPollMessage(targetId, newResource.getTransferData(), historyEntry);
ofy().save().<Object>entities(newResource, historyEntry, gainingPollMessage);
// Delete the billing event and poll messages that were written in case the transfer would have
// been implicitly server approved.
ofy().delete().keys(existingResource.getTransferData().getServerApproveEntities());
return createOutput(Success, createTransferResponse(targetId, newResource.getTransferData()));
}
}


@ -14,35 +14,136 @@
package google.registry.flows.contact;
import google.registry.config.RegistryEnvironment;
import google.registry.flows.ResourceTransferRequestFlow;
import static google.registry.flows.ResourceFlowUtils.verifyAuthInfoForResource;
import static google.registry.flows.ResourceFlowUtils.verifyNoDisallowedStatuses;
import static google.registry.flows.contact.ContactFlowUtils.createGainingTransferPollMessage;
import static google.registry.flows.contact.ContactFlowUtils.createLosingTransferPollMessage;
import static google.registry.flows.contact.ContactFlowUtils.createTransferResponse;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.SuccessWithActionPending;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
import google.registry.config.ConfigModule.Config;
import google.registry.flows.EppException;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.AlreadyPendingTransferException;
import google.registry.flows.exceptions.MissingTransferRequestAuthInfoException;
import google.registry.flows.exceptions.ObjectAlreadySponsoredException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.model.contact.ContactCommand.Transfer;
import google.registry.model.contact.ContactResource;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.poll.PollMessage;
import google.registry.model.reporting.HistoryEntry;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferStatus;
import javax.inject.Inject;
import org.joda.time.DateTime;
import org.joda.time.Duration;
/**
* An EPP flow that requests a transfer on a {@link ContactResource}.
*
* @error {@link google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.ResourceTransferRequestFlow.AlreadyPendingTransferException}
* @error {@link google.registry.flows.ResourceTransferRequestFlow.MissingTransferRequestAuthInfoException}
* @error {@link google.registry.flows.ResourceTransferRequestFlow.ObjectAlreadySponsoredException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.exceptions.AlreadyPendingTransferException}
* @error {@link google.registry.flows.exceptions.MissingTransferRequestAuthInfoException}
* @error {@link google.registry.flows.exceptions.ObjectAlreadySponsoredException}
*/
public class ContactTransferRequestFlow
extends ResourceTransferRequestFlow<ContactResource, Transfer> {
public class ContactTransferRequestFlow extends LoggedInFlow implements TransactionalFlow {
private static final ImmutableSet<StatusValue> DISALLOWED_STATUSES = ImmutableSet.of(
StatusValue.CLIENT_TRANSFER_PROHIBITED,
StatusValue.PENDING_DELETE,
StatusValue.SERVER_TRANSFER_PROHIBITED);
@Inject ResourceCommand resourceCommand;
@Inject Optional<AuthInfo> authInfo;
@Inject @ClientId String gainingClientId;
@Inject @Config("contactAutomaticTransferLength") Duration automaticTransferLength;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactTransferRequestFlow() {}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_TRANSFER_REQUEST;
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
protected Duration getAutomaticTransferLength() {
return RegistryEnvironment.get().config().getContactAutomaticTransferLength();
protected final EppOutput run() throws EppException {
Transfer command = (Transfer) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
if (!authInfo.isPresent()) {
throw new MissingTransferRequestAuthInfoException();
}
verifyAuthInfoForResource(authInfo.get(), existingResource);
// Verify that the resource does not already have a pending transfer.
if (TransferStatus.PENDING.equals(existingResource.getTransferData().getTransferStatus())) {
throw new AlreadyPendingTransferException(targetId);
}
String losingClientId = existingResource.getCurrentSponsorClientId();
// Verify that this client doesn't already sponsor this resource.
if (gainingClientId.equals(losingClientId)) {
throw new ObjectAlreadySponsoredException();
}
verifyNoDisallowedStatuses(existingResource, DISALLOWED_STATUSES);
HistoryEntry historyEntry = historyBuilder
.setType(HistoryEntry.Type.CONTACT_TRANSFER_REQUEST)
.setModificationTime(now)
.setParent(Key.create(existingResource))
.build();
DateTime transferExpirationTime = now.plus(automaticTransferLength);
TransferData serverApproveTransferData = new TransferData.Builder()
.setTransferRequestTime(now)
.setTransferRequestTrid(trid)
.setGainingClientId(gainingClientId)
.setLosingClientId(losingClientId)
.setPendingTransferExpirationTime(transferExpirationTime)
.setTransferStatus(TransferStatus.SERVER_APPROVED)
.build();
// If the transfer is server approved, this message will be sent to the losing registrar.
PollMessage serverApproveLosingPollMessage =
createLosingTransferPollMessage(targetId, serverApproveTransferData, historyEntry);
// If the transfer is server approved, this message will be sent to the gaining registrar.
PollMessage serverApproveGainingPollMessage =
createGainingTransferPollMessage(targetId, serverApproveTransferData, historyEntry);
TransferData pendingTransferData = serverApproveTransferData.asBuilder()
.setTransferStatus(TransferStatus.PENDING)
.setServerApproveEntities(
ImmutableSet.<Key<? extends TransferData.TransferServerApproveEntity>>of(
Key.create(serverApproveGainingPollMessage),
Key.create(serverApproveLosingPollMessage)))
.build();
// When a transfer is requested, a poll message is created to notify the losing registrar.
PollMessage requestPollMessage =
createLosingTransferPollMessage(targetId, pendingTransferData, historyEntry).asBuilder()
.setEventTime(now) // Unlike the serverApprove messages, this applies immediately.
.build();
ContactResource newResource = existingResource.asBuilder()
.setTransferData(pendingTransferData)
.addStatusValue(StatusValue.PENDING_TRANSFER)
.build();
ofy().save().<Object>entities(
newResource,
historyEntry,
requestPollMessage,
serverApproveGainingPollMessage,
serverApproveLosingPollMessage);
return createOutput(
SuccessWithActionPending, createTransferResponse(targetId, newResource.getTransferData()));
}
}
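A detail that is easy to miss in the request flow above: the keys of the two server-approve poll messages are recorded on the pending TransferData via setServerApproveEntities, which is what lets the approve, reject, and cancel flows earlier in this commit delete the speculative writes with a single keys() delete. A self-contained sketch of that bookkeeping, with made-up types in place of the real PollMessage and TransferData model:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: write the server-approve artifacts up front, remember their keys,
// and let any later approve/reject/cancel path delete them in one shot.
final class SpeculativeTransferWrites {
  private final Map<String, String> datastore = new HashMap<>();

  Set<String> writeServerApproveEntities(String targetId) {
    String gainingKey = "poll/gaining/" + targetId;
    String losingKey = "poll/losing/" + targetId;
    datastore.put(gainingKey, "transfer was server-approved (gaining registrar)");
    datastore.put(losingKey, "transfer was server-approved (losing registrar)");
    return Set.of(gainingKey, losingKey);  // stored on the pending transfer for later cleanup
  }

  void deleteServerApproveEntities(Set<String> keys) {
    keys.forEach(datastore::remove);
  }
}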


@ -14,14 +14,36 @@
package google.registry.flows.contact;
import static google.registry.flows.ResourceFlowUtils.verifyNoDisallowedStatuses;
import static google.registry.flows.ResourceFlowUtils.verifyOptionalAuthInfoForResource;
import static google.registry.flows.ResourceFlowUtils.verifyResourceOwnership;
import static google.registry.flows.contact.ContactFlowUtils.validateAsciiPostalInfo;
import static google.registry.flows.contact.ContactFlowUtils.validateContactAgainstPolicy;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ResourceUpdateFlow;
import google.registry.flows.FlowModule.ClientId;
import google.registry.flows.LoggedInFlow;
import google.registry.flows.TransactionalFlow;
import google.registry.flows.exceptions.AddRemoveSameValueEppException;
import google.registry.flows.exceptions.ResourceHasClientUpdateProhibitedException;
import google.registry.flows.exceptions.ResourceToMutateDoesNotExistException;
import google.registry.flows.exceptions.StatusNotClientSettableException;
import google.registry.model.contact.ContactCommand.Update;
import google.registry.model.contact.ContactResource;
import google.registry.model.contact.ContactResource.Builder;
import google.registry.model.domain.metadata.MetadataExtension;
import google.registry.model.eppcommon.AuthInfo;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.eppinput.ResourceCommand;
import google.registry.model.eppinput.ResourceCommand.AddRemoveSameValueException;
import google.registry.model.eppoutput.EppOutput;
import google.registry.model.reporting.HistoryEntry;
import javax.inject.Inject;
@ -29,30 +51,81 @@ import javax.inject.Inject;
* An EPP flow that updates a contact resource.
*
* @error {@link google.registry.flows.ResourceFlowUtils.ResourceNotOwnedException}
* @error {@link google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.ResourceUpdateFlow.ResourceHasClientUpdateProhibitedException}
* @error {@link google.registry.flows.ResourceUpdateFlow.StatusNotClientSettableException}
* @error {@link google.registry.flows.SingleResourceFlow.ResourceStatusProhibitsOperationException}
* @error {@link google.registry.flows.exceptions.AddRemoveSameValueEppException}
* @error {@link google.registry.flows.exceptions.ResourceHasClientUpdateProhibitedException}
* @error {@link google.registry.flows.exceptions.ResourceStatusProhibitsOperationException}
* @error {@link google.registry.flows.exceptions.ResourceToMutateDoesNotExistException}
* @error {@link google.registry.flows.exceptions.StatusNotClientSettableException}
* @error {@link ContactFlowUtils.BadInternationalizedPostalInfoException}
* @error {@link ContactFlowUtils.DeclineContactDisclosureFieldDisallowedPolicyException}
*/
public class ContactUpdateFlow extends ResourceUpdateFlow<ContactResource, Builder, Update> {
public class ContactUpdateFlow extends LoggedInFlow implements TransactionalFlow {
/**
* Note that CLIENT_UPDATE_PROHIBITED is intentionally not in this list. This is because it
* requires special checking, since you must be able to clear the status off the object with an
* update.
*/
private static final ImmutableSet<StatusValue> DISALLOWED_STATUSES = ImmutableSet.of(
StatusValue.PENDING_DELETE,
StatusValue.SERVER_UPDATE_PROHIBITED);
@Inject ResourceCommand resourceCommand;
@Inject Optional<AuthInfo> authInfo;
@Inject @ClientId String clientId;
@Inject HistoryEntry.Builder historyBuilder;
@Inject ContactUpdateFlow() {}
@Override
protected void verifyNewUpdatedStateIsAllowed() throws EppException {
protected final void initLoggedInFlow() throws EppException {
registerExtensions(MetadataExtension.class);
}
@Override
public final EppOutput run() throws EppException {
Update command = (Update) resourceCommand;
String targetId = command.getTargetId();
ContactResource existingResource = loadByUniqueId(ContactResource.class, targetId, now);
if (existingResource == null) {
throw new ResourceToMutateDoesNotExistException(ContactResource.class, targetId);
}
verifyOptionalAuthInfoForResource(authInfo, existingResource);
if (!isSuperuser) {
verifyResourceOwnership(clientId, existingResource);
}
for (StatusValue statusValue : Sets.union(
command.getInnerAdd().getStatusValues(),
command.getInnerRemove().getStatusValues())) {
if (!isSuperuser && !statusValue.isClientSettable()) { // The superuser can set any status.
throw new StatusNotClientSettableException(statusValue.getXmlName());
}
}
verifyNoDisallowedStatuses(existingResource, DISALLOWED_STATUSES);
historyBuilder
.setType(HistoryEntry.Type.CONTACT_UPDATE)
.setModificationTime(now)
.setXmlBytes(null) // We don't want to store contact details in the history entry.
.setParent(Key.create(existingResource));
Builder builder = existingResource.asBuilder();
try {
command.applyTo(builder);
} catch (AddRemoveSameValueException e) {
throw new AddRemoveSameValueEppException();
}
ContactResource newResource = builder
.setLastEppUpdateTime(now)
.setLastEppUpdateClientId(clientId)
.build();
// If the resource is marked with clientUpdateProhibited, and this update did not clear that
// status, then the update must be disallowed (unless a superuser is requesting the change).
if (!isSuperuser
&& existingResource.getStatusValues().contains(StatusValue.CLIENT_UPDATE_PROHIBITED)
&& newResource.getStatusValues().contains(StatusValue.CLIENT_UPDATE_PROHIBITED)) {
throw new ResourceHasClientUpdateProhibitedException();
}
validateAsciiPostalInfo(newResource.getInternationalizedPostalInfo());
validateContactAgainstPolicy(newResource);
}
@Override
protected boolean storeXmlInHistoryEntry() {
return false;
}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.CONTACT_UPDATE;
ofy().save().<Object>entities(newResource, historyBuilder.build());
return createOutput(Success);
}
}
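The CLIENT_UPDATE_PROHIBITED handling is the one status check in this flow that cannot live in the DISALLOWED_STATUSES set, because the only update a non-superuser may make to such a contact is the one that clears the status. The rule boils down to a predicate over the before and after status sets; a sketch with plain strings standing in for StatusValue:

import java.util.Set;

// Hypothetical sketch of the clientUpdateProhibited rule: reject the update only when the
// status was present before, is still present after, and the caller is not a superuser.
final class ClientUpdateProhibitedRule {
  static final String CLIENT_UPDATE_PROHIBITED = "clientUpdateProhibited";

  static boolean updateMustBeRejected(
      Set<String> statusesBefore, Set<String> statusesAfter, boolean isSuperuser) {
    return !isSuperuser
        && statusesBefore.contains(CLIENT_UPDATE_PROHIBITED)
        && statusesAfter.contains(CLIENT_UPDATE_PROHIBITED);
  }
}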


@ -51,12 +51,14 @@ import google.registry.flows.EppException.StatusProhibitsOperationException;
import google.registry.flows.EppException.UnimplementedOptionException;
import google.registry.flows.ResourceCreateFlow;
import google.registry.flows.ResourceFlowUtils.BadAuthInfoForResourceException;
import google.registry.flows.domain.TldSpecificLogicProxy.EppCommandOperations;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainBase.Builder;
import google.registry.model.domain.DomainCommand.Create;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.LrpToken;
import google.registry.model.domain.fee.FeeTransformCommandExtension;
import google.registry.model.domain.flags.FlagsCreateCommandExtension;
import google.registry.model.domain.launch.LaunchCreateExtension;
import google.registry.model.domain.launch.LaunchNotice;
import google.registry.model.domain.launch.LaunchNotice.InvalidChecksumException;
@ -67,8 +69,6 @@ import google.registry.model.registry.Registry;
import google.registry.model.registry.Registry.TldState;
import google.registry.model.smd.SignedMark;
import google.registry.model.tmch.ClaimsListShard;
import google.registry.pricing.TldSpecificLogicProxy;
import google.registry.pricing.TldSpecificLogicProxy.EppCommandOperations;
import java.util.Set;
import javax.annotation.Nullable;
@ -95,16 +95,20 @@ public abstract class BaseDomainCreateFlow<R extends DomainBase, B extends Build
protected TldState tldState;
protected Optional<LrpToken> lrpToken;
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
@Override
public final void initResourceCreateOrMutateFlow() throws EppException {
command = cloneAndLinkReferences(command, now);
registerExtensions(SecDnsCreateExtension.class);
registerExtensions(SecDnsCreateExtension.class, FlagsCreateCommandExtension.class);
registerExtensions(FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
secDnsCreate = eppInput.getSingleExtension(SecDnsCreateExtension.class);
launchCreate = eppInput.getSingleExtension(LaunchCreateExtension.class);
feeCreate =
eppInput.getFirstExtensionOfClasses(FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
hasSignedMarks = launchCreate != null && !launchCreate.getSignedMarks().isEmpty();
initDomainCreateFlow();
// We can't initialize extraFlowLogic here, because the TLD has not been checked yet.
}
@Override
@ -181,9 +185,19 @@ public abstract class BaseDomainCreateFlow<R extends DomainBase, B extends Build
Registry registry = Registry.get(tld);
tldState = registry.getTldState(now);
checkRegistryStateForTld(tld);
// Now that the TLD has been verified, we can go ahead and initialize extraFlowLogic. The
// initialization and matching commit are done at the topmost possible level in the flow
// hierarchy, but the actual processing takes place only when needed in the children, e.g.
// DomainCreateFlow.
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForTld(tld);
domainLabel = domainName.parts().get(0);
commandOperations = TldSpecificLogicProxy.getCreatePrice(
registry, domainName.toString(), now, command.getPeriod().getValue());
registry,
domainName.toString(),
getClientId(),
now,
command.getPeriod().getValue(),
eppInput);
// The TLD should always be the parent of the requested domain name.
isAnchorTenantViaReservation = matchesAnchorTenantReservation(
domainLabel, tld, command.getAuthInfo().getPw().getValue());
@ -252,6 +266,9 @@ public abstract class BaseDomainCreateFlow<R extends DomainBase, B extends Build
.setRedemptionHistoryEntry(Key.create(historyEntry))
.build());
}
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
}
/** Validate the secDNS extension, if present. */


@ -26,6 +26,7 @@ import static google.registry.flows.domain.DomainFlowUtils.validateNoDuplicateCo
import static google.registry.flows.domain.DomainFlowUtils.validateRegistrantAllowedOnTld;
import static google.registry.flows.domain.DomainFlowUtils.validateRequiredContactsPresent;
import static google.registry.flows.domain.DomainFlowUtils.verifyNotInPendingDelete;
import static google.registry.model.domain.fee.Fee.FEE_UPDATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableSet;
@ -37,11 +38,13 @@ import google.registry.flows.ResourceUpdateFlow;
import google.registry.model.domain.DomainBase;
import google.registry.model.domain.DomainBase.Builder;
import google.registry.model.domain.DomainCommand.Update;
import google.registry.model.domain.fee.FeeTransformCommandExtension;
import google.registry.model.domain.secdns.DelegationSignerData;
import google.registry.model.domain.secdns.SecDnsUpdateExtension;
import google.registry.model.domain.secdns.SecDnsUpdateExtension.Add;
import google.registry.model.domain.secdns.SecDnsUpdateExtension.Remove;
import java.util.Set;
import javax.annotation.Nullable;
/**
* An EPP flow that updates a domain application or resource.
@ -52,18 +55,19 @@ import java.util.Set;
public abstract class BaseDomainUpdateFlow<R extends DomainBase, B extends Builder<R, B>>
extends ResourceUpdateFlow<R, B, Update> {
@Nullable
protected FeeTransformCommandExtension feeUpdate;
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
@Override
public final void initResourceCreateOrMutateFlow() throws EppException {
registerExtensions(FEE_UPDATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
feeUpdate =
eppInput.getFirstExtensionOfClasses(FEE_UPDATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
command = cloneAndLinkReferences(command, now);
initDomainUpdateFlow();
// In certain conditions (for instance, errors), there is no existing resource.
if (existingResource == null) {
extraFlowLogic = Optional.absent();
} else {
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForTld(existingResource.getTld());
}
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
}
@SuppressWarnings("unused")
@ -143,6 +147,18 @@ public abstract class BaseDomainUpdateFlow<R extends DomainBase, B extends Build
validateNameserversCountForTld(newResource.getTld(), newResource.getNameservers().size());
}
/** Call the subclass method, then commit any extra flow logic. */
@Override
protected final void modifyRelatedResources() {
modifyUpdateRelatedResources();
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
}
/** Modify any other resources that need to be informed of this update. */
protected void modifyUpdateRelatedResources() {}
/** The secDNS:all element must have value 'true' if present. */
static class SecDnsAllUsageException extends ParameterValuePolicyErrorException {
public SecDnsAllUsageException() {


@ -16,7 +16,6 @@ package google.registry.flows.domain;
import static google.registry.flows.domain.DomainFlowUtils.DISALLOWED_TLD_STATES_FOR_LAUNCH_FLOWS;
import static google.registry.flows.domain.DomainFlowUtils.validateFeeChallenge;
import static google.registry.model.domain.fee.Fee.FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER;
import static google.registry.model.eppoutput.Result.Code.Success;
import static google.registry.model.index.DomainApplicationIndex.loadActiveApplicationsByDomainName;
import static google.registry.model.index.ForeignKeyIndex.loadAndGetKey;
@ -118,7 +117,6 @@ public class DomainApplicationCreateFlow extends BaseDomainCreateFlow<DomainAppl
@Override
protected void initDomainCreateFlow() {
registerExtensions(LaunchCreateExtension.class);
registerExtensions(FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
}
@Override
@ -215,6 +213,7 @@ public class DomainApplicationCreateFlow extends BaseDomainCreateFlow<DomainAppl
responseExtensionsBuilder.add(feeCreate.createResponseBuilder()
.setCurrency(commandOperations.getCurrency())
.setFees(commandOperations.getFees())
.setCredits(commandOperations.getCredits())
.build());
}


@ -23,7 +23,6 @@ import static google.registry.model.index.DomainApplicationIndex.loadActiveAppli
import static google.registry.model.registry.label.ReservationType.UNRESERVED;
import static google.registry.pricing.PricingEngineProxy.getPricesForDomainName;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.DomainNameUtils.getTldFromDomainName;
import com.google.common.base.Predicate;
import com.google.common.collect.FluentIterable;
@ -49,7 +48,6 @@ import google.registry.model.registry.label.ReservationType;
import java.util.Collections;
import java.util.Set;
import javax.inject.Inject;
import org.joda.money.CurrencyUnit;
/**
* An EPP flow that checks whether a domain can be provisioned.
@ -148,7 +146,7 @@ public class DomainCheckFlow extends BaseDomainCheckFlow {
// If this version of the fee extension is nameless, use the full list of domains.
return domainNames.keySet();
}
}
}
/** Handle the fee check extension. */
@Override
@ -160,7 +158,6 @@ public class DomainCheckFlow extends BaseDomainCheckFlow {
if (feeCheck == null) {
return null; // No fee checks were requested.
}
CurrencyUnit topLevelCurrency = feeCheck.isCurrencySupported() ? feeCheck.getCurrency() : null;
ImmutableList.Builder<FeeCheckResponseExtensionItem> feeCheckResponseItemsBuilder =
new ImmutableList.Builder<>();
for (FeeCheckCommandExtensionItem feeCheckItem : feeCheck.getItems()) {
@ -169,10 +166,11 @@ public class DomainCheckFlow extends BaseDomainCheckFlow {
handleFeeRequest(
feeCheckItem,
builder,
domainName,
getTldFromDomainName(domainName),
topLevelCurrency,
now);
domainNames.get(domainName),
getClientId(),
feeCheck.getCurrency(),
now,
eppInput);
feeCheckResponseItemsBuilder
.add(builder.setDomainNameIfSupported(domainName).build());
}
@ -180,7 +178,7 @@ public class DomainCheckFlow extends BaseDomainCheckFlow {
return ImmutableList.<ResponseExtension>of(
feeCheck.createResponse(feeCheckResponseItemsBuilder.build()));
}
/** By server policy, fee check names must be listed in the availability check. */
static class OnlyCheckedNamesCanBeFeeCheckedException extends ParameterValuePolicyErrorException {
OnlyCheckedNamesCanBeFeeCheckedException() {

View file

@ -16,11 +16,9 @@ package google.registry.flows.domain;
import static com.google.common.collect.Sets.union;
import static google.registry.flows.domain.DomainFlowUtils.validateFeeChallenge;
import static google.registry.model.domain.fee.Fee.FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER;
import static google.registry.model.index.DomainApplicationIndex.loadActiveApplicationsByDomainName;
import static google.registry.model.ofy.ObjectifyService.ofy;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;
import google.registry.flows.EppException;
@ -127,7 +125,6 @@ public class DomainCreateFlow extends DomainCreateOrAllocateFlow {
@Override
protected final void initDomainCreateOrAllocateFlow() {
registerExtensions(LaunchCreateExtension.class);
registerExtensions(FEE_CREATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
}
@Override
@ -190,6 +187,17 @@ public class DomainCreateFlow extends DomainCreateOrAllocateFlow {
.setLaunchNotice(launchCreate.getNotice())
.setSmdId(signedMark == null ? null : signedMark.getId());
}
// Handle extra flow logic, if any. The initialization and commit are performed higher up in the
// flow hierarchy, in BaseDomainCreateFlow.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainCreateLogic(
existingResource,
getClientId(),
now,
command.getPeriod().getValue(),
eppInput,
historyEntry);
}
}
@Override

View file

@ -110,6 +110,7 @@ public abstract class DomainCreateOrAllocateFlow
feeCreate.createResponseBuilder()
.setCurrency(commandOperations.getCurrency())
.setFees(commandOperations.getFees())
.setCredits(commandOperations.getCredits())
.build()));
}
}

View file

@ -23,6 +23,7 @@ import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.pricing.PricingEngineProxy.getDomainRenewCost;
import static google.registry.util.CollectionUtils.nullToEmpty;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
@ -36,6 +37,7 @@ import google.registry.model.domain.DomainCommand.Delete;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.DomainResource.Builder;
import google.registry.model.domain.GracePeriod;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Credit;
import google.registry.model.domain.fee.FeeTransformResponseExtension;
import google.registry.model.domain.fee06.FeeDeleteResponseExtensionV06;
@ -76,11 +78,14 @@ public class DomainDeleteFlow extends ResourceSyncDeleteFlow<DomainResource, Bui
ImmutableList<Credit> credits;
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
@Inject DomainDeleteFlow() {}
@Override
protected void initResourceCreateOrMutateFlow() throws EppException {
registerExtensions(SecDnsUpdateExtension.class);
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
}
@Override
@ -93,7 +98,7 @@ public class DomainDeleteFlow extends ResourceSyncDeleteFlow<DomainResource, Bui
}
@Override
protected final void setDeleteProperties(Builder builder) {
protected final void setDeleteProperties(Builder builder) throws EppException {
// Only set to PENDING_DELETE if this domain is not in the Add Grace Period. If the domain is in
// the Add Grace Period, we delete it immediately.
// The base class code already handles the immediate delete case, so we only have to handle the
@ -122,6 +127,12 @@ public class DomainDeleteFlow extends ResourceSyncDeleteFlow<DomainResource, Bui
getClientId())))
.setDeletePollMessage(Key.create(deletePollMessage));
}
// Handle extra flow logic, if any.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainDeleteLogic(
existingResource, getClientId(), now, eppInput, historyEntry);
}
}
@Override
@ -151,8 +162,7 @@ public class DomainDeleteFlow extends ResourceSyncDeleteFlow<DomainResource, Bui
ofy().load().key(checkNotNull(gracePeriod.getOneTimeBillingEvent())).now().getCost();
}
creditsBuilder.add(Credit.create(
cost.negated().getAmount(),
String.format("%s credit", gracePeriod.getType().getXmlName())));
cost.negated().getAmount(), FeeType.CREDIT, gracePeriod.getType().getXmlName()));
creditsCurrencyUnit = cost.getCurrencyUnit();
}
}
@ -165,6 +175,10 @@ public class DomainDeleteFlow extends ResourceSyncDeleteFlow<DomainResource, Bui
// Close the autorenew billing event and poll message. This may delete the poll message.
updateAutorenewRecurrenceEndTime(existingResource, now);
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
// If there's a pending transfer, the gaining client's autorenew billing
// event and poll message will already have been deleted in
// ResourceDeleteFlow since it's listed in serverApproveEntities.
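For illustration, here is a small hedged sketch of the credit built by the grace-period loop above, using the new three-argument Credit.create form. The currency, amount, and grace-period label are hypothetical stand-ins for values the flow actually reads from the grace period's billing event (assumes java.math.BigDecimal plus the joda-money and fee imports already present in this file):

Money graceCost = Money.of(CurrencyUnit.USD, new BigDecimal("8.00"));  // hypothetical billed cost
Credit graceCredit = Credit.create(
    graceCost.negated().getAmount(),  // credits carry a negated amount
    FeeType.CREDIT,
    "addPeriod");                     // hypothetical grace-period XML name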

View file

@ -61,6 +61,7 @@ import google.registry.model.domain.DomainCommand.CreateOrUpdate;
import google.registry.model.domain.DomainCommand.InvalidReferencesException;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.Period;
import google.registry.model.domain.fee.Credit;
import google.registry.model.domain.fee.Fee;
import google.registry.model.domain.fee.FeeCheckCommandExtensionItem;
import google.registry.model.domain.fee.FeeCheckResponseExtensionItem;
@ -71,6 +72,7 @@ import google.registry.model.domain.launch.LaunchExtension;
import google.registry.model.domain.launch.LaunchPhase;
import google.registry.model.domain.secdns.DelegationSignerData;
import google.registry.model.eppcommon.StatusValue;
import google.registry.model.eppinput.EppInput;
import google.registry.model.eppinput.ResourceCommand.SingleResourceCommand;
import google.registry.model.host.HostResource;
import google.registry.model.mark.Mark;
@ -86,7 +88,6 @@ import google.registry.model.smd.AbstractSignedMark;
import google.registry.model.smd.EncodedSignedMark;
import google.registry.model.smd.SignedMark;
import google.registry.model.smd.SignedMarkRevocationList;
import google.registry.pricing.TldSpecificLogicProxy;
import google.registry.tmch.TmchXmlSignature;
import google.registry.tmch.TmchXmlSignature.CertificateSignatureException;
import google.registry.util.Idn;
@ -563,12 +564,13 @@ public class DomainFlowUtils {
static void handleFeeRequest(
FeeQueryCommandExtensionItem feeRequest,
FeeQueryResponseExtensionItem.Builder builder,
String domainName,
String tld,
InternetDomainName domain,
String clientIdentifier,
@Nullable CurrencyUnit topLevelCurrency,
DateTime now) throws EppException {
InternetDomainName domain = InternetDomainName.from(domainName);
Registry registry = Registry.get(tld);
DateTime now,
EppInput eppInput) throws EppException {
String domainNameString = domain.toString();
Registry registry = Registry.get(domain.parent().toString());
int years = verifyUnitIsYears(feeRequest.getPeriod()).getValue();
boolean isSunrise = registry.getTldState(now).equals(TldState.SUNRISE);
@ -577,7 +579,7 @@ public class DomainFlowUtils {
}
CurrencyUnit currency =
feeRequest.isCurrencySupported() ? feeRequest.getCurrency() : topLevelCurrency;
feeRequest.getCurrency() != null ? feeRequest.getCurrency() : topLevelCurrency;
if ((currency != null) && !currency.equals(registry.getCurrency())) {
throw new CurrencyUnitMismatchException();
}
@ -586,11 +588,9 @@ public class DomainFlowUtils {
.setCommand(feeRequest.getCommandName(), feeRequest.getPhase(), feeRequest.getSubphase())
.setCurrencyIfSupported(registry.getCurrency())
.setPeriod(feeRequest.getPeriod())
.setClass(TldSpecificLogicProxy.getFeeClass(domainName, now).orNull());
.setClass(TldSpecificLogicProxy.getFeeClass(domainNameString, now).orNull());
switch (feeRequest.getCommandName()) {
case UNKNOWN:
throw new UnknownFeeCommandException(feeRequest.getUnparsedCommandName());
case CREATE:
if (isReserved(domain, isSunrise)) { // Don't return a create price for reserved names.
builder.setClass("reserved"); // Override whatever class we've set above.
@ -598,24 +598,35 @@ public class DomainFlowUtils {
builder.setReasonIfSupported("reserved");
} else {
builder.setAvailIfSupported(true);
builder.setFees(
TldSpecificLogicProxy.getCreatePrice(registry, domainName, now, years).getFees());
builder.setFees(TldSpecificLogicProxy.getCreatePrice(
registry, domainNameString, clientIdentifier, now, years, eppInput).getFees());
}
break;
case RENEW:
builder.setAvailIfSupported(true);
builder.setFees(TldSpecificLogicProxy.getRenewPrice(
registry, domainNameString, clientIdentifier, now, years, eppInput).getFees());
break;
case RESTORE:
if (years != 1) {
throw new RestoresAreAlwaysForOneYearException();
}
builder.setAvailIfSupported(true);
builder.setFees(
TldSpecificLogicProxy.getRestorePrice(registry, domainName, now, years).getFees());
builder.setFees(TldSpecificLogicProxy.getRestorePrice(
registry, domainNameString, clientIdentifier, now, eppInput).getFees());
break;
// TODO(mountford): handle UPDATE
default:
// Anything else (transfer|renew) will have a "renew" fee.
case TRANSFER:
builder.setAvailIfSupported(true);
builder.setFees(
TldSpecificLogicProxy.getRenewPrice(registry, domainName, now, years).getFees());
builder.setFees(TldSpecificLogicProxy.getTransferPrice(
registry, domainNameString, clientIdentifier, now, years, eppInput).getFees());
break;
case UPDATE:
builder.setAvailIfSupported(true);
builder.setFees(TldSpecificLogicProxy.getUpdatePrice(
registry, domainNameString, clientIdentifier, now, eppInput).getFees());
break;
default:
throw new UnknownFeeCommandException(feeRequest.getUnparsedCommandName());
}
}
@ -646,6 +657,12 @@ public class DomainFlowUtils {
}
total = total.add(fee.getCost());
}
for (Credit credit : feeCommand.getCredits()) {
if (!credit.hasDefaultAttributes()) {
throw new UnsupportedFeeAttributeException();
}
total = total.add(credit.getCost());
}
Money feeTotal = null;
try {
@ -953,6 +970,13 @@ public class DomainFlowUtils {
}
}
/** Fees must be explicitly acknowledged when performing an update which is not free. */
static class FeesRequiredForNonFreeUpdateException extends RequiredParameterMissingException {
FeesRequiredForNonFreeUpdateException() {
super("Fees must be explicitly acknowledged when performing an update which is not free.");
}
}
/** The 'grace-period', 'applied' and 'refundable' fields are disallowed by server policy. */
static class UnsupportedFeeAttributeException extends UnimplementedOptionException {
UnsupportedFeeAttributeException() {
@ -1030,4 +1054,3 @@ public class DomainFlowUtils {
}
}
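As a worked example (with hypothetical amounts) of the summation above: each acknowledged fee adds to the running total, each credit carries a negative cost and subtracts from it, and the result is what gets wrapped as Money in the fee command's currency and checked against the server-side price:

BigDecimal total = BigDecimal.ZERO;
total = total.add(new BigDecimal("10.00"));  // e.g. an acknowledged CREATE fee
total = total.add(new BigDecimal("-2.00"));  // e.g. an acknowledged credit
// total is now 8.00, the amount compared against the registry's expected cost.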

View file

@ -19,6 +19,7 @@ import static google.registry.flows.domain.DomainFlowUtils.handleFeeRequest;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.net.InternetDomainName;
import google.registry.flows.EppException;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.DomainResource.Builder;
@ -97,15 +98,16 @@ public class DomainInfoFlow extends BaseDomainInfoFlow<DomainResource, Builder>
handleFeeRequest(
feeInfo,
builder,
getTargetId(),
existingResource.getTld(),
InternetDomainName.from(getTargetId()),
getClientId(),
null,
now);
now,
eppInput);
extensions.add(builder.build());
}
// If the TLD uses the flags extension, add it to the info response.
Optional<RegistryExtraFlowLogic> extraLogicManager =
RegistryExtraFlowLogicProxy.newInstanceForTld(existingResource.getTld());
RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
if (extraLogicManager.isPresent()) {
List<String> flags = extraLogicManager.get().getExtensionFlags(
existingResource, this.getClientId(), now); // As-of date is always now for info commands.

View file

@ -28,6 +28,7 @@ import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.pricing.PricingEngineProxy.getDomainRenewCost;
import static google.registry.util.DateTimeUtils.leapSafeAddYears;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
@ -84,6 +85,8 @@ public class DomainRenewFlow extends OwnedResourceMutateFlow<DomainResource, Ren
protected FeeTransformCommandExtension feeRenew;
protected Money renewCost;
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
@Inject DomainRenewFlow() {}
@Override
@ -96,6 +99,7 @@ public class DomainRenewFlow extends OwnedResourceMutateFlow<DomainResource, Ren
registerExtensions(FEE_RENEW_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
feeRenew =
eppInput.getFirstExtensionOfClasses(FEE_RENEW_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
}
@Override
@ -117,7 +121,7 @@ public class DomainRenewFlow extends OwnedResourceMutateFlow<DomainResource, Ren
}
@Override
protected DomainResource createOrMutateResource() {
protected DomainResource createOrMutateResource() throws EppException {
DateTime newExpirationTime = leapSafeAddYears(
existingResource.getRegistrationExpirationTime(), command.getPeriod().getValue());
// Bill for this explicit renew itself.
@ -143,6 +147,18 @@ public class DomainRenewFlow extends OwnedResourceMutateFlow<DomainResource, Ren
.setEventTime(newExpirationTime)
.setParent(historyEntry)
.build();
// Handle extra flow logic, if any.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainRenewLogic(
existingResource,
getClientId(),
now,
command.getPeriod().getValue(),
eppInput,
historyEntry);
}
ofy().save().<Object>entities(explicitRenewEvent, newAutorenewEvent, newAutorenewPollMessage);
return existingResource.asBuilder()
.setRegistrationExpirationTime(newExpirationTime)
@ -160,6 +176,14 @@ public class DomainRenewFlow extends OwnedResourceMutateFlow<DomainResource, Ren
}
}
/** Commit any extra flow logic. */
@Override
protected final void modifyRelatedResources() {
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
}
@Override
protected final HistoryEntry.Type getHistoryEntryType() {
return HistoryEntry.Type.DOMAIN_RENEW;

View file

@ -26,6 +26,7 @@ import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.pricing.PricingEngineProxy.getDomainRenewCost;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.google.common.net.InternetDomainName;
import com.googlecode.objectify.Key;
@ -75,6 +76,7 @@ public class DomainRestoreRequestFlow extends OwnedResourceMutateFlow<DomainReso
protected FeeTransformCommandExtension feeUpdate;
protected Money restoreCost;
protected Money renewCost;
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
@Inject DomainRestoreRequestFlow() {}
@ -82,6 +84,7 @@ public class DomainRestoreRequestFlow extends OwnedResourceMutateFlow<DomainReso
protected final void initResourceCreateOrMutateFlow() throws EppException {
registerExtensions(RgpUpdateExtension.class);
registerExtensions(FEE_UPDATE_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
}
@Override
@ -155,6 +158,13 @@ public class DomainRestoreRequestFlow extends OwnedResourceMutateFlow<DomainReso
.build();
ofy().save().<Object>entities(restoreEvent, autorenewEvent, autorenewPollMessage, renewEvent);
// Handle extra flow logic, if any.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainRestoreLogic(
existingResource, getClientId(), now, eppInput, historyEntry);
}
return existingResource.asBuilder()
.setRegistrationExpirationTime(newExpirationTime)
.setDeletionTime(END_OF_TIME)
@ -171,6 +181,10 @@ public class DomainRestoreRequestFlow extends OwnedResourceMutateFlow<DomainReso
// Update the relevant {@link ForeignKey} to cache the new deletion time.
ofy().save().entity(ForeignKeyIndex.create(newResource, newResource.getDeletionTime()));
ofy().delete().key(existingResource.getDeletePollMessage());
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
}
@Override

View file

@ -25,6 +25,7 @@ import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.pricing.PricingEngineProxy.getDomainRenewCost;
import static google.registry.util.DateTimeUtils.END_OF_TIME;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.googlecode.objectify.Key;
@ -39,6 +40,7 @@ import google.registry.model.domain.Period;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Fee;
import google.registry.model.domain.fee.FeeTransformCommandExtension;
import google.registry.model.domain.flags.FlagsTransferCommandExtension;
import google.registry.model.eppoutput.EppResponse.ResponseExtension;
import google.registry.model.poll.PollMessage;
import google.registry.model.registry.Registry;
@ -87,6 +89,9 @@ public class DomainTransferRequestFlow
/** The amount that this transfer will cost due to the implied renew. */
private Money renewCost;
/** Extra flow logic instance. */
protected Optional<RegistryExtraFlowLogic> extraFlowLogic;
/**
* An optional extension from the client specifying how much they think the transfer should cost.
@ -101,7 +106,8 @@ public class DomainTransferRequestFlow
}
@Override
protected final void initResourceTransferRequestFlow() {
protected final void initResourceTransferRequestFlow() throws EppException {
registerExtensions(FlagsTransferCommandExtension.class);
registerExtensions(FEE_TRANSFER_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
feeTransfer = eppInput.getFirstExtensionOfClasses(
FEE_TRANSFER_COMMAND_EXTENSIONS_IN_PREFERENCE_ORDER);
@ -146,6 +152,7 @@ public class DomainTransferRequestFlow
.setMsg("Domain was auto-renewed.")
.setParent(historyEntry)
.build();
extraFlowLogic = RegistryExtraFlowLogicProxy.newInstanceForDomain(existingResource);
}
@Override
@ -174,12 +181,23 @@ public class DomainTransferRequestFlow
}
@Override
protected void setTransferDataProperties(TransferData.Builder builder) {
protected void setTransferDataProperties(TransferData.Builder builder) throws EppException {
builder
.setServerApproveBillingEvent(Key.create(transferBillingEvent))
.setServerApproveAutorenewEvent(Key.create(gainingClientAutorenewEvent))
.setServerApproveAutorenewPollMessage(Key.create(gainingClientAutorenewPollMessage))
.setExtendedRegistrationYears(command.getPeriod().getValue());
// Handle extra flow logic, if any.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainTransferLogic(
existingResource,
getClientId(),
now,
command.getPeriod().getValue(),
eppInput,
historyEntry);
}
}
/**
@ -233,6 +251,10 @@ public class DomainTransferRequestFlow
// transfer occurs, then the logic in cloneProjectedAtTime() will move the
// serverApproveAutoRenewEvent into the autoRenewEvent field.
updateAutorenewRecurrenceEndTime(existingResource, automaticTransferTime);
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalLogicChanges();
}
}
@Override

View file

@ -15,6 +15,7 @@
package google.registry.flows.domain;
import static com.google.common.collect.Sets.symmetricDifference;
import static google.registry.flows.domain.DomainFlowUtils.validateFeeChallenge;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.util.DateTimeUtils.earliestOf;
@ -23,6 +24,8 @@ import com.google.common.base.Predicate;
import com.google.common.collect.Iterables;
import google.registry.dns.DnsQueue;
import google.registry.flows.EppException;
import google.registry.flows.domain.DomainFlowUtils.FeesRequiredForNonFreeUpdateException;
import google.registry.flows.domain.TldSpecificLogicProxy.EppCommandOperations;
import google.registry.model.billing.BillingEvent;
import google.registry.model.billing.BillingEvent.Reason;
import google.registry.model.domain.DomainResource;
@ -55,6 +58,8 @@ import org.joda.time.DateTime;
* @error {@link BaseDomainUpdateFlow.SecDnsAllUsageException}
* @error {@link BaseDomainUpdateFlow.UrgentAttributeNotSupportedException}
* @error {@link DomainFlowUtils.DuplicateContactForRoleException}
* @error {@link DomainFlowUtils.FeesMismatchException}
* @error {@link DomainFlowUtils.FeesRequiredForNonFreeUpdateException}
* @error {@link DomainFlowUtils.LinkedResourcesDoNotExistException}
* @error {@link DomainFlowUtils.LinkedResourceInPendingDeleteProhibitsOperationException}
* @error {@link DomainFlowUtils.MissingAdminContactException}
@ -132,13 +137,29 @@ public class DomainUpdateFlow extends BaseDomainUpdateFlow<DomainResource, Build
// Handle extra flow logic, if any.
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().performAdditionalDomainUpdateLogic(
existingResource, getClientId(), now, eppInput);
existingResource, getClientId(), now, eppInput, historyEntry);
}
return builder;
}
@Override
protected final void modifyRelatedResources() {
protected final void verifyDomainUpdateIsAllowed() throws EppException {
EppCommandOperations commandOperations = TldSpecificLogicProxy.getUpdatePrice(
Registry.get(existingResource.getTld()),
existingResource.getFullyQualifiedDomainName(),
getClientId(),
now,
eppInput);
// The fee extension must be present if the update is not free.
if ((feeUpdate == null) && !commandOperations.getTotalCost().isZero()) {
throw new FeesRequiredForNonFreeUpdateException();
}
validateFeeChallenge(
targetId, existingResource.getTld(), now, feeUpdate, commandOperations.getTotalCost());
}
@Override
protected final void modifyUpdateRelatedResources() {
// Determine the status changes, and filter to server statuses.
// If any of these statuses have been added or removed, bill once.
if (metadataExtension != null && metadataExtension.getRequestedByRegistrar()) {
@ -161,10 +182,6 @@ public class DomainUpdateFlow extends BaseDomainUpdateFlow<DomainResource, Build
ofy().save().entity(billingEvent);
}
}
if (extraFlowLogic.isPresent()) {
extraFlowLogic.get().commitAdditionalDomainUpdates();
}
}
@Override

View file

@ -16,7 +16,9 @@ package google.registry.flows.domain;
import google.registry.flows.EppException;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.fee.BaseFee;
import google.registry.model.eppinput.EppInput;
import google.registry.model.reporting.HistoryEntry;
import java.util.List;
import org.joda.time.DateTime;
@ -26,20 +28,102 @@ import org.joda.time.DateTime;
*/
public interface RegistryExtraFlowLogic {
/** Get the flags to be used in the EPP flags extension. This is used for EPP info commands. */
/** Gets the flags to be used in the EPP flags extension. This is used for EPP info commands. */
public List<String> getExtensionFlags(
DomainResource domainResource, String clientIdentifier, DateTime asOfDate);
/** Computes the expected creation fee, for use in fee challenges and the like. */
public BaseFee getCreateFeeOrCredit(
String domainName,
String clientIdentifier,
DateTime asOfDate,
int years,
EppInput eppInput) throws EppException;
/**
* Add and remove flags passed via the EPP flags extension. Any changes should not be persisted to
* Datastore until commitAdditionalDomainUpdates is called. Name suggested by Benjamin McIlwain.
* Performs additional tasks required for a create command. Any changes should not be persisted to
* Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainUpdateLogic(
DomainResource domainResource,
public void performAdditionalDomainCreateLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
int years,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/**
* Performs additional tasks required for a delete command. Any changes should not be persisted to
* Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainDeleteLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/** Computes the expected renewal fee, for use in fee challenges and the like. */
public BaseFee getRenewFeeOrCredit(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
int years,
EppInput eppInput) throws EppException;
/**
* Performs additional tasks required for a renew command. Any changes should not be persisted
* to Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainRenewLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
int years,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/**
* Performs additional tasks required for a restore command. Any changes should not be persisted
* to Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainRestoreLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/**
* Performs additional tasks required for a transfer command. Any changes should not be persisted
* to Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainTransferLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
int years,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/** Computes the expected update fee, for use in fee challenges and the like. */
public BaseFee getUpdateFeeOrCredit(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
EppInput eppInput) throws EppException;
/** Commit any changes made as a result of a call to performAdditionalDomainUpdateLogic(). */
public void commitAdditionalDomainUpdates();
/**
* Performs additional tasks required for an update command. Any changes should not be persisted
* to Datastore until commitAdditionalLogicChanges is called.
*/
public void performAdditionalDomainUpdateLogic(
DomainResource domain,
String clientIdentifier,
DateTime asOfDate,
EppInput eppInput,
HistoryEntry historyEntry) throws EppException;
/** Commits any changes made as a result of a call to one of the performXXX methods. */
public void commitAdditionalLogicChanges();
}
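For orientation only, here is a minimal hypothetical implementation sketch of this interface. The class name FlatFeeExtraFlowLogic, the flat $8/year fee, and the no-op bodies are illustrative assumptions, not part of this change; it charges a flat per-year fee for creates and renews, reports a zero-cost update fee, adds no flags, and stages nothing to commit.

package google.registry.flows.domain;  // assumed package, alongside the interface

import com.google.common.collect.ImmutableList;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.fee.BaseFee;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Fee;
import google.registry.model.eppinput.EppInput;
import google.registry.model.reporting.HistoryEntry;
import java.math.BigDecimal;
import java.util.List;
import org.joda.time.DateTime;

/** Hypothetical example callout: flat per-year fee, free updates, no flags, no staged changes. */
public class FlatFeeExtraFlowLogic implements RegistryExtraFlowLogic {

  private static final BigDecimal FLAT_YEARLY_FEE = new BigDecimal("8.00");  // hypothetical

  @Override
  public List<String> getExtensionFlags(
      DomainResource domainResource, String clientIdentifier, DateTime asOfDate) {
    return ImmutableList.of();  // No extra flags to report on info commands.
  }

  @Override
  public BaseFee getCreateFeeOrCredit(
      String domainName, String clientIdentifier, DateTime asOfDate, int years, EppInput eppInput) {
    return Fee.create(FLAT_YEARLY_FEE.multiply(BigDecimal.valueOf(years)), FeeType.CREATE);
  }

  @Override
  public void performAdditionalDomainCreateLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, int years, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public void performAdditionalDomainDeleteLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public BaseFee getRenewFeeOrCredit(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, int years, EppInput eppInput) {
    return Fee.create(FLAT_YEARLY_FEE.multiply(BigDecimal.valueOf(years)), FeeType.RENEW);
  }

  @Override
  public void performAdditionalDomainRenewLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, int years, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public void performAdditionalDomainRestoreLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public void performAdditionalDomainTransferLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, int years, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public BaseFee getUpdateFeeOrCredit(
      DomainResource domain, String clientIdentifier, DateTime asOfDate, EppInput eppInput) {
    return Fee.create(BigDecimal.ZERO, FeeType.UPDATE);  // Updates are free in this sketch.
  }

  @Override
  public void performAdditionalDomainUpdateLogic(DomainResource domain, String clientIdentifier,
      DateTime asOfDate, EppInput eppInput, HistoryEntry historyEntry) {}

  @Override
  public void commitAdditionalLogicChanges() {}  // Nothing was staged, so nothing to commit.
}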

View file

@ -15,8 +15,12 @@
package google.registry.flows.domain;
import com.google.common.base.Optional;
import google.registry.flows.EppException;
import google.registry.flows.EppException.CommandFailedException;
import google.registry.model.domain.DomainBase;
import google.registry.model.registry.Registry;
import java.util.HashMap;
import javax.annotation.Nullable;
/**
* Static class to return the correct {@link RegistryExtraFlowLogic} for a particular TLD.
@ -36,12 +40,23 @@ public class RegistryExtraFlowLogicProxy {
extraLogicOverrideMap.put(tld, extraLogicClass);
}
public static Optional<RegistryExtraFlowLogic> newInstanceForTld(String tld) {
public static <D extends DomainBase> Optional<RegistryExtraFlowLogic>
newInstanceForDomain(@Nullable D domain) throws EppException {
if (domain == null) {
return Optional.absent();
} else {
return newInstanceForTld(domain.getTld());
}
}
public static Optional<RegistryExtraFlowLogic>
newInstanceForTld(String tld) throws EppException {
if (extraLogicOverrideMap.containsKey(tld)) {
try {
return Optional.<RegistryExtraFlowLogic>of(extraLogicOverrideMap.get(tld).newInstance());
} catch (InstantiationException | IllegalAccessException e) {
return Optional.absent();
return Optional.<RegistryExtraFlowLogic>of(
extraLogicOverrideMap.get(tld).getConstructor().newInstance());
} catch (ReflectiveOperationException ex) {
throw new CommandFailedException();
}
}
return Optional.absent();
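A hypothetical usage sketch (the TLD string is made up, and the call is assumed to sit inside a flow method that already declares throws EppException, as the flows above do):

Optional<RegistryExtraFlowLogic> extraLogic =
    RegistryExtraFlowLogicProxy.newInstanceForTld("example");  // hypothetical TLD
if (extraLogic.isPresent()) {
  // Only invoke the callout when an override is registered for this TLD.
  extraLogic.get().commitAdditionalLogicChanges();
}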

View file

@ -0,0 +1,297 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkState;
import static google.registry.model.EppResourceUtils.loadByUniqueId;
import static google.registry.model.ofy.ObjectifyService.ofy;
import static google.registry.pricing.PricingEngineProxy.getPricesForDomainName;
import static google.registry.util.CollectionUtils.nullToEmpty;
import static google.registry.util.PreconditionsUtils.checkArgumentNotNull;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import com.googlecode.objectify.Key;
import google.registry.flows.EppException;
import google.registry.flows.ResourceMutateFlow.ResourceToMutateDoesNotExistException;
import google.registry.model.ImmutableObject;
import google.registry.model.domain.DomainCommand.Create;
import google.registry.model.domain.DomainResource;
import google.registry.model.domain.LrpToken;
import google.registry.model.domain.fee.BaseFee;
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Credit;
import google.registry.model.domain.fee.EapFee;
import google.registry.model.domain.fee.Fee;
import google.registry.model.eppinput.EppInput;
import google.registry.model.pricing.PremiumPricingEngine.DomainPrices;
import google.registry.model.registry.Registry;
import java.util.List;
import org.joda.money.CurrencyUnit;
import org.joda.money.Money;
import org.joda.time.DateTime;
/**
* Provides pricing, billing, and update logic, with call-outs that can be customized by providing
* implementations on a per-TLD basis.
*/
public final class TldSpecificLogicProxy {
/** A collection of fees and credits for a specific EPP transform. */
public static final class EppCommandOperations extends ImmutableObject {
private final CurrencyUnit currency;
private final ImmutableList<Fee> fees;
private final ImmutableList<Credit> credits;
/** Constructs an EppCommandOperations object using separate lists of fees and credits. */
EppCommandOperations(
CurrencyUnit currency, ImmutableList<Fee> fees, ImmutableList<Credit> credits) {
this.currency = checkArgumentNotNull(
currency, "Currency may not be null in EppCommandOperations.");
checkArgument(!fees.isEmpty(), "You must specify one or more fees.");
this.fees = checkArgumentNotNull(fees, "Fees may not be null in EppCommandOperations.");
this.credits =
checkArgumentNotNull(credits, "Credits may not be null in EppCommandOperations.");
}
/**
* Constructs an EppCommandOperations object. The arguments are sorted into fees and credits.
*/
EppCommandOperations(CurrencyUnit currency, BaseFee... feesAndCredits) {
this.currency = checkArgumentNotNull(
currency, "Currency may not be null in EppCommandOperations.");
ImmutableList.Builder<Fee> feeBuilder = new ImmutableList.Builder<>();
ImmutableList.Builder<Credit> creditBuilder = new ImmutableList.Builder<>();
for (BaseFee feeOrCredit : feesAndCredits) {
if (feeOrCredit instanceof Credit) {
creditBuilder.add((Credit) feeOrCredit);
} else {
feeBuilder.add((Fee) feeOrCredit);
}
}
this.fees = feeBuilder.build();
this.credits = creditBuilder.build();
}
private Money getTotalCostForType(FeeType type) {
Money result = Money.zero(currency);
checkArgumentNotNull(type);
for (Fee fee : fees) {
if (fee.getType() == type) {
result = result.plus(fee.getCost());
}
}
return result;
}
/** Returns the total cost of all fees and credits for the event. */
public Money getTotalCost() {
Money result = Money.zero(currency);
for (Fee fee : fees) {
result = result.plus(fee.getCost());
}
for (Credit credit : credits) {
result = result.plus(credit.getCost());
}
return result;
}
/** Returns the create cost for the event. */
public Money getCreateCost() {
return getTotalCostForType(FeeType.CREATE);
}
/** Returns the EAP cost for the event. */
public Money getEapCost() {
return getTotalCostForType(FeeType.EAP);
}
/** Returns the list of fees for the event. */
public ImmutableList<Fee> getFees() {
return fees;
}
/** Returns the list of credits for the event. */
public List<Credit> getCredits() {
return nullToEmpty(credits);
}
/** Returns the currency for all fees in the event. */
public final CurrencyUnit getCurrency() {
return currency;
}
}
private TldSpecificLogicProxy() {}
/** Returns a new create price for the pricer. */
public static EppCommandOperations getCreatePrice(
Registry registry,
String domainName,
String clientIdentifier,
DateTime date,
int years,
EppInput eppInput) throws EppException {
CurrencyUnit currency = registry.getCurrency();
// Get the create cost, either from the extra flow logic or straight from PricingEngineProxy.
BaseFee createFeeOrCredit;
Optional<RegistryExtraFlowLogic> extraFlowLogic =
RegistryExtraFlowLogicProxy.newInstanceForTld(registry.getTldStr());
if (extraFlowLogic.isPresent()) {
createFeeOrCredit = extraFlowLogic.get()
.getCreateFeeOrCredit(domainName, clientIdentifier, date, years, eppInput);
} else {
DomainPrices prices = getPricesForDomainName(domainName, date);
createFeeOrCredit =
Fee.create(prices.getCreateCost().multipliedBy(years).getAmount(), FeeType.CREATE);
}
// Create fees for the cost and the EAP fee, if any.
EapFee eapFee = registry.getEapFeeFor(date);
Money eapFeeCost = eapFee.getCost();
checkState(eapFeeCost.getCurrencyUnit().equals(currency));
if (!eapFeeCost.getAmount().equals(Money.zero(currency).getAmount())) {
return new EppCommandOperations(
currency,
createFeeOrCredit,
Fee.create(eapFeeCost.getAmount(), FeeType.EAP, eapFee.getPeriod().upperEndpoint()));
} else {
return new EppCommandOperations(currency, createFeeOrCredit);
}
}
/**
* Computes the renew fee or credit. This is called by other methods that use the renew fee
* (renew, restore, etc.).
*/
static BaseFee getRenewFeeOrCredit(
Registry registry,
String domainName,
String clientIdentifier,
DateTime date,
int years,
EppInput eppInput) throws EppException {
Optional<RegistryExtraFlowLogic> extraFlowLogic =
RegistryExtraFlowLogicProxy.newInstanceForTld(registry.getTldStr());
if (extraFlowLogic.isPresent()) {
// TODO: Consider changing the method definition to have the domain passed in to begin with.
DomainResource domain = loadByUniqueId(DomainResource.class, domainName, date);
if (domain == null) {
throw new ResourceToMutateDoesNotExistException(DomainResource.class, domainName);
}
return
extraFlowLogic.get().getRenewFeeOrCredit(domain, clientIdentifier, date, years, eppInput);
} else {
DomainPrices prices = getPricesForDomainName(domainName, date);
return Fee.create(prices.getRenewCost().multipliedBy(years).getAmount(), FeeType.RENEW);
}
}
/** Returns a new renew price for the pricer. */
public static EppCommandOperations getRenewPrice(
Registry registry,
String domainName,
String clientIdentifier,
DateTime date,
int years,
EppInput eppInput) throws EppException {
return new EppCommandOperations(
registry.getCurrency(),
getRenewFeeOrCredit(registry, domainName, clientIdentifier, date, years, eppInput));
}
/** Returns a new restore price for the pricer. */
public static EppCommandOperations getRestorePrice(
Registry registry,
String domainName,
String clientIdentifier,
DateTime date,
EppInput eppInput) throws EppException {
return new EppCommandOperations(
registry.getCurrency(),
getRenewFeeOrCredit(registry, domainName, clientIdentifier, date, 1, eppInput),
Fee.create(registry.getStandardRestoreCost().getAmount(), FeeType.RESTORE));
}
/** Returns a new transfer price for the pricer. */
public static EppCommandOperations getTransferPrice(
Registry registry,
String domainName,
String clientIdentifier,
DateTime transferDate,
int years,
EppInput eppInput) throws EppException {
// Currently, all transfer prices = renew prices, so just pass through.
return getRenewPrice(
registry, domainName, clientIdentifier, transferDate, years, eppInput);
}
/** Returns a new update price for the pricer. */
public static EppCommandOperations getUpdatePrice(
Registry registry,
String domainName,
String clientIdentifier,
DateTime date,
EppInput eppInput) throws EppException {
CurrencyUnit currency = registry.getCurrency();
// If there is extra flow logic, it may specify an update price. Otherwise, there is none.
BaseFee feeOrCredit;
Optional<RegistryExtraFlowLogic> extraFlowLogic =
RegistryExtraFlowLogicProxy.newInstanceForTld(registry.getTldStr());
if (extraFlowLogic.isPresent()) {
// TODO: Consider changing the method definition to have the domain passed in to begin with.
DomainResource domain = loadByUniqueId(DomainResource.class, domainName, date);
if (domain == null) {
throw new ResourceToMutateDoesNotExistException(DomainResource.class, domainName);
}
feeOrCredit =
extraFlowLogic.get().getUpdateFeeOrCredit(domain, clientIdentifier, date, eppInput);
} else {
feeOrCredit = Fee.create(Money.zero(registry.getCurrency()).getAmount(), FeeType.UPDATE);
}
return new EppCommandOperations(currency, feeOrCredit);
}
/** Returns the fee class for a given domain and date. */
public static Optional<String> getFeeClass(String domainName, DateTime date) {
return getPricesForDomainName(domainName, date).getFeeClass();
}
/**
* Checks whether a {@link Create} command has a valid {@link LrpToken} for a particular TLD, and
* returns that token (wrapped in an {@link Optional}) if one exists.
*
* <p>This method has no knowledge of whether or not an auth code (interpreted here as an LRP
* token) has already been checked against the reserved list for QLP (anchor tenant), as auth
* codes are used for both types of registrations.
*/
public static Optional<LrpToken> getMatchingLrpToken(Create createCommand, String tld) {
// Note that until the actual per-TLD logic is built out, what's being done here is a basic
// domain-name-to-assignee match.
String lrpToken = createCommand.getAuthInfo().getPw().getValue();
LrpToken token = ofy().load().key(Key.create(LrpToken.class, lrpToken)).now();
if (token != null) {
if (token.getAssignee().equalsIgnoreCase(createCommand.getFullyQualifiedDomainName())
&& token.getRedemptionHistoryEntry() == null
&& token.getValidTlds().contains(tld)) {
return Optional.of(token);
}
}
return Optional.<LrpToken>absent();
}
}
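A hedged usage sketch of the pricing entry points above; the TLD, domain name, and registrar client id are hypothetical, and now/eppInput are assumed to be supplied by the enclosing flow:

/** Hypothetical helper showing how a flow might price a two-year create. */
static Money totalCreateCost(DateTime now, EppInput eppInput) throws EppException {
  EppCommandOperations ops = TldSpecificLogicProxy.getCreatePrice(
      Registry.get("example"),   // hypothetical TLD
      "foo.example",             // hypothetical domain name
      "NewRegistrar",            // hypothetical registrar client id
      now,
      2,                         // a two-year create
      eppInput);
  // getTotalCost() sums the create fee, any EAP fee, and any credits into one Money amount.
  return ops.getTotalCost();
}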

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.StatusProhibitsOperationException;
/** Extension flag is not currently valid for this domain. */
public class ExtensionFlagDomainPolicyErrorException extends StatusProhibitsOperationException {
public ExtensionFlagDomainPolicyErrorException(String flag) {
super(String.format("Extension flag %s is not valid for this domain", flag));
}
}

View file

@ -0,0 +1,28 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.RequiredParameterMissingException;
/** Required extension flag missing. */
public class ExtensionFlagMissingException extends RequiredParameterMissingException {
public ExtensionFlagMissingException(String flag) {
super(String.format("Flag %s must be specified", flag));
}
public ExtensionFlagMissingException(String flag1, String flag2) {
super(String.format("Either %s or %s must be specified", flag1, flag2));
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** Extension flag is not currently valid for this registrar. */
public class ExtensionFlagRegistrarPolicyErrorException extends ParameterValuePolicyErrorException {
public ExtensionFlagRegistrarPolicyErrorException(String flag) {
super(String.format("Extension flag %s is not valid for this registrar", flag));
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.StatusProhibitsOperationException;
/** Extension flag cannot currently be set for this domain. */
public class ExtensionFlagSetDomainPolicyErrorException extends StatusProhibitsOperationException {
public ExtensionFlagSetDomainPolicyErrorException(String flag) {
super(String.format("Extension flag %s cannot be set for this domain", flag));
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.ParameterValueRangeErrorException;
/** Extension flag is not valid. */
public class InvalidExtensionFlagException extends ParameterValueRangeErrorException {
public InvalidExtensionFlagException(String flag) {
super(String.format("Extension flag %s is not defined", flag));
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** Specified extension flags are mutually exclusive. */
public class MutuallyExclusiveExtensionFlagsException extends ParameterValuePolicyErrorException {
public MutuallyExclusiveExtensionFlagsException(String flag1, String flag2) {
super(String.format("Extension flags %s and %s are mutually exclusive", flag1, flag2));
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** Only client flags can be updated. */
public class NonClientFlagException extends ParameterValuePolicyErrorException {
public NonClientFlagException() {
super("Non-client flags cannot be added or removed");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.domain.flags;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** The same flag was specified in both add and remove lists. */
public class SameFlagAddedAndRemovedException extends ParameterValuePolicyErrorException {
public SameFlagAddedAndRemovedException() {
super("An extension flag cannot be both added and removed in the same command");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** Cannot add and remove the same value. */
public class AddRemoveSameValueEppException extends ParameterValuePolicyErrorException {
public AddRemoveSameValueEppException() {
super("Cannot add and remove the same value");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ObjectPendingTransferException;
/** The resource is already pending transfer. */
public class AlreadyPendingTransferException extends ObjectPendingTransferException {
public AlreadyPendingTransferException(String targetId) {
super(targetId);
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.CommandUseErrorException;
/** Command is not allowed in the current registry phase. */
public class BadCommandForRegistryPhaseException extends CommandUseErrorException {
public BadCommandForRegistryPhaseException() {
super("Command is not allowed in the current registry phase");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.AuthorizationErrorException;
/** Authorization info is required to request a transfer. */
public class MissingTransferRequestAuthInfoException extends AuthorizationErrorException {
public MissingTransferRequestAuthInfoException() {
super("Authorization info is required to request a transfer");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.CommandUseErrorException;
/** Object has no transfer history. */
public class NoTransferHistoryToQueryException extends CommandUseErrorException {
public NoTransferHistoryToQueryException() {
super("Object has no transfer history");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.AuthorizationErrorException;
/** Registrar is not authorized to view transfer status. */
public class NotAuthorizedToViewTransferException extends AuthorizationErrorException {
public NotAuthorizedToViewTransferException() {
super("Registrar is not authorized to view transfer status");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ObjectNotPendingTransferException;
/** The resource does not have a pending transfer. */
public class NotPendingTransferException extends ObjectNotPendingTransferException {
public NotPendingTransferException(String objectId) {
super(objectId);
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.AuthorizationErrorException;
/** Registrar is not the initiator of this transfer. */
public class NotTransferInitiatorException extends AuthorizationErrorException {
public NotTransferInitiatorException() {
super("Registrar is not the initiator of this transfer");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.CommandUseErrorException;
/** Registrar already sponsors the object of this transfer request. */
public class ObjectAlreadySponsoredException extends CommandUseErrorException {
public ObjectAlreadySponsoredException() {
super("Registrar already sponsors the object of this transfer request");
}
}

View file

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.AuthorizationErrorException;
/** Only a tool can pass a metadata extension. */
public class OnlyToolCanPassMetadataException extends AuthorizationErrorException {
public OnlyToolCanPassMetadataException() {
super("Metadata extensions can only be passed by tools.");
}
}

@ -0,0 +1,39 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import com.google.common.annotations.VisibleForTesting;
import google.registry.flows.EppException.ObjectAlreadyExistsException;
/** Resource with this id already exists. */
public class ResourceAlreadyExistsException extends ObjectAlreadyExistsException {
/** Whether this was thrown from a "failfast" context. Useful for testing. */
final boolean failfast;
public ResourceAlreadyExistsException(String resourceId, boolean failfast) {
super(String.format("Object with given ID (%s) already exists", resourceId));
this.failfast = failfast;
}
public ResourceAlreadyExistsException(String resourceId) {
this(resourceId, false);
}
@VisibleForTesting
public boolean isFailfast() {
return failfast;
}
}
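The `failfast` flag above exists purely so tests can tell which existence check threw. A minimal JUnit/Truth sketch of such a test (the test class itself is hypothetical, not part of this commit):

```java
import static com.google.common.truth.Truth.assertThat;

import google.registry.flows.exceptions.ResourceAlreadyExistsException;
import org.junit.Test;

public class ResourceAlreadyExistsExceptionTest {
  @Test
  public void failfastFlagRoundTrips() {
    // The two-arg constructor marks the failfast path; the one-arg form defaults to false.
    assertThat(new ResourceAlreadyExistsException("example.tld", true).isFailfast()).isTrue();
    assertThat(new ResourceAlreadyExistsException("example.tld").isFailfast()).isFalse();
  }
}
```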

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.StatusProhibitsOperationException;
/** This resource has clientUpdateProhibited on it, and the update does not clear that status. */
public class ResourceHasClientUpdateProhibitedException extends StatusProhibitsOperationException {
public ResourceHasClientUpdateProhibitedException() {
super("Operation disallowed by status: clientUpdateProhibited");
}
}

@ -0,0 +1,28 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import com.google.common.base.Joiner;
import google.registry.flows.EppException.StatusProhibitsOperationException;
import google.registry.model.eppcommon.StatusValue;
import java.util.Set;
/** Resource status prohibits this operation. */
public class ResourceStatusProhibitsOperationException
extends StatusProhibitsOperationException {
public ResourceStatusProhibitsOperationException(Set<StatusValue> status) {
super("Operation disallowed by status: " + Joiner.on(", ").join(status));
}
}

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.AssociationProhibitsOperationException;
/** Resource to be deleted has active incoming references. */
public class ResourceToDeleteIsReferencedException extends AssociationProhibitsOperationException {
public ResourceToDeleteIsReferencedException() {
super("Resource to be deleted has active incoming references");
}
}

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ObjectDoesNotExistException;
/** Resource with this id does not exist. */
public class ResourceToMutateDoesNotExistException extends ObjectDoesNotExistException {
public ResourceToMutateDoesNotExistException(Class<?> type, String targetId) {
super(type, targetId);
}
}

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ObjectDoesNotExistException;
/** Resource with this id does not exist. */
public class ResourceToQueryDoesNotExistException extends ObjectDoesNotExistException {
public ResourceToQueryDoesNotExistException(Class<?> type, String targetId) {
super(type, targetId);
}
}

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ParameterValueRangeErrorException;
/** The specified status value cannot be set by clients. */
public class StatusNotClientSettableException extends ParameterValueRangeErrorException {
public StatusNotClientSettableException(String statusValue) {
super(String.format("Status value %s cannot be set by clients", statusValue));
}
}

@ -0,0 +1,24 @@
// Copyright 2016 The Domain Registry Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package google.registry.flows.exceptions;
import google.registry.flows.EppException.ParameterValuePolicyErrorException;
/** Too many resource checks requested in one check command. */
public class TooManyResourceChecksException extends ParameterValuePolicyErrorException {
public TooManyResourceChecksException(int maxChecks) {
super(String.format("No more than %s resources may be checked at a time", maxChecks));
}
}

@ -32,6 +32,7 @@ import google.registry.model.eppcommon.StatusValue;
import google.registry.model.eppoutput.EppResponse.ResponseData;
import google.registry.model.ofy.CommitLogManifest;
import google.registry.model.transfer.TransferData;
import google.registry.model.transfer.TransferStatus;
import java.util.Set;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlTransient;
@ -303,6 +304,27 @@ public abstract class EppResource extends BackupGroupRoot implements Buildable,
return thisCastToDerived();
}
/**
* Remove a pending transfer.
*
* <p>This removes the {@link StatusValue#PENDING_TRANSFER} status, clears all the
* server-approve fields on the {@link TransferData} including the extended registration years
* field, and sets the expiration time of the last pending transfer (i.e. the one being cleared)
* to now.
*/
public B clearPendingTransfer(TransferStatus transferStatus, DateTime now) {
removeStatusValue(StatusValue.PENDING_TRANSFER);
return setTransferData(getInstance().getTransferData().asBuilder()
.setExtendedRegistrationYears(null)
.setServerApproveEntities(null)
.setServerApproveBillingEvent(null)
.setServerApproveAutorenewEvent(null)
.setServerApproveAutorenewPollMessage(null)
.setTransferStatus(transferStatus)
.setPendingTransferExpirationTime(now)
.build());
}
/** Wipe out any personal information in the resource. */
public B wipeOut() {
return thisCastToDerived();
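A minimal sketch of how a flow might apply the new `clearPendingTransfer()` builder method when rejecting a transfer; the wrapper class and method below are illustrative only, not part of this commit:

```java
import google.registry.model.domain.DomainResource;
import google.registry.model.transfer.TransferStatus;
import org.joda.time.DateTime;

final class RejectTransferExample {
  /** Returns a copy of the domain with its pending transfer cleared as rejected. */
  static DomainResource rejectPendingTransfer(DomainResource domain, DateTime now) {
    return domain.asBuilder()
        // Drops PENDING_TRANSFER, nulls out the server-approve fields, and stamps the
        // transfer CLIENT_REJECTED with an expiration time of now.
        .clearPendingTransfer(TransferStatus.CLIENT_REJECTED, now)
        .build();
  }
}
```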

@ -45,7 +45,9 @@ public abstract class BaseFee extends ImmutableObject {
CREATE("create"),
EAP("Early Access Period, fee expires: %s"),
RENEW("renew"),
RESTORE("restore");
RESTORE("restore"),
UPDATE("update"),
CREDIT("%s credit");
private final String formatString;

@ -21,11 +21,12 @@ import java.math.BigDecimal;
/** A credit, in currency units specified elsewhere in the xml, and with an optional description. */
public class Credit extends BaseFee {
public static Credit create(BigDecimal cost, String description) {
public static Credit create(BigDecimal cost, FeeType type, Object... descriptionArgs) {
Credit instance = new Credit();
instance.cost = checkNotNull(cost);
checkArgument(instance.cost.signum() < 0);
instance.description = description;
instance.type = checkNotNull(type);
instance.generateDescription(descriptionArgs);
return instance;
}
}
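With the new signature, callers pass a `FeeType` (assumed here to be the nested enum shown in the `BaseFee` hunk above) plus format arguments instead of a pre-built description string. A hedged sketch with made-up values:

```java
import google.registry.model.domain.fee.BaseFee.FeeType;
import google.registry.model.domain.fee.Credit;
import java.math.BigDecimal;

final class CreditExample {
  static Credit sampleCredit() {
    // The cost must be negative (enforced by checkArgument); the description comes from
    // FeeType.CREDIT's "%s credit" format string and the trailing argument.
    return Credit.create(new BigDecimal("-5.00"), FeeType.CREDIT, "rgp");
  }
}
```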

@ -24,7 +24,7 @@ import org.joda.money.CurrencyUnit;
* of items requesting the fees for particular commands and domains. For some versions of the fee
* extension, the currency is also specified here; for other versions it is contained in the
* individual items.
*
*
* @param <C> the type of extension item used by this command (e.g. v6 items for a v6 extension)
* @param <R> the type of response returned for this command (e.g. v6 responses for a v6 extension)
*/
@ -33,13 +33,14 @@ public interface FeeCheckCommandExtension<
R extends FeeCheckResponseExtension<?>>
extends CommandExtension {
/** True if this version of the fee extension specifies the currency at the top level. */
public boolean isCurrencySupported();
/**
* Three-character ISO4217 currency code.
*
* <p>Returns null if this version of the fee extension doesn't specify currency at the top level.
*/
public CurrencyUnit getCurrency();
/** Three-character currency code; throws an exception if currency is not supported. */
public CurrencyUnit getCurrency() throws UnsupportedOperationException;
public ImmutableSet<C> getItems();
public R createResponse(ImmutableList<? extends FeeCheckResponseExtensionItem> items);
}
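With `isCurrencySupported()` removed and `getCurrency()` now returning null for versions that carry no top-level currency, callers trade the capability check for a null check. A hedged sketch; the fallback parameter stands in for whatever default the caller already has:

```java
import google.registry.model.domain.fee.FeeCheckCommandExtension;
import org.joda.money.CurrencyUnit;

final class CurrencyResolutionExample {
  /** Uses the extension's top-level currency when present, else the supplied default. */
  static CurrencyUnit resolveCurrency(
      FeeCheckCommandExtension<?, ?> feeCheck, CurrencyUnit fallback) {
    CurrencyUnit topLevel = feeCheck.getCurrency();
    return (topLevel != null) ? topLevel : fallback;
  }
}
```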

@ -35,18 +35,19 @@ public interface FeeQueryCommandExtensionItem {
UPDATE
}
/** True if this version of fee extension includes a currency in this type of query item. */
public boolean isCurrencySupported();
/** A three-character ISO4217 currency code; throws an exception if currency is not supported. */
public CurrencyUnit getCurrency() throws UnsupportedOperationException;
/**
* Three-character ISO4217 currency code.
*
* <p>Returns null if this version of the fee extension doesn't specify currency at the top level.
*/
public CurrencyUnit getCurrency();
/** The name of the command being checked. */
public CommandName getCommandName();
/** The unparsed name of the command being checked, for use in error strings. */
public String getUnparsedCommandName();
/** The phase of the command being checked. */
public String getPhase();

@ -14,6 +14,8 @@
package google.registry.model.domain.fee;
import static google.registry.util.CollectionUtils.nullToEmpty;
import google.registry.model.ImmutableObject;
import java.util.List;
import javax.xml.bind.annotation.XmlElement;
@ -51,6 +53,6 @@ public abstract class FeeTransformCommandExtensionImpl
@Override
public List<Credit> getCredits() {
return credits;
return nullToEmpty(credits);
}
}

@ -14,8 +14,8 @@
package google.registry.model.domain.fee;
import com.google.common.collect.ImmutableList;
import google.registry.model.eppoutput.EppResponse.ResponseExtension;
import java.util.List;
import org.joda.money.CurrencyUnit;
/** Interface for fee extensions in Create, Renew, Transfer and Update responses. */
@ -24,8 +24,8 @@ public interface FeeTransformResponseExtension extends ResponseExtension {
/** Builder for {@link FeeTransformResponseExtension}. */
public interface Builder {
Builder setCurrency(CurrencyUnit currency);
Builder setFees(ImmutableList<Fee> fees);
Builder setCredits(ImmutableList<Credit> credits);
Builder setFees(List<Fee> fees);
Builder setCredits(List<Credit> credits);
FeeTransformResponseExtension build();
}
}

@ -14,7 +14,8 @@
package google.registry.model.domain.fee;
import com.google.common.collect.ImmutableList;
import static google.registry.util.CollectionUtils.forceEmptyToNull;
import google.registry.model.Buildable.GenericBuilder;
import google.registry.model.ImmutableObject;
import java.util.List;
@ -53,14 +54,14 @@ public class FeeTransformResponseExtensionImpl extends ImmutableObject
}
@Override
public B setFees(ImmutableList<Fee> fees) {
public B setFees(List<Fee> fees) {
getInstance().fees = fees;
return thisCastToDerived();
}
@Override
public B setCredits(ImmutableList<Credit> credits) {
getInstance().credits = credits;
public B setCredits(List<Credit> credits) {
getInstance().credits = forceEmptyToNull(credits);
return thisCastToDerived();
}
}
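The getter/setter changes above lean on two helpers from `google.registry.util.CollectionUtils`: setters persist an empty credits list as null (so the element is omitted from the marshalled XML), and getters return an empty list instead of null. A small round-trip sketch; the helper signatures are assumed from their usage in this diff:

```java
import static google.registry.util.CollectionUtils.forceEmptyToNull;
import static google.registry.util.CollectionUtils.nullToEmpty;

import com.google.common.collect.ImmutableList;
import java.util.List;

final class CreditsRoundTripExample {
  /** Stores then reads an empty credits list: persisted as null, read back as empty. */
  static List<String> roundTrip() {
    List<String> stored = forceEmptyToNull(ImmutableList.<String>of()); // -> null
    return nullToEmpty(stored);                                         // -> empty list
  }
}
```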

@ -14,7 +14,6 @@
package google.registry.model.domain.fee;
import com.google.common.collect.ImmutableList;
import google.registry.model.Buildable.GenericBuilder;
import google.registry.model.ImmutableObject;
import java.util.List;
@ -54,13 +53,13 @@ public class FeeTransformResponseExtensionImplNoCredits extends ImmutableObject
}
@Override
public B setFees(ImmutableList<Fee> fees) {
public B setFees(List<Fee> fees) {
getInstance().fees = fees;
return thisCastToDerived();
}
@Override
public B setCredits(ImmutableList<Credit> credits) {
public B setCredits(List<Credit> credits) {
return thisCastToDerived();
}
}

@ -29,7 +29,7 @@ public class FeeCheckCommandExtensionItemV06
String name;
CurrencyUnit currency;
@Override
public boolean isDomainNameSupported() {
return true;
@ -40,11 +40,6 @@ public class FeeCheckCommandExtensionItemV06
return name;
}
@Override
public boolean isCurrencySupported() {
return true;
}
@Override
public CurrencyUnit getCurrency() {
return currency;

@ -32,18 +32,13 @@ public class FeeCheckCommandExtensionV06 extends ImmutableObject
implements FeeCheckCommandExtension<
FeeCheckCommandExtensionItemV06,
FeeCheckResponseExtensionV06> {
@XmlElement(name = "domain")
Set<FeeCheckCommandExtensionItemV06> items;
@Override
public boolean isCurrencySupported() {
return false;
}
@Override
public CurrencyUnit getCurrency() {
throw new UnsupportedOperationException("Currency not supported");
return null; // This version of the fee extension doesn't specify a top-level currency.
}
@Override

@ -25,15 +25,10 @@ import org.joda.money.CurrencyUnit;
@XmlType(propOrder = {"currency", "command", "period"})
public class FeeInfoCommandExtensionV06
extends FeeQueryCommandExtensionItemImpl implements CommandExtension {
/** A three-character ISO4217 currency code. */
CurrencyUnit currency;
@Override
public boolean isCurrencySupported() {
return true;
}
@Override
public CurrencyUnit getCurrency() {
return currency;

@ -52,22 +52,20 @@ public class FeeCheckCommandExtensionV11 extends ImmutableObject
/** Three-letter currency code in which results should be returned. */
CurrencyUnit currency;
/** The period to check. */
Period period;
/** The class to check. */
@XmlElement(name = "class")
String feeClass;
@Override
public boolean isCurrencySupported() {
return false;
}
@Override
public CurrencyUnit getCurrency() {
throw new UnsupportedOperationException("Currency not supported");
// This version of the fee extension does not have any items, and although the currency is
// specified at the top level we've modeled it as a single fake item with the currency inside,
// so there's no top level currency to return here.
return null;
}
@Override
@ -96,13 +94,13 @@ public class FeeCheckCommandExtensionV11 extends ImmutableObject
public CommandName getCommandName() {
return command.getCommand();
}
/** The command name before being parsed into an enum, for use in error strings. */
@Override
public String getUnparsedCommandName() {
return command.getUnparsedCommandName();
}
/** The phase of the command being checked. */
@Override
public String getPhase() {
@ -119,22 +117,17 @@ public class FeeCheckCommandExtensionV11 extends ImmutableObject
public Period getPeriod() {
return Optional.fromNullable(period).or(DEFAULT_PERIOD);
}
@Override
public boolean isDomainNameSupported() {
return false;
}
@Override
public String getDomainName() {
throw new UnsupportedOperationException("Domain not supported");
}
@Override
public boolean isCurrencySupported() {
return true;
}
@Override
public CurrencyUnit getCurrency() {
return currency;

@ -30,13 +30,13 @@ import org.joda.time.DateTime;
/**
* An individual price check item in version 0.12 of the fee extension on domain check commands.
* Items look like:
*
*
* <fee:command name="renew" phase="sunrise" subphase="hello">
* <fee:period unit="y">1</fee:period>
* <fee:class>premium</fee:class>
* <fee:date>2017-05-17T13:22:21.0Z</fee:date>
* </fee:command>
*
*
* In a change from previous versions of the extension, items do not contain domain names; instead,
* the names from the non-extension check element are used.
*/
@ -49,7 +49,7 @@ public class FeeCheckCommandExtensionItemV12
@XmlAttribute(name = "name")
String commandName;
@XmlAttribute
String phase;
@ -58,10 +58,10 @@ public class FeeCheckCommandExtensionItemV12
@XmlElement
Period period;
@XmlElement(name = "class")
String feeClass;
@XmlElement(name = "date")
DateTime feeDate;
@ -75,22 +75,17 @@ public class FeeCheckCommandExtensionItemV12
public String getDomainName() {
throw new UnsupportedOperationException("Domain not supported");
}
@Override
public boolean isCurrencySupported() {
return false;
}
@Override
public CurrencyUnit getCurrency() {
throw new UnsupportedOperationException("Currency not supported");
return null; // This version of the fee extension doesn't specify currency per-item.
}
@Override
public String getUnparsedCommandName() {
return commandName;
}
@Override
public CommandName getCommandName() {
// Require the xml string to be lowercase.
@ -108,7 +103,7 @@ public class FeeCheckCommandExtensionItemV12
public String getPhase() {
return phase;
}
@Override
public String getSubphase() {
return subphase;

@ -36,17 +36,12 @@ public class FeeCheckCommandExtensionV12 extends ImmutableObject
FeeCheckResponseExtensionV12> {
CurrencyUnit currency;
@Override
public boolean isCurrencySupported() {
return true;
}
@Override
public CurrencyUnit getCurrency() {
return currency;
}
@XmlElement(name = "command")
Set<FeeCheckCommandExtensionItemV12> items;

@ -30,4 +30,8 @@ import javax.xml.bind.annotation.XmlRootElement;
public class FlagsCreateCommandExtension implements CommandExtension {
@XmlElement(name = "flag")
List<String> flags;
public List<String> getFlags() {
return flags;
}
}

@ -30,4 +30,12 @@ import javax.xml.bind.annotation.XmlType;
public class FlagsTransferCommandExtension implements CommandExtension {
FlagsList add; // list of flags to be added (turned on)
FlagsList rem; // list of flags to be removed (turned off)
public FlagsList getAddFlags() {
return add;
}
public FlagsList getRemoveFlags() {
return rem;
}
}

@ -50,4 +50,8 @@ public enum TransferStatus {
public String getXmlName() {
return CaseFormat.UPPER_UNDERSCORE.to(CaseFormat.LOWER_CAMEL, toString());
}
public boolean isApproved() {
return this.equals(CLIENT_APPROVED) || this.equals(SERVER_APPROVED);
}
}

@ -24,8 +24,8 @@ import google.registry.gcs.GcsServiceModule;
import google.registry.groups.DirectoryModule;
import google.registry.groups.GroupsModule;
import google.registry.groups.GroupssettingsModule;
import google.registry.keyring.api.KeyModule;
import google.registry.keyring.api.DummyKeyringModule;
import google.registry.keyring.api.KeyModule;
import google.registry.monitoring.metrics.MetricReporter;
import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.rde.JSchModule;
@ -52,6 +52,7 @@ import javax.inject.Singleton;
DatastoreServiceModule.class,
DirectoryModule.class,
DriveModule.class,
DummyKeyringModule.class,
GcsServiceModule.class,
GoogleCredentialModule.class,
GroupsModule.class,
@ -68,7 +69,6 @@ import javax.inject.Singleton;
UrlFetchTransportModule.class,
UseAppIdentityCredentialForGoogleApisModule.class,
VoidDnsWriterModule.class,
DummyKeyringModule.class,
})
interface BackendComponent {
BackendRequestComponent startRequest(RequestModule requestModule);

@ -17,8 +17,8 @@ package google.registry.module.frontend;
import dagger.Component;
import google.registry.braintree.BraintreeModule;
import google.registry.config.ConfigModule;
import google.registry.keyring.api.KeyModule;
import google.registry.keyring.api.DummyKeyringModule;
import google.registry.keyring.api.KeyModule;
import google.registry.monitoring.metrics.MetricReporter;
import google.registry.monitoring.whitebox.StackdriverModule;
import google.registry.request.Modules.AppIdentityCredentialModule;
@ -40,6 +40,7 @@ import javax.inject.Singleton;
BraintreeModule.class,
ConfigModule.class,
ConsoleConfigModule.class,
DummyKeyringModule.class,
FrontendMetricsModule.class,
Jackson2Module.class,
KeyModule.class,
@ -49,7 +50,6 @@ import javax.inject.Singleton;
UrlFetchTransportModule.class,
UseAppIdentityCredentialForGoogleApisModule.class,
UserServiceModule.class,
DummyKeyringModule.class,
})
interface FrontendComponent {
FrontendRequestComponent startRequest(RequestModule requestModule);

@ -21,6 +21,7 @@ import google.registry.flows.EppConsoleAction;
import google.registry.flows.EppTlsAction;
import google.registry.flows.FlowComponent;
import google.registry.flows.TlsCredentials.EppTlsModule;
import google.registry.monitoring.whitebox.WhiteboxModule;
import google.registry.rdap.RdapAutnumAction;
import google.registry.rdap.RdapDomainAction;
import google.registry.rdap.RdapDomainSearchAction;
@ -50,6 +51,7 @@ import google.registry.whois.WhoisServer;
RdapModule.class,
RegistrarUserModule.class,
RequestModule.class,
WhiteboxModule.class,
WhoisModule.class,
})
interface FrontendRequestComponent {

@ -24,6 +24,7 @@ java_library(
"//java/google/registry/keyring/api",
"//java/google/registry/loadtest",
"//java/google/registry/mapreduce",
"//java/google/registry/monitoring/whitebox",
"//java/google/registry/request",
"//java/google/registry/request:modules",
"//java/google/registry/tools/server",

@ -21,8 +21,8 @@ import google.registry.gcs.GcsServiceModule;
import google.registry.groups.DirectoryModule;
import google.registry.groups.GroupsModule;
import google.registry.groups.GroupssettingsModule;
import google.registry.keyring.api.KeyModule;
import google.registry.keyring.api.DummyKeyringModule;
import google.registry.keyring.api.KeyModule;
import google.registry.request.Modules.AppIdentityCredentialModule;
import google.registry.request.Modules.DatastoreServiceModule;
import google.registry.request.Modules.GoogleCredentialModule;
@ -44,6 +44,7 @@ import javax.inject.Singleton;
DatastoreServiceModule.class,
DirectoryModule.class,
DriveModule.class,
DummyKeyringModule.class,
GcsServiceModule.class,
GoogleCredentialModule.class,
GroupsModule.class,
@ -55,7 +56,6 @@ import javax.inject.Singleton;
UseAppIdentityCredentialForGoogleApisModule.class,
SystemClockModule.class,
SystemSleeperModule.class,
DummyKeyringModule.class,
})
interface ToolsComponent {
ToolsRequestComponent startRequest(RequestModule requestModule);

@ -22,6 +22,7 @@ import google.registry.flows.FlowComponent;
import google.registry.loadtest.LoadTestAction;
import google.registry.loadtest.LoadTestModule;
import google.registry.mapreduce.MapreduceModule;
import google.registry.monitoring.whitebox.WhiteboxModule;
import google.registry.request.RequestModule;
import google.registry.request.RequestScope;
import google.registry.tools.server.CreateGroupsAction;
@ -54,6 +55,7 @@ import google.registry.tools.server.javascrap.RefreshAllDomainsAction;
MapreduceModule.class,
RequestModule.class,
ToolsServerModule.class,
WhiteboxModule.class,
})
interface ToolsRequestComponent {
BackfillAutorenewBillingFlagAction backfillAutorenewBillingFlagAction();

@ -20,11 +20,11 @@ import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import com.google.appengine.api.modules.ModulesService;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.taskqueue.TransientFailureException;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Supplier;
import google.registry.util.FormattingLogger;
import java.util.Map.Entry;
import java.util.UUID;
import javax.inject.Inject;
import javax.inject.Named;
/**
* A collector of metric information. Enqueues collected metrics to a task queue to be written to
@ -39,18 +39,17 @@ public class BigQueryMetricsEnqueuer {
public static final String QUEUE = "bigquery-streaming-metrics";
@Inject ModulesService modulesService;
@Inject @Named("insertIdGenerator") Supplier<String> idGenerator;
@Inject
BigQueryMetricsEnqueuer() {}
@Inject BigQueryMetricsEnqueuer() {}
@VisibleForTesting
void export(BigQueryMetric metric, String insertId) {
public void export(BigQueryMetric metric) {
try {
String hostname = modulesService.getVersionHostname("backend", null);
TaskOptions opts =
withUrl(MetricsExportAction.PATH)
.header("Host", hostname)
.param("insertId", insertId);
.param("insertId", idGenerator.get());
for (Entry<String, String> entry : metric.getBigQueryRowEncoding().entrySet()) {
opts.param(entry.getKey(), entry.getValue());
}
@ -61,9 +60,4 @@ public class BigQueryMetricsEnqueuer {
logger.info(e, e.getMessage());
}
}
/** Enqueue a metric to be exported to BigQuery. */
public void export(BigQueryMetric metric) {
export(metric, UUID.randomUUID().toString());
}
}
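`export()` now takes its BigQuery insert id from the injected `@Named("insertIdGenerator")` supplier instead of generating a UUID inline, which lets tests bind a deterministic supplier. The real binding is not shown in this diff; a hypothetical Dagger module providing it could look like this:

```java
import com.google.common.base.Supplier;
import dagger.Module;
import dagger.Provides;
import java.util.UUID;
import javax.inject.Named;

@Module
final class InsertIdGeneratorModule {
  @Provides
  @Named("insertIdGenerator")
  Supplier<String> provideInsertIdGenerator() {
    // Production-style binding: a fresh UUID per call gives every streamed row a unique
    // insert id; a test could bind Suppliers.ofInstance("fixed-id") instead.
    return new Supplier<String>() {
      @Override
      public String get() {
        return UUID.randomUUID().toString();
      }
    };
  }
}
```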

Some files were not shown because too many files have changed in this diff.