Mirror of https://github.com/google/nomulus.git, synced 2025-04-30 12:07:51 +02:00

Commit: Autoformat all Markdown documentation

Created by MOE: https://github.com/google/moe MOE_MIGRATED_REVID=132885981

parent cee08d48f2, commit 85eb641ca8

5 changed files with 436 additions and 418 deletions

README.md (54 lines changed)
are limited to four minutes and ten megabytes in size. Furthermore, queries and
indexes that span entity groups are always eventually consistent, which means
they could take seconds, and very rarely, days to update. While most online
services find eventual consistency useful, it is not appropriate for a service
conducting financial exchanges. Therefore Domain Registry has been engineered to
employ performance and complexity tradeoffs that allow strong consistency to be
applied throughout the codebase.

Domain Registry has a commit log system. Commit logs are retained in datastore
for thirty days. They are also streamed to Cloud Storage for backup purposes.
order to do restores. Each EPP resource entity also stores a map of its past
mutations with 24-hour granularity. This makes it possible to have point-in-time
projection queries with effectively no overhead.
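The per-resource mutation map described above can be sketched with a plain sorted map; this is a hedged illustration of the idea, not the actual Nomulus model classes (the class and method names here are hypothetical).

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: each EPP resource keeps a map of past states keyed by
// day (24-hour granularity). A point-in-time projection is then just a lookup
// of the newest revision on or before the requested date.
public class RevisionMap {
    private final TreeMap<LocalDate, String> revisions = new TreeMap<>();

    // Record the state of the resource as of the given day.
    public void record(LocalDate day, String state) {
        revisions.put(day, state);
    }

    // Project the resource as it existed on the given day, or null if the
    // resource did not yet exist at that time.
    public String asOf(LocalDate day) {
        Map.Entry<LocalDate, String> entry = revisions.floorEntry(day);
        return entry == null ? null : entry.getValue();
    }
}
```

Because the map rides along on the entity itself, such a projection costs no extra queries at read time.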

The Registry Data Escrow (RDE) system is also built with reliability in mind. It
executes on top of App Engine task queues, which can be double-executed and
therefore require operations to be idempotent. RDE isn't idempotent. To work
around this, RDE uses datastore transactions to achieve mutual exclusion and
serialization. We call this the "Locking Rolling Cursor Pattern." One benefit of
domain availability checks. This service listens on the `/check` path.
* [RFC 3915: EPP Grace Period Mapping](http://tools.ietf.org/html/rfc3915)
* [RFC 5734: EPP Transport over TCP](http://tools.ietf.org/html/rfc5734)
* [RFC 5910: EPP DNSSEC Mapping](http://tools.ietf.org/html/rfc5910)
* [Draft: EPP Launch Phase Mapping (Proposed)](http://tools.ietf.org/html/draft-tan-epp-launchphase-11)

### Registry Data Escrow (RDE)
This service exists for ICANN regulatory purposes. ICANN needs to know that,
should a registry business ever implode, they can quickly migrate their
TLDs to a different company so that they'll continue to operate.

* [Draft: Registry Data Escrow Specification](http://tools.ietf.org/html/draft-arias-noguchi-registry-data-escrow-06)
* [Draft: Domain Name Registration Data (DNRD) Objects Mapping](http://tools.ietf.org/html/draft-arias-noguchi-dnrd-objects-mapping-05)
* [Draft: ICANN Registry Interfaces](http://tools.ietf.org/html/draft-lozano-icann-registry-interfaces-05)

### Trademark Clearing House (TMCH)

Domain Registry integrates with ICANN and IBM's MarksDB in order to protect
trademark holders when new TLDs are being launched.

* [Draft: TMCH Functional Spec](http://tools.ietf.org/html/draft-lozano-tmch-func-spec-08)
* [Draft: Mark and Signed Mark Objects Mapping](https://tools.ietf.org/html/draft-lozano-tmch-smd-02)

### WHOIS
internal HTTP endpoint running on `/_dr/whois`. A separate proxy running on port
43 forwards requests to that path. Domain Registry also implements a public HTTP
endpoint that listens on the `/whois` path.

* [RFC 3912: WHOIS Protocol Specification](https://tools.ietf.org/html/rfc3912)
* [RFC 7485: Inventory and Analysis of Registration Objects](http://tools.ietf.org/html/rfc7485)
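The translation step the proxy performs can be sketched as follows. This is a hypothetical illustration of the forwarding idea only; the exact path format and the real proxy implementation in Nomulus are not shown here.

```java
// Hypothetical sketch of the WHOIS proxy's translation step: a raw protocol
// query line arriving on port 43 is turned into an HTTP request path for the
// internal /_dr/whois endpoint. The path format is an assumption, not the
// actual Nomulus API.
public class WhoisProxySketch {
    // Map a raw WHOIS query line (e.g. "example.tld\r\n") to an internal path.
    public static String toInternalPath(String queryLine) {
        return "/_dr/whois/" + queryLine.trim();
    }
}
```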

### Registration Data Access Protocol (RDAP)
service available under the `/rdap/...` path.
* [RFC 7481: RDAP Security Services](http://tools.ietf.org/html/rfc7481)
* [RFC 7482: RDAP Query Format](http://tools.ietf.org/html/rfc7482)
* [RFC 7483: RDAP JSON Responses](http://tools.ietf.org/html/rfc7483)
* [RFC 7484: RDAP Finding the Authoritative Registration Data](http://tools.ietf.org/html/rfc7484)

### Backups
that uses the [Google Cloud DNS](https://cloud.google.com/dns/) API. A bulk
export tool is also provided to export a zone file for an entire TLD in BIND
format.

* [RFC 1034: Domain Names - Concepts and Facilities](https://www.ietf.org/rfc/rfc1034.txt)
* [RFC 1035: Domain Names - Implementation and Specification](https://www.ietf.org/rfc/rfc1035.txt)

### Exports
commands were run and when and by whom, information on failed commands, activity
per registrar, and length of each request.

[BigQuery][bigquery] reporting scripts are provided to generate the required
per-TLD monthly
[registry reports](https://www.icann.org/resources/pages/registry-reports) for
ICANN.

### Registrar console
that are out of scope that it will never do.

  provide an implementation.
* You will need an invoicing system to convert the internal registry billing
  events into registrar invoices using whatever accounts receivable setup you
  already have. A partial implementation is provided that generates generic
  CSV invoices (see `MakeBillingTablesCommand`), but you will need to
  integrate it with your payments system.
* You will likely need monitoring to continuously monitor the status of the
  system. Any of a large variety of tools can be used for this, or you can
  write your own.
Registry project as it is implemented in App Engine.

## Services

The Domain Registry contains three
[services](https://cloud.google.com/appengine/docs/python/an-overview-of-app-engine),
which were previously called modules in earlier versions of App Engine. The
services are: default (also called front-end), backend, and tools. Each service
runs independently in a lot of ways, including that they can be upgraded
scaling are separate as well.

Once you have your app deployed and running, the default service can be accessed
at `https://project-id.appspot.com`, substituting whatever your App Engine app
is named for "project-id". Note that that is the URL for the production instance
of your app; other environments will have the environment name appended with a
hyphen in the hostname, e.g. `https://project-id-sandbox.appspot.com`.

The URL for the backend service is `https://backend-dot-project-id.appspot.com`
and the URL for the tools service is `https://tools-dot-project-id.appspot.com`.
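The hostname conventions above can be captured in a small helper. This is a hedged sketch grounded only in the URL patterns just described; the helper itself is hypothetical and not part of the Nomulus codebase.

```java
// Hypothetical helper illustrating the hostname conventions described above:
// the default service lives at project-id.appspot.com, other services use the
// "service-dot-project" form, and non-production environments append their
// name to the project id with a hyphen.
public class ServiceUrls {
    public static String urlFor(String service, String projectId, String environment) {
        String project = environment.equals("production")
            ? projectId
            : projectId + "-" + environment;
        return service.equals("default")
            ? "https://" + project + ".appspot.com"
            : "https://" + service + "-dot-" + project + ".appspot.com";
    }
}
```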
wild-cards).

### Default service

The default service is responsible for all registrar-facing
[EPP](https://en.wikipedia.org/wiki/Extensible_Provisioning_Protocol) command
traffic, all user-facing WHOIS and RDAP traffic, and the admin and registrar web
consoles, and is thus the most important service. If the service has any
problems and goes down or stops servicing requests in a timely manner, it will
by the `FrontendServlet`, which provides all of the endpoints exposed in

### Backend service

The backend service is responsible for executing all regularly scheduled
background tasks (using cron) as well as all asynchronous tasks. Requests to the
backend service are handled by the `BackendServlet`, which provides all of the
endpoints exposed in `BackendRequestComponent`. These include tasks for
generating/exporting RDE, syncing the trademark list from TMDB, exporting
backups, writing out DNS updates, handling asynchronous contact and host
deletions, writing out commit logs, exporting metrics to BigQuery, and many
contact/host deletion).

The tools service is responsible for servicing requests from the `registry_tool`
command line tool, which provides administrative-level functionality for
developers and tech support employees of the registry. It is thus the least
critical of the three services. Requests to the tools service are handled by the
`ToolsServlet`, which provides all of the endpoints exposed in
`ToolsRequestComponent`. Some example functionality that this service provides
includes the server-side code to update premium lists, run EPP commands from the
tool, and manually modify contacts, hosts, domains, and other resources. Problems
queues. Tasks in push queues are always executing up to some throttlable limit.
Tasks in pull queues remain there indefinitely until the queue is polled by code
that is running for some other reason. Essentially, push queues run their own
tasks while pull queues just enqueue data that is used by something else. Many
other parts of App Engine are implemented using task queues. For example, [App
Engine cron](https://cloud.google.com/appengine/docs/java/config/cron) adds
tasks to push queues at regularly scheduled intervals, and the [MapReduce
framework](https://cloud.google.com/appengine/docs/java/dataprocessing/) adds
tasks for each phase of the MapReduce algorithm.

The Domain Registry project uses a particular pattern of paired push/pull queues
that is worth explaining in detail. Push queues are essential because App
Engine's architecture does not support long-running background processes, and so
push queues are thus the fundamental building block that allows asynchronous and
background execution of code that is not in response to incoming web requests.
However, they also have limitations in that they do not allow batch processing
or grouping. That's where the pull queue comes in. Regularly scheduled tasks in
the push queue will, upon execution, poll the corresponding pull queue for a
specified number of tasks and execute them in a batch. This allows the code to
execute in the background while taking advantage of batch processing.
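The paired push/pull pattern above can be sketched with plain collections standing in for App Engine task queues; this is an illustrative sketch of the batching step only, with hypothetical names, not the real task-queue API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of the paired push/pull queue pattern: the regularly scheduled
// push-queue task leases up to a fixed number of items from the pull queue
// and processes them as one batch. A plain Queue stands in for the pull queue.
public class BatchDrain {
    // Poll the pull queue for at most maxTasks items, as the cron-triggered
    // push-queue task would, and return them for batch processing.
    public static List<String> leaseBatch(Queue<String> pullQueue, int maxTasks) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < maxTasks && !pullQueue.isEmpty()) {
            batch.add(pullQueue.poll());
        }
        return batch;
    }
}
```

Anything left in the pull queue simply waits for the next scheduled push-queue execution, which is what gives the pattern its batching behavior.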
explicitly marked as otherwise.

## Environments

The domain registry codebase comes pre-configured with support for a number of
different environments, all of which are used in Google's registry system. Other
registry operators may choose to use more or fewer environments, depending on
their needs.

The different environments are specified in `RegistryEnvironment`. Most
correspond to a separate App Engine app except for `UNITTEST` and `LOCAL`, which
then the sandbox app would be named 'registry-platform-sandbox'.

The full list of environments supported out-of-the-box, in descending order from
real to not, is:

* `PRODUCTION` -- The real production environment that is actually running
  live TLDs. Since the Domain Registry is a shared registry platform, there
  need only ever be one of these.
* `SANDBOX` -- A playground environment for external users to test commands in
  without the possibility of affecting production data. This is the environment
  new registrars go through
  [OT&E](https://www.icann.org/resources/unthemed-pages/registry-agmt-appc-e-2001-04-26-en)
  in. Sandbox is also useful as a final sanity check to push a new prospective
  build to and allow it to "bake" before pushing it to production.
* `QA` -- An internal environment used by business users to play with and sign
  on Alpha because others are already using it).
* `LOCAL` -- A fake environment that is used when running the app locally on a
  simulated App Engine instance.
* `UNITTEST` -- A fake environment that is used in unit tests, where
  everything in the App Engine stack is simulated or mocked.

## Release process
of experience running a production registry using this codebase.

1. Developers write code and associated unit tests verifying that the new code
   works properly.
2. New features or potentially risky bug fixes are pushed to Alpha and tested
   by the developers before being committed to the source code repository.
3. New builds are cut and first pushed to Sandbox.
4. Once a build has been running successfully in Sandbox for a day with no
   errors, it can be pushed to Production.
All [cron tasks](https://cloud.google.com/appengine/docs/java/config/cron) are
specified in `cron.xml` files, with one per environment. There are more tasks
that execute in Production than in other environments, because tasks like
uploading RDE dumps are only done for the live system. Cron tasks execute on the
`backend` service.

Most cron tasks use the `TldFanoutAction` which is accessed via the
`/_dr/cron/fanout` URL path. This action, which is run by the BackendServlet on
separately for each TLD, such as RDE exports and NORDN uploads. It's simpler to
have a single cron entry that will create tasks for all TLDs than to have to
specify a separate cron task for each action for each TLD (though that is still
an option). Task queues also provide retry semantics in the event of transient
failures that a raw cron task does not. This is why there are some tasks that do
not fan out across TLDs that still use `TldFanoutAction` -- it's so that the
tasks retry in the face of transient errors.

The full list of URL parameters to `TldFanoutAction` that can be specified in
cron.xml is:

* `endpoint` -- The path of the action that should be executed (see `web.xml`).
* `queue` -- The cron queue to enqueue tasks in.
* `forEachRealTld` -- Specifies that the task should be run in each TLD of type
  `REAL`. This can be combined with `forEachTestTld`.
* `forEachTestTld` -- Specifies that the task should be run in each TLD of type
  `TEST`. This can be combined with `forEachRealTld`.
* `runInEmpty` -- Specifies that the task should be run globally, i.e. just
  once, rather than individually per TLD. This is provided to allow tasks to
  retry. It is called "`runInEmpty`" for historical reasons.
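Putting the parameters above together, a fanout cron entry might look like the following. This is a hedged illustration only: the URL and parameter names come from the list above, but the endpoint, queue name, and schedule are made up for the example and are not taken from the actual Nomulus configuration.

```xml
<!-- Hypothetical cron.xml entry: fans a task out to every REAL and TEST TLD
     via TldFanoutAction on the backend service. The endpoint, queue, and
     schedule shown here are illustrative assumptions. -->
<cron>
  <url><![CDATA[/_dr/cron/fanout?queue=rde-upload&endpoint=/_dr/task/rdeStaging&forEachRealTld&forEachTestTld]]></url>
  <description>Enqueue an RDE staging task for each TLD.</description>
  <schedule>every day 00:07</schedule>
  <target>backend</target>
</cron>
```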

## Cloud Datastore

The Domain Registry platform uses
[Cloud Datastore](https://cloud.google.com/appengine/docs/java/datastore/) as
its primary database. Cloud Datastore is a NoSQL document database that
provides automatic horizontal scaling, high performance, and high availability.
All information that is persisted to Cloud Datastore takes the form of Java
classes annotated with `@Entity` that are located in the `model` package. The
[Objectify library](https://cloud.google.com/appengine/docs/java/gettingstarted/using-datastore-objectify)
is used to persist instances of these classes in a format that Datastore
understands.
registry codebase:

* `_ah_SESSION` -- These entities track App Engine client sessions.
* `_GAE_MR_*` -- These entities are generated by App Engine while running
  MapReduces.
* `BackupStatus` -- There should only be one of these entities, used to
  maintain the state of the backup process.
* `Cancellation` -- A cancellation is a special type of billing event which
  represents the cancellation of another billing event such as a OneTime or
  Recurring.
* `ContactResource` -- These hold the ICANN contact information (but not
  registrar contacts, who have a separate entity type).
* `Cursor` -- We use Cursor entities to maintain state about daily processes,
  remembering which dates have been processed. For instance, for the RDE
  export, Cursor entities maintain the date up to which each TLD has been
  exported.
* `DomainApplicationIndex` -- These hold domain applications received during
  the sunrise period.
* `DomainBase` -- These hold the ICANN domain information.
* `DomainRecord` -- These are used during the DNS update process.
* `EntityGroupRoot` -- There is only one EntityGroupRoot entity, which serves
  as the Datastore parent of many other entities.
* `EppResourceIndex` -- These entities allow enumeration of EPP resources
  (such as domains, hosts and contacts), which would otherwise be difficult to
  do in Datastore.
* `ExceptionReportEntity` -- These entities are generated automatically by
  ECatcher, a Google-internal logging and debugging tool. Non-Google users
  should not encounter these entries.
* `ForeignKeyContactIndex`, `ForeignKeyDomainIndex`, and
  `ForeignKeyHostIndex` -- These act as a unique index on contacts, domains
  and hosts, allowing transactional lookup by foreign key.
* `HistoryEntry` -- A HistoryEntry is the record of a command which mutated an
  EPP resource. It serves as the parent of BillingEvents and PollMessages.
* `HostRecord` -- These are used during the DNS update process.
* `Lock` -- Lock entities are used to control access to a shared resource such
  as an App Engine queue. Under ordinary circumstances, these locks will be
  cleaned up automatically, and should not accumulate.
* `LogsExportCursor` -- This is a single entity which maintains the state of
  log export.
* `MR-*` -- These entities are generated by the App Engine MapReduce library
  in the course of running MapReduces.
* `Modification` -- A Modification is a special type of billing event which
  represents the modification of a OneTime billing event.
* `OneTime` -- A OneTime is a billing event which represents a one-time charge
  or credit to the client (as opposed to Recurring).
* `pipeline-*` -- These entities are also generated by the App Engine
  MapReduce library.
* `PollMessage` -- PollMessages are generated by the system to notify
  registrars of asynchronous responses and status changes.
* `PremiumList`, `PremiumListEntry`, and `PremiumListRevision` -- The standard
  method for determining which domain names receive premium pricing is to
  maintain a static list of premium names. Each PremiumList contains some
  number of PremiumListRevisions, each of which in turn contains a
  PremiumListEntry for each premium name.
* `RdeRevision` -- These entities are used by the RDE subsystem in the process
  of generating files.
* `Recurring` -- A Recurring is a billing event which represents a recurring
  stored in a special RegistrarContact entity.
* `RegistrarCredit` and `RegistrarCreditBalance` -- The system supports the
  concept of a registrar credit balance, which is a pool of credit that the
  registrar can use to offset amounts they owe. This might come from
  promotions, for instance. These entities maintain registrars' balances.
* `Registry` -- These hold information about the TLDs supported by the
  Registry system.
* `RegistryCursor` -- These entities are the predecessor to the Cursor
  entities. We are no longer using them, and will be deleting them soon.
* `ReservedList` -- Each ReservedList entity represents an entire list of
  for generating tokens such as XSRF tokens.
* `SignedMarkRevocationList` -- The entities together contain the Signed Mark
  Data Revocation List file downloaded from the TMCH MarksDB each day. Each
  entity contains up to 10,000 rows of the file, so depending on the size of
  the file, there will be some handful of entities.
* `TmchCrl` -- This is a single entity containing ICANN's TMCH CA Certificate
  Revocation List.

## Cloud Storage buckets

The Domain Registry platform uses
[Cloud Storage](https://cloud.google.com/storage/) for bulk storage of large
flat files that aren't suitable for Datastore. These files include backups, RDE
exports, Datastore snapshots (for ingestion into BigQuery), and reports. Each
bucket name must be unique across all of Google Cloud Storage, so we use the
common recommended pattern of prefixing all buckets with the name of the App
Engine app (which is itself globally unique). Most of the bucket names are
configurable, but the defaults are as follows, with PROJECT standing in as a
placeholder for the App Engine app name:

* `PROJECT-billing` -- Monthly invoice files for each registrar.
* `PROJECT-commits` -- Daily exports of commit logs that are needed for
* `PROJECT-gcs-logs` -- This bucket is used at Google to store the GCS access
  logs and storage data. This bucket is not required by the Registry system,
  but can provide useful logging information. For instructions on setup, see
  the
  [Cloud Storage documentation](https://cloud.google.com/storage/docs/access-logs).
* `PROJECT-icann-brda` -- This bucket contains the weekly ICANN BRDA files.
  There is no lifecycle expiration; we keep a history of all the files. This
  bucket must exist for the BRDA process to function.
* `PROJECT-icann-zfa` -- This bucket contains the most recent ICANN ZFA files.
  No lifecycle is needed, because the files are overwritten each time.
* `PROJECT-rde` -- This bucket contains RDE exports, which should then be
  regularly uploaded to the escrow provider. Lifecycle is set to 90 days. The
  bucket must exist.
*   `PROJECT.appspot.com` -- Temporary MapReduce files are stored here. By
    default, the App Engine MapReduce library places its temporary files in a
    bucket named {project}.appspot.com. This bucket must exist. To keep
    temporary files from building up, a 90-day or 180-day lifecycle should be
    applied to the bucket, depending on how long you want to be able to go back
    and debug MapReduce problems. At 30 GB per day of generated temporary
    files, this bucket may be the largest consumer of storage, so only save
    what you actually use.
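The 90- and 180-day expirations described above are ordinary GCS object
lifecycle policies. A minimal sketch of setting one with `gsutil` (the bucket
name is a placeholder, and the `gsutil` call is commented out because it
requires credentials and an existing bucket):

```shell
# Sketch: a GCS lifecycle policy that deletes objects after 90 days.
# Adjust the age per bucket as described above (e.g. 180 days if you want a
# longer MapReduce debugging window).
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
EOF

# Apply it to a bucket (placeholder name; requires gsutil and credentials):
# gsutil lifecycle set lifecycle.json gs://PROJECT-rde
```
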
## Commit logs

# Configuration

There are multiple different kinds of configuration that go into getting a
working registry system up and running. Broadly speaking, configuration works in
two ways -- globally, for the entire system, and per-TLD. Global configuration
is managed by editing code and deploying a new version, whereas per-TLD
configuration is data that lives in Datastore in `Registry` entities, and is
updated by running `registry_tool` commands without having to deploy a new
version.

## Environments

If you are not writing new code to implement custom features, it is unlikely
that you will need to make any modifications beyond simple changes to
`application.xml` and `appengine-web.xml`. If you are writing new features, it's
likely you'll need to add cronjobs, URL paths, Datastore indexes, and task
queues, and thus edit those associated XML files.

## Global configuration

You will need to write a replacement module for `DummyKeyringModule` that loads
the credentials in a secure way, and provides them using either an instance of
`InMemoryKeyring` or your own custom implementation of `Keyring`. You then need
to replace all usages of `DummyKeyringModule` with your own module in all of the
per-service components in which it is referenced. The functions in `PgpHelper`
will likely prove useful for loading keys stored in PGP format into the PGP key
classes that you'll need to provide from `Keyring`, and you can see examples of
them in action in `DummyKeyringModule`.

## Per-TLD configuration

`Registry` entities, which are persisted to Datastore, are used for per-TLD
configuration. They contain any kind of configuration that is specific to a TLD,
such as the create/renew price of a domain name, the pricing engine
implementation, the DNS writer implementation, whether escrow exports are
enabled, the default currency, the reserved label lists, and more. The
`update_tld` command in `registry_tool` is used to set all of these options. See
the "Registry tool" documentation for more information, as well as the
command-line help for the `update_tld` command. Unlike global configuration
above, per-TLD configuration options are stored as data in the running system,
and thus do not require code pushes to update.

## Prerequisites

*   A recent version of the
    [Java 7 JDK](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html)
    (note that Java 8 support should be coming to App Engine soon).
*   [Bazel](http://bazel.io/), which is the build system that the Domain
    Registry project uses. The minimum required version is 0.3.1.
*   [Google App Engine SDK for Java](https://cloud.google.com/appengine/downloads#Google_App_Engine_SDK_for_Java),
    especially `appcfg`, which is a command-line tool that runs locally that is
    used to communicate with the App Engine cloud.
*   [Create an application](https://cloud.google.com/appengine/docs/java/quickstart)
    on App Engine to deploy to, and set up `appcfg` to connect to it.

## Downloading the code

Start off by grabbing the latest version from the
[Domain Registry project on GitHub](https://github.com/google/domain-registry).
This can be done either by cloning the Git repo (if you expect to make code
changes to contribute back), or simply by downloading the latest release as a
zip file. This guide will cover cloning from Git, but should work almost
identically for downloading the zip file.

    $ git clone git@github.com:google/domain-registry.git
    Cloning into 'domain-registry'...

## Building and verifying the code

The first step is to verify that the project successfully builds. This will also
download and install dependencies.

    $ bazel --batch build //java{,tests}/google/registry/...
    INFO: Found 584 targets...

It is recommended to at least confirm that the default version of the code can
be pushed at all first before diving into that, with the expectation that things
won't work properly until they are configured.

All of the [EAR](https://en.wikipedia.org/wiki/EAR_\(file_format\)) and
[WAR](https://en.wikipedia.org/wiki/WAR_\(file_format\)) files for the different
environments, which were built in the previous step, are outputted to the
`bazel-genfiles` directory as follows:

|
|||
environment, with there being three services in total: default, backend, and
|
||||
tools.
|
||||
|
||||
Then, use `appcfg` to [deploy the WAR files](https://cloud.google.com/appengine/docs/java/tools/uploadinganapp):
|
||||
Then, use `appcfg` to [deploy the WAR files]
|
||||
(https://cloud.google.com/appengine/docs/java/tools/uploadinganapp):
|
||||
|
||||
$ cd /path/to/downloaded/appengine/app
|
||||
$ /path/to/appcfg.sh update /path/to/registry_default.war
|
||||
|
It'll never be created for real on the Internet at large.

    Perform this command? (y/N): y
    Updated 1 entities.

The name of the TLD is the main parameter passed to the command. The initial TLD
state is set here to general availability, bypassing sunrise and landrush, so
that domain names can be created immediately in the following steps. The TLD
type is set to `TEST` (the other alternative being `REAL`) for obvious reasons.

`roid_suffix` is the suffix that will be used for repository ids of domains on
the TLD -- it must be all uppercase and a maximum of eight ASCII characters.
ICANN
[recommends](https://www.icann.org/resources/pages/correction-non-compliant-roids-2015-08-26-en)
a unique ROID suffix per TLD. The easiest way to come up with one is to simply
use the entire uppercased TLD string if it is eight characters or fewer, or
abbreviate it in some sensible way down to eight if it is longer. The full repo
id of a domain resource is a hex string followed by the suffix, e.g.
`12F7CDF3-EXAMPLE` for our example TLD.
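The uppercase-and-truncate heuristic can be sketched in a couple of lines of
shell (the `roid_suffix` helper is ours for illustration, not part of
`registry_tool`):

```shell
# Derive a candidate ROID suffix: uppercase the TLD and keep at most eight
# characters. Hand-pick an abbreviation instead if the truncation is awkward.
roid_suffix() {
  printf '%s\n' "$1" | tr '[:lower:]' '[:upper:]' | cut -c1-8
}

roid_suffix example      # prints EXAMPLE
roid_suffix photography  # prints PHOTOGRA
```

A full repo id then pairs the hex string with the suffix, e.g.
`12F7CDF3-EXAMPLE`.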
### Create a registrar

In the command above, "acme" is the internal registrar id that is the primary
key used to refer to the registrar. The `name` is the display name that is used
less often, primarily in user interfaces. We again set the type of the resource
here to `TEST`. The `password` is the EPP password that the registrar uses to
log in with. The `icann_referral_email` is the email address associated with the
initial creation of the registrar -- note that the registrar cannot change it
later. The address fields are self-explanatory (note that other parameters are
available for international addresses). The `allowed_tlds` parameter is a
comma-delimited list of TLDs that the registrar has access to, and here is set
to the example TLD.

# Registry tool

The registry tool is a command-line registry administration tool that is invoked
using the `registry_tool` command. It has the ability to view and change a large
number of things in a running domain registry environment, including creating
registrars, updating premium and reserved lists, running an EPP command from a
given XML file, and performing various backend tasks like re-running RDE if the
most recent export failed. Its code lives inside the tools package
(`java/google/registry/tools`), and is compiled by building the `registry_tool`
target in the Bazel BUILD file in that package.

Many of them are grouped using sub-interfaces or abstract classes that provide
additional functionality. The most common patterns that are used by a large
number of other tools are:

*   **`BigqueryCommand`** -- Provides a connection to BigQuery for tools that
    need it.
*   **`ConfirmingCommand`** -- Provides the methods `prompt()` and `execute()`
    to override. `prompt()` outputs a message (usually what the command is
    going to do) and prompts the user to confirm execution of the command, and
    then `execute()` actually does it.
*   **`EppToolCommand`** -- Commands that work by executing EPP commands against
    the server, usually by filling in a template with parameters that were
    passed on the command-line.
*   **`MutatingEppToolCommand`** -- A sub-class of `EppToolCommand` that
    provides a `--dry_run` flag, that, if passed, will display the output from
    the server of what the command would've done without actually committing
    those changes.
*   **`GetEppResourceCommand`** -- Gets individual EPP resources from the server
    and outputs them.
*   **`ListObjectsCommand`** -- Lists all objects of a specific type from the
    server and outputs them.
*   **`MutatingCommand`** -- Provides a facility to create or update entities in
    Datastore, and uses a diff algorithm to display the changes that will be
    made before committing them.
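The prompt-then-execute flow of `ConfirmingCommand` can be mimicked with a small
shell sketch (the messages are illustrative; the real implementation is the Java
`ConfirmingCommand` class described above):

```shell
# Mimic the ConfirmingCommand pattern: describe the pending change, ask for
# confirmation, and only execute on an explicit "y". Messages are illustrative.
confirming_command() {
  echo "Update TLD 'example'."           # prompt(): say what will happen
  printf 'Perform this command? (y/N): '
  read -r answer
  case "$answer" in
    [yY]) echo "Updated 1 entities." ;;  # execute(): commit the change
    *)    echo "Command aborted." ;;
  esac
}

# Example: echo y | confirming_command
```

Anything other than an explicit "y" aborts, which is the behavior you see in the
`registry_tool` transcripts earlier in these docs.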