* Add a tools command to launch SQL validation job
Stop using Pipeline.run().waitUntilFinish() in
ValidateDatastorePipeline. Flex Templates do not support a blocking
wait in the main thread.
This PR adds a new ValidateSqlCommand that launches the pipeline and
maintains the SQL snapshot while the pipeline is running.
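As a rough sketch of the non-blocking pattern this implies (assuming a
generic Beam pipeline; the class name, polling loop, and sleep interval
are illustrative, not the actual command's code):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class NonBlockingLaunch {
  public static void main(String[] args) throws InterruptedException {
    Pipeline pipeline =
        Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());
    // ... transforms would be applied to the pipeline here ...

    // Launch the job and return immediately instead of calling
    // result.waitUntilFinish(), which Flex Templates do not allow in
    // the main thread.
    PipelineResult result = pipeline.run();
    while (result.getState() == PipelineResult.State.RUNNING) {
      // While the job runs, the launcher can do bookkeeping such as
      // keeping the SQL snapshot alive (illustrative placeholder).
      Thread.sleep(60_000);
    }
  }
}
```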
This PR also adds more parameters to both ValidateSqlCommand and
ValidateDatastoreCommand:
- The -c option to supply an optional incremental comparison start time
- The -r option to supply an optional release tag that is not 'live',
e.g., nomulus-YYYYMMDD-RC00
If the manual launch option (-m) is enabled, the commands will print the
gcloud command that can launch the pipeline.
Tested with sandbox, qa and the dev project.
The gcloud command sorts output unexpectedly when a custom format is
used. Here we instead rely on the Linux sort and head commands to sort
the versions list.
The uberjar task and the uber jar name are now different
(beamPipelineCommon and beam_pipeline_common, respectively). This is
more idiomatic with regard to naming conventions, but we now need two
different variables.
This is the first part of the RdeStagingAction SQL migration where the
mapper logic is implemented in Beam.
A few helper methods are added to convert the DomainContent, HostBase
and ContactBase to their respective terminal child classes. This is
possible because the child classes do not have extra fields and the
base classes exist only to be embedded in other entities (such as the
various HistoryEntry entities). The conversion is necessary
because most of our code expects the terminal classes, such as the
RdeMarshaller's various marshallXXX() methods. The alternative would be
to change all the call sites, which seems to be much more disruptive.
Unfortunately, there is no better way to do this conversion than to
create a builder and set every field there is.
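To make the builder-copy shape concrete, here is a minimal,
self-contained sketch; WidgetBase/Widget are hypothetical stand-ins for
the real base/terminal pairs, which carry many more fields:

```java
// Hypothetical base class: embedded in other entities, never used directly.
class WidgetBase {
  protected String name;
  protected String repoId;
}

// Hypothetical terminal class: adds no fields of its own.
final class Widget extends WidgetBase {
  static final class Builder {
    private final Widget widget = new Widget();

    Builder setName(String name) {
      widget.name = name;
      return this;
    }

    Builder setRepoId(String repoId) {
      widget.repoId = repoId;
      return this;
    }

    Widget build() {
      return widget;
    }
  }

  // The conversion helper: create a builder and copy every field from
  // the embedded base instance, since there is no cheaper cast.
  static Widget fromBase(WidgetBase base) {
    return new Builder().setName(base.name).setRepoId(base.repoId).build();
  }
}
```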
* Use SecretManager for nomulus-tool-cloudbuild cred
Store cloudbuild's nomulus-tool credential in SecretManager and make
the deployment pipeline load it from SecretManager.
The tool-credential.json.enc file in the
gs://domain-registry-dev-deploy/secrets folder is no longer needed.
There has been a case where CI broke on a Friday, no one noticed or
fixed it, and an RC build was cut with broken tests.
The tests were disabled due to unknown test failures that have since
been fixed.
Also update the machine type used by GCB to be more powerful. This is
necessary for the tests to pass because N1_HIGHCPU_8 is RAM-constrained
and the tests crash. I updated all jobs to use the new type, which will
hopefully make the build faster as well.
This is similar to the migration of the spec11 pipeline in #1073. Also removed
a few Dagger providers that are no longer needed.
TESTED=tested the dataflow job on alpha.
* Migrate Spec11 pipeline to flex template
Unfortunately this PR has turned out to be much bigger than I initially
anticipated. However, there is no good way to split it up because the
changes are intertwined. This PR includes 3 main changes:
1. Change the spec11 pipeline to use Dataflow Flex Template.
2. Retire the use of the old JPA layer that relies on credentials saved
in KMS.
3. Some extensive refactoring to streamline the logic and improve test
isolation.
* Fix job name and remove projectId from options
* Add parameter logs
* Set RegistryEnvironment
* Remove logging and modify safe browsing API key regex
* Rename a test method and rebase
* Remove unused Junit extension
* Specify job region
* Add -r when rsyncing a release to the live folder
Release folders are no longer flat. Each of them has a 'beam'
subfolder with pipeline metadata files.
* Allow nom_build to run in Cloudbuild
Our builder comes with Python 3.6 and cannot support nom_build out of
the box: nom_build requires dataclasses, which was introduced in Python
3.7. I haven't found an easy way to get Python 3.7+ without changing
the base Linux image, so this PR explicitly installs dataclasses.
* Use shared jar to stage BEAM pipeline if possible
Allow multiple BEAM pipelines with the same classes and dependencies to
share one Uber jar.
Added metadata for BulkDeleteDatastorePipeline.
Updated shell and Cloud Build scripts to stage all pipelines in one
step.
* Stage the init_sql_pipeline in CloudBuild
Defined metadata file and added Gradle uberJar task for the pipeline,
which are needed for staging.
Updated the cloud build script to stage this pipeline during the build
process.
* Script to rolling-start Nomulus
Add a script to restart Nomulus non-disruptively. This can be used after
a configuration change to external resources (e.g., Cloud SQL
credential) to make Nomulus pick up the latest config.
Also added proper support for paging-based List API methods, replacing
the current hack that forces the server to return everything in one
response. The List method for instances has a lower page-size limit
than the others, which is not sufficient for our project.
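A minimal sketch of the paging pattern, with hypothetical Page/Lister
shapes standing in for the actual List API client:

```java
import java.util.ArrayList;
import java.util.List;

public class PagedLister {
  // Hypothetical shape of one page of a List API response.
  interface Page<T> {
    List<T> items();
    String nextPageToken(); // null or empty when there are no more pages
  }

  // Hypothetical client call that fetches one page at a time.
  interface Lister<T> {
    Page<T> list(int pageSize, String pageToken);
  }

  // Follow nextPageToken until exhausted, instead of forcing the server
  // to return everything in one response.
  static <T> List<T> listAll(Lister<T> lister, int pageSize) {
    List<T> all = new ArrayList<>();
    String token = null;
    do {
      Page<T> page = lister.list(pageSize, token);
      all.addAll(page.items());
      token = page.nextPageToken();
    } while (token != null && !token.isEmpty());
    return all;
  }
}
```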
* Sync the live folder after Nomulus rollback
To update the nomulus tool on corp desktop, the artifacts from the
rollback target release should be copied to the 'live' folder.
* Fix a test
* An automated rollback tool for Nomulus
A tool that directs traffic between deployed versions. It handles the
conversion between Nomulus tags and AppEngine versions, executes schema
compatibility tests, ensures that steps are executed in the correct order,
and updates deployment records appropriately.
* Maintain a release-to-Version map in deployment
Keep track of the mapping between Nomulus release tags and AppEngine
version ids with a mapping file. This is necessary because AppEngine
does not support custom versioning. With this mapping, rollbacks can be
automated. Automating rollbacks is important because there is
test-supporting metadata to update, which is easily forgotten.
During the last stage of deployment, current per-service version ids
are fetched using gcloud and are appended to a file on GCS. Each line
is of the format "{RELEASE_TAG},{APPENGINE_SERVICE},{APPENGINE_VERSION}".
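For illustration, a hedged sketch of reading that mapping back for a
rollback; the line layout is as stated above, while the lookup-key
shape and class name are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class VersionMap {
  // Parse "{RELEASE_TAG},{APPENGINE_SERVICE},{APPENGINE_VERSION}" lines
  // into a lookup from "tag/service" to the AppEngine version id, so a
  // rollback can find the version to route traffic back to.
  static Map<String, String> parse(Iterable<String> lines) {
    Map<String, String> versionByTagAndService = new HashMap<>();
    for (String line : lines) {
      String[] parts = line.split(",", 3);
      versionByTagAndService.put(parts[0] + "/" + parts[1], parts[2]);
    }
    return versionByTagAndService;
  }
}
```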
This change has been tested in crash. The rollback script is still a
work in progress.
The current setup causes the GCB job to fail validation and not run because it
uses backticks in the configuration yaml, which is not allowed -- there is no
shell to perform backtick substitution. See the error message here:
https://spinnaker.endpoints.domain-registry-dev.cloud.goog/gate/pipelines/01EF5GRMD625613H6Z033DBD3Z
In the future please make sure to test the GCB pipeline as instructed in
the comments at the beginning of each file before committing.
I tried to work around it by downloading the nomulus tool jar file
instead (running the nomulus-tool Docker image inside another Docker
container is
not advisable). However the "nomulus deploy_spec11_pipeline" command
still fails. I'm not sure why. Has the command itself been tested
locally? The error message is shown below:
```
Step #2: Aug 09, 2020 3:11:46 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
Step #2: WARNING: --region not set; will default to us-central1. Future releases of Beam will require the user to set the region explicitly. https://cloud.google.com/compute/docs/regions-zones/regions-zones
Step #2: Aug 09, 2020 3:11:46 AM org.apache.beam.sdk.extensions.gcp.options.GcpOptions$GcpTempLocationFactory tryCreateDefaultBucket
Step #2: INFO: No tempLocation specified, attempting to use default bucket: dataflow-staging-us-central1-937378958468
Step #2: Aug 09, 2020 3:11:47 AM org.apache.beam.sdk.extensions.gcp.util.RetryHttpRequestInitializer$LoggingHttpBackOffHandler handleResponse
Step #2: WARNING: Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://www.googleapis.com/storage/v1/b?predefinedAcl=projectPrivate&predefinedDefaultObjectAcl=projectPrivate&project=domain-registry-alpha.
Step #2: Exception in thread "main"
Step #2: java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
Step #2: at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:224)
Step #2: at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:155)
Step #2:
Step #2: at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:55)
Step #2: at org.apache.beam.sdk.Pipeline.create(Pipeline.java:147)
Step #2:
Step #2: at google.registry.beam.spec11.Spec11Pipeline.deploy(Spec11Pipeline.java:157)
Step #2: at google.registry.tools.DeploySpec11PipelineCommand.run(DeploySpec11PipelineCommand.java:80)
Step #2: at google.registry.tools.RegistryCli.runCommand(RegistryCli.java:257)
Step #2: at google.registry.tools.RegistryCli.run(RegistryCli.java:182)
Step #2: at google.registry.tools.RegistryTool.main(RegistryTool.java:129)
Step #2: Caused by: java.lang.reflect.InvocationTargetException
Step #2: at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Step #2:
Step #2: at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Step #2: at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Step #2: at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Step #2: at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:214)
Step #2: ... 8 more
Step #2: Caused by: java.lang.IllegalArgumentException: Unable to use ClassLoader to detect classpath elements. Current ClassLoader is jdk.internal.loader.ClassLoaders$AppClassLoader@5cb0d902, only URLClassLoaders are supported.
Step #2: at org.apache.beam.runners.core.construction.PipelineResources.detectClassPathResourcesToStage(PipelineResources.java:58)
Step #2:
Step #2: at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:285)
Step #2:
Step #2: ... 13 more
```
Lastly, the "--project" flag refers to the KMS project. While I'm not
sure which project that is, I don't think we can use the PROJECT_ID
variable, as it is a GCB-substituted variable referring to the project
the GCB job runs in, which in our case means domain-registry-dev. We
shouldn't use that project for KMS. I've changed it to the same project
as the one we are deploying to, but note that we have a separate
project, ${project_id}-keys, that is used for all KMS purposes. This is
specified in the config file, so if that's what you meant to use, there
is no need to specify it on the command line. If you meant to use the
project being deployed to for KMS, it also shouldn't be necessary to
specify it separately, as this information is already known once you
specify "nomulus -e ENV".
https://team.git.corp.google.com/domain-registry-eng/nomulus-internal/+/refs/heads/master/core/src/main/java/google/registry/config/files/nomulus-config-production.yaml#168
Can you add more description on what the KMS project is supposed to be?
I don't think we specify a project for KMS purposes in any other
commands.
Given that there are several unresolved issues, I've commented out my
proposed solution so that deployment can proceed.
For reasons unclear at the moment, the tests are not passing. Disabling
them for now so that release candidates can be built. We have CI runs
after each merge, so we should have a good signal on whether the build
is broken.
The node installed by nvm gives errors when running "npm install".
This change also installs Python, as it is needed. Presumably the
system-provided npm version had Python as a dependency, so it was
installed when npm was installed.
* Replace jpaTm with a JpaSupplierFactory
* Style
* Style
* Pipeline takes in a SerializableSupplier instead (see the sketch below)
* Change the ordering of imports
* Test a good domain in addition to a bad one
* Rename and check good domain for Transact Answer
* Use standard Mockito verify
* Verify transact call and no more interactions
* Remove Answer comment
* Naming changes
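Sketch referenced above: the usual shape of such a supplier interface
and why it is needed (the common idiom, not necessarily Nomulus's exact
definition):

```java
import java.io.Serializable;
import java.util.function.Supplier;

// Beam serializes DoFns, and everything they capture, before shipping
// them to workers; a plain java.util.function.Supplier is not
// Serializable, hence this combined interface.
public interface SerializableSupplier<T> extends Supplier<T>, Serializable {}
```

A transform can then capture the supplier and call get() lazily on the
worker, instead of capturing a non-serializable JPA transaction manager
directly.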
* Deploy Spec 11 pipeline correctly
* Fix formatting of deploy file
* Use a file to persist state across Cloud Build steps
Co-authored-by: Gus Brodman <gbrodman@google.com>
* Use JSON API for Maven Repo on GCS
The URL pattern https://storage.googleapis.com/{Bucket}/{Path}
uses the legacy XML API, which seems to be less robust than
the JSON API. We have observed connection resets after a few
thousand-file download bursts over 30 minutes.
This PR changes all URLs to the registry's Maven repo on GCS to
gcs://{Bucket}/{Path}. Gradle uses the JSON API for such URLs.
TESTED=In Cloud Build with local change
* Use dependency cache in all Gradle tasks in GCB
Make the initial test and the final publishing steps use the shared
dependency cache.
Also make the initial test use the registry's own maven repo instead
of Maven Central.
* Make Gradle dependency cache shareable in GCB
Make Gradle put its caches in the source tree so that
they can be preserved across steps. When left at their
default location, caches are lost after each step.
* Work around Spinnaker issue wrt variables
Cloud Build variable references need to stay away from the ${var}
pattern to prevent Spinnaker from trying to resolve them. In all files
that are used by Spinnaker, we change variable references to the $var
form. We made the minimal change possible, and will revisit this issue
once a permanent solution is available.
* Run cross-release SQL integration tests
Run SQL integration tests across arbitrary schema and server
releases.
Refer to integration/README.md in this change for more information.
TESTED=Cloud build changes tested with cloud-build-local
Used the published jars to test sqlIntegration task locally.
* Stop publishing Cloud SQL schema jar to Maven repo
The original purpose of the Maven publication is for use in
server/schema compatibility tests. A command-line flag can direct a
test run to use different versions of the schema jar. However, this
won't work due to dependency locking.
Defined Docker image for schema deployment.
Included the schema deployer Docker image in the Cloud Build release
process.
Defined Cloud Build config for schema deployment.
TESTED=Used cloud-build-local to test deployment flow
TESTED=Used docker to test schema deployer image in more ways
* Release SQL schema in Cloud Build
Tentatively release SQL schema at the same time as the server release.
Publish schema jar to gs://domain-registry-maven-repository/nomulus
and also upload it with server artifacts.
Also removed the Gradle 'version' variable, which is not used.
Tested=On cloud-build with a simplified version of
cloudbuild-nomulus.yaml.
* Save release tag during deployment
* Save current tag for every environment
Store tag of the current deployment in each environment.
This is used by the server-sql compatibility test.