google-nomulus/release/cloudbuild-deploy.yaml
Lai Jiang f0919f9524 Temporarily disable spec 11 pipeline deployment in GCB (#755)
The current setup causes the GCB job to fail validation and not run, because it
uses backticks in the configuration YAML, which is not allowed: there is no
shell to perform backtick substitution. See the error message here:

https://spinnaker.endpoints.domain-registry-dev.cloud.goog/gate/pipelines/01EF5GRMD625613H6Z033DBD3Z

In the future, please make sure to test the GCB pipeline as instructed in
the comments at the beginning of each file before committing.
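
To illustrate the constraint, here is a minimal sketch (the step contents are
made up, and the exact construct that failed validation is not reproduced
here). GCB passes args directly to the container, so anything that needs
command substitution has to go through an explicit shell entrypoint:

```
# Broken: there is no shell in the args list, so the backticks are never substituted.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  args: ['echo', '`date`']

# Works: /bin/bash performs the substitution itself.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  entrypoint: /bin/bash
  args: ['-c', 'echo "$(date)"']
```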

I tried to work around it by downloading the nomulus tool JAR file
instead (running the nomulus-tool Docker image inside another Docker
image is not advisable). However, the "nomulus deploy_spec11_pipeline"
command still fails, and I'm not sure why. Has the command itself been
tested locally? The error message is shown below:

```
Step #2: Aug 09, 2020 3:11:46 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
Step #2: WARNING: --region not set; will default to us-central1. Future releases of Beam will require the user to set the region explicitly. https://cloud.google.com/compute/docs/regions-zones/regions-zones
Step #2: Aug 09, 2020 3:11:46 AM org.apache.beam.sdk.extensions.gcp.options.GcpOptions$GcpTempLocationFactory tryCreateDefaultBucket
Step #2: INFO: No tempLocation specified, attempting to use default bucket: dataflow-staging-us-central1-937378958468
Step #2: Aug 09, 2020 3:11:47 AM org.apache.beam.sdk.extensions.gcp.util.RetryHttpRequestInitializer$LoggingHttpBackOffHandler handleResponse
Step #2: WARNING: Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://www.googleapis.com/storage/v1/b?predefinedAcl=projectPrivate&predefinedDefaultObjectAcl=projectPrivate&project=domain-registry-alpha.
Step #2: Exception in thread "main"
Step #2: java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
Step #2:        at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:224)
Step #2:        at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:155)
Step #2:        at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:55)
Step #2:        at org.apache.beam.sdk.Pipeline.create(Pipeline.java:147)
Step #2:        at google.registry.beam.spec11.Spec11Pipeline.deploy(Spec11Pipeline.java:157)
Step #2:        at google.registry.tools.DeploySpec11PipelineCommand.run(DeploySpec11PipelineCommand.java:80)
Step #2:        at google.registry.tools.RegistryCli.runCommand(RegistryCli.java:257)
Step #2:        at google.registry.tools.RegistryCli.run(RegistryCli.java:182)
Step #2:        at google.registry.tools.RegistryTool.main(RegistryTool.java:129)
Step #2: Caused by: java.lang.reflect.InvocationTargetException
Step #2:        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Step #2:        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Step #2:        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Step #2:        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Step #2:        at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:214)
Step #2:        ... 8 more
Step #2: Caused by: java.lang.IllegalArgumentException: Unable to use ClassLoader to detect classpath elements. Current ClassLoader is jdk.internal.loader.ClassLoaders$AppClassLoader@5cb0d902, only URLClassLoaders are supported.
Step #2:        at org.apache.beam.runners.core.construction.PipelineResources.detectClassPathResourcesToStage(PipelineResources.java:58)
Step #2:        at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:285)
Step #2:        ... 13 more
```

Lastly, the "--project" flag refers to the KMS project. While I'm not sure
which project that is, I don't think we can use the PROJECT_ID variable:
it is a GCB-substituted variable that refers to the project the GCB job
runs in, which in our case means domain-registry-dev. We shouldn't use
that project for KMS. I've changed it to the same project as the one we
are deploying to, but please note that we have a separate project,
${project_id}-keys, that is used for all KMS purposes. It is specified in
the config file, so if that's what you meant to use, there is no need to
specify it on the command line. And if you meant to use the project being
deployed to for KMS, it shouldn't be necessary to specify it separately
either, as that information is already known once you specify
"nomulus -e ENV".

https://team.git.corp.google.com/domain-registry-eng/nomulus-internal/+/refs/heads/master/core/src/main/java/google/registry/config/files/nomulus-config-production.yaml#168

Can you add more description of what the KMS project is supposed to be?
I don't think we specify a project for KMS purposes in any other
commands.
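
For context, a hedged sketch of what that config entry might look like is
below; the field names are assumptions used for illustration and are not
copied from the linked file:

```
# Hypothetical excerpt from nomulus-config-production.yaml (field names assumed)
kms:
  # Dedicated ${project_id}-keys project used for all KMS purposes.
  projectId: domain-registry-keys
  keyringName: nomulus-keyring
```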

Given that there are several unresolved issues, I've commented out my
proposed solution so that deployment can proceed.
2020-08-09 22:41:31 -04:00


# To run the build locally, install cloud-build-local first.
# Then run:
# cloud-build-local --config=cloudbuild-deploy.yaml --dryrun=false \
#   --substitutions=TAG_NAME=[TAG],_ENV=[ENV] ..
#
# This will deploy Beam pipelines to GCS for the PROJECT_ID defined in gcloud
# tool.
#
# To manually trigger a build on GCB, run:
# gcloud builds submit --config=cloudbuild-deploy.yaml \
#   --substitutions=TAG_NAME=[TAG],_ENV=[ENV] ..
#
# To trigger a build automatically, follow the instructions below and add a trigger:
# https://cloud.google.com/cloud-build/docs/running-builds/automate-builds
#
# Note: to work around an issue in Spinnaker's 'Deployment Manifest' stage,
# variable references must avoid the ${var} format. Valid formats include
# $var or ${"${var}"}. This file uses the former. Since TAG_NAME and _ENV are
# expanded in the copies sent to Spinnaker, we preserve the brackets around
# them for safe pattern matching during release.
# See https://github.com/spinnaker/spinnaker/issues/3028 for more information.
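# For example, in the steps below:
#   $project_id, $filename  - bash variables, referenced without brackets
#   ${TAG_NAME}, ${_ENV}    - GCB substitutions, expanded before the copies reach
#                             Spinnaker, so their brackets are preserved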
steps:
# Pull the credential for nomulus tool.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  args:
  - gsutil
  - cp
  - gs://$PROJECT_ID-deploy/secrets/tool-credential.json.enc
  - .
# Decrypt the credential.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  entrypoint: /bin/bash
  args:
  - -c
  - |
    set -e
    cat tool-credential.json.enc | base64 -d | gcloud kms decrypt \
      --ciphertext-file=- --plaintext-file=tool-credential.json \
      --location=global --keyring=nomulus-tool-keyring --key=nomulus-tool-key
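# For reference, the encrypted credential is presumably produced by the inverse
# operation (an assumption; the original encryption command is not part of this
# file), along the lines of:
#   gcloud kms encrypt --plaintext-file=tool-credential.json --ciphertext-file=- \
#     --location=global --keyring=nomulus-tool-keyring --key=nomulus-tool-key \
#     | base64 > tool-credential.json.enc
#   gsutil cp tool-credential.json.enc gs://$PROJECT_ID-deploy/secrets/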
# Deploy the Spec11 pipeline to GCS.
#- name: 'gcr.io/$PROJECT_ID/builder:latest'
#  entrypoint: /bin/bash
#  args:
#  - -c
#  - |
#    set -e
#    if [ ${_ENV} == production ]; then
#      project_id="domain-registry"
#    else
#      project_id="domain-registry-${_ENV}"
#    fi
#    echo "gs://$${project_id}-beam/cloudsql/admin_credential.enc"
#    gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/nomulus.jar .
#    java -jar nomulus.jar -e ${_ENV} --credential tool-credential.json \
#      --sql_access_info \
#      "gs://$${project_id}-beam/cloudsql/admin_credential.enc" \
#      deploy_spec11_pipeline --project $${project_id}
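# (In the commented-out step above, "$$" escapes Cloud Build substitution, so
# bash would receive "${project_id}" at run time, while $PROJECT_ID, ${TAG_NAME}
# and ${_ENV} are substituted by Cloud Build itself.)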
# Deploy the invoicing pipeline to GCS.
- name: 'gcr.io/$PROJECT_ID/nomulus-tool:latest'
  args:
  - -e
  - ${_ENV}
  - --credential
  - tool-credential.json
  - deploy_invoicing_pipeline
# Save the deployed tag for the current environment on GCS. Because of b/137891685
# which causes the for-loop in the next step to fail, this may not be the last step.
# TODO(weiminyu): do this in last step.
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  entrypoint: /bin/bash
  args:
  - -c
  - |
    set -e
    echo ${TAG_NAME} | \
      gsutil cp - gs://$PROJECT_ID-deployed-tags/nomulus.${_ENV}.tag
# Deploy the GAE config files.
# First authorize the gcloud tool to use the credential json file, then
# download and unzip the tarball that contains the relevant config files
- name: 'gcr.io/$PROJECT_ID/builder:latest'
  entrypoint: /bin/bash
  args:
  - -c
  - |
    set -e
    gcloud auth activate-service-account --key-file=tool-credential.json
    if [ ${_ENV} == production ]; then
      project_id="domain-registry"
    else
      project_id="domain-registry-${_ENV}"
    fi
    gsutil cp gs://$PROJECT_ID-deploy/${TAG_NAME}/${_ENV}.tar .
    tar -xvf ${_ENV}.tar
    # Note that this currently does not work for google.com projects that
    # we use due to b/137891685. External projects are likely to work.
    for filename in cron dispatch dos index queue; do
      gcloud -q --project $project_id app deploy \
        default/WEB-INF/appengine-generated/$filename.yaml
    done
timeout: 3600s
options:
  machineType: 'N1_HIGHCPU_8'