The image name in the manifest needs to be the same as the name that the Spinnaker trigger catches. With the new release, Spinnaker now correctly recognizes gcr.io/${PROJECT_ID}/proxy as the image name.
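For illustration, a minimal deployment manifest carrying that image name could look like this (the object and label names are placeholders, not taken from this repo; ${PROJECT_ID} is substituted before deployment):

    # Hypothetical GKE deployment manifest; the image name must exactly
    # match the one the Spinnaker trigger watches.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: proxy-deployment          # assumed name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: proxy
      template:
        metadata:
          labels:
            app: proxy
        spec:
          containers:
          - name: proxy
            image: gcr.io/${PROJECT_ID}/proxy   # name Spinnaker recognizes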
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=244845037
GCB will now upload the images to GCR and the manifests to GCS. A Spinnaker pipeline can then be triggered by the GCB Pub/Sub message and use both the image and the manifests to deploy the proxy to GKE.
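As a rough sketch (the step layout, bucket name, and manifest paths are assumptions, not taken from this repo), the GCB config could push the image and archive the manifests like so:

    # Hypothetical cloudbuild.yaml: build the proxy image, let GCB push it
    # to GCR, and upload the k8s manifests to GCS as build artifacts.
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/proxy', '.']
    images:
    - 'gcr.io/$PROJECT_ID/proxy'
    artifacts:
      objects:
        location: 'gs://$PROJECT_ID-deploy/'   # assumed bucket
        paths: ['kubernetes/*.yaml']           # assumed manifest paths

On a successful build, GCB publishes a status message to the cloud-builds Pub/Sub topic, which is what the Spinnaker trigger listens to.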
Also temporarily moves the customized Maven repo location while it is being worked on.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=239853011
- Created configs for Proxy server, GKE, and terraform
- Created sans_list file for use with tarsier client
- Updated allowedClients in registry server
TODO: Update dr-bashrc to support crash environment
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=236659249
1. Moved code for the GCP proxy to where the [] proxy code used to live.
2. Corrected reference to the GCP proxy location.
3. Misc changes to make ErrorProne and various tools happy.
+diekmann to LGTM terraform whitelist change.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=213630560
This should not cause any waste as the pods are only scaled up when necessary.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209881536
We need to support web WHOIS on the same IP addresses that we use for port 43 WHOIS. [] added support for HTTP(S) traffic on the proxy, which simply redirects to another website that actually hosts the web WHOIS service. This CL sets up the GCLB to route port 80 and port 443 traffic to the proxy.
We were using the TCP proxy load balancer for the other protocols that we support (EPP and WHOIS), but the TCP proxy LB only exposes port 443, not port 80. For port 443, we simply follow the same pattern and add another TCP proxy LB. For port 80, we had to use the HTTP LB, which exposes port 80 (on the same external IP addresses). This requires a different HTTP health check and a URL map. The added URL map is a dummy one that routes all paths to the same backend service, which serves the HTTP redirect.
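For reference, in the YAML form accepted by gcloud compute url-maps import, such a dummy map is little more than a default service (the resource names here are made up for illustration):

    # Hypothetical URL map: no host or path rules, so every request falls
    # through to the default backend service that serves the HTTP redirect.
    name: proxy-http-redirect
    defaultService: https://www.googleapis.com/compute/v1/projects/${PROJECT_ID}/global/backendServices/proxy-redirect-backend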
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=206409007
This also introduces a production canary environment, similar to sandbox canary. The docker tags are changed to "live" and "sandbox" respectively, to reflect the fact that different images may be used for prod and sandbox.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=204343530
This makes it easier to debug issues. There are no privacy concerns in sandbox.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=197045576
The autoscaling manifest doesn't really change much from environment to environment. It makes sense to move it to the service yaml file, which is not environment dependent.
Also enhanced the bashrc function to update the deployment manifest when deploying the proxy to alpha.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=193407184
1) Clean up alpha config to only allow alpha proxy, removing test proxy client id.
2) Add sandbox service account client id to sandbox config.
3) Add sandbox config to nomulus and proxy, remove TEST environment, which is not being used anymore. (Test now uses LOCAL.)
4) Add sandbox kubernetes config
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=193400909
This gets around a bug in Spinnaker where the namespace, if missing in the manifest, is set to "spinnaker".
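The workaround is simply to pin the namespace explicitly in each manifest; a minimal sketch (the object name and the "default" namespace are placeholders):

    # Spinnaker fills in "spinnaker" when metadata.namespace is absent,
    # so every manifest states its namespace explicitly.
    metadata:
      name: proxy-deployment   # hypothetical name
      namespace: default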
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=192825895
With terraform (https://terraform.io), we can describe most of the infrastructure setup as code. This simplifies setting up a new proxy and makes the setup reproducible, eliminating human error as much as possible.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190634711
Some changes are made to the configs so that they agree with the setup guide in [].
Combined the deployment and autoscale manifests into one file because they work together.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189403435
Associate the custom metrics with the correct monitored resource type. The labels of the monitored resource are either obtained from container environment variables configured in the GKE deployment file, or queried from the GCE metadata server. Using the correct monitored resource improves performance and reduces out-of-order metric writes.
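For instance, the deployment file can inject the pod name and namespace through the Downward API (the env var names here are assumptions; the fieldRef fields are standard k8s), so the proxy can populate monitored resource labels without extra API calls:

    # Hypothetical container env excerpt: expose pod name and namespace
    # to the proxy process as environment variables.
    env:
    - name: POD_ID
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: NAMESPACE_ID
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace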
Also changed the metric display names to be more descriptive.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189184411
When not running locally, the logging formatter is set to convert each log record to a single-line JSON string that the Stackdriver logging agent running in GKE will pick up and parse correctly.
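A record emitted by the formatter might look roughly like this (the exact field set is an assumption; severity and message are fields the agent is known to parse):

    {"severity": "INFO", "message": "example single-line log record"}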
Also removed the redundant logging handlers in the proxy frontend connection. They had two problems: 1) it is possible to leak PII when all frontend traffic is logged, such as client IPs, even though this is less of a concern because the GCP TCP proxy load balancer masquerades source IPs; 2) we were only logging the HTTP request/response that the frontend connection sends to/receives from the backend connection, but the backend connection already has its own logging handler for the same messages it exchanges with the GAE app, so the frontend logging added no extra information.
Logging of other potential PII, such as the source IP of a proxied connection, is also removed.
Thirdly, added a k8s autoscaling object that scales the containers based on CPU load. The default target load is 80%. This, in conjunction with GKE cluster VM autoscaling, means that when traffic is low, we'll only have one VM running one container of the proxy.
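A minimal sketch of such an autoscaler (the object and target names are placeholders, and the replica ceiling is an assumed value):

    # Hypothetical HorizontalPodAutoscaler: scale the proxy deployment on
    # CPU, targeting 80% average utilization, down to one pod when idle.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: proxy-autoscale
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: proxy-deployment
      minReplicas: 1
      maxReplicas: 10          # assumed ceiling
      targetCPUUtilizationPercentage: 80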
Fixes a bug where the MetricsComponent generates a separate ProxyConfig that does not call the parse method on the command line args passed in, resulting in the default Environment always being used when constructing the metric reporter.
Lastly, a bit of cleanup of the MOE config script: no newlines are necessary, as the BUILD files are formatted after string substitution.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188029019
Using bazel to build and push the image results in reproducible builds.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187252645
This CL sets up the kubernetes configuration files necessary to deploy the proxy service to k8s (GKE, to be specific). Because a kubernetes service can only expose node ports higher than 30000, the default ports that the containers expose are also changed to >30000 so that they are consistent. This is *not* necessary, but it makes it easier to remember which ports are for what purpose.
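For illustration, one such service port might be declared like this (the names and exact port number are placeholders; only the >30000 convention is the point):

    # Hypothetical NodePort service excerpt: container port, service port,
    # and node port share one number in the >30000 range so it is easy to
    # remember which port serves which protocol.
    apiVersion: v1
    kind: Service
    metadata:
      name: proxy-service
    spec:
      type: NodePort
      selector:
        app: proxy
      ports:
      - name: whois
        port: 30043
        targetPort: 30043
        nodePort: 30043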
Note that we are not setting up a load balancing service. The way it is set up now, the services are only visible within the cluster, on each node at the specified node ports. The load balancer k8s sets up uses the GCP L4 load balancer, which does not support IPv6 (because it does not do TCP termination at the LB, but rather just routes packets to cluster nodes, and GCE VMs do not support IPv6 yet). The L4 load balancer also only provides regional IPs on the frontend, which means proxies running in different regions (Americas, EMEA, APAC) would all have different IPs, which in turn offloads regional routing determination to the DNS system, adding complexity.
A user of the proxy instead should set up TCP proxy load balancing in GCP separately and point traffic to the VM group(s) backing the k8s cluster. This allows for a single global anycast IP (IPv4 and IPv6) to be allocated at the load balancer frontend.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187046521