Commit graph

15 commits

jianglai
3fc7271145 Move GCP proxy code to the old [] proxy's location
1. Moved code for the GCP proxy to where the [] proxy code used to live.
2. Corrected reference to the GCP proxy location.
3. Misc changes to make ErrorProne and various tools happy.

+diekmann to LGTM terraform whitelist change.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=213630560
2018-09-20 11:19:36 -04:00
mcilwain
a483beef28 Add MOE equivalence for 2018-09-14 sync
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=212996616
2018-09-20 11:19:36 -04:00
jianglai
ee97d7c2cd Update max pod number to 10
This should not cause any waste as the pods are only scaled up when necessary.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=209881536
2018-09-07 23:59:39 -04:00
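
For illustration, the change above amounts to raising the replica ceiling on the proxy's HorizontalPodAutoscaler. The manifest below is a minimal hypothetical sketch: the resource and deployment names are illustrative, and only the maxReplicas value (plus the 80% CPU target described in commit 84eab90000 further down this list) reflects what the commits state.

```yaml
# Hypothetical sketch of the proxy's HPA; names are illustrative only.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: proxy-autoscale            # illustrative name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: proxy-deployment         # illustrative name
  minReplicas: 1
  maxReplicas: 10                  # ceiling raised by this commit
  targetCPUUtilizationPercentage: 80   # CPU target described in commit 84eab90000
```

Since the autoscaler only adds pods under load, a higher maxReplicas by itself consumes no extra resources, which is the commit's rationale.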
jianglai
0e62270f54 Set up GCLB to route web WHOIS traffic
We need to support web WHOIS on the same IP addresses that we use for port 43 WHOIS. [] added support for HTTP(S) traffic on the proxy, which simply redirects to another website that actually hosts the web WHOIS service. This CL sets up the GCLB to route port 80 and port 443 traffic to the proxy.

We were using the TCP proxy load balancer for the other protocols that we support (EPP and WHOIS), but the TCP proxy LB only exposes port 443, not port 80. For port 443, we simply follow the same pattern and add another TCP proxy LB. For port 80, we had to use the HTTP LB, which exposes port 80 (on the same external IP addresses). This requires a different HTTP health check and a URL map. The added URL map is a dummy one that routes all paths to the same backend service, which supports HTTP redirect.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=206409007
2018-08-10 13:44:25 -04:00
jianglai
35110927d6 Add configs for production GCP proxy
This also introduces a production canary environment, similar to sandbox canary. The docker tags are changed to "live" and "sandbox" respectively, to reflect the fact that different images may be used for prod and sandbox.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=204343530
2018-07-14 01:37:03 -04:00
jianglai
1248a7722b Enable logging in sandbox GCP proxies
This makes it easier to debug issues. There are no privacy concerns in sandbox.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=197045576
2018-05-17 21:52:35 -04:00
jianglai
77bfa5f4b8 Move autoscale object to service yaml file
The autoscaling manifest doesn't really change much from environment to environment. It makes sense to move it to the service yaml file, which is not environment dependent.

Also enhanced the bashrc function to update the deployment manifest when deploying the proxy to alpha.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=193407184
2018-04-23 14:57:52 -04:00
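
To illustrate the layout the commit above describes, the environment-independent service yaml would then carry the autoscaler as an additional YAML document next to the Service, separated by `---`. Everything below is a hypothetical sketch with illustrative names and ports, not the actual manifest.

```yaml
# Hypothetical sketch: Service and HorizontalPodAutoscaler co-located in the
# service yaml file as two documents separated by "---".
apiVersion: v1
kind: Service
metadata:
  name: proxy-service              # illustrative name
spec:
  type: NodePort
  selector:
    app: proxy
  ports:
    - name: whois
      port: 30001                  # hypothetical port
      protocol: TCP
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: proxy-autoscale            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: proxy-deployment         # illustrative name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Only the deployment manifest then remains environment-specific, which is what the enhanced bashrc function mentioned above updates when deploying to alpha.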
jianglai
c6a4264606 Set up sandbox for GCP proxy
1) Clean up alpha config to only allow alpha proxy, removing test proxy client id.
2) Add sandbox service account client id to sandbox config.
3) Add sandbox config to nomulus and proxy, remove TEST environment, which is not being used anymore. (Test now uses LOCAL.)
4) Add sandbox kubernetes config.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=193400909
2018-04-23 14:51:35 -04:00
jianglai
23c9cf926c Set namespace as default
This gets around a bug in Spinnaker where the namespace, if missing in the manifest, is set to "spinnaker".

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=192825895
2018-04-23 14:39:09 -04:00
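
As a sketch of the workaround above: stating metadata.namespace explicitly in each manifest keeps Spinnaker from substituting its own "spinnaker" namespace. The manifest below is hypothetical (illustrative names and image path); the explicit namespace field is the only point being made.

```yaml
# Hypothetical sketch; the explicit metadata.namespace is the workaround.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment                  # illustrative name
  namespace: default                      # set explicitly so Spinnaker cannot default it to "spinnaker"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy
          image: gcr.io/example-project/proxy:sandbox   # hypothetical image path and tag
```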
jianglai
6dec95b980 Use terraform to configure GCP proxy setup
With terraform (https://terraform.io) we can convert most of the infrastructure setup into code. This simplifies setting up a new proxy and provides reproducibility in the setup, eliminating human error as much as possible.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=190634711
2018-04-02 16:46:01 -04:00
jianglai
22b575b17d Update proxy k8s configs
Some changes are made to the configs so that they agree with the setup guide in [].

Combined the deployment and autoscale manifests because they work together.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189403435
2018-03-19 18:40:42 -04:00
jianglai
33ec789a44 Use GKE-specific metrics in the proxy
Associate the custom metrics with the correct monitored resource type. The labels of the monitored resource are either obtained from environment variables for the container (configured in the GKE deployment file) or queried from the GCE metadata server. Using the correct monitored resource can improve performance and reduce out-of-order metric writes.

Also changed the metrics display name to be more descriptive.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189184411
2018-03-19 18:29:39 -04:00
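
One plausible way to feed monitored-resource labels to the container via environment variables, as the commit above describes, is the kubernetes downward API in the GKE deployment file. The fragment below is a hypothetical sketch: the variable names and container details are illustrative, and labels not available this way (e.g. cluster name or zone) would come from the GCE metadata server as noted above.

```yaml
# Hypothetical container-spec fragment from a GKE deployment file.
# The downward API injects the pod and namespace names as env vars that the
# proxy could read when constructing its monitored resource.
containers:
  - name: proxy                            # illustrative name
    image: gcr.io/example-project/proxy:sandbox   # hypothetical image
    env:
      - name: POD_ID                       # hypothetical variable name
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: NAMESPACE_ID                 # hypothetical variable name
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```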
jianglai
84eab90000 Make GCP proxy log in a Stackdriver logging compliant format
When not running locally, the logging formatter is set to convert the log record to a single-line JSON string that the Stackdriver logging agent running in GKE will pick up and parse correctly.

Also removed the redundant logging handler in the proxy frontend connection. It had two problems: 1) it could leak PII, such as client IPs, when all frontend traffic is logged, even though this is less of a concern because the GCP TCP proxy load balancer masquerades source IPs; 2) it only logged the HTTP request/response that the frontend connection sends to/receives from the backend connection, but the backend connection already has its own logging handler that logs the same message it gets from/sends to the GAE app, so the logging in the frontend connection does not really add extra information.
Logging of some potentially PII-bearing information, such as the source IP of a proxied connection, is also removed.

Thirdly, added a k8s autoscaling object that scales the containers based on CPU load. The default target load is 80%. This, combined with GKE cluster VM autoscaling, means that when traffic is low, we'll only have one VM running one container of the proxy.

Fixes a bug where the MetricsComponent generates a separate ProxyConfig that does not call the parse method on the command line args passed in, resulting in the default Environment always being used when constructing the metric reporter.

Lastly, a little bit of cleanup of the MOE config script: no newlines are necessary as the BUILD files are formatted after string substitution.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188029019
2018-03-06 19:23:23 -05:00
jianglai
753a269357 Use bazel rules to build docker image and push to GCR
Using bazel to build and push the image results in reproducible builds.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187252645
2018-03-06 19:08:24 -05:00
jianglai
6a994f320f Add GKE deployment config files for GCP proxy
This CL sets up the kubernetes configuration files necessary to deploy the proxy service to k8s (GKE, to be specific). Because a kubernetes service can only expose node ports higher than 30000, the default ports that the containers expose are also changed to >30000 so that they are consistent. This is *not* necessary, but it makes it easier to remember which ports are for what purpose.

Note that we are not setting up a load balancing service. The way it is set up now, the services are only visible within the clusters, on each node at the specified node ports. The load balancer that k8s sets up uses the GCP L4 load balancer, which does not support IPv6 (because it does not do TCP termination at the LB, but rather just routes packets to cluster nodes, and GCE VMs do not support IPv6 yet). The L4 load balancer also only provides regional IPs on the frontend, which means proxies running in different regions (Americas, EMEA, APAC) would all have different IPs, which in turn offloads regional routing determination to the DNS system, adding complexity.

A user of the proxy should instead set up TCP proxy load balancing in GCP separately and point traffic to the VM group(s) backing the k8s cluster. This allows a single global anycast IP (IPv4 and IPv6) to be allocated at the load balancer frontend.

-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=187046521
2018-03-06 18:57:43 -05:00
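
As a rough sketch of the manifests this commit describes: a NodePort service exposes the proxy on every cluster node at ports above 30000, and the external TCP proxy load balancer is then pointed at the backing VM group(s) separately. The names and exact port numbers below are hypothetical, chosen only to fall in the >30000 node-port range.

```yaml
# Hypothetical sketch of a node-port-only service (no cloud load balancer
# provisioned by k8s); an external TCP proxy LB targets the node VMs directly.
apiVersion: v1
kind: Service
metadata:
  name: proxy-service              # illustrative name
spec:
  type: NodePort
  selector:
    app: proxy
  ports:
    - name: whois
      port: 30001                  # hypothetical; container port kept equal for easy recall
      targetPort: 30001
      nodePort: 30001
      protocol: TCP
    - name: epp
      port: 30002                  # hypothetical
      targetPort: 30002
      nodePort: 30002
      protocol: TCP
```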