This allows us to skip obtaining a certificate and encrypting it with KMS when running the proxy locally during development.
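For illustration, a minimal sketch of the local path, assuming Netty's SelfSignedCertificate test utility is acceptable for development (the class name and wiring here are hypothetical, not the actual proxy code):

    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.SelfSignedCertificate;

    /** Local-only TLS setup; production still loads the KMS-decrypted certificate. */
    final class LocalSslContextFactory {
      static SslContext create() throws Exception {
        // Generates a throwaway self-signed certificate in memory.
        SelfSignedCertificate ssc = new SelfSignedCertificate();
        return SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
      }
    }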
Also updated the Dagger version in the FOSS build.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=191746309
Associate the custom metrics with the correct monitored resource type. The labels of the monitored resource are either obtained from environment variables for the container, configured in the GKE deployment file, or queried from the GCE metadata server. Using the correct monitored resource can improve performance and reduce out-of-order metric writes.
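A rough sketch of the resource association (label sources follow the description above; the class and parameter names are assumptions):

    import com.google.api.MonitoredResource;

    final class MonitoredResources {
      /** Builds the resource descriptor for a proxy container running in GKE. */
      static MonitoredResource gkeContainer(String projectId, String instanceId) {
        return MonitoredResource.newBuilder()
            .setType("gke_container")
            .putLabels("project_id", projectId)
            // Set as container environment variables in the GKE deployment file.
            .putLabels("cluster_name", System.getenv("CLUSTER_NAME"))
            .putLabels("namespace_id", System.getenv("NAMESPACE_ID"))
            // Queried from the GCE metadata server by the caller.
            .putLabels("instance_id", instanceId)
            .build();
      }
    }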
Also changed the metrics display name to be more descriptive.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=189184411
If the proxy protocol header contains a malformed string, such as "PROXY UNKNOWN", use the TCP source IP as the remote IP instead of throwing and killing the connection.
Also changed how the header is read from the buffer, to avoid a potential Netty resource leak. Originally the header was read into another ByteBuf, which needs to be explicitly released in order for Netty to reclaim its memory (http://netty.io/wiki/reference-counted-objects.html). Now we just read it into a byte array and let the JVM GC it.
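A sketch of the difference, assuming a fixed-length header inside a decoder (the wrapper class and length parameter are hypothetical):

    import static java.nio.charset.StandardCharsets.US_ASCII;
    import io.netty.buffer.ByteBuf;

    final class HeaderReader {
      /** Reads a fixed-length header without creating a second ByteBuf. */
      static String readHeader(ByteBuf in, int headerLength) {
        // Leak-prone alternative: ByteBuf header = in.readBytes(headerLength);
        // that buffer must be explicitly header.release()'d.
        byte[] headerBytes = new byte[headerLength];
        in.readBytes(headerBytes); // copies into a heap array; nothing to release
        return new String(headerBytes, US_ASCII);
      }
    }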
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188047084
When not running locally, the logging formatter is set to convert the log record to a single-line JSON string that the Stackdriver logging agent running in GKE will pick up and parse correctly.
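A minimal sketch of such a formatter, assuming the agent parses "severity" and "message" fields (a real implementation needs full JSON escaping):

    import java.util.logging.Formatter;
    import java.util.logging.LogRecord;

    public class JsonLineFormatter extends Formatter {
      @Override
      public String format(LogRecord record) {
        // Minimal escaping only; shown for shape, not completeness.
        String message = formatMessage(record)
            .replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
        return String.format(
            "{\"severity\":\"%s\",\"message\":\"%s\"}%n", record.getLevel(), message);
      }
    }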
Also removed the redundant logging handler in the proxy frontend connection. It had two problems: 1) it could leak PII when all frontend traffic is logged, such as client IPs, though this is less of a concern because the GCP TCP proxy load balancer masquerades source IPs; 2) it only logged the HTTP request/response that the frontend connection sends to/receives from the backend connection, but the backend connection already has its own logging handler for the same messages it gets from/sends to the GAE app, so the frontend logging does not really add information.
Logging of some potential PII, such as the source IP of a proxied connection, is also removed.
Thirdly, added a k8s autoscaling object that scales the containers based on CPU load. The default target load is 80%. This, in conjunction with GKE cluster VM autoscaling, means that when traffic is low, we'll only have one VM running one container of the proxy.
Fixes a bug where the MetricsComponent generates a separate ProxyConfig that does not call the parse method on the command-line args passed, resulting in the default Environment always being used when constructing the metric reporter.
Lastly, a little bit of cleanup of the MOE config script; no newlines are necessary as the BUILD files are formatted after string substitution.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=188029019
A recent change in Netty 4.1.21 (978a46cc0a) tried to fix an issue where channels might be closed before any handshake exception can be propagated. This however introduced a regression where the connection is not closed at all after a handshake failure, which caused test failures because we expect the connection to be closed in that case.
We rolled back the dependency on Netty 4.1.21 so that the tests would pass. A fix upstream is scheduled for 4.1.22 (https://github.com/netty/netty/pull/7727).
However, this does reveal a potential problem in our tests: we did not wait for the connection to be closed before asserting on it. The old Netty behavior closes the connection before the handshake exception is thrown, and we *do* wait for the handshake exception. The connection assertion happens after the handshake exception is verified, so by then the connection is always closed.
When the upstream fix is released, we'd run into the concurrency problem described above. So we instead wait for the connection to be closed before checking the handshake exception (by releasing the lock in a channel close listener), which guarantees that when we check the connection, it is always closed.
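A sketch of the ordering fix, assuming Truth-style assertions and a test-owned channel (the helper name is hypothetical):

    import static com.google.common.truth.Truth.assertThat;
    import io.netty.channel.Channel;
    import java.util.concurrent.CountDownLatch;

    final class CloseAssertions {
      /** Blocks until the channel is closed, then asserts on it. */
      static void assertClosed(Channel channel) throws InterruptedException {
        CountDownLatch closed = new CountDownLatch(1);
        channel.closeFuture().addListener(f -> closed.countDown()); // release the "lock"
        closed.await(); // wait for the close before any assertion
        assertThat(channel.isActive()).isFalse(); // never races the close
      }
    }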
Also fixes some javadoc errors.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=186021997
When a quota request is rejected, increment the metric counter by one.
Also makes both frontend and backend metrics singletons because all the fields they have are static.
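A hypothetical shape of the counter bump (the real class uses the metrics library, and its fields and labels differ):

    import java.util.concurrent.atomic.AtomicLong;

    /** Hypothetical singleton metrics holder. */
    public final class FrontendMetrics {
      private static final FrontendMetrics INSTANCE = new FrontendMetrics();
      private final AtomicLong quotaRejections = new AtomicLong();

      private FrontendMetrics() {}

      public static FrontendMetrics getInstance() {
        return INSTANCE;
      }

      /** Called once per rejected quota request. */
      public void registerQuotaRejection() {
        quotaRejections.incrementAndGet();
      }
    }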
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185146804
The quota handler terminates connections when quota is exceeded.
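Conceptually, the handler can be sketched like this (the QuotaManager API used here is assumed):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.ReferenceCountUtil;

    public class QuotaHandler extends ChannelInboundHandlerAdapter {
      private final QuotaManager quotaManager; // assumed API

      public QuotaHandler(QuotaManager quotaManager) {
        this.quotaManager = quotaManager;
      }

      @Override
      public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (!quotaManager.tryAcquire()) {
          ReferenceCountUtil.release(msg); // drop the message
          ctx.close();                     // quota exceeded: terminate the connection
          return;
        }
        ctx.fireChannelRead(msg); // within quota: pass the message downstream
      }
    }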
The next CL will add instrumentation for quota-related metrics.
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=185042675
The TokenStore is configured by a QuotaConfig for a protocol (EPP/WHOIS). It accepts concurrent take, put, and refresh requests to grant tokens to and accept tokens from callers.
The QuotaManager contains a TokenStore and provides abstractions appropriate for a quota-leasing entity to use. Quota return calls are executed asynchronously, and quota refresh tasks are scheduled by the QuotaManager to run periodically.
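The scheduling pattern, sketched with assumed names and assuming TokenStore exposes a no-arg refresh():

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    final class QuotaManagerSketch {
      private final ScheduledExecutorService refreshScheduler =
          Executors.newSingleThreadScheduledExecutor();

      QuotaManagerSketch(TokenStore tokenStore, long refreshPeriodSeconds) {
        // Schedule the TokenStore's refresh task to run periodically.
        refreshScheduler.scheduleAtFixedRate(
            tokenStore::refresh, 0, refreshPeriodSeconds, TimeUnit.SECONDS);
      }
    }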
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=182109341
The quotas can be configured in the yaml configuration file. The default quota will be applied to any userId that is not matched in the custom quota list.
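The fallback rule amounts to a map lookup with a default (a hypothetical sketch, not the actual config code):

    import java.util.Map;

    final class QuotaLookup {
      /** Custom entry wins; any unmatched userId gets the default quota. */
      static int tokenAmount(
          Map<String, Integer> customQuotas, int defaultQuota, String userId) {
        return customQuotas.getOrDefault(userId, defaultQuota);
      }
    }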
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=178804649
Dagger updated to 2.13, along with all its dependencies.
Also allows us to have multiple config files for different environments (prod, sandbox, alpha, local, etc.) and to specify which one to use on the command line with a --env flag. Therefore the same binary can be used in all environments.
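A sketch of the selection, assuming SnakeYAML and per-environment file names (both are assumptions here):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.yaml.snakeyaml.Yaml;

    final class ConfigLoader {
      /** The --env flag value picks which config file the one binary loads. */
      static ProxyConfig loadConfig(String env) throws Exception {
        String path = String.format("config/proxy-config-%s.yaml", env.toLowerCase());
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
          return new Yaml().loadAs(in, ProxyConfig.class);
        }
      }
    }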
-------------
Created by MOE: https://github.com/google/moe
MOE_MIGRATED_REVID=176551289