google-nomulus/proxy/kubernetes/proxy-service.yaml
Lai Jiang 08285f5de7
Greatly increase the upper limit of proxy instances in production (#2259)
From our investigation, the Monday night WHOIS storm does not put any
strain on the backend system. The backend latency metrics are all well within
the limits. The latency measured from the proxy matches the latency observed
by the prober, and the "used" CPU is 1.5x the "requested" CPU
during the time when the latency is above the threshold.

Hopefully this change removes the proxy as the bottleneck and
alleviates the pages.
2023-12-20 15:37:29 -05:00
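For context on the reasoning above: the Kubernetes HorizontalPodAutoscaler computes
desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization).
Assuming the autoscaler targets roughly 100% of the requested CPU (the target value is
not shown in this file), sustained usage at 1.5x the requested CPU asks for about
ceil(currentReplicas * 1.5) pods, so a low maxReplicas caps scale-out and leaves the
proxy saturated during the storm.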


kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: proxy-service
spec:
  selector:
    app: proxy
  ports:
  - protocol: TCP
    port: 30000
    nodePort: 30000
    targetPort: health-check
    name: health-check
  - protocol: TCP
    port: 30001
    nodePort: 30001
    targetPort: whois
    name: whois
  - protocol: TCP
    port: 30002
    nodePort: 30002
    targetPort: epp
    name: epp
  - protocol: TCP
    port: 30010
    nodePort: 30010
    targetPort: http-whois
    name: http-whois
  - protocol: TCP
    port: 30011
    nodePort: 30011
    targetPort: https-whois
    name: https-whois
  type: NodePort
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: proxy-autoscale
  labels:
    app: proxy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: proxy-deployment
  maxReplicas: 50
  minReplicas: 10
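
Note that the HorizontalPodAutoscaler above does not show an explicit metrics block;
with autoscaling/v2 a CPU-based scaling target would typically be declared as in the
sketch below. The averageUtilization value of 100 is an illustrative assumption, not
taken from this file:

  # Hypothetical metrics block for spec: (value assumed for illustration)
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 100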