
Dataplane V2

GKE Dataplane V2¹ is implemented with eBPF² and is therefore very similar to Cilium (see how we integrate directly with Cilium).

By default, it does not expose the Hubble metrics that we need; however, we can enable them by editing the Cilium ConfigMap and adding:

  hubble-metrics: dns drop tcp:destinationContext=pod|ip;sourceContext=pod|ip flow:destinationContext=pod|ip;sourceContext=pod|ip port-distribution icmp http:destinationContext=pod|ip;sourceContext=pod|ip
  hubble-metrics-server: :9091

It is required to:

  • copy the cilium-config ConfigMap, add the two lines above to the configuration, and restart the anetd pods (see the sketch after this list)
  • change the port of the default Prometheus endpoint (Hubble metrics are preferred): prometheus-serve-addr: :9990 -> prometheus-serve-addr: :9991
  • remove the addonmanager.kubernetes.io/mode label from the ConfigMap so that the changes persist and are not overwritten by Google's addon manager
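
A minimal sketch of these steps with kubectl (the cilium-config ConfigMap and the anetd DaemonSet both live in kube-system):

kubectl -n kube-system edit configmap cilium-config   # add the two hubble-* lines and change prometheus-serve-addr
kubectl -n kube-system label configmap cilium-config addonmanager.kubernetes.io/mode-   # a trailing "-" removes the label
kubectl -n kube-system rollout restart daemonset anetd   # restart the anetd pods to pick up the new configuration
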
Example of a complete configuration

apiVersion: v1
data:
  auto-direct-node-routes: "false"
  blacklist-conflicting-routes: "false"
  bpf-lb-sock-hostns-only: "true"
  bpf-map-dynamic-size-ratio: "0.0025"
  bpf-policy-map-max: "16384"
  cluster-name: default
  cni-chaining-mode: generic-veth
  custom-cni-conf: "true"
  debug: "false"
  enable-auto-protect-node-port-range: "true"
  enable-bpf-clock-probe: "true"
  enable-bpf-masquerade: "false"
  enable-endpoint-health-checking: "false"
  enable-endpoint-routes: "true"
  enable-health-checking: "false"
  enable-host-firewall: "false"
  enable-hubble: "true"
  enable-ipv4: "true"
  enable-ipv4-masquerade: "false"
  enable-ipv6: "false"
  enable-ipv6-ndp: "false"
  enable-local-node-route: "false"
  enable-local-redirect-policy: "true"
  enable-metrics: "true"
  enable-redirect-service: "true"
  enable-remote-node-identity: "true"
  enable-well-known-identities: "false"
  enable-xt-socket-fallback: "true"
  identity-allocation-mode: crd
  install-iptables-rules: "true"
  ipam: kubernetes
  k8s-api-server: https://35.204.123.171:443
  k8s-require-ipv4-pod-cidr: "true"
  k8s-require-ipv6-pod-cidr: "false"
  kube-proxy-replacement: strict
  kube-proxy-replacement-healthz-bind-address: 0.0.0.0:10256
  local-router-ipv4: 169.254.4.6
  local-router-ipv6: fe80::8893:b6ff:fe2c:7a0d
  monitor-aggregation: medium
  monitor-aggregation-flags: all
  monitor-aggregation-interval: 5s
  node-port-bind-protection: "true"
  operator-api-serve-addr: 127.0.0.1:9234
  operator-prometheus-serve-addr: :6942
  preallocate-bpf-maps: "false"
  prometheus-serve-addr: :9991
  sidecar-istio-proxy-image: cilium/istio_proxy
  tofqdns-enable-poller: "false"
  tunnel: disabled
  wait-bpf-mount: "false"
  hubble-metrics: dns drop tcp:destinationContext=pod|ip;sourceContext=pod|ip flow:destinationContext=pod|ip;sourceContext=pod|ip port-distribution icmp http:destinationContext=pod|ip;sourceContext=pod|ip
  hubble-metrics-server: :9091
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
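
Once the anetd pods have restarted, a quick check that Hubble metrics are being served on the new endpoint (a sketch; it port-forwards to one pod of the anetd DaemonSet):

kubectl -n kube-system port-forward ds/anetd 9091:9091 &   # forward the hubble-metrics-server port
curl -s http://localhost:9091/metrics | grep hubble_       # Hubble metric names are prefixed with hubble_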

Collecting L7 metrics requires additional setup at the service level, where visibility annotations must be added: io.cilium.proxy-visibility: "<Egress/3550/TCP/HTTP>". See the Cilium L7 metrics section for more details.
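
For example, the annotation can be added to a workload's pod template so that newly created pods carry it; a sketch assuming a hypothetical Deployment named checkout in the default namespace:

kubectl -n default patch deployment checkout --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"io.cilium.proxy-visibility":"<Egress/3550/TCP/HTTP>"}}}}}'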

Installation of the aiops-agent with dataplane-v2 support

helm upgrade --install --create-namespace aiops-agent cobrick/aiops-agent --version {VERSION} -n aiops \
--set global.clientId={CLIENT_ID} \
--set global.clientSecret={CLIENT_SECRET} \
--set global.tenant={TENANT} \
--set global.site={SITE} \
--set tags.linkerd=false \
--set tags.dataplane-v2=true

See the aiops-agent documentation for more details.

Footnotes

  1. Official site of GKE Dataplane V2

  2. Official site of eBPF