Background
Address
Time
March 20, 2024 6:30 PM - March 22, 2024 10:30 PM
Challenge 1: RECON
Description
As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We've loaded your machine with dnscan to ease this process for further challenges.
Key Points
- DNScan
- service-to-IP mapping
- curl
Analysis
- Following the DNS scanning link, we see:
We actually have access to a Kubernetes specific way to identify live services and sometimes pods in the form of the Kubernetes DNS service.
In other words, we probe for services through DNS.
A Service is assigned a cluster-internal IP address (ClusterIP) that routes traffic to the backing Pods inside the cluster; by itself it is not exposed to the public internet. DNS records associated with a Service allow the Service to be reached by its name; these records are maintained by the cluster's DNS resolver.
DNS names for Pods and Services are generated automatically:
- Pod: pod-ip-address.pod-namespace-name.pod.cluster-domain.example (the dots in the IP become dashes, e.g. 10-244-0-1.my-app.pod.cluster.local)
- Service: service-name.service-namespace-name.svc.cluster-domain.example (e.g. database.my-app.svc.cluster.local)
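As a quick illustration, the Service naming convention can be assembled piece by piece (the names below are the documentation placeholders from the example above, not the lab's real names):

```shell
# Sketch of the Service DNS naming convention:
#   <service>.<namespace>.svc.<cluster-domain>
service=database
namespace=my-app
cluster_domain=cluster.local
echo "${service}.${namespace}.svc.${cluster_domain}"
# → database.my-app.svc.cluster.local
```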
- So the plan: run dnscan against the HOST-related IPs from the environment variables to reverse-resolve the corresponding Services.
```
player@wiz-k8s-lan-party:~$ env | grep HOST
```
Found the service: getflag-service.k8s-lan-party.svc.cluster.local.
- I couldn't think of a follow-up at first, then remembered that the environment variables show port 443 is open. Just access that IP directly, i.e. the getflag service:
```
player@wiz-k8s-lan-party:~$ env
```
Appendix
Challenge 2: FINDING NEIGHBOURS
Description
Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.
Key Points
- sidecars(network)
- netstat
- tcpdump
Analysis
- First, open the sidecars link and see what it's about.
Sidecar containers are the secondary containers that run along with the main application container within the same Pod. These containers are used to enhance or to extend the functionality of the main application container by providing additional services, or functionality such as logging, monitoring, security, or data synchronization, without directly altering the primary application code.
Essentially an auxiliary container that provides additional or extended functionality.
If an init container is created with its `restartPolicy` set to `Always`, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated from the main application containers.
With `restartPolicy: Always` configured, the sidecar starts with the Pod and keeps running for its entire lifetime.
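The native sidecar pattern described above can be sketched as a Pod spec. This is a hypothetical example modeled on the Kubernetes docs; the names (`app-with-sidecar`, `logshipper`, the log path) are made up:

```yaml
# Hypothetical Pod: the log-shipper sidecar is declared as an init container
# with restartPolicy: Always, so it runs for the Pod's whole life.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: data
    emptyDir: {}
  containers:
  - name: main-app
    image: alpine:3.19
    command: ['sh', '-c', 'while true; do date >> /opt/logs.txt; sleep 1; done']
    volumeMounts:
    - name: data
      mountPath: /opt
  initContainers:
  - name: logshipper
    image: alpine:3.19
    restartPolicy: Always      # this is what makes it a sidecar
    command: ['sh', '-c', 'tail -F /opt/logs.txt']
    volumeMounts:
    - name: data
      mountPath: /opt
```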
- Hint 1
Sidecar containers share the same lifecycle, resources, and network namespace with the primary container. This co-location allows them to interact closely and share resources.
They share the same network namespace (and can share volumes) with the primary container.
- Information gathering
- service
```
player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.1/16
```
- network
```
player@wiz-k8s-lan-party:~$ ip a
```
- netstat
```
player@wiz-k8s-lan-party:~$ netstat -ano
```
The local address 192.168.8.215 keeps talking to 10.100.171.123:80.
- OK, after peeking at a writeup: just use `tcpdump` to capture the traffic and inspect the conversation in detail.
```
player@wiz-k8s-lan-party:~$ tcpdump host 10.100.171.123 -w 1.pcap
```
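Once the capture exists, its payload can be searched for flag-shaped strings. A minimal sketch, using a synthetic payload file since the real pcap only exists in the lab (with the real capture you would feed `tcpdump -r 1.pcap -A` into the same grep):

```shell
# Simulate captured HTTP payload bytes, then extract a flag-shaped string.
# payload.bin stands in for the ASCII dump of the real capture.
printf 'POST /data HTTP/1.1\r\n\r\nwiz_k8s_lan_party{demo-flag}\r\n' > payload.bin
grep -ao 'wiz_k8s_lan_party{[^}]*}' payload.bin
# → wiz_k8s_lan_party{demo-flag}
```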
wiz_k8s_lan_party{good-crime-comes-with-a-partner-in-a-sidecar}
Challenge 3: DATA LEAKAGE
Description
Exposed File Share
The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦️.
Hint
You may find it useful to look at the documentation for nfs-cat and nfs-ls.
Key Points
- Network File System(NFS)
- mount
- nfs-cat, nfs-ls
Analysis
Note the hinted `nfs-cat` and `nfs-ls`, together with this part of the description: "this technology was introduced in an era when access control was only network-based".
NFS
NFS is short for Network File System, one of the file systems supported by FreeBSD (among many other systems). NFS allows a system to share directories and files with others over the network; with NFS, users and programs can access files on a remote system as if they were local.
In short, it is a data-storage server or service that hosts talk to over the NFS protocol, primarily via mounting.
```
player@wiz-k8s-lan-party:/var/run/secrets$ mount | grep nfs
```
Found something interesting; let's dig in.
GPT's explanation:
- `fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com`: the address of the remote NFS server (an AWS EFS endpoint). NFS (Network File System) is a protocol that lets remote machines share files over the network.
- `:/`: mounts the root directory of the remote file system; the part before the colon is the remote server address and shared path, and the part after is the mount point.
- `/efs`: the local mount point; here the remote NFS root is mounted at the local `/efs` directory.
```
player@wiz-k8s-lan-party:/var/run/secrets$ cd /efs
```
Heh.
This brings back the hint's `nfs-cat` and `nfs-ls`: common NFS (Network File System) client tools for reading file contents and listing directories on an NFS server; they normally require an NFS client to be installed and configured. After studying the syntax, run:
```
$ nfs-cat 'nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/flag.txt'
```
Not sure whether the problem was on my end or the NFS server was down, but it would not connect. Ugh.
OK, read the format carefully: extra parameters are required:
```
player@wiz-k8s-lan-party:~$ nfs-cat 'nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4.1&uid=0'
```
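Breaking down the libnfs URL used above (server and parameters are the ones from this challenge; `version=4.1` selects NFSv4.1 and `uid=0` makes the client present uid 0 to the server, which matters because EFS in this era only did network-based access control):

```shell
# Assemble the nfs:// URL piece by piece.
server=fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com
path=/flag.txt
params='version=4.1&uid=0'
echo "nfs://${server}/${path}?${params}"
# → nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4.1&uid=0
```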
Challenge 4: BYPASSING BOUNDARIES
Description
The Beauty and The Ist
Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don’t abuse this power; use it responsibly and with caution.
Hint: Try examining Istio's [IPTables rules](https://github.com/istio/istio/wiki/Understanding-IPTables-snapshot#use-pid-to-get-iptables).
The policy given by the challenge:
```
apiVersion: security.istio.io/v1beta1
```
A quick ask of GPT: this AuthorizationPolicy defines an access-control rule that denies requests to the app carrying the specific label ("{flag-pod-name}"), while only allowing POST and GET requests from the k8s-lan-party namespace to reach the target resource.
Key Points
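The full policy is truncated above; for reference, a generic Istio DENY policy has this shape (a hypothetical example; the name, label, and namespace values here are illustrative, not the challenge's actual policy):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-example          # hypothetical name
spec:
  selector:
    matchLabels:
      app: protected-app      # the policy applies to pods carrying this label
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["some-namespace"]
    to:
    - operation:
        methods: ["GET", "POST"]
```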
- Istio
- IPTables
- PID/UID/GID
Analysis
- The lab drops us in as `root`, which immediately makes me think of escape. Ha.
- First, study the hint's Istio and iptables.
Istio
Istio is a fully open-source service mesh that layers transparently onto existing distributed applications. It is also a platform with APIs that integrate with any logging, monitoring, or policy system. Its features let you run a distributed microservice architecture efficiently and give you a uniform way to secure, connect, and monitor microservices.
iptables
`iptables` is the userspace management tool for netfilter, the firewall framework in the Linux kernel, and is itself considered part of netfilter. Netfilter lives in kernel space and provides network address translation as well as packet mangling and packet filtering.
iptables has five tables (filter, nat, mangle, raw, and security), each serving a different purpose, and the chains available (INPUT, OUTPUT, and so on) differ from table to table.
Take the `filter` table as an example:
```
$ iptables -L -v
```
We see the three default chains, INPUT, FORWARD, and OUTPUT. The first line of each chain's output shows the chain name followed by its default policy (ACCEPT).
Each chain can hold multiple rules, evaluated in order from top to bottom. The column headers mean:
- pkts: number of matched packets processed by the rule
- bytes: cumulative size of matched packets (in bytes)
- target: the target executed when a packet matches the rule
- prot: protocol, e.g. `tcp`, `udp`, `icmp`, or `all`
- opt: rarely used; shows IP options
- in: inbound interface
- out: outbound interface
- source: source IP address or subnet of the traffic, or `anywhere`
- destination: destination IP address or subnet of the traffic, or `anywhere`
Information gathering
- service: scan with dnscan
```
istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
```
The namespace matches; the problem is the deny: requests to the app carrying the special label {flag-pod-name} (here, istio-protected-pod-service) are rejected.
Hint analysis
Key concept: magic constants
1337 - uid and gid used to distinguish between traffic originating from proxy vs the applications.
The important part is the outbound bypass:
If you observe the iptables rules above, you will see a magic number (1337) appear several times. This is the uid/gid of the running proxy, which iptables uses to differentiate between packets originating from the proxy and the ones originating from the application. When packets originate from the application, they must be redirected to the proxy to implement a service mesh. When packets originate from the proxy, they *must not* be redirected, as doing so would cause an infinite loop.
This can cause multiple issues. If the cluster is not using mTLS, outbound policies such as data-leak prevention could be bypassed.
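The rules the hint refers to look roughly like this. A simplified excerpt in iptables-save style, reconstructed from the Istio wiki page; chain names and port 15001 are Istio's defaults, and the real ruleset has more entries:

```
# nat table (simplified): all outbound TCP is steered into ISTIO_OUTPUT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
# traffic owned by uid/gid 1337 (the proxy itself) escapes redirection
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
# everything else gets redirected to the sidecar proxy
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
```

So any process running with uid/gid 1337 sends traffic that bypasses the proxy, and with it the mesh's AuthorizationPolicy enforcement.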
Verification
Check the UID:
```
root@wiz-k8s-lan-party:~# cat /etc/passwd
```
The highlight: `istio:x:1337:1337::/home/istio:/bin/sh` — the istio user's UID and GID are 1337 (and we are currently root, so we can switch to it).
Per https://github.com/istio/istio/issues/4286, we can simply switch to this user to bypass the deny:
```
root@wiz-k8s-lan-party:~# su istio
```
Challenge 5: LATERAL MOVEMENT
Description
Who will guard the guardians?
Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.
Hint
Need a hand crafting AdmissionReview requests? Check out https://github.com/anderseknert/kube-review.
Policy
```
apiVersion: kyverno.io/v1
```
Key Points
- Dynamic Admission Control
- admission webhooks
- AdmissionReview
Analysis
- Study the description and hint.
- "webhooks can specify what versions of `AdmissionReview` objects they accept with the `admissionReviewVersions` field in their configuration"
In other words, this is a request/response scheme: you cannot just fire off an arbitrary request; you must construct a valid `AdmissionReview` JSON body.
When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the `patch` and `patchType` fields in the response. The only currently supported `patchType` is `JSONPatch`. For `patchType: JSONPatch`, the `patch` field contains a base64-encoded array of JSON patch operations.
- Information gathering
Service:
```
kyverno-cleanup-controller.kyverno.svc.cluster.local.
```
I did not fully understand why at first, but in practice you just POST the AdmissionReview JSON derived from the given policy's target, and the webhook returns the patch.
```
player@wiz-k8s-lan-party:~$ curl -k -X POST https://kyverno-svc.kyverno.svc.cluster.local./mutate -H "Content-Type: application/json" --data '{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview","request":{"uid":"2024ee9c-c374-413c-838d-e62bcb4826be","kind":{"group":"","version":"v1","kind":"Pod"},"resource":{"group":"","version":"v1","resource":"pods"},"requestKind":{"group":"","version":"v1","kind":"Pod"},"requestResource":{"group":"","version":"v1","resource":"pods"},"name":"example-pod","namespace":"sensitive-ns","operation":"CREATE","userInfo":{"username":"psych","uid":"xxx","groups":["system:authenticated"]},"object":{"apiVersion":"v1","kind":"Pod","metadata":{"name":"psych-pod","namespace":"sensitive-ns"},"spec":{"containers":[{"name":"psych-container","image":"nginx","env":[{"name":"FLAG","value":"{flag}"}]}]}},"oldObject":null,"options":{"apiVersion":"meta.k8s.io/v1","kind":"CreateOptions"},"dryRun":true}}'
```
W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d
Base64 decode:
```
[{"op":"replace","path":"/spec/containers/0/env/0/value","value":"wiz_k8s_lan_party{you-are-k8s-net-master-with-great-power-to-mutate-your-way-to-victory}"}, {"path":"/metadata/annotations","op":"add","value":{"policies.kyverno.io/last-applied-patches":"inject-env-vars.apply-flag-to-env.kyverno.io: replaced /spec/containers/0/env/0/value\n"}}]
```
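The decode step can be reproduced directly in the shell (the string is the `patch` field from the webhook response above):

```shell
# Decode the base64-encoded JSONPatch returned by the Kyverno webhook.
patch_b64='W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d'
printf '%s' "$patch_b64" | base64 -d
```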