Pod level metrics are not being surfaced #821

Open
kate-goldenring opened this issue Jan 29, 2025 · 6 comments
Labels
bug Something isn't working

Comments

kate-goldenring (Contributor) commented Jan 29, 2025

This is a duplicate of an issue found in the Spin shim: spinframework/containerd-shim-spin#180. I wanted to recreate it here since it is reproducible with the Wasmtime shim and may have to do with how cgroups are being created by Youki.

TL;DR: Pod-level metrics are not being found by the kubelet (even though container-level metrics exist).

Repro:

$ make build-wasmtime
$ make test/k3s-wasmtime
# Get running wasmtime shim processes (the test deploys 3 replicas)
$ ps -ax | grep shim-wasmtime
  74103 ?        Sl     0:00 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 2e67245de8520087542e916a4d2bb0ab4d8efafdca2c07f69ee5fab37aa62928 -address /run/k3s/containerd/containerd.sock
  74110 ?        Sl     0:00 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id deee8be6836eace70740e167a402bb7662436df79c165f7d88361791ef4edcde -address /run/k3s/containerd/containerd.sock
  74137 ?        Sl     0:00 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 384b86f7322e36eb0aa4d0b312b3a226b268818f29da038f866e69bfb4733eeb -address /run/k3s/containerd/containerd.sock
  74220 ?        Ssl    0:05 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 2e67245de8520087542e916a4d2bb0ab4d8efafdca2c07f69ee5fab37aa62928 -address /run/k3s/containerd/containerd.sock
  74221 ?        Ssl    0:05 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id deee8be6836eace70740e167a402bb7662436df79c165f7d88361791ef4edcde -address /run/k3s/containerd/containerd.sock
  74223 ?        Ssl    0:05 /home/kagold/projects/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 384b86f7322e36eb0aa4d0b312b3a226b268818f29da038f866e69bfb4733eeb -address /run/k3s/containerd/containerd.sock
  74647 pts/5    S+     0:00 grep --color=auto shim-wasmtime
# Note: I am not sure why the parent process exists for each pod, but the Ssl process is the
# one with the container. Let's follow the `74220` process.
# Get the cgroup for the process
$ cat /proc/74220/cgroup
0::/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b
# Note that unlike all other containers on the cluster, this cgroup is not nested under the `kubepods.slice` slice. 
# (There are two cgroups per pod because the test deployment has a wasm container and an nginx container).
$ ls /sys/fs/cgroup/
kubepods-besteffort-pod21abe37f_d90a_466e_9d80_1d99c80203dd.slice:cri-containerd:01cc5ac7a46b7de8ab0b6075f9f6513fdd572be26151d60e10f353a262caa4dc
kubepods-besteffort-pod21abe37f_d90a_466e_9d80_1d99c80203dd.slice:cri-containerd:b5ccb570b9f6929822296b0d16cdb04d9781aa9c5f9e1f41f3bc52329e019ae2
kubepods-besteffort-pod395f0bee_235f_4e58_b61b_69ee66c8bac0.slice:cri-containerd:49e55c45dd76a342828351e61c370b50549dc6d98aec4c6ceb4dd05ce8c143e6
kubepods-besteffort-pod395f0bee_235f_4e58_b61b_69ee66c8bac0.slice:cri-containerd:fc886f8c99cfc56618346d2361796229af192210d34253ad7227048602bf67ea
kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b
kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:a0608bb6a60ab785ee9bc1e80a10aa7c8e71ed173fc7afdf7ed9282433df288c
# Get the CPU usage of the container -- this is correct as it has usage values
$ cat /sys/fs/cgroup//kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b/cpu.stat
usage_usec 6144353
user_usec 6046027
system_usec 98325
core_sched.force_idle_usec 0
nr_periods 0
nr_throttled 0
throttled_usec 0
nr_bursts 0
burst_usec 0
# However, the kubelet isn't aggregating that value -- notice how container 
# cpu and mem have values but pod doesn't:
$ sudo bin/k3s kubectl get --raw "/api/v1/nodes/kagold-thinkpad-x1-carbon-6th/proxy/stats/summary?only_cpu_and_memory=true" | grep -C 40 wasi
   "podRef": {
    "name": "wasi-demo-75d5745dd8-bp6mn",
    "namespace": "default",
    "uid": "b7c2cbdb-9ab5-47ef-977e-97e39a73da6d"
   },
   "startTime": "2025-01-29T00:17:01Z",
   "containers": [
    {
     "name": "demo",
     "startTime": "2025-01-29T00:17:01Z",
     "cpu": {
      "time": "2025-01-29T00:26:56Z",
      "usageNanoCores": 5262,
      "usageCoreNanoSeconds": 6347380
     },
     "memory": {
      "time": "2025-01-29T00:26:56Z",
      "workingSetBytes": 21766144
     }
    },
    {
     "name": "nginx",
     "startTime": "2025-01-29T00:17:07Z",
     "cpu": {
      "time": "2025-01-29T00:26:56Z",
      "usageNanoCores": 0,
      "usageCoreNanoSeconds": 47614
     },
     "memory": {
      "time": "2025-01-29T00:26:56Z",
      "workingSetBytes": 7647232
     }
    }
   ],
   "cpu": {
    "time": "2025-01-29T00:26:37Z",
    "usageNanoCores": 0,
    "usageCoreNanoSeconds": 0
   },
   "memory": {
    "time": "2025-01-29T00:26:37Z",
    "usageBytes": 0,
    "workingSetBytes": 0,
    "rssBytes": 0,
    "pageFaults": 0,
    "majorPageFaults": 0
   }
  },
# See the Spin shim issue for more details on the kubelet investigation, but I think this is due to a
# cgroups path issue.
# All other k8s container cgroups are located under `/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice`. 
# See this snippet from `systemd-cgls --all`. 
└─kubepods.slice 
  ├─kubepods-burstable.slice 
  │ ├─kubepods-burstable-podb0bb1fba_f237_4769_897b_3b8b5b6905d2.slice 
  │ │ ├─cri-containerd-726349c380b050d138a1939966527344261659f53e3a437cbfd72ff1fdaa528c.scope 
  │ │ │ └─23222 /metrics-server --cert-dir=/tmp --secure-port=10250 --kubelet-p…
  │ │ └─cri-containerd-ddd5747967426ff09b105a6acb988876fc6b832860b29a42c3590dac208f9edb.scope 
  │ │   └─22434 /pause
  │ └─kubepods-burstable-pod19d4e37a_c1dc_48cf_9d15_d88256fdc4e0.slice 
  │   ├─cri-containerd-fad513ea1d58725eb9c31f4d6ae6aa979648c9c27f0ede2d6eb53a7d36f355a6.scope 
  │   │ └─22441 /pause
  │   └─cri-containerd-54249c0ee4210007bb2bce729532f6829565ac4a39fe8f144642e914c37f2415.scope 
  │     └─23070 /coredns -conf /etc/coredns/Corefile
  └─kubepods-besteffort.slice 
    ├─kubepods-besteffort-podc8a68445_ebd3_44db_b356_98ef0a933974.slice 
    ├─kubepods-besteffort-pod9f5ef48e_09aa_4bfc_a46c_72dae199889a.slice 
    ├─kubepods-besteffort-pod8b6c10ac_eb6b_49ca_b792_fd0143bc9832.slice 
    │ ├─cri-containerd-b25100aa1d63a1e2250eef6bf374e9ccd85853bc74ef071b80c833d6283b1cee.scope 
    │ │ └─23433 /pause
    │ ├─cri-containerd-4932ff19b4b498382af288a4745d6fc10ca683f72b543cf86e91febb4ff20733.scope 
    │ │ └─23793 /bin/sh /usr/bin/entry
    │ └─cri-containerd-93d90f0d30f3013ab96fe90a999a7de5a020ac401eb97a89faa285bf52c41214.scope 
    │   └─23841 /bin/sh /usr/bin/entry
    ├─kubepods-besteffort-podb23b33a6_f5bb_498a_9209_0977b8b94dd2.slice 
    ├─kubepods-besteffort-podf314c6e4_2e8d_418c_817e_0a4b3191853e.slice 
    │ ├─cri-containerd-e8473dba40059505774a52ba9cf34d4738ebe0c293733b00a55b507c42760507.scope 
    │ │ └─22741 local-path-provisioner start --config /etc/config/config.json
    │ └─cri-containerd-72cb1a172fce6a909150bec5107a9e0f8f5f8279c30d46175f13d8ac8466fd33.scope 
    │   └─22420 /pause
    └─kubepods-besteffort-podcb2d6b54_f354_4044_821c_3b43a24e6557.slice 
      ├─cri-containerd-23e66351f1e670c535fdd30bf019761c5d90e4c8c06c3ba600733dee921f0472.scope 
      │ └─24107 traefik traefik --global.checknewversion --global.sendanonymous…
      └─cri-containerd-b3c5f783798b8f6a69682856c2c0b2e10278e310047cdda1c155a169b504b353.scope 
        └─23565 /pause
# The runwasi pods also have a cgroup under `/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice`,
# but it has empty stats. This is what I think the kubelet is reading.
$ cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice/cpu.stat
usage_usec 0
user_usec 0
system_usec 0
core_sched.force_idle_usec 0
nr_periods 0
nr_throttled 0
throttled_usec 0
nr_bursts 0
burst_usec 0
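To make the comparison concrete, here is a small illustrative reader for cgroup v2 cpu.stat (a hypothetical helper, not part of runwasi or the kubelet): the container-scoped cgroup created by the shim reports nonzero usage_usec, while the pod slice that the kubelet aggregates from reports zero.

```rust
use std::fs;

// Illustrative helper: read usage_usec from a cgroup v2 cpu.stat file.
fn usage_usec(cgroup_dir: &str) -> std::io::Result<u64> {
    let stat = fs::read_to_string(format!("{cgroup_dir}/cpu.stat"))?;
    Ok(stat
        .lines()
        .find_map(|l| l.strip_prefix("usage_usec "))
        .and_then(|v| v.parse().ok())
        .unwrap_or(0))
}

fn main() -> std::io::Result<()> {
    // Container-scoped cgroup created by the shim (nonzero usage in this repro).
    let container = "/sys/fs/cgroup/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b";
    // Pod slice the kubelet reads for pod-level stats (zero usage in this repro).
    let pod_slice = "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice";
    println!("container usage_usec = {}", usage_usec(container)?);
    println!("pod slice usage_usec = {}", usage_usec(pod_slice)?);
    Ok(())
}
```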

Cleanup:

$ sudo bin/k3s kubectl delete -f test/k8s/deploy.yaml
$ make test/k3s/clean

Based on the above, I think we need to amend the container cgroup paths from A to B:
A: /sys/fs/cgroup//kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b
B: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice

I am not sure how to communicate this to Youki or how to modify it in Youki.
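For reference, this is roughly the mapping the fix needs to produce. A minimal sketch, assuming the shim receives the systemd-style `parent.slice:prefix:name` string from the CRI; the helper names below are hypothetical and are not youki or runwasi APIs:

```rust
// Illustrative sketch only: how a systemd-style cgroups path from the CRI,
// "parent.slice:prefix:name", maps onto the real cgroupfs directory that the
// kubelet reads from.
fn expand_slice(slice: &str) -> String {
    // systemd nests slices by dashes: "a-b-c.slice" lives under
    // "a.slice/a-b.slice/a-b-c.slice".
    let stem = slice.trim_end_matches(".slice");
    let mut expanded = String::new();
    let mut prefix = String::new();
    for part in stem.split('-') {
        if !prefix.is_empty() {
            prefix.push('-');
        }
        prefix.push_str(part);
        expanded.push_str(&prefix);
        expanded.push_str(".slice/");
    }
    expanded.trim_end_matches('/').to_string()
}

fn systemd_cgroup_dir(cgroups_path: &str) -> Option<String> {
    // e.g. "kubepods-besteffort-pod<uid>.slice:cri-containerd:<container-id>"
    let mut parts = cgroups_path.splitn(3, ':');
    let (slice, prefix, name) = (parts.next()?, parts.next()?, parts.next()?);
    Some(format!(
        "/sys/fs/cgroup/{}/{}-{}.scope",
        expand_slice(slice),
        prefix,
        name
    ))
}

fn main() {
    let a = "kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b";
    // Prints the nested location under kubepods.slice/kubepods-besteffort.slice/,
    // with a cri-containerd-<id>.scope leaf.
    println!("{}", systemd_cgroup_dir(a).unwrap());
}
```

Fed the A string above, this yields the nested B-style location, which is where the kubelet looks for pod-level stats.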

Mossaka added the bug label on Jan 29, 2025
Mossaka (Member) commented Jan 29, 2025

@utam0k do you have any insights on the cgroups path here?

utam0k (Member) commented Jan 30, 2025

My insights are:

First of all, I'd like to know the cgroup path that is passed to youki.

z63d (Contributor) commented Jan 31, 2025

I'm interested in this, can I give it a try?

By the way, I think the cgroup paths should probably be modified as follows -- or am I getting something wrong?

A: /sys/fs/cgroup/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b
B: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice/cri-containerd-66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b.scope

> Based on the above, I think we need to amend the container cgroup paths from A to B:
> A: /sys/fs/cgroup//kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b
> B: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice

For pods that do not have runtimeClassName set, it looks like this:

pod level: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951403eb_ff08_4bbb_beef_98a67262c289.slice
container level: /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951403eb_ff08_4bbb_beef_98a67262c289.slice/cri-containerd-79893beed8c589f1d94b996c2c9fbee1730ce34aee267391c7a7268ae58ed5eb.scope

Mossaka (Member) commented Jan 31, 2025

> I'm interested in this, can I give it a try?

Sure, please go ahead and give it a try! Let me know what questions you have.

utam0k (Member) commented Feb 1, 2025

Of course, I can help you investigate it!

z63d (Contributor) commented Feb 2, 2025

Apparently the issue is related to the cgroup driver.

Currently, cgroupfs is being used as the cgroup driver.

However, the cgroup path kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4fc48204b066f07fc1e28bab94a571820484e01576982d4cb9d32b seems to expect systemd to be used as the cgroup driver.

It seems like there is a contradiction here.
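One way to see the mismatch: the colon-separated `slice:prefix:name` form is the systemd-driver convention, whereas the cgroupfs driver hands the runtime a plain directory path. A small illustrative check (not youki's actual logic; the function name and the second example path are made up):

```rust
// Illustrative only: distinguish the two cgroups_path conventions a CRI
// runtime can receive, depending on which cgroup driver is configured.
fn is_systemd_style(cgroups_path: &str) -> bool {
    // systemd driver: "parent.slice:prefix:name"
    let parts: Vec<&str> = cgroups_path.split(':').collect();
    parts.len() == 3 && parts[0].ends_with(".slice")
}

fn main() {
    // systemd-style path, as seen in this issue (container id truncated)
    assert!(is_systemd_style(
        "kubepods-besteffort-podb7c2cbdb_9ab5_47ef_977e_97e39a73da6d.slice:cri-containerd:66dda1e97c4f"
    ));
    // cgroupfs-style path (plain directory), hypothetical example
    assert!(!is_systemd_style(
        "/kubepods/besteffort/podb7c2cbdb-9ab5-47ef-977e-97e39a73da6d/66dda1e97c4f"
    ));
}
```

If the two sides disagree (the kubelet aggregating via cgroupfs while the shim creates systemd-style names at the cgroup root), the pod slice the kubelet reads stays empty, which matches the zeroed cpu.stat above.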


Let's try running K3s with cgroup-driver=cgroupfs.

$ cat /etc/rancher/k3s/config.yaml
kubelet-arg:
  - "cgroup-driver=cgroupfs"

$ sudo systemctl daemon-reload
$ sudo systemctl restart k3s-runwasi

$ sudo bin/k3s kubectl get --raw "/api/v1/nodes/runwasi/proxy/stats/summary?only_cpu_and_memory=true" | grep -C 40 wasi-demo
(...)
 "pods": [
  {
   "podRef": {
    "name": "wasi-demo-75d5745dd8-nxwb2",
    "namespace": "default",
    "uid": "db6b0776-895f-464f-88e3-865d39aee72a"
   },
   "startTime": "2025-02-02T11:44:52Z",
   "containers": [
    {
     "name": "demo",
     "startTime": "2025-02-02T11:44:53Z",
     "cpu": {
      "time": "2025-02-02T11:45:10Z",
      "usageNanoCores": 750,
      "usageCoreNanoSeconds": 4019593
     },
     "memory": {
      "time": "2025-02-02T11:45:10Z",
      "workingSetBytes": 34758656
     }
    },
    {
     "name": "nginx",
     "startTime": "2025-02-02T11:44:54Z",
     "cpu": {
      "time": "2025-02-02T11:45:10Z",
      "usageNanoCores": 0,
      "usageCoreNanoSeconds": 67232
     },
     "memory": {
      "time": "2025-02-02T11:45:10Z",
      "workingSetBytes": 13664256
     }
    }
   ],
   "cpu": {
    "time": "2025-02-02T11:45:03Z",
    "usageNanoCores": 254844561,
    "usageCoreNanoSeconds": 8155024000
   },
   "memory": {
    "time": "2025-02-02T11:45:03Z",
    "usageBytes": 98144256,
    "workingSetBytes": 97865728,
    "rssBytes": 87883776,
    "pageFaults": 31664,
    "majorPageFaults": 0
   }
  },
(...)

I could see pod level metrics.


Next, try building with `with_systemd(true)`.
Create a K3s cluster and pods...
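For context on where that flag would plug in: a minimal sketch, assuming libcontainer's builder API roughly as youki's own CLI uses it (`ContainerBuilder::new(...).as_init(...).with_systemd(true)`). The helper function, error handling, and call site below are made up for illustration and may not match runwasi's actual code.

```rust
use libcontainer::container::builder::ContainerBuilder;
use libcontainer::syscall::syscall::SyscallType;
use std::path::Path;

// Hypothetical helper: build the container with the systemd cgroup manager
// enabled instead of the default cgroupfs manager.
fn build_with_systemd(id: &str, bundle: &Path) -> Result<(), Box<dyn std::error::Error>> {
    let mut container = ContainerBuilder::new(id.to_string(), SyscallType::default())
        .as_init(bundle)
        .with_systemd(true) // <- the flag being tested in this comment
        .build()?;
    container.start()?;
    Ok(())
}
```

With this enabled, youki should create the container cgroup under the nested kubepods.slice hierarchy, as the systemd-cgls output below shows.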

$ ps -ax | grep shim-wasmtime
 291117 ?        Sl     0:02 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
 291118 ?        S      0:00 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
 291160 ?        Ssl    0:04 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
 291928 pts/1    S+     0:00 grep --color=auto shim-wasmtime

$ cat /proc/291160/cgroup 
0::/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5279caf_55ad_404e_b123_057fce72b3f4.slice/cri-containerd-d8ebe7ff6215863745d3e58d8639d39a3520bfc3d2795d1d969fe0928d75e8bc.scope

$ cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5279caf_55ad_404e_b123_057fce72b3f4.slice/cri-containerd-d8ebe7ff6215863745d3e58d8639d39a3520bfc3d2795d1d969fe0928d75e8bc.scope/cpu.stat
usage_usec 4392721
user_usec 4252989
system_usec 139731
core_sched.force_idle_usec 0
nr_periods 0
nr_throttled 0
throttled_usec 0
nr_bursts 0
burst_usec 0

$ systemd-cgls
Control group /:
-.slice
├─kubepods-besteffort-poddfa78cad_8bc2_417d_aab2_3efde7e957f8.slice:cri-containerd:fae03ab22ae9524544867faf03975077f6343e09f41293c184a69f46b762f4bd 
│ └─150436 /pause
├─kubepods-besteffort-pod24ea425f_4234_47d6_8dee_fdc521ba6216.slice:cri-containerd:3ba8186c1257a02113f13537b069650f46ecd53e252f7332c401a7e894f9b4e9 
│ └─150456 /pause
├─user.slice 
│ └─user-1002.slice 
│   ├─session-86.scope 
│   │ ├─291696 sshd: kaita_nakamura [priv]
│   │ ├─291774 sshd: kaita_nakamura@pts/2
│   │ └─291775 -bash
│   ├─[email protected] 
│   │ └─init.scope 
│   │   ├─137132 /lib/systemd/systemd --user
│   │   └─137133 (sd-pam)
│   ├─session-80.scope 
│   │ ├─137129 sshd: kaita_nakamura [priv]
│   │ ├─137216 sshd: kaita_nakamura@pts/0
│   │ ├─137217 -bash
│   │ ├─291565 -bash
│   │ └─291566 bash
│   └─session-82.scope 
│     ├─137233 sshd: kaita_nakamura [priv]
│     ├─137281 sshd: kaita_nakamura@pts/1
│     ├─137282 -bash
│     ├─292034 systemd-cgls
│     └─292035 pager
├─init.scope 
│ └─1 /sbin/init
├─system.slice 
│ ├─packagekit.service 
│ │ └─6557 /usr/libexec/packagekitd
│ ├─systemd-networkd.service 
│ │ └─531 /lib/systemd/systemd-networkd
│ ├─systemd-udevd.service 
│ │ └─292 /lib/systemd/systemd-udevd
│ ├─k3s-runwasi.service 
│ │ ├─280379 /home/kaita_nakamura/runwasi/bin/k3s server
│ │ ├─280558 containerd
│ │ ├─281302 /var/lib/rancher/k3s/data/da3ffc1d30a49a23449847b31d95bf4c96c8551396573c18886c9d0c4a63c710/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2909e6fe7ba690f3833a9c8448e57ea28bbd800570c711d332ee82c3db8a1f85 -address /run/k3s/containerd/containerd.sock
│ │ ├─281450 /var/lib/rancher/k3s/data/da3ffc1d30a49a23449847b31d95bf4c96c8551396573c18886c9d0c4a63c710/bin/containerd-shim-runc-v2 -namespace k8s.io -id f4d8f3e57cd04819dc0135b2a35a743444884947ae499b9eb8bec83d98ab3cc8 -address /run/k3s/containerd/containerd.sock
│ │ ├─281466 /var/lib/rancher/k3s/data/da3ffc1d30a49a23449847b31d95bf4c96c8551396573c18886c9d0c4a63c710/bin/containerd-shim-runc-v2 -namespace k8s.io -id 81bd9cd5412172aefb0d026f0ec282851481a1e287f5aabee2c5042b64801527 -address /run/k3s/containerd/containerd.sock
│ │ ├─283328 /var/lib/rancher/k3s/data/da3ffc1d30a49a23449847b31d95bf4c96c8551396573c18886c9d0c4a63c710/bin/containerd-shim-runc-v2 -namespace k8s.io -id d185b3c74b5313ac331d7f6c86f32395a1558772b64862a74018456f7cd03d20 -address /run/k3s/containerd/containerd.sock
│ │ ├─283415 /var/lib/rancher/k3s/data/da3ffc1d30a49a23449847b31d95bf4c96c8551396573c18886c9d0c4a63c710/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8c55aaa22fdc1400ba9b7f766a3fb1834c2b9d5c713be05ae03544362b37c39e -address /run/k3s/containerd/containerd.sock
│ │ ├─291117 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
│ │ └─291118 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
│ ├─google-osconfig-agent.service 
│ │ └─573 /usr/bin/google_osconfig_agent
│ ├─cron.service 
│ │ └─1102 /usr/sbin/cron -f -P
│ ├─system-serial\x2dgetty.slice 
│ │ └─[email protected] 
│ │   └─822 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,9600 ttyS0 vt220
│ ├─polkit.service 
│ │ └─854 /usr/libexec/polkitd --no-debug
│ ├─networkd-dispatcher.service 
│ │ └─591 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
│ ├─multipathd.service 
│ │ └─288 /sbin/multipathd -d -s
│ ├─systemd-journald.service 
│ │ └─243 /lib/systemd/systemd-journald
│ ├─unattended-upgrades.service 
│ │ └─803 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
│ ├─ssh.service 
│ │ └─1094 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
│ ├─snapd.service 
│ │ └─606 /usr/lib/snapd/snapd
│ ├─rsyslog.service 
│ │ └─597 /usr/sbin/rsyslogd -n -iNONE
│ ├─chrony.service 
│ │ ├─585 /usr/sbin/chronyd -F 1
│ │ └─587 /usr/sbin/chronyd -F 1
│ ├─google-guest-agent.service 
│ │ └─798 /usr/bin/google_guest_agent
│ ├─systemd-resolved.service 
│ │ └─534 /lib/systemd/systemd-resolved
│ ├─dbus.service 
│ │ └─569 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
│ ├─system-getty.slice 
│ │ └─[email protected] 
│ │   └─825 /sbin/agetty -o -p -- \u --noclear tty1 linux
│ └─systemd-logind.service 
│   └─1099 /lib/systemd/systemd-logind
├─kubepods-besteffort-pod8867918a_f5fa_4864_ad25_d60f18985a7b.slice:cri-containerd:4b7b0b4d040f9ccabea45d60c0a2134896527b99a5827dbcca96bdc93978e69e 
│ └─150458 /pause
└─kubepods.slice 
  ├─kubepods-burstable.slice 
  │ ├─kubepods-burstable-pod73dd2cf2_0595_4d3b_8ca8_98a6f879469f.slice 
  │ │ ├─cri-containerd-f4d8f3e57cd04819dc0135b2a35a743444884947ae499b9eb8bec83d98ab3cc8.scope 
  │ │ │ └─281558 /pause
  │ │ └─cri-containerd-cec65d120364ea6b4ddc26fc729d090034fe5a1e97862ca4d6352f16e7bc7de1.scope 
  │ │   └─281983 /coredns -conf /etc/coredns/Corefile
  │ └─kubepods-burstable-pode8877f1e_dbe1_43d6_b295_4129eae7f168.slice 
  │   ├─cri-containerd-81bd9cd5412172aefb0d026f0ec282851481a1e287f5aabee2c5042b64801527.scope 
  │   │ └─281548 /pause
  │   └─cri-containerd-24c1efe45e130b8a430e85e3fac159d1e069db930b75506bcfb596f0ce31da09.scope 
  │     └─281980 /metrics-server --cert-dir=/tmp --secure-port=10250 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_>
  └─kubepods-besteffort.slice 
    ├─kubepods-besteffort-pod979b165d_d0b7_402a_8908_893812354519.slice 
    │ ├─cri-containerd-6868034cd49198104e13910b756fa67719c7a00dba52cac1f897f21c890df815.scope 
    │ │ └─283933 traefik traefik --global.checknewversion --global.sendanonymoususage --entrypoints.metrics.address=:9100/tcp --entrypoints.traefik.address=:9000/tcp --entrypoints.web.address=:8000/tcp --entrypoints.websecure.address=:8443/tcp --api.dashboard=true --p>
    │ └─cri-containerd-8c55aaa22fdc1400ba9b7f766a3fb1834c2b9d5c713be05ae03544362b37c39e.scope 
    │   └─283435 /pause
    ├─kubepods-besteffort-podc5279caf_55ad_404e_b123_057fce72b3f4.slice 
    │ ├─cri-containerd-d8ebe7ff6215863745d3e58d8639d39a3520bfc3d2795d1d969fe0928d75e8bc.scope 
    │ │ └─291160 /home/kaita_nakamura/runwasi/dist/bin/containerd-shim-wasmtime-v1 -namespace k8s.io -id 5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b -address /run/k3s/containerd/containerd.sock
    │ ├─cri-containerd-7bd5aaea142b25fe3073e81f898f54ba70c868f11d09b2cabc8e5298fc025744.scope 
    │ │ ├─291433 nginx: master process nginx -g daemon off;
    │ │ ├─291459 nginx: worker process
    │ │ ├─291460 nginx: worker process
    │ │ ├─291461 nginx: worker process
    │ │ ├─291462 nginx: worker process
    │ │ ├─291463 nginx: worker process
    │ │ ├─291464 nginx: worker process
    │ │ ├─291465 nginx: worker process
    │ │ ├─291466 nginx: worker process
    │ │ ├─291467 nginx: worker process
    │ │ ├─291468 nginx: worker process
    │ │ ├─291469 nginx: worker process
    │ │ ├─291470 nginx: worker process
    │ │ ├─291471 nginx: worker process
    │ │ ├─291472 nginx: worker process
    │ │ ├─291473 nginx: worker process
    │ │ └─291474 nginx: worker process
    │ └─cri-containerd-5e11919a8fb40b1cfada383a19631933df23b3a92c1fa4a76c97d76344ef1a5b.scope 
    │   └─291135 /pause
    ├─kubepods-besteffort-podb80882cf_fcbf_4311_8e3c_07ff1ce5a46f.slice 
    │ ├─cri-containerd-d185b3c74b5313ac331d7f6c86f32395a1558772b64862a74018456f7cd03d20.scope 
    │ │ └─283348 /pause
    │ ├─cri-containerd-a5c93d0dcce5e4cce58d6b2ee6a34f4bdb2a8d1aa58d25282be23ab8e8c01dec.scope 
    │ │ └─283651 /bin/sh /usr/bin/entry
    │ └─cri-containerd-7772c76d8045c4505495e662bfc276b5a28707c7f99f99ac3bed6c4c4da4f236.scope 
    │   └─283598 /bin/sh /usr/bin/entry
    └─kubepods-besteffort-pod863c8bc9_a188_440b_9bba_24bace00a09c.slice 
      ├─cri-containerd-2909e6fe7ba690f3833a9c8448e57ea28bbd800570c711d332ee82c3db8a1f85.scope 
      │ └─281415 /pause
      └─cri-containerd-f3f2175e41fa5ffd6e9972c65510159a13589a9c4ffb710c9cef5931ef9fe38d.scope 
        └─281820 local-path-provisioner start --config /etc/config/config.json

$ sudo bin/k3s kubectl get --raw "/api/v1/nodes/runwasi/proxy/stats/summary?only_cpu_and_memory=true" | grep -A 55 wasi-demo
    "name": "wasi-demo-75d5745dd8-qn2jm",
    "namespace": "default",
    "uid": "c5279caf-55ad-404e-b123-057fce72b3f4"
   },
   "startTime": "2025-02-02T13:33:11Z",
   "containers": [
    {
     "name": "nginx",
     "startTime": "2025-02-02T13:34:29Z",
     "cpu": {
      "time": "2025-02-02T13:43:32Z",
      "usageNanoCores": 0,
      "usageCoreNanoSeconds": 69235000
     },
     "memory": {
      "time": "2025-02-02T13:43:32Z",
      "usageBytes": 13606912,
      "workingSetBytes": 13467648,
      "rssBytes": 11051008,
      "pageFaults": 6092,
      "majorPageFaults": 0
     }
    },
    {
     "name": "demo",
     "startTime": "2025-02-02T13:33:11Z",
     "cpu": {
      "time": "2025-02-02T13:43:22Z",
      "usageNanoCores": 861398,
      "usageCoreNanoSeconds": 4533914000
     },
     "memory": {
      "time": "2025-02-02T13:43:22Z",
      "usageBytes": 35409920,
      "workingSetBytes": 35409920,
      "rssBytes": 33140736,
      "pageFaults": 9157,
      "majorPageFaults": 0
     }
    }
   ],
   "cpu": {
    "time": "2025-02-02T13:43:29Z",
    "usageNanoCores": 895400,
    "usageCoreNanoSeconds": 4741073000
   },
   "memory": {
    "time": "2025-02-02T13:43:29Z",
    "usageBytes": 49475584,
    "workingSetBytes": 49328128,
    "rssBytes": 44236800,
    "pageFaults": 24337,
    "majorPageFaults": 8
   }
  }
 ]

This looks like the behavior we want.

Based on this, what changes should I make? Any advice?

Finally, thanks to @utam0k and my co-worker @sat0ken for their help.
