
Using quadlet User: affects cgroup files and prevents reopening when changing or removing value #24942

Open
g4njawizard opened this issue Jan 6, 2025 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@g4njawizard

Issue Description

Hi all,

I had to build Podman from source, which was rather hard and annoying due to the lack of guides on what to watch out for on a Raspberry Pi. I wasn't able to install the very latest Podman version because it's bugged (another ticket about that has already been opened by someone else). Anyway, I got it working and already run some containers using Quadlet. For my Nextcloud instance to work, I have to add User=0:0. If I remove User=0:0, some cgroup-related files can no longer be opened:

Jan 06 11:47:17 pinode01 conmon[2859]: conmon c7bcf94f6dff735d99a7 <nwarn>: Failed to add inotify watch for /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app.slice/nextcloud.service/libpod-payload-c7bcf94f6dff735d99a74161201bd7b9272f113188b22b93ad48cd853966a22d/memory.events
Jan 06 11:47:17 pinode01 pasta[2864]: No external routable interface for IPv6
Jan 06 11:47:17 pinode01 systemd[689]: Started nextcloud.service - Nextcloud container.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit UNIT has finished successfully.
░░ 
░░ The job identifier is 296.
Jan 06 11:47:17 pinode01 nextcloud[2847]: c7bcf94f6dff735d99a74161201bd7b9272f113188b22b93ad48cd853966a22d
Jan 06 11:47:17 pinode01 conmon[2859]: conmon c7bcf94f6dff735d99a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app.slice/nextcloud.service/libpod-payload-c7bcf94f6dff735d99a74161201bd7b9272f113188b22b93ad48cd853966a22d/memory.events
Jan 06 11:47:18 pinode01 systemd[689]: nextcloud.service: Main process exited, code=exited, status=1/FAILURE

Is this behaviour expected? If so, how can I prevent it from happening while I test settings?
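One way to narrow this down (a diagnostic sketch, not a confirmed fix): the failing file, memory.events, belongs to the memory controller, and the podman info output below lists only cpu and pids under cgroupControllers. Checking what systemd actually delegates to the per-user service would confirm whether memory is missing. The path follows the standard cgroup v2 user-slice layout:

```shell
# Inspect the cgroup controllers delegated to the per-user systemd service.
# Standard cgroup v2 user-slice path; adjust if your layout differs.
f="/sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers"
if [ -r "$f" ]; then
  cat "$f"   # e.g. "cpu pids" -- a missing "memory" would match the failures above
else
  echo "controllers file not readable: $f"
fi
```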

The .container file I'm using:

[Unit]
Description=Nextcloud container
Requires=mariadb.service
After=redis.service

[Service]
Restart=always

[Container]
Image=docker.io/library/nextcloud:latest
ContainerName=nextcloud
User=0
Group=0
UserNS=keep-id:uid=33,gid=33
PublishPort=8666:80
Volume=%h/data/nextcloud/html:/var/www/html:Z
Volume=%h/data/nextcloud/data:/var/www/html/data:Z
Environment=REDIS_HOST=systemd-redis
#...
#some more Env

[Install]

$ podman -v
podman version 5.3.0
host:
  arch: arm64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpu
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /usr/local/libexec/podman/conmon
    version: 'conmon version 2.1.12, commit: aee638f5b23d408b42c74ece8f7bdb977078386a'
  cpuUtilization:
    idlePercent: 98.9
    systemPercent: 0.73
    userPercent: 0.37
  cpus: 4
  databaseBackend: boltdb
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: file
  freeLocks: 1946
  hostname: pinode01
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.6.62+rpt-rpi-2712
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 127664128
  memTotal: 4241489920
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-3_arm64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_arm64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1.0.0.0.4-bd4f
      commit: bd4f77330961819150f35bc42f4a6dc44ff135e7
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_arm64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 536854528
  swapTotal: 536854528
  uptime: 0h 44m 41.00s
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/odin/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/odin/.local/share/containers/storage
  graphRootAllocated: 984361312256
  graphRootUsed: 549058502656
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 13
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/odin/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.0
  Built: 1736029541
  BuiltTime: Sat Jan  4 23:25:41 2025
  GitCommit: 1c32d39997d25c0b04cf2f1eb7d6c9c800dbae80
  GoVersion: devel go1.24-705b5a569a Fri Jan 3 14:40:11 2025 -0800
  Os: linux
  OsArch: linux/arm64
  Version: 5.3.0

Steps to reproduce the issue

  1. Create a .container file.
  2. Add User=0, Group=0 and UserNS=keep-id:uid=33,gid=33.
  3. Start the container.
  4. Stop the container.
  5. Remove User= (and Group=).
  6. Start it again.
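The edit in steps 4–6 amounts to deleting the User= (and Group=) lines from the quadlet file before the second start. A minimal sketch on a scratch copy of the file (the image and key values are the ones from this report, used here only as an example):

```shell
# Demonstrate the edit between the two runs on a scratch copy of the unit.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[Container]
Image=docker.io/library/nextcloud:latest
User=0
Group=0
UserNS=keep-id:uid=33,gid=33
EOF

# Step 5: drop the User= (and Group=) lines; UserNS= stays.
sed -i '/^User=/d; /^Group=/d' "$tmp"
grep '^User' "$tmp"    # only the UserNS=keep-id:... line remains
```

After editing the real file under ~/.config/containers/systemd/, the service is regenerated with `systemctl --user daemon-reload` before restarting.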

Describe the results you received

Errors reading cgroup files; the service exits with status 1/FAILURE.

Describe the results you expected

The container starts and runs normally.

podman info output

If you are unable to run podman info for any reason, please provide the podman version, operating system and its version and the architecture you are running.

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

I had to manually install the latest crun version:

crun version 1.19.1.0.0.0.4-bd4f
commit: bd4f77330961819150f35bc42f4a6dc44ff135e7
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL

Additional information

Happens always.

@g4njawizard g4njawizard added the kind/bug Categorizes issue or PR as related to a bug. label Jan 6, 2025