Meh, this breaks authentication against AD. But too late now, I'll file a bugzilla.

Gating tests fail, they need an update: https://src.fedoraproject.org/rpms/cockpit/pull-request/20

This is not a failure of cockpit itself though, just that we forgot to update the tests before releasing.

This failure of testUsed is persistent:

+ parted -s /dev/sda mktable msdos
+ parted -s /dev/sda mkpart primary ext2 1M 25
+ udevadm settle
+ echo einszweidrei | cryptsetup luksFormat /dev/sda1
Cannot wipe header on device /dev/sda1.

This is an actual regression in Rawhide, I filed https://bugzilla.redhat.com/show_bug.cgi?id=1824878 for it.

Please ignore the previous comment, that was meant for cockpit-podman, not this update.

I just pushed a fix to dist-git to drop the bogus "dist.rpmdeplint" required test. Not sure if/when greenwave picks that up, or if it needs to be explicitly re-triggered.

Initial tier-0 test has some flakes. A retry looks different, but even worse. This needs some deeper analysis and probably dialing back the tests a bit.

Yay, no crashes any more! Thanks!

BZ#1808767 X crashes on i915 GPU with SNA 2D acceleration

Breaks kdump, same regression as reported a week ago in Fedora 32: https://bugzilla.redhat.com/show_bug.cgi?id=1812393 . However, back then I thought kexec-tools was to blame.

I just saw that this has already been in testing for a month, so it's probably not kexec-tools itself. I think dracut (https://bodhi.fedoraproject.org/updates/FEDORA-2020-b36b25de24) is a more likely candidate. So undoing my -1.

Breaks kdump, same regression as reported a week ago in Fedora 32: https://bugzilla.redhat.com/show_bug.cgi?id=1812393

This breaks:

setroubleshootd[7077]: Traceback (most recent call last):
setroubleshootd[7077]:   File "/usr/sbin/setroubleshootd", line 35, in <module>
setroubleshootd[7077]:     from setroubleshoot.util import log_debug
setroubleshootd[7077]:   File "/usr/lib/python3.7/site-packages/setroubleshoot/util.py", line 71, in <module>
setroubleshootd[7077]:     from pydbus import SystemBus
setroubleshootd[7077]: ModuleNotFoundError: No module named 'pydbus'

Apparently there is a missing dependency?
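A quick way to confirm would be to try the failing import directly; note that the python3-pydbus package name is my assumption based on Fedora's usual naming convention:

```shell
# Check whether the pydbus module that setroubleshootd imports is available;
# if not, the corresponding RPM (presumably python3-pydbus) is missing from
# the dependency list.
if python3 -c 'import pydbus' 2>/dev/null; then
    echo "pydbus present"
else
    echo "pydbus missing"
fi
```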

@imabug: Right, unfortunately we found that 211 does not completely fix #1792623 . It's not a regression compared to 210 (where the CPU/memory graph pages oopsed everywhere), though. The full fix is in master now.


I just upgraded my system again, which pulled in a new kernel and a new podman, and now it works again.

toolbox-0.0.17-1.fc31.noarch podman-1.7.0-2.fc31.x86_64 crun-0.11-1.fc31.x86_64 kernel-5.4.8-200.fc31.x86_64

So apparently this new crun needs the new podman?

@gscrivano: sudo journalctl --since '2 days ago' | grep mkfifo has no hits, and I didn't see that error message anywhere. I'll test with the previous version in a bit; with ostree that's not entirely trivial.

Slightly more info when trying to start the test container directly with podman:

 podman --log-level=debug start test
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/martin/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/martin/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/martin/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/martin/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/bin/fuse-overlayfs   
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/martin/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/martin/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/martin/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/martin/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] overlay: mount_data=lowerdir=/var/home/martin/.local/share/containers/storage/overlay/l/AUK22E4VYYW2GEKXJKNH5ESVJA:/var/home/martin/.local/share/containers/storage/overlay/l/FTUE3LTM4ICA4MKGJ52LZTKLLM,upperdir=/var/home/martin/.local/share/containers/storage/overlay/fe33ed16d2edf5ef2448b0409ca086da52f65edf107cbf460f17012755922dc2/diff,workdir=/var/home/martin/.local/share/containers/storage/overlay/fe33ed16d2edf5ef2448b0409ca086da52f65edf107cbf460f17012755922dc2/work,context="system_u:object_r:container_file_t:s0:c59,c590" 
DEBU[0000] mounted container "6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19" at "/var/home/martin/.local/share/containers/storage/overlay/fe33ed16d2edf5ef2448b0409ca086da52f65edf107cbf460f17012755922dc2/merged" 
DEBU[0000] Created root filesystem for container 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 at /var/home/martin/.local/share/containers/storage/overlay/fe33ed16d2edf5ef2448b0409ca086da52f65edf107cbf460f17012755922dc2/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroups for container 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 to user.slice:libpod:6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] Created OCI spec for container 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 at /var/home/martin/.local/share/containers/storage/overlay-containers/6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 -u 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 -r /usr/bin/crun -b /var/home/martin/.local/share/containers/storage/overlay-containers/6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19/userdata -p /run/user/1000/overlay-containers/6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19/userdata/pidfile -l k8s-file:/var/home/martin/.local/share/containers/storage/overlay-containers/6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /run/user/1000/overlay-containers/6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/martin/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Cleaning up container 6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "6dd6ba8c5dd6129905954c9d651a4264c96b2ce5222af9551a75e95cbdfafa19" 
ERRO[0000] unable to start container "test": container create failed (no logs from conmon): EOF 

New creation works:

toolbox create -c test
toolbox enter -c test

But after podman stop test (or rebooting the machine, which is how one usually ends up in this state):

toolbox --verbose enter -c test
toolbox: running as real user ID 1000
toolbox: resolved absolute path for /bin/toolbox to /usr/bin/toolbox
toolbox: checking if /etc/subgid and /etc/subuid have entries for user martin
toolbox: TOOLBOX_PATH is /usr/bin/toolbox
toolbox: running on a cgroups v2 host
toolbox: current Podman version is 1.6.2
toolbox: migration not needed: Podman version 1.6.2 is unchanged
toolbox: Fedora generational core is f31
toolbox: base image is fedora-toolbox:31
toolbox: container is test
toolbox: checking if container test exists
toolbox: calling org.freedesktop.Flatpak.SessionHelper.RequestSession
toolbox: starting container test
toolbox: /etc/profile.d/toolbox.sh already mounted in container test
Error: unable to start container "test": container create failed (no logs from conmon): EOF
toolbox: failed to start container test

This seems to break toolbox. Creating a new toolbox container works fine (toolbox create), but rebooting and trying to start an existing one (toolbox enter) fails. I'll create a bz with more information shortly.
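For reference, a minimal reproducer of the sequence described above; the container name "test" is just an example:

```shell
# Works: creating and entering a fresh toolbox container
toolbox create -c test
toolbox enter -c test    # exit the container again afterwards

# Fails after the container has been stopped (or after a reboot):
podman stop test
toolbox enter -c test    # -> Error: unable to start container "test":
                         #    container create failed (no logs from conmon): EOF
```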