Comments

Thanks Adam, I didn't get a chance to file a proper bug report on the ABI change, but fortunately another user did, and fmt upstream released a new version which addresses this issue; see https://github.com/fmtlib/fmt/releases/tag/11.1.1 . I am working on packaging this new version.

I just recompiled dnf5 with fmt-11.1.0, and it works fine:

$ /tmp/dnf5/dnf5-5.2.8.1-build/dnf5-5.2.8.1/redhat-linux-build/dnf5/dnf5 install fmt/x86_64/fmt-devel-11.1.0-1.fc42.x86_64.rpm fmt/x86_64/fmt-debuginfo-11.1.0-1.fc42.x86_64.rpm fmt/x86_64/fmt-11.1.0-1.fc42.x86_64.rpm
The requested operation requires superuser privileges. Please log in as a user with elevated rights, or use the "--assumeno" or "--downloadonly" options to run the command without modifying the system state.

Meanwhile, the dnf5 built with fmt-11.0.2 stops working after installing the fmt-11.1.0 package:

$ dnf install fmt/x86_64/fmt-devel-11.1.0-1.fc42.x86_64.rpm fmt/x86_64/fmt-debuginfo-11.1.0-1.fc42.x86_64.rpm fmt/x86_64/fmt-11.1.0-1.fc42.x86_64.rpm
Segmentation fault (core dumped)

It seems we have to bump the soversion and rebuild all the packages linked against libfmt.so.11.
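
For anyone hitting something similar: one way to confirm an ABI break like this is libabigail's abipkgdiff, which diffs the exported ABI of two RPM builds directly. A sketch; the 11.1.0 file names are the ones from this thread, while the 11.0.2 NVR is a stand-in:

$ abipkgdiff --d1 fmt-debuginfo-11.0.2-1.fc42.x86_64.rpm \
             --d2 fmt-debuginfo-11.1.0-1.fc42.x86_64.rpm \
             fmt-11.0.2-1.fc42.x86_64.rpm fmt-11.1.0-1.fc42.x86_64.rpm

Any removed or changed functions and variables in its report are exactly what would justify bumping the soversion.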

This update has been unpushed.

It turns out this is a regression. Unpushing...

The tests failed like this:

[2024-12-26T03:35:27.608123Z] [info] [pid:82017] ::: basetest::runtest: # Test died: command 'dnf -y install python3-dnf' failed at fedora/tests/_advisory_post.pm line 17.

And the screenshot looks like this:

root@fedora:~# dnf -y install python3-dnf; echo MQkCg-$?- > /dev/ttyS0
Segmentation fault (core dumped)
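
For triaging, the backtrace should be recoverable from the core dump on the affected machine, assuming systemd-coredump is collecting crashes there (on current Fedora, dnf is a wrapper for the dnf5 binary, hence the match below):

$ coredumpctl list dnf5
$ coredumpctl gdb dnf5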

The gating test "base_reboot_unmount@aarch64" failed:

[2024-12-14T21:10:20.924062Z] [debug] [pid:1041414] starting: /usr/bin/qemu-system-aarch64 -device virtio-gpu-pci,edid=on,xres=1024,yres=768 -only-migratable -chardev ringbuf,id=serial0,logfile=serial0,logappend=on -serial chardev:serial0 -audiodev none,id=snd0 -device intel-hda -device hda-output,audiodev=snd0 -m 4096 -machine virt,gic-version=max -cpu host -netdev user,id=qanet0,net=172.16.2.0/24 -device virtio-net,netdev=qanet0,mac=52:54:00:12:34:56 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -boot menu=on,splash-time=5000 -device nec-usb-xhci -device usb-tablet -device usb-kbd -smp 2 -enable-kvm -no-shutdown -vnc :104,share=force-shared -device virtio-serial -chardev pipe,id=virtio_console,path=virtio_console,logfile=virtio_console.log,logappend=on -device virtconsole,chardev=virtio_console,name=org.openqa.console.virtio_console -chardev pipe,id=virtio_console_user,path=virtio_console_user,logfile=virtio_console_user.log,logappend=on -device virtconsole,chardev=virtio_console_user,name=org.openqa.console.virtio_console_user -chardev socket,path=qmp_socket,server=on,wait=off,id=qmp_socket,logfile=qmp_socket.log,logappend=on -qmp chardev:qmp_socket -S -device virtio-scsi-pci,id=scsi0 -blockdev driver=file,node-name=hd0-overlay0-file,filename=/var/lib/openqa/pool/14/raid/hd0-overlay0,cache.no-flush=on -blockdev driver=qcow2,node-name=hd0-overlay0,file=hd0-overlay0-file,cache.no-flush=on,discard=unmap -device virtio-blk,id=hd0-device,drive=hd0-overlay0,bootindex=0,serial=hd0 -blockdev driver=file,node-name=hd1-file,filename=/var/lib/openqa/pool/14/raid/hd1,cache.no-flush=on -blockdev driver=qcow2,node-name=hd1,file=hd1-file,cache.no-flush=on,discard=unmap -device virtio-blk,id=hd1-device,drive=hd1,serial=hd1 -drive id=pflash-code-overlay0,if=pflash,file=/var/lib/openqa/pool/14/raid/pflash-code-overlay0,unit=0,readonly=on -drive id=pflash-vars-overlay0,if=pflash,file=/var/lib/openqa/pool/14/raid/pflash-vars-overlay0,unit=1
[2024-12-14T21:10:20.932880Z] [debug] [pid:1041414] Waiting for 0 attempts
dmesg: read kernel buffer failed: Operation not permitted
[2024-12-14T21:10:21.285224Z] [debug] [pid:1041414] Waiting for 1 attempts
[2024-12-14T21:10:21.285617Z] [info] [pid:1041414] ::: backend::baseclass::die_handler: Backend process died, backend errors are reported below in the following lines:
  QEMU terminated before QMP connection could be established. Check for errors below
[2024-12-14T21:10:21.286260Z] [info] [pid:1041414] ::: OpenQA::Qemu::Proc::save_state: Saving QEMU state to qemu_state.json
[2024-12-14T21:10:21.288013Z] [debug] [pid:1041414] Passing remaining frames to the video encoder
[image2pipe @ 0xaaaae32cd930] Could not find codec parameters for stream 0 (Video: ppm, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2pipe, from 'fd:':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: ppm, none, 24 fps, 24 tbr, 24 tbn
Output #0, webm, to 'video.webm':
[out#0/webm @ 0xaaaae32dfda0] Output file does not contain any stream
Error opening output file video.webm.
Error opening output files: Invalid argument
[2024-12-14T21:10:21.319410Z] [debug] [pid:1041414] Waiting for video encoder to finalize the video
[2024-12-14T21:10:21.319605Z] [debug] [pid:1041414] The built-in video encoder (pid 1041448) terminated
[2024-12-14T21:10:21.319819Z] [debug] [pid:1041414] The external video encoder (pid 1041447) terminated
[2024-12-14T21:10:21.322160Z] [debug] [pid:1041414] QEMU: QEMU emulator version 9.1.2 (qemu-9.1.2-1.fc41)
[2024-12-14T21:10:21.322335Z] [debug] [pid:1041414] QEMU: Copyright (c) 2003-2024 Fabrice Bellard and the QEMU Project developers
[2024-12-14T21:10:21.322552Z] [warn] [pid:1041414] !!! : qemu-system-aarch64: -vnc :104,share=force-shared: Failed to find an available port: Address already in use
[2024-12-14T21:10:21.324834Z] [debug] [pid:1041414] sending magic and exit
[2024-12-14T21:10:21.325551Z] [debug] [pid:1039876] received magic close
[2024-12-14T21:10:21.341236Z] [debug] [pid:1039876] backend process exited: 0
[2024-12-14T21:10:21.442003Z] [warn] [pid:1039876] !!! main: failed to start VM at /usr/lib/os-autoinst/backend/driver.pm line 104.
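
The warn line above has the actual cause: qemu-system-aarch64 could not bind its VNC display (:104, i.e. TCP port 5900 + 104 = 6004) because it was already in use, which points at a worker-host problem rather than the update itself. Something like this on the worker would show what was holding the port (a sketch, assuming iproute2's ss is available there):

$ ss -ltnp 'sport = :6004'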

But the same test on x86_64 passed; see https://openqa.fedoraproject.org/tests/3092312/logfile?filename=autoinst-log.txt:

[2024-12-14T21:18:02.360684Z] [debug] [pid:1846752] starting: /usr/bin/qemu-system-x86_64 -device virtio-vga,edid=on,xres=1024,yres=768 -only-migratable -chardev ringbuf,id=serial0,logfile=serial0,logappend=on -serial chardev:serial0 -audiodev none,id=snd0 -device intel-hda -device hda-output,audiodev=snd0 -global isa-fdc.fdtypeA=none -m 3072 -machine q35,smm=on -cpu Nehalem -netdev user,id=qanet0,net=172.16.2.0/24 -device virtio-net,netdev=qanet0,mac=52:54:00:12:34:56 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -device qemu-xhci -device usb-tablet -smp 2 -enable-kvm -no-shutdown -vnc :112,share=force-shared -device virtio-serial -chardev pipe,id=virtio_console,path=virtio_console,logfile=virtio_console.log,logappend=on -device virtconsole,chardev=virtio_console,name=org.openqa.console.virtio_console -chardev pipe,id=virtio_console_user,path=virtio_console_user,logfile=virtio_console_user.log,logappend=on -device virtconsole,chardev=virtio_console_user,name=org.openqa.console.virtio_console_user -chardev socket,path=qmp_socket,server=on,wait=off,id=qmp_socket,logfile=qmp_socket.log,logappend=on -qmp chardev:qmp_socket -S -global driver=cfi.pflash01,property=secure,value=on -device virtio-scsi-pci,id=scsi0 -blockdev driver=file,node-name=hd0-overlay0-file,filename=/var/lib/openqa/pool/22/raid/hd0-overlay0,cache.no-flush=on -blockdev driver=qcow2,node-name=hd0-overlay0,file=hd0-overlay0-file,cache.no-flush=on,discard=unmap -device virtio-blk,id=hd0-device,drive=hd0-overlay0,bootindex=0,serial=hd0 -blockdev driver=file,node-name=hd1-file,filename=/var/lib/openqa/pool/22/raid/hd1,cache.no-flush=on -blockdev driver=qcow2,node-name=hd1,file=hd1-file,cache.no-flush=on,discard=unmap -device virtio-blk,id=hd1-device,drive=hd1,serial=hd1 -drive id=pflash-code-overlay0,if=pflash,file=/var/lib/openqa/pool/22/raid/pflash-code-overlay0,unit=0,readonly=on -drive id=pflash-vars-overlay0,if=pflash,file=/var/lib/openqa/pool/22/raid/pflash-vars-overlay0,unit=1
[2024-12-14T21:18:02.369491Z] [debug] [pid:1846752] Waiting for 0 attempts
[2024-12-14T21:18:03.370311Z] [debug] [pid:1846752] Waiting for 1 attempts
[2024-12-14T21:18:04.370920Z] [debug] [pid:1846752] Finished after 2 attempts
[2024-12-14T21:18:04.395934Z] [debug] [pid:1846752] Establishing VNC connection to localhost:6012
BZ#2307596 c-ares-1.34.4 is available
Test Case c ares

Works great!

Thank you all for the help!

BZ#2295470 fmt-11.0.1 is available
:: [ 11:06:12 ] :: [   FAIL   ] :: Command 'cd /home/boost.pXRH0PQOn3/BUILD/boost_1_83_0' (Expected 0, got 1)
:: [ 11:06:12 ] :: [   FAIL   ] :: Command 'su -c './bootstrap.sh &>/home/boost.pXRH0PQOn3/bootstrap.log' bstbld' (Expected 0, got 127)

The test failures are confusing. The first one seems to suggest that the build directory didn't exist at all? The second one then follows from it: exit status 127 means "command not found", which is what you get when ./bootstrap.sh is invoked from the wrong directory.
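
A quick illustration of the two exit statuses in that log (nothing boost-specific, just POSIX shell semantics):

$ sh -c 'false'; echo $?              # 1: the command ran and reported failure
$ sh -c 'no-such-command'; echo $?    # 127: the command could not be found at all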

The rpminspect pipeline run at https://osci-jenkins-1.ci.fedoraproject.org/job/fedora-ci/job/rpminspect-pipeline/job/master/254875/console timed out after running for 12 hours:

Ready to run at Thu Jun 06 12:52:00 UTC 2024
Cancelling nested steps due to timeout

Works great! Previously I had to apply a patch to work around this issue; now the test passes without that workaround.

$ ./test.py --verbose --mode release cql-pytest/test_cast_data
Found 1 tests.
================================================================================
[N/TOTAL]   SUITE    MODE   RESULT   TEST
------------------------------------------------------------------------------
[1/1]      cql-pytest release [ PASS ] cql-pytest.test_cast_data.1 0.72s
------------------------------------------------------------------------------
CPU utilization: 7.5%

Works for me. And it does address the build failure which I ran into with clang-17.0.6-1.

Works for me.

BZ#2236516 segfault when using coroutine due to miscompilation.

I tested this package by installing it on a RHEL 8.1 machine, compiling Ceph, and running the tests. Everything is fine.
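
For reference, here is the kind of minimal C++20 coroutine smoke test one could use against this bug. This is a hypothetical sketch of mine, not the actual Ceph test, and it assumes a coroutine-capable g++ (GCC 11 or newer); with the miscompiling compiler, a program like this is the sort of thing that would crash:

$ cat > coro_smoke.cpp <<'EOF'
#include <coroutine>
#include <cstdio>

// A fire-and-forget coroutine type: it never suspends, so the frame
// is created, run to completion, and destroyed in a single call.
struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

task run() {
    co_await std::suspend_never{};  // trivially resumes
    std::puts("coroutine ran to completion");
}

int main() { run(); }
EOF
$ g++ -std=c++20 -O2 coro_smoke.cpp -o coro_smoke && ./coro_smoke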


Tested on my up-to-date CentOS 8 with some XML queries, and also used it to validate an XML document against a RelaxNG schema; it works great!

BZ#1757000 xmlstarlet missing in EPEL8
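
For anyone else verifying the EPEL8 package, these are the kinds of invocations described above (the file names and the XPath are hypothetical):

$ xmlstarlet sel -t -v '/catalog/book/title' document.xml
$ xmlstarlet val --relaxng schema.rng document.xml

The first extracts a value with an XPath query; the second validates the document against a RelaxNG schema.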

I reviewed https://bugzilla.redhat.com/show_bug.cgi?id=1703284 offline together with @xiubli; the packaging looks sane.

BZ#1703284 Review Request: nbd-runner - one nbd service for distributed storages