<moto-timo>
isolated ptests in a docker container are VERY handy
<moto-timo>
fray: isolated per-package (recipe) ptests are the way to go if you are on a new enough release... core-image-ptest-python3-cryptography for example (or meta-python-image-ptest-<foo> )
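A minimal sketch of how these per-recipe ptest images are typically driven, assuming a release that ships the ptest image class (the image name follows the pattern moto-timo mentions; exact variable values may differ per release):

    # local.conf -- enable ptest packaging and image testing
    DISTRO_FEATURES:append = " ptest"
    IMAGE_CLASSES += "testimage"

    # build the isolated ptest image for one recipe, then run it under qemu
    $ bitbake core-image-ptest-python3-cryptography
    $ bitbake -c testimage core-image-ptest-python3-cryptography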
davidinux has quit [Ping timeout: 264 seconds]
<fray>
moto-timo: that's what I'm doing, running those tests, but I'm getting a lot of failures from parallel instances of QEMU now.. at least I have it running..
davidinux has joined #yocto
<moto-timo>
👍
<moto-timo>
it's a mental shift to be sure
<fray>
I can't use the YP version of QEMU, I need to use the qemu-xilinx so I'm able to emulate the heterogeneous configuration
<fray>
I'm getting a lot of failures during boot the more parallelization I do..
<fray>
(failures as in hung boots) so I'm wondering if qemu is running out of resources or has some other internal problem
<fray>
also on microblaze (which is a simpler design), after booting, the passed-in IP is getting cleared when udhcpd runs. I suspect maybe there is a kernel config or something causing this, but I really don't know.. Haven't investigated enough
<fray>
and if parselogs is enabled it always fails, so I have to disable it. I can't find a way to add additional exclusions for my configurations...
sakman has quit [Ping timeout: 255 seconds]
<moto-timo>
fray: I have definitely had to adjust memory and smp/cpu resources to get ptests to run. Are you using runqemu?
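For reference, the knobs that usually matter here are the QB_* variables runqemu reads from the image's qemuboot.conf; a hedged example (values are illustrative, not recommendations):

    # local.conf or machine .conf -- give the guest more RAM and CPUs
    QB_MEM = "-m 2048"
    QB_SMP = "-smp 4"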
<fray>
it's qemu-xlnx itself that seems to be having an issue.. the freeze-up is happening prior to u-boot starting
<fray>
there is a hand-off between the cortex-a and the microblaze firmware CPUs.. it's like the cortex-a is sitting there waiting for the firmware CPU to say "go" and never gets the signal
<fray>
the other alternative is the virtual serial port from the cortex-a went off in the weeds and I'm just not getting any output
sakman has joined #yocto
<fray>
on the xilinx Versal FPGAs, the FPGA, various hardware, and power management are handled by the microblaze CPU.. so it has to set up the wiring for the cortex-a to "go"
LocutusOfBorg has quit [Read error: Connection reset by peer]
LocutusOfBorg has joined #yocto
<moto-timo>
<sarcasm>Modern hardware designs have eliminated the serial port. stop trying to use it.</sarcasm>
* moto-timo
shakes his tiny fist to the heavens for all the wasted hours because there was no DB-9 header on the PCB.
<fray>
I was told a lot of our boards have moved to FTDI, JTAG and serial are muxed over it.. actually eliminates additional headers and makes it 'easier' on developers..
<moto-timo>
yup. I've heard these rumors. I've seen designs without any of that and cursed the engineer.
<fray>
got the timeouts with only two parallel qemu sessions.. moved to 1.. and re-running the ptest.. if this passes, there is definitely something funky on our end
ablu has quit [Ping timeout: 255 seconds]
ablu has joined #yocto
<moto-timo>
fray: I’d like to say I have a lot of confidence in qemu development in the last 4+ years. But I don’t like to lie.
<fray>
well the AMD/Xilinx fork exists because (apparently) the QEMU developers said that they didn't need to model heterogeneous systems..
<fray>
(shared memory/devices, but different CPUs)
<khem>
yeah I like FTDI, fewer cables, and these days sticking it on USB is awesome.. in fact these days I am getting boards which have USB-C ports for serial
Daanct12 has joined #yocto
<khem>
fray: I do run 40+ qemus in parallel
<khem>
successfully
kpo_ has quit [Ping timeout: 258 seconds]
xmn_ has joined #yocto
xmn has quit [Ping timeout: 246 seconds]
Vonter has quit [Ping timeout: 246 seconds]
Vonter has joined #yocto
jclsn has quit [Ping timeout: 258 seconds]
jclsn has joined #yocto
xmn_ has quit [Quit: xmn_]
xmn has joined #yocto
xmn has quit [Read error: Connection reset by peer]
xmn has joined #yocto
silbe has quit [Ping timeout: 258 seconds]
silbe has joined #yocto
Guest62 has joined #yocto
<Guest62>
hello, building a yocto image for nvidia xavier NX, it is working well apart from an SPI sensor (icm20689); no device is created at boot despite the dts containing the right entry and the 6050 driver being built and present on target
<Guest62>
any suggestion welcome as I have tried all I could think of
jmd has joined #yocto
davidinux has quit [Ping timeout: 248 seconds]
davidinux has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
rob_w has joined #yocto
linfax has joined #yocto
tnovotny has joined #yocto
jmd has quit [Remote host closed the connection]
goliath has joined #yocto
varjag has joined #yocto
zpfvo has joined #yocto
vladest has quit [Ping timeout: 258 seconds]
<LetoThe2nd>
yo dudX
Kubu_work has joined #yocto
amsobr has quit [Ping timeout: 248 seconds]
mvlad has joined #yocto
rfuentess has joined #yocto
zpfvo has quit [Ping timeout: 240 seconds]
jmk1 has quit [Ping timeout: 246 seconds]
prabhakarlad has joined #yocto
zpfvo has joined #yocto
vladest has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
radanter has joined #yocto
diecore_ has joined #yocto
florian_kc has joined #yocto
<diecore_>
Hello guys! I've managed to build the custom kernel without issues using a custom recipe. My problem is that even though the patch works and generates the extra source files for adding a new driver to the kernel, do_compile_kernelmodules does not create the needed kernel module and doesn't show any errors. Any ideas?
<kanavin>
RP: I did yes. Amended patch was sent to the ML.
<RP>
kanavin: ok, thanks. I think I'm losing track
<kanavin>
RP: I'm down with a sore throat, I'll answer a couple of emails, then probably be on sofa for the rest of the day
<RP>
kanavin: with AUH, I think you and I need to agree on the approach David should take. Should layers be handled internally to the AUH script or iterated over by the top level AB helper?
<RP>
kanavin: sorry to hear that :(. Any chance you could quickly look at the AUH stuff so we can unblock David?
<RP>
kanavin: hope you get over it quickly!
<kanavin>
RP: yes. first, AUH itself takes the layer location as a command line parameter. Layer 'discovery' or hardcoding can happen from scripts that execute it. I guess that's settled?
<kanavin>
otherwise, I'll reply to the email now
<RP>
kanavin: I was thinking iterating at a higher level gives us more control e.g. maybe at some point sending different sublayers to different maintainers
<kanavin>
let me just check there's no fever
<kanavin>
mild, 37.2
<RP>
kanavin: if you need to go that is fine, I just wanted to say that if you looked at anything, I'd like to try and ensure we agree on that one before more work gets done!
<kanavin>
RP: I wrote an answer, can you check it makes sense?
alejandrohs has quit [Ping timeout: 260 seconds]
alejandrohs has joined #yocto
<RP>
kanavin: yes, I think so, thanks
Vonter has quit [Ping timeout: 272 seconds]
Vonter has joined #yocto
florian__ has joined #yocto
frieder has joined #yocto
<Guest62>
hello, building a yocto image for nvidia xavier NX, it is working well apart from an SPI sensor (icm20689); no device is created at boot despite the dts containing the right entry and the 6050 driver being built and present on target
leon-anavi has joined #yocto
kayterina3 has joined #yocto
xmn has quit [Ping timeout: 260 seconds]
Chocky has quit [Ping timeout: 255 seconds]
Chocky has joined #yocto
Chocky has quit [Client Quit]
emdevt has joined #yocto
Chocky has joined #yocto
Chocky has quit [Client Quit]
Chocky has joined #yocto
Vonter has quit [Ping timeout: 260 seconds]
Vonter has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<kayterina3>
Hello. I want to split one script out to a different package. I have FILES_myrecipe-bla="/etc/init.d/myrecipe-bla", but the file is not put under packages-split/myrecipe-bla unless I remove "etc" from FILES:${PN} (and re-add all the other /etc/ files I want). Is that the intended behaviour? I expected to have the same file present in both packages.
<rburton>
kayterina3: did you add myrecipe-bla to PACKAGES?
<kayterina3>
PACKAGES += "${PN}-bla"
<rburton>
that is the expected behaviour
<rburton>
the packages are processed in the order that they are listed in FILES
<rburton>
you appended so PN takes /etc first
<rburton>
put -bla at the beginning of the list, not the end
<kayterina3>
It never crossed my mind. Where is that documented? The way the files are "consumed"?
<rburton>
what order would you think they were evaluated in when multiple FILES claim the same files?
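In recipe terms, something along these lines (package name and path taken from kayterina3's example; the prepend is the important part):

    # list the extra package *before* ${PN} so it claims the file first
    PACKAGES =+ "${PN}-bla"
    FILES:${PN}-bla = "${sysconfdir}/init.d/myrecipe-bla"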
<Rich_1234>
Is there any easy way to basically set a dependency in a local.conf file? I am using an existing recipe and, rather than edit that, I am wondering if there is an easy way to force a package to be built before it, in case I need my GPU drivers built first
<neverpanic>
The canonical way to do that is to add an entry to DEPENDS in your existing recipe. If you don't want to modify that, use a bbappend.
<rburton>
yeah, that
<rburton>
you _can_ do it from local.conf but that's so horrible. use a bbappend.
<Rich_1234>
right bbappend sounds like what I need
<Rich_1234>
thanks
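A minimal sketch of that bbappend, assuming the GPU driver recipe is called gpu-driver (both names here are placeholders):

    # meta-yourlayer/recipes-example/existing/existing_%.bbappend
    # DEPENDS makes the driver's sysroot available before this recipe configures
    DEPENDS += "gpu-driver"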
Guest18 has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
Guest18 has quit [Quit: Connection closed]
tnovotny_ has joined #yocto
tnovotny has quit [Ping timeout: 240 seconds]
kayterina3 has quit [Ping timeout: 248 seconds]
diecore_ has quit [Ping timeout: 240 seconds]
diecore_ has joined #yocto
jmd has joined #yocto
kayterina3 has joined #yocto
sotaoverride is now known as Guest1925
Guest1925 has quit [Killed (sodium.libera.chat (Nickname regained by services))]
sotaover1ide is now known as sotaoverride
sotaover1ide has joined #yocto
|Xagen has joined #yocto
tnovotny has joined #yocto
Xagen has quit [Ping timeout: 240 seconds]
tnovotny_ has quit [Ping timeout: 260 seconds]
|Xagen has quit [Ping timeout: 240 seconds]
xmn has joined #yocto
florian__ has quit [Ping timeout: 260 seconds]
florian__ has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.1.0]
kpo_ has joined #yocto
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
rob_w has quit [Remote host closed the connection]
emdevt has quit [Quit: Leaving]
kayterina3 has quit [Remote host closed the connection]
kayterina3 has joined #yocto
kayterina3 has quit [Remote host closed the connection]
kayterina3 has joined #yocto
sbr_eps has joined #yocto
kayterina3 has quit [Remote host closed the connection]
kayterina3 has joined #yocto
kayterina3 has quit [Remote host closed the connection]
kayterina3 has joined #yocto
kayterina3 has quit [Ping timeout: 248 seconds]
<reatmon>
Is there a tool or something that I can use to test the connection to the hash equiv server? I think we might have some firewall issues in our data center and need a quick way to know whether I am correctly talking to the server or not.
<JPEW>
reatmon: There is a bitbake-hashclient command line tool
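For anyone searching later, a quick connectivity check looks roughly like this (the address is an example; check bitbake-hashclient --help for the exact subcommands and address formats in your BitBake version):

    $ bitbake-hashclient --address "ws://hashserv.example.com:8687" stats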
goliath has quit [Quit: SIGSEGV]
<reatmon>
Thanks. I will look into that.
<reatmon>
yep. my build servers cannot reach the server... but my desktop can... Time to hit up IT. Thanks Joshua!
silbe has quit [Ping timeout: 258 seconds]
<JPEW>
reatmon: np
Vonter has quit [Remote host closed the connection]
Vonter has joined #yocto
kayterina3 has joined #yocto
Vonter has quit [Ping timeout: 240 seconds]
Vonter has joined #yocto
amitk_ has joined #yocto
Chocky has quit [Quit: Leaving.]
Chocky has joined #yocto
rfuentess has quit [Remote host closed the connection]
Chocky has quit [Client Quit]
Chocky has joined #yocto
Vonter_ has joined #yocto
Chocky has quit [Client Quit]
kayterina3 has quit [Remote host closed the connection]
Chocky has joined #yocto
kayterina3 has joined #yocto
<kayterina3>
Can I trigger bitbake to regenerate the files under /etc/rcX.d/ of the image rootfs? Ideally, rebuild the ones that come from my recipe.
Vonter has quit [Ping timeout: 264 seconds]
<rburton>
if you changed the recipe that generates them, then bitbake imagename will rerun that recipe and rebuild your image
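In practice that is just (the image name is a placeholder for whatever image kayterina3 builds):

    # rebuilding the image re-runs do_rootfs, which repopulates /etc/rcX.d
    $ bitbake core-image-minimal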
Chocky has quit [Client Quit]
Chocky has joined #yocto
Chocky has quit [Client Quit]
Chocky1 has joined #yocto
Chocky has joined #yocto
Chocky1 has quit [Read error: Connection reset by peer]
vladest has quit [Remote host closed the connection]
kayterina3 has quit [Remote host closed the connection]
linfax has quit [Ping timeout: 260 seconds]
florian__ has quit [Ping timeout: 240 seconds]
florian_kc has quit [Ping timeout: 258 seconds]
tgamblin has quit [Remote host closed the connection]
tgamblin has joined #yocto
<RP>
rburton: so we do execute
<rburton>
is that trying to identify what recipes were touched by the patch?
Vonter_ has quit [Ping timeout: 246 seconds]
alussiercullen has joined #yocto
Vonter has joined #yocto
alussiercullen has quit [Quit: Client closed]
vladest has joined #yocto
silbe has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
florian__ has joined #yocto
zpfvo has joined #yocto
prabhakarlad has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 255 seconds]
zpfvo has joined #yocto
sbr_eps has quit [Ping timeout: 260 seconds]
zpfvo has quit [Client Quit]
mattsm has joined #yocto
radanter has quit [Remote host closed the connection]
otavio has quit [Ping timeout: 252 seconds]
otavio has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<mischief>
what's a good way to manage CVE checks? one big file with a list of ignored ones? bbappends on each recipe with ignores and patches?
Guest84 has joined #yocto
<Guest84>
Hello, I'm trying to create a yocto recipe to install "mitmproxy" (python software). Here is my recipe: https://pastebin.com/kKYXc7gu When I run my image, some files seem to be installed, but I can't find mitmproxy or the python module in the image.
<mischief>
i figured out part of the problem, i need to inherit go-mod
<mischief>
which seems to have the unfortunate side effect of doing network access during do_compile
<mischief>
now i have a new problem. two -native recipes that build go programs are needed at build time by another recipe, but during prepare_recipe_sysroot they install the same files due to go modules :-/
<khem>
yeah, select one to provide it and let the other remove it via do_install:class-native
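A rough sketch of what khem describes, for the recipe that gives up the duplicated files (the path is where go.bbclass stages module sources in the versions I've seen; treat it as an assumption):

    # in the losing -native recipe (or its bbappend): keep the binaries,
    # drop the staged go sources/pkg tree that clashes with the other recipe
    do_install:append:class-native() {
        rm -rf ${D}${libdir}/go
    }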
Guest84 has quit [Quit: Client closed]
<mischief>
khem: i don't need any of the go module crap though, i just need the binaries produced (this is go gRPC crap btw). i can probably just safely remove all of the module stuff.
goliath has quit [Quit: SIGSEGV]
florian__ has joined #yocto
<khem>
mischief: you can look into what this class is doing and then shunt what you do not need or conversely use what you need and do not inherit it
Guest84 has joined #yocto
<mischief>
i don't think i can replace all of go.bbclass :-) but i can probably work around what do_install is doing
<rburton>
mischief: your distro can set CVE_STATUS as needed
<khem>
yeah depends how big a chainsaw you have
<rburton>
mischief: see cve-extra-exclusions.inc or whatever it's called in core
<mischief>
we do require `conf/distro/include/cve-extra-exclusions.inc`
<rburton>
and you can do the same if you want to knock out more, say you've decided that an issue isn't important enough or you've mitigated it
<mischief>
rburton: yes but if we patch we need a bbappend anyway or modify recipes in our layer. just wondering if global lists or local to recipe make more sense
zwelch has quit [Read error: Connection reset by peer]
<rburton>
if you've already written a bbappend then do any cve status mangling in the bbappend
<rburton>
but you can say 'yeah this isn't important to us' in a distro config if there's no append and it's just a judgement call
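A hedged example of both placements rburton mentions (CVE IDs, statuses and reasons are placeholders; the accepted status keywords live in cve-check.bbclass for your release):

    # in a bbappend, next to the backported fix:
    CVE_STATUS[CVE-2023-12345] = "backported-patch: fixed by 0001-example-fix.patch in this layer"

    # or in the distro config, for issues judged not relevant:
    CVE_STATUS[CVE-2023-67890] = "not-applicable-config: affected feature is disabled in our builds"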
amitk has quit [Ping timeout: 255 seconds]
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
jmd has quit [Remote host closed the connection]
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
zwelch has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
Guest84 has quit [Quit: Client closed]
diecore_ has quit [Ping timeout: 255 seconds]
<mischief>
well this is stupid. the go class installs files and directories without the write bit in some cases. can't delete them normally :-(
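The workaround that matches the removal sketch above is to restore the write bit first (same hypothetical path, same caveats):

    do_install:append:class-native() {
        chmod -R u+w ${D}${libdir}/go
        rm -rf ${D}${libdir}/go
    }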
frieder has quit [Remote host closed the connection]
Chocky has quit [Quit: Leaving.]
Chocky has joined #yocto
Chocky has quit [Client Quit]
Chocky has joined #yocto
Chocky1 has joined #yocto
Chocky2 has joined #yocto
Chocky has quit [Ping timeout: 255 seconds]
Chocky has joined #yocto
Chocky2 has quit [Client Quit]
Chocky has quit [Client Quit]
Chocky has joined #yocto
sbr_eps has joined #yocto
Chocky1 has quit [Ping timeout: 255 seconds]
<fray>
I'm having an issue on one of my aarch64 systems (using testimage, which configures the kernel command line to set the IP address) and using 'poky'... I can see during qemu boot that dmesg shows the IP being set, then it gets to userspace and the IP is there. When I move to a different machine (32-bit arm) I see the IP set in dmesg, but it gets to userspace and the network is not set up. I do see udhcp
<fray>
calls (and no dhcp server from qemu). Does anyone have a suggestion what I should look for to explain the difference in behavior?
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
<khem>
fray: is host x86_64 always?
<fray>
same host, same poky.. difference is the kernel config
<fray>
(and arch)
<fray>
I just can't figure out what is 'different'
<fray>
if I remove the /etc/network/interfaces (default) file, then it comes up with the right IP
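For the record, that default file ships eth0 as dhcp, which is what lets udhcpc tear down the kernel-assigned address; a hedged sketch of pinning it to manual instead (an assumption about the fix, not something confirmed in the channel):

    # /etc/network/interfaces -- leave eth0 alone so the ip= setting from
    # the kernel command line survives into userspace
    auto lo
    iface lo inet loopback

    iface eth0 inet manual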
<RP>
fray: network interface naming from the kernel ?
<fray>
both eth0.. but different interface drivers
<fray>
the aarch64 is a 'macb'.. rebuilding the 32-bit so I can compare
<RP>
fray: systemd? we had some issues with its device renaming which I think we disabled in the end
<fray>
no standard poky, no systemd
<fray>
Ok, now I'm breaking myself.. What's the magic for 'runqemu' to use the same tap device that testimage uses?
<fray>
I just did MACHINE= ... runqemu nographic but it doesn't appear to have set up the tap.. (still verifying)
<fray>
(I'm trying to figure out why testimage doesn't work for one machine but does for another.. so I'm grasping.. but even trying to reproduce the working one I'm missing something)
<fray>
or is test image logging in on one machine and setting it, and not setting it on the other and I'm looking at the wrong part?
<RP>
fray: runqemu should be the same as what testimage does
<fray>
that's what i thought
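For comparison purposes the invocation looks roughly like this (machine and image names are examples; testimage builds an equivalent command internally):

    $ MACHINE=qemuarm runqemu core-image-minimal nographic
    # or, to sidestep tap setup entirely while debugging:
    $ MACHINE=qemuarm runqemu core-image-minimal nographic slirp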
Kubu_work has quit [Quit: Leaving.]
<fray>
While debugging I broke something.. now I can't get runqemu to work at all.. Not sure what I did, but I'll have to figure it out..
Guest84 has joined #yocto
<Guest84>
khem : thanks it works like a charm with python_setuptools_build_meta
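For anyone hitting the same pastebin problem, the fix amounted to using the PEP 517 class; a minimal sketch of such a recipe (version, checksums and runtime dependency list are placeholders, not a tested recipe):

    # python3-mitmproxy_x.y.z.bb (sketch)
    SUMMARY = "Interactive TLS-capable intercepting HTTP proxy"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://LICENSE;md5=..."

    PYPI_PACKAGE = "mitmproxy"
    inherit pypi python_setuptools_build_meta

    SRC_URI[sha256sum] = "..."

    # runtime deps are illustrative and incomplete
    RDEPENDS:${PN} += "python3-cryptography python3-urwid"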
Guest84 has quit [Client Quit]
leon-anavi has quit [Quit: Leaving]
Guest81 has joined #yocto
Guest81 has quit [Client Quit]
Guest81 has joined #yocto
Guest81 has quit [Quit: Client closed]
Guest81 has joined #yocto
Xagen has joined #yocto
florian__ has quit [Ping timeout: 272 seconds]
mvlad has quit [Remote host closed the connection]
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]