GillesM has quit [Remote host closed the connection]
GillesM has joined #yocto
prabhakarlad has quit [Ping timeout: 244 seconds]
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
goliath has quit [Quit: SIGSEGV]
starblue has quit [Ping timeout: 260 seconds]
starblue has joined #yocto
sakoman has quit [Quit: Leaving.]
xcm_ has quit [Remote host closed the connection]
xcm_ has joined #yocto
<JaMa>
rburton: that's better than I would expect now
kevinrowland has quit [Ping timeout: 244 seconds]
xmn has quit [Ping timeout: 244 seconds]
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pgowda_ has joined #yocto
<mark__>
Hi, I'm getting this error: recipes-kernel/linux-libc-headers/linux-libc-headers_5.10.bb:do_install: [Errno 32] Broken pipe, from an arm-aarch64 target with the latest meta-xilinx kirkstone branch. I googled around and found posts reporting a similar error, even against the linux-libc-headers recipe, but there is no follow-up/resolution. I looked into the temp dir and there is no run.do_install file. Any insight would be
barometz has quit [Quit: No Ping reply in 180 seconds.]
barometz has joined #yocto
sakoman has quit [Quit: Leaving.]
camus has quit [Ping timeout: 276 seconds]
camus has joined #yocto
alessioigor has joined #yocto
mark__ has quit [Quit: mark__]
mark__ has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
rfuentess has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<LetoThe2nd>
yo dudX
mckoan|away is now known as mckoan
<mckoan>
good morning
<LetoThe2nd>
mckoan: yo!
davidinux has joined #yocto
mvlad has joined #yocto
rokm_ has joined #yocto
JPEW has quit [Ping timeout: 260 seconds]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
JPEW has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
marek has joined #yocto
marek has quit [Client Quit]
<qschulz>
o/
mthenault has joined #yocto
<mthenault>
Hi, I am trying to install a cross package inside the sdk (crash_8.0.0.bb). I tried to add nativesdk to BBCLASSEXTEND in the .bb and to add nativesdk-crash-cross to TOOLCHAIN_HOST_TASK in my local.conf.sample. This doesn't work because "nativesdk-crash-cross" is not understood by yocto. Is there a way?
<LetoThe2nd>
mthenault: isn't it the other way round? crash-cross-something?
<mthenault>
my goal is to add this to the sdk. right now in jenkins I have to do clean sstate + rebuild each time + look into tmp/build/work..
<qschulz>
mthenault: I'd probably look into what gcc recipe is doing since I cannot imagine it not being part of the sdk
<qschulz>
and it's for cross compiling too
<mthenault>
hm ok, very complicated..
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
zhmylove has joined #yocto
Starfoxxes has quit [Ping timeout: 240 seconds]
leon-anavi has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
<qschulz>
mthenault: Considering that cross and nativesdk are two classextend, i'm not entirely sure it's possible to have both at the same time, so there might be some tricks used by gcc (or maybe using nativesdk isn't the way to do it)
<qschulz>
but I've no experience in building an sdk so can't help unfortunately
<mthenault>
yes.. gcc seems to use multiple recipes with diverse includes, I need to take a deeper look. Thanks anyway!
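A minimal sketch of the approach under discussion, assuming crash only needs to run on the SDK host machine (untested; the package name and variable placement are illustrative):

    # in crash_8.0.0.bb (or a bbappend): allow an SDK-host build of the recipe
    BBCLASSEXTEND += "nativesdk"

    # in local.conf / local.conf.sample: install the host-side package into the SDK
    TOOLCHAIN_HOST_TASK:append = " nativesdk-crash"

Tools that run on the SDK host but operate on target code (gcc, gdb) instead go through the cross-canadian class extension, which is the machinery qschulz points at above.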
Starfoxxes has joined #yocto
prabhakarlad has joined #yocto
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 250 seconds]
nemik has joined #yocto
florian has joined #yocto
dev1990 has joined #yocto
falk0n[m] has quit [Quit: You have been kicked for being idle]
<ykrons>
Hello, I wonder how to manage RDEPENDS in the case of multilib. One script in my app uses curl. My app is built in multilib:lib32, so my understanding is that all its dependencies will be "converted" to lib32 and installed. The app can work with either curl or lib32-curl. Shall I add curl to RDEPENDS, which will install lib32-curl in the image (potentially on top of curl), or shall I add curl to IMAGE_INSTALL only?
<qschulz>
ykrons: just have RDEPENDS have curl in it
<qschulz>
it'll pull whatever it needs, which should be lib32-curl for your lib32 app
<manuel__>
Hi all! I've got a core-image-minimal-xfce running in qemu but can't pass the mouse into qemu. Anyone any advice?
<manuel__>
My /var/log/Xorg.0.log is full of 'No input driver specified, ignoring this device'.
<manuel__>
runqemu starts qemu with '-usb -device usb-tablet'
<manuel__>
Didn't have this problems in wayland
<GillesM>
hello, can I add INHERIT += "extrausers" and EXTRA_USERS_PARAMS in a custom image file? If I set it in local.conf it works, but not in the custom image... Ideas?
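For reference, a minimal sketch of the distinction involved here (hedged, not a confirmed fix for the problem above): INHERIT += "..." only takes effect in configuration files such as local.conf; a recipe, including an image recipe, pulls a class in with inherit instead.

    # in the custom image recipe (.bb), not in local.conf
    inherit extrausers
    # user name and settings are illustrative only
    EXTRA_USERS_PARAMS = "useradd -p '' tester;"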
<manuel__>
I'm connected to qemu's serial console as well. Is there a way I can start a program in the serial console and have its GUI drawn in Xorg?
<ykrons>
qschulz: but since my app can run with either curl or lib32-curl, I could save some space by only installing curl in the image.
<JaMa>
ykrons: true multilib handling isn't ideal in some cases and explicitly using ${MLPREFIX} helps only in some cases
<qschulz>
ykrons: are you doing a dlopen in your app and not using dynamic linking?
prabhakarlad has quit [Quit: Client closed]
<JaMa>
I guess his script is just calling curl binary
yashraj466 has joined #yocto
<qschulz>
ykrons: but otherwise, you can maybe override the DEPENDS/RDEPENDS for the multilib version to remove the automatically added MLPREFIX, e.g. DEPENDS:virtclass-multilib = "curl"
prabhakarlad has joined #yocto
<ykrons>
qschulz, JaMa: right, the script just uses the curl binary. The override seems to be a good option
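A minimal sketch of the override qschulz suggests, applied to the runtime dependency (untested; whether a value set this way actually escapes the automatic MLPREFIX mapping is exactly what would need verifying):

    # in the app recipe: normally remapped to lib32-curl for the lib32 variant
    RDEPENDS:${PN} = "curl"
    # force the plain package name when built as a multilib extension
    RDEPENDS:${PN}:virtclass-multilib = "curl"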
<ykrons>
qschulz: I have done the same with bash and I get a QA error because the QA task detects bash scripts in my recipe and I guess it expects to find lib32-bash in the RDEPENDS. How can I fix this without disabling the QA error?
<rburton>
kanavin: can you send a v2 of the go upgrade with ppc checksums?
mrkiko has joined #yocto
mrkiko has left #yocto [#yocto]
<rburton>
kanavin: actually the dl site makes it trivial, i can rebase
GillesM has quit [Quit: Leaving]
<mcfrisk>
on kirkstone, seeing intermittent "groupmems command did not succeed" failures for a recipe which is pulled from sstate and doesn't do anything special. any hints on what to look for?
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
<ldericher>
Has RobertBerger been around lately?
goliath has joined #yocto
<LetoThe2nd>
ldericher: he shows up every now and then
<ldericher>
LetoThe2nd, that's good, I've missed him for a while now so I got curious :)
Payam has joined #yocto
KorG has joined #yocto
zhmylove has quit [Ping timeout: 260 seconds]
<JPEW>
Can I get an AB run with master-next of meta-mingw when some one has a chance? Thanks!
GNUmoon has quit [Remote host closed the connection]
<Payam>
I see building Yocto takes a lot of space
<Payam>
how do you make it work on CI?
GNUmoon has joined #yocto
<qschulz>
Payam: shared sstate-cache and downloads directory, get rid of everything else
<rburton>
Payam: delete the build tree afterwards, and inherit rm_work.
<rburton>
you need lots of space for sstate and downloads anyway, the build tree isn't a problem if you use rm_work
<Payam>
but first time on ci
<JPEW>
Payam: It takes a long time the first time
<JPEW>
Payam: We use a really large NFS share to share downloads and sstate between our CI build nodes, so each node (a VM in our case) comes up "clean" with no local state from prior builds. If the build has no changes, it will restore from the NFS sstate in a few minutes
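A minimal local.conf sketch of the setup described above (the paths are illustrative assumptions): keep downloads and sstate on shared persistent storage and let rm_work delete the per-recipe build trees.

    # shared, persistent storage (an NFS mount in the setup above)
    DL_DIR = "/nfs/yocto/downloads"
    SSTATE_DIR = "/nfs/yocto/sstate-cache"
    # alternatively, keep sstate local and only read from the share:
    # SSTATE_MIRRORS = "file://.* file:///nfs/yocto/sstate-cache/PATH"

    # delete each recipe's work directory as soon as its tasks are done
    INHERIT += "rm_work"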
<Payam>
JPEW: but where do you save those? For instance on GitHub? Can you save it as a cache?
<JPEW>
Payam: No, this is all on-prem; I don't have any experience doing Yocto builds on GitHub
<Payam>
what is your on-prem provider?
<qschulz>
Payam: considering how slow github workers are, I wouldn't count on using Github CI for Yocto stuff
<qschulz>
I mean, you could host your own GitHub worker
<rburton>
we host our own gitlab runners: some are in AWS, some on-prem
<rburton>
both have lots of local storage
<Payam>
I don't know how to put the Yocto downloads and sstate on-prem
<Payam>
is there any manual or guides?
<rburton>
Payam: are you trying to do yocto builds using the free runners?
<Payam>
Teams
<Payam>
We have the Teams version of Github
<Payam>
Can I make one of my computers to a runner?
<rburton>
yes
<Payam>
I should probably do that.
<Payam>
Do you have a good guide? I know I can search but I ask for a good source of info
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
JakubMartenka[m] has joined #yocto
Tokamak has joined #yocto
mvlad has quit [Remote host closed the connection]
seninha has joined #yocto
florian has quit [Ping timeout: 272 seconds]
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
JakubMartenka[m] is now known as ble[m]
Tokamak has joined #yocto
nemik has quit [Ping timeout: 272 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 272 seconds]
nemik has joined #yocto
dev1990 has quit [Quit: Konversation terminated!]
florian has joined #yocto
seninha has quit [Quit: Leaving]
kscherer has quit [Quit: Konversation terminated!]
alessioigor has joined #yocto
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
alessioigor has quit [Quit: alessioigor]
Tokamak has joined #yocto
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
goliath has quit [Quit: SIGSEGV]
seninha has joined #yocto
seninha has quit [Ping timeout: 272 seconds]
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
<kergoth>
Oof, if the fetcher fails to create a mirror tarball, it raises an exception and runs the clean method, which wipes out the git clonedir.
<kergoth>
So we have to re-fetch because a tar -c failed? this doesn't seem right :)
<kergoth>
Actually, I can't help but wonder if we really should be constructing mirror tarballs within do_fetch. do_fetch should be about fetching so we can move on to other tasks. We should probably split that out and have a separate task call into build_mirror_data().
<kergoth>
Doesn't seem right to have to re-run do_fetch to construct a mirror tarball if we've already fetched, even if most of it is a no-op
<kergoth>
would simplify the error handling in this case since the semantics aren't entirely clear (i guess we'd still want to fail out the task, but first verify and update the done stamp..)
<RP>
kergoth: the fetcher should just isolate the tarball creation and definitely shouldn't be calling clean :/
<RP>
kergoth: I suspect splitting it out, whilst attractive conceptually would cause a lot of different problems :(
<kergoth>
I think conceptually mirror construction and population really doesn't belong in do_fetch, even if it's within the same folder. I expect you'd have to make sure you run build_mirror_data against the right urldata when mirrors are involved, though.
<RP>
kergoth: there is an element of people wanting DL_DIR to be reusable for any given build and usable as a mirror though
<RP>
I suspect we would design it differently were we starting from scratch now
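For context, a minimal sketch of the standard configuration that exercises this code path (not a proposed change; the mirror path is an illustrative assumption): mirror tarball generation is opt-in, and a populated DL_DIR can then be offered to other builds as a source mirror.

    # ask the fetcher to pack VCS checkouts into mirror tarballs in DL_DIR
    BB_GENERATE_MIRROR_TARBALLS = "1"

    # let other builds reuse that DL_DIR as a pre-mirror
    INHERIT += "own-mirrors"
    SOURCE_MIRROR_URL = "file:///path/to/shared/downloads"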