* kergoth
too
<kergoth>
RP: Would you remind me of the semantics of recideptask, by chance? I'm drawing a blank on the behavior, IIRC it altered how recrdeptask behaved
<RP>
kergoth: I think do_deploy was the task that best illustrated the difference. recrdeptask works off the direct dependency chain so follows direct dependencies. recideptask worked off a specific task like do_deploy, so if recipe A depends on recipe B and recipe B has a do_deploy task, it would have the dependency on do_deploy. The i stands for indirect
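As a sketch, the two flags as they might be set on a hypothetical task (syntax is the BitBake variable-flag mechanism; the task name `do_collect` is made up for illustration):

```bitbake
# Recursively add do_deploy across the build dependency chain: each
# dependency the chain actually passes through contributes its
# do_deploy (if it has one).
do_collect[recrdeptask] = "do_collect do_deploy"

# Used together with recrdeptask: names a task to inspect for
# additional, indirect dependencies ("i" for indirect, per RP above).
do_collect[recideptask] = "do_deploy"
```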
<tomzy_0>
Hello
<tomzy_0>
Yocto Summit 2022.11 will be virtual right?
<qschulz>
ndec: LetoThe2nd ^
<ndec>
tomzy_0: yes, definitely!
<tomzy_0>
Yeah, I got this confirmed via e-mail also. The virtual room names got me confused :P
<PhoenixMage>
Can someone remind me how to force recompile a package?
<rburton>
PhoenixMage: bitbake [recipe] -C unpack
<rburton>
PhoenixMage: if the build system is something sensible like meson, then -C configure will force a rebuild with a clean build tree without having to repeat the unpack
<rburton>
but as some build systems can't do out-of-tree builds, it's safer to just go straight to unpack
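The invocations rburton describes, spelled out (the recipe name is a placeholder, and these assume a sourced Yocto build environment):

```shell
# Invalidate from do_unpack onwards: safe for any build system.
bitbake -C unpack some-recipe

# Invalidate from do_configure onwards: faster, but only reliable when
# the build system supports clean out-of-tree builds (e.g. meson).
bitbake -C configure some-recipe
```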
<PhoenixMage>
Thanks
<qschulz>
i'm struggling with some sstate-cache issue I believe
<qschulz>
rburton: wondering if this is not an issue with the hashequiv server?
<qschulz>
because I dumped the taskhash after a cleansstate before/after the patch and the hashes are different (of course, but maybe I had the very bad luck of hash collision)
<RP>
qschulz: note that cleansstate doesn't clear the hashequiv data
<qschulz>
RP: mmmm then I'm really confused
<qschulz>
why would a cleansstate after applying the patch fix this issue?
<qschulz>
but without running cleansstate, the issue still happens
<qschulz>
because bitbake-getvar -r u-boot-tsd do_compile returns the expected content, even when the recipe is not rebuilt
<qschulz>
so the recipe parsing seems to be just fine
<qschulz>
bitbake-dumpsig -t u-boot-tsd do_compile is correct too (each done after a cleansstate, because otherwise it's using the last sig which didn't change)
<Guest13>
can't i use the sdk i created in eclipse in any way?
<kris>
Hello - is there a way I can output a list of the packages and packagegroups that are dependencies of another packagegroup?
<RP>
qschulz: it would "fix" it as the sstate artefact isn't there. The mapping inside hashequiv remains
<kris>
worked it out - should have talked to the rubber duck :-) - bitbake-getvar -r <package> RDEPENDS:<package> --value
<rburton>
for packagegroups that works, a general solution would use oe-pkgdata-util instead (as many rdepends are injected at build time, so bitbake-getvar won't see them)
<kris>
rburton: Thanks. Yes, that's true - I guess that will be built up based on the build time settings, but requires do_package to have been run. Thank you for the tip.
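The two approaches side by side (the packagegroup name is a placeholder; the oe-pkgdata-util invocation is one plausible form and assumes do_package has already run):

```shell
# Parse-time view: only the RDEPENDS written literally in the recipe.
bitbake-getvar -r packagegroup-example RDEPENDS:packagegroup-example --value

# Build-time view via pkgdata: includes rdepends injected during do_package.
oe-pkgdata-util read-value RDEPENDS packagegroup-example
```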
<JG98>
I was wondering, is there a reason image recipes don't tend to include a version in the recipe name (e.g. core-image-base.bb vs u-boot_2020.01.bb)? And why do they include other image recipes instead of using an include file (e.g. require core-image-minimal.bb vs require u-boot.inc)?
<rburton>
JG98: if you have an image recipe which has a meaningful version, feel free to use it
<rburton>
JG98: but what version would core-image-base be?
<Tyaku>
Is it possible to put this: 'PACKAGECONFIG:append:pn-systemd = " coredump"' in an image.bb or do we have to put it in local.conf ?
<rburton>
Tyaku: local or distro
<Tyaku>
Thanks rburton
<rburton>
your distro would be the place
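The reason it belongs in distro or local configuration rather than an image recipe is that PACKAGECONFIG is evaluated when systemd itself is built, long before the image is assembled. As a sketch (the distro file name is a placeholder):

```bitbake
# conf/distro/mydistro.conf (or local.conf for a quick experiment):
PACKAGECONFIG:append:pn-systemd = " coredump"
```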
<JG98>
rburton I'm not sure, but if e.g. the IMAGE_FEATURES would be updated you could update the version number to reflect that right? I've got some image recipes and I'm just trying to figure out why I would or wouldn't use version numbers, a best practice if you will
<RP>
JG98: even if we had a version number, someone could change the configuration with other local changes so the number wouldn't really mean much
<JG98>
RP it would potentially alert me that I'd need to update my own recipes because a certain feature was added or removed
<JG98>
although I suppose with include files you wouldn't have a version either
<RP>
JG98: historically it is very rare for us to make changes like that
<JG98>
RP I suppose so, and really I was only trying to ascertain whether or not I should add them to our own recipe files
<roussinm>
rburton: Given the problem I had yesterday (a native build doesn't work on a different CI machine; it's happened twice so far with native builds), do you have a special local.conf for CI that makes sure native sstate can be shared?
<Payam>
hi
<Payam>
I have asked this question before as well but never got a real answer. I would like to connect the downloads and sstate-cache to an S3 bucket. Has anyone done it, and how was it done? Thanks
<rburton>
roussinm: no, sorry
<rburton>
Payam: last time you asked i gave you a link to a presentation that literally talked about this
<rburton>
JG98: if an image version is useful, use one. nothing to stop you, it's just that the core ones don't.
<d-s-e>
When an image is created the resulting filename always contains a timestamp, for example imagename-qemux86-64-20221107140025.rootfs.ext4. How can I determine this timestamp ("20221107140025") during the build?
<roussinm>
rburton: so this is a risk for all native builds I guess, it might or might not work, and fix them as they fail?
<d-s-e>
rburton: thanks!
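For reference, the timestamp in the image filename comes from the DATETIME variable, attached via IMAGE_VERSION_SUFFIX; the defaults look roughly like this (paraphrased from memory of OE-Core's bitbake.conf and image naming defaults, so check the release in use):

```bitbake
DATETIME = "${DATE}${TIME}"
IMAGE_VERSION_SUFFIX ?= "-${DATETIME}"
IMAGE_NAME ?= "${IMAGE_BASENAME}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
```

A recipe or class can therefore reference ${DATETIME} or ${IMAGE_VERSION_SUFFIX} to reproduce the stamp used in the filename.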
<rburton>
roussinm: yes. it's very rare.
<rburton>
if the compiler gets told to produce host-tuned code and we're not telling it to, then there's not a lot we can do apart from figure out where and fix it
<roussinm>
rburton: there is no specific flag to the compile command, so just the default -march on one CI machine generates asm that the other machine can't use.
<rburton>
right
<rburton>
so clang, presumably, is being "clever" (read: stupid)
<roussinm>
rburton: I expect people that require llvm-config:native >= 15 might get the same error, if unlucky.
<roussinm>
Which is not that hard to believe I guess, it's mesa with gallium, llvmpipe.
<RP>
JG98: you would have a good idea of what your own versions would mean
<JG98>
RP rburton yes thank you both, for now I'll keep it without, as I couldn't think of a solid reason either, can always change it where necessary!
<Payam>
rburton possibly I went away. Do you have the article again?
<Payam>
rburton if possible I would like to have it. Thank you
<kergoth>
RP: I have a recipe in which I want to gather up the deployed archiver mirror artifacts for all the deps needed to build an image, but I don't want it to depend on actually building the image. Is this a case where I can use recideptask to get it to follow that build dependency graph without actually running those other tasks, instead running the deploy_archives?
<rburton>
kergoth: you should join the zoom call and ask :)
<kergoth>
oh, right, forgot about the technical team meeting :)
<RP>
kergoth: isn't there a commandline option which does exactly that?
<kergoth>
There is, but I want to do something with those artifacts in the recipe, so the task needs to depend on them
<RP>
kergoth: bitbake <image> --runonly=fetch
<RP>
kergoth: I think ideptask might be what you want then
<kergoth>
okay, thanks, i'll try that
<RP>
kergoth: I remember adding runonly as I couldn't get exactly the right behaviour I wanted within bitbake itself through task dependencies
<kergoth>
Ah :) Thanks for the warning
<RP>
kergoth: I think the dilemma was sometimes people wanted runonly and sometimes runall, so two different options was a better way to handle it
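The two options side by side (the image name is a placeholder; the descriptions paraphrase the bitbake help text from memory):

```shell
# Run only the named task (plus whatever that task itself depends on)
# within the target's task graph; nothing else is built:
bitbake core-image-minimal --runonly=fetch

# Run the named task for every recipe in the task graph, in addition
# to the tasks that would run anyway:
bitbake core-image-minimal --runall=fetch
```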
<xcm_>
hey guys. the darnest thing is happening. i am rsyncing an entire yocto directory (incl tmp/ etc) to another machine and it ends up taking up much more space than in the original machine. i just use `rsync -a`
<RP>
xcm_: you need to preserve hardlinks, we use them a lot
<xcm_>
ah darn i've been trying to see what's wrong with the symlinks
<xcm_>
thanks a lot. i think this will fix it
<rburton>
xcm_: don't bother rsyncing tmp, just downloads and sstate-cache will be sufficient
<rburton>
(and your conf)
* paulg
got burned with the hardlinks and "tar ... | netcat" workflow before.
<xcm_>
rburton: i've been trying to figure this out for hours so i think i can wait just a few minutes more for the comfort of having everything before i kill the source server
<rburton>
if the build is in a different location it won't work anyway
<rburton>
honestly, just copy conf, sstate, downloads
<xcm_>
i'm likely bringing up an identical, just downsized server afterwards
<vvn>
hi there -- openvpn ships ${systemd_system_unitdir}/openvpn-client@.service, I appended the recipe and added SYSTEMD_SERVICE:${PN}:append = " openvpn-client@foo.service", but the build errors out with "Didn't find service unit". Am I missing something?
<Guest6>
Hello, I would like to include into a Yocto image some pre compiled .ipk packages.
<Guest6>
The .ipk package will be generated in another Yocto build. We are considering this due to some source code segregation policies.
<Guest6>
I'm running some basic tests but I have some issues during the do_unpack.
<Guest6>
Thanks for the support, I'm building in my local PC now so it will take some minutes
<vmeson>
Guest6: you'll have to look for clone/exec calls in the log to see what's going wrong. strace is a bit of a learning curve but it's a good tool to know how to use.
<Guest6>
Thanks, I will add it to my learning queue!
<zeddii>
:q!
<zeddii>
this is not vim, and I did not exit!
<LetoThe2nd>
zeddii: and I can confirm that you also did not save
<zeddii>
nooooooooo!
<LetoThe2nd>
zeddii: AND I AM YOUR FATHER
<khem>
rule 1: an editor should autosave
<Guest6_2>
Hello again, same Guest6 with the ipk unpack issue but from a new pc
<Guest6_2>
Again, many thanks for the support! I think I found the problem
<Guest6_2>
with vmeson's guide
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/sysroots-uninative/x86_64-linux/usr/bin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/scripts/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot/usr/bin/crossscripts/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot-native/usr/sbin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot-native/usr/bin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot-native/sbin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/work/core2-64-poky-linux/example/0.1-r0/recipe-sysroot-native/bin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/bitbake/bin/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<Guest6_2>
9107 execve("/storage/yocto-test/poky/build/tmp/hosttools/xz", ["xz", "-d"], 0x7ffedffc7120 /* 82 vars */) = -1 ENOENT (No such file or directory)
<roussinm>
I want to believe that the change is correct, but it seems weird to me that copying native binaries to the target sysroot should work? In what case is that the case?
<Guest6_2>
thanks rburton, but since the fetcher supports unpacking ipk packages out of the box shouldn't this dependency be considered automatically?
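One workaround for the missing xz, assuming the fetcher isn't adding the dependency itself for .ipk files: the strace search paths above include the recipe's native sysroot, which a DEPENDS entry would populate (the recipe context is hypothetical):

```bitbake
# In the recipe that fetches the .ipk: make xz available under
# recipe-sysroot-native/usr/bin in time for do_unpack.
DEPENDS += "xz-native"
```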
<roussinm>
running readelf on llvm-config it has: `Library runpath: [$ORIGIN/../lib:$ORIGIN/../../lib:$ORIGIN/../lib]` so when it starts to load the shared libraries it will find them inside the target sysroot which is not the same arch as native.
<RP>
roussinm: it isn't perfect, it just worked better than anything we did previously, at least in the test builds I made
<RP>
roussinm: it probably depends which options it is trying to run llvm-config with and exactly what those options try and do. Running it in that location rather than the native sysroot meant it did find target headers and so on rather than native ones at least
<RP>
It probably needs to be revisited and improved, preferably with something acceptable to upstream
<RP>
roussinm: pros and cons, that one doesn't do some things well either :(
<roussinm>
RP: I'm sure, it's just that currently, the build is blocked, I guess we can revert that commit on our tree for now?
<RP>
roussinm: I've been saying for a while that we're lacking people spending time addressing key core issues. This is an example. Khem and I have both patched things up as best we can but at some point someone will need to try and solve it more completely/properly
<RP>
roussinm: so sure, you can revert and do something in your layer but ultimately someone will have to fix this properly :/
<RP>
roussinm: FWIW I suspect this is some issue with it being an x86 on x86 build which perhaps isn't seen in cross builds as the arch difference makes the mismatch clear
<roussinm>
This is x86 on x86 build indeed.
<roussinm>
But I'm trying to understand how copying the llvm-config from native to a target sysroot can work?
<roussinm>
If the target-sysroot is aarch64 it will try to use those libraries no?
<RP>
roussinm: llvm-config does multiple things. Some of it runs queries against the headers or an include file and those need the target config, not the native one
<RP>
running queries against a native sysroot libraries wouldn't be any more correct really
<roussinm>
RP: I understand that it needs to run queries, but the executable needs to link with native libraries, right?
<RP>
roussinm: if it is the RPATH causing problems, have you tried clearing that, or adjusting it to point at the native sysroot?
<RP>
roussinm: I think I was misunderstanding where the issue was :/
<roussinm>
RP: I haven't tried to play with RPATH, that was the next step.
<RP>
roussinm: you can use chrpath to change it
<roussinm>
My guess is that kind of patch probably wouldn't be upstreamable.
<RP>
roussinm: well, it would be to us for the code we have there
<RP>
this all does need fixing with upstream llvm properly
<RP>
we have issues with this in mesa, llvm and the rust-llvm recipes :(
<roussinm>
Yes, for me it's mesa.
<roussinm>
looking at poky/meta/recipes-devtools/rust/rust-cross-canadian.inc, it looks like chrpath is used a bunch
<RP>
roussinm: looks a bit different in my local branch, rust.inc: chrpath -d ${RUST_ALTERNATE_EXE_PATH}
<RP>
roussinm: which release are you on?
<roussinm>
RP: I'm a bit of a Frankenstein at the moment, kirkstone globally, but mesa langdale.
<roussinm>
`chrpath -d ${STAGING_BINDIR}/llvm-config` looks like this does the trick.
<RP>
roussinm: ok, good. I'd accept a patch doing that (assuming it passes other testing)
<roussinm>
RP: I can only test on my current machines, which are both x86. I assume the poky test suite has more than one machine to test this on?
<RP>
roussinm: yes, I mean if you send a patch we can test on the autobuilders
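As a recipe fragment, the chrpath fix discussed above might be wired up like this (where to hook it and how chrpath reaches the build host are assumptions; mesa's actual recipe layout may differ):

```bitbake
# Assumes chrpath is available on the build host (e.g. via a DEPENDS on
# chrpath-replacement-native plus EXTRANATIVEPATH += "chrpath-native",
# as other core recipes do).
do_configure:prepend() {
    # Strip the $ORIGIN-relative runpath so the native llvm-config
    # copied into the target sysroot doesn't resolve target-arch
    # libraries.
    chrpath -d ${STAGING_BINDIR}/llvm-config
}
```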