qschulz has quit [Remote host closed the connection]
qschulz has joined #yocto
vd has quit [Quit: Ping timeout (120 seconds)]
RobertBerger has joined #yocto
rber|res has quit [Ping timeout: 246 seconds]
camus1 has joined #yocto
camus has quit [Ping timeout: 264 seconds]
camus1 is now known as camus
RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
vd has joined #yocto
mranostaj has quit [Ping timeout: 245 seconds]
mranostaj has joined #yocto
sakoman has quit [Quit: Leaving.]
RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
camus has quit [Ping timeout: 246 seconds]
camus has joined #yocto
RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
RobertBerger has quit [Ping timeout: 265 seconds]
jmiehe1 has joined #yocto
jmiehe has quit [Ping timeout: 246 seconds]
jmiehe1 is now known as jmiehe
RobertBerger has joined #yocto
RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
amitk has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 265 seconds]
camus1 is now known as camus
oobitots has joined #yocto
RobertBerger has quit [Ping timeout: 246 seconds]
rber|res has joined #yocto
oobitots34 has joined #yocto
ThomasD13 has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 245 seconds]
camus1 is now known as camus
zyga-mbp has joined #yocto
mattsm has quit [Ping timeout: 252 seconds]
Sion has joined #yocto
alessioigor has joined #yocto
pgowda has joined #yocto
roussinm has quit [Quit: WeeChat 3.3-dev]
mattsm has joined #yocto
mattsm has quit [Read error: Connection reset by peer]
mattsm has joined #yocto
Wouter0100 has quit [Remote host closed the connection]
Wouter0100 has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 264 seconds]
camus1 is now known as camus
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
oobitots34 has quit [Ping timeout: 256 seconds]
tre has joined #yocto
zyga-mbp has joined #yocto
xmn has quit [Ping timeout: 265 seconds]
zyga-mbp has quit [Client Quit]
zyga-mbp has joined #yocto
ant__ has quit [Quit: Leaving]
<JosefHolzmayrThe>
cyo dudX
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mckoan|away is now known as mckoan
<mckoan>
good morning
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
zyga-mbp has joined #yocto
alessioigor has quit [Client Quit]
zyga-mbp has quit [Client Quit]
frieder has joined #yocto
zyga-mbp has joined #yocto
alessioigor has joined #yocto
fbre has joined #yocto
zpfvo has joined #yocto
goliath has joined #yocto
camus has quit [Ping timeout: 250 seconds]
camus has joined #yocto
rfuentess has joined #yocto
davidinux has quit [Ping timeout: 246 seconds]
davidinux has joined #yocto
dev1990 has quit [Quit: Konversation terminated!]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<qschulz>
o/
BuZZ-T has joined #yocto
<BuZZ-T>
hi.. I just started working with the extensible SDK for the first time; usually I used the complete yocto system, where I generated a complete sysroot with "bitbake build-sysroots". Can I achieve the same in the eSDK? I'd prefer to have just one sysroot that contains everything for my development.
vd has quit [Ping timeout: 256 seconds]
shoragan[m] has quit [Quit: Bridge terminating on SIGTERM]
kayterina[m] has quit [Quit: Bridge terminating on SIGTERM]
scjg has quit [Quit: Bridge terminating on SIGTERM]
Emantor[m] has quit [Quit: Bridge terminating on SIGTERM]
Pierre-jeanTexie has quit [Quit: Bridge terminating on SIGTERM]
PascalBach[m] has quit [Quit: Bridge terminating on SIGTERM]
behanw[m] has quit [Quit: Bridge terminating on SIGTERM]
CarlesFernandez[ has quit [Quit: Bridge terminating on SIGTERM]
k4wsys[m] has quit [Quit: Bridge terminating on SIGTERM]
barath has quit [Quit: Bridge terminating on SIGTERM]
falk0n[m] has quit [Quit: Bridge terminating on SIGTERM]
twinning[m] has quit [Quit: Bridge terminating on SIGTERM]
lexano[m] has quit [Quit: Bridge terminating on SIGTERM]
janvermaete[m] has quit [Quit: Bridge terminating on SIGTERM]
olani[m] has quit [Quit: Bridge terminating on SIGTERM]
JosefHolzmayrThe has quit [Quit: Bridge terminating on SIGTERM]
moto_timo[m] has quit [Quit: Bridge terminating on SIGTERM]
SalamaSalama[m] has quit [Quit: Bridge terminating on SIGTERM]
Jari[m] has quit [Quit: Bridge terminating on SIGTERM]
ezzuldin[m] has quit [Quit: Bridge terminating on SIGTERM]
hpsy[m] has quit [Quit: Bridge terminating on SIGTERM]
DanielWalls[m] has quit [Quit: Bridge terminating on SIGTERM]
michaelo[m] has quit [Quit: Bridge terminating on SIGTERM]
jaskij[m] has quit [Quit: Bridge terminating on SIGTERM]
coldspark29[m] has quit [Quit: Bridge terminating on SIGTERM]
Alban[m] has quit [Quit: Bridge terminating on SIGTERM]
etam[m] has quit [Quit: Bridge terminating on SIGTERM]
ramprakash[m] has quit [Quit: Bridge terminating on SIGTERM]
Nate[m]1 has quit [Quit: Bridge terminating on SIGTERM]
azstevep[m] has quit [Quit: Bridge terminating on SIGTERM]
cryptollision[m] has quit [Quit: Bridge terminating on SIGTERM]
dwagenk has quit [Quit: Bridge terminating on SIGTERM]
jwillikers[m] has quit [Quit: Bridge terminating on SIGTERM]
saYco[m] has quit [Quit: Bridge terminating on SIGTERM]
ejoerns[m] has quit [Quit: Bridge terminating on SIGTERM]
meck[m] has quit [Quit: Bridge terminating on SIGTERM]
agherzan has quit [Quit: Bridge terminating on SIGTERM]
xicopitz[m] has quit [Quit: Bridge terminating on SIGTERM]
glembo[m] has quit [Quit: Bridge terminating on SIGTERM]
Spectrejan[m] has quit [Quit: Bridge terminating on SIGTERM]
hmw[m] has quit [Quit: Bridge terminating on SIGTERM]
fabatera[m] has quit [Quit: Bridge terminating on SIGTERM]
t_unix[m] has quit [Quit: Bridge terminating on SIGTERM]
Markus[m]1 has quit [Quit: Bridge terminating on SIGTERM]
khem has quit [Quit: Bridge terminating on SIGTERM]
berton[m] has quit [Quit: Bridge terminating on SIGTERM]
Barry[m]1 has quit [Quit: Bridge terminating on SIGTERM]
jonesv[m] has quit [Quit: Bridge terminating on SIGTERM]
jordemort has quit [Quit: Bridge terminating on SIGTERM]
<kanavin>
BuZZ-T, eSDK is basically the same yocto system with a pre-built sstate and toolchain, so yes
jordemort has joined #yocto
etam[m] has joined #yocto
florian has joined #yocto
leon-anavi has joined #yocto
prabhakarlad has quit [Quit: Client closed]
<BuZZ-T>
kanavin: ok.. good to know, do you have any idea how to generate the "complete" sysroot? like bitbake build-sysroots does in the "normal" yocto system
lexano[m] has joined #yocto
frieder_ has joined #yocto
Alban[m] has joined #yocto
Spectrejan[m] has joined #yocto
tnovotny has joined #yocto
scjg has joined #yocto
meck[m] has joined #yocto
kayterina[m] has joined #yocto
Emantor[m] has joined #yocto
khem has joined #yocto
shoragan[m] has joined #yocto
<kanavin>
BuZZ-T, devtool has a command to rebuild the target image it's based on
Pierre-jeanTexie has joined #yocto
ejoerns[m] has joined #yocto
moto_timo[m] has joined #yocto
frieder has quit [Ping timeout: 245 seconds]
dwagenk has joined #yocto
t_unix[m] has joined #yocto
jonesv[m] has joined #yocto
azstevep[m] has joined #yocto
PascalBach[m] has joined #yocto
<kanavin>
what does build-sysroots really do?
falk0n[m] has joined #yocto
CarlesFernandez[ has joined #yocto
k4wsys[m] has joined #yocto
coldspark29[m] has joined #yocto
barath has joined #yocto
jwillikers[m] has joined #yocto
behanw[m] has joined #yocto
Nate[m]1 has joined #yocto
<BuZZ-T>
kanavin: ok, thank you. i'll give it a try
glembo[m] has joined #yocto
Barry[m] has joined #yocto
twinning[m] has joined #yocto
hmw[m] has joined #yocto
DanielWalls[m] has joined #yocto
Jari[m] has joined #yocto
fabatera[m] has joined #yocto
<kanavin>
BuZZ-T, the thing is, build-sysroots is not supposed to be used as a standalone target, so it isn't really supported
agherzan has joined #yocto
<kanavin>
BuZZ-T, if you want a complete sysroot, making an image is better
ezzuldin[m] has joined #yocto
JosefHolzmayrThe has joined #yocto
xicopitz[m] has joined #yocto
ramprakash[m] has joined #yocto
janvermaete[m] has joined #yocto
Markus[m]1 has joined #yocto
cryptollision[m] has joined #yocto
<kanavin>
but I need to understand your use case. What's your development workflow?
olani[m] has joined #yocto
SalamaSalama[m] has joined #yocto
michaelo[m] has joined #yocto
berton[m] has joined #yocto
jaskij[m] has joined #yocto
hpsy[m] has joined #yocto
saYco[m] has joined #yocto
dkl has quit [Quit: %quit%]
dkl has joined #yocto
<BuZZ-T>
kanavin: in the past I used the complete yocto system.. I've done development in eclipse. I built the sysroot with "bitbake build-sysroots" and included the generated include folder (as a link) in my eclipse projects. now we're running a newer version (dunfell) that doesn't have that eclipse plugin anymore, and I'm also playing around with visual studio code.
<kanavin>
BuZZ-T, I think you need to first study the standard devtool workflows that eSDK provides
prabhakarlad has joined #yocto
<kanavin>
build-sysroots isn't supported outside of internal uses of it
<kanavin>
eclipse plugin was dropped because no one wanted to maintain it (and still no one does), so if you come up with some kind of IDE support and agree to keep it working, that'd be most welcome
<BuZZ-T>
kanavin: eclipse is still usable without the plugin, but the workflow does change a little bit. when I use a normal sdk (-c populate_sdk), source the environment script, do ./configure $CONFIGURE_FLAGS (we use autotools based projects) and start eclipse from the shell, I can do pretty much all of my development. the only manual tweak that needs to be done is setting the right debugger in the debug configuration.
<kanavin>
BuZZ-T, you can do the same with eSDK, but eSDK adds the sstate cache, layers, devtool and bitbake (indirectly, only via devtool)
rber|res has quit [Remote host closed the connection]
<BuZZ-T>
kanavin: ok. thank you again.. I'll play around a bit with eSDK :)
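The devtool flow kanavin points at, as a rough sketch; the installer name, recipe and image names below are placeholders, not taken from the conversation:

    # install and enter the eSDK environment
    ./poky-glibc-x86_64-<image>-<arch>-toolchain-ext-<version>.sh -d ~/esdk
    . ~/esdk/environment-setup-*

    devtool modify <recipe>        # check out the recipe's source into the workspace
    devtool build <recipe>         # build just that recipe
    devtool build-image <image>    # rebuild the image the eSDK is based on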
alessioigor has quit [Quit: alessioigor]
<kanavin>
JosefHolzmayrThe, dare I suggest, Debian's chart is not that impressive. The amount of non-repro packages is not going down.
alessioigor has joined #yocto
<JosefHolzmayrThe>
kanavin: thx
camus has quit [Ping timeout: 245 seconds]
camus has joined #yocto
dtometzki has quit [Ping timeout: 265 seconds]
<JosefHolzmayrThe>
kanavin: you may suggest anything, but I will probably not pass it along. advocacy is not meant to make others look bad. ;-)
<JosefHolzmayrThe>
(the question was not for me personally)
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<ThomasD13>
Hi, I have a new (hardware) machine to build my stuff with yocto. I have a bit of DDR4 memory - are there any tricks to further speed up the build process with yocto?
<kanavin>
ThomasD13, add CPU cores
<ThomasD13>
I think it's not possible to use DDR4 memory as a ramdisk for the whole build process
<qschulz>
ThomasD13: shared sstate-cache, shared DLDIR, more CPU, more RAM, hashequiv server set correctly, etc..
<ThomasD13>
I have 12 cores which run 24 threads. I thought maybe I could reduce the "bottleneck" of m2-ssd access?
<ThomasD13>
I've got 72GB of RAM
<JosefHolzmayrThe>
ThomasD13: unless you are trying really hard to avoid it, linux effectively caches everything in ram
<kanavin>
ThomasD13, 12 cores is too little :-/
kayterina has joined #yocto
<kanavin>
you won't nearly saturate the disk I/O with it
<ThomasD13>
kanavin, it's the fastest desktop CPU atm :(
<JosefHolzmayrThe>
ThomasD13: i've tried backing DL/SSTATE on hd vs. ssd vs. tmpfs, and there was no statistically significant difference, given the assumption that the box has enough RAM to cache stuff
<kanavin>
ThomasD13, what CPU do you have?
<ThomasD13>
alright josef, that sounds promising
<ThomasD13>
Ryzen 5900X
<JosefHolzmayrThe>
kanavin: and on the other hand, many builds will effectively max out at ~32 cores for parallelization limits.
<kanavin>
JosefHolzmayrThe, not if you're building several recipes at once
<kanavin>
JosefHolzmayrThe, but yes, beyond 32 cores the returns start to diminish
<JosefHolzmayrThe>
yeah but if you're targeting images instead of recipes, the DAG will limit you
<kanavin>
ThomasD13, there's a 16 core version of that
<ThomasD13>
btw, with which hardware do you work with yocto?
<kanavin>
ThomasD13, and really, you should get a threadripper
<kanavin>
JosefHolzmayrThe, yes, so optimizing the graph and reducing the critical path is perhaps the best way to speed up builds
<ThomasD13>
ah, you mean 5950X. I thought the higher frequency would serve better in that case
<kanavin>
ThomasD13, no, extra cores always win
<ThomasD13>
hmmm damn it. now it's too late :D
<kanavin>
sell :)
<kanavin>
and get a TR
<kanavin>
or wait until zen 3 based TRs are available
<qschulz>
might also make sense to move to server-grade CPUs if you're in a company
<ThomasD13>
So you guys work with TR CPUs?
<JosefHolzmayrThe>
in the end it all depends on what you want to achieve
<BuZZ-T>
i got a 5900X - 64GB RAM and an M.2 SSD (PCIe 4.0) and I'm pretty happy with it :) but I didn't need to rebuild everything from scratch :)
<ThomasD13>
Yeah probably... but at least it's now roughly 4 times faster than before
<qschulz>
but a cached/proxied DL_DIR company-wide and a mirrored sstate-cache (SSTATE_MIRRORS) and/or an nfs sstate-cache is usually a very good start (+ hashequiv server)
<ThomasD13>
hashequiv server is new to me - I'll google for that
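A hedged local.conf sketch of the setup qschulz describes; the paths and URL are placeholders, and for a company-wide hash equivalence server BB_HASHSERVE would point at a host:port instead of "auto":

    DL_DIR = "/srv/yocto/downloads"
    SSTATE_MIRRORS = "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"
    BB_HASHSERVE = "auto"
    BB_SIGNATURE_HANDLER = "OEEquivHash"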
<JosefHolzmayrThe>
our box here is a dual epyc-7-something with 256 threads, 512GB ram and 7something TB pci-nvmes
<kanavin>
yes, but it won't help when doing custom builds when a change you want to make is early in the task dependency graph
<qschulz>
I build on a Mac Mini because I like pain
<ThomasD13>
:D
<JosefHolzmayrThe>
the real killer feature is that we can now have multiple devs kicking off builds at reasonable core counts (16-32) without competing for resources.
<qschulz>
My previous company was running icecc (icecream) company-wide
<kanavin>
I think the main advantage of server grade CPUs is that you can put a lot of RAM in them. threadrippers are capped at 256G.
<qschulz>
it didn't always work properly because of outdated icecc on some devs' PCs
<ThomasD13>
Yeah, those server-grade CPUs have a lot of PCIe lanes - that's a big advantage
<JosefHolzmayrThe>
in the end it all depends on the use case and the budget, how you put it to best use.
<kanavin>
but they cost dearly. If you want a machine just for yourself, don't get that :)
<kanavin>
threadrippers are the best thing that could happen to yocto
<kanavin>
crazy core counts available for non-astronomical price
<ThomasD13>
kanavin, exactly that is true in my case. My boss doesn't really understand WHAT I am actually doing
<JosefHolzmayrThe>
at the moment I'm rather banging my head against some peculiarities of using a container on said machine for gitlab-runner
manuel1985 has joined #yocto
m4ho has quit [Read error: Connection reset by peer]
BuZZ-T has quit [Remote host closed the connection]
m4ho has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 265 seconds]
camus1 is now known as camus
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
kayterina has quit [Ping timeout: 252 seconds]
kayterina has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
kayterina has quit [Ping timeout: 252 seconds]
camus1 has joined #yocto
camus has quit [Remote host closed the connection]
camus1 is now known as camus
kayterina has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
xmn has joined #yocto
camus has quit [Ping timeout: 265 seconds]
camus1 has joined #yocto
camus1 is now known as camus
xmn has quit [Ping timeout: 252 seconds]
<Ad0>
how do I make sure I have everything needed to make bluetooth serial work ?
<Ad0>
it's rpi3
<Ad0>
there's always a mix between DISTRO_FEATURES and IMAGE_INSTALL on those
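Ad0's question goes unanswered in the log; purely as a hedged starting point (not a verified recipe for rpi3 serial bluetooth), the usual knobs are along these lines in local.conf, with the exact packages depending on the BSP layer:

    DISTRO_FEATURES_append = " bluetooth"
    IMAGE_INSTALL_append = " bluez5"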
GillesM has joined #yocto
tre has quit [Remote host closed the connection]
camus has quit [Ping timeout: 245 seconds]
camus has joined #yocto
jwillikers has joined #yocto
<RP>
kanavin: I think you fixed a determinism issue with the parallelism option for zstd creeping into the output? Does that issue affect the current xz too?
ecdhe has quit [Read error: Connection reset by peer]
FredO3 has joined #yocto
ecdhe has joined #yocto
<kanavin>
RP: I am lacking context; I have not seen any determinism issues with zstd. The rpm package output is the same regardless of the number of threads or the compression level.
<kanavin>
RP: these are the standard ones that come out of the box with rust, and are tested upstream
<RP>
kanavin: ah. So you're proposing we drop mips, musl, powerpc?
<RP>
kanavin: this would mean we drop sato since that depends on rsvg?
<kanavin>
RP: we on the other hand define and use custom ones (the definition is a json file), e.g. 'aarch64-poky-linux'. This is problematic, as 'the ecosystem' generally isn't expecting such custom targets, and sees them only as a way to 'bringup' new hardware.
<RP>
kanavin: ah, I wasn't looking far enough, there is tier 3
<kanavin>
RP: not at all. standard targets do include mips, musl and powerpc. They're not guaranteed to be as well tested as 'tier 1'.
GillesM has quit [Quit: Leaving]
<RP>
kanavin: mapping our values to the rust preferred name does seem sensible
<RP>
we already do something like that in openssl iirc
<RP>
kanavin: I'll port that patch if you are busy btw, one way or another I'm going to need it to get any further fixing this sstate reuse issue
<kanavin>
I have already had to add x32 in there (a copy-tweak from rust source code), and this got me thinking that it's problematic.
<kanavin>
Now that I've tried to build mozjs, it's even more problematic.
<RP>
kanavin: what it does is less than ideal so I'm open to trying changes
<kanavin>
RP: yes please (the rpm patch)
<RP>
kanavin: it seems to apply without changes :)
<RP>
That means something else will go badly wrong in a minute
<kanavin>
RP: the reason I got into mozjs upgrade is that existing mozjs (which does not use rust) won't build with python 3.10
<RP>
kanavin: ah. Good timing with rust then...
<jaskij[m]>
where do I report a wrong mailing list link in the layer index? specifically for meta-freescale
<qschulz>
jaskij[m]: to the correct ML :)
<agherzan>
I have some layers in the layers index that I'm not maintaining anymore (old workplaces). How can I remove myself as maintainer?
<qschulz>
meta-freescale@lists.yoctoproject.org
<jaskij[m]>
of course :D
<mbrothers>
Question about using patches. I have created a patch using `git format-patch -1` and am applying it using `SRC_URI += "file://0001-...patch"`. But I am getting errors while it is being applied. How can I debug this easily? Which commands does yocto use under the hood?
<rburton>
what errors?
<rburton>
that's typically right, problems may be because you didn't generate the patch from the right path, or your S is set such that the prefix is wrong
<agherzan>
Layer index: how can I query for all the layers I've submitted and for those that I'm a maintainer for?
<jaskij[m]>
qschulz: I think it's more of a global issue, caused by a change in the structure of software at lists.yoctoproject.org
<qschulz>
but yes, the tool used to manage mailing list was changed about a year or two ago
<jaskij[m]>
where? "about this site" on layer index throws a 404
leon-anavi has quit [Quit: Leaving]
<qschulz>
jaskij[m]: the mail address is listed in the link you gave us above (the one for poky)
<qschulz>
and you have the same for other layers
<jaskij[m]>
So I'd have to dig through the whole index and submit about the change? Man, I need to write a crawler for that :P
<qschulz>
I don't know what the layer index "about this site" page should redirect to sorry
<qschulz>
halstead: michaelo: ndec_: maybe know ^
<qschulz>
jaskij[m]: only the layers hosted on git.yoctoproject.org and git.openembedded.org
<qschulz>
it's better to have a patch for a few than for none, so at least send the ones you encounter, though we would be very grateful if you did it for all repos since you're offering :)
sstiller has joined #yocto
<ndec_>
qschulz: jaskij[m] : right, i can see the issue.. i can fix the index for meta-freescale.. but I am not sure if we can do a fix-all-at-once change..
<jaskij[m]>
Might as well write an automated scanner for dead mailing list links and such. Been looking for a project to learn Rust :P although with my life as it is, no promises on timelines.
<qschulz>
aaaaaaah, jaskij[m] you were talking about the mailing list displayed on the layersindex? I thought in some layers' README or something
<ndec_>
i am not sure it is a patch that we need. i think we need to make a change in the layerindex admin panel.
<jaskij[m]>
layerindex, yes
<jaskij[m]>
my bad if I caused confusion
argonautx has joined #yocto
<mbrothers>
rburton: I am getting "Cannot rename file without two valid file names", whereas if I apply it locally (git am 0001...patch) it works fine
<qschulz>
jaskij[m]: then I don't know where this needs to be changed, ndec_ and others will know :)
<ndec_>
the README in meta-freescale is showing the right mailing list. but layerindex does not parse the mailing list from the tree. it must have been entered manually
<rburton>
mbrothers: bitbake usually applies patches using quilt
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<mbrothers>
alright, I'll try to reproduce it using quilt, I'll get back to you!
<rburton>
you can set PATCHTOOL=git if you want, but it sounds like your patch is just a bit weird and might need a bit of love to not be a git-specific patch
<mbrothers>
Thanks!
<ndec_>
jaskij[m]: i am sure a BZ entry with the list of issues you found would be helpful
<mbrothers>
rburton: Yes, it is renaming+modifying sources, so it may need some attention there
<mbrothers>
rburton: just for my information, is it possible to add that PATCHTOOL directive also in the "SRC_URI += file://...patch" line?
<rburton>
no, just set PATCHTOOL in the recipe
<mbrothers>
rburton: alright, thanks!
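A minimal sketch of rburton's suggestion, set in the recipe (or a bbappend) rather than on the SRC_URI line; the patch filename is illustrative:

    PATCHTOOL = "git"
    SRC_URI += "file://0001-rename-and-modify-sources.patch"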
<jaskij[m]>
ndec_: I'll do it once I'm home, probably will try to get started on that scanner too
<ndec_>
it's a property (mailing list) in the layer entry. i am not sure if the layer maintainer has permission to update the information directly, or if it should be an admin. maybe bluelightning or RP would know. in any case, it's a good finding..
<ndec_>
qschulz: re: the about page which is broken, please submit a bug :)
<jaskij[m]>
ndec_: I've got a bunch of questions, are there any docs for the layer index I could read, or a repo I could try going through?
leon-anavi has joined #yocto
<RP>
ndec_: I do not know. I think rburton did get admin access
<rburton>
the layer owner can update it
<rburton>
what layer?
<ndec_>
i have admin access too, i can make the change. but i wanted to understand if admins should make the changes, or if we should request all maintainers to verify their entry.
<ndec_>
meta-freescale mailing in the layerindex points to the old mailman, not groups.io link.
<ndec_>
and probably many others.
<ndec_>
ah, rburton you answered my question :)
<rburton>
so the owner of meta-freescale is otavio
<ndec_>
hmm. actually the problem might be slightly different.. https://lists.yoctoproject.org/listinfo/meta-freescale is (I think) supposed to redirect to the new groups.io link, but it does not. while similar links for OE work.
<ndec_>
i am pretty sure it was there initially, but it is broken now..
<yates>
if i have several patches in a recipe (several SRC_URI += "file://patch1.patch", SRC_URI += "file://patch2.patch", ...) what order will they be applied in? alphabetical?
<qschulz>
yates: order in which they are appearing in SRC_URI
<yates>
qschulz: +1
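To illustrate qschulz's answer with a hypothetical recipe fragment, this applies patch1.patch first and patch2.patch second, simply because that is the order they end up in SRC_URI:

    SRC_URI += "file://patch1.patch"
    SRC_URI += "file://patch2.patch"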
davidinux has quit [Ping timeout: 245 seconds]
davidinux has joined #yocto
Nate[m]1 has quit [Ping timeout: 250 seconds]
<agherzan>
bluelightning: Hi Paul. I have a set of layers in the layer index from which I'd like to remove myself as maintainer (past work). Also, I have submitted a layer with my current work email, which will create the same issue in the future. I can't make these changes as the emails differ from my current one - some of them I don't even have access to anymore.
Jari[m] has quit [Ping timeout: 268 seconds]
hmw[m] has quit [Ping timeout: 268 seconds]
<agherzan>
And I think the way the layer index does it is by matching the maintainer's email to the account's email.
<jonesv[m]>
I am just trying to define two services from one recipe. Those two binaries `gpio-shutdown` and `i2c-battery` are both built by this recipe, and installed properly. I can `service gpio-shutdown start` and `service i2c-battery start` manually, but they don't start automatically
<jonesv[m]>
I have been looking at `./meta/classes/update-rc.d.bbclass` for two hours now, and I don't really get how the "multi-package" way works (if it does at all). Happy to get advice there
<jonesv[m]>
Also I don't get what they mean by "package", because for me a recipe describes one package. So I actually define multiple services here, not multiple packages 🤔
<qschulz>
jonesv[m]: a recipe builds multiple packages
<qschulz>
(in the immense majority of cases)
<qschulz>
it just happens that ${PN} is both the name of the recipe and of the "main" package
vd has joined #yocto
<jonesv[m]>
oh, I see. So it sounds like I should define those packages in my first multi-package recipe
<qschulz>
jonesv[m]: which Yocto release are you working on currently?
<jonesv[m]>
Trying to find in the docs how to build multiple packages from one recipe, it seems like I missed that 😁
<qschulz>
jonesv[m]: I think the entries in INITSCRIPT_PACKAGES should be present in PACKAGES too, and the init script that is enabled via the per-package INITSCRIPT_NAME override should be in that package
<qschulz>
don't know how to convey my thoughts better than this
<jonesv[m]>
But that's all done in the same recipe, right? I just need to understand how to deal with multiple packages in one recipe I think
<qschulz>
basically, you need FILES_gpio-shutdown += "/path/to/gpio-shutdown.service" and PACKAGES =+ "gpio-shutdown"
<qschulz>
+- the typos and correct paths
<qschulz>
you can check if your initscript makes it to the correct package by running oe-pkgdata-util find-path '*gpio-shutdown*'
<qschulz>
jonesv[m]: your cmake should install the files in the correct directories already so you don't need a do_install_append
<qschulz>
but that's nitpicking
alessioigor has quit [Quit: alessioigor]
<qschulz>
but yeah, you're missing the PACKAGES and FILES variables for each new package
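Pulling qschulz's points together into one hedged recipe sketch (package and script names follow jonesv's description, paths and parameters are illustrative, and the old-style underscore overrides match the releases being discussed):

    inherit update-rc.d

    PACKAGES =+ "gpio-shutdown i2c-battery"

    FILES_gpio-shutdown = "${bindir}/gpio-shutdown ${sysconfdir}/init.d/gpio-shutdown.sh"
    FILES_i2c-battery = "${bindir}/i2c-battery ${sysconfdir}/init.d/i2c-battery.sh"

    INITSCRIPT_PACKAGES = "gpio-shutdown i2c-battery"
    INITSCRIPT_NAME_gpio-shutdown = "gpio-shutdown.sh"
    INITSCRIPT_PARAMS_gpio-shutdown = "defaults 90"
    INITSCRIPT_NAME_i2c-battery = "i2c-battery.sh"
    INITSCRIPT_PARAMS_i2c-battery = "defaults 90"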
<jaskij[m]>
why would `dnf repoquery` on the device give me empty results? any tips on where to look?
alessioigor has joined #yocto
<jaskij[m]>
in particular `dnf repoquery --list glibc-utils` is empty
<jonesv[m]>
qschulz: but those do_install_append are for the sysvinit services, so they are not part of the project itself (only the recipe). Is that bad practice?
ThomasD13 has quit [Ping timeout: 246 seconds]
<qschulz>
jonesv[m]: misread the recipe, this way is fine, though probably they could be in the git repo already? but that's up to the maintainer's taste
<jonesv[m]>
got it
<qschulz>
jaskij[m]: don't know much about dnf repoquery, but, do you actually have a dnf repo somewhere to query?
<jaskij[m]>
So, I've found out that postgresql's `initdb` actually depends on `locale` to find out existing locales. Adding an RRECOMMENDS is probably something for the mailing list?
<jaskij[m]>
without `locale`, you get a warning: `UTC [4118] WARNING: no usable system locales were found`
<RP>
jaskij[m]: I'd make that an RDEPENDS but yes, please send a patch
<jaskij[m]>
true, working with locale is such a basic thing for a database that `RDEPENDS` is probably better
rber|res has quit [Read error: Connection reset by peer]
tnovotny has quit [Quit: Leaving]
<fray>
just remember many people are building systems without locales, so if the component won't work in that case it would be nice to identify and disable that recipe in some way..
<jaskij[m]>
this also probably needs safeguarding for whether glibc is used
<RP>
jaskij[m]: there is a libc-glibc override
<jaskij[m]>
fray: how do I detect that? by checking whether `GLIBC_GENERATE_LOCALES` is empty?
<qschulz>
jonesv[m]: enabling buildhistory feature should give you what you want
whuang0389 has joined #yocto
<fray>
I assume this needs to work in musl as well.. so I'm not sure..
<jaskij[m]>
fray: for now I'm sending this as a glibc-only patch, I've never worked with musl
vd has quit [Ping timeout: 256 seconds]
<jaskij[m]>
hoping someone will respond with a necessary correction for musl if it's available
vd has joined #yocto
<fray>
Ya, IMAGE_LINGUAS, but that shouldn't be used as it's "image" specific. I thought there was a variable that said 'construct these locales'. That is the one I was thinking of, but I'm not seeing it
<fray>
Ya, it's GLIBC_GENERATE_LOCALES .. 'all' is fine, other things might be.. and blank is probably "won't work"
<jaskij[m]>
blank is equal to 'all', as per the source linked
<fray>
Hmm.. I thought there was a way to say none..
<fray>
Maybe " "
<fray>
maybe I'm remembering wrong and locales are always generated "in some way".. then IMAGE_LINGUAS is what gets installed or not
<jaskij[m]>
I think having glibc without locale support is not really supported. Sure, you may have those locales not installed (as is common), but glibc will have locale support either way
sstiller has quit [Quit: Leaving]
<fray>
Ya ya.. glibc locale APIs will always be present for sure
<fray>
it's been a very long time since we could just turn off locales at that level..
<jaskij[m]>
so just `RDEPENDS` on `glibc-utils`, *if* `TCLIBC` is `glibc`
<jaskij[m]>
is what I'm thinking
<qschulz>
jaskij[m]: I think there's a glibc override available, so an RDEPENDS:${PN}:append:glibc should probably be fine?
<fray>
ya.. that should work fine (and be simpler)
<fray>
originally I was thinking we could disable packaging of any locale stuff, but I don't see that -- so my memory was wrong
<jaskij[m]>
If there's such an override it'll be perfect
<qschulz>
jaskij[m]: :libc-glibc
<jaskij[m]>
How much testing is required for patches to be submitted?
roussinm has joined #yocto
<jaskij[m]>
Locally I'm working with pgsql 13.3 backported from Hardknott to Dunfell
<jaskij[m]>
Thanks qschulz
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
oobitots has quit [Quit: Client closed]
thekappe has quit [Quit: WeeChat 1.9.1]
oobitots has joined #yocto
pgowda has quit [Quit: Connection closed for inactivity]
<qschulz>
jonesv[m]: for sure not, it should be ${sysconfdir}/init.d/
<qschulz>
check with oe-pkgdata-util list-pkg-files whether gpio-shutdown has the file
<qschulz>
it is very likely that you also should have PACKAGES =+ and not PACKAGES +=
<jonesv[m]>
oh, I thought `=+` was a typo 😅
<qschulz>
and ${INIT_D_DIR} could probably replace ${sysconfdir}/init.d/ not that it matters much :)
mbrothers has quit [Ping timeout: 250 seconds]
<qschulz>
jonesv[m]: a file can only appear in one package, so the first FILES to have a regex matching the file gets it
<jonesv[m]>
qschulz: so `FILES_gpio-shutdown += "${D}${sysconfdir}/init.d/gpio-shutdown.sh"`? You don't have the `${D}` above
<qschulz>
and the PACKAGES variable is read left to right
<qschulz>
jonesv[m]: no, I didn't write ${D}
<jonesv[m]>
So for the install_append, I need `${D}`, but for the `FILES`, I don't?
<qschulz>
yes, because the FILES mechanism looks into ${D} for files
<jonesv[m]>
right, let me write that down
<qschulz>
${D} being a temporary rootfs skeleton per recipe
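The ${D}-versus-FILES point in a short sketch: do_install copies into the staging root ${D}, while FILES lists paths as they will appear on the target (script name as in the discussion, the rest illustrative):

    do_install_append() {
        install -d ${D}${sysconfdir}/init.d
        install -m 0755 ${WORKDIR}/gpio-shutdown.sh ${D}${sysconfdir}/init.d/
    }

    FILES_gpio-shutdown += "${sysconfdir}/init.d/gpio-shutdown.sh"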
<rburton>
fray: around?
rfuentess has quit [Remote host closed the connection]
<fray>
ya.. on an internal meeting so I might be a bit slow responding, but I'm here
oobitots has quit [Quit: Client closed]
<rburton>
fray: having fun with gitsm fetcher. can i still blame you for that?
oobitots has joined #yocto
mbrother1 has joined #yocto
<fray>
lol what isn't working for you.. I can at least tell you if I know about it... :)
<mbrother1>
rburton: turns out I had binary files in my patch which are a problem in quilt when not using the `--binary` flag when creating the patch
<fray>
(the one thing that people have complained about and I've so far not done anything about -- is login being required for submodules)
<rburton>
fray: so say v1 of a gitsm repo has foo as a submodule. update to v2 changes foo to bar. on build, it doesn't go and get bar.
<fray>
Ohhh it should
<fray>
I thought there was a test case in place for that specific condition (bitbake test case)
<fray>
I know early on there was a bug that it wasn't doing that reload recursively.. but that was fixed..
<rburton>
i've a reproducer, two releases of edk2-firmware where cmocka changes repo
<rburton>
i'll see if the test case is similar or i can replicate in the same way
<fray>
What is changing from v1 to v2? URL only, path, and/or submodule entry name?
<rburton>
yeah was just wondering if the tracking code looks at just the path
<RP>
rburton: writing a selftest would be a great thing anyway as we can then stop things regressing
<fray>
ya, if you can either replicate it with the existing test case, or get me a test case, I can look fairly quickly.. not sure I can fix it this week -- but it SHOULD work
<fray>
rburton the tricky part with this is if you just 'scan things' you get race conditions (we had a LOT of them early in the development of the code). Moving everything to recursive with locking fixed that, but it's possible that we're not looking at the right set of values as we recurse
<rburton>
right, as the path is the same but the tarball it expands in unpack is different
<fray>
Ya, it may be looking at JUST the path, but it was intended to look for both
<rburton>
also of course this code is horrid :)
<fray>
yes it is
<fray>
it is _WAY WAY WAY WAY_ better than where I started..
<rburton>
i think it's time for fetch3 ;)
<fray>
but trying to replicate git submodule (and its quirks) without running git submodule.. yikes
<fray>
then it parses the response, using the path as the key and the url as the value (I think)
<fray>
iterates over each of those and verifies everything is filled out. If it is it then stores the paths, revisions, uris and any subrevisions (don't remember what this means)
<fray>
then for each module it processes based on the submodule entry name....
<fray>
builds up a URL table and calls itself (recursively) for the submodules..
<fray>
(that's more or less how it fetches).. so that seems fairly straightforward
<fray>
It's probably 'need_update' where things fall apart..
<fray>
Does the main SRC_URI change? No, we end; otherwise we have to process the submodules..
<rburton>
no just the srcrev
<rburton>
which includes a change to .gitmodules which changes the url of a submodule
<fray>
the idea is if the main one didn't change, then we don't need to do further evaluation as submodules can't change by definition.. (the same happens when we recursively scan)
<rburton>
WOOP WOOP yocto shoutout by nasa
<fray>
rburton: the URL changed, but did the SRCREV change?
<rburton>
fray: so the recipe changes the SRCREV of the main recipe. that change includes a change to .gitmodules which changes the url of a submodule.
<fray>
ya, but does it change the SRCREV of the submodule?
<rburton>
gotcha
<fray>
There _IS_ a check that if the SRCREV of the submodule is unchanged, it skips further checking
<fray>
# Drop a nugget for the srcrev we've fetched (used by need_update)
<fray>
look at the top of need_update.. it looks for known srcrevs.. and then checks them skipping items it has already seen..
<fray>
The only thing I can think of is somehow adding the URL as well.. so it'd be:
<fray>
bitbake.srcurl = <url>
<fray>
bitbake.srcrev = <srcrev>
<fray>
then load them both, and compare
<rburton>
gotcha, will try
<fray>
So ya, lack of SRCREV changing is your cause.. but I'm not sure the 'fix', that is just one possibility
<rburton>
sounds feasible
<fray>
Somehow we need to know the url that 'last time through', so we can quickly compare and bail..
<fray>
anyway you have your reproducer now. Change the upstream URL w/o changing the SRCREV..
mbrother1 has quit [Ping timeout: 240 seconds]
<fray>
(I doubt I thought of a situation where the URL changed, but the SRCREV didn't at the same time)
<barath>
does anyone know if the covered/notcovered fix is being worked on for dunfell? I've tried backporting the fix from hardknott to dunfell, but we're still getting those errors
<jaskij[m]>
how long does it take to get an account on BZ?
<jaskij[m]>
and can I just send an e-mail to the mailing list with the bugs?
<jaskij[m]>
*bug
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
mckoan is now known as mckoan|away
zpfvo has quit [Remote host closed the connection]
kayterina has quit [Ping timeout: 252 seconds]
kayterina has joined #yocto
roussinm has quit [Quit: WeeChat 3.3-dev]
<RP>
jaskij[m]: we can ask halstead
<halstead>
jaskij[m]: accounts are added within an hour usually. There isn't a way to submit bugs via email.
florian has quit [Quit: Ex-Chat]
<override>
how can I make an image recipe compress only the rootfs to a format bmap likes and also generate a bmap file and a hash file for ONLY the rootfs in question.
<halstead>
jaskij[m], rburton, ndec_ , about the bad links and mailing lists. We added redirects from the old locations to the new ones as best as possible so old links would work. Especially direct links to message archives.
<jaskij[m]>
I'm mostly looking to click through from the layer index to actually getting the list e-mail address
<jaskij[m]>
which, at least for meta-intel and meta-freescale, isn't possible currently
nerdboy has joined #yocto
nerdboy has joined #yocto
nerdboy has quit [Changing host]
frieder_ has quit [Remote host closed the connection]
<argonautx>
hello out there, I want to build kernel modules on the target system and I read that I have to add kernel-dev to achieve this
<argonautx>
but there is no /usr/src/kernel directory on the target fs
<argonautx>
what did I miss?
<zeddii>
install kernel-devsrc to your image
nateglims has quit [Quit: Client closed]
roussinm has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
oobitots has quit [Quit: Client closed]
oobitots has joined #yocto
nateglims has joined #yocto
kayterina has quit [Quit: Leaving]
<nateglims>
Is there a written grammar for bb files or is it just bitbake as a reference implementation?
<override>
zeddii: do you know what I can add to the image for it to give out the rootfs in a format bmap likes and also an accompanying bmap file???
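There is no answer to this in the channel; as a hedged sketch only, the stock image type machinery can usually produce a bmap (and a checksum) alongside the image via IMAGE_FSTYPES conversion suffixes, e.g. in the image recipe or machine config:

    IMAGE_FSTYPES += "wic wic.bmap wic.sha256sum"

(or ext4 / ext4.bmap for a bare rootfs image); whether that counts as "only the rootfs" depends on the wks layout in use.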
whuang0389 has quit [Quit: Client closed]
oobitots has quit [Quit: Client closed]
oobitots has joined #yocto
<RP>
so gnupg is different on my system compared to the autobuilder because of /usr/bin/sendmail
prabhakarlad has quit [Quit: Client closed]
vd has quit [Quit: Client closed]
vd has joined #yocto
jwillikers has quit [Remote host closed the connection]
jwillikers has joined #yocto
oobitots has quit [Ping timeout: 256 seconds]
<argonautx>
zeddii: thanks
whuang0389 has joined #yocto
leon-anavi has quit [Quit: Leaving]
prabhakarlad has joined #yocto
leon-anavi has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
ant__ has joined #yocto
<ndec_>
halstead: ok.. well, i think we need to update the links anyways to the new location!
<halstead>
ndec_: For sure. I'm not sure why the link in question doesn't redirect now or if it ever did.
<halstead>
ndec_: It looks like the meta-freescale layer has already been updated. Maybe you did that?
<ndec_>
yes, well. rburton did.
<ndec_>
but given that a lot of layers need to be updated, perhaps there is a better way than doing it manually..
florian has joined #yocto
<smurray>
RP: I can't seem to ssh into my setup at home this morning, but I grabbed some bits on my laptop and could get the ptest patch in meta-agl-core updated in a pinch
<RP>
smurray: thanks, these things always happen in the most inconvenient way :/
amitk has quit [Quit: leaving]
<smurray>
RP: when are you planning on pulling the queued master-next changes into master?
<RP>
smurray: I was hoping for sooner than later but if you want me to wait I can I guess
<smurray>
RP: it's more that I'm wondering, if I push an update for that, whether it'll break test runs before you push to master
<RP>
smurray: oh, I can push whenever it is ready :)
<smurray>
RP: ah, okay. I'll do a test build of ptest-runner with the patch here ASAP then
tangofoxtrot has quit [Ping timeout: 260 seconds]
<smurray>
RP: tbh, I'm not sure it'll break the meta-agl-core test on the autobuilder, as I don't think AGL_FEATURES = "aglcore" is set for that
tangofoxtrot has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
camus has quit [Quit: camus]
whuang0389 has quit [Quit: Client closed]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
goliath has quit [Quit: SIGSEGV]
<vd>
I'm running out of space, is it safe to create mount points on my laptop for the DL_DIR, TOPDIR and SSTATE_DIR?
nateglims has quit [Quit: Client closed]
<RP>
vd: SSTATE_DIR and DL_DIR work fine. TOPDIR should work too but I don't do that locally so slightly less sure
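A minimal local.conf sketch of what RP confirms works, with the mount points as placeholders:

    DL_DIR = "/mnt/bigdisk/downloads"
    SSTATE_DIR = "/mnt/bigdisk/sstate-cache"

TOPDIR is the build directory itself, so moving that usually means creating (or bind-mounting) the build directory on the larger filesystem rather than setting a variable.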
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
leon-anavi has quit [Quit: Leaving]
florian has quit [Ping timeout: 260 seconds]
nateglims has joined #yocto
leon-anavi has joined #yocto
<smurray>
RP: I've pushed the patch update to meta-agl's next branch, ptest-runner should build w/o warnings if you merge the master-next updates
<jaskij[m]>
can I use overrides in recipe appends? particularly, I want different systemd-conf _bbappend files depending on the target machine
<jaskij[m]>
vd: not in variables, but on the filename level
<jaskij[m]>
sorry for badly wording it the first time
<jaskij[m]>
does it work like this at all in filenames?
<vd>
overrides aren't supported on a file basis for bbappend I think
<jaskij[m]>
what I feared, I'll just need to separate it differently
<vd>
but you can have many systemd-conf _%.bbappend files
<vd>
(and the VAR_append_myoverride syntax in them for sure)
<jaskij[m]>
vd: how, if they're all in the same directory? I have multiple machines supported in one layer, for what it's worth
<jaskij[m]>
rn, I'm thinking more of just making an include per machine and just include them all in `systemd-conf_%.bbappend`
<vd>
jaskij[m] you can place them in recipes-MACHINE1/systemd-conf/systemd-conf_%.bbappend, recipes-MACHINE2/systemd-conf/systemd-conf_%.bbappend, and so on.
<vd>
jaskij[m] but you could also edit the recipe variables within the machine configuration files
nateglims51 has joined #yocto
nateglims51 has quit [Client Quit]
<vd>
as in two different options
nateglims has quit [Ping timeout: 256 seconds]
goliath has joined #yocto
<jaskij[m]>
not really, I need to add in board-specific network configuration files
<jaskij[m]>
so that, for example, by default it boots with DHCP
<RP>
smurray: I merged, thanks
<smurray>
RP: when I get home I'll send the latest version of the patch to the ml
<RP>
smurray: sounds good, thanks
<vd>
jaskij[m] that's already the default I think, but you can just append FILESEXTRAPATHS and SRC_URI with the machine overrides in bbappend files
<jaskij[m]>
I could. If I didn't need multiple options per machine
<jaskij[m]>
E.g. for dual ethernet I need options for both separate and bridged
<jaskij[m]>
and this, in turn, is chosen per image
<olani[m]>
jaskij: If the files are stored with the recipe you can use subpaths like recipes-X/systemd-conf/systemd-conf/MACHINE1/whatever.service and SRC_URI += "whatever.service" and let the recipe pick the appropriate file. This assumes they have the same name.
<jaskij[m]>
oh, I'm doing that alright, otherwise the files subdir would be a total mess
<jaskij[m]>
just thought someone could come up with something smarter than what I'm already doing
<jaskij[m]>
mostly, I just have a very visceral reaction to long files
<jaskij[m]>
so if my _bbappend would be 150 lines, I'm looking to split it
<olani[m]>
I probably don't understand exactly what you are trying to do. For one thing I do not understand how the image interacts with the machines here.
<jaskij[m]>
Yeah, I'd have to do a whole writeup to just explain, or mock the file structure I'm currently using.
<jaskij[m]>
Which... I might actually do that mock and post it on the mailing list
jamesp_ has joined #yocto
jamesp_ is now known as jamestperk
xmn has joined #yocto
<jaskij[m]>
olani: to attempt a rundown, there's two machines, one of them has two ethernet ports. Now, those two ethernet ports can be separate or bridged, and if bridged the bridge can either be a DHCP client or server. So one of the machines has *three* possible network configurations, and I want to select among them when creating the image.
<jaskij[m]>
that's what makes this all so complicated
vd44 has joined #yocto
vd has quit [Ping timeout: 256 seconds]
<vd44>
jaskij[m] that's not complicated. In recipes-core/systemd-conf/systemd-conf_%.bbappend, add a few SRC_URI_append_MACHINE1 = " file://br0.network" etc.
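A small sketch of that suggestion; the machine override names and file names below are placeholders, and the do_install_append is needed because SRC_URI only fetches the files:

    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

    SRC_URI_append_machine1 = " file://br0.netdev file://br0.network"

    do_install_append_machine1() {
        install -d ${D}${sysconfdir}/systemd/network
        install -m 0644 ${WORKDIR}/br0.netdev ${WORKDIR}/br0.network \
            ${D}${sysconfdir}/systemd/network/
    }

    FILES_${PN}_append_machine1 = " ${sysconfdir}/systemd/network/*"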
kiran_ has quit [Ping timeout: 245 seconds]
<vd44>
jaskij[m] but you could also have this configuration generated by systemd
<jaskij[m]>
That's broadly what I'm doing
<jaskij[m]>
But I just finished refactoring one machine's bbappend content into a separate include and it's fifty lines long
<vd44>
jaskij[m] or even always install all the files, and make the multi-interface unit files depend on the interfaces themselves, so that the board with a single interface won't load the configuration for non-existent interfaces
<jaskij[m]>
That's possible?
<jaskij[m]>
Also, what if I want to select, at image definition, between bridged and non-bridged?
<olani[m]>
jaskij: Do you only want to install one of the files on each image, or do you want to install all of them and then use a symlink or something to select the active one? I'm not that good with systemd so do not assume I know how the .network files are used.
<jaskij[m]>
Nah, only installing some of them. I left work just now, but I'll make a mock repo which can be public tomorrow and ping you both to show what it looks like rn, might be easier to understand.
<jaskij[m]>
But I only want the actually used files in the image
<olani[m]>
jaskij: Then I'd install all of the files and then split them into separate packages that the image can choose from.
<vd44>
jaskij[m] many ways. You can use a systemd generator reading a configuration file or a kernel command line to generate the given unit files, or you can define a variable in your image recipe used in a ROOTFS_POSTPROCESS_COMMAND function to install the unit files at image creation time. You can then edit this variable in the recipe, the machine
<vd44>
configuration or somewhere else.
<olani[m]>
Don't know why you put it all into the systemd-conf recipe either. It's fine to do one recipe per machine that can be selected with PREFERRED_PROVIDER.
<jaskij[m]>
olani[m]: That's what I'm doing now, but what vd44 proposes seems a bit better
<jaskij[m]>
It just occurred to me that I could maybe abuse PACKAGECONFIG or create a similar mechanism
<olani[m]>
Yes, systemd can do many wonderful things :)
<jaskij[m]>
Why I picked it, is that it actually also supports CAN interfaces
<vd44>
I personally prefer to have everything dynamic if possible, that makes the image recipes simpler. If you do not want all files to be present on the system (even if they would only be used given the correct After= and other conditions), then a generator can be used.
<olani[m]>
vd44: I agree. I just don't know how to do it with systemd.
<jaskij[m]>
vd44: For an appliance, sure. But I can't rule out we'll be selling these open, for the client to program. In that case I believe generators might be too convoluted.
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<vd44>
olani[m] 2 ways: 1) add all files, and set the correct After=sys-subsystem-net-devices-eth1.device for example so that the bridge isn't brought up if eth1 isn't present 2) write a generator which reads e.g. mynetworkmode=bridged kernel parameter and generates the unit files. See man systemd.generator
<vd44>
jaskij[m] if you only want to keep the configuration at the image level, then ROOTFS_POSTPROCESS_COMMAND += "mynetwork; " mynetwork () { if [ "${MYNETWORK}" = "bridged" ] ; then echo ... > ${IMAGE_ROOTFS}/usr/lib/systemd/... }
<vd44>
or one function per setting, ROOTFS_POSTPROCESS_COMMAND += "${MYNETWORKTWEAKS}" and set MYNETWORKTWEAKS = "bridged; " in your machine configuration, you get the idea
<jaskij[m]>
Yup
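vd44's image-level idea, slightly expanded into a hedged sketch (the variable, function and profile names are placeholders):

    # in the image recipe; MYNETWORK can be overridden per image or per machine
    MYNETWORK ?= "separate"
    ROOTFS_POSTPROCESS_COMMAND += "configure_network_profile; "

    configure_network_profile() {
        if [ "${MYNETWORK}" = "bridged" ]; then
            install -d ${IMAGE_ROOTFS}${sysconfdir}/systemd/network
            # drop in whatever br0 .netdev/.network content the bridged profile needs
            printf '[Match]\nName=br0\n\n[Network]\nDHCP=yes\n' \
                > ${IMAGE_ROOTFS}${sysconfdir}/systemd/network/br0.network
        fi
    }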
florian has joined #yocto
<halstead>
moto-timo: I pulled in your patchset and the index is working fine but the rrs portion is failing so I can't take it live.
<halstead>
moto-timo: Want to hack on the rrs code with me and get it Django 2.2 compatible?
<moto-timo>
halstead: absolutely. Do you have logs? Or a reproducer? (Or give me access…)
<halstead>
moto-timo: Sure. I'll email them to you.
<moto-timo>
RRS might be needing more love…
<halstead>
moto-timo: Sent to your gmail.
<halstead>
moto-timo: Would you prefer I paste snippets here?
<moto-timo>
halstead: gmail is great 👍
artri has quit [Ping timeout: 252 seconds]
leon-anavi has quit [Quit: Leaving]
<moto-timo>
halstead: I’ll find a way to drop the py2 deps
<moto-timo>
Die die die
<argonautx>
hi, I put an IMAGE_INSTALL += "kernel-devsrc" in my local.conf but got no /usr/src/kernel installation in my target image
<argonautx>
what went wrong?
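This one isn't answered in the log; for what it's worth, a common gotcha is that image recipes set IMAGE_INSTALL with default assignments, so in local.conf the documented form is an append with a leading space rather than +=:

    IMAGE_INSTALL_append = " kernel-devsrc"

kernel-devsrc then populates /usr/src/kernel on the target once it is actually pulled into the image.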
<halstead>
moto-timo: I suppose for layers that are flagged as python2 only we can just exit immediately instead of standing up the python2 env?
<RP>
reproducibility issues in populate_sysroot down to "just" gobject-introspection, libtool-cross, mesa, perl, python3, rpm and util-linux :/
<moto-timo>
"just"
<RP>
moto-timo: it was everything earlier so it is looking better, now the more painful problems are emerging from the noise :/
<halstead>
moto-timo: Fair enough. I didn't look into how the system handles an early return.
<ndec_>
thanks to whoever suggested https://venueless.org/en/ in the BoF today.. I didn't know about it and it looks great. I will investigate it a bit more! (Jan I think)
<moto-timo>
RP: maybe we can discuss perl and python tomorrow in triage
<moto-timo>
halstead: I forgot about the "layers that support python2" wrinkle...
<RP>
moto-timo: sadly I think I'll have to dive into each one and figure out where the differences are
<RP>
e.g. util-linux is in setarch. No idea what beyond that
florian has quit [Ping timeout: 252 seconds]
<RP>
moto-timo: python3 is in sysconfigdata
<RP>
moto-timo: perl is in config.sh and Config_heavy.pl
<moto-timo>
RP: that sounds familiar with perl...
<RP>
moto-timo: trouble is this is the sysroot so things can have host paths :/
<RP>
somehow we need to remove these from the checksums
<ericson2314>
but there are nano.specs files in the newlib source
<ericson2314>
that hint otherwise
<ericson2314>
yet those have e.g. -lc_nano which doesn't seem to correspond to anything in the rest of newlib, so it is very unclear to me what's going on
<fray>
it's a fork, one that isn't directly compatible with newlib itself..
<ericson2314>
thanks
<fray>
I did some looking, and it appears to be pretty specialized; for MY usecase it wasn't appropriate.. and from what I found it seemed pretty out of date..
<fray>
(I was/am doing some baremetal builds that require a more full featured libc)
goliath has quit [Quit: SIGSEGV]
<ericson2314>
yeah I am trying to get away from a vendor's frankenstein SDK
<ericson2314>
which grabs ARM's prebuilt toolchains among other things
<fray>
everything I do is based on Yocto Project SDKs.. I avoid the ARM, crosstool-ng, etc..
<jonmason>
arm toolchains are great. get them from meta-arm!
<ericson2314>
jonmason: everything I do is with Nix / Nixpkgs, but I come here because of the kindred spirit :)
<fray>
arm toolchains are great, if you want 15 different multilibs.. ;)
<fray>
I've been trying to convince people since we support 3 (main) ARM CPUs.. maybe our SDK should be those three + r5... ;)
<fray>
not 15..
<jonmason>
fray: I'm obligated by my employer to say they are perfect in every way
<ericson2314>
jonmason: are the recipes to create the official binaries public?