<BhsTalel>
I am working for a company that does not allow its Linux servers full Internet access; they are only allowed to compile a list of the mirrors Yocto uses and add them to their firewall.
<BhsTalel>
Is this idea possible? I know there are some major Yocto mirrors that do not change, but I don't think it's feasible to make Yocto work only with specific mirrors, and one cannot collect all mirrors for sure
prabhakalad has quit [Quit: Konversation terminated!]
prabhakalad has joined #yocto
enok has quit [Ping timeout: 240 seconds]
altru has joined #yocto
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
xmn has quit [Ping timeout: 246 seconds]
mbulut has joined #yocto
mischief has quit [Quit: WeeChat 4.1.1]
mischief has joined #yocto
<rburton>
BhsTalel: the downloads.yoctoproject.org mirror covers everything in oe-core and hopefully meta-oe too, but it won't cover _all_ sources you'll need so you need to figure something out. you can download on another machine (eg bitbake world --runall fetch) to create a local source mirror and copy that to the servers?
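The approach rburton describes can be sketched with standard bitbake variables; the paths below are placeholders, and this is an outline of the technique rather than a drop-in config:

```conf
# local.conf on the internet-connected machine: make the fetcher keep
# mirror tarballs (including git repos) in DL_DIR
BB_GENERATE_MIRROR_TARBALLS = "1"
DL_DIR = "/srv/yocto-mirror/downloads"

# populate it without building anything:
#   bitbake world --runall=fetch

# local.conf on the firewalled servers: consume the copied mirror
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "file:///srv/yocto-mirror/downloads"
# optionally forbid any network access so missing sources fail fast:
BB_NO_NETWORK = "1"
```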
enok has joined #yocto
<BhsTalel>
rburton: the issue is that we don't have Linux hosts, Linux hosts are not allowed, and other solutions (WSL, docker, ...) cannot work either, so I need to figure out the full list of mirrors
<BhsTalel>
maybe I will start with the downloads.yoctoproject.org mirror and for every FetchError I will collect the new mirror
<Guest13>
hello. i have an automatic ci/cd that builds images. however, sometimes i get the following error
<Guest13>
Bitbake Fetcher Error: FetchError('Unable to fetch URL from any source.', 'git://git.toradex.com/linux-toradex.git;protocol=git;branch=toradex_5.4-2.3.x-imx')
<Guest13>
I'm not sure why it happens, but re-building usually fixes it.
<Guest13>
is there any workaround for this? (e.g., retry the fetch multiple times?)
<qschulz>
Guest13: the workaround is to mirror it locally once you have fetched it. The network is unreliable, so keeping as much as possible locally helps with that
<qschulz>
and also, makes things much faster AND is kinder to those git servers that are then less hit :)
<Guest13>
qschulz how do I do that?
<qschulz>
Guest13: the easiest is to share DL_DIR between your CI workers
<Guest13>
do i need a separate machine for the mirrors?
<Guest13>
i suppose i cant use it in the same machine as the ci job since it runs in docker
<qschulz>
I personally only use a shared DL_DIR over NFS between our different CI workers
<qschulz>
I would only set a PREMIRRORS if you want users to build stuff locally and make use of that local mirror
<qschulz>
same for SSTATE_DIR, I share that over NFS
<qschulz>
but if I had users of that sstate cache, I would make it a SSTATE_MIRROR
<qschulz>
basically, shared dir = read-write, mirror = read-only from user perspective
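The read-write vs read-only split qschulz describes could look roughly like this (the NFS paths and mirror URL are placeholders, not a known-good setup):

```conf
# CI workers (read-write): share the real directories, e.g. over NFS
DL_DIR     = "/nfs/yocto/downloads"
SSTATE_DIR = "/nfs/yocto/sstate"

# developer machines (read-only): see the same content only as mirrors
PREMIRRORS:prepend = "\
    git://.*/.*     http://yocto-mirror.example.com/downloads/ \
    https?://.*/.*  http://yocto-mirror.example.com/downloads/"
SSTATE_MIRRORS = "file://.* http://yocto-mirror.example.com/sstate/PATH;downloadfilename=PATH"
```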
<Guest13>
so if SOURCE_MIRROR_URL is set and the CI worker doesn't find anything there, what happens?
<Guest13>
it skips, or fetches and stores it there?
<Guest13>
(i suppose since it's read-only, it just skips?)
<qschulz>
Guest13: the mirror is read-only. Your worker will hit the mirror first (provided you set it up in PREMIRRORS); if not found, it will hit the other PREMIRRORS if any, then upstream (as listed in SRC_URI), and if still not found, then MIRRORS
<qschulz>
and download this locally in its DL_DIR
<qschulz>
sorry, it'll look first into its own DL_DIR, if not found then goes to the premirrors first, then upstream, then mirrors
<qschulz>
so each worker would download locally, ideally from the premirror you've set up
<qschulz>
something has to feed this mirror though otherwise it'll stay empty (as it's read-only from user perspective)
<Guest13>
so, recapping, for CI workers, i should use the read-write shared DL_DIR
<Guest13>
and from the user perspective, they can use the bitbake downloads directory, set up as the mirrors, right?
mvlad has joined #yocto
<qschulz>
Guest13: I think they could yes, but make it a PREMIRROR for your users
<qschulz>
otherwise they may modify DL_DIR
<qschulz>
and you don't want that
<qschulz>
e.g. a user runs bitbake -c cleanall <recipe> and one of your CI worker is running and fetching this tarball at the same time it's being removed => problems
<qschulz>
(in any case, never run -c cleanall or -c cleansstate on anything that is using a shared build directory, shared download directory or shared sstate cache)
<qschulz>
ah never mind, didn't take the time to read what that was doing
<qschulz>
no clue how to do this on non self-hosted workers
<qschulz>
we have a self-hosted GitLab instance + workers, so we have some ansible mounting this NFS on the host, then have it exposed inside the gitlab worker via volumes
<Guest13>
FROM ubuntu:latest
<Guest13>
RUN apt-get update && apt-get install -y nfs-common
<Guest13>
trying to mount it on the docker image, no success for now :(
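Mounting NFS from inside an unprivileged container usually fails; the common pattern is to let Docker's local volume driver do the NFS mount and bind the volume into the container. A docker-compose sketch, with hypothetical server and export names:

```yaml
# the "local" volume driver performs the NFS mount on the host side,
# so the image itself needs no nfs-common and no extra privileges
volumes:
  yocto-dl:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,rw,nfsvers=4"
      device: ":/export/yocto/downloads"

services:
  ci-worker:
    image: ubuntu:latest
    volumes:
      - yocto-dl:/nfs/downloads   # becomes DL_DIR inside the worker
```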
Saur_Home5 has quit [Quit: Client closed]
Jones42__ has joined #yocto
Saur_Home5 has joined #yocto
Jones42_ has quit [Read error: Connection reset by peer]
enok has quit [Ping timeout: 240 seconds]
altru has quit [Quit: Client closed]
<qschulz>
BTW, before I forget, do **NOT** access DL_DIR/SSTATE_DIR at the same time through NFS and through another fs; the flock doesn't work across those (been there :) )
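A minimal illustration of the advisory file locking involved (this sketches the mechanism, not bitbake's actual code): the lock only excludes other processes that lock the same file through the same filesystem view, which is why mixing a local mount and an NFS mount of the same directory is dangerous.

```python
import fcntl
import os
import tempfile

# Take an exclusive advisory lock on a lock file, do some work, release.
# A lock taken via a local mount is not guaranteed to be visible to a
# process locking the same file through an NFS mount, and vice versa.
lock_path = os.path.join(tempfile.mkdtemp(), "downloads.lock")
with open(lock_path, "w") as lf:
    fcntl.flock(lf, fcntl.LOCK_EX)   # exclusive lock for the "fetch"
    lf.write("fetch in progress\n")  # critical section
    fcntl.flock(lf, fcntl.LOCK_UN)   # release before closing
print(os.path.exists(lock_path))
```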
enok has joined #yocto
enok has quit [Ping timeout: 268 seconds]
florian_kc has joined #yocto
enok has joined #yocto
lexano has joined #yocto
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
Guest13 has quit [Quit: Client closed]
Kubu_work has joined #yocto
BhsTalel has quit [Quit: Client closed]
enok71 has joined #yocto
enok has quit [Ping timeout: 240 seconds]
enok71 is now known as enok
goliath has quit [Quit: SIGSEGV]
enok71 has joined #yocto
enok has quit [Quit: enok]
enok72 has joined #yocto
florian_kc has quit [Ping timeout: 272 seconds]
enok71 has quit [Ping timeout: 268 seconds]
enok72 has quit [Ping timeout: 268 seconds]
BhsTalel has joined #yocto
altru has joined #yocto
MrCryo has joined #yocto
goliath has joined #yocto
xmn has joined #yocto
Jones42_ has joined #yocto
Guest60 has joined #yocto
<Guest60>
Hey, anyone have any experience building Yocto Linux under Gentoo?
alessioigor has joined #yocto
Jones42__ has quit [Ping timeout: 268 seconds]
<mihai>
Guest60: go ahead, maybe it's not even gentoo specific what you're about to ask or state
<RP>
qschulz: it is supposed to work, that is probably your nfs setup
<RP>
qschulz: well, not just supposed, it does work on the autobuilder
<rburton>
notably though the AB has a NAS appliance for the NFS server that is not linux
<qschulz>
RP: so you have one worker using e.g. ext4, and another using NFS with the content from the ext4 fs and it works just fine?
<landgraf>
Guest60: I use gentoo to build YP based linux yes
mihai has quit [Ping timeout: 268 seconds]
<RP>
qschulz: I was thinking that the nas can change and work with files but as rburton says, that is a BSD NAS appliance
<RP>
I've only done local ext4 shared over NFS
mihai has joined #yocto
<qschulz>
RP: it's been a very long time (2 years probably?) since the IT looked into it, so I cannot provide more info than we just stopped doing this entirely and migrated everything over NFS, even when the NFS was basically localhost
<RP>
qschulz: fair enough, I have some memory of this now. Would be nice to understand what went wrong as in theory it should work
<qschulz>
funny how the discussion started the exact same way, just a year and a half later :D
prabhakarlad has quit [Ping timeout: 260 seconds]
hiagofranco has joined #yocto
BhsTalel has quit [Quit: Client closed]
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
Guest60 has quit [Quit: Client closed]
sgw has joined #yocto
florian_kc has joined #yocto
amitk has quit [Ping timeout: 264 seconds]
florian_kc has quit [Remote host closed the connection]
hiagofranco has quit [Quit: Client closed]
<qschulz>
I'm a bit perplexed by what bitbake returns in DEPENDS for my PACKAGECONFIG varflag
<qschulz>
I have
<qschulz>
PACKAGECONFIG[gallium] = "-Dgallium-drivers=${@strip_comma('${GALLIUMDRIVERS}')}, -Dgallium-drivers='', libdrm ${@'libclc' if 'iris' in d.getVar('GALLIUMDRIVERS', '').split(',') else ''}"
enok has joined #yocto
<qschulz>
but it keeps it inline somehow, and then for the native variant of the recipe, I get 'libclc'-native if-native 'iris'-native etc.... in DEPENDS
<qschulz>
keeps it inline -> keeps it unexpanded/unresolved
vthor has joined #yocto
<qschulz>
using an intermediate variable made it work... what am I missing here?
<RP>
qschulz: is there some kind of conditional inherit involved? or how/where is PACKAGECONFIG set?
<RP>
it is probably related to how/when/where the PACKAGECONFIG code is processed
<qschulz>
mmm could be the anonymous python then I guess
enok has quit [Ping timeout: 240 seconds]
<qschulz>
but d.getVar in the anonymous function should expand this by default as that's the default value for getVar method from the DataSmart
<RP>
qschulz: could be some sort of quoting problem, or a genuine bug
<qschulz>
RP: I was already surprised by ${@strip_comma('${GALLIUMDRIVERS}')} (already in the recipe)
<qschulz>
RP: moving ${@'libclc' if 'iris' in d.getVar('GALLIUMDRIVERS', '').split(',') else ''} to VAR= and then use ${VAR} in PACKAGECONFIG[gallium] worked just fine
MrCryo has quit [Remote host closed the connection]
<qschulz>
so doesn't seem to be a quoting issue?
<RP>
qschulz: I suspect the regex bitbake is using to find code to run or something so two ${@} usages or something
<qschulz>
RP: then a corner case somewhere because we have multiple regex in PROVIDES of the same recipe for example (though on a newline each....)
<qschulz>
sorry, multiple inline python
<qschulz>
this mesa update is a bit of a nightmare for someone who knows nothing of mesa nor meson :)
Jones42__ has joined #yocto
Jones42_ has quit [Ping timeout: 246 seconds]
vvn has joined #yocto
<Xogium>
mesa, meson, that already sounds confusing ;p
michael_e has joined #yocto
rfuentess has quit [Remote host closed the connection]
enok has joined #yocto
michael_e has quit [Quit: Client closed]
sgw has quit [Quit: Leaving.]
enok has quit [Quit: enok]
enok has joined #yocto
alessioigor has quit [Quit: Client closed]
<qschulz>
khem: I'm trying to work out the mesa-native 24.1.0 recipe and we now need libclc for x86 hosts (iris Gallium driver requires clc now...)
<qschulz>
khem: however I get:
<qschulz>
The file /usr/lib/libLLVMPasses.a is installed by both llvm-native and clang-native, aborting
<qschulz>
master branch on meta-clang
<qschulz>
have you seen this already or is this one more new issue (which could be PEBKAC :) ) I found in the last couple of days :( ?
enok has quit [Read error: Connection reset by peer]
enok has joined #yocto
mckoan is now known as mckoan|away
leon-anavi has quit [Quit: Leaving]
enok has quit [Ping timeout: 240 seconds]
<qschulz>
RP: I FOUND THE ISSUE!
<qschulz>
So, it seems that we do a split by comma in PACKAGECONFIG before running the inline python
<qschulz>
and since I used a comma in my inline python... it broke the parsing in weird ways, without failing outright
<qschulz>
it just put the rest of the stuff in RDEPENDS (IIRC that's what's after DEPENDS in the PACKAGECONFIG[v] entry?)
<qschulz>
So that's why putting it in another variable made it work
<qschulz>
so not sure what to do with this, is this an actual bug or not?
<qschulz>
because I'm not sure we want to expand everything in PACKAGECONFIG[v] before we split by comma?
<RP>
qschulz: oh, interesting. I wonder if we should detect/warn about that
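The pitfall qschulz found can be reproduced standalone (the value is adapted from the mesa recipe discussed above; the split below mimics the behaviour, not bitbake's actual implementation):

```python
# PACKAGECONFIG[feature] slots (args-if-enabled, args-if-disabled,
# DEPENDS, RDEPENDS, ...) are separated by commas, and the split
# happens before inline ${@...} python is expanded.
value = ("-Dgallium-drivers=on, -Dgallium-drivers='', "
         "libdrm ${@'libclc' if 'iris' in d.getVar('GALLIUMDRIVERS').split(',') else ''}")
fields = value.split(",")
# The comma inside split(',') is treated as a field separator, so the
# DEPENDS field ends mid-expression and the tail leaks into RDEPENDS:
print(fields[2])  # " libdrm ${@'libclc' if 'iris' in d.getVar('GALLIUMDRIVERS').split('"
print(fields[3])  # "') else ''}"
```

Moving the inline expression into an intermediate variable hides the comma from this split, which is why that workaround succeeded.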
goliath has quit [Quit: SIGSEGV]
altru has quit [Quit: Client closed]
florian has quit [Quit: Ex-Chat]
wdouglass has joined #yocto
<wdouglass>
Hi all. Is there a variable i can use in a .bbappend file that refers to the location of the .bbappend file, and not the original recipe? I'm trying to do something like `FILESEXTRAPATHS:prepend := "${THISDIR}/files:"` but it resolves to the wrong location
<qschulz>
wdouglass: that's exactly what this does
<qschulz>
wdouglass: but I think you may be looking at the wrong thing, can you explain what you're trying to do and what's happening?
tgamblin has quit [Ping timeout: 246 seconds]
<qschulz>
khem: never mind, it's because we have both gallium and gallium-llvm in mesa-native, and gallium now requires libclc-native which brings in clang-native as well....
<wdouglass>
I'm trying to add functionality to a devkit which defines a bunch of packages in place with EXTERNALSRCDIR. It uses a config file that matches the SDK development board, and not the custom board i'm designing. I can override the board config in one of the tools that comes from upstream in a make argument, so i've created a BBAPPEND that looks like this: https://pastebin.com/uCxUzeX5
tgamblin has joined #yocto
<wdouglass>
I think because of EXTERNALSRCDIR, it's not actually copying `board_config` anywhere, and ${B} is referring to the source directory for the upstream package
<wdouglass>
then the upstream looks in the original recipe directory, and not the bbappend location
<qschulz>
yup, expected
<wdouglass>
so my question is -- what do? how do i get it to find `board_config` in the right place?
<qschulz>
you can use BBAPPEND_LOCATION := "${THISDIR}" and then EXTRA_OEMAKE:append = " BOARD_CONFIG=${BBAPPEND_LOCATION}/board_config"
<wdouglass>
Oh interesting
<qschulz>
BUT, this isn't ideal anyway
<wdouglass>
what is ideal? it seems not-obvious to me when all of these things get parsed
<qschulz>
your SRC_URI should put this file somewhere available to your recipe
<qschulz>
in everything but the current master branch of Yocto, it's WORKDIR
<wdouglass>
I understand. can i undo the `inherit externalsrc` that's in the original recipe? I'm trying very hard not to patch the upstream sdk
<qschulz>
so if you use BOARD_CONFIG=${WORKDIR}/board_config this could probably work
<wdouglass>
ok i'll give that a shot
<qschulz>
wdouglass: no, you cannot uninherit stuff as far as I know
tgamblin has quit [Read error: Connection reset by peer]
tgamblin has joined #yocto
<wdouglass>
${WORKDIR}/board_config worked! thanks a ton qschulz!
sudip_ is now known as sudip
<qschulz>
note that using WORKDIR is a bit finicky, so I would suggest unpacking with destsuffix into e.g. ${S} and then using ${S} instead of WORKDIR
<qschulz>
the issue being that if you remove the SRC_URI, the file will still exist (except if you remove your whole TMPDIR)
<qschulz>
(well, the one for the recipe, that is)
<qschulz>
hence why we're migrating to UNPACKDIR now
<wdouglass>
well because of the externalsrc inherit, ${S} is the upstream source directory
<wdouglass>
so it doesn't seem right to put stuff there
zpfvo has quit [Remote host closed the connection]
<qschulz>
true
<qschulz>
nothing better to suggest right now
<rburton>
for not-master, WORKDIR/ is the right thing
<qschulz>
rburton: yes, but still not always working
<wdouglass>
ok thank you guys very much (because of my vendor, i'm stuck on kirkstone for the time being, so i'm a bit behind the times anyway)'
<qschulz>
wdouglass: kirkstone is not THAT bad, you still have two years to prepare for the next update ;)
<wdouglass>
:)
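The approach that worked in the exchange above can be sketched as a bbappend (the file name is from wdouglass's description; the recipe details are hypothetical, and ${WORKDIR} as the unpack location applies to kirkstone and other pre-UNPACKDIR releases):

```conf
# example_%.bbappend: ship board_config next to the bbappend and pass
# its unpacked location to make, instead of pointing into ${THISDIR}
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://board_config"
# on kirkstone, files fetched via SRC_URI are unpacked into ${WORKDIR}
EXTRA_OEMAKE:append = " BOARD_CONFIG=${WORKDIR}/board_config"
```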
jmd has joined #yocto
tgamblin_ has joined #yocto
tgamblin has quit [Ping timeout: 268 seconds]
tgamblin_ is now known as tgamblin
<qschulz>
RP: ah, discovered that packageconfig-conflicts-for-f1 isn't expanded at all? Tried to use with inline python or a variable and it didn't parse
jmd has left #yocto [ERC 5.4 (IRC client for GNU Emacs 28.2)]
enok has joined #yocto
enok has quit [Ping timeout: 240 seconds]
enok has joined #yocto
amitk has joined #yocto
dankm has quit [Remote host closed the connection]
amitk has quit [Ping timeout: 240 seconds]
dankm has joined #yocto
amitk has joined #yocto
alessioigor has joined #yocto
alperak has quit [Quit: Connection closed for inactivity]
xmn has quit [Ping timeout: 240 seconds]
alessioigor has quit [Quit: Client closed]
xmn has joined #yocto
enok has quit [Ping timeout: 240 seconds]
vthor_ has joined #yocto
vthor has quit [Ping timeout: 255 seconds]
florian has joined #yocto
mbulut has quit [Ping timeout: 268 seconds]
amitk has quit [Remote host closed the connection]
<khem>
this is binutils for musl on target and I wonder why it's complaining about ld-linux-x86-64.so.2()(64bit) which comes from glibc
mvlad has quit [Remote host closed the connection]
enok has joined #yocto
wdouglass has left #yocto [ERC 5.5.0.29.1 (IRC client for GNU Emacs 29.1)]
<RP>
khem: no idea on that :/
<khem>
and it's using plain poky
<RP>
it is as if it has lost the glibc provides
<RP>
hmm, rc1 is a bust
* RP
merges a fix and tried an rc2
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
roussinm has joined #yocto
<roussinm>
We are using qsb (qt shader tool), and it lives in the native part of the sdk. Let's say you have a SDK from mickledore and someone using a host Ubuntu 24. When `qsb` runs on shader it will fail because of linkage on libGLdispatch.so.0 on the host machine. `/$SDKPATH/lib/libc.so.6: version `GLIBC_2.38` not found (required by /lib/x86_64-linux-gnu/libGLdispatch.so.0)` If you use an older version of
<roussinm>
Ubuntu, it works. If we upgrade to a newer yocto version, it works again too. Wondering if this is more about how we use the tool, a qt issue, or a yocto sdk issue? We don't really need to target multiple versions of glsl, and we don't build for hlsl or metal either, so maybe the tool is useless to us?
enok has quit [Ping timeout: 240 seconds]
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
<RP>
roussinm: you need the libc in the SDK to be >= the host libc. The newer yocto build will have a newer libc
florian has quit [Ping timeout: 260 seconds]
<roussinm>
Is that always a requirement?
<RP>
roussinm: it depends how much you're mixing up the binaries/libs
<roussinm>
RP: I guess here the problem is the GL library, because cmake works correctly.
<RP>
roussinm: right, that would do it
<RP>
the problems start when mixing elements of both, then you need the SDK to be >=
<roussinm>
RP: I think that's pretty much only tool that we need that uses libGLdispatch. Wondering if libglvnd would help?
Saur_Home5 has quit [Quit: Client closed]
Saur_Home5 has joined #yocto
<RP>
roussinm: I don't know enough about that specifically to comment. I've just spent far too long with linking errors like that
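A quick way to see the host side of the comparison RP describes (CS_GNU_LIBC_VERSION is a confstr name available on glibc-based Linux; the SDK-side command in the comment is a suggestion, with $SDKPATH a placeholder):

```python
import os

# Print the host glibc version; compare it against the SDK's bundled
# libc, e.g.:  strings $SDKPATH/lib/libc.so.6 | grep '^GLIBC_'
# Mixing host libraries (like libGLdispatch) into SDK processes only
# works when the SDK libc is >= every version those host libraries
# were linked against.
host_libc = os.confstr("CS_GNU_LIBC_VERSION")  # e.g. "glibc 2.35"
print(host_libc)
```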
Kubu_work has quit [Quit: Leaving.]
xmn has joined #yocto
mbulut has joined #yocto
<Xogium>
so I have a system where the only storage medium is micro sd. I *hate* micro sd, but I don't have much choice in the matter... Somehow today, the overlayfs I have sitting on top of my a/b rootfs shows as totally fine when examined from another machine, but when booted, the system shows various strange things. For instance the machine id was actually visible in /etc/locale.conf, and my rauc system config had
<Xogium>
been seemingly replaced by the keymap for a remote control. Those files were not altered in the real rootfs (squashfs), however, and not present when examining the overlayfs elsewhere. I guess what I'm asking is: does it sound like that micro sd is on its way out, or what?
<Xogium>
trying to rm the affected files just said, stale file handle
<Xogium>
no expert but to me it sounds like the whole setup is about to crumble heh
mbulut has quit [Ping timeout: 268 seconds]
<khem>
RP: the binutils issue does not happen when building in a debian12 container, only on latest archlinux, so I guess it's some future problem that I am seeing