<ThomasD13>
In terms of writing a recipe that will execute that docker image, somehow retrieve its output, and package it
<JosefHolzmayr[m]>
ThomasD13: you can hack it up and it will cause you pain.
<ThomasD13>
pain?
<JosefHolzmayr[m]>
ThomasD13: you either want to be able to build the application directly in the recipe, i.e. the yocto way, or create some form of ci/cd-ish build pipeline. everything else is just a road to terror.
<JosefHolzmayr[m]>
ThomasD13: yes. pain.
<ThomasD13>
Hmm.. this docker image is used to build the application via gitlab ci/cd, because it's kinda complex. So all the dependency/knowledge of how to build this application is within that image.
<ThomasD13>
At first glance, I thought it could be a good idea to reuse this container, so that yocto uses it to build that specific application
<JosefHolzmayr[m]>
think about it. the yocto-style build process goes to great lengths to ensure reproducibility: to cache downloads, to provide a correct sstate. any recipe that brings a "hey let's just pull this container, no idea what it does, pulls itself, reproduces, or whatever" is effectively disabling all that.
<JosefHolzmayr[m]>
so if you totally and absolutely are convinced that the application can only be built in the container, then you effectively need a build pipeline: 1) container builds the thing 2) yocto recipe packages the built thing as a ready-made artifact.
<JosefHolzmayr[m]>
this way, yocto ensures it stays sane itself. given the same artifact, it ensures reproducibility. everything else is somebody else's problem, then.
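The two-step pipeline described above can be sketched as a recipe that only packages a ready-made artifact. This is an illustrative sketch, not from the discussion; the recipe name, URL, paths and checksum are hypothetical placeholders:

```bitbake
# Hypothetical recipe: step 2 of the pipeline. The container CI (step 1)
# has already built and published the binary; this recipe only packages it.
SUMMARY = "Prebuilt R5 application packaged from a CI artifact"
LICENSE = "CLOSED"

SRC_URI = "https://ci.example.com/artifacts/my-app-${PV}.tar.gz"
SRC_URI[sha256sum] = "replace-with-the-artifact-checksum"

S = "${WORKDIR}/my-app-${PV}"

# Nothing to configure or compile; the artifact is ready-made.
do_configure[noexec] = "1"
do_compile[noexec] = "1"

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${S}/my-app ${D}${bindir}/my-app
}
```

Given the same artifact and checksum, the fetcher and sstate machinery stay reproducible; everything before the artifact is the pipeline's problem, exactly as described.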
<ThomasD13>
Hmm ok. I think I'm not able to rewrite that application build process as a yocto-style recipe.. :/
<JosefHolzmayr[m]>
then have fun pipelining.
<ThomasD13>
So I assume I'm entering new territory? There aren't any public classes/recipes doing something similar?
<JosefHolzmayr[m]>
ThomasD13: and the rest, well, that's your CI/CD pipeline and infrastructure. it's just outside our scope.
<ThomasD13>
thank you very much josef :)
<JosefHolzmayr[m]>
have fun.
<ThomasD13>
I won't ;)
<ThomasD13>
Next time, I'll beg my boss not to use TI chips
<JosefHolzmayr[m]>
don't blame TI, blame those who created the build processes and bsps that gave you the headaches.
<rburton>
I can't see why you can't just call docker from inside a do_compile, assuming that your build user has permission to use docker.
<rburton>
it's madness, obviously
<rburton>
it's quite possibly trivial to reproduce the build, and the docker image is just there to bring the build dependencies and cross-compilers in a single blob
<rburton>
have a look at the dockerfile and see what it does
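If one really does call docker from a task as rburton suggests, a minimal (and admittedly mad) sketch might look like this. The image name, mounts, and build script are all hypothetical, and it assumes the build user may run docker:

```bitbake
# Sketch only: assumes the build user is in the docker group and the
# vendor image is already present on the host.
do_compile() {
    docker run --rm \
        -v ${S}:/src \
        -v ${B}:/build \
        vendor/r5-builder:1.0 \
        /src/ci-build.sh
}
# The container's behaviour is opaque to bitbake, so always re-run the
# task rather than trusting stamps.
do_compile[nostamp] = "1"
```

This is exactly the "disabling reproducibility" trade-off discussed above: bitbake cannot see inside the container, so it cannot cache or verify what happens there.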
<JosefHolzmayr[m]>
"you can hack it up and it will cause you pain." :)
<JosefHolzmayr[m]>
i still am convinced this holds true. i never said it can't be done.
<JosefHolzmayr[m]>
because if the docker container were just pulling in deps and toolchains, then... but in my experience, they hardly ever do "just that", and often not in a static manner.
<JosefHolzmayr[m]>
so lots of variations, it all depends on the particular situation.
<ThomasD13>
Hmm.. building that application (for the R5) is quite complex. There are many software packages as dependencies (sysbios/freertos, xdc, specific build tools, network libs, and so on) and some glue scripts influenced by many ENV variables which in the end produce a functional app.
<JosefHolzmayr[m]>
extra brownie points for the container using a proprietary toolchain or tooling that needs a license key being passed in and checks its validity online.
<ThomasD13>
All this is done in a reproducible way in that docker image. Well, I'll have to see which way to go..
<ThomasD13>
Huh? what license key?
<JosefHolzmayr[m]>
like said, have fun.
<JosefHolzmayr[m]>
(that was an example for fun things that can happen)
<qschulz>
also you're going to have a hard time if the dependencies are patched by the distro used in the container and/or are newer/older releases
<RP>
kanavin_: librsvg recipe looks good, if a little worrying and hard to understand the "magic". You do like a challenge :)
<kanavin_>
RP: I like showing off too ;)
<kanavin_>
RP: I tried to add comments to the 'magic' bits
<kanavin_>
RP: I hope we won't see more of this autotools on top of cargo things
<RP>
kanavin_: the comments definitely help. I hope we don't see more of that kind of thing but who knows...
<RP>
kanavin_: reminds me a lot of how things used to be with gcc
<kanavin_>
RP: I'd like to throw the whole patchset on the AB again, but would like to do it on top of your task optimizations
<kanavin_>
I think those are not quite ready?
<RP>
kanavin_: actually, master-next isn't too bad now. There was one selftest that failed in the last test and I've put a "fix" for that and am retesting.
<RP>
kanavin_: The only downside is that fix is a total rebuild which is in progress atm, no sstate reuse
* RP
had to revert the native part of the siteinfo change which makes me sad
<kanavin_>
RP: as written in the commit, none of us understands the rust toolchain, or its consumers very well, so as we bring in more consumers, patterns for better solutions will emerge
<kanavin_>
e.g. consolidating and improving the hacks from individual recipes like librsvg
<RP>
kanavin_: definitely. gcc sysroot support was kind of a result of that :)
<kanavin_>
RP: one thing I'd like to do differently this time is take a hint from meson, and try to avoid relying on environment variables
<kanavin_>
they're ephemeral and it's hard to say what was there and what wasn't when things don't work - much better to have a config file on disk, and a log with command line switches to get the picture
<RP>
kanavin_: right, environment variables do tend to be problematic after a while. I still mean to go and try and reduce the number we have exported
<RP>
(to be clear, I did clean it up a lot but we're down to the harder ones - tricky to know where they are used)
<Alban[m]>
Hi! I'm thinking about setting up an internal layer index to automate the fetching of dependent layers in my projects. I found layerindex-web, but the README says it has no REST API, so it sounds like it could not be used with bitbake-layers layerindex-fetch. Is there an implementation of this API available somewhere else?
<dwagenk>
Hello. I know this is not a lawyers' chat, but I'd be interested whether my take on INITRAMFS_IMAGE_BUNDLE is consensus or if there are different takes on this: bundling the initramfs does NOT make the kernel+initramfs a derivative work in the sense of GPLv2. Thus the initramfs may contain proprietary (GPL-incompatible) programs.
<JosefHolzmayr[m]>
based on this information one might conclude that your interpretation of bundling is wrong.
<dwagenk>
thyoctojester: thanks for that hint! That looks pretty clear. Maybe we should add a hint about this to the yocto docs, since the implications of setting INITRAMFS_IMAGE_BUNDLE are easy to overlook. I know the documentation is pretty clear about whose obligation it is to ensure license compliance, but this feels like a trap. Especially since step 2 in https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#building-an-initramfs-image
<dwagenk>
recommends bundling due to technical reasons.
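For context, bundling is controlled by two variables; once enabled, the initramfs cpio is linked into the kernel image itself, which is exactly what raises the derivative-work question. A typical configuration sketch:

```bitbake
# local.conf / machine conf sketch: bundle the initramfs into the kernel image
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
```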
<JosefHolzmayr[m]>
dwagenk: patches welcome!
<JosefHolzmayr[m]>
also, michaelo
<dwagenk>
theyoctojester: I'll prepare something
<JosefHolzmayr[m]>
perfect, thanks!
<tlwoerner>
RP: are you attending the plumbers conference?
<tlwoerner>
RP: in 15 minutes there's a "GCC steering committee Q&A (glibc, binutils, gdb)"
<tlwoerner>
might be a good opportunity to ask about glibc symbols (?)
<tlwoerner>
RP: today there's also the "Tracing Microconference" which might provide an opportunity to think about pseudo (?)
<tlwoerner>
i have started doing some research into pseudo/fakeroot things, but probably not enough yet
<moto-timo>
JPEW: what features/factors made you choose zuul?
<JPEW>
moto-timo: A bunch of reasons: 1) it's a complete CI solution (Tekton isn't *quite* there, and I don't think it's really intended to be)
<moto-timo>
JPEW: agreed, Tekton was designed to be the backend for a full solution, such as Jenkins-X
<JPEW>
2) It fully runs in K8s (see zuul-operator). It can actually still use a bunch of different worker nodes though. I only have it using K8s pods for building, but it can connect to Openstack nodes, AWS, GCE, etc.
<JPEW>
and you can mix and match pretty easily
<moto-timo>
ah
<JPEW>
3) It's really flexible. You define what pipelines you want and what they do, then the individual projects say what jobs they want in each pipeline
<JPEW>
So... the recommended setup is to have a "check" pipeline that runs on every new change and does the basic sanity checks
<JPEW>
Then have a "gate" pipeline that runs your full CI and automated testing after a code review; when that passes, Zuul submits the change for you
<JPEW>
(it also does speculative merging for stacked "gate" changes, so it can handle a lot of changes at once)
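The check/gate split JPEW describes maps onto Zuul's pipeline configuration roughly like this. This is a sketch modelled on the upstream Zuul documentation; the exact events, labels and vote values vary per deployment:

```yaml
# Sketch of a Gerrit-driven check/gate setup, after the Zuul docs.
- pipeline:
    name: check
    manager: independent          # every change is tested on its own
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1

- pipeline:
    name: gate
    manager: dependent            # speculatively merges stacked changes
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Code-Review: 2
    success:
      gerrit:
        Verified: 2
        submit: true              # Zuul submits the change itself
```

The `dependent` manager is what gives the speculative merging of stacked gate changes mentioned above.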
<moto-timo>
ah, that's like the gerrit workflow we were using for the "Living on Master" ELCE talk
<JPEW>
4) Zuul is run by openstack, so gerrit is a first class citizen (not important here, but we use gerrit at work)
<moto-timo>
I was aware of openstack/gerrit
<JPEW>
It's really nice to mark a PR/commit as "ready-to-merge" and know it will go through a full CI and merge after it passes.... kinda fire-and-forget :)
<moto-timo>
like openstack, zuul has some awkward naming, but that's just lack of familiarity I suppose
<JPEW>
Ya, it took me a while to grok all the nomenclature.... also it's all written in Python so I can actually hack on it
<moto-timo>
that is indeed nice... we had very light human touch automation for a rolling-release
<moto-timo>
Python is a huge plus, obviously
<moto-timo>
You're running this on one of your home servers I assume?
<JPEW>
Ya. I will say that it's very confusing at first to get a system stood up, because they've broken the entire thing down into a bunch of very small, dedicated components.... so you have to run a scheduler, executor, merger, web-server, nodepool, etc. independently (zuul-operator helps a lot with this though)
<JPEW>
Ya, on my home servers
<moto-timo>
that's very openstack of them ;)
<JPEW>
Ya, it makes sense after you see why they did it... but still confusing :)
<moto-timo>
Of course the dashboard project I did also had many small components/microservices...
<moto-timo>
which grew organically over time and the whole team knew what they all were. LOL
<smurray>
heh
<JPEW>
moto-timo: I think I've fully bought into the "operator" concept with K8s
<moto-timo>
JPEW: when it works, it is better than Helm
<RP>
tlwoerner: sorry, I'm not at plumbers, no :(
* moto-timo
is at plumbers
<rburton>
you can be "at" plumbers for free, its live on youtube
<moto-timo>
indeed
<RP>
rburton: ah, well, I also didn't see this until now :/
<rburton>
you can also rewind the live streams to watch at your leisure
<RP>
rburton: right, but that wasn't quite what tlwoerner was suggesting
<rburton>
ah right yeah asking a question means registering, sorry
<mrnuke>
Hi, I'm trying to use the arm eSDK installer from downloads.yoctoproject.org, but when I run the shell script to install it, it takes forever.
<mrnuke>
And I keep getting "WARNING: Error contacting Hash Equivalence Server typhoon.yocto.io:8686: [Errno 110] Connect call failed ('35.233.185.178', 8686)"
<RP>
mrnuke: we were just discussing that. It has been fixed recently but won't be in the last point releases
<RP>
mrnuke: 3.4 M3 should be ok
<mrnuke>
RP: thanks. I'll give that a try then!
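As a stop-gap when the configured hash equivalence server is unreachable, the setting can be cleared so bitbake stops trying to contact it. This is a sketch, not from the discussion, and assumes the installed SDK's local.conf is editable:

```bitbake
# local.conf sketch: disable hash equivalence lookups entirely
BB_HASHSERVE = ""
```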
<JosefHolzmayr[m]>
moto-timo: JPEW funny reading you muse about that when I'm just getting into gitlab ci/cd ;-)
<JPEW>
JosefHolzmayr[m]: Ya, gitlab CI seems good also (no personal experience). I wanted something that A) I could selfhost and B) wasn't tied to any particular git provider
<JosefHolzmayr[m]>
funny, my objective is explicitly to not-selfhost :)
<JPEW>
Sure! for non-yocto builds I use github actions (the gitlab equivalent works just as well). Too much extra infra needed for fast Yocto builds (sstate, hashequiv, etc.)
<qschulz>
JPEW: you can self-host everything from GitLab I'm pretty sure?
<moto-timo>
khem has some GitHub actions for yoe-distro
<JPEW>
qschulz: Yes, you can. I only want to self-host the CI not the code (maybe gitlab supports that, would have to look)
<JPEW>
qschulz: I know you can attach k8s runners to gitlab and have it do the builds locally
<moto-timo>
I self-hosted gitlab ci runner, using public gitlab repos
<JPEW>
^^ like that :)
<moto-timo>
But I got bogged down with running testimage (qemu running in a docker container)
<moto-timo>
And then I switched to podman
<moto-timo>
Still haven’t gotten the k8s gitlab ci runner functional yet
<qschulz>
though they explicitly say in the docs that you probably shouldn't run your own gitlab ci runner on a public repo. But that drastically limits the interest of it :p
<RP>
JPEW: would systemd-analyze show up potential races?
* RP
is pondering that issue again
<JPEW>
RP: No, but it might tell us why systemd is tearing down all the networking when systemd-timesyncd is stopped
<JPEW>
Perhaps one or the other doesn't like the discontinuous jump in time
<RP>
JPEW: something like that seems likely
<JPEW>
Maybe it's not the service, maybe it's when the test changes the time
<RP>
JPEW: right, it's a question of how to debug it
<ccf>
Hi, how do I execute a shell function from within a python function?
<qschulz>
ccf: bb.build.exec_func
<qschulz>
though you can use [postfunc] and [prefunc] flags for your python task
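qschulz's answer in a recipe context might look like this minimal sketch; the task and function names are made up for illustration:

```bitbake
# A shell function we want to reuse from python.
do_fixup_tree() {
    # runs with the task's usual shell environment
    chmod -R go-w ${S}
}

python do_prepare() {
    bb.warn("doing python-side work first")
    # execute the shell function above in this task's context
    bb.build.exec_func('do_fixup_tree', d)
}
addtask prepare after do_unpack before do_compile
```

The flag form mentioned above would run it automatically instead: `do_prepare[prefunc] = "do_fixup_tree"`.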
<sgw>
Morning Folks, is anyone else seeing a new ImportWarning
<sgw>
/usr/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
<sgw>
return f(*args, **kwds)
<mrnuke>
new question. I'm trying to build a docker image (for gitlab-CI) with an arm toolchain and the mesa, libdrm and boost packages. I do 'devtool sdk-install -s boost libdrm mesa' within the docker build. The resulting image is 17GB in size. Is there a way to clean up the bloat?
<vd>
How can I figure out which recipe a binary comes from? (e.g. "resizepart")
<vd>
rburton: "Unable to find pkgdata directory" do I need to run something first?
<rburton>
you need to build something
<rburton>
can't tell you what package contains a file if there is nothing packaged
<rburton>
util-linux, fwiw
<rburton>
(bitbake recipes don't have to list exactly what is in each package, so you can't know from just the metadata)
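Once something has been built (so pkgdata exists), the lookup vd asked about is usually done with oe-pkgdata-util inside the build environment; the path below is an example:

```shell
# Which package ships a given file on the target?
oe-pkgdata-util find-path /usr/sbin/resizepart

# And which recipe produced that package?
oe-pkgdata-util lookup-recipe util-linux
```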
<Tokamak>
random question. I'm finding it rather undesirable that yocto clobbers the kernel version.h's source revision date as shown by "uname -v". Does anyone know if there is a way to disable this clobbering for the kernel class?
<rburton>
turn off reproducible builds
<zeddii>
+1
<Tokamak>
i had thought about that, but it's somewhat desirable to have a consistent date at the filesystem level. is there a way to disable reproducible builds just for the kernel?
<rburton>
sure, disable reproducible builds for just the kernel
<rburton>
BUILD_REPRODUCIBLE_BINARIES=0
<Tokamak>
forgive me as i'm still not comfortable with yocto variable scoping / masking. but do i need to mask that variable by PN somehow? or if i set it to 0 just in linux-xlnx (for example), it will stay scoped to that package?
<rburton>
in the recipe, use it as-is. or in your local.conf or distro conf you could do BUILD_REPRODUCIBLE_BINARIES_pn-linux-xlnx = "0"
<Tokamak>
ok, so recipes have some scoping, sometimes. :P thanks! will give it a go!
<rburton>
recipes have 100% scoping
<rburton>
set a variable inside them and it won't leak out into other recipes
<Tokamak>
good to know, thanks again rburton
<vmeson>
sgw: have you or tgamblin checked if that happens on other ubu-18.04 systems? I'm doing a build on ubu-21.04 and it seems fine. Did your bisect finish?
<RP>
sgw: yes, I see that on my 1804 system
<RP>
sgw: annoying. It is from the warnings change from alexk
<sgw>
RP: I should have said I have a 20.04 system, so it's there also.
<RP>
sgw: sorry, mine is 20.04! :)
<sgw>
vmeson: Yeah, it was alex's change to enable warnings
<sgw>
Not sure why the create_spdx is failing now also, digging into that.
<zeddii>
RP: round two of my 5.14 bump seems to be green (fixed systemtap). I can send a rfc unified series (I'll skip the OE/yocto split) and maybe we can get it into more testing.
<RP>
zeddii: sounds good, I can pull into master-next too
<RP>
sgw: we could do with adding tests for that on the AB
<jaskij[m]>
RP: regarding pg_config, that native build idea has one more hurdle: as of now the postgresql in meta-oe doesn't seem to package it.
<jaskij[m]>
Is there a way to get a dependency's version in a recipe?
<RP>
jaskij[m]: you'd have to write something into the sysroot from the recipe
<jaskij[m]>
Fortunately libpq is in the sysroot with its pkgconfig
arlen has joined #yocto
arlen_ has quit [Ping timeout: 264 seconds]
goliath has quit [Quit: SIGSEGV]
leon-anavi has quit [Remote host closed the connection]