Tokamak has quit [Read error: Connection reset by peer]
Dracos-Carazza has joined #yocto
Tokamak has joined #yocto
goliath has quit [Quit: SIGSEGV]
sakoman has quit [Quit: Leaving.]
frieder has quit [Ping timeout: 258 seconds]
seninha has quit [Quit: Leaving]
frieder has joined #yocto
sakoman has joined #yocto
dingo_ has joined #yocto
jclsn has quit [Ping timeout: 248 seconds]
jclsn has joined #yocto
pgowda_ has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
sakoman has quit [Quit: Leaving.]
kroon has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
kroon_ has joined #yocto
kroon has quit [Ping timeout: 256 seconds]
nemik has quit [Ping timeout: 256 seconds]
<LetoThe2nd>
yo dudX
nemik has joined #yocto
ThomasRoos has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
vladest has quit [Quit: vladest]
vladest has joined #yocto
rob_w has joined #yocto
mvlad has joined #yocto
xmn has quit [Quit: ZZZzzz…]
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
marex has quit [Ping timeout: 272 seconds]
<mcfrisk>
so annoying when kirkstone "bitbake -e | less" ends up with endless "[Errno 32] Broken pipe" output. Using lxc-execute with Debian stable to run bitbake builds, which somehow results in this in the interactive shell.
<mcfrisk>
same happened in dunfell but with a much longer timeout, e.g. after days of keeping bitbake output piped somewhere; in kirkstone this now happens right away
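The "[Errno 32]" messages are most likely EPIPE: less exits before bitbake finishes writing, the pipe closes, and subsequent writes fail. A minimal reproduction of that failure mode, plus a workaround sketch (the image target name is only an example):

```shell
# The reader side of the pipe exits early, the writer gets SIGPIPE/EPIPE,
# and unhandled Python code then reports "[Errno 32] Broken pipe".
yes | head -n 1   # prints "y"; 'yes' is killed when the pipe closes

# Workaround sketch for the bitbake case: write to a file first, then page it.
#   bitbake -e core-image-minimal > bb-env.log 2>&1
#   less bb-env.log
```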
WebertRLZ has joined #yocto
<kroon_>
Was there a way to dump all depsigs ?
<kroon_>
ugh. I don't think there was, except rebuilding
<RP>
kroon_: for task outputs?
<kroon_>
RP, yeah
<kroon_>
the depsig.* files
zen_coder has joined #yocto
Schlumpf has joined #yocto
alicef has quit [Quit: install gentoo]
alicef has joined #yocto
leon-anavi has joined #yocto
<RP>
kroon_: you need the build output to generate those :(
Schiller has joined #yocto
<kroon_>
RP, I do have the output in the sstate cache, so as of now there is just no way to regenerate the depsig.* files, right?
zen_coder has quit [Remote host closed the connection]
zen_coder has joined #yocto
<zen_coder>
rfs613: is there a difference between removing stuff from the SDK and removing it from the image?
tre has joined #yocto
eLmankku has joined #yocto
hmw[m] has quit [Quit: You have been kicked for being idle]
<RP>
kroon_: I guess technically it would be possible but we have no such scripts
<RP>
zen_coder: they are two different things but can be related
jpuhlman has quit [Ping timeout: 258 seconds]
jpuhlman has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
ThomasRoos has quit [Ping timeout: 252 seconds]
nemik has joined #yocto
OnkelUlla has quit [Remote host closed the connection]
zen_coder has quit [Ping timeout: 258 seconds]
<Schiller>
RP: Hey. I looked more into the AnyScheduler / SingleScheduler and the yocto-docs example. Atm when I do a commit on a branch other than master, the target-present script from the autobuilder doesn't find the target and the build hangs. yocto-docs has its own BuildFactory and script <run-docs-builds>. Can you confirm that I need to edit that stuff
<Schiller>
to make the branch targetable, or am I interpreting too much into that.
frieder has quit [Ping timeout: 244 seconds]
nemik has quit [Ping timeout: 276 seconds]
<RP>
Schiller: You have multiple sources in the build, the autobuilder-helper and whatever it is that you're building
nemik has joined #yocto
zen_coder has joined #yocto
<RP>
Schiller: I suspect it is not mapping the branch config of whatever you're building over to the target build. I doubt the buildfactory piece is related, that is just because docs for us build differently to everything else
<RP>
Schiller: I'm just guessing though since you know your configuration and I can just guess
<RP>
kanavin: I've put the AUH successes into master-next for testing FWIW
<Schiller>
RP: The build depends on 4 - 5 different repos
<RP>
Schiller: right, and I think there may be mapping needed in a similar way to that function I pointed you at a while back. I don't remember the details though, I just made ours work
Wouter0100 has quit [Ping timeout: 246 seconds]
<Schiller>
RP: It is really hard for me to find the error. When I have a force, nightly, or periodic build I can set my properties function, and in the Buildproperties the branch is null or master and everything works fine, as it configures exactly what I defined in the branchdefaultslist (schedulers.py). But when I use the SingleScheduler and commit on a different
<Schiller>
branch, the Buildproperties then display that branch and the run-target script doesn't know it. Further, the fetch step tries to clone the autobuilder with that committed branch and revision even though that commit is not in the autobuilder repo.
<Schiller>
RP: I guess the autobuilder tries to switch branches because of some build variables in scripts which I don't need. I just need the autobuilder to clone normally (branch master or null) into the build directory but can't find where this is done.
<RP>
zen_coder: if it were being added in TOOLCHAIN_HOST_TASK it would be nativesdk-cmake. This is why we're asking where it is coming from
florian has joined #yocto
<RP>
zen_coder: a neat trick is to remove the nativesdk BBCLASSEXTEND from cmake, then see where it breaks
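The trick can be sketched as a recipe edit (the exact BBCLASSEXTEND value and file path in the real cmake recipe may differ; this is an assumption):

```bitbake
# In the cmake recipe (roughly meta/recipes-devtools/cmake/cmake_*.bb),
# the original is along the lines of:
BBCLASSEXTEND = "native nativesdk"

# Temporarily drop the nativesdk variant; rebuilding the SDK then fails
# at whatever recipe/task pulls nativesdk-cmake in, revealing the source:
BBCLASSEXTEND = "native"
```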
ThomasRoos has joined #yocto
<jaskij[m]>
What are the odds of PostgreSQL getting updated to 14.3 in Kirkstone?
m4ho has quit [Ping timeout: 260 seconds]
florian_kc has joined #yocto
m4ho has joined #yocto
GNUmoon has quit [Ping timeout: 240 seconds]
Juanosorio94 has joined #yocto
<Juanosorio94>
is there any way I can add a step between fetching and patching??
ThomasRoos has quit [Ping timeout: 252 seconds]
<Juanosorio94>
something like do_prepatch inside my recipe?
<mcfrisk>
Juanosorio94: yes, add a task in between, but why would you do this? custom tasks mostly end up breaking things like sstate cache
<mcfrisk>
it's better to fit your needs into the existing, well-working tasks. There are fetch, unpack, patch, configure, compile, install, deploy etc. already.
<jaskij[m]>
mcfrisk: There are some legitimate use cases, although they are rare. Would you rather make a function and prepend it to two different tasks, or just add a new task?
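Both options being discussed can be sketched in a recipe fragment (the task name and bodies are hypothetical placeholders):

```bitbake
# (a) a new task inserted between unpack and patch:
do_prepatch() {
    # e.g. fix up the fetched sources before patches are applied
    :
}
addtask do_prepatch after do_unpack before do_patch

# (b) prepending to an existing task instead, avoiding a custom task
#     (kirkstone-style ":prepend" override syntax):
do_patch:prepend() {
    :
}
```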
<qschulz>
jaskij[m]: it seems 14.3 fixes a CVE (CVE-2022-1552) so I think you can just send a patch for the kirkstone branch stating that and it's likely to get merged
<jaskij[m]>
Cool. At least assuming existing patches don't break
<qschulz>
jaskij[m]: they do, but there's already a patch in the master branch
<jaskij[m]>
Ah, if there's an existing patch then cherry picking it will be easier
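The cherry-pick flow can be sketched with a throwaway repo (file name, commit messages, and contents are all hypothetical; the real backport targets the kirkstone branch of the layer carrying the postgresql recipe):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"
git branch kirkstone

# land the "upgrade" on master
echo 'PV = "14.3"' > postgresql.inc
git add postgresql.inc
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "postgresql: upgrade 14.1 -> 14.3"
fix=$(git rev-parse HEAD)

# backport it to the stable branch; -x records the original commit id
git checkout -q kirkstone
git -c user.name=demo -c user.email=demo@example.com cherry-pick -x "$fix"
```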
<landgraf>
RP: because of the debian.bbclass renames we discussed yesterday, packages in package.manifest and packages in license.manifest are named differently :(
nemik has quit [Ping timeout: 276 seconds]
LetoThe2nd has quit [Quit: WeeChat 3.4]
nemik has joined #yocto
<landgraf>
package.manifest contains the debian variant while the license one contains the "normal" one
<qschulz>
jaskij[m]: we use mailing list contribution even for maintained branches, so it'd need to be sent to the mailing list still though.
<qschulz>
jaskij[m]: sometimes the maintainer picks updates themselves and send a big merge request before merging
<jaskij[m]>
Oh, I know, just meant that it's less work for me
<qschulz>
but considering that there was no mention of CVEs in the patch merged upstream, sakoman (their nick?) wouldn't know it's a proper candidate for backporting
<qschulz>
(except.. maybe the CVE reporting we do could highlight this but I have never used it so far so I don't know how well this works (but people are hyped about it so I guess pretty well :) )
LetoThe2nd has joined #yocto
<RP>
landgraf: it should be possible to fix them to be consistent
<qschulz>
why would a musl-specific patch be Inappropriate for upstream?
seninha has joined #yocto
seninha has quit [Remote host closed the connection]
seninha has joined #yocto
<jaskij[m]>
<qschulz> "why would a musl-specific..." <- maybe upstream officially doesn't support musl? it does have some vastly different behavior around dlopen for example
<jaskij[m]>
also: thanks for all the info
<qschulz>
jaskij[m]: I saw many for systemd (most of them are for systemd actually) and a simple internet search told me musl does not want to support systemd, so that explains it
<landgraf>
RP: easiest way is " INHERIT_DISTRO:remove = "debian" " :-)
<RP>
landgraf: there are other places the PKG_ rename mechanism is used though
<RP>
qschulz: I don't think another honister release is planned so we can mark as EOL now btw
<jaskij[m]>
qschulz: Poettering wrote in 2018:
<jaskij[m]>
> Generally: glibc defines the Linux API pretty much, and we hack against that.
<zen_coder>
Can I add to the meta-toolchain x11 to the target_task?
<zen_coder>
MBition12017
dz1 has quit [Quit: Leaving]
zwelch_ is now known as zwelch
dz1 has joined #yocto
zen_coder has quit [Quit: Konversation terminated!]
Tokamak has quit [Ping timeout: 244 seconds]
Tokamak has joined #yocto
florian_kc has joined #yocto
florian_kc is now known as florian
pgowda_ has quit [Quit: Connection closed for inactivity]
alimon has quit [Remote host closed the connection]
ptsneves has quit [Quit: Client closed]
willo has quit [Quit: No Ping reply in 180 seconds.]
dz1 has quit [Remote host closed the connection]
yates has joined #yocto
<yates>
is there a list of yocto cross toolchains that can be built somewhere?
<yates>
right? the PITA is back...
jclsn has quit [Ping timeout: 260 seconds]
dev1990 has quit [Quit: Konversation terminated!]
seninha has quit [Quit: Leaving]
fitzsim has quit [Remote host closed the connection]
olani has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
mvlad has quit [Remote host closed the connection]
peoliye has joined #yocto
dz1 has joined #yocto
agupta1 has joined #yocto
fitzsim has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
PatrickE has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
seninha has joined #yocto
otavio has quit [Ping timeout: 250 seconds]
otavio has joined #yocto
<yates>
talkative bunch
<yates>
...
GNUmoon has quit [Ping timeout: 240 seconds]
<yates>
did you hear the one about the plastic surgeon that hung himself?
<smurray>
yates: that question needs some context IMO, as I've no idea what you're asking for. Every BSP layer that adds a machine potentially expands the list
<smurray>
yates: and in OE, the default SDK build is image based, so there is not really a fixed target to put on a list
camus has quit [Ping timeout: 244 seconds]
camus has joined #yocto
<yates>
smurray: fair enough. thanks.
agupta1 has quit [Ping timeout: 276 seconds]
<yates>
if i were to want to build a core-image-minimal for a machine/architecture xyz, is there a way to see if a build for that machine has already been done as a starting point for my project?
<yates>
perhaps i should ask about builds and not cross-toolchains, since the latter comes automatically with the former
<yates>
e.g., let's say i want to create a build for a riscv. does one exist? how do you find out?
<yates>
or perhaps i should ask if there is a list of pre-configured BSPs?
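One way to answer this locally (a sketch, assuming you run it from a directory containing your layer checkouts): every machine a layer supports ships a conf/machine/*.conf file, so listing those shows the MACHINE values already available; cloning a BSP layer such as meta-riscv adds more.

```shell
# list MACHINE values provided by the layers checked out below the
# current directory (GNU find assumed for -printf)
find . -path '*conf/machine/*.conf' -printf '%f\n' | sed 's/\.conf$//' | sort
```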