<khem>
<JaMa> "khem: does this look scary for..." <- I think using gcc to compile chromium is an unsupported case you should be using clang for that and this patch while not a deal breaker is quite invasive to backport
nemik has quit [Ping timeout: 258 seconds]
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 244 seconds]
nemik has joined #yocto
behanw has quit [Quit: Connection closed for inactivity]
frieder has quit [Ping timeout: 258 seconds]
Tokamak_ has quit [Ping timeout: 244 seconds]
fray has joined #yocto
Tokamak has joined #yocto
sakoman has quit [Quit: Leaving.]
frieder has joined #yocto
Tokamak has quit [Ping timeout: 255 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
Tokamak has joined #yocto
starblue has quit [Ping timeout: 240 seconds]
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
starblue has joined #yocto
dz1 has joined #yocto
Tokamak has quit [Ping timeout: 244 seconds]
kscherer has quit [Quit: Konversation terminated!]
Tokamak has joined #yocto
seninha has quit [Quit: Leaving]
nemik has quit [Ping timeout: 276 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
sakoman has joined #yocto
Tokamak has quit [Remote host closed the connection]
Tokamak has joined #yocto
Tokamak_ has joined #yocto
Tokamak_ has quit [Remote host closed the connection]
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 246 seconds]
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
manuel has quit [Remote host closed the connection]
ThomasRoos has quit [Remote host closed the connection]
<LetoThe2nd>
yo dudX
<jclsn[m]>
Morning
<jclsn[m]>
How was the project summit btw? I would really like to join next time, but not online...
<LetoThe2nd>
jclsn[m]: it was fun! videos will be up real soon ;-)
<LetoThe2nd>
jclsn[m]: in a nutshell though, it is different from the in-person thing and we will probably have both forms in the future.
<jclsn[m]>
Must have been good then :)
* jclsn[m]
sighs
<jclsn[m]>
I still can't configure clangd or ccls to fully recognize the Yocto toolchain
gsalazar has joined #yocto
zwelch__ has joined #yocto
GuestNew118 has joined #yocto
zwelch has quit [Read error: Connection reset by peer]
<GuestNew118>
After updating poky to the kirkstone release branch I got the following error: python -m installer: error: unrecognized arguments: --interpreter /home/nsalmin/yocto_taz_kirkstone_next_DEBUG/builddir/tmp/work/cortexa72-cortexa53-mensi-linux/python3-wheel/0.37.1-r0/dist/wheel-0.37.1-py3-none-any.whl Any advice on how to fix that?
<GuestNew118>
LetoThe2nd the issue appears when I call bitbake to build a recipe with wheel deps. I have already taken a look at the changelog but found nothing at first glance
<LetoThe2nd>
GuestNew118: is it something public?
<GuestNew118>
it's not :( but I have the same issue when I build a public recipe with wheel deps
<LetoThe2nd>
GuestNew118: example please? does it reproduce for a plain poky, or might it be something that your distro triggers?
cyberpear has joined #yocto
<cyberpear>
Hey all,
<cyberpear>
has anyone ever needed to disable all the getty-related services in systemd with yocto?
<cyberpear>
Context:
<cyberpear>
I want to prevent the login screen/cursor from overwriting the image I wrote into the framebuffer.
<cyberpear>
Using core-image-minimal and the RPi (with meta-raspberry).
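A minimal sketch of one way to do what cyberpear describes: mask the getty units from an image recipe so systemd never starts a login prompt on top of the framebuffer. The unit names and the function name here are illustrative and depend on the machine/console setup:

    # illustrative image recipe fragment -- unit names depend on your console setup
    disable_gettys () {
        install -d ${IMAGE_ROOTFS}${sysconfdir}/systemd/system
        # masking a unit means symlinking it to /dev/null
        ln -sf /dev/null ${IMAGE_ROOTFS}${sysconfdir}/systemd/system/getty@tty1.service
        ln -sf /dev/null ${IMAGE_ROOTFS}${sysconfdir}/systemd/system/serial-getty@ttyS0.service
    }
    ROOTFS_POSTPROCESS_COMMAND += "disable_gettys; "

Clearing SERIAL_CONSOLES in the machine configuration is another lever for the serial gettys, and vt.global_cursor_default=0 on the kernel command line hides the blinking cursor.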
<GuestNew118>
LetoThe2nd thank you, you pointed me the right way; meta-jupyter is the cause, so now I know where I have to search ;)
<LetoThe2nd>
GuestNew118: have fun!
<GuestNew118>
LetoThe2nd thx for the good support
dev1990 has joined #yocto
prabhakarlad has joined #yocto
<qschulz>
o/
<JaMa>
sakoman: with 11.3 in master branch: 2022-06-01 09:46:37,051 - oe-selftest - INFO - RESULTS - reproducible.ReproducibleTests.test_reproducible_builds: PASSED (6429.19s) will test now with 11.3 backported to kirkstone
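For reference, the selftest JaMa quotes can be run on its own from an initialised build directory; a minimal sketch (it needs substantial time and disk, as the timing above suggests):

    $ oe-selftest -r reproducible.ReproducibleTests.test_reproducible_builds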
zeddii has quit [Ping timeout: 240 seconds]
dingo_ has quit [Ping timeout: 260 seconds]
tre has joined #yocto
mvlad has joined #yocto
florian has joined #yocto
<jclsn[m]>
So no one can help me getting clang/ccls to work with my Yocto toolchain?
frieder has quit [Ping timeout: 255 seconds]
florian_kc has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
frieder has joined #yocto
kroon has quit [Quit: Leaving]
<jclsn[m]>
Guess I need to code without it
<jclsn[m]>
What is the best way to control the backlight with a C program? I can write to /sys/class/backlight/backlight_lvds/backlight, but I need root permissions for that. I read online that you should use a drm api?
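A minimal C sketch of the sysfs approach jclsn[m] describes, assuming the standard "brightness" attribute (the exact path is board-specific); the usual answer to the permission problem is a udev rule that relaxes the mode/group of the attribute rather than switching to the DRM API:

    /* sketch: set a backlight level via sysfs; path and value are illustrative */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/class/backlight/backlight_lvds/brightness", "w");
        if (!f) {
            perror("open brightness");   /* typically EACCES without a udev rule */
            return 1;
        }
        fprintf(f, "%d\n", 128);         /* must not exceed max_brightness */
        fclose(f);
        return 0;
    }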
GuestNew118 has quit [Ping timeout: 252 seconds]
jpuhlman has quit [Ping timeout: 246 seconds]
jpuhlman has joined #yocto
jclsn has joined #yocto
kroon has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
cyberpear has quit [Quit: Client closed]
starblue has quit [Ping timeout: 246 seconds]
starblue has joined #yocto
eloi1 has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
ptsneves has joined #yocto
jclsn has quit [Quit: WeeChat 3.5]
jclsn has joined #yocto
Schlumpf has quit [Quit: Client closed]
eloi1 has quit [Ping timeout: 276 seconds]
jclsn has quit [Quit: WeeChat 3.5]
jclsn has joined #yocto
jclsn has quit [Quit: WeeChat 3.5]
jclsn has joined #yocto
eloi1 has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
zeddii has joined #yocto
davidinux has quit [Ping timeout: 276 seconds]
manuel_ has joined #yocto
davidinux has joined #yocto
manuel1985 has quit [Ping timeout: 276 seconds]
fitzsim has quit [Ping timeout: 276 seconds]
<Guest87>
does toaster use a different python than bitbake? I am seeing a build failure in toaster, but not when using bitbake on the command line for the same build directory
GNUmoon has quit [Remote host closed the connection]
<landgraf>
oe.packagedata.recipename returns "None" for some packages. Is it a bug or a feature? :) Packages like dbus-1, kernel-5.10.61-yocto-standard, liblzma5, etc.
GNUmoon has joined #yocto
dz1 has quit [Quit: Leaving]
<RP>
landgraf: dbus-1 is a package name, not a recipe name, the recipe name would be dbus
<landgraf>
it works for most of the packages we have in the base image though
* landgraf
is wondering if it has something to do with the digit at the end of the package name
seninha has joined #yocto
<RP>
landgraf: that comes from debian package renaming, i.e. the use of PKG_<pkgname> to rename the final package
kevinrowland has quit [Quit: Client closed]
fitzsim has joined #yocto
<landgraf>
RP: But why has only libsystemd been renamed? https://dpaste.com/9R75CZDLW Same for other packages: the "main" package is renamed while the subpackages have the correct name
<RP>
landgraf: have a look at debian.bbclass
ThomasRoos[m] has joined #yocto
<landgraf>
RP: Ok. Thanks. will take a look
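A quick way to see the mapping RP describes, from a build directory that has pkgdata for the image; oe-pkgdata-util resolves a runtime package name back to the recipe that produced it:

    $ oe-pkgdata-util lookup-recipe dbus-1
    dbus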
ThomasRoos has joined #yocto
<jonmason>
zeddii: what's the ETA for linux v5.17/5.18? I'm getting asked internally
m4ho has quit [Ping timeout: 246 seconds]
ThomasRoos has quit [Remote host closed the connection]
ThomasRoos has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
<zeddii>
they are there in linux-yocto-dev, but there won't be a versioned linux-yocto for them, since we are far enough away from the next release that we'll go 5.19
<sotaoverride>
trying to see if I can get the upperdir created in the same .mount file...
ptsneves has quit [Quit: Client closed]
kscherer has joined #yocto
ThomasRoos has quit [Remote host closed the connection]
jsandman has quit [Quit: Ping timeout (120 seconds)]
jsandman has joined #yocto
risca has quit [Quit: No Ping reply in 180 seconds.]
risca has joined #yocto
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 258 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
ptsneves has joined #yocto
<ernstp>
saw that my cve-check patches landed on a couple of branches, perhaps only for running test builds?
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
<RP>
ernstp: I think they merged to master?
<ernstp>
RP: oh they're on the second page on gitweb already, missed that! :-D
<ernstp>
RP: guess the discussion was resolved then! :-)
<RP>
ernstp: well, yes and no. I didn't realise mrybczyn[m] was doing further investigation :/
<ernstp>
RP: well it can always be improved further!
ptsneves has quit [Quit: Client closed]
<mrybczyn[m]>
ernstp @RP: writing scripts to analyse the data but got interrupted...
<ernstp>
mrybczyn[m] RP: it will be a pretty good solution for Dunfell etc., I think. Then on master perhaps it could be further upgraded using the new spdx data or similar
dvorkindmitry has joined #yocto
tre has quit [Remote host closed the connection]
<mrybczyn[m]>
ernstp: for dunfell and kirkstone that would be a functional change. Remember that JSON is disabled in dunfell by default
<ernstp>
mrybczyn[m]: what regarding json are you thinking about?
GillesM has joined #yocto
florian_kc has joined #yocto
<ernstp>
mrybczyn[m]: no more functional change than on master? they don't really have anything to do with json specifically.
eloi1 has quit [Ping timeout: 258 seconds]
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
<mrybczyn[m]>
@ernstp it changes the content of the report. Everyone with old scripting will have an unexpected result change. Not sure what sakoman thinks about it. It was already at the limit for JSON
<ernstp>
mrybczyn[m]: ah, you mean it's not a very "stable" change. But there is always the summary, and it makes the rootfs manifest into an actually correct rootfs manifest. But yes...
eloi1 has joined #yocto
florian_kc is now known as florian
<mrybczyn[m]>
ernstp: and requires a change in the documentation and so on... yes not very stable...
<RP>
mrybczyn[m], ernstp: talk to sakoman about it, I think we do need to make sure we have strong CVE checking on the LTS releases
kevinrowland has joined #yocto
seninha has quit [Ping timeout: 276 seconds]
<sakoman>
mrybczyn[m]: ernstp: I would like to keep the CVE checking consistent between master, kirkstone, and dunfell so we can use the same tooling to report on all three
<RP>
sakoman: I was hoping you'd take my warning change so we can quieten the metrics run on the autobuilder
* RP
wondered about cherry picking it but thought he'd better not
<sakoman>
RP: I've already taken them in both dunfell-nut and kirkstone-nut
<RP>
sakoman: great, thanks
<sakoman>
So once I send the next pull requests we should be consistent
<zeddii>
well bugger. podman-compose rewrites history on their devel branch
<RP>
zeddii: because that is a really great idea? :/
seninha has joined #yocto
kevinrowland has quit [Quit: Client closed]
<zeddii>
public branch. meh. re-write away!
<zeddii>
fixed, but unexpected.
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
zen_coder has joined #yocto
<zen_coder>
Hi, how can I remove a package from an SDK with yocto?
<zen_coder>
my image just does: "inherit core-image"
<zen_coder>
then I do, as an example: TOOLCHAIN_TARGET_TASK_remove =" zlib "
<zen_coder>
but nothing gets removed
<rfs613>
zen_coder: others will probably give you a better answer, but there are likely many other packages which depend (directly or indirectly) on zlib. So unless you also remove those, zlib will still be included.
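One thing worth double-checking alongside the dependency angle rfs613 raises: on honister/kirkstone and newer the override separator is ":" rather than "_", so the "_remove" spelling above is not applied there. A sketch for the image recipe or local.conf, with zlib kept only as the example (and it still only helps if nothing else drags zlib back in):

    # new override syntax (honister and later)
    TOOLCHAIN_TARGET_TASK:remove = " zlib "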
<rfs613>
WebertRLZ: that's going to the channel, probably due to the space at the start...
<WebertRLZ>
ah, damn, thanks
<rfs613>
it happens ;-)
* RP
might have a lead on the intermittent taskhash mismatch bug during esdk generation
<RP>
sakoman: I think you have one of these open against one of the stable releases?
<sakoman>
RP: yes, on dunfell, though it hasn't happened in months
<zen_coder>
rfs613: it was just an example
<zen_coder>
so how can I force it to remove something?
<RP>
sakoman: just happened on master on a release build :/
<zen_coder>
it seems that yocto is too clever and includes it anyway
<WebertRLZ>
OK pass changed, no harm done (:
<sakoman>
RP: bummer :-(
mckoan is now known as mckoan|away
<zen_coder>
is there a way to force remove stuff from the SDK, no matter what?
<RP>
sakoman: I was able to poke the failed build and I think I can see how it could break
WebertRLZ has quit [Quit: Client closed]
zen_coder has quit [Remote host closed the connection]
zen_coder has joined #yocto
<sakoman>
RP: it would be nice to understand why this happens!
<zen_coder>
How can I find out why I cannot remove a package from the SDK?
eloi1 has quit [Ping timeout: 258 seconds]
<rfs613>
zen_coder: you need to look at the dependencies. One way is "bitbake -g recipe-name -u taskexp"
<RP>
sakoman: I can see some code which I think is wrong but I can't explain why it broke the way it did
<RP>
scratch that, it isn't what I was thinking although the code does have a race
WebertRLZ has joined #yocto
<WebertRLZ>
Hey colleagues, I'm trying to optimize the use of the yocto download and especially the SSTATE cache in CI systems. I noticed that the system at my company is setting SSTATE_DIR for all builds, including builds from pull request pipelines. I have the impression that this is not a good approach, as the cache will be written by any possible development branch,
<WebertRLZ>
and usually we have hundreds of them every day
<WebertRLZ>
I could not find any reference implementation in the docs or in internet discussions. I would experiment with disabling SSTATE_DIR for PR builds and only using a mirror, so the cache is used as "read-only". Does anyone have any thoughts on this?
<RP>
WebertRLZ: why do you think that is an issue? You also didn't say what it was being set to? A directory shared by all of them?
<WebertRLZ>
RP it's being set to a directory in a shared file system. AWS EFS
<RP>
WebertRLZ: that sounds right to maximise object reuse
<WebertRLZ>
my concern is whether pull request builds writing to the cache would make it dirty, because branches get outdated easily as soon as other stuff is merged
<WebertRLZ>
and whether it would be better to only populate the cache on target branches, and use it as read-only on PR builds
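A sketch of the read-only arrangement being described here, with an illustrative EFS path: PR pipelines keep a local, disposable SSTATE_DIR and only read the shared cache through SSTATE_MIRRORS, while trunk builds continue to point SSTATE_DIR at the share. As RP and JPEW note below, this is optional, since the cache is designed to cope with concurrent writers:

    # local.conf / site.conf fragment for PR pipelines (paths are examples)
    SSTATE_DIR = "${TOPDIR}/sstate-cache"
    # the literal trailing "PATH" is required so the hashed subdirectories are mapped
    SSTATE_MIRRORS = "file://.* file:///mnt/efs/shared-sstate/PATH"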
<RP>
WebertRLZ: the cache is supposed to be pretty robust against that as long as recipes are properly written
<JPEW>
WebertRLZ: bitbake uses a hashing system to make sure that it only pulls the sstate objects that it should
<JPEW>
WebertRLZ: We share a single sstate NFS directory for all our CI builds and it works fine
<WebertRLZ>
RP I will check with internal devs to double check bitbake recipes
<JPEW>
FWIW, we do have a job to erase it once a week to keep it from growing unbounded, but that's another story :)
<WebertRLZ>
JPEW using SSTATE_DIR for all builds? I very often see "master" branch builds having "0% completed" so I suspect something is messing up the cache
<WebertRLZ>
And a very interesting fact about your erase job: our cache is currently 34TB in size and this is driving me a bit crazy
<RP>
ah, I was confusing build directories. Back to the original corruption theory
<JPEW>
WebertRLZ: Ya, our sstate directory is capped at 70TB... that's "big enough"; I haven't looked at how much of that is actually used in a given week.
<RP>
WebertRLZ: if the systems have write access they do touch the sstate files they use so you may be able to tell what is in use that way
<JPEW>
WebertRLZ: Yes, same SSTATE_DIR for all builds (pointed at an NFS mount)
<RP>
we prune out old stuff on the autobuilder like that
<WebertRLZ>
I need to check if that's feasible with EFS as well
<JPEW>
RP: Does that mean the mtime gets updated every time it's read?
<JPEW>
s/read/used/
<WebertRLZ>
in our case on EFS, I think yes.
<WebertRLZ>
at least it's NOT currently mounted with `noatime`
<WebertRLZ>
this was actually another question I had: would it be good to mount with `noatime`, or does SSTATE rely on this kind of metadata?
<zen_coder>
rfs613: I get the following output: FATAL: Gtk ui could not load the required gi python module
<RP>
JPEW: yes, see sstate_unpack_package
* JPEW
reads
<JPEW>
Excellent! I'll have to use that now instead of deleting everything
<rfs613>
zen_coder: yeah taskdep needs a bunch of python-gtk packages to run, which you probably don't have installed
<rfs613>
zen_coder: sorry I meant taskexp not taskdep
<WebertRLZ>
JPEW you mean finding all files with atime < x and deleting?
<JPEW>
WebertRLZ: The mtime gets updated with "touch", so you can use mtime instead
<JPEW>
WebertRLZ: That way you can keep noatime
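A hedged sketch of the mtime-based pruning RP and JPEW describe; the retention period and path are illustrative, and the same match also catches the accompanying .siginfo files:

    # remove sstate objects that have not been used (touched) in the last 30 days
    find /mnt/efs/shared-sstate -type f -mtime +30 -delete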
<WebertRLZ>
I was wondering if disabling atime on mount would bring any speed benefit. My monitoring shows 98% of the IOPS to the EFS directory are metadata
<rfs613>
zen_coder: maybe try python3-gi instead of python-gi ?
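For the taskexp failure above: the UI needs the host's GTK introspection bindings; on a Debian/Ubuntu build host something like the following is usually enough (package names assumed for those distros):

    $ sudo apt install python3-gi gir1.2-gtk-3.0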
<WebertRLZ>
JPEW RP when you say the SSTATE mechanism uses hashing to make sure it only fetches what's necessary - are parallel builds for the same images also taken into account? We usually have 100s of pull requests triggering builds for the same things, using the same cache. I think they are all writing to the same files, in the end. No?
kevinrowland has joined #yocto
<RP>
WebertRLZ: it is designed to handle it, the files are atomically moved into position
<RP>
sakoman: zero length bb_unihashes.dat in the esdk - explains a few things and means we're looking for a different kind of race
kevinrowland has quit [Quit: Client closed]
seninha has quit [Ping timeout: 246 seconds]
seninha has joined #yocto
<WebertRLZ>
RP I mean specifically the versioning of said files. Does that mean outdated branches will write different versions of such files compared to the master branch, for example?
<JPEW>
WebertRLZ: The files are named based on a checksum of all their input data (e.g. dependencies, variables, tasks, etc.); so you might have multiple sstate files for the same task & recipe
<JPEW>
WebertRLZ: And it knows which one to use based on the hash of the inputs, so they don't ever conflict because a change in any input will result in a new hash and thus a new file
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
<JaMa>
sakoman: my selftest was on gentoo (starting with empty sstate)
nemik has quit [Ping timeout: 240 seconds]
<RP>
WebertRLZ: the versioning is pretty advanced and clever and is what the sha256 in the filenames represents
nemik has joined #yocto
<RP>
WebertRLZ: it is based on a per task configuration, it doesn't care about the branch it is on
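If the "0% completed" reuse WebertRLZ mentions needs explaining, bitbake can report which inputs changed a task signature; a sketch, with zlib purely as an example target:

    # explain why the current signature does not match what is in sstate
    $ bitbake -S printdiff zlib
    # or diff the two most recent signature dumps of a single task
    $ bitbake-diffsigs -t zlib do_compile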
<RP>
sakoman: patches sent out for the issue, I do think I understand it enough to fix it
<WebertRLZ>
RP alright thank you very much!
ThomasRoos has joined #yocto
<sakoman>
RP: that is great news!
<sakoman>
RP: do you think the patch will apply to dunfell too?
<sakoman>
JaMa: hmmm . . . gentoo isn't available on the autobuilder :-(
<RP>
sakoman: yes, but not entirely straightforward :)
<JaMa>
and even if it is, it probably wouldn't be very useful as every installation is "custom"
<sakoman>
JaMa: I'll continue looking at the failure on my end as a background task
<JaMa>
I can rerun in some docker container, but I guess that starting from an empty sstate is a bigger factor than the host distro
<RP>
sakoman: I created some new bitbake api to fix it.
<sakoman>
JaMa: sadly that is likely the case
<sakoman>
RP: urgh . . . fortunately it is quite rare on dunfell
<RP>
sakoman: we need the decoded loc information. I wish I could remember how I decoded the dwarf info
<RP>
sakoman: it isn't so bad, we just need to do a different version bump
<sakoman>
I grabbed the a and b .debs and extracted the .so's from them
<RP>
sakoman: I just don't like to see you blocked :/
<RP>
sakoman: about 2 hours to download :/
<RP>
(the first one)
<sakoman>
RP: and that is just the beginning of the pain!
<RP>
halstead: should I be getting 120KB/s from there?
WebertRLZ has quit [Quit: Client closed]
ThomasRoos has quit [Ping timeout: 252 seconds]
<sakoman>
RP: I'm trying to extract the debug section info with objdump -g. It does generate some promising-looking output, but also sprays "objdump: Error: LEB value too large" all too frequently
<halstead>
RP: Only if there are lots of other people downloading.
<RP>
halstead: that I can't answer :/
<RP>
sakoman: try objdump from our build?
<halstead>
RP: I'm checking it out now.
<halstead>
RP: It's a new month so we shouldn't be throttled.
<sakoman>
RP: I haven't tried that yet
<RP>
sakoman: readelf can sometimes also give different output
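For the decoded DWARF location info RP mentions, either binutils tool can emit it; a sketch with an illustrative library name:

    # decoded line table (file/line per address)
    $ readelf --debug-dump=decodedline libwebkit2gtk-4.0.so
    # roughly the same via objdump
    $ objdump --dwarf=decodedline libwebkit2gtk-4.0.so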
<RP>
sakoman: thanks, I can parse that better! it does seem to indicate a specific symbol at least. I think I'd be tempted to take one of the failed builds on the AB, tar up the source for webkit and compare to your local copy
<RP>
sakoman: specifically on generated files
<sakoman>
RP: OK, I suspect I may need to generate a new failed build since it's been a while since the last failure
<sakoman>
I'll get started on that
olani has joined #yocto
* RP
puts debian9 back in the pool
Tokamak_ has quit [Ping timeout: 256 seconds]
<mrybczyn[m]>
RP: sakoman: I have results from the cve-check diff. Still some libs to check, but one case is interesting: ovmf. Technically it isn't in the image...
Tokamak has joined #yocto
<RP>
mrybczyn[m]: bootloaders and firmware get a bit tricky :/
<RP>
does anybody else see "Error contacting Hash Equivalence Server unix:///XXX/poky/build/hashserve.sock: [Errno 2] No such file or directory" style messages?
olani has quit [Remote host closed the connection]
* RP
wonders why he gets them
olani has joined #yocto
<sakoman>
RP: that message doesn't look familiar at all to me
jnugen has joined #yocto
<rfs613>
could it be some kind of max number of open files limit being reached? (does that even apply to local unix sockets?)
<RP>
rfs613: you'd expect other build failures if that were the case? :/
<rfs613>
RP: yeah I guess... although fewer and fewer things use unix sockets these days
<RP>
sakoman: a grep for WebCore11CSSProperty on a webkitgtk build fills me with dread at the output
<jnugen>
I'm having a problem with devtool. I did a modify, edit-recipe and build with no problems. Now I want to update-recipe, but I'm getting this message: "INFO: No patches or local source files needed updating". It's not creating a bbappend with my recipe changes. I didn't need to change this source. Any ideas? Thanks!
<Saur[m]>
jnugen: If you didn't change the source, why would you do a `devtool update-recipe`?
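For context on jnugen's question: devtool update-recipe only turns commits in the workspace source tree into patches, which is why an unchanged source yields "No patches or local source files needed updating". If the goal is a bbappend in a particular layer, the source change has to be committed first and the layer named with -a; a sketch with illustrative names and paths:

    $ cd workspace/sources/myrecipe && git commit -a -m "my change"
    $ devtool update-recipe -a /path/to/meta-mylayer myrecipe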
GillesM has quit [Quit: Leaving]
rob_w has quit [Read error: Connection reset by peer]
<sakoman>
RP: yes, I saw your email :-(
kscherer has quit [Quit: Konversation terminated!]
mvlad has quit [Quit: Leaving]
PatrickE has joined #yocto
kevinrowland has quit [Quit: Client closed]
<zen_coder>
rfs613: I was able to start the taskexp UI
<zen_coder>
however, where do I now see why a package gets installed?
jnugen has quit [Quit: Client closed]
PatrickE has quit [Ping timeout: 252 seconds]
<JPEW>
sakoman: You might be able to turn off unified builds in WebKit? I forget what benefit they actually give you....
<rfs613>
zen_coder: sorry, busy with the kids' dinner/bath/bedtime for the next little while
<rfs613>
very briefly, there is a search box which you can use to locate the package you are interested in (say zlib)
<rfs613>
note that dependencies are per-task rather than per-package as you might expect.
<rfs613>
so you'll see zlib.do_build and zlib.do_install for example, and probably a dozen more
<rfs613>
you can click on one of those tasks on the left, and it will show you the full dependency list
<rfs613>
eg. the tasks which it depends on, and also which other tasks depend on it.
dkl_ has quit [Quit: %quit%]
<rfs613>
it's a bit tedious, but by reviewing the dependencies of each of the tasks of your package, you should be able to figure out why it doesn't want to be removed
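A non-GUI alternative to the taskexp walk rfs613 describes, assuming the SDK's image target is something like core-image-minimal (adjust the names): bitbake -g writes a task-depends.dot file that can simply be grepped for reverse dependencies:

    $ bitbake -g core-image-minimal
    # which tasks pull in zlib's sysroot?
    $ grep -- '-> "zlib.do_populate_sysroot"' task-depends.dot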
dkl has joined #yocto
* rfs613
vanishes again for a while
Tokamak_ has quit [Ping timeout: 256 seconds]
<zen_coder>
rfs613: can it be that I have to look for "zlib.do_populate_sysroot" and then in "Dependent Tasks" to see which other package requires zlib?
astlep has joined #yocto
xmn has joined #yocto
Tokamak has joined #yocto
<rfs613>
zen_coder: yes, but note that yocto dependencies are at the task level (do_populate_sysroot is one of many tasks associated with a package). Each package can additionally add its own new tasks, which could in turn become dependencies for other tasks.
<rfs613>
so you likely need to check do_install, do_populate_sysroot, and probably a few others.
zen_coder has quit [Ping timeout: 272 seconds]
olani has quit [Ping timeout: 258 seconds]
Tokamak has quit [Read error: Connection reset by peer]
Tokamak has joined #yocto
Dracos-Carazza has quit [Ping timeout: 256 seconds]