ndec changed the topic of #yocto to: "Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2022.11) Nov 29-Dec 1, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec"
Tokamak has joined #yocto
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 255 seconds]
<Saur[m]> DvorkinDmitry: Regardless of parsing, you cannot use Bash arrays since the shell may be Dash, which does not support them.
<DvorkinDmitry> Saur[m], oh! but I can declare -a myarr and set values... hmmm
<Saur[m]> DvorkinDmitry: How do you need to access the values of ${IMG_ISP_P}? I.e., are you looping over the values, or do you need to be able to access them randomly?
<Saur[m]> It works if your /bin/sh happens to be bash. But on another computer it may be dash and then it will fail.
<DvorkinDmitry> Saur[m], do you have an idea how to correctly split a simple string like "xx,xxx,yyy" in IMAGE_CMD_my() and take part #1, for example?
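A minimal POSIX-sh sketch of two ways to do this without bash arrays (so it also works when /bin/sh is dash); IMG_ISP_P and the sample value are taken from the conversation above:

```sh
#!/bin/sh
# Split "xx,xxx,yyy" portably; both forms work under dash.
IMG_ISP_P="xx,xxx,yyy"

# 1. Pick one field with cut:
first=$(echo "$IMG_ISP_P" | cut -d',' -f1)
echo "$first"              # -> xx

# 2. Split into the positional parameters via IFS:
oldifs=$IFS
IFS=','
set -- $IMG_ISP_P          # now $1=xx, $2=xxx, $3=yyy
IFS=$oldifs
echo "$1 / $2 / $3"
```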
prabhakarlad has quit [Quit: Client closed]
kscherer has quit [Quit: Konversation terminated!]
Payam has quit [Ping timeout: 248 seconds]
sakoman has quit [Quit: Leaving.]
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
Payam has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
davidinux has quit [Ping timeout: 268 seconds]
davidinux has joined #yocto
sakoman has joined #yocto
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 248 seconds]
camus has joined #yocto
sakoman has quit [Quit: Leaving.]
Crofton has quit [Read error: Software caused connection abort]
Crofton has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 268 seconds]
camus1 is now known as camus
alessioigor has joined #yocto
gstinocher[m] has quit [Read error: Software caused connection abort]
gstinocher[m] has joined #yocto
Guest1370 has joined #yocto
Guest1370 has quit [Quit: Client closed]
kotylamichal has joined #yocto
goliath has joined #yocto
smurray has quit [Read error: Software caused connection abort]
smurray has joined #yocto
Guest1338 has joined #yocto
rhadye has quit [Read error: Software caused connection abort]
rhadye has joined #yocto
Lihis has quit [Read error: Software caused connection abort]
Lihis has joined #yocto
<Guest1338> hi everyone. I'm trying to enable HDMI support on imx8mq (coral-dev-board). I couldn't find a solution for 3-4 days. I opened a topic (including kernel version and meta-layers):
<Guest1338> even general guidance would be helpful to me.
mthenault has joined #yocto
tomzy_0 has joined #yocto
xmn has quit [Ping timeout: 256 seconds]
<LetoThe2nd> yo dudX
<LetoThe2nd> Guest1338: I would suggest trying to find out which process is supposed to put something on the display, making sure it runs, and looking at its logs. The log you posted is just the boot process.
hcg has joined #yocto
gho has joined #yocto
kotylamichal has quit [Quit: Konversation terminated!]
jclsn has joined #yocto
<jclsn> Morning boys and gals
frieder has joined #yocto
leon-anavi has joined #yocto
manuel1985 has joined #yocto
goliath has quit [Quit: SIGSEGV]
zpfvo has joined #yocto
Guest1338 has quit [Ping timeout: 260 seconds]
dev1990 has joined #yocto
dev1990 has quit [Client Quit]
Guest1327 has joined #yocto
Colin_Finck has quit [Read error: Software caused connection abort]
Colin_Finck has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
mvlad has joined #yocto
Guest1327 has quit [Quit: Client closed]
Herrie has quit [Read error: Connection reset by peer]
Herrie has joined #yocto
bluelightning has quit [Read error: Software caused connection abort]
bluelightning has joined #yocto
kanavin has quit [Quit: Leaving]
goliath has joined #yocto
fmartinsons[m] has quit [Read error: Software caused connection abort]
fmartinsons[m] has joined #yocto
Guest36 has joined #yocto
<Guest36> Hello @ALL
kanavin has joined #yocto
<Guest36> how do I build a bootable image for x86_64?
<Guest36> what do I need to set in poky/build/conf/local.conf?
<LetoThe2nd> Guest36: have you tried and completed (and hopefully understood) a first quick start build for qemux86_64?
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
<Guest36> got a file: core-image-sato-qemux86-64-20221122052528.rootfs.ext4
<LetoThe2nd> Guest36: yeah, as this is a build for the specified qemu machine, it obviously produces something that qemu can run. Good. The next step is then to add the meta-intel layer, adjust the MACHINE and build again. Please see the documentation for this: https://git.yoctoproject.org/meta-intel/tree/README
<Guest36> LetoThe2nd: thanks will try
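A hedged sketch of the steps the meta-intel README describes (the branch name and paths here are assumptions matching this log's era; check the README for the machines it currently supports):

```sh
# Clone meta-intel next to the other layers and register it:
git clone -b kirkstone git://git.yoctoproject.org/meta-intel
bitbake-layers add-layer ../meta-intel

# In conf/local.conf, switch from the qemu machine to an Intel one:
#   MACHINE = "intel-corei7-64"

bitbake core-image-sato
# The build now produces a .wic disk image that can be written to a
# USB stick or disk with dd/bmaptool, instead of the qemu .ext4 rootfs.
```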
florian has joined #yocto
<jclsn> What book do you recommend for getting to know the kernel? I am still struggling to debug it when something goes wrong
glgspg[m] has joined #yocto
gho has quit [Ping timeout: 260 seconds]
zpfvo has quit [Ping timeout: 268 seconds]
dlan has quit [Ping timeout: 246 seconds]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
grma has quit [Ping timeout: 268 seconds]
rber|res has joined #yocto
gho has joined #yocto
dlan has joined #yocto
gho has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
gho has joined #yocto
Salamandar has quit [Quit: Bridge terminating on SIGTERM]
fabatera[m] has quit [Quit: Bridge terminating on SIGTERM]
T_UNIX[m] has quit [Quit: Bridge terminating on SIGTERM]
michaelo[m] has quit [Quit: Bridge terminating on SIGTERM]
Saur[m] has quit [Quit: Bridge terminating on SIGTERM]
khem has quit [Quit: Bridge terminating on SIGTERM]
janvermaete[m] has quit [Quit: Bridge terminating on SIGTERM]
lrusak[m] has quit [Quit: Bridge terminating on SIGTERM]
agherzan has quit [Quit: Bridge terminating on SIGTERM]
barath has quit [Quit: Bridge terminating on SIGTERM]
esben[m] has quit [Quit: Bridge terminating on SIGTERM]
mrybczyn[m] has quit [Quit: Bridge terminating on SIGTERM]
ble[m] has quit [Quit: Bridge terminating on SIGTERM]
ramacassis[m] has quit [Quit: Bridge terminating on SIGTERM]
aleblanc[m] has quit [Quit: Bridge terminating on SIGTERM]
cperon has quit [Quit: Bridge terminating on SIGTERM]
matiop6[m] has quit [Quit: Bridge terminating on SIGTERM]
kiwi_29_[m] has quit [Quit: Bridge terminating on SIGTERM]
hmw[m] has quit [Quit: Bridge terminating on SIGTERM]
protu[m] has quit [Quit: Bridge terminating on SIGTERM]
Theo[m] has quit [Quit: Bridge terminating on SIGTERM]
jkorsnes[m] has quit [Quit: Bridge terminating on SIGTERM]
ThomasRoos[m] has quit [Quit: Bridge terminating on SIGTERM]
Peter[m]12345 has quit [Quit: Bridge terminating on SIGTERM]
danielt has quit [Quit: Bridge terminating on SIGTERM]
styloge[m] has quit [Quit: Bridge terminating on SIGTERM]
berton[m] has quit [Quit: Bridge terminating on SIGTERM]
hiyorijl[m] has quit [Quit: Bridge terminating on SIGTERM]
ejoerns[m] has quit [Quit: Bridge terminating on SIGTERM]
ericson2314 has quit [Quit: Bridge terminating on SIGTERM]
fmartinsons[m] has quit [Quit: Bridge terminating on SIGTERM]
zyga[m] has quit [Quit: Bridge terminating on SIGTERM]
Tartarus has quit [Quit: Bridge terminating on SIGTERM]
EwelusiaGsiorek[ has quit [Quit: Bridge terminating on SIGTERM]
shoragan[m] has quit [Quit: Bridge terminating on SIGTERM]
PascalBach[m] has quit [Quit: Bridge terminating on SIGTERM]
gstinocher[m] has quit [Quit: Bridge terminating on SIGTERM]
glgspg[m] has quit [Quit: Bridge terminating on SIGTERM]
mborzecki has quit [Quit: Bridge terminating on SIGTERM]
patersonc[m] has quit [Quit: Bridge terminating on SIGTERM]
gho has quit [Ping timeout: 240 seconds]
khem has joined #yocto
zyga[m] has joined #yocto
prabhakarlad has joined #yocto
gho has joined #yocto
<qschulz> Are we supposed to be able to share DL_DIR between concurrent builds?
Salamandar has joined #yocto
shoragan[m] has joined #yocto
patersonc[m] has joined #yocto
barath has joined #yocto
<qschulz> I'm seeing some weird file corruption
michaelo[m] has joined #yocto
ble[m] has joined #yocto
johankor[m] has joined #yocto
hiyorijl[m] has joined #yocto
danielt has joined #yocto
protu[m] has joined #yocto
ericson2314 has joined #yocto
Tartarus has joined #yocto
lrusak[m] has joined #yocto
EwelusiaGsiorek[ has joined #yocto
agherzan has joined #yocto
Saur[m] has joined #yocto
mrybczyn[m] has joined #yocto
T_UNIX[m] has joined #yocto
gstinocher[m] has joined #yocto
<qschulz> one server has an ext4 filesystem exported as nfs to server 2
fmartinsons[m] has joined #yocto
Theo[m] has joined #yocto
<qschulz> (kirkstone 4.0.5)
kiwi_29_[m] has joined #yocto
matiop6[m] has joined #yocto
PascalBach[m] has joined #yocto
mborzecki has joined #yocto
ramacassis[m] has joined #yocto
cperon has joined #yocto
aleblanc[m] has joined #yocto
hmw[m] has joined #yocto
<rburton> the AB does that, nfs share for DL_DIR
fabatera[m] has joined #yocto
<rburton> has to be a working nfs server with locking that works
<qschulz> where is the original DL_DIR filesystem located?
glgspg[m] has joined #yocto
esben[m] has joined #yocto
<qschulz> and is the server where it is located running Yocto builds?
ThomasRoos[m] has joined #yocto
<qschulz> rburton: would you happen to have the mount options?
janvermaete[m] has joined #yocto
ejoerns[m] has joined #yocto
<qschulz> we are running nfs 4.2 so no explicit locking in mount options AFAICT
<rburton> in the AB? there's a NFS appliance that just hosts sstate and dl_dir, and the clients all mount it
Peter[m]1 has joined #yocto
styloge[m] has joined #yocto
<qschulz> yeah, that's a different scenario from us (though we could probably replicate this with a local NFS mount)
berton[m] has joined #yocto
<rburton> if locking works then writes should be atomic
<qschulz> We did a local wget of this same file on both servers at the same time and the data was valid at the end of both downloads
starblue has quit [Ping timeout: 248 seconds]
<rburton> unless the shallow tarball stuff is racy and nobody noticed
<qschulz> rburton: yeah but that's locks on two different filesystems
<qschulz> so I wouldn't trust it too much (though the wget test we did surprisingly worked OK)
johankor[m] is now known as jkorsnes[m]
<qschulz> (I mean "surprisingly" because that was the culprit I had thought of initially)
starblue has joined #yocto
grma has joined #yocto
jclsn has quit [Ping timeout: 260 seconds]
rber|res has quit [Remote host closed the connection]
florian_kc has joined #yocto
<RP> qschulz: the nfs server is a dedicated BSD based nas appliance
<RP> qschulz: I'd be tempted to print out the lockfile the fetcher is using (ud.lockfile) and see if there are conflicts. It is possible multiple URLs are mapping back to the same mirror tarball name
<RP> there have been a lot of changes (submodules, shallow clones) to the fetcher and nobody pays much attention to whether it works with the core lockfile
<RP> it looks like it locks based on ud.clonedir for git
* RP notices several potential bugs looking at the git fetcher code and feels sad/depressed
hcg has quit [Ping timeout: 260 seconds]
Guest36 has quit [Ping timeout: 260 seconds]
<qschulz> RP: the wget fetcher is racy
<qschulz> rburton: ^
<RP> qschulz: can you be more specific?
<qschulz> RP: yes, just in a call at the same time
<RP> qschulz: two questions - a) you are using the top level api, not the wget fetcher directly? and b) which version are you looking at, I added atomic move code in a recent version
<qschulz> so, I have one server starting a download of the file, which then DOES exist
<qschulz> the second server is starting to download the file too and sees it exists
<RP> qschulz: there is locking in the higher level function
<qschulz> so uses wget -c
<qschulz> which opens the file in append mode
<qschulz> RP: we're using kirkstone 4.0.5
<RP> qschulz: well, my point about high level locking stands
<qschulz> RP: high level locking on the fs?
<RP> qschulz: in the bitbake code. The wrapper around download() in __init__.py is meant to hold a lock so only one download for a given file can be called at once
jclsn has joined #yocto
<qschulz> RP: looking at it, also keep in mind this is using a premirror, so might have a slightly more convoluted code path
<RP> qschulz: hence my comments above about looking at what it is using for ud.localfile
<RP> er, ud.lockfile
<RP> qschulz: I really should have explained that in the commit message :(
<RP> basically if you expose DL_DIR over http without locking you need that
<qschulz> Not over HTTP, over NFS for us
<qschulz> If I were to provide a mirror, I would do the BB_GENERATE_MIRROR_TARBALLS stuff and rsync it to a dir exposed by a webserver
<qschulz> but thanks, will hopefully keep this in mind, should my future me have different plans :)
<RP> qschulz: right, some mirrors (e.g. the YP one) do share DL_DIR directly over http which was causing occasional race issues
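For context, the high-level locking being discussed is an exclusive lock held around each download. A self-contained sketch of that pattern using flock(2), similar in spirit to bitbake's bb.utils.lockfile()/unlockfile() helpers (this is not bitbake's actual code; the path and names are illustrative):

```python
#!/usr/bin/env python3
# Sketch: serialize downloads of one file with an exclusive flock(2)
# lock, the lock family bitbake's helpers use. Not bitbake code.
import fcntl

def locked_download(lockpath, download):
    with open(lockpath, "a+") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)   # block until we own the lock
        try:
            download()                   # only one fetcher runs at a time
        finally:
            fcntl.flock(lf, fcntl.LOCK_UN)

# usage sketch (hypothetical path and fetch step):
locked_download("downloads/foo.tar.gz.lock",
                lambda: print("fetching foo.tar.gz..."))
```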
* qschulz looks into how to add patches to poky with kas...
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 255 seconds]
prabhakarlad has quit [Quit: Client closed]
tomzy_0 has quit [Quit: Client closed]
GuestNew has joined #yocto
<GuestNew> Hi, the OOM killer is invoked during a custom package build (arm-cowt-linux-gnueabi-g++: fatal error: Killed signal terminated program cc1plus). If I re-run the build a couple of times, no issue. Is it possible to reduce the number of threads for only one recipe?
<rburton> sure, drop PARALLEL_MAKE
<rburton> or just reduce BB_NUMBER_THREADS globally
<LetoThe2nd> rburton: lame. buy more RAM.
<rburton> both default to number-of-cores, which might be too much if you don't have enough RAM
prabhakarlad has joined #yocto
<rburton> buying more RAM is the true answer
<rburton> LetoThe2nd: in writing my slides i discovered that 1tb of nvme ssd is <£100, so I've officially dropped my "buy lots of ram, use tmpfs" tip for fast builds. :)
<LetoThe2nd> rburton: lame, writing slides less than 24h before presenting.
<rburton> ndec will go HULK SMASH if they're not uploaded by friday
<LetoThe2nd> rburton: now i want to see that.
<GuestNew> rburton I'd rather not reduce it globally (if a per-recipe setting is possible)
<GuestNew> LetoThe2nd already 64 GB... should be enough
<qschulz> GuestNew: how many cores and are you building a browser or complex C++ applications?
denisoft81 has joined #yocto
<GuestNew> qschulz max of my i9-10980XE so 64
<rburton> GuestNew: globally, set BB_NUMBER_THREADS to 32 and PARALLEL_MAKE to -j32.
<rburton> you'll oom less and i expect the build will be no slower
<rburton> hell, 16 will be fine
<GuestNew> rburton ok thanks i will make a try
<LetoThe2nd> GuestNew: for building complex C++ applications, 2-4 GB of RAM per thread should be allocated. And I can confirm: we ran extensive tests of parallelism from 1 up to 256, and the gain from everything above roughly 20 (give or take) is very much negligible these days.
<qschulz> GuestNew: on a consumer-grade PC, we usually recommend twice the number of CPU cores in GB of RAM, but once you enter server grade, you need much more RAM
<qschulz> and since I don't have much experience with server grade and we are not building complex things here, I'll let the pro handle this question/follow-up :)
<RP> rburton: I should buy some fast disks for my builder
<rburton> RP: 1tb nvme wd black, £103 quid on scan
<GuestNew> can EXTRA_OEMAKE be used if I want to compare? rburton LetoThe2nd qschulz thanks for the support, guys!
<rburton> GuestNew: no, set the PARALLEL_MAKE and BB_NUMBER_THREADS
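A sketch of both variants in local.conf, including the per-recipe override GuestNew originally asked about (the 32/4 values and the recipe name are illustrative):

```
# conf/local.conf -- global caps (both default to the core count):
BB_NUMBER_THREADS = "32"
PARALLEL_MAKE = "-j 32"

# or limit make parallelism for one memory-hungry recipe only
# ("myrecipe" is a hypothetical recipe name):
PARALLEL_MAKE:pn-myrecipe = "-j 4"
```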
<LetoThe2nd> GuestNew: there is a testing script somewhere in the wiki, rburton actually wrote it AFAIK
<qschulz> RP: https://paste.ack.tf/3a7ae2 for the diff
<rburton> LetoThe2nd: nope, jama iirc
<rburton> or was it dvhart
<rburton> might have been darren
<LetoThe2nd> rburton: old white metalheads, flaky memory.
* LetoThe2nd builds on a relatively cheapo i7 hosted by hetzner these days.
<qschulz> RP: reminder that one system has this path as an ext4 (native obviously) and one as NFS mounted from the first system. Also, we mount those inside our containers for building
<qschulz> RP: lsof complains on the NFS share for the /mnt/nfs-shared-cache/yocto/downloads/git2/sourceware.org.git.binutils-gdb.git.lock
<qschulz> lsof: WARNING: can't stat() ext4 file system /var/lib/containers/storage/overlay
<qschulz> Output information may be incomplete.
<RP> qschulz: I can't tell just from the log whether the locks are the right ones to hold but it seems reasonable? Have you tried holding a lock from within the container and then on the nfs server ?
<RP> qschulz: i.e. actually just test the locking works
<RP> qschulz: the stat() failure sounds ominous
<rburton> yeah i wonder if the overlayfs stuff is breaking locks
<RP> qschulz: I was worried the shallow clone would cause a problem but it is holding two locks, one shallow, one not so it looks like that case is covered
<RP> qschulz: my favourite trick for debugging this is a time.sleep(100000) somewhere in the code when the lock is held, then you can poke elsewhere
<RP> rburton: that gl patch ;-)
<rburton> erm yeah
<qschulz> RP: I'm going the flock way in a shell within the container with mounted volumes for now :)
<qschulz> aaaaaaand the locks don't work
<qschulz> "flock -u plop -c 'for i in `seq 0 10`; do echo $i; sleep 1; done'" on server1 within container and also on server2 within container
<qschulz> both run concurrently
<qschulz> oh the stuff of nightmares
<RP> qschulz: that would be why things aren't working then :/
<qschulz> in two different containers on the same host, same behavior
<qschulz> RP: sorry for the scares then, "happy" it's an issue on "my" side :)
<RP> qschulz: np, I could believe we had problems in there!
<qschulz> RP: also tried to reproduce the parsing cache invalidation miss I had with U-Boot on plain poky but didn't manage to yet.. will let you know if/how I can
<RP> qschulz: ok, thanks
<qschulz> RP: nvm, I swapped -u and -n
<qschulz> (unlock and non-block)
<qschulz> so on the same server the locks work, but not across servers
<qschulz> should have been: flock -n plop -c 'for i in `seq 0 10`; do echo $i; sleep 1; done'
<qschulz> soooo, server1 with ext4, server2 and server3 with NFS. Containers on same server: locks ok. Containers on server2 and server3: locks ok. Containers on server1 and server2: locks NOK.
<qschulz> same outside of containers
<qschulz> RP: ok, so the issue is that NFS translates flock() locks into fcntl() locks, and bitbake uses flock()
<qschulz> see https://manned.org/nfs.5#head9 section "Using file locks with NFS"
<qschulz> and flock() locks aren't aware of fcntl() locks (and/or vice versa)
<qschulz> using this program https://www.informit.com/articles/article.aspx?p=23618&seqNum=4 which uses the fcntl() locking call, the lock works across servers with ext4/nfs
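A self-contained sketch of the fcntl-style (POSIX record) lock that works across the ext4/NFS boundary in qschulz's test, as a counterpart to the flock(2) sketch earlier; the lockfile path echoes the one mentioned above and is an assumption:

```python
#!/usr/bin/env python3
# POSIX record lock via fcntl(); this is the lock family NFS maps its
# locking onto, unlike flock(2). Run on both servers to verify that the
# second instance blocks until the first releases the lock.
import fcntl
import time

with open("/mnt/nfs-shared-cache/test.lock", "w") as lf:
    fcntl.lockf(lf, fcntl.LOCK_EX)   # lockf() wraps fcntl() record locks
    print("lock acquired, holding for 10s...")
    time.sleep(10)
    fcntl.lockf(lf, fcntl.LOCK_UN)
print("lock released")
```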
denisoft81 has quit [Quit: Leaving]
<RP> qschulz: I guess it is situation dependent as flock is working on the autobuilder
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
GuestNew has quit [Quit: Client closed]
d-s-e has joined #yocto
sakoman has joined #yocto
d-s-e has quit [Ping timeout: 246 seconds]
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
<qschulz> RP: it works when everything is using NFS, which is the case of the autobuilder
<qschulz> RP: but if you were to have one of the autobuilder use another fs instead of NFS, then you would have the issue
<qschulz> wondering if/how this should be documented
<RP> qschulz: good questions :/
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 252 seconds]
mthenault has quit [Quit: Leaving]
d-s-e has joined #yocto
GuestNew has joined #yocto
zpfvo has quit [Ping timeout: 268 seconds]
GuestNew has left #yocto [#yocto]
Nini has joined #yocto
zpfvo has joined #yocto
<Nini> Hello, is there a way to execute a bash/shell arithmetic operation on environment variables to set a bitbake variable's value? aka BITBAKE_VAR = $(SHELL_VAR1 + SHELL_VAR2 )
d-s-e has quit [Quit: Konversation terminated!]
<rburton> you mean $(( )), but sadly the parser doesn't handle that https://bugzilla.yoctoproject.org/show_bug.cgi?id=11314
<rburton> you can use expr though
<rburton> shouldn't be *that* difficult to fix the parser, it's just that nobody has done it
<Nini> rburton ok, thanks for the info. Agreed, it looks "simple", but "simple" is never simple
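Two hedged workarounds, assuming the values are already visible to bitbake (e.g. passed through from the environment): inline-Python expansion at parse time, or rburton's expr suggestion inside a shell task. Variable names here are illustrative:

```
# In a recipe or local.conf -- inline Python, evaluated when the
# variable is expanded:
BITBAKE_VAR = "${@int(d.getVar('VAR1') or 0) + int(d.getVar('VAR2') or 0)}"

# Inside a shell task, since the parser can't handle $(( )):
do_example() {
    result=$(expr ${VAR1} + ${VAR2})
    bbnote "sum: ${result}"
}
```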
xmn has joined #yocto
<rburton> actually this might be really simple
camus has quit [Remote host closed the connection]
camus has joined #yocto
<rburton> well okay moderately
* rburton wonders how pyshlex works
pgowda_ has joined #yocto
gho has quit [Ping timeout: 252 seconds]
nemik has quit [Ping timeout: 252 seconds]
nemik has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
nemik has quit [Ping timeout: 252 seconds]
nemik has joined #yocto
gho has joined #yocto
gho has quit [Ping timeout: 252 seconds]
gho has joined #yocto
prabhakarlad has quit [Quit: Client closed]
prabhakarlad has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
rcw has joined #yocto
goliath has quit [Quit: SIGSEGV]
zpfvo has joined #yocto
amitk has joined #yocto
Nini has quit [Quit: Client closed]
Herrie has quit [Ping timeout: 260 seconds]
kscherer has joined #yocto
Tokamak has quit [Read error: Connection reset by peer]
Tokamak has joined #yocto
Mickal[m] has joined #yocto
hcg has joined #yocto
gho has quit [Ping timeout: 260 seconds]
gho has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 252 seconds]
camus1 is now known as camus
zpfvo has quit [Quit: Leaving.]
zpfvo has joined #yocto
<fray> I'm trying to construct a disk (SD card) image using wic that has a boot and root split, with the boot partition being vfat. The problem I'm having is I can't figure out how to actually SPLIT the root to /boot. It appears I need to specify everything in IMAGE_BOOT_FILES, and those get copied from tmp/deploy, which isn't what I want. There are already packages installing the files into /boot. So if I boot the board
<fray> and UNMOUNT /boot, the files I actually want are there.
<fray> Anyone have any suggestions how to do this?
<qschulz> fray: meta-rockchip does this already
<qschulz> bootimg-partition might be the trick?
<fray> that is what I'm using...
<fray> I don't have this: --sourceparams="loader=u-boot"
<fray> but I'm not sure what that is
marek7 has joined #yocto
<marek7> hi, when I compile PHP in Yocto it's missing some DNS functions, compared to the same version compiled manually on x86:
<marek7> ```php -r 'var_dump(get_defined_functions());' | grep dns
<marek7>     string(16) "dns_check_record"
<marek7>     string(10) "checkdnsrr"
<marek7>     string(10) "dns_get_mx"
<marek7>     string(14) "dns_get_record"
<marek7> ```
<marek7> on my embedded device it returns empty
<marek7> any ideas if I'm missing some dependency maybe?
manuel1985 has quit [Ping timeout: 268 seconds]
<fray> qschulz: from a quick look, it looks like rockchip isn't pulling from the root (/) for /boot, but using IMAGE_BOOT_FILES from deploy as well..
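One way to populate /boot from the rootfs itself rather than from IMAGE_BOOT_FILES is wic's rootfs source plugin with --rootfs-dir pointing into the image's /boot, plus --exclude-path on the root partition. A hedged .wks sketch (disk name, sizes, and alignment are guesses; --exclude-path needs a reasonably modern oe-core):

```
# example.wks -- split /boot out of the already-populated rootfs
part /boot --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/boot --ondisk mmcblk0 --fstype=vfat --label boot --align 4096 --size 64M
part /     --source rootfs --exclude-path=boot/ --ondisk mmcblk0 --fstype=ext4 --label root --align 4096
```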
hcg has quit [Quit: Client closed]
gho has quit [Ping timeout: 268 seconds]
prabhakarlad58 has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
hcg has joined #yocto
alessioigor has quit [Quit: alessioigor]
gho has joined #yocto
gsalazar has quit [Ping timeout: 260 seconds]
amitk_ has joined #yocto
florian_kc has quit [Ping timeout: 246 seconds]
seninha has joined #yocto
amitk_ has quit [Ping timeout: 256 seconds]
gho has quit [Ping timeout: 260 seconds]
gho has joined #yocto
amitk_ has joined #yocto
gho has quit [Ping timeout: 268 seconds]
kris has quit [Quit: WeeChat 1.9.1]
hcg has quit [Quit: Client closed]
camus has quit [Ping timeout: 248 seconds]
gsalazar has joined #yocto
pgowda_ has quit [Quit: Connection closed for inactivity]
florian_kc has joined #yocto
mischief has joined #yocto
florian_kc has quit [Ping timeout: 260 seconds]
amitk_ has quit [Ping timeout: 252 seconds]
amitk_ has joined #yocto
Herrie has joined #yocto
frieder has quit [Remote host closed the connection]
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
mrpelotazo has quit [Read error: Connection reset by peer]
mrpelotazo has joined #yocto
leon-anavi has quit [Quit: Leaving]
zpfvo has quit [Quit: Leaving.]
florian_kc has joined #yocto
nemik has quit [Ping timeout: 252 seconds]
nemik has joined #yocto
seninha has quit [Remote host closed the connection]
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
florian_kc has quit [Ping timeout: 260 seconds]
prabhakarlad58 has quit [Quit: Client closed]
amitk_ has quit [Remote host closed the connection]
amitk has quit [Ping timeout: 246 seconds]
prabhakarlad has joined #yocto
ZI7 has joined #yocto
gsalazar has quit [Ping timeout: 252 seconds]
tor has quit [Quit: Leaving]
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
ZI7 has quit [Ping timeout: 260 seconds]
mvlad has quit [Remote host closed the connection]
marek7 has quit [Quit: Client closed]
florian_kc has joined #yocto
gsalazar has joined #yocto
FredericOuellet[ has joined #yocto
<JPEW> RP: I was looking at your branch for storing the cache data, and from my profiling it looks like the current bottleneck (with a few micro-optimizations I did) is now the de-pickling of data sent over the queue from the parser threads to the main thread. It's about 20% of the CPU time now, when it used to be much smaller
goliath has joined #yocto
<RP> JPEW: interesting, that wasn't what I was seeing :/. Was that with the -P option or something else?
<JPEW> RP: Ya, with -P. My parse takes about 17 seconds; _pickle.loads is 4s, select.poll is 3 and posix.read is 3
<JPEW> I think it's because we are sending more data from the parser threads?
<RP> JPEW: that would make sense, it just isn't where my profile shows the change :/
<RP> JPEW: if that is where it is showing, we may be better off sending the data after all the recipes are parsed, condensing it with de-dup before sending
<JPEW> Ya, I was trying to figure out how to do that
<JPEW> I have the deduping down to 2.2 seconds
<RP> JPEW: I have a patch for the other way, one second
<JPEW> RP: For reference, on master _pickle.loads only takes 0.708 of 11 total seconds
<RP> JPEW: https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222&id=637514ad85165ddab1dd78766c13db7f35a9a6fa but it doesn't do what we want. It does show the stream you'd want to tag on the end of though
* RP is getting confused between the different patches
<JPEW> The parsing threads are sending datastores across the queue right?
<JPEW> It's not a simple dictionary per se
<RP> JPEW: no, they send cache objects
<RP> JPEW: CoreRecipeInfo objects specifically
<RP> our datastore was never picklable directly
<JPEW> Got it
<JPEW> I couldn't quite figure out what change was making it so much bigger
<RP> JPEW: in modern 3.x python with more knowledge in the minds of developers, we might be able to do something better but in 2.4 this was scary enough :)
* JPEW face palms for trying to figure out what changes made the cache bigger by looking at master instead of RP's branch
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
<RP> JPEW: 2.7s for loads() and 2.6s for add()
<RP> JPEW: so I do see similar numbers
<JPEW> You can cut the add in half if you don't cache the dictionary keys; they don't actually appear to make a difference in the cache size
<JPEW> AFAICT anyway
<JPEW> Let me check again, maybe I had something else weird going on
<JPEW> Ya, doesn't appear to matter. I suspect they are unique enough you don't gain anything by caching them
<RP> JPEW: I was meaning to see which bit gave the best effect
<RP> JPEW: I think that bit will be python version dependent on the internal string interning
<JPEW> RP: Maybe; I don't think string interning is used when de-pickling (not a compile-time constant)
<JPEW> RP: I think the rest of add is the conversion to frozensets now
<JPEW> (with my micro-optimizing anyway)
<RP> JPEW: I have https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222&id=746d455a5ece241e950cf273012ecd33d62034a5 pending to try and be more consistent with the frozensets
<JPEW> AH, ya that would help
<JPEW> Well, or move the problem elsewhere ;)
<RP> JPEW: well, we already create sets in the other places so I doubt it makes things much worse
* RP was going to try and profile that in isolation
<JPEW> It seems to save a little bit of time
<RP> JPEW: cool. It took me an age to debug where deps were coming from and that magic sorted()
<RP> JPEW: https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222&id=d93aab4f5f3dc5176edd87f4fd97c7ecbf74fb5f was my last experiment and about as fast as I could get it
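A self-contained sketch of the batch-and-de-dup idea floated above: pickle memoizes by object identity, so interning equal values to one shared instance before serializing makes repeats encode as cheap back-references. The record shape is a hypothetical stand-in for CoreRecipeInfo, not bitbake code:

```python
#!/usr/bin/env python3
# Interning equal frozensets shrinks the pickled batch, because pickle's
# memo is id-based: a shared instance serializes once, then as refs.
import pickle

def intern_value(value, _cache={}):
    # return the canonical instance for an equal, already-seen value
    return _cache.setdefault(value, value)

records = [{"deps": frozenset({"gcc-native", "quilt-native"})}
           for _ in range(1000)]

raw = len(pickle.dumps(records))
for r in records:
    r["deps"] = intern_value(r["deps"])
deduped = len(pickle.dumps(records))

print(raw, ">", deduped)   # the interned batch pickles noticeably smaller
```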
prabhakarlad has quit [Quit: Client closed]
kanavin_ has joined #yocto
jsandman8 has joined #yocto
rhadye_ has joined #yocto
ric96_ has joined #yocto
Net147_ has joined #yocto
mrnuke_ has joined #yocto
eggman_ has joined #yocto
yocton_ has joined #yocto
dlan_ has joined #yocto
zbr_ has joined #yocto
Vonter_ has joined #yocto
zkrx has quit [Killed (NickServ (GHOST command used by zkrx_))]
zkrx has joined #yocto
PascalBach[m] has quit [*.net *.split]
ramacassis[m] has quit [*.net *.split]
kiwi_29_[m] has quit [*.net *.split]
Saur[m] has quit [*.net *.split]
kanavin has quit [*.net *.split]
dlan has quit [*.net *.split]
rhadye has quit [*.net *.split]
DvorkinDmitry has quit [*.net *.split]
Vonter has quit [*.net *.split]
ric96 has quit [*.net *.split]
Net147 has quit [*.net *.split]
mrnuke has quit [*.net *.split]
jsandman has quit [*.net *.split]
Ad0 has quit [*.net *.split]
olof has quit [*.net *.split]
eggman has quit [*.net *.split]
yocton has quit [*.net *.split]
rhadye_ is now known as rhadye
eggman_ is now known as eggman
ric96_ is now known as ric96
jsandman8 is now known as jsandman
prabhakarlad has joined #yocto
Ad0 has joined #yocto
kiwi_29_[m] has joined #yocto
DvorkinDmitry has joined #yocto
PascalBach[m] has joined #yocto
Saur[m] has joined #yocto
goliath has quit [Quit: SIGSEGV]
eLmankku has quit [Ping timeout: 268 seconds]
ramacassis[m] has joined #yocto
Nick13 has joined #yocto
Nick13 has quit [Client Quit]
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 248 seconds]
eLmankku has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
nemik has quit [Ping timeout: 248 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
eLmankku has quit [Ping timeout: 260 seconds]
eLmankku has joined #yocto