ad__ has quit [Quit: ZNC 1.7.2+deb3~bpo9+1 - https://znc.in]
ad__ has joined #yocto
<mihai>
yo
leon-anavi has joined #yocto
DasChaos[m] has joined #yocto
dev1990 has joined #yocto
rob_w has joined #yocto
<RP>
LetoThe2nd: seems fairly accurate :)
<LetoThe2nd>
RP: hehehe
<RP>
LetoThe2nd: I was tempted to start that "with the greatest respect," .... :)
<LetoThe2nd>
RP: I actually looked up the list to see if it stated "fairly accurate"
zpfvo has quit [Ping timeout: 250 seconds]
ad__ has quit [Quit: ZNC 1.7.2+deb3~bpo9+1 - https://znc.in]
zpfvo has joined #yocto
ad__ has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
kroon has quit [Quit: Leaving]
ad__ has quit [Quit: ZNC 1.7.2+deb3~bpo9+1 - https://znc.in]
ad__ has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
Schlumpf has joined #yocto
jqua[m] has quit [Quit: You have been kicked for being idle]
florian has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
tnovotny has joined #yocto
tnovotny has quit [Remote host closed the connection]
tnovotny has joined #yocto
vladest has quit [Remote host closed the connection]
vladest has joined #yocto
ad__ has quit [Quit: ZNC 1.7.2+deb3~bpo9+1 - https://znc.in]
ad__ has joined #yocto
ad__ has quit [Client Quit]
ad__ has joined #yocto
<Schlumpf>
Hi, what is the root path for the yocto patch mechanism? I set a relative path in the .patch file to point at the file to patch, but it can't find it
<qschulz>
Schlumpf: but an even better way to deal with this is to use devtool modify, then commit the change in the devtool workspace for this recipe, and then devtool finish or something like that
<qschulz>
(I never update recipes with devtool, only modify them and then handle the patching process myself...)
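For reference, a minimal sketch of the devtool flow qschulz describes; the recipe name "mypackage" and the layer path are placeholders, not from the log:
    devtool modify mypackage            # fetch/unpack the source into the devtool workspace
    cd workspace/sources/mypackage
    # edit the file that needs fixing, then record it as a git commit
    git add path/to/file.c
    git commit -m "Fix the thing"
    # write the commit out as a .patch and add it to the recipe's SRC_URI
    devtool finish mypackage ../meta-mylayer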
ad__ has quit [Quit: ZNC 1.7.2+deb3~bpo9+1 - https://znc.in]
ad__ has joined #yocto
Schlumpf has quit [Ping timeout: 256 seconds]
Schlumpf has joined #yocto
<rburton>
LetoThe2nd: that is 100% accurate fwiw
<LetoThe2nd>
rburton: hrhr
<Schlumpf>
LetoThe2nd: qschulz: I thought that too, and that seems to be correct. I dumped PWD in do_patch(), but it still doesn't find the file...
kroon has joined #yocto
<ad__>
gm, in dunfell, trying to produce sdk i get: nothing provides nativesdk-perl needed by nativesdk-autoconf-2.69-r11.x86_64-nativesdk
<ad__>
am i missing some layer for it ?
ZygmuntKrynicki[ is now known as zyga[m]
zpfvo has quit [Ping timeout: 240 seconds]
argonautx has joined #yocto
florian_kc has joined #yocto
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 252 seconds]
zpfvo has joined #yocto
GNUmoon has quit [Ping timeout: 276 seconds]
GNUmoon has joined #yocto
pgowda_ has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
Circuitsoft has quit [Quit: Connection closed for inactivity]
<agherzan>
RP: there is no way I can reproduce this pseudo race issue. Is there any chance I can pick up the pseudo dbs from the autobuilder?
Schlumpf has quit [Ping timeout: 256 seconds]
tnovotny has quit [Quit: Leaving]
GNUmoon has quit [Remote host closed the connection]
<RP>
agherzan: since this happened a while ago, I think it is unlikely now. I can't remember if we were able to save any of that builddir :/
<agherzan>
RP: could it be that we just fixed it inadvertently? It's just that not seeing anything suspicious in either the task code or the pseudo command checks could imply that.
<RP>
agherzan: it is possible some of the changes there could have let something slip through and cause an issue :/
tgamblin has quit [Quit: Leaving]
tperrot_ has quit [Quit: leaving]
tgamblin has joined #yocto
tprrt has joined #yocto
tprrt is now known as tperrot
goliath has joined #yocto
<rburton>
RP: sent an oeqa addition to build newlib as part of selftest. Very *very* minimal viable product, but it would have caught the license problem
<agherzan>
RP: fair point. Checking it a bit more.
tnovotny has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
kroon has quit [Quit: Leaving]
jclsn has joined #yocto
akiCA has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
codavi has joined #yocto
mvlad has quit [Read error: Connection reset by peer]
akiCA has quit [Ping timeout: 256 seconds]
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
osama has joined #yocto
rsalveti has joined #yocto
Minvera has joined #yocto
mvlad has joined #yocto
GillesM has joined #yocto
<vd>
Can an IMAGE_CMD mount the image file so extra commands can be run on it?
<RP>
vd: you don't normally have root privs so no
<vd>
ok
GillesM has quit [Quit: Leaving]
<vd>
RP: I would like to extend the btrfs support to add subvolumes, either with more control over the btrfs IMAGE_FSTYPES, or via wic (a subvolume can be described as a mountpoint from another parent partition). Do you have a guess where to start? The wic rootfs.py plugin maybe?
jatedev has joined #yocto
pgowda_ has quit [Quit: Connection closed for inactivity]
<vd>
erf no, wic doesn't mount its image files either; I guess scripts on the target system are the only option.
<rburton>
unless you can tell the btrfs commands to set up the stuff you want at creation time
ekathva_ has quit [Remote host closed the connection]
<vd>
rburton: I can create a service script which is run after the partition device and before initrd-root-fs.target for example
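A rough, untested sketch of such an initramfs unit, assuming a root partition labelled "rootfs" and a subvolume name "@home" (both placeholders):
    [Unit]
    Description=Create btrfs subvolumes before the real rootfs is mounted
    DefaultDependencies=no
    After=dev-disk-by\x2dpartlabel-rootfs.device
    Before=initrd-root-fs.target

    [Service]
    Type=oneshot
    ExecStart=/bin/sh -c 'mount -t btrfs /dev/disk/by-partlabel/rootfs /mnt && \
        btrfs subvolume create /mnt/@home; umount /mnt'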
<LetoThe2nd>
stupid me again. where's the sync call described?
<tlwoerner>
sgw: we're experiencing a heat wave in ontario, it's around 0°C. i too have my fan on to help keep it cool ;-)
<RP>
LetoThe2nd: which sync call?
<LetoThe2nd>
RP: found it, nevermind.
davidinux has quit [Ping timeout: 256 seconds]
<vd>
rburton: the same goes for a luks partition I assume?
<rburton>
you are a normal user at all stages of a build. luckily, disk image creation is normally not restricted to root, so if the tools can do it offline, you can do it at image generation time
<vd>
rburton: got it, I'll see what cryptsetup offers
<T_UNIX[m]>
JaMa: hey, is there a channel dedicated to meta-qt5 ?
<rburton>
no
pinenewb has joined #yocto
<pinenewb>
hi, I'm building a meta layer for the PineNote, but I'm running into an image tarball issue. The yocto-generated Linux tarball is doing everything correctly, but a couple of kernel module paths (specifically broadcom) are really long (over 100 chars). So when I go to unpack the rootfs tarball from the default PineNote Android OS, tar chokes when it hits those long file paths. I'm having to manually delete the directory from the
<pinenewb>
tarball in Linux before I copy & extract it
<pinenewb>
I recognize that this is not a yocto problem, but I'm wondering if anyone has suggestions on how to make this work a bit better out of the box
<pinenewb>
maybe change INSTALL_MOD_PATH to a shorter path, and then create a symlink dir to where the kernel expects the files to be? Part of the problem is my lengthy kernel version: 5.16.0-rc8-yocto-standard-pinenote
<qschulz>
pinenewb: please consider having it supported in meta-rockchip, I'm sure tlwoerner will be delighted to have one more device supported :)
<qschulz>
(likely requires most of the support to be upstream though, to be discussed with tlwoerner I guess)
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
<pinenewb>
yeah, ideally this would be supported in meta-rockchip or some other upstream layer. Is pinephone in meta-rockchip?
<pinenewb>
I haven't spent the time to figure out the pine64/rock64 relationship, but I don't think there's a meta-pine64, is there?
<pinenewb>
k, will look at merging my stuff into that meta layer instead of creating a new one. Any suggestions on the Android tar path length problem?
<khem>
can you re-state the problem
<pinenewb>
my output rootfs tarball has a couple of really long paths for broadcom kernel modules, and the default PineNote Android OS can't unpack the tarball without blowing up
<pinenewb>
I'm having to remove the files from the tarball manually before I can unpack it
<pinenewb>
it's totally _not_ a yocto problem, but I'm hoping there's a yocto solution
<khem>
sometimes the max file name length can come in as a problem
<tlwoerner>
yes, i always like seeing more MACHINEs in meta-rockchip :-D
<pinenewb>
I'm thinking that I might change INSTALL_MOD_PATH to a really short base path (/kmod) or something, then just create a symlink to the /lib/modules/<kernel_version> directory?
<tlwoerner>
i have a board on my desk that i haven't found the time to add in a while!
<pinenewb>
well I don't have too many rockchip boards other than this new PineNote, but I happen to have a bit of free time right now :)
florian_kc has quit [Ping timeout: 256 seconds]
<khem>
you might be hitting a PATH_MAX overflow
<pinenewb>
it seems like the OS itself can handle the path length, just not tar (untar). If I remove the "leaf" directory that blows up "tar -x", then I can drop it back in where it's supposed to be after the rootfs tarball successfully extracts
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
<pinenewb>
I dunno, maybe it's just a transient problem for me. I basically have the device booting Linux & connecting to WiFi at this point with SSH access, so maybe I don't need to use the Android partition any more
<pinenewb>
it's just slightly annoying that I'm getting a rootfs tarball that only extracts in *most* places
osama1 has joined #yocto
smsm has quit [Quit: Client closed]
osama has quit [Read error: Connection reset by peer]
osama2 has joined #yocto
florian has quit [Quit: Ex-Chat]
<qschulz>
pinenewb: for a quick and dirty hack you can always just mv the files in the do_install:append of your recipe into a directory with a shorter name and do the ln by hand
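A rough, untested sketch of that hack in a kernel bbappend; the broadcom path and the /kmod location are illustrative only, not from the log:
    # linux-yocto_%.bbappend
    do_install:append() {
        # move the deeply nested broadcom module directory somewhere short...
        install -d ${D}/kmod
        mv ${D}/lib/modules/${KERNEL_VERSION}/kernel/drivers/net/wireless/broadcom \
           ${D}/kmod/broadcom
        # ...and leave a relative symlink where depmod/modprobe expect the modules
        ln -rs ${D}/kmod/broadcom \
           ${D}/lib/modules/${KERNEL_VERSION}/kernel/drivers/net/wireless/broadcom
        # note: packaging (FILES / kernel-module split) would likely need adjusting too
    }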
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
osama1 has quit [Ping timeout: 240 seconds]
zpfvo has quit [Remote host closed the connection]
<pinenewb>
yeah, that would be an improvement. Very unfortunate that it's the broadcom kmods though (no network without fixing the links)
<khem>
which distro are you on ?
<pinenewb>
I do have filesystem access when I untar, so I can fix the links then
<pinenewb>
poky/honister
<smurray>
I assume the root issue is that the tar in Android sucks?
<pinenewb>
pulling in smaeul's kernel. all of the heavy lifting is already done in terms of device tree & kernel stuff
<pinenewb>
yes, that is the problem. tar in Android sucks
<pinenewb>
khem: that command doesn't work for me (on Android): /system/bin/sh: gcc: inaccessible or not found
<pinenewb>
I'm in an ADB shell with sudo, but I just don't know enough android
<pinenewb>
only know enough to get me back to linux :)
<pinenewb>
on my build host I get 4096
Alternate_Pacifi is now known as The_Pacifist
<khem>
yeah, and a bit of google-fu tells me it's 1024 for Android
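One non-compiler way to check the limit on a given system (assuming getconf is available there, which Android's shell may not provide):
    getconf PATH_MAX /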
<JaMa>
T_UNIX[m]: no
<pinenewb>
well this definitely seems to be a very narrow problem with the particular version of tar on Android, so I'll just come up with a hackaround for now
osama3 has joined #yocto
osama4 has joined #yocto
osama2 has quit [Read error: Connection reset by peer]
osama3 has quit [Ping timeout: 240 seconds]
osama4 has quit [Quit: WeeChat 3.4]
frieder has quit [Ping timeout: 256 seconds]
mckoan is now known as mckoan|away
prabhakarlad has quit [Quit: Client closed]
florian_kc has joined #yocto
florian_kc has quit [Ping timeout: 240 seconds]
rfuentess has quit [Quit: LIBERT..zzz]
florian_kc has joined #yocto
jatedev has quit [Quit: Client closed]
ex-bugsbunny has quit [Ping timeout: 256 seconds]
Vonter has quit [Quit: WeeChat 3.4]
pinenewb has quit [Quit: Leaving]
florian_kc has quit [Ping timeout: 250 seconds]
<agherzan>
at all: What do people use nowadays for CI on GitHub-hosted Yocto-based projects?
gchamp has quit [Quit: WeeChat 2.3]
<agherzan>
I tried GitLab CI - premium features and no support for fork MRs. I tried GitHub Actions - a big security mess on self-hosted workers, and it's hard to scale builds when cache is a concern.
<agherzan>
I'm currently thinking of giving drone.io a try. I know khem was saying good things about it. Do we have any other experience in that regard? And no, I'm not coming back to Jenkins.
<fray>
might not be what you are looking for.. but pushes and such kick off jenkins jobs in our environment.. then we group/cache/schedule builds and do them in our jenkins environment.
<fray>
ignoring the jenkins side, I have seen others do something similar.. where the CI workflows can handle the kick-off and the 'results'.. but the work itself is done externally in their own systems
<agherzan>
fray: I had something similar both on personal and work infras. And maintaining Jenkins is not a nice thing. Especially if you look into the security concerns of various aspects.
<fray>
ya, that is why I said from a practical standpoint ignore the build side.. it's all about kicking off a build with appropriate data (what needs to be built), scheduling it (do I do one build per change, bundle the changes, or some combination), and the results: success means what, failures mean what, and how are success/failure "logs" returned
<fray>
all of that can be done securely (you transform and transfer) or insecurely, where you just point people directly to a public instance of your CI system..
<fray>
the right answer is somewhere in the middle, but I've not seen a "correct" everyone should do this behavior..
<fray>
We've got tooling requirements for some open source repositories that require potentially proprietary (licensed) software, add to that the need to sanitize bad-actor contributions that could attempt to run things on your build environment.. so our "public" CI is being designed to work more in a DMZ where it's "inside", but not inside our network...
<fray>
that also gives us the ability to restrict what the machine can do on the external side.. such as limiting network access during the build, etc
<fray>
also helps make this whole thing agnostic to the actual "build" infrastructure (jenkins or otherwise).. but still allow connections w/ github, gitlab or "other"
sakoman has quit [Quit: Leaving.]
florian_kc has joined #yocto
<agherzan>
I like the agnostic aspect of that. Also yes, all those infra requirements are to be taken into consideration, but for public FOSS projects they probably don't matter that much. When you move into proprietary code and pipelines, things change pretty fundamentally.
<fray>
even for open source, as GitHub found, you really need CPU and process limits, network limits, etc. Bad actors will take over CI infrastructure to act as network proxies, download software, mine crypto, etc
<agherzan>
I'm more looking into what people use for their public projects in the yocto context
<vd>
I see recipes using custom FOO[bar], FOO_bar, or a mixture of both. Is there a recommended choice between crafted variable names and flags?
amitk_ has joined #yocto
risca has joined #yocto
risca has quit [Client Quit]
risca has joined #yocto
amitk has quit [Ping timeout: 256 seconds]
<kergoth>
vd: Generally flags should be used to define metadata about the metadata. SRC_URI[sha256sum] is the sum for the uri, etc. Sometimes it's overloaded to get access to a dictionary-like structure within the limitations of the bitbake file format, but that's not ideal unless it's in line with the original intention
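A small illustration of the distinction kergoth draws, with placeholder values:
    # a varflag: metadata about another variable (here, the checksum for a SRC_URI entry)
    SRC_URI = "https://example.com/foo-1.0.tar.gz"
    SRC_URI[sha256sum] = "0123456789abcdef...placeholder..."

    # a plain variable with a structured name: ordinary metadata, appendable like any other
    FOO_bar = "value"
    FOO_bar:append = " more"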
rob_w has quit [Read error: Connection reset by peer]
Minvera has quit [Quit: Leaving]
florian_kc has quit [Ping timeout: 256 seconds]
florian_kc has joined #yocto
GNUmoon has joined #yocto
mvlad has quit [Remote host closed the connection]
<vd>
kergoth: so in general you'd recommend defining FOO_somekey-bar instead of abusing FOO[bar]
prabhakarlad has joined #yocto
Bardon has quit [Ping timeout: 256 seconds]
leon-anavi has quit [Remote host closed the connection]
<smurray>
vd: using custom varflags like FOO[bar] in your own recipes can be problematic for downstream users as they can't be appended or easily removed, AFAIK it requires anon python
<smurray>
vd: so IMO the usage needs to be somewhat simple or not something you expect anyone to ever want to customize
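For example, changing or dropping a flag from a bbappend has to go through anonymous python, since there is no :append/:remove syntax for flags (a hypothetical sketch; variable and flag names are placeholders):
    python () {
        # override a flag set by the original recipe
        d.setVarFlag('SRC_URI', 'sha256sum', 'deadbeef...placeholder...')
        # or remove a custom flag entirely
        d.delVarFlag('FOO', 'bar')
    }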