<kanavin>
luc4, remove vulkan from DISTRO_FEATURES and it will be removed
<kanavin>
'bitbake -e qtbase' will show how DISTRO_FEATURES is formed
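The expanded output of `bitbake -e` is very long; filtering it is usually enough. A hedged sketch of how one might inspect it (the exact layout of the variable-history comments varies between releases):

```shell
# Save the expanded environment once, then search it
bitbake -e qtbase > qtbase-env.txt

# Final values
grep '^DISTRO_FEATURES=' qtbase-env.txt
grep '^PACKAGECONFIG=' qtbase-env.txt

# The variable history (how each value was assembled, and from which
# file) appears in the comment block directly above each assignment
grep -B 20 '^PACKAGECONFIG=' qtbase-env.txt
```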
pabigot has quit [Ping timeout: 250 seconds]
lighteagle has joined #yocto
ptsneves has joined #yocto
frieder has joined #yocto
ptsneves has quit [Read error: Connection reset by peer]
Vonter has joined #yocto
<luc4>
kanavin: thanks, unfortunately that is the first thing I tried: DISTRO_FEATURES:remove = " vulkan"
<luc4>
kanavin: but this is what I get: ERROR: Nothing PROVIDES 'vulkan-loader' (but /workspace/meta-qt6/recipes-qt/qt6/qtbase_git.bb DEPENDS on or otherwise requires it)
pabigot has joined #yocto
<kanavin>
luc4, that should work, so you need to find out whether vulkan is enabled some other way in that recipe by inspecting the output of the above command
<luc4>
kanavin: that is pretty long. Are the first lines sufficient?
<kanavin>
I basically need to look at the values and formation of DISTRO_FEATURES and PACKAGECONFIG
speeder_ has quit [Ping timeout: 248 seconds]
leon-anavi has joined #yocto
lars__ has joined #yocto
<lars__>
Hello. Does anyone know how I can configure systemd to use services from /data partition? I have a read only root filesystem and want to install an application which runs as a service to /data
<lars__>
I do not want all the systemd services that are built as part of Yocto to live in /data, only allow my application service to be installed (and enabled) there
ptsneves1 has joined #yocto
<luc4>
kanavin: not sure, I'm trying to understand that output. Maybe this is sufficient: https://pastebin.com/5uhW1Srb
ptsneves1 is now known as ptsneves
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 255 seconds]
Kubu_work has joined #yocto
<dario>
lars__: i think a common approach is symlinking /etc/systemd/system to a real folder on /data
<lars__>
dario: but then all the services will live on /data. I only want one service to live there, the ones that are part of Yocto should still live inside the read only rootfs
speeder__ has joined #yocto
<luc4>
kanavin: I can see the place where PACKAGECONFIG is built, it seems like it is finding vulkan in DISTRO_FEATURES...
alessioigor has quit [Quit: alessioigor]
<luc4>
maybe DISTRO_FEATURES:remove = " vulkan" is the wrong syntax
<kanavin>
so you are removing x11 and wayland, but not vulkan
<kanavin>
where did you place the vulkan removal?
alessioigor has quit [Remote host closed the connection]
<kanavin>
the syntax is correct, the location might be wrong
alessioigor has joined #yocto
<luc4>
kanavin: I placed the removal in the bb file related to my image, so in the specific layer, while I removed x11 and wayland from my local.conf. Actually I'd prefer to move everything to my image bb file. Is that the wrong location?
<kanavin>
yes
<kanavin>
you cannot alter DISTRO_FEATURES from recipes
<kanavin>
it's a global setting
<luc4>
kanavin: ah I see, thank you very much for your help
<kanavin>
luc4, it's not a good idea to alter DISTRO_FEATURES from local.conf either
<kanavin>
if poky makes choices you do not like, make your own distro
<kanavin>
yocto is literally meant for making custom distributions
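For reference, a custom distro that inherits poky's policy but drops a feature can be a very small conf file. A minimal sketch, with the layer and distro names purely illustrative:

```conf
# meta-mydistro/conf/distro/mydistro.conf  (hypothetical layer and name)
require conf/distro/poky.conf

DISTRO = "mydistro"
DISTRO_NAME = "My Distro"

# Global policy belongs here, not in recipes or local.conf
DISTRO_FEATURES:remove = "vulkan"
```

It is then selected with `DISTRO = "mydistro"` in local.conf (or in a build template), keeping local.conf itself free of policy changes.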
speeder_ has joined #yocto
<luc4>
kanavin: I'll read into this, thanks
speeder__ has quit [Ping timeout: 250 seconds]
gsalazar has joined #yocto
vladest has quit [Read error: Connection reset by peer]
ykrons has quit [Server closed connection]
ykrons has joined #yocto
speeder__ has joined #yocto
gsalazar has quit [Read error: Connection reset by peer]
speeder_ has quit [Ping timeout: 255 seconds]
gsalazar has joined #yocto
ptsneves1 has joined #yocto
rfried has quit [Server closed connection]
rfried has joined #yocto
gsalazar has quit [Ping timeout: 246 seconds]
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 240 seconds]
olani has joined #yocto
rusam has joined #yocto
gsalazar has joined #yocto
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
gsalazar has quit [Remote host closed the connection]
prabhakarlad has quit [Quit: Client closed]
speeder__ has joined #yocto
rusam has quit [Ping timeout: 248 seconds]
speeder_ has quit [Ping timeout: 248 seconds]
rusam has joined #yocto
vladest has joined #yocto
rusam has quit [Client Quit]
nedko has joined #yocto
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 255 seconds]
mbulut has joined #yocto
pabigot has quit [Ping timeout: 246 seconds]
Schlumpf has quit [Quit: Client closed]
speeder__ has joined #yocto
slimak has quit [Ping timeout: 245 seconds]
speeder_ has quit [Ping timeout: 246 seconds]
zpfvo has quit [Ping timeout: 255 seconds]
l3s8g has quit [Remote host closed the connection]
l3s8g has joined #yocto
slimak has joined #yocto
prabhakarlad has joined #yocto
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 245 seconds]
slimak has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
speeder__ has joined #yocto
speeder_ has quit [Ping timeout: 246 seconds]
Vonter has quit [Ping timeout: 245 seconds]
Vonter has joined #yocto
<dario>
lars__: if it's just the one service, can you have the .service file on the / partition, and its ExecStart run a script from /data?
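That suggestion can be sketched as a unit file shipped in the read-only rootfs that only executes from /data; all paths below are illustrative:

```ini
# /lib/systemd/system/myapp.service  (lives in the read-only rootfs)
[Unit]
Description=My application (payload installed on /data)
# Don't start until the writable partition is mounted
RequiresMountsFor=/data

[Service]
ExecStart=/data/myapp/run.sh

[Install]
WantedBy=multi-user.target
```

All the Yocto-built services stay on the rootfs; only this one unit points at /data, and it can be enabled normally at image build time or at runtime.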
l3s8g has quit [Remote host closed the connection]
lars__ has quit [Read error: Connection reset by peer]
<luc4>
Hello! I'm creating my custom image with Yocto. I created a few custom layers, each with its own git repo. I also see, though, that building the system requires the files local.conf and bblayers.conf, and even all the layers, which were cloned at specific branches. What do you typically version in this case? I was thinking of creating a repo containing the conf directory and a submodule for each cloned layer. Is this the best practice?
<rburton>
you can use git submodules or kas or repo or oe-setup-layer to manage the layers and the revisions
<rburton>
best practice is to not commit bblayers.conf/local.conf
<rburton>
ie if you use kas then your kas conf defines the local.conf and bblayers.conf
<rburton>
oe-setup-layers encourages you to define a distro and write templates for those two files
<rburton>
etc
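As an illustration of the kas approach, a minimal project file pinning layers and revisions might look like this (URLs, branches, and the header version are placeholders; field names follow recent kas releases):

```yaml
# kas.yml -- kas generates bblayers.conf and local.conf from this
header:
  version: 14

machine: qemux86-64
distro: poky
target: core-image-minimal

repos:
  poky:
    url: "https://git.yoctoproject.org/poky"
    branch: kirkstone
    layers:
      meta:
      meta-poky:
  meta-custom:
    url: "https://example.com/meta-custom.git"
    branch: main
```

With this, `kas build kas.yml` clones the pinned layers and builds, so only this one file needs versioning.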
Vonter has quit [Ping timeout: 260 seconds]
Vonter has joined #yocto
kayterina has quit [Quit: Client closed]
vladest has quit [Read error: Connection reset by peer]
vladest has joined #yocto
alessioigor has quit [Quit: alessioigor]
kayterina has quit [Quit: Client closed]
alessioigor has joined #yocto
pabigot has joined #yocto
tgamblin has joined #yocto
Schlumpf has joined #yocto
kayterina has joined #yocto
<vvn>
rburton: I think you meant oe-setup-builddir
kayterina has quit [Quit: Client closed]
<JaMa>
RP: another interesting "timeout" with latest master .. https://paste.ack.tf/1ab0f3 I'm not complaining, the build was successful and super fast, but something doesn't look right :)
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Haxxa has quit [Ping timeout: 255 seconds]
<RP>
JaMa: we've seen that occasionally on the autobuilder. The (1) exit code most likely means python had some exception in the worker, but I've never captured it :(
<JaMa>
in cookerlog I see ValueError: too many values to unpack (expected 9)
<JaMa>
but in this one I have strong suspicion that something was modifying that builddir
kayterina has joined #yocto
<RP>
JaMa: the interwoven git output does look a bit worrying
<RP>
JaMa: the traceback is interesting. We could put some proper error handling around that so we'd get a nicer warning but I don't know how it would get that wrong :/
<RP>
that is saying the data on the cooker -> worker pipe is incorrect
<JaMa>
isn't that git output from pushing buildhistory?
<JaMa>
I've added the check for bitbake processes still running, but it's not propagated to all the CI jobs yet, so my build didn't see anything running when it started, but some other job could still modify metadata
<jclsn>
Where are the tpm2 machine features defined?
<jclsn>
Ah meta-security/meta-tpm2 somewhere I guess
ptsneves1 has quit [Quit: ptsneves1]
florian_kc has quit [Ping timeout: 246 seconds]
<jclsn>
Ah I guess ${@bb.utils.contains('MACHINE_FEATURES', 'tpm2', 'packagegroup-security-tpm2', '', d)} means that if the machine feature is activated, the packagegroup is installed?
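That reading is right: `bb.utils.contains(variable, checkvalues, truevalue, falsevalue, d)` returns `truevalue` when every space-separated item in `checkvalues` appears in the variable's value. A simplified re-implementation of its semantics, for illustration only (the real function lives in bitbake's `bb.utils` and takes a datastore, not a plain dict):

```python
# Sketch of bb.utils.contains() semantics; 'd' here is a plain dict
# standing in for the BitBake datastore.
def contains(variable, checkvalues, truevalue, falsevalue, d):
    present = set((d.get(variable) or "").split())
    wanted = set(checkvalues.split())
    return truevalue if wanted.issubset(present) else falsevalue

d = {"MACHINE_FEATURES": "keyboard screen tpm2"}
print(contains("MACHINE_FEATURES", "tpm2",
               "packagegroup-security-tpm2", "", d))
# -> packagegroup-security-tpm2
```

So with `tpm2` in MACHINE_FEATURES the packagegroup name is emitted into the install list; without it, the expression expands to the empty string.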
<luc4>
rburton: I'm reading what kas is, thanks for the info!
lexano has joined #yocto
<rburton>
zeddii: did you turn on CONFIG_DEBUG recently to chase a bug? can we turn it off again?
<zeddii>
no changes to the baseline configuration.
<zeddii>
not by me anyway.
wmills_ has joined #yocto
<RP>
zeddii: pesky gremlins getting in :)
wmills has quit [Ping timeout: 245 seconds]
<rburton>
zeddii: is CONFIG_DEBUG_PREEMPT on by default intentionally?
TundraMan is now known as marka
<zeddii>
only when the debug-kernel.cfg fragment is added, nothing in the kernel selects it
<zeddii>
but I do see: lib/oeqa/selftest/cases/runtime_test.py:KERNEL_EXTRA_FEATURES:append = " features/debug/debug-kernel.scc"
<zeddii>
so yah, intentional :)
rob_w has quit [Quit: Leaving]
Vonter has quit [Ping timeout: 255 seconds]
tgamblin has quit [Remote host closed the connection]
Xagen has joined #yocto
<jclsn>
JPEW: Any idea why bitbake virtual/kernel -c menuconfig doesn't work with Pyrex? It is stuck at "trying to run screen -r... "
sam has joined #yocto
<sam>
Hello! Does anyone have a good reference or guide for building a FIT image in Yocto?
frieder has quit [Remote host closed the connection]
Joel44 has joined #yocto
<Joel44>
Good day! I'd like to modify meta/recipes-devtools/python/python3/* but what recipe name do I give it? If I use python3 then it picks up meta/recipes-devtools/python/python3_3.11.2.bb instead
<Joel44>
Hmm... wait
<rburton>
assuming you mean the recipe that those patches apply to, then yes that's python3
goliath has joined #yocto
<Joel44>
I actually need to patch the recipe itself to edit the manifest json file.
<rburton>
edit the recipe then
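If the change has to live in a separate layer rather than directly in oe-core, the usual vehicle is a .bbappend plus FILESEXTRAPATHS; the layer path and file names below are hypothetical:

```conf
# meta-custom/recipes-devtools/python/python3_%.bbappend  (hypothetical)
# Make files next to this append visible to the recipe's file search path
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"

# e.g. carry a patch, or an edited copy of a file the recipe installs
SRC_URI += "file://my-manifest-tweak.patch"
```

The `%` wildcard keeps the append applying across recipe version bumps (e.g. python3_3.11.2.bb and later).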
<JPEW>
jclsn: Hmm, I'm not exactly sure. It works for me, but I'm using tmux, so maybe that's why?
amitk_ has joined #yocto
leon-anavi has quit [Quit: Leaving]
alessioigor has quit [Remote host closed the connection]
mbulut has quit [Ping timeout: 258 seconds]
Kubu_work has quit [Quit: Leaving.]
slimak has quit [Ping timeout: 255 seconds]
prabhakarlad has quit [Quit: Client closed]
rfuentess has quit [Remote host closed the connection]
Joel44 has quit [Quit: Client closed]
xmn has joined #yocto
Vonter has quit [Read error: Connection reset by peer]
zpfvo has quit [Quit: Leaving.]
ptsneves has quit [Ping timeout: 255 seconds]
vladest has quit [Ping timeout: 245 seconds]
<khem>
RP: I tried to check the ppc64 build on worker as I wanted to inspect some .so files but the build seems to be gone :(
<khem>
my local build with clang seems to work good I can launch rpm fine inside qemu
ptsneves has joined #yocto
Estrella has quit [Ping timeout: 250 seconds]
<RP>
khem: the builds cycle quickly and weekly maintenance does clean up the disks. We can run another?
olani has quit [Ping timeout: 245 seconds]
olani has joined #yocto
<khem>
yeah, that would be cool
l3s8g has joined #yocto
Circuitsoft has joined #yocto
pabigot has quit [Ping timeout: 246 seconds]
<khem>
RP: does oeqa have some knowledge/assumptions about libdir on the target?
<khem>
on ppc64 we have - BASELIB:libc-glibc:powerpc64le = "lib64"
<khem>
I built core-image-sato locally and I can do` ssh qemu dnf --help` fine
pabigot has joined #yocto
goliath has quit [Quit: SIGSEGV]
<dvergatal>
khem: reproducible tests go well with my patchset on glibc-2.38 so I dunno what is wrong with your tests
aak-rookie has joined #yocto
aak-rookie has quit [Quit: Client closed]
mbulut has joined #yocto
ptsneves has quit [Ping timeout: 246 seconds]
<khem>
dvergatal: that's good; however, the issue was seen in normal builds with the opkg backend. In fact, with the opkg patch (which is already upstream) disabled, the rest of your changeset worked fine
<khem>
It could also be differences in how different distros (Arch and Gentoo) configure glibc, but I doubt that
ptsneves has joined #yocto
<khem>
RP: testimage for core-image-sato worked on my local build for qemuppc64 on master + alex's master-next staging changes
<khem>
so failures are in testsdk
zelgomer has quit [Remote host closed the connection]