camus has quit [Remote host closed the connection]
camus has joined #yocto
nemik has quit [Ping timeout: 248 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
GNUmoon has quit [Ping timeout: 255 seconds]
camus has quit [Remote host closed the connection]
camus has joined #yocto
GNUmoon has joined #yocto
camus has quit [Remote host closed the connection]
camus has joined #yocto
camus has quit [Remote host closed the connection]
camus has joined #yocto
camus has quit [Ping timeout: 252 seconds]
camus has joined #yocto
money has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
money is now known as polo
polo is now known as Gambino
Gambino has quit [Quit: late]
<LetoThe2nd>
yo dudX
xmn has quit [Ping timeout: 272 seconds]
demirok has joined #yocto
bps has joined #yocto
thomasd13 has joined #yocto
Payam has joined #yocto
<Payam>
Hi, I have uploaded the downloads to an S3 bucket but I see that it still tries to download git repositories from somewhere
bps has quit [Ping timeout: 272 seconds]
amitk_ has joined #yocto
yashraj466 has joined #yocto
amitk has quit [Ping timeout: 260 seconds]
yashraj466 has quit [Client Quit]
invalidopcode has quit [Remote host closed the connection]
yashraj466 has joined #yocto
invalidopcode has joined #yocto
<Payam>
ERROR: bluez-glib-1.0+gitAUTOINC+045d4a1ffc-r0 do_fetch: Bitbake Fetcher Error: FetchError('Unable to fetch URL from any source.', 'git://gerrit.automotivelinux.org/gerrit/src/bluez-glib;protocol=https;branch=needlefish')
<Payam>
this is the problem
gho has joined #yocto
<Payam>
is it possible to download it manually?
goliath has joined #yocto
seninha has joined #yocto
<Payam>
what can I do to avoid this stuff?
<Payam>
the gerrit fetching gives me lots of errors.
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
zpfvo has joined #yocto
leon-anavi has joined #yocto
<qschulz>
Payam: did you set up the PREMIRRORS variable correctly?
mvlad has joined #yocto
tomzy_0 has joined #yocto
bps has joined #yocto
bps has quit [Changing host]
bps has joined #yocto
Saur[m] has quit [Quit: You have been kicked for being idle]
zpfvo has quit [Ping timeout: 260 seconds]
seninha has quit [Quit: Leaving]
seninha has joined #yocto
<Payam>
no
<Payam>
I haven't touched it, since I assumed that if I put all the downloads in S3 then I wouldn't need it.
<Payam>
and then I download the sstate and downloads to /tmp on an EC2 instance and then run it.
<qschulz>
you should use PREMIRRORS and SSTATE_MIRRORS
<qschulz>
this will download only what you need from your s3
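A minimal local.conf sketch of that setup, assuming an HTTP-accessible S3 bucket (the bucket hostname and paths are placeholders, not Payam's actual setup):

    PREMIRRORS:prepend = "\
        git://.*/.*   https://my-bucket.s3.amazonaws.com/downloads/ \
        http://.*/.*  https://my-bucket.s3.amazonaws.com/downloads/ \
        https://.*/.* https://my-bucket.s3.amazonaws.com/downloads/"
    SSTATE_MIRRORS = "file://.* https://my-bucket.s3.amazonaws.com/sstate-cache/PATH;downloadfilename=PATH"
    # on the machine that populates the bucket, also set:
    # BB_GENERATE_MIRROR_TARBALLS = "1"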
<Haxxa>
Any guess as to where I could find a watchdog program that restarts the device if an application is not running? I haven't been able to figure out where it would be; i.e. if I run /etc/init.d/scadaserver stop, 15 seconds later the device reboots.
<Haxxa>
Couldn't find anything that looks similar, I have full root access, so I can explore a fair bit
<qschulz>
Haxxa: if you are fine with a HW reset, look into HW watchdogs
<qschulz>
if your platform supports one
<qschulz>
you just need to write to /dev/watchdog a specific character every now and then to "ping it"
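A sketch of what that ping looks like from a shell (the device path and interval depend on the platform's watchdog driver and its timeout):

    # keep /dev/watchdog open on fd 3; if this loop ever stops feeding it,
    # the hardware watchdog resets the board once its timeout expires
    exec 3> /dev/watchdog
    while true; do
        printf '.' >&3
        sleep 10
    done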
<Haxxa>
qschulz I am trying to figure out what is causing the reboot, I want to prevent it
<qschulz>
ah my bad, misread
<Haxxa>
i.e. some vendor's device reboots 15 seconds after I stop their application.
<Haxxa>
I have been using grep and find to try and figure out what causes the watchdog reboot, but I haven't figured it out yet
<LetoThe2nd>
heh yeah i guessed that by now. probably the application just uses the watchdog device.
<LetoThe2nd>
so time for you to read up on how a watchdog actually works :-)
<Haxxa>
LetoThe2nd ideally I would like to confirm it is the watchdog device rather than a script, is there any way to figure this out? all the logs are written to volatile memory which makes debugging hard.
<LetoThe2nd>
Haxxa: well, you're not only a hacker, you're even a Haxxa. you certainly can find out, right?
<qschulz>
Haxxa: if it's going through the sysv reboot process properly (e.g. the script just calls "reboot" or something like that), just add a script that runs at the end of the poweroff process of the init system to move all logs to persistent storage
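A rough sketch of such a hook for a sysvinit system (the script name, runlevel links and the /data mount point are assumptions, not taken from Haxxa's device):

    #!/bin/sh
    # /etc/init.d/savelogs, linked as e.g. K99savelogs in /etc/rc0.d and /etc/rc6.d
    # so it runs late during halt/reboot and copies the volatile logs somewhere persistent
    case "$1" in
        stop)
            mkdir -p /data/logs
            cp -a /var/log/. /data/logs/
            sync
            ;;
    esac
    exit 0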
<LetoThe2nd>
hint - this is completely unrelated to Yocto, it is merely reverse engineering some linux device. there are certainly better places for that.
<Haxxa>
qschulz thanks, nice idea
<Haxxa>
LetoThe2nd Thanks that has got me onto the right path, "fuser /dev/watchdog" returns the pid of the vendors application. So it is rebooting due to the watchdog device :)
<LetoThe2nd>
have fun then.
* RP
wonders how many issues the bitbake threading changes are going to cause
<LetoThe2nd>
RP: n.
<JaMa>
FWIW: I've been using them since they were in master-next and haven't noticed any issues
amsobr has joined #yocto
florian_kc has joined #yocto
bps has joined #yocto
bps has joined #yocto
azcraft has joined #yocto
<RP>
JaMa: thanks, it helps to know they're working somewhere other than the autobuilder :)
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
amsobr has quit [Quit: Client closed]
d-s-e has quit [Ping timeout: 252 seconds]
camus has quit [Remote host closed the connection]
camus has joined #yocto
Payam has quit [Ping timeout: 272 seconds]
florian_kc has quit [Ping timeout: 252 seconds]
<rburton>
RP: so as oe-core has new layer.conf semantics, should master require layers to opt in with the compat names?
<rburton>
i mean mark as compatible
<rburton>
we've a layer that is marked as compat with langdale only, and predictably fails if you use it with master as it doesn't use the new syntax.
<RP>
rburton: good question, I'm not sure what you're suggesting we do? :/
<rburton>
make master only compatible with layers which are mickledore
<rburton>
so you hit an error case earlier, and things like layer index can notice this sooner
<RP>
rburton: oh, yes, we should drop langdale compatibility
<rburton>
well i've never seen do_rm_work fail before
<rburton>
.../package-index/1.0-r0/temp/run.do_rm_work.38677: 163: cd: can't cd to /builds/engineering/yocto/meta-arm/work/build/tmp/stamps/armv8r-poky-linux/package-index
<risca>
What would be the best approach to building a really ancient SDK based on yocto? The docs say that the supported distro is ubuntu12 :@ I've spent a day backporting various packages in oe-core to make the native packages build with a modern glibc/gcc/etc, but it seems to never end
<mborzecki>
risca: a container maybe?
<RP>
risca: probably find an old ubuntu VM/container ?
<risca>
Are there even any ubuntu mirrors still up?
<risca>
*searches online*
<mborzecki>
fwiw, it seems to be possible to docker pull ubuntu:12.04
<rburton>
yeah, a container running ubuntu 12.x is the one true way
<rburton>
you'll be in for a world of pain otherwise
<risca>
Tell me about it :(
<risca>
I've been committing "git cherry-pick" to muscle memory
<risca>
I'll try a container. Thanks! :D
<RP>
risca: the "modern" way would be buildtools tarball FWIW. I don't think we have one for that far back though
<risca>
Huh! Looks like there is a buildtools tarball available. I believe this SDK is based on Yocto-1.4 (Dylan)
<risca>
Thank you :D
<risca>
Woow! There's even documentation available online for this release!
florian_kc has joined #yocto
d-s-e has joined #yocto
<RP>
risca: I don't think at that age it included the compiler though? :(
<RP>
It might help a bit at least
<risca>
The SDK comes with a cross compiler. Or, at least a working download link for the compiler
<RP>
risca: I meant a native gcc but I guess the cross one from the x86 sdk could work at a push
<risca>
I was planning on "apt-get install build-essential" and hope for the best =)
<risca>
I'm writing a Dockerfile now
<risca>
I'm sure someone has done this before, but I couldn't find one. Shouldn't be that much work. The Dockerfile is only meant to install the yocto host dependencies
marc1 has joined #yocto
<rburton>
risca: there are the crops containers. you could try convincing moto-timo to add old releases to the build for people like yourself.
<risca>
the crops containers go back to 16.04. That might be a bit too modern for what I'm doing
<risca>
I might take a look at the Dockerfiles though
florian_kc has quit [Ping timeout: 252 seconds]
Payam has joined #yocto
<Payam>
I get this error now : ERROR: Variable PREMIRRORS_prepend file: /opt/actions-runner/_work/CC__metalayers/CC__metalayers/qemux86-64/conf/site.conf line: 4 contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake.
<qschulz>
Payam: please read the error message
<Payam>
yes, but I looked up the variables and this is the one to be used.
<qschulz>
Payam: where did you get that you needed _prepend?
<qschulz>
Payam: you're not using Yocto 3.2.3 (gatesgarth)
<Payam>
so what is the command now for PREMIRRORS?
<Payam>
it is with ":" now
xmn has joined #yocto
<Payam>
if I use a mirror, do I need to have the DL_DIR and sstate-cache any more? What would happen if I download the downloads and sstate-cache and run a mirror as well?
<Payam>
so that it only fetches the missing one from mirrors
<Payam>
?
<Payam>
qschulz, I am not really sure that the change to the mirrors does anything special
<rburton>
Payam: things from mirrors are fetched into DL_DIR
sakoman has joined #yocto
<Payam>
it is not so fast
camus has quit [Remote host closed the connection]
camus has joined #yocto
<Payam>
You are using a local hash equivalence server but have configured an sstate mirror
<Payam>
do I need to have BB_HASHSERVE=something in local.conf?
demirok has quit [Quit: Leaving.]
sgw has joined #yocto
AKN has joined #yocto
Estrella has joined #yocto
camus has quit [Remote host closed the connection]
camus has joined #yocto
seninha has quit [Ping timeout: 272 seconds]
<qschulz>
Payam: that's a later improvement, not required right now
<qschulz>
Payam: there's a special fetcher for s3, you probably just added http in PREMIRRORS instead of s3?
<qschulz>
probably need something like s3://something as the second operand
<qschulz>
actually, you probably should just remove PREMIRRORS:prepend and use the SOURCE_MIRROR_URL variable
<qschulz>
(and add INHERIT += "own-mirrors" in local.conf)
demirok has joined #yocto
<qschulz>
which is documented in the link I gave you yesterday
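That variant would look roughly like this in local.conf (the bucket URL is a placeholder; with the S3 fetcher mentioned above, an s3:// URL may also be usable):

    INHERIT += "own-mirrors"
    SOURCE_MIRROR_URL = "https://my-bucket.s3.amazonaws.com/downloads"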
<qschulz>
michaelo: we're missing documentation on s3 fetcher in bitbake docs
seninha has joined #yocto
leon-anavi has quit [Remote host closed the connection]
Ad0 has quit [Ping timeout: 260 seconds]
leon-anavi has joined #yocto
<Payam>
qschulz, is it possible to only download the dependencies and not build?
<JPEW>
Payam: The closest you can get is `bitbake --runonly fetch ...`
<JPEW>
Payam: But that may still build a few things
<TRO[m]>
Is it possible to reference a variable e.g. SRCREV from another recipe? something like SRCREV = xxx:SRCREV
<TRO[m]>
Or is there a workaround python function?
<JPEW>
TRO[m]: No, why do you need to do that?
AntA_ has quit [Ping timeout: 265 seconds]
<qschulz>
TRO[m]: recipe data is local, one recipe cannot impact another one
<qschulz>
if you tell us what you want to do, we may be able to guide you :)
<TRO[m]>
Have a recipe building samples for a lib and I want to keep the recipes for lib and examples separate. BUT they are in the same git repo. So I want to be able to devtool upgrade the lib recipe and automatically have a corresponding test recipe.
<rburton>
if they're in the same repo, a single recipe that builds both but puts them in separate packages would make everything easier
<qschulz>
TRO[m]: like rburton said. If you REALLY want them separate, you could have a common .inc file included by both recipes where you set the SRCREV, SRC_URI, etc...
<TRO[m]>
They are separate cmake projects in subdirs and I do not know an easy way to build them all in one recipe. Having a recipe per sample and an include for them all is what I use. This works great, but then I do not really want to split the git SRCREV out of the lib into a separate include for lib + samples.
<TRO[m]>
Thank you, btw!
<qschulz>
TRO[m]: why not?
<TRO[m]>
unsure if devtool upgrade still works then
<qschulz>
TRO[m]: bitbake does not care about your include files, it flattens everything out
<qschulz>
so essentially, for devtool, the recipe is a text file after all includes/inherits are done by bitbake
<qschulz>
it wouldn't know if there's an include file common to multiple things
<qschulz>
at least I don't see the issue here
<TRO[m]>
yes, I had that idea - but I was thinking maybe there is a way to reference other recipes' vars
<qschulz>
TRO[m]: nope, and on purpose
<qschulz>
TRO[m]: everything is sandboxed pretty well. For example, variables set in a task (or their modified contents) are only available in said task
<TRO[m]>
ok, so the only solution to my problem is to have a common include ;)
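A sketch of that layout, with all names hypothetical: the lib and samples recipes both require the same .inc that pins the repo and revision.

    # mylib.inc
    SRC_URI = "git://example.com/mylib.git;protocol=https;branch=main"
    SRCREV = "0123456789abcdef0123456789abcdef01234567"

    # mylib_1.0.bb
    require mylib.inc
    inherit cmake
    S = "${WORKDIR}/git"

    # mylib-samples_1.0.bb
    require mylib.inc
    inherit cmake
    DEPENDS = "mylib"
    S = "${WORKDIR}/git/samples"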
<rburton>
kanavin: just noticed that you sent a patch to gtk main yesterday that fixes a problem i was having with the gtk3 point upgrade. are we duplicating effort?
<rburton>
TRO[m]: no, you can duplicate and let devtool do the right thing. there might be a situation where you want different versions of the libraries vs examples?
<qschulz>
TRO[m]: the second solution is the one suggested by rburton :)
<kanavin>
rburton, it's held up only by repro fails in ffmpeg otherwise good to go
<TRO[m]>
rburton: yes, that is then the problem. lib vs. samples version.
<TRO[m]>
will try the common include approach
AKN has quit [Read error: Connection reset by peer]
<qschulz>
TRO[m]: wait... is an old devtool version of lib vs the current version in the layer for samples a scenario?
<qschulz>
I mean, one you would like to avoid?
<qschulz>
because if you want to make absolutely sure that lib and samples recipes are in sync even with outdated devtool, only rburton's suggestion will work
<qschulz>
e.g. you do a devtool modify lib when lib+samples is v1
<qschulz>
then you update your layers and you get lib+samples v2 in it
<qschulz>
but bitbake will still take lib v1 from your devtool workspace, but samples from your layer, thus v2
<TRO[m]>
yes, but I'm talking about just the devtool upgrade use case
dgriego has joined #yocto
dgriego has quit [Read error: Connection reset by peer]
dgriego_ has joined #yocto
d-s-e has quit [Quit: Konversation terminated!]
<kanavin>
rburton, not to discourage you from doing updates, if you're happy with mine, there's a few more that I didn't do (run 'devtool check-upgrade-status' on top of my branch)
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
yashraj466 has quit [Quit: Client closed]
Estrella has quit [Read error: Connection reset by peer]
Estrella has joined #yocto
locutusofborg_ has joined #yocto
locutusofborg_ is now known as LocutusOfBorg
LocutusOfBorg is now known as locutusofborg
locutusofborg is now known as LocutusOfBor
LocutusOfBor is now known as LocutusOfBorg
LocutusOfBorg has joined #yocto
LocutusOfBorg has quit [Changing host]
LocutusOfBorg is now known as LocutusFfBorg
LocutusFfBorg is now known as LocutusOfBorg
<TRO[m]>
qschulz, rburton and probably kanavin: ok, cool I'm happy with the common include file approach. Thank you. The only thing I do not like is that I have a version.inc file containing SRCREV and this file does not have a version number in the filename. The recipe including this inc does have a version, but does not contain the revision ;)
<TRO[m]>
Also tested the devtool upgrade - works perfectly.
<TRO[m]>
Have a great day!
<qschulz>
TRO[m]: have your inc file carry the version number too
<kanavin>
TRO[m], I am missing the context, but if it works perfect, then you're welcome :)
<qschulz>
then require myinc_${PV}.inc in your lib_v1.0.bb recipe
<kanavin>
qschulz, that might actually break devtool upgrades
<kanavin>
generally messing about with $PV is not recommended
<qschulz>
kanavin: ack, thx for the heads up :)
<TRO[m]>
will try - just a moment ;)
<kanavin>
qschulz, devtool might be clever enough to rename the includes, or it might not. I do not remember that, and I would opt for not doing risky things :)
<qschulz>
TRO[m]: see kanavin warning though!
<kanavin>
in general upgrades are prone to tripping on all kinds of corner cases, so your recipe must be as standard and simple as you can make it.
<kanavin>
(I mean devtool-driven upgrades)
<TRO[m]>
totally agree!! !! !!!!
xeche has joined #yocto
<bps>
in my experience devtool doesn't follow .inc's well either
<bps>
I think it basically assumes you have one SRC_URI append and that's all
<bps>
per bb or bbappend
<xeche>
Hello people. I've some trouble with Kirkstone and the libbacktrace recipe. It works fine for build purposes (i.e. DEPENDS), but in attempting to install (IMAGE_INSTALL_APPEND) the built static library and header file(s) provided by libbacktrace-staticdev and libbacktrace-dev, both of them depends on what seems to be a base libbacktrace package.
<xeche>
The base package doesn't seem to exist. nothing provides libbacktrace = 1.0+git0+4f57c99971-r0.0 needed by libbacktrace-dev-1.0+git0+4f57c99971-r0.0.core2-64
<rburton>
most likely a bug in the recipe
<xeche>
Also the recipe seems to have a spelling error. EXTR_OECONF should be EXTRA_OECONF.
<rburton>
easy fix is ALLOW_EMPTY:${PN} = "1" to make an empty package
<rburton>
and yes, that's a typo
<rburton>
patches welcome :)
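The workaround as a bbappend in your own layer, a minimal sketch:

    # libbacktrace_%.bbappend
    # allow the (otherwise empty) base package to be generated so -dev/-staticdev
    # can depend on it
    ALLOW_EMPTY:${PN} = "1"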
<TRO[m]>
<TRO[m]> "will try - just a moment ;)" <- ok, this will not work. As expected.
<xeche>
rburton: Thanks a lot :) Looks like the quick fix works
Payam has quit [Quit: Leaving]
<rburton>
RP: you didn't change meta-yocto-bsp
<rburton>
ERROR: Layer yocto is not compatible with the core layer which only supports these series: mickledore (layer is compatible with langdale kirkstone)
<RP>
rburton: I knew there was something I was missing
<RP>
fixed
<moto-timo>
risca: the crops containers have been around for many years, so you can go back in git history.. but I just looked and the oldest ever was ubuntu-14.04 (the project started in 2015/2016)
<risca>
moto-timo: thanks for looking into it!
paulg has joined #yocto
<JaMa>
risca: I also have a Dockerfile for 12.04 ubuntu, you just need to update the apt sources to be able to install build-essential etc. (e.g. RUN sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list)
AntA has joined #yocto
<JaMa>
and still had to cherry-pick a bunch of fixes to build a dunfell-based image (e.g. due to upstream sources being long gone)
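A minimal Dockerfile sketch along those lines (the package list is approximate and will likely need extending for a full Yocto build host):

    FROM ubuntu:12.04
    # archive.ubuntu.com no longer serves 12.04, so point apt at old-releases first
    RUN sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list && \
        apt-get update && \
        apt-get install -y build-essential git python diffstat texinfo gawk chrpath wget cpio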
<risca>
JaMa: thanks! I got that far already. It's working surprisingly well =) right now I'm patching some of the SDK setup scripts. They have a hardcoded download URL for the latest version of Google's repotool, which requires >=python-3.5. Latest python available in Ubuntu-12.04 is 3.2
paulg has quit [Quit: Leaving]
gsalazar has quit [Ping timeout: 264 seconds]
bps has quit [Ping timeout: 255 seconds]
goliath has quit [Quit: SIGSEGV]
gsalazar has joined #yocto
gho has quit [Quit: Leaving.]
zpfvo has quit [Quit: Leaving.]
<kergoth>
Ugh, templateconf handling broke my scripts, lovely. It won't accept an absolute path, but a relative path has to be inside of oe-core. My layers aren't cloned inside of oe-core.
<kergoth>
Am I missing something here?
Guest29 has joined #yocto
gsalazar has quit [Ping timeout: 264 seconds]
Guest29 has quit [Changing host]
Guest29 has joined #yocto
gsalazar has joined #yocto
<kergoth>
hmm, meta-oe needs updating for mickledore layer compat, i think
<JaMa>
yes, many layers need that, I've sent the changes to all I use a few minutes ago
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
Notgnoshi has quit [Ping timeout: 264 seconds]
xmn has joined #yocto
dgriego_ is now known as dgriego
Payam has joined #yocto
pabigot has quit [Remote host closed the connection]
pabigot has joined #yocto
<Payam>
is there a way to list which packages are fetched from the download directory and which ones are fetched from the internet?
<rburton>
not without grepping log.do_fetch, but why would you need to know?
<rburton>
you can force a build to be entirely local by just running all the fetch tasks (eg bitbake core-image-sato --runall fetch)
<Payam>
yes but it seems like a couple of packages are missing
<Payam>
I'm just tired of all that cloning
amitk_ has quit [Ping timeout: 256 seconds]
goliath has joined #yocto
mlaga97 has joined #yocto
<rburton>
Payam: if anything fetches more than once, you're either deleting DL_DIR or the recipe is fetching during the compile, so yocto can't cache it. that would be very bad form and is a bug.
<Payam>
is there a way to know how many packages should be fetched?
<Payam>
with their versions
<rburton>
that's just the value of SRC_URI for every recipe being built
<rburton>
you should say what you're actually trying to solve instead of asking questions and hoping we know what you're after
<Payam>
So I want to know how many packages should be fetched and that way I can look at the s3 bucket and see if they are the same number
<Payam>
Because each time it seems like something is not downloaded
<rburton>
but if you've an S3 bucket as a mirror then they'll be downloaded
<rburton>
the log.do_fetch for each recipe will tell you where the files came from, be it DL_DIR, or a mirror, or the actual upstream URL
<Payam>
yes but I upload packages manually from my pc to s3
<Payam>
and it seems like it removes stuff from downloads
<Payam>
I did a watch du -sh in that directory
<Payam>
and it went from 35G to 25G
<rburton>
what is "it"?
invalidopcode has quit [Read error: Connection reset by peer]
<rburton>
and if you're copying a DL_DIR to use as a mirror, remember to set BB_GENERATE_MIRROR_TARBALLS
invalidopcode has joined #yocto
<rburton>
it's easy to see if your mirror is being used: delete DL_DIR, do a bitbake myimage --runall fetch, grep log.do_fetch to verify the mirror is being used
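Roughly (the image name, bucket hostname and TMPDIR layout here are examples):

    rm -rf downloads/
    bitbake my-image --runall fetch
    # confirm the mirror shows up in the fetch logs rather than the upstream URLs
    grep -l 'my-bucket.s3.amazonaws.com' tmp/work/*/*/*/temp/log.do_fetch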
<Payam>
I downloaded my downloads from aws to a directory. And when building I tell bitbake to use that DL_DIR. When I ran the bitbake command it just removed 10GB and tried to fetch things. I used bitbake my-image --runall fetch
mvlad has quit [Remote host closed the connection]
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
<rburton>
use PREMIRRORS instead of downloading the entire thing each time, it will be faster as you won't need all of it
<rburton>
even if transit was free, it's not instant, and some tarballs are huge
<rburton>
i'm also assuming you're doing this as you're doing CI or something in AWS and so fetching from S3 is a lot faster
leon-anavi has quit [Quit: Leaving]
<Payam>
rburton, yes
<Payam>
is there any free mirror?
<rburton>
the yocto mirror, but it's not as fast as an on-site cache. often slower than just hitting the real URL.
<Payam>
let me see
<rburton>
it gets used by default if the real URL isn't available for some reason
<rburton>
eg when sourceforge goes down, the yocto mirror is used automatically
<Payam>
can you please provide me with the URL?
<Payam>
and does it mean that bitbake won't do any git cloning?
<rburton>
if a git repo is unavailable then it will download, slowly, a tarball from yoctoproject.org instead
<rburton>
as we maintain a mirror of all the sources
<rburton>
if you're doing CI in AWS then the easiest solution is an EFS mount you use as sstate and DL_DIR
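In local.conf that would be something like this (the mount paths are hypothetical):

    # shared storage mounted on every CI worker
    DL_DIR = "/mnt/efs/yocto/downloads"
    SSTATE_DIR = "/mnt/efs/yocto/sstate-cache"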