ravanelli has quit [Remote host closed the connection]
ravanelli has joined #fedora-coreos
plarsen has quit [Remote host closed the connection]
ravanelli has quit [Remote host closed the connection]
jlebon has quit [Quit: leaving]
jpn has joined #fedora-coreos
bytehackr has joined #fedora-coreos
paragan has joined #fedora-coreos
b100s has joined #fedora-coreos
jpn has quit [Ping timeout: 268 seconds]
jpn has joined #fedora-coreos
jpn has quit [Ping timeout: 268 seconds]
b100s has quit [Ping timeout: 260 seconds]
plundra has quit [Remote host closed the connection]
plundra has joined #fedora-coreos
saroy has joined #fedora-coreos
bytehackr has quit [Ping timeout: 260 seconds]
sandipan_roy has joined #fedora-coreos
saroy has quit [Ping timeout: 260 seconds]
hotbox has joined #fedora-coreos
b100s has joined #fedora-coreos
c4rt0 has joined #fedora-coreos
jpn has joined #fedora-coreos
Orotheyshe[m] has quit [Quit: You have been kicked for being idle]
jcajka has joined #fedora-coreos
ravanelli has joined #fedora-coreos
ravanelli has quit [Remote host closed the connection]
ravanelli has joined #fedora-coreos
ravanelli has quit [Remote host closed the connection]
ravanelli has joined #fedora-coreos
mei has quit [Quit: mei]
mei has joined #fedora-coreos
jcajka has quit [Quit: Leaving]
ravanelli has quit [Remote host closed the connection]
ravanelli has joined #fedora-coreos
ravanelli has quit [Ping timeout: 260 seconds]
vgoyal has joined #fedora-coreos
ravanelli has joined #fedora-coreos
ravanelli has quit [Ping timeout: 256 seconds]
jcajka has joined #fedora-coreos
plarsen has joined #fedora-coreos
ravanelli has joined #fedora-coreos
ravanelli has quit [Ping timeout: 260 seconds]
palasso has joined #fedora-coreos
b100s has quit [Ping timeout: 240 seconds]
b100s has joined #fedora-coreos
ravanelli has joined #fedora-coreos
ravanelli has quit [Remote host closed the connection]
gursewak has quit [Ping timeout: 260 seconds]
ravanelli has joined #fedora-coreos
mheon has joined #fedora-coreos
jlebon has joined #fedora-coreos
<jlebon>
are github fonts rendering slightly differently for anyone else?
<dustymabe>
hadn't noticed it - will let you know if I do :)
<dustymabe>
jlebon: let's discuss a potential enhancement to the pipeline when you get a chance
<lucab>
jlebon: I got the same feeling this morning
<jlebon>
lucab: thanks, at least i know it's not just me :)
<jlebon>
dustymabe: sure
<dustymabe>
jlebon: the early archive + multi-arch fork in the build.Jenkinsfile job
<dustymabe>
i'm thinking about changing the semantics a bit to make that functionality a parameter
<dustymabe>
i.e. params.EARLY_ARCHIVE or something
<dustymabe>
it would default to true for manually triggered jobs (i.e. when we kick off production builds), but our automated jobs (mechanical, development, etc.) would set it to false
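A minimal sketch of what such a parameter could look like in the Jenkinsfile (the name comes from the discussion above, but the declaration and description are assumptions, not the actual pipeline code):

```groovy
// Hypothetical sketch: declare the EARLY_ARCHIVE parameter with a
// default of true. Manually triggered (production) builds would keep
// the default; automation (mechanical/development streams) would pass
// EARLY_ARCHIVE=false when it triggers the job.
properties([
    parameters([
        booleanParam(name: 'EARLY_ARCHIVE',
                     defaultValue: true,
                     description: 'Archive and fork multi-arch builds early in the job')
    ])
])
```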
<dustymabe>
i want to cut down on the number of builds that fail later (i.e. at testiso or cloud upload) when we already have multi-arch builds running for them
<jlebon>
i.e. move the multi-arch fork to lower down?
<dustymabe>
yeah, basically we move it to the end of the build job
<dustymabe>
the theory is that for builds kicked off by automation, people aren't waiting on them or watching them as closely, so kicking off the multi-arch jobs later is probably not a big deal
<dustymabe>
to summarize a scenario: "I don't want AMIs for aarch64 to get created if testiso for x86_64 fails"
<dustymabe>
another option is to somehow kill the multi-arch jobs if the x86_64 one fails
<dustymabe>
but I don't know how to do that
<jlebon>
that makes sense. just unsure about supporting it both ways. WDYT about switching it and seeing how it goes before we add a parameter for it for prod?
jpn has quit [Ping timeout: 248 seconds]
<dustymabe>
"see how it goes" i.e. is there a question on if it would work?
<jlebon>
on if it's worth the complexity of supporting both early or late archiving
<jlebon>
hmm, the killing idea is worth investigating. i know i've done it from the script console so it should be feasible
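The script-console approach jlebon alludes to could look roughly like this (the job name and the "cancel everything still running" selection are placeholders for illustration; a real pipeline hook would need to target only the matching downstream multi-arch builds):

```groovy
// Hypothetical Jenkins script-console sketch: stop any still-running
// builds of a downstream multi-arch job after the x86_64 build fails.
import jenkins.model.Jenkins

// Job name is an assumption, not the actual pipeline layout.
def job = Jenkins.instance.getItemByFullName('fedora-coreos/build-arch')
job.builds.findAll { it.isBuilding() }.each { build ->
    build.doStop()  // request an interrupt of the running build
}
```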
c4rt0_ has joined #fedora-coreos
<jlebon>
i should say early or late multi-arch forking. maybe throw up a PR and let's see what it looks like?
<jlebon>
but let me investigate the cancelling option
<dustymabe>
yeah. i'll throw up a PR - basically I think I'll just factor out the code into a function and then call it in one place or the other based on the parameter
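The refactor dustymabe describes might sketch out like this (stage and function names are made up for illustration; only the call-it-in-one-place-or-the-other shape is from the discussion):

```groovy
// Hypothetical sketch: factor the archive + multi-arch fork logic into
// one function, then call it early or late based on the parameter.
def archiveAndForkMultiArch() {
    // ... existing archive + multi-arch fork logic would move here ...
}

stage('Build x86_64') { /* ... */ }
if (params.EARLY_ARCHIVE) {
    // production: fork multi-arch builds as soon as possible
    archiveAndForkMultiArch()
}
stage('Test ISO') { /* ... */ }
stage('Cloud upload') { /* ... */ }
if (!params.EARLY_ARCHIVE) {
    // automation: only fork once the x86_64 build has fully passed
    archiveAndForkMultiArch()
}
```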
c4rt0 has quit [Ping timeout: 252 seconds]
<dustymabe>
jlebon: now that I look at GH a little longer I do believe the font has changed
<ramcq>
does it allow Flatpak to remove some of its OCI layer in favour of ostree speaking OCI natively?
<jlebon>
dustymabe: we'd need something more sophisticated of course. i guess doing this is related to, but independent of, the parameter option.
<ramcq>
I must say as the (indirect) owner of a 7TB ostree repo, I am a little nervous about this news in terms of tooling support and investment 😅
<ramcq>
(not that, if there is good delta support, moving to an OCI registry world is a bad idea...!)
<ramcq>
but that's certainly a Migration(tm)
<dustymabe>
ramcq: I don't think the pure ostree path is going away, but I understand your concern from an investment perspective
<walters>
ramcq: Good question! Indeed it does - this would allow replacing flatpak's OCI backend with a much more battle tested containers/image backend. Though, all this new stuff is in Rust.
<ramcq>
well, so is flat-manager... which I guess can (has to?) go away
<walters>
We're probably overdue for another ostree maintainers meeting with this as a top agenda item
<ramcq>
you thinking about the disruption of building flatpak for like Ubuntu 12.04 using rust?
<ramcq>
I don't think mclasen filled his req in the Workstation team for a Flatpak maintainer, and I don't know whether it even exists any more
<ramcq>
so like, if Flatpak needs to / should move some, question 0 is who should we be talking to
<ramcq>
(btw, I love the vision of this generally, making rpm-ostree less "special"... and it's actually weirdly not that far away from where Endless OS is, because you can just bust out apt/dpkg and modify the running system, and we have OCI images too, so it's probably pretty easy for us to make a bootable Endless OS with a cloud-native ostree)
<ramcq>
maybe the ostree repo we already have is already functionally unmanageable (like, we can't really prune it?) so having an orderly plan to migrate to something else would be a good idea
<ramcq>
and we already had to delete all of the non-x86_64 refs from the summary so it's not a real ostree repo any more
<ramcq>
at least, only 1/3 - 1/4 of it is :)
<walters>
*however* what you really want is a less dumb mapping, which requires "chunking", which in turn requires the build tooling to understand more of this. we've done that in https://coreos.github.io/rpm-ostree/container/#converting-ostree-commits-to-new-base-images, which understands how to use the RPM database to split things up into reproducible chunks... in theory perhaps we could try to make some of that logic package-system independent