<hartan[m]>
Hello,
<hartan[m]>
quick question on adding users to CoreOS via butane/ignition: How do I configure a user that can use sudo but still needs to enter their password to authenticate for sudo? Can ignition do this automatically or are there manual steps involved?
<walters>
hartan: our default configuration has:
```
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
```
so if you add them to that group you will get what you want
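For reference, a minimal Butane sketch of what walters describes: create the user in the `wheel` group and give it a password hash so sudo has something to authenticate against. The username, key, and hash below are placeholders, not taken from the discussion.
```sh
# Hypothetical Butane config: a user in the wheel group with a password hash,
# so sudo is allowed but still prompts for a password (the wheel rule above has no NOPASSWD).
cat > sudo-user.bu <<'EOF'
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: hartan
      groups:
        - wheel
      # placeholder hash; generate one with e.g. `mkpasswd --method=yescrypt`
      password_hash: "$y$j9T$PLACEHOLDER"
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example
EOF
# transpile to Ignition
butane --pretty --strict sudo-user.bu > sudo-user.ign
```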
<hartan[m]>
It doesn't require installing an additional package on CoreOS; I used systemd-cryptenroll to register the token. The problem is that (IIUC) all CoreOS systems, by default, boot a pre-generated initramfs which doesn't work with FIDO2 tokens. Regenerating the initramfs locally after adding entries to /etc/crypttab resolves this.
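A rough sketch of the flow hartan describes, assuming the root volume is already LUKS-encrypted; the device path, volume name, and UUID are placeholders.
```sh
# enroll the FIDO2 token into the LUKS device (path is a placeholder)
sudo systemd-cryptenroll --fido2-device=auto /dev/disk/by-partlabel/root
# tell systemd-cryptsetup to try the token at unlock time (UUID is a placeholder)
echo 'root UUID=<luks-uuid> none fido2-device=auto' | sudo tee -a /etc/crypttab
# regenerate the initramfs on the host so it picks up the new crypttab entry
sudo rpm-ostree initramfs --enable
sudo systemctl reboot
```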
<dustymabe>
walters: is it new that `rpm-ostree install` will prompt the user Y/n?
<daMaestro>
find your constraint and then determine the unit (1 vCPU, 1 GB RAM, 1 GB storage, 1m net, etc.) you need to scale -- find the most cost-effective way to get that unit at scale and you've found your instance type
<daMaestro>
also, ask chatgpt, might be good for some entertainment ;-p
<walters>
jlebon: what we really want is spot instances, and there is tooling built on top of that
<dustymabe>
walters: but why? simply because it's cheaper?
<walters>
yep
<dustymabe>
meh
<dustymabe>
I illustrated in the ticket that our actual usage (cost) for tests is very minimal since we get charged by the second and the instances are up for short periods of time.
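As a back-of-the-envelope illustration of that point (the hourly rate below is hypothetical, not a quoted price): with per-second billing, a short-lived test instance costs a fraction of a cent.
```sh
# hypothetical $0.20/hr rate, 10-minute test instance, billed per second
awk 'BEGIN { printf "$%.4f\n", 0.20 * 600 / 3600 }'
# -> $0.0333
```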
<dustymabe>
I'd much rather spend our time chasing something else (like enabling aarch64 for azure and gcp) than trying to figure out how to use spot instances.
<walters>
yeah
<dustymabe>
if we're concerned about cost I'd also argue that we should pursue actually implementing our Garbage Collection initiative
<walters>
also true
<dustymabe>
since there is probably a bunch of storage we could prune as part of that
<dustymabe>
+1
<dustymabe>
regarding your proposal for an instance type abstraction in cosa... I'm not fully opposed, I'd just like to understand the value and weigh the maintenance cost of it.
<walters>
but I wrote the PR since I used it to conveniently get a small instance type in qemu; it's nearly risk-free and has no user impact. GC is not that
<dustymabe>
I know the code is easy to write, but is the abstraction helpful over time? i.e. having more than one way to do something can be unhelpful at times
<walters>
I guess I don't really care enough to champion it that much, if you don't like it, feel free to close
<jlebon>
i don't see it so much in terms of cost, but more just efficiently using resources
<dustymabe>
jlebon: you mean spot instances? or what?
<jlebon>
and also it'd be nice to know if our tests no longer fit in e.g. a nano or whatever baseline we land on
<jlebon>
i mean using a smaller instance type by default
<dustymabe>
jlebon: but now you're talking about cloud again?
<dustymabe>
and not qemu tests?
<jlebon>
yeah, cloud instances
<dustymabe>
but that's not what this PR is really
<dustymabe>
well - maybe I don't understand
<jlebon>
i think you're right, but it sounds like the implication is that our default AWS instance type is overkill
<dustymabe>
IIUC this PR adds an instance type abstraction to cosa and the current implementation just makes it so you can specify an instance type for qemu instances (for tests or cosa run invocations)
<dustymabe>
if we extend that to the cloud tests, now we have to map the bespoke instance type names we defined to the instance types available in each cloud
<jlebon>
right, that's where my minMemory suggestion comes in :)
<dustymabe>
and all of it really boils back down to how much memory and CPU you really need (for the most part; I guess you could bundle it with disk size)
<dustymabe>
which we already have knobs for, right?
<jlebon>
yup. they only work on QEMU currently. i'm arguing for making them work in other clouds, and then we could lower the baseline
<bgilbert>
just looking at this
<bgilbert>
I'm not strongly opposed, but I am concerned that the instance type names are pretty opaque
<bgilbert>
also, I can imagine someone trying to change the name -> size mappings later, and not being clear what the actual dependencies are on them
<jlebon>
dustymabe: to be clear, i agree it's not really super high priority
<jlebon>
bgilbert: yeah, i'd rather stick with the explicit minMemory knob (and disk size, and... we don't have one for ncpus yet, but could)
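For context, the minMemory/disk knobs mentioned here are per-test metadata for kola external tests; a sketch of roughly what that looks like (values are illustrative, and per the discussion above these are currently only honored on QEMU).
```sh
# illustrative kola.json placed alongside an external test;
# minMemory is in MB, minDisk in GB
cat > kola.json <<'EOF'
{
  "minMemory": 1536,
  "minDisk": 15
}
EOF
```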
<bgilbert>
I used spot instances for some Buildbot-based CI years ago. there were some nice cost savings, but it sometimes added a few minutes of startup delay and some weird flakes
<bgilbert>
haven't tried it in 5+ years though
<dustymabe>
walters: ha, sorry. I wasn't trying to kill it per se, but was interested in the discussion about its usefulness (i.e. maybe there was something I was missing)
<walters>
It's not worth the bikeshedding time of 4 senior engineers