<azonenberg>
As part of the CI revamp I am converting my VM server from vanilla Xen on Debian to xcp-ng
<azonenberg>
it's a fair bit of work to do each VM one at a time because the Ceph setup I have, while technically compatible, requires a lot of manual CLI work to link each ceph RBD to a storage pool object xcp-ng can see
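For reference, a minimal sketch of the kind of per-VM CLI linking described above, assuming the image lives in a Ceph pool named `rbd` and is exposed to xcp-ng as an LVM SR on the mapped device; all names are hypothetical and the actual setup may differ:

```sh
# Map the Ceph RBD image into dom0 so it appears as a local block device
rbd map rbd/vm-disk0    # typically shows up as /dev/rbd0

# Create an xcp-ng storage repository on top of the mapped device
xe sr-create name-label="vm-disk0" type=lvm \
    device-config:device=/dev/rbd0
```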
<azonenberg>
(for CI-created VMs we will probably use a different provisioning strategy, I already have that kinda in progress)
massi has joined #scopehal
<azonenberg>
@johnsel: also apparently i'm down 64GB of RAM
<azonenberg>
one of the dimms was an RDIMM, not an LRDIMM, which explains some of the errors and weirdness i saw in the past
<azonenberg>
another one seems to have gone bad, the new mobo doesn't like it
<azonenberg>
it wasn't even detected
<azonenberg>
So i'm still at 192 GB. Buying two more dimms to add at the next maintenance outage, but no immediate plans to shut down to install them
<azonenberg>
also still working on trying to get pcie passthrough to work in xcp-ng. In my initial test the device gave errors about firmware loading failing, etc.
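For anyone following along, the usual xcp-ng passthrough flow is roughly the following (PCI address and VM UUID hypothetical); a firmware-load failure can also happen if dom0 still has a driver bound to the device:

```sh
# Hide the GPU from dom0 so it can be passed through (requires a host reboot)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"

# After the reboot, assign the hidden device to the target VM
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:03:00.0
```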
<azonenberg>
it's possible the three GPUs are too much for my 500W power supply, so i'm thinking of getting a beefier one when i add the additional RAM
<azonenberg>
but i'd be surprised if these 50W TDP cards burn that much juice when idle
<azonenberg>
So it's probably something else
<azonenberg>
anyway, we're a ways from needing the GPUs
<d1b2>
<johnsel> Hmm well 500 watts is a bit on the very low end so that might very well be it
<d1b2>
<johnsel> Though it could be a number of other things too
<d1b2>
<Darius> 3 GPUs is definitely going to push a 500W PSU..
<d1b2>
<Darius> unless you really baby them
<d1b2>
<johnsel> Given the TDP of a server CPU and that much memory, that might even explain all the issues, though you’d expect the motherboard to complain about it over its management interfaces
<d1b2>
<johnsel> And forced shutdown eventually
<d1b2>
<johnsel> Though the GPU might disable itself nowadays, I don’t know tbh
<d1b2>
<johnsel> Anyway, one or two big storage pools are probably what we want for CI. Should also keep the ACLs simpler
<d1b2>
<johnsel> brought the Windows CI down to ~1hr
<d1b2>
<johnsel> from 2.5, that is
<azonenberg>
johnsel: yeah i will make one big storage pool that you can allocate from as you see fit. under the hood it will be a ceph RBD but you don't have to worry about that detail
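Once that shared SR exists, carving out per-VM disks is a one-liner; a sketch with hypothetical names, sizes, and UUIDs:

```sh
# Allocate a 40 GiB disk for a CI VM out of the shared pool
xe vdi-create sr-uuid=<sr-uuid> name-label="ci-vm-disk0" \
    virtual-size=40GiB type=user
```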
<azonenberg>
and my thought was that i might have problems if we maxed out all 3
<azonenberg>
but that when idling they should be ok
<azonenberg>
and using one at a time should be ok
<azonenberg>
anyway i have a 700W unit on order and will install it along with the new ram when i get back
<azonenberg>
darius: for reference these are single slot gpus without a dedicated power connection
<azonenberg>
but sure, if you think it's redundant, get rid of it
<azonenberg>
i'm all for cleaning up the CI, we can re-add if needed
<d1b2>
<johnsel> alright, also once we start promoting CI artifacts to releases the first link under the release will be the auto-generated zip/tarball links
<_whitenotifier-7>
[scopehal-apps] Johnsel 1694019 - CI: Removed tarball uploads and cron schedule from CI workflows to clean up the process some more.
<_whitenotifier-7>
[scopehal-apps] azonenberg d9c3822 - Merge pull request #529 from Johnsel/windows-ci-out-of-diskspace CI: Removed tarball uploads and cron schedule from CI workflows
bvernoux has joined #scopehal
massi has quit [Remote host closed the connection]
<d1b2>
<ehntoo> Found another GTK/Cairo dependency in scopehal that I wasn't quite expecting while working on a branch that removes the gtk dependency from scopehal. I may leave my initial PR at just removing Gdk::Color.
<d1b2>
<louis> There are the PRs moving those helpers around
<d1b2>
<louis> Also fixes a longstanding bug where you had to place the cursor to the left of a bar on a histogram to see its value
<d1b2>
<louis> (and if you did place it on the bar you didn't get anything)
<azonenberg>
interesting. i'm used to looking at jitter histograms where the bars are like one pixel wide
<azonenberg>
so i likely never zoomed in enough to notice
<azonenberg>
Will look and merge shortly
<d1b2>
<louis> I think generally there was an off-by-one in the logic for getting the value to display when cursoring over a sample, but it wasn't noticeable in the presence of interpolation
<azonenberg>
ah interesting
<azonenberg>
did you verify it also works correctly w/ interpolated waveforms?
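For anyone curious, a minimal C++ sketch of that failure mode (hypothetical code, not the actual scopehal helpers): if the lookup grabs the first sample starting at or after the cursor instead of stepping back to the sample that contains it, you read the neighboring bar's value, matching the symptom above.

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

struct SparseSample
{
    int64_t offset;    // start time of the sample, in timebase units
    int64_t duration;  // how long the sample lasts
    float value;
};

// Return the value under the cursor at time t, or nothing if t falls in a gap.
std::optional<float> GetValueAtTime(const std::vector<SparseSample>& wfm, int64_t t)
{
    // Find the first sample starting strictly after t. An off-by-one that
    // returns *this* sample forces the cursor to sit just left of a
    // histogram bar to read that bar's value.
    auto it = std::upper_bound(
        wfm.begin(), wfm.end(), t,
        [](int64_t time, const SparseSample& s) { return time < s.offset; });

    // Step back one: the only sample that can actually contain t.
    if(it == wfm.begin())
        return std::nullopt;
    --it;

    // Only report a value if t lies within [offset, offset + duration).
    if(t < it->offset + it->duration)
        return it->value;
    return std::nullopt;
}
```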
<_whitenotifier-7>
[scopehal-apps] 602p 3abbc0f - Refactor to use new helpers from scopehal for getting values
<_whitenotifier-7>
[scopehal-apps] azonenberg 40ab86d - Merge pull request #530 from 602p/getvalue_cleanup Refactor to use new helpers from scopehal for getting values
<_whitenotifier-7>
[scopehal-apps] azonenberg a9f2bea - Updated to latest scopehal
bvernoux has quit [Quit: Leaving]
<azonenberg>
AKL-PT5 v0.9 boards shipped. Already obsolete but i may assemble one just to compare the discrete filter to the distributed v0.10
<azonenberg>
woop
<azonenberg>
Just did a WAN test of ngscopeclient
<azonenberg>
Laptop physically in my lab, but tethered off of my phone using LTE and a VPN back into the lab
<azonenberg>
(vs the usual thunderbolt to 10Gbase-SR pipe I use)
<azonenberg>
connected to a couple of scopes and the siglent SSG to generate test waveforms and stream live
<azonenberg>
UI was very responsive despite the ~100ms latency
<d1b2>
<ehntoo> @azonenberg - for PRs that touch both scopehal and scopehal-apps, do you have a preference on whether or not the -apps PR bumps the submodule pointer? I've been leaving it untouched so far to avoid a nearly-inevitable merge conflict, but it's definitely easier to inspect CI results with the submodule update in there.
<azonenberg>
Preferable to not include the pointer
<azonenberg>
Because of exactly that reason
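For contributors new to the flow: bumping the pointer is just a commit in -apps that moves the submodule ref, which is why it conflicts as soon as upstream advances. A sketch, assuming the submodule lives at `lib/scopehal` and a hypothetical branch name (check `.gitmodules` for the real path):

```sh
# In a scopehal-apps checkout: point the submodule at the branch under test
cd lib/scopehal
git fetch origin my-feature-branch && git checkout FETCH_HEAD
cd ../..
git add lib/scopehal
git commit -m "Bump scopehal submodule (drop before merge)"
```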
<d1b2>
<david.rysk> Anything worth testing at this time?
<d1b2>
<ehntoo> 👍 I'll include links to CI from my fork in PRs then for quick reference. looks like the windows CI in my fork for #531/#710 failed... fixed the immediate reason and kicked off a new build. I'll update the link in the PR to the new build, hopefully that one succeeds.