<azonenberg>
in any case, the related issue is that right now our builds take 30+ minutes
<azonenberg>
github actions has a 2k minute per month cap for linux on free accounts
<azonenberg>
windows builds are billed at 2x actual time and mac at 10x
<Johnsel>
Furthermore, even if their runner can't live on the target, that still doesn't mean it's not possible: you can just have a linux box drive a remote Docker instance
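(A minimal sketch of that remote-driving idea, assuming a docker daemon reachable over ssh; the host name is illustrative:)

    # point the local docker CLI at a remote daemon over ssh (docker 18.09+)
    export DOCKER_HOST=ssh://builder@target-box
    docker info   # now talks to the remote engine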
<azonenberg>
so six 30-minute builds on a mac (30 real minutes at the 10x multiplier = 300 billed minutes each) eats our whole budget for the month
<azonenberg>
so we will need to move to self hosted for at least those and ideally the others too
<Johnsel>
alright that was not in the ticket
<azonenberg>
yeah that was in the notes from the developer call
<Johnsel>
I obviously did not read those, haha
<azonenberg>
Yeah lol
<Johnsel>
Still, CI is nice if it works easily and reliably; my experience with self hosted cross platform setups is that they can quickly grow time consuming
<azonenberg>
anyway, so fundamentally the issue is that i think we are hitting limits on the free tier of github actions.
<azonenberg>
and we will likely have to move to self hosted
<azonenberg>
if we are doing that, the question then becomes do we (can we) stick w/ actions for the top level control, vs moving elsewhere
<azonenberg>
in particular, one of the issues brought up on the dev call was the potential of buying a couple of cheap scopes
<azonenberg>
say bottom tier siglents or picoscope 2000 or dscope/dslogic
<azonenberg>
and setting them up plugged into say a ftdi chip or something
<azonenberg>
so you can do hardware in the loop CI
<azonenberg>
it wouldn't give us anywhere near 100% driver code coverage as nobody can afford to buy a couple of nice lecroy/tek/keysight scopes to dedicate to a CI cluster (at least not until we get manufacturer support at a much greater level than we have now)
<azonenberg>
But it would allow us to do testing of all of the acquisition pipeline and such once we get to that point
<azonenberg>
today, we have nowhere near enough unit tests to benefit from this
<Johnsel>
I think that's an awesome idea, and I also wanted to point to the option of running stuff remotely from the GitHub Actions as a potential good middle ground
<azonenberg>
but it's a long term issue to think about, since one of my goals is to get many more tests written
<azonenberg>
Yes. That is indeed a possibility
<azonenberg>
anyway this is kind of a forward looking issue at this point, not at all a priority or something we want to burn significant resources on now
<azonenberg>
especially if we conclude that actions does indeed support self hosted osx-arm64 runners
<azonenberg>
lain and MP i think both have some older M1 hardware they might be willing to dedicate to that
<azonenberg>
Anyway, the higher priority now WRT build issues is that the windows build is broken and i have no idea how to fix it because i dont even have a windows dev machine
<Johnsel>
Though that introduces new issues, in that you need a way to collect your logs; but thinking through those options on a longer-term scale, I think there's a good solution somewhere
<azonenberg>
CDash is a solution i have used in the past during my thesis
<Johnsel>
it's just not trivial once you go multi-platform
<azonenberg>
it's part of the kitware suite with cmake/ctest, which we're already using, and integrates well with them
<azonenberg>
Yes its not trivial
<azonenberg>
But this is not a trivial project
<azonenberg>
I expect significant resources will be needed. There will likely be ongoing maintenance costs
<Johnsel>
yes I was taking a look at it, but the current CI runs off of Docker containers; moving those over and integrating them into some other system needs thought
<azonenberg>
And this is why i'm slowly starting to try and attract early adopters in industry, like matt's company
<azonenberg>
getting people who plan to use glscopeclient in prod to commit dollars to both one-time feature development and ongoing maintenance etc
<azonenberg>
Hardware costs money :p
<Johnsel>
I have some experience with embedded + docker + ci + multiplatform so I will look at that option and give some thoughts on it if you want
<Johnsel>
as for the Windows, I am already on that :)
<Johnsel>
option == kitware suite
<azonenberg>
Yeah. when i was working on my thesis i ultimately moved away from the kitware suite in favor of a fully homegrown option
<azonenberg>
because i was working on one project that had host and firmware and fpga code in many different arches in one build tree
<azonenberg>
cmake works great for building one codebase on N platforms but only one at a time
<azonenberg>
it fails when you have multiple ISAs of binary in a single tree
<azonenberg>
it assumes you have only a single c++ compiler etc at a time
<azonenberg>
But for our use case, that's fine
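(For reference, the usual cmake-level workaround for that limitation is a superbuild: one outer tree spawning nested cmake runs, each with its own toolchain file. A sketch with illustrative paths:)

    # each subproject gets its own nested cmake invocation and toolchain
    include(ExternalProject)
    ExternalProject_Add(firmware
        SOURCE_DIR      ${CMAKE_SOURCE_DIR}/firmware
        CMAKE_ARGS      -DCMAKE_TOOLCHAIN_FILE=${CMAKE_SOURCE_DIR}/cmake/arm-none-eabi.cmake
        INSTALL_COMMAND "")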
<Johnsel>
yes then Docker is nice, but that's another abstraction on top, then some runner to drive the dockers, it gets complicated quickly
<azonenberg>
Yeah
<Johnsel>
and then you need something in A that is made in B, and you are running your own little docker server park
<d1b2>
<louis> azonenberg: bulk (7k line diff :/) of refactor is done. some polishing probably still needed.
<Johnsel>
add hardware, well I'm sure I'm preaching to the choir
<d1b2>
<louis> do we have a suite of protocol decoder tests? that would be handy right now :P. Tried UART and I2C because that's what I had on hand.
<azonenberg>
the eventual goal is for this to become a corpus of test data with a single waveform to test every protocol decode
<azonenberg>
(some being shared, like the 8b10b is also 1000baseX)
<azonenberg>
this is limited in size at the moment, and does not necessarily test every feature of every decode. it's also reliant on end users to visually verify that the decode is correct by looking at it
<azonenberg>
we have no actual unit tests for any decodes
<azonenberg>
The second is...
<azonenberg>
azonenberg@havequick:/ceph/fast/home/azonenberg/scopehal-tests$ du -h --summarize .
<azonenberg>
71G .
<azonenberg>
a 71 GB dump of interesting waveforms and scopesessions i'm sitting on
<azonenberg>
most but not all are shareable WRT not being encumbered by any IP restrictions (a few, mostly in another directory, were gathered on client hardware for work and i'm trying to find redistributable replacements for them)
<azonenberg>
Said dump is also too big to comfortably share via most methods
<azonenberg>
But i can supply subsets of it on request if someone wants test data for a particular protocol
<azonenberg>
Johnsel: oh, wrt windows stuff
<azonenberg>
one of the topics brought up on the dev call earlier today is that we really need someone who is using glscopeclient reasonably often to act as a windows dev and maintainer
<azonenberg>
not necessarily working on new features, but at least filing tickets when something breaks on windows (especially if major like "doesnt compile") and hopefully working to correct said issues
<azonenberg>
Is that something you think you'd be able to do? Not looking for a specific hour time commitment or anything
<azonenberg>
we've had a few people float through the role for a while but it's currently vacant
<azonenberg>
also lain, miek, louis: do you folks think the dev call was productive? is this something we should do more frequently? at scheduled intervals or just ad-hoc whenever we think it's needed?
<Johnsel>
I mean I'm 100% willing, but I have an incompatible SmartScope w/ weird sigrok issues and my DIY scope, so for the time being I can definitely test what I can test, but that is not a lot
<azonenberg>
Johnsel: testing w/ offline captures still lets you exercise most of the codebase
<azonenberg>
just not the acquisition path
<azonenberg>
it's better than nothing
<Johnsel>
that's no problem, I'd be working on a dev build anyway for my own project
<azonenberg>
great. well consider yourself the unofficial windows maintainer then :p
<azonenberg>
Because nobody else is doing it lol
<Johnsel>
hurray
<azonenberg>
once you get the build breakage fixed, the big thing to keep an eye on is the upcoming vulkan work and refactoring of the renderer in preparation for arm64 and osx support
<azonenberg>
this *should* not break anything on windows but please make sure that is in fact the case :)
<Johnsel>
alright I have read the meeting notes too
<Johnsel>
I got it to start building, but goodness, it is practically begging at this point: /mingw64/include/winsock2.h:15:2: warning: #warning Please include winsock2.h before windows.h [-Wcpp]
<Johnsel>
15 | #warning Please include winsock2.h before windows.h
<Johnsel>
216/298 though so I have good hope still
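(The usual fix for that warning, sketched: in whatever header drags in windows.h, include winsock2.h first, since windows.h otherwise pulls in the conflicting legacy winsock.h:)

    #ifdef _WIN32
    // winsock2.h must come before windows.h to suppress legacy winsock.h
    #include <winsock2.h>
    #include <windows.h>
    #endif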
<azonenberg>
Send a PR with those changes to the build scripts then, plus anything we need to do on the CI side. And build instructions in the end user docs
<azonenberg>
oh i guess i should have asked first, does it actually *run*? :p
<Johnsel>
it's a very simple change luckily, but I am still testing. Assuming it works, can I tack on some windows build DX changes? .gitignore additions that exclude msys-generated binaries and the .vscode folder, and a mkdir -p that prevents the windows build from dying on rebuild because the cleanup does not remove that folder for some reason. If yes, separate commit, or are we not that precise about such things here?
<azonenberg>
Can't hurt to make it a separate commit but at this point we dont care. This is still pre 0.1, we're making major breaking changes to the code on a daily basis
<azonenberg>
we'll get a lot more strict about nice clean atomic commits and not breaking the build once things stabilize more
<azonenberg>
and when we actually have a formal release. especially post 1.0 when we've got guarantees about API stability, file format, etc
<azonenberg>
we're nowhere near that yet
<Johnsel>
Understood, I am getting some dll errors. Had I had the foresight I should have had, I would have built the last known-good commit first.
<Johnsel>
Either way, since it's 5:39 here and the issue is at least partially solved, I think I'll hit the hay and do it right tomorrow
<azonenberg>
Great, good progress anyway
<azonenberg>
(are the dlls vulkan related?)
<Johnsel>
I don't believe so actually
<Johnsel>
libcairomm
<Johnsel>
libatkmm
<Johnsel>
libgdkmm definitely is not vk
<Johnsel>
giomm
<azonenberg>
all of those are gtk related
<Johnsel>
yes, though whether it's my setup or not is not clear to me
<azonenberg>
likely one root cause, missing package or path set wrong
<Johnsel>
regardless I should have a known good build anyway
<azonenberg>
Try pushing the current changes to a fork on github
<azonenberg>
and see if they build in CI
<azonenberg>
if so, we can merge upstream and then you can troubleshoot your local setup later
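(A sketch of that flow, with a placeholder fork name:)

    # push the work-in-progress branch to a personal fork so Actions builds it there
    git remote add myfork git@github.com:<user>/scopehal-apps.git
    git push myfork HEAD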
<azonenberg>
Either way sounds like the build is less broken than it was when you started
<azonenberg>
so it's a start :)
<Johnsel>
the CI will need to get the right env var set but if it makes you happy I'll modify that config quickly and then hit the hay
<Johnsel>
I just command line wrangled it
<azonenberg>
i'm not in a rush
<azonenberg>
do it when you're awake
<Johnsel>
oh hah your script sets the remote back to the main repo
<Johnsel>
One optimization for this CI setup would be to modify the docker containers and build one that has all the dependencies in it already
<azonenberg>
i dont know if that is possible on actions
<Johnsel>
built and all that is
<azonenberg>
i think it always starts from a clean blank image
<azonenberg>
using github's runners i mean
<azonenberg>
if we hosted our own endpoints it would be possible
<Johnsel>
it's possible here too
<Johnsel>
Tarball:
  runs-on: ubuntu-latest
<Johnsel>
that's just some magic identifier for their version of the latest ubuntu as a docker image
<Johnsel>
you can point that to any docker container to start from
<Johnsel>
not with that syntax, but similarly
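(A hedged sketch of what that looks like in an Actions workflow, assuming a prebuilt dependency image; the image name is hypothetical:)

    jobs:
      Tarball:
        runs-on: ubuntu-latest
        # run the whole job inside a prebuilt image with the build deps baked in
        container:
          image: ghcr.io/example/scopehal-build-deps:latest
        steps:
          - uses: actions/checkout@v2
            with:
              submodules: recursive
          - run: mkdir build && cd build && cmake .. && make -j2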
<Johnsel>
we can chat about it once I have solved this to see if it's worthwhile to do some tests, maybe with the m1 hardware too
<azonenberg>
interesting, i assumed that it was pulling from a fixed list of images on some private github internal server
<azonenberg>
and you couldn't do arbitrary ones
<Johnsel>
no, it's a weird system, but it's like you have some runs-on=ubuntu system that then has a docker instance running, and that docker instance can run whatever docker image you want as a starting point
<azonenberg>
i mean i also know zilch about docker
<azonenberg>
i'm used to xen
<Johnsel>
yeah docker is also just virtualization but at the kernel level basically
<Johnsel>
although that statement comes with asterisks too, because it can be that your osx docker is actually linux in a VM on the mac
<Johnsel>
and windows can host its containers on wsl or also just in a linux vm on hyper-v
<Johnsel>
it really is a mess out there
<Johnsel>
anywho, docker is nice because assuming your isa stays the same, you get a fully defined OS + library + apps environment
<Johnsel>
of course once you start crossing ISAs, one docker container might not behave the same anymore at all
* azonenberg
stares at m1 mac
<azonenberg>
and yeah glscopeclient has been x86-64 only until now
<d1b2>
<Darius> Docker is "works on my machine" taken to production
<azonenberg>
arm has a weaker memory model so we will probably find some ordering issues we hadnt anticipated
<azonenberg>
darius: lol yeah
<azonenberg>
"works on my machine"? great
<azonenberg>
ship your machine
<Johnsel>
I think it's possible, I've seen how docker was made available for arm64 by the Balena team
<Johnsel>
back when they weren't called Balena yet
<Johnsel>
they called themselves resin.io
<Johnsel>
which seems fitting because boy oh boy were things droopy
<Johnsel>
and I have to agree Docker does promise that portability
<Johnsel>
and gives it too sometimes!
<azonenberg>
very encouraging "sometimes" :p
<Johnsel>
the bigger issue is that there are a lot of ways to do Docker wrong
<Johnsel>
I've spent many hours trying to convince someone Docker is not the best way to compile against every version of gcc to see what breaks
<Johnsel>
just pinning *a* gcc version and staying with it is much better
<d1b2>
<louis> azonenberg: yeah, I thought it was productive
<Johnsel>
in any case containers are nice if they are portable, because you can just take your code, put it through ci, and know that if it runs there it will run on your targets, because they are guaranteed to be the same environment
<Johnsel>
even if the metal that docker runs on is not similar at all (but say, x64)
<Johnsel>
for arm you can do the same, but don't expect the x64 image to run on arm; they're still just binaries of everything that matters
<azonenberg>
well yeah
<Johnsel>
you did not lie about that build speed
<azonenberg>
lol
<azonenberg>
I'm used to running "make -j" with no cap
<azonenberg>
on my workstation with 192GB of ram and twin xeon scalable golds
<azonenberg>
it only takes a minute or two
<azonenberg>
i tried it on my work laptop the other day and it completely locked up :p was basically a forkbomb lol
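(The usual guard, for reference: cap the fan-out at the core count instead of leaving -j unbounded:)

    make -j$(nproc)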
<Johnsel>
this isn't far off
<Johnsel>
(from locking up that is)
<azonenberg>
There is definitely room to optimize in terms of tidying up headers to not #include more than required
<Johnsel>
probably but my 3600 built it in 5 or so minutes
<Johnsel>
it's just a matter of them assigning 1vCPU or something to those free CI boxes
<azonenberg>
2
<azonenberg>
wimpy ones
<Johnsel>
mm, as much as I am curious about whether it worked or not, this is really getting too late for me now
<d1b2>
<azonenberg> This is using an XLF-732+ filter to cut off some of the high freq peaking. I'm on the fence between this and another filter which i have on order but isn't here yet. so we'll find out which works better in a couple days
<azonenberg>
At this point we're just optimizing the response though
<azonenberg>
we have a wide open eye out to 10.3125 Gbps and we're just fine tuning for flatness
<d1b2>
<azonenberg> Oh, also... S21 response of the probe head (not including cable)
<miek>
looking at the actions log failures for windows, i think it actually found the SDK but not the library itself? maybe it needs the runtime installer too
<miek>
or maybe the library it did find didn't match the sdk version
<Johnsel>
that issue seems to be resolved actually, but it now is stuck on something else in the test phase
<Johnsel>
which might be as trivial as it trying to initialize something vulkan related that is not available on those build boxes
<azonenberg>
Johnsel: there is a ticket for using swiftshader (software vulkan implementation) for unit testing
<azonenberg>
on my systems, that isnt necessary because i have two vulkan devices present (llvmpipe software plus the nvidia card)
<azonenberg>
but that may not be the case on the CI box
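(A sketch of forcing the loader onto SwiftShader via its ICD manifest; the path is illustrative and depends on where SwiftShader is installed:)

    # make the Vulkan loader use SwiftShader's software implementation
    export VK_ICD_FILENAMES=/opt/swiftshader/vk_swiftshader_icd.json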
<azonenberg>
DanielG: yes, we're still fixing some rough edges around the initial vulkan support. 4a96248 is probably a good option
<d1b2>
<DanielG> Awesome, thanks.
<azonenberg>
you may want to backport the bug fixes from cfe61fd though
<azonenberg>
that one diff should apply cleanly to the older commits and fixes some file load/save breakage where 64-bit timestamps are truncated to 32 bits
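(One way to do that backport, using the commits mentioned above:)

    # start from the known-good commit and pull in the fix
    git checkout -b vulkan-backport 4a96248
    git cherry-pick cfe61fd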
<d1b2>
<DanielG> I'll play with this this weekend. I have my SDS6204A in hand and am eager to play with it and get an eye.
<azonenberg>
Great. If you have some extra time, the Siglent driver should be fully functional but we do not support a lot of the more advanced trigger types
<azonenberg>
so if you wanted to work on adding support for e.g. i2c or spi protocol trigger, we'll gladly take a PR for those
<d1b2>
<DanielG> I didn't get any extra options, so I'm not sure if I have all the "advanced trigger types"
<d1b2>
<DanielG> Oh perfect. Yes I think I have i2c and spi
<azonenberg>
ah ok. well i think there are still a few basic trigger types we dont have too. like nth edge, pattern, delay, and setup/hold
<azonenberg>
its not a huge priority but it needs to happen eventually
<azonenberg>
DanielG: alternatively, if you wait until the weekend we may have the vulkan build fixes solved
<azonenberg>
it can't hurt to install the vulkan SDK as you will need that longer term to build glscopeclient
<azonenberg>
hopefully johnsel will have that fixed in the next day or two
<d1b2>
<DanielG> I'll probably start with the older commit, just because I have a time crunch. Once that's gone, I'll be less constrained and can play with the latest commits.
<Johnsel>
daniel: it is very simple to use the current commit on windows with vulkan support. Just install the sdk and use, 1 sec
<azonenberg>
Merging shortly after i finish reviewing
<azonenberg>
louis: what's your next focus?
<d1b2>
<louis> Thunderscope driver stuff, I think
<azonenberg>
Ok. And then the spectran?
<d1b2>
<louis> I suspect so
<d1b2>
<louis> That will be fun to play with :)
<azonenberg>
And WRT the dev call, what do you think of making that a ~monthly thing, open to all active devs?
<azonenberg>
i feel like it will help for everyone to have an idea of what's coming, who's working on what, and coordinate stuff
<azonenberg>
but i'm also of course very wary of spending more time on meetings than actual work and i want to avoid that :p
<d1b2>
<louis> Yeah, I think that would be good. Even more frequently maybe, as long as we keep 'em short. Helps to keep a sense of what is going on in the wider scheme of the project
<azonenberg>
yeah. I mean they're not by any means mandatory
<azonenberg>
i just want to make sure everyone has a shared vision of what we're doing and how we're getting there, and we don't get in each other's way
<d1b2>
<DanielG> @johnsel following the instructions from the glscopeclient manual, I installed the .zst package (pacman -U *.zst) and ran glscopeclient.exe. It seems I don't have the OpenCL headers. Does the msys2 install not include these? Can you please point me in the right direction?
<azonenberg>
Anyway, i'll merge the GetText stuff shortly then tonight probably start the AcceleratorBuffer refactoring
<azonenberg>
@DanielG: OpenCL is an optional component that the build should cleanly disable if not supported
<azonenberg>
we are moving away from OpenCL towards Vulkan for GPU-accelerated compute in the future
<azonenberg>
so don't put too much effort into getting it working
<azonenberg>
That sounds like it was enabled at compile time but not installed somewhere you can find it
<d1b2>
<johnsel> I can only attest that it seems the build issue related to Vulkan is resolved, OpenCL I have no knowledge of or had issues with
<d1b2>
<johnsel> that said, you might disable it at compile time
<d1b2>
<johnsel> should have been installed fine though by mingw
<azonenberg>
Yeah. This should go away in the next week or two as I am actively working on refactoring to remove the legacy opencl stuff and transition all of our accelerated compute to Vulkan
<d1b2>
<DanielG> Sounds good. Apologies; I'm pretty rusty with CMake. What's the flag to disable OpenCL at compile?
<azonenberg>
Commenting out the find_package(OpenCL) line in the top level CMakeLists.txt is probably the simplest option
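(i.e. something like this, assuming the stock CMakeLists layout:)

    # top-level CMakeLists.txt: disable OpenCL detection
    #find_package(OpenCL)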
<d1b2>
<DanielG> And to re-install a new pkg, can I just run pacman -U *.zst again?
<d1b2>
<DanielG> or do I have to uninstall it first?
<azonenberg>
And that's a windows/msys2 specific question i cant answer
<d1b2>
<johnsel> you can just run that again
<d1b2>
<johnsel> though I have ran into the issue that *.zst matches 2 versions, older and the new one
<d1b2>
<johnsel> and that does not work obviously
<d1b2>
<johnsel> but pacman -U blabla.zst just installs/updates the package
<d1b2>
<johnsel> I had to look it up too mysefl
<azonenberg>
$ pacman runfrom ghost
<azonenberg>
(sorry i cant help it, the name of the command is too perfect lol. And where's mrspacman?)
<d1b2>
<DanielG> ngl I googled it because I thought it was real
<d1b2>
<johnsel> I think you have some serious path issues
<azonenberg>
Adding the current directory to LD_LIBRARY_PATH might help?
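(On windows/msys2 the DLL loader actually searches PATH rather than LD_LIBRARY_PATH, so the equivalent experiment would be:)

    export PATH="$PWD:$PATH"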
<d1b2>
<johnsel> alternatively send me the output of export via pm (because it's likely long)
<d1b2>
<johnsel> and are you 100% sure you followed all the steps as per the pdf? mingw x64 etc etc
<d1b2>
<DanielG> Could it be because I'm building in a Windows directory (C:/Users/gies/...) rather than msys2 /home/...?
<d1b2>
<johnsel> one thing to try would be to start over with checking out the git but at a much higher level /c/GitHub or so. Sometimes that helps for various weird reasons
<d1b2>
<johnsel> no, that in and of itself is fine I build from /c/GitHub/scopehal-apps too
<d1b2>
<johnsel> you have the wrong cli
<d1b2>
<johnsel> you're on MSYS, not MINGw64
<d1b2>
<DanielG> OH
<d1b2>
<DanielG> ok will give that a shot. Do I need to compile from mingsw64, or just launch .exe from there?
<d1b2>
<johnsel> I honestly don't know, but you can try and if it does not work then recompile
<d1b2>
<DanielG> @johnsel , @azonenberg thank you for the help!
<d1b2>
<johnsel> you are welcome
<d1b2>
<johnsel> it's an annoying detail
<d1b2>
<DanielG> I think seeing "install msys2" and "msys2" directory tripped me up - I'm not very familiar with MSYS2; MINGW64
<d1b2>
<johnsel> yes you really have to read the manual
<d1b2>
<johnsel> in it's defense, the manual also specifically states that you need to read it and follow it to the T or you will get weird issues
<d1b2>
<johnsel> but it's fine we're testing it, we run into these issues and we can put it in a troubleshooting section in the manual
<d1b2>
<DanielG> oh no the documentation is clear.
<d1b2>
<DanielG> I just read it, installed msys2, took a break and came back to it
<d1b2>
<johnsel> sure but like you did people just aren't that good at following instructions you often apply heuristics and in this case there is no heuristic for "it installs itself with 6 shortcuts but only 1 will work"
<d1b2>
<johnsel> the fact that the terminal calls out MSYS is a good identifier for this issue though
<d1b2>
<johnsel> there's also a related issue that the same problem will prevent running it from the installed directory without going through MinGW
<d1b2>
<DanielG> I think part of it is "people aren't good at following instructions" and part of it is a case of how people read instructions. I may make some tweaks to help newbies like me, such as:
<d1b2>
<johnsel> I don't think that helps, someone who is fairly comfortable with command lines will just not read anything around the statements and most people skim through instructions anyway. I think a troubleshooting section is more useful. But that's just my opinion, though it is based on having more support experience than I'd care to have.
<d1b2>
<DanielG> Yeah that's fair; no matter how much "IMPORTANT" or red underline or <...>, people will see "I know how to do that pfft" and skip. Troubleshooting makes sense, and is searchable
<azonenberg>
yeah we have a troubleshooting section but its way out of date and also has like two entries in it
<azonenberg>
more content cant hurt
<d1b2>
<johnsel> I think calling it out more explicitly with color is useful too, but I'd expect many people to still make the same mistake
<d1b2>
<johnsel> or something like a list with a checkmark next to MinGW64 and cross the other out
<d1b2>
<DanielG> I think the way I read instructions is, Bullet, first sentence. If I understand how to do the first sentence, I skip the supporting information after it.
<azonenberg>
Yeah. Improving the windows UX, perhaps even making it something you can generate a visual studio project from, would be nice
<d1b2>
<DanielG> So reading 1. Download msys2. I know how to do that, and I see the blue URL. 2. run instruction. I know how to do that. I'll launch the thing I installed.
<d1b2>
<johnsel> @DanielG exactly that is precisely the heuristic I meant
<d1b2>
<DanielG> I think (at least for me) if there's an intermediary step of "launch THIS shell", I'd be prompted to realize I have the wrong shell.
<d1b2>
<DanielG> Perhaps even adding it to the first sentence in 1. "Download and install MSYS2, then launch MinGW64 (not MSYS2)."
<d1b2>
<johnsel> Or reverse the order. "Download and run the MinGW64 shell, which is part of the MSYS2 suite blablabla"
<d1b2>
<DanielG> Yeah I like that.
<d1b2>
<johnsel> you could also put an explicit "Verify that your shell says it is the MinGW64 shell"