<cyborg_ar>
azonenberg: what is the best way to contribute to drivers? I'd like to use glscopeclient but I don't have any of the supported hardware
<cyborg_ar>
I do have a gpib network with a bunch of boat anchors attached to it
<azonenberg>
cyborg_ar: well first off, don't, glscopeclient has been deprecated for a long time and isn't even in the latest git branch anymore :p
<azonenberg>
use ngscopeclient :p
<cyborg_ar>
Ng i meant
<azonenberg>
as far as drivers go, just sit down and write one
<cyborg_ar>
From what i can see the existing drivers are c++ files in scopehal
<azonenberg>
Yeah we really need a guide for driver devs written at some point
<azonenberg>
but the focus has been on end user docs as those are kinda more important
<azonenberg>
there's more people using than dev'ing
<cyborg_ar>
There are some base classes
<azonenberg>
anyway so the tl;dr is, make a new class derived from SCPIOscilloscope
<azonenberg>
override all of the pure virtuals, you can start out with stubs that do nothing then fill them out as you go
<azonenberg>
optionally override non-pure virtuals to provide additional features
<azonenberg>
add the source file to the cmakelists
<cyborg_ar>
About that, for some of the instruments they are not strictly scpi, can i use the gpib transport still, or do i need to override the transport as well?
<azonenberg>
add an AddDriverClass() call to DriverStaticInit() in scopehal.cpp to register your driver so it can be instantiated in the connection dialog etc
<azonenberg>
The transports provide methods to move line-oriented ascii data and raw byte sequences which are flexible enough for most applications
<azonenberg>
by default the SCPIOscilloscope constructor sends a *IDN? to populate make/model/serial number fields in the object
<azonenberg>
if that command isn't implemented/available there's an argument you can pass to the constructor to skip that and fill out those variables via any means of your choosing
<azonenberg>
at which point there's no assumption of scpi at all in the lower level stack
<cyborg_ar>
Ahh i see
<cyborg_ar>
I was hoping that was the case
<azonenberg>
Yeah. the classes are just named that because they take a SCPITransport object
<azonenberg>
we might do some refactoring to reflect that they don't *have* to actually be SCPI
<azonenberg>
and instead that it's assumed that they are most of the time but you can usually convince most hardware to work with it :p
<azonenberg>
in general if you find an instrument that doesnt work with the existing api we'll find a way to shoehorn it in lol
<azonenberg>
sometimes this means extending the api to handle cases we hadnt thought of
<azonenberg>
other times it's more practical issues like a binary blob sdk with incompatible licensing
<cyborg_ar>
Yeah my spectrum analyzer syntax is nothing like scpi, but still text line based
<azonenberg>
in which case the usual solution is to shove it in a bridge server that translates to scpi or something like it
<azonenberg>
and have scopehal talk to it
<azonenberg>
over a socket
<azonenberg>
(we also do this to provide network transparency for usb-attached hardware)
<cyborg_ar>
How does the gpib transport work though for multiple instruments? I wouldn't think it would work very well with a bridge and scopehal in separate processes
<cyborg_ar>
I do have a proper agilent usb to gpib interface connected to the network
<cyborg_ar>
I've used it with pyvisa-py (internally linux-gpib)
<azonenberg>
gpib we dont normally use with bridges
<azonenberg>
i dont think we have a viable network transparency flow for gpib gear
<cyborg_ar>
Yeah it is not easy
<cyborg_ar>
It is its own kind of network
<azonenberg>
i'd likely translate it to lecroy vicp over a socket
<azonenberg>
with a bridge server that supported multiple concurrent connections each socket bound to its own address
<azonenberg>
But no such server currently exists
<azonenberg>
Anyway at this point we do have a gpib transport (linux only), i do not know how or if it handles multiple instruments
<azonenberg>
it uses the linux gpib driver stack afaik
<cyborg_ar>
Alright ill have to give it a try
<azonenberg>
miek is probably the person to talk to about it
<azonenberg>
i think he's the one who wrote it
<azonenberg>
most of the rest of us are using usbtmc or ethernet
<azonenberg>
(in one of the many different flavors like raw tcp sockets, vicp, or lxi)
<azonenberg>
cyborg_ar: also as a fyi there's a developer zoom call happening monday at 11am pacific, link to be shared here shortly beforehand
<azonenberg>
you're welcome to join, although the primary focus will be already-active contributors talking about what they're doing, future plans, challenges, etc
<cyborg_ar>
I have the following instruments in the network: HP 6623A PSU, HP 3874A, HP 8590L, Tek TDS540 and Yokogawa DL1540. It will be interesting to see if i can get some to work
<azonenberg>
what are those? i dont have the model names of the old HP gear memorized
<azonenberg>
the TDS540 i know is a scope
<azonenberg>
or i think it is anyway
<cyborg_ar>
The 6623a is a rack mount 3 channel power supply
<azonenberg>
That should be straightforward, you might actually want to do that one first
<cyborg_ar>
A really nice one
<azonenberg>
to get familiar with the codebase and the transport layer
<cyborg_ar>
Yeah also the protocol is scpi
<azonenberg>
we have a power supply class already and it doesnt have a whole lot of functions
<azonenberg>
you could probably copy paste the R&S driver and make small changes if it's actual scpi
<cyborg_ar>
The 3874a is a multimeter, very much pre-scpi. The commands tend to be single bytes, and you get readout by just making it talk
<cyborg_ar>
That one may be fun
<azonenberg>
yeah we have multimeter support too, you can probably also use the R&S driver as a starting point but you'll have to make more changes if it's non-scpi
<azonenberg>
a scope driver is a bit more involved just because there's so many more APIs to implement and more data to work with
<azonenberg>
not harder per se, but more stuff to do
<cyborg_ar>
The 8590l is a 1.8ghz spectrum analyzer
<azonenberg>
So i dont actually think we have any pure specan drivers yet. but as a general rule, in our taxonomy a specan is a scope
<azonenberg>
it just happens to output waveforms with y axis units in dBm or similar and x axis units in Hz
<cyborg_ar>
Frequency domain scope
<azonenberg>
the APIs dont care
<azonenberg>
you can be making something that does triggered measurements of kelvins vs mass or something lol
<cyborg_ar>
It doesn't have a tracking gen or anything else fun, so it shouldn't be harder than a scope then
<azonenberg>
Yeah. and we do have methods for setting frequency domain sweep config like center frequency and span or start/stop (i forget which the API uses natively but the gui converts and allows both)
<cyborg_ar>
The tek 540 is a 4 channel scope, should be fairly straightforward
<azonenberg>
one thing to keep in mind in general is that some methods the GUI likes to call every frame during rendering, and especially with older slow hardware
<azonenberg>
that doesnt work well
<azonenberg>
with meters, psus, etc most of that happens in a background thread so it's fine to run as slow as it wants
<azonenberg>
for scopes, you will see most of the drivers do aggressive caching
<azonenberg>
so as to not query the hardware constantly
<azonenberg>
then there's a flush-cache button in the gui to clear that state if you've changed something on the front panel and scopehal hasn't realized that yet
<azonenberg>
(no instrument that i know of has a push-based flow where it can notify a client that a setting has changed)
<azonenberg>
i've been poking vendors to add that or ages
<azonenberg>
for*
<cyborg_ar>
And the yokogawa is my favourite scope, the little adorable thing
<cyborg_ar>
I guess with gpib you could actually use the srq button, most gpib instruments have it in the front panel
<cyborg_ar>
Dunno how linux-gpib reacts to it
<cyborg_ar>
woo, i got ngscopeclient to start building
<cyborg_ar>
i dont like the instructions in the manual so i had to deviate a bit
<cyborg_ar>
:( sadness, getting linker errors
<azonenberg>
lol what about the instructions do you not like?
<azonenberg>
(balrog is actively working on refactoring of the build system right now, if you have complaints talk to him)
<d1b2>
<david.rysk> @cyborg_ar please try my branch
<d1b2>
<david.rysk> you also need the related PR for scopehal
<d1b2>
<david.rysk> I've tested with linux-gpib on Arch and it works with these changes
<d1b2>
<david.rysk> I'm also rewriting the instructions in the manual to not have those problems
<d1b2>
<azonenberg> (I plan to review and upstream those later today)
<d1b2>
<david.rysk> @azonenberg I haven't thoroughly tested on Windows yet after fixing a whole pile of stuff, but it currently works with Vulkan SDK or distro packages on macOS ARM and all the Linux distros I've tried
<d1b2>
<david.rysk> Also I need to put back FindVulkan.cmake
<cyborg_ar>
azonenberg: i just managed to make it build by making it think it was fedora
<d1b2>
<david.rysk> would still be nice to get my PR tested 🙂 what distro are you on?
<cyborg_ar>
debian 12
<d1b2>
<david.rysk> yeah I tested my PR on debian 12 but I didn't test with manually installed linux-gpib since it's not in the repos
<d1b2>
<david.rysk> I tested with linux-gpib on arch; I'm using pkgconfig to look for it
<cyborg_ar>
azonenberg: i dont like that it directs people to run make install with the prefix set to /usr
<d1b2>
<david.rysk> yeah I'm taking that out
<azonenberg>
cyborg_ar: huh i am doing dev on debian 11 and 12 right now
<d1b2>
<david.rysk> I'm also taking out the manually install Vulkan SDK crap
<cyborg_ar>
that is a red flag, i managed to get stuff working for ffts using prefix=$HOME/.local
<cyborg_ar>
yeah i did not manually install the vulkan SDK either
<cyborg_ar>
because all the stuff is already packaged and you shall not install shit that debian has already packaged elsewhere
<azonenberg>
cyborg_ar: i run the upstream ubuntu packages for the SDK
<azonenberg>
from lunarg
<d1b2>
<david.rysk> you shouldn't need the upstream ubuntu packages on debian 11
<cyborg_ar>
yeah i dont install ubuntu packages into debian if i can help it
<d1b2>
<david.rysk> debian 12*
<azonenberg>
(they install fine on debian but i'm not putting that in docs)
<azonenberg>
cyborg_ar: FFTS is also going bye-bye very soon
<cyborg_ar>
good
<d1b2>
<david.rysk> debian 11... you're better off using the Vulkan SDK
<azonenberg>
it's a legacy dependency that we've almost fully replaced
<d1b2>
<david.rysk> which my PR fixes detection for
<d1b2>
<david.rysk> I just changed the instructions to put it in /usr/local
<d1b2>
<david.rysk> which works fine
<cyborg_ar>
david.rysk: i'll check out your pr
<d1b2>
<david.rysk> yeah I wouldn't mind more people testing it
<cyborg_ar>
im gonna see if what i compiled runs, i kinda went off to do something else since compiling C++ is like watching paint dry
<d1b2>
<david.rysk> not enough CPU cores? 🙂
<cyborg_ar>
yay it starts
<d1b2>
<azonenberg> 😄
<cyborg_ar>
i have 4, i guess i need 20
<d1b2>
<azonenberg> i mean more is faster lol
<d1b2>
<azonenberg> I have dual 8 core 16 thread xeons on my main dev box (although from 2017) and usually build with -j32
<d1b2>
<azonenberg> takes maybe a minute to do a full build on that beast
<d1b2>
<david.rysk> I haven't timed here, I should
<d1b2>
<david.rysk> it's no more than 3-4 minutes. M2 MacBook Air (8 core), AMD 5800X (also 8 core, just with HT)
<cyborg_ar>
ugh i wish debian would package linux-gpib
<cyborg_ar>
i started work on doing it, but for some reason i cant get dkms to work right when the kernel updates
<d1b2>
<david.rysk> you need to reboot
<cyborg_ar>
it's probably a dumb mistake
<d1b2>
<david.rysk> or does it break even after reboot?
<d1b2>
<david.rysk> (if you don't reboot after a kernel update, the running kernel gets out of sync from the kernel headers)
<cyborg_ar>
yeah i think i got the timing of the reconfigure wrong, it builds for the old kernel instead of the new kernel on reconfigure
<d1b2>
<david.rysk> (so a lot of things that involve kernel modules will break)
<d1b2>
<david.rysk> reboot and reconfigure
<cyborg_ar>
yeah that's what i do
<cyborg_ar>
but i shouldnt have to, nvidia dkms works properly, mine is just wrong i guess
<d1b2>
<david.rysk> but there's probably some way to compensate for this in the dpkg files
<d1b2>
<david.rysk> I guess look for an example
<cyborg_ar>
yea
<cyborg_ar>
woo it works now
<cyborg_ar>
i tried to drive my HP6623A with the R&S driver and it complains about invalid IDN response, just what i was hoping
<cyborg_ar>
now time to copy paste and modify...
<azonenberg>
Awesome, looking forward to a PR
<azonenberg>
cyborg_ar: is that the one that has the plain text protocol?
<azonenberg>
you can try going window | scpi console and type text commands in there and see if it runs them (but it may be confused since the R&S driver will also be sending status query commands it doesn't know how to handle)
<d1b2>
<david.rysk> Mac M2: make -j9 1817.07s user 82.11s system 436% cpu 7:15.02 total AMD 5800X: real 5m 37.02s user 57m 23.70s sys 7m 21.67s
<_whitenotifier-e>
[scopehal] azonenberg c9d77db - Update to latest xptools
<cyborg_ar>
hmmm now i need to remember how to do string comparisons in C++
<d1b2>
<david.rysk> std:strings? I thought you just == them
<azonenberg>
Yeah
<azonenberg>
[100%] Built target ngscopeclient real 240.37 user 3460.56 sys 536.38
<d1b2>
<azonenberg> @david.rysk so four minutes with -j32 on my xeon box
<d1b2>
<david.rysk> @azonenberg what gen and how many cores?
<azonenberg>
dual socket 6144
<d1b2>
<david.rysk> I'm using -j9 on my Mac and -j17 on my AMD box
<azonenberg>
skylake-sp, 8 cores / 16 threads per socket
<d1b2>
<david.rysk> yeah, IPC has improved that much since skylake-sp
<azonenberg>
it was, in 2017, the highest single thread performance you could get in a xeon
<azonenberg>
(they had more cores but all significantly lower fmax)
<d1b2>
<david.rysk> and AMD is among the highest IPC of x86-based (I should test on my 7800x3d, it's probably faster)
<azonenberg>
the CI box is a xeon 5320
<azonenberg>
i havent timed it on there lately, i know its 7 mins end to end last time i checked but that includes package installs and checkouts from github
<azonenberg>
so the actual build could well be <4
<azonenberg>
(and wow i still think of this thing as "new" but it's coming up on 7 years old now?)
<d1b2>
<david.rysk> I've been testing across various Linux distros and package installs are vastly faster on Alpine Linux
<d1b2>
<david.rysk> like, stuff that feels like "waiting for paint to dry" on Debian happens instantly
<d1b2>
<johnsel> I like alpine
<d1b2>
<david.rysk> @johnsel alpine might be a good choice for first-line CI
<d1b2>
<johnsel> I used to run it embedded for a 3d camera system with 2x6 cams
<d1b2>
<246tnt> Damn, I wanted to join the benchmarking fun but of course it doesn't build after a git pull 😅
<d1b2>
<johnsel> did you pull david's fork tnt?
<d1b2>
<david.rysk> @246tnt how doesn't it build? If you can test out my PR I'd appreciate it 😄
<d1b2>
<246tnt> I no I just pulled from upstream normal repo.
<d1b2>
<246tnt> I can check the PR
<d1b2>
<johnsel> david is your PR also suitable for Windows?
<d1b2>
<johnsel> I'll give it a try locally if so
<d1b2>
<david.rysk> @johnsel it should be but I haven't tested it as thoroughly
<d1b2>
<david.rysk> Most notable I haven't tested Wix packaging
<d1b2>
<david.rysk> It fixes the issues with Catch-based tests on Windows
<d1b2>
<johnsel> let's see
<d1b2>
<johnsel> also I'm going to change the way that Vulkan SDK version pulling works in the CI script
<d1b2>
<david.rysk> Feel free to check my docs PR
<d1b2>
<johnsel> I really don't like that I can't just copy paste the CI commands and run them locally
<d1b2>
<johnsel> if the CI is our golden copy (and it should be) then why make it impossible to run the same locally
<d1b2>
<246tnt> Wait ... how do I build ... there is no CMakelist anymore ?
<d1b2>
<johnsel> how about instead of doing this locally I do it on the CI system and pull an image once I've done the msys install
<cyborg_ar>
huh? so this codebase is cxx11?
<d1b2>
<johnsel> that way that progresses too while I'm at it
<d1b2>
<david.rysk> cxx17
<cyborg_ar>
error: ‘std::string’ {aka ‘class std::__cxx11::basic_string<char>’} has no member named ‘starts_with’
<cyborg_ar>
i so very sad
<d1b2>
<david.rysk> the CMakeLists should be declaring C++17
<d1b2>
<david.rysk> why?
<d1b2>
<david.rysk> are you stuck on an ancient (unsupported) C++ lib?
<cyborg_ar>
because that was added in cxx20
<d1b2>
<246tnt> Mmm, it doesn't find YAML anymore while the upstream finds it just fine.
<d1b2>
<david.rysk> @246tnt platform details?
<d1b2>
<246tnt> Gentoo
<d1b2>
<246tnt> But yaml has been manually installed in /opt/scopehal ( same prefix I specify for scopehal build ) and also PKG_CONFIG_PATH contains path to where the .pc for it is.
<d1b2>
<johnsel> @azonenberg the damn CI is eating up ghost resources again
<d1b2>
<david.rysk> you need CMAKE_PREFIX_PATH=/opt/scopehal
<d1b2>
<david.rysk> and you need yaml-cpp to install the .cmake files into its prefix in lib/cmake, but it's probably doing that
<d1b2>
<246tnt> @david.rysk But I didn't need it before 😁 Usually the prefix you install to is searched directly for that stuff.
<d1b2>
<david.rysk> hmm yeah, if you're specifying CMAKE_INSTALL_PREFIX then that should be added to CMAKE_PREFIX_PATH
<cyborg_ar>
aaaargh why is it recompiling everything
<d1b2>
<david.rysk> are there .cmake files for yaml-cpp in your /opt/scopehal?
<d1b2>
<david.rysk> (I'm asking all these questions because, uh, I don't test on gentoo)
<d1b2>
<david.rysk> okay yeah so it should pick those up and pick up yaml-cpp from them, but if you have more detail on how it's erroring (like the error text) I'd appreciate it
<d1b2>
-std=c++17 -fPIC -Winvalid-pch -include /home/tnt/projects/ext/scopehal/scopehal-apps/_build/lib/scopehal/CMakeFiles/scopehal.dir/cmake_pch.hxx -MD -MT lib/scopehal/CMakeFiles/scopehal.dir/scopehal.cpp.o -MF CMakeFiles/scopehal.dir/scopehal.cpp.o.d -o CMakeFiles/scopehal.dir/scopehal.cpp.o -c /home/tnt/projects/ext/scopehal/scopehal-apps/lib/scopehal/scopehal.cpp In file included from
<d1b2>
/home/tnt/projects/ext/scopehal/scopehal-apps/lib/scopehal/scopehal.cpp:35: /home/tnt/projects/ext/scopehal/scopehal-apps/lib/scopehal/scopehal.h:55:10: fatal error: yaml-cpp/yaml.h: No such file or directory 55 | #include <yaml-cpp/yaml.h>
<d1b2>
<246tnt> //Path to a file. YAML_CPP_INCLUDEFILES_DIR:PATH=/usr/include //Path to a library. YAML_CPP_LIBRARIES_FILES:FILEPATH=/opt/scopehal/lib64/libyaml-cpp.a
<d1b2>
<david.rysk> thanks, have some other things to do but will investigate soon
<d1b2>
<246tnt> That's weird.
<d1b2>
<246tnt> (above is from CMakeCache)
<d1b2>
<david.rysk> probably my logic is broken
<d1b2>
<david.rysk> I had to work around broken .cmake files in I think Debian oldstable and the way I did it there is suboptimal
<d1b2>
<david.rysk> I’ll fix that later today
<d1b2>
<david.rysk> Right now I’ve got some other things to take care of
<d1b2>
<246tnt> I removed the "Workaround" stuff and now it builds ...
<d1b2>
<david.rysk> Ok good 🙂 I’ll fix it
<d1b2>
<david.rysk> (Wish we didn’t have to support “stable” distros)
<cyborg_ar>
hmm
<d1b2>
<246tnt> (My bad, no it didn't work either, it mayeb went a bit further , or just the parallel build took a different path ...)
<d1b2>
<david.rysk> Yeah I didn’t expect that to fix it
<d1b2>
<david.rysk> Due to how cmake targets work
<cyborg_ar>
argh, all the nice string things are in cxx20
<cyborg_ar>
:_;
<azonenberg>
yeah we'll probably move to cxx20 once all stable distros have compilers for it
<cyborg_ar>
what's the preferred way to make a formatted string to send down a transport?
<cyborg_ar>
snprintf?
<cyborg_ar>
basically, all the commands take a channel number as argument
<azonenberg>
cyborg_ar: for a number in general we usually use std::to_string, snprintf does work as well. but for channel numbers there's a better solution
<azonenberg>
that's what the "hardware name" of the channel is for
<azonenberg>
the hardware name is immutable and is whatever the instrument's API refers to the channel as
<azonenberg>
whereas the display name is the user configurable "friendly name" for the channel that you can rename based on usage
<cyborg_ar>
the commands on this instrument are like "VSET 1 3.3", that sets channel 1 to 3.3 volts
<azonenberg>
Yep
<azonenberg>
so for example in the lecroy driver we have
<azonenberg>
for reference, the transport class has two levels of API
<azonenberg>
there's the raw/immediate ones that talk directly to the socket and require external sync points
<azonenberg>
and there's the queued ones that all new drivers should use (we're probably going to deprecate the old one or make it private to the driver implementation)
<azonenberg>
that allow you to batch up commands without a mutex lock then push to hardware when you explicitly flush or when you send a command that expects a reply
<azonenberg>
the idea is that a write-only operation on the instrment should not block the gui thread
<azonenberg>
(we really need a good dev getting-started guide... too many things to fix, too little time)
<d1b2>
<david.rysk> @246tnt testing proper fix...
<d1b2>
<david.rysk> @246tnt is there a /usr/include/yaml.h or /usr/include/yaml-cpp/yaml.h on your system?
<d1b2>
<david.rysk> Asking because that could lead to a potential bug (heh)
<tnt>
Yes, the first one.
<tnt>
Belongs to libyaml which is installed system wide (and distinct from yaml-cpp)
<azonenberg>
lol
<azonenberg>
...
<azonenberg>
so there are two different libs that install a file called yaml.h
<azonenberg>
wcgw
<tnt>
yeah, that's why you include yaml-cpp/yaml.h and not yaml.h :D
<tnt>
(and the include path given by the yaml config is prefix/include and not prefix/include/yaml-cpp)
<d1b2>
<david.rysk> well I have to do proper search for yaml-cpp/yaml.h 🙂
<d1b2>
<david.rysk> oops left a debug message() in 🙂
<d1b2>
<246tnt> The fact CMakeCache still ends up with YAML_CPP_INCLUDEFILES_DIR:PATH=/usr/includedoesn't bode well.
<d1b2>
<david.rysk> did you try after the latest commit? That should adjust the logic
<d1b2>
<david.rysk> the YAML_CPP_INCLUDEFILES_DIR you were getting was from the Debian workaround, which should now only trigger if the yaml-cpp installed is really old.
<d1b2>
<246tnt> I just pulled and updated submodules.
<d1b2>
<david.rysk> And I adjusted the logic for that workaround too, so it shouldn't find the wrong yaml.h
<d1b2>
<david.rysk> yeah the workaround logic is fixed now
<d1b2>
<david.rysk> YAML_CPP_LIBRARIES_FILES is /usr/lib/x86_64-linux-gnu/libyaml-cpp.so CMake Error at CMakeLists.txt:79 (find_path): Could not find YAML_CPP_INCLUDEFILES_DIR using the following files: yaml-cpp/yaml.h
<d1b2>
<david.rysk> (when /usr/include/yaml.h exists but /usr/include/yaml-cpp/yaml.h does not)
<tnt>
It doesn't go through the workaround. Added `message` to confirm ... but still ends up with YAML_CPP_INCLUDEFILES_DIR:PATH=/usr/include
<d1b2>
<david.rysk> YAML_CPP_INCLUDEFILES_DIR should not be getting set at all unless the workaround is triggered. Are you deleting your CMakeCache?
<d1b2>
<david.rysk> or wait, hm
<d1b2>
<david.rysk> now that is strange
<d1b2>
<david.rysk> I'll have to test on Gentoo I guess
<d1b2>
<246tnt> Ok, so /opt/scopehal/include/yaml-cpp/yaml.h exists. message("${YAML_CPP_H_FILE}") prints /opt/scopehal/include/yaml-cpp/yaml.h but still the if(NOT EXISTS YAML_CPP_H_FILE) triggers.
<d1b2>
<david.rysk> pushed additional changes to fix that
<d1b2>
<david.rysk> @246tnt can you reset, pull, and try again? Again sorry about this!
<d1b2>
<246tnt> yeah, it's already building good so far.
<d1b2>
<azonenberg> @david.rysk none of this work you're doing right now should affect the tests refactor right?
<d1b2>
<azonenberg> that's next on my merge queue
<d1b2>
<david.rysk> @azonenberg correct
<d1b2>
<246tnt> Builds fine AFAICT.
<cyborg_ar>
woo i think i completed the driver
<d1b2>
<azonenberg> @david.rysk Minor coding style fix: we use ansi/allman style braces (curly brace on its own line not with the function name)
<d1b2>
<azonenberg> e.g. in testRunEnded
<d1b2>
<azonenberg> ditto for declaring testRunListener
<d1b2>
<azonenberg> if you can fix that it looks good to merge
<d1b2>
<david.rysk> in the class definition too?
<d1b2>
<azonenberg> yeah
<d1b2>
<azonenberg> braces always go on their own line, the sole exception is one-line inline functions
<d1b2>
<azonenberg> which look like
<d1b2>
<azonenberg> function() { body; }
<cyborg_ar>
it works great!
<azonenberg>
cyborg_ar: awesome :)
<azonenberg>
When you're happy send a PR to scopehal (and a companion PR to scopehal-docs listing the name of the driver and what hardware it works with)
<_whitenotifier-e>
[scopehal-apps] azonenberg 7beedfe - Merge pull request #672 from d235j/fix-headless-build-with-tests Move test initialization and teardown into Catch2 event listeners
<cyborg_ar>
i wrote some helper functions to format the commands so i wouldnt have to write so much repeated code
<azonenberg>
cyborg_ar: Let's see... Keep the header the same (my name and contributors)
<azonenberg>
you're included under "and contributors"
<cyborg_ar>
ah, that's not how copyright works but ok
<azonenberg>
it's a derivative work of what was originally my code
<cyborg_ar>
should i keep the date range the same?
<azonenberg>
yeah, also remove the v0.1 in the header (we're gradually cleaning that out from each file as we go)
<azonenberg>
but yeah basically since we don't have folks sign CLAs
<azonenberg>
copyright in the project is collective among all of us, there's no one right holder
<azonenberg>
technically everyone owns the rights to the specific lines they wrote i guess
<azonenberg>
but just putting "everyone" in the header makes more sense :p
<cyborg_ar>
alright, easy fix
<azonenberg>
(not signing CLAs was somewhat intentional because it makes it harder for someone to take over the project and move to a less friendly license)
<cyborg_ar>
also fixed the @brief thing
<azonenberg>
(this is also why we have the single unified date range in the header, because again technically every line is copyrighted by the author the year it was created, there's code dating back to 2012 in the project, but nobody has time to trace out the origin of every line of every file)
<azonenberg>
having the simple umbrella header makes more sense
<azonenberg>
wrt the optional features like soft start, were there any features that you wanted to add that were not in the API?
<azonenberg>
(we don't support some of the fancier stuff like tracking modes yet, that's a pending issue)
<azonenberg>
for ChannelCommand() why didn't you just use std::string concatenation and the channel hwnames?
<azonenberg>
while this code isn't wrong, using std::string operations typically results in less verbose code
<azonenberg>
So we're trying to move new drivers to that style and gradually transition older ones over
<cyborg_ar>
how do i get the hwname? the argument is an int not a channel
<cyborg_ar>
i have to index the channel in the array?
<azonenberg>
m_channels[chan]->GetHwname()
<azonenberg>
yeah
<cyborg_ar>
yeah it doesnt make much difference in the verbosity, i could eliminate the repetition of the "+1"
<azonenberg>
it turns those functions into one-liners in most cases
<azonenberg>
eliminating a function call
<azonenberg>
instead of ChannelCommand("VSET", chan, volts);
<azonenberg>
you'd do m_transport->SendCommandQueued(string("VSET ") + m_channels[chan]->GetHwname() + to_string(volts))
<azonenberg>
or well there should be a space in there
<azonenberg>
but you get the idea
<cyborg_ar>
yeah one line but a long line :)
<azonenberg>
yeah well our coding style is 120 character lines
<azonenberg>
and in most cases it ends up being more readable than printf-style expressions
<cyborg_ar>
also it creates like 3 objects, though i guess since they are const they may be optimized out
<cyborg_ar>
id like to use std::format when it becomes available...
<azonenberg>
Yeah and more importantly, the control plane traffic is negligible CPU time compared to the data plane work on scope samples etc
<azonenberg>
more readable code is usually the preference for control plane
<azonenberg>
the network / io performance dominates the few extra clocks you spend on string manipulation
<azonenberg>
data plane is a very different story :p
<cyborg_ar>
yeah that's why i wrote those helper functions, so there is a lot less noise, i may put the call to the gethwname just to remove code repetition
<azonenberg>
yeah i'm not opposed to having the helper, although i'd probably put it in the header and inline it if it turns into a one-liner
<azonenberg>
i'm just trying to move away from the old drivers that had like three or four lines of code for every single command
<d1b2>
<david.rysk> @johnsel any luck testing on windows?
<azonenberg>
declaring a temporary, sprintf, send
<cyborg_ar>
azonenberg: one thing im not seeing is a way to reset a tripped OCP, also my psu has OVP settings and trip/reset
<azonenberg>
cyborg_ar: with my R&S gear you have to send an off command then an on command to reset OCP
<azonenberg>
We don't have APIs for OVP, please file a github ticket against ngscopeclient/scopehal so we can add that
<cyborg_ar>
i think mine you have to explicitly reset the OCP or OVP, and also resetting that will reenable the output but will not clear the fault
<azonenberg>
Interesting. File a ticket for that as well, we may need to rethink how we handle OCP then
<azonenberg>
this is part of the fun of making a universal API for T&M gear
<azonenberg>
different instruments implement the same basic idiom differently and you have to figure out how to abstract those differences
<azonenberg>
i think you're only the second driver, HM804x being the first, for a PSU that has OCP
<azonenberg>
HMC804x*
<azonenberg>
the more drivers of a given type we have the more the APIs tend to reach a unified, all-encompassing state
<cyborg_ar>
yeah if you trigger OCP it will not let you do anything with the output
<cyborg_ar>
until you clear the OC
<azonenberg>
so near term, you can make it so that when you shut down a channel it sends the OCP clear command
<azonenberg>
that will match R&S behavior
<cyborg_ar>
i could send a clear OC command after sending the output off command, that reproduces the R&S behavior where turning the output off resets OC
<azonenberg>
and at least allow you to use the instrument if you trip OCP
<azonenberg>
Exactly
<azonenberg>
Long term we should make a dedicated API (that for R&S sends a clear command and for your unit sends the dedicated command)
<azonenberg>
but that will mean making a new method in the class, updating the R&S driver, and doing some GUI work to add a button for resetting it
<azonenberg>
absolutely doable but something i don't have time for today :)
<cyborg_ar>
yeah not needed
<azonenberg>
but absolutely file the ticket
<azonenberg>
"some instrument does things in a way that the current API isn't a good match for" is something we definitely want to know about
<cyborg_ar>
haha i connected two power supplies in the filter graph and made them ghetto track each other
<cyborg_ar>
two channels i mean
<azonenberg>
and thats the beauty of the filter graph model
<azonenberg>
you can do stuff like that
<azonenberg>
or create a ramp, or have it track a multimeter reading
<cyborg_ar>
how do you create a ramp? ScalarStairstep?
<azonenberg>
Yeah
<azonenberg>
i use it a lot if i'm trying to characterize how a device operates over a range of inputs/outputs
<azonenberg>
you can also use it with a load to set the set point for current or voltage or something
<azonenberg>
e.g. when i was designing my 48V intermediate bus converter i used ScalarStairstep to sweep from 0 to 6 amps output current on my siglent load
<azonenberg>
and then measure output voltage, current consumption, temperature, power lost in the conversion stages, ripple, etc
<azonenberg>
and plot against that
<azonenberg>
but yeah the power of the filter graph model is the flexibility, we give you the tools and building blocks and you can make whatever you need out of them
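The stairstep sweep described above (0 to 6 A on the load) boils down to emitting an evenly spaced sequence of set points. Here is a minimal sketch of that idea; this is not the actual ScalarStairstep filter code, just the concept behind it, with an invented function name.

```cpp
#include <vector>

// Concept behind a stairstep sweep: step a scalar set point (e.g. a load's
// current) from start to end in nsteps equal increments, inclusive of both
// endpoints. Each emitted value would drive the instrument while other
// filters record the response.
std::vector<double> StairstepSweep(double start, double end, int nsteps)
{
	std::vector<double> points;
	for(int i = 0; i <= nsteps; i++)
		points.push_back(start + (end - start) * i / nsteps);
	return points;
}
```

Feeding each point to a load's current set point, then logging the measured outputs at each step, gives exactly the characterization sweep described.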
<cyborg_ar>
how do you plot a scalar? or do you have to export to a csv or something like that?
<azonenberg>
the first way is to get a live reading: you drag from the scalar output and select "measure"
<azonenberg>
it pops up in the measurements window
<azonenberg>
(you can't currently move or reorder, only remove - there's a pending ticket for that)
<azonenberg>
the second is to feed it to the "trend" filter which displays a live plot of the last N values over time
<azonenberg>
as a time domain waveform
<azonenberg>
or for a one-time measurement just mouse over the output in the graph editor and the tooltip will show the current reading
<azonenberg>
note that the default view for the trend filter is almost useless, i need to make it autofit better
<azonenberg>
since the trend has time zero as the current time and values scrolling left of that, and the default view is zoomed in to like nanosecond level
<azonenberg>
so you have to zoom way out and move left to see anything :p
<azonenberg>
also, trend filters show up as a trigger group which means you can start, stop, or single-shot update them just like a scope
<azonenberg>
by default all trends are linked but you can unlink them if you want to pause just one or something
<cyborg_ar>
yay i can see the trend
<cyborg_ar>
this thing is definitely chatty
<azonenberg>
The power supply thread defaults to polling the instrument at i think 20 Hz (or slower if the device takes a long time to respond) to get responsive gui updates
<azonenberg>
if it's spamming the bus too much we can look at adding a preference setting to lower the polling rate either for specific instruments or globally
<azonenberg>
for modern ethernet based gear that's fine
<azonenberg>
for gpib it might be too much
<azonenberg>
(in particular since it's a shared medium and you don't want to DoS other instruments)
<azonenberg>
also the SCPITransport layer supports rate limiting so you could perhaps add that in the driver as well, but i think it makes sense to be a preference
<azonenberg>
We have support in the ngscopeclient gui architecture for per-driver preferences although right now i think only the lecroy scope driver takes advantage of this
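The rate limiting idea mentioned for SCPITransport can be sketched as a minimum-interval gate between commands, so a shared bus like GPIB isn't flooded by the ~20 Hz polling loop. This stand-alone version is an illustration only; the class and method names are invented and the real transport handles this internally.

```cpp
#include <chrono>

// Enforce a minimum interval between commands. Instead of sleeping, this
// version just reports how long the caller must wait before sending at a
// given time, which keeps it easy to reason about.
class RateLimiter
{
public:
	explicit RateLimiter(std::chrono::milliseconds interval)
		: m_interval(interval)
		, m_nextSend(std::chrono::steady_clock::time_point::min())
	{}

	// Returns the delay required before a command sent at time "now" may go
	// out, and books the slot for the following command
	std::chrono::milliseconds DelayBeforeSend(std::chrono::steady_clock::time_point now)
	{
		std::chrono::milliseconds wait(0);
		if(now < m_nextSend)
			wait = std::chrono::duration_cast<std::chrono::milliseconds>(m_nextSend - now);
		m_nextSend = (now > m_nextSend ? now : m_nextSend) + m_interval;
		return wait;
	}

private:
	std::chrono::milliseconds m_interval;
	std::chrono::steady_clock::time_point m_nextSend;
};
```

A per-instrument preference, as suggested above, would simply feed a larger interval to slow shared buses and a smaller one to Ethernet gear.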
<cyborg_ar>
yeah it can't seem to fit a measurement command in sideways when it's setting the vset every frame
<cyborg_ar>
yeah if it's sending tons of vsets it won't let it do anything else
<cyborg_ar>
oh interesting, looks like basically it fills the queue with stuff, so it will get arbitrarily behind
<cyborg_ar>
and it will stop making measurements until the queue is empty
<cyborg_ar>
ahh there are functions for deduplicating commands in the queue but they are only aware of scpi syntax
<cyborg_ar>
beans
<cyborg_ar>
that seems to be the only part that requires a specific syntax
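The SCPI-aware deduplication being discussed could be modeled like this. A hedged sketch: `DedupQueue` and its keying rule are invented here to show why the logic has to understand SCPI's `COMMAND value` syntax, and why a non-SCPI instrument would need its own notion of when two queued commands are redundant.

```cpp
#include <deque>
#include <string>

// If the GUI enqueues "SOUR1:VOLT ..." faster than the instrument accepts
// commands, only the most recent set point per command keyword matters.
// Keying on the text before the first space assumes SCPI's "COMMAND value"
// syntax - the part of the stack that, as noted above, is syntax-specific.
class DedupQueue
{
public:
	void Push(const std::string& cmd)
	{
		// "SOUR1:VOLT 3.3" -> key "SOUR1:VOLT"; a query with no argument
		// ("MEAS:CURR?") keys on the whole string
		std::string key = cmd.substr(0, cmd.find(' '));

		// Overwrite a stale queued set point in place rather than growing
		// the queue arbitrarily far behind real time
		for(auto& q : m_queue)
		{
			if(q.substr(0, q.find(' ')) == key)
			{
				q = cmd;
				return;
			}
		}
		m_queue.push_back(cmd);
	}

	std::deque<std::string> m_queue;
};
```

With this behavior, a burst of vset updates collapses to one pending command, so measurement queries still get through instead of starving until the queue drains.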
<d1b2>
<johnsel> sorry I had to run, apparently I promised my fiancee a date and forgot 😐
<d1b2>
<johnsel> big oops
<cyborg_ar>
RIP
<d1b2>
<johnsel> anyway I did my duties and am back now
<d1b2>
<johnsel> @azonenberg the CI resource set is having more ghost resource usage
<d1b2>
<johnsel> I saw there's also a reboot pending in the xcp-ng status
<d1b2>
<johnsel> I wonder if that would fix it
<d1b2>
<johnsel> anyway can you make a template from this ?
<d1b2>
<david.rysk> @johnsel can you bump the Vulkan SDK? That one is clearly broken
<d1b2>
<david.rysk> (meaning on the debian CI
<d1b2>
<azonenberg> can you be specific as to what version you want the CI to use?
<d1b2>
<david.rysk> the latest
<d1b2>
<azonenberg> (we should update docs to point to whatever the CI is)
<d1b2>
<david.rysk> yeah my docs updates will suggest that version OR latest
<d1b2>
<azonenberg> Do not suggest "or latest"
<d1b2>
<david.rysk> latest is 1.3.275.0
<d1b2>
<azonenberg> vulkan-hpp occasionally has breaking API changes
<d1b2>
<david.rysk> if it's broken on latest, people should file a bug
<d1b2>
<azonenberg> yes but we don't want new users worrying about upstream bugs
<d1b2>
<azonenberg> we want to recommend using a tested and known working release
<d1b2>
<david.rysk> I can add appropriate text then
<d1b2>
<azonenberg> every time a new sdk version comes out, we should test it and once it passes CI on all platforms then update docs to point to it
<d1b2>
<azonenberg> but this should not be done automatically or without testing
<d1b2>
<david.rysk> "You can also use latest, but be aware that the Vulkan API sometimes changes so it might not compile. If this is the case, please file a bug."
<d1b2>
<azonenberg> yeah that works
<d1b2>
<azonenberg> in particular the c++ wrappers
<d1b2>
<azonenberg> the C API normally is upward compatible
<d1b2>
<azonenberg> the RAII layer is not
<d1b2>
<azonenberg> (but it makes the code a lot cleaner so it's worth using even if it means refactoring once a year or so when they break something)
<d1b2>
<david.rysk> I didn't test against older SDKs because I know stuff was a lot more broken in older versions
<d1b2>
<david.rysk> well I tested against one and it was worse 😛
<d1b2>
<azonenberg> Yeah i dont care what the version is, but a) if it's not the distro packaged version we should have the CI/build instructions use the upstream release
<d1b2>
<azonenberg> and b) we should not recommend a version in the docs that we have not tested specifically
<d1b2>
<david.rysk> I'm supporting either distro-packaged or upstream-release by user choice
<d1b2>
<david.rysk> docs will explain how to use either
<d1b2>
<azonenberg> ok as long as you test both cases since they probably will have differetn bugs :p
<d1b2>
<david.rysk> except on oldstable where distro packaged may be too old (I didn't look at backports yet)
<d1b2>
<david.rysk> I mean yeah I have different workarounds for both cases 😛
<d1b2>
<david.rysk> @johnsel @azonenberg what Debian version does the selfhosted VM run? I think it's bullseye?
<d1b2>
<david.rysk> I just want to confirm
<d1b2>
<johnsel> 11 afaik, which should be bullseye yes
<d1b2>
<johnsel> hey andrew what is a drawer steak? drawers teak? lol
<_whitenotifier-e>
[scopehal-apps] Johnsel bc14063 - Re-enabled test execution on ci
<d1b2>
<david.rysk> @johnsel I'm working on the CI files now
<d1b2>
<johnsel> cool, if you really want to make me happy get rid of that templated version number
<d1b2>
<johnsel> I'm wondering what happens if I run the test on the ci vm now
<d1b2>
<azonenberg> It's a piece of meat in a desk drawer
<d1b2>
<johnsel> I actually realized just now I was under the impression I broke that vm template entirely
<d1b2>
<johnsel> I'm up for it
<d1b2>
<azonenberg> tl;dr its an in joke from somebody spacing out and putting something in the wrong drawer while stocking the minifridge in a grad student office
<d1b2>
<azonenberg> became the name of my first company
<d1b2>
<azonenberg> It hasn't done business in probably 15 years at this point
<d1b2>
<johnsel> yeah I saw it in the git logs
<d1b2>
<azonenberg> but it became my main email address and i have so many accounts on there i've ketp it because i was too lazy to migrate it off :p
<d1b2>
<johnsel> I thought you used antikernel something
<d1b2>
<azonenberg> i have so much legacy stuff on drawersteak that i kept the DNS and mail active since it was less work than figuring out how to change the address on every account lol
<d1b2>
<johnsel> Yeah I feel that, for anything not directly related to my old business I often use my personal email
<d1b2>
<johnsel> Just because I don't want to deal with migrating it in the future
<d1b2>
<azonenberg> yeah i dont have a gmail or anything else, only my own domains
<d1b2>
<johnsel> I actually lost access to my google apps account after my business went under
<d1b2>
<johnsel> I lost the phone number asssociated with the admin account