azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
Degi has quit [Ping timeout: 248 seconds]
Degi has joined #scopehal
GenTooMan has quit [Ping timeout: 244 seconds]
<azonenberg> also hmmmm, so it seems like a vulkan "pipeline" is roughly the equivalent of a "program" in opengl land
<azonenberg> and i need one for each unique compute shader
GenTooMan has joined #scopehal
Johnsel has quit [Ping timeout: 252 seconds]
Johnsel has joined #scopehal
nelgau has joined #scopehal
<azonenberg> ok so we are going to have to do a bunch more infrastructure on pipeline caching (which apparently is a whole can of worms in itself)
<azonenberg> and i'll need to think about how to share pipelines across filter instances, and if that is even possible/something i should be doing
<azonenberg> I might just want to have one cache and instantiate multiple pipelines to keep things thread safe
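One possible shape for the "one cache, thread safe" idea is a single mutex-guarded cache that filter instances query by shader name. This is a hypothetical sketch, not scopehal code: the `ComputePipeline` struct stands in for a real `vk::raii::Pipeline`, and the injected factory is where real code would call `vkCreateComputePipelines` with the shared `VkPipelineCache`. Whether identical pipelines should be shared across filter instances or duplicated per instance is exactly the open question above; this version memoizes one pipeline per shader.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Stand-in for a real vk::raii::Pipeline handle (hypothetical)
struct ComputePipeline
{
    std::string shaderPath;
};

class PipelineCache
{
public:
    using Factory = std::function<std::shared_ptr<ComputePipeline>(const std::string&)>;

    explicit PipelineCache(Factory f) : m_factory(std::move(f)) {}

    // Returns the pipeline for this shader, creating it on first use.
    // The mutex serializes creation, so concurrent filter threads can
    // safely hit the same cache; lookups after creation are cheap.
    std::shared_ptr<ComputePipeline> Get(const std::string& shaderPath)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_pipelines.find(shaderPath);
        if(it != m_pipelines.end())
            return it->second;
        auto pipe = m_factory(shaderPath);
        m_pipelines[shaderPath] = pipe;
        return pipe;
    }

private:
    std::mutex m_mutex;
    std::map<std::string, std::shared_ptr<ComputePipeline>> m_pipelines;
    Factory m_factory;
};
```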
<d1b2> <bob_twinkles> you may want to check out https://github.com/ValveSoftware/Fossilize, which is how Valve solves this problem for games on Linux
<d1b2> <bob_twinkles> it might be a bigger hammer than you need, but also you don't have to write it
<azonenberg> among other things, it's using raw vulkan C api calls
<azonenberg> while i'm using the vk::raii C++ layer
<azonenberg> and pipeline caching is a separate issue, it's a binary serialization format generated by the driver
<azonenberg> you basically just have to give it back the blob
<azonenberg> the wrinkle is, you also have to keep track of the exact gpu driver version etc
<azonenberg> as sometimes they crash loading blobs from other versions :p
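The standard guard against that crash is checking the blob's header before handing it back to `vkCreatePipelineCache`: per the Vulkan spec, every cache blob begins with a `VkPipelineCacheHeaderVersionOne` header whose `vendorID`, `deviceID`, and `pipelineCacheUUID` can be compared against the current `VkPhysicalDeviceProperties` (the UUID changes when the driver updates). A sketch, not scopehal code — the structs mirror the spec layout locally so it needs no Vulkan headers, and the `memcpy` assumes a little-endian host to match the spec's byte order:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr size_t VK_UUID_SIZE = 16;

// Mirrors VkPipelineCacheHeaderVersionOne (32 bytes, little-endian per spec)
struct CacheHeader
{
    uint32_t headerSize;            // must be >= 32
    uint32_t headerVersion;         // VK_PIPELINE_CACHE_HEADER_VERSION_ONE == 1
    uint32_t vendorID;
    uint32_t deviceID;
    uint8_t  uuid[VK_UUID_SIZE];    // pipelineCacheUUID, changes with driver version
};

// The identifying fields we'd copy out of VkPhysicalDeviceProperties
struct DeviceIdentity
{
    uint32_t vendorID;
    uint32_t deviceID;
    std::array<uint8_t, VK_UUID_SIZE> uuid;
};

// Returns true only if the saved blob came from this exact GPU + driver,
// so stale blobs get discarded instead of fed to a driver that may crash.
bool IsCacheBlobUsable(const std::vector<uint8_t>& blob, const DeviceIdentity& dev)
{
    CacheHeader hdr{};
    if(blob.size() < sizeof(hdr))
        return false;
    memcpy(&hdr, blob.data(), sizeof(hdr));
    return hdr.headerSize >= sizeof(hdr)
        && hdr.headerVersion == 1
        && hdr.vendorID == dev.vendorID
        && hdr.deviceID == dev.deviceID
        && memcmp(hdr.uuid, dev.uuid.data(), VK_UUID_SIZE) == 0;
}
```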
<d1b2> <azonenberg> i am absolutely loving VK_LAYER_KHRONOS_validation
<d1b2> <azonenberg> it's already caught several bugs that i can't even begin to imagine how i'd find any other way
<_whitenotifier-7> [scopehal] azonenberg pushed 6 commits to master [+0/-0/±14] https://github.com/glscopeclient/scopehal/compare/d533852700c1...d188eec2ab89
<_whitenotifier-7> [scopehal] azonenberg 0c04901 - Initial work on Vulkan implementation of SubtractFilter. Incomplete, not functional.
<_whitenotifier-7> [scopehal] azonenberg 68c1b18 - StreamDescriptor: calling GetData() on a null stream is now legal and returns null
<_whitenotifier-7> [scopehal] azonenberg bf6c6ba - Filter/FilterGraphExecutor: added new Refresh() method that takes a Vulkan command buffer and queue
<_whitenotifier-7> [scopehal] ... and 3 more commits.
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±2] https://github.com/glscopeclient/scopehal-apps/compare/8f54ba69b465...2291127fcbac
<_whitenotifier-7> [scopehal-apps] azonenberg 2291127 - Added SPIR-V binary shaders as a dependency, copy them to build directory so the app can see them
<azonenberg> Ok this is good progress. not done, but i wanted to quickly throw together a test on the AKL-PR1 before bed
Johnsel has quit [Ping timeout: 252 seconds]
Johnsel has joined #scopehal
bvernoux1 has quit [Quit: Leaving]
Johnsel has quit [Ping timeout: 256 seconds]
GenTooMan has quit [Ping timeout: 256 seconds]
GenTooMan has joined #scopehal
GenTooMan has quit [Ping timeout: 244 seconds]
GenTooMan has joined #scopehal
Johnsel has joined #scopehal
<Johnsel> azonenberg: I've found out that using your xen server, at least for Windows CI, may not be possible
<Johnsel> I have a Windows Server 2022 install running in hyper-v now, but the GH runner is under the impression that it can use WSL, which it can't because the virtual CPU does not have nested virtualization capabilities
<Johnsel> it may be possible we would run into the same issue on your setup
<Johnsel> I initially argued for staying with GitHub Actions because I thought it was possible to do the M1 builds with it as well. That was/is true, but this non-Dockerized Windows VM is actually a much bigger headache. Whatever the CI solution, one of the most basic requirements is that debugging build issues should be easy and straightforward, ideally with the ability to step through the build
<Johnsel> process manually if needed. We're far from that right now with this hardcoded, magic, 200-dependency VM that can't be run locally and has Azure-isms forced into it.
<Johnsel> I think I'll try to wrap everything in a Docker container, which I initially thought the CI did anyway. That should be more portable, and I think GH Actions supports Docker too, though I'm not sure about Windows Docker containers.
<Johnsel> I don't have all bad news though, https://vast.ai/console/create/ has $0.05/hr GPU instances that we could spin up and down with their API to run GPU tasks. It has a "pre-paid credit" payment option too, so no potential for the CI to spin up a 25 USD/hr 8x Tesla V100 instance and bankrupt someone. So that may be a good alternative
<Johnsel> It would be some effort to set it up, though that holds for every option
tiltmesenpai0 has joined #scopehal
tiltmesenpai has quit [Ping timeout: 252 seconds]
tiltmesenpai0 is now known as tiltmesenpai