ChanServ changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things Sandstorm and Cap'n Proto. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev | Channel logs available at https://libera.irclog.whitequark.org/sandstorm
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #sandstorm
cwebber has quit [Ping timeout: 256 seconds]
cwebber has joined #sandstorm
<isd> ocdtrekkie: I was able to reproduce the 19 test failures we were seeing in CI locally as far back as February (further back than that doesn't build for me due to some changes in glibc that break old versions of (iirc) ekam).
<isd> So this is not a regression we introduced; it probably is in fact a function of ChromeDriver.
<isd> I ran a couple of them with SHOW_BROWSER=true and it looked like the grain iframe was crashing.
<isd> (d59fed1958d627b52cca963dc2c888f4af583063 is the oldest commit I can build, and it has these failures)
<ocdtrekkie> It feels like this is annoyingly fragile. The issue I just closed and the one kentonv reported today both seem to come down to the runner using the latest version of Chrome while the latest ChromeDriver release lags behind.
<ocdtrekkie> I am not sure I should've closed it now... is there maybe a way to keep us from testing against an overly new Chrome?
<isd> Hm, where do we get the browser from in CI in the first place? presumably we could pin the version.
<isd> But that's not going to stop it from breaking tests locally. The version of chromium I have installed on my laptop was last updated a few weeks ago; that's a long time to go on a developer's machine with spurious test failures.
<ocdtrekkie> Are there larger projects with a similarish test setup, that might have addressed this somehow?
<isd> I wonder if testing against firefox would be less flaky.
<ocdtrekkie> It's interesting to me that the latest Chrome version (93) corresponds with the beta ChromeDriver version. But the latest stable ChromeDriver is 92.
<ocdtrekkie> It appears that they want you to do some of your own magic to ensure Chrome and ChromeDriver are on the same version.
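(A hedged sketch of what that "magic" could look like, not something from this conversation: fetch the ChromeDriver build matching the installed Chromium's major version via the chromedriver.storage.googleapis.com release index that existed at the time.)

    # Extract the installed browser's major version, e.g. "92" from
    # "Chromium 92.0.4515.131 Arch Linux".
    MAJOR="$(chromium --version | grep -oE '[0-9]+' | head -n1)"
    # Ask the release index for the newest ChromeDriver matching that major version.
    DRIVER_VERSION="$(curl -fsSL "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_${MAJOR}")"
    # Download and unpack the matching driver.
    curl -fsSLO "https://chromedriver.storage.googleapis.com/${DRIVER_VERSION}/chromedriver_linux64.zip"
    unzip -o chromedriver_linux64.zip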
<isd> $ chromium --version
<isd> Chromium 92.0.4515.131 Arch Linux
<isd> My version of chromedriver matches. So a mismatch isn't why it's failing.
<isd> I'm going to try to run the tests with firefox, see how they do.
<isd> The source of the failures on my local box is actually that the test app is crashing on startup. I need to look into why, but I don't think chromedriver is the problem.
<isd> Where "test app" here means the meteor test app, not the raw C++ one. And it's intermittent. Looks like the version of meteor used there (as opposed to for sandstorm itself) hasn't been updated since April.
<isd> Also, just getting a "connection was reset" from the browser, with nothing in the grain log, so not sure it's crashing per se. But it's not working consistently.
<isd> Looks like once the failure happens, it persists until the grain is rebooted, so it's getting into some odd state then.
<isd> Ok, enough with the live updates from my brain. I'll report back later.
<isd> Oh, I bet I know what's going on: the test app is still using node 12 since it hasn't been updated, but it links in node-capnp, which we built against node 14. So this is probably inducing UB.
<isd> kentonv: ^
<isd> We'll have to update that too, and perhaps meteor-spk.
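(A quick diagnostic of my own, not taken from the log: node's ABI ("modules") version is what a native addon like node-capnp is compiled against. Node 12 reports 72 and node 14 reports 83, so loading an addon built for one under the other is exactly the kind of mismatch that produces undefined behavior.)

    # Print the ABI version of whichever node is on PATH; run it with the
    # node used by the test app and the node used to build node-capnp and
    # compare the two numbers.
    node -p process.versions.modules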
<isd> Maybe we should also add a check to CI to make sure the meteor versions of sandstorm and meteor-testapp agree, so we can't forget to update the latter going forward.
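(A rough sketch of that proposed CI check, assuming Sandstorm's Meteor app lives in shell/; the testapp path below is my guess, not taken from the repo.)

    # Fail CI if the two Meteor apps pin different Meteor releases.
    main_release="$(cat shell/.meteor/release)"
    testapp_release="$(cat tests/apps/meteor-testapp/.meteor/release)"   # hypothetical path
    if [ "$main_release" != "$testapp_release" ]; then
      echo "Meteor release mismatch: $main_release vs $testapp_release" >&2
      exit 1
    fi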
<isd> it's weird though that I was still getting failures on old versions...
<isd> Anyway, I'm going to call it a night. Later all.
digitalcircuit has quit [Quit: Signing off from Quassel - see ya!]
digitalcircuit has joined #sandstorm
xet7 has quit [Quit: Leaving]
<mnutt> really excited to see the new sandstorm release. expect a new davros release in the next couple of days
<ocdtrekkie> Awesome, I look forward to testing it
<kentonv> isd, FWIW I ran the test suite yesterday after my changes and only had the three failures that have been there forever
<isd> Hrm. well, we're seeing 19 on CI
<isd> Actually, I should double check that the latest commit has that problem.
<isd> Maybe you're just getting lucky with the UB?
<kentonv> maybe I had a pre-existing copy of the test app that `make` didn't rebuild?
<isd> Maybe. If you nuke it and rebuild, does it still work?
<kentonv> well, I thought I did `make clean`
* isd looks at Makefile
<isd> That should do it, though interestingly it wouldn't delete the C++ test app... should probably fix that bit.
<kentonv> the C++ test app is built by the main ekam build, though, isn't it?
<kentonv> so that should be covered by deleting `tmp`
<isd> The call to spk pack is in the Makefile.
<isd> and the output is in the top level dir.
<isd> Sent a pr that deletes test-app.spk too
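(Roughly what that clean-up amounts to; the actual PR may phrase it differently. Since spk pack writes test-app.spk to the top-level directory, the Makefile's clean recipe would gain something like the following.)

    # Remove the packed meteor test app along with the rest of the build output.
    rm -f test-app.spk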
TMM_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM_ has joined #sandstorm
strugee has quit [Quit: ZNC - http://znc.in]
strugee has joined #sandstorm