satyanash has quit [Quit: kthnxbai]
satyanash has joined #jruby
drbobbeaty has quit [Ping timeout: 250 seconds]
drbobbeaty has joined #jruby
drbobbeaty has quit [Ping timeout: 256 seconds]
drbobbeaty has joined #jruby
boc_tothefuture[ has quit [Quit: You have been kicked for being idle]
akshsingh[m] has quit [Quit: You have been kicked for being idle]
<mattpatt[m]> Sorry about that. Will try to dig in around the filesystem security / permissions stuff so I can help with that...
<headius> You should try a snapshot build of 9.3.3, I found a bug in our native stat that could cause this
<headius> mattpatt:
<mattpatt[m]> headius: sweet
<headius> Ugh the snapshots aren't back yet because we just did a big migration to GHA
<headius> I will run one myself
<mattpatt[m]> Since I have an M1 mac, if you need stuff testing, you're welcome to ask me...
<mattpatt[m]> i remember that GHA transition ;-)
<headius> Oh hah of course you do
<headius> I forgot we didn't have snapshots working quite yet
<mattpatt[m]> Want me to work on getting snapshots working again?
<mattpatt[m]> doing a bunch of GHA stuff for work today anyway
<headius> Yeah I haven't even looked at why they aren't working
<headius> I probably missed some step
<mattpatt[m]> I don't think i even tried during the migration
<mattpatt[m]> wasn't sure when they should run
<headius> we actually just push them on every successful CI
<mattpatt[m]> i remember now. there's a manual task you can trigger, so we can debug it.
<headius> aha that's right
<mattpatt[m]> once that's working okay it's just a case of adding a ref to the reusable task in the main workflow
<mattpatt[m]> i need more coffee
<headius> snapshots from my local build should be there
<mattpatt[m]> thanks
<headius> there's still problems with varargs, which prevents fcntl among other things from working right
<headius> christmas project
<mattpatt[m]> speaking of which, i need to run and silkscreen print some shit
<mattpatt[m]> hang on, if that manual process works then I will submit a PR with it enabled for every successful CI
<headius> yeah successful CI only on jruby-9.3 and master branches
<headius> seems to be having trouble provisioning
<headius> unauthorized
<mattpatt[m]> provisioned then gfa
<mattpatt[m]> failed
<mattpatt[m]> looks like credentials not set?
<headius> secrets missing probably
<headius> yeah
<mattpatt[m]> in case you don't know there's a secrets pane in the repo settings for GHA secrets and ENV vars
<headius> yeah took a minute but I'm there
<mattpatt[m]> took me a while to find it first time for me too
<mattpatt[m]> lemme poke at that workflow and see if i made a silly
<headius> trying to remember how these get propagated into the deploy target
<headius> some of these variables being set in the workflow appear nowhere else in the repo
<mattpatt[m]> i have a feeling it's a dumb workflow syntax thing
<headius> also unsure if my secrets are being used right... I set the environment "deploy" up to be active for master and jruby-9.3 branches
<headius> yeah I don't get why the intermediate env vars when you can just put the secrets in the setup-java part
<mattpatt[m]> where does the maven settings.xml get generated?
<mattpatt[m]> right. do not have enough coffee in system for this, will circle back to it later on
<mattpatt[m]> sorry, i don't have any experience doing maven deploys, so my mental model for what's going on is non-existent and i have no intuition for what might be the problem...
<mattpatt[m]> best i have so far is that maybe GHA is generating a maven settings.xml and so is something else?
<mattpatt[m]> i have a vague recollection of that being how GHA maven deploy works
<headius> I think I got it
<mattpatt[m]> Is this line wrong? server-id: sonatype-nexus-snapshots
<headius> I did not notice the "New Repository Secret" button so I added them to an environment, which must not have been activating right
<mattpatt[m]> pom.rb has id as plain 'sonatype'
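For context on the mismatch being discussed: setup-java generates a `~/.m2/settings.xml` whose `<server>` id must match the repository id the POM deploys to. A hedged sketch of the relevant step (secret names and the `sonatype` id are taken from the lines above; everything else is illustrative):

```yaml
# Sketch only: setup-java writes ~/.m2/settings.xml with a <server> entry
# whose id must match the distributionManagement repository id in the POM.
# If the POM says 'sonatype' but the workflow says 'sonatype-nexus-snapshots',
# the deploy credentials are never picked up.
- uses: actions/setup-java@v2
  with:
    distribution: temurin
    java-version: '8'
    server-id: sonatype              # must match the id in pom.rb
    server-username: MAVEN_USERNAME  # env var names resolved at deploy time
    server-password: MAVEN_PASSWORD
- run: mvn -B deploy
  env:
    MAVEN_USERNAME: ${{ secrets.SONATYPE_USERNAME }}  # hypothetical secret names
    MAVEN_PASSWORD: ${{ secrets.SONATYPE_PASSWORD }}
```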
<headius> Why is the "New Repository Secret" button at the top of the page but the list of repository secrets is halfway down
<headius> that's just dumb
<headius> aha it needs to run with 8
<headius> but otherwise it worked
<mattpatt[m]> sweet
<headius> so what's the last switch I need to throw
<headius> not sure how to trigger this only after other actions succeed... seems like docs only show how to have it in the same workflow as whatever you want to trigger from
<headius> a recent post points this out as the best way currently:
<headius> but it depends on a workflow step that waits for another workflow to finish, and if we have a step waiting for the 20-30min we need it will probably get killed
<headius> it might be simplest to move this to the main workflow or create a deploy workflow that runs a smaller set of sanity checks
<headius> let me know if you have another idea when you've got your coffee ☕️
<mattpatt[m]> oh, it's simple
<mattpatt[m]> it's a reusable workflow, so we just reuse it in the CI with a dependency on whatever passing we need
<mattpatt[m]> will send you a PR later
<mattpatt[m]> well, it's straightforward anyway
<mattpatt[m]> maybe not totally simple. the dependency stuff is a bit verbose
<headius> yeah cool
<headius> I am playing with some speedups like caching maven stuff better and will PR this branch when I'm happy with it
<headius> I think we can also cache the compiled base sources which should cut a minute or so off every job
<headius> wow ok uploading the build output is way slower than just rebuilding for each job
<headius> uploading core/target after a clean package took 4.5 minutes... I assume downloading would cost the same so that's way slower than rebuilding
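The Maven caching headius mentions would typically use actions/cache keyed on the POMs; a minimal sketch (paths and key names are illustrative, not the actual branch):

```yaml
# Sketch: cache the local Maven repository between runs so dependencies
# are not re-downloaded by every job. Key and path choices are illustrative.
- uses: actions/cache@v2
  with:
    path: ~/.m2/repository
    key: maven-${{ runner.os }}-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      maven-${{ runner.os }}-
```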
<enebo[m]> This reported issue is interesting to me in that it is doing arity checking, which made me wonder how we behave if that raise line is missing. It really feels like if we had reasonable error messages, someone would not have put that there.
<enebo[m]> But the second thing I wanted was some agreement that what I wrote in that comment is reasonable. I believe raise requires this.
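For illustration only (the actual issue code isn't shown in this log), the pattern being discussed is hand-rolled arity checking: user code re-validates its own argument count and raises, duplicating the ArgumentError Ruby would produce if the method simply declared required parameters. Method and message below are hypothetical.

```ruby
# Hypothetical example of manual arity checking of the sort the issue
# describes: the raise duplicates the error Ruby itself would produce
# for a method declared as def pair(a, b).
def pair(*args)
  raise ArgumentError, "wrong number of arguments (given #{args.length}, expected 2)" if args.length != 2
  [args[0], args[1]]
end
```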
<headius> mattpatt: I get it now... you use `needs` and call the snapshot-publish workflow at the end of the main workflow
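A minimal sketch of that wiring (job and file names are hypothetical): the main CI workflow gates a call to the reusable publish workflow on the test jobs via `needs`, restricted to the two release branches mentioned above.

```yaml
# Sketch: main CI workflow calling a reusable snapshot-publish workflow
# only after the test jobs pass. The called workflow file must declare
# `on: workflow_call` for this to work. Names are illustrative.
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - run: echo "real build/test steps go here"
  publish-snapshots:
    needs: tests    # runs only if 'tests' succeeds
    if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/jruby-9.3'
    uses: ./.github/workflows/snapshot-publish.yml
```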
<headius> I pushed a larger change just now that massively reduces duplication in the CI workflow but unsure if it is the best approach:
<headius> leaning on matrices for as many permutations as possible
<headius> my generic target names are not great though
<headius> this added a handful of additional jobs but mostly just moved hand-duplicated jobs into matrices
<enebo[m]> ok I was thinking it looked like more jobs being run
<enebo[m]> heh it already found some strictness error with Java 16
<headius> enebo: the added jobs were mostly adding 16 where we already did 8 and 11
<headius> yeah it sure did
<headius> that seems to be the only thing new breaking though
<enebo[m]> yeah I guess that's good
<enebo[m]> I removed a test from one of our suites yesterday that is covered to death by mri/mspec
<headius> excellent
<headius> we have a lot of those
<enebo[m]> I realize we cannot remove all of our tests but I am thinking we should occasionally just pare down our tests
<headius> whenever I notice a dupe I will wipe it out
<headius> or if MRI fails a test we have I just delete it
<enebo[m]> I almost updated them but it was comparing the new Method inspect strings and I was "nope" :)
<enebo[m]> Today is sprintf friday
<enebo[m]> It is a significant feature for 9.4 but I am boiling a small ocean
<headius> have fun
<enebo[m]> here was some fun from yesterday:
<enebo[m]> d[5][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1]... (full message at
<enebo[m]> I fixed some recursive flatten issue
<enebo[m]> I did other fixes too but this one was weird. It did not occur to me that flattening a recursive array will partially flatten it but leave the recursion behind
<enebo[m]> It is when I work on bug fixes for stuff like this I am entertained at how strange Ruby can be (and tbh this behavior does make sense)
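The behavior enebo describes can be reproduced in a few lines (a minimal sketch, not the actual reported code): a depth-limited flatten on a self-referential array flattens one level but re-embeds the recursive reference, while an unlimited flatten raises.

```ruby
# Minimal illustration: a recursive array, flattened with an explicit
# depth, is partially flattened but still contains the recursion;
# a full flatten raises ArgumentError.
a = [1, [2]]
a << a                   # a now contains itself: [1, [2], a]

flat = a.flatten(1)      # flatten one level only
p flat.length            # => 5, i.e. [1, 2, 1, [2], a]
p flat.last.equal?(a)    # => true: the recursion survives

begin
  a.flatten              # unlimited depth
rescue ArgumentError => e
  puts e.message         # "tried to flatten recursive array"
end
```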
subbu has joined #jruby
<headius> yeah I suppose it does
<headius> ok I cleaned up the failures and simplified the aggregate job names so you can see the matrix elements easier
<headius> the names are not as descriptive but I don't think that can be avoided
<headius> hmm maybe there is a better way to specify a matrix-generated display name
<headius> ok I think I've got it
<headius> that looks decent and the workflow file is much smaller now
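The matrix pattern being described, including a matrix-driven display name, looks roughly like this (target names and versions are invented for illustration, not the actual JRuby workflow):

```yaml
# Sketch: one job definition expanded over Java versions and build
# targets, with the matrix values surfaced in the job's display name.
jobs:
  test:
    name: ${{ matrix.target }} (java-${{ matrix.java }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        java: ['8', '11', '16']
        target: ['test:jruby', 'test:mri', 'spec:ruby:fast']  # illustrative
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: temurin
          java-version: ${{ matrix.java }}
      - run: bin/jruby -S rake ${{ matrix.target }}
```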
<headius> we can endeavor to merge those one-off jobs into the aggregate ones
<headius> hmm one more bit of smithing... I think we want these to sort by target first, java version second
<headius> hopefully flipping the matrix will do that
<headius> yeah that looks nicer
<headius> after I merge this we can reorder jobs or whatever seems to make it easier to read through
<headius> some of these may not need to run on all javas so we could pare them down
<headius> prior to this work it was 51 jobs, now it is 58
<headius> oh one of those new ones is the Windows job
<headius> ffi gained java 16
<headius> oh I see... enebo there are more jobs but some of these were just run sequentially in the same job before
<headius> moving them into matrices added jobs but allows them to run in parallel
<headius> we can also consider batching some of those back together, like those fast little maven targets
<headius> so I know windows job was moved over and FFI gained a 16 job but there's not much else that wasn't already running somewhere
<enebo[m]> ship it!
<enebo[m]> GHA really needs to show times in the left hand side summary
<headius> Yeah that would be nice
<headius> the mvn targets are usually the fastest runs, they should probably move down so the longer rake tasks get queued earlier
<headius> I'm going to try to reenable the snapshot build
<byteit101[m]> enebo: oh those 3 issues are fun. I don't think I tested re-opened classes. Will look at it more after work. Though I am glad I changed the error to `Java proxy not initialized. Did you call super() yet?` from a generic NPE
<enebo[m]> byteit101: yeah it is helpful as a message
<byteit101[m]> enebo: unsure how much time I'll have to dig into that before the holidays, I'm trying to finish some stuff up before then, but will definitely try to look at it before the new year
subbu has quit [Ping timeout: 240 seconds]