<byteit101[m]> headius: Been slightly busy. Will do gem release tonight
<headius> Great!
<headius> I can update the gems on master any time
<headius> byteit101: just let me know when they are ready, or push an update PR
<byteit101[m]> subspawn-{,posix}-0.1.1 pushed
<byteit101[m]> 0.1.0 -> 0.1.1
<byteit101[m]> headius: ^
<headius> Ok
<headius> ok
<headius> OK
<headius> oK
<headius> byteit101: updated on master
<headius> enebo: doesn't seem like there's a way to have the WIP jobs run but show a passing result when they are red... maybe they should just move out of CI now that we're releasing 9.4, and we'll have them for us and contribs to run locally
<headius> I want us to be able to show passing results
<headius> byteit101: looks good locally btw, I can load pty.rb just fine
<byteit101[m]> oh good!
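
A minimal smoke test of the pty.rb support under discussion, as a sketch (on JRuby 9.4 this path is backed by the subspawn gems):

    require 'pty'

    # Spawn a child on a pseudo-terminal and read its output.
    PTY.spawn('echo hello') do |reader, writer, pid|
      puts reader.gets        # => "hello\r\n" (a pty translates newlines)
      Process.wait(pid)
    end
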
<headius> I don't know what's up with this flaky ssl test
<headius> but I think we're basically green and ready to go
<headius> 1) Failure:
<headius> TestOpenURISSL#test_validation_success [/home/runner/work/jruby/jruby/test/mri/open-uri/test_ssl.rb:52]:
<headius> end of file reached
<headius> exceptions on 1 threads:
<headius> enebo: alternative to removing wip stuff from main CI would be having the snapshot deploy happen regardless of the other jobs... currently it is set up to only deploy if all specified jobs pass
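
A minimal sketch of the two options being weighed here, with hypothetical GitHub Actions job names: continue-on-error keeps a red WIP job from failing the workflow, and if: always() lets the snapshot deploy run regardless of the jobs it depends on.

    jobs:
      spec-wip:                    # hypothetical WIP job
        runs-on: ubuntu-latest
        continue-on-error: true    # a red result here won't fail the workflow
        steps:
          - uses: actions/checkout@v3
          - run: bin/jruby -S rake spec:ruby:fast:wip

      deploy-snapshot:             # hypothetical deploy job
        needs: [spec-wip]
        if: always()               # run even when the needed jobs failed
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - run: mvn -B deploy -DskipTests   # placeholder deploy command
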
<headius> Good morning!
<enebo[m]> We can decide by end of today how to deal with wip, but I have been using it for triage: moving things to fails once I see what each one is and whether we can reasonably fix it soon or not (or whether it is really unimportant, like a mismatched error string)
<headius> It is nice to not have to run it locally. Maybe if I move it to a separate workflow we can still get our nightly build when we don't regress
<headius> With byteit101's updated subspawn, pty works on Fedora
<enebo[m]> cool
<headius> Stdlib is not current but not far behind
<enebo[m]> I do not think wip should generally be kept past the release
<enebo[m]> We have lots of fails which tend to already be 2.7+ failures
<enebo[m]> I think we probably need to also consider whether we do a sweep over failures once we have this out, just to whittle them down a bit more, but with that said we have lots of golden-path stuff to do (like PostgreSQL support)
<headius> Yeah I want to take a stab at scheduler at some point but nobody's using it really so it would just be bragging rights
<headius> I could get async and loom numbers to show off I guess
<enebo[m]> I think wip was useful for knowing we did not regress, but folding all those tags into the larger corpus is ok once we triage them as not needed for 9.4.0.0
<headius> I am curious which of the remaining unimplemented bits and bobs anyone will run into
<enebo[m]> I have been trying to make sure the missing methods that are feasible to implement have gotten done
<enebo[m]> I think I have a messy partial impl for NameError#local_variables but obviously missing that for release will not hurt us too much
<headius> Yeah missing methods and constants are a good priority
<enebo[m]> but having those done I think means more than things like syntax errors for weird cases no one will normally write
<headius> Getting the release out will drive folks to try things, and find things, and possibly help fix things
<enebo[m]> but having spent so much time on ripper and the parser lately I do have enough of it in memory to fix some of those
<enebo[m]> So I guess that is how I feel a bit too
<enebo[m]> yeah I am positive we will get quick feedback on things we are missing or that tripped someone up
<headius> It's a .0 and we implemented like high 90% of changes across three Ruby versions
<enebo[m]> a Wednesday release is not really my desire but I think you probably will have too much going on Monday to do it then
<headius> "we" with lots of contrib help
<headius> Yeah it's going to be a little tight. I'll have a couple hours in the morning maybe
<enebo[m]> yeah so tomorrow is the day
<headius> And I kind of need to make the talk
<enebo[m]> and we will just know we won't be looking at stuff until next week
<headius> It's only 30 minutes so it might just be a 101 with a couple graphs
<headius> I will have lots of time in the air to and from Bangkok the following week
<enebo[m]> If you could also help triage wip issues, knock down the simple ones, and move to fails the things we know we won't do. Fiber is a good example since I did not work on that at all
<headius> Fiber doesn't have a ton of things left but the seams are showing a bit
<enebo[m]> we have quite a lot of F/E (failures/errors) but I don't know how much of that is important or fixable
<enebo[m]> They mostly are errors about not being born yet and things like that
<headius> I admit I have not taken a close look at the MRI failures, I mostly just implemented the specs and we know those are spotty
<headius> I guess we'll see if people are using fibers
<enebo[m]> Looking I see 4-5 in spec:wip and like 5-6 in one of the MRI suites but there may be a few in another
<headius> That will be a good .1 activity perhaps, since we also need to reevaluate how fiber is implemented given Loom
<headius> Ok not too bad
<enebo[m]> but just evaluating them for sake of removing them from wip would be helpful
<headius> Ok I can take a look
<enebo[m]> There is a huge mess of fails for openssl which I think I will wholesale move to fails
<headius> Back in the closet
<enebo[m]> heh
<headius> Fiber.current returns the current Fiber when called from a Fiber that transferred to another
<headius> I'll need to look at these soon after the release but they are pretty specific situations
<headius> Some of this may be the vaguely described changes that removed some restrictions on transfer
<enebo[m]> If I think it will be 15 minutes or so I will try and fix simple things but I do prioritize missing methods/constants more than anything
<headius> If I remember right that linked to a mega issue and I have not figured out exactly what those changes were
<headius> Less restrictions on transfer of some sort
<enebo[m]> clock_gettime had a new constant but I noticed we already don't support a lot of them so I tagged that out
<headius> Yeah most consumers will use defined? for that set of constants
<headius> At least that's how I've seen it done a few times
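
The feature-detection pattern headius describes, as a small sketch; CLOCK_MONOTONIC_RAW is just one example of a platform-dependent constant:

    # Probe for an optional clock constant instead of assuming it exists.
    clock = if defined?(Process::CLOCK_MONOTONIC_RAW)
              Process::CLOCK_MONOTONIC_RAW   # Linux-specific variant
            else
              Process::CLOCK_MONOTONIC       # portable fallback
            end
    puts Process.clock_gettime(clock)
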
<enebo[m]> I briefly looked at fiber a few months ago to implement what Rails needs and got most of those to pass, but then I saw lots of weirdness I didn't get
<headius> We all have to implement one, which I think might be the one we have
<headius> Fiber is weird. I still have a little trouble wrapping my head around transfer
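
A toy sketch of the transfer semantics in question (Ruby 3.x rules): transfer switches control directly between fibers, and Fiber.current tracks whichever fiber is actually running.

    b = nil
    a = Fiber.new do
      puts "in a (current: #{Fiber.current})"
      b.transfer               # jump straight to b; a just suspends here
      puts "back in a"
    end                        # when a finishes, control returns to the root fiber
    b = Fiber.new do
      puts "in b (current: #{Fiber.current})"
      a.transfer               # jump back into a where it left off
    end
    a.transfer
    puts "back in main"
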
<enebo[m]> I saw a fixme that we are missing using jnr-constants for the constants we do support
<headius> Is that the socket stuff?
<enebo[m]> clocktime stuff
<enebo[m]> we just stub out two constants and say we should be using jnr-posix
<headius> All right, I have never really gone ahead with that because I'm not sure about the FFI downcall overhead versus these very specific time functions
<enebo[m]> I think we have an all-Java AND a jnr-posix side to it, so I am guessing we looked up some numbers for whatever system we were on and probably got lucky that they are the same for all platforms
<headius> If we leave them undefined I don't have to answer that question
<headius> Make a more accurate time call but add a couple microseconds due to FFI
<enebo[m]> yeah well the answer to the fixme is more constants which should be in jnr-constants
<headius> Ok
<enebo[m]> I hate that package because of updating it
<headius> Yes
<enebo[m]> cross-builds will increasingly hurt us
<headius> That and jffi, but it's improving
<enebo[m]> The new parser will need cross-building too
<headius> jffi builds a half dozen Linux platforms in CI now, we can do the same for constants
<enebo[m]> Although that maybe won't be a big problem for us since all impls using it will need cross building
<headius> Though constants probably won't vary much across platforms
<enebo[m]> yeah once can hope
<enebo[m]> twice can hope too
<headius> Right, most of the time if we are not up to date on constants for a platform we find out because someone on that platform needs those constants
<enebo[m]> I think we should focus on examining remaining wip and move to fails or fix and then look for problems with making a release today
<enebo[m]> tomorrow we release and live with whatever that is hopefully until after your talk
<headius> Ok
<enebo[m]> If we do manage to knock the wips off today then going back to examining Rails unit tests, and probably hitting some other large gems' unit tests, would be good
<enebo[m]> I guess if we make it to that point I will flip over to arjdbc and release 70 so those two adapters are no longer prereleases
<enebo[m]> sqlite3 gem also needs a release
<enebo[m]> but maybe those are really second to finding problems with 9.4 since I can work on that early next week
<enebo[m]> Just wondering out loud I guess
<enebo[m]> I think not having postgres is our next huge blocker for users
<enebo[m]> supporting sqlite3 is not getting anyone to upgrade their rails app
<enebo[m]> and by upgrade I mean trying out their app
<enebo[m]> I doubt anyone will be moving to 9.4.0.0 immediately
<headius> Yeah probably not but we will get some library authors and curious folks trying it out
<headius> We're green on selenium at least 😀
<enebo[m]> oh really?
<enebo[m]> I wonder if they can run with -X+C
<headius> Hmm yeah good to try
<enebo[m]> Did you run those tests?
<headius> No
<enebo[m]> Or did someone reply that they worked
<headius> Someone online jumped on a snapshot and reported back
<enebo[m]> yeah that open issue is a kwargs issue in JIT using selenium
<headius> Ha
<headius> So it works most of the time
<enebo[m]> Not sure if it will actually be selenium-proper code or how they use it but it would be great to see it run with compiled code
<enebo[m]> I doubt it works most of the time for the reporter
<enebo[m]> if they run in interp it works all the time
<headius> I wonder how hard the tests are to run
<enebo[m]> It is a multi-gigabyte monorepo of the adapters for all the languages
<enebo[m]> rake will trigger go and explode
<headius> Oh right you messed around with this
<enebo[m]> or I think Go. Go may not even be the language
<headius> I did not even notice it was selenium
<enebo[m]> there is some "fancy" language embedded in stuff too
<enebo[m]> I tried some selenium tutorials over the weekend (well, two scripts)
<enebo[m]> and I added cursor movement and action batches since that was in the stacktrace and it ran
<enebo[m]> err both things ran
<headius> And I assume you ran it with -X+C too
<enebo[m]> I am betting fixed arity JIT path is problem since it does kwargs handling differently
<enebo[m]> well yeah
<enebo[m]> Since I know it only happens once the code has been JITTed
<enebo[m]> I will say I did try without -X+C as well
<enebo[m]> I should have perhaps just used JIT threshold 0
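
For anyone reproducing this, the two ways of forcing compiled code that come up here (script name hypothetical):

    jruby -X+C repro.rb               # force ahead-of-time compilation of everything
    jruby -Xjit.threshold=0 repro.rb  # or JIT every method on its first call
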
<headius> I'll take a closer look at that stack trace and see if I can intuit something that might be wrong
<enebo[m]> I can tell you the stack is not where the problem occurs
<enebo[m]> It uses an ivar which contains undefined
<enebo[m]> that ivar is not made during the backtrace but before the trace
<headius> Well I'm hoping that I can see where that variable is being set and that should narrow down the set of methods it could be
<enebo[m]> yeah
<enebo[m]> it is doing a hash.merge and the argument to it contains an undefined
<enebo[m]> or is an undefined
<headius> enebo: so we don't collide, what are you working on?
<headius> I see some specs I can fix quickly
<headius> just pushed fix for Symbol#to_proc arity
<enebo[m]> ok
<enebo[m]> I did look at that briefly yesterday and I was confused because it felt like OPTIONAL was correct since it is 1 req + rest
<enebo[m]> but I did not want to change Signature's arityValue, though it made me think it is wrong
<enebo[m]> Your fix does isolate this to just Symbol#to_proc
<byteit101[m]> huh, pom.rb lines are off by one (+1) in stack traces
<headius> enebo: yeah just that specific block type
<enebo[m]> headius: but I do really wonder if OPTIONAL should return -2
<enebo[m]> hmm ok, looking at Yielder, which is OPTIONAL, MRI does return -1 for that
<enebo[m]> My confusion comes from looking at parameters on Symbol.to_proc
<enebo[m]> which is req, rest
<headius> yeah
<enebo[m]> So OPTIONAL I guess is -1 and just [rest]
<enebo[m]> and req + rest is -1 for the req and -1 for the rest
<enebo[m]> which is in fact ONE_REQUIRED
<enebo[m]> which is a strange name
<enebo[m]> I feel like ONE_ARGUMENT should be called ONE_REQUIRED and ONE_REQUIRED should be ONE_PLUS
<enebo[m]> but I don't remember how these names came into being
<headius> yeah MRI arity is -(1 + required count) when there's rest
<headius> yeah 9.5
<enebo[m]> So in the end my confusion is ONE_REQUIRED is a bad name
<enebo[m]> HAHAH or never since people probably use this constant in existing code
<enebo[m]> I went through all that work to deprecate arity for this and did not consider this (although we may be following MRI naming here too which I don't remember either :) )
<headius> I think I just came up with those names years ago
<enebo[m]> it is possible it was you or me or MRI. I don't care enough to see the genesis
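
Worked examples of the arity rule headius cites, for concreteness:

    proc { |*a| }.arity        # => -1  (0 required + rest: the OPTIONAL case)
    proc { |x, *a| }.arity     # => -2  (1 required + rest: the ONE_REQUIRED case)
    :upcase.to_proc.parameters # => [[:req], [:rest]], per the discussion above
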
<headius> if you look at specs check this PR first so we don't conflict: https://github.com/jruby/jruby/pull/7467
<headius> just purged a bunch of tags that no longer matched any specs
<headius> done as PR because sometimes it seems to miss some things
<enebo[m]> yeah ship it
<headius> yeah something's not right in there, I'll wait on that until after release
<enebo[m]> headius: what do you mean that tags PR?
<headius> yeah
<headius> something that was removed must have still matched a spec that blows up
<enebo[m]> Example took longer than the configured timeout of 120.0s:
<enebo[m]> Enumerator::Lazy#with_index enumerates with a given block
<enebo[m]> Not positive this is the F but it is the only test listed in those runs
<headius> yeah lots of fiber errors too
<enebo[m]> I suppose something is not letting it complete since there is no result
<headius> yeah it just dies
<enebo[m]> It would be really nice if these were not on CI before release. Maybe just fix the enumerator test and remove the fiber ones, since you have a lot of things removed from the wip tags
<headius> fix the enumerator test?
<enebo[m]> Example took longer than the configured timeout of 120.0s:
<enebo[m]> Enumerator::Lazy#with_index enumerates with a given block
<enebo[m]> I suspect this timeout is a F
<enebo[m]> perhaps the only F displaying
<enebo[m]> but something is killing the test run too
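
What that spec exercises, roughly: Lazy#with_index must stay lazy, so a timeout suggests the infinite source is being forced:

    (1..Float::INFINITY).lazy.with_index.map { |n, i| n * i }.first(3)
    # => [0, 2, 6] when lazy; never returns if with_index forces the sequence
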
<headius> oh sure
<enebo[m]> perhaps in some environments those are still valid fails
<headius> still shouldn't be matching a spec if purged
<enebo[m]> Some of these are interesting since they have (exits)
<headius> FWIW it did pass for me so maybe it is marked as "travis" or something
<enebo[m]> This knocked out some Windows-specific fails too
<headius> yeah maybe I'm not understanding what "remove all tags not matching any specs" means
<enebo[m]> what is the command-line for this anyways?
<headius> mspec tag --purge <spec dir>
<enebo[m]> It would be cool if it could be --retest /(fails|wip)/
<headius> you can do that
<headius> just run -G <tag> <spec> I think
<enebo[m]> oh ok. Well I suspect that will work out better since the command used took out any env-specific failure not in your env
<headius> tag delete is mspec tag --del wip -G wip <specs> if you want to just run the wip specs and remove passing
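
The two mspec tag workflows mentioned above, side by side (spec paths are placeholders):

    mspec tag --del wip -G wip spec/ruby/core   # run only wip-tagged specs, untagging ones that now pass
    mspec tag --purge spec/ruby/core            # drop tag entries that no longer match any spec description
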
<enebo[m]> I do see a ton of (hangs) which pass on your machine and I am wondering if those are just macOS hangs or just really old
<enebo[m]> My brain is scrambled today too. In my mind this was MRI and not mspec
<headius> yeah I don't get it, these shouldn't have been removed
<enebo[m]> they pass locally for you though which may be totally legit
<enebo[m]> the hangs are one of those things we probably almost never revisit
<headius> but the tag descriptions should not match anything in order for purge to remove them
<enebo[m]> ah
<headius> but it did, so I dunno what's up with it
<enebo[m]> 'critical(hangs on OS X and Windows):' lol
<enebo[m]> yeah how could this get hit
<headius> Process.kill accepts a Symbol as a signal name
<headius> matches exactly and hangs for me locally
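
The behavior that spec asserts, in miniature (POSIX-only; USR1 does not exist on Windows):

    trap(:USR1) { puts "got USR1" }   # signal names may be Symbols here...
    Process.kill(:USR1, Process.pid)  # ...and here, which is what the spec checks
    sleep 0.1                         # give the handler a moment to fire
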
<headius> I did just run spec:ruby:fast
<enebo[m]> so it should not have even tried to remove that, yet it did, AND it doesn't work locally
<headius> figured if anything broke I'd see it in there
<headius> yeah
<enebo[m]> wow. So perhaps a big nope on that PR
<enebo[m]> It may just be easier to concentrate on wips
<enebo[m]> I will try the -G on wip
<headius> yeah
<headius> I'm not spending any time on the purge
<headius> aargh
<headius> spent the last 15 minutes trying to figure out why I had local failures in an exception test, and it's because the rake target sets backtraces to look like MRI's
<headius> matching output from a subprocess error
<enebo[m]> ah yeah that's a fun one
<headius> working on some MRI excludes now, remaining specs are too involved or too specific to mess with right now
<enebo[m]> nothing in core got removed from wip
<enebo[m]> but it did remove 3 empty lines :)
<enebo[m]> I think I will just change wip to fails in core
<headius> get er done
<headius> ok one more small item done, lunch break
<headius> pushed another small one, looking into a transient failure on CI
<headius> lots of false starts today, stuff that got too involved to go in 9.4
<enebo[m]> yeah I have had a few of those
<enebo[m]> spec:ruby:fast:wip is gone
<headius> woot
<enebo[m]> test:mri:extra_wip is gone
<enebo[m]> openssl has been merged back to normal fails but it has not quite finished
<enebo[m]> I realized I did not fully remove wips in MRI so for example all of ripper was a delete
<enebo[m]> This whole system makes me want to really kill like 80% of our local test suite
<enebo[m]> with that said I don't really want to take the time but it would be cool to be testing less on CI
<enebo[m]> ZLib test_multithread_inflate :)
<enebo[m]> hmm
<headius> Exception in thread "main" java.lang.UnsupportedClassVersionError: JVMCFRE199E bad major version 53.0 of class=com/jcraft/jzlib/GZIPInputStream, the maximum supported major version is 52.0; offset=6
<headius> looking into this openj9 failure, but that's what I get locally
<headius> I don't understand how that would not be failing on other Java 8s
<enebo[m]> so that is the j9 thing but there are some interesting tests failing in stdlib
<headius> the failure on CI is loading some io/wait
<enebo[m]> As much as it would be cool if j9 worked I am willing to throw it under the bus for a point release
<enebo[m]> We can make an issue and target it
<headius> yeah I just thought I'd take a quick run at it
<enebo[m]> sure
<headius> but this is weird
<enebo[m]> if it is simple then cool but that looks like they ship it or something
<enebo[m]> or j9 is confused and is loading the wrong class format
<headius> yeah but it works with hotspot 8
<headius> yeah very strange
<enebo[m]> I would guess 52 is 8
<enebo[m]> I get why it has its own version number but it would be nice if it could have a loose name like "Java 8 classfile"
<enebo[m]> like using the first Java version it was made for
<headius> no idea... last release of jzlib was before 9 was even out
<enebo[m]> So hotspot 8 is loading this fine but j9 8 is saying it cannot load java 9
<enebo[m]> Yeah that is a weird message :)
<headius> very strange
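
One way to check which class-file version a jar was actually built for (jar path hypothetical); major 52 is Java 8, 53 is Java 9:

    javap -verbose -cp jzlib-1.1.3.jar com.jcraft.jzlib.GZIPInputStream | grep major
    #   major version: 53   <- matches the error above: built for Java 9, too new for a Java 8 VM
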
<headius> in any case it may require a rebuild and re-release of jzlib which we basically just need to take over now
<headius> there's various features we need to add but the original maintainer last committed in 2013
<headius> ok... I will disable this job some other way
<headius> hey the automatic snapshot deploy is working
<headius> finally managed to get a run with everything green other than known fails
<enebo[m]> cool
<headius> I'm disabling the openj9 and M1 spec:ffi jobs by ignoring errors... they'll still be there and show up as green but there's two issues to fix them
<enebo[m]> ok
<headius> enebo: can I kill any of these jobs? Trying to confirm my job disables
<enebo[m]> headius: yeah but if there is a missing tag you may need to fix up
<enebo[m]> I may not be on much more tonight
<headius> I'll leave your most recent one running
<headius> I'm only running two jobs on the PR
<headius> it's just backlogged
<enebo[m]> Ultimately not having the wips would be a good goal but it sounds like even if we don't get them all moved over snapshots will build and we can still release
<headius> yeah seems to be working now that I restructured the jobs
<enebo[m]> I would like to make bits when I wake up and make sure we are good enough
<headius> sounds good to me
<enebo[m]> ok. I will see you in the morning
<headius> I will poke at a few things this evening but not much longer
<headius> ttfn
<enebo[m]> cool
<enebo[m]> cya