<headius>
I did a pass over 2.7 and 3.0 features and updated checklists... implemented several items along the way
<headius>
ttfn
<enebo[m]>
headius: I believe pass-through ... is not 100% yet. I removed the kwargs rest and block fake vars because we are not quite right with argument passing of kwargs for 3.0
<enebo[m]>
That might not be the issue here but I see it is being used
<headius>
I tried with **args and got a different AIOOBE
<headius>
Just booting up with that patch if you want to try it
<headius>
I can look into it, just was looking for easy wins so I didn't go further
<enebo[m]>
*r, **k might be needed with zsuper
<headius>
No zsuper, just redispatches to initialize_copy
<enebo[m]>
err then just manually pass them and probably &b too
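A minimal sketch of the manual forwarding being suggested; the class and ivars are made up for illustration and this is not the JRuby patch under discussion:

    class Settings
      def initialize(**opts)
        @opts = opts
      end

      # dup calls initialize_dup(original); here the positional arg, kwargs
      # rest, and block are spelled out instead of relying on ... forwarding.
      def initialize_dup(original, **kwrest, &b)
        initialize_copy(original, **kwrest, &b)
      end

      def initialize_copy(original, **kwrest)
        @opts = original.instance_variable_get(:@opts).merge(kwrest)
        self
      end
    end

    Settings.new(a: 1).dup   # dup -> initialize_dup -> initialize_copy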
<headius>
this is with explicit passing of the original arg and kwrest
<headius>
it is a different error
<headius>
disabling RG makes it go away
<enebo[m]>
HAHAHAH
<enebo[m]>
well line_num somehow is -7
<enebo[m]>
I wonder how that is happening
<headius>
fascinating isn't it
<enebo[m]>
if you -Xir.print you should see it go off the rails
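For anyone following along, that switch is passed straight on the jruby command line (spelling as in the message above; the exact dump format varies by version):

    jruby -Xir.print -e 'def m(a, **k); end'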
<enebo[m]>
If i had to guess the parser is setting a bogus line position to a node
<enebo[m]>
I can try your PR out
<headius>
I can try to look later
<enebo[m]>
I will do a quick looksee
<enebo[m]>
That is pretty weird
<headius>
I won't dig into the ... since you say that is a WIP
<enebo[m]>
Obviously it is specifically ** causing it too
<enebo[m]>
since that is all you changed
<enebo[m]>
but I believe if this is the case you should be able to easily make a repro
<enebo[m]>
I am going try that quick
<headius>
it doesn't seem to fail in simple cases
<enebo[m]>
line 7 is the 0-indexed location of the first initialize_dup
<enebo[m]>
I am getting a random AIOOBE in popScope
<headius>
That sounds like the error I get for ...
<enebo[m]>
I am on HEAD with fresh rebuild
<enebo[m]>
but that is pretty odd...I mean I would hope **a happens at least once on bootstrapping
<enebo[m]>
scopeStack[scopeIndex--] = null;
<enebo[m]>
scopeIndex == -7
<enebo[m]>
How can this happen? We only modify this with an increment and a decrement
<enebo[m]>
-8
<enebo[m]>
ok I see the AIOOBE changes with JIT on and bounces around at different small negative indices. If I lock into startup interp and do not allow full interp it is always -8
<enebo[m]>
Hmm apparently bindex is written around grabbing TC private fields
<enebo[m]>
the initialize_copy(a, **kwargs) is where it happens; just accepting the kwarg is ok
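The call shape under suspicion, sketched standalone; per headius above, a trivial case like this does not reproduce the failure, so it only shows the shape, not the bug:

    class Example
      def initialize_copy(a, **kwargs)
        @copied_from = a
        @copy_opts   = kwargs
        self
      end
    end

    e = Example.new
    e.dup                                          # accepting the kwarg param alone is fine
    e.send(:initialize_copy, Example.new, x: 1)    # explicitly passing kwargs is the failing shape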
<enebo[m]>
a little theory forming....
<enebo[m]>
I see a collection of invalid gemspec errors before it dies with the AIOOBE. I am guessing those gemspec errors are raising but still allowing us to decrement the scopeIndex, likely because we are calling initialize_copy with kwargs and it is exploding
<enebo[m]>
we probably have a catch all in load
<enebo[m]>
we unconditionally pop but for interpret root we seem to also unconditionally push so it does not really explain it
<enebo[m]>
More or less we have a recursive logic error and it runs out of stack, which explains the oddity in TC
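A hedged illustration of that imbalance, not JRuby's actual code: the body fails before its matching push ever happens, a catch-all swallows the error, but the unconditional pop still runs, so the index walks into small negative values just like the scopeIndex above (in Java the negative index throws the AIOOBE; a Ruby Array just wraps around):

    stack = Array.new(16)
    index = 0
    pop = -> { index -= 1; stack[index] = nil }   # mirrors scopeStack[scopeIndex--] = null

    8.times do
      begin
        raise "invalid gemspec"    # body blows up before any push happens
      rescue RuntimeError
        # a catch-all (e.g. in load) swallows the error...
      ensure
        pop.call                   # ...but the pop still fires unconditionally
      end
    end
    p index   # => -8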
<headius>
Ah I see
<mattpatt[m]>
hey all. Can I steal some bandwidth to talk about deploys?
<headius>
Go for it
<mattpatt[m]>
deploy a snapshot to sonatype on every build of master / jruby-9.3 / jruby-9.2?
<mattpatt[m]>
deploy a release to sonatype when a release is cut in GitHub?
<mattpatt[m]>
I'm also very tempted to generate the ci.yml from a Rake task, because there are a lot of very similar tasks and we probably need to declare dependencies between the deploy-snapshot job and whichever jobs we anoint as the sufficient-for-snapshot set
<enebo[m]>
mattpatt: I have no issues with snapshot deploying but release is more complicated
<enebo[m]>
in a release we also build the windows installer with a licensed tool and push to s3 and push a gem
<mattpatt[m]>
you can constrain jobs so they only run when you cut a release in Github
<mattpatt[m]>
so all those are totally doable
<mattpatt[m]>
assume the windows stuff is scriptable
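A hedged sketch of the "only run when you cut a release" constraint in a workflow file; generic example, not JRuby's actual ci.yml:

    on:
      release:
        types: [published]

    jobs:
      deploy-release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          # the real job would run the mvn release/deploy, installer,
          # and artifact-push steps discussed below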
<enebo[m]>
for the windows installer it just means some secret key stuff so it can run
<enebo[m]>
and we need install4j runnable on GHA too
<enebo[m]>
but typically after I mvn release I close on sonatype to make sure we actually close (which has been a long-term historical problem whenever we change our build scripts)
<enebo[m]>
then I run the command to generate the installer for windows
<enebo[m]>
then some manual tests on windows/linux to make sure it is safe to put out
<enebo[m]>
This could be partially/mostly alleviated with a Rails smoke test job on windows/linux but that should also happen before we try and automate it more
<enebo[m]>
Other things (which really are not technically a big deal) are the gem push of jruby-jars and the S3 push of windows artifacts (although I do push some replicated files that are maven artifacts)
<enebo[m]>
I am definitely not against automating more or all of this but until we get some confidence on some sort of smoke test job (especially on windows) we cannot trust what we currently run on CI
<enebo[m]>
The other part of that smoke test would be installing the windows installer too although perhaps that is more wishful (we do not actually change windows installer config very often so it is probably not a huge fear)
<enebo[m]>
Another possibility would be just scripting more of this process until we have enough to stitch it to an automated release process as well. Like pushing PRs to ruby-build, rvm, docker, rbenv
<mattpatt[m]>
I think that formalising your release process (well, writing it down anyway) enough that we can automate it piecemeal would be the best / only way
<mattpatt[m]>
nibble away at the edges as confidence in those bits builds / to build confidence in those bits
<mattpatt[m]>
do we need to avoid pushing a snapshot build if certain of the test jobs fail?
<mattpatt[m]>
the travis setup looks like it only pushes if the tests all pass
<enebo[m]>
mattpatt: It would be nice if they all passed but I am not sure it is a requirement
<enebo[m]>
I don't have a good handle on this...on one hand we typically want whoever is trying something to try our latest bits. So regardless of test status that seems reasonable
<enebo[m]>
we also occasionally have flaky tests
<enebo[m]>
but we tend to do a lot more PR-based landing so master is typically only not green due to flakiness
<enebo[m]>
OR like current master it is so new we do not want to tag out failing tests yet
<enebo[m]>
With that said current master is not something we would say "hey come try this out". That will likely happen after we tag stuff
<mattpatt[m]>
Aha, hadn't seen that wiki page. Lots to work on automating there before going anywhere near the push-release-packages button
<enebo[m]>
mattpatt: My only largish reservation is most release problems tend to be things outside the typical test bubble
<enebo[m]>
like an installer having a glitch, build changes preventing a sonatype close, some env difference not caught by CI
<enebo[m]>
but the more we could automate the better
<enebo[m]>
we mostly do not catch env issues so that perhaps is not a fair statement
<enebo[m]>
we have though
<mattpatt[m]>
Looks like you can hold the final release for human review and pause the automation while waiting for that, so there might be a best-of-both-worlds solution somewhere in the murk
<enebo[m]>
realistically we can have a one-click release at some point and some of these tests will happen before that click with a dry run
<enebo[m]>
oh interesting
<enebo[m]>
Not sure that would help closing on sonatype but that would be pretty nice for everything else
<enebo[m]>
if we have all the bits but not the formal pushing of those bits then we could verify and finish
<enebo[m]>
That would be slick
<enebo[m]>
Actually perhaps you can push and close on sonatype but not release through an API?
<enebo[m]>
API/maven. Makes sense for this very issue
<mattpatt[m]>
I have a version of the snapshot publish task that may or may not work, and won't run until everything passes. GHA's setup-java action claims to configure the settings.xml stuff correctly, so it's pretty simple. I'm trying it as a reusable workflow and one that can be manually triggered so we can test it (unfortunately, only once that workflow file is on master)
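A hedged sketch of that snapshot-publish shape; the real workflow may differ, and the server-id, env var names, and Java version here are illustrative. setup-java's server-id / server-username / server-password inputs write a settings.xml server entry whose credentials are resolved from environment variables at deploy time:

    jobs:
      deploy-snapshot:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v2
            with:
              distribution: temurin
              java-version: 8
              server-id: sonatype-nexus-snapshots      # must match the pom's distributionManagement id
              server-username: SONATYPE_USERNAME       # names of env vars, not the values
              server-password: SONATYPE_PASSWORD
          - run: mvn -B deploy -DskipTests
            env:
              SONATYPE_USERNAME: ${{ secrets.SONATYPE_USERNAME }}
              SONATYPE_PASSWORD: ${{ secrets.SONATYPE_PASSWORD }}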
<mattpatt[m]>
I may have also got the Sequel stuff running properly
<mattpatt[m]>
Yup, Sequel passed.
<mattpatt[m]>
Ignoring the almost certain non-workiness of the snapshot publish workflow, I think that's all the Travis jobs running in GHA
<mattpatt[m]>
Once you're happy that you're only seeing real failures and not environmental stuff I'll squash the PR commits down (It'll still need to be two commits because of the need to use a full SHA in the reusable workflow invocation).
<byteit101[m]>
Is there a way to link a Java field and an @ivar?