<adam12>
weaksauce: Yeah. I think the CVE forced their hand.
<adam12>
I just wanted to call out it wasn't just a regular patch release.
<weaksauce>
ah
<weaksauce>
four months later to get a 0.0.1 patch :/
<adam12>
Yeah. Almost the latest. 2.3.1 (I think?) was Aug 28.
<adam12>
err, Apr 28
<adam12>
or Apr 26.
<weaksauce>
does ruby core have any full-time workers?
<adam12>
Great question - I'm not personally sure.
<weaksauce>
shopify probably has a person or two working on it
<leftylin1>
I'm surprised to see that a distro has fallen so far behind on their Ruby updates that they're still on an EOL version - and one that thus didn't get updated
<weaksauce>
which one?
<weaksauce>
that's it
<weaksauce>
ruby is dead
<weaksauce>
it was a good run
<leftylin1>
I don't have any replacement for Ruby though so no choice but to continue using it
<leftylin1>
and writing in it
<weaksauce>
i kid i kid
<leftylin1>
dead or not, what I said is still true
<weaksauce>
you don't compile it?
donofrio__ has joined #ruby
donofrio_ has quit [Ping timeout: 260 seconds]
<adam12>
I agree with leftylin1 kind of. I don't know what I'd replace Ruby with currently.
<adam12>
I'd be curious to see what more formal associations around Ruby would look like. RubyCentral seems great but it's outside of my country (so no tax rebate on donations) and Ruby Association (Japan) is similar. RubyCentral spends a lot of focus on Bundler/Rubygems.org. Ruby Association, I am not sure.
leftylin1 is now known as leftylink
pandabot_ is now known as pandabot
<havenwood>
Critical vulnerabilities patched for every stable Ruby in addition to all those memory fixes for 3.3.1.
<havenwood>
Update your Rubies!
<havenwood>
Serious enough to backport to Ruby 3.0.
<kjetilho>
I guess RedHat will let us know even if Ruby developers don't care.
ken_barber has quit [Quit: Client closed]
<adam12>
kjetilho: Ahaha. I have a few CentOS 7 machines too..
<adam12>
(like 100 but who's counting)
<adam12>
I'm in the middle of a mass migration to Debian. I moved some to CentOS 8 a while back and then in that same year to AlmaLinux.
<adam12>
Debian mostly gives me a current Ruby, thankfully, tho I have some hypervisors still with 2.7 through bullseye.
<adam12>
I avoided scripting sysadmin stuff in Ruby for a long time because of that damn CentOS7 Ruby version.
<kjetilho>
current EL8 is 2.5.9p229
<adam12>
Oof. Same version as Alma. ruby 2.5.9p229 (2021-04-05 revision 67939) [x86_64-linux]
<adam12>
For those machines, I write the oldest-style Ruby I can.
<adam12>
It's still annoying.
<kjetilho>
EL9 is 3.0.4p208
<adam12>
I'm eventually just going to run Debian everywhere, which should give me a recent enough Ruby through distro pkgs.
<kjetilho>
I write stuff in Perl since Ruby breaking all the time is so annoying
<adam12>
It's been nice scripting in Ruby again tho. I pair it with rset.
benjaminwil has quit [Ping timeout: 255 seconds]
bw has joined #ruby
<kjetilho>
but we're using Puppet, so I do some simple Ruby coding
<adam12>
I won't write Perl anymore. I'd rather reach for Python, or even something obscure like Janet.
<kjetilho>
Python is also horrible in breaking stuff
<kjetilho>
life is too short
<adam12>
I've generally made out OK, but I'm allergic to dependencies.
<adam12>
Most of my server stuff is optionparser+logger.
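The stdlib-only pattern adam12 describes might look something like this - a hypothetical sketch, with the tool name and flags made up for illustration; the point is that optparse and logger ship with Ruby, so nothing needs `gem install`:

```ruby
require "optparse"
require "logger"

# Hypothetical "greet" CLI: parses flags with OptionParser, logs to stderr
# with Logger, and returns its output as a string. No gems involved.
def run(argv)
  options = { verbose: false, name: "world" }

  OptionParser.new do |o|
    o.banner = "Usage: greet [options]"
    o.on("-n", "--name NAME", "Name to greet") { |v| options[:name] = v }
    o.on("-v", "--verbose", "Enable debug logging") { options[:verbose] = true }
  end.parse!(argv)

  log = Logger.new($stderr)
  log.level = options[:verbose] ? Logger::DEBUG : Logger::INFO
  log.debug("options: #{options.inspect}")  # only shown with -v

  "Hello, #{options[:name]}!"
end
```

A real script would end with `puts run(ARGV)`; keeping the logic in a method makes it trivial to exercise without a subprocess.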
<kjetilho>
it's a bit sad, but golang or rust monolithic binaries is the simplest way to deploy stuff to a variety of distros. (or Perl :)
<adam12>
Agreed.
<adam12>
I built rbz a few weeks ago because I wanted to experiment with more complex CLI tools that I could ship to servers, but without using gem. I think wasm might get us there eventually, if we didn't have to deal with the weird sandbox.
<adam12>
A pure server side, no sandbox wasm implementation might be kind of interesting. Like Java, without the slow boot times and insane memory requirements.
<kjetilho>
:)
<weaksauce>
that would be nice
<gr33n7007h>
leftylink: if you're talking about arch linux, it's because they won't update ruby to 3.3.x until the mass update for python 3.12 has been done.
<gr33n7007h>
then we should see regular updates, hopefully ;)
<leftylink>
arch linux is behind? that is surprising to hear as from what I know arch linux is a rolling release
<gr33n7007h>
yeah, 3.0.6
<leftylink>
I see. hm, at least 3.0.7 still came out as a last hurrah for 3.0.x series
<gr33n7007h>
as soon as 3.12 completed for python, ruby is next in the pipeline
<gr33n7007h>
i've never seen ruby so behind on arch before, 3+ years
<gr33n7007h>
to be fair though, they're only volunteers at the end of the day.
<leftylink>
a lot of distros out there. wonder what the... distribution of volunteers is like among distros
<leftylink>
I dunno that page hits mean much as opposed to, say, number of users, but I don't know that there's an easy way to get number of users for a wide variety of distros
<gr33n7007h>
wow, 63rd, that is surprising.
<adam12>
I'm surprised MX is so high.
<gr33n7007h>
adam12: was just thinking that myself.
<leftylink>
hm, let's see, a glance at the names near the top, I recognise that EndeavourOS and Manjaro are both based on Arch. maybe that's where some of the energy has been going lately, I dunno what the story is
<kjetilho>
I never heard of MX Linux before...
<leftylink>
I have a lot of respect for Arch though. setting things up as you want them set up
<leftylink>
getting a little old though and sometimes feel like I'm losing the patience needed to do that...
<adam12>
I did LFS back in 1999 or so and that was enough for me :P
<leftylink>
maybe "getting old" is also the reason I never found a replacement for Ruby ("can't teach an old dog new tricks") but I dunno
infinityfye has joined #ruby
<adam12>
And Slackware, ofc.
<adam12>
Now I'd rather just let Debian handle it. It's already a pain dealing with the churn in Linux (ipchains->iptables->nftables for example).
<leftylink>
I don't think it's for lack of trying; I've tried out other languages, but still like Ruby
infinityfye has quit [Read error: Connection reset by peer]
<gr33n7007h>
ruby will never die!
<gr33n7007h>
not while i'm still breathing anyway lol ;)
<adam12>
I think the DX needs to be given a fresh look as a newcomer.
<adam12>
This is where I wish $ was invested. I wish I had the $ to invest in it.
<gr33n7007h>
my jerk chicken wings are ready, bk in a bit
_ht has quit [Remote host closed the connection]
jenrzzz has quit [Ping timeout: 268 seconds]
pascal_blaze has joined #ruby
<[0x1eef]>
Arch Linux stopped being cool in 2012-ish.
<[0x1eef]>
IMO they became little more than another brick in the wall after adopting systemd.
TomyWork has joined #ruby
<miah>
"am i cool if i hate systemd?" (probably not)
<adam12>
I'm not a huge systemd fan but tbh, going back to rc scripts doesn't appeal at all. If it was all in on runit (like void) then maybe I'd be accepting.
<adam12>
Being able to just alias unit files out to run multiple instances has been super handy. And a 5 line unit file vs an insane rc file was a huge improvement.
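The aliasing adam12 mentions is systemd's template-unit mechanism: one `name@.service` file serves any number of instances (`systemctl start myapp@a myapp@b`), with `%i` expanding to the instance name. A hypothetical template, roughly the "5 line unit file" he contrasts with an rc script:

```ini
# /etc/systemd/system/myapp@.service  (hypothetical example)
[Unit]
Description=myapp worker %i

[Service]
ExecStart=/usr/local/bin/myapp --worker %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```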
<[0x1eef]>
For me it is too complicated, too broad in scope, and solves problems no one had. That's a large reason why I left Linux altogether, but also because of the churn. It's a community that doesn't want to improve existing tools, it wants to rewrite em all in its own image. BSD is the perfect fit for me.
<miah>
i agree with those statements
<miah>
i do think though, that where systemd is well integrated it works.
<miah>
(i run arch on my desktop, and fileservers, but all other systems run openbsd)
jenrzzz has joined #ruby
<[0x1eef]>
It works, especially after all those years, but I think the design is questionable. For example, the recent xz fiasco was made possible by a systemd component. I can't remember which - I think it was libsystemd or systemd-notify. Why? So other services can know if sshd has started. It has its tentacles everywhere and sometimes the use cases raise eyebrows. But it did set Linux apart, for better or worse.
<miah>
the xz thing happened because distros patched sshd to link libsystemd (which pulls in liblzma) rather than modifying the software to write "READY=1" to a unix domain socket
<miah>
because they patched sshd to link to libsystemd, and with it liblzma
<miah>
which didn't need to happen
<miah>
maybe they were trying to get around some weird sandboxing issue, but the problem wasn't _caused_ by systemd. maybe it was caused by runaway complexity (in both the average linux distribution, and systemd)
<[0x1eef]>
From the gist: sshd is often patched to support systemd-notify so that other services can start when sshd is running. liblzma is loaded because it's depended on by other parts of libsystemd
<gr33n7007h>
yeah, some linux distros patched openssh to support systemd notification, which in turn requires libsystemd, which depends on liblzma
<miah>
notification doesn't technically require libsystemd though, its just a unix domain socket
<miah>
the UDS lives wherever $NOTIFY_SOCKET is pointed at
<miah>
maybe this gets tricky in containers, Flatpak, etc
<miah>
(im not sure the exact reasoning they did the linking as its not technically required, and afaik the response from systemd was to improve the interface to make it more obvious that people didnt need to link to libsystemd to send notifications)
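The protocol miah is describing really is just a datagram on a unix socket. A hypothetical Ruby sketch of sending the readiness notification by hand, with no libsystemd link (abstract-namespace sockets, whose names in `$NOTIFY_SOCKET` start with "@", need extra handling and are skipped here):

```ruby
require "socket"

# Send a state string ("READY=1" by default) to the socket named in
# $NOTIFY_SOCKET, as sd_notify(3) would. Returns true if sent, false if
# there is no notification socket (i.e. not running under systemd).
def notify_ready(state = "READY=1")
  path = ENV["NOTIFY_SOCKET"]
  return false if path.nil? || path.start_with?("@")

  sock = Socket.new(:UNIX, :DGRAM)
  sock.connect(Socket.pack_sockaddr_un(path))
  sock.write(state)
  true
ensure
  sock&.close
end
```

Under systemd the receiving end is the service manager itself; outside it, `$NOTIFY_SOCKET` is simply unset and the call is a no-op.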
<gr33n7007h>
Fortunately arch linux wasn't prone to this, woop woop!
<[0x1eef]>
That use case seems weird to me. You could know that by setting the order the daemons start in. Maybe it's because systemd wants to parallelize that process; that's a feature I never needed though. systemd introduces complexity to solve problems I never had.
<miah>
congratulations on not having that problem. i have admin'd servers that take literally 4+ hours to boot simply because of the number of SAN connections, and service management wasn't a pain but wasn't easy on those either =)
<miah>
the problems do exist (in large orgs who are mostly powered by monkeys with typewriters and bosses who haven't touched a keyboard in decades) so things like this happen
<miah>
and while i would love to see openbsd everywhere, thats a big lol
<[0x1eef]>
Fair enough.
<kjetilho>
[0x1eef]: how do you know when a daemon is ready to accept connections?
<kjetilho>
it has nothing to do with parallelization
<[0x1eef]>
When it returns control to the caller that's normally the case.
<kjetilho>
I have never seen a daemon do that
<miah>
answer: when it accepts connections =)
<kjetilho>
they will daemonize first, then do their startup work
<kjetilho>
miah: yeah, but the init system won't know what the task of the daemon is. there are no readiness checks in any init implementation I know
<miah>
ya, the init system doesn't need to know that. it just needs to know that the daemon has started
<kjetilho>
so lots of sleep everywhere?
<miah>
to implement a readiness check for every service would be... bonkers
HappyPassover is now known as Al2O3
<miah>
the startup script can send a notification that the program completed (daemonized) when it exits with 0, or an error notification when it exits non-zero
<miah>
doesn't need any custom linking, just a way to send a notification
<kjetilho>
miah: then the daemon needs to implement its own notification system internally - via a socket?
<[0x1eef]>
On FreeBSD there are magic comments that appear to help with this type of stuff. But it's not a problem I ever had. That's generally how I see systemd: solving problems I never had.
<kjetilho>
miah: right, I was arguing for the usefulness of that method - compared to the mess of sleep we used to have
<miah>
ya, ive done a lot with old sysvinits, runit, systemd, SMF etc
<miah>
i prefer runit style, but basically nobody does that (except maybe void), systemd is fine when you _learn it_
<miah>
it is overly complex for sure, but its better than sysvinits
<kjetilho>
yeah
<miah>
im thankful it isn't xml like SMF =)
<kjetilho>
exactly! I was about to say that :)
<[0x1eef]>
On FreeBSD's /etc/rc.d/sshd there's "PROVIDE: sshd", and then other rc.d scripts can add: "REQUIRE: sshd".
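The rcorder(8) comments [0x1eef] quotes sit at the top of each rc.d script; a hypothetical script that must start after sshd would follow the standard rc.subr boilerplate (names here are made up):

```sh
#!/bin/sh
# PROVIDE: myapp
# REQUIRE: sshd
# rcorder(8) reads the comments above to compute startup order.

. /etc/rc.subr

name="myapp"
rcvar="myapp_enable"
command="/usr/local/sbin/myapp"

load_rc_config $name
run_rc_command "$1"
```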
<miah>
(although from my understanding, systemd is modeled after launchd from macOS)
<kjetilho>
[0x1eef]: sshd generally doesn't have a strong dependency - but take postgresql: there it is often important that the next service does not start until it is ready to accept connections.
<kjetilho>
or keepalived - it should not start before apache is up
<kjetilho>
there are plenty of examples. "sleep 2" will usually work. but then one day it doesn't. so you put "sleep 5" everywhere instead to be safe
<miah>
(to be really safe you do load balancing and dont rely on a single server)
<kjetilho>
you don't want to announce the service until it is ready regardless