jaeger changed the topic of #crux to: CRUX 3.6 | Homepage: https://crux.nu/ | Ports: https://crux.nu/portdb/ https://crux.ninja/portdb/ | Logs: https://libera.irclog.whitequark.org/crux/
tilman has quit [Ping timeout: 252 seconds]
tilman has joined #crux
groovy2shoes has joined #crux
_moth_ has quit [Ping timeout: 256 seconds]
mechaniputer has joined #crux
mechaniputer has quit [Quit: leaving]
crash_2 has joined #crux
crash_ has quit [Read error: Connection reset by peer]
crash_2 is now known as crash_
ocb has quit [Ping timeout: 276 seconds]
ocb has joined #crux
Guest22 has joined #crux
Guest22 has quit [Client Quit]
_moth_ has joined #crux
elderK has quit [Quit: Connection closed for inactivity]
groovy2shoes has quit [Ping timeout: 252 seconds]
groovy2shoes has joined #crux
<cruxbot> [contrib.git/3.6]: remind: update to 03.03.10
<cruxbot> [opt.git/3.6]: poppler-qt5: update to 21.12.0
<cruxbot> [opt.git/3.6]: poppler-glib: update to 21.12.0
<cruxbot> [opt.git/3.6]: poppler: update to 21.12.0
<cruxbot> [opt.git/3.6]: libsdl2: update to 2.0.18
<ocb> the internet is 250-300 kbps, avg download 48 KB/s - this costs ~22$ / month
<ocb> and the nearest country across the border having 1gbps dedicated uplinks for 20$
<ocb> its funny how sad it is
<stenur> sounds like where i live five years ago (for me at least)
<cruxbot> [xorg.git/3.6]: mesa: update to 21.3.1
pedja has joined #crux
<ocb> desperate internet connection. on the other side, telecoms talking about 5G, while at the same time some can't provide 1mbps.
<cruxbot> [core.git/3.6]: sysklogd: update to 2.3.0
<pedja> upgrade from 20/4 to 200/80 was nice :)
<pedja> (FTTB)
<pedja> still no ipv6, though
<stenur> also no IPv6 here; only on the VM server, but disabled there. i have no idea, am at a superficial, circa-2003 level of understanding (minus the (local) device suffix notation), and a lot has happened since then. But i do own (as long as i pay), i think, a complete 32-bit subnet!
<pedja> ipv6 is enabled in LAN, just no ipv6 from ISP(yet™️) :)
<stenur> Other than that i am unable to get SSH performance above ~30MB/sec, even for a local virtio qemu copy.
<stenur> I am totally insecure with IPv6, this is why.
<stenur> And the effort to pimp the firewall script for it, uff.
<pedja> iirc, there were some patches for openssh to deal with that, but they were rejected upstream
<stenur> The HPN patches, you mean? Was that the name? The ones that use large windows and multiple threads.
<pedja> I think so, yeah
<stenur> Or the like, HPN i think.
<stenur> Yes. They were part of FreeBSD once, but then there was a massive, typical FreeBSD bikeshed storm over there, with figures to prove that on a modern FreeBSD the patches do not really make a difference.
<stenur> Or mostly.
<stenur> Anyhow, Dag-Erling Smørgrav then retired.
<stenur> I knew his name since shortly after Y2K, lots of LATIN1 8-bit bytes in the FreeBSD source code :)
<stenur> (Seems to work occasionally in ports though, handful of times per year. But that's what remained.)
<stenur> I dunno. I saw people talking things like 300MB/sec, i have a tenth.
<stenur> Hm, regarding that patch series i saw a thread .. it must be in active use somewhere; the code maintainer also got donations i have read, not too long ago (max 1-2 years)
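For reference, a quick way to see whether a ~30 MB/sec ceiling comes from the SSH layer rather than the link or disk is a null transfer; a sketch, where "vmguest" is a placeholder hostname:

```shell
# Null transfer: dd prints the achieved throughput on stderr, and the
# remote side discards everything, so disks stay out of the picture.
dd if=/dev/zero bs=1M count=1024 | ssh vmguest 'cat > /dev/null'

# Repeat with an explicit cipher; aes128-gcm@openssh.com is usually the
# fastest stock OpenSSH cipher on CPUs with AES-NI.
dd if=/dev/zero bs=1M count=1024 | ssh -c aes128-gcm@openssh.com vmguest 'cat > /dev/null'
```

If swapping ciphers changes little, the bottleneck is more likely OpenSSH's internal channel window (the thing the HPN patches enlarge) or the virtio path itself.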
<stenur> Out for cycling; what a wet and cold mess; i wish we could finally go to southern Europe at least in winter. got teased by someone who was in Israel last week: 18° at six in the morning, 28° or so (i have forgotten) in the afternoon. And many more hours of light, not almost dark by 16:00. Ciao!
<pedja> afaik, they are used in hpc/scientific communities, makes moving datasets around a bit nicer
prologic has quit [Quit: ZNC - https://znc.in]
prologic has joined #crux
CrashTestDummy2 has quit [Quit: Leaving]
CrashTestDummy has joined #crux
stoffepojken has joined #crux
testestest has joined #crux
CrashTestDummy2 has joined #crux
CrashTestDummy3 has joined #crux
CrashTestDummy has quit [Ping timeout: 256 seconds]
CrashTestDummy2 has quit [Ping timeout: 256 seconds]
<ocb> i have two questions unrelated to crux. maybe somebody knows. im planning a two bare-metal server setup for an existing project that is currently running on 6-7 year old slackware 14.2. the idea is that both servers run a qemu kvm setup with only a few ipv4 addresses. some virtual machines will use qemu user-networking and some will be bridged. machines with user-networking will use tap devices to access
<ocb> wan. however i have some issues understanding bridged networking. in the current setup a bridge0 with eth0 and tap0 devices allows tap0 to access wan through eth0. however if i add one more tap device, i.e. tap1, to bridge0 then either tap0 or tap1 will have access to wan but not both. both have different mac addresses assigned, different subnets, and have learning and discovery enabled. i am not sure if it's
<ocb> even ok to use >2 devices in a bridge. is anyone running a similar setup with qemu or bridged networking? would like to hear how you've done it.
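For context: a Linux bridge routinely carries many ports, so three or more devices on bridge0 is fine in itself. A minimal iproute2 sketch of the setup described above (interface names taken from the question; run as root):

```shell
# Create the bridge and enslave the physical NIC.
# Note: once eth0 is enslaved, the host's IP address belongs on bridge0,
# not on eth0 -- a stale address left on eth0 is a common cause of
# "only one port works" symptoms.
ip link add name bridge0 type bridge
ip link set eth0 master bridge0

# Two tap devices for two guests, both enslaved to the same bridge.
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
ip link set tap0 master bridge0
ip link set tap1 master bridge0

# Bring everything up.
ip link set eth0 up
ip link set tap0 up
ip link set tap1 up
ip link set bridge0 up
```

Since the bridge is a single L2 segment, putting the two guests in different IP subnets works at the Ethernet level but means they cannot reach each other or a shared gateway without routing; that subnet mismatch is a more likely culprit than the number of bridge ports.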
CrashTestDummy2 has joined #crux
CrashTestDummy3 has quit [Ping timeout: 252 seconds]
CrashTestDummy has joined #crux
CrashTestDummy2 has quit [Ping timeout: 252 seconds]
testestest has quit [Remote host closed the connection]
testestest has joined #crux
<braewoods> ocb: already have the hardware?
<ocb> braewoods: don't have the hardware yet, should be getting it early january. but until then slowly planning how everything should look.
<ocb> but it seems my question is too specific and should be dealt with at proper time (upon setup)
<ocb> one will most likely be dl380p 2xE5-2650v2, 32/48 gb memory, 4x4tb hdd, 2x256gb ssd. the second one can be a weaker E3-12xx or something. yet to plan. but if anyone is running a qemu setup would like to hear their opinion :)
<jaeger-> I have not done multiple taps in one bridge, just one which worked fine as you mentioned already... maybe openvswitch could solve that for you
jaeger- is now known as jaeger
<jaeger> I do have some openvswitch and qemu/kvm/libvirt stuff at work with physical hosts running quite a few VMs, but I'm not very familiar with the networking there
<ocb> i am not familiar with openvswitch, its early evening will take a look at it now. thanks for the input :))
<jaeger> No problem
fishe7 has joined #crux
fishe7 has quit [Client Quit]
<jaeger> It's something I've been meaning to play with if I ever have time
<stenur> Even with my dedicated net namespace that is plugged into the real network via veth, and internally uses a bridge to which the VMs attach, 30 MB/sec for SSH is not much. I have seen people claim a factor of 10! The former proxy_arp approach was only 30 percent faster (not ssh exactly, but ping times and similarly simple things).
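The namespace-plus-veth layout described above can be sketched like this (hypothetical names, iproute2, run as root):

```shell
# Dedicated namespace for the VMs.
ip netns add vms

# veth pair: one end stays on the host, the peer moves into the namespace.
ip link add veth-host type veth peer name veth-vms
ip link set veth-vms netns vms

# Inside the namespace: a bridge to which the veth peer and, later,
# the VM tap devices are attached.
ip netns exec vms ip link add br0 type bridge
ip netns exec vms ip link set veth-vms master br0
ip netns exec vms ip link set veth-vms up
ip netns exec vms ip link set br0 up
ip link set veth-host up
```

Each veth hop and the bridge add a small forwarding cost, which is one reason the in-namespace path can lag behind a plain proxy_arp setup on latency-sensitive measurements.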
<braewoods> ocb: why the hp proliant?
<braewoods> ocb: i built a workstation with comparable specs earlier in the year
<ocb> honestly, only that i got used to it and a friend has some extra in .nl
<braewoods> if you were building it from "new" stuff i'd suggest just a DIY ryzen rig with ECC RAM.
<braewoods> they're probably cheaper than an official server board
<braewoods> etc
<braewoods> i've got mine with 64GB of ECC RAM
<braewoods> 8 cores
<braewoods> Vega graphics
<braewoods> but if you're getting it at a discount it's probably cheaper, the proliant stuff
<ocb> thanks for the suggestion, will look into the prices but have to talk with the guy that provides colocation
<braewoods> my rig is actually the consolidation of my desktop and server
<braewoods> oh, if you're putting it in a datacenter, this is probably a bad fit then
<ocb> you're happy with ryzen?
<braewoods> yes
<braewoods> it's good performance but has zero remote management
<braewoods> the board that is
<braewoods> for me it was a better value because i don't need remote management
<braewoods> i've used proliants before, better than most remote management systems
<ocb> its surely going to be on remote location because i have a 200-300 kbps uplink that just broke 5 times while writing this message 'Destination Net Unreachable'
<braewoods> i'm actually using a ryzen APU
<braewoods> 4750G
<braewoods> but in your case you actually need low level remote control
<braewoods> that's typically only found in actual server motherboards
<braewoods> if this was for home or office use, a place you rarely need remote access to
<braewoods> you could probably get away without it
<braewoods> or just use ssh for it
<braewoods> but in a server farm of sorts
<braewoods> it's more valuable due to how hard those are to access
<braewoods> physically
<ocb> i will put everything on the paper but its most likely going to be proliant if my guy can provide a good price
<ocb> this is my normal home connection - 26.1% packet loss
<braewoods> proliants are fine, better than most server boards due to the availability of updates for the remote management firmware
<ocb> so the server can't be here :)
<braewoods> which reminds me, what gen are the servers
<ocb> they are g8
<braewoods> you'll probably want to make sure all the firmware is updated before putting them into use
<braewoods> gen8... "deprecated" at this point
<braewoods> but afaik still serviced for now
<ocb> thanks for the tips, i've updated a few 360p but ilo firmware only, it will be a good learning experience to gain more knowledge around this
<ocb> since the project is barely paying for itself right now, not sure we have extra to go with g9 or g10
<ocb> however i will look at ryzen just for interest
<ocb> all disks are encrypted so a server/workstation with no or bad remote control could be problematic
<ocb> (in case of a power outage) ^
<braewoods> even if not for that, any kind of remote malfunction would do it
<ocb> still have a month or more to think, thanks everyone for the information :))
<braewoods> ocb: all i'm really saying is you may also want to consider a beefier single server
<braewoods> it may be more practical
<braewoods> especially if everything else can be virtualized or containerized
<cruxbot> [contrib.git/3.6]: containerd: updated to version 1.5.8
<cruxbot> [contrib.git/3.6]: docker: updated to version 20.10.11
_moth_ has quit [Ping timeout: 256 seconds]
<cruxbot> [contrib.git/3.6]: docker-compose: updated to version 2.2.1; NOTE: this is v2 and now depends on go rather than python
z812 has quit [Quit: bye!]
z812 has joined #crux
<cruxbot> [contrib.git/3.6]: feh: adopted, updated to version 3.7.2
<cruxbot> [contrib.git/3.6]: Revert "feh: dropped port"
<cruxbot> [contrib.git/3.6]: dunst: updated to version 1.7.2