hightower3 has quit [Remote host closed the connection]
hightower2 has joined #crystal-lang
<FromGitter> <wwalker> Well, maybe not. That requires that I have /usr/lib64/crystal on the aarch64 machine. If I had crystal on the aarch64, I wouldn't be cross compiling :-(
<FromGitter> <Blacksmoke16> you sure? my understanding of cross compiling was it's already compiled and it just needs to be linked on the target system
<FromGitter> <Blacksmoke16> does it work if you just exclude the `-L`? pretty sure that's not actually the crystal binary, but the source of crystal libs
<FromGitter> <Blacksmoke16> idk if that's expected or not, feels a bit weird it's there...?
<FromGitter> <wwalker> @Blacksmoke16 it felt weird. And I was getting "not found" for libs that were installed. But now it builds fine. ⏎ ⏎ Thanks! I was about to switch all 20 machines to x86_64 (that is the nice part of AWS EC2... :-)
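For context, Crystal's cross-compile flow only needs the compiler on the build machine: `--cross-compile` emits an object file and prints the `cc` command to finish linking on the target. A sketch (file name and target triple are illustrative; the exact library list varies by Crystal version):

```console
# on the x86_64 build machine: writes app.o and prints the link command
$ crystal build app.cr --cross-compile --target "aarch64-unknown-linux-gnu"
cc app.o -o app -rdynamic -lpcre -lm -lgc -lpthread -levent -lrt -ldl

# copy app.o to the aarch64 machine and run the printed cc line there,
# dropping any build-machine-specific -L paths (as discussed above)
```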
<FromGitter> <Blacksmoke16> :shrug: huh
<FromGitter> <Blacksmoke16> any reason to run on aarch64? amd64 would deff be easier ha
<FromGitter> <wwalker> the same performance level server for aarch64 costs 80% of what the x86_64 servers cost...
<FromGitter> <Blacksmoke16> oof
<FromGitter> <Blacksmoke16> yea that's a fair reason ha
<FromGitter> <wwalker> yeah, I need to set up 300 machines, so it is the difference between 217,700 USD and 174,200 USD per month. $43,500 / month.
<FromGitter> <jrei:matrix.org> You could also use qemu, compile statically and put it on the server
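One way to do what jrei suggests, assuming Docker with QEMU binfmt emulation and an arm64 variant of the official Alpine-based image (tag and file names are illustrative):

```console
# register QEMU binfmt handlers once
$ docker run --privileged --rm tonistiigi/binfmt --install arm64

# build a fully static (musl-linked) aarch64 binary on an x86_64 host
$ docker run --rm --platform linux/arm64 -v "$PWD":/src -w /src \
    crystallang/crystal:latest-alpine \
    crystal build app.cr --static --release

# the binary has no runtime dependencies; just copy it over
$ scp app my-aarch64-server:
```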
<FromGitter> <stellarpower> Earthly can be useful for this stuff:
<FromGitter> <stellarpower> I think it has a way of latching into QEMU to do cross-compilation without as much effort as it often takes
<FromGitter> <moe:busyloop.net> you need to set up 300 machines with an app called slimer. do you work for the ghostbusters? 🤔
<FromGitter> <stellarpower> I might be being stupid; is there a nice way to pairwise multiply two arrays without doing a zip then a map?
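On the pairwise-multiply question: the stdlib has no element-wise operator, but `map_with_index` does it in one pass, without the intermediate array of tuples that `zip` + `map` allocates. A quick sketch:

```crystal
a = [1, 2, 3]
b = [4, 5, 6]

# zip + map: builds an intermediate array of tuples first
a.zip(b).map { |x, y| x * y } # => [4, 10, 18]

# map_with_index: single pass, no intermediate array
a.map_with_index { |x, i| x * b[i] } # => [4, 10, 18]
```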
<FromGitter> <stellarpower> It's actually called slimmer
<FromGitter> <stellarpower> It optimises
<FromGitter> <moe:busyloop.net> ok that sounds less exciting then :P
<FromGitter> <stellarpower> And the sign of a good optimiser is when it optimises away redundant information in its own name
<FromGitter> <moe:busyloop.net> ha
<FromGitter> <wwalker> earthly looks interesting!
<FromGitter> <moe:busyloop.net> if cost is a concern, have you considered hetzner tho? around $60k would buy you the same compute there. probably less.
<FromGitter> <stellarpower> I've been having a play. It still suffers from some of the arseyness of dockerfile syntax.
<FromGitter> <stellarpower> But it's a step in the right direction at least
<FromGitter> <stellarpower> We wanna use it to get reproducible builds across different people's machines, but then the ease of debugging in a local IDE for development work
<FromGitter> <wwalker> Nope, I am writing a pipeline to ingest about 100 TB a day into an elasticsearch cluster (editing all 100 billion ~1KB json documents) ⏎ The project is named Taz (Tasmanian Devil from Looney Tunes) and the first piece of the pipeline is Slimer because it consumes those 100E9 docs the same way Slimer consumed hot dogs, just shovels them in and they just fall right through. The analogy was better in my head....
<FromGitter> <moe:busyloop.net> that's quite some data indeed
<FromGitter> <wwalker> the data we consume is already in AWS, so the cost to exfiltrate the data would kill any cost savings most likely, and we've already done all our compliance paperwork around AWS...
<FromGitter> <wwalker> However, Hetzner looks nice. I'll keep it in mind for any of my old clients looking for cheap high end cloud servers.
<FromGitter> <stellarpower> I already stole this idea ;)
<FromGitter> <stellarpower> I think ours are settled, and we've struck a deal, but it's interesting as we do need quite some power.
<FromGitter> <wwalker> Yeah, currently we're doing about 5% of that (5 TB / day, with the ability to ingest about 15 TB/day (in case we get behind because something broke, so we can catch up in a reasonable time), versus 35 TB/day with the ability to ingest 100 TB / day for "catching up")
<FromGitter> <moe:busyloop.net> hm yea, 100TB egress a day is hefty
<FromGitter> <wwalker> well 100TB is our burst, but 30-35 TB / day is still hefty.
<FromGitter> <moe:busyloop.net> yea, that'd be something like $45k/mo
<FromGitter> <moe:busyloop.net> could still work out, considering how much you could save on compute
<FromGitter> <moe:busyloop.net> but if you also have compliance stuff to deal with, and if it's not your own money, may not be worth it ;)
<FromGitter> <wwalker> Yeah, even talking with other IT folks, when I say things like "I need to move 1.5 Petabytes over the next 5 days" they say things like "you mean terabytes?"
<FromGitter> <moe:busyloop.net> i would probably say something like "have you tried gzip?" :P
Sankalp has quit [Ping timeout: 260 seconds]
<FromGitter> <moe:busyloop.net> i'm curious about that elasticsearch cluster tho. do you rebuild it from scratch every day or does it permanently hold an index of that size?
<FromGitter> <moe:busyloop.net> my experience with ES durability is not great, wonder how many nodes that takes and how you keep them alive lol
<FromGitter> <wwalker> Currently 120 nodes. 112 with data storage. 48 have 12 TB of hard drives each, 64 have 2 TB of SSDs each. ingest 3 to 5 TB/day, times 2 for storage, as we have 1 replica of each index. each node has 16 cores and 128 GB RAM. Data comes in through the "hot" nodes (SSDs) and sits there for 3 days, then as it exceeds 3 days old, it moves to the "warm" nodes (HDDs) and sits until it is 30 days old and is deleted.
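The 3-day hot / 30-day delete rotation wwalker describes maps onto a standard Elasticsearch ILM policy; a sketch, where the policy name and the `data` node attribute are illustrative assumptions:

```console
PUT _ilm/policy/taz-docs
{
  "policy": {
    "phases": {
      "hot":    { "actions": {} },
      "warm":   { "min_age": "3d",
                  "actions": { "allocate": { "require": { "data": "warm" } } } },
      "delete": { "min_age": "30d",
                  "actions": { "delete": {} } }
    }
  }
}
```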
<FromGitter> <wwalker> I'm expecting roughly 300 nodes to meet our ingest and retention goals.
<FromGitter> <moe:busyloop.net> ouf.
<FromGitter> <Blacksmoke16> 💰
<FromGitter> <moe:busyloop.net> that def sounds like fun :D
<wwalker> it is fun until you realize that to start up your "test/scaling" env you will be spending $11K a day. And if you forget and leave an "extra hundred or so..." machines running for a week...
<FromGitter> <moe:busyloop.net> yea. i also don't want to imagine having to reshard or such at that size.
<FromGitter> <moe:busyloop.net> but still sounds like fun anyway. with some sweaty palms in between. ;)
<wwalker> yes, our ES is "write once". No updates, no document deletes.
<wwalker> yeah, this is my sweaty palms week for this year and next year!
<FromGitter> <moe:busyloop.net> ah good. that def makes life easier. updates are evil.
Sankalp has joined #crystal-lang
Sankalp- has joined #crystal-lang
Sankalp has quit [Read error: Connection reset by peer]
Sankalp- is now known as Sankalp
Sankalp has quit [Ping timeout: 248 seconds]
Sankalp has joined #crystal-lang
renich has quit [Quit: Leaving]
ur5us has quit [Ping timeout: 260 seconds]
ur5us has joined #crystal-lang
Sankalp has quit [Ping timeout: 256 seconds]
ur5us has quit [Ping timeout: 260 seconds]
ur5us has joined #crystal-lang
Sankalp has joined #crystal-lang
miketheman has quit [Ping timeout: 260 seconds]
miketheman has joined #crystal-lang
ur5us has quit [Ping timeout: 260 seconds]
egality has quit [*.net *.split]
egality has joined #crystal-lang
ur5us has joined #crystal-lang
walez has joined #crystal-lang
ua_ has quit [Ping timeout: 268 seconds]
ua_ has joined #crystal-lang
ur5us has quit [Ping timeout: 255 seconds]
alexherbo2 has joined #crystal-lang
alexherbo2 has quit [Ping timeout: 260 seconds]
jmdaemon has quit [Ping timeout: 256 seconds]
hightower2 has quit [Ping timeout: 260 seconds]
hightower2 has joined #crystal-lang
alexherbo2 has joined #crystal-lang
alexherbo2 has quit [Ping timeout: 260 seconds]
alexherbo2 has joined #crystal-lang
hightower2 has quit [Read error: Connection reset by peer]
hightower2 has joined #crystal-lang
alexherbo2 has quit [Ping timeout: 260 seconds]
hightower3 has joined #crystal-lang
alexherbo2 has joined #crystal-lang
hightower2 has quit [Ping timeout: 264 seconds]
hightower3 has quit [Ping timeout: 264 seconds]
hightower2 has joined #crystal-lang
walez has quit [Quit: Leaving]
alexherbo2 has quit [Remote host closed the connection]
yxhuvud has quit [Read error: Connection reset by peer]