<FromGitter> <mattrberry> Heh, according to the Instruments profiler on mac, 2.3% of my application's performance is taken by Slice#check_writable
<FromGitter> <mattrberry> Instruments says that's the heaviest stack trace in the entire app lol
<FromGitter> <mattrberry> Is there a decent way to effectively template a method over constant parameters? Today, I have something like ⏎ ⏎ ```code paste, see link``` ⏎ ⏎ but I figure this would actually be worse for performance. While it saves computing the `second_instr` on every call, it'd require looking it up on the heap (if my assumption is correct) ... [https://gitter.im/cry
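(The actual paste is only reachable through the Gitter link, so the following is a hedged reconstruction of the tradeoff being described — `second_instr` comes from the message, everything else is invented. Building a proc once hoists the decode work out of the hot path, but every call then goes through a captured closure, whose environment lives on the heap.)

```crystal
# Hypothetical sketch of the "closured" approach: precompute a value
# derived from the instruction once, capture it, and reuse it per call.
def compile_handler(instr : UInt32) : Proc(UInt32, UInt32)
  second_instr = instr >> 16      # computed once, captured by the closure
  ->(operand : UInt32) { operand &+ second_instr }
end

handler = compile_handler(0x00AB_1234_u32)
handler.call(1_u32) # each call reuses the precomputed upper half
```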
<FromGitter> <mattrberry> With a few more values on the heap / parsed at runtime, the closured approach seems to be increasingly bad and the macro approach pulls ahead a little ⏎ ⏎ ```code paste, see link``` [https://gitter.im/crystal-lang/crystal?at=635ed4aa27f328266d62cf29]
<FromGitter> <mattrberry> I guess the real takeaway here is that the macro approach is super ugly and that (as expected) checking bits at runtime is basically a nop
<FromGitter> <mattrberry> So my take is that macros aren't worth the fugliness for a sub-2% perf improvement even in a hot benchmark loop
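(A hedged sketch of the comparison being summarized — the real pasted code is behind the Gitter link, and all names here are invented. A macro loop stamps out one specialized method per constant value, so the branch disappears at compile time; the runtime version just checks a Bool, which, as measured above, is nearly free.)

```crystal
class ALU
  getter carry = false

  # Macro-templated: one method body per constant value of `set_flags`,
  # generated at compile time, so no branch remains in the hot path.
  {% for set_flags in [true, false] %}
    def add_{{ set_flags ? "flags".id : "quiet".id }}(a : UInt32, b : UInt32) : UInt32
      res = a &+ b
      {% if set_flags %}
        @carry = res < a # unsigned overflow => carry
      {% end %}
      res
    end
  {% end %}

  # Runtime check: one well-predicted branch on a Bool per call.
  def add(a : UInt32, b : UInt32, set_flags : Bool) : UInt32
    res = a &+ b
    @carry = res < a if set_flags
    res
  end
end
```

The runtime version keeps one readable method at the cost of a branch the CPU predicts almost perfectly, which is consistent with the sub-2% gap reported above.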
<FromGitter> <mattrberry> This is a situation where it'd be nice to have some compile-time evaluation with the interpreter as opposed to the macro language :p
<yxhuvud> I don't think what you've told us is really enough for us to be able to help you. In general I'd avoid an overreliance on procs, as I wouldn't trust LLVM to be able to optimize across proc boundaries.