ChanServ changed the topic of #crystal-lang to: The Crystal programming language | https://crystal-lang.org | Fund Crystal's development: https://crystal-lang.org/sponsors | GH: https://github.com/crystal-lang/crystal | Docs: https://crystal-lang.org/docs | Gitter: https://gitter.im/crystal-lang/crystal
<FromGitter> <wrq> does crystal have anything like timeit for python? I've never profiled before, but I'm trying to determine where to use structs and where to use tuples and such in a large data model, and I was hoping to put together numbers for the performance of each design
<FromGitter> <Blacksmoke16> structs and tuples are both on the stack so they'd be essentially the same. It's suggested to use structs over tuples/namedtuples the majority of the time anyway
<FromGitter> <wrq> okay, that makes sense
<FromGitter> <wrq> thank you
<FromGitter> <Blacksmoke16> but to answer your question, i'd checkout https://crystal-lang.org/api/master/Benchmark.html
<FromGitter> <wrq> AH, okay. I'm embarrassed that I missed that, what an obvious name for it
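As an illustration of the `Benchmark` module linked above, a minimal `Benchmark.ips` sketch comparing the two layouts. The `PointStruct` type and the named-tuple literal are hypothetical, invented here purely for the comparison:

```crystal
require "benchmark"

# A tiny value type modeled both ways, to compare field-access cost.
struct PointStruct
  getter x : Int32
  getter y : Int32

  def initialize(@x, @y)
  end
end

point_struct = PointStruct.new(1, 2)
point_tuple  = {x: 1, y: 2}

# Benchmark.ips reports iterations per second for each block,
# so higher numbers mean faster.
Benchmark.ips do |bm|
  bm.report("struct access") { point_struct.x + point_struct.y }
  bm.report("named tuple access") { point_tuple[:x] + point_tuple[:y] }
end
```

As noted above, both live on the stack, so the two reports should come out close to identical; the sketch mostly shows the `Benchmark.ips` API shape.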
<FromGitter> <jwaldrip:matrix.org> Is 1.4 dropping today?
<FromGitter> <jwaldrip:matrix.org> I saw the release notes reference today's date
<FromGitter> <Blacksmoke16> sure looks like it, or if not very soon
<FromGitter> <Blacksmoke16> based on https://forum.crystal-lang.org/t/upcoming-crystal-1-4-release/4492, should be today yea
<FromGitter> <jwaldrip:matrix.org> Does the new version still require a compile time flag to get the interpreter?
<FromGitter> <jwaldrip:matrix.org> Or is it enabled by default now?
<FromGitter> <Blacksmoke16> still need to manually build it with the flag
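For reference, building the compiler with interpreter support at the time looked roughly like the following. The `interpreter=1` Makefile flag is taken from the crystal-lang/crystal build instructions, so verify it against the repo before relying on it:

```shell
# Build the compiler with the interpreter enabled (not on by default).
git clone https://github.com/crystal-lang/crystal
cd crystal
make crystal interpreter=1

# Then run a script under the interpreter:
bin/crystal i some_script.cr
```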
<FromGitter> <wrq> I want to implement something like a dispatch table for a series of pseudo-bytecode instructions. I'd like to look up each instruction and then yield my little VM to a proc, and execute each instruction that way. ⏎ ⏎ Is this wise? And also, I don't see a yield_self method, am I missing something crucial?
<FromGitter> <wrq> oh, that's wonderful. thanks again. I swear I actually *can* read, I guess I just overlook all the good stuff!
<FromGitter> <Blacksmoke16> iirc there are some issues with that tho
<FromGitter> <Blacksmoke16> so debatable if it's worth it over just not declaring a block arg
<FromGitter> <Blacksmoke16> but if it works fine for your use case 👍
<FromGitter> <wrq> Do you think it makes more sense then to just implement all the bytecodes as methods under the main VM class? I had already thought of that, but it just seemed inelegant. I haven't worked with a statically typed language before, so I don't really have the right intuition for this sort of thing. If it were ruby, I would just have the VM object be very simple and then have a big hash of procs as the values and use
<FromGitter> ... yield_self to execute them by passing in the vm object
<FromGitter> <Blacksmoke16> Got some example code?
<FromGitter> <wrq> doesn't compile, just a quick example I crapped out
<FromGitter> <wrq> I'm writing my own Push interpreter
<FromGitter> <wrq> but that's the general idea for execution
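The hash-of-procs design wrq describes can be sketched in Crystal like this. All names here (`VM`, `DISPATCH`, the two instructions) are hypothetical, since the original example code wasn't pasted into the channel:

```crystal
# A minimal proc-based dispatch table: each instruction is a Proc
# that receives the VM itself, mirroring ruby's yield_self idiom.
class VM
  getter stack = [] of Int32

  DISPATCH = {
    "push1" => ->(vm : VM) { vm.stack << 1 },
    "add"   => ->(vm : VM) { vm.stack << (vm.stack.pop + vm.stack.pop) },
  }

  def run(program : Array(String))
    program.each { |op| DISPATCH[op].call(self) }
  end
end

vm = VM.new
vm.run(["push1", "push1", "add"])
vm.stack # => [2]
```

Note the explicit `Proc(VM, Array(Int32))` restriction is inferred from the literals; in a statically typed setting every proc in the hash has to share one signature, which is part of why methods on the type (as suggested below) can be the simpler design.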
<riza> @wrq it seems like a good use of a macro to generate a dispatch method based off of some classes or methods
<riza> i suppose it depends on how many instructions are in your instruction set
<FromGitter> <wrq> I've been dreading learning macros, but that's because I'm lazy
<FromGitter> <Blacksmoke16> any reason to not just have them as methods on `vm` versus passing that in?
<FromGitter> <Blacksmoke16> would have to benchmark what would be more performant, but i imagine having methods on the same type would be faster than a bunch of procs
<FromGitter> <wrq> I have no clue, but I trust your intuition and I suppose I'll just do it that way
<riza> the benefit to macros is you can generate code which is very performant but still maintainable, paying the conversion cost at compile time rather than at runtime
<riza> going along with what Blacksmoke has suggested, you could easily make a macro that collects all the methods on your VM class, filters for methods with a given prefix, and generates the parser/dispatcher for that
<riza> So adding an instruction to your bytecode is as simple as defining a method called `instruction_add`
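A rough sketch of riza's macro idea, assuming `@type.methods` queried from inside the method body sees every `instruction_*` definition at expansion time. The class and instruction names are hypothetical, and this is a pattern sketch rather than a canonical implementation:

```crystal
class VM
  getter stack = [] of Int32

  def instruction_push1
    @stack << 1
  end

  def instruction_add
    @stack << (@stack.pop + @stack.pop)
  end

  # Generated at compile time: one `when` branch per instruction_* method,
  # so adding an instruction is just defining another method.
  def dispatch(op : String)
    {% begin %}
    case op
    {% for method in @type.methods %}
      {% if method.name.starts_with?("instruction_") %}
      when {{ method.name.stringify.gsub(/^instruction_/, "") }} then {{ method.name.id }}
      {% end %}
    {% end %}
    else
      raise "unknown instruction: #{op}"
    end
    {% end %}
  end
end
```

Running `crystal tool expand` on the `case` is a good way to check what the macro actually generates before trusting it.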
<FromGitter> <wrq> right, I definitely want to do as much as possible before runtime. Alright, well this has been very helpful information. I suppose I'll spend some time with macros and do it with methods.