ChanServ changed the topic of #crystal-lang to: The Crystal programming language | https://crystal-lang.org | Fund Crystal's development: https://crystal-lang.org/sponsors | GH: https://github.com/crystal-lang/crystal | Docs: https://crystal-lang.org/docs | Gitter: https://gitter.im/crystal-lang/crystal
<FromGitter> <michael-swan> With `BitArray`, is there an efficient way to extract a range of a BitArray as an Int32 or something?
<FromGitter> <michael-swan> I was looking through the interface and it looks like you can at best get back another BitArray if you extract a range
<FromGitter> <michael-swan> like `bits[0..6]` would give you another BitArray, but I would like to get the Int32 corresponding to those bits
<FromGitter> <michael-swan> I wish there was like a `#to_i`-like function which could return `UInt32?` such that it would give `nil` if the `BitArray` is too large to have its bits fit in a `UInt32`
<FromGitter> <michael-swan> Is there a way that I can extend or subclass the stdlib `BitArray` to add such functionality?
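A minimal sketch of the kind of helper being asked for, assuming `BitArray#[](Range)` returns a sub-`BitArray` as described above; `bits_to_u32?` is a hypothetical name, not a stdlib method:

```crystal
require "bit_array"

# Hypothetical helper: pack a range of a BitArray into a UInt32,
# returning nil when the range covers more than 32 bits.
def bits_to_u32?(bits : BitArray, range : Range) : UInt32?
  sub = bits[range]
  return nil if sub.size > 32
  value = 0_u32
  sub.each_with_index do |bit, i|
    value |= 1_u32 << i if bit # index 0 treated as the least significant bit
  end
  value
end

bits = BitArray.new(32)
bits[0] = true
bits[2] = true
puts bits_to_u32?(bits, 0..6) # => 5
```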
<FromGitter> <jrei:matrix.org> Why do you have a `BitArray` in the first place?
<FromGitter> <michael-swan> I'm decoding RISC-V instructions
<FromGitter> <michael-swan> And I want to break out sections of the instructions, e.g. opcode, funct3, funct7, rd, rs1, rs2, etc.
<FromGitter> <jrei:matrix.org> So, you can transform these bits to bytes, then use https://crystal-lang.org/api/1.6.2/IO/ByteFormat/LittleEndian.html
<FromGitter> <michael-swan> Okay, that's a good start, thanks. I think I might be able to do something like: ⏎ ⏎ ```funct3 = IO::ByteFormat::LittleEndian.decode(UInt8, bits[12..14].to_slice)``` [https://gitter.im/crystal-lang/crystal?at=63af8830fb195421bd6052e5]
<FromGitter> <michael-swan> I didn't test that, but it looks like something like that would do, though I have to imagine this is a pretty inefficient way of doing things
<FromGitter> <michael-swan> I might just have to break it out the old-fashioned way
<FromGitter> <michael-swan> That's already a little complicated
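For reference, the "old-fashioned way" mentioned above could look something like this sketch (not taken from the discussion), which treats the instruction as a `UInt32` and pulls out the base RV32I R-type fields with shifts and masks:

```crystal
# Field positions follow the RV32I base encoding:
# opcode = bits 0-6, rd = 7-11, funct3 = 12-14, rs1 = 15-19, rs2 = 20-24, funct7 = 25-31.
struct RTypeFields
  getter opcode : UInt8, rd : UInt8, funct3 : UInt8, rs1 : UInt8, rs2 : UInt8, funct7 : UInt8

  def initialize(inst : UInt32)
    @opcode = (inst & 0x7f).to_u8
    @rd     = ((inst >> 7) & 0x1f).to_u8
    @funct3 = ((inst >> 12) & 0x07).to_u8
    @rs1    = ((inst >> 15) & 0x1f).to_u8
    @rs2    = ((inst >> 20) & 0x1f).to_u8
    @funct7 = ((inst >> 25) & 0x7f).to_u8
  end
end

fields = RTypeFields.new(0x00b50533_u32) # add a0, a0, a1
puts fields.opcode.to_s(16)              # => "33"
puts fields.rs2                          # => 11 (a1)
```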
<FromGitter> <jrei:matrix.org> why inefficient? Not that much
<FromGitter> <jrei:matrix.org> some structs are created, but that's it
<FromGitter> <michael-swan> I'll have to experiment; it really is a function of how much inlining and simplification the compiler can do with that. This is an instruction decoder and it will be a pretty hot function for my purposes
<FromGitter> <michael-swan> I'm trying to write an emulator
<FromGitter> <michael-swan> Instruction decoding needs to be fast, but I'm also trying to take advantage of the expressive features of this language
<FromGitter> <jrei:matrix.org> I guess make it work, then make it fast(er)
<FromGitter> <michael-swan> Yeah
<FromGitter> <michael-swan> I'll experiment. Interestingly Godbolt has Crystal support
<FromGitter> <michael-swan> So I could run some tests there
<FromGitter> <jrei:matrix.org> sometimes code seems inefficient, but LLVM can often optimize things out
<FromGitter> <michael-swan> Gotcha
<FromGitter> <michael-swan> Yeah this language is pretty sick. I used to write Ruby in high school and then gave it up because it was ungodly slow.
<FromGitter> <michael-swan> My hope is that there is enough smarts in this compiler to be usable for my purposes, i.e. emulation
<FromGitter> <jrei:matrix.org> Perhaps you could use `IO::Memory`?
<FromGitter> <michael-swan> Yeah, that might be the move for this
<FromGitter> <jrei:matrix.org> if what you want is to buffer these instructions, then clear the buffer and reuse it to store subsequent instructions
<FromGitter> <michael-swan> I mean, that's likely how this information will be stored anyways. I haven't really even considered how I should best represent my volatile memory. `IO::Memory` is likely the move
<FromGitter> <jrei:matrix.org> but you just have to store bits, not bytes 🤔
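A minimal sketch of the `IO::Memory` idea, assuming the guest RAM is just a zeroed `Bytes` buffer wrapped in an `IO::Memory` (the size and offsets here are made up):

```crystal
memory = IO::Memory.new(Bytes.new(64 * 1024)) # 64 KiB of zeroed guest memory

# Write a little-endian word at a byte offset...
memory.pos = 0x100
memory.write_bytes(0x00b50533_u32, IO::ByteFormat::LittleEndian)

# ...then seek back and read it out as the next instruction word.
memory.pos = 0x100
inst = memory.read_bytes(UInt32, IO::ByteFormat::LittleEndian)
puts inst.to_s(16) # => "b50533"
```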
<FromGitter> <michael-swan> Well I mean, the bits thing is just meant as a neat abstraction: thinking of my instructions in terms of bits without having to perform a series of Int32-to-Int32 transformations such as bit shifts and AND bit masks, etc.
<FromGitter> <michael-swan> Like, that'd be neat if I could get away with writing code in that way at no cost
<FromGitter> <michael-swan> BitArray seemed to fit that ask
<FromGitter> <jrei:matrix.org> Indeed
<FromGitter> <jrei:matrix.org> Also note there is a pointer available at `bits.@bits`
<FromGitter> <michael-swan> But I don't want to make any assumptions. I'd rather learn early what the hell it actually compiles to so I'm not constructing a silly house of cards
<FromGitter> <michael-swan> Oh okay
<FromGitter> <michael-swan> That's good to know
<FromGitter> <michael-swan> Can I actually reference it that way?
<FromGitter> <michael-swan> Like if `bits : BitArray`, I can actually just evaluate `bits.@bits`?
<FromGitter> <michael-swan> Also, I keep trying `crystal i` and it says it hasn't been compiled in, but why not?
<FromGitter> <jrei:matrix.org> ivars can be accessed this way in the language, but it is not recommended
<FromGitter> <michael-swan> I can see why it's not recommended but definitely good to know
<FromGitter> <jrei:matrix.org> obviously getters are designed to publicly access them
<FromGitter> <michael-swan> Right
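A quick sketch of that `obj.@ivar` syntax, assuming the stdlib still backs `BitArray` with `UInt32` words (an implementation detail that could change):

```crystal
require "bit_array"

ba = BitArray.new(32)
ba[0] = true
ba[5] = true

ptr = ba.@bits # direct read of the instance variable, bypassing the public API
puts ptr.value # => 33, if bit 0 is stored as the LSB of the first UInt32 word
```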
<FromGitter> <michael-swan> Also, am I able to locally extend stdlib code like this?
<FromGitter> <jrei:matrix.org> yes, can be helpful for low-level access, debugging
<FromGitter> <jrei:matrix.org> You can monkey patch and re-open objects, but that's another thing.
<FromGitter> <michael-swan> Like I can do something like ⏎ ⏎ ```class BitArray ⏎ def bits ⏎ @bits ⏎ end ⏎ end``` [https://gitter.im/crystal-lang/crystal?at=63af8cd640557a3d5c51d323]
<FromGitter> <michael-swan> I presume?
<FromGitter> <michael-swan> Assuming `BitArray` is in scope
<FromGitter> <jrei:matrix.org> doing that for the stdlib has to be avoided at all costs, unless you can't do it any other way
<FromGitter> <jrei:matrix.org> Better not to monkey patch and just use `#@bits`
<FromGitter> <michael-swan> Sure, I get your point, I'm not trying to go to crazy town with my changes
<FromGitter> <michael-swan> I'm really just trying to understand the language and its rules
<FromGitter> <jrei:matrix.org> Another option is to make a custom `IO` for `IO::ByteFormat::LittleEndian.decode`, in your case a custom buffer.
<FromGitter> <jrei:matrix.org> Instructions are fixed-length at the root; what do you have at first, before all of this, as an input?
<FromGitter> <michael-swan> I'm not sure what you're asking. RISC-V is sometimes fixed-length but they also have extensions that introduce 16-bit instructions
<FromGitter> <michael-swan> Which is something I would eventually intend to support
<FromGitter> <michael-swan> What do you mean by "what do you have at first..."?
<FromGitter> <jrei:matrix.org> So the instruction you want to decode is a sequence of 32 or 16 bits, right?
<FromGitter> <jrei:matrix.org> Which are in a file I suppose, which is opened as an IO.
<FromGitter> <michael-swan> Yeah, I mean, I'll probably read in some instructions from disk while others will be written back into memory by the emulated code
<FromGitter> <michael-swan> e.g. if the emulator is internally running a Unix distro which then loads a program from its virtual hard drive
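A sketch of reading fixed-width instruction words straight from a file IO, assuming a raw little-endian RV32 image at a hypothetical path "program.bin":

```crystal
buffer = Bytes.new(4)

File.open("program.bin") do |io|
  # read_fully? returns nil at a clean EOF, so this stops at the end of the image
  while io.read_fully?(buffer)
    inst = IO::ByteFormat::LittleEndian.decode(UInt32, buffer)
    puts "%08x" % inst
  end
end
```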
<FromGitter> <michael-swan> I mean it is essential to view this memory as a byte array sometimes and be able to decode a UInt16 or UInt32 from an offset into it, like a pointer.
<FromGitter> <michael-swan> Ideally doing so in the cheapest possible way because this will happen a whole lot
<FromGitter> <jrei:matrix.org> BitArray you meant?
<FromGitter> <michael-swan> Well it would be nice to be able to take a byte offset into memory and then view it as a BitArray, but that was not what I was just saying.
<FromGitter> <michael-swan> I was just saying that the memory abstraction is typically indexed in terms of byte offsets
<FromGitter> <michael-swan> Which means I need a way to index in a byte-wise manner
<FromGitter> <michael-swan> And sometimes I may want to get a UInt32 out of memory so there ought to be a sane and fast way to basically index into this memory and get out a four-byte word.
<FromGitter> <michael-swan> I can go one byte at a time for such operations and then piece together an endian-correct UInt32 or whatever.
<FromGitter> <michael-swan> But that is slower than an aligned word load from that offset, which is essentially what I want
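A sketch of that "aligned word load" with plain `Bytes`, decoding a `UInt32` at a byte offset via `IO::ByteFormat`, with an unchecked pointer cast shown as the lower-level alternative:

```crystal
memory = Bytes.new(1024)

# Encode a word at a byte offset, then decode it back without touching
# individual bytes by hand.
IO::ByteFormat::LittleEndian.encode(0xdeadbeef_u32, memory[0x40, 4])
word = IO::ByteFormat::LittleEndian.decode(UInt32, memory[0x40, 4])
puts word.to_s(16) # => "deadbeef"

# Lower-level alternative: reinterpret the pointer directly. This skips bounds
# checks and assumes the host is little-endian like the guest.
unsafe_word = (memory.to_unsafe + 0x40).as(Pointer(UInt32)).value
puts unsafe_word == word # => true (on a little-endian host)
```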
<FromGitter> <michael-swan> Similarly, if I could view chunks I extract from memory in a bit-wise fashion without incurring major overhead, that'd be nice, though I suspect any such abstraction will necessarily break things down bit by bit and wouldn't be optimized down to the bit-shift and bit-masking operations that would be optimal
<FromGitter> <jrei:matrix.org> I think the way to go may be to use operations on Bytes, without BitArray
<FromGitter> <michael-swan> I suspect that is so