klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
<mrvn> implementation wise c can be anything but anything outside 0/1 is UB.
<mrvn> ===> this example should not warn, but others should, even when you cover all members of the enum.
<zid> I want a secure hash, gotta make sure nobody else knows what my files contain just because they know the hash!
<mrvn> zid: nonon, you want them to see that the file contains one thing while it contains another.
<mrvn> If they can't see what's in the file that's suspicious already.
<mrvn> A hash is usually so small there is really no way to deduce the content of a file from it. Encrypting a file is another matter, but indexing? How are you going to say what a file contains from 256 bits unless you already know?
mahmutov has quit [Ping timeout: 246 seconds]
<geist> but yeah after thinking about it and reading a bit, sha256 is pretty difficult to collide
<geist> as in, no one has ever been able to do it, and given the probability of it there *may* have been a collision, especially with all the mining, in the sense that enough sha hashes have been rolled over the years that it's possible someone has made an identical one
<geist> but of course no one detected it
<mrvn> Practically there is also a big difference between an accidental and purposeful collision.
<zid> yea the chances of detecting it are also miniscule
<zid> on top of the chances of there being one
<mrvn> For the mining a single signature might have a collision but a) you couldn't make money off it and b) the other packet would throw errors if you substituted it in the chain there.
<zid> 2^256 is like.. the number of total transistors ever made, squared squared
<mrvn> What does the birthday paradox say the probability of a collision is with 2^32 hashes? or 2^64?
<mrvn> ; (1-1/(2^256-2^32)) ^ (2^32)
<mrvn> Raising to very large power
<mrvn> *damn*
<mrvn> ; ((1-1/(2^256-2^32)) ^ 2) ^ (2^16) ~1.00000000000000000000
<mrvn> It's just an estimate but still pretty collision-free.
mykernel has joined #osdev
<mrvn> ; 1 - (((1-1/(2^256-2^32)) ^ 2) ^ (2^16)) ~0.00000000000000000000
<mrvn> I think apcalc doesn't use enough digits.
<mrvn> Let's ask Wolfram Alpha: 3.709 * 10^-68
X-Scale` has joined #osdev
<mrvn> 2^64 hashes: 1.593 * 10^-58
koolazer has quit [Read error: Connection reset by peer]
<mrvn> I think I messed up the estimate.
X-Scale has quit [Ping timeout: 240 seconds]
[X-Scale] has joined #osdev
X-Scale` has quit [Ping timeout: 244 seconds]
[X-Scale] is now known as X-Scale
<mrvn> I broke Wolfram alpha: ((2^256-2^64)/(2^256))^(2^64) == 10^(10^5.417461594016476), should be less than 1.
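An aside on the estimate above, using nothing beyond the standard birthday approximation: the expression (1 - 1/2^256)^(2^32) answers a slightly different question, namely the chance that 2^32 fresh hashes all miss one particular value, so 1 minus it lands near 2^32 / 2^256 = 2^-224, about 3.7 * 10^-68. The all-pairs collision probability for n random hashes into d = 2^256 possible values is

    p ≈ 1 - exp(-n(n-1) / (2d)) ≈ n^2 / (2d)

which comes out to roughly 2^-193 (about 8 * 10^-59) for n = 2^32 and 2^-129 (about 1.5 * 10^-39) for n = 2^64: larger than the figures quoted above, but still vanishingly small.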
arch-angel has quit [Ping timeout: 260 seconds]
arch-angel has joined #osdev
nyah has quit [Ping timeout: 244 seconds]
mykernel has quit [Ping timeout: 246 seconds]
srjek has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
Likorn has quit [Quit: WeeChat 3.4.1]
arch-angel has quit [Ping timeout: 276 seconds]
gog has quit [Ping timeout: 255 seconds]
_73 has joined #osdev
SGautam has quit [Quit: Connection closed for inactivity]
qubasa has joined #osdev
X-Scale` has joined #osdev
X-Scale has quit [Ping timeout: 255 seconds]
X-Scale` is now known as X-Scale
<_73> I just thought this was interesting - a process's kernel stack must always be kept in memory, because it is the process's responsibility to handle a page fault and the kernel stack is needed to run the handler. If the process page faulted without its kernel stack there would be nothing the kernel or process could do, so the kernel just panics; this scenario can only occur if there is a major bug in the kernel.
<_73> FreeBSD btw
<Mutabah> A fault without a kernel stack will double-fault (on x86)
<Mutabah> which (if the kernel is defensively designed) will use a different (statically allocated) stack
<Mutabah> and if that's also bad, triple fault!
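A minimal sketch of the statically allocated fallback stack Mutabah describes, using the x86-64 IST mechanism; struct tss64 follows the layout in the Intel SDM, while idt_set_gate() and double_fault_handler are hypothetical helper names, not from any particular kernel:

    /* Sketch only: give the #DF handler its own statically allocated stack via
     * IST1, so a fault taken with a bad kernel stack still has somewhere to run. */
    #include <stdint.h>

    struct tss64 {
        uint32_t reserved0;
        uint64_t rsp[3];          /* RSP0..RSP2: stacks used on privilege change */
        uint64_t reserved1;
        uint64_t ist[7];          /* IST1..IST7: interrupt stack table */
        uint64_t reserved2;
        uint16_t reserved3;
        uint16_t iopb_offset;
    } __attribute__((packed));

    extern void double_fault_handler(void);                        /* hypothetical */
    extern void idt_set_gate(int vec, void (*h)(void), int ist);   /* hypothetical */

    static uint8_t df_stack[4096] __attribute__((aligned(16)));

    void setup_double_fault_stack(struct tss64 *tss)
    {
        /* an IST entry holds the value loaded into RSP, i.e. the top of the stack */
        tss->ist[0] = (uint64_t)&df_stack[sizeof(df_stack)];
        /* vector 8 is #DF; its IDT gate selects IST slot 1 */
        idt_set_gate(8, double_fault_handler, 1);
    }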
<mrvn> my processes have no kernel stack.
<mrvn> CPU cores have kernel stacks, not processes, for me.
<_73> what is the OS called?
<mrvn> moose
<_73> Cool, ill read about it.
<Mutabah> ^ per-cpu stacks are another approach. More complex, but it forces good design and lower memory usage
<mrvn> I disagree, it's much simpler.
<_73> How do per cpu stacks work?
<mrvn> It's geared towards microkernels though, as you can't sleep in the kernel. Can't do anything complex, really.
<mrvn> _73: you never sleep or block in the kernel and task switching happens when you leave kernel mode.
<mrvn> So you only ever switch the process's stack and never the kernel stack.
<_73> ok I see how that is a fundamentally different design.
<mrvn> Also means you save the process's registers in the process structure when entering kernel mode, not on the stack.
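A minimal sketch of that per-CPU-stack model, assuming a non-preemptible kernel that never sleeps; NCPUS, struct process, current[], pick_next() and handle_syscall_or_fault() are illustrative names, not from moose or any other kernel:

    /* Per-CPU kernel stacks, sketched: user register state is saved into the
     * process structure on kernel entry, the kernel runs on this CPU's own stack
     * and never blocks, and the task switch happens only on the way back out. */
    #include <stdint.h>

    #define NCPUS 4

    struct regs { uint64_t gpr[16], rip, rflags, rsp; };

    struct process {
        struct regs user_regs;      /* saved here, not on a per-process kernel stack */
        /* ... address space, run state, ... */
    };

    extern struct process *current[NCPUS];        /* per-CPU current process */
    extern uint8_t kstack[NCPUS][16384];          /* one kernel stack per core */

    extern void handle_syscall_or_fault(int cpu); /* must not block or sleep */
    extern struct process *pick_next(int cpu);    /* scheduler decision */

    /* Called from the low-level entry stub, already running on kstack[cpu]. */
    void kernel_entry(int cpu, struct regs *trapframe)
    {
        current[cpu]->user_regs = *trapframe;     /* save into the process struct */
        handle_syscall_or_fault(cpu);
        current[cpu] = pick_next(cpu);            /* task switch on the way out... */
        *trapframe = current[cpu]->user_regs;     /* ...by restoring a different frame */
        /* the stub returns to user mode; this CPU's kernel stack is empty again */
    }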
arch-angel has joined #osdev
<mrvn> It's raining cats and dogs. Time to implement a suffix tree.
<Andrew> Hashes, when hashing things bigger than the hash itself, are never collision-free
Likorn has joined #osdev
<Andrew> mrvn: I don't know of one, but it's not intuitive why there *can't* be - unless we're magically compressing data ... using 256 bits to represent files of large size, I mean, how?
nur has quit [Quit: Leaving]
<mrvn> Andrew: the question isn't if you are hashing things bigger than the hash but if you are hashing more things than the hash result can represent
<Andrew> - yes
<mrvn> Potentially you can hash 2^256 files to unique hashes regardless of their size.
<Andrew> 115792089237316195423570985008687907853269984665640564039457584007913129639936
<Andrew> note: 'potentially'
vdamewood has joined #osdev
katek has quit [Quit: Leaving]
GeDaMo has joined #osdev
azu has joined #osdev
arch-angel has quit [Ping timeout: 272 seconds]
Jari-- has joined #osdev
<Jari--> Hi people, my kernel is crashing unless I use the same CPU frequency (synced) as the bridge speed?
<Jari--> So should I do some power management in this case?
<Jari--> I am basically hitting CPU idle all the time.
<Jari--> Is this normal in NT and Linux kernels? It idles all the time?
<Jari--> CPU bug
<Jari--> I notice this in I/O sensitive tasks like Hard Disk Driver.
<Jari--> Basically I/O should not be poked faster than the bridge speed is.
<Jari--> And each device has its own throughput limitations.
<Jari--> Impossible to do without power management or some related regulation.
<Jari--> On virtual machines I notice it throws a virtualization exception.
<mrvn> That makes no sense. you are doing something wrong.
<Jari--> mrvn: well in that case my kernel embedded procmalloc (property-allocation) is broken still :P
<Jari--> wrote down a note, I noticed later, re-read
<Jari--> making the kernel's dynamic memory allocation system stable is a todo
<Jari--> but it's just running a single thread so it should not be so unstable
Likorn has quit [Quit: WeeChat 3.4.1]
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
zaquest has quit [Remote host closed the connection]
zaquest has joined #osdev
<bslsk05> ​forum.osdev.org: OSDev.org • View topic - That pesky FPU again
<Jari--> I was just designing the task-switching for my OS, and as part of that, there is the question of FPU handling. That is, the handling of any register set besides the general purpose ones (for simplicity referred to as FPU from here on). First, there is the question of laziness or eagerness. In an age of increasing logical CPU counts, lazy FPU save seems like a bad idea, since a sleeping task with an unsaved
<Jari--> FPU cannot be transferred to another CPU. So lazy FPU save would make the scheduler more complicated, and it already is plenty complicated.
<Jari--> -This is interesting factual talk about the same issue I have, OS/2 3.0 has this, etc.
<Jari--> If I recall, on *some custom hardware, like cloned minilaptops I have, ..just bigger monitor here.
<Jari--> Could be related with the core handling, I have two types of cores e.g.
<Jari--> 2 type A and 2 type B
<Jari--> cheapest minilaptops = lots of fun for low level I/O programming (this is common according to my experience)
<bslsk05> ​lore.kernel.org: [PATCH 0/4] x86/fpu: Make AMX state ready for CPU idle
<Jari--> VMware has better hardware support and handling than VirtualBox
<Jari--> fewer crashes; tried more than 20 different OS platforms
<Jari--> and unhandled exceptions might be the reason I am not seeing crashes on non-functioning code
<Jari--> handling all the common 386 processor faults
<mrvn> have you tested how many of your tasks use the fpu regularly?
<mrvn> If you tell the compiler not to use SSE, that number probably approaches 0
<mrvn> (just a guess)
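For reference, the classic lazy-FPU scheme the quoted forum post is weighing against eager saving looks roughly like this on x86-64; a hedged sketch where struct task, current, fpu_owner and the helpers are illustrative names:

    /* Lazy FPU switching, sketched: set CR0.TS on every context switch and only
     * save/restore x87/SSE state when a task actually touches it, which the CPU
     * reports as a #NM (device-not-available) exception. */
    #include <stdint.h>

    struct task {
        uint8_t fpu_state[512] __attribute__((aligned(16)));   /* FXSAVE area */
        int fpu_used_once;
        /* ... */
    };

    extern struct task *current;     /* task running on this CPU */
    static struct task *fpu_owner;   /* task whose state is live in the FPU */

    static inline void set_cr0_ts(void)
    {
        uint64_t cr0;
        __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
        __asm__ volatile("mov %0, %%cr0" :: "r"(cr0 | (1ull << 3)));  /* TS is bit 3 */
    }

    void on_context_switch(void)
    {
        set_cr0_ts();   /* the next FPU/SSE instruction from the new task traps */
    }

    void nm_exception_handler(void)
    {
        __asm__ volatile("clts");        /* re-enable FPU use for this task */
        if (fpu_owner == current)
            return;
        if (fpu_owner)
            __asm__ volatile("fxsave %0" : "=m"(fpu_owner->fpu_state));
        if (current->fpu_used_once)
            __asm__ volatile("fxrstor %0" :: "m"(current->fpu_state));
        else
            __asm__ volatile("fninit");  /* first use: start from a clean x87 state */
        current->fpu_used_once = 1;
        fpu_owner = current;
    }

The complication the post points at is that with lazy saving the live state may still be sitting in another CPU's FPU when the task is migrated, which is why eager save/restore at every switch has become the common choice.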
<Jari--> Apparently I can't write cache code, or I wrote this in a confused state of mind.
<Jari--> Tempting to remove the cache entirely.
<Jari--> E.g. rewrite the cache, problem solved.
<Jari--> This is actually buffers, but I call it a cache.
<Jari--> FAT buffer e.g.
<moon-child> here is a question, not really related but it reminded me:
Terlisimo has quit [Quit: Connection reset by beer]
<moon-child> suppose you have big.little with actual architectural differences between the cores. Cf new intels--had avx512 on the big cores, not on the little ones--before they fused off avx512 on the latter
<moon-child> as an application developer, I would actually not mind writing code tuned both for the big cores and for the little cores
<moon-child> but, what interfaces should the os provide to support this?
<GeDaMo> Feature flags?
<moon-child> obviously you can't just move a thread from a big core to a little core. But can you provide some way of migrating control state, perhaps at a safepoint?
<GeDaMo> I seem to remember a problem on some ARM where an instruction was only supported on some of the cores
<moon-child> (and given such an interface, would it make sense to keep running the thread on a big core until it reaches a safe point, or move it to a little core and use sw emulation for the unsupported features?)
Terlisimo has joined #osdev
<mrvn> if it's using the big features once it's probably going to do it a lot. emulating seems pointless.
<mrvn> You can migrate when it throws an illegal instruction
Burgundy has joined #osdev
<moon-child> mrvn: the idea is that the safepoints should come at fairly close intervals, so it only has to do a small amount of work before it can switch to the little-specific code
<mrvn> If you design software to switch code paths then migrate it only on safepoints
<mrvn> it's not like a core disappears randomly.
<moon-child> but suppose all the big cores are tied up, and a little core is free. Isn't it better to let it make progress, even with sw emulation?
<mrvn> If all your cores are tied up you need a bigger mobile
<moon-child> I have m big cores and n little cores. I should be able to run m+n threads at once. But your way, the latency of safepoints becomes a bottleneck
<mrvn> not really. you just have to run m big threads and n little threads
<moon-child> suppose I used to have m+1 big threads and n little threads. And then one of the little threads died. So I need to turn one of the big threads into a little thread
<moon-child> with sw emulation, I can start on that right away
<moon-child> without it, I have to wait until one of the existing big threads hits a safepoint
<mrvn> As I see it the usual case will be running a single game that wants to get everything out of your mobile. It shouldn't have m+1 big threads. And other apps that occasionally run should run little code.
<mrvn> And you don't have to wait for a safepoint to schedule. Just to migrate.
<moon-child> indeed. Would be bad if you did :P
<mrvn> I think you are making too much of an issue out of a made up problem. Worst case you run m+1 threads on m cores till a safepoint is reached. That's not the end of the world.
kingoffrance has joined #osdev
<mrvn> I wouldn't even think of designing such safepoints. Switching between big and little code seems like too much work already. At most I would mark threads that use big opcodes to run only on the big cores from now on.
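One way the application side of that safepoint idea could look, sketched in C; the interface itself (os_migration_pending(), os_complete_migration(), os_running_on_big_core()) is hypothetical, as are the two tuned code paths:

    /* Sketch: a worker processes data in chunks and checks for a migration
     * request at each safepoint, where no big-core-only register state is live,
     * then switches between a big-core-tuned path and a baseline path. */
    #include <stdbool.h>
    #include <stddef.h>

    extern bool os_running_on_big_core(void);   /* hypothetical query */
    extern bool os_migration_pending(void);     /* hypothetical: OS wants to move us */
    extern bool os_complete_migration(void);    /* hypothetical: lets the scheduler move
                                                   the thread; returns true if we are now
                                                   on a big core */

    void process_chunk_avx512(const float *in, float *out, size_t n);  /* big-core path */
    void process_chunk_scalar(const float *in, float *out, size_t n);  /* little-core path */

    void worker(const float *in, float *out, size_t n, size_t chunk)
    {
        bool on_big = os_running_on_big_core();
        for (size_t i = 0; i < n; i += chunk) {
            size_t len = (n - i < chunk) ? (n - i) : chunk;
            if (on_big)
                process_chunk_avx512(in + i, out + i, len);
            else
                process_chunk_scalar(in + i, out + i, len);

            /* safepoint: cheap to migrate here, nothing ISA-specific is in flight */
            if (os_migration_pending())
                on_big = os_complete_migration();
        }
    }

The simpler policy mrvn suggests is to skip safepoints entirely and just pin a thread to the big cores the first time it traps on a big-only opcode.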
nyah has joined #osdev
Jari-- has quit [Ping timeout: 244 seconds]
jafarlihi has joined #osdev
<bauen1> would there really be that many tasks that actually benefit from the better opcodes, or have higher performance requirements? in your average linux system there are quite a few (system) tasks that probably wouldn't benefit from better simd instructions (e.g. network services)
<bauen1> obviously if you're running games you probably have a way to make use of those faster instructions
<jafarlihi> What are some easier system software projects you can take on before being ready for osdev?
<mrvn> bauen1: can't think of anything but games or hpc that really would benefit.
<bauen1> jafarlihi: the average "introduction to operating systems" course seems to involve writing your own toy shell in C, and e.g. parsing a FAT32 filesystem
jafarlihi has quit [Ping timeout: 250 seconds]
Burgundy has quit [Ping timeout: 246 seconds]
<moon-child> bauen1: well, sure, yes; if you don't need to go fast, no need to do any tuning
<moon-child> but in cases where performance does matter, you frequently profit by taking advantage of available isa extensions, as well as uarch minutiae
CaCode has joined #osdev
<j`ey> you could also use taskset to just force it to only execute on the big cores
<moon-child> I want to use all the cores though
jjuran has quit [Quit: Killing Colloquy first, before it kills me…]
jjuran has joined #osdev
Likorn has joined #osdev
<mrvn> then don't run big code on all threads
<mrvn> And no, generally you want to run as few cores as possible. that's the idea of big/little.
<moon-child> mrvn: idea of big.little, I thought, is that, if you don't have compute intensive work, you run it on the little cores and save power
<mrvn> exactly
<moon-child> if you _do_ have compute intensive work, use the big cores
<moon-child> use all the cores
<Andrew> little endian sucks!
<moon-child> eh, why?
<mrvn> Andrew: it's not about endianness
<Andrew> i know, just made me think of it
<j`ey> moon-child: the best way is to, obviously, just have symmetrical cores :P
<moon-child> Andrew: oh, i thought it was a non sequitur
<mrvn> moon-child: do you have the memory bandwidth to run all cores?
<Griwes> But symmetrical cores make for less problems to solve in your schedulers!
<moon-child> mrvn: good question. Only big.little hardware I have is a mac, so the answer to 'do you have the memory bandwidth' is 'yes' :P
<moon-child> but more generally maybe not
mykernel has joined #osdev
<mykernel> i am currently learning i386 assembly, how calling conventions work and how the stack works. i think i understand stack alignment (before a call you need to have the stack word-aligned), but i have seen some people talking about using nop to do alignment (like aligning an instruction address). can someone give me a push in the right direction? Preferably with practical examples using nop
<mrvn> big mistake. learn at least x86_64
<j`ey> moon-child: high five for m1
<mykernel> mrvn: i will learn it afterwards
<moon-child> mykernel: usually, you will not manually insert nops, but will instead direct the assembler to insert them for you, since you won't know ahead of time what offset your code will be at
<moon-child> e.g. in gas use '.align k' for a k-byte alignment
<moon-child> that said, you should not be worrying about aligning code at this stage
<mykernel> what needs to be aligned?
<j`ey> a non-x86 example is the exception/vector table on arm needs to be aligned
<moon-child> whatever the abi says needs to be aligned--it depends on the abi
arch-angel has joined #osdev
<mykernel> thanks
<moon-child> the stack, as you know; also, frequently integers must be self-aligned
<mrvn> it's not that code needs to be aligned, it's that some alignments are faster to jump to
<moon-child> there are usually no requirements regarding code alignment
<moon-child> yeah
<mrvn> Also it's sometimes faster to issue an opcode a cycle later, or half or a quarter cycle so you insert a NOP to delay it
<moon-child> mrvn: afaik it's not even usually that aligned is faster to jump to; it's that if an entire loop (or w/e) fits in some self-aligned small chunk of memory, it's faster
<mrvn> lots of architecture and model specific stuff anyway
<moon-child> yeah
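A small GNU as example of the directive mentioned above, in 32-bit AT&T syntax to match the i386 discussion; zero_bytes is just an illustrative function, and the padding NOPs before the loop label are emitted by the assembler rather than written by hand:

        .text
        .globl  zero_bytes
    # void zero_bytes(char *buf, int n) -- cdecl, uses only caller-saved registers
    zero_bytes:
        movl    4(%esp), %edx       # buf
        movl    8(%esp), %ecx       # n
        testl   %ecx, %ecx
        jz      2f
        .p2align 4                  # pad with NOPs to a 2^4 = 16-byte boundary
                                    # (on x86 ELF, ".align 16" does the same)
    1:                              # the loop entry point now starts aligned
        movb    $0, (%edx)
        incl    %edx
        decl    %ecx
        jnz     1b
    2:
        ret

As moon-child says, this is a micro-optimisation: the win, when there is one, comes from the whole loop body fitting in an aligned fetch block or cache line.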
<mrvn> Does x86 icache have a 64byte cache line too?
<moon-child> I think so
* moon-child looks at agner
<mrvn> On Alpha the cpu would read opcodes in pairs so if you jump to an odd slot you waste half the opcodes for that cycle.
arch-angel has quit [Ping timeout: 272 seconds]
CaCode has quit [Quit: Leaving]
<zid> just don't look at agner in anger
<clever> mrvn: i believe cortex-m0+ does the same, it's thumb-based, so 16bit opcodes, but the opcode fetch is always 32bit wide and 32bit aligned
mavhq has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
mavhq has joined #osdev
<mrvn> except thumb can be misaligned I believe
<kazinsal> conveniently we have the ARM extension "magic" for that
mykernel has quit [Quit: leaving]
<clever> mrvn: ive not heard of that being allowed
mahmutov has joined #osdev
<mrvn> geist mentioned that you could have one 16bit thumb and then 32bit opcodes.
<clever> mrvn: but you can only switch between thumb and arm when you branch, and one of the address bits (bit0 i think) signals which mode the dest is
<clever> the cortex-m0+ also doesnt support 32bit opcodes
<clever> so its forever thumb
gog has joined #osdev
wootehfoot has quit [Read error: Connection reset by peer]
<mxshift> Thumb2 32-bit opcodes are kinda like a pair of 16-bit opcodes under the hood so no need for 32b alignment
<clever> ahh, thats something different from arm 32bit
dennis95 has joined #osdev
mahmutov has quit [Ping timeout: 240 seconds]
nur has joined #osdev
mahmutov has joined #osdev
the_lanetly_052_ has joined #osdev
the_lanetly_052 has quit [Ping timeout: 255 seconds]
Vercas has quit [Quit: Ping timeout (120 seconds)]
Vercas has joined #osdev
Ali_A has joined #osdev
robert_ has joined #osdev
qubasa has quit [Ping timeout: 260 seconds]
qubasa has joined #osdev
Burgundy has joined #osdev
qubasa has quit [Ping timeout: 260 seconds]
Likorn has quit [Quit: WeeChat 3.4.1]
dra has joined #osdev
mrlemke_ has joined #osdev
_mrlemke_ has quit [Ping timeout: 240 seconds]
_mrlemke_ has joined #osdev
mrlemke_ has quit [Ping timeout: 246 seconds]
papaya has quit [Quit: Lost terminal]
dennis95 has quit [Quit: Leaving]
_73 has quit [Remote host closed the connection]
gxt has quit [Ping timeout: 240 seconds]
gxt has joined #osdev
the_lanetly_052_ has quit [Ping timeout: 255 seconds]
Likorn has joined #osdev
* geist yawns
<geist> good morning everyone
<zid> we just had one of those though
Ali_A has quit [Quit: Connection closed]
<jjuran> What about second morning
<bauen1> jjuran: time to move to a planet in a solar system with multiple stars i guess
<bauen1> i mean how many stars would you need so you could say "good morning" any time and be right ?
<zid> 1
<zid> just move yourself
<zid> sunrise travels at 1600kph on earth
<zid> so you can either go backwards at that speed and get them twice as often, or go forwards at that speed and have it be perpetual morning
<zid> or whatever combination you fancy
<geist> tis true, could have a tatooine morning
<nur> watch out for the krayt dragons
_mrlemke_ has quit [Quit: Leaving]
<bauen1> couldn't you move to the north or south pole, that should work for roughly half a year right ?
<zid> you can only go so far north before you get to twilight instead of night
<zid> like say, the UK in summer
<bauen1> but moving around the earth at 1600kph sounds tiring, I would opt to just slow down earths rotation enough to be locked to the sun
<zid> you'd still get a 'sunrise' but the more north you go to cut the speed you'd need down, the less 'morning' makes sense as a concept
<zid> Yea you could just go chill out on mercury, that one is tidally locked
<geist> or on the north (or south pole) of uranus
<geist> oh wait i dont think that's tidally locked either, just sitting sideways
<geist> but probably pointing the sun for a few hundred years at a time
<bauen1> if your only requirement is "always morning for a human life", then I guess even the moon would fulfill that, you just wouldn't have a very long morning
<\Test_User> "nearly anywhere with a non-breathable atmosphere ought to fulfill that"
<\Test_User> ...though it fulfills it by reducing the timespan required rather than extending the morning
<zid> You can have 'morning' forever if you just stop moving, watch out for passing planets. Or falling into the sun. Or the sun orbitting the galactic core away from you.
<zid> (bring your own fun gasses, not supplied)
<jimbzy> Hallo urrybody.
<geist> howdy
<jimbzy> How's things?
<sortie> I finally have a day off from my crazy life and just am sitting here looking out over the city doing some osdev :)
<jimbzy> Nice
<geist> yay
<sortie> Working on my xkcd 1579 compliance :)
<sortie> https://xkcd.com/1579/ ← Even got the "irc for some reason" part nailed down
<bslsk05> ​xkcd - Tech Loops
<sortie> Once I get audio and can play youtube videos, I might actually have connected it to "things I actually want to use my computer for"
<geist> kazinsal: got the new mobo yesterday, seems stable so far
GeDaMo has quit [Quit: There is as yet insufficient data for a meaningful answer.]
<bauen1> sortie: oh wow, didn't know that existed, checking in with what is technically my own debian derivative lol
<sortie> :D
<sortie> geist, neato :) What motivated getting a new one?
<geist> oh was having some stability problems with my ryzen server. kinda a long ordeal
<sortie> Ah :)
<geist> eventually settled on the old motherboard being unstable, so got a new one
<geist> will see. the MTBF was like 3 days, so i need to run it for a week or so to really see if it fixed everything
<zid> does this one have more volts
<bauen1> speaking of old hardware, i still have a macbook air 2013 that i want to setup with a proper linux installation again, which is always a joy. and at the same time also see if i can improve the performance a bit (e.g. new thermal paste)
<bauen1> now of course i can't reproduce it right now, but i think the last time i tested the _read speed_ of the internal SSD it ended up dropping to just 40MB/s weird ...
<geist> zid: i think so, yeah
<geist> only annoyance is this mobo turns the socket 90 degrees so my cpu cooler now blows air against the top of the case
<geist> so it's not quite as effective
MiningMarsh has quit [Ping timeout: 272 seconds]
_xor has joined #osdev
MiningMarsh has joined #osdev
azu has quit [Quit: leaving]
freakazoid333 has left #osdev [Leaving]
JanC has quit [Remote host closed the connection]
JanC has joined #osdev
MiningMarsh has quit [Ping timeout: 256 seconds]
MiningMarsh has joined #osdev
sonny has joined #osdev
MiningMarsh has quit [Ping timeout: 246 seconds]
MiningMarsh has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
xenos1984 has joined #osdev
sonny has quit [Quit: Client closed]
_73 has joined #osdev
mahmutov has quit [Ping timeout: 246 seconds]
scripted has joined #osdev
<scripted> hello
<scripted> been a long time
<scripted> oh, it's in the middle of the night!
<scripted> anyways, for those who care, I moved to blockchain programming
<scripted> good night
scripted has quit [Client Quit]
_73 has quit [Remote host closed the connection]
<geist> uh okay.
<geist> good to know
nyah has quit [Quit: leaving]
<gog> what should i compile
<gog> no blockchains pls
<j`ey> vim, llvm, linux
<klange> bim, kuroko, toaruos
<j`ey> nice
Likorn has quit [Quit: WeeChat 3.4.1]
<gog> i wanna see how fast this baby can compile something
<j`ey> what're the specs?
<gog> Ryzen 5 4600H, 8GB RAM
<gog> gtx 1650 on the side
<j`ey> nice, good that you finally got a new computah
<gog> yes i'm very happy with it
<gog> prime offload works much better in linux than i anticipated
<geist> i sometimes use compiling qemu as a benchmark, though it has variable options so it's a bit hard to do a proper 1:1 comparison with another machine
<geist> also yay join the ryzen club!
<gog> yes
<gog> this machine came with windows 11 and i immediately wiped it all out and installed manjaro
<gog> works great, no complaints
<Ermine> Oh, you can even use nvidia open source drivers
<gog> i'm using proprietary rn
Burgundy has quit [Ping timeout: 240 seconds]
<geist> yah, in my experience nvidia linux drivers are pretty solid
<gog> they've improved a lot since last i used them
<gog> which was years ago
<j`ey> talking about open source gpu drivers.. https://twitter.com/AsahiLinux/status/1532035506539995136
<bslsk05> ​twitter: <AsahiLinux> First triangle ever rendered on an M1 Mac with a fully open source driver! 🎉🎉🎉🎉 [https://twitter.com/LinaAsahi/status/1532028228189458432 <LinaAsahi> 🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺 ␤ 🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺 ␤ 🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺 ␤ 🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺🔺 ␤
<gog> aaay
<gog> next step render thousands of them
<j`ey> :)
<Ermine> gog: rendering thousands of them was a problem: https://rosenzweig.io/blog/asahi-gpu-part-5.html
<bslsk05> ​rosenzweig.io: Rosenzweig – The Apple GPU and the Impossible Bug
dra has quit [Ping timeout: 260 seconds]
sebonirc has quit [Remote host closed the connection]
sebonirc has joined #osdev
dude12312414 has joined #osdev