klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
mctpyt has joined #osdev
vin has quit [Quit: WeeChat 2.8]
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
immibis has quit [Ping timeout: 240 seconds]
Starfoxxes has quit [Ping timeout: 245 seconds]
sortie has quit [Ping timeout: 240 seconds]
Starfoxxes has joined #osdev
sortie has joined #osdev
kingoffrance has joined #osdev
thinkpol has quit [Remote host closed the connection]
Mach` has joined #osdev
thinkpol has joined #osdev
ElectronApps has joined #osdev
Mach` has quit [Ping timeout: 246 seconds]
[itchyjunk] has joined #osdev
nyah has quit [Ping timeout: 246 seconds]
<klange> Spun up a little test environment for running ToaruOS off an ext2 partition on a disk, and... okay yeah there's definitely some subtle bugs in this ext2 implementation.
<klange> For one it's making all new files owned by root. Oops.
<klange> But with a little initrd to load the relevant drivers and mount it, it does boot to a GUI at least.
gog has joined #osdev
ElectronApps has quit [Remote host closed the connection]
<klange> I installed doom from the package manager, rebooted, and it's still there and working, so at least the basics seem to be functioning.
<klange> And fsck even says my partition is clean, that's surprising...
<Ameisen> hmm, looks like trunk gcc, trunk clang, and msvc.latest are all honoring [[likely]]/[[unlikely]]
<Ameisen> though MSVC has difficulty with an else branch being specified [[likely]] without a corresponding [[unlikely]] on the if
<klange> Ah, dang, I made the mistake of sticking the local package index in /var so the package manager doesn't know Doom is installed after a reboot, but it's there and the desktop launcher exists...
<klange> Oh I didn't -f so fsck lied to me, I see i_blocks inconsistencies...
tacco_ has quit [Remote host closed the connection]
vin has joined #osdev
vdamewood has joined #osdev
<vin> Reposting my question: I am little confused, in ext4 if two threads are writing to the same inode/file at different offsets in parallel, would a lock be acquired on the file making the writes sequential?
<Mutabah> Depends on what requirements you want to have
<Mutabah> You don't _need_ a lock (the two writes can just potentially interleave)
ElectronApps has joined #osdev
<vin> Mutabah: but would the fs implicitly apply a lock to the inode when a process is accessing it?
<Mutabah> Depends on your design
<Mutabah> You would definitely want to lock modification to the inode itself
<Mutabah> (to avoid tearing when updating e.g. the size)
<vin> even if I don't use any locks in my program, will the file system convert all accesses to the file (at different offsets) into sequential ones?
<vin> yes that makes sense, when you are modifying the metadata you should have a lock
<Mutabah> The FS could easily just do non-locked accesses for file data, and leave it up to the userland to not do overlapping writes
<jjuran> There's a difference between writes to different disk blocks and non-overlapping writes to the same block
<Mutabah> Or, your FS could require the file to be opened exclusively to be able to write to it
<vin> do you know what ext4 does, Mutabah?
<Mutabah> ext4 is just a filesystem format, it doesn't (afaik?) specify the semantics of accessing files
<Mutabah> That's the domain of the VFS layer
<vin> So this should be a POSIX standard?
<Mutabah> If you want to know what linux does, I'd check the manpages
<vin> Okay, I will read the VFS docs. Not sure which manpage in particular
<Mutabah> open/read/write/...
<geist> i think you'll find there's not a lot of hard policy on exactly what happens to overlapping writes
<geist> non overlapping doesn't matter which order they appear in
<geist> or at least by the time the write() syscall ends the data should appear to any subsequent read (or mmap) but within that syscall i suspect it's undefined precisely the order it appears in, or what granularity (1 byte? 1 page? 1 disk block? 1 fs block? etc)
<geist> it is assumed if you do two writes in sequence that data A appears before data B, but if they're simultaneous, no guarantees (or at least if there are, they're OS specific)
<geist> the hard one is of course O_APPEND
<geist> that one gets tricky: if you have two simultaneous threads appending to the same file, what is the granularity of their writes?
<geist> i believe linux makes some guarantees, up to a point
<geist> but the up to a point is sufficiently large that it generally isn't a problem
<jjuran> I would expect simultaneous O_APPEND writes not to clobber each other
<jjuran> which means non-overlapping writes to the same disk block have to not clobber each other
<jjuran> So, writes to a particular disk block should be serialized.
<zid> ext4 has all sorts of nice mount options like async sync noatime blah blah
<zid> to deal with what happens in some of these edge cases
<zid> you just.. pick a behavior when you mount it and the vfs/fs code sorts it out
<vin> geist: but are there guarantees for non-overlapping writes to a single file? (to different blocks) -- Can I assume the filesystem won't act as a fence and flush one write at a time? It doesn't make sense why it should do so, especially with ssds which support multiple channels
<geist> i think the general model re: simultaneous O_APPEND is atomically bump the file pointer by the size of the write, write to the old location
<geist> vin: yeah no flushing unless you're operating with sync or whatnot
<zid> vin: why would two writes to two regions interfere anyway?
<zid> it doesn't have to guarentee anything there anyway
<vin> because they both are operating on the same inode? Yes I expect them to not interfere
<zid> All I can think of is both causing an append, might interfere, updating the file size in the wrong order or such
<geist> yah the key is to get the model of what user space is supposed to see with a series of FS operations
<geist> and then work backwards from there
<geist> the FS implementation has to guarantee at least that, but it may be more strict because of internal implementation details
<zid> or way less strict because the vfs never orders it do do anything confusing, depending on design
<vin> So none of the file system metadata structures are concurrent data structures then, right?
<zid> and userspace rules (see: posix) might say that you can't open the same file twice to begin with etc
<vin> yes zid append is good example of requests being serialized
<geist> also there's a model as to what user space sees in a file, and what gets to the disk
<geist> the two can be disconnected substantially, except where fsync/sync come into play
<zid> even posix is crap wrt specifying this stuff
<vin> zid: you don't need to open it twice, open in the parent and pass the fd to children
<geist> so part of the fun is to allow things to be somewhat lazily done physically
<zid> vin: I'd count that as having two open file descriptors for the same file, and thus opened twice
<vin> yes geist
<zid> posix being crappy is why I use sqlite for a lot of stuff
<geist> so in general the user space model is that file ops are atomic at least with regards to completing the op
<geist> ie, at the end of a write() it has happened to all other observers in the system, etc
<geist> same with truncate, unlink, etc
<zid> It just turns into races, rather than inconsistencies
<geist> that sort of thing. whether or not it happened on the disk is not really specced *except* where syncs and unmounts and whatnot come into play
<vin> I am tempted to write a small benchmark that compares parallel writes to a single file at different offsets (bigger than a block) vs a single thread doing all the writes.
<vin> This should be O_DIRECT though, to avoid paging
<zid> it shouldn't really be measurable unless you're cpu bound
<zid> I can issue 10s of gigabytes a second of writes in a single thread
<vin> without paging?
<zid> the queue will just fill and it'll start to block
<vin> The max bandwidth I have ever got from an ssd is 6 GB/s
<vin> single ssd
<zid> and fwiw, a little operation queue with thread safe append/pop would probably cover 99% of all the edge cases
<vin> okay, so to be clear, you expect parallel writes to a file to be faster than a single writer? zid
<zid> not in the least
<zid> I expect the device to be completely bottlenecked
<zid> by its write speed, and my ability to supply it writes will massively outpace that even on a single core
<vin> why though? modern ssds have thousands of large queues you can submit parallel writes to
<zid> unless you've got like, an optane and one of those 24 core 1GHz webserver xeons or something
<zid> because then you're dealing with a lot of stacked syscall latencies
<zid> on a shit cpu
<vin> I do have an optane ssd with xeon to try this out on
<zid> a nice shitty webserver xeon?
<vin> No but I don't get why a bad cpu is needed to extrapolate the write performance -- if I bypass paging I will be IO bound anyway
<zid> That's precisely my point, you will be io bound
<zid> so who gives a fuck about threads
<zid> threads only matter once you're cpu bound
<vin> but the question is if a single thread is enough to saturate storage bandwidth
<vin> You are saying it should be
<zid> It's absolutely horrifically plenty on any setup you're likely to find in the wild
<zid> The 6GB/s you quoted is 10% of my memory bandwidth
<zid> for example
<zid> where it doesn't work is when you're going for shit loads of iops
<zid> because the overheads eat you
<vin> Okay so let's say we have paging, so we are now bounded by memory bandwidth -- which for sure can't be saturated by a single thread.
<zid> except we're at 10%, not saturated
<vin> Sure. So having multiple threads writing to a file has an advantage now
<zid> no, we need 10x the write bandwidth
<zid> I can completely saturate my ssd multiple times over from a single thread.
<zid> Unless it's an optane and we're talking iops, I don't need any threading at all.
<vin> So pages in memory need to be written back to storage, which can be the bottleneck with multiple threads, I get that. So the only time multiple threads will help is when your file fits in memory.
<zid> ???
<zid> I don't understand any of what you just said
<zid> I do sys_write, it tells the device either to dma from my memory, or I memcpy to its internal buffer over the pci-e link at pci-e link speeds. I can memcpy from a single thread /faster than pci-e/
<vin> If you're writing to a file and run out of memory to do paging, then you will need to flush old pages to the disk, at which point the program becomes IO bound again.
<vin> But if your file/working set fits in DRAM, then multiple threads can take advantage of memory bandwidth (no need to wait for pages to be swapped)
srjek|home has quit [Ping timeout: 264 seconds]
mahmutov_ has quit [Ping timeout: 256 seconds]
<devcpu> p/sb clever
<devcpu> nvm i meant /sb clear
dude12312414 has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
[itchyjunk] has quit [Read error: Connection reset by peer]
ElectronApps has quit [Remote host closed the connection]
onering has quit [Quit: I have been discovered!]
Beato has joined #osdev
ravan_ has joined #osdev
ravan has quit [Ping timeout: 246 seconds]
<zid> odd, stopped working for dns
<junon> I've also had issues in the last 48 hours
<junon> that's weird
<zid> damn you cloudflare
<zid> I swapped to, I think that's google?
<junon> yes it's google
<junon> google's secondary is btw
<zid> discord seems buggered too
<zid> wonder if some datacenter exploded
<junon> zid are you in germany by any chance? looks like berlin is re-routed right now for cloudflare, idk if that's often or not
<zid> nope
<junon> In fact a number of german datacenters are re-routed at the moment...
<zid> and it's back
sonny has joined #osdev
ravan_ is now known as ravan
sonny has quit [Quit: Going offline, see ya! (www.adiirc.com)]
ravan has quit [Remote host closed the connection]
<geist> yeah i used to try to mix and but cloudflare has gone down more than once on me
<geist> so finally removed it
<graphitemaster> clearly what we need is a single ip address that gives us a list of dns ip addresses we can cycle through so then you only need to put one dns address in there /s
<moon-child> dns server server?
<geist> well you *can* look up the google one with dns.google.com
<geist> gives you all the aliases
<geist> oddly, dns.cloudflare.com gives you something other than
<geist> btw if you're fiddling with your dns stuff, i encourage you to look into DNS over SSL and DNS over HTTPS
<geist> both google and cloudflare support it, and it's nice to cloak your dns traffic, especially on a laptop in a public place
<zid> idk how easy that is on windows
<geist> yah it's not even easy on linux (mint linux at least)
<zid> it's probably easy on my gentoo
<zid> also, my google account is bugged
<zid> I can't load the home page, everything else works, home page gives a 500 error
<geist> what i have is a local dns resolver on my firewall that handles local dns traffic but then talks to
<zid> search works, account page, gmail, etc all work, google.com is 500
<geist> hmm, not here, so it's at least not globally down
<zid> It's my account
<zid> I asked a friend at google, he says he doesn't know how to open a ticket for it because he's on mail
<zid> but the only open bug that looked similar that he could find was toggling some account setting and it wasn't that :(
xenos1984 has quit [Quit: Leaving.]
<kazinsal> I think I mostly mix and
<kazinsal> but I also have a local domain controller with a DNS server because I'm a horrendous nerd
<zid> I'd do it if I was capable of using my router
<zid> but there's a lovely bug in the ISP modem that stops me
ElectronApps has joined #osdev
xenos1984 has joined #osdev
Oshawott has quit [Ping timeout: 245 seconds]
<ZetItUp> man i just found a bug in my mm: if i free some space in between some memory, it still returns that address on the next kmalloc even if the size is bigger than the hole :P time to debug wtf im doing wrong :P
<zid> oopsie
<zid> what data structure are you tracking with?
<ZetItUp> i kinda followed james molloy's tutorial where you add header/footer of memory spaces
<zid> linked list then?
<ZetItUp> yeah, so i guess i forgot to modify the list :P
<ZetItUp> on free i mean
<graphitemaster> <moon-child> dns server server?
<graphitemaster> moon-child, yes, presumably we'll have a few of those too
<graphitemaster> So then we'll need a dns server server server to consolidate them
<moon-child> https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00528.html I wonder if you could use this to dump/change ucode?
<bslsk05> ​www.intel.com: INTEL-SA-00528
<moon-child> oh, twitter says no
klange has quit [Ping timeout: 268 seconds]
<Affliction> Description: Hardware allows activation of test or debug logic at runtime for some Intel(R) processors which may allow an unauthenticated user to potentially enable escalation of privilege via physical access.
<Affliction> When did having physical access stop implying "attackers win"
<moon-child> lol
gog has quit [Quit: byee]
gog has joined #osdev
gog has quit [Client Quit]
gog has joined #osdev
klange has joined #osdev
klange has quit [Client Quit]
klange has joined #osdev
<klange> complete loss of ipv6 on my DO droplet ;-;
<klange> gateway responds, but nothing's getting past it
<moon-child> Affliction: honestly feels like it's going the opposite way. Post-meltdown et al, people are finally realising that untrusted code on shared hardware will never work, even with a sandbox
<Affliction> hm
<Affliction> Nah, just throw another layer of virtualisation at it, it'll be fiiine
<jjuran> I've discovered a means of performing a denial of service attack via physical access. It works with any processor.
CaCode has joined #osdev
<klange> Does it still work afterwards? :P
<moon-child> ;o
<moon-child> jjuran: https://xkcd.com/1217/
<bslsk05> ​xkcd - Cells
<Affliction> o noes! have a CVE#
<jjuran> klange: In some variations, yes. :-)
<Affliction> Depends on if we're talking about ripping the cables out, or smashing it with a hammer
<klange> I was just wondering if your approach involved a hammer.
<Affliction> Or an etherkiller
<jjuran> No, not a hammer.
<jjuran> Power drill.
<klange> Might not kill the CPU. Might not even make it past the NIC.
<Affliction> service will still be denied!
<jjuran> (Or just pull out the power cable / close the laptop lid.)
<klange> Nothing quite like a good kick with some line voltage~
<klange> > close the laptop lid
<klange> jokes on you I don't have the ACPI support to respond to that
<Affliction> Is that an example of the code working on the developers' system, so they put the developers' system in prod?
<junon> I'm not looking forward to implementing ACPI
<junon> I hear it's hell
<jjuran> Hell is other people's software.
<Affliction> It's... certainly a thing
<klange> ACPI is definitely a lot of other peoples' software.
<geist> in a mixed C/C++ environment, .h vs .hpp (or .hh depending on your flavor). discuss.
<Affliction> void shutdown() { set_fan_speed(0); avx512_powervirus(); } // ACPI is hard, try to shut down by PROCHOT trip
<geist> ie in a sea of C headers, if you have a header that's intended to be used for C++ stuff, does it make sense to name it that way, or simply #ifdef __cplusplus the body of it?
<klange> Would probably work on my laptop, but I think it would beep a few times first.
<Affliction> I know in theory modern chips are supposed to throttle
<j`ey> geist: I think just .h
<Affliction> That didn't happen on my 3950X, at least with the old firmware, haven't tested since.
<j`ey> geist: no real reason though
<Affliction> ran straight up to 105C then off.
<klange> obviously if I'm writing c++ headers I've lost my mind and have decided to write a C++ standard library
<geist> j`ey: noted. thats what most things I've used do. it always seemed nice to have a separate extension
<klange> and thus my C++ headers should have the suffix ""
<geist> ugh... that's so terrible
<geist> not your fault, the C++ people's fault
<geist> though I guess with a // vim: tag it's at least usable
<klange> It _was_ a cheeky way to clean up standard #include's.
<geist> trying to think of the nicest way to declare C++ wrappers around C things
<geist> probably simplest and most useful is to declare the C things in a header, followed by the C++ wrappers in a C++ specific section just afterwards
<klange> I slapped the #ifdef __cplusplus crap in a shared header under a pair of _Begin_C_Header and _End_C_Header macros.
<geist> ie, struct mutex {} and then a class Mutex { struct mutex; } right after it in a #ifdef __cplusplus
<geist> yah
<geist> kk. yeah. probably the simplest. that way you can pick your poison
<geist> C++ code can use the fancier versions, but the headers aren't particularly weird for them
<klange> I haven't actively written C++ since my days in robotics, and I even just removed support for C++ from my build system...
<klange> (Not that meaningful of a change, just stopped sticking libstdcxx on the base image and removed one 'hello world')
<geist> i've been starting to write more and more subsystems in LK in C++. guess a few years of doing it in zircon is bleeding through
<geist> not that i particularly want super fancy bits, but i have to admit things like RAII lock guards and whatnot are darn handy
<geist> so makes sense to at least provide simple wrappers and lock guards and whatnot for folks that want to use them around the standard primitives
<geist> just spent a few hours converting the PCI bus driver to C++. since it was basically already just object oriented C anyway
<zid> so now you have a vtable and it links slower too, yay? :p
<geist> well, no. it was already a vtable
<geist> oh also check this out: `time make -j` `real0m0.208s`
<geist> oh wait, that was with ccache. without `real0m0.810s`
<geist> but noted. my experience is C++ doesn't start really slowing down until you start drinking from the template fountain
CaCode_ has joined #osdev
CaCode has quit [Ping timeout: 250 seconds]
Celelibi has quit [Ping timeout: 268 seconds]
ahlk has quit [Remote host closed the connection]
Celelibi has joined #osdev
<junon> compilation? or runtime?
<j`ey> compilation
<junon> right
GeDaMo has joined #osdev
<klange> I have some C stuff that takes several seconds...
<klange> Like my editor. Or when I throw all of the source files for my interpreter at one gcc...
archenoth has joined #osdev
<geist> Yah i gotta respect projects that do the whole ‘compile in one command’ thing
<geist> Honestly surprised more stuff doesn’t just do that
<klange> My editor only works that way because I keep it as one file - it's its own stress test.
<junon> I wrote my own build system to do that
<moon-child> geist: I still want to write a C compiler which does its own caching and parallelization
<moon-child> such that 'compile in one command' is faster than anything else you could do
<klange> I want to write a C compiler at all
<geist> Seems that llvm could pull it off pretty easily if the clang front end just stamps out N copies of the compiler internally
<geist> And splits it across files you pass it on the command line
<geist> I had heard that clang driver somewhere along the line stopped forking itself to run internal steps, because it can instantiate the compiler bits, run it, then tear that down and instantiate the linker, etc
<moon-child> ideally you would do those in parallel, pipelined
<klange> I should try to write a C compiler following the same model as Kuroko's compiler. Just straight up mash out machine code while you parse...
<moon-child> so semantic analyser handles one function, then sends it to the code generator which is running on another thread. So you can generate code for the first function at the same time as you semantically analyse the second function
<moon-child> obviously obviates ipo. And depends on c/c++'s in-order semantics
<geist> I don’t know if that’s feasible on modern compilers nowadays. I think they probably need to look at too much global state to do that
<moon-child> klange: that's what tcc does!
<geist> Years ago I remember you could drive GCC by typing into stdin and watch it generate asm as you wrote C
<geist> But somewhere along the way even basic -O0 needs to see too much before it outputs asm
lkronnus has quit [Ping timeout: 256 seconds]
dormito has quit [Quit: WeeChat 3.3]
lkronnus has joined #osdev
NeoCron has joined #osdev
lkronnus has quit [Ping timeout: 246 seconds]
NeoCron has quit [Remote host closed the connection]
mctpyt has quit [Remote host closed the connection]
mctpyt has joined #osdev
lkronnus has joined #osdev
nyah has joined #osdev
ElectronApps has quit [Remote host closed the connection]
mctpyt has quit [Ping timeout: 268 seconds]
flx has joined #osdev
lkronnus has quit [Ping timeout: 268 seconds]
lkronnus has joined #osdev
lkronnus has quit [Ping timeout: 240 seconds]
lkronnus has joined #osdev
CaCode_ has quit [Quit: Leaving]
ahalaney has joined #osdev
CaCode has joined #osdev
dennis95 has joined #osdev
dormito has joined #osdev
ElectronApps has joined #osdev
aejsmith has quit [Quit: Lost terminal]
aejsmith has joined #osdev
<ZetItUp> found some old osdev post which started with this sentence: I'm developing an operating system and instead of programming the kernel, I'm developing the kernel.
dude12312414 has joined #osdev
makersmasher has quit [Remote host closed the connection]
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
CaCode has quit [Quit: Leaving]
[itchyjunk] has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
srjek|home has joined #osdev
ccx_ is now known as ccx
dormito has quit [Quit: WeeChat 3.3]
<Bitweasil> When you write the kernel with a laser pointer on film, you indeed have to develop it to see what you've done!
ahlk has joined #osdev
ElectronApps has quit [Remote host closed the connection]
<kingoffrance> i sent my kernel to therapy, it wont be out for a long time lol
<kingoffrance> its developing, in counseling, under lockdown lol
<kingoffrance> we write every year or so
<kingoffrance> our relationship is "developing"
<kingoffrance> well, maybe they meant "design"
<kingoffrance> then, that post sort of makes sense...
Oli_ has joined #osdev
dude12312414 has joined #osdev
xenos1984 has quit [Quit: Leaving.]
sikkiladho has joined #osdev
CryptoDavid has joined #osdev
mahmutov_ has joined #osdev
dennis95 has quit [Quit: Leaving]
sikkiladho has quit [Ping timeout: 240 seconds]
sikkiladho has joined #osdev
<sikkiladho> Hi. I'm trying to create a chain between bare metal binaries on raspberry pi 4 to learn about the bare metal world. I have two binaries.
<sikkiladho> el2-kernel.img - it prints Hello to UART.
<sikkiladho> kernel=el2-kernel.img
<sikkiladho> initramfs el1-kernel.img 0x400000
<sikkiladho> I use these configs to load both binaries at different addresses. In el2-kernel.img, I simply jump to 0x400000 (using eret and relevant values in spsr_el2 and elr_el2 (the address)). I have been successful in doing so. Now I want to jump from el2-kernel to the standard linux kernel. I have been trying to do it for a few weeks but the standard kernel
<sikkiladho> won't run. I have also kept the address of the dtb in RAM in x0.
<sikkiladho> Here's my code: https://github.com/SikkiLadho/Leo
<bslsk05> ​SikkiLadho/Leo - Leo Hypervisor. Type 1 hypervisor on Raspberry Pi 4 machine. (0 forks/0 stargazers)
X-Scale has quit [Ping timeout: 264 seconds]
nvmd has quit [Quit: Later, nerds.]
X-Scale` has joined #osdev
X-Scale` is now known as X-Scale
<geist> SikkiLadho: oooh nice
<geist> clever: !
* geist points clever at SikkiLadho
vin has quit [Quit: WeeChat 2.8]
<clever> geist: using the initrd like that was an idea i gave him a few days ago!
<sikkiladho> yeah, thank you for that clever. I can print Hello and World by using two separate binaries loaded at different addresses(and at different exception levels). Why can't I do the same with the kernel?
<clever> SikkiLadho: not sure, simplest way to get an answer is to rig up jtag and see what happens
<jjuran> geist: .h for headers usable from C, .hh for headers only usable from C++.
<gog> .hhhhhhhhhhhhhh for headers that are so bad they give you an asthma attack
* gog stares at every header in glibc
<clever> gog: haskell has a accursedUnutterablePerformIO for when you really want to give people an asthma attack! :P
<bslsk05> ​hackage.haskell.org: Data/ByteString/Internal.hs
<clever> > It lulls you into thinking it is reasonable, but when you are not looking it stabs you in the back and aliases all of your mutable buffers.
<clever> lol
<gog> "witness the trail of destruction" lol
X-Scale has quit [Ping timeout: 256 seconds]
<bslsk05> ​www.reddit.com: accursedUnutterablePerformIO : haskell
<clever> basically, its a way of doing something with side-effects, in a pure expression that shouldnt have side-effects
<clever> and in one example, the compiler trusts you a little too much, and assumes `accursedUnutterablePerformIO mallocWord32` will always return the same thing
<clever> so every malloc returns the same addr
<clever> because you disabled every safety in the language
sikkiladho has quit [Quit: Connection closed]
X-Scale` has joined #osdev
X-Scale` is now known as X-Scale
<gog> i should really learn more about functional programming
<gog> because i cannot grok this
Arthuria has joined #osdev
<clever> gog: have you heard of SSA?
<bslsk05> ​en.wikipedia.org: Static single assignment form - Wikipedia
meisaka has quit [Ping timeout: 268 seconds]
sikkiladho has joined #osdev
meisaka has joined #osdev
Ameisen has quit [Quit: Quitting]
xenos1984 has joined #osdev
sikkiladho has quit [Quit: Connection closed]
sikkiladho has joined #osdev
sikkiladho has quit [Client Quit]
Ameisen has joined #osdev
Ameisen has quit [Ping timeout: 256 seconds]
CaCode has joined #osdev
Ameisen has joined #osdev
Ameisen has quit [Client Quit]
Ameisen has joined #osdev
tacco has joined #osdev
CaCode_ has joined #osdev
CaCode has quit [Ping timeout: 240 seconds]
sikkiladho has joined #osdev
vin has joined #osdev
<vin> Does CoW have poor bandwidth utilization? if so I don't see why?
bradd has quit [Remote host closed the connection]
srjek|home has quit [Ping timeout: 245 seconds]
bradd has joined #osdev
wootehfoot has joined #osdev
amazigh has joined #osdev
gog has quit []
<junon> lol j`ey I should have just asked in here
<j`ey> :p
<junon> I knew I recognized the nick
<geist> vin: i wouldn't say that no
<geist> or at least it's not a concrete enough problem statement to say yes or no
<geist> if you're trying to benchmark some specific scenario i'm sure some situations may be slower than not cow, but depends on what it is
<sortie> Really depends on how much data you attach to each cow and how long it takes to move the cow from point A to point B
<Bitweasil> If you get a good stampede going... bandwidth improves dramatically!
<j`ey> geist: what does zircon have, re filesystems?
<geist> sortie: haha nice. that was a real missed opportunity there
<geist> j`ey: we have a few custom ones and a new one in development and simple support for fat and ext4
<j`ey> neat
sikkiladho has quit [Quit: Connection closed]
<geist> https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/src/storage/fxfs/ is the new one in development. looks pretty neat
<bslsk05> ​fuchsia.googlesource.com: src/storage/fxfs - fuchsia - Git at Google
dzwdz has quit [Remote host closed the connection]
dzwdz has joined #osdev
<j`ey> written in rust, cool
Ameisen has quit [Quit: Quitting]
Ameisen has joined #osdev
wootehfoot has quit [Ping timeout: 250 seconds]
ahalaney has quit [Quit: Leaving]
cooligans has joined #osdev
GeDaMo has quit [Remote host closed the connection]
mahmutov_ has quit [Ping timeout: 256 seconds]
srjek|home has joined #osdev
cooligans has quit [Ping timeout: 256 seconds]
dormito has joined #osdev
Arthuria has quit [Ping timeout: 240 seconds]
Oli_ has quit [Quit: -a- IRC for Android 2.1.59]
srjek|home has quit [Ping timeout: 264 seconds]
ZetItUp has quit []
<Ameisen> So, I find those results interesting.
<Ameisen> Reduced latency is something that, as I recall, MuQSS was meant for.
<Ameisen> and better interactivity - though it fails that test as well.
wootehfoot has joined #osdev
Ameisen has quit [Quit: Quitting]
Ameisen has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
wootehfoot has quit [Ping timeout: 245 seconds]
<Ameisen> though MuQSS development has been halted, as he doesn't want to keep forward-porting it
<Ameisen> so that leaves... CFS and PDS.
Ameisen_ has joined #osdev
Ameisen has quit [Quit: Quitting]
Ameisen_ has quit [Client Quit]
Ameisen has joined #osdev
dude12312414 has quit [Ping timeout: 276 seconds]
dude12312414 has joined #osdev