klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
kingoffrance has quit [Ping timeout: 250 seconds]
<mrvn> heat: fd and fd2 must have different ports
<mrvn> The second bind fails with EADDRINUSE
<mrvn> When someone connects to port 80 should that go to the socket behind fd or fd2?
<heat> yeah exactly
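A minimal C sketch of the situation under discussion: two sockets bound to the same address and port, where the second bind() is expected to fail with EADDRINUSE (the port number and error handling are illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int fd2 = socket(AF_INET, SOCK_STREAM, 0);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind fd");   /* needs privileges for port 80 */
        if (bind(fd2, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind fd2");  /* expected: EADDRINUSE */
        return 0;
    }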
<mrvn> Bonus question: bind(fd, eth0:80); bind(fd2, eth1:80); what does sendto(fd, ..., <IP on eth1:1234>, ...) do?
<heat> tcp or udp?
<heat> that doesn't work on tcp because sendto (or sendmsg)'s destination address argument must be null, else EISCONN
<mrvn> For tcp it's ENOTCONN. but udp?
<heat> it goes out to the router
<mrvn> what router?
<heat> the one I'm connected to
<heat> default gateway
<mrvn> why? The route would say it goes out on eth1.
<heat> yes
<heat> and then it comes back
<heat> wait
<heat> hrm
<mrvn> What source addr does it have?
zaquest has quit [Remote host closed the connection]
<heat> yeah I guess it routes it directly in the host
<heat> no need to go out
<mrvn> has to go out. it's some external IP on the eth1 network.
<heat> it's not external
<mrvn> not the IP eth1 has itself
zaquest has joined #osdev
<heat> well, in that case, sure
<mrvn> But then what source IP is used?
<heat> you route it directly through the eth1 interface
<heat> eth0
PyR3X_ is now known as PyR3X
<heat> your host serves as a router
<heat> routes from eth0 <-> eth1 using its routing table
<mrvn> So the dest gets total garbage because it maybe has no idea how to route the IP for eth0.
<heat> huh?
<heat> assuming you have two subnets connected to eth0 and eth1, respectively, communication from eth0 to eth1 and vice-versa will go through you (the host, the router, whatever you want to call it)
<mrvn> Say eth0 is 192.168.1.10 and eth1 is 192.168.2.10. And dest is 192.168.2.23. That might not have a router for 192.168.1.10
<Mutabah> SNAT
<mrvn> Mutabah: no snat involved
<Mutabah> No, that's the solution
friedy10- has joined #osdev
<mrvn> Maybe the kernel should return EINVAL because you are trying to send with the wrong source address
<Mutabah> If you don't want the other devices on your network (or just your main router) to need the `192.168.2.0/24 via 192.168.1.10` rule
<heat> why does it not have a router for eth0?
<mrvn> heat: lets just say it doesn't. No reason it should have one.
<heat> two different subnets will not be able to communicate unless there are routes from one to the other
<mrvn> heat: except it works one way
<heat> ok, i have a solution then
<heat> proxy arp
<Mutabah> That's a bridge
<Mutabah> also a workable solution
<mrvn> let's make this example different: bind fd to 127.0.0.1 and sendto 8.8.8.8
<mrvn> Does google then see a source address of 127.0.0.1?
<Mutabah> No intermediate router should accept that packet
<Mutabah> even if it got onto the wire
<mrvn> Mutabah: true
<heat> no clue how that one works
<mrvn> I'm not sure if it should be let onto the wire at all. returning EINVAL would make sense to me.
<heat> what's the solution
<mrvn> no idea
<heat> wait are you asking questions
<mrvn> yes, I'm asking what should happen.
<Mutabah> I think that once you bind you tie yourself to an interface
<Mutabah> So any packet you generate will go out that interface
<heat> what should happen? error, can't route to host
<mrvn> Mutabah: That would not be able to resolve the IP via ARP
<mrvn> heat: no such error code in the manpage
<heat> doesn't matter
<heat> sendto, send, sendmsg don't have IP specific error codes
<Mutabah> Well, it shouldn't go via ARP, as it's off the local segment. it'd need to find a matching route instead
<heat> that's a part of ip(7) I would imagine
<mrvn> "Additional errors may be generated and returned from the underlying protocol modules" So I guess that would allow for a no route to host error?
<heat> yes
<mrvn> Not exactly the right error though. There potentially is a route; it just doesn't have the interface the FD is bound to as source address.
<heat> if you bind yourself to an interface, you need to go out that interface
<mrvn> my question is whether the system will tell you about it or things just don't work. :)
<heat> it's -EINVAL
* Griwes . o O (it's just not going to work because the users always forget to check error values)
<bslsk05> ​gist.github.com: huh.log · GitHub
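A quick C sketch of the experiment debated above (UDP socket bound to 127.0.0.1, datagram sent to 8.8.8.8); per the conclusion in the channel, the expectation on Linux is that sendto() fails, reportedly with EINVAL:

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in src = { .sin_family = AF_INET, .sin_port = 0 };
        inet_pton(AF_INET, "127.0.0.1", &src.sin_addr);
        bind(fd, (struct sockaddr *)&src, sizeof(src));

        struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(1234) };
        inet_pton(AF_INET, "8.8.8.8", &dst.sin_addr);

        if (sendto(fd, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
            perror("sendto"); /* the channel's conclusion: EINVAL */
        return 0;
    }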
<heat> ok here's a cute question which i answered before
<heat> I have a host
<heat> i forgot to add a default gateway to the host
<heat> ping 1.1.1.1 works
<heat> how?
<heat> i am also not in 1.1.1.1's subnet ;)
<mrvn> route -n
<heat> ip route ;)
<heat> let's assume I have a route dest 0.0.0.0 genmask 0.0.0.0, no gateway
<mrvn> that is a default route
<mrvn> Destination Gateway Genmask Flags Metric Ref Use Iface
<mrvn> 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp4s0f0
<heat> sure, but no gateway
<heat> that's the point
<heat> _no gateway_
<mrvn> what does the gateway column say?
<heat> 0.0.0.0, without the G flag set
<heat> taking your output
<mrvn> Then I think you defined a network that has every IP in it.
<heat> 0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 enp4s0f0
<heat> i have
<heat> but my subnet definitely isn't the whole internet
<mrvn> So that should send an ARP who-has 1.1.1.1 and fail to get a reply.
<heat> so how does it work
<heat> wrong ;)
<heat> proxy arp!
<mrvn> doh, if you have proxy arp then you specified a gateway on the ether layer.
<Mutabah> Just bridge
<heat> router gets the arp, sees it can route to 1.1.1.1 and replies with its own ethernet address
<heat> no, proxy arp does not need a gateway
<mrvn> no, but it makes it a gateway
<heat> you get a neighbour entry with (1.1.1.1, router_eth), router does the thing
<mrvn> exactly, anything for 1.1.1.1 goes to the router
<heat> ><heat> i forgot to add a default gateway to the host
<heat> _to the host_
<mrvn> you just made a route on a lower level making the routing and gateway config irrelevant.
<heat> what route?
<mrvn> heat: you gave it a destination on the ether level
<heat> sure did
<heat> networking still works even though I grossly misconfigured my routing
<heat> you just get a huge neighbour table
<heat> (and a lot more ARPs)
<heat> i hope you enjoyed this useless piece of fun networking trivia
<mrvn> proxy arp works the other way too. Some data is sent to the local network and suddenly you act as gateway for it.
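A sketch of the router-side proxy-ARP logic heat describes ("router gets the arp, sees it can route to 1.1.1.1 and replies with its own ethernet address"). All types and helpers here (route_lookup, arp_reply, the netif/route structs) are illustrative stand-ins:

    #include <stdint.h>

    typedef uint32_t ipaddr_t;
    struct netif { uint8_t mac[6]; /* ... */ };
    struct route { struct netif *nif; /* ... */ };

    /* Hypothetical helpers assumed to exist in the router's stack. */
    struct route *route_lookup(ipaddr_t dst);
    void arp_reply(struct netif *nif, ipaddr_t spa, const uint8_t *sha,
                   ipaddr_t tpa);

    /* Proxy ARP, router side: on "who-has X tell Y", answer with our own
     * MAC whenever X is reachable through one of our *other* interfaces.
     * The asker then maps X -> our MAC and hands us the traffic to route. */
    void on_arp_request(struct netif *in, ipaddr_t target, ipaddr_t sender)
    {
        struct route *r = route_lookup(target);
        if (r && r->nif != in)
            arp_reply(in, target, in->mac, sender);
    }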
Likorn has joined #osdev
abbix has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
<klange> < sortie> I built some new memstat -a and ps -v statistics to help debug the memory leaks I got on my IRC server :) ← That reminded me I wanted to fix up my procfs API for consistent reads
terrorjack has quit [Quit: The Lounge - https://thelounge.chat]
terrorjack has joined #osdev
qubasa_ has joined #osdev
abbix has quit [Quit: Leaving]
mahmutov has quit [Ping timeout: 256 seconds]
qubasa_ is now known as qubasa
roan has quit [Quit: Lost terminal]
gog has quit [Ping timeout: 256 seconds]
heat has quit [Ping timeout: 248 seconds]
nanovad has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
nanovad has joined #osdev
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
troseman has quit [Read error: Connection reset by peer]
kingoffrance has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Quit: Leaving.]
xenos1984 has joined #osdev
the_lanetly_052 has quit [Ping timeout: 246 seconds]
pretty_dumm_guy has joined #osdev
sortie has quit [Quit: Leaving]
sortie has joined #osdev
Likorn has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
corank_ has quit [Remote host closed the connection]
corank_ has joined #osdev
Burgundy has joined #osdev
Likorn has joined #osdev
GeDaMo has joined #osdev
woky has joined #osdev
corank_ has quit [Remote host closed the connection]
corank_ has joined #osdev
mavhq has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
mavhq has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
vimal has joined #osdev
corank_ has quit [Remote host closed the connection]
corank_ has joined #osdev
<sortie> Well I just hacked up a whole kernel malloc tracking system in a couple of hours, total rush job, looks like it works Well Enough®
<sortie> Looks like it works
<sortie> Let's yolo this
gog has joined #osdev
<sortie> https://paste.ahti.space/8c46e9.html ← Neat, it actually came up and gave me this stuff. Let's see if it's stable :)
<bslsk05> ​paste.ahti.space <no title>
<zid> "I tested it a bit and it *seems* to work" is my favourite kind of works
<gog> yes
<gog> best kind
<zid> I have burrito
<gog> lucky
elastic_dog has quit [Ping timeout: 248 seconds]
<zid> I maded it
<zid> sort of chilli, and basmati rice
<gog> what about beans
<gog> and guac
<zid> no thanks
<zid> never tried guac
<zid> it's not really a huge thing here
<zid> we're indian based not mexico based
<gog> ah yes
<zid> It's sorta chilli cus it has like, mushrooms in it
<zid> cus I like mushrooms
<gog> i like a fungi
<zid> earthy + meat + sweet rice = gud shit
elastic_dog has joined #osdev
the_lanetly_052 has joined #osdev
<sham1> Wait, who here has a burrito
<sham1> Oh, zid
<FireFly> hmm, I should make curry
heat has joined #osdev
<heat> is there an easy way to do a search-and-replace of instructions?
<heat> I wanted to replace all lcall's with int 0x80
<heat> why? I want to see if i can run this svr4 nm utility under linux
<GeDaMo> Machine code? Asm?
<heat> machine code
<GeDaMo> Presumably it would be int 80h and nops
<GeDaMo> I wonder if you could disassemble to get the offsets of the lcalls then generate a patch file
<GeDaMo> Could you intercept the lcall?
dude12312414 has joined #osdev
<bslsk05> ​www.linuxjournal.com: Using iBCS2 Under Linux | Linux Journal
<bslsk05> ​ibcs-us.sourceforge.io: ibcs-us - User space emulation of SCO, SUN and others
<GeDaMo> «The emulated personalities all use the x86 "lcall" instruction to make syscalls.»
<GeDaMo> «Since it doesn't work ibcs-us reverts to trapping the SIGSEGV's these lcall instructions generate, and patches the lcall in the .text segment to do a normal call instead.»
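A hedged C sketch of the byte-level rewrite being discussed: scan a .text image for the 7-byte far-call encoding (0x9A + 4-byte offset + 2-byte selector) and overwrite it with int 0x80 (0xCD 0x80) padded with NOPs, matching GeDaMo's "int 80h and nops". A real tool would disassemble first, as suggested above; assuming every 0x9A byte starts an lcall is a simplification:

    #include <stddef.h>
    #include <stdint.h>

    /* Patch every "lcall sel:off" (9A xx xx xx xx yy yy, 7 bytes) in a
     * code buffer into "int 0x80" (CD 80) followed by five NOPs (90).
     * Naive: assumes 0x9A never appears inside another instruction. */
    static size_t patch_lcalls(uint8_t *text, size_t len)
    {
        size_t patched = 0;
        for (size_t i = 0; i + 7 <= len; i++) {
            if (text[i] == 0x9a) {
                text[i] = 0xcd;          /* int  */
                text[i + 1] = 0x80;      /* 0x80 */
                for (int j = 2; j < 7; j++)
                    text[i + j] = 0x90;  /* nop */
                i += 6;
                patched++;
            }
        }
        return patched;
    }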
gildasio has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
the_lanetly_052_ has joined #osdev
the_lanetly_052 has quit [Ping timeout: 240 seconds]
<heat> GeDaMo, sure but not really
<heat> i'm not going to modify the kernel to trap those lcalls
<GeDaMo> That iBCS thing seems to be user space
<heat> nop, loadable module
<heat> anyway, i re-thought my approach thing
<heat> svr4 isn't a good target for "hey minimal thing that does things" because it's super complete
<heat> i would be better off adapting my already existing kernel to svr4
<GeDaMo> The older versions of iBCS were kernel modules, the -us one seems to be user space
<gog> mew
dormito has quit [Quit: WeeChat 3.3]
dormito has joined #osdev
<heat> GeDaMo, build's broken
<heat> i could spend way too much time going my own way
<heat> but
<heat> it's simply not worth it lol what
<Bitweasil> ARMv7 TLB shootdowns: The main interface is through the CP15 registers, does there exist a MMIO interface to that too, somewhere?
<GeDaMo> heat: yeah, judging by what I've read it's been bit rotting for a while
<heat> ideally I could edit the shared object to make every lcall into a call to a stub
mahmutov has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
Starfoxxes has quit [Ping timeout: 260 seconds]
Starfoxxes has joined #osdev
Arsen has quit [Quit: Quit.]
Arsen has joined #osdev
zid has quit [Ping timeout: 256 seconds]
gildasio1 has joined #osdev
gildasio has quit [Ping timeout: 240 seconds]
gildasio1 is now known as gildasio
zid has joined #osdev
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
Ali_A has joined #osdev
Ali_A is now known as Aragami
Likorn has joined #osdev
wootehfoot has joined #osdev
wootehfoot has quit [Remote host closed the connection]
wootehfoot has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
nyah has joined #osdev
heat has joined #osdev
heat_ has quit [Read error: Connection reset by peer]
terminalpusher has joined #osdev
<terminalpusher> When I do like an OS and I do graphics stuff, draw rectangles, shapes etc. using double buffering (https://wiki.osdev.org/Double_Buffering), when do you think I will hit serious performance problems because I set all the pixels myself? I mean, it's done by the CPU right? So would I eventually be forced to have to make the GPU do the work of drawing lines, rectangles etc. instead of drawing that all by myself one by one into a framebuffer?
<bslsk05> ​wiki.osdev.org: Double Buffering - OSDev Wiki
Aragami has quit [Quit: Connection closed]
<terminalpusher> you know like this https://wiki.osdev.org/GUI
<bslsk05> ​wiki.osdev.org: GUI - OSDev Wiki
<moon-child> terminalpusher: making the gpu draw things for you involves writing a graphics driver, which is a very involved process and should be ... approached with caution
<terminalpusher> right right I thought so. But when do you think I will hit performance problems? Like where is the limit with letting the CPU do all the drawing?
<moon-child> terminalpusher: software rendering is adequate for a wide variety of applications, especially given sufficient optimisation. See https://rxi.github.io/cached_software_rendering.html for some recent work
<bslsk05> ​rxi.github.io: Cached Software Rendering | rxi
<terminalpusher> oh I see
<terminalpusher> because to be honest my "OS" is already choking with just drawing a big rectangle
<terminalpusher> I think I should focus on redrawing only the minimum
<moon-child> are you double buffering?
<terminalpusher> I think so
<moon-child> rectangle rendering is a tight loop? Or does it use the (bad) primitives suggested by the wiki article you linked?
<geist> there are also some framebuffer access rules that may or may not be affecting you
<geist> but sounds like double buffering should help with most of those
<terminalpusher> tight loop? Yeah they say like this right? "PutRect and PutLine should have their own pixel plotting (calling PutPixel on a large rect will slow rendering WAY down). " What do they mean? Do they mean I should write all the loops and stuff by hand to make sure everything is inlined?
<terminalpusher> Because I'm trusting the compiler on that
<mrvn> double buffering avoids flickering and costs speed.
<geist> also depending on hardware, you should generally never read back from the framebuffer
<geist> write only, etc
<terminalpusher> why that? What about color with alpha?
<moon-child> I'll put it this way. putpixel will be inlined, but it encourages bad code and inhibits simd/swar
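For illustration, a minimal version of the "own pixel plotting" the wiki quote and moon-child are getting at: fill the rectangle row by row with a flat inner loop into the software backbuffer, rather than calling a general PutPixel per pixel. The 32bpp layout and pixel pitch are assumptions:

    #include <stdint.h>
    #include <stddef.h>

    /* Fill a rectangle in a 32bpp software backbuffer.
     * fb: backbuffer in ordinary RAM; pitch: pixels per scanline.
     * Flat row loops the compiler can vectorize -- no per-pixel call. */
    static void put_rect(uint32_t *fb, size_t pitch,
                         size_t x, size_t y, size_t w, size_t h,
                         uint32_t color)
    {
        for (size_t row = 0; row < h; row++) {
            uint32_t *p = fb + (y + row) * pitch + x;
            for (size_t col = 0; col < w; col++)
                p[col] = color;
        }
    }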
<mrvn> terminalpusher: reading from graphics memory is like 100 times slower than main memory, if your arch has such a thing
<moon-child> terminalpusher: alpha is handled with double buffering
<terminalpusher> mrvn: oh wait you mean the framebuffer, not my own buffer in RAM. yeah sure, I would write from my own buffer if I do double buffering
<moon-child> reading back from sw framebuffer is fine; just don't read from hw framebuffer
<geist> yah it depends. but in a lot of PC level hardare at least (which i assumed you're dealing with) the framebuffer is on the other side of a PCI link
<geist> and is optimized for blatting pixels out. reading back is very slow
<terminalpusher> yeah I wouldn't do that
<terminalpusher> I read from my own buffer
<geist> right. if you're doing readbacks in your double buffer and then blatting it out at the end using large, efficient linear copies, that should at least be fairly optimal there
<geist> you should also probably insert some timing code to see where your time is going if you're curious
<mrvn> terminalpusher: you may want to draw into more hardware framebuffers and composite them with the gpu at some point.
<geist> is it the copy back? is it the render? blend? etc
<mrvn> terminalpusher: e.g. have a buffer for each window
<geist> right, that's what modern GUIs do since about the last 10-15 years. alas programming the gpu is hard
<terminalpusher> more hardware framebuffers?
<moon-child> mrvn: and write a graphics driver?
<geist> but basically they treat each window as a gpu texture and then blend with a series of draw calls
<terminalpusher> ah
<mrvn> terminalpusher: blocks of memory on the gpu side
<terminalpusher> so you're saying I should write a graphics driver?
<mrvn> terminalpusher: The most important thing you need to do is to NOT draw things.
<geist> doesn't mean the cpu doesn't do a bit of manual drawing for individual components of course, but that's usually within a texture
<mrvn> terminalpusher: only draw changes.
<geist> right
<moon-child> yeah. Thing I linked talks about a semi-modular way of doing that
<terminalpusher> right, right. I think I will focus on that more now, make it all based on caching and drawing the minimum
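A small C sketch of the "only draw changes" idea terminalpusher is settling on: collect dirty rectangles during the frame and copy only those regions from the software backbuffer to the hardware framebuffer. The fixed-size rect list and 32bpp layout are illustrative; the rxi post linked above describes a cell-grid variant:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct { int x, y, w, h; } rect_t;

    #define MAX_DIRTY 64
    static rect_t dirty[MAX_DIRTY];
    static size_t ndirty;
    static int    full_redraw;

    /* Record a region as needing a redraw this frame. */
    void mark_dirty(rect_t r)
    {
        if (ndirty == MAX_DIRTY)
            full_redraw = 1;        /* too many rects: give up, redraw all */
        else
            dirty[ndirty++] = r;
    }

    /* End of frame: copy only the dirty regions from the software
     * backbuffer to the hardware framebuffer (32bpp, pitch == width). */
    void flush_dirty(uint32_t *hwfb, const uint32_t *swfb,
                     size_t pitch, size_t height)
    {
        if (full_redraw) {
            dirty[0] = (rect_t){ 0, 0, (int)pitch, (int)height };
            ndirty = 1;
        }
        for (size_t i = 0; i < ndirty; i++) {
            rect_t r = dirty[i];
            for (int row = 0; row < r.h; row++)
                memcpy(&hwfb[(size_t)(r.y + row) * pitch + (size_t)r.x],
                       &swfb[(size_t)(r.y + row) * pitch + (size_t)r.x],
                       (size_t)r.w * sizeof(uint32_t));
        }
        ndirty = 0;
        full_redraw = 0;
    }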
<geist> traditionally, guis would draw directly on the framebuffer. as a window got a draw call because of some change, the app would 'directly' draw on the now-dirty rectangle
<geist> that's why you would get trails of garbage left behind on a slow computer or slow app as you scrubbed windows around
<terminalpusher> right, right, because you can see the pixels change as they're set
<mrvn> terminalpusher: also if you have a hardware framebuffer and a software framebuffer consider what operations you can split. Like outputting text on a console. On newline you should have the hardware framebuffer scroll while the software framebuffer only updates the new line.
<geist> but it had the nice effect that there's no double buffering per se, there's exactly one frame buffer, and the apps are given more or less direct access to the framebuffer (not entirely, but effectively)
<geist> made sense in an era where there's no gpu and not a tremendous amount of extra memory
<terminalpusher> yeah that does make sense
<geist> since double buffering windows is very memory intensive. i remember when macosx started doing it in the early 2000s, pre-gpu compositing. looked fantastic but it was a huge memory pig
<geist> they had some very optimized PPC code doing the sw compositing
<mrvn> AmigaOS had a flag for it and apps would set it when redrawing was too expensive.
<geist> yah, not sure when windows finally picked up the feature, but it was pretty late
<geist> cool thing is it still works with the 'redraw your dirty rect event' model. if your app has a fully double buffered window it just simply doesn't get any of those anymore
<geist> at least as a result of external events
<terminalpusher> so I understand the advantage of having double buffering for the whole screen, but why have it for individual windows on the system?
<geist> so that each app can update its window when it wants to, how it wants to
<geist> and it's just the job of the compositor (software or hardware) to stitch it together onto the final framebuffer
<mrvn> terminalpusher: The biggest gain is security. Each app can only draw in its own framebuffer and can't read other windows' contents.
<geist> so the model is much simpler: each app simply draws on its own windows like it has a full screen, effectively
<geist> obviously things like resizing the window are pretty complicated, that's basically a full redraw to the app
<terminalpusher> ah, yeah, right. Yeah I didn't intend to make it possible that a process can write outside of its own window
<mrvn> terminalpusher: you want that when you do shared memory between the display logic and the app.
<geist> also stuff like transparency effects and whatnot are 'free' in the sense that the compositor can simply do it
<geist> shadows, translucent windows, non square windows, etc
<terminalpusher> interesting
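A compact sketch of the per-window compositing model geist and mrvn describe: every window owns its own backbuffer, and the compositor walks the z-order bottom-up, copying each window into the screen. Clipping, blending and GPU use are omitted; the structures are illustrative:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    struct window {
        struct window *above;   /* z-order: next window stacked on top   */
        int x, y, w, h;         /* position on screen (assumed on-screen) */
        uint32_t *buf;          /* the window's own w*h 32bpp backbuffer  */
    };

    /* Compose every window, bottom-most first, into the screen buffer.
     * Opaque copy only; alpha, shadows and odd shapes would be blend
     * ops at this point -- "free" in the sense discussed above. */
    void composite(uint32_t *screen, size_t pitch, struct window *bottom)
    {
        for (struct window *win = bottom; win; win = win->above)
            for (int row = 0; row < win->h; row++)
                memcpy(&screen[(size_t)(win->y + row) * pitch + (size_t)win->x],
                       &win->buf[(size_t)row * (size_t)win->w],
                       (size_t)win->w * sizeof(uint32_t));
    }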
<geist> the older 'apps draw on the framebuffer' model is much more complicated since you have this complex set of notifications to send out to one or more apps and then wait for them to all complete
<geist> obviously the gui process/service/etc can use software stencils to actually keep apps from drawing outside of their window, but that's also software that's having to mask off draw calls
<geist> 'i'll send app A notification that its region X-Y is dirty, which corresponds to actual framebuffer W-Z, and when it does draw calls i'll translate X-Y to W-Z and make sure they dont color outside their bounds'
<terminalpusher> that makes sense
<geist> and this may be simultaneously going on with app B C and D. worst case is something like you minimize a window which uncovers a bunch of windows underneath
<terminalpusher> and it does sound complicated
<geist> in a composited system you minimize the window and the compositor simply changes what it does the next frame
<mrvn> and with overlapping windows the region of a window that is visible can be rather complex.
<mrvn> usually it's nice big rectangles but it can be really horrible in theory.
<terminalpusher> I see
<mrvn> Think of a maze or a sieve.
<geist> yah i think the worst case is something like a window on top of another one that's completely within the underlying window
<geist> so now you have 8 regions, or maybe 3 if you optimize it
<mrvn> there could be 1000000 1x1 pixel windows placed randomly over a bigger window.
<geist> yah
<geist> though the gui may optimize that i guess and just say it's a larger dirty rect and then stencil out those smallish windows
<mrvn> That's a case where I would say: screw it. I draw the lower window fully as 1 rect and then the 1000000 pixels.
<mrvn> or stencil as geist says.
<geist> years ago i started to write a gui like this and it was interesting, but i didn't get much farther than basic gui events and then stop
<geist> but it is an interesting data structure/async programming problem
<mrvn> delivering mouse events becomes interesting too
<terminalpusher> definitely
<mrvn> In practice nobody has 1000000 1x1 pixel windows.
<terminalpusher> probably not
<mrvn> it's a bit annoying, 99.9% of the time you only have simple rects and then there is that one case where it suddenly totally explodes your simple code.
<geist> what i dont know is if modern composited guis have notifications to apps that say things like 'you're completely covered and undrawable, dont bother updating yourself'
<geist> or do apps that are in the background, or on the other workspace, or whatnot fruitlessly update their screen in the background
<geist> they probably do, unless they're super fancy (web browsers maybe) because at any instant they can become uncovered and you never see them flicker to draw their shit
<geist> perhaps minimized screens do, but then it also depends on the gui. do they draw a little minimized icon in the corner that's still live?
<mrvn> You do want that though to save battery
<moon-child> if it's not in focus, it's not receiving input events, so probably not doing an appreciable amount of work to redraw itself anyway
<mrvn> There is also another approach. Instead of double buffering the window content you can double buffer what's below the window.
<moon-child> so probably no big deal
<mrvn> That approach is useful for things like menus. That way when the menu closes you just blit back the data that's below it.
<mrvn> moon-child: videos, animations, progress bars, chat messages, game updates, ...
<moon-child> sure
<mrvn> !ADVERTISING!
<mrvn> geist: redraws are normally really fast and I would assume when you minimize a window the content of the window just remains till the window below redraws. So the minimizing looks a bit delayed.
<mrvn> unless you have some fancy effect where you see the window shrink down to nothing that leaves artefacts behind.
<mrvn> Browsers are maybe the slowest to render but they tend to render pages to a larger framebuffer ahead of time so you can scroll smoothly. So they have a double buffer already.
nvmd has joined #osdev
<moon-child> even not-browsers do that
heat has quit [Remote host closed the connection]
heat_ has joined #osdev
<geist> mrvn: yah probably. of course modern guis also have fancy features like showing little thumbnails of what is in the window when you hit alt-tab and whatnot
<geist> but those can be simply 'you're not minimized anymore!'
<mrvn> geist: or the minimized icon shows the contents
* Bitweasil finds himself more and more using non-accelerated framebuffers anyway these days...
<geist> as a side note i have observed that browsers clearly understand not working when tabs are not visible. if you have a browser window with a lot of expensive animated tabs or whatnot one trick i've found is to open a blank tab and switch to it while you're not using it
<geist> and sure enough they'll background all the non visible tabs
<Bitweasil> There's the... Great Suspender extension, maybe, that used to do that.
<Bitweasil> But I've definitely noticed if you have "live Javascript charts" or something, the update rate in the background is horrid compared to the foreground.
<geist> right
<geist> i tend to have multiple windows with multiple tabs, sorted by topic, so at any given point i might have 5 or 6 active tabs (5 or 6 windows) so sometimes i do this to keep them from burning unnecessary cpu
<mrvn> yeah, background tabs might not render but they still get (timer) events and audio output.
<Bitweasil> I've been *trying* to be better about not having that many tabs open.
<Bitweasil> That I'm back on more machines with 4GB of RAM helps a lot
<geist> yah but seems to be far less according to at least chromes task manager. clearly the ones that are in the foreground get more pookie
<mrvn> I realy want something in firefox to shut down non-visible tabs after a minute.
<geist> Bitweasil: yah i actually have a problem, probably from growing up with dos and windows 3.1 or whatnot, of unnecessarily closing things too quickly
<geist> like i tend to instantly close windows/apps when i stop using it sometimes, and then 5 seconds later wish i hadnt
<bauen1> that's a good problem to have, i'm sitting at 2k+ tabs in firefox (yes, I don't need most of them and a good chunk should rather be bookmarks)
<Bitweasil> bauen1, browsers lose tabs often enough, in my experience from days past, that it's risky.
<geist> tabs i'm better at but usually if the window gets enough tabs that i can't read the first few characters of text, it's time to clean
<Bitweasil> Try running on a Pi or something for a while. It will cure you of that habit.
<Bitweasil> geist, all that means is you open a new window. :p
<geist> it's mostly stuff like open terminal, do some command, instantly close the terminal
<geist> then 5 seconds later wish i had kept it open
<Bitweasil> "I only have 30 tabs open!" "Yes, but you have 50 windows open. Each with 30 tabs..."
<geist> i also have mostly overpowered machines where none of this matter, is i guess my main point
<Bitweasil> Yes, those of us who try to use any recent products of your company *without* that kind of computer are quite aware of this fact...
<bauen1> Bitweasil: well, so far I could always restore them from the backup firefox makes, so far this firefox profile has survived from at least the beginning of 2018, so a laptop move, the debian firefox update in unstable that made hell break loose, and at least 2 tab restores because firefox "forgot" them due to a bug
<Bitweasil> "But it works fine on a 32 core Xeon with 128GB of RAM, NVMe, and a pair of 4k monitors!" :p
<geist> well i guess i stepped into that
<geist> i think my point was *far* more about not chrome windows
<geist> ie, terminals, etc
<bslsk05> ​'Blocking People in Real Life: Tom Scott at An Evening of Unnecessary Detail' by Tom Scott (00:08:47)
<Bitweasil> Sure, but I can run lots of those on a Pi without trouble.
<geist> right, and i have an old habit of closing the thing as soon as i'm done
<Bitweasil> Rust is pretty bad, actually.
<Bitweasil> A make -j 6 on rust stuff is likely to blow out 4GB of RAM and start killing stuff.
<Bitweasil> or cargo... [incantation goes here] -j6
<moon-child> old corp provided a machine with 48cores, 128g ram. Template blooooat. Even a dirty build could take close to 20s
<geist> yeah dont get me started
<geist> i have i think made peace with this problem, but it still annoys me
<moon-child> (actually, I just recently left them; they said they'd ask for the machine back, but they haven't done it yet. I kinda hope they forget...)
<mrvn> "Ignore that itch behind your ear, it's probably nothing."
<mrvn> Bitweasil: I want "-j mem" for that. Adapt the number to how much ram is free. And maybe freeze and swap out some if ram runs out till they can resume again.
<Bitweasil> I think that's more than make really should handle...
<Bitweasil> But, yes, that would be nice.
<Bitweasil> Or, perhaps, small board ARM computers *could get sane amounts of RAM for 2022.* :/
<Bitweasil> Sub-1GB per core is a bit silly.
<mrvn> or just kill off processes, lower the number of parallel builds and restart them later.
<mrvn> Bitweasil: then why did you buy so many cores? :)
<moon-child> well, you don't want compilation processes to be swapping out
<moon-child> so if you have swap, you're gonna have a bad time with that
<Bitweasil> mrvn, the N2+ has 6 cores.
<Bitweasil> And 4GB RAM.
<mrvn> moon-child: that's why you freeze them first.
<moon-child> but how do you detect that?
<moon-child> much easier if you kill all the swap and restart anything that gets oom'd
<mrvn> moon-child: kernel support or just watch the available ram
<mrvn> You would want something that says: this process group is a batch job. If you have to swap then freeze any number of them and swap them out completely.
<moon-child> I guess dynamic is kinda overkill, really; just measure avg memory use up-front, and use it to decide how much parallelism to use
<mrvn> moon-child: my idea would be adaptive. Take a guess and then go up or down as needed.
<Bitweasil> moon-child, the system either glitches hard and you see oom killer spam in dmesg if it recovers, or you launch a build, the system freezes, and you figure out that you ran out of RAM.
<mrvn> if you turn off overcommit then malloc/new simply fails and that's easy to detect
<mrvn> Just get make to notice failures due to out-of-memory and then build those targets with less parallelism.
<Bitweasil> "What the f--- is this exception I never expected?" "Oh, right, that's new running out of memory..."
GeDaMo has quit [Remote host closed the connection]
Likorn has joined #osdev
heat_ has quit [Remote host closed the connection]
nvmd has quit [Quit: WeeChat 3.5]
heat_ has joined #osdev
Raito_Bezarius has quit [Ping timeout: 240 seconds]
vimal has quit [Remote host closed the connection]
<heat_> so, i have this problem i've hit twice in my OS
<heat_> let's imagine I have a list of things (with a lock), and each thing has a lock as well
<heat_> I can add things to a list, by locking the list and then locking the thing's lock
<heat_> but I also need to be able to remove the thing from the list
<heat_> the problem is that the only way I can know what list the thing is in is by looking at the thing
<mrvn> you need to lock the thing before/after
<heat_> lock, read the thing->list, unlock, lock the list, lock the thing, see if the thing is still present?
<heat_> does this make sense?
heat_ is now known as heat
<mrvn> so the list lock protects prev/next and the thing lock protects the things own data?
<heat> yeah
<mjg> sounds like you are missing reference counting scheme
<mrvn> that's how my Task structure works
<mjg> you add stuff to $data_structure, you add a ref
<mjg> you want remove it from $data_structure? you take whatever lock you need to do the removal and unref. if that's the last ref, you free
<mrvn> Why do you have to release the thing lock to lock the list?
<mjg> i presume to avoid a deadlock?
<mjg> against lookup which finds the particular object
<mjg> again, this is a self-induced problem imo and see above for the stock standard solution
<heat> yes, to avoid a deadlock
<mrvn> Do you have a read/write lock? Maybe you need a third "I still need this thing" state.
<heat> I do but I don't see how a read/write lock fits into this
<heat> nor refcounting
<mjg> i don;t see how a rw lock would help. you still need write-locking for removal and that already excludes lookup
<heat> unless you're talking about recursive mutexes, in which case, yuck
<mrvn> heat: 1) mutate the thing's write lock to a read lock, lock the list, reacquire the write lock
<mrvn> heat: 2) have a refcount of things still needing the thing and a flag saying it's to be deleted. Then when releasing the lock if the refcount is 0 you delete the thing.
<heat> what?
<mrvn> You still have to check if the thing is valid when you lock it again but at least you don't end up with the thing already freed.
<heat> I don't need refcounting or deletion here
<mrvn> heat: it's a solution to the "see if the thing is still present" problem.
Raito_Bezarius has joined #osdev
<heat> so a "if --refcount is 0, remove it from the list?"
<mrvn> heat: if --refcount is 0 and to-be-delete, remove it
<heat> i see
<mrvn> heat: you can remove it from the list earlier if you are OK with that but as long as anyone still has a pointer to the thing you can't free it.
<heat> it's definitely not pretty but I see how this works
<mrvn> heat: simplest implementation: list<std::shared_ptr<thing>>
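A plain-C sketch of the scheme mrvn is describing, i.e. the moral equivalent of that list<std::shared_ptr<thing>> for a kernel without C++: the thing is unlinked from its list under the list lock, a "to-be-deleted" flag is set instead of freeing in place, and the last unref frees. The spinlock primitives and field names are stand-ins:

    #include <stdbool.h>
    #include <stdlib.h>

    typedef int spinlock_t;            /* stand-in primitives */
    void spin_lock(spinlock_t *l);
    void spin_unlock(spinlock_t *l);
    struct list;

    struct thing {
        struct thing *next, *prev;  /* protected by the owning list's lock */
        struct list  *list;         /* which list we're on (NULL if none)  */
        spinlock_t    lock;         /* protects refcount/dying/payload     */
        int           refcount;
        bool          dying;        /* set instead of freeing in place     */
    };

    /* Drop a reference. The thing was already unlinked from its list
     * (under the list lock) before dying was set, so the last holder
     * of a pointer is the one that actually frees it. */
    void thing_unref(struct thing *t)
    {
        spin_lock(&t->lock);
        bool last = (--t->refcount == 0) && t->dying;
        spin_unlock(&t->lock);
        if (last)
            free(t);
    }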
heat has quit [Read error: Connection reset by peer]
heat has joined #osdev
wootehfoot has quit [Quit: Leaving]
<heat> i mean...
<heat> i really don't like the solution
<heat> but i can't come up with something better
<mrvn> wait lists?
<mrvn> but I don't think you can get around the "to-be-delete" flag.
<mrvn> You can have a borrowable lock on the thing that you keep. If someone else wants to lock the thing they can borrow your lock but have to give it back after.
<heat> i can also just keep a single global lock
<heat> that kinda works
<heat> it's what fuchsia does for what I'm trying to fix (timer subsystem)
<mrvn> for all lists?
<heat> yes
<heat> in my case, each CPU has a timer, each timer has a list and a lock
<mrvn> My task structures are all in per-core lists and the kernel has IRQs off so that is implicitly locked. Otherwise it is a nightmare as you noticed.
<heat> when you do timer_queue(event) it picks the current cpu's timer
<heat> but you can do timer_cancel(event) from everywhere
<mrvn> do you implement that via IPI?
<mrvn> .oO(It's in your list, here, deal with it)
rorx has quit [Read error: Connection reset by peer]
<geist> yah the problem with per cpu locks for per cpu lists is when cancelling a timer you can't know up front what list it's on
<geist> so just has a single lock
<geist> a few solutions would be to dynamically allocate timer structures so you could just mark it canceled and move on
<geist> and if it fires, doesn't matter
<bslsk05> ​forums.raspberrypi.com: Multicore PI System Bus Design Question - Page 2 - Raspberry Pi Forums
<geist> the zircon/LK code will take the lock and remove the timer from the list (since it is required to 'reclaim' the timer_t) but doesn't tell the other cpu to reset its timer hardware
<geist> so the timer hw may fire extraneously, but that's okay
<mrvn> and just make sure the time you hold that global lock is very small
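A sketch of the single-global-lock arrangement just described: per-CPU sorted queues, timer_queue() always targets the local CPU, and timer_cancel() may run from any CPU because the one lock covers every queue. All primitives and types here are illustrative stand-ins, not zircon's or any other kernel's actual API:

    #include <stdbool.h>

    typedef int spinlock_t;                 /* stand-in primitives */
    #define NR_CPUS 8
    struct timer { struct timer *next, *prev; int cpu; bool queued; };
    struct tlist { struct timer *head; };
    void spin_lock_irq(spinlock_t *l);      /* lock + disable IRQs */
    void spin_unlock_irq(spinlock_t *l);
    int  current_cpu(void);
    void sorted_insert(struct tlist *q, struct timer *t);  /* O(N) by deadline */
    void list_remove(struct tlist *q, struct timer *t);

    static spinlock_t timer_lock;           /* one lock covers ALL the queues */
    static struct tlist per_cpu_queue[NR_CPUS];

    void timer_queue(struct timer *t)       /* always queues on the local CPU */
    {
        spin_lock_irq(&timer_lock);
        t->cpu = current_cpu();
        t->queued = true;
        sorted_insert(&per_cpu_queue[t->cpu], t);
        spin_unlock_irq(&timer_lock);
    }

    void timer_cancel(struct timer *t)      /* may be called from any CPU */
    {
        spin_lock_irq(&timer_lock);
        if (t->queued) {
            list_remove(&per_cpu_queue[t->cpu], t);
            t->queued = false;
            /* per the discussion: the owning CPU's timer hardware is not
             * reprogrammed; a spurious IRQ there later is tolerated */
        }
        spin_unlock_irq(&timer_lock);
    }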
<clever> geist: one of the rpi engineers confirmed what i had suspected and talked to you about before, there is a 256 bit bus between the vector core and caches
<geist> ah but not the arm because a53s/a73s aren't that wide
<clever> but other datapaths, like the bcm2835 arm core, are only 32bits
<clever> increased to 64bit on later models
heat_ has joined #osdev
<clever> and many of the bridges are intelligent, and can split/join transactions
<mrvn> geist: hah, that's what I do too. Mark the timer as canceled and the next time the CPU owning the list processes the timer it removes it.
<clever> so a pair of 32bit transfers may turn into a single 64bit transfer
<geist> yah the difference is the zircon/lk timers that i wrote forever ago the caller provides the timer_t and thus the object must be 'free' by the end of timer_cancel
heat has quit [Ping timeout: 260 seconds]
<geist> so it kinda makes a wrinkle on it, because otherwise you could do some atomic set of a canceled bit and move on
<geist> but since the structure must be removed from the list and freed at the end of the call, it makes it harder
<mrvn> geist: null the timer_t to mark it as cancelled?
<geist> OTOH, having the caller provide the structure is useful for a whole lot of other reasons, so i think it's a good compromise
<geist> mrvn: wouldn't work,. because it's in a linked list
<geist> have to at least delink it
<mrvn> geist: struct { List *prev, *next; timer_t *timer; }?
<heat_> no
<geist> sure, but then who allocates the struct?
heat_ is now known as heat
<heat> those lists are bad
<mrvn> geist: the kernel and the timer_t comes from the user
<geist> sure, but what about the struct that holds prev/next?
<heat> 1) you need dynamic memory allocation; 2) bad caching
<mrvn> geist: as said, kernel
<geist> yeah not compatible with this use case
<geist> like i said it's a tradeoff. by having insert/removal of timers do no allocation it can be done from any context, etc
<geist> but downside is the timer_t goes in a list so need to be able to remove it at cancel time
<mrvn> You can't have the prev/next in the timer_t struct itself if that remains owned by the user.
<geist> well, you *can* and it does
<heat> geist, is it not possible to have huge lock contention on that single lock?
<geist> that's the compromise
<geist> heat: possible, yes
<mrvn> and then they change it and the kernel crashes?
<clever> geist: is there a clear document on how wide of a databus each arm core has?
<geist> mrvn: of course
<geist> clever: yes. look in the core specific manual
<clever> *looks*
<geist> mrvn: note this is all kernel api. so 'user' in this case is stuff like thread_sleep() or event_wait_with_timeout()
<mrvn> heat: have you considered that a list is maybe not the right data structure in the first place?
<geist> so if that code corrupts the timer, that's bad time
<heat> what is?
<geist> a tree might be more useful, but really depends on how big N is in this case
<geist> linux does some sort of interesting heap style scheme that i think originated from VMS that kinda can't be unseen, but trades resolution for speed
<mrvn> heat: no idea. Consider the actions you use and what can happen in parallel. Maybe there is a better way to model the data to avoid the problem.
<heat> wait, idea
<geist> heat: the lock could probably be broken into core specific if there's a mechanism to safely atomically determine what core the timer_t is on and then go grab that specific lock
<mrvn> geist: Linux has a series of buckets, each for waiting >2^N ticks.
<geist> but there's some race conditions
<geist> mrvn: yeah i was avoiding mentioning it, because it tends to pollute folks that are figuring it out
<heat> I can avoid the lock contention issue by having a rw spinlock
<mrvn> geist: best short description I have for it is amortized bucket sort.
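A tiny sketch of the bucket idea mrvn summarizes (a timer wheel: the further out the deadline, the coarser the bucket, so far-away timers trade resolution for cheap insertion). The 64-slot/6-bit level math is illustrative of the scheme, not Linux's exact constants:

    #include <stdint.h>

    /* Map "ticks until expiry" to a wheel level: timers expiring in
     * roughly [64^k, 64^(k+1)) ticks land in level k, each level being
     * 64x coarser, so far-out timers get less resolution but O(1) adds. */
    static unsigned wheel_level(uint64_t delta_ticks)
    {
        unsigned level = 0;
        while (delta_ticks >= 64 && level < 8) {
            delta_ticks >>= 6;   /* each level is 64x coarser */
            level++;
        }
        return level;
    }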
<heat> inserts and cancels are serialised using the write lock (no biggie, unless you're hammering the lock 24/7)
<heat> when doing a walk you use a read lock
<mrvn> heat: networking code will hammer it
<geist> heat: it also depends on what you're optimizing for. is it insertion/cancel/timer tick?
<heat> yes
<heat> :)
<geist> RW locks tend to be kinda expensive all in all, so not sure it's a good idea. also in general all accesses to the timer queue are destructive
<geist> so most of the time you'd grab it with W
<heat> but each processor has its own timer queue
<geist> yes
<mrvn> One thing I've been considering is having a recursive structure for timers. So the networking module would have its own heap (or whatever) of timers and register the heap with the system's timer heap.
<clever> geist: i see cortex-a7 mentioning a 128bit wide axi bus with coherency support, though the rpi forum mod says the pi2 is using a 64bit bus, maybe the vendor is able to implement a smaller bus, via verilog params?
<geist> clever: yes
<heat> you only need the R vs W to stop any "readers" from messing with any CPU's list while it's walking "W"
<mrvn> The system would only care what the top of the heap is and then lets the networking module handle the contents of the heap itself.
<geist> heat: yes, but in what conditions would it be okay to access the list with R?
<clever> so the core docs are just the max width, and the implementor can do less, and then split and join transactions, got it
<clever> bbl, supper
<geist> clever: yah and i bet as you expand that width the transistor complexity of the AXI bus goes up exponentially
<heat> wait, I mean the inverse. each cpu is a reader, each inserter/canceller is a writer
<geist> heat: but how is it a reader?
<geist> doesn't it 'consume' the timer event when it's done?
<geist> and thus have to unlink it
<clever> geist: they also said its not a full crossbar, just N:1
<mrvn> heat: why would the CPU ever read the list (other than the front)?
<heat> yes but the queue is per-cpu so there won't be races between readers
<heat> since each reader is each cpu, so it has its own queue
<mrvn> heat: what would the cpu read?
<heat> the rw lock is a bad analogy but would stop me from reinventing the wheel
<heat> mrvn, because 1 IRQ doesn't mean 1 event
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
<geist> alternatively could probably endeavor to make it an even smaller lock
<geist> like a spinlock
<geist> and just use it for insert/removal
<heat> and what does the tick() use?
<mrvn> heat: have a per list variable for when the next event happens. Update that atomically. The CPU only ever needs to read that.
<geist> what we ended up doing in zircon is each cpu has a timer queue + one scheduler tick time
<clever> back
<heat> no, the CPU needs to read the list to call events
<geist> basically a cheat, but its based on the idea that the scheduler resets the tick every time and it's always the same callback/etc
rorx has joined #osdev
<geist> so that can be done outside of the lock, since per cpu
<geist> that was our cheesy compromise
<mrvn> heat: when it calls events it's going to remove them (assuming one-shot timers)
<heat> yes but you need to walk the list
<geist> means the regular timer queue isn't accessed
<mrvn> heat: but that's a write lock, not read
<heat> <heat> the rw lock is a bad analogy but would stop me from reinventing the wheel
pretty_dumm_guy has joined #osdev
<geist> sounds like you want some sort of exclusive/nonexclusive lock or something
<geist> with slightly different semantics
<mrvn> heat: you haven't shown any use case for a lock with 2 states. So far you only have write locks.
<heat> the cpus need a read state to get a nonexclusive lock (or a "READ" LOCK)
<mrvn> heat: for what?
<heat> so you don't have a threadripper cpu with 128 threads all contending for a single lock to dispatch events
<mrvn> insertion is exclusive, deletion is exclusive, event firing is exclusive, cancel is exclusive.
<heat> event firing is not exclusive
<clever> interesting, arm1176 claims to have a 32bit peripheral interface, and 64bit ifetch/data/dma
<geist> but my point is what happens once it fires? does it remove the timer from the queue? zircon does
<bslsk05> ​cs.opensource.google <no title>
<mrvn> heat: it removes events or updates their time. How is that not exclusive?
<heat> it's a percpu queue.
<geist> heat: we do. and in fact i was just pondering writing a note to myself saying i should optimize that
<mrvn> we all know that
<heat> no one touches the percpu queue except the cpu itself
<geist> if we tracked the timer for the scheduler tick *and* the head of the queue, we can skip diving into the lock if the head isn't there
<geist> ie if the scheduler timer was set for time 5 and the head of the queue is 10, and we fire the IRQ at 6, it can do the scheduler tick, then test if we're also past the current head, and avoid diving into the lock
<heat> dunno how many timers you'll have but there may be many
<mrvn> heat: you said removal can come from anywhere
<heat> and that's why you need the exclusive lock
<geist> yah i think there's an optimization in here somewhere. will add a note to look at it when i'm in the office again
<mrvn> yes, but on event firing too
<heat> you don't need the exclusive lock if you're touching your own cpu's queue
<geist> again. you haven't answered my question: what about removing the event from the queue once it's fired
<mrvn> So each cpu does a "read" lock on their own list and "write" lock on other lists? That's the same as just having a lock.
<heat> oh yeah, they're removed unless a certain flag is set (if set, it requeues it back)
<geist> heat: yeah that's what i keep trying to point out. even firing an event requires modifying the queue
<geist> and even if you have the flag set you might need to sort it in the list differently, thus modifying the list
<heat> yes
<mrvn> so in summary, all operations on the list need exclusive access
<heat> NO
<mrvn> which one doesn't?
<heat> the tick
<heat> this is all IRQ/softirq context
<mrvn> doesn't need access to the list, only to when the next event probably happens.
<heat> there's no preemption here
<heat> the tick also handles events
<geist> there must be some basic assumption we're not on the same page
<heat> all of this code is non-preemptive
<heat> right?
<mrvn> irrelevant if you have multiple cores
<clever> geist: my understanding, is that a 64bit bus port on something, would have 8 byte-enable signals (for each 8bit part), and a transaction involves setting an addr, then presenting some aligned data on the bus, and byte-enables for which bytes are used
<heat> no, not irrelevant
<geist> clever: yep. AXI can handle sub/width transactions
<mrvn> ok, maybe not, but mostly.
<clever> so an 8bit read from addr 0x12 would have the byte-enable for byte 2
<heat> holding a lock in insert() when getting ->tick() called inside an IRQ is a deadlock
<clever> and a 16bit read, would have bytes 2&3 enabled
dude12312414 has joined #osdev
<heat> so, softirq/irq context, always
<mrvn> heat: can't have IRQs enabled in insert or you need an atomic insert
<heat> yes, irqs disabled in insert as well
<heat> to avoid the deadlocking issues
<mrvn> which basically is just saying insert is atomic.
<clever> geist: but i have also seen things malfunction, the MMIO is only 32bit ports, and that 32bit, is wired to say both the upper and lower 32bits of a 64bit bus, so if you read from something that is 64bit aligned + 4, you're expecting it on the upper 32bits of the 64bit bus
<clever> geist: but the MMIO is only a 32bit device, and solves that by wiring to both halves
<heat> because this is all non-preemptive, only code that accesses other cpus' timers needs an exclusive lock, because insert/tick just touch their own queue
<heat> this makes sense, right?
<mrvn> heat: no, that's wrong
<clever> and if you illegally perform a 64bit read of mmio, you get the 32bit value repeated, because its wired to both halves of the 64bit bus
<geist> heat: thanks for reminding me of this, i think there is an optimization here that can be done
<geist> will add a TODO list for work to do it
<mrvn> heat: core 0 can be accessing it's own list while core 1 tries to remove a timer
<heat> which is why you take a shared lock while doing operations on your own list
<geist> since i suspect that in the many cases where the HW timer fires for the scheduler tick before the first timeout of the queue, we can avoid diving into the spinlock
<heat> np geist
<mrvn> heat: which would never ever share it with anyone because the only core that can use shared access is the core itself.
<mrvn> heat: so that's an exclusive access too
<heat> it would share it with other CPUs
<heat> don't forget we're all sharing the same global_timer_lock
<mrvn> what operation would do that?
<heat> insert/tick
* geist is gonna let only mrvn pick on heat for now
<mrvn> heat: HUH? since when? I thought we were talking about per-core lists with per-core locks?
<j`ey> big kernel lock D:
<geist> i think 1:1 is more fair
<heat> no.
<heat> if these are per-core lists with per-core locks, i would just have spinlocks
<heat> that would work fine
<mrvn> heat: So you want all cores to be able to modify their own lists in parallel. But if you meddle with another core's list you stop the world?
<heat> YES
<mrvn> ok, that makes sense then
<mrvn> but why not have per-core locks?
<heat> because of the horribleness it's causing
<mrvn> not seeing a difference
<mrvn> your case: lock timer, find out which list it is in, do exclusive/non-exclusive lock depending on which list it is, modify
<heat> how can you safely remove a timer if you don't know the list it's on
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
<mrvn> my case: lock timer, find out which list it is in, lock list, modify
<heat> it can stop being on a list, it can switch lists, between "find out" and "lock list"
<mrvn> same for both cases
<geist> now you have more locks to contend with
<geist> though at least they're more narrowly scoped
<geist> and you wouldn't be able to hold the lock across both of them, or at least in one direction
<geist> ie, list -> timer lock may be fine, but timer lock -> list wouldn't be
<heat> if you take the exclusive lock, you know it's a snapshot of the whole timer system's state, it's either in a queue, or it isn't
<geist> but yeah could work with it. the latter case you'd have to drop the lock, grab the core's lock, then revalidate it's still there
<heat> there's no "hey, i'm in this queue, oh wait i'm not"
<geist> which should be pretty rare
<mrvn> heat: so everything would first have to take the exclusive lock to find out what list it is in
<heat> the cancel yeah
<mrvn> or I guess a read lock would suffice there
<geist> side note: in practice we haven't really gone back to fix much of the zircon timer queue code because we've found in practice there isn't a lot of contention
<geist> ie, the max amount of contention that can be in the zircon design is number of threads in the system
<mrvn> heat: part of the problem goes away with the "to-be-removed" flag.
<geist> and that's only if threads are blocked with timeout or sleeping for time period
<geist> not saying it's not worth fixing, but i remember looking into this not very long ago and it turns out it wasn't yet so bad
<geist> but the more cpus you add the worse it'll get until it's terrible
<mrvn> geist: no problem with network timeouts?
<geist> network timeouts?
<heat> retransmission stuff for instance
<geist> sure
<mrvn> geist: all the timers TCP needs
<geist> but fairly certain the net stack runs its own timer queue
<geist> so it only ends up leaning on a handful at best of kernel timer events
<mrvn> that takes a big chunk out of it then
<geist> sure, but it may only be like 4 or 5 kernel timers, spread across all cpus
<geist> it's a thing, but it's not the biggest thing
<geist> same thing with N events per cpu. the linked list is sorted so it's O(N) to add a new timer event. that's an issue, but in practice even a loaded system we're only seeing maybe N=5 at worst
<geist> doesn't mean you can't construct a thing that will cause more, but hasn't become a high priority to fix yet
<mrvn> That works for a long long time, except for networking code.
<mrvn> Like you say, you have like 4 or 5 kernel timers and then 100000 network timers.
<heat> you don't have 100k network timers
<heat> unless you're literally running google.com on your OS
<heat> in which case, hey, better use linux
<mrvn> a torrent client can easily add 10k.
<heat> if I max out my bandwidth I can get maybe 10k in 5 seconds
<heat> but not concurrently
<mrvn> even if it's just 1k that will still kill your O(N) algorithm.
<heat> on a regular TCP download, my host gets 2 full TCP segments before it acks
<heat> assuming 40ms of latency, you won't get much overlap
<mrvn> heat: open a web page and you probably get 10+ connections all with their own timeouts. But nothing anywhere near a p2p network.
<geist> usually network stacks maintain their own timers
<geist> generally because they can more efficiently do it and probablydont need the resolution that the system timer facility provides
<heat> how?
<geist> probably a tick based thing
<geist> oh i dunno. i dont know what high end machines do
<geist> OSes do
<mrvn> The main point is that insert/cancel must be really fast. Events basically never ever fire.
<mrvn> Whereas on normal timers they nearly always fire
<geist> depends a lot on the system and what facilities it provides
<geist> i think in the case of zircon a sizable chunk never fire
<mrvn> network timers vs other timers
<geist> because things like 'wait on this with timeout' sets a timer, but lots of times it never times out
<geist> ah yeah. either way you should plan on everything
<mrvn> With networking I'm also talking of potentially a million timers per second.
<geist> anyway, going to ignore this a bit and try to heads down
<mrvn> or more for 10/50/400GBit networks
<mrvn> makes me wonder if NICs have an offload engine for some of the timer stuff
terminalpusher has quit [Remote host closed the connection]
Ali_A has joined #osdev
Matt|home has joined #osdev
Burgundy has quit [Ping timeout: 276 seconds]
Ali_A has quit [Quit: Connection closed]
Likorn has quit [Quit: WeeChat 3.4.1]
Likorn has joined #osdev