<bslsk05>
en.wikipedia.org: Thought disorder - Wikipedia
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
perano has joined #osdev
Izem has joined #osdev
<perano>
Btw. I managed to deal with the Rust language the whole night too, i am on form but Rust is something new for me, though now i have it set up on the iPhone.
<Izem>
Are all functions that you can use to interact with the operating system syscalls?
<clever>
Izem: i dont think apple would approve of that
<perano>
There is some nice package which kind of got my attention.
<zid>
Izem: depends what you mean by functions, and syscall, tbh
<perano>
usbip is implemented in the Rust language.
<clever>
Izem: ive heard that some of the sandboxing, is just if statements within the sandbox, and static analysis, to make sure you dont read something you shouldnt
<Izem>
zid: I am thinking of a typical OS that is written in C or similar
<zid>
Izem: depends what you mean by functions, and syscall, tbh
<Izem>
hmm
<zid>
nothing is a syscall, from C, it's a wrapper function that happens to do a syscall as part of its implementation
<zid>
or everything is
<zid>
depends what your definition is
<moon-child>
__asm__("syscall")
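As a concrete illustration of zid's point, a minimal sketch of such a wrapper on x86-64 Linux; my_write is a made-up name, not actual libc code:

    /* Sketch of a libc-style syscall wrapper on x86-64 Linux. A real
     * wrapper would also translate negative returns into errno. */
    #include <stddef.h>
    #include <sys/types.h>

    static ssize_t my_write(int fd, const void *buf, size_t count)
    {
        ssize_t ret;
        __asm__ volatile("syscall"                 /* syscall 1 = write */
                         : "=a"(ret)
                         : "0"(1L), "D"((long)fd), "S"(buf), "d"(count)
                         : "rcx", "r11", "memory");
        return ret;
    }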
wereii has quit [Ping timeout: 268 seconds]
<perano>
At first i was thinking more along the lines of modbus bridges, until i got hold of that particular package, which seems pretty good the way i understand it.
<Izem>
clever: what you describe doesn't sound very enticing
<clever>
Izem: the example i heard, was basically using objective c to call the method "foo" + "bar"
<perano>
Until i got some food i did not really understand what issues i ran into compiling rust, but even the iphone has an extension2021-compatible repository, yeah, prebuilt. it is not in cydia, but comes with install and uninstall scripts in the zipped trees.
<clever>
Izem: and because "foobar" never appeared in the binary, it bypassed the static analysis, and called something it shouldnt have permission to see
<perano>
i ran into some issues with socat alone.
<Izem>
oof
<Izem>
klange: are you going to implement the network tools or do you want them to run?
<klange>
I implement everything.
<Izem>
oh nice
<moon-child>
'I do want to write our own TLS implementation'
<moon-child>
lies! Lies and slander!
<Izem>
what kind of crypto does TLS use?
<kazinsal>
a bunch
<klange>
Lots of different things, different versions of the spec have different required minimum implementations.
<Izem>
ok
<Izem>
if I ever get to writing as much code as you have I'd like to take the Wirth route and design the hardware first before the os
Izem has quit [Quit: Going offline, see ya! (www.adiirc.com)]
pretty_dumm_guy has joined #osdev
vdamewood has joined #osdev
<perano>
I explained everything in personal messages to those people here about how to author the fast pipeline, now my final comments go to cache and registers. loop buffers with certain intrinsics and/or big OoO queues could use cache or registers without ever going to memory, this is being used soon in my secure work protocol to just perform my low-profile work, cache and registers can put the cpu master out of phase, it is security related.
<perano>
graphitemaster was commenting on cache, i hope you understand why those things are meant to be on the die.
wereii has joined #osdev
<perano>
and yes you notice the people from my country having hired very nasty people to stalk and trash me, those people are way nastier than any on that channel, the local ones bullied me for years to preventively or precautively to boot me off, i did ten years of training to overcome this issue, and i say still that they see as big as delusions as their jamaican fecalist partners, i have never even done and most of the people known personally, as well as i have not
<perano>
even attacked any of their friends, cause to me those persons were just living trashers without any value to me.
<perano>
similarly in programming as of late, i made ten years of effort and even twenty to know what i know, nothing came without effort
<perano>
in other words, very deep maniacal abusers, so something to learn about this case to anyone in similar shoes as i was in.
<perano>
it is nothing you ever gain dealing with such trashtalkers.
pretty_dumm_guy has quit [Quit: WeeChat 3.2]
pretty_dumm_guy has joined #osdev
isaacwoods has quit [Quit: WeeChat 3.2]
<perano>
i hear that every day how my relatives are associated with felons, there is nothing i had ever related to this but only being their victims to carry their mistakes as sanctioned persons in certain institutions, ridiculous people to me all of them, but what can i do, estonian court decided to corrupt things like this.
mctpyt has joined #osdev
<perano>
you think you are the smart ones to come to tell me how twisted those persons are or what? When i cut off with an order once they did not catch me on streets 4 times, and carried all the penalties for this kind of things, they are mad people.
<kazinsal>
klange: ping
<perano>
and i am even genetically entirely unrelated to those people. cause the only one similar to me, was appearing two generations back, in fact he was almost like me
<Nuclear>
are you by any chance the second coming of Terry ?
<Mutabah>
Oh, hey Mard
<perano>
you think you teach me nuclear science, who are you?
<kazinsal>
was wondering who was going to get here first
<perano>
i think we know way better than any of you in that subject too
perano was kicked from #osdev by Mutabah [We tire of you]
<Mutabah>
... now I'm self-conscious, I can never remember which spelling of that word is which
<zid>
tired
<zid>
tire
<moon-child>
Mutabah: which word?
<zid>
unless you meant you forgot how to spell 'we' I guess
<moon-child>
spelling was right
<Mutabah>
"tire"
<moon-child>
oh tire/tyre?
* moon-child
laughs in north america
<kazinsal>
"we tyre of you" would be a britishism meaning "we cover you in a rubber compound to assist in traction"
<Mutabah>
I work in (rail) vehicle measurement, and I have mixed up "tyre" and "tire" a few times
<Mutabah>
:D
<zid>
tyred isn't a word so just go with that cognate
<kazinsal>
ye hath beene tyred
<zid>
(hit by a tyre)
tacco has quit []
<klange>
Mutabah: for north americans, there's just the one :) but if you're growing weary, that's always an 'i'.
<Mutabah>
:D
pretty_dumm_guy has quit [Quit: WeeChat 3.2]
sts-q has quit [Ping timeout: 248 seconds]
sts-q has joined #osdev
flx-- has joined #osdev
flx- has quit [Ping timeout: 252 seconds]
Izem has joined #osdev
sts-q has quit [Ping timeout: 240 seconds]
orthoplex64 has quit [Quit: Leaving]
sts-q has joined #osdev
<clever>
klange: after comparing how LK fails on ext4, and works on ext2, i can see that the i_flags field on the inode, says that this inode is using extents rather than blocks, and ext2_read_inode() doesnt support that
<geist>
does it detect it or just blindly interpret extents as blocks?
<bslsk05>
ext4.wiki.kernel.org: Ext4 Disk Layout - Ext4
<clever>
geist: blindly mis-handles it all
<clever>
i just modified ext2_read_inode, to abort if any unsupported flags (non-zero) are detected
<clever>
so it will at least fail a little more gracefully
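A rough sketch of that kind of guard; EXT4_EXTENTS_FL is the standard inode flag value, but the function name and error handling here are illustrative, not LK's actual ext2_read_inode():

    #include <stdint.h>
    #include <errno.h>

    #define EXT4_EXTENTS_FL 0x00080000u  /* inode uses extents, not block lists */

    /* Refuse inodes the classic ext2 block-list walk cannot interpret,
     * instead of silently misreading i_block[]. Illustrative only. */
    static int check_inode_flags(uint32_t i_flags)
    {
        if (i_flags & EXT4_EXTENTS_FL)
            return -ENOTSUP;
        return 0;
    }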
<clever>
ext2 was using raw block lists, and when there are too many blocks, it starts forming a tree of indirect blocks
<clever>
but ext4 is using extents, where you just have a starting block# and block length
<clever>
so non-fragmented parts of the file, can be described more compactly, and you're not spending a large number of indirect blocks describing how to count
dutch has joined #osdev
<clever>
the code from LK for ext2, used uint32_t i_block[EXT2_N_BLOCKS]; /* Pointers to blocks */
<clever>
i think that was just 15 blocks?
<clever>
yeah, the above wiki agrees
<geist>
i dunno
<geist>
as you come up with fixes please post patches
<bslsk05>
github.com: lk/ext3_fs.h at master · littlekernel/lk · GitHub
<clever>
now i see why this complicated 12+1+1+1 exists
<clever>
the first 12 slots, ALWAYS point to the first 12 blocks of the file
<clever>
slot 13, always points to a single-depth indirect block, which itself holds the next N blocks of the file
<clever>
slot 14, points to a double-depth indirect block, that holds the next N single-depth indirect blocks
<clever>
and slot 15, is a triple-depth indirect block
<clever>
N depends on how many 32bit pointers you can fit into a block
<clever>
> Note that with this block mapping scheme, it is necessary to fill out a lot of mapping data even for a large contiguous file! This inefficiency led to the creation of the extent mapping scheme, discussed below.
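A sketch of the index math behind that 12+1+1+1 layout, assuming 32-bit block pointers and the usual 12 direct slots; illustrative only, not the LK code:

    #include <stdint.h>

    #define EXT2_NDIR_BLOCKS 12u  /* direct slots in i_block[] */

    /* Map a logical block index within a file to an indirection depth
     * (0..3) and the index to use at each level. ptrs_per_block is
     * block_size / 4, e.g. 256 for 1 KiB blocks. */
    static int block_map_depth(uint32_t file_block, uint32_t ptrs_per_block,
                               uint32_t idx[3])
    {
        if (file_block < EXT2_NDIR_BLOCKS)
            return 0;                          /* direct: i_block[file_block] */

        file_block -= EXT2_NDIR_BLOCKS;
        if (file_block < ptrs_per_block) {     /* slot 13: single indirect */
            idx[0] = file_block;
            return 1;
        }

        file_block -= ptrs_per_block;
        if (file_block < (uint64_t)ptrs_per_block * ptrs_per_block) {
            idx[0] = file_block / ptrs_per_block;  /* slot 14: double indirect */
            idx[1] = file_block % ptrs_per_block;
            return 2;
        }

        file_block -= ptrs_per_block * ptrs_per_block;  /* slot 15: triple */
        idx[0] = file_block / (ptrs_per_block * ptrs_per_block);
        idx[1] = (file_block / ptrs_per_block) % ptrs_per_block;
        idx[2] = file_block % ptrs_per_block;
        return 3;
    }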
<clever>
ahh, and as expected, its shoving 4 of the new extents, into the same 60 byte slot in the inode
<bslsk05>
github.com: lk/io.c at master · littlekernel/lk · GitHub
<clever>
ahhh!
<clever>
this code will compute how many indirection levels to recurse, and what index into each to read
<clever>
all of that goes out the window when using ext4
<clever>
eh_magic: 0xf30a
<clever>
eh_entries: 1
<clever>
eh_max: 4
<clever>
geist: progress!, i can see the expected magic, this file has a single extent, and the header is advertising room for 3 more in this block (the 60 byte blob in the inode itself)
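For reference, the on-disk structures being decoded there, per the ext4 wiki layout linked earlier; the struct names follow the kernel's, but this is not code from LK or anyone's tree:

    #include <stdint.h>

    #define EXT4_EXT_MAGIC 0xf30a

    /* All fields are little-endian on disk. A 12-byte header plus four
     * 12-byte extents is exactly the 60-byte i_block[] area, hence eh_max = 4. */
    struct ext4_extent_header {
        uint16_t eh_magic;       /* 0xf30a */
        uint16_t eh_entries;     /* valid entries that follow */
        uint16_t eh_max;         /* capacity of this node */
        uint16_t eh_depth;       /* 0 = entries are leaf extents */
        uint32_t eh_generation;
    };

    struct ext4_extent {         /* leaf entry (eh_depth == 0) */
        uint32_t ee_block;       /* first logical block covered */
        uint16_t ee_len;         /* number of blocks covered */
        uint16_t ee_start_hi;    /* high 16 bits of physical block */
        uint32_t ee_start_lo;    /* low 32 bits of physical block */
    };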
paulman has joined #osdev
<kazinsal>
I oughta start working on my OS again...
kulernil has quit [Remote host closed the connection]
freakazoid333 has quit [Read error: Connection reset by peer]
Izem has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<clever>
geist: wooo!, its now able to traverse a single directory, and read a 6 byte file!
<clever>
it lacks proper support for converting LE->native, and for dealing with files over 4 fragments, but its a start!
<clever>
`status_t ret = fs_mount("/root", "ext2", "sdhostp2");` is why things are at /root
<sham1>
.. is... odd
<kazinsal>
wonder what .. is represented as on disk
<sham1>
An inode, for whatever reason
<clever>
kazinsal: i believe ext2/3/4 reports . and .. as hardlinks to the respective inodes
<zid>
I'd be surprised if .. was represented tbh
<sham1>
Instead of letting the VFS synthesise . and ..
<zid>
seems like a waste to me
<geist>
some fses synthesize it, some dont
<kazinsal>
yeah it should be EXT2_FT_DIR
<zid>
and can only lead to fs corruption and not do anything useful
<geist>
if the directory inode, for example, has a parent inode # in the inode itself, then there's no reason to store it inline
<moon-child>
yeah
<geist>
iirc, befs had something like that
<moon-child>
like you'd have to do bookkeeping every time you move a directory. Easy to get wrong
<geist>
but, for filesystems like traditional FFS/UFS/EXT* there's no concept of a parent inode, so you just store it as two entries in the directory itself
<geist>
but really it's a holdover from early unix, when directory files were literally not special, they were just a regular file with a data structure that user space code parsed directly
<geist>
and then, i guess, opened files by inode # or something (dunno how that worked)
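For context, a V7-era directory was just a regular file full of fixed 16-byte records like this; a from-memory sketch of the historical format, not anything ext2 uses:

    #include <stdint.h>

    #define V7_DIRSIZ 14

    /* V7 Unix on-disk directory entry: user space could read() the
     * directory file and walk these records directly. */
    struct v7_direct {
        uint16_t d_ino;              /* inode number, 0 = unused slot */
        char     d_name[V7_DIRSIZ];  /* NUL-padded, not NUL-terminated at 14 */
    };
    /* The first two entries of every directory were "." and "..". */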
<moon-child>
even just a couple of freebsd versions ago, you could 'cat' a directory
<geist>
or at least needed suid to modify (adding dir entries, etc)
<moon-child>
spat out binary garbage, but it didn't fail
<geist>
yep
superleaf1995 has joined #osdev
<geist>
but it does have a nice property of being a known way to get the parent inode and walk up the tree when walking, so it does serve a purpose
<clever>
that was also an exploit against chroot, at one time
<clever>
.. let you traverse outside the chroot
superleaf1995 has quit [Quit: Lost terminal]
<geist>
yeah, chroot almost certainly has to shut that down
<moon-child>
there was a fun ios exploit a while ago
<moon-child>
where you could escape the sandbox
<moon-child>
by making a relative symlink
<zid>
forgetful fs drivers seem like an odd thing to accommodate with wasted disk space and potential for corruption, to me
ZetItUp has joined #osdev
<clever>
moon-child: the android backup restore process, didnt check for what it was about to overwrite
<clever>
moon-child: if you restored a backup, that contained a chmod +777 directory, 1000 junk files, and 1 specially named file
<geist>
i forget exactly what it was used for, but iirc there's some value to . that's non-obvious too
<zid>
Yea lots and lots of bugs in the world of slipping symlinks into a dir a high privilege process will deal with
<clever>
moon-child: but then start a shell script to race it, that creates a symlink at that special name (while its busy with the junk files)
<zid>
steam and windows both in the past short while
<geist>
but i dont remember it. it was something to the effect of stat()ting ("a/long/dir/struct/.") being a permission check
<moon-child>
clever: ooh, I see
<clever>
moon-child: the backup restoration (running as root) would blindly follow the symlink, and overwrite something important!
<moon-child>
fun
<geist>
ie, walking into a dir and being able to look up . is internally a test of permission
<clever>
moon-child: in the example i saw, it writes to a properties file, to flag the system as running inside a VM
<clever>
moon-child: that disables the security, allowing `adb root` to give a root shell
<clever>
but that also disables hw acceleration, so you need to create a setuid root binary, and undo it
ElectronApps has joined #osdev
tacco has joined #osdev
rubion has joined #osdev
rubion has quit [Ping timeout: 252 seconds]
zaquest has quit [Remote host closed the connection]
diamondbond has joined #osdev
GeDaMo has joined #osdev
ElectronApps has quit [Remote host closed the connection]
diamondbond has quit [Quit: Leaving]
mctpyt has quit [Ping timeout: 240 seconds]
mctpyt has joined #osdev
dormito has quit [Ping timeout: 250 seconds]
ZombieChicken has quit [Remote host closed the connection]
ElectronApps has joined #osdev
Arthuria has joined #osdev
dormito has joined #osdev
<ZetItUp>
hmm vmware actually improved the performance of my VMs
<ZetItUp>
so why have i been using virtualbox, which always gave me issues :P
<heat>
linux doesn't try to go for the htree when your entries all fit inside the first fs block
<clever>
heat: something like this should trigger it!
<clever>
ah, but that example was on xfs!
<Oli>
Hello, and good day!
<Oli>
moon-child: I have myself been thinking about doug16k; I haven't seen him since a while around.
<zid>
maybe he took some time off, last time I saw him he was getting it explained to him in 3 channels simultaneously that he was wrong about what he'd just said :P
<bslsk05>
github.com: edk2-platforms/Directory.c at ext4-dev · heatd/edk2-platforms · GitHub
<heat>
clever, this looks sane right?
<heat>
I don't think I'm missing any check now
<clever>
// Check if the minimum directory entry fits inside [BlockOffset, EndOfBlock]
<clever>
heat: i think thats what LK is missing, to properly roll over into the next block
<heat>
i check for minimum entry < rest of block, namelen + minimum entry >= rec_len, rec_len % 4, name_len > Remaining Block (this check could be removed), rec_len > Remaining Block
<heat>
I don't see what else can be checked for
<heat>
name_len >= and rec_len >=*
<heat>
wait no it's actually >, I'm stupid
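A condensed sketch of that set of checks, with invented names rather than the edk2-platforms code linked above:

    #include <stdint.h>
    #include <stdbool.h>

    #define DIRENT_MIN 8  /* inode(4) + rec_len(2) + name_len(1) + file_type(1) */

    /* Validate an ext2/ext4 linear directory entry before trusting
     * rec_len to step to the next one. Illustrative only. */
    static bool dirent_looks_sane(uint32_t block_offset, uint32_t block_size,
                                  uint16_t rec_len, uint8_t name_len)
    {
        uint32_t remaining = block_size - block_offset;

        if (remaining < DIRENT_MIN)           /* minimum entry must fit here */
            return false;
        if (rec_len < DIRENT_MIN + name_len)  /* entry must contain its name */
            return false;
        if (rec_len % 4 != 0)                 /* entries are 4-byte aligned */
            return false;
        if (rec_len > remaining)              /* must not run past the block */
            return false;
        return true;
    }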
<Oli>
I have seen doug16k doing way more good than anything else around: I appreciate his presence, and thoughts I have seen him sharing around.
<heat>
^^
srjek has quit [Ping timeout: 258 seconds]
tds has quit [Ping timeout: 258 seconds]
shlomif has quit [Ping timeout: 252 seconds]
kwilczynski has quit []
tds has joined #osdev
<kingoffrance>
well i dont know how to summon a doug16k, im not at that level yet. profile, port, c++ <nothing happens>
<kazinsal>
*ahem* micro-optimized memcpy
<clever>
kazinsal: i was recently doing some math, comparing the VPU vector load/store to dma and arm load/store, and the VPU seems to somehow do better than dma
<clever>
but i have to question if i did the math right
<sham1>
How does one micro-optimize memcpy anyway
<heat>
Just Do It(tm)
<kazinsal>
clever: it's entirely possible the DMA controller on the BCM2835 just sucks
<clever>
kazinsal: i was also comparing L1 load-hits, to what is likely an uncached ram->ram dma copy
<sham1>
I mean, userspace memcpy probably can be micro-optimized with SIMD, but probably not so for kernels
<bslsk05>
www.raspberrypi.org: Fast way to move memory? - Raspberry Pi Forums
<sham1>
That's the kind of BS I've come to expect from GNU, yes
<clever>
kazinsal: from my past testing a vector-load of 4096 bytes, takes 139 clock cycles, *256 to make that 1mb, to equal the dma i'm comparing against
<kazinsal>
avx memcpy is interesting and seems like a fairly reliably fast thing
<clever>
running at 500mhz, that should mean the vector load of 1mb (256x the 4kb load, 35,584 cycles) takes about 71 uSec
<kazinsal>
in my experience rep movsb *is* fast now but it's not *the* fastest
freakazoid333 has quit [Read error: Connection reset by peer]
<moon-child>
used by freebsd kernel
<moon-child>
(and userspace)
<kazinsal>
it'll be the fastest for simple unaligned multi-cache-line copies
<heat>
moon-child, it's not much faster than rep movsb
<moon-child>
heat: you are not doug16k :/
<heat>
i'm budget doug16k
<clever>
kazinsal: did the numbers i just say, all line up?
<moon-child>
fair
<sham1>
I just do whatever the compiler generates for `void *memcpy(void *restrict dest, const void *source, size_t count) { unsigned char *d = dest; const unsigned char *s = source; for (size_t i = 0; i < count; ++i) { d[i]=s[i]; } return dest; }` I don't care enough to select instructions myself frankly
<heat>
anyway rep movsb has a lot of overhead so it might not be the best option for $cpu for smaller copies
<kazinsal>
clever: I'd need to know more about the internals of the VPU or the DMA engine on the BCM2835 to be able to give you a proper answer
<heat>
same for stosb
<moon-child>
yeah. My memset falls back to stosb at...800 bytes, I think? Fallover is somewhere around there
<heat>
I've had kernel builds where I had literally 0 memcpy or memset calls
<heat>
they were all inlined into particular instructions (mostly rep movsb)
<clever>
kazinsal: assuming all of the math is right, copying 1mb with arm memcpy takes 349ms, dma 12.8ms, and vpu vector 0.142ms
<sham1>
If gcc decides to autovectorize an implementation of memcpy (which it probably won't, but still) then whatever
<moon-child>
800 for sse2, 256k for avx
<zid>
it always autovectorizes mine
<sham1>
I frankly don't care about if it emits SSE or AVX or whatever
<zid>
I've never had it not turn my memcpy def into the stock builtin
<zid>
with 400 avx ops
<zid>
I'm more than happy with rep movsb though, it has decent perf at every size and sse doesn't even beat it, only threads do because I have two memory ports and 4 channels
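A minimal sketch of the rep movsb approach being discussed (x86-64, GCC/Clang inline asm); my_memcpy is a placeholder, not anyone's production routine:

    #include <stddef.h>

    /* memcpy built on rep movsb. On CPUs with fast string ops (ERMS/FSRM)
     * this is competitive for medium and large copies; tiny copies may
     * still want a scalar or SIMD fast path. */
    static void *my_memcpy(void *dst, const void *src, size_t n)
    {
        void *ret = dst;
        __asm__ volatile("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");
        return ret;
    }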
<sham1>
Conventional wisdom tells you to disable vector instructions for kernel space, and while that might be a good idea, I don't like playing by the rules (aside from red zone, but that's less avoidable)
<zid>
yea I have them disabled in kernel so it won't
<clever>
sham1: i think the reason for that wisdom, is that you have to context switch the vector state, on every irq and syscall
<zid>
redzone is very avoidable, -mno-redzone
<kazinsal>
practical wisdom is that using AVX will likely make the rest of your kernel slow down
<moon-child>
yea
<kazinsal>
great in user-space because if you're running AVX code in user-space then you're probably running a *lot* of AVX code
<moon-child>
xsave all over the place = no bueno
<heat>
sham1, do you like slowing down every context switch by a lot
<kazinsal>
you're not just AVX memcpying once in a blue moon
<sham1>
Yeah, I disable red zone. As for vector state, I remember reading that it could be done lazily
<zid>
the lazy vector state stuff is actually slower now
<heat>
you can't do it lazily inside the kernel too
<moon-child>
you can use redzone with FRED iirc
<heat>
at least I wouldn't
<clever>
you would need to make vector ops trap, while in kernel mode
<clever>
and then have some code that you know isnt vectorized, to load/save it
<heat>
an FPU op in the wrong place and the whole kernel comes down crashing
<clever>
for linux, there are some mutex like functions, to clean the fpu out, and temporarily claim control of it
<clever>
so you can do vectorized stuff, in confined areas
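On x86 Linux those are kernel_fpu_begin()/kernel_fpu_end(); a sketch of the usual pattern, with a plain memcpy standing in for the real vectorized body:

    #include <linux/string.h>
    #include <linux/types.h>
    #include <asm/fpu/api.h>   /* kernel_fpu_begin(), kernel_fpu_end() */

    static void copy_with_simd(void *dst, const void *src, size_t len)
    {
        if (!irq_fpu_usable()) {   /* vector state not usable here: fall back */
            memcpy(dst, src, len);
            return;
        }
        kernel_fpu_begin();        /* claim the FPU; no sleeping until _end() */
        memcpy(dst, src, len);     /* placeholder for the SIMD body */
        kernel_fpu_end();
    }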
<sham1>
Well performance isn't my priority, rather correctness, but I will reconsider my position
<sham1>
At least for when building the compiler for kernel stuff. I suppose when I port gcc for my OS then I can decide that there are so-and-so builds that enable and disable things like red-zone and such
<clever>
kazinsal: i dont know about the bcm283x dma stuff, but i have seen the rp2040 dma in depth (it has far better docs)
<clever>
kazinsal: with the rp2040, there are 3 FIFO's, the dma core will generate pairs of read-addr and write-addr, and write them to the read-fifo, and dest-fifo
<clever>
kazinsal: the AXI interface will then consume 1 read addr, fetch data, and write it to the data fifo
<clever>
kazinsal: the AXI interface will also consume 1 dest addr, 1 data, and write it to the defined addr
<clever>
so it can queue up multiple copies, and then act on them purely thru the fifo's
<clever>
kazinsal: the main limitation, is that if a single bus operation stalls, it hangs ALL dma
<clever>
but there are also other things, to throttle the dma operations to the exact rate needed
<clever>
on the bcm line of SoC's, the dreq line just signals if a fifo is above or below some point
<clever>
dreq being active, causes dma to act on the fifo until dreq stops being active (either filling or draining)
<clever>
but because of transfers in flight, dreq has to stop before the fifo is full, causing some fifo space to go to waste
<clever>
but on the rp2040, the dma core keeps a counter, of how many transfers the fifo wants
<clever>
and for every clock cycle dreq is active, the count goes up by 1
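A toy model of that credit scheme, just to make the contrast with a level-triggered dreq concrete; this illustrates the idea, not RP2040 register-level behaviour:

    #include <stdbool.h>
    #include <stdint.h>

    /* Each cycle the peripheral asserts dreq, the DMA earns one transfer
     * credit; issuing a transfer spends one, so no fifo space is wasted. */
    struct dreq_counter {
        uint32_t credits;   /* transfers the peripheral fifo has asked for */
    };

    static void dreq_tick(struct dreq_counter *c, bool dreq_asserted)
    {
        if (dreq_asserted)
            c->credits++;
    }

    static bool dma_can_issue(struct dreq_counter *c)
    {
        if (c->credits == 0)
            return false;   /* nothing requested: pace the DMA */
        c->credits--;       /* spend one credit per issued transfer */
        return true;
    }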
mhall has quit [Quit: Connection closed for inactivity]
dormito has quit [Ping timeout: 240 seconds]
dormito has joined #osdev
pretty_dumm_guy has joined #osdev
gog has quit [Ping timeout: 268 seconds]
GeDaMo has quit [Quit: Leaving.]
heat has quit [Ping timeout: 248 seconds]
dormito has quit [Ping timeout: 240 seconds]
Izem has joined #osdev
srjek has joined #osdev
Izem has left #osdev [Closing Window]
dormito has joined #osdev
ahalaney has quit [Quit: Leaving]
hbag has joined #osdev
freakazoid333 has joined #osdev
gog has joined #osdev
freakazoid333 has quit [Ping timeout: 248 seconds]
dennis95 has quit [Quit: Leaving]
<Ameisen_>
I'm wondering how I could get ll/sc working on vemips without doing something absolutely awful like using the system's virtual memory mapping capabilities as a way to implement a crude bloom filter via access violation trapping.
<Ameisen_>
IIRC, the architecture requires it to handle at _least_ page-level granularity (though vemips doesn't really have pages)
<Ameisen_>
I'm not even entirely sure how hardware implementations handle it when there are multiple CPUs/cores - is there a shared unit for determining which addresses are 'monitored'?
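One common emulator approach, for comparison, is a software reservation table: LL records a reservation granule per virtual core, any emulated store to that granule breaks it, and SC only succeeds if the reservation survived. A sketch under those assumptions, not vemips code, and ignoring host-side locking:

    #include <stdbool.h>
    #include <stdint.h>

    #define NCORES        4
    #define GRANULE_SHIFT 6              /* e.g. 64-byte reservation granule */

    static uint64_t reservation[NCORES]; /* reserved granule per core */
    static bool     valid[NCORES];

    static void emu_ll(int core, uint64_t addr)
    {
        reservation[core] = addr >> GRANULE_SHIFT;
        valid[core] = true;
    }

    static void emu_store_hook(uint64_t addr)  /* call on every emulated store */
    {
        for (int i = 0; i < NCORES; i++)
            if (valid[i] && reservation[i] == (addr >> GRANULE_SHIFT))
                valid[i] = false;              /* break the reservation */
    }

    static bool emu_sc(int core, uint64_t addr)
    {
        bool ok = valid[core] && reservation[core] == (addr >> GRANULE_SHIFT);
        valid[core] = false;                   /* SC consumes the reservation */
        return ok;                             /* only perform the store if ok */
    }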