dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<clever>
nice
gildasio has quit [Read error: Connection reset by peer]
Vercas has quit [Read error: Connection reset by peer]
gxt has quit [Read error: Connection reset by peer]
gxt has joined #osdev
gildasio has joined #osdev
Vercas has joined #osdev
<lg>
macos ld doesn't love my old kernel code that I'm trying to resurrect. is a gcc cross-compiler the preferred option or is there a rosetta stone somewhere so that I can translate the options from gnu ld?
rustyy has quit [Quit: leaving]
nick64 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
rustyy has joined #osdev
<geist>
ugh finally started to parse the response to an IDENTIFY command. what a huge mess
<geist>
but... at least the ACS-2 spec is fairly clear about what bit is what, and what bit implies another, etc
<geist>
so looks like you can get away with just looking at a handful
Vercas has quit [Write error: Connection reset by peer]
gildasio has quit [Read error: Connection reset by peer]
gxt has quit [Read error: Connection reset by peer]
Vercas has joined #osdev
gxt has joined #osdev
gildasio has joined #osdev
<kazinsal>
yeah, there's an immense amount of information that IDENTIFY spits out
DeepComa has quit [Quit: .oO (bbl tc folks~!)]
garrit has quit [Read error: Connection reset by peer]
elastic_dog has quit [Ping timeout: 240 seconds]
garrit has joined #osdev
[itchyjunk] has quit [Remote host closed the connection]
gog has quit [Ping timeout: 256 seconds]
elastic_dog has joined #osdev
gog has joined #osdev
masoudd has quit [Read error: Connection reset by peer]
masoudd has joined #osdev
rustyy has quit [Ping timeout: 272 seconds]
gog has quit [Ping timeout: 256 seconds]
the_lanetly_052_ has joined #osdev
the_lanetly_052_ has quit [Max SendQ exceeded]
the_lanetly_052_ has joined #osdev
the_lanetly_052_ has quit [Max SendQ exceeded]
the_lanetly_052_ has joined #osdev
the_lanetly_052_ has quit [Max SendQ exceeded]
the_lanetly_052_ has joined #osdev
the_lanetly_052_ has quit [Max SendQ exceeded]
the_lanetly_052_ has joined #osdev
rustyy has joined #osdev
<Griwes>
30 minutes into prototyping a thing and already hit a problem which means I'll need to reevaluate some things. You gotta both love and hate that
<Mutabah>
At least it was only 30mins
mahmutov has joined #osdev
<Griwes>
Yeah, that's the love-it part
rustyy has quit [Ping timeout: 256 seconds]
rustyy has joined #osdev
Payam has joined #osdev
nick64 has quit [Quit: Connection closed for inactivity]
jack_rabbit has quit [Quit: Connection closed]
Oshawott has joined #osdev
archenoth has quit [Ping timeout: 240 seconds]
gog has joined #osdev
zaquest has quit [Remote host closed the connection]
kingoffrance has quit [Ping timeout: 240 seconds]
GeDaMo has joined #osdev
pretty_dumm_guy has joined #osdev
zaquest has joined #osdev
dormito has quit [Quit: WeeChat 3.3]
ElementW_ is now known as ElementW
dormito has joined #osdev
Payam has quit [Quit: Client closed]
<g1n>
hello
<gog>
hey g1n you safe?
<g1n>
hi gog, yes i am safe :)
<gog>
:)
<g1n>
i can view files from tar initrd!
<g1n>
but, how to form proper fs?
<mrvn>
g1n: you need a VFS, not an explicit FS. Later you may want to implement existing FSes, but for a start any RAM-based filesystem, no matter how stupid, will do.
<mrvn>
In the C++ Core Guidelines it says a pointer should always point to a single object. And then they have not_null(). But isn't a pointer to a single object that isn't nullptr exactly what a reference is? Any use of not_null() seems to me like you either violate the core guidelines or you would use a reference.
<j`ey>
Does it mean.. not pointing to an array of objects?
<g1n>
mrvn: yes i know that i need vfs. but where should i start implementing it?
<mrvn>
j`ey: yes. For an array you should use a span.
<mrvn>
g1n: a vfs usually defines the operations you can do on filesystems in general and usually includes generic caching. So think about what operations an FS can perform and what syscalls you want to provide to the user. E.g. open, close, read, write, readdir, ...
<mrvn>
think about blocking IO, non-blocking IO, async IO.
<mrvn>
and what parts will the kernel provide and what part will the libc implement.
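The operations mrvn lists can be sketched as a callback table the VFS dispatches through. This is a minimal illustration only; all names (`vfs_ops`, `vfs_node`, `fs_private`) are hypothetical, not from any particular kernel.

```c
#include <stddef.h>
#include <stdint.h>

struct vfs_node; /* opaque per-file object, defined below */

/* each filesystem driver fills in the callbacks it supports */
struct vfs_ops {
    int  (*open)(struct vfs_node *node);
    int  (*close)(struct vfs_node *node);
    long (*read)(struct vfs_node *node, void *buf, size_t len, uint64_t off);
    long (*write)(struct vfs_node *node, const void *buf, size_t len, uint64_t off);
    int  (*readdir)(struct vfs_node *node, size_t index, char *name_out, size_t name_len);
};

struct vfs_node {
    const struct vfs_ops *ops; /* which FS driver handles this node */
    void *fs_private;          /* driver-specific state (inode#, caches, ...) */
};

/* the VFS layer dispatches through the table, so syscall code never
 * needs to know which concrete FS it is talking to */
static inline long vfs_read(struct vfs_node *n, void *buf, size_t len, uint64_t off)
{
    if (!n->ops || !n->ops->read)
        return -1; /* operation not supported by this FS */
    return n->ops->read(n, buf, len, off);
}
```

A driver that only implements `read` still works: unset callbacks are simply reported as unsupported at dispatch time.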
<g1n>
ok
<g1n>
mrvn: so vfs should define funcs for registering fses and fs implementation should give funcs (open/close/other) that will be abstracted in vfs?
<mrvn>
g1n: if you mean mountpoints then yes, sounds like a plan.
<mrvn>
g1n: look at how linux has bind mounts and private/shared mounts, although the latter isn't without problems.
<g1n>
ok
<mrvn>
This can get really complicated; there are a lot of concepts that go into or through the vfs. Maybe just look at what's out there and then limit yourself to a bare minimum without closing doors for the future.
<mrvn>
No need to implement it all right now.
<mrvn>
My first vfs interface could just load the splash screen data from the initrd.
Burgundy has joined #osdev
ElectronApps has quit [Remote host closed the connection]
ElectronApps has joined #osdev
ElectronApps has quit [Max SendQ exceeded]
ElectronApps has joined #osdev
<g1n>
mrvn: ok, i currently already can display data from files, but need to abstract it to vfs
<mrvn>
g1n: The VFS also deals with all the stuff common to all FSes. Like walking a path, checking permissions. You don't want every FS driver reimplementing that and maybe having bugs there.
<mrvn>
or doing blocking IO when the FS drivers only do async IO.
<g1n>
so, the fs driver should abstract its fs into some abstract fs, and the vfs will parse it?
<clever>
for linux, every fs has a file-ops struct, defining what functions you call to open/read/write/close/other
<clever>
and every open file handle, has a copy of that file-ops struct
<mrvn>
Usually you have a struct with callbacks for that. In linux there are also a bunch of default implementations for those callbacks, so an FS can use them and only provide a few essential ones.
<clever>
but!, if you open a character device, the open handler can mutate the per-handle file-ops struct
<clever>
so a single general driver (character device) can then provide a different set of handlers, based on the major/minor#
<clever>
and somewhere in all of that, is the file walking code, and allowing a mount-point to hijack a dir and redirect it
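The per-handle trick clever describes, where the generic character-device open() swaps in a different ops table based on the minor number, can be sketched like this. Everything here (names, the minor number, the null-device behavior) is made up for illustration.

```c
#include <stddef.h>

struct handle; /* forward declaration for the ops table */

struct file_ops {
    int  (*open)(struct handle *h);
    long (*read)(struct handle *h, void *buf, size_t len);
};

struct handle {
    const struct file_ops *ops; /* per-handle pointer: open() may replace it */
    unsigned minor;             /* which concrete device this handle is */
};

/* /dev/null-style device: reads always return EOF */
static long null_read(struct handle *h, void *buf, size_t len)
{
    (void)h; (void)buf; (void)len;
    return 0;
}

static const struct file_ops null_ops = { .open = NULL, .read = null_read };

/* the generic chardev open handler: pick the real ops table from the minor# */
static int chardev_open(struct handle *h)
{
    switch (h->minor) {
    case 3: /* hypothetical minor number for the null device */
        h->ops = &null_ops;
        return 0;
    default:
        return -1; /* no driver registered for this minor */
    }
}
```

After `chardev_open` runs, all further calls on that handle go straight to the concrete device's handlers, without the dispatcher being involved again.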
<g1n>
hmmmmmmmmmm, ok
<mrvn>
The user <-> vfs interface usually deals with paths and file descriptors. The vfs <-> fs interface usually has inodes or handlers.
<clever>
so if you're opening multiple files on ext4, they will all share a common file-ops struct for the ext4 driver
<mrvn>
So the user says: open("/etc/passwd") and the vfs will translate that into file-ops and inode#17
<mrvn>
Note that NFS is basically the one big exception to FSes in that it works on paths, not inodes.
<clever>
and it will also call an open handler, to create a per-fs struct
<clever>
which linux then just stores in a void*
<mrvn>
klange: congrats
<clever>
that allows the fs (ext4 for ex) to hold things like the inode# and caches about where the indirect blocks are
<g1n>
mrvn: tar doesn't have inodes, does it?
<mrvn>
g1n: it kind of has. Not indexed by number, but it has the contents of the inodes (uid, gid, name, type, size, ...)
<klange>
tar has a struct before each file with all the deets, which is what an inode is in anything else
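The "struct before each file" klange mentions is, for POSIX tar, the 512-byte ustar header: fixed-width ASCII fields, with numeric fields (mode, uid, size, ...) stored as octal strings. Below is that layout plus a small octal decoder; the helper name is mine.

```c
#include <stddef.h>
#include <stdint.h>

/* POSIX ustar header: one 512-byte block precedes each file's data */
struct ustar_header {
    char name[100];
    char mode[8];
    char uid[8];
    char gid[8];
    char size[12];      /* file size in octal ASCII */
    char mtime[12];
    char chksum[8];
    char typeflag;      /* '0' regular file, '5' directory, ... */
    char linkname[100];
    char magic[6];      /* "ustar" */
    char version[2];
    char uname[32];
    char gname[32];
    char devmajor[8];
    char devminor[8];
    char prefix[155];
    char pad[12];       /* pad the block out to 512 bytes */
};

/* decode an octal ASCII field, stopping at the first non-octal char */
static uint64_t tar_octal(const char *field, size_t len)
{
    uint64_t v = 0;
    for (size_t i = 0; i < len && field[i] >= '0' && field[i] <= '7'; i++)
        v = v * 8 + (uint64_t)(field[i] - '0');
    return v;
}
```

With `tar_octal(hdr->size, 12)` in hand, the next header is at the current one plus 512 plus the file size rounded up to a 512-byte boundary, which is all a read-only initrd walker needs.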
<g1n>
oh ok
<mrvn>
g1n: what you do with a tar as initrd is unpack it at boot into something you can index.
<mrvn>
cpio would be the simpler form there.
<g1n>
oh hmmm
<clever>
cpio also has an inode#-like field, and that's used for hardlinks
<clever>
you can say that a directory contains foo.txt, refer to file-42 previously seen
<clever>
and not repeat the body
<g1n>
so, i can unpack initrd in smth like linked list (or tree)? but, how to store data, if i don't have malloc
<clever>
it also has a header, that says to wipe the file# cache
<clever>
so when you append 2 cpio archives, they don't cross-link file#'s
<mrvn>
g1n: you have the tar blob. you can point into it.
<g1n>
mrvn: ok
<clever>
so you only need to generate a tree of directory objects, and a linked list of files within a directory
<clever>
and point to the offset+size of the file data within the existing .tar blob
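The scheme mrvn and clever outline, building a tree of directory nodes whose names and data just point into the existing tar blob, might look like this. The structures and the string-compare-by-hand are illustrative; nothing here is from a real kernel.

```c
#include <stddef.h>
#include <stdint.h>

struct initrd_node {
    const char *name;             /* points into the tar header in the blob */
    const uint8_t *data;          /* points into the blob (NULL for dirs) */
    uint64_t size;
    struct initrd_node *children; /* first child, if this is a directory */
    struct initrd_node *next;     /* next entry in the same directory */
};

/* linear search within one directory's linked list of entries;
 * open-coded strcmp to keep the sketch freestanding */
static struct initrd_node *dir_lookup(struct initrd_node *dir, const char *name)
{
    for (struct initrd_node *n = dir->children; n; n = n->next) {
        const char *a = n->name, *b = name;
        while (*a && *a == *b) { a++; b++; }
        if (*a == '\0' && *b == '\0')
            return n;
    }
    return NULL;
}
```

Because no file contents are copied, `read()` on such a node is just a bounds-checked memcpy from `data + offset`, and the whole tree costs only one small node per entry.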
<mrvn>
g1n: but you really do need some way to malloc small variable sized blocks of memory for the VFS.
<g1n>
mrvn: so, i need malloc, and for it i need userland
<clever>
linux has kmalloc, for allocations done in-kernel
<g1n>
i have just alloc page, so it is not malloc
<clever>
LK just has plain malloc
<mrvn>
g1n: no userland. Make something kernel-internal. And think about making a memory pool + allocator for the vfs. Look at slabs for example.
<g1n>
ok...
<clever>
slabs can solve this without needing a malloc
<clever>
grab an entire page, treat it as an array of directory structs, and have a bitmap of what slots are used in the tail?
<clever>
and then you just need to keep track of which pages you're using for the directory struct slabs
<clever>
then your slab allocator can just hand out an entire `struct directory` from that array
<clever>
already of the right size
<mrvn>
The hard part is that the name of a file is variable size.
<clever>
offset+size into the .tar blob again?
<mrvn>
Yes, for the tar case you can avoid dynamically allocating the names.
<mrvn>
But later you wouldn't want the VFS to keep pointers into FS private data.
<g1n>
yeah
<g1n>
what is better for that, a tree or a linked list?
<mrvn>
VFS is the only place in my kernel that needs a (k)malloc.
<mrvn>
g1n: hashtable
<mrvn>
tree of hashtables
<clever>
a lot of FS's also have a tree of hashtables on-disk as well
<mrvn>
but that is an optimization. A tree of linked lists or just a linked list works as first approximation.
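One way to do the per-directory hashtable mrvn suggests: a fixed number of buckets with chained entries, keyed on the file name. Bucket count, structure names, and the choice of FNV-1a are all my own picks for the sketch.

```c
#include <stddef.h>
#include <stdint.h>

#define BUCKETS 16

struct dentry {
    const char *name;
    struct dentry *next; /* chain within one bucket */
};

struct dir_table {
    struct dentry *bucket[BUCKETS];
};

/* FNV-1a: tiny and good enough for short file names */
static uint32_t name_hash(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

static void dir_insert(struct dir_table *t, struct dentry *e)
{
    uint32_t b = name_hash(e->name) % BUCKETS;
    e->next = t->bucket[b];
    t->bucket[b] = e;
}

static struct dentry *dir_find(struct dir_table *t, const char *name)
{
    for (struct dentry *e = t->bucket[name_hash(name) % BUCKETS]; e; e = e->next) {
        const char *a = e->name, *b = name;
        while (*a && *a == *b) { a++; b++; }
        if (*a == '\0' && *b == '\0')
            return e;
    }
    return NULL;
}
```

Lookup drops from O(entries) to roughly O(entries/BUCKETS), which is exactly the "tree of hashtables" win once every directory node carries one of these tables.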
<clever>
so reading a file with a known name is far faster than listing the contents of a dir
<g1n>
i think just a linked list might not be good once you have directories
<g1n>
or, it could have the path in the name, couldn't it (like "test/test1.txt")
<mrvn>
In the end you need path walks to be fast. You will be checking permissions on paths very frequently.
<clever>
it would just be "test" and "test1.txt" in the lists
<mrvn>
A single linked list is really slow to find a file. A tree of linked lists is much better.
<clever>
but you could have a pathwalk cache? where "test/test1.txt" is a pointer to the answer
<mrvn>
Also: Are we talking vfs or tarfs here?
<GeDaMo>
"In early MCP implementations, directory nodes were represented by separate files with directory entries, as other systems did. However, since about 1970, MCP internally uses a 'FLAT' directory listing all file paths on a volume." https://en.wikipedia.org/wiki/Burroughs_MCP#File_system
<bslsk05>
en.wikipedia.org: Burroughs MCP - Wikipedia
<mrvn>
GeDaMo: hehe, devolution.
<g1n>
mrvn: currently about tarfs, i will try to make some vfs
srjek has joined #osdev
<g1n>
also, just checked that tar saves files in dirs as "test/" then "test/test.txt"
<clever>
I've also had bugs before, where test/foo.txt existed in a cpio but test/ didn't
<mrvn>
g1n: I would suggest this path forward: implement kmalloc() to allocate variable sized structures from a memory pool, implement hashtbl, parse the tar into a hashtbl using (directory-inode, name) as key. Then implement the directory structure and file-ops interface for the vfs.
<clever>
so at unpack time, it failed to create test/foo.txt, because test/ didn't exist
<clever>
but then in git, a directory is a list of entries, and an empty directory can't exist
<mrvn>
g1n: instead of (directory-inode, name) you can use some other handle, maybe even keep it abstract in the vfs interface so each FS can define their own format.
<mrvn>
clever: which I sometimes hate. A few git repositories have .PLACEHOLDER files.
srjek has quit [Ping timeout: 240 seconds]
<mrvn>
g1n: How is "test/test.txt" stored in tar? The full path or a reference to "test/" + "test.txt"?
dennis95 has joined #osdev
<mrvn>
g1n: I really suggest the use of separate memory pools for vfs and tarfs. So kmalloc() really should be a method of some object that subsystems create. It's best if memory from different subsystems doesn't mix, and you can often use specialized allocation methods depending on the use case. You could do a global kmalloc() for now, but splitting memory usage into separate pools later is much harder than doing it from the start.
<mrvn>
Again you can make that really stupid at the start: Grab 1GB of memory and increment a pointer by size in alloc(). And free() does nothing at the moment.
<clever>
linux does almost exactly that, for the pre-mmu self-decompression code
<clever>
it compiles a tiny gunzip (or others) c file, and has a very dumb asm-based relocation patcher and some dumb mallocs
<clever>
just enough to unpack itself, and then do things properly with C
<clever>
and with the mmu on, that second stage doesn't need relocation
<g1n>
mrvn: ok, i will try
<g1n>
just made VERY DUMB malloc
<g1n>
but seems it is working!!!!!
<g1n>
or no? hmm
<g1n>
how to test it?
<g1n>
cuz memcpy to a malloced char* gives the very same result (but the non-malloced one has space at the end)
<g1n>
oh, cool page fault lol
nur has quit [Ping timeout: 240 seconds]
Payam has joined #osdev
<mrvn>
Not sure how many bugs you can fit in 4-8 lines of code.
<j`ey>
8-16
<mrvn>
j`ey: is that a challenge? :)
<j`ey>
:D
<g1n>
seems, it is working?
<g1n>
lol
* mrvn
gives g1n the works-for-me seal of approval.
<g1n>
lol
<g1n>
currently i have this: uint32_t kmalloc(size_t size) { uint32_t tmp = mmaddr; mmaddr += size; return tmp; } where mmaddr is uint32_t and it is after end of initrd
<mrvn>
alignment to 4/8/16 bytes might be needed on non-x86.
<mrvn>
and didn't you have something to allocate pages? You should allocate pages and then hand out chunks of memory from there.
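g1n's one-liner with mrvn's two fixes folded in, aligning each allocation and carving from a pool of pages reserved up front, might look like the sketch below. `kmalloc_init` is a stand-in for wiring the pool to whatever the page/frame allocator provides; free() intentionally does nothing yet, matching the "really stupid at the start" plan.

```c
#include <stddef.h>
#include <stdint.h>

static uintptr_t pool_next; /* current bump pointer */
static uintptr_t pool_end;  /* end of the reserved pool */

/* point the allocator at a block of already-mapped memory */
void kmalloc_init(uintptr_t pool_start, size_t pool_size)
{
    pool_next = pool_start;
    pool_end  = pool_start + pool_size;
}

void *kmalloc(size_t size)
{
    /* round up to a 16-byte boundary so returned pointers suit any type */
    pool_next = (pool_next + 15) & ~(uintptr_t)15;
    if (size > pool_end - pool_next)
        return NULL; /* pool exhausted; later: grab more pages dynamically */
    void *p = (void *)pool_next;
    pool_next += size;
    return p;
}
```

Returning a `void *` instead of a `uint32_t` also sidesteps the truncation bug lurking in the original once the kernel goes 64-bit.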
vdamewood has quit [Read error: Connection reset by peer]
<mrvn>
food for thought while I go make some food for my tummy.
<bslsk05>
twitter: <TheDeadDistrict> We don't target Ukrainian people they said.  Looks like 🇷🇺 starting to shelling city with cluster munition. It's very common Russian practice - who cares about civilians when Putin need win.  Kharkiv today #StundWithUkraine #SupportUkraine 🇺🇦 #Kharkiv #Nazirussia #PutinHitler 144/ https://video.twimg.com/ext_tw_video/1498244120564875271/pu/vid/384x624/GTTo1y8e7RlqLj-L.mp4?tag=12
<mjg>
ops, wrong window
<g1n>
mjg: ##politics?
<g1n>
mrvn: oh, right
<g1n>
so i need to allocate one page if less than 4KB?
<mjg>
you can't do less than one page
<mjg>
which, if you are dealing with x86, is 4KB minimum
mniip has quit [Ping timeout: 604 seconds]
<g1n>
mjg: but, if i can't alloc less than one page, then i waste a lot of mem isn't it?
ElectronApps has quit [Remote host closed the connection]
ElectronApps has joined #osdev
<mrvn>
g1n: No. That's the point of the kmalloc object. It allocates a big chunk of memory and then hands it out in little chunks.
<mrvn>
g1n: At the start just set some fixed size like 2MB or 64MB. When you run out later think about how to make it allocate more pages dynamically.
mahmutov has quit [Ping timeout: 240 seconds]
nick64 has joined #osdev
troseman has joined #osdev
gog` has joined #osdev
dude12312414 has joined #osdev
sonny has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
sprock has quit [Ping timeout: 240 seconds]
<g1n>
mrvn: so in smth like mm_init i should alloc 64MB and then using kmalloc, cut it?
<g1n>
so to alloc 64MB i need to run kalloc_frame 64MB/4KB times?
xenos1984 has quit [Remote host closed the connection]
<mrvn>
g1n: it's a good start
xenos1984 has joined #osdev
<g1n>
ok, will that func that i have shown work with that implementation?
<g1n>
also then where should mm_addr start?
srjek has joined #osdev
mahmutov has joined #osdev
<mrvn>
with whatever virtual address you page those 64MB to.
<g1n>
ok
<g1n>
mrvn: seems i need to run kalloc_frame 16000 times for 64MB? i think i will start from 2MB for now lol
blockhead has quit []
<mrvn>
Don't you have a function kalloc_frames(size_t num) that allocates the frames, the virtual address space, maps the pages and returns the start.
<g1n>
oh
<g1n>
no
<g1n>
:(
<g1n>
i just have kalloc_frame
X-Scale` has joined #osdev
<g1n>
so i need to kalloc_frame and virt addrs?
<g1n>
and then map them
<g1n>
wait a second
<g1n>
i don't use paging in page frame allocator
* g1n
is sad to go back :(
<g1n>
lol
X-Scale has quit [Ping timeout: 256 seconds]
X-Scale` is now known as X-Scale
epony has quit [Ping timeout: 240 seconds]
the_lanetly_052 has joined #osdev
gog` has quit [Read error: Connection reset by peer]
the_lanetly_052_ has quit [Ping timeout: 240 seconds]
sonny has quit [Remote host closed the connection]
ElectronApps has quit [Remote host closed the connection]
nur has joined #osdev
FatalNIX has joined #osdev
<FatalNIX>
I have a terrifying idea; What do you think about using RINA as the basis for IPC in a toy project?
<mrvn>
Recursive Internetwork Architecture?
<FatalNIX>
I have known about RINA for quite a long time, and never did get a chance to finish John Day's book, but I was just thinking, it may be fun to play with.
<FatalNIX>
Yeah.
<mrvn>
you want network transparency for your IPC?
<FatalNIX>
Sure. Might get a little interesting, at least.
[itchyjunk] has joined #osdev
sonny has joined #osdev
sprock has joined #osdev
Payam has quit [Ping timeout: 256 seconds]
sonny has quit [Ping timeout: 256 seconds]
bgs has quit [Read error: Connection reset by peer]
bgs has joined #osdev
gildasio has quit [Ping timeout: 240 seconds]
xenos1984 has quit [Remote host closed the connection]
gildasio has joined #osdev
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
sonny has joined #osdev
gildasio has quit [Ping timeout: 240 seconds]
epony has joined #osdev
sonny has quit [Ping timeout: 256 seconds]
sonny has joined #osdev
sonny has left #osdev [#osdev]
[itchyjunk] has quit [Read error: Connection reset by peer]
<GeDaMo>
I read that one of the reasons for updating was the ability to declare variables in for loop headers so they didn't leak out to the rest of the function
<j`ey>
yep
nick64 has quit [Quit: Connection closed for inactivity]
<mrvn>
I don't see much in C23 that a kernel would be interested in. [[fallthrough]] and [[deprecated]], basically.
<gog>
probably not
<j`ey>
and linux already uses __attribute__((__fallthrough__))
<mrvn>
but that's a gnuism
<gog>
bless you
<mrvn>
No more 1s-complement binary in C23, I see. Signed overflow is still UB though, right?
<GeDaMo>
Could you #define [[fallthrough]] as a macro?
<mrvn>
GeDaMo: what would be the point?
<GeDaMo>
Forward compatibility?
<mrvn>
GeDaMo: I bet there already is a fallthrough macro.
<gog>
i just do /* fallthrough */
<mrvn>
that still gives compiler warnings
<gog>
hm
FatalNIX has quit [Quit: Lost terminal]
sonny has joined #osdev
dennis95 has quit [Quit: Leaving]
sortiecat has joined #osdev
sortie has quit [Ping timeout: 240 seconds]
sonny has quit [Quit: Client closed]
sonny has joined #osdev
kaitsh has joined #osdev
<GeDaMo>
mrvn: you're right "In order to identify intentional fall-through cases, we have adopted a pseudo-keyword macro ‘fallthrough’ which expands to gcc’s extension __attribute__((__fallthrough__))." https://www.kernel.org/doc/html/v5.6/process/deprecated.html
<bslsk05>
www.kernel.org: Deprecated Interfaces, Language Features, Attributes, and Conventions — The Linux Kernel documentation
<GeDaMo>
"When the C17/C18 [[fallthrough]] syntax is more commonly supported by C compilers, static analyzers, and IDEs, we can switch to using that syntax for the macro pseudo-keyword."
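The pseudo-keyword the kernel doc describes boils down to a few lines. This is a sketch in that spirit, not the kernel's exact definition: attribute when the compiler has it, no-op statement otherwise, plus a hypothetical switch showing the usage that keeps -Wimplicit-fallthrough quiet.

```c
/* compilers without __has_attribute get the no-op fallback */
#ifndef __has_attribute
# define __has_attribute(x) 0
#endif

#if __has_attribute(__fallthrough__)
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while (0)
#endif

/* hypothetical example: 'w' deliberately implies 'r' as well */
static int flags_for(char type)
{
    int flags = 0;
    switch (type) {
    case 'w':
        flags |= 2;  /* writable... */
        fallthrough; /* ...and intentionally also readable */
    case 'r':
        flags |= 1;
        break;
    default:
        break;
    }
    return flags;
}
```

Once compilers uniformly accept C23's `[[fallthrough]]`, the macro body can switch to that spelling without touching any call site, which is the forward-compatibility point GeDaMo raised.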
sortiecat is now known as sortie
<geist>
fallthrough is goooood. we use it in zircon too
gilidasio is now known as gildasio
<GeDaMo>
#define case break;case :P
sonny has quit [Ping timeout: 256 seconds]
sonny has joined #osdev
* gog
falls through the floor
gog is now known as gog`
the_lanetly_052 has quit [Ping timeout: 245 seconds]
<GeDaMo>
Is gog prime from the mirror universe? :|
masoudd has quit [Ping timeout: 256 seconds]
blockhead has joined #osdev
<gog`>
yes, but the gog from your universe is the evil one
dude12312414 has joined #osdev
gildasio has quit [Quit: WeeChat 3.4]
zid has joined #osdev
<zid>
I boughted a new cpu.
<geist>
grats. weren't you using the ancient xeon?
<bslsk05>
www.theregister.com: Concern over growing reach of proprietary firmware BLOBs • The Register
kingoffrance has joined #osdev
not_not has joined #osdev
<mrvn>
what's growing there?
<not_not>
Trees?
<not_not>
Maybe mushrooms
<not_not>
Possibly plants or mold
<mrvn>
From my view the proprietary firmware blobs are shrinking, not growing. Too slow but shrinking.
sonny has quit [Ping timeout: 256 seconds]
<zid>
My single core performance at the same clock speed is identical, who'd have guessed, but I get 2 more cores so multithread results are way up
<mrvn>
What? your identical core at identical clock has identical speed? Who'd thought?
<geist>
suppose it could have more L3 cache or whatnot
<zid>
Oh yea that's a point, I do have 2MB more L3 atm, but I don't think anything really tests that on these benchies
<zid>
I think I need to remount it though, cooler was being a poohead and I probably made a right mess of the thermal paste.
<gog`>
intel cpus from that era are pretty hard to kill ime
<mrvn>
tryed to kill a bunch of them?
<mrvn>
tried
<gog`>
i had a kentsfield i overheated a few times
<zid>
isn't kentsfield like 90nm
<gog`>
yeah it's earlier than sandy
<gog`>
iirc
<zid>
I'm not sure you can kill anything newer than a p4
<zid>
without overvolting
<gog`>
probably not
<mrvn>
sure, just put it in your microwave
<gog`>
lol
<mrvn>
or maybe not, that would overvolt it
<gog`>
definitely
<mrvn>
oven will do
<gog`>
yes just melt the wafer
<mrvn>
a few seconds with a heat gun
<zid>
I.. think I'm going to turn it down a bit, idling at 75C probably not-ideal
<gog`>
turn it up to 11
not_not has quit [Ping timeout: 240 seconds]
<zid>
WARNING: PCH 60C
sonny has joined #osdev
<geist>
i had my i7-2600k overclocked a good 30% for most of it's operational life (3 or 4 years)
<geist>
was a pretty impressive cpu
freakazoid333 has quit [Read error: Connection reset by peer]
<geist>
was like 3400 to 4200 or something
<geist>
maybe 4500? pushed it with a simple multiplier tweak, didn't even push the voltage or whatnot
<zid>
yea I never dick with voltages, especially now they're on a curve
<geist>
totes
<zid>
*drops the voltages as he says that*
<geist>
now with my 5950x i just push up the TDP a bit so let it run faster longer when loaded
<geist>
since that's about all you can do nowadays
<mrvn>
The only time I overclocked was my 68881 (external FPU) from 40MHz to 50MHz (cpu clock speed) because then it ran cooler.
Raito_Bezarius has quit [Ping timeout: 240 seconds]
xenos1984 has quit [Read error: Connection reset by peer]
<zid>
I can't drop the multipliers without rebooting, so plan: Drop voltage until it either gets cold or crashes. Either way I win.
xenos1984 has joined #osdev
<bauen1>
how exactly does overclocking make for a cooler cpu ?? if you ran my old macbook air at anything above 1.7ghz it would very quickly enter a loop of running fast -> thermal throttling -> running slower than some 6502 -> cooling down
<zid>
who said it would?
<gog`>
best guess is waiting on the cpu was not an idle state for the 68881
<gog`>
so being out of sync would make it do things more
<zid>
oh mrvn
<zid>
There's the pentium 4 speedstep fallacy too
<gog`>
yes
<zid>
where the cpu would enter sleep states less often because it would downclock itself to 1GHz and just run at full load 24/7
<zid>
rather than running at 30% load at 3GHz
<zid>
with 70% of the time spent in a fairly deep power-gated mode
<zid>
Well I've gone from 1.4V to 1.1V and it still hasn't crashed, surprising.
Raito_Bezarius has joined #osdev
<mrvn>
zid: In the case of the 68881 it would busy loop waiting for the bus cycles to sync up. No power saving in those old chips.
<mrvn>
plus 20% faster, so why the hell not?
<gog`>
everybody wins
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
freakazoid333 has joined #osdev
GeDaMo has quit [Remote host closed the connection]
kaitsh has quit [Ping timeout: 272 seconds]
kaitsh has joined #osdev
<zid>
My 3.3V dips under load quite badly :(
<zid>
It's not a super strong psu, so the 3.3V isn't specced that high. It'll do 500W over 12V but barely anything over 3.3/5
mahmutov has quit [Ping timeout: 250 seconds]
dormito has quit [Quit: WeeChat 3.3]
<geist>
mrvn: ah so by running at 50 it was a proper ratio to the fpu or whatnot?
<geist>
i was wondering if that was some sort of bus ratio thing but had forgotten to ask
<geist>
actually surprised it used less anyway since i assumed it was an NMOS or PMOS machine (pre-CMOS) and thus generally wouldn't matter if it was doing anything or not
colona has joined #osdev
<mrvn>
the different bus frequencies must have caused micro shorts or something that wastes power.
<geist>
i guess a question is were they ever designed to run at different frequencies or was it an error to have them that way in the first place
<geist>
and how did they derive different frequencies? did they have their own dividers?
<mrvn>
2 quartz
<geist>
huh. what machine was this on?
<zid>
wouldn't you have massive phase issues with that?
<mrvn>
A1200 with Blizzard 1260 cpu board.
<mrvn>
zid: async bus protocol
<geist>
yah with two crystals that'd be hard to keep them in sync, unless they had some sort of training thing to try to find some multiple
<geist>
even at the same freq seems like they'd drift in and out of it
<mrvn>
The board has a jumper to select the 50MHz (cpu) or 40MHz quartz (external).
<geist>
though i guess it's not like the fpu was used much back then, that was the era before games and whatnot used fpus
<geist>
aaah okay that makes sense
<mrvn>
Oh I totally used the fpu for my fractals.
<geist>
when i picked up an old 386 i got an fpu for the hey, and yeah. fractals
<geist>
exactly
<mrvn>
Anything using floats through the math.library would also use the fpu.
<mrvn>
Kind of bad to have a function call to do fmov fp0, d0; fmov fp1, d1; fadd fp0, fp1; fmov d0, fp0. But faster than software floats.