<geist>
it pretty much immediately loses the --target part as it drills in
<heat>
that's so weird
<geist>
it is an older clang and an older lld, so may vary
<heat>
yeah I'm running the newest stable
<geist>
that being said if i do it directly with ld.lld it seems to work
<heat>
try using a newer one
<heat>
i.e fuchsia's
<geist>
yah will have to try later
<geist>
got some stuff to do now
<geist>
anyway, probably the answer is to support both linker based linkage and compiler based, and then have a little matrix of build options
<geist>
ie, with gcc/clang and then with/without LTO. if using LTO then link with CC and maybe only works on some subset of clang. i dunno
<geist>
i'll also try to build llvm manually and if a 'clean' build like that works i'll be more okay with it
<geist>
since that's effectively what i require with gcc anyway
<geist>
also a partial build: clang + bfd ld is still useful
<geist>
thats what i had done years ago but just never really checked in
<geist>
i *think* i can just do the -Wl stuff by adding it in make/build.mk to all the existing options
<geist>
and/or i think maybe -Wl,"-foo -bar" works?
<geist>
and then conditionally do it there based on if it's using CC linker
<geist>
if you have time can you see if it's trivial to make this conditional like that?
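A minimal sketch of the conditional geist is asking about, assuming a hypothetical LINK_WITH_CC switch in make/build.mk (variable names invented, not LK's actual ones):

```make
# Hedged sketch: when linking through the compiler driver, prefix each
# raw linker flag with -Wl, so CC forwards it to the linker.
comma := ,
ifeq ($(LINK_WITH_CC),true)
GLOBAL_LDFLAGS := $(addprefix -Wl$(comma),$(GLOBAL_LDFLAGS))
LD := $(CC)
endif
```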
<geist>
alas gotta go for a bit
<geist>
precisely this sort of implicit hostism is why a long time ago i generally decided to drive LD directly if at all possible when doing bare metal
<clever>
mrvn: but it doesnt just turn it into hex, the parser skips that stage if compiling to a .o, so you dont waste time re-parsing the hex back into binary
<clever>
mrvn: and that kind of time-waste occurs if you #include a generated hex file
<clever>
the link also mentions that several compilers are having to throw out debug info, for any array over 256 elements, because that just slows it down
<mrvn>
clever: that's because gcc/clang are crap with arrays
<clever>
mrvn: the MS compiler also has a stupid limit of 64kb for string literals, lol
<mrvn>
optimization fails with clang and larger arrays
<clever>
mrvn: and no matter what you do, parsing hex into binary will never beat fread()
<mrvn>
clever: #anything is supposed to be C-pre-processor stuff and #embed is decidedly meant not to be done at that stage.
<clever>
yeah, from what i read, it sounds like its blurring the lines
<clever>
its turning into a special object, that the pre-processor output can turn into a hex array
<clever>
or the compiler can just treat as a binary blob
<mrvn>
and you have to do both or it's pretty useless
<clever>
if you're running the pre-processor and compiler in the same process, i would assume it skips the hex stage
<clever>
and then its as fast as .incbin
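For reference, the C23 #embed usage under discussion looks like this (a sketch; the directive expands to the file's bytes as an initializer, and a compiler that fuses the preprocessor and compiler stages can skip the textual hex round-trip clever describes):

```c
#include <stddef.h>

/* #embed expands to the contents of firmware.bin as initializer data,
   so the array holds the raw bytes of the file. */
static const unsigned char firmware[] = {
#embed "firmware.bin"
};

size_t firmware_size(void) { return sizeof firmware; }
```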
<mrvn>
The std::embed looks like a much cleaner proposal
<clever>
but is c++ only
<mrvn>
One thing that's bad is that it's another thing you can't implement in C++. You need a compiler builtin for it.
<mrvn>
Maybe it shouldn't be in the STL and std namespace but be a new keyword.
<mrvn>
.oO(but keywords suck too, especially new ones that aren't reserved already)
<clever>
template haskell solves this generally with $( i believe
<clever>
where you can insert $(expression) almost anywhere in the code
<mrvn>
like bashs $(run this)?
<clever>
and it must return a compiler internal type
<clever>
yeah
<clever>
instead of returning code to be parsed, it returns the result of parsing, an internal type
<clever>
so you can construct things directly
<clever>
such as an array filled with 2mb of binary
<mrvn>
that's what std::embed does
<clever>
but template haskell isnt limited to data embedding
<clever>
it can read the types of other classes, and generate functions
<clever>
its basically injecting user code into the compiler, which can then do whatever it wants with the compiler state
<klange>
Had I a hat in the ring, I would have gone with... "uint8_t foo[] [[embed("file.dat")]];"
<mrvn>
Looks like it should be implemented as struct { int fd; size_t size; }, at least for seekable files.
<mrvn>
klange: an attribute that returns an AST node?
<mrvn>
klange: I would like to be able to do concatenation
<mrvn>
#embed should have a trailing ","
<clever>
some languages get upset if you have a trailing , on arrays
<clever>
it makes the diffs messier, when you have to edit the previous line
<mrvn>
c++ member initializer lists do get upset.
<heat>
c o n s i s t e n c y
<klange>
I don't like that the committee went with an approach that basically just standardizes the stupid hack everyone was doing.
<mrvn>
struct Foo { Foo() : x(1), {} int x; };
<klange>
Like, wow, now the standard can do what GIMP has supported for years!
<klange>
mrvn: I don't care how it works from a parse standpoint, I see my attribute approach as meaning "initialize this object with this data, whatever the fuck it looks like"
<mrvn>
klange: I would have added "=". Make [[embed..]] the same as {data}
<klange>
But you can't = an attribute.
<mrvn>
yeah, that fails in the parser. You could do uint8_t foo[] = {}[[embed..]]; I think.
<klange>
I'm not sure I see the benefit with that, vs. just having the attribute be on the object, but I did only give it like three seconds of thought while reading the blog post on #embed.
<mrvn>
or something with "extern".
<mrvn>
although "extern" would conflict with the desire to static_assert on the contents.
<klange>
Hm, gotta say no to 'extern' here. The goal is "this object, in this compilation unit, should be initialized with the contents of this file"
<klange>
lemme... actually sit down and write something as if it mattered
<heat>
_Embed(filename)
<heat>
u8 array[] = _Embed("file.dat")
<mrvn>
there is a reason it's been an unsolved problem since C99 times. There isn't a clear convincing syntax for it
<heat>
bikeshedding ftw
<mrvn>
heat: yeah, why isn't there a builtin_embed("file.dat") if this is such a needed feature.
<heat>
there kinda is
<heat>
.incbin
<heat>
it's just not standard C
<heat>
same as __builtin_embed :)
<mrvn>
that's kind of shooting yourself from the back through the knee. inline asm :)
<mrvn>
not quite the same though I believe, can't static_assert on it.
<bslsk05>
github.com: incbin/incbin.h at main · graphitemaster/incbin · GitHub
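For comparison, a rough sketch of the .incbin trick the linked incbin.h wraps, done with a top-level asm block (section and symbol names are illustrative):

```c
#include <stddef.h>

/* Emit the file into .rodata and bracket it with symbols visible to C;
   the assembler copies the bytes in directly, no hex parsing involved. */
__asm__(".section .rodata\n"
        ".global blob_start\n"
        "blob_start:\n"
        ".incbin \"file.dat\"\n"
        ".global blob_end\n"
        "blob_end:\n"
        ".previous\n");

extern const unsigned char blob_start[], blob_end[];
#define BLOB_SIZE ((size_t)(blob_end - blob_start))
```

As mrvn notes, the downside versus #embed is that the contents aren't visible to the compiler, so you can't static_assert on them.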
<mrvn>
klange: you forgot: char foo[16] [[embed("file.dat")]]; // take first 16 bytes from file
<klange>
good point, added and clarified first section as "flexible arrays"
<mrvn>
[[embed("file.dat", 16)]] // first 16 bytes from file, [[embed("file.data", 16, 128]] // bytes 128-143 How do you get all but the first 16 bytes?
<klange>
,,16 ?
<heat>
-1
<klange>
love me some empty arguments :)
<mrvn>
qux[16] should 0 initialize any missing bytes I think.
<heat>
i believe this is all bikeshedding
<klange>
Of course it is.
<klange>
That's how standards work.
<heat>
why would you ever want to skip bytes, or get a range of bytes in a file
<mrvn>
totally, we aren't going to change C23 tonight and once it's in use it's not getting changed.
<kof123>
" basically just standardizes the stupid hack everyone was doing." i thought that was how standards work
<heat>
the simple usecase is "embed a single file into an array"
<mrvn>
heat: because the original data has some stupid header?
* kof123
ducks and covers
<klange>
kof123: touché
<heat>
mrvn, preprocess that, or skip that at runtime
<mrvn>
heat: In c++ it's a constexpr so you can just slice it to get the parts you need.
<heat>
cool
<mrvn>
even parse the file to get the offset dynamically
<zid`>
my complaint is still just that it's worse than bin2o, a tool we've been using for 25 years, and I feel kind of offended they've finally deigned us important enough to have a couple of emails
<mrvn>
E.g. Here is the Windows firmware.dll, extract the actual firmware blob for the device from it and embed it in the kernel.
<mrvn>
zid`: bin2o isn't a constexpr so embed is something more.
<heat>
C doesn't have constexpr
<mrvn>
heat: true. it does have const and optimizes when it knows the value of a const.
<klange>
It also has situations where there is a concept of a constant, mostly around initializers.
<mrvn>
You could do some nice things with this in C++. Like use a .ttf font, parse it, render it and put a bitmap into your kernel image.
<mrvn>
or vector graphics
<klange>
my eyes glazed over at the thought of a constexpr TrueType parser
<mrvn>
Compile for a specific display type and it converts the logo.svg into a splash screen.
<heat>
this seems like a living hell
<mrvn>
If you have a function that parses and renders it then converting the result into std::array at the end isn't that hard and you have constexpr.
<klange>
have i mentioned i finally started working on a real ast-based compiler for kuroko?
<mrvn>
no
<klange>
well i have now!
<mrvn>
Tetsuya Kuroko?
<mrvn>
About 20,000,000 results. Bad name.
<klange>
Kuroko Shirai
<zid`>
neat
<zid`>
you should answer my question about parsers then
<mrvn>
girly
<mrvn>
klange: what type of language is it?
<klange>
A Python clone.
<mrvn>
Kuroko Shirai is a level 4 esper and a character introduced in A Certain Magical Index.
<klange>
But with block scoping like every sane language.
<mrvn>
linear types?
<klange>
It is pretty much literally "Python, but with a 'let' keyword".
<zid`>
polite python
<zid`>
I am going to make german python, keywords are in caps and the initializer syntax is MUST
<klange>
When I finish up this AST compiler, I will be able to pretty easily add Python function scoping semantics as an option, and then it will be able to just run Python code [assuming supported stdlib usage]
<mrvn>
klange: nested functions?
<klange>
With exactly the same semantics as Python, yes [though with the block scoping mechanics, you don't declare "nonlocal" for your captured vars]
<mrvn>
and captured vars in loops end up having the last value the loop uses?
<mrvn>
(capture by reference)
<klange>
Yes, though I consider that a wart from Python, tbh.
<mrvn>
I hate having to write: lambda i=i, j=j, k=k, n=n: something(i, j, k, n)
<mrvn>
and if you accidentally call the lambda with an argument then "i" is suddenly overwritten
<klange>
I did two things differently in Kuroko. The declaration and scoping semantics are one, the other is that I didn't do static default arguments, so, uh, that won't actually work anyway.
<mrvn>
another python wart: def foo(arg=[]): arg.append(1); print(arg). what does foo(); foo() print? (answer: [1], then [1, 1], because the default list is created once and shared across calls)
<klange>
I do default args with a sentinel: an unset default arg gets a sentinel value, and it's compiled as if 'if arg is sentinel: arg = (the default expression)' were inlined at the start of the function.
<mrvn>
unlike c++ where the caller constructs them
<mrvn>
ocaml does it that way too. optional arguments are syntactic sugar for an std::optional arg + if unset arg=default
<klange>
Yeah, static typing can offer that pretty well. I _could_ have done this at a VM level with subexpressions stored as codeobjects and have the CALL mechanism do the work, but that was... more effort than I wanted to invest
<mrvn>
klange: In ocaml it's very little magic. The call mechanism just adds "Some " to the arguments and the called function does "let arg = match arg with Some x -> x | None -> default"
<mrvn>
That's all compile time. Runtime doesn't do anything special.
<mrvn>
You can actually write it all out to get the same effect. It's purely syntactic sugar.
<mrvn>
"Umbrella does shady shit, like facebook shady." :)
<klange>
I've been working on Kuroko for nearly two years now. I'm quite happy with how it's turned out. It powers the config + syntax highlighting in my editor, I ported 'ponysay' to it for a PonyOS release, a bunch of the ToaruOS build scripts are in it...
<Mutabah>
klange: Your ability to make a practical OS/language is truly amazing
<Mutabah>
o7
<bslsk05>
github.com: kuroko/gendoc.krk at master · kuroko-lang/kuroko · GitHub
<klange>
it's a bit more hairy in a comprehension, but at least in a statement-for with block scoping I can slap things into fresh locals and they'll be captured separately with each loop iteration :)
<klange>
but if you did the unpack in the loop it would 'work' like Python, as the unpack targets would be scoped with the loop entry, not the body
<klange>
there's the normal block scope the 'for' is in, a scope for the loop vars that covers all iterations, and a scope for the body that is specific to each iteration
<mrvn>
klange: only if you deep copy it
<klange>
Another notable property of the scoping is that I have locals in modules. Top level scope is globals, but as soon as you enter a block (if, try, with...) you get a local scope. This did have an unfortunate effect on some common Python patterns, like you need to go through extra declaration and assignment steps to "try: import...", but it's not been enough of a problem for me to consider changing things :)
<klange>
(it also means you can make module code faster by throwing it in an 'if True' ;) )
<mrvn>
so to access a global you have to declare it first?
<klange>
Yep. There's three opcodes around globals, DECLARE, GET, and SET. If GET or SET are used on global names that aren't already in the globals table, they raise exceptions.
<mrvn>
unlike python where SET defaults to creating a local variable and GET falls back to global if there isn't a local one.
<klange>
Anything that doesn't resolve to a local or nonlocal resolves to a global access - and I've got tools to do static analysis and point out "globals" that were not declared, which usually means a missing local declaration somewhere :)
<klange>
Python does... something interesting. Any variable not declared as global or nonlocal but assigned to within a function binds that name to a local (all of which are function scoped)
<mrvn>
nonlocal variables I find hardest to implement
<klange>
Yeah, thankfully I was reading through Crafting Interpreters and nonlocals = upvalues = closed closure variables.
<mrvn>
You kind of need a linked list of stack frames so you can iterate over the scopes and find the frame a variable is defined in at runtime.
<klange>
Indeed you do :)
<mrvn>
In a compiled language I guess you can lift the function and add all nonlocal variables as extra arguments.
<klange>
Basically all managed at runtime, there's opcodes to close upvalues from a scope that the compiler emits when it can, then handling in RETURNs and exception unwinding. That bit was extra fun - Lox doesn't have exceptions, so Bob's book couldn't help me figure that out :)
<mrvn>
That part is so much easier in a GC language and with CPS. Every function gets 2 continuations: the normal one and the exception case.
<mrvn>
no stack unwinding and nothing
<mrvn>
every function ends with a "jmp %rax"
<moon-child>
eh cps is slow though
<moon-child>
cpu has a ret predictor, no continuation predictor
<moon-child>
adding in the metadata isn't that bad. Could make destructors a lot faster, too. And pales in comparison to stackmaps for non-crap gc
<mrvn>
moon-child: wasn't there a register jump predictor too?
<moon-child>
kinda?
<moon-child>
the ret predictor actually knows that you're gonna return to the place you were called from
<moon-child>
the indirect predictor just kinda takes a wild guess based on the ip
<mrvn>
In most cases the continuation is a known value so you can emit a relative branch or even just write the next code block after this one.
<mrvn>
On ARM you don't have a RET. You always jump to the return register.
<mrvn>
Sometimes you pop PC from the stack though
<moon-child>
yeah, but that's a recognised idiom
<moon-child>
and I think the instruction form actually has a hint for if you're 'returning' or not
<moon-child>
which is used to prime the predictor
<klange>
I found a bug :) raising an exception that exits the VM was not closing nonlocals; shouldn't have affected most code, but test cases could easily be done in the repl :)
<mrvn>
wouldn't it close them as you unwind the stack and they become locals?
<mrvn>
how do you handle when a function escapes its scope? Do you ref count stack frames?
<AmyMalik>
hewo :3
<klange>
The compiler necessarily knows when a variable is potentially captured. When an inner function object is created, it is given an array of upvalue objects which track the stack locations of the variables it has captured from its outer scopes.
<klange>
In some cases, the compiler is able to emit explicit instructions to close upvalues because a scope is exiting, such as simple blocks, break/continue, etc.
<klange>
Exceptions work by scanning up the stack for a handler. If a handler is found, all upvalues tracking variables up to the stack depth of the handler are closed, the stack and execution context is restored to that point, and bob's your uncle.
<klange>
"with" statements also produce handlers on the stack, so that their exit routines may be run with the correct scope before further unwinding.
<klange>
The trouble here was the case where an exception handler was not found, the stack pointer was being reset but the relevant upvalues were not being closed.
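A compressed sketch of the upvalue-closing mechanism klange is describing, in the style of Crafting Interpreters (Value, VM, and field names are illustrative, not Kuroko's actual internals):

```c
typedef double Value;       /* stand-in; a real VM uses a tagged value type */

typedef struct Upvalue {
    Value *location;        /* points at a live stack slot while open */
    Value closed;           /* holds the value once the frame is gone */
    struct Upvalue *next;   /* intrusive list, ordered by stack depth */
} Upvalue;

typedef struct VM { Upvalue *open_upvalues; } VM;

/* Close every open upvalue at or above `last`: called when a scope
   exits, and (per the bug just found) when an exception unwinds past
   the slots those upvalues reference. */
static void close_upvalues(VM *vm, Value *last) {
    while (vm->open_upvalues && vm->open_upvalues->location >= last) {
        Upvalue *uv = vm->open_upvalues;
        uv->closed = *uv->location;   /* copy the value off the stack */
        uv->location = &uv->closed;   /* later accesses hit the copy */
        vm->open_upvalues = uv->next;
    }
}
```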
<mrvn>
Except that doesn't work when you e.g. register a callback.
<mrvn>
The callback lives longer than the stack frame the upvalues reference.
<mrvn>
If "with" creates a handler why doesn't every scope just produce a handler? Then unwinding would just call handlers until it hits one that catches an exception.
<mrvn>
Or do you want the C++ model of an exception having zero cost unless you throw one?
<mrvn>
Do you create a stack frame per function for the maximum size of all scopes or do you create new space every time you enter a scope?
<klange>
< mrvn> The callback lives longer than the stack frame the upvalues reference.
<klange>
Uh, yes, when the stack frame the upvalues reference stops being valid... the upvalues stop referencing the stack frame.
<klange>
That's... how that works.
<klange>
< mrvn> Or do you want the C++ model of an exception having zero cost unless you throw one?
<klange>
Yes.
<mrvn>
I kind of like the idea of having a data stack and return stack and alternative push a return address and unwind function to the return stack on every CALL.
<mrvn>
s/alternative/alternatingly/
<ddevault>
well this is unfortunate
<ddevault>
unmap a page in userspace and then write to it... and it doesn't GP fault
<mrvn>
forgot to INVLPG?
<ddevault>
nope, I remembered to
<heat>
GP fault?
<ddevault>
page fault, rather
<heat>
how does your invlpg invocation look?
<heat>
inline assembly is tricky and I remember I had that screwed up for like 4 years
<ddevault>
invlpg (%rax)
<ddevault>
called via the C ABI (not inline) with the virtual address in the first parameter
<heat>
that looks correct
<mrvn>
rax is the return value register, not the first arg
<Mutabah>
... rax isn't a parameter in most C ABIs
<heat>
are you sure it's not mapped?
<ddevault>
erp
<ddevault>
right
<ddevault>
rdi
<Mutabah>
rdi iirc?
<mrvn>
do inline asm
<ddevault>
that's probably my issue
<ddevault>
mrvn: no
<ddevault>
my language does not support it, and I don't like it anyway
<ddevault>
okay works with rdi
<ddevault>
derp
<mrvn>
are you sure? Have you set some attribute to mark the function as affecting 4096 bytes from the given address?
<ddevault>
my language also doesn't have a stupid big brain optimizer
<mrvn>
that always helps
<mrvn>
memory barrier?
<ddevault>
no memory barrier
<heat>
you don't need to have a memory barrier for invlpg
<ddevault>
but I'm not sure that one is required here
<mrvn>
is invlpg serializing or whatever you call it on x86?
<heat>
you also don't need to tell the optimizer shit
<heat>
else it will pessimize everything
<mrvn>
heat: depends on the optimizer
<ddevault>
I've always wanted to write a C compiler which rewrites your program into nethack if you use undefined behavior
<heat>
you're not actually going to want to invlpg something you're using, so you don't want to reload things from memory
<heat>
as it would
<heat>
"memory" clobbers are expensive
<mrvn>
heat: no, but addr[4096]
<heat>
there's no such clobber
<mrvn>
sure there is
<mrvn>
you pass it as argument
<mrvn>
The gcc docs have examples.
<ddevault>
in any case, I'm not really "using" these pages
<ddevault>
this is a page of memory allocated for & mapped into userspace
<ddevault>
by the time anything which "uses" it gets to run again, I've exited the syscall handler and memory barriers probably don't matter and the compiler has lost any context it could have used for dumb optimizations
<ddevault>
I wonder what I should do if userspace tries to unmap or destroy a page table while there are still pages mapped into it
<ddevault>
maybe that should just be an error
<zid`>
: "+a" (ax)
<zid`>
: "D" (s), "m" (s[0]), "m" (s[1])
<zid`>
that will clobber s[0] and s[1]
<mrvn>
zid`: you don't want to list every array element separate, the whole array syntax is easier.
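The whole-array memory operand mrvn means looks roughly like this, per the pattern in the GCC extended-asm docs (a sketch; ddevault's kernel avoids inline asm entirely):

```c
/* The cast makes the full 4 KiB page a single memory operand, telling
   the compiler exactly what the instruction touches without a blanket
   "memory" clobber. */
static inline void invlpg(void *addr) {
    __asm__ volatile("invlpg (%1)"
                     : "+m"(*(char (*)[4096])addr)
                     : "r"(addr));
}
```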
<ddevault>
I'm trying to figure out what seL4 does in this situation and I am reminded how bad this code is
<ddevault>
but I think it just doesn't deal with it? that would be surprising
<mrvn>
why would you allow user space to munmap any address outside of userspace range?
<ddevault>
that's not the issue
scaleww has quit [Quit: Leaving]
<ddevault>
the issue is, say that userspace has a PD object with some number of pages mapped into it, then they destroy the PD object, but those pages still think they're mapped into the PD
<mrvn>
PD object?
<ddevault>
page directory object
<mrvn>
why would userspace ever have something like that? That's a kernel thing.
<ddevault>
microkernel
<ddevault>
userspace "owns" its page tables and can ask the kernel to do operations against them
<mrvn>
it should not have read nor write access.
<ddevault>
naturally
<ddevault>
the page tables themselves are not mapped into userspace, and cannot be
<mrvn>
So why would you put that in userspace address range?
<ddevault>
I don't know what you mean
<ddevault>
all operations that userspace does against page tables is done via syscalls which operate on their respective capabilities
<mrvn>
If the user can do "munmap(&PD_object)" you have problems
<ddevault>
the user can kind of do that, by design
<mrvn>
the PD object should be in kernel memory and munmap should reject any address in kernel memory
<ddevault>
I don't think you fully understand my design
<mrvn>
Unless you have recursive memory management and page tables.
<ddevault>
physical memory is broken up into memory objects, which userspace receives a capability for (but cannot directly use, it's not mapped in their address space)
<ddevault>
then they can allocate more kernel objects (page tables, threads, etc) from their physical memory capabilities, which produce more capabilities that make use of that memory
<mrvn>
ddevault: userspace gets physical memory?
<ddevault>
but again, are not mapped into userspace
<ddevault>
if userspace creates a page capability, one page of physical memory is associated with that capability
<ddevault>
and page capabilities (and not other capabilities, regardless of whether or not they allocate physical memory to store their state) support a "map" operation
<mrvn>
Still not seeing a problem
<ddevault>
userspace can also allocate page table objects in a similar fashion
<mrvn>
All of that should map the page somewhere in kernel land
<ddevault>
page tables also support "map", but it doesn't map the page table's memory like any other page, so userspace cannot read/write it themselves, but rather maps it into a higher-level page table
<ddevault>
so we have a page directory with some pages mapped into it, and the user calls destroy on the page table
<mrvn>
then it either unmaps all the pages or returns EBUSY
<bslsk05>
harelang.org: The Hare programming language
<mrvn>
again all that code assumes you already have a verified capability
<ddevault>
aye, but you do
<ddevault>
it's one of the key features of a capability
<ddevault>
the actual capability structs are only r/w from kernel space
<ddevault>
userspace just refers to them by index, like a fd
<graphitemaster>
does hare use llvm or is it bespoke codegen or something like libfirm
<ddevault>
it uses qbe
<mrvn>
ddevault: you still have to check the index though
<ddevault>
aye, this is done earlier on
<ddevault>
each task has a cspace capability, which stores these capability objects (in memory userspace cannot access)
<ddevault>
all null by default
<ddevault>
invoking a capability will start by looking it up in this table (and reject it if the indexed capability has the wrong type or is out of bounds)
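A rough sketch of that lookup path (names hypothetical; the real kernel is written in Hare, not C):

```c
#include <stddef.h>

enum cap_type { CAP_NULL, CAP_PAGE, CAP_PAGE_TABLE, CAP_MEMORY };

struct capability { enum cap_type type; /* kernel-private state */ };

struct cspace {
    size_t ncaps;
    struct capability *caps;   /* stored in memory userspace can't access */
};

/* Resolve a userspace-supplied index, fd-style; NULL means reject. */
static struct capability *cap_lookup(struct cspace *cs, size_t idx,
                                     enum cap_type expected) {
    if (idx >= cs->ncaps)
        return NULL;                 /* out of bounds */
    struct capability *cap = &cs->caps[idx];
    if (cap->type != expected)
        return NULL;                 /* null slot or wrong type */
    return cap;
}
```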
<mrvn>
In my kernel you can't allocate memory. But you can read 4096 bytes from /dev/zero basically, which returns a page of zeroed memory.
<mrvn>
sort of
<ddevault>
neat
<mrvn>
I was playing around with making malloc() async. You can request memory without waiting for the reply and some time later a message appears in your mailbox with the memory attached.
<mrvn>
One thing I want to experiment with is a memory pressure and reclaiming system. Say you have your web browser with 100 tabs. Each has a big chunk of memory for the currently visible part of the page as framebuffer, cache for recently downloaded urls, ... When the kernel runs low on memory it would be nice if it would ask the browser to return some of it.
<ddevault>
aye, I'm going to do something similar since my disk cache will be in userspace
<mrvn>
exactly. So processes have to mark memory as "cache" and have some base and stretch cost factors and then the kernel computes which has to give up memory.
<mrvn>
it's more or less cooperative though. If they don't mark stuff as "cache" it can't work.
<ddevault>
aye
<mrvn>
processes could even have multiple cache objects with different costs.
<mrvn>
and clean and dirty. Dirty cache takes time to free up.
<mrvn>
ddevault: The other approach I have is that the kernel has a key[+pid]=value store but retrieving keys can fail when the kernel had need of the memory.
<ddevault>
it would be nice to be able to read and write to memory without a syscall
<mrvn>
So processes can cache anything and the kernel keeps track of LRU and hit counts.
<ddevault>
the requester should block until the current owner is available to release the memory imo
<ddevault>
or maybe some kind of locking system
<mrvn>
I have more of a push design. You do your work and then you send a message to the next processes with the memory attached.
<netbsduser`>
mrvn: a proven design
<netbsduser`>
apple uses it for iOS
<netbsduser`>
the EVFILT_VM filter for kqueue returns various sorts of notifications of memory pressure
<ddevault>
the main problem with EAGAIN when destroying a page table that contains mappings is that it will require whatever cleans up processes to know to unmap pages first
<ddevault>
rather than just destroying all of the child's capabilities
<ddevault>
the problem with unmapping all of the pages is that the page table has to know which capabilities are mapped into it
<ddevault>
sure it can enumerate the physical addresses it has mapped ez pz, but enumerating the capabilities themselves is less obvious
<ddevault>
may need linked list or something, much less pleasant
<ddevault>
one advantage of EAGAIN is that it means the state of a page's mapping does not change out from under it, which would reduce potential use-after-free issues
<ebrasca>
I am having problems reading the mac address of 82540EM!
<ebrasca>
( e1000 )
<heat>
why
<ebrasca>
I am trying to implement it for a OS!
<heat>
well no shit
<heat>
what problem are you having
<ebrasca>
I can't find the place where the mac is stored
<heat>
it's in the EEPROM iirc
<ebrasca>
I have read the BARs to try to find the data
<bslsk05>
fuchsia.googlesource.com: sdk/fidl/fuchsia.hardware.pty/pty.fidl - fuchsia - Git at Google
<raggi>
It's not a pty in the unix tty sense, so don't let it fool you, there are no sessions, session leaders, etc, but those events are basically what you need
<jafarlihi>
Will take a look, thanks
<heat>
raggi, you were on the build side of fuchsia right?
<raggi>
I made the first generation of the package system, verified boot flows, a lot of build and tools stuff. For a while I led a lot for the userspace teams. I left Google now, don't work on any of it anymore
<heat>
is there a neat syntax for subtracting elements off a list without getting any "element isn't in the list" error?
<raggi>
Gn is fairly simple, with some severe limitations on key lines that it doesn't itself aim for
<raggi>
Yeah
<raggi>
Add the items you want to subtract first, then subtract them
<heat>
like list = a + b - b but for -=
<raggi>
Just += first
<heat>
that's horrible
<heat>
thanks
<raggi>
Yeah, we used to do it all the time
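Concretely, the idiom raggi describes looks like this in GN (the flag is an invented example):

```gn
# GN raises an error if -= removes an element that isn't in the list,
# so append it first; the net effect is "remove if present".
cflags += [ "-Werror" ]
cflags -= [ "-Werror" ]
```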
<raggi>
If GN wanted to be a mainstream build solution some things like that should probably be more ergonomic, but the constraint is there to stop people making bad design choices in build strategies
<geist>
yeah it has some very explicit design decisions, though at least it's fairly up front about it. the README on the top of the gn repo is pretty clear that it's good for this and bad for that
<raggi>
I think that probably works out for chrome, but once you're doing complex stuff it is mostly just annoyance
<heat>
gn requires so much setup
<heat>
it's insane
<heat>
it's very versatile but very abstract
<geist>
it does go out of its way in the README to point out that it's designed for you to have to completely express how to build everything, ie no implicit build rules and whatnot
<raggi>
Kinda true, but the flip side is the default toolchain rules for cmake are kinda junky
<geist>
which in my experience with osdev or bare metal is what you end up doing in make anyway
<geist>
exactly, i always want to turn all of that off for any build system i use
<raggi>
It is pretty implicitly C toolchain oriented though
<raggi>
There are things I like about it, especially now I'm outside, using other junk again that is often worse
<geist>
i do remember roland did do a more or less direct conversion of the LK build system to gn once for the zircon build system (before it was tossed in favor of unification) and there were some pain points
<heat>
yeah but fuchsia ended up getting chromium's //build
<heat>
like most google projects do
<raggi>
The syntax and behavior are fairly self consistent and regular, and there isn't a lot of syntax
<geist>
mostly because LK's build system expects to add to itself as it parses, and gn wants everything known up front
<raggi>
heat: fuchsias //build is very different, but it started similar because people writing it came from chrome/nacl
<geist>
but i have to separate what we built on top of gn for fuchsia from gn the build system. it's possible we designed things into some sort of design corner; that's what i have most of the friction on
<raggi>
Yeah, the variants stuff is fuchsia special
<raggi>
And like the way fidl uses toolchains
<heat>
AIUI fuchsia actually used chromium's stuff as a base
<heat>
such that if you go way back in fuchsia.git you'll see a //build commit from like 2014
<raggi>
yes
<heat>
my main gripe with gn is that it's so non-trivial to install files
<raggi>
geist: if I was to design one, a couple of things I'd aim to demonstrate is to enable it to do efficient subtree builds and efficient artifact prebuilds, e.g. pick a package set that can come from some upstream rather than build
<heat>
turns out no one using gn has a good solution for this
<raggi>
heat: build systems doing installs is awful
<heat>
so fuchsia has a huge system of metadata plus scripts, etc
<raggi>
I will die on that hill.
<heat>
chromium's solution is just "cp chromium from out/"
<heat>
even inside a sysroot
<raggi>
That is only true for dev builds
<geist>
raggi: yah. fundamentally the fuchsia thing has ended up with lots of top level tables that describe things that IMO belong in a subtree. it fails the 'does the subtree describe everything it needs to build itself' rule
<heat>
you can't sanely set up a sysroot without tons of supporting code
<geist>
to add some new thing somewhere deep in the build you have to edit a few things up at the top, which is IMO a failure
<geist>
or some thing deep in a tree needs some build rule to build itself because it's special? that goes in some top level toolchain def, etc
<raggi>
heat: yeah, the sysroot setup in general is a portability disaster and something I'd like to see platforms and toolchains work toward standardizing and fixing
<geist>
i get it, that's probably an explicit design rule, but it doesn't scale well
<geist>
i always go out of my way to have the rules to build a thing defined next to the thing that gets built (if it's a special case) and i think that works well
<raggi>
geist: the lack of a full metadata pre-stage is part of the problem there. The gn design concern is that a metadata pre-stage allows bad behavior - and it does, but I think there are ways to use such a thing well, mostly as you're alluding to
<geist>
exactly. it's an explicit design decision
<geist>
also it allows gn to parse things in parallel, since it knows up front what it needs to parse
<raggi>
Yeah
<raggi>
I'm not sure how I'd try to rationalize that well, I think I might even use separate files for the pre-stage than the build stage tbh
<geist>
the opposite approach iv'e described a few times was a build system we had at $video_game_company where it was defined entirely in C#, and rules were classes
<geist>
the build started with a single node: parse the root .xml file describing the build
<raggi>
That's effectively how most distro package systems work, but it's not obvious as a build system
<geist>
and then as it parsed each rule could add more nodes and the graph was continually solved
<raggi>
*nod*
<heat>
custom build systems are cursed
<geist>
yah i guess it was a distro package style thing, except it was doing the parse and build at the same time
<bslsk05>
capnproto/ekam - Ekam Build System (30 forks/206 stargazers/Apache-2.0)
<geist>
but yeah, i tend to stick with make because it's so ubiquitous
<geist>
even if gn is nice i have no idea if google is going to abandon it in a few years or not. and you end up with the problem that haiku has: jam
<zid`>
jam is never a problem as long as you have baked goods
<raggi>
I've been thinking about bootstrapping a distro and one of the blockers is I really don't particularly want to build a builder, but traditional things I could crib don't meet my performance or isolation criteria
<heat>
geist, what's jam?
<geist>
exactly
<raggi>
A lot of the decent stuff out there now is some form of cgroup based isolation tacked underneath a make-alike
<heat>
then you must have a misconfigured toolchain
<griddle>
¯\_(ツ)_/¯ possible. It's worked for the past 4y w/o failing until I updated my ubuntu vm to 22.04 :)
<heat>
ninja doesn't even have internal rules
<heat>
afaik
<griddle>
Idk, I just got frustrated at it and backdated my os
<griddle>
I'll deal with it when I need to :)
<griddle>
Its the same version of ninja on both versions, so it is very likely some external configuration leaking into my build
<heat>
ye
<griddle>
I didn't try `env -i make`, that may have helped
<raggi>
cmake has lots and lots of conditions that might leak arbitrary flags into generators
<raggi>
debugging cmake builds is wild
<geist>
not too bad, null build of the largest LK project i know of: real 0m0.114s
<heat>
raggi, thoughts about soong?
<geist>
it's nonzero actually because it does the 'regenerate all the config.h files and replace if new' logic
<geist>
so it scales with the number of modules in the build
<raggi>
I don't have any thoughts about soong, I know it exists, but not much more
<griddle>
Yeah I just want a build generator that lets me create functions like `user_binary(name sources..)` that generate ideal makefiles w/o having to use Makefile macros
<heat>
griddle, meson!
<zid`>
see heat, you just did it :P
<geist>
i made an android build the other day and soong took for ever to run, but then i dont know what the alternative would do
<geist>
but it is frustrating to have to wait like 5 minutes for it to just generate the ninja files to start the real build
<heat>
GNU make was allegedly worse
<geist>
i generally dislike build systems that generate a thing and then require you run another thing
<geist>
but that's a pretty minor thing now
<griddle>
letting people handwrite make has been a problem for humanity
<heat>
that erm, leaves you with bazel and make
<raggi>
Eyeballing the soong examples, having constant target portability constraints in every target feels like a looking disaster
<raggi>
*looming
<geist>
right, it's no longer a popular opinion
<griddle>
monorepo project at work has quite possibly the worst Makefile on the planet
<griddle>
null rebuild takes 3min
<heat>
fuck
<geist>
yah i mean i can't really argue that much for make except i'm happy with it
<griddle>
make is fantastic for small projects
<geist>
but that is because i'm willing to take the responsibility to shepherd my makefiles along the path to enlightenment
<raggi>
You also know make
<heat>
have you ever set up gn for a small project?
<griddle>
you start to want something better when you have >200 link targets
<heat>
it's fantastic
<griddle>
gn?
<geist>
exactly. i can't say it's a great solution for everything
<raggi>
Googles allergy to make is that people who don't know make create really bad messes with it
<heat>
the build system files will be larger than the project :P
<geist>
(make that is)
<raggi>
Most of Google's build projects are defensive endeavors
<bslsk05>
gn.googlesource.com: gn - Git at Google
<raggi>
They're there to stop people doing things
<griddle>
ah
<geist>
i already treat it like the C of build systems. you can do more or less what you want, as long as there's a file underneath the rule, but you can screw yourself up so badly
<griddle>
all of google's build stuff is based on still using perforce right?
<heat>
no
<geist>
actually if make just had a notion of variable scope that you push/pop it would be so much more powerful
<heat>
what does VCS have to do with a build system?
<geist>
heat: oh you haven't heard of all the vcs/build system integrations there are?
<griddle>
p4 views make static build systems difficult
<geist>
griddle: no, not in at least 10-15 years i think
<heat>
geist, no?
<gog>
make gog
<griddle>
did they drop p4 or are they still wrapping it in their own vcs
<geist>
yes
<griddle>
:^)
<mjg>
i thought p4 is dead for years now
<mjg>
killed by git effectively
<raggi>
There is one aspect to it, which you will see in bazel, is that bazel has abstractions so that data sources and outputs can be rpc services
<mjg>
(not only at g)
<griddle>
you'd hope p4 was dead.
<geist>
heat: i knew of a few build systems back in the day, and maybe still in use, where the VCS and the build system were integrated in the sense that it knew intrinsically what was edited and what wasn't
<griddle>
not enough devs + 20yo software + monorepo means they probably still use p4
<geist>
and thus can feed into the build system as to what to rebuild or not
<raggi>
Making data sources and sinks be services pulls you away from a lot of os level optimizations, so it scales up to silly scale better, but scales down to single machine very poorly
<geist>
there was some thing i remember in college, windows based, that kept your source on a network share
<heat>
that's horrific
<mjg>
griddle: well i can point a finger at active opensource projects which still use cvs :)
<geist>
and thus could feed you prebuilt .o files and whatnot. it was horrific
<mjg>
griddle: and they have more than 3 people contributing
<geist>
but i had heard stories that there were other things like that back in the day
<raggi>
checking build environment is sane... ok
<raggi>
That's all about those environments
<geist>
Clearcase is i think what i'm talking about
<griddle>
there is an effort internally to use git and have code actions or whatever to sync to p4
<geist>
still a thing, looks like IBM owns it now
<griddle>
gluing two vcs together is the worst
<geist>
griddle: i honestly dont know what the core google stuff uses, but if i did i dont know if i can say precisely
<raggi>
It's all bespoke
<geist>
but AFAIK you interact with it with git, but the backend could be anything, becaus eit's easy to build backends for git
<geist>
ie, gerrit, etc
<griddle>
yeah
<griddle>
git's `status` code is amazingly fast for what it has to do
<geist>
also one of the powers of git, the network protocol is simple enough that you can easily pretend to be a server
<griddle>
p4 still takes *minutes* to determine what files I've changed
<griddle>
git on the same project doesn't break a sweat
<heat>
you know, i really like gerrit
<heat>
and git
<geist>
agreed. gerrit i'm pretty much totally happy with
<raggi>
Which part? The prolog configs?
<griddle>
shoot, now I'm looking into meson
<griddle>
bad influence
<raggi>
The patch part of Gerrit I'm ok with, the plugin parts for handling ci and so on, very much not particularly enamored with
<geist>
as a user of gerrit the patch/review part i'm happy with
<raggi>
Yeah
<geist>
at least relative to the alternatives i've worked with. notably github
<raggi>
GitHub somehow got a lot worse. Been using it full time again the last few months and eugh
<heat>
griddle, meson is trivial(tm) to use
<raggi>
It used to work better, 10y ago
<geist>
raggi: yah just the very branch centric way of doing patches is already annoying AF
<bslsk05>
github.com: onyx-package-tree/kuroko-1.2.5.patch at master · heatd/onyx-package-tree · GitHub
<heat>
it looks similar-ish though
<raggi>
geist: I actually like branches, but I don't know a review tool that really does them well
<griddle>
I understand the reason to not use a general purpose language for build defs, but I still think it'd be nice :)
<geist>
branches themselves are fine, but the ability for gerrit to associate a CL id with the patch, independent of who/how it got pushed, is pretty powerful
<raggi>
geist: I'm always disappointed when I could really move faster if stacked commits were more usable
<heat>
griddle, rust uses rust and soong uses go afaik
<geist>
it's the fact that if committer A makes a patch on their branch on their repo, no one else can edit it
<heat>
s/rust/cargo/
<geist>
or can force it to be rebased on the server, etc
<raggi>
geist: that could be done very easily with git notes, if the original review sha was included with the merge
<geist>
i was going through some CLs yesterday for LK and a few had trivial rebases, but i a) couldn't rebase the CL and b) couldn't rebuild it on top of the current head for the action because it wasn't rebased yet
<geist>
etc. have to wait for the original committer to rebase and upload or just pull it locally and push it in another PR, etc
<raggi>
Oh, you just want a rebase button on GitHub you mean (effectively)?
<geist>
more or less yes
<raggi>
Yeah
<raggi>
There is something there, but it's super wonko
<geist>
but that would break their flow, because a PR in github is intrinsically tied to a particular branch on a particular repo by a particular user
<geist>
so if you rebased it it'd probably have to also rebase their branch for it to keep the connection
<raggi>
You can hit edit, and then toggle the base and target branches
<raggi>
But it's very untidy and sharp, ux wise
<geist>
OTOH me editing someone else's PR is i guess maybe not good foo, but these are things that gerrit just does magically
<geist>
not editing a PR as much as dealing with rebases and whatnot
<raggi>
Sure, I mean Gerrit is backed by a branch too in fact
<geist>
well, but the connection to a commit is intrinsically the tag, not the name of the branch or whatnot
<raggi>
And you can just push to it, and what it does when you do is make another new branch
<geist>
the commit id yeah, but that's not so much a branch imo as an abuse of the path mechanism in git
<geist>
ie, git push :refs/for/...
<raggi>
Oh there are pullable branches too
<raggi>
For each patchset
<griddle>
any kernels I can look at that use meson?
<griddle>
Definitely looking into meson. Initial poking looks good
<raggi>
Same ones you use when you use the download buttons, you can push to those too - the behavior of doing so changes depending on the Gerrit config
<ebrasca>
heat: I think there is some problem with the documentation for 82540em , the BAR3 is not there.
<raggi>
griddle: the entry level ui looks good, what I hear is that it decays into a cmake like experience at the edges
<griddle>
the state leakage is all i want to solve
<griddle>
if there is a way to cut cmake off from the rest of my system at the border of my project, I'll stick with it
<griddle>
I feel like I am fighting against cmake not liking cross compiling
<geist>
ebrasca: it gets complicated because there are a bazillion versions of the intel nics
<geist>
in general the base BAR that gets to the base registers i think are consistently the same across the entire line
<geist>
but there may be some additional bars that come or go based on the version
<griddle>
also, cmake doesn't let me use clang. It sees "oh you have CXX set to clang++, you want to change all this other stuff elsewhere"
<raggi>
griddle: elementary uses meson, not an os, but distro
<ebrasca>
geist: Do you know where the mac address is?
<bslsk05>
github.com: lk/e1000.cpp at master · littlekernel/lk · GitHub
<geist>
that logic should work across the whole e1000 and e1000e line. basic stuff like that hasn't changed forever
<ebrasca>
geist: Thank you!
<geist>
but yeah i seem to remember there was some sort of memory mapped eeprom thing? maybe that's the BAR3 you were looking at? I think thats optional
<geist>
and/or something else. but i think you can *always* read the eeprom via the register accessors
<griddle>
no matter what build system I use, I will still put a `Makefile` in the root which invokes it :)
<geist>
it's just slower, but reading mac address is no sweat
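A condensed sketch of that read path, using register offsets from the 8254x manual for the 82540EM (some other family members move the EERD DONE bit, so this isn't universal):

```c
#include <stdint.h>

#define E1000_EERD 0x0014   /* EEPROM read register on the 82540EM */

static uint16_t eeprom_read(volatile uint8_t *mmio, unsigned word)
{
    volatile uint32_t *eerd = (volatile uint32_t *)(mmio + E1000_EERD);
    *eerd = (word << 8) | 1;           /* word address in 15:8, START in bit 0 */
    while ((*eerd & (1u << 4)) == 0)   /* poll DONE (bit 4 on this part) */
        ;
    return (uint16_t)(*eerd >> 16);    /* data comes back in bits 31:16 */
}

/* EEPROM words 0-2 hold the MAC address, two bytes per word, LSB first. */
static void read_mac(volatile uint8_t *mmio, uint8_t mac[6])
{
    for (unsigned i = 0; i < 3; i++) {
        uint16_t w = eeprom_read(mmio, i);
        mac[2 * i]     = (uint8_t)w;
        mac[2 * i + 1] = (uint8_t)(w >> 8);
    }
}
```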
<netbsduser`>
the correspondence between fuchsia and mach is very interesting
<heat>
griddle, I haven't messed with meson precisely because I don't know how it fits with a full OS
<netbsduser`>
i wonder if the Fuchsists deliberately borrowed from it or whether it's just convergent evolution
<griddle>
Yeah, I basically have two build systems for my kernel and the uspace
<griddle>
organizing that in CMake is annoying but doable
<griddle>
gets annoying when you need arch dependent configuration
<Matt|home>
o\
<griddle>
howdy
<heat>
griddle, do you have a link?
<griddle>
for my kernel?
<heat>
the system itself
<griddle>
https://github.com/ChariotOS/chariot The build system is nice to use in steady state, but is annoying to get going (toolchain is manually compiled right now)
<bslsk05>
ChariotOS/chariot - The Chariot Operating System (2 forks/43 stargazers/GPL-2.0)
<bslsk05>
github.com: Onyx/Makefile at master · heatd/Onyx · GitHub
<heat>
I was thinking about switching fully to gn but now I'm not so sure
<heat>
maybe I'll go for bazel? idk
<heat>
third party shit is hard to get going in gn because few things support gn
<heat>
so you either get a not-ideal build or you spend a lot of time rewriting things
<griddle>
Honestly, I've considered using my kernel as a chance to play around with making a build system to just learn how they work
<griddle>
strictly *because* you don't want to use external stuff outside the repo
<heat>
why don't you?
<griddle>
my system's only external dep is libfreetype
<griddle>
cause fonts are annoying
<griddle>
(granted, my window system hasn't worked for a year)
<griddle>
Been too focused on scheduler stuff :)
<griddle>
But yeah, I'd love to directly use makefiles, but I don't feel like they scale gracefully when you have a bunch of targets
<griddle>
something *like* cmake is nice cause you can automatically detect new targets programmatically w/o paying that cost when you build
<heat>
I do not like makefiles
<heat>
at least I don't know how to effectively write a large one
<heat>
without recursively calling make
<griddle>
I'm curious to see if the complexity of writing a build system's frontend is the language or the theory behind it
<heat>
I think gn is relatively small
<heat>
it's just a bunch of logic + a ninja writer
<griddle>
like, if you had a `build.py` or `build.scm` in every directory and had a driver around it to generate ninja/makefiles, I feel like a build system frontend could be whipped out in a weekend
<griddle>
making it fast is another thing :)
<griddle>
is gn a declarative language?
<heat>
isn't build.py like scons? you just end up being slow and bad :)
<griddle>
I mean, yeah
<griddle>
but if you don't change the build often, you don't have to configure often right?
<raggi>
You can whip out a bad version of most things in a weekend, but that's only generally better before you actually do it
<heat>
no, I would say gn is kinda imperative
<griddle>
heat: their examples arent very complete :)
<heat>
you can go into my usystem/ for crap GN
<griddle>
raggi: isnt that all hobby os projects :) /s
<heat>
or fuchsia/v8/chromium/whatever for mindnumbingly complete and unreadable GN
<raggi>
s/os//
<heat>
but IMO it reads like a regular language, which I like
<heat>
control flow is a first class citizen
<griddle>
inb4 "endforeach"
<raggi>
gn doesn't really have control flow
<heat>
doesn't it?
<raggi>
no, templates aren't functions, they're templates
<raggi>
and if only provides exclusion, you can't actually indirect where the flow is going
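A template is also the closest GN gets to griddle's earlier user_binary(name, sources) wish; a sketch with a made-up config label:

```gn
# A template stamps out a parameterized target per invocation; it is
# expanded in place, not called like a function.
template("user_binary") {
  executable(target_name) {
    forward_variables_from(invoker, [ "sources", "deps" ])
    configs += [ "//build:userspace" ]   # hypothetical config
  }
}

user_binary("init") {
  sources = [ "init.c" ]
}
```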
<bslsk05>
fuchsia.googlesource.com: BUILD.gn - fuchsia - Git at Google
<heat>
BUILD.gn are like Makefiles, they have targets, etc
<heat>
.gni are includes with templates, etc
<heat>
BUILDCONFIG.gn are toolchain setup AIUI, they are evaluated for every file, for every toolchain you define
<heat>
.gn is just a tiny file which puts everything together
<raggi>
s/define/depend on
<heat>
oh yes
<raggi>
gn only evaluates the set of targets in the active build graph
<raggi>
The input is evaluated exactly once per toolchain, not more
<griddle>
does anyone else have the problem where they have to relearn how virtio works every time they want to implement a new virtio device?
<griddle>
I impl'd block, then had to relearn to impl gpu, now I need to relearn to impl net
* wikan
says hello to everyone
<griddle>
howdy
<wikan>
I was wondering how can I ask some important questions, because my english isn't fluent
<wikan>
let me try anyway
<wikan>
let me say, I wanna to make a possibility to port my OS I want to write
<wikan>
someday in a future
<wikan>
someday port
<wikan>
for example to arm
<griddle>
yep, good to plan that out in advance
<griddle>
typically, it's done by identifying the core parts of your kernel that rely on the cpu hardware (virtual memory, interrupts, etc)
<griddle>
and defining a relatively small interface between the core part of your kernel and the architecture dependent part
<wikan>
exactly, and I am wondering what is arch dependent. For example VGA, drives... I have no idea
<griddle>
Hardware ought to be abstracted into a generic device model
<griddle>
ie: if you want to display a pixel on the screen, define an interface for a "video device" which is capable of doing so
<griddle>
then implement x86's vga on top of that instead
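A sketch of what that "video device" interface might look like in C; the struct and function names are invented:

```c
/* A sketch of a generic video device; the kernel draws through the ops
 * pointer and never touches VGA registers directly. An x86 VGA driver
 * fills in one of these; an ARM framebuffer driver provides another. */
#include <stdint.h>

struct video_device {
    uint32_t width, height;
    void (*set_pixel)(struct video_device *dev,
                      uint32_t x, uint32_t y, uint32_t rgb);
};

/* generic code written only against the interface */
void draw_box(struct video_device *dev, uint32_t x, uint32_t y,
              uint32_t w, uint32_t h, uint32_t rgb)
{
    for (uint32_t j = y; j < y + h; j++)
        for (uint32_t i = x; i < x + w; i++)
            dev->set_pixel(dev, i, j, rgb);
}
```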
<wikan>
so, "print char" or "draw box" as "API"
<moon-child>
wikan: abstraction is good, but don't get ahead of yourself
<moon-child>
it can be easy to make abstractions which are useless, or which miss important details
<griddle>
^
<moon-child>
the best way to find the happy in-between is to start by building something very specific
<wikan>
abstraction is like an api?
nyah has quit [Ping timeout: 240 seconds]
<moon-child>
abstraction is when you hide irrelevant details
mavhq has quit [Ping timeout: 256 seconds]
<griddle>
for example, if you have multiple ways to display printed data -- say through a serial port -- you might want to have a level of indirection between your printf library and the hardware
<moon-child>
it's what you want to do. Hiding the details which are different between x86 and arm, but which don't matter for most of your os
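A tiny sketch of that indirection, with invented names: the printf-level code writes through a sink pointer and never knows whether serial hardware or a framebuffer is behind it:

```c
/* A sketch of a console sink; printf-style code funnels bytes through
 * whatever function is installed here. All names are invented. */
static void (*console_putc)(char c);  /* current output sink */

void console_set_sink(void (*putc)(char c)) { console_putc = putc; }

/* assumes a sink has been installed before the first print */
void console_write(const char *s)
{
    while (*s)
        console_putc(*s++);
}

/* e.g. install a hypothetical serial_putc early in boot, then swap in
 * a framebuffer-backed fb_putc once that driver is up */
```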
<wikan>
i mean what printf() or drawBox() should be...
<griddle>
in large kernel projects that support a lot of different archs, hardware devices are abstracted sufficiently so they don't care about the CPU
<griddle>
and vice versa
<wikan>
A) an API function that calls the correct functions
<wikan>
B) a compiled version of the functions for the specified arch
<moon-child>
wikan: yep. Those are abstract interfaces, which hide the details of how something is to be printed, or where a box should be drawn
<griddle>
The typical route is B, I think
<moon-child>
there are lots of implementation strategies, which are appropriate at different times
<griddle>
yeah, it depends on your problem space
<griddle>
though generally, CPU-specific (arch) functions are implemented through different compilation
<moon-child>
for instance, printf will probably push bytes over a pipe of some sort, but it doesn't know what's on the other side of that pipe. could be a file, a terminal, a serial port...
<griddle>
and devices go through the API, which calls the correct functions
<moon-child>
but the low-level memory mapper will be arch-specific
<moon-child>
and fixed at compile time
<wikan>
ok
<wikan>
what about hard drives? I have seen BIOS calls to work with drives. Is there any other, more correct way?
<wikan>
i heard somewhere to not use bios
<griddle>
using bios is great to get started
<griddle>
the best thing to do is to make a generic "read a block from the disk" interface
<griddle>
which can be swapped out later
<griddle>
starting out, read from the bios
<wikan>
wait
<griddle>
but in the future, write an ATA driver
<wikan>
ohh
<wikan>
so drive read/write should be abstract too?
<griddle>
yep :)
<wikan>
ok. now the most annoying topic for me, GRUB
<griddle>
If you want to read up on how, say, linux does it, look into their "block layer" interface
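A minimal sketch of that swappable interface (names invented): callers only ever see one read-block entry point, and the backend behind it changes from BIOS-assisted reads to a real driver later:

```c
/* A sketch of a swappable "read a block" interface; all names invented.
 * Filesystem code calls fs_read_block() and never learns which backend
 * serviced it. */
#include <stdint.h>

typedef int (*read_block_fn)(uint64_t lba, void *buf);

int bios_read_block(uint64_t lba, void *buf);  /* early boot path, implemented elsewhere */
int ata_read_block(uint64_t lba, void *buf);   /* real driver, written later */

static read_block_fn read_block = bios_read_block;

int fs_read_block(uint64_t lba, void *buf)
{
    return read_block(lba, buf);
}

/* once the ATA driver works, repoint the backend without touching callers */
void block_use_ata(void) { read_block = ata_read_block; }
```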
<wikan>
i have seen a lot of variables set up for grub. Are they all required?
<bslsk05>
github.com: chariot/mm.cpp at trunk · ChariotOS/chariot · GitHub
<griddle>
pretty sure that code is stolen word for word from the wiki
<griddle>
it's the kind of thing you write once and forget about, then move on to risc-v and never write x86 again :)
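For reference, those "variables" are the fields of the multiboot header GRUB scans for near the start of the kernel image; a minimal Multiboot 1 header in C looks roughly like this (the section name assumes a linker script that places it early):

```c
/* A minimal sketch of a Multiboot 1 header; only these three fields are
 * strictly required. GRUB looks for the magic in the first 8 KiB of the
 * image, 4-byte aligned. */
#include <stdint.h>

#define MB_MAGIC 0x1BADB002u
#define MB_FLAGS 0x0u  /* no extra info requested from the bootloader */

struct multiboot_header {
    uint32_t magic;
    uint32_t flags;
    uint32_t checksum;  /* magic + flags + checksum must equal 0 */
};

__attribute__((section(".multiboot"), used, aligned(4)))
static const struct multiboot_header mb_header = {
    .magic    = MB_MAGIC,
    .flags    = MB_FLAGS,
    .checksum = (uint32_t)-(MB_MAGIC + MB_FLAGS),
};
```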
<wikan>
yea, GRUB may be the easiest way to boot
<griddle>
Agreed
<wikan>
but still, GRUB will not load my FS
<griddle>
there is a certain self-validation to rolling your own bootloader, but it's honestly more work than it's worth in the end (if writing the kernel is what you enjoy)
<griddle>
if you enjoy writing boot code, write a bootloader :)
<wikan>
no, I don't care about hardware support, you know: wifi, 3d, super video, audio, etc
<griddle>
when you say loading the filesystem, do you mean being able to read files in your kernel?
<wikan>
i care about my own ideas of how different my os should be :)
<wikan>
load the kernel from my own filesystem
<griddle>
oh, grub does that
<griddle>
oh wait, I see
<griddle>
my bad, you made a new filesystem
<griddle>
in that case, you probably have two routes: 1) use grub and boot your kernel from a FAT32 partition, 2) write a bootloader and do it yourself :)
<dh`>
don't waste time writing a bootloader (especially not for x86) until you have everything else running
<griddle>
^ plus, writing the rest of the system is more fun than screwing around with booting :) (imo)
<dh`>
the number of people on the internet who decide they want to write an OS and then assume they must begin with an x86 bootloader and then get stuck seems to be ... quite large
<clever>
dh`: another thing that helps is to have your kernel be compatible with other bootloaders, so you can swap between working and custom bootloaders without much work
<wikan>
well, as I see it, I have to write my own bootloader
<griddle>
dh`: probably because the wiki hasn't been updated ;)
<clever>
I've written my own bootloader for the pi, but it lacks usb/network support
<clever>
so if i want netboot, i just swap to the closed source bootloader
<clever>
and i kept the next stage compatible with both
<wikan>
i don't need anything but to load the kernel
<wikan>
i'm not writing the next linux :)
<griddle>
I mean, yeah
<griddle>
Loading the kernel is important, but different bootloaders provide you with different info
<wikan>
i think it is a good idea to put a good bootloader at the end of the queue
<griddle>
honestly, grub should be your "good bootloader" on x86
<griddle>
everything can use it
<griddle>
(all modern hardware)
<wikan>
can't, if I can't use my own fs
<clever>
i don't see what the new obsession is with people using limine
<clever>
wikan: add a grub module for your fs, or use a fat /boot folder
<griddle>
clever: probably cause it has a pretty picture :)
<griddle>
wikan: putting your kernel in a fat partition is definitely the way to go
<wikan>
i can't use another fs for boot :)
<griddle>
grub will need a fat partition anyways, right?
<wikan>
i planned a much different system architecture
<griddle>
yeah, unfortunately x86 forces you to boot in a very particular way
<dh`>
uh
<dh`>
you can boot the kernel from anywhere without it affecting your system architecture
<wikan>
nope
<wikan>
my kernel will be inside rom
<wikan>
rom file
<clever>
that reminds me, linux does have XIP support
<wikan>
pseudo-rom file
<clever>
where the kernel is in a true rom, mapped into the addr space
<clever>
in that case, it can't relocate itself, or modify the .data in the binary
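The usual workaround for an XIP image is to copy .data out of ROM and zero .bss before any other C code runs; a minimal sketch, assuming the linker script exports these symbols:

```c
/* A sketch of early XIP startup. The kernel text stays in ROM; the
 * writable sections are set up in RAM first. Assumes linker-provided
 * symbols for .data's load address (in ROM) and run address (in RAM). */
#include <stdint.h>
#include <string.h>

extern uint8_t __data_load_start[];  /* LMA: where .data sits in ROM */
extern uint8_t __data_start[];       /* VMA: where .data runs in RAM */
extern uint8_t __data_end[];
extern uint8_t __bss_start[];
extern uint8_t __bss_end[];

void early_init(void)
{
    /* copy initialized data out of ROM so it can be written */
    memcpy(__data_start, __data_load_start,
           (size_t)(__data_end - __data_start));
    /* zero-fill .bss, which has no backing bytes in the ROM image */
    memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));
}
```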
<wikan>
and the bootloader must find all the roms
<wikan>
this is why I plan to share code between kernel and bootloader
<wikan>
like GRUB shares some code with Linux
<griddle>
Not sure it does though
<wikan>
i heard it shares multiscreen support, not sure
<griddle>
90% sure grub loads the kernel into memory after decompressing it, then jumps off into linux land
<dh`>
unless you're planning to burn a new rom every time you have a bug, you'll want to be able to boot your system from files on a disk
<griddle>
grub provides multiboot tables to the kernel in memory
<griddle>
which might contain framebuffer addresses in them
<wikan>
doesn't solve anything
<wikan>
I see only two options
<wikan>
1) My Bootloader -> My Kernel
<wikan>
2) GRUB -> My Bootloader -> My Kernel
<griddle>
what does your bootloader need to do?
<wikan>
work with roms
<griddle>
the core of the thing is, you probably shouldn't be writing a bootloader unless you HAVE to do so
<griddle>
you want to work w/ roms on x86?
<griddle>
ie: flashing the bios every time?
<wikan>
rom is the name of an FS feature
<clever>
wikan: grub can load multiple files, and pass them to the kernel over the initrd api
<griddle>
didn't know that
<wikan>
you can take a ROM and move it around as a file
<clever>
so you could pass each of those "roms" to the kernel with just grub
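A sketch of what that looks like on the kernel side with Multiboot 1 module info; handle_rom is a hypothetical hook, and the struct layouts follow the multiboot spec:

```c
/* A sketch of walking GRUB's Multiboot 1 module list, which is how each
 * extra file ("rom") passed via a grub.cfg module line shows up in
 * memory. Field offsets follow the multiboot 0.6.96 spec. */
#include <stdint.h>

struct multiboot_mod {
    uint32_t mod_start;   /* physical start of the module */
    uint32_t mod_end;     /* physical end of the module */
    uint32_t string;      /* physical address of its command line */
    uint32_t reserved;
};

struct multiboot_info {
    uint32_t flags;
    uint32_t mem_lower, mem_upper;
    uint32_t boot_device;
    uint32_t cmdline;
    uint32_t mods_count;  /* valid only if bit 3 of flags is set */
    uint32_t mods_addr;
    /* ... more fields follow in the real structure ... */
};

void handle_rom(uint32_t start, uint32_t end, uint32_t cmdline);  /* hypothetical */

void enumerate_modules(const struct multiboot_info *mbi)
{
    if (!(mbi->flags & (1u << 3)))
        return;  /* bootloader provided no module info */
    const struct multiboot_mod *mods =
        (const struct multiboot_mod *)(uintptr_t)mbi->mods_addr;
    for (uint32_t i = 0; i < mbi->mods_count; i++) {
        /* each module is a raw blob at [mod_start, mod_end) */
        handle_rom(mods[i].mod_start, mods[i].mod_end, mods[i].string);
    }
}
```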
<wikan>
and it has namespaces
<griddle>
then with that system, you'd need to do option 2
<griddle>
though you probably don't need the in-between bootloader
<griddle>
unless that in-between is just something that reads your filesystem and loads the kernel
<griddle>
the question then is whether your kernel's binary needs to live in your filesystem, or whether that matters at all
<wikan>
it is more than loading the kernel
<wikan>
OS config is annoying so I decided to use namespaces
<wikan>
and all config will be kept in a different space
<wikan>
so, there is an option to have multiple configs
<wikan>
so my own loader is always required, or... the kernel acts like a "bootloader" before it loads a config and lets you choose the correct one
<griddle>
I'm confused, but it seems like you know what to do regarding the bootloader
sikkiladho_ has quit [Quit: Connection closed for inactivity]
<wikan>
me too
<griddle>
what's an example of such a configuration
<griddle>
and why would it be different between reboots?
<wikan>
i am not even sure if it will work as I expect
<griddle>
^ that's when osdev is most fun :)
<wikan>
well, the main goal was to quit the .config directory, which is full of trash
<wikan>
and I thought it might work for the kernel too
<wikan>
anyway, maybe we should not talk about something that is not clear even to me yet
ripmalware__ has joined #osdev
<wikan>
so I need abstraction from the beginning. It is clear to me now.
<wikan>
now I can see it
<griddle>
are these .config files the same ones that linux has?
<wikan>
thanks
<wikan>
by .config I mean $HOME/.config, that was the main goal