klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
<rpnx_> So, looking at my archive, the object symbol is clearly present... I think this is a linker issue.
isaacwoods has quit [Quit: WeeChat 3.6]
StoaPhil has joined #osdev
<geist> did you try using rv?
<geist> or at least 'r' vs 'q'?
catern has joined #osdev
<mrvn> Do you actually need archives? Why not build .o or .so files?
<geist> tis true as well
<geist> (in this case there are also built-in intrinsics for these you want to use, but that's another point entirely)
<geist> look for arm_acle.h
<rpnx_> Hum, I think CMake makes archives by default
<rpnx_> I can change the archive command, but I'm not sure if it's possible to make it not do that
<bslsk05> ​pastebin.com: [100%] Linking CXX executable rpnx-kernel.elf/Applications/CMake.app/Contents/ - Pastebin.com
<mrvn> I'm not sure cmake is the best build system for kernels
<rpnx_> Mrvn, Isn't everything else going to be ugly though?
<geist> well, if they're deleting the .a file and then doing a 'q' it should be okay, since it recreates it from scratch
<\Test_User> cmake is ugly as well
<Matt|home> i slept like 13 hours.. poorly but i slept -_- so im looking at the linux kernel process handler, and each process is a struct with like, 50+ attributes. off the top of my head i can think of a few that are necessary, but not that many
<rpnx_> oh
<rpnx_> I found the problem
<rpnx_> I needed to put
<rpnx_> set(CMAKE_RANLIB "llvm-ranlib-mp-14")
<rpnx_> In my toolchain file
<geist> rpnx_: had a thought. problem could be the ranlib it used is the ... yeah you just found it
<geist> the mac one is *not* going to look for ELF files, heh
<Matt|home> when you guys write process handlers what do you generally need?
<rpnx_> \Test_User, bad cmake is ugly, good cmake is pretty :)
<\Test_User> you could say the same about (almost) any other build system
<bslsk05> ​pastebin.com: cmake_minimum_required(VERSION 3.20)project(TestOS LANGUAGES C CXX ASM)a - Pastebin.com
<rpnx_> Make *cannot get* anywhere near this terse.
<\Test_User> I say almost bc no doubt there's some that are ugly no matter how it's done
<rpnx_> cmake -S source-folder -B build-folder -D CMAKE_TOOLCHAIN_FILE=kernel-build-toolchain.cmake -D CMAKE_BUILD_TYPE=Debug
<rpnx_> That's like, it
<rpnx_> And... I can use the same CMakeLists.txt to build libraries in userspace and kernelspace
<rpnx_> I just set a different CMAKE_TOOLCHAIN_FILE for userspace
<rpnx_> Cross compiling to a different architecture?
<rpnx_> Just another cmake toolchain file...
<rpnx_> If I used make, I'd have to add cross compilation support to each library individually
<zid> no?
<rpnx_> Same if I want to use sanitizers or whatever
<zid> the compiler commands don't change, so the lines of the makefile don't change
<rpnx_> But they do?
<rpnx_> What if I am compiling with MSVC?
<zid> that isn't cross compiling
<zid> that's changing buildchain
<rpnx_> Or want to use gcc instead of llvm?
<\Test_User> you can specify a compiler command and flags easily with make
<\Test_User> as stated, ugly when done wrong ;)
<rpnx_> I'd rather my project just have a list of files and all the options go into the toolchain file where it defines different build types (like Debug, Release, etc)
<zid> Yea I do that with make too
<rpnx_> I'd like to see a makefile that can do that
<zid> make debug just sets different flags to make release, and the flags get appended to the relevant cflags etc
<rpnx_> I've never seen one
<rpnx_> Though make is very flexible, so I don't doubt it's possible
<bslsk05> ​www.gnu.org: GNU make
<zid> basically make has special syntax to allow rules to set variables when those rules are used
<zid> so you would just do debug: CFLAGS += -g
<zid> or llvm: CFLAGS = $(LLVMFLAGS) / gcc: CFLAGS = $(GCCFLAGS) or whatever you wanted to do
<rpnx_> I find make syntax less intuitive to read than cmake though
<zid> probably
<zid> but it's trivial and universally supported, being the important part
<rpnx_> There was a time I used something called make++
<zid> I personally abhor people throwing in new build systems just because they can't be bothered to learn an existing one
<zid> because I have to install it just for them, and can't fix it either
<rpnx_> I disagree, cmake has better support than make. For example, CMake is supported in QtCreator, CLion, Visual Studio, etc
<rpnx_> Using make on windows with msvc/visual studio is quite annoying.
<\Test_User> cmake produces makefiles that are used with make, as far as I've ever seen
<\Test_User> so to use cmake, you must also have make
<rpnx_> You can, if you want to, yes.
<rpnx_> Not necessarily...
<\Test_User> >as far as I've seen
<\Test_User> so it's just all the things I've seen cmake used for doing smth weird by making cmake make makefiles and not letting cmake do it all on its own?
<rpnx_> Cmake can generate makefiles, yes
<rpnx_> It can also generate ninjafiles
<rpnx_> Visual studio projects
<rpnx_> Borland projects
<rpnx_> Codeblocks
<\Test_User> ah
<zid> we have very very different definitions of portable
<rpnx_> Watcom wmake
<zid> I have never used an IDE in my life for anything
<\Test_User> still though, cmake depends on a bunch of other stuff to work
<zid> portable for me is where it can run
<\Test_User> namely, whatever build system it makes the thing for
<zid> not how well random 3rd party software can interact with it
<bslsk05> ​postimg.cc: Screen Shot 2022 09 01 at 8 34 51 PM — Postimages
<rpnx_> This is just what it can do on MacOS
<zid> it can draw a dropdown?
<\Test_User> "a bunch of obscure build systems, some of which are only for macos"
Burgundy has left #osdev [#osdev]
<zid> 90% of them appear to be unix makefiles anyway
<rpnx_> I think all of those are on other OS too
<rpnx_> Well, the real advantage is in stuff like
<rpnx_> set_target_properties(foo_library PROPERTIES POSITION_INDEPENDENT_CODE ON)
<zid> cmake definitely has a lot of features
<\Test_User> cmake makes a makefile to do it, therefore so can you
<zid> some of which are useful, some of which are covering up for misfeatures in people's projects
<rpnx_> Use GCC? -fPIC/-fPIE etc. Microsoft compilers? No problem, /ziojwfaFJWOPAJas:FEs/s (may not be actual flags)
<zid> why would you be deliberately passing fPIE to gcc in a makefile?
<\Test_User> makefile $(CC)/$(CXX)/etc, $(CFLAGS)/$(CXXFLAGS)/etc...
<zid> covering up for bad distros that expect userspace programs to all be PIE, but don't set the system gcc up to use PIE?
<rpnx_> \Test_User, yeah, cmake also communicates with the compiler to dynamically generate a dependency set so you don't have to code dependencies between different files manually.
wolfshappen has quit [Ping timeout: 260 seconds]
<rpnx_> Oh, and did I mention if you compile a library with particular cmake options, it can then enforce the compiler preprocessor definitions as a transitive dependency, making sure you don't have ABI breaks?
<zid> I'd rather write that in the source
<rpnx_> For example, if you have some macro like LIBRARY_FOO_ABI_VERSION, you can set that as a cmake option to the project, and anything that imports the library gets the compile preprocessor definitions automatically
<zid> because that's what cares about it
<\Test_User> from what I've seen, cmake effectively just forces everything to be rebuilt if any options change, even if it's not related to what its rebuilding
<rpnx_> \Test_User, if you use make yes, but not if you use Ninja instead :)
<rpnx_> Ninja is also much faster than make
<\Test_User> so you can force the client to have both less-common-than-make cmake and obscure ninja for compiling
<\Test_User> very portable
<rpnx_> lol
<rpnx_> You do realize
<zid> no no it's portable, cus bloodshed dev-c++ understands.. ninja
<zid> or something
<rpnx_> Well, at least for C++
<\Test_User> so it only counts if you write C++, and for other languages it won't work
<rpnx_> CMake has over 55% market share, more than all other systems combined including make and visual studio https://www.jetbrains.com/lp/devecosystem-2021/cpp/#Which-project-models-or-build-systems-do-you-regularly-use
<bslsk05> ​www.jetbrains.com: C++ Programming - The State of Developer Ecosystem in 2021 Infographic | JetBrains: Developer Tools for Professionals and Teams
<rpnx_> This is for C++, not C
<zid> 'market share'
<zid> I'm not sure that means what they think it means
<rpnx_> But still, calling CMake "obscure" is a bit of a stretch.
<\Test_User> I didn't say cmake was obscure
<\Test_User> I said ninja was
<geist> one thing that may or may not work for you: various all-in-one build systems tend not to be as well set up for bare metal
<zid> cmake is obscure in the grand scheme of things still
frkzoid has quit [Ping timeout: 244 seconds]
<\Test_User> and cmake is less common than make
<geist> and/or they have a lot of features intended to abstract how to build for a particular target
<rpnx_> No, not according to JetBrains data.
<zid> I bet I can't find cmake for any number of thousands of platforms I could name
<rpnx_> CMake 44% for C and Make 41%
<geist> but really you want to drive things fairly manually
<zid> rpnx: again, your definition of portable/reality is *very* different to mine
<zid> I've never even *ran* jetbrains
<\Test_User> how many of those 44% cmake make makefiles, so they also depend on make
<zid> I have however ran tens of thousands of makefiles
<geist> i do question a little bit precisely where they got their data
<geist> not saying they're wrong, but i dont see (quickly browsing) where that comes from
<zid> It's like someone who's into motorsport saying that everybody uses light-weight clutches
<zid> well no, racing cars do, *everybody* on average, uses a road car
<rpnx_> I mean, CMake kind of exploded in the last like, 3 years?
<rpnx_> So historically, yes, make has been around a while.
<zid> geist: jetbrains users who self-reported
<geist> but i'm not saying they're wrong, just if you ask any subset of programmers you're going to find widely different answers
<zid> which is effectively 0% of all people, and 99% of a certain type of people
<geist> i also dont think it's generally a valid thing to say 'we talked to all programmers and results are X'
<bslsk05> ​www.jetbrains.com: Methodology - The State of Developer Ecosystem in 2020 Infographic | JetBrains: Developer Tools for Professionals and Teams
<rpnx_> Should check how they get the data first
<zid> That 99% of people being a group of people I care almost exactly nil for
<rpnx_> "For developers from each country, in addition to their employment status, we calculated the shares for each of the 30+ programming languages, as well as the shares for those who answered “I currently use JetBrains products” and “I have never heard of JetBrains or its products”. Those shares became constants in our equations."
<rpnx_> In other words, they controlled for these things
<geist> sure
<geist> anyway, again i'm not trying to diss on the results, it's probably just generally averaged out over whatever subset of folks self select to take these things
<geist> and hobby osdev is an extremely niche subset of programming things
<rpnx_> Ninja is pretty recent though as well.
<zid> I feel that people who use cmake with jetbrains are people who would not use any sort of makefile otherwise
<geist> ninja i wouldn't generally consider to be a complete build system per se, but a component of a bunch of other ones
<rpnx_> Yeah ninja is designed to be written by tools
<zid> as in, they're not embedded developers who write makefiles, and switched to cmake with jetbrains
<zid> they're random developers who got inducted into using cmake because it was available to them
<geist> zid: right. but this being said i wouldn't be surprised if cmake isn't extremely popular, just in the way that most programming tasks that people have to just do i also wouldn't be interested in
<zid> Yea it is definitely popular, just not in a way that matters to anything I care about
<rpnx_> I've done it both ways, I have to say, I find cmake less of a headache overall
<geist> yah i've had to use it. kinda hate it but it does what it does
<zid> It's a pain in the arse to keep seeing people shit all over projects proselytizing for it, though
<geist> but most of the reasons i hate it are the reasons i'd generally hate any build system that does most of the hard stuff for you
<zid> like how a lot of random app developers on windows probably never heard of source control until msvc added 'source safe', so if you polled msvc users 90% of them would use source safe
<geist> i tend to prefer things that let me build from the ground up, specify precisely what's going on, etc
<geist> ie, gmake
<geist> but that's my preference
<zid> jetbrains has cmake, so they either use cmake, or have never used makefiles
<geist> i dont even know what jetbrains is
<zid> some corporate ide thing I think
<zid> like eclipse but for people who aren't java programmers
<geist> yah looks like it. whats it written in?
<zid> that their managers can buy a licence for
<zid> and say "this is how we do things here, click this to do y"
<rpnx_> geist, I like cmake precisely because it removes most chances to screw up
<zid> rather than the teams organically recreating a bunch of shit constantly
<rpnx_> If used properly anyway
<zid> ?/
<zid> wanna see several discord channels I am in that have 800 messages in a row talking about cmake?
<rpnx_> And not having to understand .msi, .deb, .rpm, etc
<geist> rpnx_: indeed. i think that's a generational thing, (assuming you're younger than I am)
<geist> i'm finding especially at work that younger devs were brought up in a world of 'make it hard for me to screw up by taking away footguns'
<zid> cmake is an absolute nightmare to configure right
<geist> which is generally the trend of things at a meta level the last 10 years or so
<rpnx_> geist, I've seen too much broken software and I spend too much time fixing things
<geist> sure. i'm not saying it's not valid, i'm just from an older era where i want the footguns
<rpnx_> zid, I agree that cmake allows you to shoot yourself in the foot, but it's actually not a nightmare if you learn how to use toolchains correctly.
<geist> a thing i deal with professionally
<zid> it's not about shooting yourself in the foot (that was geist)
<zid> You earlier for example, claiming that it'd "Just work right between gcc and llvm if I do this right" or whatever
<zid> I've seen *hundreds* of times where that just isn't true, and I don't even *use* cmake
<\Test_User> rpnx_: and its not a nightmare to use make if you know how to use it properly
<zid> because people try to convert existing projects to cmake, and realize actually
<zid> there's a lot more complexity between switching compilers than they realized
matt__ has joined #osdev
matt__ is now known as freakazoid333
<rpnx_> Yeah I was custom building a toolchain because cmake doesn't come with a kernel toolchain that I know of..
<geist> that's part of that class of build systems that take the responsibility of how to drive the toolchain away from you, because you shouldn't *have* to know
<rpnx_> There were a few mistakes but it seems to work now :)
<geist> extreme example being libtool
<zid> so it doesn't help in the places it claims to help, is just as hard as make to actually use, but has the additional requirement of me having to install and learn it
<rpnx_> geist, I hate Autotools
<geist> i think for most development of host tools that seems fine
<rpnx_> That's the main reason I like cmake
<geist> my point is for bare metal most of the time you *want* to drive the toolchains directly
<geist> because you really want to tell it to do precisely what you want
<zid> I know people who love cmake, and people who love C++, they seem to be very similar people
<zid> they think if it's hard to use and learn, it's worth it
<zid> regardless of the actual outcome
<geist> and you dont want the build system to try to know better than you
<rpnx_> Nah, C++ is hard to learn and easy to use "once learned"
<zid> C++ is very very not easy to use "once learned"
<geist> i wouldn't want to have a new version of cmake to come along and generate different compiler steps for my kernel, for example
<\Test_User> C++ changes frequently, it's never fully learned
<rpnx_> I disagree, most of the things I want to do with C++ tend to be quite difficult with C
<geist> i want it to do precisely the same thing every time, on every machine, independent of version
<zid> \Test_User: if you can even keep up at all you're a god
<zid> gcc can't even keep up half the time
[itchyjunk] has joined #osdev
<rpnx_> I won't disagree that C++ is complicated
<zid> I think you can absolutely get work done with a shallow understanding
<rpnx_> Or that it takes many years to get a good enough understanding to be "worth it"
<zid> but good luck with your random move semantic template inheritance polymorphism overload blah blah blah bug
<zid> which is what people *actually* write with C++
<\Test_User> true, but full understanding certainly helps more than shallow understanding of some language with more features
<zid> not this theoretical "every statement is ultimately pretty simple" C++
<\Test_User> lol yeah
<rpnx_> Ok, I like templates. But I understand them and can read that stuff. A lot of people don't want to put in the effort to learn it
<rpnx_> And that's fine
<zid> It's not fine even in the case where you learn it
<geist> heh rule 63: all build system arguments will eventually devolve into programming language arguments
<zid> it's *very* mentally taxing to unpick
<rpnx_> I don't find it taxing to read.
<geist> rule 64: all programming language arguments will eventually devolve into editor wars
<zid> quick, throw some jetbrains at it?
<rpnx_> Partial specializations are a bit hard at first, because they're weird
<\Test_User> "I use nano" *quick, what's rule 65*
<rpnx_> But once you have an intuition for that, most code is fairly trivial
<geist> pancakes on rabbits
<zid> I should pick some random godbolts out of the gcc support channel for "Is this a bug or am I doing it wrong?"
<rpnx_> I mean, if you put in the effort to actually understand C++ template partial specializations, all the template code starts to be "readable"
<rpnx_> It's still weird, in the way that pointers are weird to people who program in Java
<rpnx_> But not incomprehensible
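(A minimal example of the partial specialization intuition rpnx_ is describing — hypothetical names, not from the chat:)

    // Primary template: the general case.
    template <typename T>
    struct is_pointer { static constexpr bool value = false; };

    // Partial specialization: chosen whenever T has the shape U*.
    template <typename U>
    struct is_pointer<U *> { static constexpr bool value = true; };

    static_assert(!is_pointer<int>::value, "int is not a pointer");
    static_assert(is_pointer<int *>::value, "int* matches the U* pattern");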
<geist> everyone gets a Waffle Party
<zid> Either you're some kind of savant at reading 400 page error messages
<zid> or you're telling fibs
<rpnx_> I mean, the errors are verbose... I'll give you that but
<bslsk05> ​godbolt.org: Compiler Explorer
<rpnx_> Some compilers (*cough* llvm *cough*) do a better job at giving readable errors than others (*cough* gcc *cough*)
<zid> A random "gcc doesn't even understand what the fuck is going on" moment
<zid> I'm told that's valid and that gcc 11 was wrong
<zid> dw though cus whatever weird issue you find, there's probably a workaround in the C++ spec for it, like the one where you can tag a thing as not needing a unique address anymore because it produces a 0 byte object that is illegal otherwise, being a thing I saw recently >_<
<rpnx_> Yeah that was added because they wanted you to be able to have structs with objects that might be 0 size for templates
<rpnx_> e.g.
<\Test_User> why would you want an object with 0 size
<zid> oh god please don't e.g.
<\Test_User> it's a <size of pointer> integer
<rpnx_> [[no_unique_address]] AllocatorType m_allocator;
<rpnx_> Foo * f = m_allocator.allocate(1);
<rpnx_> If "AllocatorType" just calls malloc and free it doesn't need any data
<rpnx_> As one example
<\Test_User> so uh, why would you use AllocatorType and not just malloc()?
<\Test_User> (or free())
<rpnx_> basic_linked_list<std::default_allocator>, basic_linked_list<my_memory_pool>
<rpnx_> 1 data structure implementation, support any allocator... etc
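(A sketch of the pattern rpnx_ is describing, with hypothetical names — C++20; whether the empty member truly vanishes from the layout is ABI-dependent, and MSVC only honors the standard attribute spelling in some modes:)

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // A stateless allocator: just forwards to malloc/free, holds no data.
    struct MallocAllocator {
        void *allocate(std::size_t n) { return std::malloc(n); }
        void deallocate(void *p) { std::free(p); }
    };

    // One list implementation, parameterized over the allocator.
    template <typename T, typename AllocatorType>
    struct basic_linked_list {
        struct node { T value; node *next; };
        node *head = nullptr;
        // C++20: if AllocatorType is empty, this member may share an
        // address with the rest of the object and occupy no extra space.
        [[no_unique_address]] AllocatorType m_allocator;

        void push_front(const T &v) {
            void *raw = m_allocator.allocate(sizeof(node));
            head = new (raw) node{v, head};  // construct in place
        }
    };

    // On common ABIs the empty allocator adds nothing:
    // sizeof(basic_linked_list<int, MallocAllocator>) == sizeof(void *)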
xenos1984 has quit [Read error: Connection reset by peer]
xenos1984 has joined #osdev
<rpnx_> This kinda thing could be quite useful in kernel, where you might have e.g. interrupt_memory_allocator or something be different from another area
<zid> not really?
<zid> #ifdef
<zid> there, now it takes 0 bytes
<rpnx_> I mean... how do you have 2 versions then?
<rpnx_> Sounds like ODR violation recipe :)
<zid> you don't need two versions, that's the point
<rpnx_> I mean like
<rpnx_> Are you suggesting to include the same header file in two different TU with different preprocessor macros ?
<zid> I'm suggesting it isn't a thing you'd ever want to do in any capacity ever
<zid> if you wanted 'two versions' of something, it'd be at compile time only
<zid> to toggle between two behaviors for different architectures
wolfshappen has joined #osdev
<rpnx_> Hum, not sure I follow.
<rpnx_> Will there not be more than 1 memory allocator in the same kernel?
<zid> rarely, and even then, not in a way that would require you to perform silly tricks with 0 byte objects
<rpnx_> Well, I suppose if you manage the memory manually
<rpnx_> Well, 0 byte objects are just so you can use templates with an object that might or might not have data
<\Test_User> when writing a kernel, you do need to manage that memory somehow, and you don't exactly have a kernel to do it for you
<rpnx_> So if you put 40 empty objects in a struct, it does not create a 40 byte object.
<zid> I don't need 40 empty objects
<zid> You keep wrapping back around to "yea but you need it"
<zid> and I keep telling you I really don't
<rpnx_> They're useful in C++ for templates though
<zid> And yet, nobody uses them
<zid> kernels aren't very templately code, much less "Incredibly rare template nonsense needing tricks"
<rpnx_> I mean, there aren't that many kernels.
<zid> Other than the thousands
<rpnx_> I am pretty sure more people use [[no_unique_address]] than write kernels :)
<rpnx_> Well, personally I doubt I will be able to write a very complicated or functional kernel without using C++
<\Test_User> linux was written without using C++
<zid> Infact, almost all things are written without C++ :P
<rpnx_> Well... linux still needs C++ to compile :p
<\Test_User> that's gcc's fault
<zid> yea because of annoying people who think that things are needed when they're not :p
<rpnx_> I think it has less to do with need and more productivity
<rpnx_> Although technically
<rpnx_> C++ is faster than C
<zid> no
<\Test_User> how
<zid> You create an ad-hoc domain language that simulates BASIC, by hiding 400lbs of fat in templates and headers
<rpnx_> Strict aliasing, templates, and STL being better aligned to hardware than common c patterns
<heat> what kind of cringe programming language argument is this
<heat> all languages are shit, get over it
<heat> unless you're writing things in RUST BABY RUST IS THE FUCKING SHIT
<zid> I'd be happy if if all C++ were replaced with rust tbh
<rpnx_> Zid they aren't even similar though
<heat> sadly right now rust doesn't compile to itanium so it's also deeply flawed
<zid> use rust as the 'Actually, I wanted a managed language here' language
<rpnx_> Rust is more like upgraded C than upgraded C++
<zid> instead of bolting on pages and pages of spec to C++ then banning the previous 800 pages
<zid> C++ people have *very* selective memory about what C++ actually is
<heat> you've got a really warped idea of what C++ actually is
<zid> Not really
<heat> yes really
<zid> Not really
<heat> yes re
<heat> ally
<zid> heat
<zid> smelly
<heat> zid
<heat> poopy
<moon-child> rpnx_: what in the world do templates have to do with the hardware?
<rpnx_> STL or templates?
[itchyjunk] has quit [Read error: Connection reset by peer]
<moon-child> either
gog has quit [Ping timeout: 268 seconds]
<heat> anyway, C++ in the kernel is fine, do it if you'd like; you just need to be selective
<moon-child> rpnx_: aliasing also has not that much to do with the hardware, except in multiprocessing context (which I don't want to minimise, but _most_ code is mostly single-threaded)
<rpnx_> Write a loop with templates and pointers as the iterator
<zid> heat: see, ignore the previous 800 pages
<rpnx_> And also one C style
<moon-child> I actually think it would be a good idea to add aliasing information to instruction sets
<zid> only use THESE pages, THESE are the correct pages, that ACTUALLY represent what C++ is
<moon-child> it would allow for more ooo than currently
<moon-child> but you can't yet
<heat> zid, just because the standard library is hot trash doesn't mean C++ is a bad language overall
<zid> it has a standard library!?
<rpnx_> C style being pointer + size
<moon-child> heat: it doesn't mean it, but both are true :)
<heat> it's a deeply flawed, but better than C if wielded correctly
<rpnx_> Guess what GCC converts the C style one to...
<zid> godbolt both, rpnx
<zid> I wanna see
<moon-child> rpnx_: right--templates are _converted_ into something the machine is happy with
<rpnx_> Give me a sec
<moon-child> and pointer+length is similarly _converted_ into something the machine is happy with
<rpnx_> right
<moon-child> (also, start+end is not 'machine-friendly', it just saves a couple of bytes, sometimes. Sometimes not)
<heat> rpnx_ is giving all the wrong arguments and you guys are giving all the wrong reasons why C++ isn't good
<moon-child> I never gave any reasons why c++ isn't good
<moon-child> I just fiated it
gog has joined #osdev
<moon-child> (and countered rpnx_'s wrong arguments)
<heat> a good argument for C++ is RAII - show me how to do that shit in C and I'll fucking switch right now
<zid> rust does it better
<heat> or *actual object-oriented programming* instead of emulating it in C structs with struct something_ops *ops;
<zid> C++ could also use some OO features, yea
<gog> don't write code
<moon-child> zid: damning with faint praise? :P
<zid> lmk when they add some
<heat> a good chunk of linux core driver model code is dedicated to emulating object-oriented stuff
<moon-child> (re 'rust does it better')
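(A minimal sketch of heat's two points, with hypothetical names: an RAII guard whose destructor runs on every exit path, and the C-style table of function pointers it replaces:)

    #include <mutex>

    // RAII: the guard's destructor unlocks on *every* path out of the
    // scope -- early return, exception, or fall-through.
    void update(std::mutex &m, int &counter) {
        std::lock_guard<std::mutex> guard(m);  // locks here
        if (counter < 0)
            return;                            // unlocks here
        ++counter;
    }                                          // ...or here

    // The C pattern heat mentions: object orientation emulated with a
    // struct of function pointers (cf. Linux's *_ops tables).
    struct file_ops {
        long (*read)(void *self, char *buf, unsigned long len);
        long (*write)(void *self, const char *buf, unsigned long len);
    };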
<zid> moon-child: I don't mind rust at all, as long as they keep it away from my C, but that's going to be a battle
<heat> your C is getting replaced by rust
<Mutabah> C has its place
<Mutabah> that place being resource-constrained environments
<Mutabah> Now C++, that's a different matter
<zid> rust might still evolve into being semi-useful where C is
<kazinsal> and in the hearts and minds of madmen
<kazinsal> (such as myself)
<zid> them trying to get it useful for the kernel might lead to it, even
<moon-child> heat: these features that you are pointing out that c++ has _are_ useful. But I do not choose a language solely on the basis of whether it checks boxes
<zid> C++ has an incredible number of useful features
<heat> moon-child, it checks more boxes than C *shrug*
<moon-child> indeed
<zid> And an infinite set of Venn diagrams of which of the features it has are useful.
<moon-child> it matters how they fit together, and what the end result looks like--the holistic experience
<zid> It has a poor TCO.
<moon-child> and for me, holistically, the experience of using c is better, despite the hoops you have to jump through
<moon-child> (there are also social factors, but for me those are increasingly less significant)
<zid> In an arena where the bytes stay around for a long time, need to be accountable for, need to be debuggable etc etc etc
<zid> C wins by a landslide for me
<heat> I realized when writing my kernel that I wasn't smart enough to keep track of all the manual memory management, refcounting, locking by myself, because the system is super duper complex
<heat> I was also emulating OO on top of C
<heat> that's why I switched - no regrets - my C++ is clean and fast to compile
<bslsk05> ​godbolt.org: Compiler Explorer
<rpnx_> C++ style is better
<heat> having control of the standard library you're using matters - the STL is cat shit wrapped in dog shit
<rpnx_> GCC just converts the C version to the C++ version plus some extra code to convert the arguments to the C++ form
<rpnx_> STL is extremely fast
<zid> erm
<zid> of course it converts the arguments? and the loop bodies are identical
<zid> you gave it different arguments
<zid> someone on the *outside* did the setup code
<moon-child> by the by, is this the same stl that will copy on a dime?
<zid> this is literally the same code but with lea; call vs call -> lea
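(The godbolt paste itself isn't preserved in the log; a representative pair of loops for the comparison being discussed might look like this — assumed, not the actual paste:)

    // "C++ style": iterate with a pointer (iterator) pair.
    int sum_iter(const int *first, const int *last) {
        int total = 0;
        for (const int *it = first; it != last; ++it)
            total += *it;
        return total;
    }

    // "C style": pointer + element count. Compilers commonly canonicalize
    // this to the same loop as above, using data + len as the bound.
    int sum_count(const int *data, unsigned long len) {
        int total = 0;
        for (unsigned long i = 0; i < len; ++i)
            total += data[i];
        return total;
    }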
<moon-child> :P
<heat> rpnx_, the stl is extremely slow and extremely bad at most things
<rpnx_> Heat: what?
<rpnx_> Microsoft STL?
<rpnx_> Or the llvm one?
<heat> microsoft's stl, llvm, gcc, intel, whatever the fuck
<heat> all bad, all horribly flawed
<rpnx_> Why would you say that?
<heat> forces exceptions and rtti on you, bloated compile times, lots of components just don't fit with each other (print a std::array for me, please)
<heat> <algorithm> usually pessimizes the shit out of your codegen
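(heat's parenthetical, spelled out: std::array ships with no stream insertion operator, so pre-C++23, without ranges formatting, the "components don't fit together" complaint looks like this:)

    #include <array>
    #include <iostream>

    int main() {
        std::array<int, 3> a{1, 2, 3};
        // std::cout << a;  // does not compile: no operator<< for std::array
        for (int x : a)
            std::cout << x << ' ';  // you write the loop yourself
    }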
<rpnx_> Compile time is slow. rtti isn't required?
<rpnx_> I don't know any example where C++ STL is slow
<heat> it is, for a lot of features it is; also required for exceptions
<rpnx_> (Other than at compile time)
<rpnx_> Why do you think RTTI would make the program slower?
<rpnx_> Same for exceptions.
<heat> more memory usage, more pagefaults, most usages of rtti have the most ridiculous codegen ever
<rpnx_> Sure, they increase the binary size a bit
<rpnx_> 'More memory usage'?
<heat> they're also the most disgusting language features you can ever use
<heat> you literally cannot use the STL in non-rtti and fno-exceptions code!
<rpnx_> uh
<rpnx_> And?
<heat> they built it on the assumption everyone can use that shit
<rpnx_> I mean, there are parts that you can use
<heat> turns out - *lots* of people can
<rpnx_> Certainly not all of it
<heat> and lots of people can't
<rpnx_> Like who?
<heat> most people working at Google
<rpnx_> Google doesn't allow RTTI?
<heat> most people that don't like exceptions because they're horrible and hard to mentally keep track of (me included)
<heat> nope
<rpnx_> No wonder they make dumb stuff like carbon
<heat> kernel writers usually
<rpnx_> Lol
<heat> carbon is erm 10x the language C++ is
<zid> (Does that mean it has 8000 pages of spec to ignore, or 80?)
<rpnx_> I have yet to see any valid criticism of C++
<rpnx_> Most people think things are slow that aren't slow
<zid> That sounds right
<heat> man you're blind
<bslsk05> ​'CppCon 2019: Chandler Carruth “There Are No Zero-cost Abstractions”' by CppCon (00:59:52)
<heat> please watch this
<heat> if god himself chandler carruth can't explain this to you, no one can
<zid> I call it C++ stockholm syndrome
<zid> [01:58] <zid> I know people who love cmake, and people who love C++, they seem to be very similar people
<zid> [01:58] <zid> they think if it's hard to use and learn, it's worth it
<zid> [01:58] <zid> regardless of the actual outcome
<kazinsal> truly the solution is to write everything in either rust, C#, or node.js
<rpnx_> I mean there are plenty of different programming styles
* kazinsal hides
<heat> kazinsal, praise be thy nodejs
<heat> i'm still to port that shit and write a daemon on it
<heat> that would be funny as hell
<rpnx_> I just think carbon is dumb because it misses a lot of the points of why people like me still use C++
<heat> "the base system requires node.js"
<rpnx_> I don't want java with manual memory
<rpnx_> I want functional programming with efficiency
<heat> since you're not gonna watch it, please just look at this https://i.imgur.com/d4P5803.png
<heat> and this is a deep flaw in the C++ standard (not even the library, but the actual language!)
<moon-child> heat: as I recall, exceptions have been 'found to be expensive' because 1) people measured the cost of removing -fno-exceptions on their codebase which was already using result objects instead of exceptions everywhere
<moon-child> 2) compilers are insufficiently smart to ipo (including lto) and add noexcept to things that don't throw
<rpnx_> heat: I mean, that's not even idiomatic C++
<heat> i'm not saying exceptions are expensive, they're just bad
<moon-child> sure--that's a separate discussion; I don't agree with that either, but I don't care to argue that particular point
<heat> rpnx_, what?
<rpnx_> Passing around unique_ptr isn't idiomatic C++
wxwisiasdf has joined #osdev
<rpnx_> That's an anti-pattern
<geist> okay now everyone needs to hug
<rpnx_> Doesn't mean it's not common
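(For context, the kind of contrast heat's screenshot and the talk are about — a sketch, not the actual slide: a by-value std::unique_ptr parameter drags ownership transfer and a non-trivial destructor into what could be a plain pointer access, and on common ABIs it is passed in memory rather than in a register:)

    #include <memory>

    int deref_owned(std::unique_ptr<int> p) {  // takes ownership; the callee
        return *p;                             // must run the deleter
    }

    int deref_borrowed(const int *p) {         // a borrow: a plain load
        return *p;
    }

    int caller(std::unique_ptr<int> q) {
        // return deref_owned(std::move(q));   // transfer: heavier codegen
        return deref_borrowed(q.get());        // idiomatic when not transferring
    }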
<wxwisiasdf> Hi
<Mutabah> News to me
<zid> They're bad *and* expensive, best of both
<geist> i took a nap and came back and everyone is still arguing
<rpnx_> They're cheap actually
<rpnx_> But uh
<Mutabah> wxwisiasdf: 'morning
<zid> You know what's cheaper? no exceptions.
<geist> hola wxwisiasdf
<wxwisiasdf> hello geist and mutabah
* moon-child purrs and rubs against geist's legs
<geist> awww
<heat> rpnx_, how do you do that "idiomatically" "MODERN C++"
sonny has joined #osdev
<zid> I'm just here to see how long heat continues talking with someone who admitted they are doing it in bad faith
<geist> here's my general learning from C++ (and other high level languages) at various workplaces: Speed is not the main goal
<geist> as in, for the most part, most programmers, even excellent expert ones are okay with things being suboptimal
<heat> I'm just here to see how long zid stays here looking at me continuing to talk with someone who admitted they are doing it in bad faith
<geist> and that's really painful as a low level programmer, but it is the way
<wxwisiasdf> electron time!
<moon-child> if speed is not the main goal, then why not use a nice, gced language, rather than c++?
<Mutabah> Everyone has their own level of acceptable slowdown
<rpnx_> Heat: well generally I wouldn't be passing around ownership of a pointer to int
<moon-child> where 'move constructor' is not a word you even have to _think_ about
<geist> moon-child: because it's grey area. there is a substantial difference between 'not as fast as I can do the same thing in C/asm' and 'a nice gced language'
<heat> rpnx_, cheers, now I really know you're doing it bad faith
<heat> "how do you do this?" "well, the pointer to an int is stupid"
<geist> as in some of these patterns that heat was talking about in C++? not as fast. more code. but folks consider it to be an acceptable loss because it's safer
<moon-child> geist: java and common lisp are both quite fast
<moon-child> I think ocaml too
<heat> of course, if it were a complex data type the codegen would look wayyyyyyyy worse
<geist> moon-child: gced languages have their own sets of issues, especially as systems programming languages
<moon-child> somebody did a unixoid kernel in go
<wxwisiasdf> assembly is fast, but good optimized C code can outmatch it, C is fast, but good optimized C++ code can outmatch it, C++ is fast...
<moon-child> and our very own https://github.com/froggey/mezzano, of course
<bslsk05> ​froggey/Mezzano - An operating system written in Common Lisp (179 forks/3236 stargazers/MIT)
<geist> and javascript and brainfuck, and whatnot. doesn't mean it's a great idea
<moon-child> sure, yes, there are challenges, but
<wxwisiasdf> yah
<geist> because outlier examples exist does not negate the argument
<sonny> this is the great debate
<heat> wxwisiasdf, C and C++ are usually faster than asm
<moon-child> that somebody did it doesn't _mean_ it's a great idea, but I nevertheless think it _is_ a good idea
<heat> I'm definitely not beating the compiler
<sonny> I don't understand why not either
<wxwisiasdf> heat: they're yes, the compiler is crazy
<geist> we found, for example, that GCed languages in fuchsia were a fairly bad idea, for memory-overhead and GCs-happen-at-bad-times sorts of reasons
<geist> ie, writing your storage block driver in a GCed language? probably not a great idea
<moon-child> no, you can definitely beat the compiler, in the small. This is a bad myth. It's just not sustainable to develop large, modular applications this way
<wxwisiasdf> moon-child: omw to tell all the linux devs that they should rewrite in ASM
<moon-child> (and not particularly pleasant to develop small ones either)
<heat> if I spend a ridiculous amount of time on it? probbly
<heat> but the compiler knows a hell of a lot more about the CPU than I do
<wxwisiasdf> thing is, if you write something in asm, the next gen CPUs would outdo your work
<geist> but i think this is why C++ and rust are doing fairly well in systems programming: they're maybe not quite as efficient on the whole (space/speed) wise as the equivalent in C but you can write a lot of it pretty fast, and it can be made to be much safer and easier to unit test
<moon-child> heat: better read up
<geist> and therefore is fairly good for systems development
<moon-child> compiler was written by humans
<rpnx_> Heat: I think you are confusing idiomatic C++ with "Hello I'm a Java programmer let me learn C++" C++
<geist> and they're not GCed, etc
<heat> and I think you're trolling
<heat> I also have thoughts
<wxwisiasdf> the way you can achieve max performance is not using C or C++ or even ASM, it's removing all the cost abstractions - such as the kernel
<sonny> geist: I would not consider the other OSes to be outliers because there aren't that many to begin with, but I do realize the difficulty in deciding what to do with the gc
<heat> wxwisiasdf, remove everything, become cpu
<raggi> the main objection to GC in _that doc_ was due to the memory overhead of logarithmic free heap
<wxwisiasdf> heat: like S370 programs!
<rpnx_> heat: what about performance of std::sort vs qsort?
<geist> sonny: right, GC in resource constrained situations is a really difficult problem to solve
<raggi> there was an unsubstantiated claim that it cost more power, but no real data to demonstrate it
<rpnx_> Damn autocorrect in my irc client
<rpnx_> Let me figure out how to turn that off
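(The comparison rpnx_ is pointing at, sketched: qsort makes an indirect call through the comparator pointer for every comparison, while std::sort instantiates the comparison into the algorithm, where it can inline:)

    #include <algorithm>
    #include <cstdlib>

    // qsort's comparator: reached through a function pointer each time.
    static int cmp_int(const void *a, const void *b) {
        int x = *static_cast<const int *>(a);
        int y = *static_cast<const int *>(b);
        return (x > y) - (x < y);  // avoids the overflow of plain x - y
    }

    void sort_c(int *v, std::size_t n) {
        std::qsort(v, n, sizeof(int), cmp_int);
    }

    void sort_cpp(int *v, std::size_t n) {
        std::sort(v, v + n);  // operator< inlined into the instantiated sort
    }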
* geist waves at raggi
<raggi> there is a rumor of a paper at apple that substantiates a claim that gc's a power inefficient, but no one i know has seen it first hand
<heat> it's CIA CONFIDENTIAL baby
<moon-child> geist: regarding gc: yes, it's annoying if your block storage driver requires gc. (In particular, in the limit, you can't implement your gc using a gc.) But I think the solution to this is to have gc-free subsets for sensitive components (where, basically, 'cons'/'new' is outlawed)
<geist> i think GCed languages can work fine in embedded constraints, where you're running A Single App that is GCed
<geist> easy to control when the events happen, etc
<heat> locked away in a vault, that's itself locked away in a vault
<rpnx_> heat: I think the difference here is that idomatic C++ would either pass around objects by value or use references
<raggi> in the data in _that doc_ sadly most of the memory overhead was pthread stacks, because it was benching a go runtime that spawned a pthread per goroutine (which normal go runtimes don't do)
<sonny> geist: I would prefer to say it is not flexible, it's also hard to know what your requirements will be beforehand
<geist> it's when you have multiple of these things independently running where you can't model the interactions
<rpnx_> passing around pointers to objects is rarely done
<wxwisiasdf> all the GC nations lived in harmony, everything changed when the Interrupt nation attacked
<sonny> because there are strategies that work for resource constrained environments
<raggi> i think it's fine to use a GC language anywhere that might be preempted
<sonny> like a personal OS? :P
<moon-child> geist: in particular, a counterpoint to java is d, where it is idiomatic to allocate things on the stack, allocate things manually, allocate things with gc, or not allocate at all, in various contexts. The result is that you have convenient memory management 'in the large', for things that live long and have annoying lifetimes, but the tight code and the important loops can stay allocation-free
<geist> well and again i point out this is mostly systems level stuff
<geist> applications, etc are fine (unless they're particular kinds i suppose)
<rpnx_> I don't think there is a huge difference but obviously there are situations where you would use raw pointers in C++
<wxwisiasdf> raggi: mfw when i am GC'ing a task but then something important regarding PCIe so i task switch and oh fuck
<rpnx_> unique_ptr is a supplement to raw pointer, not a replacement
<rpnx_> now, as for this code
<geist> or say audio engines
<sonny> interesting approaches require you to forget about common conventions for processes (e.g. stack + heap layout)
<geist> or gpu drivers, or whatnot
<raggi> right, but you also generally don't preempt those threads, either
<rpnx_> It's kind of impossible to say what would be a good way to do it
<rpnx_> since this code is too abstract
<sonny> or handling existing code
<raggi> that's why i state this preemption based condition specifically
* geist nods
<wxwisiasdf> fair
<wxwisiasdf> i just think that gc's can be spaghettified when approached with a wrongly timed scheduler on critical gc sections
<geist> i mean you *do* preempt threads like that, but usually they're more real time, or at least more deadline centric
<raggi> there are plenty of allocators we write which are just immediate mode gc's, and some of them have relatively jitter-inducing costs
<moon-child> you can do real-time gcs, same as you can do real-time scheduling
<wxwisiasdf> that requires intricate scheduler timing & control
<wxwisiasdf> or avoiding using gc on critical sections too
<rpnx_> heat: though if we're being honest, this is just C++ missing a tiny tail call recursion opportunity. Zero overhead means you don't pay for abstractions that you don't use. The two sets of code do not do the same thing.
<geist> moon-child: i think the harder thing is to then have 20 or 30 of these separate GCed processes live together
<raggi> geist: you can have a gc that allows for masked critical sections, it's not too bad
<raggi> geist: i think people mostly just never got into it, it wouldn't look like your average m&s
<moon-child> 'plenty of allocators we write which are just immediate mode gc's' indeed. There's this pattern in game development where you have 'per-frame allocation' in an arena which you clean up at the end of each frame--I think with some _very slight_ tuning, you can get exactly the same performance profile out of a generational gc with pretenuring
<geist> raggi: yah
<moon-child> except you don't have to explicitly mark the per-frame allocations, so it's safer and less cognitive load
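(A minimal sketch of the per-frame arena pattern being described — hypothetical; the generational-GC variant moon-child proposes would get the same "reset everything at frame end" behavior without the explicit allocator:)

    #include <cstddef>
    #include <cstdint>

    // Bump allocator: each allocation is a pointer increment, and the
    // whole frame's objects are "freed" at once by resetting the cursor.
    class FrameArena {
        std::byte *base_, *cur_, *end_;
    public:
        FrameArena(std::byte *buf, std::size_t n)
            : base_(buf), cur_(buf), end_(buf + n) {}

        void *allocate(std::size_t n,
                       std::size_t align = alignof(std::max_align_t)) {
            auto p = reinterpret_cast<std::uintptr_t>(cur_);
            p = (p + align - 1) & ~(align - 1);   // round up to alignment
            auto *out = reinterpret_cast<std::byte *>(p);
            if (out + n > end_) return nullptr;   // frame budget exhausted
            cur_ = out + n;
            return out;
        }

        void end_frame() { cur_ = base_; }  // reclaim everything at once
    };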
<sonny> what irks me is that, eventually you end up with a bunch of resources you want to manage ... which is what a gc is for
<wxwisiasdf> just static
<raggi> moon-child: right, in the end the api there just ends up looking like any other explicitly pooling allocator
<rpnx_> but ultimately, there are different styles, and we can benchmark them to see which are the fastest on each compiler.
<moon-child> well, no, because you should be able to do it automatically
<moon-child> without cluttering the api
<sonny> yeah, that's what I think too
<geist> but i do think what you want to generally arrive at is 'using idiomatic <language> can you get what you want'
<wxwisiasdf> sometimes some resources are unnecessarily dynamically allocated, clear examples are- well, depending on the kernel ofc :')
<geist> it's generally possible to use a subset or some variant of a language to steer clear of the dragons
<moon-child> (obviously for the 95%ile case, so if you have enough regularity, you _can_ beat it, just with disproportionate effort and unsafety)
<geist> and C++ is all about that
<raggi> moon-child: you can do the free that way, but if you ever want to make persistent objects during a frame, you have at least two different ways to allocate things, so at least one must have an api
<moon-child> raggi: my point is, with the gc, you don't have to have multiple ways to allocate things
<moon-child> but it can give you almost exactly the same performance profile as if you did
<moon-child> by inferring what's per-frame and what's not
<wxwisiasdf> GC is just a duct tape to hide the real problem of having a lot of managed things
<raggi> well you do, unless you only ever make frames, and you never make anything that lives longer than a frame
<geist> in a single application, like a game. that makes sense
<geist> you assume you're basically the only thing running, and implement the code as such
<zid> what if it's an entire spreadsheet application, inside a game
<zid> see: eve online
<geist> that doesn't mean your game wont be pinging around size wise frame by frame
<zid> now you've written your excel sheet with gc overhead :( sad panda
<heat> zid, you mean football manager
<geist> now implement 20 or 30 of those simultaneously and you have some serious VM/memory issues to contend with
<heat> which is literally almost a spreadsheet
<zid> hey sometimes it plays little animations of a ball being kicked
<moon-child> raggi: I don't see why
<zid> more than excel can do
<heat> now is it?
<moon-child> raggi: are you familiar with implementation techniques for generational gcs?
<raggi> yes
<moon-child> (and in particular pretenuring)
<moon-child> ok. Then why couldn't it be automatic?
<raggi> in the automated form it has non-deterministic runtime, which in this context is an undesirable property
<moon-child> if your allocations are predictable, and you force a nursery gc at the start of each frame (this is the 'slight tuning' I was referring to), then it should be deterministic
<moon-child> if your allocations are unpredictable, then you didn't have deterministic runtime anyway
<geist> sure but isn't that just constraining the problem until it works?
<geist> ie, a game?
<moon-child> sure
<moon-child> I think you could apply the same sort of strategy to something like a network server, though
<moon-child> with a per-request pool/nursery
<geist> this is also why this sort of thing can work well in an embedded environment too: it's The App, so there's no degenerate cases where N things all hit their 'frame' at the wrong time
<moon-child> so I don't think it's an artificial set of constraints
<moon-child> just an exemplar of a case which comes up a lot
<geist> but if you had N things that each grow by 100MB per 'frame' and they happen to line up such that they all hit their limit and the system runs out of memory...
<geist> okay, so now the answer is everything preallocates 100MB. problem solved
<moon-child> again: if your allocations are unpredictable, then you didn't have deterministic runtime anyway
<geist> but now you just used N * 100MB extra memory to make the problem go away
<zid> as it turns out, there are always edge-cases that automatic systems are not tuned to care about
<zid> and if you don't control the automatic system, you may be the edge case next major version, good luck have fun
<geist> my use case i generally think about is the fuchsia problem: you have a sea of processes, each implementing various services
<geist> IPCing to each other. if each of those has a GCed heap individually they can control themselves, but in aggregate they're all pinging their memory usage as they grow and shrink
<geist> so you can quickly hit a point where they all get huge at the same time
<raggi> geist: is the channel allocator still using intrusive linked lists?
<geist> raggi: in the kernel? sure
<moon-child> geist: that seems to me like rather the opposite case
<moon-child> because you have many separate heaps, it smooths out the spikes that you would otherwise get if you had a single monolithic heap
<raggi> geist: i always wanted to try replacing that with external tracking, to see if it'll optimize stuff like io buffers much, which often spill over and waste good chunks of pages
<geist> moon-child: except it's not the spikes, it's the fact that their periods are all different so sometimes they have all just GCed so you're using a small amount of total ram
<geist> and sometimes you just so happen to sync up such that all of the heaps are about to pop
<geist> raggi: ah no the internal tracking is quite on purpose
<raggi> geist: i know it is, and for small messages it's fine, but people regularly make page sized buffers for io :)
<geist> it is a speed vs space tradeoff, with the idea that channel buffers are quite ephemeral and should not hang around for long
* geist nods
<raggi> that said, maybe more and more io is moving to vmos?
<geist> there are potentially some caching advantages to having the metadata/data in the same page, next to each other
<geist> though that's harder to measure
<geist> in general yeah
SGautam has joined #osdev
<geist> the big abuser of it is the fdio stuff, where it shoves through 8KB data packets + header
<raggi> geist: indeed, i think there are ways to do ok with other shapes too, though they may need to be explicitly topology aware in which case pain in the ass
<raggi> yeah, fdio being limited to 8kb is also sad panda
<raggi> you need to get closer to 64kb to amortize the cost of the syscalls
<raggi> (and it does, i tested it)
<geist> mostly a forcing function to try to get folks to build better abstractions :)
<raggi> haha :)
<geist> the stream object stuff is much more load bearing now
<raggi> is that working out?
<geist> seems to be, the fxfs stuff is using it, i believe
<raggi> i was a little worried that it's still too synchronous
<geist> oh no it's annoyingly asynchronous
<geist> that's where all the edge cases are
<geist> getting that and the pager to cooperate is difficult, with async in mind
<raggi> depends on the lens, when i left it was still non-overlapping issuance
<raggi> so apps will do read(); process(); write() under nominal conditions
<raggi> which is slow when those involve rtt's
<geist> oh no, the stream object is a kernel mediated read/write mechanism
<geist> oh oh i see what you mean
<geist> yah it doesn't have a async model there, which i think is unforunate too
<raggi> the problem is pages fill in response to a synchronous read request, so it's "bursty" effectively
<raggi> so it's much faster than the 8kb chunks because ultimately it has a bigger buffer per 6 syscalls
<raggi> but it still has big stalls
<geist> yah there's still a round trip with the fs server, tis true
<raggi> that'll only show up in more subtle benchmarks, I tried to explain it to abarth before i left but he gave me funny looks
<raggi> i think everyone was going to be happy for throughput to be much better :)
<geist> yeah the stream object is still very synchronous, posixy looking indeed
<heat> y'all need some io_uring
<geist> i think there has been some thought about adding an async api to it, and i think that's doable, just have to get the basics sold first
<heat> but before that, posix-aio please
<raggi> *nod*
<geist> or some io_uring like stuff, though i dont personally know io_uring
<moon-child> just don't do posix-aio the way linux does it
<heat> please do, you get the funnies
<geist> i dont tend to look too closely at what linux does as my first idea
<geist> doubleplus so since it may be patented
<heat> iz not
<moon-child> geist: io_uring is like vdso but it's not a one-way mirror
<heat> it's a cq/sq ringbuffer where you submit operations to
<heat> NVMe-esque
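(heat's description sketched with the liburing helpers — a hypothetical snippet, not fuchsia code: prepare a submission-queue entry, submit, then reap the completion from the CQ ring:)

    #include <liburing.h>

    // Read `len` bytes from `fd` at offset 0 through io_uring.
    int uring_read(int fd, char *buf, unsigned len) {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return -1;

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);  // grab an SQ slot
        io_uring_prep_read(sqe, fd, buf, len, 0);            // describe the op
        io_uring_submit(&ring);                              // one syscall

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);  // block until the completion arrives
        int res = cqe->res;              // semantics like read()'s return value
        io_uring_cqe_seen(&ring, cqe);
        io_uring_queue_exit(&ring);
        return res;
    }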
<geist> kay.
<raggi> zx_ports could gain a generic uring-alike if they could accept buffers for async request fulfillment
<raggi> but it'd be much more general
<raggi> you can already pump operations on channels somewhat similarly anyway, it's just slightly painful to program with the fidl api
<heat> oh yeah, interesting development on the io_uring front: https://lwn.net/Articles/903855/
<bslsk05> ​lwn.net: An io_uring-based user-space block driver [LWN.net]
<moon-child> linux: the next ukernel?
<raggi> well, turns out the cost of syscalls isn't so much the _cost of syscalls_ as the _cost of waiting for syscalls_
<moon-child> yeah
<raggi> in a sad state of irony, here i am replacing vdso time of day calls with stuff that avoids even hitting the vdso because the cost of the stack switching in a go program i'm working on is actually too high, so don't take it as rote :(
<geist> yah we end up having something halfway like that on the pager side (queue of requests, fs driver responds to them asynchronously) but not necessarily something for the client to async request from files
<raggi> yeah, the block driver fifo controls are very similar
<rpnx_> Go is actually quite a nice programming language to use.
<raggi> it's nice for writing http servers in
<heat> raggi, why do you need to switch stacks to hit the vdso?
<heat> is this a stupid goroutines thing?
<raggi> heat: goroutines have weird stacks
<moon-child> presumably switching from using libc to do it to doing it by hand?
<geist> the vdso, being a traditionally coded blob of Cish code, expects a stack, though it makes geuarantees about using no more than N bytes per stack
<raggi> nah, i'm mostly turning code that's doing if time.Now().Sub(sometime) > expiry into time.AfterFunc and friends, which insert a future task into the scheduler timer heap
<raggi> which if you're doing the check every, say, packet, is much more efficient
<raggi> geist: m.o.s.t.l.y
<moon-child> geist: oh, I assumed the dso itself just contained a correction factor & offset for the tsc
<moon-child> and the code was expected to be all provided by userspace libraries
<raggi> then they break it in some api in some release, and then you're stuck with it
<moon-child> never looked into it too closely
<raggi> the real problem is there be bugs
<raggi> the code in question is going through a general path intended to make syscalls
<raggi> and the same is supposed to apply, that there's a definition for how much space you need, but, bugs
<geist> moon-child: it can. it also can call through to the kernel. depends on implementation of hardware
<geist> raggi: oh are there cases where the vdso used more? we should add some build time tests for that
<raggi> not in fuchsia, but in linux
<geist> oh!
<moon-child> sure, but I assumed that it would just hold a flag if you can't tsc, and that that decision would be made by userspace
<geist> oh! okay.
<raggi> :)
<geist> okay that makes more sense anyway, i didn't think you worked on fuchsia anymore :)
<heat> maybe not far fetched to work on fuchsia in other companies anymore
<raggi> geist: right i don't, i work on this vpn thingy, and i've been staring at profiles going whyyyy is time taking so much tiiime
<raggi> which is all kinds of deja vu
<geist> i kinda wish we hadn't called it vdso, since it gets confused with linux's one, but no one came up with a better name
<geist> ZXDO
<geist> or zx_glory_hole
<heat> yo what
<raggi> ZXO
<raggi> no one cares about the D
<geist> ZX, drop the vowel, it's shorter
<moon-child> secret tunnel
<raggi> zapi
<heat> z library
<heat> zlib for short
<raggi> oh no
<geist> nah just reduce it to z
<geist> then you can link against -lz
<moon-child> ._.
<geist> and call computers zircon runs on: Z Machines
<kazinsal> libd, so you can pass -ld as an argument to ld
<moon-child> isn't that the frotz thingy?
<geist> and have the zircon kernel implement an infocom game... haha
<heat> z architecture? zarch?
* moon-child is still proud of https://github.com/moon-chilled/libeftpad
<bslsk05> ​moon-chilled/libeftpad - The best the JavaScript ecosystem has to offer, in C! (0 forks/0 stargazers)
<heat> that's the shittiest best shitproject I've ever seen
<heat> +1
<raggi> moon-child: nice, someone reminded me a little while ago about one of my old shitpost projects shardnull https://gist.github.com/raggi/560087
<bslsk05> ​gist.github.com: the secrets of the web scale sauce · GitHub
sonny has left #osdev [#osdev]
<moon-child> lmao that's great
<heat> thank christ you're sharding it
<heat> else netbsd can just randomly allocate a buffer full of zeros and memcpy it
<raggi> i managed to persuade someone once that it was a faster database than mongodb
<rpnx_> by the way, does anyone know the name of this assembly syntax? using .global and .section ".text" for example
<geist> did you try .text?
eroux has quit [Ping timeout: 268 seconds]
<geist> that might be a more direct way to signal that it's text code
<geist> otherwise, yes that's generally specifying that you're in a code segment (vs a data one)
<raggi> do you just mean gas ?
<moon-child> seems like gas, but I think gas uses .globl, not .global?
<geist> they're using clang's build in assembler, but it's following gas in this case
<heat> .global and .globl is the same thing
<rpnx_> ah, okay
<rpnx_> I'm reading some references and I'm not sure if I need to swap the argument orders
<bslsk05> ​developer.arm.com: Documentation – Arm Developer
<rpnx_> I'm not sure which way around they go in ARM ARM either.
<Mutabah> GNU AS doesn't mangle the ordering of things in ARM
<rpnx_> ah ok
<rpnx_> that's good
<Mutabah> just with x86 (where it uses at&t syntax)
eroux has joined #osdev
<heat> yeah
<geist> correct. it's a syntax thing, there are multiple syntaxes for x86 (intel, at&t, some others), but for ARM there's really only one
<heat> same with riscv, no mangling
<geist> 68k for example has more than one as well
<geist> pretty much the same thing as at&t for the same reason: someone made a syntax that follows VAX and made 68k and x86 look close to VAX syntax, since it was the predominant cpu for unix at the time
<wxwisiasdf> so that person is the responsible for `mov %%eax, *4(%%rbx,6,3)`???
<geist> but arches born since then didn't get the at&t treatment since they weren't around to get a unix port at the time
kof123 has joined #osdev
<geist> wxwisiasdf: well to be fair that's not valid syntax
<moon-child> the only one you have to blame for duplicate %s is gcc
<geist> but in general yes, it's because unix at the time wanted a consistent syntax for the different arches that it was ported to
<heat> praise be unix
<moon-child> can see the vestiges of this in the go assembler
<moon-child> heat: :<
<heat> and the vax
<wxwisiasdf> moon-child: gcc, printing directly to FILE*, since 1994
<moon-child> ? that has nothing to do with it
<geist> i always paste this as my answer to it, it's some VAX code i wrote a few years back, using *the* vax syntax as defined by DEC
<moon-child> it's because of the way the constraint syntax works
<bslsk05> ​github.com: lk/start.S at vax · littlekernel/lk · GitHub
<geist> and if you'll notice, x86 at&t syntax looks a hell of a lot like it
<wxwisiasdf> iirc gcc has %% because it still uses printf/snprintf formatting under the hood
<heat> oh look its x86 att but weirder
<geist> you'd almost miss that it's not x86 if you didn't look closer
<moon-child> wxwisiasdf: I just told you why it does that
<geist> heat: also notice left to right
<geist> anyway that's why
<moon-child> ooh wow, multiple memory operands!
<moon-child> fun
<heat> ILLEGAL
<moon-child> it has doubly-indirect stuff too, right?
<wxwisiasdf> moon-child: oops, didn't notice,
<heat> this stuff is banned by the house un-intelian activities committee
<moon-child> heat: movsb [rdi], [rsi]
<geist> anyway the primary things that att syntax picked up from vax is the % prefix for registers and the $ prefix for constants
<geist> which frankly i like, makes it more explicit
<geist> the main thing i think it fucked up on x86 is the extra addressing modes are weird and inconsistent
<geist> but then they are weird and inconsistent in the ISA too
<moon-child> explicit, yes, but also error-prone
<geist> moon-child: also yeah right? multiple memory operands. VAX is very flexible since any argument slot can be any addressing mode, including just the register itself
<heat> moon-child, that doesn't assemble
<heat> also wtf queue insertion instructions in the CPU?
<geist> heat: yep and there's a locked variant too, that is SMP safe
<heat> can't wait for INSRBTREE
<wxwisiasdf> heat: don't look at z/arch
<geist> it's actually not that strange when you think about it, because the particular linked list the cpu implements is a very simple, no branch version
<wxwisiasdf> mfw when BTREE search as an instruction
<geist> basically your simple circular doubly linked list
<geist> so it just does a single 4 op instruction
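A sketched guess at what such an instruction boils down to: inserting into a circular doubly linked list with four unconditional pointer stores and no branches. The node layout is an assumption, modeled on lk-style lists:
```cpp
struct list_node {
    list_node *prev;
    list_node *next;
};

// insert `entry` right after `after`; works identically whether the list is
// empty (head pointing at itself) or not, hence no tests and no branches
static inline void insque(list_node *entry, list_node *after)
{
    entry->next = after->next;
    entry->prev = after;
    after->next->prev = entry;
    after->next = entry;
}
```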
<moon-child> heat: .intel_syntax noprefix
<wxwisiasdf> mfw when UTF-to-ASCII conversion as an instruction (CU8)
<geist> wxwisiasdf: (what does mfw mean?)
<wxwisiasdf> my face when
<geist> ah.
<heat> moon-child, still doesnt
<\Test_User> > my face when when
<heat> smy my head
gxt_ has joined #osdev
<heat> smh*
gxt has quit [Remote host closed the connection]
<geist> at&t style 68k is also pretty similar: https://github.com/littlekernel/lk/blob/master/arch/m68k/start.S
<bslsk05> ​github.com: lk/start.S at master · littlekernel/lk · GitHub
<geist> has the $ and whatnot stuff, but then it uses # for constants and @ for dereferences, so a bit strange
<geist> but that's *not* the same as the official motorola syntax
<heat> moon-child, actually movsb works, the others don't
<heat> fuckin weird
<moon-child> huh
<geist> but also left to right
<zid> can we get a NSFL tag on this entire convo pls
<Mutabah> I learnt with early x86 - nothing a CPU does can break me
<moon-child> zid: what's wrong?
<zid> You have vax assembly just sitting out in the open where a child might see it!
<moon-child> I _did_ see it!
<heat> are you anti-vax zid
<geist> zid: we must train the children in the Old WAys
<moon-child> lmao
<geist> one day they will roam the outback in their gas guzzler cars, looking for the old Vaxen
<geist> with the last of the V8s
<wxwisiasdf> Mutabah: what about itanium
* geist senses heat sitting upright
<heat> PRAISE BE ITANIUM BEST ARCHITECTURE EVER BABYYYYYYY
<wxwisiasdf> Yes
<wxwisiasdf> instruction scheduling, my beloved
<geist> ALL HAIL IA64 the best 64
<Mutabah> Try the XB360's PPC :)
* geist looks around, hoping they have pleased the gods
<heat> they had two fucking reset vectors
<wxwisiasdf> "we don't have to do register colouring if we just throw a bunch of registers at the problem"
<geist> yo dawg i heard you like registers
<heat> someone at intel (from tianocore) told me they used the ia64 registers as a small, early heap
<geist> though honestly the cell processors were kinda ridiculous for that
<geist> since it was just a shitton of registers and, well have at it
<wxwisiasdf> fun
<geist> oh hey qemu v7.1.0 was released apparently
<wxwisiasdf> i think you had to schedule instructions in a certain specific way when doing asm because itanium ran them in parallel so you couldn't have data dependencies in a block or it goes to shit, that is what i recall
<wxwisiasdf> oh, qemu v7.1.0? i need to look at it, have they fixed the qemu-system-alpha yet? :^)
<geist> oh i dunno, i just synced the tree and the tag showed up
<Mutabah> It's still in alpha, gotta expect some breakage
<geist> has been in rc for a while
<Mutabah> :D
<wxwisiasdf> lol
* moon-child trouts Mutabah
<wxwisiasdf> windows nt for alpha won't run, at least that is what i was told by the super-enthusiastic windows nt for alpha workstation people :^)
<kof123> es40 is free (but slow) there are a few others, yes, no nt4 or 2000 beta as well IIRC
<wxwisiasdf> also didn't qemu have like a surprise christmas thing
<wxwisiasdf> are they going to do that thingy again?
<geist> it not running windows nt could be for any number of a bazillioon reasons
<geist> probably just have to have someone dig into it, but that's a bitch
lainon has joined #osdev
<geist> trying to debug an emulator running a large binary that you dont have source to is no fun
<heat> unixware 1.0 doesn't even run on virtualbox x86
<heat> the IDE driver fucks up somewhere and just hangs
<geist> yah lots of times you get ancient drivers that assume certain timings about hardware
<wxwisiasdf> geist: the worst parts of osdev is when the emulator is actually wrong
<geist> that either new hardware doesn't meet or emulators don't
<wxwisiasdf> i mean it's super rare for an emulator as mature as qemu to be wrong, in fact i haven't had the pleasure of witnessing it happen yet
<geist> i've seen stuff like IDE where some driver writes to a register and then reads back assuming the hardware couldn't have possibly finished by now
<geist> and then *really* reads back, but that first read back latched an IRQ or something (registers that latch on read are the devil)
<geist> but that kinda logic fails in emulators or faster hardware
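A sketch of that anti-pattern; the register layout, bits, and mapping are all hypothetical:
```cpp
#include <cstdint>

enum { CTRL = 0, STATUS = 1 };
enum : uint32_t { CTRL_START = 1u << 0, STATUS_DONE = 1u << 0 };

void start_and_wait_broken(volatile uint32_t *regs)
{
    regs[CTRL] = CTRL_START;
    (void)regs[STATUS];   // "delay" read: assumes the hardware can't possibly
                          // be done yet -- but if STATUS latches state (or an
                          // IRQ) on read, this throwaway read just ate it
    while (!(regs[STATUS] & STATUS_DONE))
        ;                 // spins forever on an emulator or faster part where
                          // the first read already returned (and cleared) DONE
}
```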
<wxwisiasdf> x_x
<geist> 'wrong' is very much a grey area when it comes to running ancient software
<geist> many times the emulator has to work around bugs in the old software
<geist> since you can't just recompile it
<wxwisiasdf> well, why not take a more simulationist approach?
<wxwisiasdf> oh yeah because this is an emulator, whoops
<geist> feel free to implement it
<geist> if you can somehow simulate an ancient alpha workstation
<wxwisiasdf> no, i am not that skilled
<geist> nor is anyone
<wxwisiasdf> fair enough
<wxwisiasdf> which emulator would you say is better for debugging? bochs or qemu? and which has a better % of stuff that runs on real hw
<geist> for what hardware in particular?
<wxwisiasdf> normal x86
<geist> define 'normal x86'
<Mutabah> qemu has much wider hardware support
<geist> that's 40 years of hardware there
<Mutabah> but... bochs is much more accurate for CPU emulation, and much more detailed for debugging initial bringup
<wxwisiasdf> well let's say i686
<heat> i think it takes a mix
<geist> right. in general bochs can be useful for very very early cpu bringup, but then i'd switch to qemu for everything else
<geist> and you can in general get qemu to be about as useful as bochs, if you know the appropriate incantation
<heat> lots of shit you pull in emulators doesn't work in real hardware, some shit you pull in kvm doesn't work in real CPUs or emulators, etc
<geist> (singlestep, TCG, etc)
<Mutabah> Being a pure simulator, bochs is very accurate/debuggable, but SLOW
<wxwisiasdf> fair enough
<Mutabah> And really - you'll want to test on a variety
<Mutabah> bochs/qemu/virtualbox/vmware
<Mutabah> all have slightly different implementations of the "standard" virtual hardware (e.g. USB controllers)
<geist> also qemu doesn't emulate very far back into the past. i think the oldest cpu it can emulate is a 486 and in my experience it's a bit sloppy
<geist> more like a pentium pro that looks like a 486
<Mutabah> so will prepare your code for real hardware
lainon has quit [Quit: Konversation terminated!]
<geist> tis why i was asking what generation hardware you're concerned about
<wxwisiasdf> well
<wxwisiasdf> mostly 2009-2018
<heat> qemu also can't emulate anything haswell+
<heat> no avx
<heat> it can use KVM and work as a hypervisor, that works super well
<heat> but no actual emulation (tcg)
<geist> right, as a VMM (virtual machine monitor) it's quite good
<heat> also because of that my OS kinda doesn't run on TCG right now
<geist> how so? assumes avx?
<heat> because it assumes haswell and avx and I even added an avx memset
<geist> ah
<heat> well, OS here = userspace ofc
<geist> sure
<heat> the kernel works but crashes right in pid 1
<heat> I'll need to hack the build system if I want to get CI testing
<rpnx_> hum, is the DTB the right way to determine how much ram is on the device?
<heat> yes
<rpnx_> is this... in ascii format?
<heat> no
<geist> https://wiki.qemu.org/ChangeLog/7.1 looks like a bunch of riscv changes
<bslsk05> ​wiki.qemu.org: ChangeLog/7.1 - QEMU
<geist> privileged spec 1.12, i should double check what that entails
<bslsk05> ​developer.arm.com: Documentation – Arm Developer
<heat> geist, also arm
<rpnx_> I want to know the format that this struct gets passed in when the kernel actually boots
<heat> new machines too
<geist> yah bunch of arm features
<kazinsal> looks like priv 1.12 is just 1.11 but ratified
<geist> rpnx_: look up device tree
<geist> the format is extremely well documented
<bslsk05> ​IRCCloud pastebin | Raw link: https://irccloud.com/pastebin/raw/jBI3glEf
<geist> however i'd also just recommend libfdt. it's no fun to low level parse it, but libfdt is very handy for first level parsing
<wxwisiasdf> use libfdt, rolling your own is prone to all sorts of tomfoolery
<geist> it doesn't parse the structure for you, but it gives you a bunch of routines to deal with the low level bit stuffing of the format
<geist> and is quite small
<wxwisiasdf> (can say from trying to implement FDT from scratch, kind of regret it now)
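For reference, a minimal sketch of what rpnx_ is after, using libfdt as suggested: pull the base/size pair out of /memory's "reg" property. This assumes a single reg entry with 64-bit address and size cells (typical for arm64); a robust version would read #address-cells/#size-cells from the parent node instead:
```cpp
#include <libfdt.h>
#include <stdint.h>

int get_memory_range(const void *fdt, uint64_t *base, uint64_t *size)
{
    if (fdt_check_header(fdt) != 0)
        return -1;

    // matches /memory@<unit-address> too, since no unit address is given
    int node = fdt_path_offset(fdt, "/memory");
    if (node < 0)
        return -1;

    int len = 0;
    const fdt64_t *reg = (const fdt64_t *)fdt_getprop(fdt, node, "reg", &len);
    if (!reg || len < (int)(2 * sizeof(fdt64_t)))
        return -1;

    *base = fdt64_to_cpu(reg[0]);   // property values are stored big-endian
    *size = fdt64_to_cpu(reg[1]);
    return 0;
}
```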
<rpnx_> is the format described with IEEE 1275?
<geist> i think so yes
<geist> it's an old format
<heat> i think you need to ditch dtb, and use ACPI and ACPICA
<heat> it's a really good format, really recommend it
<geist> thank you microsoft
<heat> best 100kloc you'll ever introduce into your kernel
<geist> haha
<wxwisiasdf> lmao
<wxwisiasdf> btw, any way to dump gdt on qemu?
<geist> all of the gdt or just what is currently loaded?
<wxwisiasdf> all
<geist> info registers shows you what it has loaded. ah no, not as far as i know
<geist> not formatted at least
<geist> though you can use info registers to figure out the gdt base and then print the memory with x
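Roughly that incantation in the QEMU monitor (the base address here is made up); each 8-byte word dumped is one descriptor:
```
(qemu) info registers
...
GDT=     000000007ffe2000 0000007f
...
(qemu) x /16gx 0x7ffe2000
```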
<wxwisiasdf> that's better than nothing :&)
<wxwisiasdf> * :^)
<heat> bochs has that
<heat> iirc
<heat> btw I forgot to mention ACPICA is basically maintained by a single guy
<wxwisiasdf> well i always have gdb to print out the gdt entries - what's the bochs thing though?
<heat> it actually gives you info IIRC
<heat> but the last time I used bochs was in like 2015
<wxwisiasdf> great, say, was there a LDT debugging thing?
<heat> idunno
<heat> try it
<heat> you know, all-in-all I think ACPI was just a shit solution to a really really shitty problem
<heat> problem: "PC platform fragmentation is huge, your product needs to support every PC out there or customers will complain, and it needs to be extensible"
<heat> it's the same problem that originated EFI really
<heat> but EFI really has shown that it does effectively work, for better or worse
<zid> I wish acpi were more like usb hid, as in, it was just a list of names that map to numbers
<zid> just a key value store for various properties (the other end can be mmio to toggle the LED that it describes, or whatever)
<heat> I think actual code would've been better
<heat> arm-ish
<heat> mov some values to some registers and hvc
<zid> fuck code
<zid> make your device simpler
<heat> effectively that's what it ends up doing in ACPI anyway
<zid> brightness should be a logicalmin logicalmax you write to a single address
<zid> etc
<zid> (like hid)
<heat> lots of methods just do a write to an IO register that is defined by like every intel board ever since 2000 to trap into SMM
<heat> yeah this is it, 0xb2
<rpnx_> hum
<epony> How are you in the arm implementers hobbyists club doing today? https://en.wikipedia.org/wiki/Open_Letter_to_Hobbyists#Open_letter
<bslsk05> ​en.wikipedia.org: Open Letter to Hobbyists - Wikipedia
heat has quit [Ping timeout: 240 seconds]
mrvn has quit [Ping timeout: 248 seconds]
wxwisiasdf has quit [Quit: leaving]
MiningMarsh has quit [Quit: ZNC 1.8.2 - https://znc.in]
<rpnx> Can qemu emulate the raspberry pi 4?
<geist> not tip of tree, but i think there are branches maintained that can
<bslsk05> ​www.qemu.org: Raspberry Pi boards (raspi0, raspi1ap, raspi2b, raspi3ap, raspi3b) — QEMU documentation
<geist> right
<rpnx> I am wondering if I should scavenge for an older raspberry pi, I may have lost my old ones.
MiningMarsh has joined #osdev
<geist> for precisely what purpose?
<geist> (just trying to nail down your requirements so i can give you a good answer)
<rpnx> oh, just so that I can have hardware and VM that match.
<rpnx> I am not sure how to debug the kernel with hardware
<geist> sure, but what i mean is what are you trying to do precisely?
<rpnx> I thought that might be easier with a vm
<geist> because you're going to....?
<kazinsal> what problem are you attempting to *solve*
<geist> exactly
<rpnx> I don't know yet, maybe view memory or try to gdb the kernel somehow
<geist> ... 'the kernel' is what?
<geist> something you wrote? linux?
<kazinsal> hold on, this is gonna need another guinness
<rpnx> Well something I write, just to be able to read cpu memory and so on.
<geist> i dont know what you're trying to do here at the overall thing
<geist> okay, so are you interested in a particular architecture?
<geist> arm32? arm64?
<rpnx> arm64
<geist> etc etc. i'm trying to get details out of you so i can suggest which rpi to get, etc
<geist> okay. thank you.
<geist> so a rpi3 should be sufficient for that. do you have one?
<geist> both rpi3 and rpi4 are 64bit. however rpi4 changes some details (in a positive way) which is probably why qemu hasn't been revved to emulate it yet
<rpnx> Not sure, I have 2 more model B, but I'm not sure where they are and whether one of them is a RPi3 or not. I guess I'll need to dig around in the basement or order a new one.
<geist> trouble is they're all pretty unobtanium right now
<geist> chip shortage, etc
<geist> so yeah finding a rpi3 would be good, or stick with qemu for now
<rpnx> Is it possible to debug the board via serial or something like that?
<geist> depends. what do you mean by debug precisely?
<rpnx> maybe after soldering to some debug pads and connecting to some kind of usb device?
<rpnx> Like embedded debugging
<geist> via jtag? not sure, but i'm fairly certain no
<geist> oh wait, no there is jtag available, so maybe
<geist> but you need a jtag unit for that. openocd may do it
<rpnx> JTag, SPI, or whatever it is using
<rpnx> I don't know all the protocols these boards use
<geist> in general folks use rpis as the jtag unit, not as the thing to be debugged, but i think the jtag specs are open for cortex-a53, and i'm fairly certain openocd understands it
<geist> otherwise, no you just have a serial port (and a screen)
<rpnx> Are there SWD breakout pads?
<geist> i dont think it uses SWD so no
<geist> that's generally for cortex-m class stuff
<geist> (the big raspberry pis, raspberry pi pico is a different beast)
<rpnx> ok, so I need a jtag usb debugger thing
<geist> yeah or you just go without a jtag
<geist> honestly that's what most of us do. jtag can be a serious crutch, IMO. i rarely use it except for extreme debugging situations
<geist> useful occasionally, but dont rely on it ever being there, since most of the time it isnt
<rpnx> Hum? How so?
<geist> well most devices dont have it available
<geist> or you dont have the hardware to do it, etc. but a serial port is far more available
<rpnx> I mean, if I manage to make a kernel that supports 1 device I'll call that an accomplishment. :)
<geist> sure, and you dont need jtag for that
<rpnx> Supporting other devices... by that time I'll probably start working on Linux or something.
<geist> though i should ask why in particular you want to support arm64?
<klange> > start working on Linux
<klange> that's quitter talk
<geist> is it that you want to support arm64, or you want to suppoer a raspberry pi? or just anything in particular?
<geist> personally i'd start with qemu. gets you the most of what you need for debugging. you can get through the hard bootstrap stuff much easier. then once you get a hello world style thing working and bootstrapped the cpu you can go back and try to get it running on something
<rpnx> Hum, 1 raspberry pi device and then maybe others, other arm devices, then maybe x64, I donno.
<geist> then can work in lockstep like that
<geist> starting out cold on real hardware is much more difficult
<klange> I've just been using a serial console. https://klange.dev/s/Screenshot%20from%202022-09-02%2015-20-48.png
<geist> though it was The Way back in the day before bochs and qemu, but i can tell you from experience it was a lot more difficult
wxwisiasdf has joined #osdev
<geist> and yeah i think you'll find that most folks just use a serial console
<rpnx> Well I chose raspi because the steps to compile the kernel and make a bootable image were super easy
<klange> Eh, not really?
<rpnx> copy file on sdcard... if the instructions are to be believed.
<geist> yeah it's a bit more complicated, unfortunately
<rpnx> assuming you hijack an existing sdcard image
<rpnx> which is what it says to do
<geist> again, i'd just start with qemu for right now. it might not be that exciting but it lets you get over the difficult parts with a fair amount of safety net
<rpnx> What major differences are there between raspi 4 and 3?
<geist> lots. quad core a72 vs a53 (though mostly just a lot faster), and different interrupt controller
<geist> various hardware layout differences
<geist> bootloader situation is different
<rpnx> let me go downstairs and see if I can find some older raspberry pi
<geist> though almost all of these are better, honestly
<kazinsal> Pi 3's ethernet is attached to a USB 2 bus so it's bottlenecked
<geist> ah yes pi4s eth is actually native
<kazinsal> 3 also caps at 1 GB RAM but the 4 series goes up to 8
<geist> i think rpnx has hit the age old problem of hardware first. i've seen it countless times here over the last 20 years
<kazinsal> mmmyep
<geist> someone has a piece of hardware they wanna run on, so they contort themselves to work around that limitation
<geist> sometimes it works out, sometimes they just get frustrated
<klange> I definitely get it, wanting to have a thing you can hold / smack someone over the head with that is running your coe...
<klange> code*
<geist> oh totally
<geist> there's just a point where its counterproductive
<geist> or at least depends on what your goal is
<klange> It is quite literally why I bought an RPi400 - or even targetted aarch64 instead of going for riscv next.
<geist> word
<geist> looks like you were lucky too, i think those are unobtanium as well
<kazinsal> that's why you should always write an OS for a vintage IBM microcomputer. bash someone over the head with the keyboard, then keep coding
<geist> i have mine right here, i should load up tauros on it
<klange> I think current master should at least boot to an unusable GUI.
<kazinsal> I need to sit down and figure out what the architecture of PCem looks like because its UART implementation is quite literally just enough to make serial mice work and nothing else
<geist> actually rpi 400s look available
<geist> surprising
<geist> at least adafruit seems to hav eone
<klange> I have been talking-about-but-not-actually-doing things with xhci for the last several months... my priorities just keep bouncing around too much to make solid progress.
<geist> unless their site is lying, cause no one else has one
<Mutabah> klange: If you have questions, feel free to ask :) I just "completed" a driver
<geist> looks like rpi4s are in general unobtanium right now
<klange> I got the 400 kit, the one with the cute book and the shoddy mouse, from Digikey Japan about a year ago.
<rpnx_> ah I think I am missing one
<rpnx_> but I found 2 model B and 3 B+
<rpnx_> also a zero W
<rpnx_> and a lattice FPGA board and some arduinos :)
<rpnx_> I swear I had the raspberry pi 1 though
<rpnx_> not sure where it went
<klange> I have a 2040 I just got as a free throw-in when I ordered some programmable LED strips from Adafruit, and I have an original B from when they launched.
<rpnx_> oh I found it
<rpnx_> I had it on a shelf up here
<rpnx_> Raspi 1 model B+ v1.2
<geist> well, the 3 is what you want anyway
<wxwisiasdf> hmmm
<geist> the previous ones are arm32 only
<kazinsal> hmm. looks like pcjs supports serial ports now? neat
<klange> Mutabah: I think I mentioned that I've at least gotten to the point where I can communicate with the controller and have set up some of the initial ringbuffers, which was a terribly involved task with initializing PCIe and getting the firmware loaded...
<kazinsal> might need to start using that for working on this dumb project
<Mutabah> Ah, yes, I remember that
<klange> Mutabah: And then I went on vacation several months ago and haven't really touched it since - I think I'm at a stage where I can assign an address to a device? So most of the rest is "write the ****ing USB stack"?
<rpnx_> I think I had a zero and another Zero W somewhere as well but I don't know where I put them
<rpnx_> this one for some reason
<rpnx_> I decided to solder directly onto the power pads
<Mutabah> klange: Well, if you don't have a stack already, XHCI is the place to start :)
<rpnx_> to skip the usb interface
<Mutabah> It does a lot of things in the controller that were part of the stack before
<Mutabah> E.g. address allocation
<wxwisiasdf> time to speedrun
<klange> I think with the state I have the rpi driver in, I should be able to continue in QEMU (with either arch) and make some progress on the stack more rapidly, but those bouncing priorities have kept me distracted.
<rpnx_> I wonder if my raspberry pi 1 will become a precious antique
<rpnx_> They're $70 on ebay right now... pretty sure that's more than I paid for mine so many years ago
<kazinsal> oh hello. pcjs has a vt100 emulation and you can hook its serial port up to the serial port of another machine in the same emulation definition
<kazinsal> wonder if you can hook it up to like, a websocket or something
<Mutabah> qemu's xhci emulation was a godsend. Although, it does take some shortcuts
<geist> move semantics are magic: https://gcc.godbolt.org/z/vK7Ysn1xx
<geist> what i dont get is i only have to declare that the move constructors are there on class foo
<geist> i dont have to actually put the body in it. i thought i'd have to at least put = default
<geist> but just saying `foo(foo &&);` is sufficient
<rpnx_> interesting
<rpnx_> I wonder if that behavior is C++ spec or an extension
<geist> yeah that's what i'm worried about
<Mutabah> Ah, the move constructor is never referenced there
<geist> yeah it never actually uses it
<Mutabah> It's defined to not use it in that case
<geist> ah yeah i think i see what you're getting to
<Mutabah> there's a language feature where if you return a locally-defined object, that object is actually constructed in the return slot
<Mutabah> so `bar`'s `a` is the same memory as `main`'s `b`
<rpnx_> oh yeah
<geist> right, but if i explicitly do *not* declare the move constructor then it fails, because the copy constructor is declared as deleted
<rpnx_> it's probably not defined in this case... interesting
<Mutabah> NRVO iirc - named return value optimisation
<rpnx_> yeah
<geist> it's the declaring of the move constructor, even though it doesn't use it, that makes it work
<rpnx_> it's required to move elide
<Mutabah> ^
<Mutabah> I assume it needs the move constructor (for semantic reasons) but then just doesn't call it
<rpnx_> the declaration of the move constructor makes the type movable even though it will not link if you try to use it
<rpnx_> but since it is movable, you get the move elision
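A reduced version of what geist's godbolt example presumably looks like: the move constructor is declared (so the type is movable and the return is well-formed) but never defined, and NRVO means it is never actually called, so this builds and links. If the compiler declined to perform NRVO it would fail at link time, which is why `= default` is the safer spelling:
```cpp
struct foo {
    foo() = default;
    foo(const foo &) = delete;   // non-copyable
    foo(foo &&);                 // declared, never defined
    int x = 0;
};

foo bar()
{
    foo a;
    a.x = 42;
    return a;                    // NRVO: `a` is built in the caller's slot,
                                 // no move constructor call is emitted
}

int main()
{
    foo b = bar();
    return b.x;
}
```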
<geist> hrm. okay
<Mutabah> yay for C++
<geist> i knew about NRVO which is really what i was trying to get it to trigger for an otherwise non copyable object
<rpnx_> I think the spec will not implicitly define it for you, it's still an undefined symbol :)
<rpnx_> is NRVO required to get move elision?
<rpnx_> I thought it's not required
<geist> yeah if i tried to actually use it. so now what i kinda want is this behavior without needing to declare it
<geist> basically i have a non copy class, but would like to have it return on the left side
<geist> so it seems lika perfect case of move
<Mutabah> Define the move constructor with `=default;`?
<geist> guess i'll just have to declare a =default move constructor in this case
<geist> yah, that
<Mutabah> yes it's janky, but welcome to C++
<geist> now i just have to remember if that will *always* guarantee that the destructor is not run at the call site?
<geist> if so, perfect
<rpnx_> ah ok, yes it is required
<geist> iirc the language considers the object dead at this point and just lets it go out of scope?
<Mutabah> For NRVO, yes.
<Mutabah> but other moves - they will leave the moved-out slot around
<Mutabah> (and call the destructor)
<geist> but for non-NRVO it will need a real constructor that actually wipes out the old one (it's a RAII style wrapper)
<rpnx_> geist, move constructors don't destroy the object, they just leave it in unspecified state
<rpnx_> usually e.g. empty/null state
<geist> yeah
<geist> and i assume the =default move constructor just copies
<rpnx_> the =default does a memberwise move
<rpnx_> e.g.
<geist> and thus i really should override it such that it wipes out the key parts of the old object (because it's RAII and it'll close up the internal ref to the thing)
<Mutabah> It calls the move constructors of all contained fields
<rpnx_> struct Buz {std::string foo; std::string bar; }
<rpnx_> implicit move constructor moves foo and moves bar
<Mutabah> If you have managed resources, it's a good idea to wrap them in thin RAII wrappers
<rpnx_> Basically, the implied move constructor is "move all subobjects"
<rpnx_> same for move assign
<rpnx_> so.. if you have raw pointers, you should null-initialize and then swap fields with the other object.
m5zs7k has quit [Ping timeout: 252 seconds]
<geist> yeah filling it in now
<geist> https://gcc.godbolt.org/z/zqxj1fdWc is now using the move constructor
<rpnx_> That's the pattern I use. You could also copy and then null but why do that? since this pattern delegates nicely. and if you forget any fields they end up null or in the other object, not duplicated
<Mutabah> ^
<geist> makes sense
<Mutabah> IMO - if you need a manual copy/move constructor, your class should only have one field - the field that needs manual handling
<rpnx_> foo(foo&& other) : foo{} { swap(other); }
<Mutabah> Reduces chances of forgetting it
<rpnx_> void swap(foo & other) noexcept { std::swap(a, other.a); std::swap(b, other.b); ... }
<rpnx_> this also makes std::swap work on your type as a byproduct
<rpnx_> two birds with 1 stone
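The whole pattern rpnx_ sketches, assembled into one minimal RAII sketch; the wrapped fd and its close are hypothetical stand-ins for whatever resource is actually held:
```cpp
#include <utility>   // std::swap

struct handle {
    int fd = -1;     // default-constructed state is the safe "null" state

    handle() = default;
    explicit handle(int f) : fd(f) {}
    handle(const handle &) = delete;
    handle &operator=(const handle &) = delete;

    // delegate to the default constructor first, then swap: the source is
    // left holding our null state, so nothing gets double-closed
    handle(handle &&other) noexcept : handle() { swap(other); }
    handle &operator=(handle &&other) noexcept { swap(other); return *this; }

    void swap(handle &other) noexcept { std::swap(fd, other.fd); }

    ~handle() { /* if (fd >= 0) close(fd); -- or the moral equivalent */ }
};
```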
<geist> yah though i'm not using std:: at all here, so trying to do it manually
<geist> but makes sense
m5zs7k has joined #osdev
<rpnx_> Meanwhile I'll be abusing std:: and just reimplementing the parts that I need.
<geist> https://gcc.godbolt.org/z/qPEsc1jG6 using std::swap
<geist> okay, makes sense. been a while since i wrote a raw move constructor, always a re-learning experience
<rpnx_> oh, you might also be able to do
<rpnx_> auto tie() { return std::tie(a, b); }
<rpnx_> then
<rpnx_> void swap(foo && other) noexcept { auto t1 = tie(); auto t2 = other.tie(); t1.swap(t2); }
<rpnx_> weirdly they have to be lvalues
<rpnx_> do you have to name them
<rpnx_> *so
<rpnx_> not sure if that actually works
<rpnx_> i think it should but.. not working for me
<geist> https://gcc.godbolt.org/z/z1d4TMhs5 pretty happy with, so i think i grok it enough for now
<geist> just didn't want to leave a footgun that bites me later
<rpnx_> It says it calls swap. but the assembly code generated by godbolt doesn't match
<rpnx_> I wonder if the swap implementation is wrong , since swapping a reference to int should swap the value referenced, not the sub object containing the reference
<rpnx_> I bet the implementation is swapping the reference subobjects instead of what they are supposed to be swapping
<geist> ah maybe
<rpnx_> maybe I'm just reading the specification behind tuple::swap completely wrong though
<geist> https://gcc.godbolt.org/z/j5br9dxWE is it broken out
<geist> at first glance it looks like a swap, so maybe it's just aggressive optimizations when it's all inlined
<rpnx_> geist the behavior is technically undefined there
<geist> oh yeah?
<rpnx_> you used an uninitialized variable,
<rpnx_> foo(foo &&other) doesn't initialize a before swapping it
<rpnx_> need to do
<rpnx_> foo(foo &&other) : foo {} { swap(a, other.a); }
<rpnx_> assuming foo{} is valid
<rpnx_> if not, you can do
<rpnx_> foo(foo && other) : a {} { std::swap(a, other.a); }
<geist> ah good to note
wxwisiasdf has quit [Ping timeout: 244 seconds]
<rpnx_> but you have to repeat that for all member variables, so I'd suggest making the default constructor
<rpnx_> oh here's a fun one btw
<moon-child> see, it's this kind of thing
<moon-child> exactly this kind of thing
<moon-child> in c, if I want to swap two variables, I swap them
<moon-child> done!
<rpnx_> foo(int) try : a() { } catch (std::exception & ex) {}
<rpnx_> moon-child, it's undefined behavior in C too
<rpnx_> so not really very interesting example
<geist> yeah no exceptions here so dont care
<rpnx_> well, c does not have std::swap but
<moon-child> what's UB? All I saw was some exceptionally complicated code which seems to swap two variables. Does that exceptionally complicated code not even work properly?
<moon-child> 'c does not have std::swap' yeah, and take a guess as to why...
<rpnx_> int temp = uninitialized_variable; uninitialized_variable = other; other = temp;
<rpnx_> that's undefined in C
<rpnx_> just like C++
<geist> thanks for the help btw
<geist> i really didn't intend on this to become a C++ night, but i'm slowly having to build up a mini template lib for the kernel over time
<moon-child> rpnx_: I don't care about the UB case. I care about all the junk you're doing for the _not_ UB case
<geist> and adding little helper routines here and there as i go. in this case i wanted a simple RAII wrapper around an open block cache sector
<rpnx_> moon-child, you mean setting the variable to 0 before swapping it?
<\Test_User> why would you set a variable to 0, then swap it? a=b; b=0;
<rpnx_> \Test_User, because some types cannot be copied, but can be moved and swapped.
<\Test_User> everything is a bunch of 1s and 0s, 1s and 0s can be copied
<rpnx_> Well, unique_ptr cannot be copied, otherwise you would get multiple delete for the same pointer
<rpnx_> as one example
<geist> we're having a passive aggressive C vs C++ argument here
<geist> i think both of you recognize that the concepts are different
<\Test_User> if you insist, you can simply move b to a, then set b to 0
GeDaMo has joined #osdev
<\Test_User> no swapping involved
<rpnx_> ... the swap is part of the implementation of move though
<rpnx_> so... stack overflow
<rpnx_> if you try to move inside the move :)
<\Test_User> erm, so now move doesn't just move, it swaps stuff around? so move is swap
<rpnx_> well yes, move construct is usually implemented with swap, although it depends on the object
<\Test_User> or you mean move is a swap with 0, so your earlier point of "can only be either moved or swapped" can be simplified to "can only be swapped"
<moon-child> just tell me this: is there any code which would actually break if you changed std::swap to just copy bits?
<gog> one .net CLR to bind them, one c# to rule them all
* gog leaves a fish on the floor
<rpnx_> moon-child, and not set the other object to 0?
<\Test_User> rpnx_: that's the intention here with move
* moon-child chomps fishy
<rpnx_> Yes, any object which has a non-trivial move or non-trivial swap.
<rpnx_> well
<rpnx_> for example
<geist> like, an atomic variable maybe
<rpnx_> an object that when moved, updates pointers to its new location
<geist> (to stay within the realm of simple looking integer like things)
<rpnx_> that would obviously break if you did a bit copy
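A concrete instance of that: an object holding a pointer into its own storage. Bitwise-copying it leaves the pointer aimed at the old object's buffer; a real move constructor has to re-aim it:
```cpp
#include <cstdio>
#include <cstring>

struct self_ref {
    char buf[16];
    char *cur;   // always points somewhere inside buf

    self_ref() : cur(buf) {}
    self_ref(self_ref &&other) noexcept {
        std::memcpy(buf, other.buf, sizeof(buf));
        cur = buf + (other.cur - other.buf);   // re-point into *this* object
    }
};

int main()
{
    self_ref a;
    self_ref broken;
    std::memcpy(&broken, &a, sizeof(a));            // "just copy bits"
    std::printf("%d\n", broken.cur == broken.buf);  // 0: still points at a.buf
}
```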
<moon-child> geist: you can't do an atomic exchange on multiple memory locations anyway
<moon-child> (unless you have transactions ... not on any consumer hardware though :\)
* \Test_User questions why anyone would use a pointer to itself
<geist> that's not this code's job to decide if its a good idea or not
<\Test_User> "where is the location of this data at x"
<geist> you could be inside some critical section that makes it largely moot
<rpnx_> Well, swap is a good strategy because it's general and always works.
<geist> \Test_User: TLS sections frequently do this
<rpnx_> the compiler optimizes the code so it ends up the same as doing it manually anyway
<geist> or a circular linked list that's empty
<moon-child> if you're in a critical section, then--exactly, it's moot :P
<geist> the default value of one of those kinda lists is literally a pointer to itself
<\Test_User> circular linked list, so you never reach your desired destination and go into an infinite loop?
<geist> moon-child: not if you want to maintain atomic semantics
<rpnx_> std::atomic I don't think can be atomically swapped
<geist> \Test_User: no you test to see if you're at the end. i use them almost exclusively
<geist> they're really nice low level data structure, even in C
<\Test_User> test to see if you're at the end of a circle makes no sense
<geist> and incidentally to come full circle (bada bum) it's exactly the data structure the VAX doubly linked list instructions implement
<geist> they're nice because there's no tests or branches on insert/remove
<geist> just a blind 4 pointer swap
<\Test_User> ah
<geist> basically the head of the list itself is a node in the list
<moon-child> prefer singly linked list much of the time, but then you don't have an actual backpointer within the object, so you have to supply the context explicitly
<rpnx_> that's swap with register and memory, memory to memory swap isn't implemented on most architectures
<geist> and thus to test that you're at the end you have to test that *next == head
<rpnx_> at least not atomically
<\Test_User> fair enough
<rpnx_> That's why atomics implement exchange but not swap
<geist> behold my old ass doubly linked list circular routines: https://github.com/littlekernel/lk/blob/master/top/include/lk/list.h
<bslsk05> ​github.com: lk/list.h at master · littlekernel/lk · GitHub
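The core of that convention, sketched with lk-flavored (but not copied) names: the head is itself a node, an empty list points at itself, and the end test is just a comparison back against the head:
```cpp
#include <cstddef>

struct list_node {
    list_node *prev;
    list_node *next;
};

static inline void list_initialize(list_node *head)
{
    head->prev = head->next = head;   // empty list: head points to itself
}

static inline bool list_is_empty(const list_node *head)
{
    return head->next == head;        // the "*next == head" test
}

static inline size_t list_length(const list_node *head)
{
    size_t n = 0;
    for (const list_node *cur = head->next; cur != head; cur = cur->next)
        n++;                          // stop when we wrap back around to head
    return n;
}
```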
<\Test_User> so, you would need a custom move function anyways, so wouldn't it be better to implement that by copying everything and updating references, and use that in the swap definition?
<rpnx_> Depends on the situation
<rpnx_> 99% of types can implement move using swap
<\Test_User> swap involves 2 moves
<\Test_User> well, 3, bc tmp
<geist> i think you two are arguing at different levels here
<\Test_User> probably
<rpnx_> no, swap doesn't
<rpnx_> because
<rpnx_> compilers optimize
<geist> as in, in C yes, everythig is bytewise, but in C++ swap can be more complicated, though it might not be
<geist> and that's kinda the end of the discussion, honestly
<geist> since swap can based on the type do something more complex than a bytewise
<geist> and thus you use it always because you dont know what the type is necessarily, especially in templated code
<rpnx_> The C++ compilers are optimal enough that the '3 move' only exists in theory for the most part.
<geist> in C none of this exists, so it's moot
<moon-child> the question was not whether it can, but whether it _should_
<\Test_User> geist: fair enough
<moon-child> in particular, whether the resultant pervasive structural complexity is worth it
<geist> except of course in C you *could* implement all of the same thing, but you just dont have the compiler to help you do it automatically at compile time
<geist> ie, you can build objects in C all you want, and implement move constructors or whatnot if you wanted, manually
SGautam has quit [Quit: Connection closed for inactivity]
<geist> but you'd have to manually decide what to do for every field, since there's no compiler assist to do variable things based on types
<rpnx_> yeah for writing complex ideas I find C extremely verbose
<geist> yah i find it comforting, but there's a level at which the complexity of manually doing things exceeds the complexity of using a bit of assist from the language to do some of this for you
<geist> where that quickly gets out of control is when you go down that slope a bit too far. so (to me at least) it's about restraint and using what is needed and nothing more
<geist> (C vs C++ in that case)
<geist> in general i'm doing more and more stuff i used to do in C in C++. for example writing a FAT filesystem driver right now in C++
<geist> but using pretty basic language bits because its still an embedded situation
<\Test_User> semi-random question: does C++ even have integer overflow detection yet or is it still stuck back with C on that one?
<geist> not without builtin compiler intrinsics and some wrapper objects
<geist> (there are good gcc/clang compiler intrinsics for this though you can use in C even)
<rpnx_> that's not C++ that's a vendor extension .-.
<\Test_User> yes I've heard about adding it to C, just was wondering if it was available by default on C++
<geist> right, but the idea is the intrinsics are intended to be wrapped in a C++ class or whatnot
<rpnx_> (same reason I do not use inline asm)
<geist> it's just in C you can't really do it
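The sort of thin wrapper geist means, sketched around gcc/clang's checked-arithmetic builtins (vendor extensions, not standard C++); note the builtin returns true on overflow, hence the inversion:
```cpp
#include <cstdint>

template <typename T>
[[nodiscard]] inline bool checked_add(T a, T b, T *out)
{
    return !__builtin_add_overflow(a, b, out);
}

bool example()
{
    uint8_t r;
    return checked_add<uint8_t>(200, 100, &r);   // false: 300 doesn't fit
}
```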
<geist> rpnx_: oh that reminds me, you should look at arm_acle.h
<geist> that is an offical wrapper for arm intrinsics
<geist> it has all sorts of builtins for things like accessing most common control registers
<geist> and since they're official, any ARM sanctioned compiler must support it
<rpnx_> I saw some of them, I would not introduce something like uint32x4 into my codebase though :)
<geist> sure, but that's just part of them
<geist> it also supports things like reading/writing from control registers
<geist> or wfi/wfe instructions, etc
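For instance, a sketch against clang's arm_acle.h (the system-register intrinsics like __arm_rsr64 are on the llvm side of the gcc/llvm gap geist notes a bit further down): hint instructions and system register access without hand-rolled inline asm:
```cpp
#include <arm_acle.h>
#include <stdint.h>

void wait_for_interrupt(void)
{
    __wfi();                          // emits a wfi hint instruction
}

uint64_t read_current_el(void)
{
    return __arm_rsr64("CurrentEL");  // system register read by name
}
```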
<rpnx_> I already did tests and found that std::array<std::uint32_t, 4> etc could be vectorized automatically by the compiler using cross platform code
<rpnx_> so I might look at their code but
<rpnx_> I don't think I would copy the style
<geist> no, again i'm not saying use the vector shit
<geist> i'm saying there are convenience routines so you dont have to write all of them all over again for the basic control register stuff
<rpnx_> I also disabled all standard libraries since I am not trying to compile my kernel for macos so
<rpnx_> not sure if I can use that header or not
* geist shrugs
<geist> okay, just tryin to help you out
<rpnx_> Maybe I can copy paste it into my project, I dunno
<rpnx_> sometimes these things have many layers of dependencies
<rpnx_> and my "klibc" doesn't implement much of the standard library right now
<geist> i just told you, it's a header around builtin intrinsics
<geist> there's no runtime required
<geist> all that aside, i just checked and sadly the GCC version is far behind the llvm version, so i can't really rely on all of the intrinsics. oh well.
<rpnx_> ah, I looked at it
<rpnx_> seems to only pull in stdint.h
<rpnx_> which I reimplemented
<geist> guess for a few types
<rpnx_> hum?
<geist> like uint32_t etc
<rpnx_> I think I got all of them,.. maybe, I didn't check
<rpnx_> wait no only cstdint is close to complete
<geist> OTOH most of these can also be implemented in inline asm anyway, which i already did for my project, so i haven't had a strong reason to switch
<rpnx_> clang/llvm makes implementing this kinda easy
<bslsk05> ​gitlab.com: Checking your Browser - GitLab
<kazinsal> I'm realizing that I may have bitten off more than I can chew with this stupid side osdev project, because part of the point of research unix is that the system can build itself, and I've never written a compiler before...
zaquest has quit [Remote host closed the connection]
<rpnx_> kazinsal, can you use llvm/clang?
<klange> kazinsal: I highly recommend https://craftinginterpreters.com/ - particularly the second half :)
<bslsk05> ​craftinginterpreters.com: Crafting Interpreters
<\Test_User> system first, compiler that uses the system later ;)
<geist> wow gitlab is annoying
zaquest has joined #osdev
<geist> it wants me to create an account before i can see anything
<kazinsal> yeah, I'll have to read a compiler book
<rpnx_> oh hang on
<rpnx_> I forgot to make it public
<kazinsal> llvm/clang won't work because my target machine is an 8086 ;)
<rpnx_> try again
<geist> ah yeah there we go
<\Test_User> 8086? and you're using what language?
<kazinsal> C
<geist> oh side note: one thing that might byte you if you're not careful with ARM: chars are unsigned
<kazinsal> I'm currently using OpenWatcom as the compiler/linker
<geist> one of those quirks of arm, and valid C/C++, just only a few arches chose unsigned chars, and arm is one of em
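The quirk in two lines: plain char is unsigned by default on arm, so sign tests on chars behave differently than on x86 (unless forced with the -fsigned-char flag mentioned just below):
```cpp
#include <cstdio>

int main()
{
    char c = '\xff';
    std::printf("%d\n", c < 0);   // 1 on x86 (signed char), 0 on arm (unsigned)
}
```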
<\Test_User> C assumes a flat memory map, 8086 uses segmentation
<\Test_User> how did you manage that?
<geist> negative. C does not assume a flat memory map
<kazinsal> 8086 C compilers have extensions to handle far pointers etc
<j`ey> geist: hehe byte you
<\Test_User> oh? interesting, I thought it did
<\Test_User> nice
<rpnx_> -fsigned-char :)
<geist> actually the fact that it doesn't is a large source of reasons why there are all these weird pointer type comparison stuff
<geist> since C came about in an era when segmentation and whatnot was a big deal
<geist> j`ey: heh, funny that was 100% unintentional
<j`ey> geist: :D
<geist> but *modern* C assumes flat memory because aint nobody got time for segmented memory
<kazinsal> eg. in openwatcom if you declare `__segment vidmem = 0xB000` and later do `unsigned short __based(vidmem)* ptr` you now have a far pointer whose accesses are made via a segment reference to 0xB000
<geist> except maybe on PICs or AVRs
<rpnx_> kazinsal, implement an llvm backend for 8086 then :) easier than starting a new compiler from scratch
<kazinsal> ahh, but I'm five nines of sure that I can't run llvm/clang on an 8086
<\Test_User> ah, probably where I got confused on it
<rpnx_> can you run tcc on that?
<geist> but clearly there were C compilres that could deal with segmentation. that's what you did in the DOS days
<rpnx_> even...
<geist> starting with some sort of dos cross compiler maybe your best bet
<geist> but frankly this is why i'm not so interested in anything in that era either
<geist> i sort of go back to flat 16 or 32 bit machines and not much older
<kazinsal> yeah, it's really just a fun little "what kind of bizarre constraints can I work in" project
<klange> I'll stick to wasm for that.
<GeDaMo> 64K should be enough for anyone :P
<geist> hence why i like fiddling with say VAX or 68k. they're old but they're still 32bit flat machines
<klange> Need to de-emscripten my wasm builds of my interpreter... for size reasons, and just because there ain't nothin' in there I haven't built myself somewhere...
<geist> off to watch some tv and then sleep
<rpnx_> someone ran doom in wasm
<rpnx_> I should also...
rpnx_ has quit [Quit: This computer has gone to sleep]
<kazinsal> yeah I'm going to finish my drink and this episode of Best of the Worst and then grab some z's
<klange> It's Friday, there should be a new Lower Decks soon if it's not already 'available'...
<kazinsal> oh yeah I should downl-- er, I mean, legally acquire that show and watch it
bauen1 has quit [Ping timeout: 255 seconds]
SGautam has joined #osdev
vancz has quit []
pie_ has quit []
pie_ has joined #osdev
vancz has joined #osdev
mrvn has joined #osdev
theobjectivedad has quit [Ping timeout: 244 seconds]
bauen1 has joined #osdev
Burgundy has joined #osdev
wolfshappen has quit [Ping timeout: 244 seconds]
wolfshappen has joined #osdev
<geist> kazinsal: heh yeah this BotW was great
<kazinsal> totally, the Silk 2 part sent me down a rabbit hole
<kazinsal> to flush my brain of horrible film I am now watching Dr. No
shikhin has joined #osdev
freakazoid333 has quit [Ping timeout: 244 seconds]
* mrvn suggest "Plan 9 from outer space"
<mrvn> +s
gog has quit [Ping timeout: 240 seconds]
isaacwoods has joined #osdev
nyah has joined #osdev
carbonfiber has joined #osdev
Burgundy has quit [Ping timeout: 244 seconds]
<MelMalik> whaaat
gxt_ is now known as gxt
gog has joined #osdev
heat has joined #osdev
farah has joined #osdev
Burgundy has joined #osdev
farah has quit [Quit: WeeChat 3.6]
farah has joined #osdev
farah has quit [Client Quit]
farah has joined #osdev
farah has quit [Client Quit]
farah has joined #osdev
farah has quit [Client Quit]
farah has joined #osdev
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
farah has quit [Ping timeout: 268 seconds]
opal has quit [Remote host closed the connection]
gxt has quit [Read error: Connection reset by peer]
opal has joined #osdev
gxt has joined #osdev
farah has joined #osdev
farah has quit [Ping timeout: 240 seconds]
SGautam has quit [Quit: Connection closed for inactivity]
alpha2023 has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
arch_angel has joined #osdev
arch_angel is now known as arch-angel
alpha2023 has joined #osdev
frkzoid has joined #osdev
kkd has joined #osdev
ids1024 has quit [*.net *.split]
kaichiuchi has quit [*.net *.split]
dminuoso has quit [*.net *.split]
jeaye has quit [*.net *.split]
WaxCPU has quit [*.net *.split]
kaichiuchi has joined #osdev
Andrew has joined #osdev
jeaye has joined #osdev
nyah has quit [Quit: leaving]
dminuoso has joined #osdev
nyah has joined #osdev
nj0rd_ has joined #osdev
[itchyjunk] has joined #osdev
scoobydoo_ has joined #osdev
scoobydoo has quit [Ping timeout: 252 seconds]
scoobydoo_ is now known as scoobydoo
bauen1 has quit [Ping timeout: 244 seconds]
FreeFull has joined #osdev
<heat> nerds
<heat> operating systems are cringeeeeeeeeeeeeeeee
<froggey> they are? fuck! I don't touch mine for a year and now they're cringe!
<GeDaMo> CringeOS! :P
xenos1984 has quit [Quit: Leaving.]
isaacwoods has quit [Quit: WeeChat 3.6]
<kof123> that's the point. os is the blood and guts, and then someone later puts make-up on
<sbalmos> ugly bags of mostly water
<mats1> that's what i said about your mom
bauen1 has joined #osdev
[itchyjunk] has quit [Ping timeout: 252 seconds]
[itchyjunk] has joined #osdev
saltd has joined #osdev
farah has joined #osdev
<geist> you all are cringe
<geist> there
dude12312414 has joined #osdev
bauen1 has quit [Ping timeout: 268 seconds]
bauen1 has joined #osdev
<heat> no u
<heat> BeOS? more like BecringeOS
<heat> roasted
* geist runs away crying
Matt|home has quit [Ping timeout: 255 seconds]
<heat> geist, let's imagine I have 8 1-byte registers at io ports [0, 7]; is it generally defined by architectures that you can write them all at once with an outq (or equivalent)?
<geist> i dont think so. i'm fairly certain it's a) not up to the x86 side of things and b) it's all whether or not the device can handle it
<geist> also there's no outq AFAIK
<heat> yeah probably not
<heat> I was looking at a pdf about SMM and they mentioned you could do outw %reg, $0xb2 and it would write to b2 and b3
<geist> this is a thing in MMIO registers too: most of the time they must be accessed with their native size, and if they work with say a smaller read/write it's because the hardware has extra logic to deal with it
<heat> and that seemed off
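The pattern that pdf describes, sketched; whether the high byte really lands in the 0xb3 scratch port is exactly what's in question here:
```cpp
#include <stdint.h>

static inline void outw(uint16_t port, uint16_t val)
{
    __asm__ volatile("outw %0, %1" : : "a"(val), "Nd"(port));
}

void smi_command(uint8_t cmd, uint8_t data)
{
    // supposedly: low byte -> 0xb2 (SMI command), high byte -> 0xb3
    outw(0xb2, (uint16_t)((data << 8) | cmd));
}
```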
<geist> yah and dont quote me i actually remember reading the opposite of that: io ports have the odd property that you can shove say a 32bit value in subsequent regs
<geist> ie, write 32 bits into io port a, and then another 32 bits in io port b
<geist> i dont know if hardware does that, or if that's a myth, etc. really would take reading an old 8086 or 80286 manual and see what it precisely does on io port bus cycles, since that defined the pattern
<heat> yeah
<geist> i have this suspicion that it's a) only defined up to 16 bits since that was the width of the 8086 and then it's up to hardware to interpret it, since it'd be a bus cycle like 'address X, 16 bits of data here'
<geist> otoh 8088 may be the key, since it'd have to split a 16 bit transfer across 2 bus cycles
<geist> does it increment the address on the second cycle or is it address then two data cycles?
dude12312414 has quit [Remote host closed the connection]
<geist> but, MMIO on pretty much all arches i know of works with incrementing addresses
<geist> 32 bit io accesses came after 80386 i'd assume (if they're present? are there 32bit io instructions?)
<mrvn> geist: you would get problems with write combining and caching in general.
<mrvn> but x86 has special IO instructions so that probably won't hold.
<heat> geist, yeah outl is valid
<heat> thats how PCI works
<geist> ah looks like 8088/8086 only had bytewise io ports, and could only address up to io port 255
<geist> the 16bit io port extension must have come later, probably 286
pretty_dumm_guy has joined #osdev
<geist> which lines up with IBM PC AT and when more devices appeared above it, and 16 bit ISA bus, etc
<mrvn> heat: there is nothing that says io ports have to write to memory on the card. It could just be a set of registers and the port address selects the enable line for the right register. The upper bits of the data bus wouldn't be routed at all.
<geist> oh wait, no, it could use 16bit io ports, just only 8 bit transfers
<heat> mrvn, sure there isnt
<heat> never said it would write to memory
<geist> mrvn: yeah we're explicitly talking about io ports, which can behave a it differently
<mrvn> heat: but the spilling over to the next register is something you get when it's actually just memory backing the ports.
farah has quit [Ping timeout: 244 seconds]
<heat> yes but the example I gave is explicitly not memory
<mrvn> heat: yeah, might work if the card has extra logic or might not work. It's something the card has to specify. The architecture can work both ways all at the same time.
<heat> the chipset says no but this random pdf says yes
saltd has quit [Remote host closed the connection]
<heat> I don't know if its based on experimentation or if they don't quite understand IO ports
<mrvn> assume it doesn't and you are golden.
netbsduser has joined #osdev
<geist> the 8086/8088 manual is confusing WRT io addressing and the hardware, but it really seems to indicate that no, io addresses should be laid out in a non-overlapping way
<geist> it talks about how 16bit io addresses should be even alignment, etc
<geist> so that it can just toss out a transfer on the bus, and that A0 is used to signify the low or high half of the 16bit bus, etc
<geist> *shrug*
<geist> it also pretty explicitly talks about how the 8088 needs an address latcher and doesn't get a new A bus cycle when reading/writing 16 bit values
<geist> so you need external circuitry to do the bottom/top half cycle
<geist> the low A lines are shared on 8088 as well (AD0-AD7) so you have to latch and hold the address anyway
<geist> there's a standard 8xxx helper chip that does all of this for you
farah has joined #osdev
<geist> unclear what it does in an io transfer
frkzoid is now known as freakazoid333
<heat> yeah but that's all old data no? I guess the internal chipset doesn't look anything like that anymore?
gareppa has joined #osdev
farah is now known as dococ
saltd has joined #osdev
GeDaMo has quit [Quit: Physics -> Chemistry -> Biology -> Intelligence -> ???]
dococ has quit [Quit: WeeChat 3.6]
dococ has joined #osdev
dococ is now known as wut
wut is now known as wutt
wutt has quit [Client Quit]
dococ has joined #osdev
saltd has quit [Remote host closed the connection]
dococ is now known as amos
amos is now known as jarvis
jarvis is now known as jafar
jafar is now known as scar
scar is now known as a-khan
<mrvn> a-khan: please stop changing nicks
a-khan is now known as farah
farah is now known as saffron
saffron has quit [Quit: WeeChat 3.6]
saltd has joined #osdev
dococ has joined #osdev
dococ is now known as doodool
doodool has quit [Client Quit]
dococ has joined #osdev
gareppa has quit [Quit: Leaving]
dococ has quit [Changing host]
dococ has joined #osdev
Burgundy has quit [Remote host closed the connection]
<heat> lol
Burgundy has joined #osdev
carbonfiber has quit [Quit: Connection closed for inactivity]
netbsduser has quit [Remote host closed the connection]
saltd has quit [Remote host closed the connection]
saltd has joined #osdev
saltd has quit [Remote host closed the connection]
GreaseMonkey has joined #osdev
leah_ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
leah_ has joined #osdev
saltd has joined #osdev
dococ has quit [Quit: WeeChat 3.6]
ggherdov has quit [Ping timeout: 268 seconds]
ggherdov has joined #osdev
<geist> yah at least irccloud flattens those
<geist> i used to poopoo irc clients that did it, but now that i've been on irccloud for a year or so i'm pretty happy with that sort of noise reduction
<kazinsal> same, it's quite handy
StoaPhil has quit [Quit: WeeChat 3.5]
epony has quit [Ping timeout: 252 seconds]
FreeFull has quit []
SpikeHeron has joined #osdev