<muurkha>
all the systems I'm even vaguely familiar with (Unix fs, B5000 jobs, KeyKOS, Postgres) rely on ownership hierarchies or at least DAGs for disk storage, and I'm guessing that this is because reading your entire disk in order to do a major GC is just infeasibly expensive
jellydonut has quit [Read error: Connection reset by peer]
adjtm has quit [Remote host closed the connection]
adjtm has joined #riscv
DoubleJ2 has joined #riscv
DoubleJ2 is now known as DoubleJ
DoubleJ has quit [Ping timeout: 245 seconds]
dilfridge has quit [Remote host closed the connection]
prabhakarlad has quit [Ping timeout: 250 seconds]
dilfridge has joined #riscv
Sofia has quit [Ping timeout: 240 seconds]
foton has quit [Quit: %Bye, bye, ...%]
foton has joined #riscv
Sofia has joined #riscv
vagrantc has quit [Quit: leaving]
prabhakarlad has joined #riscv
jacklsw has joined #riscv
seninha has quit [Quit: Leaving]
geranim0 has quit [Remote host closed the connection]
handsome_feng has joined #riscv
Sofia has quit [Ping timeout: 240 seconds]
<sorear>
not every system with a bus has a bus error signal, and you can't assume that accessing nonexistent memory will give a bus error
<muurkha>
yeah, I definitely wouldn't want to do that, since if malicious code can provoke an access to nonexistent memory, it can probably provoke an unauthorized access to existing memory too
Sofia has joined #riscv
<dh`>
by "pessimistic check" I mean that when a transaction reads or writes a data item it checks the timestamps then and there for inconsistency rather than logging it and sorting it out at commit time
<dh`>
that system had spinlocks as latches for groups of data items (thus, not trying to automatically infer locking granularity) and used timestamps for checking consistency across groups
<dh`>
hmm. spinlocks? I think it's old enough it was disabling interrupts instead
<dh`>
was a long time ago
<dh`>
anyway it was related to but not the same as the transaction system used in the VINO kernel
<muurkha>
oh, that makes sense
<muurkha>
thanks!
freakazoid343 has joined #riscv
<muurkha>
this is probably a good cue for me to read TR-30-94
freakazoid12345 has quit [Ping timeout: 256 seconds]
<dh`>
the OSDI paper will probably get you further, or Chris Small's thesis
<muurkha>
thanks!
<muurkha>
probably hacking on a prototype will be a better use of time than reading hundred-page dissertations pretty soon though
<dh`>
well, yeah
<dh`>
write down your own ideas before you start reading the literature
<muurkha>
not just because a month of labwork can save you an afternoon in the library, as they say
<dh`>
otherwise you will lose critical bits of them
<dh`>
also a lot of it is not about the transaction system itself, but more that it exists and is used for rolling things back
<muurkha>
but also because a few frustrating nights trying to get a botched design to work are sometimes a necessary precondition to learning from other people's experiences
<dh`>
not losing your own ideas in an onslaught of other people's normative ideas about how to do things is also important
<dh`>
maybe more important
<muurkha>
oh hey, there's a note in here about Rio Vista
<dh`>
but anyway the transaction system in vino has three main points: (a) it exists, (b) the overhead is low, and (c) it's based on undo logging
<dh`>
oh and (d) it's in memory, as opposed to say ErOS
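The undo-logging design dh` lists as point (c) can be put down in a few lines. This is not VINO's code, just a hedged toy sketch: old values are saved to an in-memory log before each in-place write, abort replays the log in reverse, and commit simply discards it (the updates are already in place).

```python
class UndoTxn:
    """In-place updates with an undo log: abort restores old values in reverse.
    Toy sketch; assumes None is never a legitimately stored value."""
    def __init__(self, store):
        self.store = store   # shared in-memory data, mutated directly
        self.undo = []       # (key, old_value) pairs, oldest first

    def write(self, key, value):
        self.undo.append((key, self.store.get(key)))  # log the old value first
        self.store[key] = value                       # then update in place

    def abort(self):
        for key, old in reversed(self.undo):          # replay backward
            if old is None:
                del self.store[key]                   # key didn't exist before
            else:
                self.store[key] = old
        self.undo.clear()

    def commit(self):
        self.undo.clear()    # updates already in place; just drop the log
```

The commit path is why overhead can stay low: committing is free, and all the copying cost lands on writes and aborts.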
<muurkha>
yeah. undo logging (rather than write-back logging, if that's what it's called) sort of requires pessimistic synchronization
<muurkha>
I think
<dh`>
depends on what the data is
<dh`>
if there aren't pointers and memory allocation involved, it's ok to read trash optimistically as long as you can tell before you commit
<muurkha>
maybe I'm using the term "pessimistic synchronization" wrong?
<dh`>
e.g. in an ordinary relational database it's fine
<dh`>
I've always interpreted that as: pessimistic means you check for and detect conflicts immediately (and, generally, bail right away)
<dh`>
and optimistic means you collect info and sort it out when you commit (if you commit)
<muurkha>
consider concurrently running a transaction SELECT SUM(X) FROM Y; and another one that starts by updating a row with UPDATE Y SET X = 10000 WHERE Z = 42;
<dh`>
but "optimistic concurrency control" as a topic usually also includes everything that isn't lock-based whether or not it uses optimistic or pessimistic checking
<muurkha>
if the second transaction has overwritten the old value of X in row 42 before the first one gets to reading it
<muurkha>
there's no way for the SELECT to complete and produce the correct result unless it roots around in the other transaction's undo log (does anybody do this?)
<dh`>
assuming the first transaction is supposed to be before the second (that isn't necessarily nailed down)
<muurkha>
then there's no way for the SELECT to complete and print "3482"
<muurkha>
unless the second transaction commits first
<dh`>
it still won't print 3482 :-)
<muurkha>
or aborts. suppose it aborts
<dh`>
(if we assume the values are unsigned anyway)
<muurkha>
well, if the old value of X was 7, it might
<muurkha>
even assuming unsigned
<dh`>
you said it was 42
<muurkha>
no, that's Z
Sofia has quit [Ping timeout: 240 seconds]
<dh`>
oh
<dh`>
oops
<muurkha>
anyway you need some way to read the old value in a case like that, or you need to block
<dh`>
no, you just need to detect and abort
<muurkha>
abort the SELECT SUM(X)?
<muurkha>
I mean you could abort it and retry, that would be okay I guess
<dh`>
basically with a traditional pessimistic lock-based system whichever goes first will get the lock on that row and the other will sit and wait until it commits or aborts
<muurkha>
and it's true you could block after you finish doing the sum
<muurkha>
to see if the other transaction commits and makes the sum valid
<dh`>
with a timestamp-ordering system, both will just chug through and they'll note the timestamps on each row
<muurkha>
but if you want the read query to finish promptly instead of being blocked, somehow it needs to get access to the old value of X (which would be restored after the abort of the update)
<dh`>
and in the case of the conflicting row the timestamps will be inconsistent
<dh`>
and you can either pessimistically check it when you get there and abort right away, or optimistically log it and chug ahead and then crosscheck it all for consistency at commit time
<dh`>
and if that fails, abort
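Both disciplines dh` describes can be shown against the same timestamp-ordering core. A toy sketch (invented names, not any real system's API): each item carries read/write timestamps; pessimistic mode checks then and there and bails right away, optimistic mode logs what it saw and crosschecks at commit time.

```python
class Conflict(Exception):
    pass

class Item:
    def __init__(self, value):
        self.value, self.rts, self.wts = value, 0, 0  # read/write timestamps

class Txn:
    """Timestamp ordering with a choice of when the check happens."""
    def __init__(self, ts, pessimistic):
        self.ts, self.pessimistic, self.reads = ts, pessimistic, []

    def read(self, item):
        if self.pessimistic and item.wts > self.ts:
            raise Conflict()             # check then and there, abort right away
        self.reads.append((item, item.wts))  # remember the version we saw
        item.rts = max(item.rts, self.ts)
        return item.value

    def write(self, item, value):
        if item.rts > self.ts or item.wts > self.ts:
            raise Conflict()             # a younger txn already saw/overwrote it
        item.value, item.wts = value, self.ts

    def commit(self):
        if not self.pessimistic:         # sort it all out at commit time
            for item, seen_wts in self.reads:
                if item.wts != seen_wts: # someone overwrote what we read
                    raise Conflict()
```

Either way, a transaction that loses the race aborts; the modes differ only in how much work gets thrown away when it does.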
<muurkha>
but aborting the SELECT SUM(X) seems like the wrong thing to do
<dh`>
and if you read a value from a transaction that hasn't committed yet, you have to wait for it to commit (and abort if it aborts)
<muurkha>
retrying it might be okay
<dh`>
the keyword for that is "dependent transactions"
<muurkha>
right
<muurkha>
but that's all, from my point of view, pessimistic
<dh`>
in a multiversion system you can avoid reading uncommitted data by picking the right version to read from given the timestamp of your transaction
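"Picking the right version" in a multiversion store reduces to: the newest committed version whose timestamp is at or before the reader's. A minimal sketch, assuming versions are kept as (commit_ts, value) pairs sorted by commit time:

```python
def snapshot_read(versions, txn_ts):
    """versions: list of (commit_ts, value), sorted ascending by commit_ts.
    Return the newest value committed at or before txn_ts, so the reader
    never sees uncommitted or too-new data."""
    visible = [v for ts, v in versions if ts <= txn_ts]
    if not visible:
        raise LookupError("no version visible at this timestamp")
    return visible[-1]
```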
<muurkha>
because at the point that you're having to block (or abort!) a transaction because it *might* conflict with a transaction that might or might not commit...
<dh`>
well, that depends on how you do the checks
<muurkha>
that sounds like a pretty pessimistic-concurrency-control kind of thing to do to me
<dh`>
if you just chug ahead and log the timestamps, you don't check anything until you go to commit
<dh`>
but the other wrinkle is, multiversion is very expensive for in-memory stuff like we're talking about when you have pointers
<muurkha>
note that in a multiversion system you also don't have the torn-read-pointer problem either
<muurkha>
(I think!)
<dh`>
not just because you have to allocate and copy a lot
<dh`>
but because pointer aliasing moves you straight to hell
<muurkha>
what happens with pointer aliasing?
<dh`>
you can't point directly at data, everything becomes doubly-indirect
<dh`>
suppose there's an object and you've got a pointer to it and I've got a pointer to it
<dh`>
you want to update it, fine, you copy it
<dh`>
I want to update it, fine, I copy it
<dh`>
now we have a problem
<muurkha>
no, that's a solution, that's the I in ACID
<muurkha>
isolation
<muurkha>
you're only supposed to see your reads, not the other transaction's reads
<muurkha>
you have a problem only if both transactions are allowed to commit
<dh`>
it's a problem because when you commit I can't see your version
<muurkha>
when I commit you get aborted
<dh`>
how do I know?
<muurkha>
either immediately or when you try to commit
<muurkha>
because when you copied it into your own transaction buffer, you logged which version you read, and that version is no longer the current version
<dh`>
or, more accurately: how do we know we're accessing the same object?
<muurkha>
it has the same object ID
<dh`>
so I'm looking up the object by oid and not by pointer?
<muurkha>
so when you go to commit, that OID is in your transaction's read-set, and the timestamp (or its moral equivalent) of your read has been superseded
<dh`>
sorry, I shouldn't be trying to work through this this way
<muurkha>
right, the OID in the reference you were following only got turned into a physical pointer when you dereferenced an OOP
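The scheme muurkha is describing — references are OIDs, opening an object for write copies it into the transaction's buffer and records which version was read, and commit revalidates the read-set — can be made concrete. All names here are hypothetical; it's one of the "million ways to set up any such system" (and it assumes object values are dicts, so they can be shallow-copied):

```python
class Aborted(Exception):
    pass

class Store:
    def __init__(self):
        self.objects = {}    # oid -> (version, value): committed state

class OTxn:
    def __init__(self, store):
        self.store = store
        self.read_set = {}   # oid -> version seen when we dereferenced it
        self.write_log = {}  # oid -> private copy we're updating

    def open_read(self, oid):
        version, value = self.store.objects[oid]
        self.read_set[oid] = version          # logged once, at dereference time
        return value

    def open_write(self, oid):
        if oid not in self.write_log:
            version, value = self.store.objects[oid]
            self.read_set[oid] = version
            self.write_log[oid] = dict(value)  # copy into the transaction buffer
        return self.write_log[oid]

    def commit(self):
        for oid, seen in self.read_set.items():
            if self.store.objects[oid][0] != seen:  # version superseded
                raise Aborted()
        for oid, copy in self.write_log.items():
            old_version, _ = self.store.objects[oid]
            self.store.objects[oid] = (old_version + 1, copy)
```

This is exactly the two-copies scenario from the dialogue: both transactions copy the same object, the first commit bumps the version, and the second commit sees its recorded version superseded and aborts.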
<dh`>
there's a million ways to set up any such system and we're probably both using unstated assumptions and your set and mine are likely incompatible
alMalsamo has joined #riscv
<muurkha>
yeah, I should probably write down precisely what I'm thinking will work
<muurkha>
how many lines of coherent description would you be willing to read? :)
<dh`>
dunno
<dh`>
if you think it'll work once you finish writing all the details down, it probably will
<dh`>
(you don't appear to be the kind of person who routinely gets stuff wrong)
<muurkha>
my thought is that I can directly point at immutable data, data in my own transaction's update log, and committed data that I'm reading but not updating
<muurkha>
oh, I do, as I'm reminded every time I debug code I wrote
<dh`>
oh, we all do in that sense, but that's not the same as producing plans with major gaps you don't notice
<muurkha>
oh, and I can directly point at data I allocated within the transaction
<muurkha>
well, I probably notice *eventually* :)
Sofia has joined #riscv
<dh`>
anyway, when I said <dh`> everything becomes doubly-indirect
<dh`>
it sounds like you've already bought into that
<muurkha>
yes
jmdaemon has quit [Ping timeout: 272 seconds]
<muurkha>
but I'm hoping that I can make most of the double indirection a one-time kind of thing, when I "open" an object, rather than on every field or array access
<muurkha>
of those four, data I've copied into the update log (because I'm writing it) and committed mutable data incur the expense of logging the read. the reference to immutable data still needs to be recorded somewhere so the system doesn't swap it out but it doesn't need to be revalidated at commit time
<muurkha>
and I'm hoping I can even pass a direct pointer to an opened object from one function to another within the transaction
jmdaemon has joined #riscv
<dh`>
when I did the system I was talking about before, I was writing C with manual undo annotations, so I did not want any more crap in the handwritten code path than necessary
<muurkha>
the tricky case I'm not sure how to handle yet is when I open an object for writing that is already open for reading, because any reading pointers then need to point to the copy in the transaction's write log instead of the committed copy
<dh`>
so stuff like double indirection or fishing updated objects out of the transaction history were not on the table
<dh`>
how explicit is the programming model you're intending?
<dh`>
(I gather that you're compiling this)
<muurkha>
right, my plan is to compile from something like TypeScript (but with CIL-like "value types") to a compact bytecode, then interpret the bytecode
<muurkha>
because 384KiB of RAM plus 1 MiB of Flash is not very much space for a graphical IDE, but 48MHz of ARM speed is way more CPU than you need for that
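The RAM-for-CPU trade described here is the classic bytecode one: the program lives as a compact byte string and a small loop dispatches on it. A toy stack-machine sketch with invented opcodes, nothing like the actual bytecode being designed:

```python
# Hypothetical opcodes: PUSH n takes 2 bytes; ADD, MUL, HALT take 1 each.
PUSH, ADD, MUL, HALT = 0, 1, 2, 3

def run(code):
    """Interpret a compact bytecode string on an operand stack."""
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(code[pc]); pc += 1   # operand is the next byte
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == HALT:
            return stack.pop()
```

Nine bytes encode (2 + 3) * 4; the dispatch loop is where the roughly 10x interpretation overhead mentioned later comes from.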
<dh`>
do like the original macos and handcode the graphics primitives in asm and it'll be fine
<muurkha>
and running every key event handler in a transaction, and background tasks in sequences of transactions
<muurkha>
yeah, but I have like 100× the CPU speed of the original macos and only 2× the screen space
<dh`>
but anyway one way to deal with that problem is to prohibit writing to an object you originally opened readonly
<muurkha>
that breaks composability pretty badly, but it might be a thing to try
<dh`>
transactions aren't real composable anyway, unfortunately
<dh`>
they are in one sense, but not in the usual PL senses
<muurkha>
code inside transactions can be
<dh`>
not without a bunch of reasoning about nested transactions that as of about ten years ago at least nobody has sat down and worked out properly
<muurkha>
yeah, I don't plan to support nested transactions, probably
<dh`>
my thing was all about nested transactions
<muurkha>
but one of the great draws of optimistically synchronized transactions is that you get better composability
<dh`>
there's a nice relationship between nested transactions and abstraction layers
<dh`>
but that's the stuff nobody's AFAIK worked out
<dh`>
I meant to at one point
<dh`>
might yet still sometime
<muurkha>
because priority inversion and deadly embraces just don't exist
<dh`>
deadlocks don't exist in ~any transaction system, you have to be able to cope
<muurkha>
I really enjoyed the "Composable Memory Transactions" paper
<dh`>
lock-based transaction systems have deadlock detectors
<muurkha>
yeah, but if you have a deadlock detector, you have to retry the transactions
<dh`>
you have to retry some transactions sometimes, it's inevitable
<muurkha>
well, sufficiently pessimistic synchronization allows you to not have to do that, ever
<dh`>
the standard OS coding methods of writing code that doesn't deadlock rely on knowing the workloads ahead of time and structuring them to avoid lock order inversions
<muurkha>
right
<dh`>
and that's often highly nontrivial
jmdaemon has joined #riscv
<muurkha>
and I've done that in database-backed systems too
<dh`>
I'm pretty sure the only system pessimistic enough to avoid ever needing to abort anything is one biglock for the entire database
<muurkha>
that works. or you can lock tables in alphabetical order
<muurkha>
but aborts due to actual errors, that won't go away on retry, are okay in my book
<dh`>
doesn't help if some bozo does begin; update Z where foo; update A where foo; commit
<muurkha>
right, we solved that problem by not being that bozo ;)
<muurkha>
(at least not too often)
<dh`>
right
<muurkha>
the DBMS can't prevent it, but our huge loogie of Perl could
<muurkha>
since that's where all the queries were
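The discipline described — possible precisely because all queries went through one application layer — is to acquire locks in a canonical (here alphabetical) order no matter what order the statements name the tables in. A sketch under that assumption:

```python
import threading

# One lock per table; in the anecdote this lived in the application layer.
table_locks = {t: threading.Lock() for t in ("A", "Y", "Z")}

def with_tables(tables, body):
    """Acquire table locks in sorted (alphabetical) order, whatever order
    the caller named them in, so no two callers can deadlock on them."""
    ordered = sorted(set(tables))
    for t in ordered:
        table_locks[t].acquire()
    try:
        return body()
    finally:
        for t in reversed(ordered):
            table_locks[t].release()
```

The bozo's "update Z, then update A" is harmless here: the wrapper locks A before Z regardless.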
<dh`>
right
<muurkha>
what were you saying about nested transactions and abstracted layers?
<muurkha>
*abstraction
<dh`>
anyway the other thing I meant to say above and got distracted was
riff-IRC has quit [Remote host closed the connection]
<dh`>
the reason vino's transactions were/are efficient (10-15% overhead rather than 10-15x like most memory transactions) was never entirely clear
<dh`>
but the best I've ever been able to figure is that it's about not trying to have the system figure out the locking granularity
<muurkha>
I guess Small never published the code
<dh`>
because that seems to be a feature of ~every garden-variety memory transaction scheme, whether hardware or software based
<dh`>
oh, we did
<dh`>
not sure it's still posted anywhere though
<dh`>
been 25 years
<muurkha>
oh really? I didn't know that!
riff-IRC has joined #riscv
<dh`>
yeah there were two vino releases you could download and install
<dh`>
and some work on a third that petered out
<muurkha>
so in theory you could analyze them
<muurkha>
that's too bad
<muurkha>
10× CPU overhead is kind of the price of entry when my starting point is "everything will be interpreted from compact bytecode to save RAM"
<dh`>
well, it was a research system, limited manpower and funding for things that don't lead to papers
<dh`>
of which there's a ton in OSes
<dh`>
yeah, but you don't need another 10x transaction overhead on top of that
<muurkha>
right, but it wouldn't be on top of that
<dh`>
it might or might not be
<dh`>
AFAICR in things like STM Haskell it tends to be
<muurkha>
I'm hoping I can define the bytecode in a way that admits reasonably efficient compilation though, both for bootstrapping and for hot spots
<dh`>
anyway I have the vino code if you want parts of it (tip: you don't)
<muurkha>
haha
<muurkha>
does it build?
<dh`>
I kept the periodic test build going for years after the project shut down
<muurkha>
nice!
<dh`>
eventually gave up after I moved to a 64-bit machine though since it would only build in a 32-bit chroot
<muurkha>
if I do end up compiling bytecode then the transaction overhead becomes a big deal
<dh`>
it uses gcc 2.7 and it's very tied to it
<muurkha>
did it get a reasonable open-source license?
<dh`>
and you know how gcc 2.7 is from the late 90s and the alpha is from the early 90s and theoretically gcc ran on alpha?
<dh`>
well.
<muurkha>
yeah
<dh`>
gcc 2.7 does not run on amd64.
<muurkha>
nope
<dh`>
you can fill in the configury easily enough
<dh`>
but it's not remotely 64-bit clean
<dh`>
I tried, I even fixed a few glaring problems, but it was going to take real work
<muurkha>
relatedly I think someone got Self to compile again with modern compilers recently
<dh`>
anyway I think if you make a 32-bit chroot it should still build
<dh`>
since it's not like anyone's been committing to it
<muurkha>
heh
<dh`>
and it will probably boot in qemu
<muurkha>
did it get a reasonable open-source license or was it one of those "non-commercial use only" academic licenses?
<dh`>
it's bsd 3-clause
<muurkha>
fabulous, sign me up
<muurkha>
where do I ftp?
<dh`>
you don't
<dh`>
the department shut down everything resembling hosting infrastructure some years back
<muurkha>
oh really?
<dh`>
so I'm pretty sure it's no longer available
<muurkha>
that's a bummer
<dh`>
I am not sure I saved copies of the release downloads
<dh`>
(myself)
<dh`>
they are archived somewhere but ... not anywhere useful
<muurkha>
let me know if you find a copy that might build!
<dh`>
I have the source control trees but that's also a mess
<dh`>
you know how cvs doesn't support rename
<dh`>
we would periodically tar up a copy of the repo for archival and then rearrange the directories inside the repo
<muurkha>
I sure do
<dh`>
because that was all you can do
<dh`>
but it makes reconstructing the history... interesting
<muurkha>
alternatively you could delete from one place and create elsewhere
<muurkha>
but neither choice was acceptably good
<dh`>
and lose the ability to follow history
<dh`>
right
<muurkha>
exactly
<dh`>
we decided to keep history for individual files in favor of being able to check out old trees
<dh`>
(without going to the archival copy)
<dh`>
I eventually loaded it into a mercurial tree and I have that somewhere but it doesn't have most of the history
<muurkha>
still should work for checking out the latest copy
<dh`>
but the other thing is that the latest is from well after the last release
<muurkha>
so everything is half broken?
<dh`>
I kept poking at it intermittently and trying to improve things
<dh`>
it should build and at least boot
<muurkha>
sounds like it was kind of a discouraging experience
<muurkha>
want to pass me a copy of the CVS tarball?
<dh`>
being involved in the OS research community in the early 2000s was a discouraging experience overall
<muurkha>
how come?
<muurkha>
because Pike's jeremiad was still true outside of Google?
<dh`>
you remember how in about 1998-1999 or so it really looked like microsoft was going to be taking over and everything else was going to die?
<muurkha>
yeah, Microsoft or Linux
<muurkha>
and that kind of did happen, except it was Linux and now Microsoft
<muurkha>
*not
<muurkha>
Pike sort of called out Microsoft as a bright spot in systems research in his rant
<dh`>
what looked like happening in 1998 fortunately did not
<muurkha>
like, it sucked that you couldn't tweak their systems but at least they were innovating
BOKALDO has joined #riscv
<muurkha>
and here I am in 02022 running X-Windows, a terminal emulator, Netscape, and mplayer
<dh`>
microsoft stopped being able to wish away their fundamental security problems and various powerful forces gathered behind linux
<muurkha>
I think the bigger thing that happened in systems software research was Google though
<muurkha>
there was some pretty interesting stuff outside Google
<dh`>
but in 1998 or 1999 or so it really looked like it was going to be all NT and nothing else, because no matter what microsoft fucked up the suits were totally committed
<muurkha>
that's how I feel about cloud now
<dh`>
and you got ridiculous directives about NT coming down from management
<dh`>
anyway lots of people bailed on the community in those years and lots more in the following years as the consequences played out
<dh`>
(including me)
<muurkha>
companies that didn't do that ended up being the ones that won: Yahoo, Google, Apple, Amazon (surprisingly), Microsoft (even more surprisingly), and eventually Fecebutt
<muurkha>
but GFS, V8, Chrome, NativeClient, MapReduce, Golang, and all but the earliest stages of Android came out of Google
<muurkha>
and that's what I think of when I read Pike's rant from that time about systems software (including OSes but not limited to)
<dh`>
...and the only thing on that list that's anything like traditional systems research is GFS
<muurkha>
Chubby, (NotReally)OpenTitan, BigTable, Google App Engine, ...
<dh`>
I mean, it's almost another 20 years later now and things are different
<muurkha>
no, I think all of that except parts of Chrome is traditional systems research, just not traditional OS research
<dh`>
depends what you mean by "systems"
<muurkha>
Pike specified
<dh`>
the academic field didn't rebrand itself "systems" instead of "operating systems" until this period
<dh`>
"systems" also used to be (and still is) a term that also includes compilers, databases, networking, architecture, and sometimes graphics
<muurkha>
he also gives as examples "lots of papers in file systems, performance, security, Web caching, etc."
<muurkha>
right, that's what he was talking about
<dh`>
as opposed to theory or AI
<muurkha>
oh, not that
<muurkha>
he didn't mean systems as in systems vs. theory
<muurkha>
he meant "the things that connect programs together", SOSP and HotOS kind of stuff
<dh`>
which doesn't include networking and languages
<dh`>
(even today)
<muurkha>
well, up until about 01980 it was pretty common for every new computer to come with its own networking and languages
<muurkha>
maybe 01990
<dh`>
own networking? mostly not
<muurkha>
Tandem, Apollo, DEC, and IBM all had their own networking
<dh`>
languages, that depended on which market you were looking at
<muurkha>
and of course their own languages
<muurkha>
Burroughs and Data General, I don't know if they had their own networking, but certainly their own languages
<muurkha>
HP and Burroughs had different languages for each of their dozen or so lines of machines
<muurkha>
Sperry Rand I have no idea
<dh`>
also it depends what you mean by networking
<muurkha>
Tandem had their own clustering stuff between the nodes in a cluster, Apollo had networked filesystems and remote login, DEC had DECNET, IBM had SNA
<muurkha>
in all of those cases it embraced all of those layers
<muurkha>
in other cases maybe less so
<dh`>
I don't know
<muurkha>
MIT's CHAOSNet also kind of did the same thing
<dh`>
in the 80s I was only plugged into what was going on in the "microcomputer" segment
<muurkha>
I was mostly plugged into elementary school
<dh`>
and by 1990 that was all either novell (ipx/spx), microsoft (netbios and/or netbeui) or tcp/ip
<dh`>
and maybe one other player I'm forgetting?
<dh`>
and by 1990 or so it was also pretty clear that only tcp/ip had any future
<muurkha>
LANTastic
<muurkha>
also AppleTalk/LocalTalk
handsome_feng has quit [Quit: Connection closed for inactivity]
<dh`>
all the other stuff was for company LANs and had no ability to go beyond that
<dh`>
netbeui was broadcast-only!
<muurkha>
yeah
<dh`>
appletalk was dead by 1990
<dh`>
hmm
<muurkha>
I don't think OSI was dead yet, and ATM was just being born
<dh`>
maybe not quite
<dh`>
ATM is a hardware thing, not comparable
<muurkha>
appletalk didn't die until about 02000
<muurkha>
ATM supports internetworking though. same with frame relay and SONET
<muurkha>
it's not just a physical-layer thing like ethernet, it has this whole hairy protocol stack on top of it
<dh`>
all of that people ran tcp/ip over though
<muurkha>
a deeply weird one from a tcp/ip perspective
<muurkha>
eventually yeah
<muurkha>
but it wasn't until after 02000 that the bellheads came to terms with that
<dh`>
true
<muurkha>
telecom was a different world from systems research though
<dh`>
and even then networking research had its own community
<muurkha>
tcp/ip research did yeah
<muurkha>
but languages and networking were pretty coupled to operating systems from their origins in the 01950s until the 01980s or so
<dh`>
anyway I won't deny that the pace of change in the industry has decreased
<muurkha>
not sure, this whole cloud thing I keep shaking my fist at seems to be causing a lot of change pretty fast
<muurkha>
wasm, vulkan, and risc-v are three new interesting platforms
<dh`>
it's been doing it for what, ~15 years now
<dh`>
things do change, but if you look at now and 2012
<muurkha>
yeah, I'd say 16
<dh`>
it's not like 1992 and 1982
<muurkha>
though a couple of my friends were early employees at a cloud computing startup in 01968
quantum_ has joined #riscv
<dh`>
except nobody called it that then, and people have been trying to make utility computing of some kind work at least that long
<dh`>
mostly without success
<muurkha>
no, they didn't call it that of course ;)
<muurkha>
maybe one of these days Ethereum or its successor will turn out to be another interesting platform
<dh`>
that I doubt
<muurkha>
well, it definitely has some interesting capabilities in theory
<muurkha>
they just don't work
<dh`>
the whole blockchain concept is fundamentally flawed
EchelonX has quit [Ping timeout: 246 seconds]
<dh`>
but let's not get into that, we're already vastly offtopic
<muurkha>
well, of course it is; that's why it took us almost 20 years to find it, because it wasn't what we were looking for
<muurkha>
but the flaw may turn out not to be fatal, we'll see
<muurkha>
anyway there are a lot of interesting new platforms appearing. iPhone clones are now how most people use computers
<dh`>
anyway if you really want the vino source, remind me tomorrow and I'll go rake it up
<muurkha>
I do! I'll try to remember
<dh`>
I also have the source for the later thing I built but it isn't very functional
<muurkha>
I haven't made it through the wasm spec yet but it looks like maybe it doesn't really support coroutines, which is pretty surprising
<dh`>
I thought it isn't threaded at all
<muurkha>
as far as I can tell, that's true
<muurkha>
but you don't need multiple stacks to do CLU-iterator-style coroutines
<muurkha>
but I don't think wasm can do it
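The reason CLU-style iterators need only one stack is that the iterator's frame can be flattened into an explicit state object — the transformation a compiler targeting coroutine-less wasm would have to perform. A sketch of what `yield` compiles down to: the loop's live locals become fields, and each resume picks up where the last call left off.

```python
class Fib:
    """A Fibonacci iterator hand-compiled from coroutine form into a
    resumable state object: no second stack required to suspend/resume."""
    def __init__(self):
        self.a, self.b = 0, 1        # the generator's live locals, saved as fields

    def __iter__(self):
        return self

    def __next__(self):
        v = self.a                   # "yield a" ...
        self.a, self.b = self.b, self.a + self.b   # ... then advance the state
        return v
```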
<muurkha>
it's been a very pleasant conversation, thank you
<dh`>
sorry, fell asleep
<dh`>
night
pecastro has joined #riscv
winterflaw has joined #riscv
freakazoid12345 has joined #riscv
freakazoid343 has quit [Ping timeout: 256 seconds]
AEtherC0r3 has joined #riscv
jmdaemon has quit [Ping timeout: 260 seconds]
jellydonut has joined #riscv
radu242 has quit [Ping timeout: 260 seconds]
radu242 has joined #riscv
alMalsamo has quit [Ping timeout: 240 seconds]
alMalsamo has joined #riscv
ivii has joined #riscv
jacklsw has quit [Ping timeout: 260 seconds]
<gordonDrogon>
winding back to the SIGSEGV stuff above - I did consider implementing it in the bytecode VM I've written for my system - there are issues though - the main one being the overhead - this is really something to do in hardware if possible. There are other issues in BCPL, such as static data being in-line with code, which C separated out once we had the notions of text, data, etc. segments and MMUs with separate data/address regions.
<gordonDrogon>
I did have a look at a C RISC-V emulator which implemented enough of an MMU to run Linux under - it was checking the memory address more or less every cycle. It would have been a bit slow!
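The per-cycle address checking described amounts to a range test on every load and store — one compare-and-branch per memory reference, which is exactly the software overhead that hardware MMUs exist to remove. A minimal sketch of such a check:

```python
def checked_load(mem, base, limit, addr):
    """Software 'MMU': validate every access against a base/limit pair.
    The compare-and-branch on each reference is the overhead in question."""
    if not (base <= addr < limit):
        raise MemoryError("SIGSEGV-style fault at %#x" % addr)
    return mem[addr]
```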
q66 has quit [Remote host closed the connection]
alMalsamo is now known as lumberjack123
prabhakarlad has quit [Quit: Client closed]
ivii has quit [Read error: Connection reset by peer]
q66 has joined #riscv
jjido has joined #riscv
jjido has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
X-Scale` has joined #riscv
X-Scale has quit [Ping timeout: 256 seconds]
X-Scale` is now known as X-Scale
aburgess_ is now known as aburgess
geranim0 has joined #riscv
freakazoid12345 has quit [Read error: Connection reset by peer]
freakazoid12345 has joined #riscv
handsome_feng has joined #riscv
aerkiaga has joined #riscv
freakazoid343 has joined #riscv
<gordonDrogon>
well. exciting. I got my GoWin fpga / Tang Nano 9K to run up a risc-v cpu. at least I think I have - it's also doing hdmi output which is nice, but as yet I've absolutely no idea how to get code into it. that might be tomorrow's task.
freakazoid12345 has quit [Ping timeout: 260 seconds]
<gordonDrogon>
ended up using openFPGALoader to get the thing into it.
vagrantc has joined #riscv
<josuah>
gordonDrogon: did you see the apicula project yet?
<josuah>
ah, you are past generating the bitstream, all right
<gordonDrogon>
josuah, not yet - although I'm lurking on the channels here...
<gordonDrogon>
my feelings were to use the gowin ide & their programmer first, then once I've proved the concept move to something else..
<josuah>
the 9k support is a WIP, but hopefully not too different to the 1k or 4k version
<josuah>
there is also #yosys-apicula for discussion
<gordonDrogon>
this is very much 'here be dragons' territory for me right now - and this fpga has binary blobs of verilog to do some of the stuff
<josuah>
and indeed, the README proposes openFPGALoader
<josuah>
what are you using so far?
<gordonDrogon>
if only gowin would publish more on exactly what's inside their fpga - there's a whole ARM core for example .. (not that I want to use it, however...)
<gordonDrogon>
right now - for the examples off the Sipeed's github - used the gowin IDE to build this thing and openFPGAloader to program it.
<josuah>
no FPGA maker I've come across in my short experience provides any documentation of the internals
motherfsck has joined #riscv
<gordonDrogon>
however I want to see what alternatives there are to the picoRV core.
<josuah>
which the apicula project's authors know very well, since they reverse-engineered the chip to build a whole open-source toolchain
<gordonDrogon>
I'd really like one that supported 32IM ultimately.
<josuah>
but the support for the 9k in particular is pending
<gordonDrogon>
well, time to start reading then :)
<josuah>
if you want a hands-on, the README and linked doc would be a good start
<josuah>
btw, I am just linking someone else's work, I did none of it :P
jacklsw has joined #riscv
<gordonDrogon>
:)
<gordonDrogon>
thanks for the pointers.
* josuah
bows
<gordonDrogon>
I also have a couple of ESP32-C3's arriving tomorrow. they may be a slightly easier target for me to get some actual RV code running on, but I have a lot to do.
<gordonDrogon>
Currently my BCPL OS relies on an underlying OS on both the CPU and 'host' MCU for filing system and serial IO. I may need to write the filing system to run natively in BCPL on the RV target platform, so talking to an SD card or maybe the internal flash if it's writable on the fly.
<gordonDrogon>
this fpga has an sd card slot but the ESP32-C3's don't.
<gordonDrogon>
I'm liking the idea of this device as it can drive video - one thing I want is a "boot to basic" type retro computer... one day!
<gordonDrogon>
the stumbling block here is keyboards - all USB now, more or less - hard to get those old ps/2 ones (or expensive).
<josuah>
these same Sipeed folks did a wrapper board for the GD32VF103 (GigaDevice's RISC-V spin-off of their STM32F103 clone, the GD32F103)
<josuah>
fairly inexpensive, well-documented enough to write a bootloader/sdk by hand (with some help from others' examples and the official SDK)
<josuah>
and it has an SD card slot
<josuah>
it might still be reasonably feasible to plug an SD card breakout board though
<gordonDrogon>
I can do SD cards with an SPI interface - I've already written code for that once, so how hard can it be again ;-)
<gordonDrogon>
ah, so picpRV32 can support M. that's good.
mahmutov has joined #riscv
gdd has quit [Ping timeout: 272 seconds]
elastic_dog has quit [Ping timeout: 260 seconds]
gdd has joined #riscv
jtm has left #riscv [Leaving]
elastic_dog has joined #riscv
guerby has quit [Remote host closed the connection]