<tpw_rules>
actually this might be a question more for mithro
<tpw_rules>
mithro: i think it would be good if the pythondata concept had scope for additional code with the repo, or we did something else for the CPUs which are generated
<tpw_rules>
it's pretty inconvenient to have the driver code which converts the cpu request to a generator invocation as part of litex itself, and some CPUs also do additional setup and downloads prior to generation, which makes tracking versions complicated
<mithro>
I would just send a PR.
<mithro>
tpw_rules: it would be great if the pythondata repos handled generated CPUs better
<mithro>
tpw_rules: I just never got around to doing the "run command between git pull and git commit + push" bit
<tpw_rules>
minerva sort of does what i want, where the options are passed directly to code in the pythondata repo, but it's done by shelling out to another python process
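The "shell out to another python" pattern being described can be sketched roughly as below: the consumer (LiteX) assembles a command line and runs a CLI script that lives inside the pythondata repo as a child process. All names here are illustrative, not the actual LiteX or minerva code.

```python
# Hypothetical sketch of invoking a generator script in a separate
# python process, as minerva's pythondata integration is described.
import subprocess
import sys

def build_cmd(cli_script, options):
    """Turn an options dict into a generator command line."""
    cmd = [sys.executable, cli_script]
    for name, value in sorted(options.items()):
        cmd += [f"--{name}", str(value)]
    return cmd

def run_generator(cli_script, options):
    """Run the generator CLI in a child process and wait for it."""
    subprocess.check_call(build_cmd(cli_script, options))
```

The downside tpw_rules is pointing at: the caller only sees a command line, so there is no way to introspect what options exist without reading the script.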
<tpw_rules>
in my mind there would be a little more intelligence so you can call a function directly on the data module to do that, or maybe query it for some option descriptions
<tpw_rules>
but that doesn't fit the template or "just run a command" model
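The "little more intelligence" idea, calling a function directly on the data module and querying it for option descriptions, could look something like this sketch. The module layout, function names, and options are all hypothetical, a reading of the proposal rather than any existing pythondata API.

```python
# Hypothetical in-process API a pythondata CPU module could expose,
# instead of a CLI: the consumer imports it, queries options, and
# calls generate() directly. Nothing here is real pythondata code.

OPTIONS = {
    "xlen":       "register width, 32 or 64",
    "reset_addr": "address the CPU starts executing from",
}

def describe_options():
    """Let the consumer (e.g. LiteX) query what can be configured."""
    return dict(OPTIONS)

def generate(output_dir, **options):
    """Run the CPU generator directly, no subprocess needed."""
    unknown = set(options) - set(OPTIONS)
    if unknown:
        raise ValueError(f"unknown options: {sorted(unknown)}")
    # ... the real generator would run here and write verilog ...
    return f"{output_dir}/cpu.v"
```

As the next message notes, this doesn't fit the current "just run a command" template: it requires CPU-specific glue code living in the pythondata repo itself.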
<tpw_rules>
and that sort of code doesn't quite belong in litex, because it's cpu-specific, cpu-isolated, and not dependent on litex; and it doesn't quite belong upstream for a lot of the cpus, because they are used in other applications or aren't python themselves
<mithro>
tpw_rules: the code was almost completely written while I was avoiding doing something more important
<tpw_rules>
maybe i'm reading some grand plan into this that isn't really there :) i have a bad habit of that
<tpw_rules>
let's try again more directly
<tpw_rules>
from what i get from the issues and glancing at the code, your pythondata-auto tool wants to completely control the content of the pythondata repos and generate them from scratch using that .ini. but there are good reasons imo to add more stuff to them. how should that be done?
<mithro>
tpw_rules: the pythondata-auto already copies files from the template
<tpw_rules>
maybe this is something to push upstream, but most of the CPUs don't have a working build command anyway without code from litex
<mithro>
tpw_rules: no reason it couldn't also pull extra stuff from another repo or something
<tpw_rules>
i was more thinking a semi-automatic approach where there's some script to generate the initial layout and separate ongoing maintenance to merge upstream code/update submodules, run build commands, and update the __init__.py metadata. then glue code would be subject to the normal PR process and ideally change relatively infrequently
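One way to read that semi-automatic split: a bot performs the routine steps (pull upstream, rebuild, refresh metadata, commit and push), while the glue code in the repo only changes through normal PRs. The commands and paths below are illustrative only.

```python
# Hypothetical ongoing-maintenance flow for a pythondata repo: the
# routine steps a bot would run in order. Commands are illustrative.

def maintenance_steps(build_cmd):
    """Return the routine commands the bot would run, in order."""
    return [
        ["git", "submodule", "update", "--remote"],   # merge upstream code
        list(build_cmd),                              # regenerate outputs
        ["python", "update_metadata.py"],             # refresh __init__.py info
        ["git", "commit", "-am", "Update from upstream"],
        ["git", "push"],
    ]
```

This matches mithro's earlier framing of the missing piece as "run command between git pull and git commit + push".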
<tpw_rules>
having an additional pythondata-specific source repo seems complex to me
<tpw_rules>
(that actually raises the question, why are the commits copied instead of using submodules?)
<mithro>
I would say we want to keep the modules updating as quickly as upstream does, and that rules out manual intervention.
<tpw_rules>
you wouldn't need to manually intervene to keep upstream updating though
<tpw_rules>
that ongoing maintenance i mention would be fully automatic. we would just have to think a bit about what parts of the repo the bot and humans should touch
<mithro>
The main reason for not having humans touch the output is the version number issue
<mithro>
The current version numbers are the sum of the source and pythondata-auto
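A literal reading of "the sum of the source and pythondata-auto" is sketched below; this is a hypothetical interpretation of the scheme, not the actual implementation, but it shows why a human commit to the output repo would desynchronize the number.

```python
# Hypothetical version derivation: both revision counters feed one
# monotonic post-release number, so only the automation can bump it.

def combined_version(source_commits, auto_commits):
    """Derive a PEP 440-style version from the two revision counters."""
    return f"0.0.post{source_commits + auto_commits}"
```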