buzzmarshall has quit [Quit: Konversation terminated!]
rob_w has joined #beagle
xet7 has quit [Remote host closed the connection]
thinkfat_ has joined #beagle
thinkfat_ has quit [Ping timeout: 250 seconds]
thinkfat_ has joined #beagle
thinkfat_ has quit [Remote host closed the connection]
thinkfat_ has joined #beagle
thinkfat_ has quit [Remote host closed the connection]
thinkfat_ has joined #beagle
Daulity has quit [Remote host closed the connection]
javaJake_ has joined #beagle
javaJake has quit [Ping timeout: 265 seconds]
javaJake_ is now known as javaJake
ikarso has joined #beagle
thinkfat_ has quit [Quit: Konversation terminated!]
Daulity has joined #beagle
<Daulity>
hey
<Daulity>
is there a kernel option to not reset gpio state at linux boot ?
florian has joined #beagle
<zmatt>
Daulity: the kernel does not reset any gpio unless explicitly requested to by a driver
<zmatt>
though by default if cape-universal is enabled then a gpio-of-helper device gets set up which configures all gpios as input... though that is normally the state they are in anyway after reset
<zmatt>
Daulity: why? are you setting gpios in u-boot?
<Daulity>
yes u-boot sets a few gpios before it boots the kernel; they get reset to a certain state afterwards, not certain if by u-boot or the linux kernel
<Daulity>
was just wondering
<zmatt>
the annoying bit is that this isn't really fixable by applying an overlay on top of cape-universal due to the limitations of overlays and the fact that status="disabled"; doesn't work on individual gpios of a gpio-of-helper device node
<zmatt>
so your options are to modify the cape-universal overlay or disable cape-universal entirely and use an overlay to declare/export gpios (with initialization of your choice)
<Daulity>
i see
<zmatt>
(or fix the gpio-of-helper driver to respect the status property of individual gpios... which is probably a 2-line patch)
<zmatt>
interesting, if CONFIG_OF_KOBJ=n then nodes with non-okay status property don't even get deserialized, however in practice CONFIG_OF_KOBJ is always y (specifically, it is only n in kernels that lack sysfs support)
<zmatt>
rcn-ee: can you include that patch? that way overlays can disable cape-universal's gpio export for individual gpios used by the overlay
<zmatt>
e.g. &ocp { cape-universal { P9_14 { status = "disabled"; }; }; };
<zmatt>
Daulity: you can use that in an overlay and then if you still want the gpio exported you can just declare your own gpio-of-helper ... unfortunately it doesn't support exporting a gpio without initializing it, but at least you can choose *how* to initialize it (input, output-low, output-high) and whether or not linux userspace is allowed to change the direction of the gpio
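[editor's note: a minimal sketch of what that fragment could look like as a standalone overlay file. It only has an effect on a kernel that carries the status-property patch discussed above; the P9_14 node name is just the one from zmatt's example, and the child properties for declaring your own gpio-of-helper node are not shown here — they would need to be copied from the existing cape-universal overlay source.]

    /dts-v1/;
    /plugin/;

    &ocp {
        cape-universal {
            /* stop cape-universal from exporting this pin, so the
             * state set up in u-boot is left alone by the kernel */
            P9_14 {
                status = "disabled";
            };
        };
    };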
Outrageous has quit [Remote host closed the connection]
Shadyman has quit [Remote host closed the connection]
lucas_ has quit [Ping timeout: 250 seconds]
lucascastro has joined #beagle
lucascastro has quit [Ping timeout: 240 seconds]
lucascastro has joined #beagle
rob_w has quit [Quit: Leaving]
lucascastro has quit [Ping timeout: 240 seconds]
lucascastro has joined #beagle
<rcn-ee>
zmatt, how far back? should i do everything? (v4.14.x -> mainline). ;)
vd has joined #beagle
blathijs has quit [Quit: reboot]
blathijs has joined #beagle
lucascastro has quit [Remote host closed the connection]
lucas_ has joined #beagle
blathijs has quit [Remote host closed the connection]
blathijs has joined #beagle
<zmatt>
rcn-ee: if you include it e.g. in a 4.19 kernel I'm willing to verify it actually works (although I have no idea how it could possibly _not_ work) and then I'd say make it part of the patch that adds gpio-of-helper ... it should probably have been part of it from the start
<zmatt>
not being able to disable cape-universal gpios is presumably also why you did the hack to try to disable gpio conflict detection
<rcn-ee>
i assume it would work for every version..
<zmatt>
indeed
<zmatt>
of_device_is_available(node) returns true if the "status" property of the node is missing, "okay", or "ok", and has been around with that exact functionality since ancient prehistory... like kernel 2.6 or something
vagrantc has joined #beagle
florian has quit [Quit: Ex-Chat]
lucas_ has quit [Ping timeout: 256 seconds]
buzzmarshall has joined #beagle
ikarso has quit [Quit: Connection closed for inactivity]
lucascastro has joined #beagle
vd has quit [Quit: Client closed]
vd has joined #beagle
lucascastro has quit [Remote host closed the connection]
vagrantc has quit [Quit: leaving]
otisolsen70 has joined #beagle
lucascastro has joined #beagle
lucascastro has quit [Ping timeout: 268 seconds]
mattb0ne has joined #beagle
<mattb0ne>
zmatt: are you aware of python having some overhead if you were to do a save?
mattb0ne has quit [Ping timeout: 252 seconds]
lucascastro has joined #beagle
vagrantc has joined #beagle
Shadyman has joined #beagle
mattb0ne has joined #beagle
<mattb0ne>
if python is locked to 1 cpu how would threading help
<mattb0ne>
stuck between a rock and a hard place
<mattb0ne>
I need to save data while doing other stuff
<mattb0ne>
this is just something that python is not very good at
<mattb0ne>
if I switched to C or C++ would i see a benefit if I had 1 thread dedicated to saving data
<zmatt>
mattb0ne: for logging data to eMMC/SD you mean? is that necessary, as opposed to sending data to a client system (not the BBB) and saving it there?
<zmatt>
regardless, if you're getting blocked on I/O then using a separate thread in python will indeed help, in the sense of allowing other stuff to get done while the writer thread is blocked on I/O
<zmatt>
while python only allows a single thread at any given time to be executing python code, a thread that's blocked in I/O does not count as "executing python code"
<zmatt>
so while that worker thread is blocked in I/O, your main thread can do other stuff
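[editor's note: a minimal sketch of the writer-thread idea being described here, assuming numpy frames are handed off through a queue.Queue; the output path, frame shape, and queue size are illustrative assumptions, not anything from the conversation.]

    import queue
    import threading
    import numpy as np

    frames = queue.Queue(maxsize=64)      # bounded so the producer can't run away

    def writer():
        idx = 0
        while True:
            frame = frames.get()          # blocks until a frame is available
            if frame is None:             # sentinel: shut the thread down
                break
            # the OS-level writes done by np.save release the GIL, so the
            # main (GUI) thread keeps running Python code during the disk I/O
            np.save(f"/data/frame_{idx:06d}.npy", frame)   # illustrative path
            idx += 1

    t = threading.Thread(target=writer, daemon=True)
    t.start()

    # main thread side: hand a frame off and carry on
    frames.put(np.zeros((2048, 2048), dtype=np.uint16))
    frames.put(None)                      # tell the writer to finish
    t.join()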
<zmatt>
mattb0ne: also, everything on the bbb is "locked to 1 cpu" since it only has one :P
<mattb0ne>
yeah I am talking about computer side
<zmatt>
hmm, it's having trouble writing a few dozen KB/s ?
<mattb0ne>
no i have images, so I am having to write ideally 15MB at 30 fps
<mattb0ne>
so I have an enterprise drive that can handle the throughput
<mattb0ne>
using dd I can write sequentially at 1.2GB/s
<mattb0ne>
but saving data to disk and running a gui is too much for a single python thread
<zmatt>
where is the data coming from?
<zmatt>
I'd be more concerned with how the data is being handled in python
<zmatt>
although yes, you may indeed also need a writer thread, but I don't know if that will fix the problem
<mattb0ne>
coming from a area scan camera
<mattb0ne>
so i have these big numpy arrays that I save as npy files for speed
<mattb0ne>
but they are big
<mattb0ne>
tiffs involve some sort of conversion and save at half the speed, but the file size is much smaller
<mattb0ne>
i was going to whip up a python program that just makes large random matrices and saves them to disk to see how fast I can go
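[editor's note: such a throwaway benchmark might look something like the sketch below; the array shape, frame count, and output path are made-up parameters, and it only times np.save of random data, which is roughly what is being proposed.]

    import time
    import numpy as np

    frames = 100
    shape = (2048, 2048)                  # adjust to match the real image size

    data = np.random.randint(0, 65536, size=shape, dtype=np.uint16)

    start = time.monotonic()
    for i in range(frames):
        # write the same array to a fresh file each iteration
        np.save(f"/data/bench_{i:06d}.npy", data)   # illustrative path
    elapsed = time.monotonic() - start

    mb = data.nbytes * frames / 1e6
    print(f"wrote {mb:.0f} MB in {elapsed:.2f} s "
          f"({mb / elapsed:.0f} MB/s, {frames / elapsed:.1f} files/s)")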