<zmatt>
an npy file is the raw data of your array with a tiny header
<zmatt>
if the npy file is larger than you expect, that's probably due to whatever data format your array has
<mattb0ne>
hmmm
<mattb0ne>
so my image is like 4024x3036
<mattb0ne>
and I think it is 16-bit
<mattb0ne>
it's monochrome but I should look at maybe forcing it to int to save space
<zmatt>
??
<zmatt>
"monochrome" is not a datatype
<zmatt>
if it's 16-bit then 4024x3036 pixels will be 23.3 MB
<mattb0ne>
i crop to save space
<mattb0ne>
so my size will vary but it will be still large
<mattb0ne>
16bit would explain the size as a pure numpy array
<zmatt>
width * height * 2 bytes if you're using 16 bits per pixel
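zmatt's arithmetic is easy to check in numpy itself. A small illustrative sketch, using the frame dimensions from the conversation (the `io.BytesIO` target just stands in for a real file on disk):

```python
import io
import numpy as np

# A monochrome 16-bit frame: one 2-byte sample per pixel, no color channels.
width, height = 4024, 3036
frame = np.zeros((height, width), dtype=np.uint16)

raw_bytes = frame.nbytes                   # width * height * 2
print(raw_bytes / 2**20)                   # ~23.3 (MiB)

# np.save adds only a small header on top of the raw array data.
buf = io.BytesIO()
np.save(buf, frame)
header_bytes = buf.getbuffer().nbytes - raw_bytes
print(header_bytes)                        # on the order of a hundred bytes
```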
<mattb0ne>
monochrome means i just have 2D
<mattb0ne>
if it were color I would have 3 times the size
<mattb0ne>
lol
<zmatt>
not if it's still 16 bits per _pixel_ :P
<zmatt>
if it's 16-bits per channel per pixel then yes it would be
<zmatt>
anyway, tiff is a container format so that says absolutely nothing about the resulting size
<zmatt>
tiff can contain raw images, RLE compressed images (like GIF or PNG), JPEG, and a bunch more
<mattb0ne>
did not know that
<mattb0ne>
do you think if I ported it to C or C++ I could get an improvement in performance?
<zmatt>
I cannot possibly opine on what might improve performance without knowing what it is you're spending your time on
otisolsen70 has quit [Quit: Leaving]
<zmatt>
like, python would be slow if you were to do actual per-pixel calculations or processing in it, but that's why you use libraries like numpy that already do the hard work in optimized C code
<zmatt>
if python is solely passing big blobs of data from one optimized C function to the next, as it should be, then absolutely nothing will be gained from reimplementation in C/C++, other than many bugs probably
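The point about numpy doing the per-pixel work in optimized C can be seen in a quick, purely illustrative comparison (the array contents and the halving operation are made up for the demo):

```python
import time
import numpy as np

frame = np.arange(1_000_000, dtype=np.uint32).reshape(1000, 1000)

# Per-pixel work in pure Python: every element goes through the interpreter.
t0 = time.perf_counter()
slow = [[int(v) // 2 for v in row] for row in frame]
t_python = time.perf_counter() - t0

# The same work done by numpy's optimized C loop in one call.
t0 = time.perf_counter()
fast = frame // 2
t_numpy = time.perf_counter() - t0

# Both approaches compute the same result; numpy is typically far faster.
assert (fast == np.array(slow, dtype=np.uint32)).all()
print(f"python: {t_python:.3f}s  numpy: {t_numpy:.4f}s")
```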
<ds2>
hmmmm people are back on IRC?!
<zmatt>
ds2: ehh, people have never been gone from irc?
<ds2>
thought everyone left for slack?
<zmatt>
lol what, no?
<ds2>
*shrug* seemed that way... along with discordia
<zmatt>
I have an account there but I'm never there since slack is annoying
<ds2>
slack is beyond annoying
<zmatt>
especially two accounts, since you can't have both open in one tab and I refuse to open up two slack tabs
<ds2>
the attempts at controlling content is just unacceptable
<mattb0ne>
well, moving to C or something I could get true multithreading
<mattb0ne>
and just have a thread that just saves
<mattb0ne>
is what I am thinking
<zmatt>
mattb0ne: that will work equally well in python
<zmatt>
for the reason I explained above
<ds2>
btw - has anyone used the battery charger on the pocket?
<mattb0ne>
but you said python only has 1 thread
<zmatt>
mattb0ne: go scroll back and read what I said
<zmatt>
ds2: I mean, there's still the same hazard with using a battery
<mattb0ne>
I am passing a blob of data from numpy to a disk
<zmatt>
ds2: the 3.3v available on the headers and used for e.g. SD card is the one from the ldo, which remains permanently enabled if on battery
<ds2>
yes but are things properly brought out like on the stock bone?
<mattb0ne>
but I don't see how that can't be improved by separating the GUI management from the I/O to disk
<ds2>
it is? blah
<ds2>
I can see the plans for a quick fun project btwn xmas and new years flying out the window :(
<zmatt>
ds2: and you can't patch the pocket like you can the bbb
<zmatt>
well, you can still use the battery... provided you never shut down ;)
<ds2>
is the SD the only thing on that LDO?
<ds2>
no, I need shutdown
<ds2>
as the sleep mode on the am33x sucks
<ds2>
I guess I can price out some cheap pfets
<zmatt>
mattb0ne: I don't understand what you're saying. shoving your bigass file writes into one or more worker threads will almost certainly help a lot
<mattb0ne>
right but only for C, since in python I would still be blocking while the write executes
<mattb0ne>
right?
<zmatt>
mattb0ne: again, scroll back and read my explanation of threading in python so I don't have to repeat myself
<mattb0ne>
ok
<mattb0ne>
so based on all your statements I think it makes sense for me to port to C so I can have a worker that just handles writes to the disk
<zmatt>
mattb0ne: then you understood nothing I said
<zmatt>
what you will most likely get by doing that is more crashes, not more performance :P
<mattb0ne>
i am talking about my computer
<mattb0ne>
not the beaglebone
<mattb0ne>
so I am not locked
<mattb0ne>
i got 12
<zmatt>
my explanation of threading in python is platform-independent
<mattb0ne>
so while that worker thread is blocked in I/O, your main thread can do other stuff
<mattb0ne>
that is the key statement that I guess I am misunderstanding
<mattb0ne>
python is smart enough to do something else while waiting for the I/O to clear
<mattb0ne>
i thought unless you do something specific it would be blocking
<zmatt>
that has nothing to do with python being smart, that's how threads work
<mattb0ne>
I do not have threads in my implementation
<mattb0ne>
just asyncio
<zmatt>
right, which won't help when dealing with large file I/O which is unfortunately always blocking
<zmatt>
hence, use worker threads
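A minimal sketch of the worker-thread idea zmatt is describing; `save_blob` is a hypothetical stand-in for the real numpy/file write:

```python
import os
import tempfile
import threading

def save_blob(path, data):
    # Plain blocking write. While the thread sits in the OS write call,
    # the GIL is released, so the main thread (e.g. a GUI loop) keeps running.
    with open(path, "wb") as f:
        f.write(data)

data = bytes(1024 * 1024)                     # stand-in for one frame's raw bytes
path = os.path.join(tempfile.mkdtemp(), "frame0000.bin")

worker = threading.Thread(target=save_blob, args=(path, data))
worker.start()
# ... main thread is free to do other work here while the write runs ...
worker.join()
print(os.path.getsize(path))
```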
<mattb0ne>
aha
<mattb0ne>
so I need to make a worker thread in python and see how that goes
<mattb0ne>
for the I/O
<mattb0ne>
maybe that would have been better than all this asyncio stuff
<mattb0ne>
did not really buy me much other than allowing the gui to update
<zmatt>
they're solving different problems
<mattb0ne>
so the GIL does the thread switching
<mattb0ne>
all for me
<zmatt>
and asyncio makes it pretty easy to execute stuff in a thread pool and get asyncio futures for them that you can await if you want to know when those executions have completed
<zmatt>
the OS does thread switching (keeping in mind threads may also run at the same time on separate cores)
<zmatt>
the GIL prevents two threads from running python code at the same time, so e.g. when that big file write completes it will want to return to python code, and at that point it'll have to wait for its turn to be allowed to do so
<mattb0ne>
but bear with me, just to clarify: while the big fat write is going, the GIL will allow the main thread to proceed and eventually let the worker thread back in to execute more code
<zmatt>
while the big fat write is going, that thread will release the GIL
<mattb0ne>
so writing to disk is basically outside of python so that allows the GIL to be released
<mattb0ne>
since python is calling a c function anyway or it is the domain of the OS
<zmatt>
it's up to the individual functions (implemented in C) to release the GIL when appropriate
<zmatt>
e.g. numpy will definitely drop the GIL while doing a big calculation
<mattb0ne>
ok I will read up on threads then
<mattb0ne>
so what would be a case where threads do not make sense and asyncio would be the better approach
<mattb0ne>
is it really only big tasks that you would move to a thread?
<zmatt>
asyncio is almost always the better choice
<zmatt>
asyncio is for event handling
<mattb0ne>
it is just not helpful in this context because i just have a big fat event
<mattb0ne>
nothing it can do
<mattb0ne>
i got it now
<zmatt>
right, this isn't a problem where asyncio can help you
<zmatt>
though like I said, it does offer integration via loop.run_in_executor which can run code in a thread pool and give you an asyncio future for its completion
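A sketch of what that loop.run_in_executor integration can look like; `save_blob` and the pool size here are illustrative assumptions, not the contents of the pastebin linked below:

```python
import asyncio
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def save_blob(path, data):
    # Blocking write, pushed off the event loop into a worker thread.
    with open(path, "wb") as f:
        f.write(data)

async def main():
    loop = asyncio.get_running_loop()
    pool = ThreadPoolExecutor(max_workers=2)   # cap the number of writer threads
    path = os.path.join(tempfile.mkdtemp(), "frame0000.bin")
    data = bytes(4096)
    # The coroutine awaits the future; the event loop (and GUI) stays responsive
    # while the worker thread does the blocking write.
    await loop.run_in_executor(pool, save_blob, path, data)
    return path

saved_path = asyncio.run(main())
print(os.path.getsize(saved_path))
```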
<zmatt>
there's also asyncio.to_thread() which is even simpler, but it looks like it spawns a new thread each time, which would be undesirable (or if it doesn't, then it's not clear how to influence the number of worker threads) .. also it might be too new, introduced in python 3.9
<zmatt>
ah it uses a single worker
<zmatt>
sorry, thread pool
<zmatt>
a default thread pool
<zmatt>
I mean, that seems like it could be fine
<zmatt>
mattb0ne: so this would be the simplified version (without requiring python 3.9): https://pastebin.com/3YCXU02t
<zmatt>
jkridner: I just noticed that lately all of the BBBs we've been getting have the wrong board id in eeprom, they're claiming to be industrial (A335BNLTEIA0) when they're not
<zmatt>
in fact all of the boards since the seeed transition have wrong board id, and also have production year and production week swapped
vagrantc has quit [Quit: leaving]
vd has joined #beagle
buckket has joined #beagle
buzzmarshall has quit [Quit: Konversation terminated!]
dkaiser has joined #beagle
ikarso has joined #beagle
Posterdati has quit [Read error: Connection reset by peer]
dkaiser has quit [Quit: Leaving]
lucascastro has quit [Ping timeout: 252 seconds]
Niv44 has joined #beagle
vd has quit [Ping timeout: 256 seconds]
florian has joined #beagle
otisolsen70 has joined #beagle
zjason` has joined #beagle
russell-1 has joined #beagle
robert_ has joined #beagle
ft_ has joined #beagle
Niv44 has quit [Quit: Ping timeout (120 seconds)]
florian has quit [*.net *.split]
Daulity has quit [*.net *.split]
zjason has quit [*.net *.split]
ft has quit [*.net *.split]
russell-- has quit [*.net *.split]
ft_ is now known as ft
florian has joined #beagle
xet7 has joined #beagle
Guest56 has joined #beagle
<Guest56>
hi
<Guest56>
Good day!
<Guest56>
I bought beagle board x15 today
<Guest56>
I am trying to boot from SD card but it is not happening
<Guest56>
By default, it is booting from eMMC
<Guest56>
Let me know the procedure to boot from SD card and also the bootstrap settings for the board
<Guest56>
Please help me with this issue
<zmatt>
Guest56: please don't send unsolicited private messages... and I'm pretty sure all that's needed to boot from sd card is inserting a bootable sd card
<Guest56>
i have downloaded the buildroot and generated the below images
<zmatt>
yes correct, if there's no bootable sd card inserted it will boot from eMMC
<Guest56>
i got the images.. i want the procedure to load the images to sd card
<Guest56>
i have found the procedure online...but it is not working
<Guest56>
I have made 2 partitions in sd card and programmed the boot files in partition-1 and rfs in partition-2
<zmatt>
if you're talking about a filesystem image (like the official ones) the recommended way to flash those to SD card is using Balena Etcher (etcher.io). if you're talking about the buildroot output files then no idea since I have no experience with buildroot
<Guest56>
I also used balena etcher
<Guest56>
please share the working image path if u have one
<Guest56>
i have downloaded debian images from the path u mentioned..
<zmatt>
in general any image for AM572x should work
<Guest56>
should i directly load this image to sd card using balena etcher
<zmatt>
yes
<Guest56>
i already did this with different image
<zmatt>
what image?
<zmatt>
(file path or full name)
lucascastro has joined #beagle
<Guest56>
Images under "Debian console images" in the same url
<zmatt>
should be fine, if you get the AM5729 image
<Guest56>
Ok..
<Guest56>
Image verification is failing when trying to load it with balena etcher
<zmatt>
that typically indicates the SD card is defective
<zmatt>
(resulting in data corruption)
<zmatt>
e.g. due to wear-out
<Guest56>
Ok
<Guest56>
I will try with other sd card
<Guest56>
actually, this sd card is working fine for all other images..
<Guest56>
i have loaded raspi image
<Guest56>
earlier
<zmatt>
you can try again, but Etcher doesn't care what's on the image, it's just a big pile of bytes as far as it's concerned
<Guest56>
Ok
Guest56 has quit [Quit: Client closed]
<zmatt>
if you're concerned about the download's integrity, check the sha256sum of the .img.xz file matches the one listed on the download page (below the download link)
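Verifying that sha256sum can also be done with python's standard library; the file and its contents below are placeholders for the real .img.xz download:

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so a multi-GB image never has to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file standing in for the downloaded image:
path = os.path.join(tempfile.mkdtemp(), "demo.img.xz")
with open(path, "wb") as f:
    f.write(b"not a real image")
print(sha256_of(path))   # compare this against the sum on the download page
```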
<zmatt>
not any official ones anyway... but ubuntu and debian are pretty similar, and especially if you want a lean console image it would be silly to use ubuntu instead of debian