<zmatt>
set_: your "fixes" are just breaking things. the error is probably because you didn't configure the gpio as output
<zmatt>
I've had a bunch of examples for gpio, this one in particular assumes you've already configured the direction of the gpios
<set_>
Hello! I am slowly writing my own source that is more basic in notation.
<set_>
Oh!
<set_>
I did not know that idea.
<set_>
Whelp, I will do that promptly.
<zmatt>
probably the most basic way of doing gpio in python is just using pathlib directly
<set_>
Right. That is what I am trying.
<zmatt>
Path('/sys/class/gpio/gpio31/direction').write_text('low') # configure gpio as output, initially low
<set_>
I thought direction was 'out'?
<zmatt>
also, using my code Gpio('gpio31') was never correct, it would have to be Gpio(31)
<set_>
Aw.
<zmatt>
when writing the direction attribute you should write 'in' (configure as input), 'low' (configure as output, initially low), or 'high' (configure as output, initially high)
<zmatt>
writing 'out' is a deprecated synonym for 'low'
<set_>
Oh.
<set_>
Okay. I was unaware of this fact too.
<zmatt>
you can't configure a gpio to output without specifying what to output
<set_>
Right. That makes sense; hence the errors on the pathlib file.
<zmatt>
??
<set_>
Sorry. My python3 file w/ pathlib.
<zmatt>
writing 'out' does not result in an error
<set_>
Oh.
<set_>
Double wrong over here then.
<zmatt>
it's just treated as if you wrote 'low'
<zmatt>
if that's your intention you should write 'low' instead for clarity
<set_>
I will write 'low'.
<set_>
Okay.
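Putting the above together, a minimal sketch of driving a gpio with pathlib (assuming gpio31 is already exported and the process has permission to write its sysfs attributes):

    from pathlib import Path

    gpio = Path('/sys/class/gpio/gpio31')

    # configure as output, initially low ('high' = output initially high, 'in' = input)
    (gpio / 'direction').write_text('low')

    # drive the output high, then low again
    (gpio / 'value').write_text('1')
    (gpio / 'value').write_text('0')

    # for an input, read the current level instead:
    # level = int((gpio / 'value').read_text())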
<zmatt>
and also your changes to the __init__ method of my Gpio class are just nonsense, it was correct the way it was written originally
<set_>
Okay. Maybe I was having issues w/ /dev/gpio?
<zmatt>
if you don't have the udev rule that creates the symlinks in '/dev/gpio' that's fine, it just means you can't open gpios by name... but my code also accepts gpio numbers and absolute paths
<set_>
Okay.
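zmatt's actual Gpio class isn't shown in this log; purely as a hypothetical illustration of accepting a gpio number, a name under /dev/gpio, or an absolute path, the argument handling might look something like this (all names and details are assumptions, not his implementation):

    from pathlib import Path

    class Gpio:
        def __init__(self, gpio):
            if isinstance(gpio, int):
                # gpio number, e.g. Gpio(31)
                self.path = Path(f'/sys/class/gpio/gpio{gpio}')
            elif str(gpio).startswith('/'):
                # absolute path, e.g. Gpio('/sys/class/gpio/gpio31')
                self.path = Path(gpio)
            else:
                # name resolved via the udev symlinks in /dev/gpio
                self.path = Path('/dev/gpio', gpio).resolve()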
Guest18 has quit [Ping timeout: 260 seconds]
mattb0ne has joined #beagle
mattb0ne has quit [Ping timeout: 240 seconds]
mattb0ne has joined #beagle
mattb00ne has joined #beagle
mattb0ne has quit [Ping timeout: 265 seconds]
<set_>
well, I can say that the job source kills the PSU and then turns it back on only...
<set_>
Oh. I can excite the motor or release its state to "free" only so far w/ three GPIOs from the BBBW.
<set_>
Odd? Yeppers. What is even more odd is the fact that it almost works and I have no understanding of this IC and circuitry.
<set_>
Anyway, until tomorrow!
brook has joined #beagle
mattb00ne has quit [Ping timeout: 276 seconds]
buzzmarshall has quit [Quit: Konversation terminated!]
brook has quit [Remote host closed the connection]
brook has joined #beagle
brook has quit [Ping timeout: 248 seconds]
Guest59 has joined #beagle
Guest59 has quit [Quit: Client closed]
Shadyman has joined #beagle
mattb00ne has joined #beagle
set_ has quit [Remote host closed the connection]
mattb00ne has quit [Ping timeout: 265 seconds]
set_ has joined #beagle
rob_w has joined #beagle
Stat_headcrabed has joined #beagle
Posterdati has quit [Ping timeout: 240 seconds]
otisolsen70 has joined #beagle
Stat_headcrabed has quit [Quit: Stat_headcrabed]
Posterdati has joined #beagle
mattb00ne has joined #beagle
mattb00ne has quit [Ping timeout: 256 seconds]
russ has quit [Ping timeout: 246 seconds]
russ has joined #beagle
mattb00ne has joined #beagle
Guest663 has joined #beagle
<Guest663>
Hi! I work at the EMS company NOTE in Sweden. We produce for Ferroamp, whose application is a smart electricity control system. Ferroamp uses around 3K units of 102110420 from BeagleBoard per year. Today we buy these from Farnell. Can you recommend some other distributors as well? Many thanks. Best regards, Jessica
ft has quit [Quit: leaving]
Guest663 has quit [Quit: Client closed]
Guest6 has joined #beagle
<Guest6>
Hi! I work at the EMS company NOTE in Sweden. We produce for Ferroamp, whose application is a smart electricity control system. Ferroamp uses around 3K units of 102110420 from BeagleBoard per year. Today we buy these from Farnell. Can you recommend some other distributors as well? Many thanks. Best regards, Jessica
<zmatt>
please don't repeatedly repost your question
<zmatt>
this is a community chat, not a sales channel. you can find a list of distributors under "Purchase" on the bbb's product page: https://beagleboard.org/black
hays has quit [Read error: Connection reset by peer]
<zmatt>
lol, looks like there isn't... it's possible the BBAI64 image works for it but the one on latest-images doesn't explicitly mention it so maybe it would be better to use the latest testing image which does explicitly mention BeaglePlay support: https://forum.beagleboard.org/t/debian-11-x-bullseye-monthly-snapshots-arm64/32318
<jkridner>
i can link to it via files.beagle.cc. i will move all the old image links there and take down the old file server.
mattb00ne has joined #beagle
<jkridner>
shoragan: the forum does have a link to the image on rcn-ee.net
<oxff>
Hi!
<oxff>
I am here together with my colleague, LBARROS.
<oxff>
We are working together on a project and need some help. Is there someone here with experience using the ethernet interface of the BBE configured for 1Gbps?
<LBARROS>
Basically, we want to transfer at least 256 Mbit/s over the Ethernet interface.
<bradfa>
oxff LBARROS: what's the "BBE"? Beaglebone ???? blue? white? black? green? none of those start with "E"
<oxff>
BeagleBone Enhanced
<bradfa>
oxff: it should be pretty easy to get 256Mb/s transmit or receive on that board. Are you not able to when using iperf?
<bradfa>
do you have a 1Gb/s link between your BBE and another device, like a desktop PC?
<oxff>
Yes. Both the PC and the BBE have their ethernet interfaces configured to use 1Gbps.
<oxff>
We used more than one host (PC) to make this test.
<oxff>
And the results were the same.
<LBARROS>
@bradfa We used iperf3 to test
<zmatt>
oxff: you didn't mention what results you got
mattb00ne has quit [Ping timeout: 255 seconds]
<zmatt>
it would also be useful to know what kernel you're using, e.g. I've observed much poorer network performance on an -rt kernel compared to a non-rt one (though this was a long time ago on an old kernel, haven't done anything with -rt lately)
<oxff>
root@beaglebone:~# uname -a
<oxff>
Linux beaglebone 4.9.78-ti-r94 #1 SMP PREEMPT Fri Jan 26 21:26:24 UTC 2018 armv7l GNU/Linux
<zmatt>
wow that's really old
<zmatt>
do note that kernel 4.9 is no longer maintained in mainline at least (LTS support ended this year)
<oxff>
We have used this because we need a device driver based on BeagleLogic, and this kernel supports that driver.
<oxff>
We have read in the datasheet from Texas Instruments, the manufacturer of the AM335x chip, that the MTU can be raised from the usual 1500 up to a maximum of 2016.
<zmatt>
oxff: the current beaglelogic driver should work on kernel 5.4-ti at least
<zmatt>
and 5.10-ti
<bradfa>
LBARROS: use iperf2 not iperf3. iperf3 is quite horrible at performance.
<zmatt>
lol, how'd they manage to do that
<bradfa>
zmatt: iperf3 is not so much an improved version of iperf2 but seemingly a rewrite by a bunch of people who misunderstood the point
<oxff>
bradfa, we used both versions, iperf and iperf3, with the same results. We didn't know about iperf2.
<bradfa>
LBARROS: you should be able to get into the 800 or 900Mb/s range with iperf2. 250Mb/s should be no problem. If you need to go faster, you can try using a larger ethernet MTU. Normally ethernet MTU is 1500 but if all your devices on the link support it, an MTU of 9000 may help reduce overhead
<bradfa>
ah, sorry, just read backscroll about MTU
<bradfa>
oxff: what performance are you seeing?
<oxff>
root@beaglebone:~# iperf -version
<oxff>
iperf version 2.0.9 (1 June 2016) pthreads
<oxff>
root@beaglebone:~#
<oxff>
We used iperf version 2.
<LBARROS>
We used both version of iperf
<zmatt>
yeah mtu limit is 2016 (excluding vlan tag, if any) for cpsw (on am335x at least)
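(For reference, the interface MTU can be changed with iproute2, e.g. ip link set eth0 mtu 2016; the interface name here is an assumption.)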
<bradfa>
zmatt: that's unfortunate :( is it the same on am57xx?
<zmatt>
checking
<bradfa>
I know on am57xx that >900Mb/s with iperf is definitely possible, even with 1500 MTU. I've done it.
<oxff>
Ok... it uses two Arm Cortex-A15 CPUs with a 1.5 GHz clock.
<zmatt>
bradfa: similar limits yeah, it says 2020 bytes but that may be including the vlan tag, for mcu_cpsw they say max 2024 though if that's true I really wonder how
<bradfa>
weird, I mean it's probably no big deal, at 1Gb/s usually 1500 MTU is still plenty
<oxff>
The BBE has a chip with a single-core Cortex-A8 running at 1 GHz.
<bradfa>
oxff: what performance are you getting now with iperf?
<zmatt>
oxff: I don't think you've mentioned what results you got?
<oxff>
tcpdump monitored the TCP packets; each message was 1476 bytes long, and the timestamps in the PCAP file show a difference of 52 microseconds between consecutive messages.
<bradfa>
oxff: what performance are you getting now with iperf?
<zmatt>
oxff: wait, you ran tcpdump on the bbe while testing? that sounds like it could likely cause a serious performance hit
<LBARROS>
We got 135 Mb/s and 95% of CPU usage
<bradfa>
LBARROS: while running tcpdump at the same time?
<oxff>
the first limitation seems to be the MTU size... and the second seems to be the clock rate of the MPU.
<LBARROS>
Tcpdump was used on the PC side
<zmatt>
ok
<zmatt>
it would be interesting to try the --zerocopy flag of iperf3 to see how much time it's spending on copying the packet data
<oxff>
With zerocopy, CPU consumption decreased by 3%.
<zmatt>
it might be interesting to test udp with varying packet sizes to see whether the performance is limited by bandwidth or by packet rate
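One way to run that comparison with iperf3 (addresses are placeholders, and the receiving side must already be running iperf3 -s):

    iperf3 -c 192.168.1.200 -u -b 400M -l 1400   # few large datagrams
    iperf3 -c 192.168.1.200 -u -b 400M -l 200    # many small datagrams

If throughput drops for the small datagrams at similar cpu load, the limit is per-packet cost rather than raw bandwidth.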
<oxff>
I will send the tcpdump result with MTU equal to 1500.
<zmatt>
I don't see how that's of any use?
<oxff>
995944 21:04:16.007901 IP 192.168.1.106.44982 > 192.168.1.200.5201: UDP, bad length 8192 > 1472
<oxff>
995945 21:04:16.007901 IP 192.168.1.106 > 192.168.1.200: udp
<oxff>
995946 21:04:16.007901 IP 192.168.1.106 > 192.168.1.200: udp
<oxff>
995947 21:04:16.007901 IP 192.168.1.106 > 192.168.1.200: udp
<oxff>
995948 21:04:16.007901 IP 192.168.1.106 > 192.168.1.200: udp
<oxff>
995949 21:04:16.007901 IP 192.168.1.106 > 192.168.1.200: udp
<oxff>
995950 21:04:16.007901 IP 192.168.1.106.44982 > 192.168.1.200.5201: UDP, bad length 8192 > 1472
<zmatt>
please don't paste multi-line stuff into chat
<zmatt>
it's spammy
<bradfa>
oxff: you are doing a UDP test or a TCP test?
<oxff>
iperf showed it sending at more than 400 Mb/s, but tcpdump on the PC showed this.
<oxff>
Sorry about this zmatt.
<bradfa>
oxff: iperf can send UDP packets faster than line rate, they just get dropped. A TCP test will better show what the link is actually capable of supporting
<zmatt>
in the future if you do want to share multi-line text output then use a paste service like pastebin.com
<zmatt>
bradfa: uhh they shouldn't get dropped
<zmatt>
I think?
<bradfa>
zmatt: it's UDP, nothing is guaranteed
<zmatt>
yeah sure, but does the kernel actually drop them instead of marking the socket as non-writable?
<bradfa>
iperf also gets weird when you try to do high rate UDP tests, in my experience
<oxff>
I used a point-to-point connection...
<zmatt>
bradfa: if the send speed is cpu-limited (which is what's implied by 100% cpu load) then I wouldn't expect to see any packet drops
<zmatt>
packet drops might make sense if the link is saturated, but that's not remotely the case
<bradfa>
oxff: tcpdump logs won't be helpful here. You need to figure out why your system is using maximum CPU for such a low ethernet datarate. Either you're running iperf incorrectly, running the CPU at a low speed, have other CPU consuming apps running, or some other reason why you're not getting the expected datarate.
<zmatt>
or the kernel is just spending a lot of time per packet for some reason
<oxff>
Ok! I will try to update the kernel.
<bradfa>
oxff: you can't increase the MTU due to am335x limitations, so that's not a solution. You can try other settings in iperf to see what happens, which is what I would suggest first. You will need to experiment with how you run iperf to try to figure out what's going on
<zmatt>
well increasing the MTU could in theory help if the kernel is spending a lot of time on each packet, though if that's the case I'd be more interested in _why_ the kernel is spending so much time per packet
<zmatt>
maybe there's a tool that can send packets at a fixed rate so you can measure how cpu load scales as a function of packet rate and packet size
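No specific tool is named here; as a minimal sketch, a fixed-rate UDP sender in Python might look like this (destination, size, and rate are placeholders; python itself won't reach very high packet rates, but it can show the scaling trend at modest rates):

    import socket
    import time

    DEST = ('192.168.1.200', 5201)   # placeholder destination
    SIZE = 1400                      # payload bytes per packet (vary between runs)
    RATE = 10000                     # packets per second (vary between runs)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b'\x00' * SIZE
    interval = 1.0 / RATE
    next_send = time.monotonic()

    while True:
        sock.sendto(payload, DEST)
        # pace the sends, then watch how cpu load scales with SIZE vs RATE
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)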
<zmatt>
I also wonder if there's a way to get performance stats from EMIF... it's conceivable the limited memory bandwidth on the am335x might be part of the problem, but I'm not sure
<zmatt>
that could cause high cpu load if the cpu is just spending a lot of time waiting on cache fills/evictions
<bradfa>
zmatt: the EMIF in am335x should have performance metric registers, most TI SOC have them
<zmatt>
yeah but then the question is what info is available exactly and how hard is it to get your hands on it in practice... possible /dev/mem pokery might be needed
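As a sketch of that /dev/mem pokery (the base address and register offsets below are assumptions recalled from the EMIF4D chapter of the AM335x TRM and must be verified against it before use):

    import mmap
    import os
    import struct

    EMIF_BASE = 0x4C000000   # AM335x EMIF0 base address (assumption, check the TRM)
    PERF_CNT_1 = 0x80        # performance counter 1 (assumption)
    PERF_CNT_2 = 0x84        # performance counter 2 (assumption)

    fd = os.open('/dev/mem', os.O_RDWR | os.O_SYNC)
    mem = mmap.mmap(fd, 4096, offset=EMIF_BASE)

    def reg(offset):
        # read a 32-bit little-endian register from the mapped window
        return struct.unpack_from('<I', mem, offset)[0]

    print('PERF_CNT_1: 0x%08x' % reg(PERF_CNT_1))
    print('PERF_CNT_2: 0x%08x' % reg(PERF_CNT_2))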
<oxff>
I will get performance stats from EMIF
<zmatt>
ambitious
<zmatt>
oh actually never mind, I think I may have confused bits and bytes
<zmatt>
memory bandwidth should be plenty
<zmatt>
like I said earlier I'd probably first try to figure how much the cpu load is scaling with packet rate vs packet size, to see how much time is being spent on copying data versus how much time is being spent per packet
<zmatt>
the fact that -zerocopy had little influence suggests that the limiting factor is packet rate
<zmatt>
like, your results would suggest it's spending somewhere around 100k cpu cycles per packet... that just seems like a ridiculous amount of cpu time to spend
<zmatt>
then again, it's the linux kernel, who knows what it's doing
<zmatt>
like, I once measured the latency of a gpio interrupt delivered to userspace using uio, i.e. basically just the time for linux to take the interrupt and do a context switch to my process, and iirc that was tens of microseconds
<zmatt>
which is also a mind-boggling amount of time for linux to spend on doing that
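For context, the uio wait looks roughly like this (device path is an assumption, and with uio_pdrv_genirq the interrupt may also need re-enabling between reads):

    import os
    import struct
    import time

    fd = os.open('/dev/uio0', os.O_RDONLY)   # assumed uio device for the gpio irq

    # trigger the gpio edge from the test setup and timestamp it, then wait:
    # a blocking read returns a 4-byte interrupt count once the irq is delivered
    t0 = time.monotonic_ns()
    count = struct.unpack('<I', os.read(fd, 4))[0]
    t1 = time.monotonic_ns()
    print('waited %d us, irq count %d' % ((t1 - t0) // 1000, count))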
<zmatt>
anyway, I should get back to work, afk
<LBARROS>
Thank you zmatt for your help
<oxff>
We will do our homework! Thanks zmatt!
<zmatt>
jkridner: the beagleplay design repository doesn't actually contain the design files, those are stored in LFS and not actually in the repository
<zmatt>
(unlike all previous design repos)
<zmatt>
while putting binary files outside the git repo makes sense in some cases (though most of those are kinda abusing git anyway), in this case I think they should definitely be part of the repository itself. you want them distributed as part of the repository and versioned as part of the repository, since that's the repository's sole purpose
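(For anyone cloning it: fetching LFS-stored files requires git-lfs, e.g. git lfs install once and git lfs pull after cloning.)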
<zmatt>
jkridner: I think the docs are rather vague about the usb-c port, it should be more clearly indicated to be a device-role-only port (since it has no ability to negotiate or switch to host role)
<zmatt>
and of course tying off CC like this isn't allowed if you want to draw more than 500 mA but I guess people have just given up on actually implementing the USB-C spec
brook has joined #beagle
ft has joined #beagle
<zmatt>
wait, the beagleplay has no coexistence signalling between the wifi and BLE controllers that both use 2.4 GHz? uhhhh that can't be good
<jkridner>
BBAI gets much faster Eth perf than BBE.
<jkridner>
and AI64 and Play.
<jkridner>
er, those are faster than BBE.
<zmatt>
yeah but I wonder what's causing that
<zmatt>
like, is there a hardware reason or is linux just that inefficient
<jkridner>
Play USB is data device mode only. Possible to switch to host mode explicitly, but not via spec. Power has to be from type-C, not legacy type-A.
mattb00ne has joined #beagle
<zmatt>
jkridner: since there are two different single-pair 10Mbps ethernet standards (10BASE-T1S and 10BASE-T1L) it's probably a good idea to explicitly mention the beagleplay's port is 10BASE-T1L
<zmatt>
in feature overviews and such
<zmatt>
it's unfortunate that TI has no multi-protocol phy for single-pair ethernet, just separate PHYs for 10BASE-T1L, 100BASE-T1, 1000BASE-T1 (none for 10BASE-T1S it seems)
mattb00ne has quit [Ping timeout: 276 seconds]
<zmatt>
but I guess most applications other than experimentation don't have a need for it
Stat_headcrabed has quit [Quit: Stat_headcrabed]
mattb00ne has joined #beagle
vvn has quit [Quit: WeeChat 3.8]
mattb00ne has quit [Ping timeout: 252 seconds]
vvn has joined #beagle
vagrantc has joined #beagle
brook has quit [Remote host closed the connection]
LBARROS has quit [Quit: Client closed]
mattb00ne has joined #beagle
buzzmarshall has joined #beagle
oxff has quit [Quit: Client closed]
florian has quit [Quit: Ex-Chat]
brook has joined #beagle
buzzmarshall has quit [Ping timeout: 276 seconds]
buzzmarshall has joined #beagle
brook has quit [Remote host closed the connection]
brook has joined #beagle
brook has quit [Remote host closed the connection]
brook has joined #beagle
brook has quit [Remote host closed the connection]
Guest18 has joined #beagle
brook has joined #beagle
buzzm has joined #beagle
buzzmarshall has quit [Ping timeout: 268 seconds]
buzzm has quit [Quit: Konversation terminated!]
mattb00ne has quit [Ping timeout: 276 seconds]
Guest18 has quit [Ping timeout: 260 seconds]
mattb00ne has joined #beagle
brook has quit [Remote host closed the connection]
brook has joined #beagle
oxff has joined #beagle
brook_ has joined #beagle
mattb00ne has quit [Ping timeout: 276 seconds]
mattb00ne has joined #beagle
<oxff>
Please, where can I find the sources that were used to compile the AM3358 Debian 10.3 2020-04-06 1GB SD console image?
brook has quit [Ping timeout: 276 seconds]
brook_ has quit [Ping timeout: 268 seconds]
mattb00ne has quit [Ping timeout: 268 seconds]
<oxff>
I found them!
oxff has quit [Quit: Client closed]
mattb00ne has joined #beagle
brook has joined #beagle
jfsimon1981_c has quit [Remote host closed the connection]