ChanServ changed the topic of #armlinux to: ARM kernel talk [Upstream kernel, find your vendor forums for questions about their kernels] | https://libera.irclog.whitequark.org/armlinux
heat has quit [Ping timeout: 256 seconds]
punit has quit [Ping timeout: 268 seconds]
Pali has quit [Ping timeout: 260 seconds]
apritzel has quit [Ping timeout: 256 seconds]
elastic_1 has joined #armlinux
elastic_1 is now known as elastic_dog
elastic_dog is now known as Guest7946
chipxxx has joined #armlinux
chipxxx has quit [Ping timeout: 256 seconds]
naoki has quit [Quit: naoki]
amitk has joined #armlinux
naoki has joined #armlinux
cbeznea has joined #armlinux
amitk_ has joined #armlinux
marcan has quit [Ping timeout: 260 seconds]
mwalle has quit [Ping timeout: 260 seconds]
sven has quit [Ping timeout: 260 seconds]
mwalle has joined #armlinux
sven has joined #armlinux
marcan has joined #armlinux
hanetzer has quit [Ping timeout: 256 seconds]
hanetzer has joined #armlinux
hanetzer has quit [Ping timeout: 256 seconds]
hanetzer has joined #armlinux
hanetzer has quit [Changing host]
hanetzer has joined #armlinux
guillaume_g has joined #armlinux
cleger has joined #armlinux
mcoquelin has quit [Ping timeout: 256 seconds]
cbeznea has quit [Quit: Leaving.]
iivanov has joined #armlinux
apritzel__ has joined #armlinux
viorel_suman has joined #armlinux
mcoquelin has joined #armlinux
<arnd>
milkylainen: yes, drivers need to call dma_set_mask() if they support a larger address range
<arnd>
there may also be limitations on the bus, so if you are missing dma-ranges properties, or they define a translation that doesn't contain all of RAM, calling dma_set_mask() may still not result in high DMA
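A minimal sketch of the pattern arnd describes (the driver name and error handling are hypothetical; the DMA core may still refuse the mask, and dma-ranges can cap what is reachable at map time):

    #include <linux/dma-mapping.h>
    #include <linux/platform_device.h>

    static int mydrv_probe(struct platform_device *pdev)
    {
            struct device *dev = &pdev->dev;

            /* Claim 64-bit DMA for both streaming and coherent use. */
            if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                    /* Fall back to the default 32-bit mask. */
                    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                            return -EIO;
            }
            return 0;
    }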
<milkylainen>
arnd: but the default is always 32? for endpoints on pcie? I mean, there is no way of telling if the device can do more, by probing extension registers or something like that?
<milkylainen>
I just know basic stuff unfortunately. :\ I was a bit surprised. I was assuming that setting the DMA mask would be part of walking the bus for devices.
<milkylainen>
Sure, the device driver can override that, but I'd have expected the default to be closer to the device's actual capability.
cbeznea has joined #armlinux
sszy has joined #armlinux
<milkylainen>
So many questions. :) Do devices really care? Or are they hoping for an IOMMU and a single bus cycle (to map to 64-bit) instead of dual address cycles?
<sven>
there are devices that only support 32bit DMA even if they are on a 64bit capable bus
<sven>
essentially you can only do 64bit DMA if the bus and the iommu and the device (and maybe something else that's in the way) all support it. dma_set_mask() tells the DMA API that your device supports that mask.
<milkylainen>
sven: Yes. But I was expecting the device to be able to expose its DMA capability? To me it looks like the mask information is "We know the device can do this so let's set it..."
<sven>
some devices expose it somewhere in their MMIO
luispm has quit [Quit: Leaving]
headless has joined #armlinux
<milkylainen>
sven: ok.
<milkylainen>
Would be nice if the pci standard had exposed it somewhere in status or config regs.
luispm has joined #armlinux
cleger has quit [Quit: Leaving]
Pali has joined #armlinux
amitk__ has joined #armlinux
amitk_ has quit [Ping timeout: 260 seconds]
headless has quit [Quit: Konversation terminated!]
amitk_ has joined #armlinux
amitk__ has quit [Ping timeout: 252 seconds]
<robmur01>
milkylainen: it's not necessarily one single value though - we already have two separate masks in Linux because some devices have mixed capabilities, and there have also been drivers that adjust one mask between allocations/mappings to cope with different requirements of different buffers/descriptors/etc.
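A sketch of the two-mask point, for a hypothetical device whose coherent (descriptor) allocations must stay below 4 GB while streaming (data) mappings can use 64-bit addresses:

    #include <linux/dma-mapping.h>

    static int mydev_setup_dma(struct device *dev)
    {
            int ret;

            /* Streaming mask: governs dma_map_single()/dma_map_sg(). */
            ret = dma_set_mask(dev, DMA_BIT_MASK(64));
            if (ret)
                    return ret;

            /* Coherent mask: governs dma_alloc_coherent(). */
            return dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    }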
hanetzer has quit [Quit: WeeChat 3.6]
hanetzer has joined #armlinux
<robmur01>
PCI cares about whether the device sends a 32-bit or a 64-bit address for a transaction, but not how many of those bits are significant beyond that
<milkylainen>
robmur01: Do you know if there are hard requirements for pcie compatible devices to do > 32-bit dma?
<bjdooks>
iirc pcie yes, but pcie<>pci bridges exist
<milkylainen>
robmur01: I was thinking that it ends up with modern devices always expanding the mask anyway. If the standard had exposed the device's DMA capabilities, something sane could have been done at probe time, instead of every driver redoing all of the DMA setup. I guess most of the DMA setup in drivers is very normal, duplicated code?
<milkylainen>
Like this amd gpu I have. Default 32-bit mask unless a driver says otherwise.
<milkylainen>
Maybe there is no better way, because the standard does not expose anything that can be used to do sane dma defaults?
<milkylainen>
It always has to end up in the hands of the driver.
<milkylainen>
Which then always ends up with a potential collision with the reality of the physical machine. :)
<milkylainen>
It ends up colliding with reality either way I guess. Just with a lot of duplicated driver code.
elastic_dog has joined #armlinux
elastic_1 has joined #armlinux
elastic_dog is now known as Guest8966
elastic_1 is now known as elastic_dog
<robmur01>
I believe 32-bit PCIe devices are allowed and do exist (and there are certainly some nominally-64-bit ones whose handling of >32-bit addresses is just broken)
<robmur01>
the only thing you can detect generically is whether a device supports 64-bit MSI addresses or not, but that isn't necessarily indicative of its DMA capability in general
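The MSI hint robmur01 mentions can be read generically from config space; a sketch (and, as noted, not indicative of general DMA capability):

    #include <linux/pci.h>

    static bool msi_addr_is_64bit(struct pci_dev *pdev)
    {
            u16 flags;
            u8 pos = pci_find_capability(pdev, PCI_CAP_ID_MSI);

            if (!pos)
                    return false;
            pci_read_config_word(pdev, pos + PCI_MSI_FLAGS, &flags);
            return flags & PCI_MSI_FLAGS_64BIT;
    }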
<robmur01>
I can't see that there could be any standard way to describe that, say, a device supports 44-bit DMA addresses for its control ring buffer but 64-bit addresses for data descriptors, because those concepts are defined by that device and its driver
<biju>
we have 34-bit DMA capable devices in the RZ/V2{M,MA} series; currently we use dma_set_mask() for 34-bit DMA operation. But unfortunately the drivers are designed for 32-bit dma map/unmap. How do we extend them to 34-bit dma map/unmap()? Currently we use hacks for the unmap operation.
torez has joined #armlinux
elastic_dog has quit [Read error: Connection reset by peer]
elastic_dog has joined #armlinux
<robmur01>
biju: sorry, I don't follow... dma_set_mask() is for the streaming DMA API which *is* dma_{map,unmap}_*() - if you're already setting a 34-bit mask then it will already allow mappings to return up to 34-bit DMA addresses (assuming dma_addr_t is 64-bit)
<robmur01>
if a driver is stuffing a dma_addr_t into a u32 somewhere then the only answer is "don't do that"
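A sketch of the streaming API usage robmur01 is describing; the point is that the returned dma_addr_t must be kept whole, never squeezed through a u32 (function and parameter names are hypothetical):

    #include <linux/dma-mapping.h>

    static dma_addr_t map_buf(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t addr;

            addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, addr))
                    return 0;
            /* With a 34-bit mask set, addr may lie above 4 GB: keep it
             * as dma_addr_t and pass the same value to unmap. */
            return addr;
    }

    static void unmap_buf(struct device *dev, dma_addr_t addr, size_t len)
    {
            dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
    }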
<biju>
I need to hack it to BIT(32) | le32_to_cpu(desc->dptr) to make it work
<sven>
"don't do that" ;)
<biju>
what is the best way to handle both cases, one for 32-bit machines and another for >32 bits?
<sven>
le32_to_cpu sounds like your device actually only reads 32bit from dptr - so how can it see the extra two bits?
amitk_ has quit [Ping timeout: 260 seconds]
<biju>
sven: you mean to support >32 bit just use desc->dptr instead??
<robmur01>
um, yeah, AFAICS the DMA addresses are always truncated by cpu_to_le32() to begin with, so I can't see how it works at all.
<sven>
^-- that
<robmur01>
if your DMA descriptor can only encode a 32-bit address to pass to the device, then you only support 32-bit DMA
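What that truncation looks like, with hypothetical descriptor layouts:

    #include <linux/types.h>

    struct my_desc32 {
            __le32 dptr;    /* bits 33:32 of a 34-bit address cannot fit */
    };

    struct my_desc64 {
            __le64 dptr;    /* room for the full address */
    };

    static void fill_desc(struct my_desc32 *d32, struct my_desc64 *d64,
                          dma_addr_t addr)
    {
            d32->dptr = cpu_to_le32(addr);  /* silently drops the upper bits */
            d64->dptr = cpu_to_le64(addr);  /* preserves a >32-bit address */
    }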
<biju>
But then we need to use memory beyond the 32-bit address range, e.g. reg = <0x1 0x80000000 0x0 0x80000000>;
<biju>
this is the memory map of the system. With dma_set_mask() at 34 bits and the hacks in unmap, it works fine.
xvmt has quit [Remote host closed the connection]
<robmur01>
well then presumably you must have an IOMMU or some kind of dma-ranges offset in the interconnect
<robmur01>
note that the DMA mask represents the size of address that the device itself can emit, not whatever it might get translated to downstream
xvmt has joined #armlinux
cbeznea has quit [Ping timeout: 268 seconds]
<biju>
Not an IOMMU, but there is an IP that has a bank-switching register to extend the two higher-order bits of the addresses output to the ICB (common to the AR and AW channels), making them 34 bits.
amitk_ has joined #armlinux
rvalue has quit [Read error: Connection reset by peer]