Xilinx

李泽坚

[Q&A]

A few issues with the GTY 100G Aurora IP


Vivado: 2016.4
FPGA: xcvu190

Hello,

I'm experiencing a few problems with the Aurora 64B66B IP (v11.1) between two xcvu190 platforms. The IP is configured for full-duplex, framing and 100G using x4 GTY lanes. All links described below are stable, and the issues are related to the AXIS user interface. In addition, all issues are observed when the design meets timing and when AXIS ILAs are used. Adding ILA(s) tends to cause timing violations, since the Aurora AXIS interface is clocked at 400 MHz.
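A 4-lane, framing, full-duplex 100G configuration of this core can be expressed in Tcl roughly as follows. This is only a sketch: the instance name is a placeholder and the CONFIG property names and values are assumptions (the per-lane line rate in particular is assumed to be 25.78125 Gb/s for the 322.265625 MHz reference clock), so verify them with report_property on the actual IP:

# Tcl sketch (block design flow), not the exact script used here.
# Property names are assumptions; check with:
#   report_property [get_bd_cells aurora_100g]
create_bd_cell -type ip -vlnv xilinx.com:ip:aurora_64b66b:11.1 aurora_100g
set_property -dict [list \
  CONFIG.C_AURORA_LANES     {4} \
  CONFIG.C_LINE_RATE        {25.78125} \
  CONFIG.C_REFCLK_FREQUENCY {322.265625} \
  CONFIG.interface_mode     {Framing} \
  CONFIG.dataflow_config    {Duplex} \
] [get_bd_cells aurora_100g]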

At first I noticed that the AXIS "tlast" signal and the last data word in the packet were not propagated to the receiving partner, which was unexpectedly resolved by enabling CRC error detection. I've used the Aurora IP for 10G and 40G applications in the past (XC7VX690T, KU060), and I've never observed this behavior. I am using some custom logic for packet framing, but I'm also leveraging Xilinx's AXIS IP for width conversion, clock crossing and packet-mode FIFOs. In any case, when CRC is enabled, AXIS packets (tlast and last dword) are received as expected. Is this a known issue? I've confirmed this behavior using Vivado 2016.3 and 2016.4.
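The width conversion, clock crossing and packet-mode buffering in front of the Aurora AXIS ports are stock Xilinx AXIS IP. A rough Tcl sketch of that chain is below; the instance names, data widths and CONFIG properties are placeholders and assumptions (for example, FIFO_MODE 2 is assumed to select packet mode on axis_data_fifo), not the exact settings used in this design:

# Tcl sketch of the AXIS helper chain. IP versions are omitted (append
# :<version> if your flow requires it) and CONFIG names are assumptions.
create_bd_cell -type ip -vlnv xilinx.com:ip:axis_dwidth_converter axis_width_0
create_bd_cell -type ip -vlnv xilinx.com:ip:axis_clock_converter  axis_cdc_0
create_bd_cell -type ip -vlnv xilinx.com:ip:axis_data_fifo        axis_pkt_fifo_0
# Example up-conversion to the 256-bit Aurora user interface width.
set_property -dict [list \
  CONFIG.S_TDATA_NUM_BYTES {16} \
  CONFIG.M_TDATA_NUM_BYTES {32} \
] [get_bd_cells axis_width_0]
# Packet-mode FIFO (assumed: FIFO_MODE 2 selects packet mode).
set_property -dict [list \
  CONFIG.FIFO_DEPTH {16} \
  CONFIG.FIFO_MODE  {2} \
] [get_bd_cells axis_pkt_fifo_0]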

My second problem is a bit more complicated and might be related to the first. My test application involves sending AXIS packets (4x256-bit, 3.4 Gb/s) from two platforms (A0 & A1) to Platform B using 100G Aurora links. Platform B aggregates the packets using an AXIS interconnect and broadcasts the packets back to Platforms A0 and A1. When observing the received packets on either of the A platforms, I occasionally noticed that data word(s) from one packet are removed and inserted into a neighboring packet. Sometimes the "reduced" packet only contains the last data word with "tlast," and sometimes it's only missing 1 data word (tlast is always present). I'm not observing data loss, so there seems to be a problem with link-layer framing or some other internal function of the Aurora IP. Both Aurora IPs use the same reference clock (322.265625 MHz), and I'm properly handling clock crossing in the design relative to Aurora's user clock. In addition, I've probed the AXIS bus throughout Platform B's design, and all packets seem to be formatted properly as they enter the Aurora AXIS TX port. Platform B's data path resembles the following:

Aurora[0] RX -\                             /-> INTC -> Aurora[0] TX
               +-> INTC -> AXIS Broadcast -+
Aurora[1] RX -/                             \-> INTC -> Aurora[1] TX

Any help would be appreciated
Thanks!

Replies (4)

卢鉴冰

2018-9-28 11:46:36
This was resolved by setting "Arbitrate on maximum number of transfers" to zero in the AXIS interconnect.
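For anyone scripting the block design, the equivalent change should be a single property on the interconnect instance. Treat this as a sketch: the instance name is a placeholder, and ARB_ON_MAX_XFERS is only my assumption for the CONFIG property behind the "Arbitrate on maximum number of transfers" GUI option, so confirm the exact name first:

# Confirm the property name on your instance before applying:
#   report_property [get_bd_cells axis_interconnect_0]
set_property CONFIG.ARB_ON_MAX_XFERS {0} [get_bd_cells axis_interconnect_0]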
 

卢鉴冰

2018-9-28 12:05:59
I managed to isolate my packet corruption issue to the AXIS Interconnect IP on Platform B. As described in my previous post, the INTC is used to aggregate packets (4x256-bit, 3.4 GB/s) from two 100G Aurora x4 GTY ports. My "simplified" block design is shown below, and the INTC is configured to arbitrate on "tlast", one max transfer and zero low tvalid cycles. In addition, I've enabled packet mode with a FIFO depth of 16, and AXIS register slices on all master/slave ports.

[screenshot: simplified block design]
After using a TSM trigger, I was able to capture the corrupt output packet from the ILA. Notice that the first packet has 5 dwords and the second packet has 3. This behavior also changes as described in the previous post.

[screenshot: ILA capture of the corrupt output packet]
Here is a capture of the expected output:

[screenshot: ILA capture of the expected output]
I've tried numerous configurations, but the INTC's arbiter doesn't seem to be behaving properly. I was even able to reproduce this by porting the block design to HDL. The design does have a few timing violations, but they are associated with the ILA. I'm running out of ideas...
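For reference, the interconnect configuration described above would look roughly like the following in Tcl. Every CONFIG property name here is an assumption about how the GUI options map to parameters, and the instance name is a placeholder, so treat it as a sketch of the settings rather than the literal script:

# 2 slave ports aggregating into 1 master port, arbitrate on tlast,
# max transfers = 1, low-tvalid cycles = 0, per-port packet-mode FIFOs of
# depth 16 (assumed: FIFO_MODE 2 = packet mode), register slices on all
# ports. Property names are assumptions, not verified.
set_property -dict [list \
  CONFIG.NUM_SI            {2} \
  CONFIG.NUM_MI            {1} \
  CONFIG.ARB_ON_TLAST      {1} \
  CONFIG.ARB_ON_MAX_XFERS  {1} \
  CONFIG.ARB_ON_NUM_CYCLES {0} \
  CONFIG.S00_FIFO_DEPTH    {16} \
  CONFIG.S01_FIFO_DEPTH    {16} \
  CONFIG.S00_FIFO_MODE     {2} \
  CONFIG.S01_FIFO_MODE     {2} \
  CONFIG.S00_HAS_REGSLICE  {1} \
  CONFIG.S01_HAS_REGSLICE  {1} \
  CONFIG.M00_HAS_REGSLICE  {1} \
] [get_bd_cells axis_interconnect_0]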
 
Any help would be appreciated
Thanks!

卢鉴冰

2018-9-28 12:20:07
This was resolved by setting "Arbitrate on maximum number of transfers" to zero in the AXIS interconnect.
 

李玉华

2018-9-28 12:28:05
Hi clarkm2,
 
This is very interesting. Thank you very much for your research. I am afraid I cannot help right now, but maybe in the near future.
 
I am actually building an equivalent 100G interface based on the Aurora 64b66b IP myself (on a VCU118 in my case). I have just finished the coding, and I also include an AXI Interconnect, although it is not connected to the Aurora interface directly. I connect my custom Aurora interface to a PCIe endpoint (DMA/Bridge Subsystem for PCIe 3.0 set to "AXI Bridge" mode) through this AXI Interconnect.
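For context, the PCIe side of that path is the stock DMA/Bridge Subsystem switched into bridge mode; a rough Tcl sketch follows (the instance name is a placeholder, the IP version is omitted, and functional_mode is the property name I would expect rather than one I have verified):

# DMA/Bridge Subsystem for PCI Express in "AXI Bridge" mode. Property name
# and value are assumptions; check with report_property on the instance.
create_bd_cell -type ip -vlnv xilinx.com:ip:xdma xdma_0
set_property CONFIG.functional_mode {AXI_Bridge} [get_bd_cells xdma_0]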
 
I do use CRC in my interface, but I will check whether anything behaves the same way as in your case. I also implemented Aurora interfaces in the past (but this is my first time with 64b66b), so I am quite surprised by those bugs.
 
It would be more than nice to have some feedback from Xilinx about this.
 
Regards,
Sam