HDMI Intel FPGA IP as Receiver with AXIS fails in Analysis & Synthesis of Quartus 24.3 Pro
Hi support,

When creating a design with "HDMI Intel FPGA IP" configured with these significant values:

- Direction: Receiver
- Enable Active Video Protocol: AXIS-VVP Full
- Support FRL: Unticked (disabled)

Quartus 24.3 Pro Build 212 (newest) fails in Analysis & Synthesis with the errors:

Error(13224): Verilog HDL or VHDL error at hdmi_rx_core_altera_hdmi_1975_ozylggi.v(771): index 2 is out of range [1:0] for 'cv_vid_de'
Error: Failed to elaborate design
Error: Flow failed: Errors generated during elaboration
Error: Quartus Prime Synthesis was unsuccessful. 3 errors, 0 warnings

Archived project is attached.

Question: Is there any known fix or workaround for this problem?

Regards
M_DK_FPGA

PS. It appears that an internal, non-designer-assigned parameter PIXELS_PER_CLOCK is set to 8 (the value used when FRL is enabled), causing an internal loop to index out of range.

Problem W/ Altera VIP Frame Buffer II
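A minimal model of the failure mode hypothesized in the PS of the HDMI IP report above. All names and widths here are illustrative, not the actual generated RTL: a per-pixel loop driven by PIXELS_PER_CLOCK indexes a vector that was sized for only 2 pixels per clock, reproducing the shape of "index 2 is out of range [1:0]".

```python
# Illustrative model (hypothetical names) of the elaboration failure: a
# per-pixel loop driven by PIXELS_PER_CLOCK indexing a signal sized for
# only 2 pixels per clock.

def drive_vid_de(pixels_per_clock: int, cv_vid_de_width: int) -> list[int]:
    """Return the indices the generate-style loop touches, raising
    IndexError as soon as the loop runs past the declared vector width."""
    cv_vid_de = [0] * cv_vid_de_width     # models 'cv_vid_de[width-1:0]'
    touched = []
    for i in range(pixels_per_clock):     # models 'for (i = 0; i < PIXELS_PER_CLOCK; i++)'
        cv_vid_de[i] = 1                  # out of range once i >= width
        touched.append(i)
    return touched

# Consistent parameterization (FRL disabled, 2 pixels/clock) elaborates fine:
print(drive_vid_de(pixels_per_clock=2, cv_vid_de_width=2))   # [0, 1]

# The reported mismatch (PIXELS_PER_CLOCK stuck at the FRL value of 8,
# vector still [1:0]) fails at index 2, matching the Quartus error:
try:
    drive_vid_de(pixels_per_clock=8, cv_vid_de_width=2)
except IndexError:
    print("index 2 is out of range [1:0]")
```

If this hypothesis is right, the workaround until a patched IP release would be any configuration in which the two parameters agree (e.g. enabling FRL, or a pixels-per-clock setting of 8), though only Intel can confirm the internal parameterization.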
I have configured an FPGA design for my Linux device that displays SDI and HDMI inputs. It was originally configured to receive YCbCr 4:2:2 video. Since I am experiencing green spill on images when trying to chroma key, I am attempting to change the input data to RGBA. As a test, I have implemented a conversion in the FPGA that handles the RGB input from the frame buffer. I believe this conversion is correct because the color values come out as expected.

FPGA design here: FB -> CSC -> CRS -> Output. This is the same for all four mixers.

I am using GStreamer to feed the color bars pattern into the pipeline:

gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! filesink location="/dev/fb3"

There are four frame buffers, located at "/dev/fb0", "/dev/fb1", and so on. Basically the issue is that these frame buffers' graphics are overlapping, and changing the memory regions in the DTS file does not fix the issue:

gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! filesink location="/dev/fb0"
gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! filesink location="/dev/fb1"
gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=1920,height=1080 ! videoconvert ! filesink location="/dev/fb3"

I can even see the bottom portion of the color bars pattern when changing the height to 720:

gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=1920,height=720 ! videoconvert ! filesink location="/dev/fb3"

The drivers being used for the frame buffer are altvipfb.c and altvipfb2-plat.c. Here is the DTS snippet of these frame buffer components and the boot logs where the frame buffers are initialized:

DTS snippet
Boot Logs

The virtual address and length seen in these boot logs do not change at all when DTS changes are made.
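One quick sanity check for the symptoms above (fb regions overlapping, and the bottom of a taller frame showing through a shorter one) is to verify that the reserved-memory regions in the DTS are actually large enough for a full frame. The base addresses below are hypothetical placeholders; substitute the "reg" values from your own DTS.

```python
# Sanity check (hypothetical base addresses) that frame buffer regions are
# large enough and do not overlap. A 1920x1080 BGR buffer needs 3 bytes/pixel.

def fb_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    return width * height * bytes_per_pixel

def regions_overlap(regions: list[tuple[int, int]]) -> bool:
    """regions: list of (base, length). True if any two regions overlap."""
    spans = sorted(regions)
    return any(spans[i][0] + spans[i][1] > spans[i + 1][0]
               for i in range(len(spans) - 1))

need = fb_bytes(1920, 1080)            # bytes per 1080p BGR frame
print(need)                            # 6220800

# Example layout with only 4 MiB between bases -- too small, so fb0 spills
# into fb1, which would look exactly like overlapping graphics:
bad = [(0x30000000, need), (0x30400000, need)]
print(regions_overlap(bad))            # True

# Spacing bases by at least one frame (e.g. 8 MiB apart) avoids the spill:
good = [(0x30000000, need), (0x30800000, need)]
print(regions_overlap(good))           # False
```

Note also that if the driver ignores the DTS change (the boot-log addresses never move), the overlap may come from the driver allocating from a fixed pool rather than from the regions you edited.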
I also get this error when inputting a pipeline:

If anyone could help me approach this issue it would be greatly appreciated.

Thanks,
Evan

No User Manual for Terasic D8M-FMC (NDA)?
I'm interfacing a Terasic D8M-FMC to a Stratix 10 Dev Board, and the problem is that none of the register definitions listed in Chapter 6 of the OV8865 Rev 1D manual that Terasic supplied with the CD package are explained anywhere. I wrote to Terasic Support and pointed out that they effectively documented how to set up these registers when they configured the camera via the SCCB protocol (OmniVision's variant of the I2C protocol), but they told me to contact OmniVision directly. OmniVision, however, is staying silent on how to configure these registers.

I am simply inverting the image from the camera, and there are two registers to do this, 0x3820 and 0x3821. But there are other registers that are never explained anywhere, like vsub48_blc, byp_isp_o, then there's digital flip vs. mirror flip and sync_hbin2. Nothing is explained. I end up with a purple image with correct inversion, unless I compile the demo code with no changes, in which case the color is fine but the image is inverted. Terasic even documents in the demo code comments that they are turning these features on or off, with no explanation, so they must know more about these settings.

Can someone please point me to the supplemental manual, or is there an NDA set up for volume orders of these sensors that supplies the missing details?

How to switch the HDMI PHY reference clock to the HDMI Rx TMDS clock?
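For the flip/mirror question above, the usual OmniVision pattern is to read-modify-write 0x3820 (vertical flip) and 0x3821 (horizontal mirror), toggling only the flip field and leaving the undocumented bits alone. The sketch below assumes the flip field sits in bits [2:1], which is common across OmniVision sensors but is exactly the kind of detail the missing OV8865 manual would need to confirm; the purple image suggests some other bit (BLC or ISP bypass related) is being disturbed alongside the flip bits.

```python
# Hedged sketch of a read-modify-write for OV8865 flip registers. The bit
# positions (bits [2:1]) are an ASSUMPTION based on other OmniVision parts,
# not confirmed OV8865 documentation.

FLIP_BITS = 0b0000_0110  # assumed flip/mirror field, bits [2:1]

def set_flip(reg_value: int, enable: bool) -> int:
    """Return the new 8-bit register value with the flip field set or
    cleared, leaving all other (undocumented) bits untouched."""
    if enable:
        return reg_value | FLIP_BITS
    return reg_value & ~FLIP_BITS & 0xFF

# e.g. if the demo leaves 0x3820 at 0x40 with flip off, enabling flip
# should only touch the assumed field:
print(hex(set_flip(0x40, True)))    # 0x46
print(hex(set_flip(0x46, False)))   # 0x40
```

The point of the read-modify-write is that writing a full constant (as demo code often does) silently changes neighbors like vsub48_blc, which could explain the color shift.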
Hi,

In the document "HDMI PHY Intel FPGA IP User Guide" ver. 2022.10.31 (newest), section "5.1.2. Dynamic Reconfiguration" describes that:

The RX reconfiguration management is handled predominantly by the RTL. The software is mainly used to switch the reference clock over from the fixed rate clock to the RX TMDS clock (via IOPLL). This is because the transceiver requires a clock to be present at power-up.

However, the register description in section "5.1.5. RX PHY Address Map" does not list any register that handles such a switch of the clock. Also, based on the frequency of the output clock rx_clk[0], I can see that it is the reference clock at fr_clk divided by 2, and thus not the HDMI TMDS clock, so the switch is required in order to get the correct clock on rx_clk for data to the HDMI core.

Question: How to switch the HDMI PHY reference clock to the HDMI Rx TMDS clock?

Thanks in advance for any help.

Br
M_DK_FPGA

NASA’s Perseverance Mars Rover Mission Team relies on 4K video collaboration system based on Crestron DM NVX AV-over-IP streaming media devices
Last month, NASA’s Perseverance rover landed successfully in Jezero Crater on Mars and has taken its first test drive. You may have seen video of the mission team at NASA Jet Propulsion Laboratory (JPL) standing up and cheering when news of the successful landing arrived here on Earth.

The Mars 2020 Perseverance Mission Team is the first at NASA JPL to use a new high-resolution, multi-mission AV collaboration system developed by Amplified Design LLC, which was completed in time for the Mars 2020 Perseverance Rover launch in July of 2020. This AV collaboration system spans 45 new collaboration areas across multiple floors at NASA JPL’s Space Flight Operations Facility in Pasadena, California. The system is based on nearly 200 Crestron DM NVX streaming media devices that provide secure, real-time AV encoding and decoding for the transport of real-time 4K video streams over standard Gigabit Ethernet. The DM NVX backbone distributes video to both the large and small work rooms as well as mission support areas for anywhere-to-anywhere video sharing. Crestron developed the DM NVX streaming media devices using Intel® Arria® 10 SoC FPGAs.

Click here to see the Amplified Design case study describing the NASA JPL AV collaboration system.
Click here for more information about Crestron DM NVX AV-over-IP streaming media devices.
Click here for more information about Intel Arria 10 FPGAs and SoC FPGAs.

Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Incorrect Frame Buffer Stride Validation in VVP Frame Buffer
Hello,

We believe we have discovered an issue with the VVP Frame Buffer component GUI validation for checking invalid values of Inter-buffer Stride in Quartus Pro 23.2. The documentation says the TCL validates the values of Inter-buffer Stride and Inter-line Stride, but the calculation of the maximum frame size in the TCL (intel_vvp_vfb_hw.tcl) doesn't take into account that the Inter-line Stride can introduce unused space. (This is in addition to the unused space resulting from pixel packing, which is correctly accounted for.)

The attached screenshots show an invalid set of settings that does not produce an error. The Inter-line Stride of 16 kB multiplied by the maximum field height of 1080 lines gives a minimum Inter-buffer Stride of 16.875 MB. The actual Inter-buffer Stride is 16 MB, so the buffer is overflowed by 896 kB = 56 lines. However, no error is displayed.

We believe the solution would be to change line 307 in intelFPGA_pro\23.2\ip\altera\mtm\cores\vvp\frame_buffer\intel_vvp_vfb\intel_vvp_vfb_hw.tcl

From:
set max_bytes_per_frame [expr $max_bytes_per_line * $max_height]

To:
set max_bytes_per_frame [expr $mem_line_stride * $max_height]

Thanks,
David

DisplayPort TX IP Core Link Training Abort
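As a cross-check of the arithmetic in the stride report above (mirroring the proposed TCL fix in Python): a 16 KiB line stride over 1080 lines requires 16.875 MiB per frame, overflowing a 16 MiB inter-buffer stride by exactly 896 KiB, i.e. 56 lines.

```python
# Cross-check of the overflow arithmetic from the VVP Frame Buffer report.

KIB = 1024
MIB = 1024 * KIB

def min_inter_buffer_stride(line_stride: int, max_height: int) -> int:
    """Corrected rule: frame size must be line stride x height, because
    inter-line padding still occupies space inside the buffer."""
    return line_stride * max_height

line_stride = 16 * KIB
required = min_inter_buffer_stride(line_stride, 1080)
actual = 16 * MIB

print(required / MIB)                            # 16.875
overflow = required - actual
print(overflow // KIB, overflow // line_stride)  # 896 56
```

This matches the report's numbers, which supports using $mem_line_stride (rather than the packed $max_bytes_per_line) in the frame-size validation.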
Hi All,

We are using the Intel DisplayPort TX IP core on a Cyclone 10 GX FPGA and we see link training failures with a certain sink (monitor) of our customer. Using a DP AUX channel analyzer, we see that during the test of voltage levels and pre-emphasis, the DP TX core aborts the link training process after about 60 ms. This is caused by a delay behavior of the sink: each time, after the DP TX core writes the training pattern set (0x102) or training lane sets (0x103), the sink answers the next 8 source status reads each with AUX_DEFER before finally providing the requested status. This results in a long duration of the whole link training process, which is aborted by the DP TX core after about 60 ms.

Testing the monitor on commercial GPUs shows that the link training process takes more than a second before it finally succeeds. We have no link training problems with other commercial monitors.

Questions: Why does the link training process not complete? We expect that there is some kind of internal timeout within the DP TX core and ask how we can change this value. We know that the sink (monitor) uses a Xilinx DP RX IP core internally, so are there perhaps known incompatibilities between Intel and Xilinx DisplayPort cores?

Note: We use DP TX core version v20.0.1, configured without support for DP 1.4, and Quartus Pro version 22.3. I attached two AUX channel analyzer logs, where the AUX_DEFER packets and the DP TX abort can be seen. I had to compress them to zip due to the forum upload restrictions.

Thank you in advance!
Stefan
3KViews0likes10Comments
We Hope to See You at Intel® FPGA Technology Day 2021
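A back-of-envelope model of the DisplayPort timing problem described above. All timings here are assumptions (not values from the DP TX core): every status poll costs one AUX transaction per AUX_DEFER plus the final successful read, so a sink that defers 8 times per poll inflates the AUX traffic roughly 9x and can push an otherwise comfortable training sequence past a fixed timeout.

```python
# Back-of-envelope: why per-poll AUX_DEFER streaks can blow a fixed
# training timeout. aux_txn_ms and the poll count are ASSUMED values.

def training_time_ms(polls: int, defers_per_poll: int,
                     aux_txn_ms: float = 0.5) -> float:
    """Total AUX time for 'polls' status reads, each preceded by
    'defers_per_poll' AUX_DEFER replies; aux_txn_ms is the assumed cost
    of one AUX request/reply pair including retry spacing."""
    return polls * (defers_per_poll + 1) * aux_txn_ms

TIMEOUT_MS = 60  # observed abort time from the analyzer logs

well_behaved = training_time_ms(polls=40, defers_per_poll=0)
deferring    = training_time_ms(polls=40, defers_per_poll=8)

print(well_behaved, deferring)                 # 20.0 180.0
print(well_behaved < TIMEOUT_MS < deferring)   # True
```

This is consistent with the observation that commercial GPUs (which apparently tolerate seconds of training) succeed with this sink while a ~60 ms budget does not; the fix would indeed be a configurable or larger timeout in the TX core, which Intel support would have to confirm.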
Intel® FPGA Technology Day (IFTD) is a free four-day event that will be hosted virtually across the globe in North America, China, Japan, EMEA, and Asia Pacific from December 6-9, 2021. The theme of IFTD 2021 is “Accelerating a Smart and Connected World.” This virtual event will showcase Intel® FPGAs, SmartNICs, and infrastructure processing units (IPUs) through webinars and demonstrations presented by Intel experts and partners. The sessions are designed to be of value for a wide range of audiences, including Technology Managers, Product Managers, Board Designers, and C-Level Executives.

Attendees to this four-day event will learn how Intel’s solutions can solve the toughest design challenges and provide the flexibility to adapt to the needs of today’s rapidly evolving markets. A full schedule of Cloud, Networking, Embedded, and Product Technology sessions, each just 30 minutes long, will enable you to build the best agenda for your needs.

Day 1 (December 6), TECHNOLOGY: FPGAs for a Dynamic Data Centric World: Advances in cloud infrastructure, networking, and computing at the edge are accelerating. Flexibility is key to keeping pace with this transforming world. Learn about innovations developed and launched in 2021 along with new Intel FPGA technologies that address key market transitions.

Day 2 (December 7), CLOUD AND ENTERPRISE: Data Center Acceleration: The cloud is changing. Disaggregation improves data center performance and scalability but requires new tools to keep things optimized. Intel FPGA smart infrastructure enables smarter applications to make the internet go fast!

Day 3 (December 8), EMBEDDED: Transformation at the Edge: As performance and latency continue to dictate compute’s migration to the edge, Intel FPGAs provide the workload consolidation and optimization required with software-defined solutions enabled by a vast and growing partner ecosystem.
Day 4 (December 9), NETWORKING: 5G – The Need for End-to-End Programmability: The evolution of 5G continues to push the performance-to-power envelope, requiring market leaders to adapt or be replaced. Solutions for 5G and beyond will require scalable and programmable portfolios to meet evolving standards and use cases.

To explore the detailed program, see the featured speakers, and register for the North America event, Click Here. Register in other regions below:
EMEA
China
Japan
Asia Pacific

[HDMI IP] Vendor Specific Infoframes -> how to insert?
Hi,

I need to insert 20 Vendor Specific Infoframes (VSIF), each with a 32-byte payload, into a single HDMI video frame.

1) Is this possible?
2) How do I do this? The example design only shows how to insert DRMI infoframes; should I do the same with Vendor Specific Infoframes (VSIF)?
3) Is there any restriction on the number of Vendor Specific Infoframes (VSIF) that can be sent/received by the HDMI IP?

Thanks!

Upcoming Webinar: Computational Storage Acceleration Using Intel® Agilex™ FPGAs
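Relevant background for the VSIF question above, sketched in code: as I understand CTA-861/HDMI, an InfoFrame travels in a 28-byte data island packet body (1 checksum byte plus at most 27 payload bytes), so a 32-byte payload cannot fit in a single InfoFrame and would have to be split across two, independent of any limit in the HDMI IP itself. The type/version values are the standard VSIF ones; the example payload is a placeholder.

```python
# Sketch: build a Vendor Specific InfoFrame and verify its checksum.
# The checksum makes the 3 header bytes plus the packet body sum to 0 mod 256.

VSIF_TYPE, VSIF_VERSION = 0x81, 0x01
MAX_INFOFRAME_PAYLOAD = 27  # checksum + payload must fit the 28-byte body

def build_vsif(payload: bytes) -> bytes:
    if len(payload) > MAX_INFOFRAME_PAYLOAD:
        raise ValueError("payload too long for a single InfoFrame")
    header = bytes([VSIF_TYPE, VSIF_VERSION, len(payload)])
    checksum = (-sum(header) - sum(payload)) % 256
    return header + bytes([checksum]) + payload

pkt = build_vsif(bytes([0x03, 0x0C, 0x00]))   # e.g. an IEEE OUI as payload
print(sum(pkt) % 256)                          # 0  -> checksum verifies

try:
    build_vsif(bytes(32))                      # the 32-byte payload from Q1
except ValueError as e:
    print(e)
```

Whether 20 such packets fit in one video frame then depends on the available data island slots in the blanking intervals of your video timing, which the HDMI IP documentation would need to confirm.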
Today’s computational workloads are larger, more complex, and more diverse than ever before. The explosion of applications such as high-performance computing (HPC), artificial intelligence (AI), machine vision, analytics, and other specialized tasks is driving the exponential growth of data. At the same time, the trend towards using virtualized servers, storage, and network connections means that workloads are growing in scale and complexity.

Traditionally, data is conveyed to a computational engine -- such as a central processing unit (CPU) -- for processing, but transporting the data takes time, consumes power, and is increasingly proving to be a bottleneck in the overall process. The solution is computational storage, also known as in-storage processing (ISP). The idea here is that, instead of bringing the data to the computational engine, computational storage brings the computational engine to the data. In turn, this allows the data to be processed and analyzed where it is generated and stored.

To learn more on this concept, please join us for a webinar led by Sean Lundy from Eideticom, Craig Petrie from BittWare, and myself:

Computational Storage Using Intel® Agilex™ FPGAs: Bringing Acceleration Closer to Data

The webinar will take place on Thursday 11 November 2021 starting at 11:00 a.m. EST.

Sean will introduce Eideticom’s NoLoad computational storage technology for use in data center storage and compute applications. NoLoad technology provides CPU offload for applications, resulting in the dramatic acceleration of compute-plus-data intensive tasks like storage workloads, databases, AI inferencing, and data analytics. NoLoad’s NVMe-compliant interface simplifies the deployment of computational offload by making it straightforward to deploy in servers of all types and across all major operating systems.

Craig will introduce BittWare’s new IA-220-U2 FPGA-based Computational Storage Processor (CSP) that supports Eideticom’s NoLoad technology as an option.
The IA-220-U2 CSP, which is powered by an Intel Agilex F-Series FPGA with 1.4M logic elements (LEs), features PCIe Gen 4 for twice the bandwidth offered by PCIe Gen 3 solutions. This CSP works alongside traditional Flash SSDs, providing accelerated computational storage services (CSS) by performing compute-intensive tasks, including compression and/or encryption. This allows users to build out their storage using standard SSDs instead of being locked into a single vendor’s storage solutions.

BittWare’s IA-220-U2 accelerates NVMe Flash SSDs by sitting alongside them as another U.2 module. (Image source: BittWare)

We will also discuss the Intel Agilex™ FPGAs that power BittWare’s new CSP. Built on Intel’s 10nm SuperFin Technology, these devices leverage heterogeneous 3D system-in-package (SiP) technology. Agilex I-Series FPGAs and SoC FPGAs are optimized for bandwidth-intensive applications that require high-performance processor interfaces, such as PCIe Gen 5 and Compute Express Link (CXL). Meanwhile, Agilex F-Series FPGAs and SoC FPGAs are optimized for applications in data center, networking, and edge computing. With transceiver support up to 58 Gbps, advanced DSP capabilities, and PCIe Gen 4 x16, the Agilex F-Series FPGAs that power BittWare’s new CSP provide the customized connectivity and acceleration required by compute-plus-data intensive, power-sensitive applications such as HPC, AI, machine vision, and analytics.

This webinar will be of interest to anyone involved with these highly complex applications and environments. We hope to see you there, so Register Now before all the good virtual seats are taken.