MathWorks, a leading global developer of mathematical computing software, and Altera, a subsidiary of Intel, announced today that the two companies will collaborate to accelerate wireless development on Altera FPGAs by enabling wireless system engineers to use AI-based autoencoders to compress Channel State Information (CSI) data, significantly reducing fronthaul traffic and bandwidth requirements. Engineers building 5G and 6G wireless communication systems can now preserve the integrity of user data while reducing costs and maintaining the reliability and performance standards of wireless communication systems.
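The idea behind autoencoder-based CSI compression is to learn an encoder that maps a large channel matrix to a short codeword before it crosses the fronthaul link, and a decoder that reconstructs the channel at the other end. As a rough, hypothetical illustration (not MathWorks or Altera code, and using a simple linear PCA-style autoencoder on synthetic data rather than a trained neural network), the compression step might look like:

```python
import numpy as np

# Hypothetical sketch of autoencoder-style CSI compression.
# All shapes and the compression ratio are illustrative assumptions;
# real CSI has structure (spatial/frequency correlation) that a trained
# neural autoencoder would exploit far better than this toy random data.

rng = np.random.default_rng(0)

# Toy CSI: 1000 snapshots of a 32-antenna x 8-subcarrier channel,
# real and imaginary parts stacked into a 512-element feature vector.
n_samples, n_ant, n_sub = 1000, 32, 8
csi = rng.standard_normal((n_samples, n_ant * n_sub * 2))

latent_dim = 32  # compressed codeword length (16x smaller than 512)

# "Train" a linear encoder/decoder: principal components of the CSI set.
mean = csi.mean(axis=0)
_, _, vt = np.linalg.svd(csi - mean, full_matrices=False)
encoder = vt[:latent_dim].T   # 512 -> 32 projection (sent over fronthaul)
decoder = vt[:latent_dim]     # 32 -> 512 reconstruction (at the receiver)

codes = (csi - mean) @ encoder   # compressed CSI codewords
recon = codes @ decoder + mean   # reconstructed CSI

# Normalized mean squared error of the reconstruction.
nmse = np.sum((csi - recon) ** 2) / np.sum(csi ** 2)
ratio = csi.shape[1] / latent_dim
print(f"compression ratio {ratio:.0f}x, NMSE {nmse:.3f}")
```

In a production 5G/6G design, the encoder would run as inference IP on the FPGA at the radio unit and the decoder at the baseband unit, trading a small reconstruction error for a large reduction in fronthaul bandwidth.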
Mike Fitton, Vice President and General Manager of Vertical Markets at Altera, stated: "The collaboration between MathWorks and Altera enables organizations to leverage the power of AI across a wide range of 5G and 6G wireless communication applications, from 5G RAN to advanced driver assistance systems (ADAS). By using our FPGA AI Suite and MathWorks software, developers can streamline the workflow from algorithm design to hardware implementation, ensuring that their AI-based wireless systems meet the strict requirements of modern applications."
MathWorks provides a comprehensive toolkit for AI and wireless development that is particularly well suited to Altera FPGAs. Deep Learning HDL Toolbox™ is purpose-built for engineers implementing deep learning networks on FPGA hardware. Working with HDL Coder™, the toolbox lets users customize, build, and deploy efficient, high-performance deep learning processor IP cores. By supporting standard networks and layers, it significantly enhances the performance and flexibility of wireless applications.
Houman Zarrinkoub, Chief Product Manager at MathWorks, said, "AI-based compression is a powerful technology in the telecommunications industry. MathWorks software has laid a solid foundation for the development of AI and wireless technology. By integrating our tools with Altera's FPGA technology, wireless engineers can efficiently create high-performance AI applications and advanced 5G and 6G wireless systems."
The FPGA AI Suite uses the OpenVINO toolkit and pretrained AI models from common industry frameworks to provide push-button generation of custom AI inference accelerator IP on Altera FPGAs. It further helps FPGA developers seamlessly integrate that AI inference accelerator IP into FPGA designs using the best-in-class Quartus® Prime software FPGA flow. Together, Deep Learning HDL Toolbox and the OpenVINO toolkit open a streamlined path for developers to optimize AI inference on Altera FPGAs.








