
Open Compute Project Announces New Contributions from NVIDIA and Wiwynn

Taipei, Taiwan - May 31, 2022 - Wiwynn (TWSE: 6669), an innovative cloud IT infrastructure provider for data centers, today announced the contribution of its ES200 edge server to the Open Compute Project (OCP) as an OCP Inspired™ platform. The short-depth edge server is designed to address the surging demand for computing power at edge sites for applications including instant AI inference, on-site data analytics and multi-channel video processing. These applications are widely used in vertical domains like retail, smart factories and content delivery networks (CDN) to bolster more service innovation at the edge.

“Wiwynn is committed to the OCP community, with more than 32 contributions. It’s our pleasure to have the ES200 receive OCP Inspired™ recognition from this vibrant community,” said Steven Lu, Wiwynn’s Senior Vice President. “We believe the open edge platform will inspire more service innovation in vertical domains such as retail, smart factories, CDN and more. The holistic functionality and flexibility of the ES200 will address the challenges of diverse use cases at edge sites.”

The Wiwynn ES200 is a 2U dual-socket edge server equipped with 3rd Gen Intel Xeon Scalable processors (codename: Ice Lake). Its balanced NUMA design unleashes the full computing power of each installed processor, so different workloads can run separately without being constrained by communication bottlenecks between CPUs.

The extra PCIe Gen4 x16 slots can accommodate two double-width add-in cards, making them well suited for AI inference acceleration, video transcoding, data analytics, or additional network connections. Wiwynn has pre-tested and qualified NVIDIA A2 and A30 GPUs for the ES200 to satisfy customers’ needs for compute acceleration across various use scenarios.

In addition, the ES200 is NEBS-3 compliant and has a short-depth, compact form factor, so it can fit into a variety of edge sites even under challenging operating conditions while delivering full functionality. It offers flexible storage and networking options as well as extensive expansion capability: with storage modules composed of EDSFF (E1.S) or U.2 SSDs and two OCP NIC 3.0 cards, the ES200 enables fast local data processing and high-bandwidth network connectivity.

“AI-powered services in retail, smart cities, manufacturing and more are generating demand for accelerated edge systems that can process live data from sensors and cameras to help customers check out faster, keep traffic moving efficiently and boost safety in factories,” said Justin Boitano, vice president of Enterprise and Edge Computing at NVIDIA. “Wiwynn’s new NVIDIA-accelerated ES200 server is a powerful platform for AI inference, onsite data analytics and multi-channel video processing that can run demanding edge AI workloads in real time.”

“As one of the very first OCP Solution Providers, Wiwynn has shown its continuous commitment to the community. The OCP Inspired™ Wiwynn ES200 is a phenomenal example of how they continue to champion open designs and have extended their focus from cloud to edge computing. We are thrilled to see the great progress of the OCP Edge Project through close collaboration with Wiwynn, and look forward to more edge products and solutions becoming OCP recognized,” said Steve Helvie, VP of Channel for OCP. The Wiwynn ES200 is available to view on the OCP Marketplace.
