Intel wants to challenge NVIDIA's dominance in AI with this chip!

Intel has announced its plans for artificial intelligence, but at least one key piece of the puzzle is still missing.

Intel previously acquired Nervana Systems and announced that it will continue to sell the company's full product line. These products target high-end applications, especially neural network training, a field where NVIDIA is the current leader. Meanwhile, Intel's acquisition of Movidius has not yet closed, leaving significant gaps in computer vision and the network edge still to be filled. In addition, Intel announced several artificial intelligence software products, services and partnerships.

At the recent Intel artificial intelligence event, the CEO of Movidius made a brief appearance. He did not say when the deal would close or what the obstacles were, saying only: "We look forward to joining the Intel family." He outlined plans for low-power chips for automobiles, drones, security cameras and other products.

Even once the deal closes, Intel still will not be able to offer a complete artificial intelligence product line. There is no doubt, however, that this is Intel's goal.

Intel CEO Brian Krzanich said in a keynote speech at the event: "Artificial intelligence will transform most of the industries we know so far, so we want to be a trusted leader and developer in the field of artificial intelligence."

Nervana CEO and co-founder Naveen Rao was the star of the event. Intel has given the green light to Nervana's full product line, including processors, boards, systems, software and an artificial intelligence cloud computing service.

Nervana's accelerator, Lake Crest, will launch next year. Intel claims that, at the same power level, it will outperform today's top graphics processors on neural network workloads. The chip will be fabricated on TSMC's 28-nanometer process.

Rao walked through the chip's architecture, which was designed from a clean slate. The chip can accelerate a wide range of neural networks, such as those built on Google's TensorFlow framework. It consists of an array of so-called "processing clusters" that handle simplified arithmetic operations Nervana calls "flex points." These require less data than floating-point operations, which Nervana says yields a 10x performance boost.
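Nervana has not published the details of its "flex point" format, but the general idea of shared-exponent, fixed-point storage can be sketched roughly as follows. This is a toy NumPy illustration, not Nervana's actual scheme: a tensor's values are stored as narrow integers that all share one exponent, halving the data that has to move compared with 32-bit floats.

import numpy as np

def to_shared_exponent(x):
    """Quantize a float32 array to int16 values plus a single shared exponent (toy format)."""
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12)))  # one exponent for the whole tensor
    scale = 2.0 ** (exp - 15)                                # scale values into the int16 range
    q = np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)
    return q, exp

def from_shared_exponent(q, exp):
    """Recover approximate float32 values from the quantized form."""
    return q.astype(np.float32) * 2.0 ** (exp - 15)

weights = np.random.randn(1024, 1024).astype(np.float32)
q, exp = to_shared_exponent(weights)
approx = from_shared_exponent(q, exp)

print("bytes as float32:", weights.nbytes)   # 4,194,304 bytes
print("bytes as int16  :", q.nbytes)         # 2,097,152 bytes -- half the data to move
print("max abs error   :", np.max(np.abs(weights - approx)))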

Lake Crest uses proprietary interconnects to build larger, faster clusters in ring or other topologies, which helps users create larger, more varied neural network models. The interconnect consists of 12 bidirectional 100Gbps links whose physical layer is based on 28G SerDes.
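Assuming all 12 links can be driven concurrently, the aggregate chip-to-chip bandwidth works out to

12 × 100 Gbps = 1.2 Tbps in each direction, or roughly 2.4 Tbps counting both directions.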

The 2.5D-packaged chip carries 32GB of HBM2 memory with 8Tbps of memory bandwidth. There is no cache on the chip; on-chip memory is managed by software.
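Lake Crest's actual programming model has not been described, so the following NumPy sketch only illustrates what "software-managed on-chip memory" means in general: the program, not a hardware cache, decides which tile of a large matrix occupies the fast local buffer at any moment. The buffer size and tiling scheme here are made up for the example.

import numpy as np

ONCHIP_BYTES = 4 * 1024 * 1024          # pretend 4 MB of fast local memory
TILE = 512                              # tile edge chosen so the working set fits on chip

A = np.random.randn(2048, 2048).astype(np.float32)   # lives in "off-chip" HBM
B = np.random.randn(2048, 2048).astype(np.float32)
C = np.zeros((2048, 2048), dtype=np.float32)

assert 3 * TILE * TILE * 4 <= ONCHIP_BYTES   # two input tiles plus the accumulator must fit

for i in range(0, 2048, TILE):
    for j in range(0, 2048, TILE):
        acc = np.zeros((TILE, TILE), dtype=np.float32)   # accumulator held "on chip"
        for k in range(0, 2048, TILE):
            a = A[i:i+TILE, k:k+TILE]    # software explicitly stages each tile...
            b = B[k:k+TILE, j:j+TILE]    # ...where a hardware cache would do this implicitly
            acc += a @ b
        C[i:i+TILE, j:j+TILE] = acc      # write the finished tile back out

print("max difference vs. direct matmul:", np.abs(C - A @ B).max())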

Intel did not disclose a roadmap for the product, saying only that it plans to release a version called Knights Crest that integrates a future Xeon processor with the Nervana accelerator. That version is expected to support Nervana clusters, but Intel did not say how or when the two kinds of chips will be integrated.

Rao said the integrated version will deliver better performance and be easier to program. Today's GPU-based accelerators complicate programming because developers must manage separate GPU and CPU memories.
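The article does not name a specific API, but the bookkeeping Rao is referring to looks roughly like the sketch below, which uses the CuPy library purely for illustration: every trip to the GPU requires an explicit copy into device memory and another copy back.

import numpy as np
import cupy as cp   # CuPy is used here only as an illustration; the article does not mention it

x_host = np.random.randn(1_000_000).astype(np.float32)   # data in CPU (host) memory
x_dev = cp.asarray(x_host)                                # explicit copy into GPU memory
y_dev = cp.tanh(x_dev)                                    # compute on the device
y_host = cp.asnumpy(y_dev)                                # explicit copy back to the CPU

# On an integrated Xeon + Nervana part sharing one memory space, the two copy steps
# above would, in principle, disappear.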

Rao also said that by 2020 Intel will ship chips that improve neural network training performance by 100 times, a goal one analyst called "extremely aggressive." There is little doubt Intel will quickly move this architecture to more advanced process nodes to compete with GPUs already built on 14nm and 16nm FinFET processes.

The initial accelerator attaches to a host over the PCI Express bus. Intel has since decided to go a step further, offering the technology not only as appliances but also as a cloud computing service.

The product is a bold step for Intel, which hopes it will give the company a technology edge over NVIDIA, whose GPUs are now widely used to train neural networks. Training is a highly compute-intensive task; researchers at companies such as Baidu say it often takes months, forcing them to limit the size of their data sets.

Rao said: "Today, state-of-the-art neural network models take weeks to months to train." He noted that one model used by Baidu Research consumes millions of operations.

Krzanich said: "Nervana's high-end positioning gives us the best performance for deep learning."

However, this is still a small, emerging market.

According to Diane Bryant, general manager of Intel's Data Center Group, only 0.15% of servers were dedicated to neural network training last year. Bryant noted that Intel's acquisition of the artificial intelligence cloud service Saffron Technology has already attracted end users.

Four researchers have agreed to join a Nervana artificial intelligence advisory board to help guide the future evolution of the company's chip architecture. Nervana will explore a variety of directions, including optimizing algorithms, simplifying neural network models, pushing further into reduced-precision arithmetic, and scaling up the chips.

Analysts are optimistic about Intel's embrace of non-x86 architectures.

Patrick Moorhead, president of Moor Insights & Strategy, said: "If you look at how quickly they are pulling together Altera, Nervana, Phi, Xeon and all the required software, it is a huge undertaking for Intel, and impressive for the company. From here it will come down to how well Intel executes without missteps."

Although Xeon Phi is not optimized for artificial intelligence, Intel remains firmly committed to it. These many-core x86 chips are increasingly used as accelerators in supercomputers.

The Knights Mill version, due next year, will support up to 400GB of main memory, far more than the 16GB found on today's GPUs. Knights Mill uses one of its x86 cores as an integrated host controller and supports multiple levels of arithmetic precision.

Intel has built a system using up to 128 of the current Knights Landing Phi chips. Pradeep Dubey, director of Intel's Parallel Computing Lab, said: "We plan to scale to hundreds or even thousands of chips."

On the software side, Intel will release and open-source a graph compiler for Nervana early next year. Intel is also optimizing mainstream artificial intelligence frameworks to run on x86 processors, including a version of TensorFlow due before the end of this year. A deep learning SDK will be available in January.
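As a rough illustration of the kind of workload such an x86-optimized TensorFlow build would accelerate, here is a minimal training loop written against the TensorFlow 1.x-era API of that time; the model and the random data are made up for the example, and nothing in it is specific to Intel's build.

import numpy as np
import tensorflow as tf   # TensorFlow 1.x-era API, matching the article's timeframe

# A tiny softmax classifier trained on random placeholder data.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        xs = np.random.rand(64, 784).astype(np.float32)               # fake inputs
        ys = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 64)]  # fake labels
        sess.run(train_op, feed_dict={x: xs, y: ys})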

Intel is nurturing the artificial intelligence developer community in several ways.

Intel and Google have worked together to optimize cloud computing code for x86 processors. Intel announced a five-year, $25 million collaboration with the Broad Institute to develop tools and reference architectures for genomics. Intel also created a new developer community dedicated to the Nervana architecture and launched a new artificial intelligence student developer program.
