For a little over two years, Nvidia had refrained from presenting a roadmap for its SoCs. It must be said that its plans in this area have changed several times, notably with the abandonment of the smartphone market: too competitive, too costly in terms of certification for the different networks, and one in which it was hard to truly stand out while respecting a strict thermal envelope.
Gradually, Nvidia repositioned its Tegra SoCs toward other markets where its expertise can more easily turn a profit. This is the case for autonomous driving, and for everything related to AI, more precisely inference, i.e. the operation of a trained artificial neural network (deep learning). For this type of SoC use, it is crucial to provide maximum performance within a reasonable thermal envelope, but one not as strict as in the mobile world. Moreover, its big GPUs found themselves, almost by chance, at the heart of the rise of deep learning. That experience enabled Nvidia to build an efficient software ecosystem and to add small touches to its CPU and GPU architectures that can have a crucial impact on performance. While competition looks set to be extremely fierce as deep learning spreads across many sectors, both for network training and for inference, Nvidia retains a strong hand to play, provided it keeps up a high pace in its developments.
Just one month ago, Nvidia unveiled a few details about its Parker SoC, manufactured in 16nm. It will shortly succeed the Logan (Tegra K1) and Erista (Tegra X1) SoCs, with a clear focus on the automotive market. As a reminder, it pairs a CPU portion composed of 2 in-house Denver 2 cores and 4 Cortex-A57 cores with a Pascal GPU of 256 compute units (2 SMs), the whole being interfaced with 128-bit LPDDR4 memory.
For the automotive market, it notably supports specific deep-learning instructions (to boost certain INT8/FP16 operations), ECC memory, hardware virtualization, 12 cameras and the appropriate connectivity. Parker's first use will be in the Drive PX 2 platform, offered in two versions. One makes do with a single Parker SoC and is dedicated to driving assistance; the other pairs 2 Parker SoCs with 2 GP106 GPUs for semi-autonomous driving, while Nvidia talks about combining several of these platforms for fully autonomous driving. Which demonstrates, in passing, the need for even more powerful platforms.
This is where the successor to Parker comes in, codenamed Xavier, in reference to the superhero. Unveiled at the first edition of GTC Europe, held in Amsterdam, Xavier will be based on 8 custom ARM cores (Denver 3?) and a Volta GPU with 512 compute units, along with a video engine capable of handling 8K. Still manufactured in 16nm, it will pack no fewer than 7 billion transistors, which is huge for a SoC. Nvidia is targeting computing power of no less than 20 Teraops in deep learning within a 20W thermal envelope.
That could potentially replace a full Drive PX 2 platform with a single SoC (which Nvidia illustrates with the simple, driving-assistance version of Drive PX 2; there is no real photo of Xavier or of the platform that hosts it). There is of course a trick: this 20-Teraops figure is a maximum obtained with specific instructions. With 512 processing units clocked at, for example, 2 GHz, Xavier would display roughly 2 teraflops in FP32 (versus 8 teraflops for Drive PX 2). Making do with Pascal's INT8 instructions would raise that to 8 Teraops in deep learning (versus 24 Teraops for Drive PX 2).
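The back-of-the-envelope arithmetic above can be sketched as follows. This is a rough estimate only: the 2 GHz clock is a hypothetical value from the text, FP32 throughput assumes one fused multiply-add (2 FLOPs) per unit per cycle, and the INT8 figure assumes a Pascal-style 4x rate over FP32.

```python
def peak_throughput_tops(units, clock_ghz, ops_per_unit_per_cycle):
    """Peak arithmetic throughput in tera-operations per second.

    units                   -- number of GPU compute units (CUDA cores)
    clock_ghz               -- clock frequency in GHz (billions of cycles/s)
    ops_per_unit_per_cycle  -- operations each unit retires per cycle
    """
    return units * clock_ghz * ops_per_unit_per_cycle / 1000.0

# Xavier's 512 Volta units at a hypothetical 2 GHz clock:
# FP32 FMA counts as 2 FLOPs per unit per cycle -> ~2 TFLOPS.
fp32 = peak_throughput_tops(512, 2.0, 2)

# Pascal-style INT8 dot products run at 4x the FP32 rate,
# i.e. 8 ops per unit per cycle -> ~8 Teraops.
int8 = peak_throughput_tops(512, 2.0, 8)

print(fp32, int8)
```

These assumptions reproduce the article's figures (about 2 TFLOPS FP32 and 8 Teraops INT8), which is why the announced 20 Teraops must come from something beyond Pascal's existing instructions.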
By announcing 20 Teraops, Nvidia is thus hinting that the Volta GPU architecture will bring new features providing a significant additional boost to some algorithms related to deep learning, and clearly to inference. Calculation precision lower than 8-bit? New specific instructions equivalent to more ops? We will have to wait some time to learn more. Nvidia expects to provide the first Xavier samples at the end of 2017, for commercial availability in 2018. In the meantime, the first Volta GPUs should have been launched.