Nvidia, a company that develops microprocessors and related software, is on a record-breaking performance streak. Last quarter its revenue grew 55 percent to $2.2 billion.
2016 was the year artificial intelligence exploded, and riding that wave, Nvidia's share price has nearly quadrupled over the past 12 months.
A large part of Nvidia's success can be attributed to its signature chip, the graphics processing unit (GPU), best known as the graphics card that makes a computer run games more smoothly. Now, however, GPUs have found a new role: supplying the massive amounts of computing power that artificial-intelligence (AI) programs need, especially in data centers.
The soaring sales of these chips are the clearest sign of a long-term transformation in information technology. The processor market is being reshaped by the slowdown of Moore's Law (the rule of thumb that the computing power of chips doubles roughly every two years) and by the rapid rise of cloud computing and AI. This is having a profound effect on the semiconductor industry and its long-time ruler, Intel.
Until now, things have been nothing short of smooth sailing for Intel. Whether in the personal-computer market or the server business, the central processing units (CPUs) Intel produces can handle almost any workload. Thanks to those powerful CPUs, Intel controls 80 percent of the personal-computer market and has an almost complete monopoly on the server market.
In the past year, Intel's revenue was nearly $60 billion.
Even so, CPUs are no longer improving fast enough to meet demand. Machine learning and other AI applications, which churn through massive amounts of data, require more processing power than entire data centers consumed just a few years ago. So Intel's customers, big data-center operators such as Google and Microsoft, are increasingly choosing specialized processors from other vendors and even starting to design processors of their own.
Nvidia's GPUs are a case in point. Originally designed to perform the massively complex calculations needed for interactive video games, that is, to make big games run faster, GPUs have hundreds of "cores" dedicated to crunching data, all working in parallel. CPUs, on the other hand, have only a few cores, which process computational tasks sequentially.
Nvidia's latest processors have 3,584 cores, while Intel server CPUs have up to 28.
Nvidia has also developed a programming platform called CUDA to help customers program its processors for different tasks; CUDA enables GPUs to be applied to complex computational problems well beyond graphics. So when cloud computing, big data, and AI began to take off a few years ago, Nvidia's chips turned out to be in the right place at the right time.
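To make the contrast with a sequential CPU loop concrete, here is a minimal, hypothetical CUDA sketch (not taken from the article or from Nvidia's documentation); it adds two large arrays by assigning one element to each of thousands of GPU threads, the kind of data-parallel work CUDA is designed to express. All names and sizes below are illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one array element, so the whole
// addition runs in parallel rather than in a sequential CPU loop.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements (illustrative)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);       // unified memory visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);  // launch ~1M threads
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);      // expected: 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A CPU version of the same task would walk the arrays in a loop on a handful of cores; on a GPU the work is split across thousands of cores at once, which is why this style of hardware suits the matrix-heavy arithmetic of machine learning.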
Every Internet giant uses Nvidia GPUs to power its AI services, mining huge amounts of data ranging from medical images to human speech. Nvidia's sales to data-center operators have tripled over the past year, reaching roughly $296 million in the latest quarter.
But GPUs are just one of many kinds of specialized processor. The range is expanding as cloud-computing companies mix and match chips to stay ahead of the curve and keep improving the efficiency of their operations.
At the moment, it seems that there is plenty of room for Nvidia's technology to grow.
Nvidia is transforming itself into a platform company rather than a pure hardware company. The GPU will remain its core, but not the whole story; what it wants to build is a platform and an ecosystem around the GPU. The supporting facilities around GPUs, such as development platforms, developer communities, and development tools including programming languages, are just as important. The notebook-PC market is instructive: ARM processors can, on raw performance, compete with Intel's, yet there are essentially no ARM-based notebooks, because ARM has no ecosystem in that market. Once Nvidia's platform and ecosystem are in place, even if its technical progress is no longer as vigorous as before, its commercial value should still be secure.
The biggest risk Nvidia faces is that its current share price rests entirely on artificial intelligence, and it is doubtful that AI applications will develop as fast as investors think. There is arguably a sizable bubble in AI adoption right now, with investors expecting applications to take off within a year or two. If they do not, or if certain applications never really get off the ground, there may be a backlash. Today's valuation is an overshoot; once expectations go unmet there will be an undershoot, and after a few swings the stock should slowly return to a rational valuation.
Intel has focused in recent years on making more powerful CPUs rather than ASICs (application-specific integrated circuits) or FPGAs (field-programmable gate arrays).
It is widely believed that traditional processors are not going to lose their status any time soon: every server needs them, and countless applications run on them. Intel's chip sales are still growing. Even so, Alan Priestley of Gartner, an IT consultancy, argues that the rapid growth of accelerator chips is bad news for the company: the more computation gets done on those chips, the less of it runs on CPUs.
One of Intel's responses has been to play catch-up through acquisitions. In 2015 it bought Altera, an FPGA maker, for $16.7 billion; and in August it paid around $400 million for Nervana, a three-year-old startup developing a specialized AI system that spans everything from software down to chips.
Intel says it sees dedicated processors as an opportunity, not a threat. Diane Bryant, head of Intel's data-center business, says new kinds of computation are often handled first on dedicated processors and then "pulled into the CPU." Encryption, for example, was once handled by a separate semiconductor component but is now a simple instruction on an Intel CPU. Since Intel CPUs already account for nearly all of the world's processor market, running new workloads such as AI on separate accelerator chips means extra overhead and higher complexity for the organizations that buy them.
Intel is already investing in such convergence. This summer it will start selling a new processor, codenamed "Knights Mill," to compete with Nvidia. Intel is also developing another chip, "Knights Crest," which incorporates Nervana's technology, and it likewise intends to integrate Altera's FPGAs into its CPUs.
Predictably, the competitors have different views of the future.
Nvidia argues that it already has its own computing platform: many companies use its chips to develop and run AI applications, and it has created software infrastructure for other types of programs in areas such as visualization and virtual reality (VR).
Computing giant IBM is also trying to take business from Intel. In 2013, IBM open-sourced its Power processor architecture, turning it into a sort of public asset for the semiconductor industry. That makes it easier for makers of specialized chips to combine their hardware with a powerful CPU, while IBM keeps a hand on how the platform evolves.
Much depends on how AI develops. If AI does not bring the kind of change many people expect within a couple of years, Intel's chances are pretty good; but if AI continues to upend industries over the next decade or more, other kinds of processors have a much better chance of winning, said Matthew Eastwood, a market analyst at IDC.