What do you know about chip design?

This is a long introduction to the development of the semiconductor industry. Some of my characterizations of the industry may be imprecise; corrections and discussion are welcome.

First, let me explain two concepts: chip design and chip fabrication (foundry work).

They are different. Here is an example: Qualcomm, Samsung, and Huawei can all design chips, but only Samsung can manufacture its own; Qualcomm and Huawei need to find a foundry.

Samsung and TSMC are the two most widely recognized chip foundries.

Qualcomm in the US, for example, designs its own chips but does not manufacture them. Qualcomm's high-end chips are fabricated by Samsung, while the high-end chips Huawei designs are fabricated by TSMC.

Why can't the mainland produce high-end chips right now?

In chip design, we are no longer weak. Huawei's Kirin chips are developed in-house and are already very competitive at the high end.

But the fabrication of the Kirin chip did not go to a mainland manufacturer.

That is because even SMIC, currently the top foundry on the mainland, lacks the capability to produce the Kirin 970.

Huawei's Kirin 970 chip is built on a 10nm process.

The process node will be described in detail later; in short, the smaller the number, the more advanced the process. For the chips in our phones, the quality of the process largely determines the chip's performance.

All else being equal, a 7nm chip outperforms a 10nm one, and a 10nm chip outperforms a 14nm one.

By 2017, Samsung and TSMC had both mastered the most advanced 10nm process, so 10nm production was effectively monopolized by Intel, Samsung, and TSMC.

Meanwhile, SMIC, the mainland's most advanced foundry, could only produce chips at the 28nm node.

Why does the mainland lag behind in production processes?

The main reason is photolithography: lithography machines are the key to chip production.

In principle, early photolithography was as simple as a slide projector: light is projected through a mask carrying the circuit pattern onto a wafer coated with photoresist (more on wafers later). In the early lithography of the 1960s, the mask was projected 1:1 onto the wafer, which was only 1 inch in diameter at the time.

As a result, lithography was not high tech at the time, and semiconductor companies usually built their own tooling and fixtures. Intel, for example, started by buying 16mm camera lenses and disassembling them. Only a few companies such as GCA, K&S, and Kasper sold any equipment at all.

In the late 1960s, Nikon and Canon in Japan entered the field; after all, lithography at the time was no more complex than a camera.

In 1978, GCA introduced the first truly modern automated stepper, with a resolution of 1 micron, five times better than projection systems.

But at this point lithography was still a small market; a vendor selling a few dozen machines a year counted as a big player. There were only so many semiconductor manufacturers, and a machine could last for years. The result: if your machine fell even slightly behind, no one would buy it. Technological leadership was the key to capturing the market, and the winner took all.

In the early 1980s, GCA's stepper had a slight lead, but Nikon soon shipped its first commercial stepper, the NSR-1010G, which had more advanced optics and dramatically higher throughput. Together, the two began to squeeze everyone else out of the market.

By 1984, Nikon and GCA were tied at about 30% of the lithography market each, Ultratech held about 10%, and Eaton, P&E, Canon, Hitachi, and a few others split the remaining 30%.

But the turnaround also came that year. Philips had developed a stepper prototype in its lab, but it was not mature. Because the lithography market was so small, Philips could not tell whether the prototype had commercial value. It went around the United States talking to P&E, GCA, Cobilt, IBM, and others, but no one was willing to cooperate.

Coincidentally, Arthur Del Prado, the boss of a small Dutch company called ASM International, heard about it and volunteered to cooperate. His company had only some semiconductor experience and knew little about lithography; it would be half angel investor, half distributor. Philips hesitated for a year but finally agreed to a 50:50 joint venture. When ASML was founded on April 1, 1984, it had only 31 employees, working in a makeshift building outside the Philips offices.

ASML's earliest bungalow, with the Philips glass building behind it. Credit: ASML

ASML partnered with Zeiss in 1985 to improve the optics, and in 1986 it released an excellent second-generation product, the PAS-2500, whose first sale went to a then-startup American company, Cypress, today a NOR flash giant.

But then the 1986 semiconductor market crash caused serious financial problems for a raft of US lithography vendors. ASML was still small enough that it had little to lose, and it was able to keep developing new products on its existing schedule. GCA and P&E, the oldest vendors, could not cope, and their new-product development ground to a halt.

In 1988, a severely undercapitalized GCA was purchased by General Signal, and a few years later it went bankrupt without finding a buyer. In 1990, P&E's lithography business was sold to SVG.

The American trio that had held most of the market in 1980 was completely displaced by the Japanese duo by the end of the 1980s. By this time, ASML had about 10% of the market.

Setting aside marginalized US players like SVG, from the 1990s onward it was ASML and Nikon competing head-to-head while Canon watched from the sidelines.

Later, ASML introduced an immersion 193nm product, while Nikon announced its own 157nm product and completed an EPL prototype. Immersion was a small change with big results and very high product maturity, whereas Nikon's approach looked experimental, so almost no one bothered to order Nikon's new products.

This led to a rout for Nikon. Nikon was still the leader in 2000, but by 2009 ASML held nearly 70% of the market and was far ahead. The immaturity of Nikon's new products was also indirectly linked to the collective decline of the Japanese semiconductor manufacturers that relied heavily on its equipment.

As for Canon, seeing Nikon and ASML fighting so hard at the high end, it simply withdrew and went straight for the low-end lithography market. To this day it still sells 350nm and 248nm products to LCD panel and analog device makers.

Coming back to the point: the reason Intel, Samsung, and TSMC were able to produce 10nm chips in the first place is that they could import high-end lithography machines from ASML.

The mainland has no high-end lithography machines and also lacks the process technology to fully exploit low-end and mid-range ones, so for the time being it can only produce relatively backward chips.

Now let's talk about chip design. Before discussing design, we need the concepts of CPU, GPU, microarchitecture, and instruction set.

The CPU, or central processing unit, is the component responsible for a computer's main computing tasks; it functions like a human brain. You may have heard CPUs categorized as x86 or ARM, with the former used mainly in PCs and the latter mainly in devices such as phones and tablets.

A CPU follows a fixed specification when performing computation: a program must be translated into a language the CPU understands before it can execute. This language is defined by the instruction set architecture (ISA), and translating a program into CPU-executable code according to an instruction set's specification is called compiling. x86, ARMv8, and MIPS are names of instruction sets. Instruction sets can be extended. To develop a CPU compatible with a particular instruction set, a vendor needs authorization from the instruction set's patent holder; for example, Intel authorizes AMD to develop CPUs compatible with the x86 instruction set.
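To make the idea of "a language the CPU understands" concrete, here is a toy sketch: a made-up three-instruction set (not any real ISA) and a small interpreter playing the role of the CPU. Real instruction sets like x86 or ARMv8 are the same idea at vastly larger scale.

```python
# Toy, invented instruction set: each instruction is a tuple (opcode, operands).
# "LOAD r, n" puts constant n into register r; "ADD r, a, b" sets r = a + b;
# "HALT" stops execution.

def run(program):
    regs = {}  # the CPU's register file
    for op, *args in program:
        if op == "LOAD":
            r, n = args
            regs[r] = n
        elif op == "ADD":
            r, a, b = args
            regs[r] = regs[a] + regs[b]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"illegal instruction: {op}")
    return regs

# "Compiling" means translating a program (here, x = 2 + 3) into this language:
program = [("LOAD", "r1", 2), ("LOAD", "r2", 3),
           ("ADD", "r0", "r1", "r2"), ("HALT",)]
print(run(program)["r0"])  # → 5
```

A compiler for a real ISA does this same translation from source code into the target instruction set, which is why a binary compiled for x86 cannot run directly on an ARM chip.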

The basic building block of a CPU is the core, and the way a core is implemented is called its microarchitecture. Like instruction sets, microarchitectures have code names, such as Haswell and Cortex-A15. The design of the microarchitecture determines the maximum frequency the core can reach, how much computation it can perform at a given frequency, its power consumption at a given process node, and so on.

It is worth stressing that microarchitecture and instruction set are two different concepts: the instruction set is the language the CPU speaks, while the microarchitecture is a concrete implementation of it.

Take chips compatible with the ARM instruction set as an example: ARM develops its own instruction set, the ARM instruction set, and at the same time develops and licenses specific microarchitectures, such as the Cortex family.

However, a CPU that uses the ARM instruction set does not necessarily use a microarchitecture developed by ARM. Vendors such as Qualcomm and Apple have developed their own ARM-compatible microarchitectures, while many others, like Huawei with its Kirin chips, build their CPUs on microarchitectures developed by ARM. The industry generally recognizes only companies capable of independent microarchitecture development as having true CPU development capability; whether they use their own instruction set is irrelevant. Microarchitecture development is also among the most technically demanding areas of the IT industry.
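The ISA-versus-microarchitecture distinction can be sketched in software terms (a loose analogy, not real hardware): one shared "instruction set", here just the `add` method contract, with two different "microarchitectures" implementing it internally. Any program written against the contract runs unchanged on either core.

```python
# Analogy only: two classes implement the same contract ("instruction set")
# with different internals ("microarchitectures").

class SimpleCore:
    """Keeps a running accumulator: the straightforward implementation."""
    def __init__(self):
        self.acc = 0
    def add(self, n):
        self.acc += n
        return self.acc

class BufferedCore:
    """Same contract, different machinery: records operands and recomputes."""
    def __init__(self):
        self.pending = []
    def add(self, n):
        self.pending.append(n)
        return sum(self.pending)  # observable behavior identical to SimpleCore

# The same "program" runs on both cores because they speak the same language:
for Core in (SimpleCore, BufferedCore):
    cpu = Core()
    for n in (1, 2, 3):
        total = cpu.add(n)
    print(total)  # → 6 on both cores
```

This is why a Qualcomm core and an ARM Cortex core can both run the same ARM binaries despite having completely different internal designs.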

Take the Kirin 980 as an example: its main components are the CPU and GPU, and its Cortex-A76 and Mali-G76 are both microarchitectures Huawei licensed from ARM. So can Huawei develop its own microarchitecture? Certainly, but to reach the level Apple has achieved in its phone platform, there is still a long way to go, at least for now. Independent R&D would also run into all kinds of problems: developing a chip, like developing software, requires EDA tools, and licensing ARM's microarchitectures comes with a great deal of tooling support. These things are quite central, so striking out on one's own means confronting problems on every front.

Once all of these aspects have been thought through, the actual chip design can begin, and this step is also very complex and tedious.

Chip manufacturing is like building a house: the wafer is the foundation, layers are stacked on top of it, and after a series of manufacturing steps the finished IC chip emerges.

What is a wafer?

The wafer is the substrate on which all kinds of chips are made. If chip manufacturing is building a house, the wafer is the smooth foundation. Among solid materials there is a special crystal structure, the single crystal (monocrystal), whose atoms are packed tightly in a regular arrangement, forming flat atomic planes. That is why monocrystalline silicon is used for wafers. How is such a material produced? There are two main steps: purification and crystal pulling.

Purification has two stages. The first is metallurgical purification, in which carbon is added to reduce silicon oxide to silicon of more than 98% purity. But 98% is still not enough for chip manufacturing, so the Siemens process is then used to refine it further into the high-purity polysilicon needed for semiconductor manufacturing.

Next comes crystal pulling.

First, the high-purity polysilicon obtained earlier is melted into liquid silicon. Then a single-crystal seed is touched to the surface of the melt and slowly pulled upward while rotating. The seed is needed because silicon atoms are like people forming a line: they need a leader so that those who follow know how to arrange themselves correctly. As the silicon solidifies on leaving the liquid surface, a neatly ordered single-crystal silicon ingot is formed.

But a whole silicon ingot cannot serve as a substrate for chip manufacturing. To produce individual wafers, a diamond saw slices the ingot horizontally into round discs, which are then polished into the silicon wafers that chip manufacturing requires.

What do 8-inch and 12-inch wafers refer to? The diameter of the disc after slicing and surface treatment. The larger the diameter, the higher the demands on pulling speed and temperature control, and the harder the wafer is to make.
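The appeal of larger wafers is simple geometry: area grows with the square of the diameter. A quick sketch of the arithmetic (ignoring edge exclusion):

```python
import math

def wafer_area_mm2(diameter_inch):
    """Area of a circular wafer in mm^2, ignoring edge exclusion."""
    d_mm = diameter_inch * 25.4  # 1 inch = 25.4 mm
    return math.pi * (d_mm / 2) ** 2

# A 12-inch wafer has (12/8)^2 = 2.25 times the area of an 8-inch one,
# so each pull of the ingot ultimately yields far more chips.
print(round(wafer_area_mm2(12) / wafer_area_mm2(8), 2))  # → 2.25
```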

After all these steps, the chip substrate is finally complete, and the next stage is chip fabrication itself. How is a chip made?

The IC chip's full name is integrated circuit; as the name suggests, it is a designed circuit, integrated by stacking and combining layers.

In the image above, the blue part at the bottom is the wafer, and the red and yellow parts above it are where the IC is built, just as a house is built on top of its foundation.

Look at the red part first: it is the most important part of the whole IC, where various logic gates are combined to form a fully functional chip. Continuing the house metaphor, it is like the first floor built directly on the foundation.

The yellow part is less complex; its main role is wiring, connecting the logic gates in the red part together. So many layers are needed because there are too many wires to fit: if a single layer cannot hold them all, several more layers are added, with the wires on different layers connected vertically to meet the routing needs.

Then fabrication of these parts begins:

Making an IC involves four basic steps. The actual manufacturing flow varies and uses different materials, but the principles are broadly similar.

After these steps, a single wafer carries many ICs; the completed rectangular dies are then cut out and sent to a packaging plant for packaging.
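How many dies come out of one wafer? A common first-order approximation divides wafer area by die area and subtracts a term for partial dies lost at the round edge. This is a rough sketch, not a foundry formula, and the 10 mm × 10 mm die size below is made up for illustration:

```python
import math

def gross_dies(wafer_diameter_mm, die_w_mm, die_h_mm):
    """Rough gross-die estimate: wafer area over die area, minus an
    edge-loss term proportional to the wafer circumference.
    First-order approximation; ignores scribe lines and defects."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    die_area = die_w_mm * die_h_mm
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

# Hypothetical 10 mm x 10 mm die on a 300 mm (12-inch) wafer:
print(gross_dies(300, 10, 10))  # → 640
```

This is also why the 2.25× area advantage of 12-inch over 8-inch wafers matters so much: the cost per wafer rises far less than the number of sellable dies.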

Packaging:

After this long process, you finally have an IC chip. But the die is small and thin, and is easily scratched or damaged without external protection. Moreover, because it is so tiny, it is hard to mount on a circuit board without a larger casing. That is why the final packaging step is needed.

There are many types of packages, including the dual in-line package (DIP), the ball grid array (BGA), the system on chip (SoC), and the system in package (SiP).

After packaging comes the testing phase, which confirms that the packaged ICs function properly. Once tested, they can be shipped to assembly plants and made into the electronic products we see.

This completes the entire production process.