C/S mode for short, i.e., the client runs the client-side program and transmits its request to the server; the server runs the server-side program, receives the request, runs the corresponding processing, and returns the result to the client.
5) Barton
Uses a 0.13um manufacturing process, with a core voltage of about 1.65V, an L2 cache of 512KB, and an OPGA package.
(3) Core types of the new Duron
1) Applebred
Uses a 0.13um manufacturing process, a core voltage of around 1.5V, 64KB of L2 cache, an OPGA package, and a 266MHz front side bus. These chips are labeled with their actual frequencies of 1.4GHz, 1.6GHz, and 1.8GHz rather than a nominal PR rating.
(4) Core types of Athlon 64 series CPUs
1) Sledgehammer
Sledgehammer is the core of AMD's server CPUs. It is a 64-bit core, typically on the Socket 940 interface, built on a 0.13-micron process.
Sledgehammer is powerful: it integrates three HyperTransport buses, uses a 12-stage pipeline, carries 128KB of L1 cache and 1MB of integrated L2 cache, and can be used in servers with anywhere from one to eight CPUs. Its integrated memory controller has lower latency than a memory controller traditionally located on the Northbridge, and it supports dual-channel DDR memory; since it is a server CPU, it of course supports ECC checking.
2) Clawhammer
Uses a 0.13um manufacturing process, with a core voltage of around 1.5V, 1MB of L2 cache, mPGA packaging, a HyperTransport bus, and a built-in 128-bit memory controller. Socket 754, Socket 940, and Socket 939 interfaces are available.
3) Newcastle
The main difference between Newcastle and Clawhammer is that the L2 cache has been halved to 512KB (a result of AMD's low-price strategy to speed up the market rollout of 64-bit CPUs); the rest of the specifications are basically the same.
5) Winchester
Winchester is a relatively new AMD Athlon 64 core: a 64-bit CPU, generally on the Socket 939 interface, built on a 0.09-micron process. This core uses a 200MHz external frequency, supports a 1GHz HyperTransport bus, has 512KB of L2 cache, and offers a good price/performance ratio. Winchester integrates a dual-channel memory controller and supports dual-channel DDR memory; thanks to the new process, it generates less heat than older Athlon cores and performs better.
5) Troy
Troy was AMD's first Opteron core on the 90nm manufacturing process, based on Sledgehammer with a number of new technologies added. It typically has 940 pins, 128KB of L1 cache, and 1MB (1024KB) of L2 cache. It also uses a 200MHz external clock, supports a 1GHz HyperTransport bus, integrates a memory controller supporting dual-channel DDR 400 memory, and can support ECC memory. The Troy core also adds SSE3 support, the same as Intel's Xeon. Overall, Troy is a good CPU core.
6) Venice
The Venice core is an evolution of the Winchester core and has essentially the same technical specifications: the same X86-64 architecture, an integrated dual-channel memory controller, 512KB of L2 cache, a 90nm manufacturing process, a 200MHz external frequency, and support for a 1GHz HyperTransport bus. Venice's changes lie mainly in three areas: first, it uses Dual Stress Liner (DSL) technology, which increases the response speed of the transistors by 24%, giving the CPU more frequency headroom and making it easier to overclock; second, it adds SSE3 support, the same as Intel's CPUs; third, it further improves the memory controller, raising processor performance to a certain extent and, more importantly, improving the controller's compatibility with different DIMM modules and configurations. In addition, the Venice core uses dynamic voltages, which may vary from CPU to CPU.
7) San Diego
The San Diego core, like Venice, is an evolution of the Winchester core, and its technical parameters are very close to Venice's; every new technology and feature that Venice has, San Diego has as well. However, AMD has positioned San Diego above the top Athlon 64 processors, even for server CPUs; it can be thought of as an advanced version of the Venice core with the L2 cache doubled from 512KB to 1MB. Naturally, the larger L2 cache also enlarges the die, from 84 square millimeters on Venice to 115 square millimeters, and the price is correspondingly higher.
(5) Core types of the Sempron family of CPUs
1) Paris
The Paris core is the successor to the Barton core and was used mainly in AMD's Sempron line, appearing in some of the earlier Socket 754 Semprons. Paris is a 32-bit core derived from the K8, and therefore also has a built-in memory control unit. The main advantage of building the memory controller into the CPU is that it runs at the CPU's frequency, with lower latency than a memory controller traditionally located on the Northbridge. With the Paris core, the Sempron delivers a significant performance boost over the Socket A Sempron CPUs.
2) Palermo
The Palermo core is currently used in AMD's Sempron CPUs, which use the Socket 754 interface, a 90nm manufacturing process, a core voltage of around 1.4V, a 200MHz external frequency, and either 128KB or 256KB of L2 cache. Palermo is derived from the K8's Winchester core but is 32-bit. In addition to sharing the internal architecture of AMD's high-end processors, it also features AMD-exclusive technologies such as EVP, Cool'n'Quiet, and HyperTransport, giving users a cooler-running processor with higher computing power. Because it is derived from the Athlon 64, Palermo also has a built-in memory control unit; the main advantage of building the memory controller into the CPU is that it runs at the CPU's frequency, with lower latency than a memory controller traditionally located on the Northbridge.
(6) Dual-core types
Before 2005, processor frequency was the focus of competition between the two processor giants, Intel and AMD, and frequencies reached one peak after another under their rivalry. But as frequencies climbed, it became clear that frequency increases alone no longer brought significant gains in overall system performance, while high frequencies saddled processors with enormous heat output. Worse still, both Intel and AMD were finding further frequency increases difficult to sustain. Under these circumstances, the two companies turned, almost in unison, toward multi-core designs. Developing existing products into theoretically more powerful multi-core processors, without the need for a large-scale redesign, was certainly a wise choice.
A dual-core processor is a processor with two functionally identical processor cores on a single semiconductor die, i.e., two physical cores integrated into one chip. The dual-core architecture is not in fact new; previously the preserve of servers, dual-core processors are now becoming commonplace.
1) Introducing Intel's Dual-Core Processors
Currently, Intel has introduced two dual-core processors, the Pentium D and the Pentium Extreme Edition, along with the 945/955 chipsets to support them. Both new dual-core processors are produced on the 90nm process and use the pinless LGA 775 interface, though the number of chip capacitors on the bottom of the processor has increased and their arrangement differs.
Figure 18
The desktop dual-core processor, codenamed Smithfield, is officially named the Pentium D. Besides moving from Arabic numerals to a letter to mark the generational change, the letter D is also reminiscent of "Dual-Core".
Figure 19 Dual-core Pentium D processor with case removed
Figure 20 Dual-core architecture internal schematic
Intel's dual-core architecture is more of a dual-CPU platform. The Pentium D continues to be produced with the Prescott architecture and 90nm production technology; its core actually consists of two separate Prescott cores, each with its own 1MB L2 cache and execution units, for a combined 2MB of L2 cache. But because each core has a separate cache, the information held in the two L2 caches must be kept consistent, or computational errors will occur.
Figure 21 The MCH coordinates calls between the two cores
To solve this problem, Intel leaves the coordination between the two cores to the external MCH (Northbridge) chip. Although the volume of data transferred and stored between the caches is not huge, routing the coordination through the external MCH chip inevitably adds latency and thus drags down the processor's overall performance.
Thanks to the Prescott core, the Pentium D also supports EM64T technology and the XD bit security feature. Note, however, that the Pentium D does not support Hyper-Threading. The reason is obvious: it is hard to correctly distribute data flows and balance computational tasks across multiple physical processors and multiple logical processors at once. For example, if an application needs two threads, each thread clearly maps to one physical core, but what if there are three threads? So to reduce the complexity of the dual-core architecture, Intel removed Hyper-Threading support from the mainstream-market Pentium D.
The different names of the Pentium D and Pentium Extreme Edition dual-core processors, both from Intel, hint at different specifications. One of the biggest differences is Hyper-Threading support: the Pentium D lacks it, while the Pentium Extreme Edition has it. With Hyper-Threading enabled, a dual-core Pentium Extreme Edition presents two additional logical processors and is recognized by the system as a four-processor platform.
2) Introducing AMD's Dual-Core Processors
AMD's dual-core processors are the dual-core Opteron series and the new Athlon 64 X2 series. The Athlon 64 X2 is a family of desktop dual-core processors designed to rival the Pentium D and Pentium Extreme Edition.
Figure 22
AMD's Athlon 64 X2 combines two Venice-class Athlon 64 cores, each with its own 512KB or 1MB L2 cache and execution units. Aside from the extra core, there are no major architectural changes from the current Athlon 64.
Figure 23 Athlon 64 X2 (left) versus regular Athlon 64
Most of the specifications and features of the dual-core Athlon 64 X2 are the same as those of the familiar Athlon 64 architecture: the new Athlon 64 X2 still supports the 1GHz HyperTransport bus and has a built-in DDR memory controller supporting dual-channel configurations.
Unlike Intel's dual-core processors, the Athlon 64 X2's two cores do not need to coordinate through the MCH. AMD provides a technology called the System Request Queue (SRQ) inside the Athlon 64 X2: during operation each core places its requests in the SRQ, and when resources are available the requests are dispatched to the appropriate execution core. In other words, all coordination is done within the CPU, with no need for external devices.
Figure 24 AMD Athlon 64 X2 internal schematic
In its dual-core architecture, AMD integrates the two cores on the same silicon die, whereas Intel's approach is more like simply putting two cores side by side. Compared with Intel's design, AMD's dual-core processors do not suffer from a transfer bottleneck between the two cores; in this respect the Athlon 64 X2 architecture is clearly superior to the Pentium D's.
Although AMD, unlike Intel, does not have to contend with a power- and heat-hungry core like Prescott, it still needed ways to cut the power consumption of dual-core processors. Rather than lowering the clock speed, AMD applied its Dual Stress Liner strained-silicon technology in the 90nm Athlon 64 X2, which works together with SOI to produce higher-performance, lower-power transistors.
The most tangible benefit of AMD's Athlon 64 X2 is that users can adopt the new dual-core processors without changing platforms; a BIOS upgrade on an older motherboard is enough. This makes upgrading to a dual-core system cheaper than on the Intel side, where a whole new platform is required to support the dual-core processors.
Front Side Bus
A bus is a set of transmission lines that carry information from one or more source components to one or more destination components.
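As a concrete illustration (my own sketch, not part of the original text), the peak bandwidth of a bus such as the front side bus is commonly estimated as width × clock frequency × transfers per clock:

```python
def bus_bandwidth_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
    # Peak bandwidth: bytes per transfer x effective transfers per second.
    return (width_bits / 8) * clock_mhz * transfers_per_clock

# A hypothetical quad-pumped 200MHz, 64-bit front side bus
# (marketed as an "800MHz FSB"):
print(bus_bandwidth_mb_s(64, 200, 4))  # -> 6400.0 MB/s
```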
6. DDR memory vs SDR memory?
DDR memory is faster than SDR memory.
DDR (Double Data Rate) transfers data on both the rising and falling edges of the clock, so at the same clock frequency it moves twice as much data per cycle; SDR (Single Data Rate) is the older technology and transfers data only once per clock.
Even when the capacity is the same, the effective speed is definitely not: the difference is large, and so is the price.
At today's prices there is almost no reason to buy SDR memory anymore; the price difference is only a few dollars, while the performance gap is huge.
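To make the gap concrete, here is a small illustrative calculation (a sketch of my own, not from the original answer). DDR module names encode this: DDR-266 (PC2100) runs the same 133MHz clock as PC133 SDR but transfers twice per cycle:

```python
# Peak single-channel bandwidth over a 64-bit (8-byte) memory bus.
def peak_mb_s(clock_mhz, transfers_per_clock):
    return clock_mhz * transfers_per_clock * 8  # 8 bytes per transfer

print(peak_mb_s(133, 1))  # PC133 SDR        -> 1064 MB/s
print(peak_mb_s(133, 2))  # DDR-266 (PC2100) -> 2128 MB/s, roughly double
```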
7. What is AGP?
AGP (Accelerated Graphics Port) is a new generation of local graphics bus technology developed by Intel. AGP has two core elements: first, it uses the PC's main memory as an extension of graphics memory, greatly increasing the potential capacity of video memory; second, it uses higher bus frequencies, 66MHz, 133MHz, or even 266MHz, greatly improving the data transfer rate. The AGP bus is a dedicated display bus that separates the graphics card from the PCI bus, letting it work independently of PCI sound cards, SCSI devices, network devices, I/O devices, and other equipment, which in turn improves efficiency. The biggest beneficiaries of AGP are 3D applications, above all 3D games.
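As an illustration of those frequencies (my own sketch, not from the original text), AGP is a 32-bit bus with a 66MHz base clock, and the 2x/4x/8x modes raise the number of transfers per clock:

```python
AGP_WIDTH_BYTES = 4       # 32-bit bus
AGP_BASE_CLOCK_MHZ = 66   # nominally 66.66MHz

for mode, transfers in [("1x", 1), ("2x", 2), ("4x", 4), ("8x", 8)]:
    rate = AGP_WIDTH_BYTES * AGP_BASE_CLOCK_MHZ * transfers
    print(f"AGP {mode}: ~{rate} MB/s")
# Prints ~264/528/1056/2112 MB/s; with the exact 66.66MHz clock these are
# the commonly quoted 266/533/1066/2133 MB/s figures.
```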
8. What are the components controlled by the Southbridge and Northbridge chips on a motherboard?
The Southbridge and Northbridge chips are the soul of the motherboard; their performance and technical characteristics determine what hardware the motherboard can be paired with.
The Northbridge chip is mainly responsible for data exchange and transmission between the CPU and memory, so it directly determines which CPUs and what memory the motherboard can support. In addition, the Northbridge also handles the control, management, and transmission of the AGP bus or PCI-E x16. Overall, the Northbridge chip mainly takes care of connecting devices with high data transfer rates.
The Southbridge chip is responsible for connecting devices with lower transfer rates. Specifically, it handles communication, management, and transmission for USB 1.1/2.0, AC'97 sound, 10/100/1000M NICs, PATA devices, SATA devices, PCI bus devices, serial and parallel devices, RAID arrays, and external wireless devices. Of course, the Southbridge cannot realize so many functions on its own; it works together with other function chips so that all kinds of low-speed devices can operate normally.
9. What is the minimal system, and what parts is it composed of?
Minimal system method:
The minimal system method refers to the most basic hardware and software environment that, from a troubleshooting standpoint, can still make the computer boot or run.
The minimum system has two forms:
Hardware minimal system: consists of the power supply, motherboard, and CPU. In this system there are no signal cables, only the power connection from the power supply to the motherboard. Whether these core components work properly is judged by sound (the POST beeps).
Software minimal system: consists of the power supply, motherboard, CPU, memory, graphics card/monitor, keyboard, and hard disk. This minimal system is mainly used to determine whether the system can complete normal startup and operation.
For the software minimal system, a few points about the software environment should be noted:
1. A hard disk that retains its original software environment, used mainly to analyze and diagnose problems with application software;
2. A hard disk with only a basic operating system environment (all applications uninstalled, or a clean operating system freshly installed), used to determine whether the problem is a system issue, a software conflict, or a conflict between hardware and software;
3. Under the software minimal system, appropriate hardware can be added or swapped as needed. For example, when diagnosing a startup failure caused by a hard disk that will not boot, and you want to check whether the machine can start from another drive, add a floppy drive to the software minimal system; when diagnosing audio problems, add a sound card; when diagnosing network problems, add a network card; and so on. The minimal system method first determines whether the system can work properly in the most basic hardware and software environment. If it cannot, the fault must lie in those most basic components, which serves to isolate the fault.
Combining the minimal system method with step-by-step addition of components can quickly locate faults in other boards and software, improving maintenance efficiency.
A single-chip microcomputer, also known as a microcontroller, is not a chip that performs a single logic function but a complete computer system integrated onto one chip. In a word: a chip becomes a computer.
Its small size, light weight, and low price provide convenient conditions for learning, application, and development. At the same time, learning to use a microcontroller is the best way to understand the principles and structure of computers.
It can be said that the twentieth century spanned three "electric" eras: the electrical era, the electronic era, and now the computer era. The computer usually referred to, however, is the personal computer, or PC for short, consisting of a main unit, keyboard, monitor, and so on (as shown in Figure 1). There is another class of computers most people are less familiar with: the microcontroller (also called a single-chip microcomputer, shown in Figure 2), which gives intelligence to all kinds of machinery. As the name implies, this smallest computer system uses only a single integrated circuit to perform simple operations and control. Because of its small size, it is usually hidden in the "belly" of the machinery it controls. Within the whole device it plays the role of the human brain: if it goes wrong, the whole device is paralyzed. Microcontrollers are now used in a very wide range of fields, such as intelligent instrumentation, real-time industrial control, communications equipment, navigation systems, and home appliances. Once a product adopts a microcontroller, the product is effectively upgraded, often gaining the adjective "intelligent" in its name, such as the intelligent washing machine. Nowadays, when factory technicians or amateur electronics developers design a product, either the circuit is too complex, or the function is too simple and easily imitated; the reason is often that the product does not use a microcontroller or another programmable logic device.
Currently, microcontrollers permeate every area of our lives; it is hard to find a field untouched by them. Missile guidance systems, aircraft instrumentation, computer network communications and data transmission, industrial process automation, real-time control and data processing, the ubiquitous smart IC cards, automotive safety systems, video recorders, camcorders, fully automatic washing machine controls, programmable toys and electronic pets, not to mention robotics, intelligent instruments, and medical equipment: none of these can do without the microcontroller. The study, development, and application of microcontrollers will therefore produce a generation of scientists and engineers in computer applications and intelligent control.
10. What are the common interfaces on the chassis panel?
USB interface, IEEE 1394 interface, MIDI interface, audio input interface, audio output interface, etc.
11. How many types of hard disk interface are there, and what are their characteristics?
Classified by interface, hard disks fall into the following types. ST-506/412 interface: a hard disk interface developed by Seagate; the first hard disks to use it were Seagate's ST-506 and ST-412. The ST-506 interface is quite easy to use and requires no special cables or connectors, but the transfer speed it supports is very low, so by around 1987 it had been essentially phased out; most of the old hard disks using this interface had capacities below 200MB. The hard disks used in the early IBM PC/XT and PC/AT machines were ST-506/412 hard disks, also called MFM hard disks; MFM (Modified Frequency Modulation) refers to their encoding scheme.
ESDI interface: the Enhanced Small Disk Interface (ESDI) was developed by Maxtor in 1983. Its distinguishing feature is that the codec is placed in the drive itself rather than on the controller card; its theoretical transfer speed is 2 to 4 times that of ST-506, generally up to 10Mbps. However, it cost more and offered no advantage over the later IDE interface, so it was phased out in the 1990s.
IDE and EIDE interfaces: IDE (Integrated Drive Electronics) actually refers to a hard disk drive with the controller integrated into the drive body. What we usually call the IDE interface is also known as the ATA (Advanced Technology Attachment) interface. Most hard disks used in PCs today are IDE-compatible and connect to the motherboard or an interface card with a single cable. Integrating the platters with the controller reduces the number and length of cables at the hard disk interface, increases the reliability of data transfer, and makes hard disks easier to manufacture, since manufacturers no longer need to worry about whether their drives are compatible with controllers made by other vendors; it also makes installation easier for the user.
ATA-1 (IDE): ATA is the official name of the earliest IDE standard; IDE actually refers to the hard disk interface to which the drive itself attaches. ATA provides one socket on the motherboard that supports a master device and a slave device, each with a maximum capacity of 504MB. The earliest PIO-0 mode (Programmed I/O-0) supported by ATA reaches only 3.3MB/s; ATA-1 specifies 3 PIO modes and 4 DMA modes in total (the DMA modes saw no practical use). Upgrading to ATA-2 requires installing an EIDE adapter.
ATA-2 (EIDE, Enhanced IDE/Fast ATA): This is an extension of ATA-1. It adds 2 PIO and 2 DMA modes, raises the maximum transfer rate to 16.7MB/s, and introduces LBA address translation, breaking the 504MB limit inherent in old BIOSes and supporting hard disks of up to 8.1GB. If your computer supports ATA-2, you will find LBA (Logical Block Address) or CHS (Cylinder, Head, Sector) settings in the CMOS setup. Its two sockets can each connect a master and a slave device, thus supporting four devices in total; the sockets themselves are also divided into primary and secondary. The fastest hard disks and CD-ROMs are usually placed on the primary socket, lesser devices on the secondary. This placement was necessary on 486 and early Pentium computers, where the primary socket connected to the fast PCI bus and the secondary socket to the slower ISA bus.
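As an illustration of the LBA translation mentioned above (a sketch of the standard formula, not code from the original text), the classic CHS-to-LBA conversion and the arithmetic behind the 504MB barrier look like this:

```python
def chs_to_lba(c, h, s, heads_per_cyl=16, sectors_per_track=63):
    # Standard CHS -> LBA translation; sector numbering starts at 1.
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# The old BIOS/ATA addressing limit:
# 1024 cylinders x 16 heads x 63 sectors x 512 bytes per sector
limit_bytes = 1024 * 16 * 63 * 512
print(limit_bytes / 2**20)   # -> 504.0 MiB, the famous "504MB" barrier
print(chs_to_lba(0, 0, 1))   # -> 0, the first sector on the disk
```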
12. What are NTFS permissions and what are the conditions for setting them?
To set NTFS permissions on a folder, the conditions are as follows:
1. All users can create their own files, and can modify and delete their own files;
2. All users can read other users' files, but cannot modify or delete them;
3. Even with a large number of users (say 2000), setup must be easy: once configured, new users in the domain require no further permission changes.
Solution:
In the folder's Security tab:
Everyone group: Read & Execute, List Folder Contents, Read.
Advanced properties: additionally allow Everyone the two special permissions "Create Files / Write Data" and "Create Folders / Append Data".
Administrators group: Full Control.
CREATOR OWNER: Full Control.
Right-click the folder and choose Properties -> Security -> Advanced -> Effective Permissions -> Select -> Advanced -> Find Now -> select the user you want to control; after two OKs you reach that user's permission entry list for editing, where you untick "Delete Subfolders and Files" and "Delete".
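For reference, the same DACL could also be scripted. This is a minimal sketch assuming the third-party pywin32 package on Windows; the folder path and the exact permission masks are illustrative assumptions, not from the original answer:

```python
import win32security
import ntsecuritycon as con

def add_allow_ace(path, account, access_mask):
    # Append an inheritable allow-ACE for `account` to the folder's DACL.
    sd = win32security.GetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    sid, _, _ = win32security.LookupAccountName(None, account)
    inherit = con.OBJECT_INHERIT_ACE | con.CONTAINER_INHERIT_ACE
    dacl.AddAccessAllowedAceEx(win32security.ACL_REVISION_DS, inherit,
                               access_mask, sid)
    sd.SetSecurityDescriptorDacl(1, dacl, 0)
    win32security.SetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION, sd)

folder = r"D:\Shared"  # hypothetical folder
# Everyone: read/execute plus create-file and create-folder, but no delete.
add_allow_ace(folder, "Everyone",
              con.FILE_GENERIC_READ | con.FILE_GENERIC_EXECUTE |
              con.FILE_ADD_FILE | con.FILE_ADD_SUBDIRECTORY)
# Administrators and CREATOR OWNER: full control.
add_allow_ace(folder, "Administrators", con.GENERIC_ALL)
add_allow_ace(folder, "CREATOR OWNER", con.GENERIC_ALL)
```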
13. A computer is regularly used for file editing, moving, copying, and deleting. After a period of time the machine runs much more slowly; a scan with antivirus software finds no viruses, and after reinstalling the operating system the machine slows down again after a while. How can this problem be solved?
This is likely excessive disk fragmentation. Simply type "dfrg.msc" in the Run dialog to open the Disk Defragmenter and defragment the disk.