One, a preliminary understanding of big data
Seemingly overnight, big data has become one of the most fashionable terms in the IT industry.
First of all, big data is not something completely new. Google's search service is a typical application: in response to each query, Google identifies, in real time and from the world's massive store of digital assets (or digital garbage), the answers most likely to be relevant and presents them to you. That is a classic big-data service. In the past, however, processing at such a scale with commercial value was so rare that the IT industry never formed a settled concept around it. Now, with global digitization, broadband networks and Internet applications in every industry, the volume of accumulated data keeps growing, and more and more enterprises, industries and countries are discovering that similar technology can serve customers better, uncover new business opportunities, open new markets and improve efficiency. Out of this, the concept of big data gradually took shape.
There's an interesting story about luxury marketing. In its New York flagship store, PRADA attaches an RFID tag to every garment. Whenever a customer takes a piece into the fitting room, the tag is automatically read and the data is transmitted to PRADA headquarters: which garment, in which city and which flagship store, was taken into a fitting room at what time and for how long, all stored and analyzed. If a garment sells poorly, the old practice was simply to kill it off. But if the RFID data shows that although the garment sells poorly it is taken into the fitting room unusually often, that points to a different problem: perhaps a small change in detail would turn it into a very popular product.
A single piece of data has no value, but as more and more data accumulates, quantitative change becomes qualitative change. One person's opinion does not matter; the opinions of a thousand or ten thousand people carry weight; millions of people can set off huge waves; hundreds of millions are enough to change everything.
More data, though, is worthless if it is blocked or left unused. Flights in China are notoriously late, while flights in the U.S. are far more punctual, and the U.S. air-traffic authorities deserve part of the credit for a very simple practice: they publish, for every airline and every flight, the past year's delay rate and average delay time. Customers buying tickets naturally choose the flights with the best punctuality, and this market pressure pulls the airlines into improving it. This simple method is more direct and effective than any administrative tool (such as the Chinese government's macro-control levers). A sentence or two more on this: in the past, a tyrannical state controlled its people mainly through physical violence, powerful institutions with unlimited power practicing state terrorism; today such a state controls them mainly by monopolizing and blocking information, so that the public cannot obtain broad, truthful information. That information blockade is a blockade of big data.
Without integration and mining of data, its value never surfaces. Bradley Cooper's character in "Limitless" would have been worthless had he not been able to pull together and connect the massive amounts of information surrounding a particular company's stock price.
Therefore, what I understand as big data is this: the generation, acquisition, mining and integration of massive data in ways that reveal great commercial value. Now that the Internet is reconfiguring everything, none of these steps is an obstacle, because in my view big data is the next wave of applications in the Internet's deepening development, a natural extension of it. Big data has arguably reached a tipping point, which is exactly why it has become one of the hottest words in the IT industry.
Two, big data will reconfigure the business thinking and business model of many industries
Let me open this topic with some wild imagination about the future of the automotive industry.
Over a lifetime, a car is a huge investment. Take a 300,000-yuan car on a seven-year replacement cycle: depreciation alone is more than 40,000 yuan a year (not counting the cost of capital), and adding parking, insurance, fuel, repairs, maintenance and other expenses brings annual spending to around 60,000 yuan. The automobile industry also leads a very long chain of related industries, comparable only to real estate in this respect.
Yet the automotive industry chain is inefficient and slow to change. The car has always been "four wheels, a steering wheel and two rows of sofas" (in Li Shufu's words). For such an expensive object, the data generated around the car is pitifully small, and almost no data flows between links in the industry chain.
Let's go wild here and imagine what would happen if cars were fully digitized and big data became available.
Some will say: isn't digitizing a car just adding an MBB (mobile broadband) module? No, that is far too trivial. In my ideal, digitization means the car can connect to the Internet at any time; that the car is a large computing system attached to the traditional wheels, steering wheel and sofas; that it can navigate digitally and drive itself; and that every action related to the car is digitized: every repair, every route driven, every accident video, the daily status of key components, even every driving habit (every braking and acceleration) is recorded. A car like this could generate terabytes of data every month, or even every week.
Now assume that all this data can be stored and shared with the relevant governments, industries and businesses. Without getting into the implications of privacy here, assume the data can be shared freely, with privacy protections in place.
So what does an insurance company do? It takes all your data, models and analyzes it, and establishes a few important facts: you drive mainly to and from work, and the Nanshan-to-Bantian route is a quiet one with very few traffic lights and a very low accident rate over the past year; your car's condition (its age and model) is good, and that model has a low accident rate across all of Shenzhen; even your driving habits are in the statistics: steady refueling and acceleration, few sudden brakes, little overtaking, a proper distance kept from surrounding cars, in short, good habits. The conclusion: good model, good condition, good habits, low-risk routes, no accident in the past year, so you can be offered a substantial discount. In this way the insurance company completely reconstructs its business model. Before big data, insurers classified car-insurance customers crudely into just four types: no accident for two consecutive years; no accident in the past year; one accident in the past year; two or more accidents in the past year. That is as crude as a woman deciding whom she dares marry by sorting men into four types: never married, married once, married twice, married three or more times. With big data, an insurer can be genuinely customer-centric, dividing customers into thousands of types with a personalized plan for each: bold discounts for low-risk customers, high quotes or outright refusal for high-risk ones. An ordinary insurer will find it almost impossible to compete with such a company. Insurers that own and use big data will hold an overwhelming advantage over traditional ones, and big data will become the core competitiveness of the insurance business, because insurance is a business built on assessing probabilities, and big data is the sharpest weapon for assessing probabilities accurately, practically tailor-made for the job.
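To make the idea concrete, here is a minimal sketch of such telematics-based pricing. Everything in it, the feature names, thresholds, multipliers and baseline premium, is an invented assumption for illustration; a real insurer would fit all of this from claims data rather than hand-pick it.

```python
# A minimal, illustrative sketch of telematics-based premium pricing.
# Feature names, weights, and thresholds are invented for illustration;
# a real insurer would fit these from claims data.

BASE_PREMIUM = 5000.0  # yuan per year, an assumed baseline


def risk_score(profile: dict) -> float:
    """Combine driving-behaviour features into a rough risk multiplier."""
    score = 1.0
    score *= 0.85 if profile["accidents_last_year"] == 0 else 1.40
    score *= 0.90 if profile["hard_brakes_per_100km"] < 2 else 1.20
    score *= 0.95 if profile["route_accident_rate"] < 0.01 else 1.15
    score *= 0.92 if profile["model_accident_rate"] < 0.02 else 1.10
    return score


def quote(profile: dict) -> float:
    return BASE_PREMIUM * risk_score(profile)


commuter = {
    "accidents_last_year": 0,
    "hard_brakes_per_100km": 1.2,   # gentle driving habits
    "route_accident_rate": 0.004,   # quiet commute, few traffic lights
    "model_accident_rate": 0.015,   # low-accident model citywide
}
print(f"annual premium: {quote(commuter):.0f} yuan")  # well below baseline
```

The point is not the particular numbers but the shape of the business: thousands of customer types fall out of a model like this automatically, instead of four hand-drawn categories.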
With big data, the service of 4S dealerships is completely different as well. Vehicle-condition data is transmitted to the dealership regularly; the dealership reminds the owner in good time to maintain and repair the car, may even intervene remotely (with the customer's consent) for problems that threaten safety, and can stock parts in advance so the car can be repaired as soon as it arrives, with no waiting.
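A hedged sketch of the dealership side of this, assuming hypothetical telemetry field names and service thresholds; the point is only that reminders and parts pre-ordering fall out of a few comparisons once the data is flowing.

```python
# Illustrative sketch of the 4S-dealership scenario: telemetry streamed
# from the car is checked against thresholds. Field names and limits are
# assumptions made up for this example.

TELEMETRY = {"brake_pad_mm": 2.5, "oil_life_pct": 12, "battery_v": 12.1}

THRESHOLDS = {
    "brake_pad_mm": (3.0, "order brake pads, book a service slot"),
    "oil_life_pct": (15, "schedule an oil change"),
    "battery_v": (11.8, "test battery at next visit"),
}

for metric, (limit, action) in THRESHOLDS.items():
    if TELEMETRY[metric] < limit:
        print(f"ALERT {metric}={TELEMETRY[metric]}: {action}")
```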
For drivers who don't want to drive, vehicles can drive themselves with the support of big data and artificial intelligence, self-learning and self-optimizing on the routes you drive most often. Google's self-driving car collects almost 1GB of data every second in order to predict its surroundings; without big data, autonomous driving is inconceivable. When another vehicle gets too close, the car warns the owner to take evasive action; on your commute, it uses real-time traffic data to warn you of congestion on your usual route and helps you pick the best alternative.

In an emergency such as a tire blowout, the self-driving system takes over automatically to improve safety (a person may never experience a blowout in a lifetime, and human reactions in such emergencies are often disastrous, only making things worse). In the city center, finding a parking space is a real nuisance; in the future you will be able to step out at the mall entrance and let the car find a space by itself, then notify it ahead of your return so it drives over to pick you up.
Vehicles are the largest and most active moving objects in a city, a source of congestion and one of the biggest sources of pollution, so digitized vehicles and big-data applications will bring many changes. Traffic lights can be optimized automatically, adjusting to congestion on different roads, and in many places could even be eliminated. Urban parking can be greatly optimized too: parking spaces can be designed around big data, and combined with self-driving, the parking garage can evolve radically. Garages dedicated to self-driving cars could run dozens of floors underground or above ground, with each level only slightly taller than a car (or with cars stacked), which would have a huge impact on urban planning. In emergencies, such as a road collapse ahead, surrounding vehicles (especially those heading toward the collapsed road) can be notified immediately. The fuel tax could change revolutionarily as well: charging by distance traveled, or even by actual emissions; low-emission cars could even trade carbon credits, selling allowances to gas-guzzlers. The government could publish each model's actual annual emissions, taxes, safety ratings and other indicators, encouraging the public to buy more energy-efficient, safer cars.
The e-commerce and courier industries could also change dramatically. Delivery vehicles could all be self-driving, skipping the congested daytime roads to drive in the middle of the night, dropping parcels into password-protected automatic receiving boxes at your door, the way newsboys used to deliver the paper.
Following this chain of imagination, I believe that automotive digitization, the Internet, big data and artificial intelligence will bring unimaginably large changes, an industrial revolution, to the automotive industry and its long chain of related industries; given enough imagination, the whole of it may be completely reconstructed. Of course, realizing the scenes I have described will take at least fifty or a hundred years; I expect I will not see it in my lifetime.
The next piece of imagination centers on people themselves. Humanity's digital existence is only a few decades old. My grandparents had photographs only toward the end of their lives, a first small digitization of their personal image, so that we and later generations could still know what they looked like. We grew up with photos, and over the years have become more and more digital: identity is digital (the ID card), bank deposits are digital, photos are digital, medical checkup reports are digital, shopping is digital (Taobao holds dozens of my addresses, hundreds of purchase records, tens of thousands of searches about me), communication is digital (WeChat's Moments is a whole new ecology), an initial state of digital existence. Our next generation, or the one after, will live fully digitized lives: a genetic map from the moment of birth, every subsequent checkup and lab test, the activities of every year, every month, every day, the movements of relatives, from each person to each generation to the whole family tree, the whole country, the whole globe. The generation of this massive data will turn quantity into quality, and its mining and use will have a revolutionary impact on humanity itself. Here, let us imagine again:
For example, when you are looking for a partner and run into a girl you love, the big-data system acts like a fortune-teller: mining the massive data on both sides, it tells you your match index with the girl, and the historical divorce probability of couples worldwide in similar situations; below a certain match index, the system will gently suggest that you think hard before continuing to see her. Doesn't that sound like the old idea of "matching doors and families", digitized? Of course, you may object that such a life is meaningless, that mistakes are the most beautiful part of life. Well, I am only discussing the science; as for your brand of "romanticism", love played hooligan-style with no intention of marriage, I will pay it no attention. Honestly, though, I admit it's nice to play the hooligan once in a while. Just kidding.
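Tongue in cheek as that is, the "match index" itself is easy to sketch. The features, weights and bias below are pure invention; the essay's system would presumably fit them from the massive couple data it imagines.

```python
import math

# Toy "match index" in the essay's playful spirit: a logistic score over
# invented compatibility features. Weights and bias are pure assumption.


def match_index(features: dict) -> float:
    weights = {"shared_interests": 1.2, "habit_compatibility": 0.9,
               "schedule_overlap": 0.6}
    z = sum(weights[k] * v for k, v in features.items()) - 1.5  # bias term
    return 1 / (1 + math.exp(-z))  # squash to a 0..1 index


couple = {"shared_interests": 0.8, "habit_compatibility": 0.7,
          "schedule_overlap": 0.5}
print(f"match index: {match_index(couple):.2f}")  # ~0.60
```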
Big data will, to a certain extent, subvert traditional enterprise management. The modern management style derives from imitating the military: layer upon layer of organization and strict process; information aggregated level by level so that the right decision can be made; the decision then transmitted and decomposed through the organization, with standardized processes ensuring execution, quality and a degree of risk control. In the past this was a useful, if clumsy, approach. In the era of big data we may reconfigure enterprise management: with the analysis and mining of big data, a large share of business decisions can be made automatically, with no need to rely on bloated organizations and complex processes. If everyone decides based on big data, following established rules, it makes little difference whether the lofty CEO decides or frontline staff decide; does the enterprise still need so many organizational layers and such complex processes?
The other major role of big data is to change the logic of business, offering the possibility of reaching answers directly from another direction. Today, human thinking and business decision-making are in fact dominated by the power of logic: we research, collect data, summarize, and finally form our inferences and decisions, a process of observation, thinking, reasoning and deciding. Building that logic in people and organizations takes enormous study, training and practice, at enormous cost. But is it the only path? Big data gives us another option: use the power of data to get the answer directly. It is like learning mathematics. We learned the multiplication table as children, geometry in secondary school, calculus at university, and when we meet a hard problem we draw on years of accumulated experience to search for a solution. But we can also simply search the Internet for the same problem, and if it is there, copy the answer. Many will criticize this as plagiarism and cheating. But why do we study, if not to solve problems? If I can search for the answer at any time, finding the best answer with the least effort, why can't search be a bright road too? In other words, to get the "what", we do not necessarily need to understand the "why". We are not denying the power of logic; we simply have a great new force to rely on: the power of big data.
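A toy sketch of this "search first, derive only if you must" idea, with an invented one-entry answer table standing in for the Internet:

```python
from functools import lru_cache

# Sketch of getting the "what" without deriving the "why": consult a
# precomputed answer table first; only on a miss do we fall back to
# working the problem out from first principles.

ANSWER_TABLE = {("fib", 30): 832040}  # answers "found by search"


@lru_cache(maxsize=None)
def fib(n: int) -> int:  # the slow, "reason it out" path
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def solve(problem: str, n: int) -> int:
    if (problem, n) in ANSWER_TABLE:      # search first ...
        return ANSWER_TABLE[(problem, n)]
    return fib(n)                         # ... derive only if we must


print(solve("fib", 30))  # 832040, answered by lookup, no derivation
```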
With big data we may gain a new perspective for discovering business opportunities and reconstructing business models. Today, when we analyze, say, food spoilage at home, we rely mainly on our eyes and our experience; with a microscope, seeing the bad bacteria at a glance, the analysis is completely different. Big data is our microscope: it lets us discover new business opportunities from a whole new perspective and potentially reframe business models. Product design will be different, because much of the guessing disappears: customers' habits and preferences are plain to see, so our designs can easily hit the customer's heart. Marketing will be completely different too, because we know what customers like and what they hate, and can target accordingly. Add a wide-angle lens to the microscope and we gain still more new horizons. That wide-angle lens is cross-industry data flow, which lets us see what we never could before: in the car example above, driving was driving and insurance was insurance, originally unrelated, but once driving big data flows to the insurance company, the insurer's entire business model changes, completely reconstructed.
The last point I want to make here is the revolutionary impact of big data on IT's own technical architecture. Big data is rooted in IT systems. Modern enterprise IT is largely built on the IOE model (IBM minicomputers, Oracle databases, EMC storage) plus Cisco: a Scale-up architecture, well adapted to established business processes at a certain data volume. In the era of big data, however, it quickly runs into problems of cost, technology and business model: big data's demands will soon exceed the technological ceiling of existing vendor architectures, and massive data growth would drive IT spending up in step, beyond what enterprises can afford. The "de-IOE" trend now discussed in the industry, replacing Scale-up architecture and proprietary software with Scale-out architecture and open-source software, is essentially driven by the big-data business model; in other words, big data will drive a new round of architectural change in the IT industry. The national-security argument sometimes attached to de-IOE is entirely secondary.
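A back-of-envelope sketch of the economics behind de-IOE. The cost curves are assumed shapes, not vendor pricing: scale-up cost is modeled as growing superlinearly with the size of one big box, scale-out as growing roughly linearly with commodity node count.

```python
# Illustrative cost model, not real pricing: premium scale-up hardware is
# assumed superlinear in capacity; commodity scale-out is roughly linear.


def scale_up_cost(capacity_tb: float) -> float:
    return 2000 * capacity_tb ** 1.5  # one big box, superlinear premium


def scale_out_cost(capacity_tb: float, tb_per_node: float = 10) -> float:
    nodes = -(-capacity_tb // tb_per_node)  # ceiling division
    return nodes * 15000                    # assumed commodity node price


for tb in (10, 100, 1000):
    print(f"{tb:>5} TB  scale-up {scale_up_cost(tb):>12,.0f}"
          f"  scale-out {scale_out_cost(tb):>12,.0f}")
```

Under these assumed curves the gap widens by orders of magnitude as data grows, which is the whole argument in one table.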
So, as the Americans say, big data is a resource: like a big oil field or a big coal mine, it can be mined for a steady stream of wealth. And unlike ordinary resources it is renewable; the more it is mined, the more there is and the more valuable it becomes, contrary to the laws of nature. That is true for enterprises, for industries, for countries, and for individuals. Who wouldn't like such a thing? No wonder big data is so popular.
Three, the birth of new intelligent creatures
The imagination that follows is wilder still; realizing it is probably a matter of at least ten or a hundred lifetimes after ours, by which time we will long since be ancestors. Just read it as science fiction.
Let me start from a recent speech by a Microsoft executive. Rick Rashid, a senior vice president at Microsoft Research, stepped up to a podium in Tianjin, China, to speak before 2,000 researchers and students, and he was very, very nervous. He had reason to be: he spoke no Chinese, and past translations of his talks had been so poor that embarrassment seemed guaranteed.
"We hope that within a few years, we can break down the language barrier between people," the senior vice president of Microsoft Research told the audience. After a nerve-wracking two-second pause, a translator's voice came over the loudspeaker. Rashid continued, "I personally believe this will make the world a better place." A pause, then the Chinese translation again.
He smiled. The audience applauded his every sentence; some even shed tears. The reaction, seemingly over-enthusiastic, was understandable: Rashid's translator was hard to fault, understanding and rendering every sentence seamlessly. And the most impressive thing about the translator was that it was not human.
This was machine translation of natural language, a signature achievement of long-running artificial-intelligence research. AI, with its clear and huge commercial prospects, has been a hot spot of the IT industry from past to present, no less so than today's "Internet" and "big data". In the past, however, research on AI ran into huge obstacles and very nearly ended in despair.
At the time, AI meant simulating human intelligence in order to construct machine intelligence. For machine translation, linguists and language experts toiled to compile large dictionaries and rules of grammar, syntax and semantics: thesauruses of hundreds of thousands of entries and tens of thousands of grammatical rules, covering every scenario and context, to simulate how humans translate; computer experts then built the complex programs around them. In the end it turned out that human language is simply too complex: no exhaustive rule set could achieve even basic translation quality. After the 1960s, AI stagnated along this path for years, and scientists painfully realized that defining AI as "simulating the human brain" and "reconstructing the human brain" had led into a dead end, which later left almost every AI project out in the cold.
Here's a little tidbit. In college I had a teacher who was one of China's top AI professors and vice president of one of China's AI research societies. He used to say that the AI of the day was not artificial intelligence but artificial stupidity: it decomposed simple human behaviors again and again and then simulated them clumsily; it did not study how smart people learn, but simulated the simplest actions of the dumbest people. As for the field's progress, he said, some people were as complacent as a moon-landing program that climbs onto a rock and serenades the moon: "Ah, I am closer to you now." His self-mockery of his own career has stuck with me to this day.
Later, some people asked: why must the machine learn logic from humans, learning with difficulty and learning badly, when the machine's own greatest strengths are raw computation and data processing? Why not play to those strengths and take another road? That road is the one IBM's Deep Blue took. On May 11, 1997, chess champion Garry Kasparov conceded defeat to IBM's Deep Blue, and the computer won that far-reaching "man-machine confrontation". Deep Blue won not by logic, not by so-called artificial intelligence, but by sheer computational power: it could not out-think you, but it could out-calculate you.
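The "out-calculate you" idea is just exhaustive game-tree search. Below is a minimal minimax sketch over an invented toy game, nothing like Deep Blue's scale or its hand-tuned evaluation, but the same brute-force principle.

```python
# Minimal game-tree search in the spirit of Deep Blue: no "understanding",
# just exhaustive look-ahead. The toy game and scoring are illustrative.


def minimax(state, depth, maximizing, moves, evaluate):
    """Search `depth` plies ahead and return the best forceable score."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options)
    return max(scores) if maximizing else min(scores)


# Toy game: a state is a number; each move adds 1 or 2; the maximizer
# wants the final number high, the minimizer wants it low.
best = minimax(0, depth=4, maximizing=True,
               moves=lambda s: [s + 1, s + 2],
               evaluate=lambda s: s)
print(best)  # 6: max adds 2 on its turns, min adds 1 on its turns
```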
Similar logic was applied to machine translation, and Google, Microsoft and IBM all took this path: chiefly a matching method combined with machine learning. Relying on massive data and the statistics around it, ignoring grammar and rules, the system compares the source text against translations found on the Internet and outputs the most similar, most frequently cited result. In other words, big data plus machine learning realizes machine translation. The more data available, the better the system works, which is precisely why machine translation could only break through again after the Internet appeared.
So today the machine-translation teams at these companies are staffed with computer scientists and not a single pure linguist; what matters is being good at math and statistics, and then good at programming.
In a nutshell, with this technique computers teach themselves patterns from big data. Given a large enough amount of information, you can get machines to learn to do things that look intelligent, whether navigating, understanding speech, translating languages, recognizing faces or simulating human conversation. Chris Bishop of Microsoft Research in Cambridge, England, offers an analogy: "You pile up enough bricks, then take a few steps back, and you can see a house."
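A toy sketch of the matching approach described above: no grammar, no rules, just pick the translation seen most often alongside the source phrase in a parallel corpus. The tiny "corpus" here is invented; real systems match against billions of sentence pairs.

```python
from collections import Counter

# Statistical matching in miniature: translate by frequency of co-occurrence
# in a parallel corpus, with no grammatical knowledge at all.

PARALLEL_CORPUS = [
    ("ni hao", "hello"),
    ("ni hao", "hello"),
    ("ni hao", "how do you do"),
    ("xie xie", "thank you"),
]


def translate(source: str) -> str:
    candidates = Counter(t for s, t in PARALLEL_CORPUS if s == source)
    if not candidates:
        return "<no match found>"
    return candidates.most_common(1)[0][0]  # most frequent rendering wins


print(translate("ni hao"))  # -> "hello" (two votes beat one)
```

More data simply means more and better-attested matches, which is exactly the essay's point about why this only became viable after the Internet.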
Now assume this technology keeps progressing, so that AI based on big data and machine learning can simulate human conversation ever more smoothly, meaning humans can talk with machines in comfort. IBM's Watson program is exactly this kind of engineering, for instance trying to make the computer act as a doctor, able to diagnose most diseases and communicate with patients. Assume, too, that today's emerging wearable devices make great progress. How much progress? Enough that the family puppy carries a full set of sensors and wearables: image capture, sound capture, smell capture, small medical devices monitoring its health, even an electronic pill in its stomach monitoring digestion. The puppy is of course connected to the Internet and generates just as large a volume of data. Now suppose that modeling on this big data can simulate the puppy's joys and sorrows and, through anthropomorphic processing, express them in speech; in other words, the system speaks human language on the puppy's behalf. When the owner comes home and the puppy wags its tail and barks, the AI system attached to it says, "Master, so glad to see you home!" More than that, you can hold a conversation with the puppy's AI system, because it can roughly understand what you mean and answer anthropomorphically for the puppy. Here is how such a conversation might go:
You: "Puppy, how was your day?"
Puppy: "Not bad, master your new dog food tastes great today, I always feel like I don't get enough of it."
You: "That's good. We'll continue to buy this kind of dog food from now on. By the way, is anyone coming today?"
Puppy: "Only the mailman came to deliver the newspaper. Also, Mary, the neighbor's puppy, came over and we played together all afternoon."
You: "And how did you play?"
Puppy: "It was a lot of fun. It's like I'm in my first love again."
...
We can treat the mock conversation above as a joke. But at this point we stumble on an amazing fact: you are actually facing two puppies, one the physical puppy, the other a virtual puppy built on big data and machine learning, and the virtual puppy is even smarter than the physical one, and truly understands. So is this virtual puppy a new intelligent creature?
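For fun, a sketch of the simplest possible "virtual puppy": map sensor readings to a mood and a canned utterance. The sensor names, thresholds and phrases are entirely invented; the essay's version would be learned from the puppy's own data stream rather than hard-coded.

```python
# Whimsical sketch of the "virtual puppy": sensor readings -> mood label
# -> canned utterance. All features, thresholds and phrases are invented.


def puppy_mood(sensors: dict) -> str:
    if sensors["tail_wags_per_min"] > 60 and sensors["bark_pitch_hz"] > 400:
        return "excited"
    if sensors["activity_level"] < 0.2:
        return "sleepy"
    return "content"


PHRASES = {
    "excited": "Master, so glad to see you home!",
    "sleepy": "Long day... I napped in the sun all afternoon.",
    "content": "Not bad today. The new dog food tastes great.",
}

readings = {"tail_wags_per_min": 75, "bark_pitch_hz": 520,
            "activity_level": 0.8}
print(PHRASES[puppy_mood(readings)])  # -> "Master, so glad to see you home!"
```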
Let us extend the story from the puppy to the people of the future. A person generates a great deal of data over a lifetime, and modeling on that data can directly yield many conclusions: what kinds of movies he likes, what flavors of food, what action he would take when facing a given problem.
This data accumulates until the person dies. Can this huge body of data somehow keep the person "alive"? When descendants need answers to questions, key life decisions such as what major to choose in college or whether to marry a certain girl, can they ask this virtual person (the ancestor) for advice? Of course they can. Digital existence, then, is not confined to a lifetime; it can continue after death. People can live on in virtual space after they die. As generation after generation passes away, these virtual intelligences persist, and after many years there will be so many virtual ancestors that the living descendants might even form a "Joint Staff Committee of the Ancestors", giving preference to those who did well in the examinations (say, took first place) or served as senior officials (say, a governor), corporate executives (say, a CEO), professors, writers and other successful ancestors, dedicated to advising later generations and solving their problems. Let the ancestors keep competing even after death, rather than idle about with nothing to do. Doesn't this scene look familiar? It is the one in Disney's animated film "Mulan": facing the momentous decision of whether to join the army in her father's place, Mulan confides in just such a council of ancestors and receives guidance.
Imagine even more boldly: assuming huge advances in materials science, could we re-implant these virtual intelligences into simulated human bodies? Of course we could, and the new intelligence could be very much like a real person. So is this resurrection from the dead? Can this new intelligence keep its old ID card? Continue to own the property it had before? Continue to draw its pension? Should there be a mandatory lifespan limit? Will such intelligences learn and evolve on their own? Will they start a war with humans? Think about it more deeply and everything is thrown into confusion; ethics and law face huge challenges.
What does all this mean? It means that with further progress in big data and machine learning, a new intelligent creature will have appeared in this world! After changing, reconfiguring and disrupting so many businesses, industries and countries, big data and machine learning will finally have come to change humanity itself! A new branch of human evolution will have emerged!
Some scientists have sketched a diagram contrasting the two kinds of intelligent beings. One is biological, the product of millions of years of evolution; the other is built on IT technology, on big data and machine learning, through self-simulation and self-learning. The former is stronger in logic, richer in emotion and creative, but limited in lifespan; the latter is weak in logic and has no biological emotion, but has powerful computing, modeling and search capabilities, and in theory an unlimited life.
Of course, if any of this ever happens, it lies very, very far in the future. We will not see it while we are alive, nor after we are dead, for when we die I do not believe this kind of virtual life built on big data and machine learning will yet exist.
Four, concluding remarks
The last thing I want to say is that our perception of the future rests largely on common sense and on imagination. Statistically, a single week of the New York Times now carries more information than a person in the 18th century received in a lifetime; the past 18 months have produced more information than the previous 5,000 years combined; and the 5,000-dollar computer in my home today has more computing power than my entire university had when I enrolled. Technological progress routinely outruns our imagination. Imagine a future in which one of us owns more computing equipment than today's entire global capacity, one person generates more data than today's entire global volume, and even your pet puppy generates more information than the whole world does today: what will that world look like? That is left to your imagination.