Lei Feng Net note: This article is reposted from Big Data Abstract. Its author is Lu Ruqian, a researcher at the Institute of Mathematics of the Chinese Academy of Sciences and a professor at Fudan University. His systematic and creative work in knowledge engineering and knowledge-based software engineering made him one of the pioneers of these fields in China. In 1999 he was elected a member of the Chinese Academy of Sciences. There is a saying in the artificial intelligence community that machine learning is the branch of artificial intelligence that best embodies intelligence. Historically, machine learning has also been one of the fastest-growing branches of artificial intelligence. In the 1980s symbolic learning was arguably the mainstream of machine learning; since the 1990s the field has belonged to statistical machine learning. Might we regard this shift of the mainstream from symbolic learning to statistical learning as the development of machine learning from purely theoretical and model-driven research toward applied research aimed at solving practical problems in real life, in other words, as a step forward in scientific research? In past contacts with friends in the machine learning community, I often picked up hearsay as well as experts' comments on the current state of machine learning and its future, and in the process I inevitably formed some questions of my own. I take this opportunity to write them down here; consider it an amateur's view of machine learning. Question 1: In the early days of artificial intelligence, the technical content of machine learning was almost entirely symbolic learning. Starting in the 1990s, however, statistical machine learning emerged like a dark horse, rapidly overriding and replacing the dominant position of symbolic learning.
People may ask: facing a flood of statistical-learning journal and conference articles, is symbolic learning to be completely ignored? Can it still be an object of machine learning research? Will it continue to live in the shadow of statistical learning, merely prolonging its survival? There are three possible answers to this question. First, tell symbolic learning: "You should exit the stage of history; accept it!" Second, tell statistical learning: "Pure statistical learning has run its course; to move forward, statistical learning must be combined with symbolic learning." Third, the development of things always shows the phenomenon of "thirty years on the east bank of the river, thirty years on the west" (Hedong and Hexi); symbolic learning too will have its day of "turning over." I have not heard anyone state the first view outright, but I suspect it is the tacit assumption of many people. The second view I have heard Professor Wang Jue state repeatedly. He does not believe that statistical learning will decline; he only believes that machine learning has reached a turning point, and that from now on statistical learning should be combined with the use of knowledge. This would be a "spiral rise into a more advanced form." Otherwise, statistical learning may stagnate at the status quo. Professor Wang also believes that a sign of this turning point is the publication of the book Probabilistic Graphical Models by Koller et al. As for the third view, I recently received a letter from an old friend, a senior artificial intelligence scholar in the United States, Professor Chandrasekaran of Ohio State University. He happened to discuss the phenomenon of symbolic intelligence being "suppressed" by statistical intelligence, and expressed precisely this Hedong-Hexi view. In his words: "In recent years, artificial intelligence has largely focused on statistics and big data.
I agree that these techniques have achieved some impressive results, thanks to the dramatic increase in computing power. But we have every reason to believe that, although these techniques will continue to improve, one day this field (meaning AI) will say goodbye to them and turn to more fundamental cognitive-science research. Even though this swing of the pendulum will take some time, I believe it will be necessary to combine statistical techniques with a deep understanding of cognitive architecture." It seems that Professor Chandrasekaran does not expect AI to actually "return to Hexi" within a few years either. His view is basically the same as Professor Wang Jue's, except that it is not limited to machine learning but concerns the whole field of artificial intelligence; and where Professor Wang emphasized knowledge, Professor Chandrasekaran emphasizes the more fundamental "cognition." Question 2: Professor Wang Jue's belief that statistical learning will not remain "smooth sailing" rests on the fact that statistical machine learning algorithms are built on the assumption that sample data are independently and identically distributed (i.i.d.). Yet the phenomena of the natural world are endlessly varied; as Professor Wang puts it, "where are there so many independent and identical distributions?" This leads to the next question: is the i.i.d. condition really necessary for machine learning? Is the absence of i.i.d. data necessarily an insurmountable obstacle? Perhaps machine learning without the i.i.d. assumption is merely a hard problem rather than an unsolvable one. I have a "crazy idea": the "transfer learning" that appeared some time ago may offer a thread of hope toward solving this problem, even though transfer learning as currently practiced still requires both the source and the target to satisfy the i.i.d. condition within their own distributions and cannot yet transfer across arbitrary distributions.
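The worry about the i.i.d. assumption can be made concrete. Below is a minimal, self-contained sketch of importance weighting under covariate shift, one standard way of relaxing the "same distribution" requirement when the training and deployment distributions differ but are known. The toy distributions and all names here are my own illustration, not anything from the preface.

```python
import random

random.seed(0)

def p_density(x):
    # Source (training) distribution: Uniform(0, 1).
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def q_density(x):
    # Target (deployment) distribution: Uniform(0.5, 1).
    return 2.0 if 0.5 <= x <= 1.0 else 0.0

# Training samples are drawn from the source distribution only.
xs = [random.random() for _ in range(100_000)]

def f(x):
    # The quantity whose mean under the *target* distribution we want.
    return x

# Naive estimate: ignores the distribution shift, so it estimates
# the source mean (about 0.5) instead of the target mean (0.75).
naive = sum(f(x) for x in xs) / len(xs)

# Importance weighting: reweight each source sample by q(x) / p(x),
# which corrects the estimate to the target distribution.
weights = [q_density(x) / p_density(x) for x in xs]
weighted = sum(w * f(x) for w, x in zip(weights, xs)) / sum(weights)
```

The weighted estimate recovers the target mean of 0.75 from source-only samples; the catch, which echoes Professor Wang's point, is that the density ratio must be known or estimated, and each distribution must still be well behaved internally.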
Transfer learning across different distributions may appear sooner or later. Question 3: Some new trends have emerged in recent years, such as "deep learning" and "never-ending learning," and society has paid special attention to deep learning in particular. But do they really represent a new direction for machine learning? Some scholars, including Professor Zhou Zhihua, believe that the enthusiasm for deep learning may exceed its true contribution: there is not much theoretical or technical innovation in it; rather, the revolution in hardware has greatly increased the speed of computers, making it feasible to run old, high-complexity algorithms and obtain finer results than before. Of course, this is of great significance for promoting machine learning in practice. Still, we cannot help asking boldly: will deep learning replace statistical learning? In fact, some experts have already felt pressure from deep learning and point out that statistical learning is being suppressed by it, just as we long watched symbolic learning being suppressed by statistical learning. However, I feel this suppression is far weaker than statistical learning's suppression of symbolic learning. First, the "theoretical innovation" of deep learning is not yet evident. Second, current deep learning is mainly suited to neural networks; at a time when the various methods of machine learning are in full bloom, its range of application is still limited, and one cannot yet simply call it the return of the connectionist school. Third, statistical learning remains widely used throughout machine learning and cannot easily be abandoned. Question 4: Since machine learning research began, we have seen an evolution from symbolic methods to statistical methods, with the mathematics used being mainly probability and statistics.
But mathematics is like an ocean: are statistical methods really the only ones suited to machine learning? Of course, we also see good examples of other branches of mathematics applied in machine learning, such as differential geometry in manifold learning and differential equations in inductive learning. Compared with statistical methods, however, they can only be regarded as supporting actors. Still other branches, such as algebra, may be applied quite widely, but in machine learning algebra generally serves as a basic tool, as with matrix theory and eigenvalue theory; likewise, solving a differential equation often reduces in the end to solving an algebraic problem. These can be regarded as behind-the-scenes heroes: probability and statistics stand in the spotlight, while algebra and logic hold things up backstage. Is it possible to build machine learning theories in which some other branch of mathematics plays the protagonist and statistical methods play the supporting role? In this respect manifold learning has already shown "a bit of promise," and academician Peng Shige's prediction of financial trends using his theory of backward stochastic differential equations may be an even better example of using advanced mathematics to drive new machine learning models. From a macro perspective, however, the involvement of mathematical theory, above all deep and modern mathematical theory, is still far from sufficient. We look forward to more mathematicians joining in and opening up new models, new theories, and new directions for machine learning. Question 5: Continuing the last question: the era of symbolic machine learning dealt with problems mainly by discrete methods, while the era of statistical learning deals with them in a continuous manner. There should be no unbridgeable gap between these two approaches. The introduction of Lie group and Lie algebra methods into manifold learning gives us good enlightenment.
Going from differential manifolds to Lie groups, and from Lie groups to Lie algebras, is a process of bridging the continuous and the discrete. Yet the existing methods are not mathematically perfect. Browsing the manifold-learning literature, one sees that many works directly treat an arbitrary dataset as a differential manifold, thereby taking the existence of geodesics for granted and discussing dimension reduction on that basis. Such examples are perhaps not isolated, which again shows the need for mathematicians to take part in machine learning research. Question 6: Has the arrival of the big-data era had a fundamental impact on machine learning? In principle, "big data" would seem to offer more opportunities for statistical machine learning, since statistical and sampling methods are exactly what it needs. Industry observers estimate that the rise of big data will make the role of artificial intelligence even more prominent. Some divide big-data processing into three stages: collection, analysis, and prediction. Collection and analysis are already done relatively well; the focus now is on scientific prediction, and machine learning is indispensable there. This is probably beyond doubt. But statistics and sampling were used before, and collection, analysis, and prediction were done before. Is the use of these methods in the big-data era fundamentally different from their earlier use? The change of quantity into quality is a universal law of dialectics. So, in moving from the pre-big-data era to the big-data era, has mathematical statistics itself changed fundamentally? Has its application to machine learning changed in essence? What kinds of machine learning methods does the big-data era call for, and which machine learning methods are driven by big-data research?
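The manifold-learning practice questioned above, treating a finite sample as if it were a differential manifold and estimating geodesics on it, can be made concrete with a minimal Isomap-style sketch: build a k-nearest-neighbour graph over the points and let graph shortest paths stand in for geodesic distances. The toy data (points on a quarter circle) and all parameter choices are my own illustration, not from the preface.

```python
import heapq
import math

# Sample 100 points along a quarter circle of radius 1: a 1-D manifold
# embedded in the plane. The true geodesic between the endpoints is the
# arc length pi/2; the straight-line (chord) distance is sqrt(2).
n = 100
pts = [(math.cos(t), math.sin(t))
       for t in (math.pi / 2 * i / (n - 1) for i in range(n))]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# k-nearest-neighbour graph with Euclidean edge lengths. Short edges
# stay close to the manifold, so path lengths approximate arc lengths.
k = 5
graph = {i: [] for i in range(n)}
for i in range(n):
    nbrs = sorted(range(n), key=lambda j: dist(pts[i], pts[j]))[1:k + 1]
    for j in nbrs:
        graph[i].append((j, dist(pts[i], pts[j])))
        graph[j].append((i, dist(pts[i], pts[j])))

def shortest(src, dst):
    # Dijkstra's algorithm: the graph distance is the geodesic estimate.
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > best.get(u, math.inf):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < best.get(v, math.inf):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

chord = dist(pts[0], pts[-1])   # straight-line distance in ambient space
geodesic = shortest(0, n - 1)   # graph estimate of the on-manifold distance
```

On this well-sampled curve the graph distance lands close to the true arc length pi/2, but that is exactly the point of the criticism: nothing guarantees an arbitrary dataset is sampled densely enough, or lies on a manifold at all, for such estimates to be meaningful.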
This article is the preface written by Professor Lu Ruqian for Machine Learning by Professor Zhou Zhihua (Tsinghua University Press, January 2016). Within seven months of publication, the book was reprinted eight times, with over 70,000 copies printed.