Five technology giants to develop ethical standards for AI and explore its impact on employment

Tencent Tech, citing foreign media reports: For years, science fiction filmmakers have led us to fear that artificial intelligence (AI) machines could pose a threat to humanity. The bigger concern over the next 10 to 20 years, however, may be whether robots will take our jobs or hit us on the highway. Now five technology giants, Google's parent company Alphabet, Facebook, Microsoft, IBM, and Amazon, are planning to develop ethical standards for artificial intelligence.

While science fiction has focused mainly on AI as a threat to human survival, researchers at technology giants such as Alphabet are meeting to discuss more concrete issues, such as the impact of artificial intelligence on employment, transportation, and even warfare.

Technology companies have long been exploring the limits of what artificial intelligence machines can do. In recent years, the field has made breakthroughs on a number of fronts, from driverless cars to machines that understand language (such as the Amazon Echo) to a new generation of automated combat weapon systems.

According to four sources involved in the negotiations to create the industry partnership, the companies have not yet finalized the organization's name, but its basic intent is clear: to ensure that artificial intelligence research focuses on benefiting humanity rather than harming it.

A Stanford University study highlights the importance of the group's effort. The Stanford report was funded by Microsoft researcher Eric Horvitz, one of the executives taking part in the industry discussions. The Stanford project, called the "One Hundred Year Study on Artificial Intelligence," plans to issue a detailed report every five years on the impact of artificial intelligence on society over the next 100 years.

What worries the tech world most is whether regulators will impose rules on their AI research. As a result, they are trying to develop a framework for a self-regulating organization, although its specific functions are not yet clear. Peter Stone, a computer scientist at the University of Texas at Austin and an author of the Stanford report, said: "We're not saying there should be no regulation, but that there should be the right regulation."

Although the technology industry is intensely competitive, there are precedents for technology companies cooperating when it brings them the greatest benefit. In the 1990s, for example, technology companies agreed on standard encryption for e-commerce transactions, which laid the foundation for 20 years of growth in Internet business.

The authors of the Stanford report, titled "Artificial Intelligence and Life in 2030," believe that artificial intelligence is likely to be regulated. The report says: "The panel has reached a consensus that attempts to regulate artificial intelligence in general would be misguided, since there is as yet no clear definition of AI, and the risks to consider differ from one domain to another." Stone said the report also recommends raising the level of ...

