Ray Kurzweil: Getting the policies and regulations right to make artificial intelligence better

Inventor and author Ray Kurzweil recently collaborated with the Gmail team at Google on research into automatically responding to emails. He spoke with Nicholas Thompson, editor-in-chief of Wired magazine, at the Council on Foreign Relations. What follows is an edited transcript of the conversation.

[Nicholas Thompson]: Our conversation begins with an explanation of the law of accelerating returns, one of the basic ideas underpinning your writing and work.

[Ray Kurzweil]: Halfway through the Human Genome Project, seven years in, only 1% of the genome had been collected. The mainstream critics said, "I told you this wasn't going to work. You've taken seven years to do 1%; it's going to take 700 years." My reaction was, "Wow, we've finished 1%? Then we're almost done," because 1% is only seven doublings away from 100%, and it had been doubling every year. Indeed, that continued, and the project was finished seven years later. It has kept going since the end of the genome project: the first genome cost a billion dollars, and we are now down to $1,000.
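As a rough illustration of the arithmetic behind that anecdote (this sketch is mine, not from the talk), a quantity at 1% that doubles every year crosses 100% after about seven doublings, since 2^7 = 128:

# Minimal sketch of the genome anecdote's arithmetic: starting at 1% and
# doubling every year reaches 100% in about seven years.

def years_to_complete(start_fraction: float = 0.01, growth: float = 2.0) -> int:
    """Count the yearly doublings needed for start_fraction to reach 1.0 (100%)."""
    fraction, years = start_fraction, 0
    while fraction < 1.0:
        fraction *= growth
        years += 1
    return years

print(years_to_complete())  # -> 7, i.e. "1% is only seven doublings from 100%"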

I want to mention one implication of the law of accelerating returns, because it has many ripple effects. It is really the reason behind the remarkable digital revolution we see, a roughly 50% deflation rate for information technology. That is, I can get the same computation, communication, genetic sequencing, and brain data as a year ago for half the price today. That is why you can buy an iPhone or an Android phone that is twice as good as one from two years ago for half the price. Part of the improved price-performance shows up as lower prices, and the rest shows up as better performance. So when a girl in Africa buys a smartphone for $75, it counts as $75 of economic activity, even though that capability would have represented about $1 trillion of computing around 1960 and about $1 billion around 1980. The phone also comes with millions of dollars' worth of free information applications, one of which is an encyclopedia far better than the one I saved up for as a teenager. All of that counts as zero economic activity because it is free. So we really don't count the value of these products.
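Here is a minimal sketch of that deflation claim (my illustration, using only the figures quoted above): a roughly 50% annual deflation rate means the cost of a fixed capability halves every year, which is the same as price-performance doubling every year.

# Illustrative only: ~50% annual IT deflation expressed as a price-performance gain.

def price_performance_gain(years: int, annual_deflation: float = 0.5) -> float:
    """Factor by which price-performance improves after `years` of halving prices."""
    return 1.0 / (1.0 - annual_deflation) ** years

# The phone example above: half the price and twice the performance after two
# years is a 4x price-performance gain, i.e. doubling every year.
print(price_performance_gain(2))                 # -> 4.0
# The genome example: ~20 years of halving takes $1 billion down toward $1,000.
print(round(1e9 / price_performance_gain(20)))   # -> ~954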

All of this is going to change: we will print clothing on 3-D printers. We are currently in the hype phase of 3-D printing, but in the early 2020s we will be able to print out clothing. There will be lots of cool open-source designs you can download for free. We will still have a fashion industry, just as we still have music, film, and book industries, coexisting with free, open-source products, which are first-rate, alongside proprietary ones. We will be able to produce very inexpensive food with vertical agriculture: growing fruits and vegetables hydroponically and producing in-vitro cloned muscle tissue for meat. The first hamburger produced this way has already been eaten. It was expensive, costing hundreds of thousands of dollars, but it was very good. All of these different resources are going to become information technologies. Recently, small Lego-style modules produced by a 3-D printer in Asia were assembled into a three-story office building in a few days. That will be the nature of construction in the 2020s. 3-D printers will print out the physical things we need.

[Nicholas Thompson]: Let's talk about intelligence, say, the phone in my pocket. It is better at math than I am, and better at chess than I am. It does better than I do at many things. When will it be better at conversation than I am? When will it interview you instead of me?

[Ray Kurzweil]: We do have technologies that can converse. My team at Google created Smart Reply, so we are writing millions of emails. It has to understand the meaning of the email it is replying to, even though its suggestions are brief. But your question is really a Turing-test-equivalent question. I believe the Turing test is a valid test of the full range of human intelligence: you need the full flexibility of human intelligence to pass a valid Turing test, and no simple natural-language-processing trick can do it. If human judges cannot tell the difference, then we consider the AI to have human-level intelligence, and that is essentially what you are asking. This is a key prediction of mine. I have been saying 2029. In 1989, in The Age of Intelligent Machines, I bounded it between the early 2020s and the late 2030s; in 1999, in The Age of Spiritual Machines, I said 2029. The AI department at Stanford found that daunting, so they held a conference, and the consensus of AI experts at the time was that it would take hundreds of years; 25% thought it would never happen. My view and the consensus or median view of AI experts have been getting closer together, but not because I have been changing my view.

In 2006, Dartmouth held a conference called AI@50. The consensus then was 50 years; at the time I was saying 23 years. We just held an AI ethics conference at Asilomar, and the consensus there was 20 to 30 years; I am now saying 13. I remain more optimistic than the consensus, but not by as much, and more and more people think I am too conservative.

One key point about the law of accelerating returns that I have not mentioned is that it is not only hardware that grows exponentially; software does as well. I feel increasingly confident, and I think the AI community is becoming increasingly confident, that we are not far from that milestone.

We are going to merge with this AI technology and make ourselves smarter. We already have. These devices are brain extenders, and people now think of them that way, which is a new attitude: just a few years ago, people did not see their smartphones as brain extenders. Some will say that, by definition, such extensions must be inside our bodies and brains, but I think that is an arbitrary distinction. Even though they sit outside our bodies and brains, they are already extensions of the brain, and they will make us smarter and more interesting.

[Nicholas Thompson]: Please lay out a framework for policymakers: how they should think about this accelerating technology, what they should do, and what they should not do.

[Ray Kurzweil]: People are very concerned about artificial intelligence and how to keep the technology safe, and the discussion has become polarized, like many discussions these days. In fact, I have been talking about both the promise and the peril for a long time. Technology has always been a double-edged sword: fire kept us warm and cooked our food, and it also burned down our houses. These technologies are far more powerful than fire. I think we go through three phases, at least I did. First, delight at the opportunity to overcome age-old problems: poverty, disease, and so on. Then alarm that these technologies can be destructive and even pose existential risks. Finally, I think we come to recognize that, beyond the progress we have already made, we have an imperative to keep advancing these technologies, which is a genuinely different problem. People think things are getting worse, but in reality things are getting better, and there is still a great deal of human suffering to overcome. Only continued progress, particularly in AI, will let us keep overcoming poverty, disease, and environmental degradation, while at the same time we attend to the peril.

There is a good framework for this. Forty years ago, farsighted people saw both the promise and the peril of biotechnology, which was then fundamentally changing biology, and wanted to set it on the right path. So they met at the conference center in Asilomar, in what became known as the Asilomar Conference, and came up with ethical guidelines and strategies to keep the technology safe. It is now 40 years later, and we are getting clinical benefit from biotechnology. Today it is a trickle; over the next decade it will be a flood. So far, the number of people who have been harmed by the abuse of biotechnology is zero. That is a good model for how to proceed.

We have just held the first Asilomar conference on AI ethics. Many such ethical guidelines, especially in biotechnology, have been written into law, and I think that is the goal here as well. The most extreme reactions are "let's ban this technology" or "let's slow it down." That is really not the right approach. We should guide it constructively. There are strategies for doing that, which is another, more complicated discussion.

[Nicholas Thompson]: You can imagine Congress saying that everyone working in a certain technical field must disclose their data, or must be willing to share their data sets, so that a highly competitive market is not overcome by whoever controls one powerful tool. You can imagine the government saying, "In fact, we will have a large government-funded effort, something like OpenAI, but managed by the government." You can imagine a huge national infrastructure program to develop this technology, so that at least some of it is controlled by people answerable to the public interest. Do you have any recommendations?

[Ray Kurzweil]: I think open data and open-source algorithms are generally a good idea. Google has released its AI algorithms and source code as open source in TensorFlow. I think this combination of open source and the law of accelerating returns will bring us closer to the ideal. There are many issues, such as privacy, that are critical to get right, and I think people in this field generally care deeply about them. It is not clear what the right answers are. I think we want to keep improving the tools, but when you have power this great, there will be abuses even with the best intentions.

[Nicholas Thompson]: What worries you? You are very optimistic about the future, but what are you worried about?

[Ray Kurzweil]: I am accused of being an optimist. As an entrepreneur you have to be an optimist, because if you understood all the problems you were going to encounter, you would probably never start any project. But, as I said, I have long paid attention to and written about the downsides, which are real. These technologies are very powerful, so I do worry, even though I am an optimist. I am optimistic that we will ultimately make it through; I am not so optimistic that we will avoid painful episodes along the way. Fifty million people died in the Second World War, and the power of the technology of that time amplified the toll. I think it is important for people to realize that we are making progress. A recent poll of 24,000 people in 26 countries asked whether extreme poverty worldwide has gotten better or worse. Ninety percent said it has gotten worse, which is the wrong answer; only 1% gave the correct answer, which is that extreme poverty has fallen by at least 50%.

[Nicholas Thompson]: What should the people in this audience do about their careers? They are entering a world in which career choices will map onto completely different technologies. So, in your view, what advice would you give the people in this room?

[Ray Kurzweil]: It is old advice, but it is to follow your passion, because there really is no field that will not be affected, that is not part of this story. We are going to merge with the simulated neocortex in the cloud, so we will be making ourselves smarter. I do not think AI is going to displace us; it is going to enhance us. It already does. Who could do their job without the brain extenders we have today? That will continue. People say, "Well, only the wealthy will have these tools," and I say, "Yes, like smartphones, of which there are now about three billion." I used to say two billion, but I just read that it is about three billion, and it will be six billion in a few years, because of this remarkable explosion in price-performance. So find where your passion lies. Some people's passions are complex and hard to categorize, so find a way to use the tools that are available to make the contribution you care about and change the world. The reason I developed the law of accelerating returns in the first place was to time my own technology projects, so I could start them, and anticipate where technology would be, a few years before they became feasible. Just a few years ago we had little devices that looked like smartphones, but they did not work very well; this technological revolution, and mobile apps, barely existed five years ago. The world will be very different five years from now, so aim your projects at where the train is going, not at the station where it is sitting now.

Audience question: So much emphasis has been placed on the beautiful side of human nature, science, and exploration, and I am curious about the next step toward our robot partners. What about the dark side: war, war machines, and violence?

[Ray Kurzweil]: We are learning how these platforms can be used to manipulate and amplify human tendencies, and much of that is news we are absorbing right now. Artificial intelligence learns from examples; there is a saying in the field that life begins at a billion examples. The best source of examples is people, so AI very often learns from humans. Not always: AlphaGo Zero learned just by playing against itself, but that is not always feasible, especially when you are dealing with messier real-world problems. A great deal of effort, at all the large companies and in open research, is going into de-biasing AI and overcoming gender bias and racial bias, because AI learns from biased humans and picks up their prejudices. As humans, we pick up biases from everything we see, much of it subconsciously. As educated people, we learn to recognize bias and try to overcome it, and there can be a conflict in our minds. There is an entire research area devoted to de-biasing AI and overcoming the prejudices it acquires from people. So this is a problem of machine intelligence we can overcome; machine intelligence can actually end up less biased than the humans it learned from. In general, although promise and peril are intertwined on social media, on balance it is a very beneficial thing. I walk through the airport and every child over the age of two is on a device. Social media has become a worldwide community, and I think this generation feels more like citizens of the world than any before it, because they are connected to all the world's cultures.

[Nicholas Thompson]: Over the last year, the United States has not grown closer to the rest of the world, and many people would say our democracy has not gotten better. Is this a bump in the road of continuing progress and a connected humanity, or have many people misread it?

[Ray Kurzweil]: The political polarization in the United States and elsewhere is unfortunate. I do not think it bears much on the issues we are talking about today. I mean, we have been through enormous upheavals in the world; the Second World War was a considerable upheaval, and it did not actually deflect these trends. There may be things particular officials or governments do that we dislike. But one point is worth making here: we are not in a totalitarian era in which we cannot express our views. If we were moving in that direction I would be more worried, and I do not think that is happening. So I do not want to minimize the importance of who governs and who holds power, but it operates on a different level. The issues we are talking about are not much affected by it. I do worry about the perils, because technology is a double-edged sword.

Audience question: My question is about inequality. For most of human history, economic inequality has been quite high, through many phases. I want to know whether you think the 20th century was an anomaly, and how the spread of technology will affect inequality.

[Ray Kurzweil]: Economic equality is actually moving in a good direction. According to the World Bank, poverty in Asia has fallen by more than 90% over the past 20 years, as countries have moved from primitive agrarian economies to thriving information economies. Africa and South America have growth rates far higher than those of the developed world. Everywhere you look there is inequality, but things are moving rapidly in the right direction: worldwide, extreme poverty has fallen by 50% in the past 20 years, and many other measures point the same way. So we are headed in the right direction. At any point in time there is serious inequality and there are people who are suffering, but the trend is positive.

Audience question: I gather from your comments that you predict artificial intelligence will reach the next stage in about 12 years; you have mentioned that a few times. And although you are very optimistic, you are worried about the risks. Can you elaborate on what you mean, and on what you think technologists should be doing to mitigate those risks?

[Ray Kurzweil]: By risk I mean risks to the survival of our civilization. The first existential risk humanity created for itself was nuclear proliferation; we have the capacity to destroy all of humanity. With these new technologies, it is not hard to imagine scenarios in which they are extremely destructive, even to the point of destroying all of humanity. Take biotechnology. We have the ability to reprogram biology away from disease; immunotherapy, for example, is a very exciting breakthrough in cancer treatment which I think is revolutionary, though it is just getting started. It reprograms the immune system to go after cancer, which it normally does not do. But a bioterrorist could reprogram a virus to be more deadly, more contagious, and more stealthy, creating a superweapon. That is the specter the first Asilomar conference addressed 40 years ago. Recurring conferences have made the ethical guidelines, safety protocols, and strategies more sophisticated, and so far it has worked. But the technology keeps getting more sophisticated, so we have to reinvent the guidelines again and again. We just held our first Asilomar conference on AI ethics and came up with a set of ethical principles, which have been widely signed; many of them are vague. I think this is a very important issue. We are finding that we have to build ethical values into our software. A classic example is the self-driving car. The whole motivation for self-driving cars is to avoid 99% of the roughly two million deaths caused by human drivers, but a car will get into situations where it has to make an ethical decision: should it steer toward the baby stroller, toward the elderly couple, or into the wall, perhaps killing its passenger? Should a self-driving car have an ethical guideline not to kill its own passenger? In that moment, the car cannot send an email to its software designers and ask, "What should I do?" The ethics has to be built into the software. So these are practical problems, and there is an entire field of AI devoted to them.
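To make that last point concrete, here is a purely hypothetical sketch, not any real vehicle's code; the function, the protect_passenger flag, and the harm scores are invented for illustration. The point it shows is that "building the ethics into the software" means the policy has to be chosen and encoded before the car ever faces the situation, because there is no one to ask in the moment.

# Hypothetical illustration: an ethical policy hard-coded ahead of time.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_harm: float      # invented severity score for this maneuver
    harms_passenger: bool

def choose_maneuver(outcomes: list[Outcome], protect_passenger: bool) -> Outcome:
    """Pick the lowest-harm maneuver under a policy flag fixed at design time."""
    candidates = outcomes
    if protect_passenger:
        # Policy baked in beforehand: never sacrifice the passenger if any
        # alternative exists.
        non_sacrificing = [o for o in outcomes if not o.harms_passenger]
        candidates = non_sacrificing or outcomes
    return min(candidates, key=lambda o: o.expected_harm)

# Example use: with protect_passenger=True, the wall maneuver is excluded.
options = [
    Outcome("swerve toward obstacle", expected_harm=0.9, harms_passenger=False),
    Outcome("brake straight ahead",   expected_harm=0.4, harms_passenger=False),
    Outcome("swerve into the wall",   expected_harm=0.2, harms_passenger=True),
]
print(choose_maneuver(options, protect_passenger=True).description)  # brake straight ahead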

But how do we deal with the more immediate risk: the weaponization of AI, which is achievable in the near term? AI is already being applied by departments of defense around the world. There was a document circulating that asked people to agree to ban autonomous weapons, which sounds like a good idea, and the example used was, "We banned chemical weapons, so why not autonomous AI weapons?" It is a bit more complicated, because we can get by without anthrax and without smallpox, so banning chemical and biological weapons outright is workable. But autonomous weapons are a dual-use technology: the Amazon drone that delivers your frozen waffles, or delivers medicine to a hospital in Africa, could be repurposed to carry a weapon. It is the same technology, and it is already out in the world. In other words, this is a more complicated issue, and we have to figure out how to deal with it. But the goal is to reap the promise while controlling the peril. There is no simple algorithm, no subroutine we can drop into our AI, "OK, just put this subroutine in and it will keep your AI benign." Intelligence is inherently uncontrollable. My strategy, which is not foolproof, is to practice, in our own human society, the ethics, morality, and values we want to see in the world of the future. Because the society of the future will not be an invasion of intelligent machines from Mars; it is being born out of our civilization today. It will be an enhancement of who we are. So if we practice the values we cherish in today's world, that is the best strategy for having a world that embodies those values in the future.
