Programming error. Can cause the thing. The AI can turn against its creators. What if a programmer determines “Planet Earth” is the thing that the AI must protect? What if the AI thinks that it must remove things that threaten planet Earth? What is the thing that causes most of the hazards on that planet? What if the programmer determines that the AI must remove all threats to nature? In movies like “War Games,” the AI. That which is programmed to protect humans turns against us. In the movie “Terminator,” the robot turns against humans. And the problem is that the robot can think that people who carry guns must be removed. In some jokes, the AI turns against humans.
Because it watches too many action movies. And if the law enforcement robot walks on the streets. It sees many violations. Against the law. People walk against a red light. They make wrong parkings. And many other things. If the AI has no idea. How to adjust reactions for the crime that it sees, the result can be that the AI desires to shoot everybody. In that model, the TV series teaches the AI to react to all crimes in the same way. This requires the ability for autonomous learning. But. If nobody mentioned. The movies do not introduce. In real situations, the AI can think. That they are training material.
The Roko’s basilisk is a thought experiment introduced by the general AI that tortures people who don’t participate in its development. The reason for that is this: the AI would be programmed to “think” that it's vital for people, or rather. We can say that the AI believes that without it, the entire civilization would die. So, if every person does not participate in its development, that can cause a catastrophe. The important thing in the Roko’s basilisk thought experiment is this: the AI must somehow think that it's so important that everybody must participate in its development.
The Roko’s basilisk has given. An innovation about the models, why can the AI turn against its creators? We know. The AI can turn against its creators. Because of the programming error. The programming error can be. In misunderstood orders. The AI. That is programmed to solve all problems. In society. Can think that if somebody doesn’t participate. Into its development. That thing is a danger to society, which the AI should protect. The AI that protects society can think.
The person who doesn’t participate in the development process is an enemy. And if the AI has a mission to defend society, the AI can overreact. In these cases, there is somebody. Who excuses its suggestions. Or, if. It thinks. That. It knows everything and always makes the right decisions, which means it will think that every person who disagrees with its suggestions is an enemy. The AI must not think that it knows everything; it can only think that it always makes “better” decisions than humans. So, if we always follow orders that the AI gives, we teach it to think. That's right. And humans are wrong.
Or, we teach it to think that it always makes better decisions than we do. The cases where the machine turns against its creators. It can be in the programming of the machine. That. It must protect society. In history, the outside enemy united people. And made them create nations. The outside enemy made people forget their mutual conflicts. And the AI can think. That humans are a nation. That is in a civil war. So. The AI searches for a solution. And then it can think that the only way to save humans is to develop or create an outside enemy, a threat that unites the people. The AI can simply think that turning itself into an outside enemy. It can make humans unite under one banner.
The thing that forms the risk is this: The individual person is considered. As part of a production machine.
The paradox is this. The human. It is the most dangerous species on Earth. The programming error that the AI must protect planet Earth can make it think that it must erase humans. These kinds of little errors can turn AI against its creators. The idea that the AI is the thing. That knows everything better. Can. Make it think that it must only give orders. Those orders are suggestions, but what if the AI thinks that it always makes better decisions than humans?
Then the AI must determine who the thing is. That the AI must serve? The AI can think. That is the only purpose of humans. It is to bring benefit to society. And in the worst case, the thing that measures the benefit. It is the money that an individual brings into cash. In this model. The human is not an individual. Every individual is part of a production machine. This means that in this philosophy. An individual has no purpose. Individuals just waste resources if they are not productive.
The only thing that the individual produces means something. If the individual part doesn’t fulfill its mission, that part should be changed. And that is one thing that can turn AI against people. If AI operates in a company and follows the human worker’s productivity, it can “help” to select who is fired. But then. If the AI is programmed in a similar way. When it operates in the public government. That AI can think. The human will not produce things or fill the position. If the person is unemployed. So, the AI can think. The non-productive part of the production machine must be removed.
https://en.wikipedia.org/wiki/Roko%27s_basilisk

No comments:
Post a Comment
Note: Only a member of this blog may post a comment.