If we like somebody, we might forget to ask critical questions. In job interviews, the interviewer sometimes forgets to ask about a candidate's educational history, and a person could pass an interview for a company leadership position without even having completed elementary school.
The primary problem for organizations and societies is this: what if somebody breaks the rules? What if somebody endangers safety? Where do we draw the line? When must the authorities be told about things like dangerous plans? Where does legitimate suspicion of a crime end, and where does unnecessary stalking begin? We must then decide matters of security, and North Korea has a very different point of view on these questions than Western democracies do.
But what is the limit? When should AI report to law enforcement about people who seem dangerous? If officials search somebody's home and find things like assault rifles, that doesn't necessarily mean the person will do anything bad. But if the authorities don't react, that becomes a problem if the person does act and used AI for planning, because the shooting could have been prevented. On the other hand, AI user accounts can be created with stolen identities, or hackers can steal those passwords.
This means those plans could have been made by other people, or could have been planted to destroy the person's reputation. But if the authorities take the guns away, nobody can be sure whether the person would actually have carried out the threat. If the authorities do nothing and something happens, that leads to legal action in court. This is the problem with predictive policing: nobody can be sure whether a person who seems to be planning something bad will actually go through with it. But if the authorities don't react and the person carries out the threat, that is also a very bad outcome.
AI's ability to collect information is incredible, and that raises a very big ethical problem in the online world. People value their privacy, but what if a person uses AI in a way that harms the security of the state or of a private individual? What if people plan things like bank robberies or school shootings using AI? Or what if North Korean intelligence officers establish a front company in a Western city like Toronto, Canada, and then use AI as a hacking tool? When we read news that the Chinese government denies firms the right to fire workers replaced by AI, we must realize that such news can be faked.
The Chinese government has a very large budget for that kind of project, and it gives large-scale support to AI development. The main problem is this: AI development has turned into an arms race. AI has shown its capabilities in military and intelligence work, and that means AI is a tool that those overseas actors want to invest in. AI is a tool for observing and controlling people. The thing that makes AI powerful and effective also makes it dangerous.
AI has no feelings. It just sees and reports, so it is hard to please AI. When people develop AI for surveillance and intelligence work, they use the same arguments as the people who resist AI: AI has no friends, which means it should be trustworthy and neutral. When AI is used to oversee police work, the question becomes: who controls the controller, who observes the observer? If humans are the ones overseeing people like police officers, there is always the possibility that feelings or friendships affect their ability to think objectively.
When AI sees something, it simply reports what it sees. It sees that a person puts something into their pocket; it doesn't register things like clothes, age, or anything else beyond the action itself.
When some people are good-looking, that can create situations where they can manipulate undercover officers, and this is why the machine's view is a tool that can act neutrally. AI can track how long a judge keeps case files open. If a judge doesn't even look at the files that are brought into the courtroom, or a university inspector never opens a thesis but still makes decisions about it, those are situations that require intervention. The problem is the human who uses tools and positions in the wrong way.
https://futurism.com/artificial-intelligence/two-shootings-openai-stopped
https://futurism.com/artificial-intelligence/openai-school-shooter-tumbler-ridge-lawsuits
