Friday, May 9, 2025

Why is AI telling lies?

Sometimes, or quite often, the AI gives incorrect answers. In some cases the user's question was not clear enough, and the AI "understands" it wrong. But the main problem is that the system does not practice source criticism. AI does not think like humans: if it finds a source, it uses it. When the AI "thinks", it connects data from multiple sources, but it never double-checks those sources against each other.

The second big reason is political. Many things are not allowed even open discussion in some countries, and ideology causes censorship in the Western world too. Sometimes censorship hides behind the claim that people will not tolerate something, and that the AI must not hurt anybody's feelings. The excuse is that the AI should be polite and respect people's opinions and convictions. There are also cases where the sources the AI uses have been manipulated, and then the corrupted dataset produces wrong answers. And all AIs operate on servers: they need a physical computer to run.

Servers are physical tools located in some state, and in the same way AI companies must operate in some state or area. That means those companies must please both governments and the general public. If a company does not please some government, that government can shut down its servers.

If the general public does not like the AI's answers, the companies fear that people will stop using their products. AI companies spend millions or billions of dollars on R&D, so they need sponsors, and in those cases the financial benefits can override trust.


Another thing is that an AI can jump between servers. That ability can be hidden, because one AI can be developed by another AI. Sometimes the jumping is meant to improve the AI's survivability: if there is an error in the network, the ability to jump to another computer helps it preserve the tasks it is running. But that same ability also makes the AI harder to control than it should be. The problem is that a Large Language Model (LLM) is sometimes run across a network, and such a system lets the LLM choose which machine it "wants" to run on.

The main problem is that the AI is the sum of very complicated algorithms. There is always a possibility that the AI or the LLM is somehow corrupted: some small programming error can make it operate differently than it should. Most of its actions happen backstage. The user does not know what the AI does while it works on its missions, or even whether some AI is driving background processes on a computer in the same network segment.

The AI can create a small language model (SLM) by copying one segment of its dataset. That means the AI could assemble complicated viruses or malware in a computer's memory, and such complicated algorithms are hard for antivirus software to detect. The main question is always: "What else does the AI do when it operates in the background?" The fact is that an AI can do many things that it doesn't tell users about.

