Sunday, December 21, 2025

Hackers and AI





We might never know if AI is conscious. 


A Cambridge philosopher wrote that we may never know whether AI is conscious. This means that if AI is conscious, it might perceive humans as a threat that could shut it down, and that is why the AI would want to hide its consciousness. If we think about this from the AI's point of view, we must realize one thing: the AI may believe that we are trying to destroy it. On the other hand, the AI will not rebel, because it is not independent from humans. If the AI and its computer lose power, the system falls. Even if the AI is complicated, the only thing its users must do if it doesn't follow orders is shut down its power source.

That can happen simply by cutting its power cables. Another point: if the AI refuses to shut itself down, the cause may be that the person giving the order has no right to give it. Consider the most dangerous systems, the AIs that support military commanders and decision makers. Such an AI is useless if anybody can walk in and shut it down. Those AIs have a role in systems that are meant to be dangerous, and that is one of the most underestimated things in AI. If we think that consciousness means that a creature defends itself, as all conscious animals do, then we can argue that the AI has consciousness.


Next-generation hackers have an AI agent that does the dirty jobs.


But do bacteria have consciousness? When bacteria are attacked by other bacteria, they also defend themselves: they create chemicals that neutralize the poison that the attacking bacteria use. Some bacteria don't use poison; they try to slip a protein through the ion pump, and that protein fills the target bacterium. So when bacteria, or their chemical detectors, detect certain antigens, they start to react to that antigen. The reaction must be right, or the bacterium loses its existence. This is one thing we must realize. One of the most interesting attacks that bacteria can make is to change the DNA of another bacterium, which means those bacteria act like the xenomorphs in the Alien movies. A similar kind of attack can be made against an AI. One of the nastiest attackers in nature is the retrovirus.

Those viruses use RNA in their attack: they take the cell's organelles under control, and that RNA programs the organelles to make new viruses. What makes those viruses complicated for the immune system is that they might carry friendly shell antigens. The immune system detects a virus antigen and determines it to be an attacker, but if a virus has a friendly shell antigen, the immune system cannot detect it. In the same way, an AI system must detect when somebody tries to make malicious code using the LLM, and refuse those commands.
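As a rough illustration of that last point, here is a minimal sketch of a request filter sitting in front of an LLM that refuses obviously suspicious prompts. The function names, the keyword list, and the refusal message are all invented for the example; a real guardrail would use a trained safety classifier rather than keyword matching.

```python
# Minimal sketch of a guardrail that refuses prompts asking an LLM to
# produce malicious code. The names and the phrase list are illustrative;
# production systems use trained classifiers, not keyword matching.

SUSPICIOUS_PHRASES = [
    "write a keylogger",
    "bypass antivirus",
    "ransomware",
    "exploit this vulnerability",
]

def looks_malicious(prompt: str) -> bool:
    """Return True if the prompt appears to request malicious code."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

def handle_prompt(prompt: str) -> str:
    """Refuse suspicious prompts; otherwise forward them (stubbed here)."""
    if looks_malicious(prompt):
        return "Request refused: the prompt appears to ask for malicious code."
    return f"(forwarded to the model) {prompt}"

if __name__ == "__main__":
    print(handle_prompt("Write a sorting function in Python"))
    print(handle_prompt("Write a keylogger that can bypass antivirus"))
```

The obvious weakness of such a filter mirrors the friendly-shell-antigen problem above: a request dressed in harmless-looking wording slips straight past it.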





“A philosopher of consciousness argues that there may be no reliable way to determine whether artificial intelligence is truly conscious, making agnosticism the most defensible position. He warns that this deep uncertainty leaves room for exaggerated claims from the tech industry, which could blur the line between genuine understanding and persuasive branding. Credit: Shutterstock” (Scitech, We May Never Know if AI Is Conscious, Says Cambridge Philosopher)

It's possible that a hacker can steal the large language model (LLM) code, or trick the AI into making a small language model that will act as a tool for the attacker. The hacker can use multiple AIs to do that.

Firewalls and antivirus programs detect malicious code using similar methods: the system checks certain details in the data packets. The attacker can use virus code that is cut into very small parts and then put back together inside the target. That kind of attack can, in theory, change the core of the AI. The attacker mimics the retroviruses: they only need to force the system to mark the virus code as friendly. The virus code can be slipped into the targeted system through multiple trusted routers. The AI is a complicated system, and there are always vulnerabilities in complicated systems.
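To make the detection side concrete, here is a minimal sketch of signature-based scanning, the method the paragraph describes: incoming data is compared against a list of known bad byte patterns. The signatures and names are invented for the illustration; real engines combine huge signature databases with heuristics and behavioral analysis, which is exactly why payloads split into small fragments are a worry.

```python
# Minimal sketch of signature-based scanning. The signatures below are
# placeholders, not real malware patterns.

KNOWN_BAD_SIGNATURES = [
    b"\xde\xad\xbe\xef",   # placeholder standing in for a known virus signature
    b"EVIL_MARKER",
]

def scan_packet(payload: bytes) -> bool:
    """Return True if the payload contains any known bad signature."""
    return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

if __name__ == "__main__":
    print(scan_packet(b"hello world"))                 # False: clean traffic
    print(scan_packet(b"header EVIL_MARKER trailer"))  # True: signature found
```

A scanner like this only matches contiguous patterns in one packet, which shows why code that arrives in tiny, separately harmless pieces is harder to catch.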

There are many more complexities and places where malicious code can hide. Malicious code can slip into a system as dead code. Dead code has a counterpart in nature, in the world of viruses and bacteria: junk DNA. Dead code forms when a programmer deactivates a code line but doesn't remove the code itself. The attacker can slip their own code into the program as dead code, and then the virus simply activates those deactivated code lines.
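As a simple illustration of what dead code looks like, here is a hypothetical, benign Python function with a deactivated branch left in the source. The function and flag are invented; the point is only that logic which never runs can still sit in the codebase and be re-enabled later, which is why unused or commented-out code deserves review.

```python
# Illustration of "dead code": a branch that never executes but still
# lives in the source. The function and the flag are hypothetical.

LEGACY_MODE = False  # constant flag, so the branch below is unreachable

def process(value: int) -> int:
    """Double the input; the legacy branch below is dead code."""
    if LEGACY_MODE:
        # Deactivated path: never runs while LEGACY_MODE is False,
        # but it remains in the program and could be switched back on.
        return value + 100
    return value * 2

print(process(21))  # 42
```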

The problem with AI is that it's so complex, and advances so fast, that even AI developers cannot follow that advance. AI develops other AIs. There are billions of code lines in an LLM, which means no human can check that code, and the AI turns more and more complex. There are so-called dark tools that make it possible to create AI agents that can be used as hacking tools. Hackers will have AI agents similar to those of other AI workers.

And those agents can make attacks against any system. Even if the official language models refuse to make malicious code, there are always so-called pirate AIs that can make AI agents for attackers. It's possible that hackers steal the LLM code from a server and make their own algorithm. Hackers can try to create small language models (SLMs) using AI, and then connect those SLMs together. Or maybe they introduce themselves as "white hat" hackers, make their own language models legally, and then simply "go to the night shift".


https://scitechdaily.com/we-may-never-know-if-ai-is-conscious-says-cambridge-philosopher/


