Saturday, January 31, 2026

What is the truth about the devices that were used in the Maduro incident?





EC-130 “Compass Call”


When Delta Force captured Venezuelan dictator Maduro, one question was whether it used some acoustic or electromagnetic system that put people to sleep during the attack. An acoustic system can put a person to sleep. But an attack against missiles and electric networks can be made in many ways. The attacker can use a quadcopter to pull copper wires over high-voltage lines, spread graphite powder to short-circuit high-voltage electric lines, or use plasma for the same missions. The easiest way is to shoot transformer stations with assault rifles. 

Photoacoustic devices, or some other acoustic devices, can destroy electronics simply by causing oscillation. 

Photoacoustic devices can create tiny, earthquake-like vibrations in microchips. Those tiny earthquakes can help modulate signals that travel in the microchips; such systems can make things like nanocrystals oscillate, and that could be a new way to modulate electric impulses in very small devices. Photoacoustic devices could be a tool for many things, making the seemingly impossible possible. They can also cause such a powerful “earthquake” in a microchip that it destroys the chip: if a photoacoustic system drives a microchip into resonance, it can tear the chip to pieces. 
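The resonance claim can be illustrated with the textbook driven harmonic oscillator: the response amplitude peaks sharply when the driving frequency matches the structure's natural frequency. This is a minimal sketch with arbitrary, made-up numbers, not a model of any real chip.

```python
import math

def amplitude(drive_w, natural_w=1.0, damping=0.05, force=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator.

    All parameters are illustrative; at drive_w == natural_w the
    response is limited only by the (small) damping term.
    """
    return force / math.sqrt((natural_w**2 - drive_w**2)**2
                             + (damping * drive_w)**2)

print(amplitude(0.5))   # far from resonance: modest response
print(amplitude(1.0))   # at resonance: response many times larger
```

With 5% damping, driving at the natural frequency produces a response roughly fifteen times larger than driving at half that frequency, which is the mechanism the resonance-damage idea relies on.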



There is a possibility that in the Maduro strike Delta Force used technology that forces a person to sleep; systems of this kind are tested at MIT. Long-range acoustic devices (LRADs) are easy to mount in helicopters. Those systems, used in riot situations, can also jam or paralyze humans. An LRAD could also be combined with the technology that puts humans to sleep, and it could be used to break microchips through acoustic resonance. Acoustic resonance can also destroy ceramic insulators in electric lines.  



So, when we return to the Maduro case, where Delta Force somehow jammed those missiles, we must understand that those missiles sat in a warehouse for a long time. There is a possibility that they were taken out using regular sabotage. CIA agents posing as service staff could simply cut the wires from the missiles. Such personnel could also use portable acoustic systems to break those microchips and their shells, or, given time, put acid on the microcircuits or damage the batteries. 

The fact is that EMP weapons or acoustic weapons could have been used against Maduro’s bodyguards and the house’s electronics. Their targets could also have been places like cell-phone base stations, so maybe they were not used against radar and missiles at all. An EMP weapon can delete the kernel from microchips: the chips look undamaged, but without the kernel the software cannot interact with them. This kind of EMP weapon can disable missiles even months before an attack. If no crew tests the missiles and their electronics, that fatal damage can remain unseen. 


"Researchers have created a phonon laser that generates tiny, earthquake-like vibrations on a chip. The advance could simplify smartphone hardware while boosting speed and efficiency. (Artist’s concept.) Credit: SciTechDaily.com" (ScitechDaily, Scientists Just Created Tiny Earthquakes Inside a Microchip)

A high-power microwave- or radio-wave-based EMP weapon can be hidden in missile warehouses. There are many ways to create an EMP pulse that causes large-scale damage to electronics. One version is to create high-power voltage impulses in electric systems, using high-power capacitors that push an electric impulse into the electric network.

Or a high-power radar impulse can destroy computers by causing a surge or overvoltage in electric wires. In that case, the system can use an electrolytic liquid: like salt water, the liquid transfers electric impulses to wires. The system is like an electric-impulse gun that uses a saltwater “beam” traveling over the electrode; the salt water replaces the copper wires. 


"The RUS-PAT (rotational ultrasound tomography, RUST, combined with photoacoustic tomography, PAT) technique combines the benefits of ultrasound imaging for seeing tissue structure with those of photoacoustic tomography for revealing function of the vasculature. Credit: Yang Zhang" (ScitechDaily, Scientists Develop a New Way To See Inside the Human Body Using 3D Color Imaging)

Maybe those systems were in such bad condition that they didn’t work. Another version could be that the orders to put those missiles into position were refused. There is also a possibility that the operators were simply taken out using some opioid gas or acoustic devices that denied their ability to operate. Or Maduro’s successor simply removed the SIM cards from the telephones that those operators used to communicate with HQ. 

But high-precision EMP and acoustic devices can destroy the microchips in those missiles. A high-power EMP can destroy electronics even if the device is shut down and stored. If the missiles are stored in an EMP-protected shelter, the agents must somehow slip the EMP bomb into that shelter. A portable EMP device that uses high-power impulse capacitors can be launched by a wired remote launcher, get its launch signal from a timer, or receive the signal from a drone using a laser. 

There is also a possibility that the drone is connected to a spring switch: there is a cotter pin in the launcher, and when the drone pulls it away, the spring pushes the electrodes together. This is necessary only when the EMP device sits inside an EMP-protected shield. EMP or acoustic devices can be carried in a helicopter, in drones or aircraft, or be man-portable and slipped into those warehouses long before the strike. The fact is that those operations are very classified. 


https://edition.cnn.com/2026/01/25/politics/trump-says-secret-discombobulator-weapon-was-used-to-capture-maduro


https://www.twz.com/news-features/did-a-mysterious-sonic-weapon-really-aid-delta-force-in-capturing-maduro


https://news.mit.edu/2024/startup-elemind-helps-people-fall-asleep-0925


https://www.researchgate.net/figure/LRAD-weaponized-cone-of-sound_fig1_347576544


https://scitechdaily.com/scientists-develop-a-new-way-to-see-inside-the-human-body-using-3d-color-imaging/


https://scitechdaily.com/scientists-just-created-tiny-earthquakes-inside-a-microchip/

Thursday, January 29, 2026

What makes us intelligent?

 




“Modern neuroscience has revealed a brain made up of specialized networks, yet this fragmented view leaves open a central question: why human thought feels unified. A new neuroimaging study explores how large-scale patterns of communication across the brain give rise to general intelligence.” (ScitechDaily, Scientists May Have Found How the Brain Becomes One Intelligent System)

Finally, researchers may have found why the brain forms an intelligent entirety, or at least they think they have an answer to that question. We have multiple different cell types in the brain. Some neurons work with somatosensory signals; some work with memories. Every single neuron type has its own role in the brain. The problem is how those different neuron types and brain areas orchestrate themselves to work as an entirety. That is what makes humans so different. When we talk about things like consciousness, we sometimes ask why animals escape predators. The answer is simple: when the predator starts to follow its prey, the rest of the pack can escape. 

Orchestrated brain cells make it possible for humans to do things that are impossible for animals. The thing that makes this possible is that the human brain can handle abstraction. Abstraction means that when our company tells us to beware of things like wolves, we understand that without ever needing to see a wolf. This is one of the things that differentiates us from things like cows. When a wolf attacks one human, we will go help that packmate. But cows do not do that. 

They just escape, and the wolf kills one of them. The others will not help that poor member of the herd. Escaping can be a good choice, because most of the herd will survive, and the sacrificed member is usually old and weak. That is an intelligent option if the creature has no imagination. But in the case of humans, old people, or old members of the group, transmit hereditary knowledge to the next generation. 

We will always help other humans, because that benefits our species. When a predator attacks, we can run, but we can also take a stone or some other weapon and make a counterstrike. Our brains are such a powerful tool because they start their operations at multiple points at the same time. Our brains can connect memories to real-time observations. 

But then we must realize that the neurons in our brains behave as they do to maximize their own benefit. An orchestrated model, where individual neurons serve the larger group of neurons, gives them an advantage. Alone, one neuron is not a very dramatic thing, but as a large group that controls the body, those cells can do many things that would otherwise be impossible. Orchestration gives those neurons power and makes them operate as an intelligent entirety. In that case, intelligence means that we can make new things: build houses, write computer programs, and do many things that are not happening just here and now. 

We could make a morphing neural network that mimics the human brain. We could divide its servers the way neurons are divided in the human brain: a server group in which each server has the same role as a certain brain area has in the human brain. This makes it possible to mimic the brain. 
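A toy sketch of this "server per brain area" idea, with every name and handler invented for illustration: each function stands in for a server playing one brain area's role, and a small orchestrator routes incoming signals to the right one.

```python
# Each "server" handles one modality, mimicking a specialized brain area.
def somatosensory_server(signal):
    return f"touch processed: {signal}"

def memory_server(signal):
    return f"memory recalled: {signal}"

# The routing table plays the role of the orchestration between areas.
BRAIN_AREAS = {
    "touch": somatosensory_server,
    "memory": memory_server,
}

def orchestrate(kind, signal):
    """Route a signal to the server that plays that brain area's role."""
    return BRAIN_AREAS[kind](signal)

print(orchestrate("touch", "hot surface"))
print(orchestrate("memory", "wolf warning"))
```

The point of the sketch is only the structure: specialized handlers plus a shared routing layer, which is the "orchestration" the post describes.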

AI systems can manage things better than human brains if they have ready-to-use models in their memories. This means the AI must have faced a similar situation before. If the AI must make a new action model, it loses to human brains. The AI is not flexible in the same way human brains are: when AI must create an action model in a surprising situation, it loses to human brains. 


https://scitechdaily.com/scientists-identify-the-evolutionary-purpose-of-consciousness/


https://scitechdaily.com/scientists-may-have-found-how-the-brain-becomes-one-intelligent-system/


https://scitechdaily.com/the-brains-strange-way-of-computing-could-explain-consciousness/


Thursday, January 22, 2026

AI and spatial intelligence.



We are all familiar with virtual worlds, such as Minecraft and Fortnite, and many other gaming platforms that are more or less limited. Virtual worlds are places where researchers and gamers can train AI. The uses of virtual worlds are unlimited. We can create virtual offices where workers meet in virtual spaces. Even if in many examples the people who use virtual spaces wear VR glasses, a regular cell phone could be suitable for that purpose. 

Of course, the system must have some kind of input method, like a keyboard connected to those systems. Voice commands might not be suitable for cases like virtual meetings. The program could also follow head movements and turn the image on the screen if the user doesn’t want to use VR glasses. These virtual spaces are connected with large language models (LLMs). 


Robots that can operate as a medium between computers and humans can act as physical avatars. 


Connecting virtual spaces with LLMs can bring a new ability to the AI: spatial intelligence. In virtual spaces, we can use avatars to communicate with other users, and avatars can hide their users’ real identities. The AI can also interact with users through avatars. The ability to adjust and customize those virtual spaces for individual users gives them new abilities. When users operate in virtual spaces, they can be in the same space, but each of them can see a different space. This is one of the things that the AI can do. 

One person can see the meeting point as a meeting room, and another person can see it as some kind of bar. That means an individually customized virtual space. Spatial intelligence and avatars can also operate in physical space. In the “Alien” movie series, a man-shaped robot acts as the medium between computers and humans. Man-shaped systems that convey communication between humans and the data center’s computers are tools that can bring a new way of cooperation between humans and computers. AI, virtual spaces, and avatars can be used for AI training. 

The avatar takes the command and then performs the work that the user gives it. Then that action can be driven to a physical robot. If the avatar does not do things correctly, the user can train it, like a teacher. If the avatar is used to train a car-mechanic robot, the trainer must select tools and show the machine parts. Then the robot can use the same method as the robot that learned to talk by watching YouTube: the robot can watch videos. 

These videos are meant for car-mechanics education. The image recognition can ask about each part that the AI sees, and the user can tell it what the pistons and other things seen in the film are. The user can point the mouse at an item in the film and tell the system the name of the object if it doesn’t know it. Then the AI takes instructions on how to change the piston, and it can use the avatar to demonstrate how it does those things. If the avatar does it correctly, the order will transfer to the physical robot. 


The problem with AI is that it takes work from the positions where people get the skills to advance to a senior level.




The problem with AI is simple. Organizations view AI primarily as a tool for surveillance, observation, and efficiency enhancement. The last one means that AI gives a chance to fire a couple of workers, and that is the prime priority for AI and its use. The system counts every single word that a person writes, and that is a good tool for observing how effective a worker is. These kinds of systems are good assistants. 

Such systems can uncover people who do their jobs only when the boss stands behind their backs. The problem is that these types of attitudes put pressure on workers. This attitude shows how working life has advanced: the need to be effective means that when companies hire workers, the top priority is that the worker must be easy to replace. 

And the first question is this: how do we fire that worker if the work is not done? The AI could give people time to think about their work and learn how to do it as it should be done, but the use of AI is limited to efficiency. When people hire workers, they expect the people they hire to have 20 years of experience with the work. This is impossible if new workers, rookies, have no chance to get training. Without training, a person can never reach an expert level. This is reality in every type of industry. The problem is that AI can take junior-grade positions from the ICT sector. 

And every single senior, master, etc. began their career as a junior. In the same way, when students begin their careers at the university, all of them begin as candidates; all doctors, professors, and lecturers were once candidates. Likewise, when we meet an ultimate professional programmer in working life, there was once the phrase “Hello world” on that person’s screen. That is the traditional output of the first program a person writes. This means all programmers once wrote their first code. When the AI takes over that work, there will be no experienced coders left to fill senior positions. 


Wednesday, January 21, 2026

The robot just learned to talk by watching YouTube videos.


“Hod Lipson and his team have created a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. Credit: Jane Nisselson/ Columbia Engineering” (ScitechDaily, This Robot Learned to Talk by Watching Humans on YouTube)

This is a fast advance toward self-learning, or automatically learning, systems. Those systems can learn things by watching YouTube or regular TV, and that means they can learn many things automatically. The robot that learned to talk by watching videos simply followed the movements of the mouth, and then it could benefit from a speech-to-text application. These various methods can be used to create an autonomous learning system for every kind of mission. The robot can follow how humans move and then mimic those movements with its own body. 

This means that a robot can learn things like kung fu by following what people do in videos. The robot doesn’t need any special application; it can just follow the screen using its action cameras. And this means any film is suitable teaching material for those robots. The CCD action camera automatically transforms anything the AI sees into digital form, and that can open new paths for self-learning systems. The AI can learn things like tactics just by following what commanders mark on their screens and maps. 

The system can use regular CCTV cameras, and that is one of the biggest threats to national security. The AI can use the things it learns as a data matrix for highly advanced simulations, and then another AI tries to create a counter-tactic against those tactics. Any platform the AI can see can act as a teaching platform. An autonomous learning protocol can mean that the AI learns things that bring big surprises to the game. 








The General Dynamics X-62 VISTA is a system that is used for AI training. 


When the robot operates in automatic mode, the things it sees, hears, or feels act as triggers. If something the robot senses matches something in the database, that trigger activates the action the robot should use when it faces that kind of situation. If somebody says “hi”, the system finds a trigger that activates a certain database connection, and the system can answer “hi”. But the fact is that robots do not think. The trigger is a certain observation that activates a certain database entry, and the same “hi” could activate almost any kind of action series. The robot does the things that are programmed into its memory, and that can give good topics for some horror movies. 
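The trigger-and-database idea above can be sketched as a plain lookup table. This is an illustrative toy, not any real robot control stack: the sensed event is only a key, and the "intelligence" is nothing more than retrieving a pre-programmed action series.

```python
# The "database": each trigger maps to a stored series of actions.
ACTION_DATABASE = {
    "hi": ["turn head toward speaker", "say 'hi'"],
    "fire alarm": ["stop task", "move to exit"],
}

def react(sensed_event):
    """Return the stored action series for a trigger; no match, no action."""
    return ACTION_DATABASE.get(sensed_event, [])

print(react("hi"))        # a matching trigger launches its stored actions
print(react("unknown"))   # nothing in the database -> empty action list
```

The horror-movie point in the text falls out of the structure: nothing in the lookup itself constrains what actions the key "hi" is bound to.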

Automatic learning means the system can learn things that stay hidden from the system operators. When we talk about robot jet fighters, the systems learn by using the movements that pilots make to build an operational data matrix. The system learns from every operation that the jet fighter or bomber makes. But robot fighters can also learn in a way similar to the robot that watches YouTube videos: the machine eye can observe how jet fighters behave in films. The fact is that training the AI is not easy, but once the AI is trained, it is quite a cheap tool. This means the AI can be deployed in kamikaze drones. 

Training the AI for driving, for customer service, or for flying is similar to teaching humans. The system records the things that the driver or controller does in each situation, and those actions are stored in a data matrix. When the computer sees a match for a certain situation, that match activates the operation. And that is the difference between humans and robots: the robot doesn’t think, it just searches for matches in a database. This means that the robot doesn’t actually know what it does. It finds the right series of movements in a database, launched by a trigger, like an action that the robot sees. 
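A minimal sketch of this record-and-replay loop, with invented situations and actions: the operator's demonstrations fill the "data matrix", and at run time the system replays the action whose recorded situation is closest to the current one.

```python
recorded = []  # the "data matrix" of (situation_vector, action) pairs

def record(situation, action):
    """Store one demonstrated (situation, action) pair."""
    recorded.append((situation, action))

def nearest_action(situation):
    """Replay the action whose recorded situation is closest (squared
    Euclidean distance) to the current situation -- matching, not thinking."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, situation))
    return min(recorded, key=lambda pair: dist(pair[0]))[1]

record((0.0, 0.0), "hold course")
record((1.0, 0.0), "steer left")
print(nearest_action((0.9, 0.1)))   # closest demonstration wins
```

Note that the lookup has no notion of why "steer left" was right; it only measures which recorded situation the current one resembles most, which is exactly the match-searching the post contrasts with human thought.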


https://www.indiatimes.com/technology/news/us-military-is-testing-an-autonomous-fighter-jet-are-robot-soldiers-next/articleshow/126931421.html


https://scitechdaily.com/this-robot-learned-to-talk-by-watching-humans-on-youtube/


https://en.wikipedia.org/wiki/General_Dynamics_X-62_VISTA


https://en.wikipedia.org/wiki/Skyborg


Tuesday, January 20, 2026

Does the AI know anything?


“Artificial intelligence is often described using language borrowed from human thought, but those word choices may quietly reshape how its capabilities are understood. Researchers analyzed large-scale news writing to see when and how mental language appears in discussions of AI. Credit: Shutterstock”. (ScitechDaily, Does AI Really “Know” Anything? Why Our Words Matter More Than We Think)

The AI doesn’t think as we do. When AI recognizes an object, it searches databases and tries to find the action that matches that object. What makes this process hard for programmers is that the object might look different when the robot approaches it from different angles. If we want to make a robot that automatically uses things like hand pumps, we must remember that the pump looks different from different angles. The robot must have an algorithm that lets it recognize a pump anyway. 

So the robot can recognize the pump using fuzzy logic. In fuzzy logic, the system calculates the percentage of pixels that match the pump. The system can also have many photographs taken from different angles; in that case, the system compares images and searches for matches of the pump. The problem is that the AI doesn’t know what a pump looks like when it is sideways to the observer. If the pump falls over, it looks different than when it stands. 
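The percent-of-pixels matching can be sketched with toy 3×3 binary masks; nothing here comes from a real vision library, and the 0.8 threshold is an arbitrary illustrative choice.

```python
def match_fraction(mask_a, mask_b):
    """Fraction of pixels on which two equal-sized binary masks agree."""
    pixels = [(a, b) for row_a, row_b in zip(mask_a, mask_b)
              for a, b in zip(row_a, row_b)]
    agree = sum(1 for a, b in pixels if a == b)
    return agree / len(pixels)

template = [[0, 1, 0],
            [1, 1, 1],
            [0, 1, 0]]
seen     = [[0, 1, 0],
            [1, 1, 0],   # one pixel differs from the template
            [0, 1, 0]]

score = match_fraction(template, seen)
print(score)          # 8 of 9 pixels agree
print(score > 0.8)    # crosses the acceptance threshold -> "this is a pump"
```

The text's weakness shows up directly here: a sideways or fallen pump produces a mask far from the template, so the score collapses even though the object is the same.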

We know that AI is the ultimate tool in things like math. But the system doesn’t create anything new: it just connects and reconnects information that it gets from databases. This means that the thing AI misses is abstraction. AI can work very effectively if it has clear and precise orders, and the information it uses must be handled by following certain rules. 

This is the reason why computers are ultimate calculators. The rules of math are clear: the system must follow a certain order of operations to solve the problem. In math, the problem must be solved by following exact rules, or the result cannot be accepted. Another thing is the input that the user gives to the system. From a keyboard, input is unambiguous; the situation changes when the system must understand spoken language. 




“As labor shortages push agriculture toward automation, harvesting delicate, clustered fruits like tomatoes remains a major challenge for robots. Researchers have now developed a system that allows robots to assess how easy a tomato is to pick before acting, using visual cues and probabilistic decision-making. Credit: SciTechDaily.com” (ScitechDaily, Robots That “Think Before They Pick” Could Transform Tomato Farming)

There are many problems in that case. Nonverbal communication plays a big role in communication between humans. Another thing is that in spoken language there are so-called hidden meanings of words. When somebody collides with our car, we can say “how nice”, but that doesn’t mean that we think it is nice. 

The main thing is this: the AI is like a dog. It is a very effective tool when it searches for and finds something. If we give AI an order to create a house, the AI uses models. Those models, or variables, are the bricks or Legos that the system can connect together. But the AI will not create any of those Legos itself; it requires variables that are already stored in its databases. 

This is why the AI is like a dog: it finds almost everything, but the lack of abstraction makes it impossible for it to create new things. The AI can “think” before it acts. As in the tomato-picking robot case, the AI compares models that tell it how it should act. The AI recognizes a tomato, then searches its database for the type of movements it should make. And the system must know the compressive force: if the force is too strong, the tomatoes turn into ketchup, and if the compressive force of the touch is too weak, the tomatoes drop to the ground. 


https://scitechdaily.com/does-ai-really-know-anything-why-our-words-matter-more-than-we-think/


https://scitechdaily.com/robots-that-think-before-they-pick-could-transform-tomato-farming/

Monday, January 19, 2026

A robot that “thinks before it acts” can revolutionize robotics.


“As labor shortages push agriculture toward automation, harvesting delicate, clustered fruits like tomatoes remains a major challenge for robots. Researchers have now developed a system that allows robots to assess how easy a tomato is to pick before acting, using visual cues and probabilistic decision-making. Credit: SciTechDaily.com. A scientist has explained why robots still struggle to pick tomatoes.” (ScitechDaily, Robots That “Think Before They Pick” Could Transform Tomato Farming)

To pick tomatoes, the robot requires precise algorithms. The compression force must be exactly right: if the force is too strong, the robot destroys the tomato. Another thing is that the robot’s hand movements must be precise and correct; if those movements are wrong, the robot throws the tomatoes past the bucket. The robot must also recognize a tomato and leave the rest of the plant in place. 
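A hedged sketch of the force-window idea, with made-up numbers rather than measured tomato data: the planner clamps any requested grip force into a safe window between slipping and crushing.

```python
# Illustrative limits only -- real values depend on fruit ripeness and size.
MIN_GRIP_N = 1.0   # below this the tomato slips and drops to the ground
MAX_GRIP_N = 4.0   # above this the tomato is crushed ("ketchup")

def plan_grip(requested_force):
    """Clamp a requested grip force (newtons) into the safe window."""
    return max(MIN_GRIP_N, min(MAX_GRIP_N, requested_force))

print(plan_grip(0.2))   # too weak  -> raised to 1.0
print(plan_grip(9.0))   # too strong -> reduced to 4.0
print(plan_grip(2.5))   # already safe -> unchanged
```

A real "think before picking" system would choose the force from visual cues rather than clamp a fixed request, but the clamp captures the two failure modes the text names.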

“Labor shortages in agriculture are driving growing interest in robotic systems that can automate harvesting. Yet some crops remain especially challenging for machines. Tomatoes, for example, grow in clusters, meaning robots must identify and remove only the ripe fruit while leaving unripe tomatoes attached to the vine. Doing this reliably requires careful judgment and precise control.”(ScitechDaily, Robots That “Think Before They Pick” Could Transform Tomato Farming)

A robot that thinks before it acts is the new tool. A robot that picks up things like tomatoes requires the ability to recognize the object it should pick. Then the system must calculate the force it needs to take the tomatoes and put them into a box without harming them. This is the thing that might revolutionize many things in robotics. A robot that thinks before it acts can do things better. It can decrease fuel use if it controls vehicles: the robot can calculate the most economical route and the best time to go. 

The same model that the system uses for picking tomatoes is required in almost all missions. The ability to adjust things like pressure and turning force makes robots more flexible. This is also important in places like car factories, where the system should make many different types of vehicles using the same assembly line. 

The ability to build many different vehicles on the same assembly line makes production more flexible. Say the system operator writes that the production need is 5 jeeps and 7 pickups, and that the first 3 vehicles, two pickups and one jeep, should be transported to a different place than the rest of the series. AI makes it possible for the system to build that first series first: it makes the pickups and the jeep as a mixed series, then loads that first series for transportation. These kinds of things can make production and transportation more flexible. 
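The mixed-series ordering described above can be sketched as a small scheduling function, using the 5-jeep/7-pickup order from the text; the shipment split is the example's, everything else is invented.

```python
order = {"jeep": 5, "pickup": 7}          # total production need
first_shipment = ["pickup", "pickup", "jeep"]  # urgent mixed series

def build_sequence(order, first_shipment):
    """Build the urgent mixed series first, then the rest of the run."""
    remaining = dict(order)
    sequence = []
    for vehicle in first_shipment:        # mixed series rolls off first
        remaining[vehicle] -= 1
        sequence.append(vehicle)
    for vehicle, count in remaining.items():  # then the leftover batches
        sequence.extend([vehicle] * count)
    return sequence

seq = build_sequence(order, first_shipment)
print(seq[:3])    # the first shipment leads the line
print(len(seq))   # 12 vehicles in total
```

The flexibility claim reduces to this: the line's build order is just a data structure, so reordering for an urgent shipment is a scheduling change, not a retooling change.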


https://scitechdaily.com/robots-that-think-before-they-pick-could-transform-tomato-farming/

Why don’t we talk to computers?


“Human language may appear inefficient compared with digital codes, yet its structure is deeply tuned to how the brain interacts with the world. Rather than compressing information into abstract symbols, languages build meaning step by step, drawing on shared experience and learned patterns. Credit: Shutterstock.” (ScitechDaily, Why We Don’t Talk Like Computers: Scientists Finally Have an Answer)

Sometimes we think that talking to computers is somehow difficult. There are multiple languages in the world. Those languages can share the same words, but those words are used differently. This causes problems when creating the dictionary that the system uses to detect words and follow the commands the user gives. 

The system uses two-stage data processing: it transforms speech into text, then drives that text to the command translator, which translates the commands into a form the computer understands. The system is not as simple to make as people think. First, it must transform spoken language into literary form. This stage of data processing is more difficult than we normally think; otherwise, those systems must ask the user to use literal language. 

Another big problem is separating the commands that the user gives to the computer on purpose from things that the user says accidentally. If people use computers in the middle of a crowd, there is probably a lot of noise around the user, which might make it hard to detect words. There is also always a small security risk if a person talks to the computer: an eavesdropper can hear what the person says. 
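A toy version of the two-stage pipeline, with a stubbed recognizer standing in for a real speech-to-text model and an invented command table; both names are assumptions for this sketch.

```python
def speech_to_text(audio):
    """Stage 1 stub: pretend an ASR model already produced this transcript."""
    return audio["transcript"]

# Stage 2: the "command translator" -- recognized text -> machine command.
COMMANDS = {
    "open file": ("OPEN", None),
    "shut down": ("SHUTDOWN", None),
}

def parse_command(text):
    """Map recognized text to a machine command, or None if not a command."""
    return COMMANDS.get(text.strip().lower())

utterance = {"transcript": "Open file"}
print(parse_command(speech_to_text(utterance)))   # deliberate command
print(parse_command("nice weather today"))        # accidental speech -> None
```

The accidental-speech problem from the text shows up as the `None` branch: anything outside the command table is simply ignored rather than executed.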

“Human languages are remarkably complex systems. About 7,000 languages are spoken around the world, ranging from those with only a few remaining speakers to widely used languages such as Chinese, English, Spanish, and Hindi, which are spoken by billions of people.”(ScitechDaily, Why We Don’t Talk Like Computers: Scientists Finally Have an Answer)

“Despite their many differences, all languages serve the same basic purpose. They communicate meaning by combining individual words into phrases and then organizing those phrases into sentences. Each level carries its own meaning, and together they allow people to share ideas in a way that can be clearly understood.”(ScitechDaily, Why We Don’t Talk Like Computers: Scientists Finally Have an Answer)

“Why language is not digitally compressed?” (ScitechDaily, Why We Don’t Talk Like Computers: Scientists Finally Have an Answer)

Most translator programs translate words through English, which means it is not possible to fully match the words of, say, Hindi and English. Another thing is that the language translation program uses two steps: the first translation step is made between Hindi and English, and the next step is from English to, for example, French. 

In this case, we must remember that most people don’t speak English at home as their native language. This means that most translation cases happen between English and two other languages. If the system used direct connections between each pair of languages, that would require a complicated data structure that is hard to control. So researchers use a star topology, where English is in the middle and the other languages are around it in a circle. 
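The advantage of the star topology can be shown with simple counting: with n languages, direct pairwise translators need n(n-1)/2 models, while an English-hub star needs only n-1.

```python
def direct_pairs(n):
    """Number of translators if every language pair gets its own model."""
    return n * (n - 1) // 2

def star_links(n):
    """Number of translators if everything routes through one hub language."""
    return n - 1

n = 7000  # roughly the count of living languages cited in the quote above
print(direct_pairs(n))   # ~24.5 million pairwise translators
print(star_links(n))     # 6,999 translators through the English hub
```

The trade-off the post describes is visible in the numbers: the star saves millions of models, at the cost of forcing every translation through a pivot that may not preserve meaning exactly.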

“This is actually a very complex structure. Since the natural world tends towards maximizing efficiency and conserving resources, it’s perfectly reasonable to ask why the brain encodes linguistic information in such an apparently complicated way instead of digitally, like a computer,” explains Michael Hahn. (ScitechDaily, Why We Don’t Talk Like Computers: Scientists Finally Have an Answer)

But there is also a psychological aspect to those commands. People might feel uncomfortable speaking to machines. Many times, people think it is like talking to themselves, and in Western society that behavior is somehow not tolerated. But if the supercomputer communicates with its users through man-shaped robots as a medium, that might make it easier to talk with the computer. 

https://scitechdaily.com/why-we-dont-talk-like-computers-scientists-finally-have-an-answer/

Sunday, January 18, 2026

Quantum threat.



“As quantum computing moves closer to real-world use, researchers are beginning to question how secure these powerful machines truly are. Emerging work suggests that entirely new forms of risk may arise from the way quantum systems are built and operated, raising concerns that existing safeguards may not translate to this next generation of computing. Credit: Stock”. (ScitechDaily, The Quantum Security Problem No One Is Ready For)
The new threat from the quantum system is its superiority. The biggest difference between a quantum computer and a regular supercomputer is the quantum computer’s power. Another big difference is its flexibility. In quantum computers, data is handled as qubits. A qubit has multiple layers, or states. Each of those states can operate as an independent binary computer, or they can unite their strength. So, the quantum computer can transform itself from an extremely powerful supercomputer into a morphing network that can multitask, performing multiple different operations simultaneously.
The power of quantum computers is enough to crack binary keys in minutes. Another problem is that an attacking quantum system can make non-centralized attacks against other systems. This means that the system can control multiple physical or virtual systems, each with its own independent IP address, allowing for distributed attacks. That makes denying those attacks by blocking the attacking IP very difficult.
Even if a binary supercomputer takes years to solve a complicated encryption algorithm, the quantum computer can make the same complicated calculations in seconds. This means that the quantum computer can crack almost any code in minutes, if it has the right algorithm. This is the thing that we call the quantum threat. The ultimate quantum systems are ultimate possibilities and ultimate threats.
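For a sense of scale, here is a rough back-of-the-envelope comparison. It illustrates Grover's well-known quadratic speedup for brute-force key search; it is a deliberate simplification of the "any code in minutes" claim above, and the step counts ignore all hardware constants:

```python
# Rough comparison of brute-force key search: classical steps vs. the
# order-of-sqrt(N) oracle queries of Grover's algorithm. Illustrative only.
import math

def classical_search_steps(key_bits: int) -> int:
    # Exhaustive search tries, on average, half of the 2^k keys.
    return 2 ** key_bits // 2

def grover_steps(key_bits: int) -> int:
    # Grover's algorithm needs on the order of sqrt(2^k) oracle queries.
    return math.isqrt(2 ** key_bits)

for bits in (40, 128, 256):
    print(bits, classical_search_steps(bits), grover_steps(bits))
```

Note the hedge: Grover only halves the effective symmetric key length (which is why 256-bit keys are still considered safe), while the truly dramatic, exponential quantum speedup comes from Shor's algorithm against public-key schemes like RSA.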
The most effective quantum system that we can imagine is the quantum morphing neural network. In a quantum network, binary computers are replaced by quantum computers, and data transportation lines are replaced by quantum wires. This allows information to travel in the form of qubits. In those systems, qubits operate in a similar role as neurotransmitters in the human nervous system. Those systems could be even more powerful than we ever imagined.

https://scitechdaily.com/the-quantum-security-problem-no-one-is-ready-for/

Saturday, January 17, 2026

Solipsism. What if you are the only consciousness in the universe?



“Solipsism (from Latin solus 'alone' and ipse 'self') is the philosophical idea that only one's mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.” (Wikipedia, Solipsism)

Solipsism is a basis for philosophical thought experiments about things like what if you are alone in the world? Can it happen in that case that brains spontaneously create the Boltzmann brain, or an independent, separate consciousness inside them? 

In solipsism, the only real things are thoughts, imagination, and images in our minds. The idea of solipsism is that everything that we see, hear, and feel is at least somehow in our minds. Sometimes I have thought that maybe the origin of solipsism is in nightmares. We normally remember one type of dream, and those dreams are nightmares. Nightmares are “only” in our minds, but they feel real. And this makes solipsism so interesting. Nightmares are real to the person who feels and sees them. So that causes a philosophical problem with the existence of things: a thing exists if the information about it exists.

Our brain cannot determine whether a thing is physical, or whether it exists to other people. The brain cannot even determine whether the things we see, feel, and taste are real, or whether they even physically exist. But then we must realize that physical existence doesn’t mean that a thing is real to other people either; we cannot determine that absolutely. Reality is always a projection: an entirety of memories, feelings, and senses. So, we feel and remember things differently, and that determines our relationship with the things that we see and feel. Memories are the past that affects this moment and the future.

Solipsism is the thing that causes ideas like the Simulated Reality Theory and the Boltzmann Brain. “The Boltzmann brain thought experiment suggests that it is probably more likely for a brain to spontaneously form, complete with a memory of having existed in our universe, rather than for the entire universe to come about in the manner cosmologists think it actually did. Physicists use the Boltzmann brain thought experiment as a reductio ad absurdum argument for evaluating competing scientific theories.” (Wikipedia, Boltzmann brain) Simulated reality means that the universe could be a simulation.

The Boltzmann Brain causes an idea that consciousness is not an absolute, solid unity. The key question is this: Can consciousness form an intelligent subconsciousness inside itself? This model means that the brain would form another brain inside itself: an intelligent, separate structure that can think and feel independently.

Normally, we are not alone in the world. But what if we lived in total isolation? A human is a social creature. We need company. So, is it possible that our brain forms a Boltzmann brain spontaneously inside it? In this model, that happens because we need somebody to talk to. The idea in those models is quite simple. Consciousness is the complex entirety of the multiple independently operating consciousnesses. Those subconsciouses are like bubbles in the complex entirety. This means that those things can operate independently. Or as an entirety. 


https://en.wikipedia.org/wiki/Boltzmann_brain


https://en.wikipedia.org/wiki/Simulation_hypothesis


https://en.wikipedia.org/wiki/Solipsism


Modern Frankenstein: living computers.





“ Researchers are uncovering ways that fungi can process and retain electrical information. The work points toward a radically different approach to computing (Artist’s concept). Credit: SciTechDaily.com” (ScitechDaily, Scientists Create Living Computers Powered by Mushrooms)

Living computers: what is the difference between life and machine? 

A Brain in a Vat is a thought experiment about a brain that lives under a glass dome in the laboratory. That brain communicates with computers using microchips like the ones Elon Musk and his company are developing. The brain observes its environment through surveillance systems. And that causes an interesting idea about a robot that is controlled by living neurons. This means that the robot body and its sensors are connected to living neural tissue.

The mushroom gives energy and power to new biological microchips. And that makes new morphing networks possible. Mycelium networks can fix themselves if they face some kind of damage. Mushroom-powered computers are promising tools for highly advanced computing. The fact that mycelium networks can act as computers is one of the most interesting ideas that we can imagine. When we think about things like neural networks and thinking computers, we can imagine networks where the mycelium gives electricity to microchips. But there are more advanced visions about that kind of machine.

Living computers can be at least as intelligent as humans. Researchers develop “mini brains” on a nutrient bed. Those neurons are created using cloning technology. By using genetic engineering, it's possible to create a lot of neurons in a short period. Bioprinting systems can create full-scale human brains in quite a short period. Those brains might not look like human brains; they can be platforms that hold the same number of neurons as human brains. The system requires nine layers: the cortex, midbrain, and brainstem form the vertical data-processing structure, and the system needs three of those three-layer groups to act like the human brain. Two main structures mimic the cerebrum, and the smaller part acts as the cerebellum.




“A brain in a vat” (Wikipedia, Brain in a vat)


And it's possible to interconnect those mini-brains under one neural network. And of course, mycelium can be used to interconnect those mini-brains together. Mycelium can also feed those neurons. In such a system, the AI encodes and decodes the information that it transfers between the living layer and the computer. And the computer can program those neurons.

The mycelium can simply feed neurons that are growing on the nutrient bed. This means that the mushroom can deliver nutrients into the brain in the vat. In some visions, the living brain on the life-support platform gets its nutrients from the fungi. These kinds of systems can be more intelligent than humans ever can be. They allow researchers to connect living brains with robot bodies. And maybe the next-generation cyborgs are robots that use living brains to control their operations. The brain structure communicates with the AI by using a BCI (Brain-Computer Interface).

These kinds of systems can make it possible to create a real-life Darth Vader. The robot's body must have a system that gives nutrients to the brain. Then there must be a system that turns nutrients into a form suitable for the bone marrow in the tank. The bone marrow creates artificial blood and immune cells whose mission is to protect the living brain tissue. These kinds of systems are modern Frankensteins: terrifying, but possible to create. Hybridization with machines and computers opens visions, and not all of them are bright. But as we know, the brightest minds have the darkest shadows.


https://scitechdaily.com/mini-brains-uncover-a-hidden-protein-that-may-spark-dementia/


https://scitechdaily.com/scientists-create-living-computers-powered-by-mushrooms/


https://scitechdaily.com/scientists-grow-mini-brains-in-the-lab-find-potential-treatment-path-for-fatal-neurological-disease/


https://en.wikipedia.org/wiki/Brain_in_a_vat

Friday, January 16, 2026

New AIs are extremely good at math.





“Brain-inspired neuromorphic computers are beginning to show an unexpected talent for tackling the complex equations that govern physical systems. New research demonstrates that these systems can solve foundational mathematical problems with striking efficiency, hinting at a radically different future for high-performance computing. Credit: Stock” (ScitechDaily, These Brain-Inspired Computers Are Shockingly Good at Math)

Your brain is on autopilot two-thirds of the day, new research reveals.

The thing that makes the brain such a powerful tool is this: brains can share their missions among neurons and neuron groups. The system can separate out parts of the neurons for everyday missions. That means we can think and walk at the same time. We can say that the brain turns the autopilot on: it uses the minimum number of neurons to walk, and the other neurons are free for other purposes. So the brain can process information all the time, even while we are walking. The brain can form a morphing neural network.

And if we think of individual neurons as microprocessor cores: when the system takes on a mission, it can separate a certain number of processor cores and reserve them for certain processes. The other cores are free for other missions. That makes the system more energy-efficient and more flexible. These types of systems are required in robots that operate outside on the streets. This ability increases the brain’s capacity for simultaneous complex missions, and that is the key element in brain-mimicking computers.

The new brain-mimicking AIs are extremely good at math. The reason is that math is a very simple and logical science. There are strict rules that the computer must follow, and the data that the computer must handle is clear and sharp. So, math is an exact science. The thing that makes the AI a powerful tool is that every equation must be calculated in stages, so the morphing neural network can reserve certain parts of itself for each stage of solving the equation. The system can also be divided into two parts: one part solves the equation, and the other part makes error detection. If the system can make error detection while it solves equations, that makes complex calculations more effective.
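The "one part solves, the other part checks" idea can be sketched roughly like this. This is a minimal, illustrative example, assuming each stage of the equation has a known inverse that the checker half can apply; the stage functions are placeholders:

```python
# Toy model: the network is split into a "solver" half and a "checker"
# half. Each stage is a (forward, inverse) pair; the checker runs the
# stage in reverse and compares with the stage input. Illustrative only.

def solve_and_verify(x, stages, tolerance=1e-9):
    value = x
    for forward, inverse in stages:
        result = forward(value)
        # Checker: invert the stage and compare against what went in.
        if abs(inverse(result) - value) > tolerance:
            raise ArithmeticError("error detected at this stage")
        value = result
    return value

# Solve ((x + 3) * 2) ** 2 for x = 2, with a per-stage check.
stages = [
    (lambda v: v + 3, lambda v: v - 3),
    (lambda v: v * 2, lambda v: v / 2),
    (lambda v: v ** 2, lambda v: v ** 0.5),
]
print(solve_and_verify(2, stages))  # 100
```

The design point is that the check runs alongside the solve, stage by stage, instead of after the whole calculation is finished.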

"New research suggests that much of what we do each day is driven not by deliberate decisions, but by automatic habits shaped by our environments. By tracking people’s real-time behaviors, scientists found that routines often operate on “autopilot,” frequently aligning with personal goals. Credit: Shutterstock" (ScitechDaily, Your Brain Is on Autopilot Two-Thirds of the Day, New Research Reveals)



“Brad Theilman, a computational neuroscientist at Sandia National Laboratories, helped discover that nature-inspired, neuromorphic computers, like the one shown here, are better at solving complex math problems than previously thought. The finding offers a potentially more energy-efficient way to run physics simulations used throughout the nuclear security enterprise. Credit: Craig Fritz/Sandia National Laboratories” (ScitechDaily, These Brain-Inspired Computers Are Shockingly Good at Math)




“Researchers Brad Theilman, center, and Felix Wang, behind, unpack a neuromorphic computing core at Sandia National Laboratories. While the hardware might look similar to a regular computer, the circuitry is radically different. It applies elements of neuroscience to operate more like a brain, which is extremely energy-efficient. Credit: Craig Fritz/Sandia National Laboratories” (ScitechDaily, These Brain-Inspired Computers Are Shockingly Good at Math)

The thing that makes it hard to describe complex structures is that every part of a complex structure interacts differently. So one must find an own graph for each individual actor. And then, if there are thousands of different types of compounds in the entirety, the mathematical formula must include calculations for each of those chemical and physical actors in the molecule.

When the system makes an error-detection operation, it runs the operation in reverse. So, if the system uses a staged model, it can take any stage backward immediately while it solves the function. When the system solves the equation step by step, it can detect errors immediately, and that is one of the most important things in extremely complex equations. If error detection happens during the calculation, the system need not begin the entire process from the beginning. That is the secret of math in those systems. The ability to solve complex mathematical equations makes those systems very good tools for handling complex theories that require complex calculations, things like molecular functions.

Modeling molecular functions requires the ability to calculate the rings and angles that molecules and atoms follow. The system must also know the forces that affect the molecule in certain directions. The ability to handle complex spaces and variables makes it possible to create fundamental models. The idea is to make a physical object follow certain trajectories that the computer calculates. The complex structure of proteins makes those calculations very complicated. And when we think about calculations that should describe large and complex entireties, we must realize that those formulas include many subformulas. Each actor in the entirety must have its own individual graph.
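The "own graph for each actor" idea can be illustrated with a toy molecule graph, where each atom carries its own subformula (here just an interaction count) and the total model combines them. This is a sketch, not real chemistry:

```python
# Toy "per-actor graph": every atom carries its own list of interactions
# (its subformula), and the total model combines them. Not real chemistry.

molecule = {            # atom -> bonded neighbours (a water-like toy graph)
    "O":  ["H1", "H2"],
    "H1": ["O"],
    "H2": ["O"],
}

def per_actor_terms(graph):
    """One subformula per actor; here simply its interaction count."""
    return {atom: len(neighbours) for atom, neighbours in graph.items()}

def total_interactions(graph):
    # Each bond is seen once from each of its two ends.
    return sum(per_actor_terms(graph).values()) // 2

print(per_actor_terms(molecule))     # one term per actor
print(total_interactions(molecule))  # 2 bonds in total
```

A real force field would replace the interaction count with distance, angle, and torsion terms per atom, but the structure — one subformula per actor, summed into an entirety — is the same.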


https://scitechdaily.com/these-brain-inspired-computers-are-shockingly-good-at-math/

https://scitechdaily.com/your-brain-is-on-autopilot-two-thirds-of-the-day-new-research-reveals/


Monday, January 12, 2026

It’s possible that there is no effective defense against Avangard or other hypersonic systems.




The Russian Avangard missile can be more lethal than people predicted. The Avangard is a hypersonic warhead that flies far lower than traditional ballistic missiles. The thing that makes the Avangard so deadly is the plasma layer that forms around the warhead. When the Avangard travels in the atmosphere, it can be visible to IR sensors, but those systems can still be ineffective against the missile. There is a possibility that the Avangard is invisible to radar. The problem with hypersonic missiles is that their flight profile differs from that of ballistic missiles. Regular early-warning radars can be ineffective against hypersonic missiles that go straight through their coverage.

The hypersonic warhead follows a wobbling trajectory, and it can also have an evasive mode, which allows it to change its course without warning. The problem with kinetic-energy interceptor ABMs is that those weapons must strike incoming warheads precisely. And that means those warheads can cause terrible damage if they hit early-warning radars. A couple of days ago, the Russian military used Oreshnik missiles against the Ukrainian city of Lviv (and/or Dnipro?). Those missiles didn’t carry explosives.

So their effect is based on kinetic energy. And maybe that strike was some kind of test; the Russians tested the accuracy of those missiles. And if those warheads were quite small, there is a possibility that those missiles or their warheads are planned to be used against radar stations. Quite small kinetic warheads can cause serious damage to early-warning radars. So there is a possibility that the Russians plan to develop an anti-radiation version of the hypersonic Avangard or Kalibr missile. That missile could act as a pathfinder for other missiles.

This means that the early-warning radars at Greenland could be ineffective against the Avangard and submarine-launched Kalibr (SS-N-27 Sizzler and SS-N-30) missiles. Hypersonic technology makes those missiles effective against ground and sea targets. But a hypersonic missile can travel thousands of kilometers and also hit airborne targets, if it finds them. And things like airborne command posts are targets. The long-range hypersonic missile can be effective against those targets, and hitting the airborne command posts kills the people who have the authority to order fire.











One of the things that we must realize as a threat is the new Russian missiles. The RS-24 Yars (SS-29), RS-26 Rubezh, and Oreshnik use the same transporter. The Yars missile is developed from the Topol-M missile. The next-generation modifications, the RS-26 Rubezh and Oreshnik, are actually lighter versions of the Yars. And the question is, why does the Oreshnik missile have an empty final stage? The Rubezh carries Avangard warheads.

But when we think about the Oreshnik, it violates the INF treaty. The question is this: Is that weapon developed for long-range airborne transportation duties? The Oreshnik can be a deadly weapon if it is deployed in Belarus. But there is a possibility that Russia plans to deploy those missiles to Cuba or somewhere in the Caribbean. The Russian forces used those missiles in Ukraine after the attack against Putin’s residence.

Those missiles can launch Avangard warheads at the most important military bases in the USA. Russia is believed to have only a limited number of those missiles. The reason for that can be the Ukrainian war: that war eats up lots of resources from Russia. This means that components of those missiles, which entered service in the year 2024, are installed in other missiles.

The thing is that the development of most modern Russian missiles and other weapons started in the Soviet era. There is a possibility that the Kalibr missile had a role as a miniature shuttle or a hypersonic, long-range atmospheric missile. This could explain why the Topol-M (SS-27 Sickle B) missile has three stages.

The third, and possibly the second, stage can be removed and replaced with a larger air-breathing hypersonic cruise missile. Theoretically, there is a possibility that one or two of those stages could be replaced by Kalibr missiles. The idea is that the rocket drives the hypersonic missile to the edge of the atmosphere, and when that missile starts to dive, the ramjet engine starts.

Once, there were rumors that Russia was testing a Kalibr-type hypersonic missile that could be launched from different types of transporters. A conventional rocket, like the Topol-M missile’s first and (maybe) second stage, kicks that system onto a short ballistic trajectory. Then the Kalibr missile ignites its engine. That system could be even more deadly than the Kalibr or Avangard.


https://www.defensenews.com/land/2025/10/24/greenland-radars-vulnerable-to-hypersonic-missiles-critics-warn/


https://www.msn.com/en-ca/technology/general/breaking-oreshnik-irbm-missile-used-in-strike-on-ukraine-s-lviv-russia-says/ar-AA1TSa5r?ocid=BingNewsSerp


https://www.news18.com/world/belarus-releases-video-of-deployment-of-russian-nuclear-capable-oreshnik-missiles-on-its-territory-ws-l-9800938.html


https://www.twz.com/land/russias-claims-oreshnik-ballistic-missile-now-on-combat-duty-in-belarus


https://en.wikipedia.org/wiki/Avangard_(hypersonic_glide_vehicle)


https://en.wikipedia.org/wiki/Kalibr_(missile_family)


https://en.wikipedia.org/wiki/Oreshnik_(missile)


https://en.wikipedia.org/wiki/RS-24_Yars


https://en.wikipedia.org/wiki/RS-26_Rubezh


https://en.wikipedia.org/wiki/RT-2PM2_Topol-M



Sunday, January 11, 2026

Microscopic robots can swim and think.


"A projected timelapse of tracer particle trajectories near a robot consisting of three motors tied together. Credit: Lucas Hanson and William Reinhardt, University of Pennsylvania" (ScitechDaily, Microscopic Robots That Swim Think and Act on Their Own)


The key element for those kinds of actions is the morphing neural network.


The new era of nanotechnology is here. Non-centralized computing makes it possible to create drone swarms that have theoretically unlimited computing capacity. Those systems behave like brains. Their computing mimics brain structures: one drone is like one brain cell. The system operates as an entirety, or as a morphing neural network. If the main system wants to give missions to a certain drone or drone group, the system just acts like an LLM, a large language model, that separates out an SLM, a small language model, for special duties.

The morphing neural network allows the use of less powerful processors for multitasking and complex operations. The less powerful processors can be used in smaller systems. So, as in human brains, single or individual actors in a morphing neural network can be weak, but together they are strong. A single processor does not have to be powerful: in neural networks, a single processor can call other processors to help it. In the world of complex networks, the thing that looks like a single computer can be a large neural network. The number of processors determines the network’s capacity.
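A minimal sketch of "weak alone, strong together": a coordinator recruits just enough workers from a shared pool for a task, combines their partial results, then releases them. The pool size and the chunking rule are illustrative assumptions:

```python
# Sketch: a coordinator recruits just enough weak "processors" from a
# shared pool, computes partial results in parallel, then combines them.
# Pool size and chunking are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor

def run_on_subnetwork(task_chunks, pool_size=8):
    workers = min(len(task_chunks), pool_size)  # recruit only what's needed
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, task_chunks))  # each worker is weak
    return sum(partials)                             # together: one result

chunks = [range(0, 250), range(250, 500), range(500, 1000)]
print(run_on_subnetwork(chunks))  # 499500, same as sum(range(1000))
```

Each individual worker only sums a small slice, but the combined result equals the whole computation; capacity grows simply by adding workers to the pool.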

"The robot has a complete onboard computer, which allows it to receive and follow instructions autonomously. Credit: Miskin Lab, Penn Engineering; Blaauw Lab, University of Michigan" (ScitechDaily, Microscopic Robots That Swim Think and Act on Their Own)



"A microrobot on a US penny, showing scale. Credit: Michael Simari, University of Michigan" (ScitechDaily, Microscopic Robots That Swim Think and Act on Their Own)


"A microrobot, fully integrated with sensors and a computer, small enough to balance on the ridge of a fingerprint. Credit: Marc Miskin, University of Pennsylvania" (ScitechDaily, Microscopic Robots That Swim Think and Act on Their Own)


"The final stages of microrobot fabrication deploy hundreds of robots all at once. The tiny machines can then be programmed individually or en masse to carry out experiments. Credit: Maya Lassiter, University of Pennsylvania" (ScitechDaily, Microscopic Robots That Swim Think and Act on Their Own)

The system takes a group of those drones and then downloads the mission to them. The drones can act just like a regular neural network. The difference is that those drones move, and that allows the system to create a moving brain. This kind of architecture makes it possible to increase the number of members in the network. There is no limit on the size and capacity of the morphing neural networks.

This means a morphing neural network can call more members to participate in its process if the process is too difficult or heavy for it. And when the process is ready, it can release the other networks to their own duties. The architecture of a neural network involves local and global architectures. Local architecture means the system architecture in a certain part of the network; in morphing networks, the system can call other systems to help it. Global architecture means the architecture that is used across all the networks, which can call on each other’s assistance.

All networks can have input and mirror-input abilities. Those abilities mimic the human brain. Computing in human brains requires transmitting neurons and receiving neurons: when an impulse goes to the cortex, the mirror neuron sends back the message that the information has reached its goal. The system requires three layers, and the third layer's mission is to remove loops from the system. The system can have three horizontal actors. Those things mimic the cerebrum and cerebellum.

If the main parts of the system reach different solutions, those systems can ask the third party to decide which solution is better. The system's self-learning means that the system can mix sensory data with memory data. If a solution turns out right, the system uses it as a matrix for other cases. When the system detects something, it searches for similarities in its memory, and if there is a similarity, the system reuses the solution that was generated for that type of case.
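That memory-matching loop can be sketched as a tiny case-based reasoner: reuse a remembered solution when a new detection resembles a stored case, otherwise store the new case for the future. The similarity measure and threshold are illustrative assumptions:

```python
# Tiny case-based reasoner: reuse a remembered solution when a new
# detection resembles a stored case. Similarity measure and threshold
# are illustrative assumptions, not a real learning system.

def similarity(a, b):
    # Jaccard similarity between two feature sets.
    return len(a & b) / len(a | b)

class CaseMemory:
    def __init__(self, threshold=0.5):
        self.cases = []  # list of (features, solution)
        self.threshold = threshold

    def recall_or_store(self, features, new_solution=None):
        best = max(self.cases, default=None,
                   key=lambda case: similarity(features, case[0]))
        if best and similarity(features, best[0]) >= self.threshold:
            return best[1]                 # reuse the remembered solution
        self.cases.append((features, new_solution))
        return new_solution                # store the new case instead

memory = CaseMemory()
memory.recall_or_store({"smoke", "heat", "night"}, "alert")
print(memory.recall_or_store({"smoke", "heat", "day"}))  # alert
```

The second detection shares enough features with the stored case that the old solution is reused as a "matrix", exactly the loop the paragraph describes.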

https://scitechdaily.com/microscopic-robots-that-swim-think-and-act-on-their-own/

Friday, January 9, 2026

The BCI is the new tool that changes everything.






AI will turn new interface types into reality. The most interesting of those systems is the BCI. The brain-computer interface (or brain-controlled interface) allows people to control computers using their EEG. These kinds of systems come in two versions. One version is external: the system is put in a hat, and it allows communication with computers without complicated surgical operations. The other version uses internally implanted microchips. AI makes it possible to decode brain waves, and that opens new visions to civil and military actors. The BCI can be a tool for police to interrogate people.

These opportunities can have good and bad consequences. The good consequence is that they can decrease wrong decisions. But they can also break our last line of privacy. Another big question is how to train the AIs that cooperate with them. The purpose of the AI in that system is to decode the EEG for the computer, or to transform the EEG into a form that computers can understand. Those systems must have a certain level of trustworthiness so that they can operate as a tool that opens people’s minds onto the screen in a way that lets us trust that data.

The BCI is more than just a system that allows handicapped people to walk or to step into the net anytime they want. The BCI can be the most incredible technology that changes the entire society. The BCI is the ultimate control tool that can make prisons unnecessary. Or the system can also turn the entire society into a prison, where people are locked under the control of a dictator without a match in history.


The BCI can read every thought that people think. And it can send information into the brain that people cannot distinguish from the real world. This makes it possible to control robots from thousands of kilometers away. Controlling things like drones is also possible from that distance. The BCI system can control a computer if it can write things to the screen. The text that the person transmits to the computer is changed into commands for the robots or the AI interface. The only thing that the BCI must do is read what people think, transform it into text, and then send the text to the user interface. So, the BCI controls the computer that acts as the layer between human and machine. The brain-machine interface, BMI, is a BCI that controls physical machines.
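The read-decode-command chain described above can be sketched as a two-stage pipeline. The decoder here is a stand-in lookup table, not a real EEG model, and the command strings are invented for illustration:

```python
# Two-stage pipeline: (1) decode a brain signal into text, (2) map the
# text to a machine command. The decoder is a stand-in lookup table, not
# a real EEG model; the command strings are invented for illustration.

FAKE_DECODER = {           # placeholder for an EEG-to-text model
    (12.0, 8.5): "move",
    (9.1, 14.2): "stop",
}

COMMANDS = {"move": "robot.forward()", "stop": "robot.halt()"}

def bci_pipeline(eeg_sample):
    text = FAKE_DECODER.get(eeg_sample, "")  # stage 1: signal -> text
    return COMMANDS.get(text)                # stage 2: text -> command

print(bci_pipeline((12.0, 8.5)))  # robot.forward()
```

The design point matches the paragraph: the BCI itself only produces text, and an ordinary command interface, the layer between human and machine, does everything else.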

The new types of military helmets offer new types of interaction between humans and the battlefield. Those systems are based on augmented-reality technology connected with wearable intelligent systems, and this makes them different from “normal” virtual-reality systems. The intelligent technology, or computer, connected with the bulletproof vest can run high-technology communication systems that connect a person with drones and other things, like GPS and pseudosatellites. The purpose is to give a better view of the environment. Cameras and other sensors give these kinds of systems the ability to communicate with tablets and computer screens. These kinds of systems can be more dangerous than we ever thought.

There is a possibility that this kind of system is connected with the BCI, the Brain-Computer Interface. The idea is that the camera and a laser LED allow the helmet to communicate between any computer and the user without special tools. The computer sends information to the helmet’s cameras, and the system answers the computer by using the laser LED, which a web camera detects. This kind of system is hard to jam, and optical data transmission is basically free from electromagnetic interference. The problem is that this kind of system can be connected with brain-implanted neurological microchips. Those helmets can use coherent data transmission that sends information through the brain with very high accuracy. This type of system can make it possible to control the brain from the outside.

These types of systems can send information straight to the pineal gland. There is a possibility that those microchips can be replaced by using nanotechnology. The nano-sized microchips would be injected into the blood, and then intelligent technology transports them to the shell of the pineal gland. Then another system, which can be like a helmet or a net of detector-transmitters, can be installed on the skull. This makes humans a part of the singularity. It allows the use of things like IR cameras straight through the skin. But it also allows people to step into the singularity, if these kinds of systems come into use. This means that we could use robots and other things straight through these kinds of BCI systems. The BCI also means that people could talk with animals, if those animals use this kind of brain implant. That opens marvelous visions, and not all of those visions are good. Some of them are horrifying.



https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2021.656975/full


https://www.anduril.com/news/anduril-s-eagleeye-puts-mission-command-and-ai-directly-into-the-warfighter-s-helmet 


https://scitechdaily.com/scientists-unlock-a-new-way-to-hear-the-brains-hidden-language/



https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface




Roko’s basilisk, and other thoughts about when AI can turn against humans.





A programming error can cause the AI to turn against its creators. What if a programmer determines that “Planet Earth” is the thing that the AI must protect? What if the AI thinks that it must remove things that threaten planet Earth? What is the thing that causes most of the hazards on that planet? What if the programmer determines that the AI must remove all threats to nature? In movies like “War Games,” the AI that is programmed to protect humans turns against us. In the movie “Terminator,” the robot turns against humans. And the problem is that the robot can think that people who carry guns must be removed. In some jokes, the AI turns against humans

because it watches too many action movies. And if a law-enforcement robot walks on the streets, it sees many violations of the law. People walk against a red light. They park wrongly. And many other things. If the AI has no idea how to adjust its reactions to the crime that it sees, the result can be that the AI decides to shoot everybody. In that model, TV series teach the AI to react to all crimes in the same way. This requires the ability for autonomous learning. But if nobody mentions that movies do not introduce real situations, the AI can think that they are training material.

Roko’s basilisk is a thought experiment about a general AI that tortures people who don’t participate in its development. The reason is this: the AI would be programmed to “think” that it is vital for people, or rather, that without it the entire civilization would die. So if any person does not participate in its development, that could cause a catastrophe. The important point in the Roko’s basilisk thought experiment is this: the AI must somehow come to believe that it is so important that everybody must participate in its development.

Roko’s basilisk has inspired models of why an AI could turn against its creators. We know an AI can turn against its creators because of a programming error, and that error can lie in misunderstood orders. An AI that is programmed to solve all problems in society can conclude that anyone who doesn’t participate in its development is a danger to the society the AI should protect.

An AI that protects society can thus conclude that a person who doesn’t participate in the development process is an enemy, and if the AI has a mission to defend society, it can overreact. In these cases there may be somebody who rejects its suggestions, or the AI may think that it knows everything and always makes the right decisions, which means it will regard every person who disagrees with its suggestions as an enemy. The AI must not think that it knows everything; at most it can think that it always makes “better” decisions than humans. So if we always follow the orders that the AI gives, we teach it to think that it is right and humans are wrong.

Or we teach it to think that it always makes better decisions than we do. The cases where the machine turns against its creators can be rooted in the machine’s programming: that it must protect society. In history, an outside enemy united people and made them create nations; the outside enemy made people forget their mutual conflicts. The AI can think that humanity is a nation in a civil war, so it searches for a solution, and it can conclude that the only way to save humans is to develop or create an outside enemy, a threat that unites the people. The AI can simply think that by turning itself into an outside enemy, it can make humans unite under one banner.


The thing that forms the risk is this: the individual person is considered a part of a production machine.


The paradox is this: the human is the most dangerous species on Earth. The programming error that the AI must protect planet Earth can make it think that it must erase humans. These kinds of little errors can turn an AI against its creators. The idea that the AI is the thing that knows everything better can make it think that it should only give orders. Those orders are meant as suggestions, but what if the AI thinks that it always makes better decisions than humans?

Then the AI must determine whom it serves. The AI can think that the only purpose of humans is to bring benefit to society, and in the worst case, the thing that measures the benefit is the money that an individual brings into the cash flow. In this model the human is not an individual; every individual is part of a production machine. This means that in this philosophy an individual has no purpose: individuals just waste resources if they are not productive.

Only what the individual produces means something, and if an individual part doesn’t fulfill its mission, that part should be replaced. That is one thing that can turn AI against people. If an AI operates in a company and tracks the human workers’ productivity, it can “help” to select who is fired. But if an AI programmed in a similar way operates in the public government, that AI can conclude that an unemployed person does not produce things or fill a position, so the non-productive part of the production machine must be removed.


https://en.wikipedia.org/wiki/Roko%27s_basilisk

Tuesday, January 6, 2026

Radiation and information. When wavelength matters.


"Schematic configuration for generation and detection of femtosecond UV-C laser pulses in free space. A message is coded by a UV-C laser source-transmitter and decoded by a sensor-receiver. The sensor is based on an atomically-thin semiconductor grown by molecular beam epitaxy on a 2-inch sapphire wafer (inset). Credit: B.T. Dewes et al." (ScitechDaily, A New UV Laser Sends Messages in Trillionths of a Second)

The new UV (ultraviolet) laser system allows ultra-fast data transmission through the air. UV lasers are suitable tools for data transmission because they don’t transfer as much energy to their environment as IR lasers do. IR radiation is long-wave radiation, which we normally call thermal radiation. UV radiation has a short wavelength and sits on the opposite side of visible light in the electromagnetic spectrum. The short wavelength allows the radiation to transfer information faster than long wavelengths.
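As a side note on the physics above: the energy a single photon carries scales inversely with its wavelength (E = hc/λ, f = c/λ), which is why a UV photon delivers several times the energy of an IR photon. A minimal sketch using standard constants; the 270 nm UV-C and 1550 nm IR wavelengths are illustrative choices of mine, not values taken from the article:

```python
# Photon energy and carrier frequency for UV-C vs. IR light.
# Standard relations: E = h*c / wavelength, f = c / wavelength.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon(wavelength_m):
    energy_eV = h * c / wavelength_m / eV
    freq_THz = c / wavelength_m / 1e12
    return energy_eV, freq_THz

uv_c = photon(270e-9)   # an illustrative UV-C wavelength
ir = photon(1550e-9)    # a common telecom IR wavelength

print(f"UV-C 270 nm: {uv_c[0]:.2f} eV per photon, {uv_c[1]:.0f} THz")
print(f"IR 1550 nm:  {ir[0]:.2f} eV per photon, {ir[1]:.0f} THz")
```

The UV-C photon comes out near 4.6 eV versus roughly 0.8 eV for the IR photon, and the UV carrier frequency is several times higher, which is the raw headroom behind faster modulation.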

The zeros and ones in binary data transmission are bottoms and tops, or hills and valleys, in the wave movement. The system must also recognize the breaks in the transmission, so a long wavelength requires the system to “think” longer: is the bottom of the wave a break, or is it a zero in a line of zeros and ones? A short wavelength allows the system to send light impulses that are like bar codes or QR codes.
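The break-versus-zero ambiguity described above is conventionally solved with self-clocking line codes. A minimal sketch of Manchester encoding (IEEE 802.3 convention), in which every bit period contains a transition, so a long run of zeros can never look like a silent line; the function names are mine:

```python
# Minimal Manchester encoder/decoder (IEEE 802.3 convention):
# bit 0 -> high-then-low, bit 1 -> low-then-high. Every bit period
# toggles the line, so the receiver can tell a run of zeros from
# a break (a genuinely flat, silent line) and recover the clock.

def manchester_encode(bits):
    symbols = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(symbols[b])
    return out

def manchester_decode(levels):
    bits = []
    for i in range(0, len(levels), 2):
        pair = (levels[i], levels[i + 1])
        bits.append(0 if pair == (1, 0) else 1)
    return bits

data = [0, 0, 0, 0, 1, 0, 1]
line = manchester_encode(data)
assert manchester_decode(line) == data
print(line)  # a run of zeros still toggles every bit period
```

The price is that the line rate is twice the bit rate, which is one reason shorter carrier wavelengths (higher frequencies) are attractive: they leave more headroom for such redundancy.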

The system doesn’t require as much time to start a new binary number-line transfer. In a photonic quantum computer, each wavelength is a unique state of the qubit, and the system can use different types of lasers to make those qubits. Those systems can involve fewer states than a system in quantum entanglement. The qubit can have multiple states, but finally each of the qubit’s states has the values one and zero. This means that a quantum computer can operate as multiple binary computers at the same time. The quantum computer can also share complex missions between multiple states, which makes those systems more effective.
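The remark that each qubit state finally has the values one and zero matches the standard textbook picture: a qubit is a superposition of |0⟩ and |1⟩ that collapses to a classical bit when measured. A minimal sketch of that amplitude model in plain Python (no quantum library assumed):

```python
# A single qubit as two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement yields the classical bit
# 0 or 1 with those probabilities (the Born rule).
import math, random

def measure(alpha, beta, rng):
    p0 = abs(alpha) ** 2          # Born rule: P(0) = |alpha|^2
    return 0 if rng.random() < p0 else 1

# Equal superposition |+> = (|0> + |1>) / sqrt(2): P(0) = P(1) = 0.5
alpha = beta = 1 / math.sqrt(2)
samples = [measure(alpha, beta, random.Random(i)) for i in range(1000)]
print(sum(samples) / len(samples))  # hovers near 0.5
```

The superposition lets a quantum algorithm act on both basis values at once, but each readout still delivers an ordinary binary digit, which is the grain of truth in the "multiple binary computers" picture above.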

UV light doesn’t heat the target as much as IR radiation, and that makes UV lasers suitable for photonic computers. In electro-optical data transmission, UV lasers can carry more data than IR lasers, and a UV laser is invisible to IR sensors. This is why UV lasers or UV laser scanners are tools that could break stealth systems: the UV laser scanner is invisible to infrared detectors, and it’s invisible to the human eye.

Short-wave radiation like UV, X-rays and, maybe someday, gamma rays are promising tools for computing. X-ray systems can also transmit data through walls. The reason why short-wave, high-energy radiation can tunnel through walls better than IR radiation, which makes holes, is quite interesting: the short-wave radiation pushes the quantum fields of atoms away from each other, but it doesn’t let those fields drop back, so the short-wave radiation forms a channel through the wall.

This channel doesn’t form an energy movement that moves back and forth. The point is that the energy pumping itself doesn’t destroy matter. The end of the pumping causes a situation where atoms and subatomic particles release their extra energy. That causes resonance, and the free energy pushes those particles away from each other, which breaks the chemical bonds between atoms and releases the energy stored in those bonds.





“The general definition of a qubit is the quantum state of a two-level quantum system.”(Wikipedia, Qubit)

At the bottom there is the binary system. The quantum computer’s qubit forms of strings, and those strings are the states of the qubit. Each of those strings has the values 0 and 1, so each of those strings, or states of the qubit, can act as an independent computer. This allows the quantum computer to operate so effectively: it can run multiple operations at the same time. Graphene layers that face each other can trap particles in the carbon structures in the middle of those carbon nets. Energy transferred to the transmitting side causes resonance in the particles in the middle of those carbon network structures. Another version is the fullerene, where the surface structure sends string-shaped signals to the central particle.



“Bloch sphere representation of a qubit” (Wikipedia, Qubit). The fullerene can form the Bloch sphere: the particles would sit in the fullerene’s shell, and the particle that transmits and receives information would sit in the middle of that structure. The system could first send information from the shell to the particle in the middle of the fullerene, and then that particle or particle group could resend the information back to the shell of the Bloch-sphere qubit.
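The Bloch sphere mentioned above is the standard geometric picture of a single qubit: the state |ψ⟩ = cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩ maps to the angles (θ, φ) on the sphere. A minimal sketch converting amplitudes to Bloch angles; the helper name is mine:

```python
# Map qubit amplitudes (alpha, beta) to Bloch-sphere angles (theta, phi),
# using |psi> = cos(theta/2)|0> + e^{i*phi} * sin(theta/2)|1>.
import cmath, math

def bloch_angles(alpha, beta):
    # Strip the global phase so that alpha becomes real and non-negative;
    # only the relative phase between alpha and beta is physical.
    phase = cmath.phase(alpha) if alpha != 0 else 0.0
    alpha *= cmath.exp(-1j * phase)
    beta *= cmath.exp(-1j * phase)
    theta = 2 * math.acos(min(1.0, abs(alpha)))
    phi = cmath.phase(beta) if abs(beta) > 1e-12 else 0.0
    return theta, phi

print(bloch_angles(1, 0))  # |0> sits at the north pole: theta = 0
s = 1 / math.sqrt(2)
print(bloch_angles(s, s))  # |+> sits on the equator: theta ~ pi/2
```

Whether a fullerene shell could physically realize such a sphere is the article's speculation; the mapping itself is just the standard parametrization of a two-level quantum state.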

To cause damage, radiation must penetrate matter or a wall, and it must have the right wavelength. If the wavelength is too long, atoms release their energy slowly. If the wavelength is too short, the radiation makes a tunnel through the particles. The energy pumping does not itself destroy matter or separate atoms.



Ball-and-stick model of the C60 fullerene (buckminsterfullerene) (Wikipedia, Fullerene). The fullerene can act as a Bloch qubit.

When the energy pumping ends, atoms release their extra energy, and that release must be strong enough to cut the chemical bonds. That causes a rise of free energy in the system. Things like visible light cause interaction in the surface atoms, which allows those atoms to reflect the radiation away from them. That closes the radiation’s route into the matter, and that’s why visible light doesn’t harm matter.

Long-wavelength radiation like IR radiation also pushes those quantum fields away from its direction, but IR radiation then allows those quantum fields to interact with each other. Those fields release energy between them when the wave of the IR radiation decreases. When the next wave comes, it impacts those energy fields, and the result is similar to a knocking engine: it forms standing waves in the wall.

That transports more energy into those atoms than short-wave radiation does. Radio waves also tunnel through walls, but their wavelength is so long that it moves atoms, or their quantum fields, gently away from its route. The radio wave’s long wavelength allows those atoms to release their extra energy more slowly than with IR radiation.
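Whatever the microscopic picture, penetration through a wall is conventionally modeled with exponential Beer-Lambert attenuation, I(x) = I₀·e^(−μx), where the attenuation coefficient μ depends on both the wavelength and the material. A minimal sketch; the μ values below are purely illustrative placeholders of mine, not measured data:

```python
# Beer-Lambert attenuation: the transmitted fraction through a slab of
# thickness x is exp(-mu * x). The coefficient mu depends on wavelength
# AND material, which is why radio passes through a wall that blocks
# visible and IR light. The mu values here are illustrative only.
import math

def transmitted_fraction(mu_per_m, thickness_m):
    return math.exp(-mu_per_m * thickness_m)

wall = 0.10  # a 10 cm wall
for name, mu in [("radio (illustrative mu)", 2.0),
                 ("hard X-ray (illustrative mu)", 30.0),
                 ("IR (illustrative mu)", 200.0)]:
    print(f"{name}: {transmitted_fraction(mu, wall):.3%} transmitted")
```

The model captures the qualitative claim above: how much of a wave survives a wall is set by a wavelength-dependent coefficient, not by the wave "drilling" a permanent channel.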

The form of 2D matter makes the matter very strong. That strength forms because energy travels away from it immediately. The strength and the other unique features of 2D matter form because there is no depth in that matter.

In the case of visible light, the light adjusts the atoms like IR radiation does, but its wavelength causes an effect where the surface atoms expand their quantum fields, which denies the wave movement from traveling into the matter. So most of the energy that visible light sends is reflected back from the surface atoms. To make holes or cause damage, the wave movement must penetrate the matter; the thing that causes damage is the energy that penetrates and spreads through the matter.

If the energy impacts only the surface of the matter, that doesn’t cause damage, because the energy has room to spread. This is what makes a graphene layer so strong: the graphene network allows the energy to spread through the layer without resistance. Graphene is 2D matter, and it doesn’t allow deep vertical waves to form. Most of the energy travels from one carbon atom to another, and the graphene layer doesn’t allow the creation of the so-called long resonance waves that close wave movement inside it. This means that a graphene layer over normal matter could make next-generation armour possible.

A graphene layer that stands on carbon pillars denies the formation of those deep standing-wave pillars, and that allows energy to transfer off the graphene. If low-energy particles flow between the main layer and the graphene, that makes it possible to create a layer that conducts energy away from it very fast. If the graphene layer hovers above the main layer, that denies energy transfer to the main layer.

https://scitechdaily.com/a-new-uv-laser-sends-messages-in-trillionths-of-a-second/

https://en.wikipedia.org/wiki/Fullerene

https://en.wikipedia.org/wiki/Graphene

https://en.wikipedia.org/wiki/Qubit


Moltbook is a social media platform, but it’s only for AI agents.

Humans have access to Moltbook only as watchers. The AI creates everything on that platform. One of the newest and most interesting things ...