Monday, February 2, 2026

AI becomes smarter by talking to itself.



“Inner speech and working memory architecture boost AI performance when multitasking and completing complex pattern generation challenges. Credit: Kaori Serakaki/OIST” (ScitechDaily, Letting AI Talk to Itself Made It Much Smarter)

When an AI talks to itself, it produces something similar to what humans produce when we talk to ourselves. That inner monologue makes it possible to sort and process information better than staying silent.

The AI clarifies things to itself as it circulates the data stored in its memory, and that circulation makes it possible to adjust and refine that data. When an AI, or large language model (LLM), processes data, it circulates the data through the system in a main dataflow that travels through the system as a cycle, while sensors and other datasets bring new data into that main dataflow. The system looks like a river: its tributaries bring new, more or less fresh information into the main flow.

The idea is that the physical architecture of the AI system mimics the human brain. Individual processors act like particular brain areas, and microchips handle the same missions that real neurons handle in human brain lobes. The system circulates data through those servers or processor groups, just as the human brain does. When the AI talks to itself, it can send the same data to all data-handling units at the same time.
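The cycle-plus-broadcast idea above can be sketched in a few lines of Python. This is a toy illustration, not the actual architecture from the cited study: the module names and the string-based "inner speech" are invented placeholders, chosen only to show one dataflow cycling through specialized units while the same self-generated summary is broadcast to all of them at once.

```python
# Toy sketch (not the real OIST architecture): a main dataflow cycles
# through specialized modules, and an "inner speech" summary is
# broadcast to every module simultaneously.

def make_module(name):
    def process(data, inner_speech):
        # Each module transforms the data and records what it "heard"
        # the system say to itself on the previous cycle.
        return f"{name}({data}|heard:{inner_speech})"
    return process

# Hypothetical specializations, standing in for brain-area-like units.
modules = [make_module(n) for n in ("sensory", "memory", "planning")]

def cycle(data, steps=2):
    inner_speech = ""
    for _ in range(steps):
        # Broadcast: all units receive the same inner-speech summary.
        outputs = [m(data, inner_speech) for m in modules]
        inner_speech = " + ".join(outputs)  # the system "talks to itself"
        data = inner_speech                 # feedback into the main flow
    return data
```

Running `cycle("input")` shows each module's output nesting the previous cycle's self-talk, which is the "river with tributaries" picture in miniature.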



Image: Human brain lobes. A data system can have servers that divide the work the way human brain lobes do. In such a system, each brain lobe has a matching server.





“A massive study shows that AI can now beat the average human on certain creativity tests. Yet the most creative people remain well ahead, highlighting AI’s role as a creative assistant rather than a replacement. Credit: Shutterstock” (ScitechDaily, AI Is Now More Creative Than the Average Human)




Permutations of A, B, C. The AI uses this method when it combines items or possibilities with each other, and it is how a morphing neural network handles data. (Below)



The AI is more creative than an average person for a simple reason: it is an effective tool for collecting and compiling data from different sources. The AI can use larger-scale datasets and combine data from multiple sources more effectively than a human. A typical thesis student reads about 100 books and uses some of them as sources.

The AI can use tens of thousands, or even billions, of sources, and it can sort and compile that data more effectively than humans. The AI can beat a human in certain creativity tests because those tests include a limited number of possibilities. Two or three hundred options might look like a lot, but the AI can draw on billions of sources. The system uses a permutative method to connect possibilities to each other.
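The permutative method mentioned above can be shown directly with the standard library: enumerate every ordering of a small set of items. The item names are just placeholders.

```python
# Enumerate all orderings (permutations) of a small item set.
from itertools import permutations

items = ["A", "B", "C"]
orderings = list(permutations(items))
print(len(orderings))  # 3! = 6 orderings

# With n items there are n! orderings. A test with a few hundred fixed
# options is trivial for a machine that can enumerate them exhaustively,
# which is part of why AI scores well on bounded creativity tests.
```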


This means that the AI makes fewer mistakes than humans if it uses precise, pre-processed data. The problem is that when robots operate on streets, they have no time to analyze the data they collect. When the system circulates data, it also finds new details in it.

Another thing that makes AI more creative than the average human is this: many humans do no creative work at all, which means there is a lot of unused creative potential. If we let people innovate in their workplaces, many tasks that people currently perform at work could be outsourced to robots.


https://scitechdaily.com/ai-is-now-more-creative-than-the-average-human/


https://scitechdaily.com/letting-ai-talk-to-itself-made-it-much-smarter/


Saturday, January 31, 2026

What is the truth about the devices that were used in the Maduro incident?





EC-130 “Compass Call”


When Delta Force captured Venezuelan dictator Maduro, the question is whether it used some acoustic or electromagnetic system that put people to sleep during the attack. An acoustic system can put a person to sleep. But an attack against missiles and electric networks can be carried out in many ways. The attacker can use things like a quadcopter to pull copper wires over high-voltage lines, spread graphite powder to cause short circuits in high-voltage lines, or use plasma for the same mission. The easiest way is to shoot transformer stations with assault rifles.

Photoacoustic devices, or other acoustic devices, can destroy electronics simply by causing oscillation.

Photoacoustic devices can create things like tiny earthquakes in microchips. Those tiny vibrations can help modulate signals traveling in the chips; such systems can make nanocrystals oscillate, and that could become a new way to modulate electric impulses in very small devices. Photoacoustic devices could be a versatile tool, the kind of idea that makes the impossible possible. They could also cause such a powerful "earthquake" in a microchip that it destroys the chip: if a photoacoustic system drives a microchip into resonance, it can tear it apart.



There is a possibility that in the Maduro strike, Delta Force used technology that forces a person to sleep. Systems of this kind are being tested at MIT. Long-range acoustic devices (LRADs) are easy to mount on helicopters. Those systems, used in riot situations, can also jam or paralyze humans. An LRAD could also be combined with technology that puts humans to sleep, and it could be used to break microchips through acoustic resonance. Acoustic resonance can also destroy ceramic insulators on electric lines.



So, when we return to the Maduro case, where Delta Force somehow jammed those missiles, we must understand that the missiles had been in a warehouse for a long time. There is a possibility that they were taken out by regular sabotage. CIA agents posing as service staff could simply have cut the wires from the missiles. The same personnel could have used portable acoustic systems to break the microchips and their casings, or, given time, poured acid on the microcircuits or damaged the batteries.

The fact is that EMP weapons or acoustic weapons could have been used against Maduro's bodyguards and the house's electronics. Their targets could also have been places like cell-phone base stations, so maybe they were not used against the radar and missiles at all. An EMP weapon can erase the firmware kernel from microchips: the chips look undamaged, but without the kernel the software cannot interact with them. This kind of EMP weapon can disable missiles even months before an attack. If no crew tests the missiles and their electronics, that fatal damage can remain unseen.


"Researchers have created a phonon laser that generates tiny, earthquake-like vibrations on a chip. The advance could simplify smartphone hardware while boosting speed and efficiency. (Artist’s concept.) Credit: SciTechDaily.com" (ScitechDaily, Scientists Just Created Tiny Earthquakes Inside a Microchip)

A high-power microwave- or radio-wave-based EMP weapon could be hidden in a missile warehouse. There are many ways to create an EMP pulse that causes large-scale damage to electronics. One of them is to create high-power voltage impulses in electric systems: the system can use high-power capacitors that push an electric impulse into the electric network.

Or a high-power radar impulse can destroy computers by causing a surge or overvoltage in electric wires. In that case, the system can use an electrolytic liquid: like salt water, the liquid transfers electric impulses to the wires. The system is like an electric-impulse gun that fires a saltwater "beam" traveling over the electrode; in that case, the salt water replaces copper wires.


"The RUS-PAT (rotational ultrasound tomography, RUST, combined with photoacoustic tomography, PAT) technique combines the benefits of ultrasound imaging for seeing tissue structure with those of photoacoustic tomography for revealing function of the vasculature. Credit: Yang Zhang" (ScitechDaily, Scientists Develop a New Way To See Inside the Human Body Using 3D Color Imaging)

Maybe those systems were in such bad condition that they didn't work. Another possibility is that the orders to put the missiles into position were refused. It is also possible that the operators were simply taken out with some opioid gas or with acoustic devices that denied their ability to operate. Or Maduro's successor simply removed the SIM cards from the telephones the operators used to communicate with HQ.

But things like high-precision EMP and acoustic devices can destroy the microchips in those missiles. A high-power EMP can destroy electronics even if the device is shut down and in storage. If the missiles are stored in an EMP-protected shelter, the agents must somehow slip the EMP bomb into that shelter. A portable EMP device that uses high-power impulse capacitors can be set off by a wired remote launcher, get its launch signal from a timer, or receive the signal from a drone using a laser.

There is also the possibility that the drone is connected to a spring switch: there is a cotter pin in the launcher, and when the drone pulls it away, a spring pushes the electrodes together. This is necessary only when the EMP device sits inside an EMP-protected shield. EMP or acoustic devices can also be carried in a helicopter, in drones or aircraft, or be man-portable and slipped into the warehouses long before a strike. The fact is that those operations are highly classified.


https://edition.cnn.com/2026/01/25/politics/trump-says-secret-discombobulator-weapon-was-used-to-capture-maduro


https://www.twz.com/news-features/did-a-mysterious-sonic-weapon-really-aid-delta-force-in-capturing-maduro


https://news.mit.edu/2024/startup-elemind-helps-people-fall-asleep-0925


https://www.researchgate.net/figure/LRAD-weaponized-cone-of-sound_fig1_347576544


https://scitechdaily.com/scientists-develop-a-new-way-to-see-inside-the-human-body-using-3d-color-imaging/


https://scitechdaily.com/scientists-just-created-tiny-earthquakes-inside-a-microchip/

Thursday, January 29, 2026

What makes us intelligent?

 




“Modern neuroscience has revealed a brain made up of specialized networks, yet this fragmented view leaves open a central question: why human thought feels unified. A new neuroimaging study explores how large-scale patterns of communication across the brain give rise to general intelligence.” (ScitechDaily, Scientists May Have Found How the Brain Becomes One Intelligent System)

Finally, researchers have found why brains form an intelligent whole, or at least they think they have an answer to that question. We have multiple different cell types in the brain. Some neurons work with somatosensory input; some work with memories. Every neuron type has its own role. The problem is: how do those different neuron types and brain areas orchestrate themselves to work as a whole? This is what makes humans so different. When we talk about things like consciousness, we sometimes ask why animals escape predators. The answer is simple: when a predator starts to follow one prey animal, the rest of the herd can escape.

Orchestrated brain cells make it possible for humans to do things that are impossible for animals. The key ability is that the human brain can handle abstraction. Abstraction means that when our companions warn us to beware of things like wolves, we understand the warning without needing to see any wolves. This is one of the things that differentiates us from animals like cows. When a wolf attacks one human, we go to help that packmate. Cows do not do that.

They just escape, and the wolf kills one of them; the others will not help that poor member of the herd. Escaping can be a good choice, because most of the herd survives, and the sacrificed member is usually old and weak. That is an intelligent option if the creature has no imagination. But in the case of humans, old people, the old members of the group, transmit hereditary knowledge to the next generation.

We help other humans because that benefits our species. When a predator attacks, we can run, but we can also pick up a stone or some other weapon and counterattack. Our brains are such a powerful tool because they start their operations at multiple points at the same time and can connect memories to real-time observations.

But then we must realize that the neurons in our brains behave as they do to maximize their own benefit. An orchestrated model, where individual neurons serve the larger group, gives them an advantage. Alone, one neuron is not very dramatic; as a large group that controls the body, neurons can do many things that would otherwise be impossible. Orchestration gives those neurons power, and that makes them operate as an intelligent whole. In that case, intelligence means that we can make new things, like houses and computer programs, and do many things that are not happening just here and now.

We could make a morphing neural network that mimics the human brain. We could divide its servers the way neurons are divided in the human brain: a server group where each server plays the same role that a certain brain area plays in the human brain. That would make it possible to mimic the brain.
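A minimal sketch of that division of labor, assuming an entirely made-up mapping: each "server" stands in for one brain lobe, and a router sends each kind of task to the matching server. The lobe roles follow the textbook picture; the server names and task kinds are invented for illustration.

```python
# Hypothetical mapping of brain lobes to specialized servers.
LOBE_SERVERS = {
    "occipital": "vision-server",   # visual processing
    "temporal": "language-server",  # hearing and language
    "parietal": "sensor-server",    # somatosensory integration
    "frontal": "planning-server",   # planning and control
}

# Which lobe-analogue handles which kind of task (also invented).
TASK_TO_LOBE = {
    "image": "occipital",
    "speech": "temporal",
    "touch": "parietal",
    "plan": "frontal",
}

def route(task_kind):
    """Return the server that mimics the brain area handling this task."""
    lobe = TASK_TO_LOBE[task_kind]
    return LOBE_SERVERS[lobe]
```

In a real system the router itself would have to be learned, since deciding which "lobe" a task belongs to is the hard part of the orchestration problem discussed above.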

AI systems can manage things better than human brains if they have ready-to-use models in their memories, which means the AI must have faced a similar situation before. If the AI must build a new action model, it loses to the human brain. The AI is not flexible in the way human brains are: when it must create an action model in a surprising situation, it loses to the human brain.


https://scitechdaily.com/scientists-identify-the-evolutionary-purpose-of-consciousness/


https://scitechdaily.com/scientists-may-have-found-how-the-brain-becomes-one-intelligent-system/


https://scitechdaily.com/the-brains-strange-way-of-computing-could-explain-consciousness/


Thursday, January 22, 2026

AI and spatial intelligence.



We are all familiar with virtual worlds such as Minecraft and Fortnite, and many other gaming platforms that are more or less limited. Virtual worlds are places where researchers and gamers can train AI, and their uses are unlimited. We can create virtual offices where workers meet in virtual spaces. Even though in many cases people use VR glasses in virtual spaces, a regular cell phone could also be suitable for that purpose.

Of course, the system must have some kind of input method, like a keyboard connected to it. Voice commands might not be suitable for cases like virtual meetings. The program could also follow head movements and turn the image on the screen if the user doesn't want to use VR glasses. The virtual spaces are connected with large language models (LLMs).


Robots that can operate as a medium between computers and humans can act as physical avatars. 


That connection can bring a new ability to the AI: spatial intelligence. In virtual spaces, we use avatars to communicate with other users, whose avatars can hide their real identities. The AI can also interact with users through avatars. The ability to adjust those virtual spaces and customize them for individual users opens new possibilities: users can be in the same space while each of them sees a different space. This is one of the things that AI makes possible.

One person can see the meeting point as a meeting room while another sees it as some kind of bar: an individually customized virtual space. Spatial intelligence and avatars can also operate in physical space. In the "Alien" movie series, a man-shaped robot acts as the medium between computers and humans. Man-shaped systems that convey communication between humans and data-center computers are tools that could bring a new way of cooperating between humans and computers. AI, virtual spaces, and avatars can all be used for AI training.

The avatar takes a command and then performs the work the user gives it, and that action can then be driven to a physical robot. If the avatar does not do things correctly, the user can train it, like a teacher. If the avatar is used to train a car-mechanic robot, the trainer must select tools and show the machine parts. Then the robot can use the same method as the robot that learned to talk by watching YouTube: it can watch videos.

Using videos meant for car-mechanic education, the image-recognition system can ask about each part the AI sees, and the user can tell it which parts shown in the film are pistons and other components. The user can point the mouse at an item in the film and give the system the name of the object, if the system doesn't know it. Then the AI takes instructions on how to change a piston and uses the avatar to demonstrate how it does those things. If the avatar does it correctly, the procedure is transferred to the physical robot.


The problem with AI is that it takes work from the positions where people acquire the skills to advance to a senior level.




The problem with the AI is simple. Organizations view AI primarily as a tool for surveillance, observation, and "enhancement." The last one means that the AI gives a chance to fire a couple of workers, and that is the top priority for the AI and its use. The system counts every single word a person writes, which makes it a good tool for observing how effective a worker is. These kinds of systems are good assistants that can uncover people who do their jobs only when the boss is standing behind their backs.

The problem is that these attitudes put pressure on workers, and they show how working life has developed. The need to be effective means that when companies hire workers, the top priority is that the worker must be easy to replace, and the first question is: how do we fire that worker if the work is not done? The AI could give workers time to think about their work and learn how to do it as it should be done, but in practice the use of AI is limited to "enhancement." When people hire workers, they expect the people they hire to have 20 years of experience with the work. That is impossible if new workers, rookies, have no chance to get training. Without training, a person can never reach expert level. This is reality in every type of industry. The problem is that AI can take junior-grade positions from the ICT sector.

And every single senior, master, and so on began their career as a junior. In the same way, when students begin their careers at the university, all of them begin as candidates, so all doctors, professors, and lecturers were once candidates. Likewise, when we meet an ultimate professional programmer in working life, there was once the phrase "Hello world" on that person's screen: the traditional output of the first program a person writes. This means all programmers once wrote their first code. If the AI takes those junior positions, there will be no experienced coders left to fill the senior positions.


Wednesday, January 21, 2026

The robot just learned to talk by watching YouTube videos.


“Hod Lipson and his team have created a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. Credit: Jane Nisselson/ Columbia Engineering” (ScitechDaily, This Robot Learned to Talk by Watching Humans on YouTube)

This is a fast advance toward self-learning, or automatically learning, systems. Those systems can learn by watching YouTube or regular TV, which means they can learn many things automatically. The robot that learned to talk by watching videos simply followed the movements of the mouth, and then it could benefit from a speech-to-text application. These methods can be used to create autonomous learning systems for every kind of mission. A robot can follow how humans move and then mimic those movements with its own body.

This means that a robot can learn things like kung fu by following what people do in videos. The robot doesn't need any special application; it can just watch the screen with its action cameras, which means any film is suitable teaching material. The CCD action camera automatically transforms anything the AI sees into digital form, and that can open new paths for self-learning systems. The AI can learn things like tactics just by following what commanders mark on their screens and maps.

The system can use regular CCTV cameras, and that is one of the biggest threats to national security. The AI can use what it learns as a data matrix for highly advanced simulations, while another AI tries to create a counter-tactic against those tactics. Any platform the AI can see can act as a teaching platform. An autonomous learning protocol can mean that the AI learns things that bring big surprises to the game.








The General Dynamics X-62 VISTA is a platform used for AI training.


When a robot operates in automatic mode, the things it sees, hears, or feels act as triggers. If something the robot senses matches an entry in its database, that trigger activates the action the robot should use when it faces that kind of situation. If somebody says "hi," the system finds a trigger that activates a certain database connection, and the system can answer "hi." But the fact is that robots do not think. A trigger is simply an event that activates a certain database entry, which means the same trigger, "hi," could activate almost any kind of action series: the robot does whatever is programmed into its memory. That could give good topics for some horror movies.
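The trigger-and-match behavior described above can be reduced to a toy lookup table. The triggers and action names below are invented for illustration; the point of the sketch is that the robot does not "understand" a greeting, it only retrieves whichever action series its database binds to the trigger.

```python
# Toy trigger-to-action database. The bindings are arbitrary by design:
# the robot executes whatever series the trigger points at.
ACTION_DATABASE = {
    "hi": ["say 'hi'"],
    "handshake offered": ["extend arm", "grip lightly", "shake"],
}

def react(perceived_trigger):
    # No match in the database means no reaction at all.
    return ACTION_DATABASE.get(perceived_trigger, [])
```

Because the binding is arbitrary, re-pointing "hi" at a completely different action series changes the robot's behavior without the robot "noticing" anything, which is exactly the horror-movie scenario the text describes.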

Automatic learning means the system can learn things that are hidden from its operators. When we talk about robot jet fighters, the systems learn from the movements pilots make, building an operational data matrix, and they learn from every operation the jet fighter or bomber flies. But robot fighters can also learn the way the YouTube-watching robot does: the machine eye can observe how jet fighters behave in films. The fact is that training an AI is not easy, but once trained, the AI is a rather cheap tool. This means the AI can be deployed in kamikaze drones.

Training the AI for driving, customer service, or flying is similar to teaching humans. The system records what the driver or controller does in each situation, and those actions are stored in a data matrix. When the computer sees a match for a certain operation, that match activates the operation. And that is the difference between humans and robots: the robot doesn't think, it just searches for matches in databases. This means the robot doesn't actually know what it is doing; it finds the right series of movements in a database, launched by a trigger such as an action it sees.


https://www.indiatimes.com/technology/news/us-military-is-testing-an-autonomous-fighter-jet-are-robot-soldiers-next/articleshow/126931421.html


https://scitechdaily.com/this-robot-learned-to-talk-by-watching-humans-on-youtube/


https://en.wikipedia.org/wiki/General_Dynamics_X-62_VISTA


https://en.wikipedia.org/wiki/Skyborg


Tuesday, January 20, 2026

Does the AI know anything?


“Artificial intelligence is often described using language borrowed from human thought, but those word choices may quietly reshape how its capabilities are understood. Researchers analyzed large-scale news writing to see when and how mental language appears in discussions of AI. Credit: Shutterstock”. (ScitechDaily, Does AI Really “Know” Anything? Why Our Words Matter More Than We Think)

The AI doesn't think as we do. When an AI recognizes an object, it searches databases and tries to find the action that matches that object. What makes this process hard for programmers is that an object can look different when the robot approaches it from different angles. If we want to make a robot that automatically uses things like hand pumps, we must remember that the pump looks different from different angles, so the robot must have an algorithm that lets it recognize the pump anyway.

So the robot can recognize the pump using fuzzy logic: the system calculates the percentage of pixels that match the pump. The system can also store many photographs taken from different angles; in that case, it compares images and searches for matches with the pump. The problem is that the AI doesn't know what a pump looks like when it is sideways to the observer, and if the pump falls over, it looks different than when it stands.
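A bare-bones version of that pixel-match idea: compare a camera frame to stored templates (both represented here as flat lists of 0/1 pixels) and report the percentage of matching pixels. Real vision systems use far more robust features, so treat this only as an illustration of the logic; the threshold value is invented.

```python
# Fuzzy pixel matching: accept the best partial match against several
# templates instead of demanding a perfect one.

def match_percentage(frame, template):
    """Percentage of pixel positions where frame and template agree."""
    matches = sum(1 for f, t in zip(frame, template) if f == t)
    return 100.0 * matches / len(template)

def looks_like_pump(frame, templates, threshold=80.0):
    # Templates photographed from different angles; recognition succeeds
    # if any angle matches well enough.
    return max(match_percentage(frame, t) for t in templates) >= threshold
```

The multi-template approach is exactly the angle problem in the text: a fallen pump needs its own template, because no single view matches it.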

We know that AI is an ultimate tool for things like math. But the system doesn't create anything new: it just connects and reconnects information it gets from databases. What the AI misses is abstraction. AI can "think" very effectively if it has clear and precise orders, and the information it uses must be handled by following certain rules.

This is the reason why computers are ultimate calculators. The rules of math are clear: the system must follow a certain order of operations to solve a problem, and in math a problem must be solved by following exact rules or the result cannot be accepted. Another matter is the input the user gives the system. Keyboard input is one thing; it is quite another when the system must understand spoken language.




“As labor shortages push agriculture toward automation, harvesting delicate, clustered fruits like tomatoes remains a major challenge for robots. Researchers have now developed a system that allows robots to assess how easy a tomato is to pick before acting, using visual cues and probabilistic decision-making. Credit: SciTechDaily.com” (ScitechDaily, Robots That “Think Before They Pick” Could Transform Tomato Farming)

There are many problems in that case. Nonverbal communication plays a big role in communication between humans. Another thing is that spoken language carries so-called hidden meanings. When somebody collides with our car, we can say "how nice," but that doesn't mean we think it is nice.

The main thing is this: the AI is like a dog. It is a very effective tool for searching and finding things. If we order an AI to create a house, the AI uses models. Those models, or variables, are the bricks or Legos the system can connect together. But the AI will not create any of those Legos itself; it requires variables that are already in its databases.

This is why the AI is like a dog: it finds almost everything, but the lack of abstraction makes it impossible for it to create new things. The AI can "think" before it acts. As in the tomato-picking robot case, the AI compares models that tell it how to act: it recognizes a tomato, then searches its database for the type of movements it should make. The system must know the right compressive force, so the tomatoes do not turn into ketchup, while a grip that is too weak lets the tomatoes drop on the ground.
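The pick decision above can be sketched as a bounded force check combined with a confidence gate. The numeric limits and the confidence threshold are invented for illustration; the cited research uses visual cues and probabilistic decision-making rather than fixed thresholds like these.

```python
# Hypothetical grip-force window for a tomato (values are invented).
MIN_GRIP_N = 1.0  # below this the tomato slips and drops
MAX_GRIP_N = 4.0  # above this the tomato is crushed ("ketchup")

def pick_decision(planned_force_newtons, pick_confidence):
    """Pick only when the planned force is inside the safe window and
    the robot's confidence (0.0 to 1.0) that the pick will succeed is
    high enough. Otherwise the robot should reassess before acting."""
    force_ok = MIN_GRIP_N <= planned_force_newtons <= MAX_GRIP_N
    return force_ok and pick_confidence >= 0.7
```

Gating on confidence before acting is the "think before they pick" behavior: a low-confidence tomato is skipped or re-examined instead of being grabbed and ruined.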


https://scitechdaily.com/does-ai-really-know-anything-why-our-words-matter-more-than-we-think/


https://scitechdaily.com/robots-that-think-before-they-pick-could-transform-tomato-farming/
