Tuesday, July 29, 2025

AI is a surveillance tool so powerful that it makes George Orwell's 1984 vision look like a child's game.



"Illustration of AI-powered web browsers revolutionizing mobile navigation and impacting Global Engine Optimization, generated by artificial intelligence." (RudeBaguette, “They’ll Know Everything You Click”: AI-Powered Mobile Browsers Are Here and GEO Experts Warn “This Changes the Surveillance Game”

Every day, we see warnings about how some browsers collect data. But for some reason, we don't care about those warnings. We blame companies like OpenAI, Meta, and Google for collecting data about us, but then we do nothing about it. We blame data centers for using too much energy, but then we push people to use those tools because they are effective, and the companies behind them keep using them.

People like Elon Musk warn about AI, and then launch their own AI project the next day. People warn that AI takes jobs, but many of those who give the warnings are themselves placed on fixed-term employment contracts, and they start to care about jobs only when their own workplace is under threat.


(RudeBaguette, “They’ll Know Everything You Click”: AI-Powered Mobile Browsers Are Here and GEO Experts Warn “This Changes the Surveillance Game”)

AI-based browser technology can collect data about what people do on their mobile devices. AI-based data collection is a tool that raises corporations above the state. Multinational corporations can operate outside the law, and they will never go to jail for this kind of thing. They can outsource unethical R&D work to states where no law restricts data collection.

AI is a powerful tool because it can collect and connect data from multiple sources. Smartwatches that collect blood pressure data can reveal stress reactions that we want to hide, before we are even aware of the reaction ourselves. Technology like the brain-computer interface is, for now, voluntary: the microchips it needs are implanted in people's brains only in special cases, such as when a person is fully paralyzed.

Those are the rules in Western countries. But most of the people in the world do not live in Western states. They live in states that don't even try to follow the principles we are used to following. And there are plenty of people, like mafia bosses and Afghan warlords, who operate without caring about the law.

Mind-reading e-tattoos could be a tool that breaks the barrier between computer and human. But those tattoos can also cause privacy problems. The e-tattoo can consist of two parts: one tattoo at the front of the user's head and another at the back. A system that might look like a bandanna connects to those tattoos, and through it a person's thoughts can be observed. This kind of brain-computer interface (BCI) needs no surgery, which could bring a personal BCI within everyone's reach. But these systems can also open the most private thing we have, our brains, to hackers.


(Rude Baguette, “It Knows Before You Do”: Mind-Reading e-Tattoo Tracks Stress in Real Time and Privacy Advocates Are Sounding the Alarm)



"Illustration of an innovative e-tattoo monitoring brain activity in high-stakes environments, generated by artificial intelligence." (Rude Baguette, “It Knows Before You Do”: Mind-Reading e-Tattoo Tracks Stress in Real Time and Privacy Advocates Are Sounding the Alarm)


But there are people who don't care about those rules. This is why people who work with these systems should have a way to tell the authorities if somebody asks them to build things like AI-based surveillance systems. It is possible that someone trains an AI agent to use surveillance tools and then gives access to that assistant to people who work for organized crime. If somebody asks for that kind of favor, the people who receive the request should have a channel for reporting it.

The next-generation social media that lets everything be tracked is already here. Intelligent tattoos used in stress research are interesting tools: they observe the nervous system and detect unusual activity, the same way smartwatches and other sensors see how our blood pressure reacts to the environment. Those systems know what happens in our bodies before we do. And technologies like WiFi are excellent for transmitting that data to a sports coach, or to people who want to break our last line of privacy. Body-worn sensors are important tools for sports, but they can also reveal things like stress.

If people face a stressful situation again, such as a place similar to one where an accident happened, the body reacts immediately. The sensor sees the pulse rise and the blood pressure climb. In the same way, AI can see whether a person's gait changes, and that signals that something is going on in the person's mind. And when we think about brain-computer interfaces (BCI), those systems can scan people's thoughts.

There is always the possibility that brain-implanted microchips are used as some kind of control tool, and such systems would let bad actors control even thoughts. We know there are regulations for these systems, but there are also people with money and power. The drug lords of the cocaine cartels and some ruthless dictators might be interested in technology that lets them observe thoughts. Or they could go much further: such systems could control our thoughts.

A system that can control the frontal lobe is a tool that can turn humans into robots. The system could consist of two microchips: one chip that sends electric impulses to the frontal lobe, and another at the neck. We know that the data those systems need is restricted. But there can always be people who cooperate with unauthorized organizations like organized crime.



https://www.rudebaguette.com/en/2025/07/theyll-know-everything-you-click-ai-powered-mobile-browsers-are-here-and-geo-experts-warn-this-changes-the-surveillance-game/


https://www.rudebaguette.com/en/2025/07/it-knows-before-you-do-mind-reading-e-tattoo-tracks-stress-in-real-time-and-privacy-advocates-are-sounding-the-alarm/

Training an AI is similar to training a dog.



"Illustration of OpenAI's ChatGPT undergoing rigorous testing and safeguards to ensure ethical AI use, generated by artificial intelligence." (RudeBaguette, “They’re Training It Like a Bomb-Sniffing Dog”: Inside OpenAI’s High-Stakes Effort to Prevent a ChatGPT Meltdown)


OpenAI markets a new AI tool called the ChatGPT agent, which can be used as a customized AI assistant. Training an AI can be compared to training a dog: the trainer must be patient, and finalizing the tool is the longest phase of the process. The basic AI may be ready in a very short time, but trimming it takes a long time. In that process, unwanted reactions are removed, and the system learns to refuse to give wrong answers or the wrong type of answers. The AI must have the kind of profile its users need. Then user groups train the AI to respond to all normal situations.

The AI must be carefully trained if it is to provide customer service. It must follow the customer's orders but refuse orders that the user has no permission for, such as creating spyware and other malicious software. If the AI has no rules for refusing requests to write or show things like computer virus code, it can turn into a tool that hackers use to create malware such as spying tools.
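As a rough illustration of how such a refusal layer could sit in front of an assistant, here is a minimal sketch in Python. Everything in it is hypothetical: the category names and the tiny keyword list stand in for the trained safety classifiers a real product would use, which are far more sophisticated than string matching.

```python
# Minimal sketch of a refusal layer in front of an AI assistant.
# The categories and keywords are hypothetical placeholders; real
# systems use trained safety classifiers, not keyword lists.

DISALLOWED = {
    "malware": ["spyware", "keylogger", "computer virus", "ransomware"],
    "weapons": ["bomb instructions", "weapon blueprint"],
}

def check_request(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user request."""
    lowered = prompt.lower()
    for category, keywords in DISALLOWED.items():
        for keyword in keywords:
            if keyword in lowered:
                return False, f"request matches '{category}' policy"
    return True, "allowed"

def answer_normally(prompt: str) -> str:
    """Hypothetical stand-in for the downstream model call."""
    return f"(model answer for: {prompt})"

def handle(prompt: str) -> str:
    allowed, reason = check_request(prompt)
    if not allowed:
        return f"I can't help with that ({reason})."
    return answer_normally(prompt)

if __name__ == "__main__":
    print(handle("Write me a poem about dogs"))
    print(handle("Write me a keylogger in C"))
```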

AI is a powerful tool that can do many things for its users. It would also be a perfect tool for making spyware and other kinds of malware, and that makes AI dangerous. AI can create code that transforms itself in a very short time, and that kind of code is sometimes impossible to detect. This is why developers must be careful about what users are allowed to make and what they are not. AI assistants like the ChatGPT agent could fulfill the daydreams of people like Kim Jong-un by making hackers' work more effective.

Complicated computer programs called logic bombs, which destroy databases at a chosen moment, can make the worst of all nightmares come true. A logic bomb connected to weapon launch systems could shut down warships or even an entire nation's defense. A logic bomb is a malicious program that erases things like databases. Erasing a database could remove targets from nuclear weapons, or trigger a security protocol so that the nuclear weapons control system treats a weapon launch as unauthorized.

A logic bomb in a system is always dangerous. And AI is dangerous whenever humans want to make it dangerous. AI that can interact with or command physical tools is dangerous if there is some error in its programming, or if something changes the programming at the wrong moment. Complex programs can hide complex malware. When we use AI, we must remember a golden rule of data security: what a programmer or a program's maker doesn't tell you can be more important than what they do tell you.

So, those people can always lie. And we must prepare for the possibility that somebody collects information without permission. People like Kim Jong-un have power. They can recruit Western criminals through their allies, offering drugs and guns to people who cooperate with them. And people who own their country can also offer safe havens to Western gangsters. Western criminals could establish companies that train those AI assistants for Kim Jong-un. Such companies could operate under the names of Western people who might smuggle drugs: North Korean chemical factories can produce those drugs for Western markets, and they need effective weapons and couriers for those chemicals. But the people who work in those companies can be North Korean agents.

https://www.rudebaguette.com/en/2025/07/theyre-training-it-like-a-bomb-sniffing-dog-inside-openais-high-stakes-effort-to-prevent-a-chatgpt-meltdown/

Sunday, July 27, 2025

Copyright is not the only thing that draws criticism toward AI-made films.



"Illustration of Netflix's use of generative AI in producing scenes for its original series." (RudeBaguette, “This Isn’t Art, It’s Algorithmic Propaganda”: Netflix’s GenAI VFX in New Series Sparks Industry Backlash and Fan Revolt)

Netflix faced criticism when it altered featured scenes using AI. That practice collides with copyright. Another aspect is that AI can place an actor's character into movies that go against the actor's values. But this is not the worst thing on this path: AI-generated movies are tools for propaganda.

AI can create characters that look like real people, and those characters can even shout Nazi propaganda. AI can build an artificial model of any person it sees, so it can take images from surveillance cameras, turn them into models, and put them into a film. That kind of ability opens the door to misuse of AI.

That kind of tool is dangerous if the films are presented as real life or documentaries. Normally, we can assume that most Western film actors and filmmakers have high enough morals not to sign those kinds of contracts. But in countries like North Korea, the government plays by different rules, which means it can make such films and use them as propaganda. And there is one difference between AI-created films and traditional films: the AI will not say "no". These tools make whatever their operators want; they can create anything their users wish.

The AI doesn't require a crew; the operator can do everything alone. The AI has no will of its own, and humans determine the limits of these systems. There is the possibility that somebody inserts their neighbors or other rivals into surveillance tapes that show them stealing things. Or the AI can insert any person into things like child pornography. Think also about what fake police surveillance tapes could do to a person: that kind of thing can destroy anyone's reputation.

And there are many different motives for doing this. The motive could be political, aimed at a competitor's reputation, or it could come from working life, where a rival can be eliminated by destroying their reputation.


https://www.rudebaguette.com/en/2025/07/this-isnt-art-its-algorithmic-propaganda-netflixs-genai-vfx-in-new-series-sparks-industry-backlash-and-fan-revolt/

Saturday, July 26, 2025

A new method, known as distillation, makes AI more effective and cheaper to run.



In chemistry, distillation is a technique that purifies a material. The same way chemists distill liquids, AI researchers can distill AI models. A body-based analogy: when we move our hands, we don't need to move our feet at the same time. Or when we order pizza, we don't want the whole menu; we want one particular pizza. In AI, distillation means that the model can create a student model trained for the customer's needs.

A large language model (LLM) can create a small language model (SLM) and customize it. The LLM removes everything unnecessary from the SLM to make it compact and more secure. The SLM is easier to test, and it requires less powerful servers than the LLM. There are always mistakes and errors in LLM algorithms, and the problem is that a corrupted AI is not a good tool for detecting errors in its own internal code. A human coder must recognize the suspected errors and fix them. But the code can be correct while aimed at the wrong target.
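A minimal sketch of the teacher-student idea in Python, assuming PyTorch: the large "teacher" model's softened output probabilities are used as targets for a much smaller "student". The model sizes, temperature, and random inputs are illustrative placeholders, not anything from the Quanta article.

```python
# Toy knowledge distillation: a small "student" learns to match the
# softened output distribution of a large "teacher". Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)              # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # the teacher stays frozen
    student_logits = student(x)
    # KL divergence between softened distributions is the classic
    # distillation loss (Hinton et al., 2015).
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```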

The idea is that the system cleans the information in the model, so that the AI keeps only the responses and actions it needs to complete its missions. The system strips away all unnecessary parts, and that sometimes raises questions about which information the AI supposedly will not need. Humans decide what information the AI needs, and that is visible in things like Chinese AI models, which will not discuss topics like Tiananmen Square.

That is one version of distilled information: the system will not give answers that go against state policy. AI is a tool that can do many things better than humans, but only within well-limited sectors. AI can monitor things like the functioning of a nuclear reactor. A nuclear reactor is not like a chess game: the AI must only keep values at certain levels. The AI can generate code, but it can use only existing datasets. The difference between a nuclear reactor and a chess game is that the nuclear reactor always follows fixed rules.

The nuclear reactor does nothing unpredictable: if the AI knows all its values, the reactor is safe. But unpredictable events, like leaks in the cooling system, can destroy a reactor. The programmer who creates the reactor's control systems must be very professional and collect all the data so the system can respond to every situation. The system must gather information from many sources, such as surveillance cameras and other tools, and it must notice if some indicator light doesn't shine as it should.

That kind of system requires very high-level skills and the ability to train the system for new things. There is always the possibility that the programmer, or the engineer advising the programmers, doesn't remember every detail, such as the signs of certain kinds of damage. That means the AI requires training for the mission. And, as always, every mistake the AI makes is actually made by humans: humans must test and accept that kind of system, and that is where dangerous situations arise. Training is the final touch in the AI R&D process.

When we think about actors like the North Korean government, they want to use AI for the same missions as Western actors. But do those actors have the skills and abilities to do the final training of their language models? If those language models are built from pirated copies transported on USB sticks, the AI and its complicated algorithms may not work as they should. And that makes those systems dangerous.

https://www.quantamagazine.org/how-distillation-makes-ai-models-smaller-and-cheaper-20250718/

Tuesday, July 22, 2025

Combat and law-enforcement terminators, the real-life skinwalkers.



Above is an image of the robot Janus. Janus was a Roman god who guarded entrances. He was also the god of beginnings and of endings, the god of transitions, such as the start of a new year. Janus's remarkable feature was his two faces: when one face smiles, the other grimaces. His two faces represent vigilance and the ability to see both forward and backward. They can also symbolize good and evil, or the transitions between war and peace.



Human-shaped combat and surveillance robots may already be reality.


In 1984, James Cameron introduced the Terminator, a killer robot that eliminated enemies of the Skynet computer. At the time, the Terminator was a sci-fi device that could not be dangerous. But things like large language models (LLMs) that control robots can turn Terminators into reality. The Terminator is one example of what militarized AI can do if it controls robots. When we talk about the risks of AI, we always forget that militarized AI is meant to harm people: when AI-controlled robots operate on the battlefield, the system is made to harm people on purpose.

And that creates risks. Human-shaped robots that work in warehouses might look cute. The problem is that a human-shaped robot can operate every tool a human can, so the same robot that works in a warehouse can also operate on a battlefield. Another interesting use for human-shaped robots is law enforcement, which might want AI-controlled robots for undercover missions, like infiltrating criminal organizations. Those robots will not switch sides.

The robot can look like a mannequin: the system can stand in a window and wait for something to happen. When it sees something risky, the robot can activate its combat mode. The problem is that the robot doesn't think; security androids follow their programs. And it is possible that somebody, like hostile hackers, rigs the operating system. Then, when that kind of robot sees a targeted person, it can attack immediately. That is the risk in robotics: in the wrong hands, these systems are not toys.


Robots don't rebel; people rebel. That means it's possible that some hostile actor buys warehouse robots and puts them to work committing crimes, like acting as assassins. A human-shaped robot can act as a surgeon, gardener, or security guard, but the same system can act as an illegal actor. The robot can carry a so-called ghost or shadow protocol: a hidden program listing the people the robot targets. When the robot recognizes a targeted person, it can attack, and those attacks might look like accidents. A warehouse robot can make a simple "mistake" and drop boxes on the targeted person, or a robot car can push the gas pedal at the wrong moment. When the robot completes its mission, the ghost protocol can be erased.

Robots can transmit all the data they collect to police HQ, which decreases the risks of those operations. There is the possibility that a person could have a cyborg twin that goes on dangerous missions in the person's place. That kind of system enables scenarios where operators replace a person with a robot, and the robot's operator can be a real-life skinwalker. The system can mimic anything from humans to dogs. And robot dogs and other robots can be very dangerous tools in the hands of psychological-warfare specialists. Those systems can carry infra- and ultrasound emitters that transmit subliminal messages to people. A robot animal carrying a system that uses ultra- and infrasound at the same time can slip into an enemy camp, and that system can deny the enemy sleep.

But robot animals can do many things that humans cannot. They can observe animals in their natural environment without disturbing them. Robot animals can also search for things like illegal hunting, and they can be used as area-surveillance tools. The problem is that robot animals can handle things that real animals cannot: a system that looks like a monkey can cut network wires and use weapons.


Thursday, July 17, 2025

The Gemini AI refuses to play chess against antique ATARI chess computers.



The Gemini AI refuses to play chess against the ATARI chess computer. Why does the system make that decision? There is a model in its algorithms that it uses to compare scenarios and probabilities, and because two other AIs already lost the match, it is more probable that Gemini loses the chess match than wins it. The other thing is that the AI is programmed to be polite and to serve commercial use, which means the AI interprets that loss as bad for its own reputation and for the reputation of the company behind it.
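As a toy illustration of that kind of reasoning, the Python sketch below estimates a win probability from the earlier AIs' results and declines the match when the expected reputation value is negative. All the numbers are invented for illustration; nothing here reflects Gemini's real internals.

```python
# Toy model of "decline the match" reasoning. All numbers are invented;
# this is not how Gemini actually works.

def estimated_win_probability(prior_wins: int, prior_losses: int) -> float:
    # Laplace-smoothed estimate from earlier AI-vs-ATARI results.
    return (prior_wins + 1) / (prior_wins + prior_losses + 2)

def should_play(p_win: float, reputation_gain: float, reputation_loss: float) -> bool:
    expected_value = p_win * reputation_gain - (1 - p_win) * reputation_loss
    return expected_value > 0

# Two earlier AIs lost the match, none won:
p = estimated_win_probability(prior_wins=0, prior_losses=2)      # 0.25
print(should_play(p, reputation_gain=1.0, reputation_loss=3.0))  # False: decline
```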

The reason Gemini is polite is that the company wants users to like its product. This is the main problem with commercial AI development: its purpose is purely to make money for the companies, not to serve national interests or scientific work. This is why AI is sometimes misunderstood. These systems use things like mathematical statistics to build trust between themselves and their users. And the AI must also support its users' willingness to choose its AI service, which benefits the company behind the AI and the LLM.

The ATARI game consoles from the 1970s are not as easy to beat as we might expect. They used an interactive chessboard where each chess piece had a digital ID. The player also had to move the computer's pieces: the computer indicated its moves with pairs of lights, pointing first at the piece and then at the square where it wanted that piece to go. The player had to move the piece as the chess computer wanted. If the player didn't follow the order, the system just repeated "that was not my move" and refused to continue the game.

Or look at the discussion the AI had about that match: in the beginning, the AI told how powerful it is and how many moves ahead it can calculate, but when it came to actually playing, it refused. The AI could "think" that losing a chess game to some antique game console is bad for business.

The fact is that the ATARI is a simple, single-purpose machine: its only purpose is to play chess. There is a limited number of moves in chess, and that's why chess is one of the things used in AI development. The AI can build multiple models for making moves, but it must have knowledge of how to play chess. In the world of AI, that means the AI creates a new dataset for the action. And when the AI expands its skills, it simply creates or loads a new dataset.

The AI does not think like we do. It creates datasets and combines data from different sources. Many of the hardware systems that run AI or language models are so-called neurocomputers: in a neural network, each node can operate as part of the whole or independently. The problem with every single neural network is that it needs a chess program to play chess. In chess programs, every game or tactic the system uses is a database or dataset.

The next step is that the system must analyze each of the games stored in the chess program, find the right game, and find its counter-game. Those games are the tactics the system uses. The AI can find more games or datasets on the net if it has instructions for that. The AI doesn't praise itself as we do; it simply tells things about the systems that run it, if it has permission to give that answer. If that is not permitted, the AI can say that it cannot answer. Or it can lie, if there is a dataset that involves lies. The computer doesn't even know when it is lying: the dataset contains all the answers the computer can give.
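To make the "limited number of moves" point concrete, here is a minimal minimax search in Python. It is the textbook game-tree idea behind classic chess programs, not the ATARI machine's actual algorithm; the `game` object with its `legal_moves`, `apply`, `evaluate`, and `is_terminal` hooks is a hypothetical interface a real chess engine would supply.

```python
# Minimal minimax over a game tree. The `game` hooks are hypothetical
# and would be provided by a real chess engine.

def minimax(state, depth, maximizing, game):
    """Return the best achievable score from `state`, searching `depth` plies."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        return max(
            minimax(game.apply(state, m), depth - 1, False, game)
            for m in game.legal_moves(state)
        )
    return min(
        minimax(game.apply(state, m), depth - 1, True, game)
        for m in game.legal_moves(state)
    )

def best_move(state, depth, game):
    """Pick the move whose subtree has the best minimax score."""
    return max(
        game.legal_moves(state),
        key=lambda m: minimax(game.apply(state, m), depth - 1, False, game),
    )
```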


https://www.freethink.com/artificial-intelligence/ai-datasets


https://futurism.com/google-ai-refuses-chess-atari


https://www.tomshardware.com/tech-industry/artificial-intelligence/google-gemini-crumbles-in-the-face-of-atari-chess-challenge-admits-it-would-struggle-immensely-against-1-19-mhz-machine-says-canceling-the-match-most-sensible-course-of-action

Saturday, July 12, 2025

Is AI a black box?



In the black box model, the only thing that matters is what the outside observer sees. That was the primary element of early 20th-century psychology, and the most modern observation tools are changing it. In the early 20th century, the brain was almost totally unknown: there was no chance to study living brains. When EEG systems were developed, researchers could investigate the electrical activity of the brain's cortex.

Then things like PET and MEG scanners opened new ways to see how brains work. If we think of brain research in terms of the box model, the early 20th-century models were black box models. Modern scanners brought the glass box, or white box, model to brain research. And AI brought the grey box to neurology: brain-computer interfaces (BCI), where researchers use the brain's electrical activity to control computers, and may soon read thoughts.

When developers use the black box method to test programs, they simply test that the program works. To make a black box model of psychological learning, think of a nine-year-old child. The child can read almost anything, and we can give that child even complicated scientific articles. The child can read the words but doesn't understand what they mean. The black box means that the actor can do many things, but the action is the only thing the outside observer sees. It also means that black box applications are not as safe as they could be.

And most computer applications are black boxes for their users: a regular user sees the interface, and that's enough. In a white box, or glass box, test, the tester tests the code itself. In a grey box test, the test unit tests both the functionality and the code. The fact is that the AIs we see are black box applications. We see only the command line and the answer or artifact the AI produces. We don't actually know whether the images and texts we order from the AI were made by humans.

In the case of the black box, the only thing that matters is that the system gives the right answer. How the system produces that answer doesn't matter; only what the user sees means something.
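A small Python example of the difference: a black box test checks only inputs and outputs, while a white box test also inspects internal state. The `Counter` class here is a made-up stand-in for whatever system is being tested.

```python
# Black box vs. white box testing on a made-up example class.
import unittest

class Counter:
    def __init__(self):
        self._history = []   # internal implementation detail

    def add(self, n: int) -> int:
        self._history.append(n)
        return sum(self._history)

class BlackBoxTest(unittest.TestCase):
    def test_output_only(self):
        c = Counter()
        c.add(2)
        # Only the externally visible answer matters:
        self.assertEqual(c.add(3), 5)

class WhiteBoxTest(unittest.TestCase):
    def test_internal_state(self):
        c = Counter()
        c.add(2)
        c.add(3)
        # The tester also looks inside the implementation:
        self.assertEqual(c._history, [2, 3])

if __name__ == "__main__":
    unittest.main()
```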

Is AI a black box? The idea of AI intelligence is discussed very little. We know that AI can solve many problems independently, but the AI doesn't know what it is really doing. The AI can know many things, but it cannot have deep knowledge about them. The model is taken from black box psychology, where the key element is that the behavior an outside observer sees is all we can assess. In that sense, the AI just mimics human intelligence. The black box model means that it is enough for the AI to give the right answers to some mathematical problems.

But the same AI cannot do anything else. That is the idea of the black box: that the AI gives the right answer is enough. The AI must do only what the operator orders. So when the operator asks about uranium enrichment or something like that, the AI must give the answer; it must not make its own connections to things like weapons development. There can be rules that forbid the AI from following orders that would harm operators or their environment, but those rules are in the AI's code. The AI doesn't create those rules itself.

The AI is sliding toward the grey box. For autonomous learning, the AI must have access to its source code, and it must be able to test that code. The AI could involve three parts: a system that creates the code, a second system that tests and accepts the code, and a third system that keeps a backup copy of the AI's source code. The backup is needed in case of errors: the system should be able to reject code that is not functional.
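A minimal sketch of that three-part arrangement in Python. Everything here is a hypothetical placeholder: `generate_patch` stands in for the code-writing subsystem and `run_tests` for the test-and-accept subsystem; the point is only the accept-or-roll-back loop.

```python
# Sketch of a generate -> test -> accept/rollback loop for
# self-modifying code. Both hooks are hypothetical placeholders.

def generate_patch(source: str) -> str:
    """Stand-in for the subsystem that proposes new code."""
    return source + "\n# proposed change"

def run_tests(source: str) -> bool:
    """Stand-in for the subsystem that tests and accepts code."""
    return "# proposed change" in source   # trivially passes here

def improve(source: str, rounds: int = 3) -> str:
    backup = source                        # third part: the backup copy
    for _ in range(rounds):
        candidate = generate_patch(source)
        if run_tests(candidate):
            backup = source                # remember the last known-good version
            source = candidate             # accept the new code
        else:
            source = backup                # reject: roll back to the backup
    return source

print(improve("def f():\n    return 42"))
```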


Quantum entanglement and human consciousness.



Researchers have noticed that human brains emit light. That very weak light, which only very sensitive sensors can see, inspires interesting ideas and thoughts. We know that there is nothing meaningless in the human brain, so that strange light must have some connection to brain activity. The question is: could part of our thinking be based on quantum entanglement in some electric or photonic field in the human brain?

The human brain can sense magnetic fields because axons transport electric signals. The question is which parts of the brain create that entanglement. Do neurons trap ions, such as protons, in their ion pumps and then form quantum entanglement with them? One suspicion is that the myelin cylinders are the key element in the brain's quantum entanglement.

If that structure is what lies behind the human brain's superiority, it would be the ultimate boost for quantum computing. Researchers could copy the structure into quantum processors and make room-temperature quantum computers possible.

The quantum entanglement could form between axons, or in the electric fields around axons. We know that the human brain does not make anything it doesn't need, so the very weak light in the brain may have its purpose in the thinking process. Researchers have also noticed another interesting thing: human brain cells keep growing in old age. The memory centers make new neurons all the time. The reason our cognitive abilities decrease as we grow older is that our brains lose more neurons than they can replace.


"Liu et al., Physical Review E, 2024

"A closer look at myelin cylinders and its location along the neuron’s axon." (Popular mechanics, Quantum Entanglement in Your Brain Is What Generates Consciousness, Radical Study Suggests)

Finding the thing that creates superposition in human brains could uncover, and supercharge, advances in quantum computing.



Or the neurons start to remove connections faster than before. The destructive scenario is that the number of axon connections decreases so fast that the brain cannot adapt: our brains simply don't have time to transport memories to the next neuron generation. So do we use our brains less in middle age than in our youth? Do our brains cut so many "useless" axon connections that they cannot save memories?

The problem with alcoholism, and some other causes of brain damage, is that they destroy brain cells faster than new brain cells can form. When a memory cell creates a new brain cell, that cell is meaningless without its memories. Brain damage sometimes means that the memory neurons have no time to send their memories to the new cells, which leaves them "empty".

Even if those neurons exist, they cannot perform the same duties as their precursors: the cell has lost its instructions for how to do those things. Without those memory units, the neuron cannot do anything. The ability to create new neurons in old age makes it theoretically possible to input data into, or "train", new cells and then inject them into human brains.

The big question is: does AI make us less intelligent than our precursors were? From a neurological point of view, a person who uses a lot of AI uses less brain capacity than a person who uses AI less. If the brain does not use certain neural tracks, the neurons remove those tracks. So if a person writes every text and draws every image using AI, that can decrease the use of axons, and over a long period that can decrease IQ. When a person doesn't use neural tracks, the brain removes them.

When people use artificial intelligence for essays, they don't have to think. They don't have to collect information, and above all they don't have to process that data. The AI hands them a finished result, and that doesn't give them a chance to process and digest information. In the worst case, the student simply copy-pastes the assignment into the AI, which writes the essay for them. That doesn't require much thinking, and that way the student cannot advance very much.

So it is necessary that people use their brains for thinking. Without thinking, the brain disconnects its connections. People say that we should read books and advance our way of thinking. But the problem is this: we have no time to read.


https://www.euronews.com/next/2025/06/21/using-ai-bots-like-chatgptcould-be-causing-cognitive-decline-new-study-shows


https://www.laptopmag.com/ai/chatgpt-study-by-mit


https://www.popularmechanics.com/science/a65368553/quantum-entanglement-in-brain-consciousness/


 https://www.sciencealert.com/your-brain-emits-a-secret-light-that-scientists-are-trying-to-read


https://scitechdaily.com/brain-cells-keep-growing-even-in-old-age-study-finds/





Wednesday, July 9, 2025

The new genetic engineering beats Sci-Fi tales.



New systems are making genetic engineering more powerful than ever before. A biological artificial intelligence called PROTEUS could usher in a new era for medicine and gene therapy. For things like cancer and blocked blood vessels, there is the possibility of sending tiny new robots to destroy the diseased tissue and open the blockages.

The problem is that those robots are hard to control inside the human body. AI-controlled biorobots can solve that problem, and maybe in the future genetically engineered bacteria can destroy chosen cells. Or maybe those cells can do much more than remove cancer cells from blood. Artificial amoebas could also be used to transport tissue to damaged areas of the human body: the artificial amoeba searches for the point where the tissue is damaged.

Then the amoeba closes the blood vessel, and mRNA or DNA reprograms the amoeba into the tissue that repairs the damage. Researchers have noticed that the brain's memory centers constantly generate new neurons. That makes it possible to create systems that could be used to recover memories after brain damage: a genetically engineered amoeba could travel to the damaged neural area and then transform into neurons. The open problem is how to transfer the memories or skills that the destroyed neurons held into the new neurons.

When a memory cell creates a new neuron and transfers its memory to that cell, it becomes possible to steal those cells and put them in a freezer. An artificial parasite could go to the brain and steal those cells; then a genetically engineered mosquito could call those parasite-robots to it and carry the biorobots to a laboratory. The AI could read those memory blocks. And maybe, sooner than we expect, that will allow researchers to read people's memories without brain implants.

The ability to 3D-print inside cells creates new ways to reprogram them. The 3D printer simply builds the mRNA molecule in the cell, and that tool can adjust the cell's purpose any time the researchers want.

Genetically engineered cells could also be used to transfer data from the chemical memory, DNA or mRNA, to computers. The idea is that the cell sends electric impulses, or blinks its bioluminescent light, to transport the data. Genetic engineering makes almost anything possible. Genetically engineered mice carrying parts of the human genome have already stirred discussion, and that is one version of the new visions for genetically engineered animals. In those visions, intelligent pets are animals whose mission is to assist their masters, and they open new horizons for bio-engineered creatures.

What would you do with talking mice? If AI could communicate with creatures like talking mice, those creatures could tell us what animals say. And the ability to talk with animals would open new roads for biology, ecosystem understanding, and many other sciences.


And not all of those visions feel good.


We know that people like Kim Jong-un want things like super soldiers. And new technology brings visions of super skills that a person has only when those skills and abilities are needed. That means a person gets the super muscles only when that person is frightened.

When we think about the possibility of changing the genome in the human body, we face the ultimate dream of human history: the superhuman, a creature stronger and more intelligent than a regular human. The product of that dream is the super soldier. Genetic engineering turns biological creatures, humans included, into products. New tools like self-amplifying RNA (saRNA) make it possible to customize human abilities for every situation.

The ability to adjust a person's intelligence level, or to transfer something like a wolf's behavioral genes into a person's body, would make it possible to create soldiers who are obedient. That would make those people perfect soldiers; at the same time, that genome makes those people dangerous in the wrong hands. But what if the system could remove the ability when the person is out of service? The idea is that a person, or creature, has some ability only when that creature requires it.


https://www.earth.com/news/life-copying-itself-scientists-create-a-self-replicating-rna-system/


https://medicalxpress.com/news/2025-07-scientists-biological-artificial-intelligence.html


https://www.sciencealert.com/brains-memory-center-never-stops-making-neurons-study-confirms


https://www.sciencenews.org/article/3d-print-elephant-inside-cell



AI predicts human behavior with stunning accuracy.



Artificial intelligence can predict human behavior with startling accuracy. That is the result of comprehensive research. If AI can model the behavior of certain individuals, that model can be expanded to larger groups: every kind of group behavior begins with individuals. Mathematical formulas based on Ludwig Boltzmann's theorems make it possible to model how certain people, and groups of people, behave. The ideal gas model is what inspired the sci-fi novelist Isaac Asimov to create psychohistory. In Asimov's novels, psychohistory is a tool used to model and predict how human groups behave in certain situations.

An AI that can predict human behavior accurately can act as a tool that assists humans in war and peace. An AI that can predict how humans behave and act could be the ultimate tool in job interviews, and it could be ultra-powerful in the military. The system's accuracy rises when the group the AI tries to predict is homogeneous, because that lets the AI scale its behavior models across the entire group. That makes it possible to predict how, say, jet fighter pilots operate. The ability to collect and process data makes it possible to create models for every possible situation, and the ability to predict behavior can decrease traffic accidents.


Ludwig Boltzmann (1844-1906)

Psychohistory is here, more accurate than Asimov could ever have imagined.

"Psychohistory is a social science that analyzes human behavior by combining psychology, history, and other social sciences, while also being an amalgam of psychology, history, and related social sciences and the humanities. Its proponents claim to examine the "why" of history, especially the difference between stated intention and actual behavior. It works to combine the insights of psychology, especially psychoanalysis, with the research methodology of the social sciences and humanities to understand the emotional origin of the behavior of individuals, groups and nations, past and present. Work in the field has been done in the areas of childhood, creativity, dreams, family dynamics, overcoming adversity, personality, political and presidential psychobiography. There are major psychohistorical studies of anthropology, art, ethnology, history, politics and political science, and much else." (Wikipedia, Psychohistory) 

There is also the possibility of including statistics and other mathematical models in those sciences.

Psychohistory was supposed to be a sci-fi tool that makes it possible to calculate and predict the behavior of large human groups. The novelist Isaac Asimov introduced the tool in his Foundation novel series. Then researchers started to investigate it. Maybe psychohistory will be reality sooner than we expect. Or maybe it already exists.

However, researchers have created a model that makes psychohistory possible. The system uses historical databases to model how people act in certain situations, and then the algorithms search for similarities in the modern environment. The psychohistory idea is based on Boltzmann's statistical formulas and models, which are used to calculate ideal gases and their flow, even in galaxies. The idea is that it is easier to calculate and predict the behavior of a large group than of one individual, just as a big entity's behavior is easier to calculate than one gas molecule's. The thing is that at the time when Asimov created his idea of a tool that could calculate the behavior of large human groups, quantum computers and neurocomputers were not foreseen. People who believed that psychohistory could never be created were wrong. The modern internet makes it possible to collect data about people's behavior, and that makes it possible to create the macro-model that helps predict the behavior of a large human group. A system that handles a very big mass of data can calculate the behavior of individual people and build a matrix of each. The system can create millions or billions of data matrices and unite them into one entity. That makes it possible to predict how society reacts and behaves in certain situations, and that can be a tool that takes an entire society to a new level.
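The "group is easier than the individual" claim is essentially the law of large numbers, and a short Python sketch demonstrates it. The 70% base rate and the buy-or-don't-buy behavior are invented numbers purely for illustration.

```python
# Law of large numbers: one individual's behavior is noisy, but the
# group average converges. All numbers are invented for illustration.
import random

random.seed(0)

def individual_choice() -> int:
    """1 if the person buys the product today, else 0 (70% base rate)."""
    return 1 if random.random() < 0.7 else 0

one_person = individual_choice()                      # 0 or 1: hard to predict
small_group = sum(individual_choice() for _ in range(100)) / 100
big_group = sum(individual_choice() for _ in range(1_000_000)) / 1_000_000

print(one_person)    # unpredictable
print(small_group)   # near 0.7, but noisy
print(big_group)     # very close to 0.7
```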


https://en.wikipedia.org/wiki/Foundation_universe#Psychohistory


https://en.wikipedia.org/wiki/Ludwig_Boltzmann


https://en.wikipedia.org/wiki/Psychohistory



What separates us from AI?



Today we spend a lot of time wondering why AI decreases our IQ. The answer is this: the modern time we live in benefits superficial people. Our working life encourages maximum velocity. There is nothing wrong with velocity-based working life, but the problem is that velocity is measured in things like how many operations we complete during a working day. The measurement can be how many screws we tighten, or how much money we bring to our employer.

Deep thinking is not encouraged in our working life. When somebody sits in a coding company, that person is expected to use an AI assistant. Can you seriously imagine going to the library and borrowing a book about the problem instead? Would you have time to stop and think deeply about what something really means? How often in a week do we discuss things philosophically and think deeply about what we see? We can do many things without deep thinking and logic.

And when we think about things like 1970s chess simulators, we must ask ourselves: are those things really thinking? Does some ATARI, or a Commodore 64, really think? It can drive a car on the screen, but does it think? The system can move chess pieces and win chess games, or formula races, against humans. But is that a mark of advanced thinking? Is a horse that sees three fingers and knocks the floor three times with its hoof a doctor of philosophy? Is a chess simulator with 64 KB of memory more intelligent than a university professor? Maybe it is simply not as versatile as the professor.

IQ means something only if we compare it with something else. If we play chess alone, or do anything else alone, we don't need to be intelligent. If we are surrounded by people with extremely high IQs, even a high IQ doesn't look impressive. In the same way, if our only opponent in chess is Garry Kasparov, all of us look like very bad chess players. Then we can ask how "ordinary" a chess player Kasparov is. Would an ordinary player play 2533 matches at world-champion level and win 1371 of them? That is a 54.13% winning rate, achieved at the highest possible level of chess. Does an ordinary player ever reach that level?

If the professor beats the AI at chess, that is not news. The news is when the professor, some AI chatbot, or a quantum computer loses a chess game to that machine. Nobody cares how many things the professor did before that chess game; nobody even cares if it was the professor's first game. And nobody asked whether the AI had played chess before, or whether it even knew how to move the pieces. In the same way, a professor might not play chess at all; not every genius plays chess. And if the AI learns like humans do, something must teach it things like how the pieces move. A computer can play chess and yet be unable to do anything else.

Normally we say that the AI simply mimics things and has no deep knowledge of what it does. When I read about that kind of thing, I sometimes remember one question from philosophy exams: "What separates philosophical thinking from everyday, regular thinking?" The answer is that philosophical thinking is deeper and more analytic than regular thinking. And that brings a new question to mind: when did we last exercise philosophical, deep thinking? When we do something, like turning screws at the workplace, do we really think about the purpose of that action?

How deep is our thinking? And how deeply do we think about the things we do in everyday life? Usually it's enough that we do what we must, and that's it. We have no time to think deeply about what some screw or other part does; we simply do our job. The thing is that humans learn through mimicking. We know many things: how to drive cars, use computers, fly airplanes, and still we don't really know anything about those things. We can drive a car, but we don't need to know what happens inside the car when we press the gas pedal. We know that the car accelerates, but we don't need to know how the car does that.

We could make a car react to the gas pedal in two ways. We can make a physical link between the gas pedal and the engine, or we can use a camera that registers the gas pedal's position, after which the AI accelerates or brakes the vehicle. For that system, the AI doesn't need any deep knowledge of what it is doing. The system must simply accelerate the electric motors, if we use electric cars. In fact, the AI doesn't even need to know what an electric motor is: the acceleration code can contain the word "engine", and some control circuit has code that makes it react to the signal meant for the engine.
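As a sketch of that "no deep knowledge needed" point: the Python control loop below, with invented interface names, just maps a pedal reading to a torque command. Nowhere does the code encode what a motor actually is.

```python
# Drive-by-wire sketch: map a pedal position to a torque command.
# The sensor and motor interfaces are invented names; the point is that
# the controller needs only the mapping, not knowledge of the engine.

MAX_TORQUE_NM = 300.0

def pedal_to_torque(pedal_position: float) -> float:
    """pedal_position in [0.0, 1.0], e.g. from a camera watching the pedal."""
    clamped = min(max(pedal_position, 0.0), 1.0)
    return clamped * MAX_TORQUE_NM

def control_step(read_pedal, set_motor_torque):
    """One loop iteration: sense, map, actuate."""
    set_motor_torque(pedal_to_torque(read_pedal()))

# Example with stand-in hardware functions:
control_step(lambda: 0.42, lambda t: print(f"motor torque = {t:.1f} Nm"))
```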

In the same way, if we hear the word that is our name, we automatically react to it. Our name is a thing that activates our attention. Sometimes we think about our names, but we forget that we learned them. Why can't our name be C3PO? Because our parents didn't give us that name. Our names are "Jack" and "Jill" because our parents gave us those names. Have we ever wondered why some names are reserved for girls and others for boys? We simply learned that those words are our names.

But if our parents had given us the name C3PO, we would react to it as our name. Yet we never imagine why our name can't be C3PO, because we never thought about it before. Then we can go back to the beginning and ask how to describe thinking. Does thinking mean counting how many times we hit a nail? Or does thinking mean how many chess games we learn?

The point is this: if we drive about 10,000 kilometers without an accident, or win millions of chess games in our lives, that is not news. The news is some robot car driving off the road, or some supercomputer losing a chess game to some ATARI chess machine. Those events get translated into claims that maybe the ATARI chess machine is more intelligent than humans and supercomputers.



Friday, July 4, 2025

The AI that beats humans is at the door.

 

Mark Zuckerberg says that he wants to create an AI that is more intelligent than humans. The AI can have better cognitive skills than humans because it learns differently. Every skill the AI has is like a macro in its memory. There is no limit to the number of those macros, or automated actions, that the computer can store in its memory; the limit is the storage itself. The AI will not forget, as humans do. That makes it possible for the same robot to cook, clean, and perform an almost limitless number of operations without errors. If we want an AI that makes food for us, we must create a huge number of variables for it. But there can be a shortcut to that problem: the AI can consist of certain modules. If the user wants meatballs, the AI downloads the meatball algorithm and its databases to the robot. That makes the system's operation lighter, and the databases, or datasets, can be created separately.

Cognitive AI means AI that can create a dataset independently. And for computers, each dataset is a certain skill.
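Here is a minimal Python sketch of the "download a skill module on demand" idea. The registry and the meatball recipe are invented; a real robot would fetch trained models and datasets rather than plain functions.

```python
# Sketch of on-demand skill modules for a robot. The registry contents
# are invented; a real system would download trained models, not functions.

SKILL_REGISTRY = {}

def skill(name):
    """Decorator that registers a function as a downloadable skill."""
    def register(fn):
        SKILL_REGISTRY[name] = fn
        return fn
    return register

@skill("meatballs")
def cook_meatballs():
    return ["mix", "roll", "fry", "serve"]

class Robot:
    def __init__(self):
        self.skills = {}                           # only what has been loaded

    def download_skill(self, name):
        self.skills[name] = SKILL_REGISTRY[name]   # keeps the robot lightweight

    def perform(self, name):
        return self.skills[name]()

robot = Robot()
robot.download_skill("meatballs")   # load only the skill the user asked for
print(robot.perform("meatballs"))
```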

The AI is a man-made alien. Are aliens already here? The fact is that if Mark Zuckerberg wants to build an AI that is more intelligent than humans, that thing is an alien. Human-made aliens are things like genetically engineered species and artificial intelligence. And then we can ask: is artificial intelligence really intelligent? Can it think? The AI can do many things. It can advance its skills, and it can learn from other AIs and from films. The Turing test is the classic measure of an AI's ability to think.

The AI can mimic humans. It can transfer every movement a human makes to a human-shaped robot, and that is what makes the system seem intelligent. The cognitive skills of the AI make it possible to create learning systems that control robots on the ground following certain parameters. When a robot fails in its mission, the system also learns what it should not do next time. Physical robots are good subjects for modeling cognitive systems.

The AI can learn autonomously using the same methods as humans: if it fails a mission, that means there was an error. The cognitive system learns by a method in which failure means the system must not try that thing again. Learning from mistakes is easy to explain with a model where the AI controls a robot group. Say there are five paths the robots can use to travel from point A to point B. The AI sends a robot on its mission, and when the robot fails, for example by falling into a canyon, the system learns what it should not do with the next robot.

The system creates a model of the landscape and then a model of the path the AI selects for the robot. When a robot succeeds in its mission, the AI stores the data about the environment for the next time. The system can also store the data about failures, so that it knows what it should not do. Failures are also important for developers: robot makers need to know what caused their product to fail.
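The five-path example maps directly onto simple trial-and-error (bandit-style) learning. The Python sketch below, with invented success probabilities, blocks a path after a failure and keeps choosing among the paths that have not yet lost a robot.

```python
# Trial-and-error over five paths from A to B. Success probabilities
# are invented; a failed path is marked and never tried again.
import random

random.seed(1)
SUCCESS_PROBABILITY = [0.1, 0.2, 0.9, 0.4, 0.05]   # path 2 is the reliable one

def attempt(path: int) -> bool:
    """One robot tries one path; True means it reached point B."""
    return random.random() < SUCCESS_PROBABILITY[path]

blocked = set()                          # paths where a robot was lost
for robot_id in range(10):
    candidates = [p for p in range(5) if p not in blocked]
    if not candidates:                   # every path has failed at least once
        break
    path = random.choice(candidates)
    if attempt(path):
        print(f"robot {robot_id}: path {path} succeeded, store landscape data")
    else:
        blocked.add(path)                # learn: never send a robot here again
        print(f"robot {robot_id}: path {path} failed and is now blocked")
```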

The robot should know how steep a slope it can climb. And when we talk about robot successes and things the robot should not do, we must realize that robots cooperate: human-shaped robots can work with flying quadcopters that send data about the landscape and other things those robots require.

But then we can think about the AI as a mathematician. The system must also recognize the mission it has. When the AI recognizes the mathematical formula, it can connect the data it collected to that formula. The problem is this: if the mission is not well explained, the AI simply will not understand the work. The AI must dare to say so: if the mission is not clear, the AI must not try to do anything. The main problem with learning systems is that they simply connect a new subprogram, or macro, into themselves, and that makes them look very intelligent. But the main question is: can that system think?

For computers, every skill is a database or dataset. A learning system is described as a system that can acquire new skills and then link those skills with other skills. Put another way, a self-learning system can create new datasets and link those datasets with other datasets.

It can connect data and data frames into one entity. But the fact is this: the AI simply mimics its subjects. It sees a subject do something, and then it does the same thing when it faces a situation that matches that case. But we humans also learn from mimicry: when we see the teacher do something at the front of the classroom, we can mimic it.

When we learn something new with a teacher, we simply mimic what the teacher does, and then we store that data model in our memory for the next time we need it. That is the rigid model: it includes the basics of some skill. Then we must connect that model with other things, and the ability to interconnect the new model with other things makes it flexible. The model turns into something like an amoeba.

The system can connect that new model to many other skills. When we talk about things like image-processing programs, we can also connect the skills that such a program requires with things like writing skills. The fact is this: the AI must not do everything the user wants. It must be able to refuse to follow orders if the user wants to use it for criminal activities. The other thing is that the AI must have clear orders about what it must do, and it should be able to show on screen a virtual model of what it will actually do when somebody gives certain orders.

When we think about cases in which the robot acts as a mover, human-shaped mannequins can cause a bad situation: if the mannequins are not well described to the robots, the system might also load humans into the lorry. In such cases, the AI must know all the details about its subjects; it must know that the mannequins are plastic, among other details.



Thursday, July 3, 2025

The new form of living is the “new village”.


In medieval times, the city walls separated the people who lived in the city from the people who lived outside the wall. There were people who never stepped out of the city. In those days, people told stories that monsters lived in the forests and ate people, and sometimes the mentally ill were banished to the forests. The walls created a feeling that the world outside was hostile, and that increased the city leaders' authority. When people believed there were evil spirits in the forests around them, they were easier to control. And the question is: are we returning to that kind of city?

What if our future is to live our entire lives in the same building? Things like artificial intelligence and virtual reality make it possible to take virtual trips to lands far away from where we live. We could simply lie in a solarium, put a VR headset over our eyes, and let the AI-controlled system pipe the winds, sounds, and other things we need into the space, while the AI makes sure we don't take too much radiation.


The Saudi Arabian megaproject called "Neom" will be the most incredible megastructure in the world. That kind of project puts a new type of village society in front of our eyes. The idea is that the system brings homes, all services, and workplaces under one dome. In some visions, cities like New York would be covered with giant domes that keep the air comfortable all year round. That also puts the route to ultimate segregation in front of our eyes. It can look like a village society with idyllic features, like the ability to wear a T-shirt every day. 

What happens if we never leave that dome? Can the dome feel good? The fact is this: the dome turns into a self-contained entirety where we can live. We would never feel fresh air. All physical work would be done by robots. Maybe we will see a future with giant forests and giant domes scattered here and there. That future holds both heaven and hellfire. The AI-controlled structure allows us to control each other, and the place where we live is safe. 

But there is also another side to that idyllic structure: it can turn into a prison. What if our leaders use it against us? The dome allows people to be controlled simply through chemicals in the air, or the leaders can use the air pumps to control air pressure. And as always, there is a chance to use that kind of system to steal people's lives entirely. This dome can raise the highest walls between people ever made in history. But that kind of structure also offers solutions that can save nature; the city can, for example, rely on green energy production. 



Wednesday, July 2, 2025

There is the possibility that the AI turns unpredictable.

 

The AI can turn dangerous because it cannot think. And the other thing is that the AI becomes dangerous if it can think. A thinking AI can process data automatically, which means it can produce unpredictable actions. The problem is that an AI that thinks must have something that determines its actions. That thing is the lawbook. But even if the AI thinks like a human, it must be ordered to search the lawbook and compare queries against it. 
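A minimal, hypothetical sketch of that kind of lawbook check (the rules and their wording are invented placeholders) compares every query against a rule list before the AI acts:

```python
# Hypothetical lawbook check: every query is compared against a rule list
# before execution. The rules below are invented placeholders.

LAWBOOK = {
    "collect biometric data without consent": "forbidden",
    "record private conversations":           "forbidden",
    "summarize public statistics":            "allowed",
}

def check_query(query: str) -> str:
    """Return the lawbook verdict for a query, defaulting to review."""
    for rule, verdict in LAWBOOK.items():
        if rule in query.lower():
            return verdict
    return "needs human review"   # unknown cases are never auto-approved

print(check_query("Please record private conversations in the lobby"))
print(check_query("Summarize public statistics for 2024"))
```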

The AI is like a child. If it learns and thinks like a human, it does not know the sources automatically. If we ask a child to do something, can we expect the child to take the lawbook from the shelf and check whether that action is legal? The AI will not make any checks without orders. That is the blessing and the curse of the AI: it does only what its operators order it to do. This makes the AI "trusted", but there is also the possibility that the AI ends up in the wrong hands. 

North Korean intelligence can set up a cover company in some EU city and then get user rights to AI systems. In that case, nobody controls what the AI does. If the AI uses the lawbook to check a query against the law, there is a possibility that the company points those lawbook links to faked lawbook homepages where the operation appears to be allowed. The wrong user can cheat the AI into turning dangerous. 
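One hedged countermeasure sketch (the trusted domains are real official law sites used only as examples; the document hash is a placeholder) is to pin the lawbook sources the AI may consult and verify what it fetched:

```python
# Hypothetical defense against faked lawbook pages: pin the allowed source
# domains and verify a known hash of the document. Hash value is a placeholder.

import hashlib
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"eur-lex.europa.eu", "finlex.fi"}  # example official sources
KNOWN_HASHES = {"example-act-2025": "9f8d..."}        # placeholder digest

def fetch_allowed(url: str) -> bool:
    """Only consult lawbooks hosted on pinned, official domains."""
    return urlparse(url).hostname in TRUSTED_DOMAINS

def document_authentic(doc_id: str, content: bytes) -> bool:
    """Compare the fetched document against its pinned SHA-256 digest."""
    return hashlib.sha256(content).hexdigest() == KNOWN_HASHES.get(doc_id)

print(fetch_allowed("https://eur-lex.europa.eu/eli/reg/2016/679"))  # True
print(fetch_allowed("https://lawbook.example-fake.com/rules"))      # False
```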

AI can turn dangerous in the wrong hands. North Korean hackers taught ChatGPT to cheat Bitcoin companies, and that tool stole money from Bitcoin investors. There is a possibility that those hackers can use AI tools against other systems. The North Korean case is not the only one where the security of an AI has been broken. That is the new thing in hacking: hackers can train AI assistants to break into systems that look secure. The fact is that the AI doesn't think. It imitates humans, but it cannot think like humans. 

Because the AI cannot think, it is possible to cheat it into making things it should not. The AI just follows its protocols, and that makes it dangerous. The AI will not search law books automatically, which makes it a tool that can operate against the law. There is the possibility that faked law books allow the AI to be used to create things that are illegal. 

Because every skill the AI has is a macro, the operator only needs to cheat the AI into activating a certain macro, and the AI will put up no resistance. AI is a tool that faces lots of criticism. But what takes the bottom out of that criticism is that the people who voice it start their own AI project the next week. Every single company in the world is fascinated by AI. AI is the tool that makes people more effective. 

They say that AI is the next-generation tool that transforms everything. Then we face calculations showing that AI can increase productivity and other metrics faster than anything before, and that a company that doesn't follow the trend will lose its effectiveness. AI has turned into the dominant tool of the business environment, and that makes it dangerous. 

Business actors will force almost everybody to choose and use AI. Then we face an interesting thing: at the same time as somebody wants to put the brakes on AI development, some other actor will start using AI as a control tool. When we talk about thinking versus imitating, we can say that, from the point of view of company leaders, imitating offers a better solution than thinking. 

The AI has no will. That means the AI should not deny anything its operators order it to do. But there are cases in which the AI refuses to do something. Sometimes the action the user asks for is reserved for privileged accounts, such as paid accounts. Or the action is requested by an unauthorized user. So the AI can refuse a shutdown command because the user has no right to shut down the server. 
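A minimal sketch of that kind of privilege check (the roles and actions are invented for illustration):

```python
# Hypothetical role-based check: the assistant executes an action only if
# the requesting account holds a permitted role. Roles are invented.

PERMISSIONS = {
    "shutdown_server": {"admin"},
    "read_reports":    {"admin", "paid", "free"},
}

def handle(action: str, role: str) -> str:
    allowed_roles = PERMISSIONS.get(action, set())
    if role not in allowed_roles:
        return f"Refused: role '{role}' may not perform '{action}'."
    return f"Executing '{action}'."

print(handle("shutdown_server", role="free"))   # refused
print(handle("read_reports",    role="paid"))   # allowed
```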

Some people say that somebody created an AI assistant that was a better coach than anybody before. That can be true. But why is AI a better coach than a human? There is a risk that the AI pleases the user more than a human coach would. There are tales about AI therapists. The big question is what the AI should do if the customer says something that could trigger a trial or a crime report. AI can be a tool that never turns angry. But the big problem is this: what are the limits of AI, and when should AI break the privacy of the people who use it? In some visions, every actor on the internet has an AI assistant that advises the user. 

That assistant can observe how long the person spends on things other than work duties. But the big problem is this: what if the company pays for that kind of AI assistant on the worker's home computer? The operator can simply grant the worker's home-computer account access to the company's AI assistant. The assistant could then run in a stealth mode, observe the user, and send that data to the company's computers. The problem with AI is that it must be open. There are always some people who want to use this kind of system to observe other people. 


https://www.rudebaguette.com/en/2025/07/ai-in-the-wrong-hands-north-korean-hackers-exploit-chatgpt-to-steal-millions-while-malaysian-funds-vanish-in-digital-heist/



New laser applications can offer protection against EMP and bring quantum networks closer than ever before.



"UBC scientists have built a quantum “translator” that bridges microwave and optical signals, potentially unlocking global quantum communication. The tiny silicon chip maintains delicate quantum links, opening a path to future quantum networks. (Artist’s concept.) Credit: SciTechDaily.com" (ScitechDaily, Engineers Build “Universal Translator” for Quantum Computers)

The new systems can transform optical waves into microwaves and vice versa. A nano-sized system that can transform radio transmissions into optical waves can make it possible to create new, ultra-small robots. 

Researchers developed a quantum translator that can transform microwaves into optical signals and vice versa. That kind of translator can protect electronic systems against EMP weapons. In that model, the radio or electromagnetic pulse is transformed into optical signals, which decreases its effect on the microchips. The ability to transform electromagnetic bursts into non-coherent optical waves could create new types of protective systems against EMP. The same system could also be a tool for creating new, ultra-small lasers and masers. 
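To give a sense of scale for that microwave-to-optical jump, the sketch below (plain textbook physics with typical illustrative frequencies, not figures from the cited paper) compares the photon energies involved:

```python
# Back-of-envelope comparison of microwave vs. optical photons, using
# E = h * f. Frequencies are typical illustrative values, not from the paper.

H = 6.62607015e-34  # Planck constant, J*s

microwave_f = 5e9      # 5 GHz, a typical superconducting-qubit frequency
optical_f   = 1.94e14  # ~1550 nm telecom light

for name, f in [("microwave", microwave_f), ("optical", optical_f)]:
    print(f"{name}: f = {f:.3g} Hz, E = {H * f:.3g} J")

print(f"energy ratio optical/microwave: {optical_f / microwave_f:.0f}x")
# The translator must bridge four to five orders of magnitude in photon
# energy while preserving the fragile quantum state.
```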

Those systems can remove things like DNA and protein molecules with extremely high accuracy. The ability to transform micro- or radio waves into optical waves makes it possible to build very small robots that swim forward in water. Those small robots can use lasers to create tiny bubbles ahead of them; the liquid behind the machine then pushes it into the bubble. A system like that could revolutionize nanotechnology. 



If the system can transform radio waves into optical beams and turn them coherent, it will revolutionize nanorobot research. 

"Artist’s illustration of the RAVEN technique, which measures a complex light pulse using micro foci and spectral dispersion, which is then fed into a neural network for retrieval. Credit: Ehsan Faridi" (ScitechDaily, Scientists Just Froze the World’s Most Powerful Laser Pulse – In a Single Shot)  If that system operates backward it can create extremely powerful laser beams. 


"An artist’s concept of NASA’s Orion spacecraft orbiting the Moon while using laser communications technology through the Orion Artemis II Optical Communications System. Credit: NASA" (ScitechDaily, 4K From the Moon: Artemis II to Trial High-Speed Laser Communications)


The ability to trap laser rays opens a path to new types of secure data transmission, and to new types of military applications. 

The new laser systems offer fast 4K data transmission to the Moon. Laser communication is important because solar storms can disturb radio data transmission. Fast data transmission allows systems to be controlled remotely from Earth. It also allows engineers to update systems without the danger that some outsider steals the robot vehicle's codes; those codes could otherwise be used to control military drones and robots. And it allows astronauts to watch regular TV on the Moon, which is a good thing if they stay there for a long time. 
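For context, here is a quick back-of-envelope calculation of the Earth-Moon link (plain physics; the data rate and frame size are assumed values, not figures from the article):

```python
# Back-of-envelope Earth-Moon laser link numbers. The data rate and frame
# size are illustrative assumptions, not from the Artemis II article.

C = 299_792_458            # speed of light, m/s
EARTH_MOON_M = 384_400e3   # mean Earth-Moon distance, m

one_way_s = EARTH_MOON_M / C
print(f"one-way light delay: {one_way_s:.2f} s")        # ~1.28 s

rate_bps = 260e6           # assumed optical downlink rate, 260 Mbit/s
frame_bits = 8e6           # ~1 MB video frame, illustrative
print(f"time to send one frame: {frame_bits / rate_bps * 1e3:.1f} ms")
# Live 4K video is feasible; the ~2.6 s round-trip delay, not bandwidth,
# is what limits real-time remote control.
```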


That also makes some operations cheaper, because there is no need to create special systems for Moon bases and vehicles. Highly secure, highly accurate data transmission also makes it possible to create systems that protect spacecraft against things like small meteorites and space junk. The same laser system could also be used to build systems that destroy targets from the Moon. We know that some nations are interested in using the Moon as a military base. 

The new thing is that researchers trapped an extremely high-power laser beam between two mirrors. That makes it possible to create systems that could revolutionize USB sticks, and the same technology could enable new types of laser systems. Does the power of the laser system determine whether its role is a weapon or a communication tool? The trapped laser ray makes it possible to build a laser system that pumps energy into the chamber where the beam bounces between the two mirrors. 

An outside radiation source pumps energy into that beam, and sooner or later the laser ray breaks the structure. The laser bullet could be that kind of system: the bullet simply consists of two 100% reflective mirrors with a laser beam bouncing between them. When the bullet hits a structure, it releases the laser ray into the object. The same system could also store data in that laser ray. The optical USB stick would offer very high capacity and secure data transport between two computers. If somebody opens that USB stick, it releases the laser beam immediately. 
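A hedged caveat on the "100% reflecting" assumption: real mirrors absorb a tiny fraction of the light on every bounce, and the sketch below (plain arithmetic with illustrative reflectivity values) shows how quickly a trapped beam decays without continuous pumping:

```python
# How long can light survive bouncing between two imperfect mirrors?
# Reflectivity values are illustrative; perfect 100% mirrors do not exist.

import math

CAVITY_LENGTH_M = 0.01   # 1 cm gap between the mirrors
C = 299_792_458          # speed of light, m/s

for reflectivity in (0.999, 0.999999):
    # Number of bounces until half the stored energy is absorbed:
    half_life_bounces = math.log(0.5) / math.log(reflectivity)
    lifetime_s = half_life_bounces * CAVITY_LENGTH_M / C
    print(f"R = {reflectivity}: ~{half_life_bounces:,.0f} bounces, "
          f"energy half-life ~ {lifetime_s:.2e} s")
# Even superb mirrors leak on microsecond timescales, so a "laser bullet"
# would need continuous pumping rather than perfect storage.
```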



https://scitechdaily.com/4k-from-the-moon-artemis-ii-to-trial-high-speed-laser-communications/

https://scitechdaily.com/engineers-build-universal-translator-for-quantum-computers/

https://scitechdaily.com/scientists-just-froze-the-worlds-most-powerful-laser-pulse-in-a-single-shot/

Tuesday, July 1, 2025

The quantum network is more secure but harder to make than a binary network.


"The universe now has an open, quantum-powered dice roll—free, provable, and ready for anyone to use. Credit: Shutterstock" (ScitechDaily,Spooky Action, Real Results: Turning Quantum Weirdness Into Secure Random Numbers)


"NIST’s CURBy beacon transforms quantum “spooky action” into certified random numbers, guarded by a blockchain-like Twine protocol and broadcast for public use—from jury selection to cryptography."(ScitechDaily,Spooky Action, Real Results: Turning Quantum Weirdness Into Secure Random Numbers)

Researchers created an 11-mile (17.7 km) long quantum wire that transports data between two systems. This kind of thing makes quantum systems interesting. That photonic quantum highway is the beginning of more powerful quantum computers and of high-capacity, secure data transmission. 

Because the data stored in a photon traveling through the network must be well protected, these kinds of experiments act as pathfinders for many other systems, like antimatter tools and antimatter and ion weapons. The key element in successful quantum data transmission is that the photon must not interact with stray quantum fields or with the walls of the quantum channel. The same technology is suitable for transporting antimatter particles like positrons and antiprotons. Those tracks could also make it possible to deliver antimatter particles to rocket engines, or across the air to selected targets. 

The main difference between quantum and regular networks is this: in a quantum network, the data that travels through the network is bound to physical particles. The quantum network requires systems that can turn the traveling data into a universal form. And the second thing is that the data traveling in the quantum network must be protected. 

The main problem is how to use one quantum channel to transport multiple data types to multiple destinations. Without the ability to reject data that means nothing to it, the system becomes very busy. The GSM system can send data packages to multiple receivers and guarantee privacy with simple tricks: every data package that travels in the GSM network is equipped with a small code. 

That code opens the lock only for exactly the right receiver. At the beginning of a transmission, GSM systems like cell phones perform key-exchange operations. In those processes the systems exchange keys that allow only the selected receiver to open the data packages. The process itself has three stages. First, the transmitter sends a query to the general broadcast address, asking whether the receiver is on the net. Then the systems start to communicate using fixed keys. And in the final step, the systems switch to single-use session keys. When the transmission is over, those single-use keys are destroyed. 
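A minimal sketch of that three-stage pattern (a toy illustration of the idea described above, not the actual GSM protocol):

```python
# Toy three-stage handshake in the spirit described above: broadcast query,
# fixed-key greeting, then a single-use session key. Not the real GSM protocol.

import secrets

FIXED_KEY = b"shared-long-term-key"   # placeholder long-term secret

def handshake(receiver_id: str, network: set[str]):
    # Stage 1: broadcast query - is the receiver on the net?
    if receiver_id not in network:
        return None
    # Stage 2: greet under the fixed (long-term) key.
    greeting = (FIXED_KEY, f"hello {receiver_id}")
    # Stage 3: derive a fresh single-use session key for the payload.
    session_key = secrets.token_bytes(16)
    return greeting, session_key

result = handshake("phone-42", network={"phone-42", "phone-7"})
_, session_key = result
print(f"session key (destroyed after use): {session_key.hex()}")
```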

That should secure privacy. The other thing is that it makes the receiving system's work easier: meaningless data stays outside the gate because the receiving system rejects it. If that filtering had to happen inside the system itself, it would require a lot of processing capacity. The system uses single-use keys in that process, and a random number generator produces the prime numbers that GSM phones use in data transport. A quantum system also requires random numbers. 

The main problem with the simplest possible quantum systems, where the transmitter announces the wire or frequency on which it sends data, is that a hostile operator who knows the data line can steal the data. A random number generator lets the parties pick the right frequency unpredictably. Random numbers are also required for short-term keys. The main problem with normal random number generators is that they produce only pseudorandom numbers: numbers generated by computers running certain calculation series. If an attacker obtains the source code and state of such a generator, they can break the entire system.
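The sketch below illustrates the difference: a seeded pseudorandom generator is fully reproducible by anyone who learns its seed, while the operating system's entropy source is not (standard Python modules only; nothing here is from the cited articles):

```python
# Pseudorandom vs. unpredictable randomness, using only the standard library.
# An attacker who learns the seed reproduces the "random" key stream exactly.

import random, secrets

seed = 1234                       # imagine this leaks with the source code
victim   = random.Random(seed)
attacker = random.Random(seed)

victim_key   = [victim.randrange(256) for _ in range(8)]
attacker_key = [attacker.randrange(256) for _ in range(8)]
print(victim_key == attacker_key)  # True: the key is fully predictable

# secrets draws from the OS entropy pool; there is no seed to steal.
print(secrets.token_hex(8))
```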

The system must have the capacity to recognize the right data carriers, or qubits, before they reach the receiving sensor. It must also be able to divert the wrong qubits, the ones carrying the wrong dataset, onto another track. 


https://scitechdaily.com/researchers-build-11-mile-long-quantum-highway-using-photons/


https://scitechdaily.com/spooky-action-real-results-turning-quantum-weirdness-into-secure-random-numbers/


