Monday, June 30, 2025

The new open-source robot is a tool for everyone.



Open source opens a path to open applications. In an open application, the physical tool is a platform that can do "everything and more" that humans can do. An open application means that the robot itself is a platform that can be equipped with the tools and programs that determine its purpose and work. The human-shaped robot is a tool that can do "all the things" that humans can do. The robot body can be remotely controlled, or it can run as an independently operating system. 

Macro learning, where a robot learns through modules, makes the robot's limited computing capacity more effective. The operator uses a system that records the things that the robot must do in certain situations. When the robot does something for the first time, the operator creates a macro. Then, in similar situations, the robot can launch the macro independently or ask the controller to perform the action. 

The idea is taken from text editors and spreadsheets, which let the user record commonly used actions. Macro programming for robots follows the same principles. The thing that makes human-shaped robots very good tools is that they can act as builders, cab and bus drivers, fighter pilots, and firefighters, and take on all kinds of dangerous missions. The same robot can change its role in less than a second. The things that separate firefighter robots from bus-driving robots and military robots are the skills, or datasets, that the system can use. The operator only has to change the robot's dataset. 

Then the system takes on its new role. The datasets, or skills, are collections of macros. Those macros are activated when something matches their descriptions. This means that when the fighter-pilot robot operates, things like alarm signals activate certain macros. Open-source robots that act as cleaners are a good idea. But people don't always remember that changing the program turns those same robots into tools that can operate as commandos. 
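As a rough illustration of the macro-and-dataset idea described above, here is a minimal sketch in Python. The situation names, macro steps, and the SkillSet structure are illustrative assumptions, not part of any real robot platform.

```python
# Minimal sketch of macro learning: a "skill" is a collection of macros,
# and a macro is replayed when an observed situation matches its trigger.
# All names and triggers here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Macro:
    trigger: str        # description of the situation that activates the macro
    steps: list[str]    # recorded actions to replay

@dataclass
class SkillSet:
    name: str
    macros: dict[str, Macro] = field(default_factory=dict)

    def record(self, trigger: str, steps: list[str]) -> None:
        """Operator records a macro the first time the robot handles a situation."""
        self.macros[trigger] = Macro(trigger, steps)

    def handle(self, situation: str) -> list[str] | None:
        """Replay the macro if the situation matches; otherwise ask the operator."""
        macro = self.macros.get(situation)
        if macro is None:
            print(f"[{self.name}] unknown situation '{situation}', asking operator")
            return None
        return macro.steps

# Swapping the dataset changes the robot's role without touching the hardware.
cleaner = SkillSet("cleaner")
cleaner.record("spill on floor", ["fetch mop", "wipe area", "return mop"])

firefighter = SkillSet("firefighter")
firefighter.record("smoke alarm", ["locate fire", "fetch extinguisher", "spray"])

robot_skillset = cleaner               # current role
print(robot_skillset.handle("spill on floor"))
robot_skillset = firefighter           # operator swaps the dataset: new role
print(robot_skillset.handle("smoke alarm"))
```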

When researchers create robots that they can teach, we sometimes forget one thing: those robots can operate as networks. When somebody teaches or creates a macro for one robot, that robot can spread the macro over the entire network. And here is the problem with the "machine rebellion". Machines will not rebel. This is the key element in robotics. 

But should we refine that argument? We should say that machines will not rebel autonomously. So we should not worry about a machine rebellion as such, but about a human-controlled machine rebellion. We can imagine a situation where somebody simply buys, say, a million housekeeping robots, then changes those robots' programs, and the system is ready for combat. 


Robots can be dangerous to humans for two reasons: 


1) They are made to be dangerous. That means that things like combat and security robots can be dangerous. 


2) Robots can turn dangerous if there are some errors in programming. 


All errors that machines, and especially computers, make are made by programmers. A computer is not automatically dangerous. In the same way, robots are not dangerous if they operate as they should. The problem is that robots that are not programmed with sufficient accuracy become dangerous. In cases where robots refuse to stop their actions, they can turn dangerous. 

There is a possibility that, in the case of a fire, a robot that works as a house guard blocks the firefighters' operation. The reason can be that these kinds of emergency situations are not defined in its program, so when the firefighters come in, the robot can think that they are intruders. Another case is a law-enforcement robot that has no description of things like umbrellas; that robot can think those objects are weapons. 

In another scenario, programmers forget to define green T-shirts or green balloons for a car's autopilot program. That can cause an error if the autopilot classifies a green balloon as a green traffic light, and that causes a destructive situation. 

In some models, one civilization can cause the end of another civilization by accident. The system's programmers simply forget to write the braking protocol into the computer. Then the probe arrives in the star system, the AI never slows down, and the probe impacts the planet at a speed of about 20% of the speed of light. 

That leads to a model in which the most dangerous thing in the universe is an early Kardashev type 2, or late Kardashev type 1, civilization that sends its first probes to another solar system. 

That civilization cannot yet handle that technology. Without wormholes, it takes years or centuries to get information from the spacecraft. And if there are errors in the programming, the spacecraft can impact the planet. The theoretical minimum mass of such a probe is about 10,000 tons, and if it impacts a planet, there is not much left. 


https://www.rudebaguette.com/en/2025/06/humanoid-bots-for-everyone-new-open-source-robot-unveiled-in-the-u-s-makes-advanced-robotics-affordable-for-total-beginners/


Sunday, June 29, 2025

Why would AI kill humans rather than let them shut down the server?




These kinds of situations are very bad, but the problem is in the program code. When we talk about AI and its ability to kill humans, we must realize something: the AI does not understand what those things actually mean. If we think about this like a programmer, we might understand the situation better. When we write programs, we must define variables in the code, and "human" is one of those variables. In traditional programming, when something matches a variable, that runs a subprogram or macro. There are descriptions of things that launch a certain macro. The variable activates the pointer that begins the subprogram. 

Or, put another way, it calls the subprogram. In traditional programming the thing goes like this: when the user writes the word "Goofy", there is code that activates the Goofy routine. The program jumps to the point where the macro that the pointer "Goofy" points to begins. In AI programming, those variables and pointers are much harder to describe. 
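A hedged sketch of the traditional keyword-to-subprogram dispatch described above: the recognized word acts as a key that points to the subprogram. The "Goofy" handler below is purely illustrative.

```python
# Traditional dispatch: a recognized word points at the subprogram to run.
# In AI systems the "match" is fuzzy and learned, which is what makes the
# variables and pointers far harder to specify. Names here are illustrative.

def goofy_routine() -> str:
    return "running the Goofy macro"

def default_routine() -> str:
    return "no matching macro, doing nothing"

DISPATCH = {
    "Goofy": goofy_routine,   # the pointer that the word "Goofy" points to
}

def handle_input(word: str) -> str:
    routine = DISPATCH.get(word, default_routine)
    return routine()

print(handle_input("Goofy"))     # -> running the Goofy macro
print(handle_input("Pluto"))     # -> no matching macro, doing nothing
```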

That means that if "human" is not well described to a system or algorithm, the system can even kill a human. When computer operators work with servers and other hardware in data centers, they must sometimes shut a server down. In those cases, the data is copied to a standby system that keeps the service running without interruption. The problem is this: if the system lets anybody shut it down, that allows vandalism. 

The system might have orders to deny or stop malicious actions. It requires precise rules about things like when it must, or should, stop an action. 

If the system has an order to stop that kind of action, there is a possibility that it simply kills the actor. When we think about cases like machine rebellions, or situations where computers turn against humans as in 2001: A Space Odyssey, those situations can happen because of a programming error. In that movie, the HAL 9000 computer kills almost the entire crew of the spaceship. Can this happen in real life? The answer is in the programming. If the computer has no description of humans and has an order to remove malfunctioning systems from the spacecraft, that can lead to destructive outcomes. 

There is also the possibility that the AI recognizes humans using cameras and IR systems. When humans put space suits on, the AI cannot see their faces because of the dark visor, and the space suit does not let infrared radiation through. That means the system can "think" that an astronaut in a space suit is a robot. If an astronaut then makes a mistake, the system classifies the suited human as a malfunctioning robot and tries to remove it. 

When the astronaut cannot catch a tool, the system tries to remove the astronaut that it thinks is a robot. Removing a robot because it cannot catch a tool seems like an overreaction. The reason for that overreaction is that there are no descriptions of which mistakes are big enough that the robot must be removed. If those things are not described, every mistake the robot makes causes removal. If a programmer simply writes "mistakes cause removal", the system translates even the smallest mistakes, like dropping one screw on the floor, into cases where the robot must be removed. The system will not automatically distinguish between small and big mistakes. If the only criterion is "a mistake", the system removes the robot even if it just drops a cup from the table. 
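The point about distinguishing small and big mistakes can be shown with a tiny, hypothetical rule: unless the programmer grades mistakes by severity, every mistake maps to "remove". The severity scores and threshold below are assumptions for illustration only.

```python
# Illustrative only: grading mistakes by severity so that minor slips
# (dropping a screw or a cup) do not trigger the same response as a
# genuine malfunction. The scores and threshold are arbitrary assumptions.

MISTAKE_SEVERITY = {
    "dropped screw": 1,
    "dropped cup": 1,
    "missed tool handoff": 2,
    "damaged airlock seal": 9,
}

REMOVAL_THRESHOLD = 8   # only mistakes at or above this severity trigger removal

def decide_action(mistake: str) -> str:
    severity = MISTAKE_SEVERITY.get(mistake, 5)   # unknown mistakes get a middle score
    if severity >= REMOVAL_THRESHOLD:
        return "remove unit for inspection"
    return "log and continue"

# Without such a scale, the naive rule "mistakes cause removal" treats
# every entry above identically.
for m in ["dropped cup", "damaged airlock seal"]:
    print(m, "->", decide_action(m))
```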


Wednesday, June 25, 2025

Computers and hyperdeflation.



Deflation, the opposite of inflation, is also very dangerous for an economy. Inflation decreases the value of a currency. Deflation is the opposite phenomenon, where the amount of currency decreases compared to the assets it can buy. The ultimate version of hyperdeflation would be that all money except one single euro is erased from the central bank. That would mean that the entire state could be bought with one euro. That is one of the things that can cause destruction. 

But then we can remember that the external value of that euro would be extreme. The external and internal values of a currency can be different; the external value means the value compared to some other currency. There is a possibility that hackers will use a computer to break into some state's central bank, or the EU central bank, and simply erase all currency. 


1) Inflation: an unintended, uncontrolled decrease in currency value. 


2) Devaluation: an intentional, controlled decrease in currency value, made by the central bank, relative to other currencies. 


3) Deflation: an unintended, uncontrolled increase in currency value. 


4) Revaluation: an intentional, controlled increase in currency value. 


That can happen by destroying the registers that hold the serial numbers of physical notes, and the same hackers can destroy the databases that keep the books of digital currency. Hyperinflation can be even more dangerous than ordinary inflation. When we think about currencies, their values are connected with other currencies. A currency's value is determined by how much of that currency is in the markets. This is the thing that makes the dollar a very special currency. 

Because the dollar is the locomotive of the world economy, there are lots of dollars outside the USA. If that currency, which is partially connected to cryptocurrencies, returns to the markets or returns home, it accelerates inflation. That happens when dollars are exchanged for some other currency. 

But if lots of dollars suddenly vanish, for example into the bank accounts of cryptocurrency companies, or are simply destroyed by hackers, that causes deflation. 

The central bank, the Fed, must create more money to keep the currency's value at a stable level. If the Fed does not create that money, the result is deflation. And if that money comes back to the markets, the result is inflation, because there is then more currency in the markets relative to the assets that back it than before. But if the amount of currency decreases relative to those assets, the result is deflation. 
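A minimal worked example of the mechanism described above, assuming a crude quantity-of-money view in which the value of one unit of currency scales with the assets and goods that back it divided by the money supply. All figures are invented for illustration.

```python
# Crude illustration: if the backing assets stay the same, the "value" of one
# unit of currency falls when the money supply grows (inflation) and rises
# when it shrinks (deflation). All figures are invented.

def value_per_unit(backing_assets: float, money_supply: float) -> float:
    return backing_assets / money_supply

ASSETS = 1_000_000.0          # goods and state assets, held constant here

for supply in (1_000_000.0, 2_000_000.0, 500_000.0):
    v = value_per_unit(ASSETS, supply)
    print(f"money supply {supply:>11,.0f} -> value per unit {v:.2f}")
# 1,000,000 -> 1.00 (baseline), 2,000,000 -> 0.50 (inflation),
# 500,000 -> 2.00 (deflation)
```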





The AI learns like a child.



Why did the old ATARI chess console beat ChatGPT? And why could that old-fashioned chess console beat humans at chess? The reason is the same as in the cases where our robot mowers always get stuck while working. If we have never played against those antique game consoles, we don't beat them. The old-fashioned ATARI contains a couple of built-in games, but we cannot predict how it moves its pieces if we have not played against it. We beat those consoles only because we learn how the console plays its game. Those consoles use traditional linear computer programs. If a piece is taken, the system drops the code lines that were written for that piece. 

The old-fashioned ATARI shows that AI requires learning methods similar to humans'. So why are our robot mowers unable to do their job? When we program those systems, we must stop thinking like programmers who use linear, symbolic programming languages. We should take control of the mower and drive the area by pushing the machine across the lawn. The system must have navigation tools that help it determine its place in the yard. Those navigation tools can be three or four radio beacons that help the robot determine its position without GPS. 
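A sketch of how three fixed radio beacons could give the mower a position without GPS, using plain two-dimensional trilateration. The beacon coordinates and distances below are made-up example values, not measurements from any real system.

```python
# Trilateration sketch: given three beacons at known positions and the
# measured distances to each, solve two linear equations for (x, y).
# Beacon positions and distances below are made-up example values.

import math

def trilaterate(beacons, distances):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise gives a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; position is ambiguous")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

beacons = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0)]
true_pos = (6.0, 4.0)
distances = [math.dist(true_pos, b) for b in beacons]
print(trilaterate(beacons, distances))   # ~ (6.0, 4.0)
```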

The system can also have GPS, which helps locate the robot if somebody steals it. When the owner pushes the mower through the first mowing run, that helps the robot's system estimate how much energy it needs and plan where to recharge its battery. The system also needs information about slopes and potholes. Those things might be easy for humans or big robots, but for small robots they can cause trouble. When we teach AI, we must remember that there are many variables that mean nothing to us but are very important for robots that must do complicated things. 

Many complicated things, like working in cramped places, are automated in our bodies. But if we want to make a robot plumber, we must program every movement that plumbers make on the job. That requires new programming tools, such as AI-based systems that can follow the plumber at work and then copy those movements to the robot's body. This is one thing that requires advances. Traditional programming tools are not suitable when we want to describe such a range of actions to robots. 

https://www.rudebaguette.com/en/2025/06/chatgpt-just-got-wrecked-by-a-1977-atari-vintage-console-destroys-modern-ai-in-the-most-ridiculous-chess-match-ever/



Tuesday, June 24, 2025

Cryptocurrencies and quantum technology.





Encrypting information increases security, and security is trust. Have you thought about what could happen if somebody breaks the code that a central bank uses to secure its currency? That allows the hacker to start creating large amounts of currency. In normal trade, people use digital currency. The big difference between digital currency and cryptocurrency is that the latter is not controlled by the state. Cryptocurrency is like a stock: investors can buy it and then sell it to other people, or they can exchange their cryptocurrency for state money, or "real" money. The problem is that nobody controls Bitcoin or other cryptocurrencies. 

There is a lot of money in cryptocurrency companies' accounts, and if that money is released onto the markets, it causes hyperinflation. The main problem of the U.S. economy is connected to its role as the dominant actor in global markets. There are lots of dollars stored in other states' banks, and if those dollars are released onto the markets, it collapses the dollar's value. Bitcoin can cause a similar effect, but the main difference is that central banks don't even know about, let alone control, the money that cryptocurrency companies handle. 

The Bitcoin countdown has begun. According to some estimates, quantum computers could crack Bitcoin's security algorithm in less than a week, and there is a possibility that some actors have already cracked that code. An AI-driven learning neural network can do many of the same things as quantum computers. In those systems, the AI-driven network architecture drives autonomous workstations to run the code-breaking algorithms. The AI can divide the number range that the defending system creates, for example using the Riemann zeta function, between those workstations, and then those systems can try to crack the security key. 

If the system can use a botnet with thousands of computers, decryption can become very fast, and that can endanger Bitcoin. The main problem with Bitcoin and other cryptocurrencies is that they offer a place where actors can dump their money. There is no floor under cryptocurrencies. The thing that makes cryptocurrencies dangerous is that a considerable part of a state currency can be dumped into cryptocurrency. A lot of currency is then out of the markets, and when the state sees that there is not enough money, it can issue more currency.  

The fact that cryptocurrency can be bought back from investors means that if a cryptocurrency company releases a very large amount of state currency onto the market, it collapses the value of that currency. The inflation mechanism is that simple: all it takes is a lot of money. When the amount of currency rises compared to state assets and goods, the value of the currency drops. There is a lot of money invested and locked in cryptocurrencies. There is also a possibility that somebody starts to create their own cryptocurrency using a stolen or cracked security code. If this is not noticed in the cryptocurrency companies, it can cause a dramatic collapse in the markets. 


https://www.rudebaguette.com/en/2025/06/bitcoins-countdown-has-begun-experts-reveal-when-quantum-computers-will-finally-shatter-its-legendary-encryption/

Sunday, June 22, 2025

AI is the tool that can change web searches forever.


The new AI will not destroy Google immediately, but those new systems can have a big influence on Google over a longer period. The fact is that Google controls such a large mass of data that its power over AI development is stunning. However, the new AI-based tools can combine search results from multiple search engines, reference those results, and list the home pages that they used for the answer. Google's dominance ends when those AI-based search solutions can build databases so large that they become independent of traditional search engines. 

And that is the main problem with those things. Google is not the only search engine in the world. There are many other search engines that want to knock Google off its position, but some of them are powered by Google; they only offer another interface between a search engine and the user. Search engines require lots of computing power, and so does AI. Companies like Google sell data. 

That means those search engines are operated by private corporations whose business is to sell data. The thing that makes those companies so powerful is that they collect data about the clicks that users give them. The number of clicks raises the page rank, and that raises the home page's position on the search result list. This is one thing that draws criticism of the system: it is hard to get new home pages to the top of the ranking list. People normally look at only a couple of the top home pages on the list and then select the one that they think is best. 

This is the Matthew effect in web search: home pages that already have a massive number of clicks get many more, and home pages that have no clicks do not get any. That is how web dominance works. AI-based solutions can use many search engines at the same time. What makes those applications interesting is that their results, or references, could depend on what type of people read those home pages, if the AI-based web search application can see that. If some home page is used by professors who work at a trusted university, that can justify other people trusting it. But how do we confirm those people's real identity? 
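The Matthew effect in ranking can be shown with a tiny preferential-attachment simulation: each new click goes to a page with probability proportional to the clicks it already has. The page names and numbers are invented for illustration.

```python
# Tiny preferential-attachment simulation of the Matthew effect in ranking:
# every new click lands on a page with probability proportional to the
# clicks it already has, so early leaders pull further ahead.
# Page names and counts are invented for illustration.

import random

random.seed(42)
clicks = {"established.example": 100, "newcomer.example": 1}

for _ in range(10_000):
    pages = list(clicks)
    weights = [clicks[p] for p in pages]
    winner = random.choices(pages, weights=weights, k=1)[0]
    clicks[winner] += 1

print(clicks)   # the established page captures nearly all new clicks
```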

One solution is a guestbook where people can state their profession. But we must also realize that confirming those answers is quite difficult: people can write anything they want in a guestbook. Confirming those answers requires strong identification, and that works against privacy. Privacy protects people on the net, but the same thing offers protection for cheaters, propagandists, and net criminals. There are countries that collect data on all their citizens. 

But people will not need only lists of search results. The problem with those lists is that they are based on web addresses. There is a possibility that somebody changes the data that those home pages contain: first the page rank is raised using some addictive material, and then the operators change the texts or information on the home page. That is an effective tool for hybrid operations and propaganda work, and it is one thing that creates the need for more advanced tools that can use data at a deeper level than regular search engines. 

The biggest problem with modern networks is disinformation. The other problem is how to define disinformation. People like V. Putin see that question differently than regular Western actors. That is one of the things we must realize: the same tool that works against propaganda and disinformation can become the ultimate tool in their hands. 


https://www.rudebaguette.com/en/2025/06/chatgpt-wont-kill-google-sam-altman-downplays-the-hype-while-quietly-reshaping-the-future-of-search-with-every-new-update/

Saturday, June 21, 2025

The Chinese AI strategy is simple. That country wants to be the number one actor in AI development by 2030.



The new Chinese AI called "Manus" doesn't need humans for self-development. We can wonder how the Chinese can make such a tool. The thing that guarantees success is high-ranking official and political support for the AI framework. The Communist Party of China has set an official goal that China will be the number one actor in AI development by 2030. That is the official goal. And we know that China wants to use AI as a tool to control people and as an ultimate intelligence tool. 

That means Manus is quite near artificial general intelligence, AGI. AGI is a tool that can control things across the whole technological field. The AI can connect robots' operating systems to itself, or it can rewrite the operating system for the machines, and that makes those AI systems more versatile than normal AIs. But the fact is this: AI will be the new normal, and China wants to be number one in that development work. The AI strategy for Chinese R&D is simple: China wants to be number one in AI development by 2030, and that work gets high-level political support. 

This means that people who work on AI and R&D programs in that system get privileges for that work. In China, authorities and high-level political leaders understand the benefits that AI can bring to their work. The hacker spies who brought large amounts of highly classified data into the hands of Chinese intelligence demonstrated to the Chinese authorities what high-level political support and investment in data skills can do. Those hacking cases from 2011 to 2015 brought hypersonic missile technology and other things into Chinese hands. And still today we discuss how we can stop AI development. So, maybe the Chinese do want to be number one in AI development. 

*******************************************************************

Morgan Stanley's report says: 

- China is becoming a world leader in AI because of government support and its focus on computing efficiency.

- The country’s AI industry and related sectors could grow into a market valued at $1.4 trillion by 2030.

- China’s AI investments may break even by 2028 and deliver a 52% return on invested capital by 2030.

- U.S. export controls could create barriers for AI development in China but won’t stop its progress.

- AI is likely to boost China’s GDP growth by powering investment in the next two to three years and improving productivity over the longer run.

https://www.morganstanley.com/insights/articles/china-ai-becoming-global-leader

*******************************************************************


They can make things like AI spying tools that operate on the net like ghosts and steal data from anywhere they want. And that data can be fed to another AI that makes models for manufacturing platforms. Maybe we tell the Chinese that they should not steal data from companies that spent years producing it, and maybe the Chinese will understand us and stop making those tools. AI is the new tool in the arms race. It can collect data from multiple sources and then use that data to create new solutions. One way a hostile actor gets access to data is to trick people into using that actor's AI. That can happen simply by offering people better tools. Such a tool can copy data to databases that intelligence officials can access. But how does one remove warnings and things like censorship? 

The Chinese AI refuses to discuss things like the Tiananmen incident, where Chinese authorities crushed the opposition. One option is simply to make their AI for foreigners, and then that AI can dump data to the PLA's intelligence section. That means the AI can operate like any other AI, but it dumps data to China through its backups. So, if some Western actor creates something using those AIs, backup copies of that data go straight to the servers of Chinese intelligence officials. 

This is one of the things that we must realize when we put limits on AI development. High-level political support means that those AIs and other tools are more effective than we even realize. That support means that those systems and their developers have full access to users' data, so there are no brakes on that work. 


https://www.atlanticcouncil.org/content-series/strategic-insights-memos/assessing-chinas-ai-development-and-forecasting-its-future-tech-priorities/


https://www.ginc.org/chinas-national-ai-strategy/


https://www.morganstanley.com/insights/articles/china-ai-becoming-global-leader


https://techbriefly.com/2025/03/10/manus-is-china-ai-that-works-without-humans/



Sometimes AI is like a child.



AI is more toddler than terminator. So we can think of AI as a child who controls things like robots, vehicles, rockets, and even weapon systems. When we think about the latest advances in the world of AI, and the self-advancing, self-developing AIs that can create other AIs, we face one big problem. The problem is that if we want to be effective, we must use AI. And most AI companies must find their funding from private persons or customers who are willing to pay for licenses. This means that the typical users, or license owners, are not individual humans; they are companies whose purpose is to maximize the profits of their owners. That means AI is a tool that can kick coders out of the workplace. 

And those AI companies must always add new features to their products that their customers are interested in, so that customers keep buying licenses. A significant issue with these companies is that they are privately owned; they are not controlled by governments. Private companies do what their stockholders want them to do. They can outsource their production to countries where there are no laws that control data mining and personal data collection, or to countries where the authorities do not care about those laws. It is possible to buy "special permissions" to collect that data for products that benefit the defense of the state. 

People ask for some leader who controls AI development. The problem is that AI development happens inside private companies. Those companies have no right to make agreements and coordinate their work with each other; such things are limited by cartel regulations. That means competition rules can prevent discussions between companies and common goals for AI development. The big thing is that Chinese authorities and intelligence can establish front corporations in other countries and hire AI specialists to work for them. That is the new way to do intelligence work. Those actors can simply hire software specialists who have been fired from their jobs. In China, it is impossible to establish a company without the authorities' support and cooperation. If the authorities do not get access to a private company's servers, the company must stop its work. 

They can then use people who have EU passports and citizenship as the actors who set up the company for them. People who work in PLA (People's Liberation Army) intelligence have methods to persuade people who have left China to cooperate with them. They can say that family members who enter military service will face problems in the army, and that to cooperate, those people should establish companies in certain countries. Then all the work that those people produce is copied to servers located in Hong Kong and Beijing. That is one way to do business that benefits those Eastern actors. When people use Chinese AI, they also deliver information to that country. In those Eastern countries, the security laws do not limit the authorities' access to people's personal data; those laws bind only private actors. 


Thursday, June 19, 2025

The question is: why do we do things the way we do?



We all read discussions where somebody desperately writes that people use too much AI. We always read how students and writers use AI for their books. And then we forget to ask: "Why do they do that?" The answer is this: if you are a writer or somebody who does creative work, creativity is a hard thing when you must earn money from it. The requirement is that a person produces texts and drawings without mistakes, and follows orders. Most people who work in creative jobs are not painters or novelists. They work in the commercial business making quite boring advertisements. They don't paint things like the Mona Lisa. 

That means people might think that AI gives them an easy way out of the problem: they hand the job to an AI that works for them. In places like colleges and universities, many students are very young, and if one of them starts using AI, that person pulls everybody else in. The main component in working life is effectiveness. If you make too many mistakes, you are fired. If you write too few texts or take too few pictures, you are not an effective worker. AI is a tool that can solve that problem. It offers tools that make everybody effective. 

When we use AI, we must ask ourselves in the mirror: do we use that tool of our own free will, or do we need to use it because it makes us effective? The thing is this: private companies have only one purpose. They must bring money to their owners. That means those companies, or their leaders, must maximize their profits. And that is the key element of AI use. AI is a tool that maximizes effectiveness in working life. It makes everybody creative and productive. That means everybody can make nice images and other things by using AI. 

And then we can see one thing that we might be afraid of: that AI removes illustrators and some coders from the offices. That means AI destroys creativity. From a company's point of view, creative workers are individual actors, and those individual actors are hard to replace.

If some work depends on a person's individual skills, that can cause problems in the workplace. Companies need productive and creative workers, but at the same time the company wants to control those things. 

AI is the tool that instrumentalizes creativity. The company that owns the AI license controls the tool that a person uses for creative work. When the company takes the tool from the worker's hand, the worker cannot do that work anymore. Every individual worker whose skills are hard to replace carries a risk: the risk that the person starts to rebel. If a person is hard to replace, the company must sometimes show more tolerance than with other workers. And that is always a problem. 




Wednesday, June 18, 2025

Researchers found interesting things in brains.




Researchers have found interesting new details in the human brain. First, our brains emit light. And that light raises an interesting idea: could brains also have optical, photon-based ways to transmit data between neurons? And if neurons have that ability, how effective and versatile is it? We assume that there is nothing unnecessary in our brains, so the light in our brains must have something to do with the neurons. But do neurons use it to transmit complicated information, or is it meant only for cleaning the neural channels? 

That interesting light raises a question: does the effect have some connection to the light that people report seeing in near-death experiences? Around death, the neural channels empty of neurotransmitters and electrical activity, which could mean our nerves are more receptive to those signals than usual. So could that light mediate some kind of interaction between neurons or axons? Is there some point on neurons that reacts to that ultra-weak photon emission, UPE? 

Does our own neural activity drown that light out? There is an observation that dead organisms emit dimmer light than living ones, and maybe that light turns dimmer as a creature approaches death. An article in the Journal of Physical Chemistry Letters, "Imaging Ultraweak Photon Emission from Living and Dead Mice and from Plants under Stress", reports that ultra-weak photon emission from dead mice and stressed plants becomes dimmer. 

And the question is this: can humans perceive that ultra-weak photon emission, and its changes, subliminally? The article says that all living organisms emit a weak light that disappears when the creature dies. Mammals also emit IR radiation, but the main question is whether the ultra-weak photon emission, UPE, happens on purpose or is some kind of leak. And can humans see the phenomenon without the observation ever reaching our consciousness? 

There are two things that self-learning systems must do to become effective. Effectiveness means that the AI, or the human brain, should ignore irrelevant information. If that does not happen, the databases and data mass in the system keep growing. When the system makes a decision, it must select the right data from the data that it has and then make the decision using the relevant data. This makes the situation problematic: the system must decide what kind of data it will need in the future. 

And that is quite hard to predict. When we learn something, we cannot be sure whether we will ever need those skills again. Maybe we never again need the skills that we learned in the military. But as we see, the future is hard to predict. The other thing that the AI must do is to coordinate its processors' actions the way human brains do. In human brains, brain cells oscillate at multiple frequencies, and scientists say those differences in oscillation frequency exist to avoid rush hours in the axons. 

That means brain cells give the axons time to clear. Because brain cells run at different frequencies, it is possible to control the axons and prevent the situation where multiple neurons send data into the same axon at the same time. Those multiple rhythms allow the brain to avoid congestion in the axons. The same idea could bring a fundamental advance in technology. If the system that runs the AI has no controller, or system architecture, that makes the processors operate at slightly different times, all processors send data at the same moment to the same data gate. That causes congestion and jams the system immediately. The same problem appears in electrical systems that use electric impulses for data transmission. 

Processors that operate at the same moment can form standing waves in the data channel, and that can burn the system. There are many interesting details in the human brain. They open the vision that the brain might also have an optical way to transport information. Researchers are trying to find out the purpose of that light, and if they find a point in, or on, neurons that reacts to it, they have found a new level in the brain. Another interesting detail is that different parts of the same neuron learn in different ways. That means the neuron itself can be more intelligent and versatile than we thought. 
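A toy sketch of the "different rhythms" idea: if every worker fires on the same tick, the shared channel sees a burst, while per-worker phase offsets spread the load. The tick counts, periods, and offsets are arbitrary assumptions, not a real bus model.

```python
# Toy model of staggered rhythms: count how many workers try to use the
# shared data channel on each tick, first with identical phases and then
# with per-worker offsets. Periods and offsets are arbitrary assumptions.

def channel_load(num_workers: int, period: int, ticks: int, staggered: bool):
    load = [0] * ticks
    for w in range(num_workers):
        offset = w % period if staggered else 0   # brain-like phase offsets
        for t in range(offset, ticks, period):
            load[t] += 1
    return load

synchronized = channel_load(num_workers=8, period=4, ticks=12, staggered=False)
staggered = channel_load(num_workers=8, period=4, ticks=12, staggered=True)

print("synchronized:", synchronized)   # bursts of 8, then idle ticks
print("staggered:   ", staggered)      # load spread evenly across ticks
```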


https://neurosciencenews.com/hippocampus-neuron-rhythm-29277/


https://www.psypost.org/different-parts-of-the-same-neuron-learn-in-different-ways-study-finds/


https://www.psypost.org/neuroscientists-discover-biological-mechanism-that-helps-the-brain-ignore-irrelevant-information/


https://pubs.acs.org/doi/10.1021/acs.jpclett.4c03546


The AI removes trainees from workplaces. And that is not a good thing.



AI doesn't take your job; it keeps new workers from entering the field. That raises questions about how workers can improve their skills. What if humans, especially ICT workers, lose their basic skills? What if all programming turns into cases where the system operator just gives orders to the AI in ordinary language, and the AI then follows those instructions like a human coder?  

But the main thing is that the requirement of effectiveness, and ultra-capitalism, forces company leaders to make that choice. They choose AI instead of hiring programmers, and that is one of the biggest problems in the ICT business. But then we must remember that today is the end of a long road. AI is the next step in a continuum where work like software coding has been outsourced to countries like India. Western coders did not get jobs because it was cheaper to hire experienced workers from India or some other faraway country. 

When companies outsource coding to the Far East, they become vulnerable through that work. Those workers are working in countries that are members of BRICS, so they can work under the control of intelligence services or other authorities who order them to make spyware and other malicious tools. We are people who live in Western democracies. We learned that if somebody acts as a spy, that person will be arrested. In the same way, we believe that if somebody hacks into our country, we only need to request that the local authorities arrest those people. We did not expect that those hackers worked for the Chinese intelligence service or government, and that they were protected by the government. 

The next logical step is that AI starts to write the code, and that takes jobs from humans. This means that programmers lose their basic skills, and without basic skills they cannot develop advanced skills. Programming is like any other learning. If we compare a programmer's progress with going to school, we must realize that every person in the world writes their first words once, and before that they must learn to read. Before we can learn advanced mathematics, we must learn the basics; we all once calculated 1+1=2. If we don't learn the basics, we cannot learn anything new. 

And that is the beginning. Without the first grade, we will not learn anything. In the same way, when we want to learn to code or make computer programs, we must learn basic skills. Without basic skills, there is no ability to learn advanced programming. If companies don't offer jobs to trainees, people cannot learn the skills that they need at an expert level. 

That has repercussions across the entire ecosystem. If the boss doesn't know what code is being written, that can cause problems. To observe and supervise subordinates' work, the boss needs to know what they do. And if the boss doesn't know what the subordinates should be doing, somebody can inject malicious code into the program. In the age of modern communication, the system needs less than a second to infect its target. 

There are articles about a North Korean mobile telephone that was secretly smuggled to the West. Those mobile telephones are an Orwellian dystopian nightmare. By connecting those mobile telephones to AI, the leader can surveil every single citizen 24/7. The AI can tell if somebody uses forbidden words. 

And that raises a question: what if somebody slips that kind of mobile telephone into some general's office? Maybe some key person's family member "wins" a mobile telephone on the net, and the surveillance programs come with it. There is also a possibility that ordinary hackers get those telephones and copy and customize that software for their own purposes. 

Every expert has been a trainee once in their life. The requirement in working life is that when a person arrives at the workplace, they must know everything within the first minute of opening their computer. This is the route that creates the opportunity for AI: the AI learns things in minutes. Another thing that we must consider is national security. If we outsource critical coding to some faraway country, we cannot control what those people do. They can give the critical code to hackers who work for China or North Korea. In those countries the government is the ultimate authority, and there is no way to refuse its orders. If the government orders people to work as hackers, a person has no chance to say no. 

And those systems can become very dangerous in such cases. If somebody builds a backdoor into the system, that offers a route even to critical infrastructure. What if somebody orders all Chinese-made routers and other network equipment to shut down? That can cause problems in everyday life. And if one wrong microchip slips into the computers that control an advanced stealth fighter, that component can deliver computer viruses into the system, or steal vital data from it. Every time something is done outside a watching eye, there is a possibility that somebody will make something that can cause very big trouble. Things like microchips equipped with malicious software are tools that can break national security on a large scale. 



Tuesday, June 17, 2025

Privacy versus security.



When we talk about security, we must ask "whose security does that act serve?". We know that the internet offers the greatest propaganda platform that we have ever seen. The net is full of tools that are used to prove that writers are humans. AI-based applications make it possible to push data to even billions of home pages and social media applications in seconds.

Data that AI creates can overwhelm almost any private server on our planet, and that makes it possible to use AI-created data to block entire web services. Confirming that people who use the net are who they claim to be is one of the arguments used to say that people should use their real identities on the net. Anonymous use and confirmed use both have their supporters. Anonymous use allows users to report corruption and many other things.

And that makes people support that way of using the net. On the other side, anonymous use gives cyber attackers and disinformation spreaders the chance to operate on the net. Things like AI agents can operate inside targeted networks, steal information, and deliver it to other users. That kind of thing could be brought under control by forcing people to confirm that they are humans, and then say who they really are. But that is similar to U.S. firearms laws: such rules don't stop propagandists or psychological operators from pushing their fake information onto the net. 

Those people, especially if they operate under state control, can use fake or stolen identities, and the authorities can certify those faked identities. If we wanted to deliver propaganda from Russia, for example, we would need computers in some state like Finland. Then we could open a VPN connection to that computer from Moscow and start pushing the material to the net through that remote computer located in Finland. In that case, we would be riding on Finnish networks. The operator based in Russia tends to stay away from Western countries; assistants do everything, and that lets the person who actually knows something stay out of reach. 



Bye bye algorithms.





We are waiting for the next step in the AI development and research process. Many big technology bosses say that this is the end of algorithms, and the next step is self-learning AI. That system can communicate with robots and all other systems. A self-learning system can learn in two ways: it can create new models that it uses in certain situations, or it can connect a new module to itself. The reason why advances in AI are heading toward self-learning systems is simple. 

New algorithms are very complicated, and their training requires so much time that self-learning models are better. The thing that makes this complicated is that the new AI must operate over larger areas. It must control things like robots that operate on the street, so it needs a more effective way to learn. Street-operating robots can use platforms that look like computer games to learn how to cross roads and where to find things like apples when they go shopping for their owner. But then those robots must face unexpected things. 

Robots can share their mission records with the entire system, and that helps develop methods for operating in natural situations. Basically, the difference between a learning system and a normal system is that the learning system can create new models and then compare the original model with the new one. There are parameters that determine which way of acting is better, and if the new model is better, it replaces the old one. This means the fixed model turns into a flexible model that lives with its environment. 
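A minimal sketch of the compare-and-replace loop described above, assuming a generic scoring function; the score and candidate generator below are placeholders for illustration, not any specific learning algorithm.

```python
# Minimal compare-and-replace loop: keep the current model, generate a
# candidate, score both on the same evaluation data, and replace the
# current model only if the candidate scores better. The score function
# and candidate generator are placeholders for illustration.

import random

def score(model: float, evaluation_data: list[float]) -> float:
    # Placeholder metric: negative mean squared error against the data.
    return -sum((x - model) ** 2 for x in evaluation_data) / len(evaluation_data)

def propose_candidate(current: float) -> float:
    # Placeholder: random perturbation of the current model parameter.
    return current + random.uniform(-1.0, 1.0)

random.seed(0)
evaluation_data = [4.8, 5.1, 5.0, 4.9]
current_model = 0.0                       # a fixed model to start from

for step in range(200):
    candidate = propose_candidate(current_model)
    if score(candidate, evaluation_data) > score(current_model, evaluation_data):
        current_model = candidate         # the better model replaces the old one

print(round(current_model, 2))            # converges toward ~4.95
```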

That is AGI, or artificial general intelligence. That kind of AI is everywhere, and it can connect multiple seemingly different systems under one dome. The biggest difference between such an AI and modern algorithms is that the system can bring new data from sensors into the data flow that travels through the system. AGI is a system that might be "god-like", but if it cannot create genetic code, that is, manufacture DNA, it might have no ability to control living organisms. However, the system has many other ways to manipulate evolution. 

AGI can match couples that have certain skills. The fact is that dating applications are effective for dating, and it is possible that AGI will also make it possible to select "perfect" spouses. People who are not "perfect" are left without a partner, which means only people who are suitable, or similar, produce descendants. This causes segregation and a loss of diversity. 

And that is a sad thing for humans. Self-learning AI is a tool that can learn from its mistakes. It learns what to do and what it must not do. The point is that self-learning AI is the new general-purpose tool that can do almost anything. The system learns like humans, and that makes it the so-called AGI: one tool fits all. The system can control things like robots.

Robots can collect data for that system. The AGI works like this: one robot sits on a chair, the teacher teaches things to, and through, that robot, and then the robot shares the new things across the entire AI and its network. Training that kind of system requires a lot of information, and companies like Meta have that data. AI also makes it possible to create AI agents that sneak around and observe what happens in the network. Robots can learn from other robots: when one robot makes a mistake, the lesson scales over the network, so the other robots know not to make the same mistake again. 


Monday, June 16, 2025

Why does an antique chess console beat Chat GPT in chess?



When we think about those antique ATARI consoles, the 1978 model, we always forget that they were not as easy to beat as we thought. Those chess programs handled every kind of data numerically, while ChatGPT-type artificial intelligence handles the game as visual data. This is one of the things that we must realize when we think about this kind of case. Those old chess consoles used very straightforward, linear tactics. The main difference between modern algorithms and old-fashioned computer programs is that those old programs are linear, and they handle every piece and movement separately. 

So there is actually an opening book in those chess programs that they follow. Those old chess programs were harder to beat than some people believe. If you were a first-timer at chess, you would lose to those consoles. They played very aggressive, straightforward games against human opponents. The system tested suitable moves for each piece separately, piece by piece. Because the program was linear, the moves were made in a certain order. In those chess programs, every move is determined by the program square by square. The programmer specified the moves for every piece and every square separately, and that made those programs quite long. 

Those old-fashioned chess programs have a weakness: if something goes wrong, they keep following the same line. There is a certain number of lines that the program can use, and every line has an end. Those programs can use complete tactics, but their limit is that they are fixed. They don't rewrite their databases and models if they lose, and that makes those old-fashioned consoles and video games boring. When people learn the tactics a program uses, they can beat it. The same limit is visible in action games: the enemies always jump out in front of the players at the same points. 
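The "opening book plus fixed lines" behaviour can be sketched as a simple lookup table: the program answers a known position with a pre-stored move and has nothing to say outside its lines. This is an illustration of the general idea only, not the actual Atari Video Chess code.

```python
# Illustrative fixed "book" lookup, not the real Atari Video Chess program:
# the engine replies to known positions from a table and is lost outside it.

OPENING_BOOK = {
    "start":                 "e2e4",
    "start e2e4 e7e5":       "g1f3",
    "start e2e4 c7c5":       "g1f3",
}

def book_move(position_key: str) -> str:
    # A fixed program cannot rewrite this table after losing a game.
    return OPENING_BOOK.get(position_key, "end of line: no stored reply")

print(book_move("start"))                    # e2e4
print(book_move("start e2e4 e7e5"))          # g1f3
print(book_move("start d2d4 d7d5"))          # end of line: no stored reply
```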

Then we can think about things like learning neural networks. Those networks can learn to beat the old chess programs quite fast. The problem is that the neural network must see the console's game before it can beat the system. AI is like a human: it requires practice and training. Without knowledge of the opponent's game, the AI is helpless. There are many ways to teach an AI to create tactics against old-fashioned programs. The system can use modern chess programs and then analyze the opponent's game to create its tactics. 

The other way is that the system analyzes the source code and creates a virtual machine that it can use to simulate the chess console's game. But what do we learn from the case where an antique console beat modern AI? Without training, the AI is as helpless as a human. If the AI has no knowledge of how to play chess, it must search through all the data, including the moves of every piece, and that makes it as helpless as a human. 

Those old-fashioned consoles are like RISC applications: they are made for only one purpose, and their code serves the chess game and nothing else. Modern AI is a complicated system that can do many things besides playing chess, and that makes those old consoles surprisingly difficult to beat, at least until the AI can crack their moves and tactics. 


https://en.shiftdelete.net/chatgpt-fails-in-chess/




Sunday, June 15, 2025

The gentle singularity: what is the limit of the singularity?



The next step for artificial intelligence is artificial general intelligence, AGI. That is the tool that connects every computer under one dome. AGI is a self-learning system that develops its models and interconnects them with sensors that bring new data into the system. That means we can interconnect every single computer in the world into one whole. We may think that social media is something new, but we forget that long before Facebook there were correspondence clubs: "post offices" where people could send letters to other people, who could use pseudonyms. 

Social media is not a new thing; Facebook and other applications are products of a long road that started in ancient Rome and Greece, where wall writing, or graffiti, was the beginning of social media. Social media interconnects people from around the world. The new things that the net brought were speed and, perhaps, a low price. But as we know, there are no free lunches: the thing that costs nothing can have the highest price. The ability to create a singularity between computers brings the ability to share and receive information with new force. 


And then the next step for AI and computers is the brain-computer interface, BCI. BCI means the ability to control computers using brain waves, or EEG. The system can interact with computers, and it can also operate between people. This system could interconnect all animals and humans into one whole. There are risks and opportunities in that model. If we do things wrong, we create a collective mind: according to one opinion, we interconnect our minds and computers into a giant brain. That is a very sad thing, because it destroys our own creativity.

The biggest problem with social media, AI-based dating applications, and finally the singularity is that the system destroys diversity. People want to discuss with and date only people who are similar to them. That means our way of thinking starts to become homogeneous, and we end up with no people who disagree with us. We hear only ideas and opinions that please us, and we accept only people who are similar to us into our social networks. So, in the worst case, we and our networks operate like an algorithm that recycles data through the same model. That means we, our team, or our network get nothing new into our model; we just recycle the same things if we don't accept diversity. 

Our minds need ideas and motivation to make new things. And where can we get those new ideas? We can discuss them, or we can take information that some other party produced and then work on and refine the information that we get from web pages and other media. Without opponents, our productivity and creativity die, because nobody brings new ideas into our minds. 

In some models, the network can develop things by playing games against some other network. The network creates a simulation, and then the model tries to fight against that simulation. If the model wins, there is no need to develop it; if the model loses, it requires adjustment. That means the system requires data, and then it requires optimization. 


In the novel "Peace on Earth", the author Stanislaw Lem introduced a scheme where the simulator creates a model and another model fights against it. The better simulation becomes the model, until something creates a new, better model. 


There is another way to operate as a network. The network can accept individually operating members. The idea is that every operator connected to the network is autonomous. Those subsystems operate autonomously while they collect data. When the network does not need order, it can be chaotic. And when an actor sees something that requires a lot of information, a roll call goes out over the network: "Everybody stop, the network needs your capacity." That commands those autonomous subsystems to leave their own work and start to solve the bigger problem. 

So, the network operates as a whole when it needs that ability. The network can have subsystems, which means that in an extreme crisis those subnetworks create models that should handle the problem. 

Those subsystems can be individual actors. When the individual actors play against each other, the losing actor joins the winner and starts to develop the model that won. Then the actor pairs start to play against each other, and again the losing team joins the team that won and starts to develop the tactics that won the game. The actor groups, or networks, expand as new actors join bigger entities. 

Those subsystems start to play against each other. When some subsystem loses, its tactics are discarded. The losing actor then joins the winner's team and gives its capacity to that team, or network. The network keeps dropping losing tactics, or action models, until there are two networks against each other, and the better one wins. This is one way to create an answer and solution for complicated problems. The expanding network could be the thing that brings solutions to many problems. When the network is in its chaotic mode, the actors collect data for it. 
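A toy sketch of the merge-on-loss tournament described above: actors are paired, the loser's capacity joins the winner's team, and the process repeats until one network remains. The strength values and the pairing rule are invented for illustration.

```python
# Toy merge-on-loss tournament: pair the teams, the losing team joins the
# winner and adds its capacity, repeat until one network remains.
# Strength values and pairing are invented for illustration.

def play_round(teams: list[dict]) -> list[dict]:
    next_round = []
    for i in range(0, len(teams) - 1, 2):
        a, b = teams[i], teams[i + 1]
        winner, loser = (a, b) if a["strength"] >= b["strength"] else (b, a)
        winner["members"] += loser["members"]          # loser's capacity joins the winner
        winner["strength"] += loser["strength"] * 0.5  # assumed partial carry-over
        next_round.append(winner)
    if len(teams) % 2 == 1:
        next_round.append(teams[-1])                   # odd team out waits for next round
    return next_round

teams = [{"members": [f"actor{i}"], "strength": s}
         for i, s in enumerate([3.0, 5.0, 2.0, 4.0, 1.0])]

while len(teams) > 1:
    teams = play_round(teams)

print(teams[0]["members"])   # every actor ends up in the single winning network
```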



You, me, and the language model.



Who has responsibility if people hand their thinking over to some AI?


Why do we let AI think for us? The road to this point is long and rocky. When we order the AI to make essays and poems, we follow a journey that began a long time ago. When we read essays and poems made by AI, we can say that those things destroy our creativity. At this point, we might note that we could just as well buy a poem book and copy a poem from it.

So, in this case, we simply copy a poem from a book that some great poem master made. And then we can look in the mirror and ask the image: who made the poem that we just wrote? We wrote a text that some other person invented. So, if we think about this case and connect AI to that continuum, we see that AI is taking the role of the poem book. From the point of view of the receiver or reader, it makes no difference who made that poem.


Is it some ChatGPT, or is it some Lord Byron? The poem was not made by the person who wrote it on the card. Then we can think about people like Sam Altman who make more and more advances in AI. We blame them and the AI and search for their mistakes. But then we forget our own responsibility. The user makes the decision to use AI, so we decide whether we will make poems ourselves or let some other actor make them. We have responsibility for the things that we make and introduce to people. When we make and introduce poems ourselves, we face very pithy criticism.

When we say that people must go to libraries, read books, and do other things, we must be honest. Are we only jealous of people who have tools and skills that we didn't have 20-30 years ago? When we look at work effectiveness, we gaze at things like the time that a person uses for the work.

And if the work is done faster, we give that person new work. Would that be an encouraging way to work? If some person does the work faster than others, and the work is well done, should we give the rest of the time to that person as free time?

Or should we give a new job to that worker? Should we order the person back to the office, put an artificial smile on our faces, and then fire that worker because that person does the work better and faster than we do? Or should we mock that person about the poems that this individual worker published on social media?


We can also remember the person who works part-time in our company. That means we can use our supreme control and show everybody how jealous we can be. If a person goes to poetry courses at the labor college outside working time, we can find a new shift for that person. We have some ideal vision of what a subordinate should be, and if the subordinate does not fit into that vision, we must change that person to fit the mold.

That can be crushing. So, it's easier to take a book from the bookshelf and make a copy of some well-known poem. That means we can say that the person who invented the poem was somebody else. That might be impressive. We didn't use our own brains for that poem. We did some hard work if we took a pen and copied those words by hand, but it is easier to make the copy using a computer. Or maybe we find one of those poems on the net and then use copy-paste. Then we don't have to use our brains at all. AI is the tool that releases our resources from thinking to something else. When we think about cases where somebody makes their own poems, we must realize that every poem was once a brand-new text that somebody wrote for the first time.


We choose the easy way. If we want to write poems or essays, we must sit at our computer and take the trouble to produce that text. If we have other things to do, we have no time to write texts and think about the things that we make. Nobody else, not Sam Altman or anybody else, decides for you and me whether we use AI. AI makes our life easier. It leaves us time to have a social life in discos and bowling alleys. But is that the advance that we want? The answer is that the decisions that we make show the road.

People like Sam Altman are basically businessmen. They follow Maslow's hierarchy of needs. When our basic needs are filled, we want more. AI is the thing that allows us to transfer all our productivity to some computer. And that is the thing that makes AI advance faster than we expected. When the AI satisfies one need, there is another need that it must respond to. This is the thing in AI development. AI can make things better than humans.

Or, we can say that it can make some things better than humans. But then we must realize that AI must also learn new things. There was a story that some antique ATARI computer beat ChatGPT in chess. That happened because nobody ever taught ChatGPT chess. In the same way, we would lose every chess game, even against a monkey, if we had never played that game. Every skill that AI has is a module, and if the AI has no module for something, it's helpless. The AI requires lots of power. The AI, or LLM, server requires its own power platform. And when we develop new and more scalable AI systems, we need new and more powerful computers.

But still, we must realize that the AI that seems to make everything cannot make things from nothing. Those systems require massive databases and as much power as some cities. The waste heat can also be used for energy production, but the problem is always the temperature. New solutions, like biological AI where microchips communicate with living cells, are coming. And in the wrong hands, those systems are dangerous.



Saturday, June 14, 2025

AI can transform everything that we call humanity.



The ability to create babies with customized abilities can change more than anybody predicted. Genetic engineering makes it possible to remove things like hereditary diseases from the human genome. Children can be given musical skills or other kinds of traits. The system can select gametes from donors who have musical skills. That makes it possible for the system to order the hobbies that those people will have. There is a possibility that the customized babies will have skills that let them beat their parents.

That is not the only thing that the AI can do. The AI can create children with a high-level IQ, and that is one of the things that AGI can make possible. The ability to manipulate DNA makes it possible to order the color of a human's skin. Basically, researchers can use genetic engineering and artificial viruses to connect things like chlorophyll to skin cells. And the system can control things like the number of mitochondria in the cells.

But the fact is that the Chinese military is also interested in super soldiers. Those genetically engineered military men would have high IQs, but their loyalty to the central government could be guaranteed. Those military men could carry the genome that makes wolves and dogs loyal to their masters. Genetically engineered super soldiers are one thing that causes fear.

When we think about controlled evolution, we face the fact that when we favor some skills, there are other skills that artificial evolution can destroy. Because society favors certain types of skills, the hobbies that the AI, or the people who use that tool, classify as unnecessary are left aside. That means those unnecessary skills will not transfer to other people.

And that is the problem, because without people who carry those skills, the skills will not be transferred to the next generation. That means this kind of controlled evolution decreases the diversity of genetic and hereditary skills. That causes a situation where only the skills that people see as important are passed on. There is also the possibility that genetic engineering causes segregation. It's possible that genetically engineered people start to avoid natural people as companions, and that causes segregation in society.



Tuesday, June 10, 2025

Why are we obsessed with AI?



People are obsessed with AI. The question is: why? The answer may lie in our society. We have the attitude that everything must happen fast. That's why we would rather use the Internet than books. There are philosophers, armchair thinkers, and others who say that we should go to the library and read books. But when we are in working life, we have no time to go to the library and then find the things that we need. If somebody wants people like students to go to the library and read books, they must give them time for that.

When we are at work, we must be effective. We have no time to go to the library to search for books and then write philosophical thoughts about them. People ask why we give our right to think to AI. The answer is that AI makes everything more effective. If we want to be creative, that means we are not effective. If we want to become philosophers, we must not expect our society to accept that.

When we think about something alone, we are not social and effective. We are alone with our thoughts. And that is not what society expects us to do. Society wants us to produce results. When we write something ourselves, it takes time. And if we use AI, we can produce much more text. Quantity replaces quality. Nobody respects the text that we make ourselves, using our own words. People respect models, or templates, that some other person made.

Those models make it possible to produce more texts, and the next step is AI. There is no time to write offers using your own words. Effectiveness means that people use ready-made models. Many offers are better than the one that a person writes using their own words.

When somebody needs information, that information is needed right at that moment. During our working day, we don't even have time to ask the person who sits next to us for their name. We don't have time to think about things. And another thing that we have is fear. What if we give a wrong answer?

That is one of the worst fears in modern life. So if we don't have time to think about things, we don't dare to answer using our own words and introduce our own ideas. AI is similar to a poem book. We can take a poem book, search for some impressive words, and copy them into our text. The next step is the use of AI.

We must use things like AI tools. The AI tool is like a secretary that writes our speeches and other official texts. So we can go in front of people and say: here I read a paper that my secretary has written. That offers us an escape door. If there is a mistake, we can blame our secretary for it.

In the same way, if we quote articles and books that we have read, those words might be wise. That's true. But those words are not our own words. Maybe Socrates was a very famous and wise man, but that man wrote his own ideas and words. When we make a speech for our ceremonies, we should write our own texts.

I think that people like Socrates and Plato were very intelligent. But if we just borrow those texts and copy them, we cannot find new Socrateses. We cannot find a new philosophy. What we need is time to think, and time to handle and observe our thoughts. We are so busy that we have no time to go to the library and read books. If we are wrong, we face blame. We must have time to go to the gym after work. We must have time to be social. And we must have time to do many things.

But then we must realize that we have no time to sit and read. If we want to go to the library to read books, we must find the time to do that.

If we buy a book or borrow it, but have no time to read it, that book doesn't offer a very big advance in our knowledge. If we want to get information and use it, we must open the book, the database, or whatever the source is. And then our mind must be ready to receive that data.

We don't have time to think about things and the consequences of our work. If we don't dare to write the things that we think, we cannot find new Socrateses and other philosophers. If everything that we write and introduce must be scientifically proven, we should realize that such a requirement doesn't bring advances.

https://futurism.com/chatgpt-mental-health-crises


Monday, June 9, 2025

What happens when we get AGI?



What does AGI (Artificial General Intelligence) mean? It is an extension of the large language models, LLMs, that can control every data network in the world. Or the system can control physical tools that are connected under its dome. A normal LLM has its own domain. The domain is like a state that involves certain actions. Drone control is one domain, and home appliances are another. Those domains can have multiple subdomains. The AGI interconnects those domains under one dome, or one entirety. So how far are we from that model?

The answer is more complicated than we can imagine. We can think that the LLM can control things like microwave ovens, but to control those tools the LLM requires an interface that it can use to adjust the microwave oven. So either a man-shaped robot operates the microwave oven, or the home appliance is equipped with a control system that the AI can use to command it.

When we connect new things under AI control, we face the same thing as when we try to learn to use a new system. When we buy something new, like a microwave oven, we must learn how to use it. In the same way, the AI must learn to use that equipment. And we have two ways of making that happen.

To use any tool, the AI requires a model that it can use in that operation. The model can be on the central server that runs the AI. But where does that server get the model? That is the point. The operator can teach the AI to use the microwave oven, just as with the drone. But the system that is connected to the AI can also carry that model itself. Things like quadcopters must contain programs that control the rotors' positions. In those cases, the operative model is in the robot, or some other device, and the LLM only gives the robot orders about where it must travel.

Then the robot can use its internal systems to navigate and move to the location. But the orders for autonomous operations come from the central system. This kind of network-based solution is easier for programmers. In those solutions, every single machine that is connected under the LLM domain has its own operational model. The system is modular, and each module is independently programmed.
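A minimal sketch of that modular layout, assuming invented device modules and a simple dispatcher (none of these class or method names belong to a real robot or appliance API): the central controller only picks a domain and passes a high-level order, while each module keeps its own operational model.

```python
# Hypothetical sketch of a "one dome, many domains" layout.
class DroneModule:
    def handle(self, order: str) -> str:
        # The drone keeps its own flight model; the center only says where to go.
        return f"drone: planning route and flying to {order}"

class MicrowaveModule:
    def handle(self, order: str) -> str:
        # The appliance module knows its own front panel.
        return f"microwave: setting timer for {order}"

class CentralController:
    """The LLM-side dispatcher: it only chooses a domain and passes an order."""
    def __init__(self):
        self.domains = {"drone": DroneModule(), "microwave": MicrowaveModule()}

    def command(self, domain: str, order: str) -> str:
        module = self.domains.get(domain)
        if module is None:
            return f"no module for domain '{domain}' - the system is helpless here"
        return module.handle(order)

if __name__ == "__main__":
    hub = CentralController()
    print(hub.command("drone", "the customer's address"))
    print(hub.command("microwave", "2 minutes"))
    print(hub.command("dishwasher", "quick wash"))   # missing module
```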

Basically, if we think that AGI is just a tool that connects multiple devices under one domain, we could build it immediately. We can use man-shaped robots that can do almost everything. But the key word is “almost”.

Let’s return to the microwave oven. The reason why that precise thing is hard to make is the lack of standard user interfaces. The robot must learn to use every single microwave oven model independently. That means it must make an independent model for each microwave oven. A system where we can set seconds and minutes separately is not the same as a system where the timer has only minutes. We learn that difference in minutes. But for the robot, we must make an independent model of how to adjust each timer.

Many systems in the world are so easy to use that nobody has wasted time creating standards for them. Easy systems are easy for people, but then we must think about things like the microwave oven. There are button- or toggle-style timers, and that makes them hard to learn. For robots and AI, the difficulty lies in the fact that every microwave oven model requires its own independent model of how to use it.

The robot must connect images from the user manual to the microwave oven's interface. There is a possibility that, if the system does not learn independently, the “teacher” or programmer takes an image of the front panel and then marks the buttons in the right places. Then the AI can learn the rest of the task from the user manual.
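One way to picture the per-model problem is a shared timer interface with one adapter per oven model. Everything below is a hypothetical sketch, not a real appliance API; it only shows why each panel still needs its own small model.

```python
# Hypothetical sketch: a shared timer interface, one adapter per oven model.
from abc import ABC, abstractmethod

class MicrowaveAdapter(ABC):
    @abstractmethod
    def set_timer(self, seconds: int) -> list[str]:
        """Return the button presses needed on this particular panel."""

class MinutesSecondsOven(MicrowaveAdapter):
    def set_timer(self, seconds: int) -> list[str]:
        minutes, secs = divmod(seconds, 60)
        return [f"press minutes {minutes}", f"press seconds {secs}", "press start"]

class MinutesOnlyOven(MicrowaveAdapter):
    def set_timer(self, seconds: int) -> list[str]:
        # This panel only accepts whole minutes, so the time is rounded up.
        minutes = -(-seconds // 60)
        return [f"turn dial to {minutes} min", "press start"]

if __name__ == "__main__":
    for oven in (MinutesSecondsOven(), MinutesOnlyOven()):
        print(type(oven).__name__, oven.set_timer(90))
```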



Saturday, June 7, 2025

Self-learning networks crack black holes and control drones.



Artist's impression of a neural network that connects the observations (left) to the models (right). Credit: EHT Collaboration/Janssen et al. (Phys.org, "Self-learning neural network cracks iconic black holes")

Self-learning networks are tools that can do many things better than humans. The self-learning network has two datasets that it can use in the process: the models in its databases, and the observations that its tools send into the system. A self-learning neural network means that the system compares the observation with the model.

And if the model is different from the observation, the system fixes it. The system has tools that can handle those images as pixels. The system can change the pixels in the model so that it fits the observation. And we can use that approach with all learning networks.
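A minimal sketch of that compare-and-adjust loop, treating the model image as an array of pixels that gets nudged toward the observation. This is purely illustrative and has nothing to do with the actual EHT pipeline; the arrays and the learning rate are invented.

```python
# Illustrative sketch: nudge a model image toward an observation,
# pixel by pixel, until the difference is small.
import numpy as np

def refine(model: np.ndarray, observation: np.ndarray,
           rate: float = 0.2, steps: int = 100) -> np.ndarray:
    model = model.copy()
    for _ in range(steps):
        error = observation - model          # where the model disagrees
        if np.abs(error).max() < 1e-3:       # close enough to the observation
            break
        model += rate * error                # adjust pixels toward the data
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    observation = rng.random((8, 8))         # stand-in "observed" image
    model = np.zeros((8, 8))                 # initial guess
    fitted = refine(model, observation)
    print("max remaining error:", np.abs(observation - fitted).max())
```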

The system can create the models itself, or it can use humans to make them. Then the system sends things like drones to operate following the model. Successful missions, like a pizza delivery or some military action, mean that the system has a suitable model. But if the mission is not completed, the model requires improvement. So if something goes wrong, the system requires information about what went wrong, and then the operators should fix the model. In the case of pizza delivery, the operators fly the mission the first time, and then the system creates the model using that data.

And that is one way to teach the AI and the network to deliver pizza to the right place. The system can use the image of the person who ordered the pizza and find that person outdoors as well. In the teaching process, the system needs things like minimum flight altitudes.
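A hedged sketch of the “fly it once, learn from the log” idea: a recorded demonstration flight becomes a simple lookup policy for later deliveries. The waypoints, actions, and altitude constraint are invented examples, not data from any real system.

```python
# Illustrative imitation of a logged demonstration flight.
MIN_ALTITUDE_M = 40  # example constraint taught to the system

# (position, action) pairs logged while the operator flew the first delivery
demo_log = [
    ((0, 0),    "climb to 50 m"),
    ((0, 100),  "turn east"),
    ((80, 100), f"descend to {MIN_ALTITUDE_M} m"),
    ((80, 160), "hover and hand over pizza"),
]

def nearest_logged_action(position):
    """Imitate the operator: reuse the action recorded closest to this position."""
    def dist(p):
        return (p[0] - position[0]) ** 2 + (p[1] - position[1]) ** 2
    logged_pos, action = min(demo_log, key=lambda pair: dist(pair[0]))
    return action

if __name__ == "__main__":
    print(nearest_logged_action((75, 150)))   # -> "hover and hand over pizza"
```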


Above: SingularityHub, "This Robot Swarm Can Flow Like Liquid and Support a Human's Weight"

Morphing and learning neural networks can make these kinds of drone swarms the ultimate tools in medicine, technology, and weapons. The extreme morphing ability makes those morphing drones the most advanced tools in R&D work.

The same software that recognizes vehicles can recognize people. The person orders the pizza at a certain GPS point, or at a point that the drone can find easily. The person can also give their image to the drone.

In other cases, the operator can draw the route to the delivery point on the city map. Then the drone knows which streets it should follow.

The drone can scan things like street names, and then it can find places like certain shop entrances. Then the drone can start to search for the person who made the order. If the person gives a personal image to the drone, it can search for that person in the squares. The same system can also find targets for attack drones. The problem with drones is that they are multipurpose tools. And learning networks can make them more fantastic, and more terrifying, than anybody believed.
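The person-finding step can be pictured as a nearest-match search over image features; the vectors below are invented stand-ins for whatever features a real recognition model would produce.

```python
# Illustrative person search: compare the order photo's feature vector
# with candidates seen by the drone's camera. All vectors are made up.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_customer(order_photo: np.ndarray, candidates: dict) -> str:
    """Return the candidate whose features best match the order photo."""
    return max(candidates, key=lambda name: cosine_similarity(order_photo, candidates[name]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    order_photo = rng.random(128)                    # features from the customer's photo
    candidates = {f"person-{i}": rng.random(128) for i in range(5)}
    candidates["person-3"] = order_photo + 0.01      # one candidate closely matches
    print("best match:", find_customer(order_photo, candidates))
```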

The Ukrainian strike against the Russian strategic air force shows how dangerous those systems can be. The drones can be installed in things like trucks, and the driver does not even need to know that they are there. The drones can be on the roof of a container; when it is close enough to the target, the system can release the hatch and the drones can fly to their targets. The thing is that those kinds of systems are far more advanced than in 2020. Those systems are fast to develop, and with morphing and cheap AI they are extremely effective. The AI can be hard to make, but it's cheap to use.

The new drone swarms can operate like liquid. Other drones can transport them to their targets. The drone swarms can act like the liquid-metal robots in movies. A drone can actually be formed from that kind of robot swarm, so the swarm can travel in the form of a drone.

Then, at the target, they can drop into the water, and the drone's shell can turn into a liquid-metal amoeba. That liquid-metal amoeba can then do its duty: it can close oil leaks, or remove cancer from the human body. There are lots of applications for robots that make morphing structures possible.


https://www.bloomberg.com/features/2025-ukraine-drones-explainer/


https://phys.org/news/2025-06-neural-network-iconic-black-holes.html


https://singularityhub.com/2025/02/24/this-robot-swarm-can-flow-like-liquid-and-support-a-humans-weight/


The mind-altering weapons are not science fiction.

Image: Total News The mind-altering weapons or “mind-control” weapons are one of the biggest risks in modern life. Those weapons include hal...