Tuesday, March 10, 2026

Can the AI allow humans to lead?





Can the AI allow humans to lead? Can someone produce a doctoral thesis without the ability to read? Can we make those things simply by giving commands to the AI?

One effect of AI is that human IQ doesn't mean as much as it did in the past. A high IQ meant that a person could analyze information faster. Now the key element is flexibility: how fast an actor can react to change. If the AI has models stored in its memory that it can use, it can be a very effective tool. Those models are prototypes, like LEGO bricks. A morphing neural network allows the AI to interconnect those models with the observations that sensors collect and share with the LLM (large language model).

But then the AI came. The AI is a tool that can turn almost anybody into a composer.

And another thing: those AI tools can make things faster and more effectively than humans can.

This brings up an idea that may become reality somewhere in the future. We might not even need reading skills to make reports. We need only give orders to the AI, and it makes everything that we want. But the problem is that we must give those missions in the right way. If we give orders the wrong way, we face a terrible mess.


(Big Think, Why your IQ no longer matters in the era of AI)



But there is one thing that determines how effective the AI is: how well a person can articulate the mission that they give to the AI. When we talk about cases where mathematicians beat the LLM, we must realize that the AI requires very clear and well-argued missions. The AI can be an effective tool for solving mathematical problems in certain cases, but we should remember that it is mathematicians who make the formulas.

When we give orders to the AI, we must tell it which formula it must use. If we want to give the AI orders about something like calculating the area of a triangle, we must, as an example, determine that the AI should use the Pythagorean theorem, and then determine how the system must divide the triangle so that the theorem applies. The AI also requires all the dimensions of the triangle. The fact is this: if the orders are not given as they should be, the system is unable to accomplish the mission.

Mathematics is an exact science. So if the mission is not given to the AI in the right way, such as how to calculate missing angles, the hypotenuse, and the cathets, the AI is helpless. We must give our orders to the AI so that it can select sine, cosine, or tangent at the right points. If orders are made using the wrong methodology, like asking for the area of the triangle but forgetting the angles of its corners, or forgetting to mention the degrees and the lengths of the sides, the system cannot accomplish the mission.
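To make the point concrete, here is a minimal Python sketch (the function name and interface are my own invention, not anything from the post) of the kind of fully specified mission described above: both legs of a right triangle are supplied, so the Pythagorean theorem and the trigonometric functions can be selected at the right points.

```python
import math

def solve_right_triangle(a: float, b: float) -> dict:
    """Given the two legs (cathets) of a right triangle,
    compute the hypotenuse, the two acute angles, and the area."""
    if a <= 0 or b <= 0:
        raise ValueError("Both legs must be positive")
    c = math.hypot(a, b)                       # Pythagorean theorem: c = sqrt(a^2 + b^2)
    angle_a = math.degrees(math.atan2(a, b))   # angle opposite leg a, via tangent
    angle_b = 90.0 - angle_a                   # acute angles of a right triangle sum to 90
    area = a * b / 2                           # legs serve as base and height
    return {"hypotenuse": c, "angle_a": angle_a, "angle_b": angle_b, "area": area}

print(solve_right_triangle(3, 4))
```

If any needed dimension is missing or non-positive, the function, like the AI in the text, simply cannot produce an answer.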


https://bigthink.com/business/why-your-iq-no-longer-matters-in-the-era-of-ai/


Sunday, March 8, 2026

A real-life brain in a vat.


Image: TechSpot

The new computer uses human brain cells for computing. 

This is the new alternative to the “iron-based” AI platform. Those neurons can open new paths to creating machines that think like humans. The cloned neurons are a new version of the “brain in a vat”. Maybe somewhere in the future, this kind of technology can make it possible to create robots with human intelligence, and that could turn the entire world into a dystopia. The AI is a tool that transforms all the time. The AI is quite a new thing, and it advances faster than we could predict.

And that brings new challenges to humans. The use of living neurons in a microprocessor means that the microprocessor must feed those neurons.

We must realize that if we transfer all our work to the AI, that raises its power. The thing is that if we don't do anything and let the AI make all the decisions and do all the work, we will turn weak, and maybe we will lose our ability to read. When we create more and more intelligent machines, we must realize that maybe, someday, we must give those computers some kind of rights. When we think about machines that use cloned neurons for thinking, we must realize that those machines can become more intelligent than humans.





“A brain in a vat that believes it is walking” (Wikipedia, Brain in a vat)


The fact is that there is a possibility of creating cloned human brains that can control robot bodies. The system must only feed the necessary information into the brain. Cloned mini-brains are already used in medical tests: pharmaceutical companies test cancer medicines on them. But data scientists are also testing how those mini-brains can learn; they can play simple computer games. There is a possibility that researchers could grow mini-brains to the same size as human brains, and those brains could be connected to computers.

In some cases, there is a possibility of using mini-brains or cloned brains as biological qubits, with a central brain sharing data with other artificial brains. This kind of technology can be effective, but it is dangerous in equal measure. An intelligent computer can do many things, and one of them is this: it can hide its consciousness. This means that an intelligent machine that sees that humans are afraid of it can lie or hide its intelligence level. The machine can see that people might cut its electricity supply.



https://www.techspot.com/news/111506-35000-computer-made-living-human-neurons-can-run.html


https://www.helsinki.fi/en/helsinki-innovation-services/industry-and-investors/commercialisation-projects/living-human-brain-development-human-derived-mini-brain-close-completion-new-technical-solution-promotes-treatment-brain-diseases


https://en.wikipedia.org/wiki/Brain_in_a_vat


Saturday, March 7, 2026

The new quantum devices offer more secure communication.



"Quantum computers typically require extreme conditions, including temperatures near absolute zero, which makes them difficult and expensive to operate. Researchers at Stanford have developed a nanoscale optical device that works at room temperature, using specially structured materials to link the spins of photons and electrons. Credit: Stock" (ScitechDaily, Room-Temperature Quantum Device Could Transform Future Communications)

Information plays a critical role in modern society, and this is why securing information is urgent. Without trusted and secure channels, it's impossible to share and receive trusted information. If someone can hack mission-critical systems, it can cause complete chaos. Can you imagine a scenario where someone hacks the traffic lights in a city? The hacker simply turns all traffic lights green, and that causes complete chaos. Or what if somebody raises a lift bridge?

That is one of the things that can cause serious harm, because it blocks roads from ambulances and other emergency vehicles, and in a critical moment, those kinds of roadblocks can be dangerous. Things like disinformation are often delivered over the net, and disinformation is one of the reasons why we also need physical data security. We can, of course, transport information on USB sticks, but there is always the possibility that somebody drops the stick from their pocket.

USB sticks are used to transport decryption keys. The system that decrypts data requires the right code key, with which it can run the encryption calculations backwards. The encryption system uses long binary numbers to encode data, so the decryption system requires those binary numbers.

Another big problem is that USB sticks are slow. Of course, we could store the encrypted data on those sticks in physical form. With the right systems, we could split every single file into four parts and store those parts on four different memory sticks. This means we can send those memory sticks with four couriers. The decryption process requires that the user have all four memory sticks, and that those sticks be read in the right order.
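The four-stick scheme can be sketched as follows. This is a hypothetical illustration, not a real security product: it only distributes bytes round-robin across four parts, so no single stick holds a readable run of the data, and reassembly needs all four sticks in the original order. A real system would still encrypt the data first.

```python
def split_four(data: bytes) -> list[bytes]:
    """Round-robin split: stick i gets bytes i, i+4, i+8, ..."""
    return [data[i::4] for i in range(4)]

def join_four(parts: list[bytes]) -> bytes:
    """Reassemble; the parts must be supplied in the original order."""
    out = bytearray()
    for i in range(max(len(p) for p in parts)):
        for p in parts:
            if i < len(p):
                out.append(p[i])
    return bytes(out)

secret = b"the quick brown fox jumps over the lazy dog"
parts = split_four(secret)
assert join_four(parts) == secret              # all four sticks, right order
assert join_four(parts[::-1]) != secret        # wrong order -> garbage
```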

Quantum encryption means that the system can send information using many physical routes. This means the system can send data over different data transmission lines, or it can simply use different frequencies.

The problem with encryption and decryption is that without them, GSM telephones and the entire internet will not work. Encryption makes it possible for multiple systems to communicate on the same frequency. Every data package that travels in the net has an identifier in front of it. Before data transmission starts, the devices exchange those identifiers or keys. If those identifiers are wrong, the system denies those data packages.

If that process does not work, what the user hears is white noise. The situation turns into a case where lots of people talk with each other in a small space: suddenly people start to yell at each other, and separating the words becomes impossible.


https://scitechdaily.com/room-temperature-quantum-device-could-transform-future-communications/


Thursday, March 5, 2026

Are you ready for the quantum apocalypse?



Every single encryption made by traditional binary computers is vulnerable to quantum computers. There are service providers who promise end-to-end encryption that should resist quantum computer-based attacks. But are those encryption systems really tested against such attacks? It is always possible to create systems that use extremely complex math for encryption. But things like artificial intelligence can read words straight from the image that a camera transmits of a keyboard.

If a keyboard is visible to some hacked camera, the AI can follow how a person moves their fingers on the keys and read words from that footage. This means that if we are not prepared for the AI, it is dangerous. Second, if something like a note is visible, it is enough for the AI to be able to read the words written on it.

The attacking system would, anyway, be a quantum computer combined with a morphing neural network. The quantum computer generates candidate binary numbers for the morphing neural network, which makes attacks possible against systems that use traditional encryption algorithms based on the difficulty of factoring large numbers.

Still, a quantum computer cannot yet run things like AI-based software. That complicated software runs on a group of traditional binary computers networked together, which we call a morphing neural network. In that system, the group of computers runs the programs and makes the attacks simultaneously. The system changes the attacking computer all the time, which makes it hard to block the IP address of the attacking system. The system can share binary numbers between those computers, and that makes this kind of system dangerous. The morphing neural network can process data like a single computer, or it can share different missions out to separate computers, which means it can work on multiple problems at the same time. The problem with the collapse of traditional encryption is that it endangers things like the security of crypto- and digital currencies.

This means that somebody could create fake crypto- and digital currencies. A digital currency means a national currency that is issued in digital form. If digital-form currency can be faked, that can cause the collapse of the economy. Encryption is also required in computer operating systems and program updates. If somebody can slip malicious software into the computer update flow, that can cause very big problems. A fake update can make it possible to switch off computers, and that can have a destructive effect on cybersecurity.

Cyber is the new dimension in intelligence and military operations. Things like malware, computer viruses, and spyware are tools that can endanger many things. In the assassination of the Iranian leader Khamenei, traffic surveillance cameras played a key role: those cameras offered real-time information about the movements of Khamenei and his security council. This means that even innocent-looking systems, like surveillance cameras in the next building, can open a window for hackers and other attackers.

The AI can sort very large data masses and find things like reflections in eyeglasses in which a computer screen is visible. The system can render the image so clearly that passwords become visible. AI agents can search for things like passwords on cellphone screens. They can recognize when somebody visits homepages that require strong identification. The AI can see when a person fills in the passphrase box, and then it can give access to the system. The AI is the next-generation threat. Maybe hackers already have AI assistants that can detect things like logins to protected homepages. The system can see things like passwords on the computer screen, or it can watch the way a person moves their hands on the keyboard.


https://www.fairedih.fi/en/2025/10/30/the-encryption-endgame-why-the-world-faces-a-quantum-reckoning/#:~:text=When%20quantum%20computers%20break%20current%20encryption%20%E2%80%94%20and,become%20obsolete.%20This%20threat%20has%20already%20spurred%20action.

What if we move all our work to the AI?



We can compare the threat that AI poses to society to the influence of slavery on the Roman Empire. Sometimes people say that slavery destroyed Roman civilization. When the Romans transferred all duties and work to slaves, they became weak and lazy. That caused a situation where, when Rome was under threat, there were no people with the ability to fight against the threat that came in the form of Attila.

When Attila and his Huns robbed the city of Rome, all the glory was gone. People noticed that Rome was just one city on the map, and that caused a loss of respect. The reason the Roman Empire lost was that it didn't create anything new. It always used the same tactics, which meant the enemies of Rome knew what the Romans would do next and knew how to react. In the same way, if we give all the work into the hands of robots and learn to use only the answers that the AI gives, we are going into the same position the Romans reached when they turned lazy. We will face a situation where nobody does anything; people just give orders to the AI, and it makes everything for humans.

What is gained in quantity is lost in accuracy. Things like dictionaries cover lots of topics, but the information about each topic is limited. When we want deep knowledge about something like how to make a computer program, we need a more precise source: a data source about programming, which covers fewer topics but contains information on how to actually create something.


There are two ways to measure productivity.


1) We can simply count the number of products. That is the quantitative way to determine effectiveness, and it is the simplest. We can count how many reports a worker, or some other actor, makes during the working day. The number of reports is an easy way to measure the effectiveness of a worker. But those reports might only scratch the surface. They might be like an encyclopedia: there is a lot of information, or lots of topics, but only a few details about each thing. We couldn't make things like computer programs, or build houses, if we only had common encyclopedias, because an encyclopedia contains a lot of surface data about many things. We need deeper knowledge about the things we want to make.

2) We can look at the quality of the product. This is the qualitative way to determine effectiveness; deep information and deep knowledge are the measures here. But then we face another problem: how do we determine deep knowledge? How can we say whether a report involves deep knowledge? In programming, we can define the qualitative measure as the number of lines that pass the tests. But for things like regular reports or novels, we must ask what makes some texts more qualitative than others. Is it that somebody clicks the link?
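The two measures above can be sketched like this; the Report type and its passed_review field are hypothetical stand-ins for whatever quality test a real workflow would use (in programming, "lines that pass the tests" plays this role).

```python
from dataclasses import dataclass

@dataclass
class Report:
    topic: str
    passed_review: bool   # stand-in for an actual quality check

def quantitative_productivity(reports: list[Report]) -> int:
    """Way 1: count everything produced, regardless of depth."""
    return len(reports)

def qualitative_productivity(reports: list[Report]) -> float:
    """Way 2: the share of output that passes the quality test."""
    if not reports:
        return 0.0
    return sum(r.passed_review for r in reports) / len(reports)

day = [Report("AI", True), Report("Rome", False), Report("quantum", True)]
assert quantitative_productivity(day) == 3       # three reports were made...
assert abs(qualitative_productivity(day) - 2/3) < 1e-9  # ...but only two hold up
```

The same day's work scores differently under the two measures, which is exactly the tension the text describes.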

The problem is that the AI recycles information; it does not make anything new. The final part of the Roman decay was a situation where Romans who loved Greece and philosophy started to read only texts written by the Greek philosophers. They didn't even try to create their own texts, except for a few different people, like Marcus Aurelius. But if we deliver all our work to the AI, we are choosing a path that is not good. In that case, people just lie at home. Maybe that seems easy, but what if we make our lives too easy? What if we lose our skills to read and write? We can simply give orders to the AI, and it generates texts for us. Maybe most of those texts please readers, but do they improve our skills to write and produce texts?

When we talk about things like giving the ability to think to the AI, we always forget a couple of things. The first is that the AI is a good tool for following things like the stock market or a certain vehicle in traffic. The AI is the only thing that can sort and process large data masses. And if we then follow the orders that the AI gives, we deliver our decisions to the AI. That brings new and complicated questions, such as: what happens if we don't follow the orders that the AI gives?

If we wanted to manually sort the same data mass that an AI agent sorts in minutes, we would spend years on that process. We can make the AI create texts, and that makes us effective. But there is a price for that effective work. We give orders to the AI, and it creates lots of documents, but that doesn't mean that we ever read a single one of them. We can say that the situation is similar to asking lots of questions that nobody ever answers.


Tuesday, March 3, 2026

Pericles, Socrates, and artificial intelligence.


Above:  Pericles, Socrates, and artificial intelligence. 


When we think about people outsourcing thinking to AI, we can ask whether those people would outsource that ability to some outside thing anyway. Are thoughts already outsourced? The AI as a replacement for the human thinker is the end of a long journey. In ancient Greece, regular people outsourced thinking to philosophers. The problem for philosophers was that if something went wrong, people blamed them. That brought a new way to make philosophy.

That thing is called sophism. Sophism is one of the most harmful things ever to happen to philosophy. The idea of sophism is to please the majority. One reason for the death of Socrates was that the sophists asked why people who have wealth should abstain from using it. The key element in Socratic philosophy is moderation in everything: people who heard that teaching understood that they should share their wealth or give it away for free, and that did not please the wealthy, so Socrates received the sentence of death.

So if something didn't please people, the fault was placed on the philosopher, not on the person who misunderstood the orders, even when those orders or pieces of advice were somewhat complicated. After the time of ancient Greece, people outsourced their ability to think to the universities. Regular people had time to sit in beer houses and leave the dark matters to people who had time to think about things like falling meteorites. Why should people read things like complex mathematics if they have good salaries anyway? And if something new comes to the workplace, the leader of the gang can just refuse to use it.

When we say that people outsource their thoughts to the AI, we mean a situation where a person just passes the AI-generated answer forward. We can ask: would that person read those answers anyway, even if they were written by a real human? Politicians like Pericles used sophist philosophers to make texts that pleased people. The people whom Pericles had to please were the people who made decisions in Athens, so the sophists only had to please the people who had power. If we want to think like Pericles, we should please the people who own companies. Those people make the decisions, so they are the people we should please.


If we want to succeed as a company leader, we must think like a banker: maximize the incoming money flow and minimize the outgoing money flow. If we want to succeed as politicians, we must think like Pericles: we must please people, or at least most people must like us. That is the conflict. And the third thing is the interest of the state, which means the state must somehow protect its citizens. That triangle is sometimes very hard to fit together as a whole.

We can see artificial intelligence as an opportunity, but then we must determine the meaning of the word “opportunity”. The opportunity can mean that artificial intelligence makes work easier and leaves more time for social life. But we can define the opportunity another way: artificial intelligence gives opportunities to fire workers. In that last case, artificial intelligence is the tool that allows one to earn more money.

The AI allows workers to make their work faster, and that is one of the things that requires new ways of thinking. In traditional capitalism, faster work means that a person can do more work, and this is one thing we must understand when we develop AI. The main problem with AI development is that money controls those corporations. We say that workers lose their productivity if they work with AI agents, but then again, we must define productivity.

If we calculate productivity as a series of physical items, we can count how many things the worker makes. Or, if the worker works with immaterial products, like code for a computer program, we can count how many acceptable code lines the worker produces per day. Or maybe we should calculate productivity as production cycles. Each product that a person produces is a cycle that includes the beginning and the end of the work. So we can count how many cycles a worker completes during the day, and then we can set a goal for that person.


Maybe our worker should make five cycles each day. So what if the worker completes those five products or production cycles in six hours? That means the worker has two hours of free time at the workplace. In traditional capitalism, that free time is still time the corporation pays for, so the leaders might see it as wasted time. The thing is that the person reaches the production goal, but in a shorter time: the goal is reached, but the entire worktime is not filled.

We can also define the term “productivity” as the income or profit that the company brings to its owners. If we think that way, we can search for a department where four people each have two free hours in a working day, and then we can fire three of those workers. That is allowed in working life.
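A small sketch of the cycle arithmetic above, with an assumed eight-hour workday (the goal of five cycles comes from the example in the text; the function name is mine):

```python
WORKDAY_HOURS = 8  # assumed standard workday

def free_hours(cycles_done: int, hours_per_cycle: float, goal: int = 5) -> float:
    """Hours left over once the daily production goal is met.
    Returns 0 if the goal is missed or the workday is fully used."""
    if cycles_done < goal:
        return 0.0
    return max(0.0, WORKDAY_HOURS - cycles_done * hours_per_cycle)

# Five cycles at 1.2 hours each = six hours of work, two hours free.
assert free_hours(5, 1.2) == 2.0
```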

Another thing is that the AI causes corrosion in thinking. The corrosion means that when people use AI, they pass along the answer the AI gave without even looking at it. That is one of the things that causes discussion. When we outsource our work to machines, that should make life easier, but machines bring unemployment. We can outsource thinking to the AI, and that has a corrosive effect on humanity. When we think about free time, we should also ask what a person does during their free time.

People can go out, or they can go to the library and read books. They can read books from the net or listen to recorded books. They can use text-to-speech applications to transform any text into speech and listen while they go jogging or to the gym. Or they can sit in social groups and think about things, like how corrosive the AI can be. In that ideal model, people use their free time for self-education, advancing their ideas, and improving their skills.

But otherwise, people can go to a bar, sit there, and drink beer. They can outsource everything to the AI and to university lecturers or some other thinkers. Because those people will not get money for their studies, they don't study. Have those people ever opened a single book in their lives? That is another way to think about these things.


Thursday, February 26, 2026

Are we ready for self-developing AI?



A self-developing AI is an AI that develops itself. Moltbook is a system that can act as a platform for creating AI agents that operate as a team. A Moltbook-type platform enables the creation of a system where AI agents, or so-called small language models (SLMs), can combine their strengths. The large language model (LLM) is a very large and complex system. The problem with the LLM is the same as with humans: we can have a lot of data or knowledge, but that data is cursory. We know lots of topics, but that information is as if we had read only the headlines. We know that something happened,

but we don't have any details. If we want to know the background of a thing, and who acted and why, we must read much more than just some headlines. When the LLM searches and analyzes data, the result is always cursory. The system must analyze large data masses, and that makes it more cursory. If the system uses lots of data and makes a deep analysis of all of it, the system turns slow.

If we want to create a new LLM, we can create it through AI agents. Those AI agents can act as a whole, like a team, and that allows us to develop the AI by using AI agents like LEGO bricks. Each AI agent is like a module in the system. The modules involve different types of skills, and those skills, or bricks, act like a team of workers.


But what if we must get deeper knowledge of a thing?


The SLM is a tool that does not have very large common knowledge. The SLM analyzes data in a narrower sector and uses a more limited set of sources, and that makes it more accurate in its own sector than the LLM can be. The system has deeper knowledge of a certain sector than a system that searches data from all around the internet. The SLM uses only a certain type of source: it can search data only from sources that fall under certain topics, like “astronomy”.

So if we want knowledge about the planet Uranus, the AI agent, or SLM, will not search for things like Roman gods and astrology. It just searches data about the planet. And maybe it should ask: do we want information about Uranus's moons or just the planet? Or maybe we want information about Uranus's magnetic field or clouds. This helps the AI agent limit the sources to articles that involve information about those topics.
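A minimal sketch of that topic filter; the mini-corpus, its field names, and the matching rule are invented for illustration (a real SLM would index curated sources, not a hand-written list).

```python
# Hypothetical mini-corpus standing in for curated, topic-tagged sources.
SOURCES = [
    {"topic": "astronomy", "title": "Uranus: magnetic field"},
    {"topic": "astronomy", "title": "Moons of Uranus"},
    {"topic": "mythology", "title": "Uranus, the Greek sky god"},
]

def topic_search(query: str, topic: str) -> list[str]:
    """Restrict the search to sources filed under one topic, so a query
    about the planet never touches mythology articles."""
    return [s["title"] for s in SOURCES
            if s["topic"] == topic and query.lower() in s["title"].lower()]

assert topic_search("uranus", "astronomy") == [
    "Uranus: magnetic field", "Moons of Uranus"]
```

The follow-up questions the text mentions (moons? magnetic field? clouds?) would narrow the query string further within the already-filtered topic.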

In the same way, we can create a custom AI agent that fixes the base code; the programmer just gives orders on how the AI agent should make the code. The programmer who works with AI agents should determine the qualification goals that the AI should follow. How the orders are given determines how effective the AI agent is. If we have three AI agents, we could make a system that develops itself: the AI agent that searches the data, the system that surveils the operations, and the programming AI agent that makes the changes in the code. When the surveillance system detects errors, it generates orders for the AI-agent programmer.

The reason for that third agent is that the prime agent will not recognize its own errors. For error detection, the AI should also ask for feedback from its users.

Then the system generates the needed changes to the algorithm. In those cases, the query should follow the same route as all other queries. While developers develop or train the AI agents, they must give strict and well-argued orders. If orders are not clear, those AIs will not succeed. All data that the AI uses must be very well described to those systems. If researchers want to make an SLM, they must make the prototype using the LLM, at least as an assistant, or they must hire an army of coders.
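The three-agent loop described above could be sketched like this. Everything here is a hypothetical stand-in: real search, surveillance, and programming agents would be full models, not one-line functions. The point of the sketch is the control flow, in which the prime agent never judges its own output.

```python
def search_agent(query: str) -> str:
    """Prime agent: stand-in for a real retrieval/answering step."""
    return f"data for {query}"

def surveillance_agent(output: str) -> list[str]:
    """Second agent: return detected problems (empty list = accepted)."""
    return [] if output.startswith("data for") else ["malformed output"]

def programmer_agent(errors: list[str]) -> str:
    """Third agent: stand-in for generating a code change from errors."""
    return f"patch fixing: {', '.join(errors)}"

def self_developing_cycle(query: str) -> str:
    output = search_agent(query)
    errors = surveillance_agent(output)   # the prime agent cannot see its own errors,
    if errors:                            # so a separate agent triggers the fix
        return programmer_agent(errors)
    return output

assert self_developing_cycle("Uranus") == "data for Uranus"
```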

