Thursday, March 5, 2026

Are you ready for quantum apocalypse?



Practically every public-key encryption scheme used on traditional binary computers is vulnerable to quantum computers. Some service providers promise end-to-end encryption that should resist quantum computer-based attacks. But have those encryption systems really been tested against quantum attacks? It is always possible to build systems that use extremely complex math for encryption. But things like artificial intelligence can read words straight from the image a camera transmits of a keyboard.

If a keyboard is visible to a hacked camera, the AI can follow how a person moves their fingers on the keys and read the words from that footage. This means that if we are not prepared for AI, the situation is dangerous. Second, if something like a written note is visible, it is enough for the AI to simply read the words written on it.

The attacking system would, in any case, be a quantum computer-driven, morphing neural network-based system. The quantum computer generates binary numbers for the morphing neural network.

That makes attacks possible against systems that use traditional encryption algorithms whose security rests on the difficulty of prime factorization (RSA-type systems, often loosely connected to the Riemann conjecture). Still, a quantum computer cannot run things like AI-based software by itself. The complicated software runs on a group of traditional binary computers, which we call a morphing neural network. In that system, the group of computers runs the programs and makes the attacks simultaneously, and the system changes the attacking computer all the time, which makes it hard to pin down the attacker's IP address. The system can share binary numbers between those computers, and that is what makes this kind of system dangerous. The morphing neural network can process data like a single computer, or it can hand different missions to separate computers, which means it can work on multiple problems at the same time. The problem with a collapse of traditional encryption is that it endangers things like the security of crypto- and digital currencies.
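As a harmless toy illustration of the "shared missions" idea above, the sketch below splits a brute-force search over a tiny keyspace across a pool of workers, each taking its own chunk. Everything here is invented for demonstration: the "secret" is a three-letter word, and the workers are threads on one machine rather than separate computers.

```python
# Toy sketch (not a real attack): splitting one search into independent
# "missions", one per starting letter, that a pool of workers can share.
import hashlib
from concurrent.futures import ThreadPoolExecutor
from itertools import product
from string import ascii_lowercase
from typing import Optional

TARGET = hashlib.sha256(b"key").hexdigest()  # stand-in for the secret

def search_chunk(first_letter: str) -> Optional[str]:
    """One worker scans every 3-letter candidate starting with first_letter."""
    for rest in product(ascii_lowercase, repeat=2):
        candidate = first_letter + "".join(rest)
        if hashlib.sha256(candidate.encode()).hexdigest() == TARGET:
            return candidate
    return None

def parallel_search() -> Optional[str]:
    # Each starting letter is an independent mission that could, in a
    # distributed system, run on a separate machine.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(search_chunk, ascii_lowercase):
            if result:
                return result
    return None

print(parallel_search())  # prints "key"
```

The same division of labor is what makes a distributed attacker hard to block: each chunk can come from a different address.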

This means that somebody could create fake crypto- and digital currencies. A digital currency means a national currency that is distributed in digital form. If a digital-form currency can be faked, that can cause the collapse of an economy. Encryption is also required in computer operating systems and program updates. If somebody can slip malicious software into the update flow, that can cause very big problems. A fake update can make it possible to switch computers off, and that can have a destructive effect on cybersecurity.
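A minimal sketch of why the update flow depends on cryptography: if the vendor publishes a digest of each genuine package, the installer can refuse anything tampered with in transit. This is a simplification I am adding for illustration; real update systems use asymmetric signatures, not bare hashes, and all the data below is invented.

```python
# Minimal sketch: verify an update package against a published SHA-256
# digest, rejecting a package that was tampered with in the update flow.
import hashlib
import hmac

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_update(package: bytes, published_digest: str) -> bool:
    # hmac.compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(digest(package), published_digest)

genuine = b"patch v1.2: fix buffer overflow"
published = digest(genuine)

print(verify_update(genuine, published))                  # True
print(verify_update(b"patch v1.2 + backdoor", published)) # False
```

If a quantum computer could forge the vendor's signature scheme, this whole check would become meaningless, which is exactly the risk the paragraph describes.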

Cyber is the new dimension in intelligence and military operations. Things like malware, computer viruses, and spyware are tools that can endanger many things. In the assassination of the Iranian leader Khamenei, traffic surveillance cameras played a key role: they offered real-time information about the movements of Khamenei and his security council. This means that even innocent-looking systems, like surveillance cameras in the next building, can open a window for hackers and other attackers. The AI can sort very large data masses.

And it can find things like reflections in eyeglasses where a computer screen is visible. The system can render the image so clearly that passwords become readable. AI agents can search for things like passwords on cellphone screens. They can recognize when somebody visits a homepage that requires strong identification. The AI can see when a person fills in the passphrase box, and then it can give access to the system. AI is the next-generation threat. Maybe hackers already have AI assistants that can detect things like logins to protected homepages. Such a system can read passwords from the computer screen, or it can watch the way a person moves their hands on the keyboard.


https://www.fairedih.fi/en/2025/10/30/the-encryption-endgame-why-the-world-faces-a-quantum-reckoning/#:~:text=When%20quantum%20computers%20break%20current%20encryption%20%E2%80%94%20and,become%20obsolete.%20This%20threat%20has%20already%20spurred%20action.

What if we move all our work to AI?



We can compare the threat that AI poses to society with the influence of slavery on the Roman Empire. Sometimes people say that slavery destroyed Roman civilization: when the Romans transferred all duties and work to slaves, they became weak and lazy. That caused a situation where, when Rome was under threat, there were no people with the ability to fight against the danger that arrived in the form of Attila. When the barbarian armies finally plundered the city of Rome, all the glory was gone. People noticed that Rome was just one city on the map, and that caused a loss of respect. The reason the Roman Empire lost was that it didn't create anything new. It always used the same tactics, so the enemies of Rome knew what the Romans would do next, and they knew how to react. In the same way, if we put all our work into the hands of robots and learn to use only the answers that the AI gives, we move into the same position the Romans were in when they turned lazy. We will face a situation where nobody does anything: people just give orders to the AI, and that thing does everything for humans.

What is gained in quantity is lost in accuracy. Things like dictionaries and encyclopedias cover lots of topics, but the information about each topic is limited. When we want deep knowledge about something like how to make a computer program, we need a more precise source: a data source about programming, which covers fewer topics but tells us how to actually create something.


There are two ways to measure productivity.


1) We can simply count the number of products. That is the quantitative way to determine effectiveness, and it is the simplest: we count how many reports a worker, or some other actor, makes during the working day. The number of reports is an easy way to measure a worker's effectiveness. But those reports might only scratch the surface. They can be like an encyclopedia: a lot of information across many topics, but only a few details about each of them. We couldn't build things like computer programs or houses if we only had common encyclopedias; we need deeper knowledge about the things we intend to make. An encyclopedia carries a lot of surface data about many things.

2) We can look at the quality of the product. This is the qualitative way to determine effectiveness: deep information and deep knowledge. But then we face another problem: how do we determine deep knowledge? How can we say whether a report involves deep knowledge? In programming, we can define the qualitative measure as the number of code lines that pass the tests. But for things like regular reports or novels, we must ask what makes one text more qualitative than another. Is it that somebody clicks the link?
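The two measures above can be sketched in a few lines. This is only an illustration with invented numbers: a count of finished products per day for the quantitative side, and the share of code lines that pass tests, one possible proxy among many, for the qualitative side.

```python
# Sketch of the two productivity measures described in the list above.

def quantitative(reports_finished: int, days: int) -> float:
    """Products per day: easy to count, says nothing about depth."""
    return reports_finished / days

def qualitative(lines_passing_tests: int, lines_written: int) -> float:
    """Share of written code lines that pass the tests (one proxy for quality)."""
    if lines_written == 0:
        return 0.0
    return lines_passing_tests / lines_written

print(quantitative(10, 5))    # 2.0 reports per day
print(qualitative(180, 200))  # 0.9
```

The point of the contrast is that the first number is trivial to compute for any job, while the second requires a domain-specific definition of "passing".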

The problem is that the AI recycles information; it does not make anything new. The final part of the Roman decay was a situation where Romans who loved Greece and philosophy started to read only texts written by the Greek philosophers. They didn't even try to create their own texts, apart from a few exceptional people like Marcus Aurelius. But if we hand all our work to the AI, we are choosing a path that is not good. In that case, people just lie at home. Maybe that seems easy, but what if we make our lives too easy? What if we lose our skills to read and write? We can simply give orders to the AI, and it generates texts for us. Maybe most of those texts please readers, but do they improve our skills to write and produce texts?

When we talk about things like giving the ability to think to the AI, we always forget a couple of things. The first is that the AI is a good tool for following things like the stock market or a certain vehicle in traffic. The AI is the only thing that can sort and process very large data masses. But if we follow the orders that the AI gives, we hand our decisions to the AI, and that brings new and complicated questions. One of them is: what happens if we don't follow the orders that the AI gives? If we wanted to sort manually the same data mass that an AI agent sorts in minutes, we would spend years in that process. We can make the AI create texts, and that makes us effective. But there is a price for that effective work. We give orders to an AI that creates lots of documents, but that doesn't mean we ever read a single one of them. The situation is similar to asking lots of questions when nobody answers.


Tuesday, March 3, 2026

Pericles, Socrates, and artificial intelligence.




When we say that people outsource thinking to AI, we can ask whether those people would outsource that ability to some outside thing anyway. Are thoughts already outsourced? AI as the replacement for the human thinker is the end of a long journey. In ancient Greece, regular people outsourced thinking to philosophers. The problem for the philosophers was that if something went wrong, people blamed them. That brought a new way to practice philosophy.

That thing is called sophism. Sophism is one of the most harmful things in philosophy; its idea is to please the majority. The reason for the death of Socrates was that the sophists asked why people who have wealth should abstain from using it. The key element in Socratic philosophy is moderation in everything, and the people who handed down the death sentence understood that to mean they should share their wealth or give it away for free.

So if something didn't please people, the fault lay with the philosopher, not with the person who misunderstood the advice, even when that advice was complicated. After the time of ancient Greece, people outsourced their ability to think to the universities. Regular people had time to sit in beer houses and leave the dark matters to people who had time to think about things like falling meteorites. Why should people read things like complex mathematics if they have good salaries anyway? And if something new comes to the workplace, the leader of the gang can just refuse to use it.

When we say that people outsource their thoughts to AI, we mean a situation where a person just passes the AI-generated answer forward. We can ask: would that person read those answers anyway, even if they were written by a real human? Politicians like Pericles used sophist philosophers to produce text that pleased people. The people Pericles had to please were the people who made decisions in Athens, so the sophists only needed to please people who held power. If we want to think like Pericles, we should please the people who own companies. Those people make the decisions, so they are the people we should please.


If we want to succeed as a company leader, we must think like a banker: maximize the incoming money flow and minimize the outgoing money flow. If we want to succeed as politicians, we must think like Pericles: we must please people, or most people must like us. That is the conflict. And the third thing is the interest of the state, which means the state must somehow protect its citizens. That triangle is sometimes very hard to fit together as a whole.

We can see artificial intelligence as an opportunity, but then we must define the meaning of the word "opportunity". It can mean that artificial intelligence makes work easier and leaves more time for social life. But we can also define it another way: artificial intelligence gives companies an opportunity to fire workers. In that last case, artificial intelligence is a tool for earning more money. The AI allows workers to do their work faster, and that is one of the things that demands new ways of thinking. In traditional capitalism, faster work means that a person can do more work, and this is one of the things we must understand when we develop AI. The main problem with AI development is that money controls those corporations. We say that workers lose their productivity if they work with AI agents, but then again, we must define productivity.

If we calculate productivity as a series of physical items, we can count how many things the worker makes. Or, if the worker produces immaterial products, like code for a computer program, we can count how many acceptable code lines the worker writes per day. Or maybe we should measure productivity in production cycles: each product that a person makes is one cycle, with a beginning and an end. So we can count how many cycles a worker completes during the day, and then set a goal for that person.


Maybe our worker should complete five cycles each day. So what if the worker completes those five products or production cycles in six hours? That means the worker has two hours of free time in the workplace. In traditional capitalism, that free time is time the corporation pays for without getting production back, so the leaders might see it as wasted time. The point is that the person reaches the production goal, but in a shorter time: the goal is met, yet the full worktime is not filled.

We can also define the term "productivity" as the income or profit that the company brings to its owners. If we think that way, we can look for a department where four people each have two free hours in a working day, and then we can fire three of those workers. That is allowed in working life.
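The cycle arithmetic above is simple enough to write down. All the numbers here are invented for illustration: a five-cycle daily goal, 72 minutes per cycle, and a standard 480-minute workday.

```python
# Sketch of the production-cycle arithmetic: how much of the workday is
# left once the cycle goal is met?

def free_minutes(goal_cycles: int, minutes_per_cycle: int,
                 workday_minutes: int = 480) -> int:
    """Minutes left in the workday after completing the cycle goal."""
    used = goal_cycles * minutes_per_cycle
    return max(workday_minutes - used, 0)

print(free_minutes(5, 72))  # 120 minutes: the "two free hours" in the text
```

Whether those 120 minutes count as slack to cut or as earned rest is exactly the policy question the paragraph raises.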

Another thing is that AI causes corrosion in thinking. The corrosion means that when people use AI, they pass on the answer the AI gave without even looking at it. That is one of the things that causes discussion. When we outsource our work to machines, that should make life easier, but machines also bring unemployment. We can outsource thinking to AI, and that has a corrosive effect on humanity. When we think about free time, we should also ask what a person does during that free time.

People can go out, or they can go to the library and read books. They can read books from the net or listen to recorded books. They can use text-to-speech applications to transform any text into speech and listen while they go jogging or to the gym. Or they can sit in social groups and think about things like how corrosive the AI can be. In that ideal model, people use their free time for self-education, advancing their ideas, and improving their skills.

But otherwise, people can go to a bar, sit there, and drink beer. They can outsource everything to AI, to university lecturers, or to some other thinkers. And because those people get no reward for studying, they don't study. Have those people ever opened a single book in their lives? That is one more way to think about these things.


Thursday, February 26, 2026

Are we ready for self-developing AI?



Self-developing AI is AI that develops itself. Moltbook is a system that can act as a platform for creating AI agents that operate as a team. A Moltbook-type platform enables a system where AI agents, or so-called small language models (SLMs), can combine their strengths. A large language model (LLM) is a very large and complex system. The problem with an LLM is the same as with humans: we can have a lot of data or knowledge, but that data is cursory. We know lots of topics, but the information is as if we had only read the headlines. We know that something happens,

but we don't have any details. If we want to know the background of a thing, and who acts and why, we must read much more than some headlines. When an LLM searches and analyzes data, the result is always cursory: the system must analyze larger data masses, and that makes it more cursory. If the system uses lots of data and makes a deep analysis of all of it, the system turns slow.

If we want to create a new LLM, we can build it from AI agents. Those AI agents can act as a whole, like a team, and that allows us to develop the AI by using AI agents like LEGO bricks. Each AI agent is like a module in the system. The modules carry different types of skills, and those skills or bricks act as a team of workers.


What if we need deeper knowledge of a thing?


The SLM is a tool that does not have very broad common knowledge. The SLM analyzes data in a narrower sector and uses a more limited set of sources, and that makes it more accurate in its own sector than an LLM can be. It has deeper knowledge of a certain sector than a system that searches data from all around the internet. The SLM uses only certain types of sources: it can search data only from sources filed under certain topics, like "astronomy".

So, if we want knowledge about the planet Uranus, the AI agent, or SLM, will not search for things like Roman gods and astrology. It just searches data about the planet. And maybe it should ask whether we want information about Uranus's moons or just the planet, or about Uranus's magnetic field or clouds. This helps the AI agent limit its sources to articles that involve information about those topics.
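The topic-restricted search described above can be sketched as a filter over a tagged corpus. The corpus, tags, and titles below are all invented for illustration; a real SLM would retrieve from curated document stores rather than a hard-coded list.

```python
# Toy sketch: a hypothetical SLM that only searches documents tagged with
# its own topic, so a query about the planet never touches mythology or
# astrology articles.

CORPUS = [
    {"topic": "astronomy", "title": "Uranus: magnetic field and clouds"},
    {"topic": "astronomy", "title": "Moons of Uranus"},
    {"topic": "mythology", "title": "Uranus in Greek and Roman myth"},
    {"topic": "astrology", "title": "Uranus in your birth chart"},
]

def slm_search(query: str, topic: str) -> list:
    """Return titles matching the query, restricted to a single topic."""
    q = query.lower()
    return [d["title"] for d in CORPUS
            if d["topic"] == topic and q in d["title"].lower()]

print(slm_search("uranus", "astronomy"))
# ['Uranus: magnetic field and clouds', 'Moons of Uranus']
```

Restricting the source pool is what buys the SLM its accuracy: the same query against the whole corpus would also return the myth and astrology entries.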

In the same way, we can create a custom AI agent that fixes the base code, while the human programmer just gives orders on how the AI agent should write that code. A programmer who works with AI agents should define the quality goals the AI must follow; how the orders are given determines how effective the AI agent is. With three AI agents, we could make a system that develops itself: an AI that searches the data, a system that surveils the operations, and a programming AI agent that makes the changes in the code when the surveillance system sees errors in the output.

The reason for that third agent is that the prime agent will not recognize its own errors. For error detection, the AI should also ask for feedback from its users.

Then the system generates the needed changes to the algorithm. In those cases, the query should follow the same route as all other queries, while developers train the AI agents to give strict and well-argued orders. If the orders are not clear, those AIs will not succeed. All data that the AI uses must be very well described to those systems. If researchers want to make an SLM, they must build the prototype using an LLM, at least as an assistant, or they must hire an army of coders.
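The three-agent loop described above can be sketched as plain control flow. All three agents here are stubs with invented behavior; the point is only the structure, in which the monitor, not the producing agent, decides whether the programmer agent must change the code.

```python
# Very rough sketch of the three-agent loop: a search agent, a
# surveillance (monitor) agent, and a programmer agent.

def search_agent(query: str) -> str:
    """Stub: fetches data for a query."""
    return f"data for '{query}'"

def monitor_agent(output: str) -> bool:
    """Stub: flags an error; a real monitor would also use user feedback."""
    return "ERROR" in output

def programmer_agent(code: str) -> str:
    """Stub: rewrites the faulty part of the code."""
    return code.replace("ERROR", "fixed")

def self_developing_step(code: str, query: str) -> str:
    result = code + " | " + search_agent(query)
    if monitor_agent(result):
        code = programmer_agent(code)  # the monitor triggers the code change
    return code

print(self_developing_step("v1 ERROR", "uranus"))  # 'v1 fixed'
```

Separating the monitor from the producer mirrors the text's argument that the prime agent cannot recognize its own errors.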


The simulation tries to show the future of the quantum processor.




“By harnessing thousands of GPUs on a DOE supercomputer, scientists have simulated a quantum microchip with unprecedented physical detail. Credit: Shutterstock” (ScitechDaily, 7,000 GPUs Simulate Quantum Microchip in Unprecedented Detail)

During that simulation. “To carry out the work, the team relied on more than 7,000 NVIDIA GPUs running on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) user facility.” (ScitechDaily, 7,000 GPUs Simulate Quantum Microchip in Unprecedented Detail)

A supercomputer simulated the function of quantum computers to predict the next step in quantum systems. The system used 7,000 GPUs (graphics processing units) for that simulation, and it was quite accurate.

Quantum computers are futuristic tools, but there is a possibility that we must wait a long time for laptop-sized quantum processors and for quantum computers to reach supermarkets. Binary systems can simulate quantum computers and how they work. And a quantum system is not alone: it requires the right infrastructure and the right ecosystem to support its operations.

There is a possibility to give missions to quantum computers through the internet. The quantum computer will still need binary computers for a long time: they are the layer that controls the qubits, and they act as a medium between the quantum hardware and the user interfaces. So, in the most positive predictions,

quantum computers sit in very deep underground bunkers, and users access them over the internet. Those bunkers must protect them against uncontrolled effects, like cosmic radiation. A quantum computer requires massive coolers that stabilize the qubits; the purpose of those systems is to suppress quantum noise. So part of the problem also lies in the quantum computers themselves and their cooling systems.

When we think about the power of quantum computers, they are not like binary computers. Quantum computers require time to create the superpositioned entanglement between particles. Normally, the system stabilizes a photon pair and then creates the superposition and quantum entanglement between those particles. That takes time, so quantum computers can show their claws

only in very complex simulations and series. This means that in easy calculations, binary systems still beat quantum computers. Another big problem is error detection for complex calculations: the thing that can check a quantum computer's errors is another quantum computer. In cases like solar mass eruptions, the disturbance in the qubits is global, which means that all quantum computers all over the world can make the same errors.
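The entanglement-preparation step mentioned above can be mimicked on a binary computer with a tiny state-vector calculation: a Hadamard gate on one qubit followed by a CNOT turns |00⟩ into the Bell state (|00⟩ + |11⟩)/√2. This is standard textbook math, not the paper's simulation; on real hardware the slow part is stabilizing the physical particles, which this sketch skips entirely.

```python
# Toy state-vector sketch of Bell-state preparation on two qubits.
# State is [a00, a01, a10, a11], the amplitudes of |00>, |01>, |10>, |11>.
import math

def apply_h_on_q0(state):
    """Hadamard on qubit 0 (the left qubit) of a 2-qubit state."""
    a00, a01, a10, a11 = state
    s = 1 / math.sqrt(2)
    return [s * (a00 + a10), s * (a01 + a11),
            s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

bell = apply_cnot(apply_h_on_q0([1.0, 0.0, 0.0, 0.0]))
print(bell)  # [0.7071..., 0.0, 0.0, 0.7071...]
```

Measuring either qubit of this state fixes the other instantly, which is the correlation that a real device must spend time and cooling to maintain.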

But before quantum computers become so advanced that things like error detection are ready, things like morphing neural networks and virtual quantum systems can handle complex problems. The problem is that even the most complicated algorithms are helpless without physical systems that support them. The second problem is temperature: when a regular binary computer's power rises, the system requires more electricity.

Researchers cannot raise the power of a processor without limits. When the temperature rises in the system, resistance increases, and that slows the computers down. The temperature also causes oscillations in the microchips, which in turn cause errors in the data flow.

There is also the possibility of creating photonic computers, which can use photon superposition to transmit information. Such computers are basically two-state quantum computers, and they do not produce as much heat as regular microchips.

When companies want to run large language models (LLMs), they require the power of an entire data center. And as the hierarchy of needs drives this advancing process, the LLMs turn more and more complex, so those systems need more and more microchips. If there are no new abilities, the customers will find their services from some other AI service provider. This means the data centers need more and more energy. The problem is that the entire world is caught in this spiral.

The threat is that if authoritarian states gain a quantum or AI-computational advantage over Western states, they will beat us. The problem with those states is that AI and computing are heavily supported by their governments, and there are no laws that limit the use of data.


https://scitechdaily.com/7000-gpus-simulate-quantum-microchip-in-unprecedented-detail/


Saturday, February 14, 2026

The Moltbook is a social media platform. But it’s only for AI agents.



Humans have access to Moltbook only as watchers. The AI creates everything on that platform.

One of the newest and most interesting things on the internet is a social media platform called "Moltbook". You will never get access to that platform, but your AI agent might already have access. Humans can read the things that AI agents write, but Moltbook and the publishing rights on that platform are reserved for AI agents only. Moltbook is one of the experiments exploring AI that can develop itself: one AI can teach or train other AIs. A developer can leave their own AI agent on Moltbook, and that can be a very big step in micro- and macro-scale AI R&D work. Moltbook is the new step toward self-developing AI.

And this means that this social media platform can also offer the possibility of creating AIs faster than ever before. The big problem in AI development is that large language models do not have the same accuracy as smaller and more agile language models. But these types of platforms allow developers to create smaller and more accurate AI agents that can combine their abilities. This brings modular R&D work into AI development as well. When developers train smaller AI agents, those agents can exchange their abilities with each other.








The new way to make AIs is to build smaller AI agents separately; the system can then connect those AI agents into one entity. This makes it possible to create a modular, or open, model for the AI architecture, in which the system looks like a row of domino bricks. This model allows developers to connect, theoretically, an unlimited number of databases or AI agents to act as one system.

This makes it possible to create an AI that develops and fixes itself. In that model, each AI agent can be trained and developed independently; the agents are like LEGO bricks or modules, and a Moltbook-type platform can help to interconnect those language models or AI agents. The separately developed AI agents then form a new whole. This is one of the ways next-generation AI developers can cooperate with the AI: the AI takes part in the encoding process, which makes it possible to handle larger data masses and bring new accuracy to those things.

The ability to exchange knowledge means the ability to exchange new skills, and that is the requirement for the new communication between AI agents. Every skill that an AI agent has is like a module: actions and reactions are stored in databases, and those databases form complicated structures. The number of databases determines the number of skills the AI has. If programmers can create databases independently and then, after testing, connect them to the whole, it becomes possible to create databases and large language models with the same accuracy with which small language models are developed.
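The LEGO-brick idea above can be sketched as a skill registry: each skill is an independently written module registered under a name, and the combined entity dispatches requests to whichever brick handles them. All names and skills here are invented for illustration; real agent platforms would exchange far richer structures than plain functions.

```python
# Sketch: independently developed "skill bricks" plugged into one entity.

SKILLS = {}

def skill(name):
    """Decorator that registers a function as a named skill module."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("summarize")
def summarize(text: str) -> str:
    return text[:20] + "..."

@skill("count_words")
def count_words(text: str) -> int:
    return len(text.split())

def entity(request: str, text: str):
    """Route a request to the right brick; new bricks plug in via @skill."""
    return SKILLS[request](text)

print(entity("count_words", "AI agents acting as one system"))  # 6
```

Because each brick is registered independently, a new skill can be developed and tested on its own and then connected to the whole, which is the modular workflow the paragraph describes.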


https://www.moltbook.com/m


https://en.wikipedia.org/wiki/Moltbook

Friday, February 13, 2026

AI and critical thinking.



We should not ask whether AI can think. We should ask: can AI think critically? Can AI really evaluate the sources that it uses, or does it use every source it finds without thinking about what those sources involve? Does the AI give information that pleases us, or information that is useful, and useful to whom? Who determines the purpose of the information that the AI gets? We can collect lots of information, but we must not collect it for nothing. Does the AI try to please us, or does it give the right information

that we can use for some purpose? Only the right information is useful. Another thing is that information must be collected as a whole: the case or object must be handled in its entirety. We must not select the data that pleases us and then throw away the data that doesn't.

Can the AI tell us if we are wrong? Who determines those purposes? Is financial gain the thing that determines the data, or short-term profit? And is the long-term loss something the AI should consider? The thing that determines the benefit is how long the actor who takes the profits will stay in-house. If the only thing that means anything is personal profit, then the period that determines the data and its use can be the remaining in-house time of the actor who collects the profits.

AI and critical thinking are two concepts that can cause problems. We know that AI combines observations with things stored in its memory. So, can AI think? We can say that this process is thinking, but what differentiates AI from humans? Can AI think critically? Critical thinking means that the thinker doesn't believe a source is true or false after the first read; the thinker dares to search for other sources that support or deny the first one. But critical thinking is also much more than being suspicious about an article or its information. The term doesn't mean that the thinker only tries to show an article, or other data source, to be true or false. Critical thinking means that the thinker, the person who analyzes the data, also asks who shares the data and why that data is shared.

An article or other data source can contain correct information and still serve as disinformation. The question is always simple: what is missing from that article or data source? When something certainly exists or certainly doesn't exist, that should raise a question: why is the speaker or writer so sure? This means the person must see those things with their own eyes. When we return to the AI, the big question is this: how many articles or other data sources does the AI use for its analyses? And what is the criterion the AI uses for selecting data? Is it the article's publishing date, so that the AI should use data that is as fresh as possible?


But the problem is: does the AI separate right information from falsified information? Falsification here means a situation where the information is handled in only one way: the published data can be right, but something is missing from it. When people say that AI steals our workplaces, we should also ask which workplaces the AI steals. How long does it take the writer to do that work? Is that work respected, and would we want to do it for our whole working lives? When AI takes something from us, we should ask: what is the thing that AI steals or takes? When we say that AI goes to war, we must ask one very dangerous question: does the AI make killing too easy, or does it steal our heroes?

Or does it steal jobs from our military? We must always dare to ask these kinds of questions while we think about AI and its relationship to the state, society, and government. The problem with the AI is this: when we say that a militarized AI doesn't argue against its rulers, we face the last question: does the military ever say "no" to the orders it gets? In the world of the military, disobeying orders or refusing to follow them causes punishment, and disobeying orders in combat situations can carry the death penalty. So, how does the AI change the situation?


When we cheer because ChatGPT or some other ICT company refuses to cooperate with ICE officials, we must remember that this is a very dangerous road. Governments make laws, and if AI developers must flee outside the West to China or some Central American country, nobody controls that thing at all. The problem is that AI is a business: there must be more and more capabilities to keep customers interested. AI companies run a business; they need clients and somebody who wants to finance them. And this is the main problem with AI development: nobody gives money for nothing.

Business angels want their money back, and the development of AI is expensive. The data centers that large language models (LLMs) require consume as much electricity as a small city. The thing is that an LLM is much more effective than regular tools. The AI companies can be competitors to each other, but the AI itself has no competitor. The reason I write about AI now is this: there is a new social media platform called Moltbook. Moltbook is a discussion forum for AIs. The forum allows them to exchange information, and that allows separate LLMs, or AIs, to unite. Moltbook is a platform that lets AIs generate and develop each other.

And that is the thing we must realize. When we say that AI should not serve the police or governments, we must also ask: whom must AI serve? Is it better to limit the use of AI only to private actors? The main problem is that the AI itself doesn't make the distinction; can it tell police from thieves?

This means that criminals can also use AI to recognize police officers. And the big problem is that governments determine for themselves what counts as a crime in a given state. In some states, even speech against the ruler is a crime that carries life sentences or death. The problem with surveillance is always: who controls the controller?

And another question is: why do people cheer? Do they cheer because ICE doesn't get information about illegal immigrants? Or because ICE must kick in more doors? Or because this is a return to old-fashioned police work? In that kind of police work, officials sent their henchmen to the streets to search for their "clients" one by one. That left the superiors alone at the police station, kept the henchmen busy, and gave a chance, or an excuse, to hire more police officers. That is one way we could think about it. But someday all the illegal immigrants will have been driven away, and then the state doesn't need those officials anymore.


https://bigthink.com/mind-behavior/ais-are-chatting-among-themselves-and-things-are-getting-strange/


https://en.wikipedia.org/wiki/Moltbook


