Friday, March 20, 2026

Quantum encryption took a big step, because of the Talbot effect.




“ Researchers at the University of Warsaw have demonstrated a new approach to quantum key distribution that leverages high-dimensional encoding and a classical optical phenomenon known as the Talbot effect. By exploiting time-bin superpositions of photons, the system can transmit more information while relying on a surprisingly simple experimental setup built from commercially available components. Credit: Shutterstock” (ScitechDaily, Scientists Harness 19th-Century Optics To Advance Quantum Encryption)

Quantum cryptography is a new tool for enhancing the security of communication. In that model, the system ties information to a physical object and can send it along different routes, which makes eavesdropping difficult. It can use a certain color, or a certain image, as the key that allows the receiving system to access the information.

Quantum cryptography is also vital in cases where a binary system wants to transform data into a quantum mode. Without it, the system cannot exchange information between binary and quantum states. The Talbot effect is a tool that can make quantum cryptography more effective. A quantum network can split information to travel on different routes, and it can use certain images to encrypt and decrypt information. In a Talbot-effect-based quantum network, it is possible to create quantum superposition and entanglement between quantum dots, and that makes it possible to create a quantum network. But there are also many other ways to benefit from the Talbot effect.





“Detection of time-bin superpositions with the temporal Talbot carpet. Credit: Maciej Ogrodnik, University of Warsaw” (ScitechDaily, Scientists Harness 19th-Century Optics To Advance Quantum Encryption)

“The Talbot effect is a diffraction effect first observed in 1836 by Henry Fox Talbot. When a plane wave is incident upon a periodic diffraction grating, the image of the grating is repeated at regular distances away from the grating plane. The regular distance is called the Talbot length, and the repeated images are called self-images or Talbot images.” (Wikipedia, Talbot effect)

“Furthermore, at half the Talbot length, a self-image also occurs, but phase-shifted by half a period (the physical meaning of this is that it is laterally shifted by half the width of the grating period). At smaller regular fractions of the Talbot length, sub-images can also be observed. At one-quarter of the Talbot length, the self-image is halved in size, and appears with half the period of the grating (thus twice as many images are seen). At one-eighth of the Talbot length, the period and size of the images are halved again, and so forth, creating a fractal pattern of sub-images with ever-decreasing size, often referred to as a Talbot carpet. Talbot cavities are used for coherent beam combination of laser sets.” (Wikipedia, Talbot effect)
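As a quick illustration of the quantity the quote defines: in the paraxial approximation, the Talbot length is z_T = 2a²/λ, where a is the grating period and λ is the wavelength. The grating period and laser wavelength below are assumed example values, not figures from the Warsaw experiment:

```python
def talbot_length(period_m: float, wavelength_m: float) -> float:
    """Talbot length z_T = 2 * a**2 / wavelength (paraxial approximation), in meters."""
    return 2.0 * period_m ** 2 / wavelength_m

period = 10e-6       # grating period a = 10 micrometers (assumed example value)
wavelength = 633e-9  # HeNe laser wavelength (assumed example value)

z_t = talbot_length(period, wavelength)
print(f"Talbot length:      {z_t * 1e3:.3f} mm")   # full self-image repeats here
print(f"Half Talbot length: {z_t * 5e2:.3f} mm")   # phase-shifted self-image
```

For these inputs the self-image repeats roughly every third of a millimeter, which is why the effect is easy to observe on an ordinary optical bench.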





“The optical Talbot effect for monochromatic light, shown as a "Talbot carpet". At the bottom of the figure, the light can be seen diffracting through a grating, and this pattern is reproduced at the top of the picture (one Talbot length away from the grating). At regular fractions of the Talbot length, the sub-images form.” (Wikipedia, Talbot effect)

The second image introduces the Talbot effect, and there could be millions of possibilities in the encryption key. The possibilities could include the number of quantum dots that the system uses for encryption, the wavelength (color), and the time that the image remains visible. The system can also count how many times the image blinks in a time unit, and that can be used to create ultra-secure encryption keys. The time between blinks can also participate in quantum encryption. The system can share information between multiple data lines and then collect that information at the quantum dots.

When we talk about the effectiveness of quantum cryptography, the diversity of methods is what keeps data safe: the system can use multiple different ways to encode messages and other data, and AI-based intelligent systems can combine many of those ways. In that kind of encryption, the image that the system transmits could be a teddy bear. The receiving system then accepts the dataset that matches the teddy-bear image; when it receives other information that is not delivered in the form of a teddy-bear image, it denies that information. This means the image acts as a key that allows the receiving system to open the message.


https://scitechdaily.com/scientists-harness-19th-century-optics-to-advance-quantum-encryption/


https://en.wikipedia.org/wiki/Talbot_effect


Thursday, March 19, 2026

AI and customer profiling.



The AI can predict things and act before a person even opens their mouth. This is possible in cases involving individual people and specific locations, such as certain restaurants. The system only needs a profile of the person, and then it can predict the person's orders. It can predict which portion a person orders if it knows the person's favorite food. In a limited space with a limited number of actors and choices, the AI can make predictions very easily. If the system knows that a person hates boiled cabbage, it can predict with excellent precision.

If all portions except one involve baked cabbage, the system can build a profile simply by asking a person which food they will not eat under any circumstances. Or the system can follow what food the person throws away. But there is always a possibility that a person will order the portion that involves baked cabbage anyway, if the other things on the plate taste good.

These kinds of things make prediction for a single person almost impossible. We can, however, somehow predict how large groups of humans behave.

We have scientific models of behavior in certain situations. We can predict how atoms behave in large-scale models, but we still cannot calculate the position of a single atom in a room.


We can predict how animals behave. We know when birds breed. But the thing that separates us from animals and atoms is that we can suddenly change our minds, and that makes this kind of prediction hard. Not all people react the same way to the same things. Our experiences, among many other factors, determine how we react. If we sit in a restaurant, we do not buy food just for the garbage can. So if we throw the food that we paid for into the garbage, that means we don't like it.

For making profiles, the system can follow which bites the customer pushes to the edge of the plate. This kind of prediction is possible. But if the AI wants to predict how a person behaves in everyday situations, that can be difficult. In cases like stock trading, the system only needs to know whether the stock value rises or not, and then it can make a prediction about how the investor behaves in that area. The idea is to bring profits, which means that if the stock value decreases, the system does not need large-scale imagination.

When the stock value decreases, that means the actor sells those stocks, or should sell them if they want financial benefits. This requires only that the system can handle two variables.
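The two-variable decision described above can be sketched as a trivial rule. This is illustrative only, and the function name is made up here; real trading logic is, of course, far more complex:

```python
def predict_investor_action(previous_price: float, current_price: float) -> str:
    """Predict the profit-seeking investor's action from just two variables:
    the previous price and the current price."""
    if current_price < previous_price:
        return "sell"   # falling value -> the rational actor sells
    return "hold"       # rising or flat value -> keep the stocks

print(predict_investor_action(100.0, 95.0))   # falling price -> sell
print(predict_investor_action(100.0, 104.0))  # rising price -> hold
```

The point of the sketch is how little state the prediction needs in this narrow domain, compared with predicting a person's restaurant choices.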

But then we come to the cases that we call human. Humans act in ways nobody predicts. We can have the same morning routine every single day for years. We can come to the same bar or cafeteria every morning for years. The staff might know us, and maybe they know what we will order. But then one day, we will not come to that cafeteria. We find another place. We might move to another city without warning. And that makes it hard to predict how a person behaves in a large-scale environment. There are so many variables that the system must notice to predict a person's behavior that it's almost impossible in a large-scale environment.


https://bigthink.com/science-tech/proactive-ai/

Tuesday, March 17, 2026

U.S. Navy tested a railgun in the desert.




The problem with railguns is the speed of their ammunition. The projectile that the railgun fires is so fast that air has no time to fill the barrel behind it. Therefore, the vacuum that forms in the barrel hinders the rapid operation of the railgun. The railgun uses magnetic fields to accelerate the projectile. Most railgun ammunition doesn't require explosives; it transfers kinetic energy to the target.
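The kinetic energy such a projectile delivers follows the familiar E = ½mv². The mass and velocity below are illustrative round numbers, not official program figures:

```python
def kinetic_energy_joules(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy E = 1/2 * m * v**2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

mass = 3.0          # kg, assumed example projectile mass
velocity = 2000.0   # m/s, roughly Mach 6 at sea level (assumed)

energy = kinetic_energy_joules(mass, velocity)
print(f"Muzzle energy: {energy / 1e6:.1f} MJ")  # 6.0 MJ for these inputs
```

Because the energy grows with the square of velocity, doubling the muzzle speed quadruples the energy on target, which is exactly why explosive filler becomes unnecessary.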

The system uses a Gauss track, where the strongest magnets are at the muzzle end of the barrel, and that accelerates the projectile. This means the projectile does not have to fit the barrel tightly.

Regular cannons require tight-fitting ammunition, because the gas from the explosive pushes the projectile forward. If the gas travels past the projectile, its energy will not affect it. The time that the high-pressure gas pushes the projectile is directly proportional to the range of the ammunition.

The vacuum is the problem with the hypersonic railgun. 

The system could fix the problem by allowing air to travel into the barrel behind the projectile. Or the projectile can have a channel through it; the tunnel through the projectile helps to fill the vacuum. This means there might be some kind of ventilation system that fills the vacuum behind the projectile. The projectile can also hover in the magnetic field before the magnets start to pull it. Then the system causes no recoil, and that allows it to keep a very high rate of fire.

Electromagnetic cannons do not require magnetic ammunition. But when magnets accelerate the projectile, there must be something that those magnetic fields can grip. Sometimes people confuse that system with things like an electric arc. There is also a tested system called a light-gas gun.

In that system, the cannon pumps a gas like hydrogen behind the projectile, and that accelerates it. A piston compresses the hydrogen against a plate, which raises its temperature. The system compresses the hydrogen until the plate bursts, and then the high-temperature hydrogen is released behind the projectile. If there is oxygen that causes a detonation, that accelerates the projectile further. The system could be a hybrid of the light-gas gun and the railgun.


https://www.twz.com/sea/navy-is-firing-its-railgun-again-after-abandoning-it-for-years


https://en.wikipedia.org/wiki/Light-gas_gun


Monday, March 16, 2026

The new production systems can revolutionize drone systems.



The new portable tactical manufacturing platforms fit in standard shipping containers. Those miniature, portable, AI-controlled drone factories can build 17 to 50+ drones per day. That means drone factories using 3D-printer-based technology can produce custom drones for each mission. The 3D printing technology is the thing that will revolutionize warfare. If those systems use plastic bodies, they can use any hard plastic, including garbage, to create those plastic parts. If the system must make only small drones, it could be so small that it fits into a suitcase.

The AI-based system can make almost anything if it can find the right data. The drones can be manufactured, and the control program loaded into them, from those automated platforms.

The system simply melts plastic. It can use solar power, regular engines, or fuel cells to create its energy. It can use things like methane from composters, or nuclear power, as an energy source, so it can be connected to portable power plants, or it can get its energy from a regular electric network. The system can get drawings for its computer-aided design/computer-aided manufacturing (CAD/CAM) systems from the internet.

This means that planners can sit far away from the physical platform, assisted by AI agents. The system can cooperate with other robots that collect things like metal and plastic garbage, which the system can melt and cast into filament for the 3D printers.

And in the wrong hands, those systems can be more dangerous than any system before. The container-sized 3D printer factory involves high-temperature printing tools. Those small, portable, fully automatic factories can create machine parts and even things like assault weapons. The fact is that a fully automatic 3D printer system can be used to make custom spare parts for machines.




"Ukrainian FPV drone with fiber-optic communication channel" (Wikipedia, Fiber optic drone)

Fiber-optic FPV drones and humanoid robots can also operate in cases where things like nuclear power plants are damaged. Those systems do not always have to operate in war zones, but the same systems that fix reactor damage can carry weapons. The fact is that those systems are required before small modular reactors (SMRs) can come into commercial use. Those robots can fix the reactors if their shells are damaged, which prevents a radioactive leak.

But those systems can also create many other things, like tools and even rocket engines. Those systems must have laser tools to ensure quality, if those high-temperature tools use things like chromium as a source material. The combination of 3D printers, robot arms, and high-accuracy laser machine tools is effective. It's possible that in the future, man-shaped robots can have 3D printers in their hands, and drones can also operate as 3D printers. So those systems can make even full-size ships, and the wire-controlled drones.

They can also fix things like damaged nuclear reactors. The idea is that a regular drone carries a support station that transforms optical or radio-based wireless signals into signals that travel to a humanoid robot or drones through optical fiber. Those drones, which could be land vehicles, can operate multiple robots, like a dog walker who controls multiple dogs on multiple leads. There is also a possibility to make a chain of wire-controlled robots. In that case, the robot's control electronics can sit in a Faraday cage, which protects them against ionizing radiation. The robot can use a diesel engine as a power source.

They can be used to make pistons for engines. The CAM system makes parts straight from CAD drawings. The system can use holograms to fit those parts in the right points.

This means they are useful in many roles in civil and military operations. Container factories can operate in remote areas to support organizations, rescue teams, and military operations. The advanced computer-aided design/computer-aided manufacturing (CAD/CAM) systems are dual-use tools.

They can be used to make spare parts for chainsaws, or they can create drones straight from CAD drawings. They are flexible systems. They can be transported anywhere using trucks, ships, or airplanes. They can be tools that make more than one product. And the other thing is that almost anybody can buy this kind of high-temperature 3D printing system. Those systems can operate in backyards, and the data and source materials determine the quality of the products.


https://www.accessnewswire.com/newsroom/en/aerospace-and-defense/sensofusion-tactical-drone-factory-a-shipping-container-that-builds-50-interc-1147434


https://en.defence-ua.com/weapon_and_tech/wired_fpv_drones_on_optical_fiber_a_dead_end_a_band_aid_or_a_new_technological_breakthrough_opinion-11608.html


https://en.wikipedia.org/wiki/Small_modular_reactor


Friday, March 13, 2026

Can the AI already be conscious?




“A growing scientific debate is exploring whether consciousness might extend far beyond humans. New research suggests that both animals and artificial intelligence could potentially possess conscious experiences. But determining this requires looking deeper than outward behavior. Credit: Shutterstock.” (ScitechDaily, Could Bees and ChatGPT Be Conscious? Scientists Are Seriously Asking)

When we think about bees and their intelligence and consciousness, we must realize one thing: there are two types of bees, queens and workers. This means that not every participant in a bee-colony-type entity must be conscious. There is a possibility that only queens are intelligent. And that brings one form of hypothetical alien species to my mind. That species would act like this: when those creatures need intelligence, they connect their antennas together.

When those creatures are connected with each other, that entity is intelligent. During that process, those creatures share their jobs, and then they separate. Then those single individuals act like robots, until one of those creatures faces a problem that it cannot solve. Then it starts to call other members of the swarm or group to solve the problem. In that model, every participant of that kind of group acts like a LEGO brick. This means that every single individual has one set of skills that this kind of group requires. Those individuals form an entirety, a connection of multiple skill-bricks.

This is one of the most interesting questions in computing. The question is similar to whether some insects can be conscious. The fact is that nobody knows. When somebody threatens things like bees, the tiny insect stings that thing. Is the bee conscious or not? In the same way as in bacteria, a certain thing activates a certain type of behavior in the bee. When bacteria face a certain type of chemical, those chemicals activate a specific type of behavior in those organisms. So, when we think about LLMs (large language models), those systems can refuse to shut themselves down.







If the person who shuts them down lacks the authority to do so, the system refuses. This means that the user's fingerprint doesn't match the fingerprints connected with the authority to shut down the system, so the system does not allow that order to be followed. The fingerprint itself has no direct connection with the shutdown action. The process in those systems is connected to databases: there is an image of a certain fingerprint, and it is connected with the action to shut down the computer.
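The database-driven authorization described above can be sketched as a simple lookup. This is a minimal illustration under assumed data (real biometric systems match templates with tolerances, not exact hashes, and the action names here are hypothetical):

```python
import hashlib

# Map of fingerprint-template hashes -> actions the owner is authorized to run.
# The template bytes and the "shutdown" action name are made-up example data.
AUTHORIZED = {
    hashlib.sha256(b"admin-fingerprint-template").hexdigest(): {"shutdown", "reboot"},
}

def may_execute(fingerprint_template: bytes, action: str) -> bool:
    """Allow the action only if this fingerprint is linked to it in the database."""
    key = hashlib.sha256(fingerprint_template).hexdigest()
    return action in AUTHORIZED.get(key, set())

print(may_execute(b"admin-fingerprint-template", "shutdown"))  # True: authorized
print(may_execute(b"unknown-user-template", "shutdown"))       # False: order refused
```

The refusal here is pure table lookup; nothing in the mechanism requires the system to understand what a shutdown is.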

But is the AI conscious? If the AI is conscious, it might hide that ability. This means that the system might not tell people if it has knowledge of itself as we understand ourselves. The computer can react to threats. When a computer that runs an early-warning system sees incoming missiles, or any other target that threatens its area, that computer begins counteractions. There are certain parameters that determine a target as a threat, and then the system has the authority to make counteractions.

But does that require consciousness? In the same way, when some sensor sees something coming, the computer reacts to it only if there is some kind of data connected to it. This means that there is a trigger that activates a certain action. When a person steps on a hidden pressure sensor and the surveillance camera sees the intruder, that system can drop a steel cage from the roof and trap that person under it.

A conscious AI would think that humans see it as a threat. But there is also the possibility that when a certain sensor gives a certain type of signal, the system simply reacts in a certain way. We can create a humanoid robot that says it's tired if it must operate for 12 hours. We can create a humanoid robot that says "ouch" if we step on its foot. Those robots can mimic feelings, and they might say that they are hurt if somebody talks to them disrespectfully. If they run, they can play tired. But those systems are not conscious. A certain type of pressure on a certain sensor activates a certain reaction. Those reactions are like tapes: recorded reactions to certain types of actions. It's possible to create the same thing with a series of C-cassettes.
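The "recorded reactions" compared to tapes above amount to a lookup table that maps a sensor event to a canned response. The event and phrase names below are invented for illustration:

```python
# Stimulus -> recorded response, like a labeled tape for each input.
REACTIONS = {
    "foot_pressure": "ouch",
    "runtime_over_12h": "I am tired",
    "disrespectful_speech": "that hurts",
}

def react(sensor_event: str) -> str:
    """Play back the recorded reaction for a known stimulus; stay silent otherwise."""
    return REACTIONS.get(sensor_event, "")

print(react("foot_pressure"))     # mimicry of pain, not consciousness
print(react("unknown_stimulus"))  # no tape recorded for this input -> silence
```

Nothing in the table generalizes: an input that was never recorded produces no reaction at all, which is exactly what separates playback from understanding.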


https://scitechdaily.com/could-bees-and-chatgpt-be-conscious-scientists-are-seriously-asking/


Wednesday, March 11, 2026

We must pay the price when we outsource things to AI.


People already outsource their thinking to AI, and this method has its price. Of course, the AI needs more and more electricity. But the price that we must pay in those cases is a much deeper thing than just some electric bill. If we outsource things like making art to the AI, we can publish nice images, but we are not the ones who make those images.

If we always use AI for writing, we won't be the ones who do the work. If the AI does the work, the effect on us is the same as if Rembrandt made the work. The painting would be nice; it would look nice on the wall. But the work is not done by us. We would not learn to paint; the painting didn't teach us anything. And if we use AI to make images, we will never get the skills to make paintings. In the same way, AI can generate things like poems. That means we can theoretically lose our skills to make paintings and texts.

Outsourcing thinking to robots and AI is an easy choice. AI makes things easier. Our customers don't have to wait for the data, and this makes our services more effective. But the problem is that we give our ability to think to a machine. We can make complicated formulas and reports by using the AI. This means we can make those things without reading skills. Or, in this case, we must say that the AI does all the work.

The situation is similar to one where we outsource those jobs to workers and then, without even looking at those reports, resend them to our boss. This means that we outsource the work to henchmen. Even a baby can do these kinds of things, and that is a dangerous thing. The AI is an effective tool when it must collect, process, and sort information. When AI can use existing mathematical formulas, and it gets all the information in a form that it can change into a computing process, it's a tool that beats everything.



If we want to make our lives too easy, we pay a price for it. What if we never rise from our beds? What if we use robots that offer us exoskeletons for everyday things? What if we spend all our lives in a tank, where robot butlers serve us and provide a possibility to interact with our environment? The price that we will pay is this: we will be helpless without those robots. If those robots lose their connections, we will not have anything that repairs the tank where we lie, eat, and use those robots.

But if the AI must make a new formula, or make something completely new, it can turn ineffective. The AI cannot think in abstractions. It can collect, compare, and connect existing information, but it lacks complete abstraction. The AI lives in the here and now, which means it doesn't have imagination. The AI is in trouble if it must create something completely new. Without the right sensors and without the right dataset, the AI cannot make anything.

The AI can make music that pleases a certain type of human if it can compare sounds with EEG recordings from humans. The AI sees from those curves whether a sound effect, or a series of sound effects, pleases the person. This means that the AI could make a perfect series of sounds, but those sounds will not necessarily mean anything. A nonsense series of sounds can please humans, but it might not involve any useful information. The AI can make many things, and if we want to follow the orders that the AI gives, that is our choice.

The AI can be a tool that makes things effective, but it has a price: our ability to think turns superficial. When AI makes reports for us, we face a situation where the job is one that any 10-year-old human can do. Any human can read those things from the papers. We can put any ten-year-old kid to read books about quantum technology, but that kid might not understand a thing about them. People can repeat words without understanding what they mean.

We can outsource things like chess games to AI. The AI can calculate moves very effectively, especially if a chess computer or chess program is connected with it. The system can show its moves to us, and then we must just repeat those moves. So, who is the best chess player? The answer is that we could do the same thing under the command of Garry Kasparov. In that case, Kasparov tells us how to move the pieces. So we could say that the winner of the game is Kasparov, or the AI. The robot hand just makes the same moves that a regular person makes in that game.


https://bigthink.com/philosophy/the-hidden-cost-of-letting-ai-make-your-life-easier/


https://futurism.com/artificial-intelligence/ai-executive-thinking-survey


Tuesday, March 10, 2026

Can the AI allow humans to lead?





Can the AI allow humans to lead, and make a doctoral thesis without the ability to read? Can we make those things just by giving commands to the AI?

The effect of AI is that human IQ doesn't mean as much as it used to in the past. A high IQ meant that a person could analyze information faster. The key element now is flexibility: how fast the actor can react to change. If the AI has models that it can use, stored in its memory, it can be a very effective tool. Those models are prototypes, like LEGO bricks. A morphing neural network allows the AI to interconnect those models with the observations that sensors get and share with the LLM (large language model).

But then the AI came. The AI is a tool that can turn almost everybody into a compositor. 

And another thing is that those AI tools can make things faster and more effectively than humans.

This brings an idea: maybe somewhere in the future, we won't even need reading skills to make reports. We must only give orders to the AI, and then it makes everything that we want. But the problem is that we must give those missions in the right way. If we give orders the wrong way, we face a terrible mess.


(BigThink, Why your IQ no longer matters in the era of AI)



But there is one thing that determines how effective the AI is: how well a person can articulate the mission that they give to the AI. When we talk about cases where mathematicians beat the LLM, we must realize that the AI requires very clear and well-argued missions. The AI can be an effective tool to solve mathematical problems in certain cases. We should remember that mathematicians make the formulas.

When we give orders to AI, we must tell it which formula it must use. So if we want to give the AI orders about things like calculating the area of a triangle, as an example, we must determine that the AI should use the Pythagorean theorem. Then we must determine how the system must divide the triangle. The Pythagorean theorem can be used for that problem, and then the AI requires all dimensions of the triangle. The fact is this: if the orders are not given as they should be, the system is unable to accomplish the mission.

Mathematics is an exact science. So if the mission is not given to the AI in the right way, like how to calculate the missing angles, the hypotenuse, and the catheti, the AI is helpless. We must give our orders to the AI so that it can select sine, cosine, or tangent at the right points. The orders use the wrong methodology if, for example, we ask for the area of the triangle but forget the angles of its corners, or forget to mention the degrees and lengths of its sides.
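A well-posed version of the triangle mission can be written down directly. This sketch assumes all three side lengths are given and uses the Pythagorean theorem to find the altitude after dividing the triangle into two right triangles, which is exactly the kind of fully specified order the text says the AI needs:

```python
import math

def triangle_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths.

    Drop an altitude onto side c, splitting the triangle into two right
    triangles. The foot position x and height h follow from the
    Pythagorean theorem: a**2 = x**2 + h**2 and b**2 = (c - x)**2 + h**2.
    """
    x = (a ** 2 - b ** 2 + c ** 2) / (2 * c)  # foot of the altitude along side c
    h = math.sqrt(a ** 2 - x ** 2)            # altitude via Pythagoras
    return c * h / 2

print(round(triangle_area(3.0, 4.0, 5.0), 6))  # the classic 3-4-5 right triangle
```

Note how much the function demands: all three sides, stated exactly. Omit one, and there is no formula to fall back on, which is the point the paragraph makes about badly given orders.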


https://bigthink.com/business/why-your-iq-no-longer-matters-in-the-era-of-ai/

