Wednesday, November 26, 2025

Mind-altering weapons are not science fiction.



Image: Total News

Mind-altering or “mind-control” weapons are one of the biggest risks in modern life. They include hallucinogens, subliminal acoustic systems, and other systems that manipulate consciousness and brain functions. Weapons that act on the central nervous system (CNS) concern researchers, and there are efforts to bring them under the Hague Convention, which prohibits the use of chemical weapons. There are four major types of these weapons.

1) Chemical agents such as heroin, cocaine, fentanyl, LSD, BZ (Substance 78), and other neuroactive compounds. Most of these chemicals have been used in POW and political prison camps for interrogations. Sodium amytal, the so-called “truth serum,” is also sometimes associated with this category.

2) Acoustic systems that direct infrasound at opponents. Subliminal commands delivered this way bypass the nervous system's normal resistance, which makes them hard to resist.

3) Visual systems that flash rapid images at the observer. The images never reach consciousness, but, as with subliminal infrasound, the signals slip past the nervous system's defences. There is suspicion that these methods are used on some internet pages.

4) Electromagnetic systems, which are probably under development. They send electromagnetic signals that affect the electric carriers of neurons, which means they can feed information into neurons. One possibility is a system that sends electric impulses into the nervous system: essentially a modified electroshock device that transmits EEG curves to the targeted person's brain.

Such a system could use recorded EEG, and today AI can generate those brainwaves synthetically. The hardware could be disguised as jewellery: a ring could deliver the electric impulses to the target's nervous system, so the brainwasher only needs to give the victim a ring or a wristwatch, and the device then sends the impulses to the target. The system could also be hidden in a hat.

Narcotics are used to force people to cooperate with interrogators. The person is turned into an addict: if they cooperate, they get drugs; if they don't, they get none. This is classical brainwashing, sometimes called the Pavlov method. When the controller gets the desired response, the person gets a prize; when the response does not please the controller, the person gets a punishment.

Hallucinogens like LSD and BZ are used to lower the level of consciousness. There have been plans to load these chemicals into chemical bombs, giving the bombs a stunning effect that leaves opponents unable to operate.

“The only time a CNS-acting weapon was used at scale was by the Russian Federation in 2002 to end the Moscow theatre siege. Security forces used fentanyl derivatives to end the siege, in which armed Chechen militants had taken 900 theatregoers hostage.” (The Guardian, Mind-altering ‘brain weapons’ no longer only science fiction, say researchers)

“Most of the hostages were freed, but more than 120 died from the effects of the chemical agents, and an undetermined number suffered long-term damage or died prematurely.” (The Guardian, Mind-altering ‘brain weapons’ no longer only science fiction, say researchers)

Hallucinogens can also be used to plant false memories in people's minds, turning them against their former friends.

These methods can be combined with virtual reality (VR) systems, but such setups only work in limited spaces or against individual people. Acoustic and electromagnetic systems, by contrast, can send signals to large groups. Acoustic systems are quite easy to build into loudspeakers, as legends like the “Psychotron” suggest.



"An NYPD LRAD on top of a police humvee outside the 2004 Republican National Convention! (Wikipedia, Long-range acoustic device). The non-lethal systems to control riots' role is to deny radicals from becoming martyrs. Deaths can fuel riots. 



Mind-altering weapons could be used to motivate future soldiers. Those soldiers may use brain-computer interfaces (BCIs) to communicate with other systems, which raises the idea of using the same channel to deliver motivation. BCI systems can also be used for teaching, and that means they can be misused. A BCI opens a path into a person's brain, and that path could be exploited in “security operations”. The problem with microchips that interact with the nervous system arises when somebody obtains them illegally, or when the systems are hacked and give an attacker a route into the person's brain.


Such systems require only an infrasound loudspeaker and a radio telephone. Infrasound systems have sometimes been planned for use in hostage situations. Infrasound combined with ultrasound causes nausea in the targeted person, and this type of system can be used on the battlefield to disturb enemy snipers. In some countries, acoustic systems are also used for interrogations.

These systems also suit Ivan Pavlov's method of “classical conditioning”. At the same time, they can disarm dangerous persons: long-range acoustic devices (LRADs) are meant to save lives by offering an alternative to batons and rifles. But such systems can be dangerous in the wrong hands.

They were probably tested by the Nazis and in Soviet GULAG camps. The use of such weapons causes headaches and sleep problems, which has fed suspicions that things like “Havana syndrome” are caused by weapons of this kind.

With modern technology, the new threat lies in brain-computer interface (BCI) systems. Brain-implanted microchips make it possible to move paralyzed limbs, but the same system could be used to turn a person into a robot. Modern technology allows the creation of systems far more advanced than a loudspeaker.

Modern technology opens the road to even more dangerous systems. Nano- and biotechnology allow new types of CNS weapons: bacteria or “trained neurons” that transmit information straight into the brain. It may even be possible to create nanomachines that seek out the brain's cortex.

Once in place, such systems would start transmitting data to the brain. These technologies are being developed to repair injuries caused by accidents, but the same technology can be extremely dangerous in the wrong hands. The dilemma is that the same technology can also save lives in cases like violent sieges.

Imagination is the only limit here. In some futuristic scenarios, an entire person is cloned, and the politically correct clone then replaces the original.


https://nypost.com/2025/11/24/science/scientists-issue-wake-up-call-over-mind-altering-brain-weapons/



https://www.theguardian.com/world/2025/nov/22/mind-altering-brain-weapons-no-longer-only-science-fiction-say-researchers


https://totalnews.com/researchers-raise-alarm-about-mind-altering-brain-weapons/


https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface



https://en.wikipedia.org/wiki/Havana_syndrome




https://en.wikipedia.org/wiki/Ivan_Pavlov



https://en.wikipedia.org/wiki/Long-range_acoustic_device



https://en.wikipedia.org/wiki/3-Quinuclidinyl_benzilate




Can robots replace humans?



Elon Musk's vision of a future without work is one of the most radical models for future ecosystems. A world where robots do all the work is technically possible, but we must remember that outsourcing work to robots and neural networks requires a change in the government's revenue logic. Today that logic is based on a model where most people work: they spend most of their adult lives in working life and pay taxes. The main problem with this model is that people like me work in private companies, and those companies are not social security offices. Their mission is to bring money to their owners. This means that if there is no work in the company, they can fire their workers, or, put nicely, outsource those workers and their lives to the government.

That shifts costs onto the government. Human workers are necessary until human-shaped robots take their places, but to a company whose purpose is to bring money to its owners, the human worker is just a necessary cost. Purely from an economic point of view, a robot becomes the rational choice once its yearly cost drops below that of a human worker. Those robots require electricity, and so do the supercomputers that run the algorithms they need.
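As an illustration only, with completely invented numbers, a small break-even sketch shows the logic: the robot wins once its annualized purchase price plus running costs falls below a worker's yearly cost.

# Illustrative break-even sketch; all figures below are assumptions,
# not data from the article or from any real vendor.

robot_price = 250_000.0          # one-time purchase price (assumed)
robot_lifetime_years = 8         # depreciation period (assumed)
robot_running_per_year = 6_000.0 # electricity and maintenance (assumed)

human_cost_per_year = 55_000.0   # wages plus employer costs (assumed)

robot_cost_per_year = robot_price / robot_lifetime_years + robot_running_per_year

print(f"Robot per year: {robot_cost_per_year:,.0f}")
print(f"Human per year: {human_cost_per_year:,.0f}")
print("Robot is cheaper" if robot_cost_per_year < human_cost_per_year
      else "Human is cheaper")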

The problem with ultimate capitalism is that the human worker is a necessary cost, a part of the production line or value chain. The reason robots have not yet replaced human workers is that they are expensive and their onboard computing capacity is limited. But modern WLAN networks let robots connect to supercomputers and even quantum computers, and in environments with high-power radiation those robots can fall back on optical-wire control.

In other words, robots can use optical wires to carry information. Human-shaped robots can also use the same tools as humans, which means the same robot that works as a welder can work as a soldier, and that is one of the biggest threats in robotics. A robot soldier can cooperate with vehicles that carry high-power computers, and robot soldiers can be guided over optical wire, which means jammers will not affect them.

Such robots can connect over optical wires to mobile armoured vehicles that carry more powerful computers. They can also have laser LEDs in their hands and receivers at multiple points on their bodies, so they can form a protected wired network by touching each other's shoulders or holding hands. That lets them merge their data systems into one entity and run complex algorithms without central computers.


Such systems can run complicated algorithms. Independently operating robots with advanced CAD/CAM (computer-aided design / computer-aided manufacturing) systems let the operator simply feed parameters such as CAD drawings and material lists to the system, and the robot workers with their 3D printers then make the products.
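A minimal sketch of what "feeding parameters to the system" could look like, assuming a hypothetical job format; the field names and values are invented for illustration and do not describe any real CAD/CAM interface.

# Hypothetical job description an operator might hand to an autonomous
# manufacturing cell; the structure itself is an assumption.
from dataclasses import dataclass

@dataclass
class ManufacturingJob:
    cad_file: str                # path to the CAD drawing
    materials: dict[str, float]  # material name -> quantity (kg)
    quantity: int = 1            # number of parts to produce
    process: str = "3d_print"    # e.g. "3d_print", "weld", "mill"

job = ManufacturingJob(
    cad_file="bracket_v3.step",
    materials={"PLA": 1.2},
    quantity=50,
)
print(job)   # this record is everything the robot cell would receive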

The fact is that capitalism guarantees competition, and competition keeps markets healthy. If somebody makes bad products, people will not buy them; we can call this one version of Darwinism. If an actor cannot respond to changes in the ecosystem, it gets removed, or it is forced to change its internal ecosystem. A product can be bad simply because it is too expensive. And one thing we must realise is that no company can operate without governmental acceptance: every single company works in some government's jurisdiction.

One version of industrial espionage is simple: invite companies to make their products in your country. Some countries have security laws that force companies to cooperate with the authorities whenever the authorities want. If production happens in those countries, the authorities can observe it at any time. Another version is to force companies to move their production lines to those countries.

That happens by driving down the production lines in our own countries. It is simple to dump merchandise onto our markets for free. Free goods can seem like a very nice thing, but they distort competition. Some multinational corporations hijack markets by dumping free products into them, driving their competitors into bankruptcy, and then the company can use its dominant position as it wants. The operation guarantees a dominant market position and the power to dictate prices.

But can robots replace humans? That requires a change of attitude. We must prepare for situations where some people will not go to work, and realise that even welders and cleaners can be replaced by robots. This requires governments, and individuals too, to change their revenue logic. We must change our attitudes.

https://www.rudebaguette.com/en/2025/11/elon-musk-claims-ai-will-end-work-and-poverty-can-we-trust-his-vision-for-our-future/


Wednesday, November 19, 2025

Money and AI are hard to fit together.



The AI ecosystem is taking shape. NVIDIA provides the microchips and other systems for it. The problem with that ecosystem is its dependence on raw materials: if one part of the ecosystem falls, the entire AI business falls. The new desk-sized supercomputers that can run AI algorithms will not replace cloud-based solutions, because the AI that runs on those table-top supercomputers still requires training.

So users might rely on cloud-based solutions to give the AI the training it needs, and then push the “trained code” down to the table-top system. The cloud-based system is like a lake, and the table-top version is like a droplet separated from it. But developing these things requires money.

The problems with AI companies, some investors say, are minimal profits and enormous costs. A special feature of the AI business is that the product is immaterial. To sustain users' interest, the AI must become more and more advanced, which means fast advancement is a requirement for a successful AI company. Large language models require a lot of computing power.

The need for new processors grows every time new AI solutions come to market, because the company must also renew its hardware, and that costs a lot of money. Another thing is that AI is quite a new product on the market, so investors cannot know whether future legislation will put limits on AI development. A further thing that makes AI a problematic investment is the way AI companies appear in some media.

AI workers are cast in the role reserved for stepchildren. AI companies are not known for treating their workers as important people; they are sometimes described as treating workers like garbage, or as a necessary evil that exists only because nothing can replace them yet, or they simply wait for the AI that will replace those workers. These things make it hard for investors to estimate how big a profit their investments can bring. The biggest problem with AI is this.

It is a new and complicated product. This means the AI requires an entire data center to be effective. The need for those large data centers arises in two ways: first, the algorithm itself is complicated, and second, the AI service provider needs servers that can offer the service to as many users as possible.

Companies must have enough users to collect enough license payments. But that creates another problem: as new users arrive, the AI requires larger and larger servers. There are also free applications whose purpose is to entice users to buy licenses that give them more capabilities. Investors need their money back, but the requirement for new systems means companies must keep investing in their data centers, and those investments require money.
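A toy calculation, again with invented figures, makes the scaling problem concrete: each paying user brings license revenue but also adds serving cost, and the fixed data-center investment is only covered past a break-even point.

# Toy model with assumed numbers; not based on any real AI company's figures.

fixed_datacenter_cost = 2_000_000.0   # yearly hardware and facility cost (assumed)
serving_cost_per_user = 30.0          # compute cost per paying user per year (assumed)
license_price_per_user = 120.0        # yearly license fee (assumed)

margin_per_user = license_price_per_user - serving_cost_per_user
break_even_users = fixed_datacenter_cost / margin_per_user

print(f"Margin per user per year: {margin_per_user:.0f}")
print(f"Break-even paying users:  {break_even_users:,.0f}")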

AI interests investors, but the problem is that the AI ecosystem is in danger of overheating. When investors pour money into these companies, their valuations rise far above their real value. Investor money is always a little problematic: investors want their money back.

They expect it back with an agreed rate of interest, which means the companies cannot put that money into new systems. There is even a possibility that one investor's profit is paid out of another investor's money. Another way to invest in a company is to buy stocks: those purchases put money into the company's cash box and raise its valuation. But too much investment pumps so-called empty value into the company's stock, and that can cause a very fast collapse in its share price.

The problem is this: when an investor sees an investment losing value, they feel the need to sell. If a big investor sells a lot of stock in a short period, that can collapse the company's share price.


https://futurism.com/artificial-intelligence/ai-investors-furious-bubble


https://futurism.com/artificial-intelligence/mercor-meta-ai-labor


https://futurism.com/artificial-intelligence/financial-world-nvidia-earnings-call


Augmented reality turns any surface into a keyboard.



Augmented reality makes it possible to turn any surface into a keyboard, and that advance can open a new era for mobile systems. Many AI bosses say that the mobile telephone as we know it is dead; the future belongs to smart glasses and other more sophisticated devices. The technology that connects smart glasses and augmented reality into a practical solution already exists. The system must merge the view coming from outside with the data it shows the user.

The system can be based on head-up display (HUD) technology, where the user sees transparent data overlaid on the real world behind it. It can include things like an IR camera that lets a person see in the dark, and it can use a speech-recognition interface, but speech is problematic in noisy environments.

Data security is also endangered if the person uses voice commands. So a virtual keyboard, drawn on the display, might be the answer. The system follows the fingers with a camera, and the person calibrates the virtual keyboard to a surface. The system might also have a virtual mouse, with gesture control indicating when the person wants to move the cursor on the screen. The problem with this kind of system is simple.

It requires a new interface. Of course, the user can still use a physical mouse or trackball, but a virtual mouse would be the most compact and safest option. The mobile telephone's role turns into that of a central unit for these wearable systems. Virtual and augmented reality systems must be safe to use, and the problem is how to keep them safe while the user is walking. VR systems are impressive tools: they can project car gauges onto the head-up display, or deliver data from a drone flying over the person's head.

Such a drone feed can show whether something is waiting around the corner. The same system can also be used to browse web pages. A cloud-based architecture that outsources calculations to data centers reduces the need for high-power processors in the mobile device.
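As a rough sketch of the calibration idea described above: once the camera reports a fingertip position, the system only has to map that point onto a calibrated key grid. The layout, grid size, and example coordinates below are invented; a real system would get the fingertip position from a hand-tracking camera.

# Map a tracked fingertip position onto a virtual keyboard grid.
# The calibration rectangle and the 3x10 layout are assumptions for this sketch.

KEYS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]   # three rows of ten keys

def key_at(x: float, y: float, top_left=(100, 300), cell=(40, 40)) -> str | None:
    """Return the key under pixel (x, y), or None if outside the keyboard."""
    col = int((x - top_left[0]) // cell[0])
    row = int((y - top_left[1]) // cell[1])
    if 0 <= row < len(KEYS) and 0 <= col < len(KEYS[row]):
        return KEYS[row][col]
    return None

print(key_at(275, 352))   # fingertip over the middle row -> 'g' for these example coordinates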


https://techxplore.com/news/2025-11-augmented-reality-tech-surface-keyboard.html

Monday, November 10, 2025

Can AI reach the level of human intelligence?



The fact is that we don't know. A Microsoft manager said that AI will never reach the same level of intelligence as humans, and that raises interesting questions, because we must first determine what intelligence means. We can do many things without knowing anything beyond what to do when we see a sign. We can drive cars and use computers without knowing a thing about how they work. We need no deep knowledge of how the codes work, or how the computer contacts databases through QR codes; we only have to point the camera at the QR code.

The phone then connects us to the homepage, where we can read things. The system seems to know everything, but the process is quite simple to build: the developer only has to make the homepage and link the QR code to it. The computer still does not know whether the information it collects and shares is right or wrong, or whether it is something we don't even understand. We can put letters one after another and make rows of letters, but randomly collected strings of marks normally contain no information we can use. The AI can collect data over the internet and seem quite wise, yet know nothing about the topics it collects.
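To illustrate how simple that linking step is, here is a sketch using the widely used third-party "qrcode" package for Python; the URL is only a placeholder.

# Generate a QR code image that points to a web page.
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

url = "https://example.com/product-info"   # placeholder address
img = qrcode.make(url)                      # encode the URL into a QR symbol
img.save("product_info_qr.png")             # anyone scanning this lands on the page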


When the AI collects information from the net, it uses keywords and then gathers data. Almost any human can do the same thing. We can compile lists of, say, Japanese homepages that contain a certain type of information without knowing a single Japanese word. The user only needs a paper showing the character marks to look for, and then searches for similar marks in the headlines. This is one example of how we can do impressive things without knowing anything about the topic itself: we only need to know what the symbols look like and then search for the same marks in the texts.

That requires no deep knowledge of the Japanese language. This is how the AI works: it searches for similarities on homepages and then collects that data into a new whole. The AI can drive a car using GPS and a system that recognizes the road, but that does not mean it can think as we do. In the same way, AI can perform surgical operations and find things like tumors. The AI only has to know what healthy tissue looks like, and the system must have an image of cancerous tissue; then it can remove all abnormal cells, or remove everything that does not match the target's background.
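A tiny sketch of that "matching marks without understanding them" idea: the code finds headlines containing given characters with no model of the language at all. The Japanese headlines and the search marks are invented examples.

# Find headlines that contain the given character marks, with no
# understanding of what the marks mean. Example strings are invented.

headlines = [
    "東京で新しいロボット展が開幕",      # invented Japanese example headlines
    "天気予報:週末は雨",
    "ロボット犬が工場で試験運用",
]
marks = "ロボット"   # the symbols we were told to look for

matches = [h for h in headlines if marks in h]
for h in matches:
    print(h)          # prints the two headlines containing the marks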

In AI systems, the QR code is a trigger that sends the system to a new homepage. The QR code can be replaced by some other image, for example the user's face in a login process. Even if we use face recognition to make systems more secure, that does not mean the same image cannot activate other processes. The same system that recognizes human faces can recognize things like vehicles on a highway, if we train it for that task. The action connected to those images determines how the AI or robot behaves.

It reacts to whatever is connected. We only have to give the system the vehicles' names and images it can compare against, but in that process the system does not really know what those things are. It just connects an image to a certain word. The AI does not know what the words really mean, yet it can recognize any vehicle on the road: it simply links a word with an image and then shares that link with the user.
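In code, that "connecting an image to a word" step can be as plain as a lookup table from a classifier's numeric output to a name. The class indices and names here are assumptions for illustration, not any real model's labels.

# A classifier only outputs a class index; the "meaning" is just a lookup.
# The index-to-name table below is an invented example.

LABELS = {0: "car", 1: "truck", 2: "bus", 3: "motorcycle"}

def describe(prediction: int) -> str:
    """Turn a raw class index into the word shown to the user."""
    return LABELS.get(prediction, "unknown vehicle")

print(describe(1))   # -> "truck"; the system never knows what a truck is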

If we want to create a robot that reacts to some kind of action, we must store a model of that action in the computer's memory. When the system sees a similar action through its sensors, it activates the response connected to that case.
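A minimal sketch of that trigger idea, with an invented similarity measure, threshold, and pattern vectors: when the sensed pattern is close enough to a stored model, the connected response runs.

# Stored action models paired with responses; when a sensed pattern is
# similar enough to a stored model, the linked handler fires.
# The similarity measure, threshold, and vectors are assumptions for this sketch.

def similarity(a: list[float], b: list[float]) -> float:
    """Crude similarity: 1 / (1 + mean absolute difference)."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + diff)

MODELS = {
    "wave": ([0.9, 0.1, 0.8, 0.2], lambda: print("robot waves back")),
    "stop": ([0.0, 1.0, 0.0, 1.0], lambda: print("robot halts")),
}

def react(sensed: list[float], threshold: float = 0.8) -> None:
    for name, (model, action) in MODELS.items():
        if similarity(sensed, model) >= threshold:
            print(f"matched stored model: {name}")
            action()          # run the behaviour connected to the match
            return
    print("no stored model matches; do nothing")

react([0.85, 0.15, 0.75, 0.25])   # close to "wave" -> robot waves back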


Wednesday, November 5, 2025

Robots are terrifying weapons.



Drones are new systems, but counter-systems are already under development.


The Chinese military has done extensive work on robotics and on cooperation between human soldiers and autonomous systems. China's new robot wolves are impressive systems: they can cooperate with other manned and unmanned systems, carry lightweight machine guns and grenade launchers, and even carry out suicide attacks. If robot dogs have suction cups on their feet, they can climb straight glass walls.

Those robot dogs can have attachment points where a quadcopter grips and carries them, or they might have propellers built into their feet, which would make them more flexible than ever before. Modular systems, where separate quadcopters carry the robot dogs, are easier to build.

The strength of those robots showed in a drill, where they overran the Taiwanese defence in ten seconds. The battle was a simulation, but it has prompted serious discussion about the effectiveness of Western militaries. We must remember that robots and drones are new systems. Microwave, radio-based EMP, and laser systems that can jam or destroy robots on the battlefield are already being developed, and we know that the tanks of the future will carry high-power EMP cannons that use radio and/or microwaves to destroy incoming drone swarms.





“GDLS partnered with Epirus for the TRX Leonidas. The solution takes Epirus’ Leonidas high-power microwave (HPM) platform and mounts it on GDLS’ Tracked Robot 10-ton (TRX) unmanned ground vehicle. (Photo courtesy of Epirus.)” (Breakingdefense.com, GDLS to roll out drone-killer robot, tank-launched switchblades, more at AUSA)



EMP systems that can defend entire cities are easy to make: high-power radio or microwave impulses are all that is needed to destroy entire drone swarms, and those weapons can be installed on aircraft, on other drones, and on mobile platforms. A high-power radar can emit the same radio or microwave impulses to destroy drone swarms. The difference between jammers and EMP is that EMP destroys the electronics, while a jammer only denies communication.

Drones and other unmanned systems are advancing, and computer technology advances faster than ever before. High-power computing makes single drones more intelligent and more independent. These new drones don't require networks, so low-power jammers are ineffective against them, just as they are ineffective against drones controlled over optical wire. That is why the power of these EM weapons must increase.

That means EMP systems must be built to drop the drones themselves, because microwave systems can affect multiple drones at the same time. Microwave amplification by stimulated emission of radiation (MASER), the “microwave laser,” makes it possible to create coherent, highly precise microwave beams that can destroy individual drones.


https://breakingdefense.com/2025/10/gdls-to-roll-out-drone-killer-robot-tank-launched-switchblades-more-at-ausa/


https://interestingengineering.com/innovation/china-tests-robot-wolves-drones


https://interestingengineering.com/military/chinas-new-wolf-robots-breach





When an AI project turns into a disaster.




"Entities must not be multiplied beyond necessity."


William of Ockham, English philosopher, 14th century.


Remember Occam’s razor during AI projects. 


"Entities must not be multiplied beyond necessity." Entities can be any work that we do. We should not do the same thing twice. Without necessity. Everything that is made without necessity is useless. 

Popularly, the principle is sometimes paraphrased as "of two competing theories, the simpler explanation of an entity is to be preferred." In AI projects, that principle can take the form: "Of two competing models, the simpler model of an entity, or work, is to be preferred."

Keep the orders that you give to the AI as simple, clear, and minimal as you can. They should involve only what is necessary for the work. Orders should be written in proper language, so follow grammar when writing them. Don't write the text yourself; just tell the AI what it should do and leave the work to it. Don't hand the AI a finished piece of work if you want to use it as a helper.

Remember Occam's razor. Give the AI only the necessary information. Don't make orders unnecessarily complicated, and don't pad them with information that doesn't belong to the mission. When the AI writes a business letter, don't describe how the sun is shining; give it only the information it needs for the task. Long orders tend to contain more mistakes than short ones.
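As a concrete illustration of the rule, here is one way the same request could be written. Both prompts are invented examples; the point is only the contrast between a padded order and a minimal one.

# Two versions of the same instruction to an AI assistant.
# Both strings are invented; only the second follows "Occam's razor".

padded_prompt = (
    "Hi! The weather is lovely today and I have been very busy lately. "
    "Anyway, I was thinking that maybe, if you have time, you could perhaps "
    "write something to our customer about the late delivery, thanks a lot!"
)

minimal_prompt = (
    "Write a short, polite letter to a customer: "
    "their order #1234 is delayed by one week; offer free shipping as compensation."
)

# The minimal prompt contains only what the task needs: audience, message, remedy.
print(len(padded_prompt.split()), "words vs", len(minimal_prompt.split()), "words")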


And the beginning is always the most difficult part of a project.


Sometimes, or almost always, an AI project turns into a disaster, and then we start to find out what went wrong. The answer can be that the AI was misunderstood. People had very high expectations, and when the AI did not meet everything they expected, the project was considered a failure. The biggest mistake is to treat the AI as a real expert. People order the AI to write some report or text and then simply copy-paste it into an email without even reading it. We should treat the AI as a secretary: if we use it to make reports, we should read the text before we send it onward.

The second thing is this: we should train the AI. When we first take an AI into use, or start an AI project, we must train it; we cannot use AI as a production tool without training the system. Even the simplest AI algorithms, like grammar checkers, require training before they do things the way we want. Training the AI means that people use it in their everyday work. The AI needs access to data so that it can learn, and without that access the AI program, or project, will fail. So if an AI program failed, one reason could be that people simply didn't use it: if people don't use the AI, it doesn't learn.


And that causes failure. So when we start an AI project, we must make sure that people use it, especially at the beginning, when the AI doesn't yet have enough data about its users to create things automatically, and the untrained AI can make many errors. We must realize that the AI is not ready the moment we create usernames and passwords; the system needs training before it works correctly, and it learns only when we use it. In the same way, workers don't learn if they don't do everyday jobs, and if we don't give them access to the necessary data, they learn nothing in their work.

The AI requires clear orders to do its duty; if orders are unclear or badly written, misunderstandings follow. Training the AI takes time, and like any worker, the AI needs access to information so it can learn how to cooperate with humans. At the beginning of the project, the work can probably be done faster and more easily with the old-fashioned tools. In the same way, when computers came to offices, qualified typists could work faster than the computer users, but eventually the computers beat the workers who used paper.


Automatic grammar checkers helped people who were not such skilled typists. And when networks came to offices, computer users could share their documents across the whole network and correct errors on the screen, which required no physical paper. Just as hackers can attack networks, people can steal physical papers, and if a company uses millions of sheets each year, physical paper creates very big bills.

The problem is how to motivate people to use AI. Why should people use AI if it takes their jobs? That is one of the things people should think about: what would you do with an AI project if you knew its success would get you fired? The problem with working life is that we are expected to be over-effective, to work all the time, with no time for breaks. The attitude that AI offers the chance to fire half the workers is dangerous.


So when people start their AI project, they should follow two golden rules. The first is that workplace AI is for work use. The second is to follow Occam's razor in every part of the AI's use: keep the orders the AI gets as simple as possible, because the AI requires clear and precise orders. Occam's razor is the idea that William of Ockham introduced in the 14th century, and the golden rule here is to remove everything unnecessary from the AI and from the orders it gets. The use of the AI should also stay connected to the work. If the AI gets information that doesn't belong to work, that can cause a mess. The AI doesn't think as we do; it doesn't separate information connected to working life from the information it would use for love letters, and if the AI is trained on the wrong type of information, the result is catastrophic.

An AI project means a new tool is coming to the workplace. People need preparation, and they must understand that the AI is a tool: it makes the things people order it to make. People need instructions for the project. They need to know what kinds of orders they can give and what type of writing the AI requires. The AI needs clear and direct orders that contain only the necessary information. We should follow the guidance the philosopher William of Ockham wrote down in the 14th century: people should not complicate things unnecessarily, and things should be presented as simply as possible. That principle is called Occam's razor.

People should remove everything unnecessary from the orders they give to the AI. When we want the AI to write a letter to customers, we should give direct orders about the text for the customer and not tell the AI how nice the weather is. The AI needs only the things that should appear in the customer service letter; everything that doesn't belong in the letter should be removed. That raises one more question: should a company limit the AI it has brought in to work use only? The problem is that the AI uses all the data it gets for learning, and if people write things like love letters with it, that can cause misunderstandings. People should keep everything except what they need for work out of the workplace's AI solutions.


https://en.wikipedia.org/wiki/Occam%27s_razor

