Wednesday, June 18, 2025

Researchers found interesting things in brains.




Researchers have found new and interesting details in human brains. First, our brains transmit light. That light raises an interesting idea: could brains also have optical, photon-based ways to transmit data between neurons? And if neurons have that ability, how effective and versatile is it? We know that there is nothing unnecessary in our brains, so the light in our brains must have something to do with neurons. But do neurons use that channel to transmit complicated information, or is it meant only for cleaning the neural channels?

That interesting light raises another question: does the effect have some connection to the light that people report seeing in near-death experiences? As death approaches, the neural channels empty of neurotransmitters and electrical activity. That means our nerves are more receptive to weak signals than usual. So could that light mediate some kind of interaction between neurons or axons? Is there some point on neurons that reacts to that ultra-weak photon emission, UPE?

Does our own neural activity normally drown that light out? There is an observation that dead organisms shine a dimmer light than living ones, and the light may grow dimmer as a creature approaches death. An article in the Journal of Physical Chemistry Letters, “Imaging Ultraweak Photon Emission from Living and Dead Mice and from Plants under Stress”, reports that the ultra-weak photon emission from dead mice and from stressed plants turns dimmer.

And the question is this: can humans perceive that ultra-weak photon emission and its changes subliminally? The article says that all living organisms emit a weak light that disappears when the creature dies. Mammals also emit IR radiation, but the main question is whether the ultra-weak photon emission, UPE, happens on purpose or is some kind of leak. And can humans see the phenomenon even though the observation never reaches our consciousness?

There are two things that self-learning systems must do to become effective. Effectiveness means that the AI, like human brains, should ignore irrelevant information. If that doesn't happen, the databases and the data mass in the system keep growing. When the system makes a decision, it must select the right data from the data that it has, and then make the decision using only the relevant data. This makes the situation problematic: the system must decide what kind of data it will need in the future.
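To make that concrete, here is a minimal sketch of a memory store that scores items for relevance and prunes the rest. The scoring rule (keyword overlap with recent queries), the class name, and the thresholds are all illustrative assumptions, not a description of any real system:

    # Sketch: a memory store that drops low-relevance items so the
    # database does not grow without bound. The relevance score here
    # (keyword overlap with recent queries) is a toy assumption.

    from collections import deque

    class PrunedMemory:
        def __init__(self, capacity=1000):
            self.items = []            # stored item texts
            self.recent_queries = deque(maxlen=50)
            self.capacity = capacity

        def relevance(self, text):
            # Toy score: how many words the item shares with recent queries.
            words = set(text.lower().split())
            return sum(len(words & set(q.lower().split()))
                       for q in self.recent_queries)

        def remember(self, text):
            self.items.append(text)
            if len(self.items) > self.capacity:
                # Keep only the items that still look relevant.
                self.items.sort(key=self.relevance, reverse=True)
                self.items = self.items[: self.capacity // 2]

        def recall(self, query, k=3):
            self.recent_queries.append(query)
            return sorted(self.items, key=self.relevance, reverse=True)[:k]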

And that is quite hard to predict. When we learn something, we cannot be sure we will ever need those skills again. Maybe we never need the skills that we learned in the military. But as we see, the future is hard to predict. The other thing that the AI must do is to adjust the timing of its processors, as human brains do. In human brains, brain cells oscillate at multiple different frequencies. Scientists say that those differences in oscillation frequency exist to avoid rush hours in the axons.

That means brain cells give each other time to clear the axons. Because brain cells run at different frequencies, the brain can control the axons and prevent a situation where multiple neurons send data into the same axon at the same time. Those multiple rhythms let the brain avoid congestion in the axons. The same idea could bring a fundamental advance in technology. If the system that runs the AI has no controller in its architecture that makes the processors operate at slightly different times, all processors may send data to the same data gate at the same moment. That causes a rush and jams the system immediately. In electric systems that use electric impulses for data transmission, processors operating at the same moment can even form standing waves in the data channel, and that can burn the system.
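As a toy illustration of why that staggering matters, this little sketch counts collisions when several producers write to one shared channel. The numbers are arbitrary assumptions, chosen only to show the effect:

    # Toy simulation: several producers write to one shared channel.
    # With identical timing they all collide; with small phase offsets
    # (like neurons oscillating at slightly different frequencies)
    # the collisions disappear.

    def collisions(n_producers, period, offsets, steps=100):
        hits = {}
        for p in range(n_producers):
            for t in range(offsets[p], steps, period):
                hits.setdefault(t, []).append(p)
        # A collision is any time step where two producers send at once.
        return sum(1 for senders in hits.values() if len(senders) > 1)

    same_phase = collisions(4, period=10, offsets=[0, 0, 0, 0])
    staggered  = collisions(4, period=10, offsets=[0, 2, 5, 7])
    print(same_phase, staggered)   # 10 collisions vs. 0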

There are many other interesting details in the human brain. The light opens the vision that brains might also have an optical way to transport information. Researchers are trying to find out the purpose of that light, and if they find a point in, or on, neurons that reacts to it, they will have found a new level in the brain. Another interesting detail is that different parts of the same neuron learn in different ways. That means the neuron itself can be more intelligent and versatile than we thought.


https://neurosciencenews.com/hippocampus-neuron-rhythm-29277/


https://www.psypost.org/different-parts-of-the-same-neuron-learn-in-different-ways-study-finds/


https://www.psypost.org/neuroscientists-discover-biological-mechanism-that-helps-the-brain-ignore-irrelevant-information/


https://pubs.acs.org/doi/10.1021/acs.jpclett.4c03546


The AI removes trainees from workplaces. And that is not a good thing.



The AI doesn't take your job. It prevents new workers from entering the field. That raises the question of how workers can ever improve their skills. What if humans, especially ICT workers, lose their basic skills? What if all programming turns into cases where the system worker just gives orders to the AI in normal language, and the AI follows the instructions like a human coder?

But the main thing is that the requirement of effectiveness and ultra-capitalism forces company leaders to make that choice. They choose AI instead of hiring programmers. And that is one of the biggest problems in the ICT business. But then we must remember that today is the end of a longer road: AI is the next step in a continuum where things like software coding were outsourced to countries like India. Western coders didn't get jobs because it was cheaper to hire experienced workers from India or other far-away countries.

When companies outsource coding to the Far East, they become vulnerable through that work. Those workers operate in countries that are members of BRICS, so they can work under the control of intelligence services or other authorities who order them to make spyware and other malicious tools. We who live in Western democracies learned that if somebody acts as a spy, that person will be arrested. In the same way, we believed that if somebody hacks into some country, we need only request that the local authorities arrest those people. We didn't expect that those hackers worked for the Chinese intelligence service or government, and that they were protected by the government.

The next logical step is that the AI starts to write the code. And that takes jobs from humans. This means that programmers lose their basic skills, and without basic skills they have no advanced skills. Programming is like any learning. If we compare a programmer's progress with going to school, we must realize that every person in the world writes their first words once; before that, they must learn to read. Before we can learn advanced mathematics, we must learn the basics, so we all once calculated 1+1=2. If we don't learn the basics, we cannot learn anything new.

And that is the beginning. Without the first class, we will not learn anything. In the same way, when we want to learn to code or make computer programs, we must learn basic skills first. Without basic skills there is no ability to learn advanced programming. If companies don't offer jobs to trainees, people cannot learn the skills that they need at the expert level.

That has effects across the entire ecosystem. If the boss doesn't know how the code is made, that can cause problems. To observe and supervise subordinates' work, the boss needs to know what they do. And if the boss doesn't know what the subordinates should be doing, that creates a situation where somebody can inject malicious code into the program. In the age of modern communication, a system needs less than a second to infect its target.

There are articles about a North Korean mobile telephone that was secretly smuggled to the West. Those mobile telephones are an Orwellian dystopic nightmare. By connecting those telephones to AI, the leader can surveil every single citizen 24/7. The AI can tell if somebody uses forbidden words.

And that raises a question: what if somebody slips that kind of mobile telephone into some general's office? Maybe some key person's family member "wins" a mobile telephone on the net, and then there are the surveillance programs. There is also the possibility that ordinary hackers get those telephones, and copy and customize the software for their own purposes.

Every expert has been a trainee once in their life. The requirement in working life today is that when a person comes to the workplace, they must know everything in the first minute after opening their computer. This is the route that opens the opportunity for AI: the AI learns things in minutes. Another thing that we must realize is national security. If we outsource critical coding to some far-away country, we cannot control what those people do. They can give the critical code to hackers who work for China or North Korea. In those countries, the government is the ultimate authority, and there is no way to speak against its orders. If the government orders people to work as hackers, the person has no chance to decline.

And those systems can turn very dangerous in such cases. If somebody builds a backdoor into the system, it offers a route even into critical infrastructure. What if somebody orders all Chinese-made routers and other network tools to shut down? That can cause problems in everyday life. And if one wrong microchip slips into the computers that control an advanced stealth fighter, that component can deliver computer viruses into the system, or steal vital data from it. Every time something is done outside the watching eye, there is a possibility that somebody makes something that can cause very big trouble. Things like microchips equipped with malicious software are tools that can break national security on a large scale.



Tuesday, June 17, 2025

Privacy versus security.



When we talk about security, we must ask whose security the act serves. We know that the internet offers the greatest propaganda platform we have ever seen. The net is full of tools that are used to prove that writers are humans. AI-based applications make it possible to share data to billions of homepages and social media applications in seconds.

The data that AI creates can overwhelm almost any private server on our planet, and that makes it possible to use AI-created data to block entire web services. Confirming that people who use the net are who they claim to be is one of the arguments used to say that people should use their real identities on the net. Anonymous use and confirmed use both have their supporters. Anonymous use allows users to report corruption and many other things.

And that makes people support that way of using the net. On the other side, anonymous use offers a chance for cyber attackers and disinformation deliverers to operate on the net. Things like AI agents can operate in targeted networks, steal information, and deliver it to other users. That kind of thing could be put in order by forcing people to confirm that they are humans, and then to tell who they really are. But that is similar to U.S. firearms laws: those rules don't stop propagandists or psychological operators from pushing their fake information onto the net.

Those people can operate especially freely if they work under state control. Disinformation actors can use fake or stolen identities, and the authorities can confirm those faked identities. If we want to deliver propaganda from Russia, for example, we need computers in some state like Finland. Then we can open a VPN connection to that computer from Moscow and start delivering the information to the net through that remote computer located in Finland. So, in that case, we would ride on the Finnish networks. The operator based in Russia tends to stay away from Western countries; the assistants do everything, and that lets the person who actually knows something stay under cover.



Bye bye algorithms.





We are waiting for the next step in the AI development and research process. Many big technology bosses say that this is the end of algorithms, and the next step is self-learning AI. Such a system can communicate with robots and all other systems. A self-learning system can learn in two ways: it can create new models that it uses in certain situations, or it can connect a new module into itself.
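As a rough sketch of that second learning style, here is a toy system that connects new skill modules into itself at runtime. The module interface and all the names are assumptions made for illustration only:

    # Sketch: a self-extending system that gains skills by plugging in
    # new modules instead of retraining one monolithic model.

    class SelfLearningSystem:
        def __init__(self):
            self.modules = {}   # skill name -> callable module

        def attach(self, name, module):
            # "Connect a new module into itself."
            self.modules[name] = module

        def handle(self, skill, *args):
            if skill not in self.modules:
                return f"no module for '{skill}' - the system is helpless here"
            return self.modules[skill](*args)

    system = SelfLearningSystem()
    system.attach("add", lambda a, b: a + b)
    print(system.handle("add", 2, 3))      # 5
    print(system.handle("chess", "e4"))    # no module -> helpless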

The reason why AI development is moving toward self-learning systems is simple: new algorithms are very complicated, and their training requires so much time that self-learning models are better. What makes this especially complicated is that the new AI must operate over larger areas. It must control things like street-operating robots, so it needs a more effective way to learn. Street-operating robots can use platforms that look like computer games to learn how to cross roads and where to find things like apples when they go shopping for their owner. But then those robots must face unexpected things.

Robots can share their mission records with the entire system, and that helps to develop methods for operating in natural situations. Basically, the difference between a learning system and a normal system is that the learning system can create new models and then compare the original model with the new one. Parameters determine which way of acting is better, and if the new model is better, it replaces the old one. This means the fixed model turns into a flexible model that lives with its environment.
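A minimal sketch of that compare-and-replace loop might look like this. The scoring function and the toy models are illustrative assumptions, not any real system's method:

    # Sketch: a learning system keeps the incumbent model, builds a
    # candidate, scores both on the same data, and keeps the winner.

    import random

    def evaluate(model, data):
        # Placeholder metric: fraction of cases the model gets right.
        return sum(model(x) == y for x, y in data) / len(data)

    def improve(incumbent, make_candidate, data):
        candidate = make_candidate(incumbent)
        if evaluate(candidate, data) > evaluate(incumbent, data):
            return candidate        # the new model replaces the old one
        return incumbent            # otherwise the old model survives

    # Toy usage: models are threshold classifiers on numbers.
    data = [(x, x > 7) for x in range(20)]
    incumbent = lambda x: x > 3
    make_candidate = lambda m: (lambda t: (lambda x: x > t))(random.randint(0, 15))
    for _ in range(20):
        incumbent = improve(incumbent, make_candidate, data)
    print(evaluate(incumbent, data))   # approaches 1.0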

That thing is AGI, artificial general intelligence. That kind of AI is everywhere, and it can connect multiple systems that seem different under one dome. The biggest difference between that AI and modern algorithms is that the system can bring new data from sensors into the data flow that travels in the system. The AGI is a system that might be "god-like", but if it cannot create genetic code, that is, manufacture DNA, it may have no ability to control living organisms. However, the system has many other ways to manipulate evolution.

The AGI can match couples that have certain skills. The fact is that dating applications are effective for dating, and it's possible that AGI will also make it possible to select "perfect" spouses. People who are not "perfect" are left without a partner, and that means only people who are suitable, or similar, get to have descendants. This causes segregation and a loss of diversity.

And that is a sad thing for humans. Self-learning AI is a tool that can learn from its mistakes. It learns what to do and what it must not do. The thing is that self-learning AI is the new common tool that can make almost anything. The system learns like humans, and that makes it the so-called AGI: one tool fits all. The system can control things like robots.

Robots can collect data for that system. The AGI works like this: one robot sits on a chair, and the teacher teaches things to, and through, that robot. Then the robot shares the new things across the entire AI and its network. Training that kind of system requires a lot of information, and companies like Meta have that data. AI also makes it possible to create things like AI agents that sneak around and observe what happens in the network. Robots can learn from other robots: when one robot makes a mistake, the lesson scales over the network, so the other robots know not to make the same mistake again.
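One hedged way to picture that fleet-wide learning is a shared mistake log, as in this toy sketch. The data structures are assumptions made for illustration:

    # Sketch: when one robot's action fails, the failure is published to
    # a shared log so no other robot repeats it.

    class Fleet:
        def __init__(self):
            self.known_mistakes = set()   # shared across all robots

    class Robot:
        def __init__(self, name, fleet):
            self.name, self.fleet = name, fleet

        def try_action(self, situation, action, succeeded):
            if (situation, action) in self.fleet.known_mistakes:
                return f"{self.name}: skipping '{action}', another robot already failed"
            if not succeeded:
                # The mistake scales over the network instantly.
                self.fleet.known_mistakes.add((situation, action))
            return f"{self.name}: tried '{action}'"

    fleet = Fleet()
    r1, r2 = Robot("r1", fleet), Robot("r2", fleet)
    print(r1.try_action("crossing", "ignore_red_light", succeeded=False))
    print(r2.try_action("crossing", "ignore_red_light", succeeded=True))  # skipped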


Monday, June 16, 2025

Why does an antique chess console beat Chat GPT in chess?



When we think about those antique ATARI consoles from 1978, we always forget that they were not as easy to beat as we think. Those chess programs handled every kind of data as numbers, while Chat GPT-type artificial intelligence handles the game as visual and language data. This is one of the things we must realize when we think about this type of case. Those old chess consoles used very straightforward, linear tactics. The main difference between modern algorithms and old-fashioned computer programs is that the old programs are linear, and they handle every piece and move separately.

So there is actually an opening book in those chess programs that the program follows. Those old chess programs were harder to beat than some people believe. If you were a first-timer in chess, you would lose to those consoles. They played very aggressive, straightforward games against human opponents. The system tested the suitable moves for each piece separately, piece by piece. Because the program was linear, the moves were made in a certain order. In those chess programs, every move is determined by the program square by square: the programmer determined the moves for every piece and every square separately. And that made those programs quite long.
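A toy sketch of that fixed, table-driven style could look like this. It is not the actual console code, only an illustration of a program whose every reply is written out in advance:

    # Toy sketch of a fixed, book-driven engine: every reply is written
    # into the program in advance, so the same position always produces
    # the same move.

    OPENING_BOOK = {
        (): "e2e4",                      # the first move is always pawn to e4
        ("e2e4", "e7e5"): "g1f3",        # reply written square by square
        ("e2e4", "c7c5"): "b1c3",
    }

    def fixed_engine(moves_so_far):
        move = OPENING_BOOK.get(tuple(moves_so_far))
        if move is None:
            # End of the line: the linear program has nothing left to try.
            return "resign"
        return move

    print(fixed_engine([]))                 # e2e4
    print(fixed_engine(["e2e4", "e7e5"]))   # g1f3
    print(fixed_engine(["e2e4", "d7d5"]))   # resign (off-book position)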

Those old-fashioned chess programs have a weakness: if something goes wrong, they keep following the same line. There is a certain number of lines that the program can use, and each line has an end. Those programs can use complete tactics, but their limit is that they are fixed. They don't rewrite their databases and models when they lose, and that makes old-fashioned consoles and video games boring. When people learn the tactics such a program uses, they can beat it. The same limit is visible in old action games: the enemies always jump out in front of the player at the same points.

Then we can think about things like learning neural networks. Those networks can learn to beat any old chess program quite fast. The catch is that the neural network must see the console's game before it can win against it. AI is like a human: it requires practice and training. Without knowledge of the opponent's game, the AI is helpless. There are many ways to teach an AI to create tactics against old-fashioned programs. The system can use some modern chess program and then analyze the opponent's game to create tactics.
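A minimal sketch of that "watch first, then exploit" idea: record which reply the opponent makes in each position. Against a fixed program, one observation per line pins its play down completely. The structure below is purely illustrative:

    # Sketch: learn a deterministic opponent by recording which reply it
    # makes in each position. Against a fixed program, one observed game
    # per line is enough to predict it forever after.

    from collections import defaultdict

    class OpponentModel:
        def __init__(self):
            self.replies = defaultdict(set)   # position -> observed replies

        def observe(self, position, reply):
            self.replies[position].add(reply)

        def predict(self, position):
            seen = self.replies[position]
            # A fixed engine always answers the same way, so one
            # observation pins the prediction down exactly.
            return next(iter(seen)) if seen else None

    model = OpponentModel()
    model.observe("start", "e2e4")
    print(model.predict("start"))     # e2e4 - predictable, hence beatable
    print(model.predict("unseen"))    # None - must still explore here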

The other way is that the system analyzes the source code and creates a virtual machine that it can use to simulate the chess console's game. But what do we learn from this case where an antique console beat modern AI? Without training, the AI is as helpless as a human. If the AI has no knowledge of how to play chess, it must search all data, including the moves of the pieces, and that makes it as helpless as a human beginner.

Those old-fashioned consoles are like RISC applications: they are made for only one purpose, and their code serves the chess game completely. Modern AI is a complicated system that can do many other things besides playing chess. And that makes those old consoles surprisingly difficult to beat, at least until the AI can decode their moves and tactics.


https://en.shiftdelete.net/chatgpt-fails-in-chess/




Sunday, June 15, 2025

The gentle singularity: what is the limit of the singularity?



The next step for artificial intelligence is artificial general intelligence, AGI. That is the tool that connects every computer under one dome. The AGI is a self-learning system that develops its models and interconnects them with sensors that bring new data into the system. That means we can interconnect every single computer in the world into one entirety. We may think that social media is something new, but we forget that long before Facebook there were letter clubs: "post offices" where people could send letters to other people, who could be pseudonyms.

Social media is not a new thing; Facebook and other applications are the products of a long route that started in Ancient Rome and Greece, where wall writing, or graffiti, was the beginning of social media. Social media interconnects people from around the world. The new things that the net brought were speed and, perhaps, the low price of those systems. But as we know, there are no free lunches: the thing that doesn't cost anything can have the highest price. The ability to create singularity between computers brings the ability to share and receive information with new force.


And then the next step for AI and computers is the brain-computer interface, BCI. BCI means the ability to control computers using brain waves, or EEG. The system can interact with computers, and it can also operate between people. Such a system could interconnect all animals and humans into one entirety, and there are both risks and opportunities in that model. If we do things wrong, we create a collective mind with only one opinion. If we interconnect our minds and computers into one giant brain, that is a very sad thing, because it destroys our own creativity.

The biggest problem with social media, AI-based dating applications, and finally the singularity is that the system destroys diversity. People want to discuss with, and date, only people who are similar to them. That means our way of thinking starts to turn homogeneous. That causes a situation where we have no people who disagree with us: we hear only ideas and opinions that please us, and we take only people similar to us into our social networks. So, in the worst case, we and our networks operate like an algorithm that just recycles data through the same model. That means we, our team, or our network get nothing new into our model; we just recycle the same things, if we don't accept diversity.

Our mind needs ideas and motivation to make new things. And where can we get those new ideas? We can discuss them, or we can take information that some other party made, and then work with and refine the information that we get from net pages and other media. Without opponents, our productivity and creativity die, because nobody brings new ideas into our minds.

In some models, a network can develop things by playing games against some other network. The network creates a simulation, and then the model tries to fight against that simulation. If the model wins, there is no need to develop it; but if the model loses, it requires adjustment. And that means the system requires data, and then it requires optimization.


In the novel "Peace on Earth", the author Stanislaw Lem introduced a model where a simulator creates a model and another one fights against it. The better simulation becomes the model, until something creates a new, better model.
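A small sketch of that loop, with a toy strength comparison standing in for the actual simulated fight (all the names and numbers here are assumptions):

    # Sketch of the 'Peace on Earth' loop: a challenger fights the
    # current model, and whichever wins becomes the model. The fight is
    # a toy comparison here; in Lem's story it is a full simulation.

    import random

    def fight(challenger, champion):
        # Toy battle: higher 'strength' wins. A real system would run
        # the two models against each other in simulation.
        return challenger if challenger["strength"] > champion["strength"] else champion

    champion = {"name": "model-0", "strength": random.random()}
    for i in range(1, 10):
        challenger = {"name": f"model-{i}", "strength": random.random()}
        # The better simulation becomes the model, until something
        # creates a new, better one.
        champion = fight(challenger, champion)
    print(champion)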


There is another way to operate as a network: the network can accept individually operating members. The idea is that every operator connected to the network is autonomous. Those subsystems operate autonomously while they collect data. When the network doesn't need order, it can be chaotic. And when an actor sees something that requires a lot of information, a roll call goes out over the network: "Everybody stop, the network needs your capacity." That commands the autonomous subsystems to leave their own work and start solving the bigger problem.

So the network operates as a whole when it requires that ability. The network can have subsystems, which means that in an extreme crisis those subnetworks create the models that should handle the problem.
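A rough sketch of that roll-call pattern; the class names and the broadcast mechanism are illustrative assumptions:

    # Sketch: autonomous agents work independently ('chaotic mode') until
    # the network broadcasts a roll call that pulls everyone onto one task.

    class Agent:
        def __init__(self, name):
            self.name, self.task = name, "collect local data"

        def on_roll_call(self, big_task):
            # "Everybody stop, the network needs your capacity."
            self.task = big_task

    class Network:
        def __init__(self, agents):
            self.agents = agents

        def roll_call(self, big_task):
            for agent in self.agents:
                agent.on_roll_call(big_task)

    net = Network([Agent("a1"), Agent("a2"), Agent("a3")])
    net.roll_call("model the approaching crisis")
    print([(a.name, a.task) for a in net.agents])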

Those subsystems can be individual actors. When individual actors play against each other, the losing actor joins the winner and starts to develop the model that won. Then the actor pairs play against each other, and again the losing team joins the winning team and helps develop the tactics that won the game. The actor groups, or networks, expand as new actors join bigger entities.

Those subsystems keep playing against each other. When a subsystem loses, its tactics are discarded; the losing actor joins the winner's team and gives its capacity to that team, or network. The network always drops the losing tactics or action models, until there are only two networks left against each other, and the better one wins. This is one way to create answers and solutions for complicated problems. The expanding network could be the thing that brings solutions to many problems. When the network is in chaotic mode, the actors search for data for it.
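A toy sketch of that losing-team-joins-the-winner tournament. The coin-flip "game" and the capacity rule are assumptions made only to show the merging dynamic:

    # Sketch: actors play pairwise; the loser abandons its own tactics and
    # adds its capacity to the winner, until one network remains.

    import random

    teams = [{"tactic": f"tactic-{i}", "capacity": 1} for i in range(8)]

    while len(teams) > 1:
        a, b = random.sample(teams, 2)
        winner, loser = (a, b) if random.random() < 0.5 else (b, a)
        # The losing tactic is dropped; its capacity joins the winner.
        winner["capacity"] += loser["capacity"]
        teams.remove(loser)

    print(teams[0])   # one surviving tactic holding all the capacity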



You, me, and the language model.



Who is responsible when people outsource their thoughts to some AI?


Why do we let AI think for us? The road to this point is long and rocky. When we order AI to make essays and poems, we continue a journey that began a long time ago. When we read essays and poems made by AI, we can complain about how those things destroy our creativity. But at this point, we might note that we could just as well take a poem book and copy a poem from it.

So, in that case, we simply copy a poem from a book that some great poetry master made. Then we can look in the mirror and ask the image: who made the poem that we just wrote? We wrote a text that some other person invented. If we think about this case and connect AI to that continuum, we see that AI is taking the role of the poem book. From the point of view of the receiver, or reader, it is all the same who made the poem.


Is it some Chat GPT, or is it some Lord Byron? The poem was not made by the person who wrote it on the card. Then we can think about people like Sam Altman who push AI further and further. We blame them and the AI and search for mistakes in them, but we forget our own responsibility. The user makes the decision to use AI, so we decide whether we make poems ourselves or let some other actor make them. We are responsible for the things that we make and present to people. When we make and present poems ourselves, we face very harsh criticism.

When we say that people must go to libraries, read books, and do other things, we must be honest. Are we only jealous of people who have tools and skills that we didn't have 20-30 years ago? When we look at work effectiveness, we stare at things like the time

that a person uses for work. And if the work is done faster, we give that person new work. Is that an encouraging way to work? If some person does the work faster than others and the work is well done, should we give the rest of the time to that person as free time?

Or should we give the worker a new job? Or order the person back to the office, put an artificial smile on our faces, and then fire that worker because they work better and faster than we do? Or should we mock that person over the poems that this individual worker published on social media?


We can also remember the person who works part-time in our company. That gives us a chance to use our supreme control and show everybody how jealous we can be. If a person goes to poetry courses at the adult education center outside working time, we can find a new shift for that person. We have some ideal vision of what a subordinate should be, and if the subordinate doesn't fit into that vision, we must reshape the person to fit the mold.

That can be crushing. So it's easier to take a book from the bookshelf and make a copy of some well-known poem. Then we can say that somebody else invented the poem. That might be impressive. We didn't use our own brains for that poem; we did hard work if we took a pen and copied the words by hand, but it is easier to make the copy using a computer. Or maybe we find the poem on the net and use copy-paste, so we don't have to use our brains at all. AI is the tool that releases our resources from thinking to something else. When we think about cases where somebody makes their own poems, we must realize that every poet once writes their first text.


We choose the easy way. If we want to write poems or essays, we must sit at our computer and take the trouble to produce the text. If we have other things to do, we have no time to write texts and think about the things that we make. It is not Sam Altman or anybody else but you and me who decide whether we use AI. It makes our life easier; it leaves us time for a social life in discos and bowling alleys. But is that the advance that we want? The answer is that the decisions we make show the road.

People like Sam Altman are basically businessmen. They follow Maslow's hierarchy of needs: when our basic needs are filled, we want more. AI is the thing that allows us to transfer all our productivity to some computer, and that is what makes AI advance faster than we expected. When AI satisfies one need, there is another need it must respond to. This is the thing in AI development: AI can make things better than humans.

Or, we can say that it can make some things better than humans. But then we must realize that AI must also learn new things. There was a story that some antique ATARI computer beat Chat GPT in chess. That happened because nobody had ever taught Chat GPT chess. In the same way, we would lose every chess game, even to a monkey, if we had never played the game before. Every skill that AI has is a module, and if the AI has no module for something, it is helpless. The AI also requires lots of power: an AI or LLM server requires its own power platform, and when we develop new, more scalable AI systems, we need new and more powerful computers.

But still, we must realize that even the AI that makes everything cannot make things from nothing. Those systems require massive databases and as much power as some cities. The waste heat can be used for energy production, but the problem is always the temperature. New solutions, like biological AI where living cells communicate with microchips, are coming. And in the wrong hands, those systems are dangerous.


