Self-developing AI is AI that develops itself. Moltbook is a system that can act as a platform for creating AI agents that operate as a team. These agents, so-called small language models (SLMs), can combine their strengths. The large language model (LLM), by contrast, is a very large and complex system, and its problem resembles a human one: we can hold a lot of data or knowledge, but that knowledge is cursory. We know many topics the way we would if we read only the headlines. We know that something happened.
But we don't know the details. If we want the background of an event, and who acted and why, we must read much more than headlines. When an LLM searches and analyzes data, the result is similarly cursory: the system must cover such large data masses that its analysis stays shallow. And if the system uses lots of data and analyzes it deeply, it turns slow.
If we want to create a new LLM, we can build it out of AI agents. Those agents act as a whole, like a team, and that lets us develop the AI by using agents like LEGO bricks. Each AI agent is a module in the system with its own type of skill, and those bricks act as a team of workers.
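The brick-and-team idea above can be sketched in code. This is a minimal, hypothetical illustration (the class names and the toy "skills" are assumptions, not part of any real Moltbook API): each agent is a module with one skill, and a team chains them so their strengths combine.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: each agent is one "LEGO brick" with a single skill.
@dataclass
class Agent:
    name: str
    skill: Callable[[str], str]  # the agent's specialized transformation

class Team:
    """Chains agents so their individual skills act as one workflow."""
    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def run(self, task: str) -> str:
        # Each agent works on the result of the previous one, like workers
        # passing a job down a line.
        for agent in self.agents:
            task = agent.skill(task)
        return task

# Assemble a team from interchangeable bricks (toy skills for illustration).
team = Team([
    Agent("retriever", lambda t: t + " -> retrieved sources"),
    Agent("summarizer", lambda t: t + " -> summary"),
    Agent("reviewer", lambda t: t + " -> reviewed"),
])
print(team.run("query"))
```

Swapping a brick in or out only means changing the list, which is the modularity the LEGO analogy describes.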
If we need deeper knowledge of a subject, the SLM is the tool. It does not have very broad common knowledge; it analyzes data in a narrower sector and uses a more limited set of sources, and that makes it more accurate in its own sector than an LLM can be. A system focused on one sector has deeper knowledge of it than a system that searches data from all around the internet. The SLM uses only certain types of sources: it can search data only from sources filed under certain topics, like "astronomy".
So, if we want knowledge of the planet Uranus, the AI agent (or SLM) will not search for things like Roman gods and astrology; it just searches for data about the planet. It might also ask whether we want information about Uranus's moons or just the planet itself, or about Uranus's magnetic field or clouds. This helps the AI agent limit its sources to articles that cover those topics.
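The topic-scoped search described above can be sketched as follows. This is only an illustrative stand-in (the source registry and file names are invented for the example): the agent consults sources registered under its own topic, so mythology material is never even in its search space.

```python
# Hypothetical topic registry: sources are filed under topics, and an
# agent searches only within its assigned topic.
SOURCES = {
    "astronomy": ["uranus_magnetic_field.txt", "uranus_moons.txt"],
    "mythology": ["roman_gods.txt"],
}

def scoped_search(topic: str, keyword: str) -> list:
    """Return only the sources under `topic` whose names match `keyword`."""
    return [s for s in SOURCES.get(topic, []) if keyword in s]

# An astronomy agent asking about Uranus never touches the mythology sources.
print(scoped_search("astronomy", "uranus"))
```

Narrowing the registry before searching is what makes the SLM faster and more accurate in its sector than a system that scans everything.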
In the same way, we can create a custom AI agent that fixes the base code, while the AI programmer just gives orders on how the agent should write that code. The programmer who works with AI agents should define the goals and qualifications the AI must follow; how the orders are given determines how effective the agent is. With three AI agents, we could make a system that develops itself: an AI that searches the data, a system that surveils the operations, and a programming AI agent that makes the changes in the code when the surveillance system sees errors and generates orders for the programmer agent.
The reason for that third agent is that the prime agent will not recognize its own errors. For error detection, the AI should also ask its users for feedback.
The system then generates the needed changes to the algorithm. In those cases, the query should follow the same route as all other queries. As developers develop or train the AI agents, they must give strict and well-argued orders; if the orders are not clear, the agents will not succeed, and all data the AI uses must be described to those systems very precisely. If researchers want to make an SLM, they must build the prototype using an LLM, at least as an assistant, or they must hire an army of coders.
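The three-agent loop described above can be sketched as a toy cycle. Everything here is a stand-in (the error marker, the configuration dictionary, and the "fix" are invented for illustration): a search agent answers queries, a surveillance agent flags errors, and a programmer agent patches the configuration, after which queries follow the same route as before.

```python
# Hypothetical three-agent self-correction loop: search, surveil, reprogram.

def search_agent(query: str, config: dict) -> str:
    # The prime agent: answers queries using its current configuration.
    return config["template"].format(query=query)

def surveillance_agent(output: str) -> list:
    """Stand-in error detection: flag outputs carrying an error marker."""
    errors = []
    if "ERROR" in output:
        errors.append("template emits error marker")
    return errors

def programmer_agent(config: dict, errors: list) -> dict:
    """Generate a corrected configuration in response to reported errors."""
    if errors:
        config = {**config, "template": "result for {query}"}
    return config

config = {"template": "ERROR {query}"}  # deliberately faulty starting config
for _ in range(2):  # queries take the same route before and after the fix
    out = search_agent("uranus", config)
    config = programmer_agent(config, surveillance_agent(out))
print(search_agent("uranus", config))
```

The prime agent never inspects itself; only the separate surveillance agent reports errors, which mirrors the argument that the prime agent cannot recognize its own mistakes.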