Sunday, April 6, 2025

The future of AI is a network of LLM-style programs.

"Leading scientists predict a future where ‘Collective AI’—networks of AI units that learn and share knowledge—will revolutionize fields like cybersecurity, healthcare, and disaster response. Inspired by sci-fi concepts like Star Trek’s Borg but with built-in safeguards, this democratic AI model aims to promote rapid learning and collaboration without centralized control." (ScitechDaily, The Rise of AI: Leading Computer Scientists Predict a Star Trek-Like Future)

"Scientists envision a future of AI units sharing knowledge like a hive-mind, enabling fast, adaptable responses across fields, without the risks of centralized control." (ScitechDaily, The Rise of AI: Leading Computer Scientists Predict a Star Trek-Like Future)

Maybe that will be a reality in the future.


In the future, AI will be like a network of interconnected large language models (LLMs). These network-based global AI models will be hybrid systems, built as a combination of small language models. In user interfaces, spoken language will be more practical. And because the global AI network is everywhere, people can place orders with shops simply by saying, for example, what they want to eat.

The robot gets the recipes from the net and then collects the needed items from the shop. RFID sensors in the robot's hands register what it picked up, so it can calculate the bill. The payment can be made with the telephone, or the person's biometric recognition can connect the payment to their telephone bill.
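As a rough, hypothetical sketch of that billing step, the Python example below sums the prices of the items the robot's RFID sensors registered and hands the total to a payment stub; the price list, tag names, and the charge_to_phone_bill helper are assumptions made for illustration, not a description of any real shop system.

```python
# Minimal, hypothetical sketch of the RFID-based billing idea described above.
# PRICE_LIST, the tag ids, and charge_to_phone_bill are illustrative assumptions.

PRICE_LIST = {
    "rfid:milk-1l": 1.20,       # RFID tag id -> unit price
    "rfid:pasta-500g": 0.90,
    "rfid:tomato-sauce": 2.10,
}

def calculate_bill(scanned_tags):
    """Sum the prices of the items the robot's hand sensors registered."""
    return sum(PRICE_LIST.get(tag, 0.0) for tag in scanned_tags)

def charge_to_phone_bill(customer_id, amount):
    """Stand-in for connecting the payment to the customer's telephone bill."""
    print(f"Charging {amount:.2f} EUR to the phone bill of customer {customer_id}")

scanned = ["rfid:milk-1l", "rfid:pasta-500g", "rfid:tomato-sauce"]
charge_to_phone_bill("customer-42", calculate_bill(scanned))
```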

The robot might ask whether the person wants to walk along with it, or whether the customer would rather wait in the cafeteria. The robots are part of the same network, and the robot takes images of customers' faces. The waiter robot can continue the discussion about the food, asking whether this is the first time the person has made that dish, or whether the person wants to borrow one of the shop's humanoid robots to make that food. Humanoid robots are a good combination with AI.

When a language model is built from multiple small language models, that makes development work easier. Developers can build small language models faster than one large LLM.
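A hedged sketch of what such a composition could look like: one assistant object that routes each request to a small specialist model, so the whole looks like a single model from the outside. The SmallModel class and the keyword-based routing below are illustrative assumptions only.

```python
# Hedged sketch: several small language models composed behind one interface.
# SmallModel and the keyword-based routing are illustrative assumptions.

class SmallModel:
    """Stand-in for a small language model specialized in one domain."""
    def __init__(self, domain):
        self.domain = domain

    def answer(self, prompt):
        return f"[{self.domain} model] response to: {prompt}"

class HybridAssistant:
    """Looks like one big model from the outside; routes to small models inside."""
    def __init__(self):
        self.models = {
            "food": SmallModel("food"),
            "payments": SmallModel("payments"),
            "general": SmallModel("general"),
        }

    def answer(self, prompt):
        lower = prompt.lower()
        if "eat" in lower or "food" in lower:
            key = "food"
        elif "bill" in lower or "pay" in lower:
            key = "payments"
        else:
            key = "general"
        return self.models[key].answer(prompt)

assistant = HybridAssistant()
print(assistant.answer("What should I eat tonight?"))
```

Each small model can then be developed and replaced on its own, which is the point made above about faster development work.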

A system that looks monolithic from the outside can have multiple independently operating cores. So if that kind of cell-based architecture requires something new, developers only need to add a new cell to the multicore system.
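A minimal sketch of that cell idea, with hypothetical names (Cell, CellSystem, register_cell): the system presents one interface, but each capability is an independent cell, and a new capability is added by registering one more cell rather than rewriting the whole.

```python
# Hedged sketch of the cell-based architecture: a facade that looks monolithic
# from the outside but is built from independently operating cells.
# Cell, CellSystem, and register_cell are hypothetical names.

class Cell:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # plain function doing the cell's work

    def run(self, task):
        return self.handler(task)

class CellSystem:
    def __init__(self):
        self.cells = {}

    def register_cell(self, cell):
        """Adding a capability means adding a cell, not rewriting the system."""
        self.cells[cell.name] = cell

    def run(self, cell_name, task):
        return self.cells[cell_name].run(task)

system = CellSystem()
system.register_cell(Cell("translation", lambda task: f"translated: {task}"))
# Later, a developer adds a brand-new capability as one more cell:
system.register_cell(Cell("recipes", lambda task: f"recipe steps for: {task}"))
print(system.run("recipes", "pasta with tomato sauce"))
```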

The difference between those systems and existing systems is that future LLMs can share data, which makes them more flexible. Such an LLM can also learn in different ways than existing LLMs. Today's LLMs cannot change the data in their datasets; a modern LLM learns through an energy-intensive training process, and that does not support a lifetime-learning model.

Future versions of LLMs can exchange data with each other. They can freely connect data to the system and disconnect data from it. That means if the AI does not use some data within a certain time, it erases that data. In the global networked model, a unit that no longer uses some database can first send a query to the network: does some other LLM require that kind of dataset?

If it does, the network transfers that dataset to the control of the LLM or server that requires it. The system can transfer data across the network very effectively. Network-based LLM systems are modular, which means researchers can assemble them from small language models, or SLMs. The network structure also helps keep the system running even if there is some kind of attack.
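One way to picture that dataset hand-off, as a sketch under assumed names only (Node, Network, offer_unused): a node that has not used a dataset for some time first offers it to the rest of the network, transfers it to any node that wants it, and in either case frees it locally.

```python
# Hedged sketch of the dataset-sharing idea described above.
# Node, Network, and offer_unused are hypothetical names.

import time

class Node:
    def __init__(self, name, wants=()):
        self.name = name
        self.wants = set(wants)          # dataset topics this node needs
        self.datasets = {}               # topic -> (data, last_used_time)

    def store(self, topic, data):
        self.datasets[topic] = (data, time.time())

    def offer_unused(self, network, max_idle_seconds):
        """Offer idle datasets to the network, then drop them locally."""
        now = time.time()
        for topic, (data, last_used) in list(self.datasets.items()):
            if now - last_used >= max_idle_seconds:
                taker = network.find_taker(topic, exclude=self)
                if taker is not None:
                    taker.store(topic, data)   # transfer under the taker's control
                del self.datasets[topic]       # either way, free it here

class Network:
    def __init__(self, nodes):
        self.nodes = nodes

    def find_taker(self, topic, exclude):
        return next((n for n in self.nodes
                     if n is not exclude and topic in n.wants), None)

a = Node("unit-A")
b = Node("unit-B", wants={"weather-history"})
net = Network([a, b])
a.store("weather-history", "ten years of local weather data")
a.offer_unused(net, max_idle_seconds=0)   # 0 so the demo triggers immediately
print(b.datasets.keys())                  # the dataset now lives on unit-B
```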

The system's core is formed from thousands or even billions of independent language models. The AI network can connect or remove those models when needed. The system can use similar language programs that learn things from each other, which means the system can recycle the same code in many places. The ability to connect new databases to the whole makes the system more flexible.
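As a last hedged sketch of the code-recycling and database-connection points, the same small program can be instantiated for many units, and a database can be attached to, detached from, and handed between units; ModelUnit and its methods are hypothetical names.

```python
# Hedged sketch: one small program recycled for many units, with databases
# that can be attached to or detached from the whole.
# ModelUnit, attach_database, and detach_database are hypothetical names.

class ModelUnit:
    """One of many identical units; the same code is reused for each."""
    def __init__(self, unit_id):
        self.unit_id = unit_id
        self.databases = {}

    def attach_database(self, name, records):
        self.databases[name] = records

    def detach_database(self, name):
        return self.databases.pop(name, None)

    def lookup(self, name, key):
        return self.databases.get(name, {}).get(key)

# The same class instantiated many times; the post imagines thousands or billions.
units = [ModelUnit(i) for i in range(1000)]

units[0].attach_database("recipes", {"pasta": "boil 10 min, add sauce"})
print(units[0].lookup("recipes", "pasta"))

# A unit can hand a database over to another unit and drop it locally.
moved = units[0].detach_database("recipes")
units[1].attach_database("recipes", moved)
```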

https://scitechdaily.com/the-rise-of-ai-leading-computer-scientists-predict-a-star-trek-like-future/
