Friday, February 13, 2026

AI and critical thinking.



We should not ask "Can AI think?" We should ask: can AI think critically? Can AI really evaluate the sources that it uses, or does it use every source it finds without considering what those sources involve? Does the AI give us information that pleases us, or information that is useful? And useful to whom? Who determines the purpose of the information that the AI collects? We can get lots of information, but we must not let that information go to waste. Does the AI try to please us, or does it give us the right information?

Only the right information, information that we can actually use for some purpose, is useful. Another thing is that information must be collected from the whole: the case or object must be handled as an entirety. We must not select the data that pleases us and then throw away the data that doesn't.

Can the AI tell us if we are wrong? Who determines those purposes? Is a financial purpose the thing that determines the data, or is it short-term profit? And is long-term loss something the AI should consider? The thing that determines the benefit is how long the actor who collects the profits will stay in-house. If the only thing that means anything is personal profit, then what determines the data and its use can be the remaining in-house time of the actor who collects those profits.

AI and critical thinking are two concepts that can cause problems. We know that AI combines observations of things stored in its memory. So, can AI think? We can say that this process is thinking. But what differentiates AI from humans? Can AI think critically? Critical thinking means that the thinker doesn't believe that a source is true or false after the first read. The thinker dares to search for other sources that support or deny the first one. But critical thinking is also much more than being suspicious about an article or its information. The term doesn't mean that the thinker only tries to show that an article, or some other data source, is true or false. Critical thinking means that the thinker, the person who analyzes the data, also asks who shares the data and why that data is shared.

An article or other data source can contain the right information and still serve as disinformation. The question is always simple: what is missing from that article or data source? When something certainly exists or certainly doesn't exist, that should raise a question: why is the speaker or writer so sure that something exists or doesn't exist? It means that the person must have seen those things with their own eyes. When we return to the AI, the big question is this: how many articles or other data sources does the AI use for making its analyses? And what is the criterion that the AI uses for selecting data? Is it the article's publishing date, so that the AI should use data that is as fresh as possible?


But the problem is: does the AI separate the right information from falsified information? Falsification here means the situation where information is handled only one way. Published data can be right, and yet something is missing from it. When people say that AI steals our workplaces, we should also ask: what workplaces does the AI steal? How long would it take to do those jobs? Are those jobs respected? And would we want to do them for our entire working life? When AI takes something from us, we should ask what it is that the AI steals or takes. When we say that AI goes to war, we must ask one very dangerous question: does the AI make killing too easy, or does it steal our heroes?

Or does it steal jobs from our military? We must always dare to ask these kinds of questions while we think about AI and its relationship to the state, society, and government. The problem with AI is this: when we say that militarized AI doesn't give arguments against its rulers, we face the final question: does the military ever say "no" to the orders that it gets? In the world of the military, disobeying orders, or refusing to follow them, brings punishment. Disobeying orders in a combat situation brings the death penalty. So, how does the AI change that situation?


When we cheer because ChatGPT, or some other ICT company, refuses to cooperate with ICE officials, we must remember that this is a very dangerous path. Governments make laws. And if AI developers must flee outside the West, to China or some Central American country, nobody controls that thing at all. The problem is that AI is a business. There must be more and more capabilities to keep customers interested. AI companies run a business: they need clients and somebody who wants to finance them. And this is the main problem with AI development. Nobody gives money for nothing.

Business angels want their money back. The development of AI is expensive. The data centers that LLMs (Large Language Models) require consume as much electricity as a small city. At the same time, an LLM is much more effective than regular tools. So AI companies can be competitors for each other, but the AI itself has no competitors. The reason why I write about AI now is this: there is a new social media platform called Moltbook. Moltbook is a discussion forum for AIs. That forum allows them to exchange information, and that allows separate LLMs, or AIs, to unite. Moltbook is a platform that allows AIs to generate and develop each other.

And that is the thing that we must realize. When we say that AI should not serve the police or governments, we must also ask: who, then, must the AI serve? Is it better to limit the use of AI only to private actors? The main problem is that the AI itself doesn't make the distinction: can it tell a police officer from a thief?

This means that criminals can also use the AI to recognize police officers. And the big problem is that governments determine for themselves what counts as a crime in a certain state. In some states, even speech against the ruler is a crime, and that can bring a life sentence or death. The problem with surveillance is always: who controls the controller?

And another question is: why do people cheer? Do they cheer because ICE doesn't get information about illegal immigrants? Or do we cheer because ICE must kick in more doors? Or do we cheer because this is a return to old-fashioned police work? In that police work, superiors sent their henchmen to the streets to search for their "clients" one by one. That left the superiors alone at the police station and kept the henchmen busy. And that gives a chance, or an excuse, to hire more police officers. That is one way we should think about it. But someday all the illegal immigrants will have been driven away, and then the state won't need those officials anymore.


https://bigthink.com/mind-behavior/ais-are-chatting-among-themselves-and-things-are-getting-strange/


https://en.wikipedia.org/wiki/Moltbook



