Wednesday, March 25, 2026

The AI still cannot think like humans


"An AI model once claimed to replicate human cognitive behavior across a wide range of tasks, sparking excitement about unified theories of the mind. However, new findings suggest its performance may rely more on learned patterns than true understanding, raising deeper questions about how intelligence is defined and measured. Credit: Shutterstock" (ScitechDaily, Did Scientists Overestimate AI’s Ability To Think Like Humans?)

The AI still cannot think like humans. The AI is an ultimate tool for handling large masses of data, and in those cases no so-called deep knowledge is needed: the AI beats humans. When we think about AI and its ability to solve mathematical problems, we face an interesting thing. The AI handles things better than humans if we set the formulas for it and determine the variables. But if the AI must find the right formula from the net and then make the calculations, it will not be so effective at all. Mathematical problems and mathematics are examples of exact sciences. Every single mathematical problem must be solved by following exact rules, and the orders of calculation are simple and clear. 
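The point can be sketched in a few lines of code. Once a human has chosen the exact formula and determined the variables, the machine only has to follow fixed rules in a fixed order; the quadratic formula below is a hypothetical example of such a task, not anything from the article.

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Roots of a*x^2 + b*x + c = 0, computed by following the formula's exact rules."""
    disc = b * b - 4 * a * c          # discriminant, evaluated in the fixed calculation order
    if disc < 0:
        raise ValueError("no real roots")
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))      # roots of x^2 - 3x + 2 are 2.0 and 1.0
```

The hard part, as the text says, is not this evaluation step; it is knowing that the quadratic formula is the right formula to apply in the first place.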

But searching for the right formula is not an exact science. The system must compile its orders from the data that it finds, and in cases where it must answer verbal questions it is not very effective. The world is full of mathematical formulas, and that makes it hard to find the right one. The AI handles things like stock market investments well because it must only know whether the value of the stock rises or falls. This means it must handle only two possible outcomes. 
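A minimal sketch of that two-outcome framing, with an entirely hypothetical decision rule (a simple moving-average comparison, chosen only for illustration and certainly not investment advice):

```python
def predict_direction(prices: list[float], window: int = 3) -> str:
    """Reduce the stock question to two outcomes: 'up' if the latest price
    is above its recent average, otherwise 'down'."""
    recent = prices[-window:]
    average = sum(recent) / len(recent)
    return "up" if prices[-1] > average else "down"

print(predict_direction([10.0, 10.2, 10.1, 10.6]))  # latest price above the recent average -> up
```

However the decision is made, the output space has only two values, which is exactly why this kind of task suits the AI far better than an open-ended "why" question.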

But when the system must think deeply, or in other words answer the question "Why is something done?" or "Why did something happen?", the AI has problems. The AI is a good tool in limited areas. When we develop robots that operate automatically in closed areas, those robots don't require complicated algorithms. The robot can even read QR codes that tell it where it must turn, or where the thing that it needs is located. There are no surprises in those closed and limited environments. But when we make programs for self-driving cars, there are many variables that the robot must notice. 
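In a closed environment, the robot's "intelligence" can be as simple as a fixed lookup table from QR payload to action. The codes and actions below are hypothetical, but they show why no complicated algorithm is needed when every situation has been enumerated in advance:

```python
# Fixed, pre-programmed mapping from scanned QR payloads to actions.
ACTIONS = {
    "TURN_LEFT": "rotate 90 degrees left",
    "TURN_RIGHT": "rotate 90 degrees right",
    "PICKUP_A3": "pick up the item at shelf A3",
}

def handle_qr(payload: str) -> str:
    """Look up the scanned code; anything unexpected halts the robot."""
    return ACTIONS.get(payload, "stop and wait for an operator")

print(handle_qr("TURN_LEFT"))   # a known code maps to a fixed action
print(handle_qr("PUDDLE"))      # anything not enumerated: stop and wait for an operator
```

A self-driving car has no such luxury: the set of things it may encounter cannot be enumerated into a table like this, which is the gap the text is pointing at.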


******************************************

When AI starts data processing, it collects data from multiple sources. It takes this first step, but it doesn't take the next step: it doesn't compare the information that those data sources contain. And that makes the AI give answers that have no connection with the topic of the question. 

******************************************


Do you remember everything you need to notice when you drive a car? And imagine what happens if the programmer doesn't remember to describe things like balls to the autopilot. Things like DNA-controlled nanomachines are more effective: chemical programming allows them to do complicated things, but they follow fixed orders. They can learn many things, such as searching for new types of cancer cells. But that doesn't mean those systems can think like humans. For humans, thinking is more than just a reaction to something that activates some database. 

Human thinking involves aspects like feelings. Computers and machines can mimic feelings: they can react to things like tears, but those systems will not feel anything. The thing that they see or hear simply activates some kind of reaction. The AI can collect data, but it doesn't process it very deeply. When we ask about something that is not very common, the AI might give an answer that is completely wrong, or about the wrong topic. And that tells us that the AI cannot think. 

It can collect information, but it cannot compare the sources of that information, nor does it know the meaning of the information. This means the system can search a data source, but it cannot confirm that source. The AI detects topics on a homepage, but then it cannot understand the content that the homepage involves. 

When we think about information-processing models, we can say that the AI takes the first step: it searches for information from multiple sources. But then it misses the second step: it doesn't compare those sources. And that makes the AI give false and ridiculous answers. 
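The missing second step can be sketched as a cross-check over the collected answers. The sources and answers below are hypothetical; the point is only that comparing sources and flagging disagreement is a small, concrete step on top of mere collection:

```python
from collections import Counter

def cross_check(answers: dict[str, str]) -> str:
    """Second step of the model: compare answers from multiple sources.
    Return the majority answer, or a warning when the sources disagree."""
    counts = Counter(answers.values())
    answer, votes = counts.most_common(1)[0]
    if votes <= len(answers) / 2:       # no clear majority among the sources
        return "sources disagree; answer unreliable"
    return answer

# Hypothetical example: three sources answer the same factual question.
sources = {"site_a": "1969", "site_b": "1969", "site_c": "1972"}
print(cross_check(sources))             # two of three sources agree -> 1969
```

A system that stops after collection would simply return whichever source it found first; the comparison step is what separates a defensible answer from the "false and ridiculous" ones the text describes.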

https://scitechdaily.com/did-scientists-overestimate-ais-ability-to-think-like-humans/

