Newswise — Researchers from Nanjing University and Carnegie Mellon University have introduced an AI approach that improves how machines learn from past data—a process known as offline reinforcement learning. This type of machine learning lets systems learn decision-making policies from historical data alone, without real-time interaction with the world. By focusing on the authentic cause-and-effect relationships within the data, the new method enables autonomous systems—such as driverless cars and medical decision-support tools—to make safer and more reliable choices.
From Misleading Signals to True Causality: A New Learning Paradigm
Traditionally, offline reinforcement learning has struggled because it sometimes picks up misleading patterns from biased historical data. To illustrate, imagine learning how to drive by only watching videos of someone else behind the wheel. If that driver always turns on the windshield wipers when slowing down in the rain, you might incorrectly think that turning on the wipers causes the car to slow down. In reality, it is the act of braking that slows the vehicle. The new AI method corrects this misunderstanding by teaching the system to recognize that the braking action, not the activation of the windshield wipers, is responsible for slowing the car.
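The wiper confusion can be made concrete with a toy simulation. This is our own illustration, not the paper's experiment: we assume rain drives both wiper use and braking, while only braking actually slows the car. Wipers and deceleration then correlate strongly in the logged data, but the apparent link vanishes once braking is accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical generative story (our illustration, not the paper's data):
# rain causes both wiper use and braking; only braking causes deceleration.
rain = rng.random(n) < 0.3
wipers = rain.astype(float)                            # wipers on when it rains
braking = rain.astype(float) + rng.normal(0, 0.1, n)   # drivers brake in rain
decel = 2.0 * braking + rng.normal(0, 0.1, n)          # braking slows the car

# Naive view: wipers and deceleration look strongly related.
naive_corr = np.corrcoef(wipers, decel)[0, 1]

# Causal view: regress out braking; the residual link disappears.
resid_w = wipers - np.polyval(np.polyfit(braking, wipers, 1), braking)
resid_d = decel - np.polyval(np.polyfit(braking, decel, 1), braking)
partial_corr = np.corrcoef(resid_w, resid_d)[0, 1]

print(f"corr(wipers, decel)           = {naive_corr:.2f}")    # large
print(f"corr(wipers, decel | braking) = {partial_corr:.2f}")  # near zero
```

A policy trained on the raw correlation might "turn on the wipers to slow down"; conditioning on the true cause removes that temptation.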
Enhancing Safety in Autonomous Systems
With the ability to identify genuine cause-and-effect relationships, the new approach makes autonomous systems safer, smarter, and more dependable. Industries such as autonomous driving, healthcare, and robotics stand to benefit significantly, because these systems are often deployed where precise and trustworthy decision-making is critical. Lead researcher Prof. Yang Yu stated, “Our study harnesses the power of causal reasoning to cut through the noise in historical data, enabling systems to make decisions that are both more accurate and safer—an advancement that could improve how autonomous technology is deployed across industries.” For policymakers and industry leaders, these findings could support improved regulatory standards, safer deployment practices, and increased public trust in automated systems. From a scientific perspective, the research also paves the way for more robust studies of causal reasoning in AI.
A Causal Approach That Outperforms Traditional Models
The researchers found that traditional AI models sometimes mistake correlated but causally unrelated actions for causes, which can result in dangerous outcomes. They demonstrated that many of these errors are significantly reduced by incorporating causal structure into the models. Moreover, the new method—a causal approach named FOCUS—consistently outperformed existing techniques such as MOPO, MOReL, COMBO, and LNCM when tested in practical scenarios.
Advanced Methods Uncover Real Causal Relationships
To achieve these promising results, the research team developed a method that identifies genuine causal relationships from historical data using specialized statistical tests designed for sequential and continuous data. This approach helps accurately discern the true causes behind observed actions and reduces the computational complexity that often hampers traditional methods, making the system more efficient and practical.
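The paper's exact tests are not reproduced here, but a minimal sketch of the kind of conditional-independence check that underpins causal discovery from continuous data—a partial-correlation test with a Fisher-z significance approximation, assuming roughly linear-Gaussian relationships—might look like this. The variable names and toy data are our own illustration.

```python
import numpy as np
from math import sqrt, log, erf

def partial_corr(x, y, z):
    """Correlation of x and y after linearly removing the effect of z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def ci_pvalue(x, y, z):
    """Fisher-z test of conditional independence x _||_ y given z."""
    r = partial_corr(x, y, z)
    n = len(x)
    zstat = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - 4)  # one conditioning var
    return 2 * (1 - 0.5 * (1 + erf(abs(zstat) / sqrt(2))))  # two-sided p

# Toy transition data: next_state depends on the action, not on a
# spurious side signal that merely correlates with the action.
rng = np.random.default_rng(1)
n = 5_000
action = rng.normal(size=n)
spurious = action + rng.normal(scale=0.1, size=n)   # correlated side signal
next_state = 3.0 * action + rng.normal(scale=0.1, size=n)

p = ci_pvalue(spurious, next_state, action)
print(f"p-value for spurious _||_ next_state | action: {p:.3f}")
```

A large p-value means the spurious signal carries no information about the next state once the action is known, so no causal edge is drawn—exactly the kind of pruning that lets a learned model ignore the windshield wipers and credit the brakes.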
This research enhances our understanding of AI capabilities by embedding causal reasoning into offline reinforcement learning. It offers practical improvements in the safety and effectiveness of autonomous systems in everyday life. The complete study is accessible via DOI: 10.1007/s11704-024-3946-y.
Article Multimedia

Credit: Zhengmao ZHU
Caption: The three basic structures

Credit: Zhengmao ZHU
Caption: The basic, impossible, and compound cases of causation between target variables and condition variables.

Credit: Zhengmao ZHU
Caption: The architecture of FOCUS

Credit: Zhengmao ZHU
Caption: The visualization of the example

Credit: Zhengmao ZHU
Caption: The heat maps of the three offline datasets in the Car Driving environment.