What is Causal AI?

Causal approaches empower data scientists to answer questions that standard machine learning techniques cannot, connecting their models more directly to ROI. Examples of such questions include:

● “What is the optimal treatment to change a specific outcome?”
● “What is the effect of intervening on a certain input parameter?”
● “What is the root cause of an outlier?”

Interest in causality within the scientific community has been growing at a rapid pace over the last few years:
● The world’s largest technology companies, including Uber, Netflix and Meta, are heavily invested in it, having created their own causal research labs and seen strong returns from that research.
● Amazon and Microsoft have collaborated for the first time on a shared open-source causal package, PyWhy (a minimal usage sketch follows this list).
● And the number of causality papers published at leading machine learning conferences such as NeurIPS and ICML has grown exponentially.
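
As a rough illustration of the interventional questions above, here is a minimal sketch using the open-source DoWhy library from the PyWhy ecosystem. The scenario, variable names and effect sizes are invented for the example: we ask how much a price discount causally increases conversion once the customer segment, a common cause of both, has been adjusted for.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical scenario: high-value segments are offered discounts more often
# AND convert more often, so a naive correlation overstates the discount effect.
segment = rng.integers(0, 2, n)                       # confounder
discount = rng.binomial(1, 0.3 + 0.4 * segment)       # treatment depends on segment
conversion = (0.2 + 0.15 * discount + 0.3 * segment   # true discount effect is 0.15
              + rng.normal(0, 0.05, n))
df = pd.DataFrame({"segment": segment, "discount": discount,
                   "conversion": conversion})

# Declare the causal structure, identify the estimand, then estimate it.
model = CausalModel(data=df, treatment="discount", outcome="conversion",
                    common_causes=["segment"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")
print(f"Estimated causal effect of the discount: {estimate.value:.3f}")

# A naive comparison, df.groupby("discount")["conversion"].mean(), would be
# biased upwards because it ignores the confounding segment variable.
```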

Capabilities

Causality ultimately unlocks unique capabilities that traditional machine learning and statistical approaches cannot offer:

  1. Causal Drivers: traditional methods are correlation-based and can therefore learn spurious relationships. Causal AI focuses on unearthing true causal drivers from observational data through causal discovery algorithms, experimentation or domain expertise (see the first sketch after this list).
  2. Explainability: methods like LIME or SHAP provide limited explainability because they can only be applied to already-trained models. This poses two main problems:
      • They cannot guarantee the model will always act sensibly, because they explain behaviour only in terms of previously observed inputs. If a data point is drastically different from anything seen in the past, there is no guarantee that the model’s output will make sense: SHAP and LIME explain the past rather than anticipating what the model may do in the future.
      • They tell us what features are associated with the prediction, but not necessarily what features drive the outcome. The SHAP documentation discusses this issue in more detail.

    Hallucinations are not just a problem with ChatGPT: every ML model hallucinates, and it becomes a problem whenever the model is asked to extrapolate beyond its training data.
    Causal AI provides a priori explainability. This means you know globally what the model is doing, and sensible outcomes are guaranteed.

  3. Ability to embed domain knowledge: causal methods allow domain experts to incorporate their unique knowledge in the modelling process. Experts can constrain specific relationships to a known functional form, ensuring the model always respects these constraints and generalises well. Causal models therefore merge the best of domain expertise and data-driven approaches. For example, think of a causal model of a manufacturing line. The process engineer may know that temperature at sensor 1 has a linear positive relationship with pressure in chamber X. Using decisionOS’ Human-Guided Causal Discovery, they can embed this knowledge into the model and ensure it always respects this relationship.
  4. Go beyond predictions: machine learning approaches have always focused on prediction. Causal AI can also predict, but more importantly it takes you a step further, allowing you to answer questions that you cannot answer with traditional machine learning models. Structural causal models let you estimate treatment effects and simulate counterfactuals, answering questions such as “What is the causal effect of intervening on a certain input variable?” or “What would be the most cost-effective way to change an outcome?” (see the second sketch after this list). This unlocks a whole new class of problems for data scientists to tackle. We call this decision intelligence.
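
The first sketch below illustrates the causal discovery step from capability 1. It uses the open-source causal-learn package as a stand-in (the text above refers to decisionOS, whose API is not shown here), and the manufacturing-style data and variable names are invented. The PC algorithm is asked to recover the structure temperature → pressure → yield from observational data alone; constraint-based methods like this can also accept background-knowledge constraints, which is one way to inject the kind of domain expertise described in capability 3.

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

rng = np.random.default_rng(1)
n = 2_000

# Synthetic process data with a known ground truth:
# temperature -> pressure -> yield, and no direct temperature -> yield edge.
temperature = rng.normal(loc=20.0, scale=2.0, size=n)
pressure = 0.8 * temperature + rng.normal(scale=0.5, size=n)
process_yield = 0.5 * pressure + rng.normal(scale=0.5, size=n)
data = np.column_stack([temperature, pressure, process_yield])

# Run the PC algorithm (constraint-based causal discovery) and print the
# recovered graph. With enough data it should match the generating structure,
# up to orientations that are not identifiable from observational data alone.
cg = pc(data, alpha=0.05, node_names=["temperature", "pressure", "yield"])
print(cg.G)
```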
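
The second sketch covers capabilities 3 and 4 together, again with open-source DoWhy (its gcm module) as a stand-in for decisionOS. The graph encodes the process engineer’s domain knowledge, the mechanisms are fitted from synthetic, invented data, and the fitted structural causal model is then used to answer one interventional question and one counterfactual question.

```python
import networkx as nx
import numpy as np
import pandas as pd
from dowhy import gcm

rng = np.random.default_rng(2)
n = 3_000

# Hypothetical manufacturing data: temperature drives pressure, pressure drives yield.
temperature = rng.normal(loc=20.0, scale=2.0, size=n)
pressure = 1.5 * temperature + rng.normal(scale=1.0, size=n)
process_yield = 0.4 * pressure + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"temperature": temperature, "pressure": pressure,
                   "yield": process_yield})

# Domain knowledge enters as the graph; the mechanisms are learned from data.
graph = nx.DiGraph([("temperature", "pressure"), ("pressure", "yield")])
scm = gcm.InvertibleStructuralCausalModel(graph)
gcm.auto.assign_causal_mechanisms(scm, df)
gcm.fit(scm, df)

# Interventional question: what happens to yield if we set pressure to 35?
intervened = gcm.interventional_samples(scm, {"pressure": lambda p: 35.0},
                                        num_samples_to_draw=1_000)
print("Mean yield under do(pressure = 35):", intervened["yield"].mean())

# Counterfactual question for one observed unit: what would its yield have
# been had its pressure been 35, holding everything else about it fixed?
observed = df.iloc[[0]]
cf = gcm.counterfactual_samples(scm, {"pressure": lambda p: 35.0},
                                observed_data=observed)
print("Counterfactual yield for unit 0:", cf["yield"].iloc[0])
```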