Molecular counterfactual method helps researchers explain AI predictions

Machine learning methods can solve complex problems effectively by training models on known data and applying those models to related problems. However, understanding why a model returns a particular result – essential for validating and applying that result – is often technically difficult, conceptually difficult and model-specific. Now, a team in the US working on explainable AI for chemistry has developed a method that generates counterfactual molecules as explanations, and that works flexibly across different machine learning models.1

“There have been high-profile crashes in computing where a model could predict things very well, but the predictions weren’t based on anything significant,” says Andrew White of the University of Rochester, whose team developed the new method of counterfactual explanations. “Occasionally [a machine vision model] will predict that there is a horse in an image, not because there is a horse in the image, but because there is a photographer’s watermark. Missing a picture of a horse is obviously a minor issue, but if you’re trying to predict whether something is carcinogenic, toxic or flammable, we start to run into bigger problems.” Understanding whether a model hit the right answer for the wrong reasons – known as the Clever Hans effect, after a horse that appeared to do arithmetic – is one of the goals of explainable AI.2

Counterfactuals are an intuitive and informative explainable AI approach. For any particular prediction – for example, that an input molecule is soluble – a counterfactual is the most similar example for which the model gives a different prediction. “Through a comparison of what has changed, such as the loss of a carboxylic acid group leading to a change in chemical activity, you ‘learn’ why the model gives the prediction it does,” says Kim Jelfs, a computational materials discovery researcher at Imperial College London, UK. “It’s inherently a pretty satisfying way for a chemist to understand how a machine learning model works.” If the model performs well, the counterfactual is also a useful prediction in its own right. “A counterfactual explanation is actionable, it tells you how to modify your molecule to change its behavior,” White notes. “It gives you a real molecule that you can synthesize and test.”

A counterfactual explanation is actionable

However, finding a counterfactual typically still depends on the intricacies of the specific AI model being used. “Suppose you’re working with a graph neural network,” says Geemi Wellawatte, a researcher in White’s team. “You need to pay special attention because you are working with the graph rather than a string representation [of a molecule]. Most of these explainable AI methods have been very model-specific, and the downside is that your method cannot be applied generally, no matter how good it is.” Finding the most similar molecule is also a challenge in itself. “Taking the derivative with respect to the molecular structure is a very strange and numerically very difficult concept,” says White.

Nothing left to chance

The answer was to use a simpler method for generating similar molecules. White’s student Aditi Seshadri suggested trying Stoned – the superfast traversal, optimization, novelty, exploration and discovery method developed at the University of Toronto in Canada – which generates the chemical neighbors of a molecule by modifying the Selfies string that describes it.3 “It’s such a simple method to use: no derivatives, no GPUs, no deep learning. It’s literally just changing the strings,” enthuses White.
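The string edits at the heart of this approach are easy to picture in code. The fragment below is a minimal sketch of Stoned-style neighbor generation, not the published implementation: it assumes the open-source selfies Python package (its encoder, decoder, split_selfies and get_semantic_robust_alphabet functions), and simply swaps one token at a time to propose neighbors.

```python
# Minimal sketch of Stoned-style neighbor generation, assuming the
# open-source `selfies` package; the mutation loop is illustrative only.
import random
import selfies as sf

def chemical_neighbors(smiles: str, n: int = 10, max_tries: int = 1000) -> list[str]:
    tokens = list(sf.split_selfies(sf.encoder(smiles)))
    alphabet = list(sf.get_semantic_robust_alphabet())  # valence-respecting tokens
    found: set[str] = set()
    for _ in range(max_tries):
        if len(found) >= n:
            break
        mutant = tokens.copy()
        mutant[random.randrange(len(mutant))] = random.choice(alphabet)
        decoded = sf.decoder("".join(mutant))  # Selfies always decodes to a valid molecule
        if decoded and decoded != smiles:
            found.add(decoded)
    return sorted(found)

print(chemical_neighbors("CCO"))  # neighbors of ethanol
```

Because every Selfies string decodes to a chemically valid structure, even random token swaps never produce syntactically broken molecules, which is what makes such a simple mutation loop workable.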

This idea led the team to create Mmace – short for molecular model agnostic counterfactual explanation. Mmace takes a molecule and uses a refined Stoned search to build a library of similar molecules. These can then be run through the machine learning model to see which give different results, with Tanimoto similarity identifying those closest to the original.
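In outline, the selection step reduces to a few lines. The sketch below is a schematic of that idea rather than the team’s released code: model_predict and candidates are hypothetical placeholders, and RDKit Morgan fingerprints stand in for whichever fingerprint the similarity calculation actually uses.

```python
# Sketch of the Mmace selection step: among candidate neighbors, keep the
# one most similar to the original whose prediction flips. `model_predict`
# and `candidates` are hypothetical placeholders, not the authors' API.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    fp_a, fp_b = (AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
                  for s in (smiles_a, smiles_b))
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

def find_counterfactual(base: str, candidates: list[str], model_predict):
    base_label = model_predict(base)  # e.g. True for "soluble"
    flipped = [c for c in candidates if model_predict(c) != base_label]
    # The counterfactual is the flipped candidate closest to the original.
    return max(flipped, key=lambda c: tanimoto(base, c), default=None)
```

Note that the model is only ever called as a black box on candidate molecules, which is exactly why the approach is agnostic to the model’s internal structure.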

As Mmace does not depend on the internal structure of the machine learning model, it is simple to implement and widely applicable. “Often in machine learning research, researchers may prefer to modify the model they’re using based on data availability or the specific property being predicted,” says Heather Kulik, a chemical engineer at the Massachusetts Institute of Technology in the US who studies machine learning in chemistry. “Having an approach to model interpretability that applies to multiple types of machine learning models will ensure its broad applicability.” Jelfs is also pleased by Mmace’s convenience. “As they provide their approach open source, others can immediately use it to interpret their own deep learning models. Their method can be applied to any machine learning model, so it is immediately usable in the community.”

White’s team tested Mmace on a wide variety of chemical problems and machine learning models, including predicting HIV activity with a graph convolutional network and solubility with a recurrent neural network, in each case obtaining counterfactuals that helped rationalize the properties of the original molecule. “How do you demonstrate that you have succeeded?” White wonders. “We tried to look at this from many different angles, but at the end of the day, ‘what’s a valid explanation?’ is such a nebulous concept that we worked it out with a philosopher.”

It is immediately usable in the community

White is keen to emphasize that Mmace is not a panacea. Selfies have difficulty representing certain classes of molecules and bonds, such as organometallic structures like ferrocene, and although all Selfies strings meet certain criteria for chemical sense – atomic valence rules, for example – not all of them correspond to synthesizable molecules. To address the latter problem, White’s team tried generating chemical neighbors with a similarity search on the PubChem database of experimentally reported molecules instead of Stoned. This yielded counterfactuals that differed more from the original molecule but still provided useful information: modifications to a tertiary amine in one molecule removed its predicted ability to cross the blood-brain barrier, implying that this group plays a role in allowing the molecule to cross.
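A PubChem similarity query of this kind can be run through the database’s public PUG REST interface. The sketch below uses its documented fastsimilarity_2d endpoint with an illustrative 90% similarity threshold; it shows the general idea, not necessarily how the team queried the database.

```python
# Sketch: fetch experimentally reported neighbors of a molecule from PubChem
# via the public PUG REST similarity search. Endpoint and parameters follow
# the PUG REST documentation; threshold and record count are illustrative.
import requests

def pubchem_neighbors(smiles: str, threshold: int = 90, max_records: int = 20) -> list[int]:
    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/"
           f"fastsimilarity_2d/smiles/{smiles}/cids/JSON")
    resp = requests.get(url, params={"Threshold": threshold, "MaxRecords": max_records},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()["IdentifierList"]["CID"]  # PubChem compound IDs

print(pubchem_neighbors("CC(=O)Oc1ccccc1C(=O)O"))  # neighbors of aspirin
```

Restricting the candidate pool to database entries trades some similarity to the query molecule for the guarantee that every counterfactual has been experimentally reported.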

White and his team continue to work on the nuances of the method, such as their definition of molecular similarity. “Maybe an organic chemist would think, ‘I could just synthesize this one with this pathway, and then if I make a little change, I could synthesize this one,’ and so they could be one step away from each other,” White explains. “We’re also creating explanations with the same tools, but trying to group these similar molecules into mechanistic explanations. We like the idea of only communicating in chemical structures for explanations, like counterfactuals, but at some point we have to align the explanations with our mental models of why a molecule works or doesn’t work.”
