
Physicists Peer Inside AI's Black Box

After repurposing facial recognition and deepfake tech, physicists think they can help shape the responsible use of AI.

By Sophia Chen | 22 Sep 2020

In AI’s current state, Tegmark sees potential parallels with a shameful physics legacy: his field’s contributions to the development of nuclear weapons. In the 1940s, physicists applied their fundamental discoveries about atomic nuclei to enable the bombings of Hiroshima and Nagasaki. The technology they built fueled existential risks and international tensions that persist today. “I feel quite responsible that my tribe, physicists, have contributed to the fact that we've almost had an accidental nuclear war now between the US and Russia over a dozen times,” says Tegmark.

With this historical perspective, he thinks physicists have a unique role to play in ethical AI development. Some physicists, including Tegmark and Nord, have taken up the mantle by fixating on one main problem: figuring out why these algorithms even work. Experts frequently use the term “black box” to describe AI: The computer can differentiate a cat from a dog, but it’s not clear how it got to the right answer. “I believe if we make our AI systems more intelligible, then we'll be able to trust that they're going to do what we actually want them to do,” says Tegmark.

For example, say you use an AI algorithm to “train” a computer to construct a mathematical model, or set of equations, that describes a cat, by showing it thousands of cat photos. The computer “learns” this model through a complicated process involving the repeated adjustments of sometimes millions of variables. Then, by calculating how well a new photo matches this model, the computer can determine whether it contains a cat. But if you have the computer construct this cat model multiple times, training it with the same set of cat photos, it produces a slightly different model each time. The computer might be comparably competent at identifying cats with each model, but be identifying them using subtly different mathematical arguments.
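That variability is easy to reproduce. The sketch below, a minimal stand-in rather than any researcher’s actual setup, trains the same small scikit-learn classifier twice on identical synthetic data, changing only the random seed used to initialize its weights; the two runs score about equally well but end up with measurably different internal parameters.

```python
# A minimal sketch of training variability (synthetic data, not real cat photos):
# the same model, trained twice on the same data, lands on different weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Stand-in for a labeled photo collection: 2,000 samples with 20 numeric features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = []
for seed in (1, 2):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    clf.fit(X, y)
    models.append(clf)
    print(f"seed {seed}: training accuracy = {clf.score(X, y):.3f}")

# Comparable accuracy, yet the learned first-layer weights differ between runs.
w1, w2 = models[0].coefs_[0], models[1].coefs_[0]
print(f"mean absolute weight difference: {np.abs(w1 - w2).mean():.3f}")
```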

It’s unclear why the inconsistencies arise, or even how to talk about them. “My friends and I will talk about how machine-learning research sometimes seems a little unscientific, because a lot of the results aren't reproducible or super quantified,” says physicist Savannah Thais of Princeton University, who has used AI algorithms to analyze LHC experimental data.

Thais recently published an essay for the American Physical Society encouraging physicists to participate more actively in AI ethics conversations. In the essay, she writes that physicists could play a critical role in improving algorithm “interpretability”—to carefully analyze algorithms and their inconsistencies to make them fairer. (Disclosure: This reporter writes for the American Physical Society.) In other words, Thais thinks that physicists can unpack how the AI sausage is made. “That’s such a big focus in physics work already—how precisely certain we are that something works the way we think it does,” she says. Physicists are already compelled to break down any object into its most fundamental pieces and interactions; Thais wants them to direct that compulsion to AI algorithms.

One strategy for unpacking AI involves training algorithms with data that physicists understand in excruciating mathematical detail. For example, Nord and a colleague, João Caldeira, studied how a neural network processes simulated data of a pendulum swinging back and forth. Physicists have understood pendulum behavior for centuries. Because the data is so simple, Nord and Caldeira can more easily track, step-by-step, how the algorithm organizes and processes the data.

Nord and Caldeira essentially trained their algorithm to estimate the strength of Earth’s gravity from their pendulum’s motion. But they included human fallibility in their training data: imprecise measurements that might correspond to a flawed measuring tape used to measure the pendulum’s length, or the sweaty high school student whose fingers slip while making the measurement. They then studied how the algorithm processed these errors into its final estimate of gravity.
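The core of that exercise fits in a few lines of code. The sketch below is a simplified numerical stand-in for Nord and Caldeira’s neural-network study, with noise levels chosen for illustration: it simulates imperfect length and timing measurements for hundreds of pendulums, then recovers the strength of gravity from the textbook relation between a pendulum’s length and its period.

```python
# A simplified stand-in for the pendulum experiment (not the authors' code):
# estimate g from noisy measurements using T = 2*pi*sqrt(L/g).
import numpy as np

rng = np.random.default_rng(0)
G_TRUE = 9.81  # m/s^2, the value the simulated "experiment" should recover

# 500 pendulums of different lengths, with error on both the measured length
# (a flawed tape measure) and the timed period (slipping fingers).
L_true = rng.uniform(0.2, 2.0, size=500)             # meters
T_true = 2 * np.pi * np.sqrt(L_true / G_TRUE)        # ideal periods, in seconds
L_meas = L_true + rng.normal(0, 0.01, size=500)      # ~1 cm length error
T_meas = T_true + rng.normal(0, 0.02, size=500)      # ~20 ms timing error

# Since T^2 = (4*pi^2 / g) * L, fitting a line through the origin for T^2
# versus L gives a slope of 4*pi^2 / g.
slope = np.sum(L_meas * T_meas**2) / np.sum(L_meas**2)
g_estimate = 4 * np.pi**2 / slope
print(f"estimated g = {g_estimate:.3f} m/s^2 (true value {G_TRUE})")
```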

While their understanding of algorithm error is still in early stages, their ultimate goal is to help develop a clear and accurate way to communicate an algorithm’s margin of error, a new area of AI research called “uncertainty quantification.” “It’s this idea of rigorously understanding AI’s error bars,” says Nord.
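One simple way to put error bars on such an estimate, shown here as an illustration of the idea rather than the researchers’ actual method, is to refit it on many resampled copies of the noisy data and report the spread of the answers.

```python
# A bootstrap illustration of uncertainty quantification (a sketch under assumed
# noise levels, not the authors' method): the spread of refitted estimates of g
# across resampled data serves as an error bar on the answer.
import numpy as np

rng = np.random.default_rng(1)
G_TRUE = 9.81

# Noisy pendulum measurements, generated as in the sketch above.
L_true = rng.uniform(0.2, 2.0, size=500)
T_true = 2 * np.pi * np.sqrt(L_true / G_TRUE)
L = L_true + rng.normal(0, 0.01, size=500)
T = T_true + rng.normal(0, 0.02, size=500)

def estimate_g(L, T):
    """Fit T^2 = (4*pi^2/g) * L through the origin and solve for g."""
    slope = np.sum(L * T**2) / np.sum(L**2)
    return 4 * np.pi**2 / slope

# Refit on 1,000 bootstrap resamples to see how much the answer wobbles.
estimates = []
for _ in range(1000):
    idx = rng.integers(0, len(L), size=len(L))   # resample with replacement
    estimates.append(estimate_g(L[idx], T[idx]))
estimates = np.array(estimates)
print(f"g = {estimates.mean():.3f} +/- {estimates.std():.3f} m/s^2")
```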

Tegmark is tackling AI inscrutability with a different strategy. Instead of deciphering an algorithm’s process in granular detail, he focuses on repackaging its complicated output.

In a way, Tegmark treats AI algorithms like we treat human intuition. For example, a child learns to catch a ball intuitively, without ever needing math. But in school, she might learn to describe her intuition using equations of parabolas. Tegmark thinks that scientists should think of AI as providing intuition, like the instinctive ability to catch a ball, that should then be repackaged into elegant, comprehensible math equations. Among computer scientists, this repackaging is called “symbolic regression.”

For example, Tegmark and an MIT graduate student, Silviu-Marian Udrescu, fed data about planetary orbits to an AI algorithm. Conventionally, an AI algorithm would identify patterns in that data and represent them with some long, murky formula for the trajectory of a planet around a star. Tegmark’s algorithm, however, took the extra step of translating that esoteric formula into Kepler’s third law of planetary motion, a concise equation relating the square of a planet’s orbital period to the cube of its distance from its star. In a paper published in Science Advances this April, they call their algorithm “AI Feynman,” because it successfully rediscovered 100 equations from physicist Richard Feynman’s classic introductory physics lecture series.
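A toy stand-in for that kind of repackaging (the real AI Feynman algorithm is far more sophisticated) is to generate orbital data that obey Kepler’s third law and then recover the compact power law hiding in the numbers with a log-log fit.

```python
# A toy version of "rediscovering" Kepler's third law from data (illustrative
# only; this is not the AI Feynman code). For orbits around a Sun-like star,
# with distances in AU and periods in years, the law reduces to T^2 = a^3.
import numpy as np

rng = np.random.default_rng(0)

a = rng.uniform(0.3, 30.0, size=200)                  # semi-major axis, AU
T = a**1.5 * (1 + rng.normal(0, 0.01, size=200))      # period in years, 1% noise

# Fit log T = p * log a + c; the exponent p should come out near 1.5,
# i.e. the data are summarized by the concise formula T = a^(3/2).
p, c = np.polyfit(np.log(a), np.log(T), deg=1)
print(f"recovered power law: T ~ {np.exp(c):.3f} * a^{p:.3f}")
```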

By making AI processes more transparent, physicists offer a technical solution to a particular ethics challenge. But many ethical challenges cannot be solved with technical advances, says AI ethicist Vivek Nallur of University College Dublin. Ethical dilemmas, by nature, are subjective and often require people with conflicting priorities to settle their differences. People may disagree with an algorithm’s recommendation simply based on cultural or personal preference. “For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is,” writes Vincent Müller in the Stanford Encyclopedia of Philosophy.

For example, a 2018 MIT study of people in 233 countries and territories found that participants’ reactions to ethically gray situations were culturally dependent. The study, presented as a game, asked participants variations of the trolley problem: In one case, should a self-driving car swerve to save its three passengers, including a child, and kill four elderly pedestrians? The researchers found that participants from cultures that emphasize the collective, such as China and Japan, were less likely than participants from individualistic cultures such as the US to spare children over the elderly. “If you buy a car that was programmed in Germany and drive it to Asia, whose ethics should the car obey?” asks Nallur. The question cannot be answered with more math.

But the more stakeholders involved in the discussion, the better, says Nallur. Physicists are still working to integrate their AI research into the mainstream machine-learning community. To understand his role in ethical conversations, Nord says he’s working to partner with social scientists, ethicists, and experts across many disciplines. He wants to have a conversation about what constitutes ethical scientific use for AI algorithms, and what scientists should ask themselves when they use them. “I'm hoping that what I do is productive in a positive way for humanity,” says Nord. As AI applications barrel forward, these physicists are trying to lay the track to a more responsible future.
