Andy Taylor has a goal both modest and ambitious: bring artificial intelligence, or A.I., to air traffic control for the first time. A career air traffic controller, Taylor was quick to see the potential benefits that advances in computer vision technology could bring to his profession.
Example: Every time a plane clears its runway, an air traffic controller must flag it and notify the next plane that the runway is free. This simple, repetitive task takes controllers’ attention away from everything else that’s happening on the tarmac. Even short delays can add up considerably over the course of a day—especially at airports such as London’s Heathrow, where Taylor works, which has flights booked end-to-end from six in the morning till 11:30 at night.
What if an A.I. system could handle this work autonomously? Taylor now leads the groundbreaking effort by NATS, Britain’s sole air traffic control provider, to answer that question, and to bring A.I. to bear on this and related air traffic control tasks.
His biggest obstacle to innovation? The absence of A.I. safety regulations for aviation.
That a lack of regulations could obstruct innovators like Taylor may seem counterintuitive. After all, debates over regulation usually pit proponents of unencumbered innovation against those concerned about the social harms of unchecked competition.
The Trump administration falls into the former camp, advocating that agencies adopt a light-touch approach toward new regulations, which it feels could “needlessly hamper A.I. innovation and growth.”
So do many Silicon Valley elites—an increasingly powerful political constituency with a well-documented distaste for regulation.
But while a hands-off approach might foster innovation on the Internet, in aviation and other industries it can be an obstacle to progress. In a report from UC Berkeley’s AI Security Initiative, I explain why. Part of the problem is that safety regulations for aviation are both extensive and deeply incompatible with A.I., necessitating broad revisions and additions to existing rules.
For example, aircraft certification processes follow a logic-based approach in which every possible input and output receives attention and analysis. But this approach often doesn’t work for A.I. models, many of which react differently even to slight perturbations of input, generating a nearly infinite number of outcomes to consider.
Addressing this challenge isn’t a mere matter of modifying existing regulatory language: It requires novel technical research on building A.I. systems with predictable and explainable behavior and the development of new technical standards for benchmarking safety and other performance criteria. Until these standards and regulations are developed, firms will have to build safety cases for A.I. applications entirely from scratch—a tall order, even for pathbreaking firms like NATS.
“It’s absolutely a challenge,” Taylor told me earlier this year, “because there’s no guidance or requirements that I can point to and say, ‘I’m using that particular requirement.’”
A further issue is that air traffic control firms, as well as manufacturers such as Boeing and Airbus, know that new rules for A.I. are inevitable. While they are eager to reap the cost and safety benefits offered by A.I., most are understandably reluctant to make serious investments without confidence that the resulting product will be compatible with future regulations.
The result could be a major slowdown in A.I. adoption: Without more resources for regulators and strong leadership from the White House, the process of setting standards and developing A.I.-appropriate regulations will take years or even decades.
The incoming Biden administration is poised to offer that leadership, striking a contrast with the Trump administration’s light-touch approach to A.I. governance.
Business leaders and technologists have a key role to play in influencing the Biden administration’s attitude toward A.I. regulation. They might start by encouraging the administration to prioritize A.I. safety research and regulatory frameworks for A.I. that support innovation in aviation and other industries. Or they could do what they do best: develop prototype solutions in the private sector (for a great example, see OpenAI’s proposal of regulatory markets for A.I. governance).
If successful, these efforts could free up Andy Taylor and other entrepreneurs to innovate in safety-critical industries from aviation to health care to the military. If not, a handful of companies like NATS will still try to develop new A.I. applications in these industries. But it won’t be easy and could increase the risk of accidents. The potential benefits of A.I.—improved medical diagnoses, affordable urban air mobility, and much more—would remain technically feasible, but always a few years away.
Pro-innovation business leaders and technologists should therefore worry less about new regulations slowing down progress and instead work on developing the smart regulations required to speed it up.
Will Hunt is a research analyst at Georgetown University’s Center for Security and Emerging Technology and a political science Ph.D. student at the University of California at Berkeley. He has coauthored commentary on technology policy in the Wall Street Journal, and he was previously a graduate researcher at the UC Berkeley AI Security Initiative.