Status: Rough cut

Use AI to Write Provably Secure Software

The first vector of disruption from advanced AI systems is likely to be cyber. AI is increasing cyberattack capabilities, and misaligned AIs might exploit vulnerabilities in hardware and software systems to self-exfiltrate.

It is possible to build dramatically more secure systems. Proving specific security properties of codebases, rewriting codebases from memory-unsafe languages into memory-safe ones (for example, from C to Rust), or formally verifying entire systems can reduce the attack surface of software and, at the limit, shift the cyber balance toward defense.
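
As a minimal sketch of what such a rewrite buys, consider indexing into a buffer. In C, an attacker-controlled index can silently read adjacent memory; idiomatic Rust forces the out-of-bounds case into the type system. (The function below is hypothetical, invented purely for illustration.)

```rust
/// A C-style lookup would read past the end of the buffer on a bad index
/// (undefined behavior). The Rust rewrite makes the failure case explicit.
fn read_record(buf: &[u8], index: usize) -> Option<u8> {
    // `slice::get` returns None instead of reading out of bounds, so an
    // attacker-controlled index cannot leak adjacent memory.
    buf.get(index).copied()
}

fn main() {
    let buf = [0x41u8, 0x42, 0x43];
    assert_eq!(read_record(&buf, 1), Some(0x42));
    assert_eq!(read_record(&buf, 9), None); // out of bounds: no UB, just None
}
```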

This is all possible today, but it takes a tremendous amount of effort[1]. Given the available supply of talent, it is simply not feasible to deploy highly secure systems in all the places they are needed. With the rise of AI coding agents, this will no longer be true. AI agents could build and maintain formally verified systems, automate proof generation, and rewrite code into memory-safe languages. This would cut the time and cost of creating secure systems by orders of magnitude.
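
To make "automate proof generation" concrete, here is a toy machine-checked property in Lean 4 (the function and theorem names are hypothetical, invented for this sketch): a clamped index provably never exceeds the last valid position, so no execution can read out of bounds. Today a human writes proofs like this by hand; the proposal is for AI agents to generate them at codebase scale.

```lean
-- Toy security property: clamping an index keeps it inside the buffer.
-- (Hypothetical example; names invented for illustration.)
def clampIndex (i len : Nat) : Nat :=
  min i (len - 1)

-- Machine-checked guarantee: the clamped index never exceeds len - 1,
-- so a lookup at `clampIndex i len` can never read past the buffer.
theorem clampIndex_le (i len : Nat) : clampIndex i len ≤ len - 1 :=
  Nat.min_le_right i (len - 1)
```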

All of this will require new tooling, new practices, and targeted R&D to remove bottlenecks[2]. But the opportunity is available now, and the right people could drive this revolution in software engineering and security.

[1] For example, seL4, a fully formally verified microkernel, took approximately 25-30 person-years of effort.

[2] Likely bottlenecks include the lack of datasets of secure code on which to train AIs, and the difficulty of efficiently eliciting specifications for secure systems from human designers.