Status: Stub

Creating Positive Visions for the Future

I've come to believe that one of the biggest things the world needs is hopeful visions for the future. I know, it's a tad saccharine - "what we're missing is hope!" - but having a shared, aesthetic conception of what we (a company, a community, a country) are working towards is incredibly important: it's how we coordinate large groups over time toward common ends.

Michael Nielsen has a beautiful essay in which he describes the value of creating positive visions as a way of shaping the future:

I've eschewed the language of prediction, focusing instead on imagination and possibility. This contrasts with much AI safety discourse, which is often framed in terms of timelines, probabilities of doom, prediction markets, and so on. I've avoided this predictive viewpoint in part because it's the passive view of an outsider, not a protagonist; and in part because it de-centers imagination. ASI isn't something that happens to us; we control it. Timelines are something we are collectively deciding. To do that well we must see ourselves as active imaginative participants, not merely passive or reactive respondents. The predictive viewpoint should serve the imaginative viewpoint, not vice versa. That's why I've deliberately centered the imaginative viewpoint: imagination is more fundamental than prediction. If people such as Alan Turing and I. J. Good hadn't imagined AGI and ASI, we wouldn't be discussing predictions related to them. Imagining the future well is both extraordinarily challenging and an extraordinary opportunity.

I use the term hyper-entity to mean an imagined hypothetical future object or class of objects. AI is just one of many hyper-entities; closely related examples include AGI, ASI, aligned ASI systems, mind uploads, and BCI. Outside AI, examples of hyper-entities include: world government, a city on Mars, utility fog, universal quantum computers, molecular assemblers, prediction markets, dynabooks, cryonic preservation, anyonic quasiparticles, space elevators, topological quantum computers, and carbon removal and sequestration technology. Even on-the-nose jokes like the Torment Nexus are examples of hyper-entities. Many important objects in our world began as hyper-entities – things like heavier-than-air flying machines, lasers, computers, contraceptive pills, international law, and networked hypertext systems. All were sketched years, decades, or even centuries before we knew how to make them. But they ceased to be hyper-entities when they were actually created, sometimes rather differently than was expected by the people who originally imagined them. By contrast, things like dragons or unicorns, while imagined objects, are not examples of hyper-entities, since they aren't usually considered to be future objects. There are related hyper-entities, though: "genetically engineered dragon" is an example.

The most interesting hyper-entities often require both tremendous design imagination and tremendous depth of scientific understanding to conceive. But once they've been imagined, people can become invested in bringing them into existence. Crucially, they can become shared visions. That makes hyper-entities important coordination mechanisms. The reason AGI is a subject of current discussion is that the benefits of AGI-the-hyper-entity have come to seem so compelling that enormous networks of power and expertise have formed to bring it into the world. It's become a shared social reality. This is a common pattern with successful hyper-entities: while still imaginary, they may exert far more force than many real objects do. As a result, the futures we can imagine and achieve are strongly influenced by the available supply of hyper-entities. This makes the supply of hyper-entities extremely important: they determine what we can think about together; they are one of the most effective ways to intervene in a system; a healthy supply of hyper-entities helps pull us into good futures. If all we imagine is bad futures, that's likely what we'll get.