We need to examine the beliefs of today’s tech luminaries
https://www.ft.com/content/edc30352-05fb-4fd8-a503-20b50ce014ab
The futuristic philosophies favoured by AI’s most prominent supporters ignore the issues we should be grappling with now
People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.
The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by its errant infant today.
As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespans; singularitarianism, the idea that artificial intelligence will eventually match and then exceed human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.
The acronym can be traced back to an unpublished paper by Timnit Gebru, former co-lead of Google's ethical AI team, and Émile Torres, a PhD student in philosophy at Leibniz University. An early draft of the paper, yet to be submitted to a journal, contends that the unexamined race towards AGI (artificial general intelligence) has produced "systems that harm marginalised groups and centralise power, while using the language of social justice and ‘benefiting humanity’, similar to the eugenicists of the 20th century". An all-purpose, undefined AGI, the authors add, cannot be properly safety-tested and therefore should not be built.
Gebru and Torres go on to explore the intellectual motives of the pro-AGI crowd. "At the heart of this [Tescreal] bundle," Torres elaborates in an email to me, "is a techno-utopian vision of the future in which we become radically ‘enhanced’, immortal ‘posthumans’, colonise the universe, re-engineer entire galaxies [and] create virtual-reality worlds in which trillions of ‘digital people’ exist".
Tech luminaries certainly overlap in their interests. Elon Musk, who wants to colonise Mars, has expressed sympathy for longtermist thinking and owns Neuralink, essentially a transhumanist company. Peter Thiel, the PayPal co-founder, has backed anti-ageing technologies and has bankrolled a rival to Neuralink. Both Musk and Thiel invested in OpenAI, the creator of ChatGPT. Like Thiel, Ray Kurzweil, the messiah of singularitarianism now employed by Google, wants to be cryogenically frozen and revived in a scientifically advanced future.
Another influential figure is the philosopher Nick Bostrom, a longtermist thinker. He directs Oxford university’s Future of Humanity Institute, whose funders include Musk. (Bostrom recently apologised for a historical racist email.) The institute works closely with the Centre for Effective Altruism, an Oxford-based charity. Some effective altruists have identified careers in AI safety as a smart gambit. There is, after all, no more effective way of doing good than saving our species from a robopocalypse.
Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.
Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.
Perhaps those are the plot twists we were not meant to notice.