Victor Galaz

The Biosphere Code

In 2015, my colleagues Fredrik Moberg, Fernanda Torre and yours truly organized a one-day workshop with a mix of philosophers, hackers, artists, data scientists and sustainability experts to help develop what we called “the Biosphere Code manifesto”. The Biosphere Code was essentially a set of principles to help guide “the current and future development of algorithms in existing and rapidly evolving technologies” in a way that is responsible to both people and planet. As you can tell by our use of the term “algorithms”, this was before the terminology around artificial intelligence, deep learning or generative AI had entered public discussion.

A shorter version of these principles - put together collaboratively after very intense discussions in the group - was published in the Guardian shortly after our workshop and spurred some interest, for example in a Nature editorial the same year. I see the Code mentioned occasionally by people interested in complementing current discussions about ethical or responsible uses of AI with a focus on climate and sustainability issues.

Many have asked for a longer version of the manifesto after the website we set up for the project was scrapped. I’ve done a bit of digging in my mailbox and managed to recover our latest draft of the longer version of the Code. I’ve made only minor tweaks to the text, keeping the original language we ended up with - warts and all! I’ve also inserted a few photos from the meeting and one video featuring many of the lovely participants. Enjoy!

Promo video of the Biosphere Code meeting (here called a Transformation Lab, or T-Lab) prepared for the 2015 Transformations Conference in Stockholm. Video courtesy of colleagues at the Stockholm Resilience Centre, Stockholm University.

The Biosphere Code v1.0

October 4th, 2015, Stockholm

Algorithms - step-by-step sequences of operations that solve specific computational tasks - are transforming the world around us. They come in many shapes and forms, and soon they will permeate all spheres of technology, ranging from the technical infrastructure of financial markets to wearable and embedded technologies. One often overlooked point, however, is that algorithms are also shaping the biosphere - the thin, complex layer of life on our planet on which human survival and development depend. Algorithms underpin the global technological infrastructure that extracts and develops natural resources such as minerals, food, fossil fuels and living marine resources. They facilitate global trade flows of commodities, and they form the basis of environmental monitoring technologies. Last but not least, algorithms embedded in devices and services affect our behavior - what we buy and consume and how we travel - with indirect but potentially strong effects on the biosphere. As a result, algorithms deserve more scrutiny.

It is therefore high time that we explore and critically discuss the ways by which the algorithmic revolution – driven by applications such as artificial intelligence, logistics, remote sensing, and risk modelling – is transforming the biosphere, and in the long term, the Earth’s capacity to support human survival and development.

Below we list seven Principles that we believe are central to guiding the current and future development of algorithms in existing and rapidly evolving technologies, such as blockchains, robotics and 3D printing. These Principles are intended as food for thought and debate. They are aimed at software developers, data scientists, system architects, computer scientists, sustainability experts, artists, designers, managers, regulators, policy makers, and the general public participating in the algorithmic revolution.

Principle 1. With great algorithmic powers come great responsibilities

Those involved with algorithms, such as software developers, data scientists, system architects, managers, regulators, and policymakers, should reflect on the impacts of their algorithms on the biosphere and take explicit responsibility for them, now and in the future.

Algorithms increasingly underpin a broad set of activities that are changing planet Earth and its ecosystems, such as consumption behaviors, agriculture and aquaculture, forestry, mining and transportation on land and at sea, industrial manufacturing and chemical pollution. Some impacts may be indirect and become visible only after considerable time. This leads to limited predictability in the early and even later stages of algorithmic development. However, in cases where algorithmic development is predicted to have detrimental impacts on ecosystems, biodiversity or other important biosphere processes on a larger scale, or as detrimental impacts become clear over time, those responsible for the algorithms should take remedial action. Only in this way will algorithms fulfill their potential to shape our future on the planet in a sustainable way.

Principle 2. Algorithms should serve humanity and the biosphere at large

The power of algorithms can be used for both good and bad. At the moment there is a risk of wasting resources on solving unimportant problems that serve only the few, thereby undermining the Earth system’s ability to support human development, innovation and well-being.

Algorithms should be considerate of the biosphere and facilitate transformations towards sustainability. They can help us better monitor and respond to environmental changes. They can support communities across the globe in their attempts to re-green urban spaces. In short, algorithms should encourage what we call ecologically responsible innovation. Such innovation improves human life without degrading - and preferably even while strengthening - the life-supporting ecosystems on which we all ultimately depend.

Principle 3. The benefits and risks of algorithms should be distributed fairly

It is imperative for algorithm developers to consider more seriously issues related to the distribution of risks and opportunities. Technological change is known to affect social groups in different ways. In the worst case, the algorithmic revolution may perpetuate and intensify pressures on poor communities, increase gender inequalities or strengthen racial and ethnic biases. This also includes the distribution of environmental risks, such as chemical hazards, or the loss of ecosystem services, such as food security and clean water.

We recognize that these negative distributional effects can be unintentional and emerge unexpectedly. Shared algorithms can cause conformist herding behavior, producing systemic risk - risks that often fall on people who are not beneficiaries of the algorithm. The central point is that developing algorithms that provide benefits to the few and present risks to the many is both unjust and unfair. The balance between expected utility and ruin probability should be recognized and understood.

For example, some of the tools of finance, such as derivative models and high-speed trading, have been called “financial weapons of mass destruction” by Warren Buffett. These same tools, however, could be remade and engineered with different values and instead used as “weapons of mass construction”. Projects and ideas such as Artificial Intelligence for Development, Question Box, weADAPT, Robin Hood Coop and others show the role algorithms can play in reducing social vulnerability and accounting for issues of justice and equity.

Principle 4. Algorithms should be flexible, adaptive and context-aware

Algorithms that shape the biosphere should be created in such a way that they can be reprogrammed if serious repercussions or unexpected results emerge. This applies both to humans’ ability to alter the algorithm in case of emergency and to the algorithm’s ability to alter its own behavior if needed.

Machine learning algorithms such as neural networks can, and do, misbehave - misclassifying images with offensive results, or assigning people with the “wrong” names low credit ratings - without any transparency as to why. Other systems are designed for a context that may change while the algorithm stays fixed. We cannot predict the future, but we often design and implement algorithms as if we could. Still, algorithms can be designed to observe and adapt to changes in the resources they affect and to the context in which they operate, to minimize the risk of harm.

Algorithms should be open, malleable and easy to maintain. They should allow for the implementation of new creative solutions to urgent emerging challenges. Algorithms should be designed to be transparent with regard to their operation and results, in order to make errors and anomalies evident - and possible to fix - as early as possible. Also, when possible, the algorithm should be made context-aware and able to adapt to unforeseen results while alerting society about these results.
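
To make this concrete, here is a minimal sketch in Python of what such a design could look like. Everything in it is illustrative - the baseline data, the drift threshold and the alerting are hypothetical stand-ins for whatever monitoring a real system would need:

```python
import statistics

class AdaptiveAlgorithm:
    """Illustrative sketch: a decision rule that watches its own inputs,
    alerts when the operating context drifts, and can be halted by a human."""

    def __init__(self, baseline, drift_threshold=3.0):
        self.baseline = list(baseline)   # observations from the design-time context
        self.drift_threshold = drift_threshold
        self.halted = False              # human-accessible emergency switch

    def emergency_halt(self):
        """Humans must always be able to switch the algorithm off."""
        self.halted = True

    def decide(self, observation):
        if self.halted:
            raise RuntimeError("algorithm halted by human operator")
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline)
        # Context-awareness: alert on inputs far outside the conditions the
        # algorithm was designed for, instead of failing silently.
        if stdev > 0 and abs(observation - mean) > self.drift_threshold * stdev:
            print(f"ALERT: input {observation} lies outside the design context")
        # Transparency: return the decision together with its rationale.
        return observation > mean, f"compared {observation} to baseline mean {mean:.1f}"

model = AdaptiveAlgorithm(baseline=[10, 12, 11, 13, 12])
print(model.decide(11))  # normal input: decision plus rationale
print(model.decide(90))  # input far outside the design context: triggers an alert
```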

Illustration of how algorithms interact with the biosphere. Illustration by Fernanda Torre (visual) with support from Victor Galaz and Fredrik Moberg.

Principle 5. Algorithms should help us expect the unexpected

Global environmental and technological change is likely to create a turbulent future. Algorithms should therefore be used in such a way that they enhance our shared capacity to deal with climatic, ecological, market and conflict-related shocks and surprises. This also includes problems caused by errors or misbehavior in other algorithms, which are themselves part of the total environment and often unpredictable.

This is of particular concern when developing self-learning algorithms that can learn from and make predictions on data. Such algorithms should be designed not only to enhance efficiency (e.g., maximising biomass production in forestry, agriculture and fisheries). They should also encourage resilience to unexpected events and ensure a sustainable supply of the essential ecosystem services on which humanity depends. Known approaches to achieve this include diversity, redundancy and modularity, as well as maintaining a critical perspective and avoiding over-reliance on algorithms.
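
As a toy illustration of diversity and redundancy, consider several independently built estimators of the same ecosystem variable. The three models below are entirely hypothetical; the point of the sketch is that a resilient design surfaces their disagreement as a warning rather than averaging it away:

```python
def resilient_estimate(estimators, data, disagreement_limit=0.2):
    """Combine diverse, redundant estimators of the same quantity.
    Large disagreement between them is surfaced, not averaged away."""
    estimates = [estimate(data) for estimate in estimators]
    center = sum(estimates) / len(estimates)
    spread = max(estimates) - min(estimates)
    if center != 0 and spread / abs(center) > disagreement_limit:
        print(f"WARNING: estimators disagree (spread {spread:.1f}) - investigate before acting")
    return center

# Three deliberately different (and entirely made-up) models of a fish stock:
models = [
    lambda catches: 10 * sum(catches) / len(catches),  # naive extrapolation from the mean
    lambda catches: 12 * min(catches),                 # pessimistic: trust the worst year
    lambda catches: 15 * max(catches),                 # optimistic: trust the best year
]
print(resilient_estimate(models, [4.0, 5.0, 6.0]))  # warns, then prints the combined estimate
```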

Also, algorithms should not be allowed to fail quietly. For example, the hole in the ozone layer was overlooked for almost a decade before it was discovered in the mid-1980s. The extremely low ozone concentrations recorded by the monitoring satellites were treated as outliers by the algorithms and therefore discarded, delaying by almost a decade our response to one of the most serious environmental crises in human history. Diversity and redundancy - in this case, independent ground-based measurements - helped discover the error.
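
The ozone story translates almost directly into code. Below is a minimal sketch - with made-up Dobson-unit readings - contrasting a filter that silently discards out-of-range values with one that keeps them and raises an alert:

```python
def filter_discard(readings, low, high):
    """The failure mode: silently drop anything outside the expected range."""
    return [r for r in readings if low <= r <= high]

def filter_flag(readings, low, high):
    """The alternative: keep every reading, but alert on values outside
    the expected range so humans can investigate."""
    for r in readings:
        if not low <= r <= high:
            print(f"ALERT: reading {r} outside expected range [{low}, {high}]")
    return readings

# Hypothetical ozone column readings (Dobson units); here 180 and 150 are the
# real signal, not noise:
readings = [310, 295, 180, 305, 150]
print(filter_discard(readings, 250, 400))  # [310, 295, 305] - the crisis is invisible
filter_flag(readings, 250, 400)            # alerts on 180 and 150
```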

Principle 6. Algorithmic data collection should be open and meaningful

Algorithms, especially those impacting the biosphere or personal privacy, should be open source. The general public should be made aware of which data are collected, when they are collected, how they will be used and by whom. As often as possible, the datasets should be made available to the public - keeping in mind the risks of invading personal privacy - and in such a fashion that others can easily search, download and use them. In order to be meaningful and to avoid hidden biases, the datasets upon which algorithms are trained should be validated.
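
What “validated” might mean in practice: a minimal sketch of two common pre-training checks - incomplete records and severe label imbalance, both frequent sources of hidden bias. The field names and thresholds are illustrative only:

```python
def validate_dataset(records, required_fields, label_field, imbalance_ratio=5):
    """Report missing fields and severe label imbalance before training."""
    problems = []
    for i, record in enumerate(records):
        missing = [f for f in required_fields if record.get(f) is None]
        if missing:
            problems.append(f"record {i} is missing fields: {missing}")
    labels = [r[label_field] for r in records if r.get(label_field) is not None]
    counts = {label: labels.count(label) for label in set(labels)}
    if counts and max(counts.values()) > imbalance_ratio * min(counts.values()):
        problems.append(f"severe label imbalance: {counts}")
    return problems

records = [
    {"location": "A", "species": "cod", "abundance": 12},
    {"location": "B", "species": "cod", "abundance": None},  # incomplete record
    {"location": "C", "species": "cod", "abundance": 9},
]
print(validate_dataset(records, ["location", "abundance"], "species"))
```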

With a wide range of sensors, big data analytics, blockchain technologies, and peer-to-peer and machine-to-machine micropayments, we have the potential not only to make much more efficient use of the Earth’s resources but also to develop innovative value-creation systems. Algorithms enable us to achieve this potential; at the same time, however, they also entail great risks to personal privacy, even for those individuals who opt out of the datasets. Without a system built on trust in data collection, fundamental components of human relationships may be jeopardized.

Principle 7. Algorithms should be inspiring, playful and beautiful

Algorithms have been a part of artistic creativity since ancient times. In interaction with artists, self-imposed formal methods have shaped the aesthetic result in all genres of art. For example, when composing a fugue or writing a sonnet, you follow a set of procedures, integrated with aesthetic choices. More recently, artists and researchers have developed algorithmic techniques to generate artistic materials such as paintings and music, as well as algorithms that emulate human artistic creative processes. Algorithmically generated art allows us to perceive and appreciate the inherent beauty of computation, and to create art of otherwise unattainable complexity. Algorithms can also be used to mediate creative collaborations between humans.

But algorithms also have aesthetic qualities in and of themselves, closely related to the elegance of a mathematical proof or a solution to a problem. Furthermore, algorithms and algorithmic art based on nature have the potential to inspire, educate and create an understanding of natural processes.
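
A classic example of nature-based algorithmic art is the L-system: a tiny string-rewriting algorithm whose output, read as drawing instructions, produces plant-like branching forms. A minimal sketch:

```python
# "F" means draw forward, "+"/"-" mean turn, and "[" / "]" save and restore
# position, so each bracket pair becomes a branch.
RULES = {"F": "F[+F]F[-F]F"}  # a classic plant-growth rewriting rule

def l_system(axiom, rules, generations):
    """Rewrite every symbol according to the rules, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(l_system("F", RULES, 1))  # F[+F]F[-F]F
print(l_system("F", RULES, 2))  # every F above expands again into a new branch
```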

We should not be afraid to involve algorithmic processes in artistic creativity, to enhance human creative capacity and playfulness, or to open up new modes of creativity and new kinds of art. Algorithms should be used creatively and aesthetically to visualize and allow interaction with natural processes, in order to renew the ways we experience and understand nature and to inspire people to consider the wellbeing of future generations.

The aesthetic qualities of algorithms applied in society - for example, in finance, governance and resource allocation - should be unveiled to engage people and make them aware of the underlying processes. We should create algorithms that encourage and facilitate human collaboration, interaction and engagement - with each other, with society, and with nature.

Palle Dahlstedt performs live at the Biosphere Code meeting. Video: Victor Galaz.

Victor Galaz (Stockholm Resilience Centre), Fredrik Moberg (Stockholm Resilience Centre), Fernanda Torre (Stockholm Resilience Centre), Koert van Mensvoort (Next Nature), Peter Svensson (Evothings), Robin Teigland (Stockholm School of Economics), Palle Dahlstedt (Chalmers University of Technology), Daniel Hassan (Robin Hood Coop), Maja Brisvall (Quantified Planet), Anders Sandberg (Future of Humanity Institute), Ann-Sofie Sydow, and Andrew Merrie (Stockholm Resilience Centre).

Victor Galaz

Can you feel it? Two interviews about emotions, AI and sustainability

There’s something interesting and highly contested about how AI increasingly engages with human emotions. We organized a virtual workshop on this topic last year (you can find the recordings here), and the topic continues to fascinate me. I was recently interviewed by Schibsted Future Report 2023 about some of my thoughts on this topic, and especially about how AI connects to resilience and our emotional world. You can find the full interview online here. And you can also find a longer interview done in connection with one of my talks at the Austrian Academy of Sciences in December 2022 (in German here).

Victor Galaz

Going live!

I’m delighted to share the news that I’ve signed a contract with Routledge for the book “Dark Machines” to be published sometime in 2022. I’ll be posting a few reflections and background research on this site from time to time. If you are interested in some early ideas that will be elaborated in more depth in this book, some of the following online lectures, news articles and blogposts might be of interest:

Online lectures and seminars

“Our Planet on the Edge – Can Technology Really Save Us?”, KIKK Festival, Namur, Belgium, 2019.

Couch Lessons - “AI+Climate Change”, Goethe-Institut, 25th of June, 2020.

Articles and blogposts

“Will the Fourth Industrial Revolution Serve Sustainability?”, Project Syndicate, 3rd of May, 2021.

“Künstliche Intelligenz für den Planeten” (“Artificial intelligence for the planet”), Der Standard (Austria), 22nd of February, 2020.

“AI can tackle the climate emergency – if developed responsibly”, The Conversation, 23rd of April, 2020.

“Why #sciencecommunication is dead — or at least more difficult than before…”, Medium, November 26th, 2019.

“A manifesto for algorithms in the environment”, The Guardian, October 5th, 2015.

In Swedish

“Är digitala skor lösningen på klimatkrisen?” (“Are digital shoes the solution to the climate crisis?”), Svenska Dagbladet (Kultur), 3rd of April, 2021.

“Lätt att manipulera bilden av framtiden” (“Easy to manipulate the image of the future”), Svenska Dagbladet (Kultur), 22nd of March, 2021.

“Maskinerna undrar allt mer hur du mår” (“The machines are increasingly wondering how you feel”), Svenska Dagbladet (Under Strecket), 9th of March, 2020.

“Massförstörelsevapnen idag är matematiska” (“Today’s weapons of mass destruction are mathematical”), Svenska Dagbladet (Under Strecket), 25th of September, 2018.

“Robothandlarna har tagit över Wall Street” (“The robot traders have taken over Wall Street”), Svenska Dagbladet (Under Strecket), 22nd of April, 2016.

“Har de sociala medierna öppnat Pandoras ask?” (“Have social media opened Pandora’s box?”), Svenska Dagbladet (Under Strecket), 16th of February, 2010.
