New World Same Humans

New Week #107

Meta's new generative AI science tool enrages users. Plus more news and analysis from this week.

Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.

If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮


To Begin

A shorter instalment this week. But that means a chance to dive a little deeper than usual into one story that caught my eye.

I speak of Meta’s short-lived AI science tool, Galactica.

Plus some snippets to ruminate on over the weekend. Let’s get into it.


📖 Open season

This week, another strange episode in the unfolding story that is the generative AI revolution. It’s a story that can be told in three tweets.

Meta released Galactica, an AI tool fuelled by a large language model trained on 48 million published scientific papers. Galactica was intended to be an all-purpose science wonderbot, and Meta trumpeted its amazing capabilities:

But within hours, problems arose. Users found that Galactica produced nonsense answers to simple maths questions, and garbled explanations of basic science. What’s more, some of its outputs were grossly offensive. Here’s a tweet from one of the leaders of the backlash:

Prompted by users, the bot produced outputs suggesting that HIV is not the true cause of AIDS, that white people are superior to others, and that there can be health benefits to eating crushed glass.

Amid what quickly became a raging tweet storm of anger, Meta withdrew Galactica. The tech giant’s Chief AI Scientist, Yann LeCun, however, was unrepentant:

LeCun’s argument in short: humans create and publish lots of toxic material every day, so why is this instance uniquely bad?

Release, furore, withdrawal: there’s a whole lot going on here. What to make of it?

⚡ NWSH Take:

First, the simple part. On balance, Meta were right to withdraw Galactica. Its outputs were just too chaotic for a general release to be viable, or useful.

The tech giant didn’t do enough to position the tool as an experiment, or to warn people that because it is trained on data created by humans, it would reflect biases — including racial and gender biases — that are sadly common among us. That meant too great a risk that users would mistake its chaotic pronouncements for reliable science. And it meant handing a powerful tool to bad actors who might use it to generate disinformation that carries a veneer of plausibility.

I understand that Meta wanted to road-test a potentially exciting innovation. But a somewhere-in-between solution would have been best. Meta should have released Galactica in a closed and controlled way, to a volunteer community of beta users who had been warned of the risks and agreed to bear them.

But does Meta deserve the ire of those who accuse it of indifference to Galactica’s offensive outputs? Here’s where things get more complex.

Meta did say (just not loudly enough!) that the tool was experimental, and that users should independently verify all of its pronouncements. Still, Galactica went on to output some truly toxic stuff, and all reasonable people agree that racist, sexist, and other toxic pronouncements are bad.

The question, then, is how we should respond in this case. Lots of humans use, say, MS Word to write racist statements, and then use the internet to publish those statements to millions of people. Does this mean MS Word is also a bad thing? Does it mean we should withdraw the internet from public availability?

The analogy is imperfect, but Meta’s AI Chief does have a point: many people use a wide array of tools every day to create and disseminate toxic material, and we don’t summarily decide that those tools or the platforms behind them are Definitely Not To Be Tolerated. We blame, instead, the people who create and distribute the material.

I understand the concern, and even anger, of Galactica users faced with racist and sexist outputs. These prejudices are widespread across our societies and they’re embedded in our cultural history. When generative models display these biases, that’s a reflection of this deeper truth.

So if we’re going to work our way to a generative tool that can help with science, or anything else, we’re going to have to tolerate prototypes that produce false, and even offensive, statements. Are we saying that we’re not willing to do that? Are we saying that never being exposed to any offensive statement matters more than advancing these technologies? If so, okay: that is a coherent position and we’re free to stick to it if we like. But it comes at obvious costs to our ability to develop and refine tools such as Galactica, which have the potential to amplify us in so many ways, including via the democratisation of access to scientific knowledge.

I also wonder if anger over the toxic outputs of these AIs doesn’t come a little close, sometimes, to avoiding the real problem here. Yes, it’s unwelcome that generative models reproduce racist and sexist attitudes. But they only do so because those attitudes exist among us. We are the source of the problem. We need to change ourselves. You could argue that generative AIs will make that process of change more difficult, but I’m not so sure. Humans have been spewing out their own toxicity for millennia; there will always be plenty of it around with or without AIs.

In short: yes, it’s bad that generative AIs produce toxic outputs. But could we see that as a shared challenge to be overcome, rather than proof that the tools, and those who create them, are not to be borne?

In the end, all this has me pondering a set of questions that I’ve thought about for a long time: questions about open and closed innovation; about when to go public with an innovation, when to give it away for free for the good of the collective, and when to keep it behind closed doors.

Those questions have played out afresh with the generative AI revolution; OpenAI, for example, were cautious about making GPT-3 publicly available, and even now they limit the uses to which it can be put.

I think we’ll see these questions become even more acute across the coming decade, as we grapple with a new wave of technologies — transformer models, robots, virtual worlds and more — that will have impacts we can’t possibly yet understand.

That will raise expectations that organisations of all kinds think more carefully about the unintended consequences of the innovations they make public. Sometimes, that will mean holding back a new product or service because of concern over the harms it may cause. Equally, there are huge opportunities for organisations willing to share valuable IP with everyone, including competitors, for the good of us all. See how the dating app Bumble recently open-sourced the AI it uses to detect unsolicited nude pictures.

We need new frameworks, and new industry norms, around these kinds of decisions. In the meantime, it will be left to organisations and the professionals inside them to try to do the right thing. Those who do will reap benefits, both across their innovation practices and when it comes to public sentiment. Those who get it wrong will get flamed.

I’ll be talking more about all this and its implications at the end of year LinkedIn Live event I’m co-hosting with my friends at Wavelength next week.


🗓️ Also this week

🌆 The US startup Praxis published its Master Plan to build a new self-governing city-state. The Praxis city will be governed by its own laws and is intended, its founders say, to supercharge the future of humanity. Praxis say they’ll partner with an existing sovereign entity to choose a location. I’ve written before about the startup and the growing Charter City movement.

🤖 New research suggests automation technologies have been the principal driver of wage inequality since 1980. MIT researchers say that over the last 40 years the wage gap between more and less educated workers in the US has grown dramatically, and that automation accounts for more than half of that change. That’s because automation technologies — think, for example, self-service checkouts at supermarkets — have tended to displace less educated workers while allowing big corporations and their executives to capture more profits.

🛰 The EU says it will develop its own satellite internet network. The bloc has finalised a €6 billion deal that will see 170 satellites launched into low Earth orbit. It comes amid rising concerns over Russian and Chinese space technologies, and awareness of the role that Elon Musk’s Starlink satellite internet has played in the war in Ukraine.

💥 Violent protests erupted at the largest iPhone factory in China. Videos show hundreds of workers at the Zhengzhou factory clashing with police and shouting ‘Defend our rights!’ Last month, workers were quarantined at the factory after a Covid outbreak.

🌊 Tuvalu announced that it will become the first nation to be recreated in the metaverse. The Pacific island nation faces an existential threat due to rising sea levels caused by global heating. ‘As our land disappears, we have no choice but to become the world's first digital nation,’ said Tuvalu’s foreign minister, Simon Kofe, at COP27.

🌖 NASA says it expects humans to live on the Moon ‘within the decade’. Speaking in the wake of the launch of the Artemis I rocket, NASA scientist Howard Hu said astronauts would set up a permanent settlement on the Moon in the 2020s, and use it as a base from which to go deeper into space.


Open Source

Thanks for reading this week.

The ongoing collision between generative AI and human nature — at its best and worst — is another case of new world, same humans.

This newsletter will keep thinking about where these strange AI dreams may lead.

If that mission resonates with you, there’s one thing you can do to help: share!

Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.


I’ll be back next week. Until then, be well,

David.

P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
