The scourge of generative human intelligence

There’s a terrifying genre of post going around LinkedIn.

It starts with a promise: “Here’s how I went from 0 to 7 million followers in 30 days and you can too.”

It unravels step by step: “Start by using TopicBeastAI to find out what topics are popular. Then use QopyQat to find out which posts on those topics get the most engagement. Then use FormulaiQ to derive the underlying post structures. Then use PostFaker to generate your own version of each post.”

Set aside whether the suggested tools are effective. Set aside whether the process generates the sought-after followers. The entire idea of it is fundamentally, vomitously broken.

I know: It’s not new. But it is enabled and accelerated by generative AI.

It’s bad enough to outsource creativity to ChatGPT. What this type of process does is outsource our lack of creativity.

When we ask our AI tools to find a “proven formula,” by definition, what they come up with is not new. It’s a reconstitution of what everyone else has done. The irony of it all is that generative AI itself functions in this way: by digesting and regurgitating content. And where that gets us, as Azeem Azhar pointed out this week, is “reversion to a bland mean.”

The bland mean is an issue even before we get to the concept of model collapse: “the degenerative process that large language models like ChatGPT can experience when they're trained on AI-generated junk data.”

Model collapse is what you get when AI ingests all the existing data, pulps it, and then spits out so much rehashed content that it begins to feed on itself.

We’re not far off. A couple of weeks ago, OpenAI CEO Sam Altman tweeted, “openai now generates about 100 billion words per day. all people on earth generate about 100 trillion words per day.”

He said that like it’s a good thing.

Last month, Vice reported that, already, “a ‘shocking’ amount of the internet is machine-translated garbage, particularly in languages spoken in Africa and the Global South.”

(All of this, of course, is before we even get into hallucinations or tools designed to “poison” content to protect copyright.)

In the face of this pull towards pre-chewed dreck, rather than striving to be more creative and innovative, we’re copying the very flattening of creativity that the AIs produce. Instead of developing our own positions, opinions, and points of view, we’re absorbing existing content and spitting out what we think is expected of us.

We’re turning ourselves into human versions of generative LLMs.

The reason to share on LinkedIn is not that you’ve found a way to digest and regurgitate popular content. The reason to share on LinkedIn is that you have something of value to offer.

If you are crafting what you share to reverse-engineer popularity, you’ve already lost.

Ngā mihi maioha / warm regards,

Kaila