This past weekend, I spent some time on YouTube catching up on talks and stumbled across an excellent one from Maggie Appleton at the FFconf conference in Brighton, on the Dark Forest theory and our interactions on the internet.
The dark forest theory
The concept of the “dark forest theory” is borrowed from astronomy, as one response to the question… ‘if there is intelligent life out there, and the universe is infinite, why do we hear nothing when we listen?’
One answer… is that we live in a hypothetical dark forest, and a dark forest is full of scary predators. Loudly signalling ‘I’m here’… is just not, well, a great idea… CHOMP.
A scary thought… yet setting that aside, what would be the logical human reaction to it?
Most likely, if it were true, our immediate response would be to keep our heads down, stay in the metaphorical burrow, and hope not to be noticed (while probably making lots of Hollywood movies about how we, the plucky human race, fought back and brought pizza to the universe again).
So how does this relate to the public internet and our widespread use of AI and large language models?… well, there is an analogy here.
The internet is getting shady
Setting aside any personal concerns around bias in AI and the quality of its output, what these new models have fundamentally changed is the speed at which content can be summarised and created.
It has gone from days to seconds, at the push of a button.
It is amazing, and for those of us facing large amounts of information it can be a great way to wade through the vast sea of content, surface ideas, and distil the data points you are looking for. Yet the same tool is also making the problem worse.
Never, it seems, have we had more content, articles, and podcasts, all vying for our attention.
If you really want to get noticed, it is all too easy to play the volume game, forget quality, and just publish more… and pressing the AI button to generate another 50 articles on the same topic makes that all too easy.
The argument is that this is already happening, and that it is starting to pollute the open nature of the internet.
AI bots are watching, and stepping out of your corner waving ‘I’m here’ is a signal to be bombarded with offers, articles, and spam vying for your attention… and the more personalised and human they can make that interaction feel, the more likely you are to respond… BAM.
Now it is starting to sound rather like the dark forest theory… and consumers are responding as you might expect… retreating, at least for now, to trusted spaces or into hiding.
Growing groups and walled gardens
By way of example, I see this myself. Yes, I still look at social media, but it is increasingly one-way: people telling me things, or me putting information out there, rather than a two-way discussion…
That two-way interaction now happens in chat groups such as WhatsApp, on video calls, in webinars with video on, or simply in person at live meetings and events.
In some ways, it feels like we are retreating from social media into these walled gardens: spaces where we know everyone is human and interaction is genuine, or can at least be better judged.
So what does this mean for us at work?
Undoubtedly, in public information spaces, there is an impending virtual arms race between mass content providers and content readers, with AI used on both sides to generate and to filter content. I don’t think there is much we can do to stop this, and trying to opt out would only put us at a disadvantage (at least as readers).
Yet we are also going to seek out trusted spaces and sites where we know quality information is available, and, for customers, where there is trust around interaction and around how they send and receive information.
In some cases, we are going to want to know when we are talking with a human (and when we are not). This is not to say that AI interaction or AI content won’t be useful, just that being aware, which links back to the trust factor, is going to be crucial.
Lastly, there will still likely be a role for human-to-human interaction (or ‘meatspace’, as the talk termed it). In some instances this may become paramount: meeting someone directly will be the last line of defence to guarantee you are speaking with an actual human being, and that will be especially valued in some areas.
What can be done, today?
So, as someone who does use AI to summarise content… condensing podcasts and those lengthy articles into key bullets… this talk really got me thinking, especially about transparency.
Recognising this (and as a nod to the talk and its ideas), I’ve started using a new logo, “AS AN AI”, to flag when a piece of content has been largely AI-generated, wherever possible with links back to the original article.
Of course, I cannot guarantee that the original itself was not written by an AI… but hopefully this is a start, supporting the lean towards greater transparency more generally.
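For anyone wanting to do something similar in their own publishing workflow, here is a minimal sketch in Python. It is purely illustrative: the function name, label wording, and footer format are my own hypothetical choices, not anything prescribed by the talk.

```python
# Minimal, illustrative sketch: append an "AS AN AI" disclosure and a
# link back to the original source whenever a summary is AI-generated.
# All names and the footer format here are hypothetical choices of mine.

def label_summary(summary: str, source_url: str, ai_generated: bool = True) -> str:
    """Return the summary, adding a transparency footer if it is AI-generated."""
    if not ai_generated:
        return summary
    footer = (
        "\n\n---\n"
        "AS AN AI: this summary was largely AI-generated.\n"
        f"Original article: {source_url}"
    )
    return summary + footer

# Example usage:
bullets = "- Key point one\n- Key point two"
print(label_summary(bullets, "https://example.com/original-article"))
```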
Why ‘AS AN AI’? Take a look at the video… I thoroughly recommend having a watch.
Have a great weekend, everyone.