Please note: This post was written by Highlander prior to their rebrand to FluidOne Business IT - Sheffield.
Earlier this year I attended Gamma’s GX: Frontiers event in London. There were plenty of interesting contributions from a number of guest speakers, but one particular talk caught my interest.
This was from renowned author and advisor Nina Schick, who explored the rapid evolution of synthetic media and deepfake technology. Nina spoke not only about the concept of synthetic media, but also about the incredible innovation taking place in this area, which is creating both cost-saving real-world applications and eyebrow-raising security concerns.
The idea of AI-created or modified media might seem like something out of sci-fi, but this technology is having a very real impact today and is something we must all be aware of.
With that in mind, I wanted to share some key takeaways from the talk, and dig a little deeper into the world of synthetic media.
In simple terms, synthetic media is any piece of media generated or modified by an algorithm, especially through the use of artificial intelligence. This could be written content, 2D images, video or audio. The intention may be to alter or correct the original message, or to create an entirely new one. That new message could be legitimate, but it could just as easily be false.
The concept of synthetically altering media is not a new one. Many of us create synthetic media every day. Everyone who has ever edited a photo on their smartphone, for example, has essentially created a new piece of media that is an edit of the original source.
The most commonly known form of synthetic media is a deepfake. This uses artificial intelligence, just like other forms of synthetic media, but specifically focuses on images or audio. Using a real media object as the source, advanced machine learning, delivered through autoencoders and generative adversarial networks (GANs), analyses the original media and creates new variations at scale. These candidate versions are generated and tested against the original again and again until they become so accurate that they can no longer be distinguished as fakes.
Once this process is complete, the resulting model can be used to create new or altered media, either as an overlay or edit of existing material, or as a completely original piece.
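For the technically curious, the adversarial loop described above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration in Python using PyTorch on toy numbers rather than real images; it is not drawn from any actual deepfake tool, but it shows the generator-versus-discriminator contest at the heart of a GAN.

```python
# A toy, hypothetical GAN: nothing here comes from a real deepfake tool.
import torch
import torch.nn as nn

latent_dim = 16

# The generator turns random noise into a candidate "fake" sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2)
)

# The discriminator scores how likely a sample is to be real (0 to 1).
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Toy "real" data standing in for genuine source media.
    real = torch.randn(64, 2) + torch.tensor([3.0, 3.0])
    fake = generator(torch.randn(64, latent_dim))

    # 1. Teach the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Teach the generator to fool the updated discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Each pass first trains the discriminator to spot the fakes, then trains the generator to beat the improved discriminator, which is the continual create-and-match cycle described above.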
The advancement of synthetic media has, unsurprisingly, unearthed some exciting opportunities in the media industry, and this technology is already being used to create or amend content of various formats today.
One widely reported example is the television ads for food delivery brand Just Eat, which used synthetic media to amend an existing ad featuring US rapper Snoop Dogg so that he appeared to say Menulog, the brand name the company uses in Australia. This technology allowed for the seamless creation of a localised variation without the need for any additional filming, dramatically cutting the potential costs as a result.
This is just one example of synthetic media being used to amend existing footage, but in time this technology could also radically change the way new content is created. AI can already originate images, video content and new music, and could potentially create new scripts and full feature episodes or albums from scratch. All-digital TV and music personalities could be created from nothing, and well-known celebrities could continue to appear on screen after their death.
All of this can be achieved with little to no human input, allowing new content to be created through code to rapidly accelerate production. It’s estimated that by 2030 around 90% of online content will be synthetic.
Another application, which began in media but extends into other areas, is automated face blurring. Whether in background B-roll footage for TV or cinema, or in CCTV captured as part of physical security measures, there are many instances where members of the public are unwittingly caught on video, and many reasons why that footage may need to be anonymised. Tools already exist to support the blurring of faces in video content, but this is a manual, time-intensive process that is not easily scaled.
Automatic AI-powered face blurring allows for multiple faces, even hundreds or potentially thousands, to be blurred at scale – a major advancement for individual data and identity privacy.
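To make that concrete, here is a rough Python sketch of the detect-then-blur pipeline using OpenCV. The file names are placeholders, and this simple example uses OpenCV's classic Haar cascade detector rather than the deep-learning models a production system would rely on, but the overall shape of the process is the same.

```python
# A rough sketch of automated face blurring with OpenCV.
# "input.mp4" and "blurred.mp4" are placeholder file names.
import cv2

# OpenCV ships with a classic Haar cascade face detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

video = cv2.VideoCapture("input.mp4")
fps = video.get(cv2.CAP_PROP_FPS)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("blurred.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = video.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find every face in the frame, however many there are.
    for (x, y, w, h) in face_detector.detectMultiScale(grey, 1.1, 5):
        # Replace each face region with a heavily blurred copy of itself.
        face = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    writer.write(frame)

video.release()
writer.release()
```

Because every frame is scanned and every detected region blurred automatically before the frame is written out, the approach scales to hundreds of faces without any manual editing.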
As with any technological innovation, the potential applications can unfortunately be dangerous and malicious. The creation of deepfakes in particular can have some fairly concerning implications.
A deepfake allows for the convincing spread of misinformation from a seemingly trusted source. It might seem harmless to see a synthetically altered Snoop Dogg in a TV ad, but imagine a synthetically created world leader making a falsified public address, or issuing fake instructions to members of their team. As humans, we are wired to believe what we hear from figures of authority, and the quality of deepfakes can make it almost impossible to distinguish a fake piece of media from the original.
The quality of deepfakes is already creating smaller, but very real, security concerns for individuals and businesses. In fact, the FBI recently warned of the risks presented by cybercriminals using deepfakes to apply for, interview for, and ultimately secure remote working positions. Scammers also recently impersonated the chief communications officer of a cryptocurrency exchange, sending communications and conducting client meetings without the business or the individual being aware.
Whether it’s through the impersonation of a potential hire, or of a senior member of your own business, it’s clear that cybercriminals are already looking to harness deepfake technology to breach cybersecurity postures, access internal systems, or catch out unsuspecting customers and prospects.
The pace of synthetic media innovation currently outstrips the development of detection technologies, but that doesn’t mean we can’t all take steps to prepare for the emergence of synthetic media threats.
One of the biggest challenges synthetic media presents, as Nina highlighted in her session, is a phenomenon known as the liar’s dividend. This is where growing cynicism about the legitimacy of all media leads to authentic media being dismissed as fake. This is dangerous territory, but there are approaches we can take to counter it.
We already know of the Zero Trust concept commonly deployed in conventional cybersecurity, where a person looking to access any internal system is only allowed to do so once their identity has been verified, often at multiple stages.
It’s this concept that we will soon need to adopt across other areas of our lives, so that instead of making our own judgement as to whether something is genuine, we don’t trust a piece of media until its authenticity has been verified.
Despite the rapid advancement, it’s likely that many of us will remain untouched by synthetic media or deepfakes for some time. So as new innovations emerge, let’s continue to marvel at the incredible feats already possible, while keeping a cautious eye on what this could mean for our trust in what we see and hear.