- GPT-4 raises new concerns about video authenticity and misinformation/disinformation
- Technology currently exists to accurately identify synthetically created media
- Extra scrutiny is required when evaluating video shared on social media
In recent months the internet has been ablaze with talk of ChatGPT, a natural language processing tool that lets a user type prompts to a chatbot and generate content from ChatGPT’s underlying AI technology. Discussions of ChatGPT have ranged from potential game-changing use cases to questions about the accuracy and ethical use of the technology. With the release of GPT-4, the boundaries of AI have been pushed even further, with the ability to create hyper-realistic video content. That content may be so realistic that it becomes virtually impossible to tell whether an actor in a given show or movie is real or entirely computer generated.
With these advances rightfully comes increasing concern over public trust in media files and over misinformation and disinformation in society. Numerous organizations are tackling this issue with a myriad of approaches, from both policy and technology standpoints. Medex Forensics recognizes that there is no “magic bullet” for this problem and is proud to be part of many of the efforts addressing the future of video authentication and public trust. While many of these efforts are still underway, it is important to highlight that the technology and skills to accurately differentiate true camera-original video from deepfake videos created by AI like GPT-4 exist today.
While the entertainment applications of AI-created video are certainly a game changer, that use is likely not the true disinformation threat in modern society. When we sit in a movie theater, we expect to leave reality at the door and be immersed in a world of fiction and entertainment. That differs greatly from receiving a news alert on our cell phone showing a video of a political figure making disparaging remarks about a foreign country. How do we trust the information? How do we know it is real?
Even though that video was likely posted to social media, where someone could have downloaded it, added subtitles and a watermark, and reposted it before it reached your news feed, at some point there was an original recording made on a camera, most likely a cell phone in today’s age. To truly evaluate authenticity, the person who recorded the original video (not the copy uploaded to and then transformed by social media) should make that file available for examination, along with a “claim” of how it was recorded. For example, a simple statement of “this video was recorded using an iPhone 13 Pro,” accompanied by the purportedly original file, would allow for a determination of whether the video was truly captured on an iPhone 13 Pro and is unedited. In that scenario, Medex Forensics’ structural analysis can reliably and accurately distinguish camera-original video from AI-created video, no matter how realistic the imagery looks.
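To give a flavor of what comparing a file against a device “claim” can involve, the sketch below parses the ordered top-level “boxes” of an MP4 container and compares them to the layout a claimed device is expected to produce. This is a simplified illustration only, not Medex Forensics’ actual method: the function names and the claimed-device signature are invented for the example.

```python
import struct

def top_level_boxes(data: bytes) -> list:
    """Return the ordered list of top-level MP4 box types in a file."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        boxes.append(data[offset + 4:offset + 8].decode("ascii", errors="replace"))
        if size == 0:      # box extends to end of file
            break
        if size == 1:      # 64-bit extended size follows the header
            size, = struct.unpack(">Q", data[offset + 8:offset + 16])
        offset += size
    return boxes

# Hypothetical signature: the box order a claimed camera pipeline is
# expected to write (illustrative only, not real firmware data).
CLAIMED_SIGNATURE = ["ftyp", "mdat", "moov"]

def matches_claim(data: bytes, signature=CLAIMED_SIGNATURE) -> bool:
    return top_level_boxes(data) == signature

# Build a tiny synthetic "file" with ftyp, mdat, and moov boxes.
def box(box_type: bytes, payload: bytes = b"") -> bytes:
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

sample = (box(b"ftyp", b"isom\x00\x00\x02\x00")
          + box(b"mdat", b"\x00" * 4)
          + box(b"moov"))
print(matches_claim(sample))  # → True
```

A re-encoded upload (for instance, one transcoded by a social media platform) would typically reorder or rewrite these boxes, so the same check against the device signature would fail, which is why access to the purportedly original file matters.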
Once a video is uploaded to a social media platform, determining authenticity becomes more challenging because the file itself is changed by the platform for storage and streaming, which prevents access to the original file for an analysis of deepfake encoding. It should be noted that many, if not all, social media platforms do retain the originally submitted video for a short period while it is being optimized for their platform. During that window, an automated analysis could be run to detect and flag potential deepfake encoding prior to publishing; however, no platform has deployed this technology yet.
Over the coming weeks and months, there is sure to be a lot of talk about what you can and cannot trust when watching videos online. Many posted videos will be obviously produced, featuring a news media watermark, subtitles, or visible edits. Those attributes tell us the video has been altered from its original state and is not a camera-original recording. That alone does not make the content inauthentic; it just makes authenticity extremely difficult to evaluate, especially with recent advances in AI.
As things stand today, it is best to remember that an original recording of what you are watching exists, or once existed, and if someone really wants you to believe what you are watching, they should make that recording available along with their “claim” of how it was recorded. Armed with that information, there are proven, reliable methods, like the ones developed by Medex Forensics, to distinguish authentic recordings from deepfakes, no matter how advanced the technology or how realistic the result looks.