Imagine the president giving a very important message to the people. Using content generation, a bad actor could insert minor alterations that change the meaning of an important sentence and then spread it naturally on social media. That could have dramatic implications for disinformation campaigns.
Could they not have done that 5-10 years ago?
Not nearly as well as they can now. Frame generation and speech imitators get better every day. AI is far better than it was 5 years ago, and 10 years ago models like ChatGPT and Stable Diffusion were the stuff of science fiction.
I mean, it makes it more accessible to the general public, but for anyone attempting to seriously deepfake a presidential announcement… the resources have been out there.
Like 5 years ago, Jordan Peele did an Obama deepfake.
It wasn’t perfect, but it was pretty close. Someone with real motivation probably could have made it 100% believable.