Practical AI #49

Exposing the deception of DeepFakes

This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!


Discussion


2019-06-27T09:45:13Z

Hi,

One practical application of deepfake-style generation is creating realistic-looking educational material.
For instance, a frequent test in radiology uses a plastic phantom with small, slightly different patches that radiologists have to find. This is a kind of test of the image quality the scanner produces, but if you know what these phantoms look like, guessing becomes easier.
Therefore, a subtle, realistic-looking artificial lesion or tumor would be beneficial for training doctors.
Similarly, not only the training of doctors but also the testing of medical AI could benefit from deepfakes.
If a fake tumor is indistinguishable from real ones, then the medical AI should recognize both types.
Obviously, with deepfakes there is much greater control over the environment, so besides the classic tests (software testing, performance tests against human observers, etc.), another layer of robustness testing can be added. For example, the FDA could run an automated compliance-testing service, and generating a different dataset every time would prevent methods from overfitting a standardized test set.
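To make the idea concrete, here is a minimal, purely hypothetical sketch of such an automated robustness check. The "scans," the lesion injector, and the detector below are toy stand-ins (uniform noise with a bright patch, a pixel-threshold rule), not any real medical imaging pipeline or regulatory tool; the point is only that the evaluation set is regenerated on every run, so a detector cannot memorize a fixed benchmark.

```python
import numpy as np

rng = np.random.default_rng()

def generate_synthetic_scans(n, size=32, lesion_fraction=0.5):
    """Generate toy 'scans': uniform noise, some with a faint square 'lesion'."""
    scans = rng.uniform(0.0, 0.5, size=(n, size, size))
    labels = rng.random(n) < lesion_fraction
    for i in np.flatnonzero(labels):
        # Inject a subtle artificial lesion at a random location.
        x, y = rng.integers(0, size - 4, size=2)
        scans[i, x:x + 4, y:y + 4] += 0.4
    return scans, labels

def detect_lesion(scan, threshold=0.7):
    """Toy detector under test: flags a scan if any pixel exceeds the threshold."""
    return scan.max() > threshold

def accuracy(scans, labels):
    preds = np.array([detect_lesion(s) for s in scans])
    return float((preds == labels).mean())

# A compliance-style check regenerates the test set each time it runs,
# so passing it requires genuine robustness, not overfitting one dataset.
scans, labels = generate_synthetic_scans(200)
print(f"accuracy on freshly generated set: {accuracy(scans, labels):.2f}")
```

In a real setting the noise-plus-patch generator would be replaced by a deepfake-style generative model producing lesions indistinguishable from real ones, which is exactly what makes the test informative.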
