
AI - Deepfake Scams

The rapid increase in the capabilities of AI models has led to the rise of deepfakes: video or audio messages generated by artificial intelligence that impersonate an individual by mimicking their voice and face. Advanced AI models generate deepfakes from videos, audio clips, and images of their targets, producing very realistic lookalikes. For anyone with a large public social media presence, it is easy for a scammer to collect sample videos, audio, and images that can be used to build a deepfake. Audio-only deepfakes are typically much simpler to produce, since convincing video requires more advanced AI models and far more resources to generate.

Image: an AI-generated face, half human and half robot.

From the scammer's perspective, the most important part of any social engineering attack is appearing convincing to the victim. While a fraudulent email or text message may be quickly identified, a convincing copy of a friend's, family member's, or colleague's voice or face is much harder to spot. This is what makes deepfakes such a useful tool for scammers.

While these deepfakes can be very convincing, there are still techniques you can use to see through the scammer's ruse. First, consider how the message reached you. For example, suppose you receive a phone call from someone who sounds like your supervisor and begins asking for sensitive information. In this scenario, check the phone number they are calling from. Do you recognize it? If you have any doubts about the number, hang up immediately and contact your supervisor through a trusted method. The same applies to email: if you don't recognize the sender's email address, report the message to phish@kent.edu for review.

Another technique that can be used to avoid these scams is to set up safe words with your closest friends and family. These safe words should be words or short phrases that you can easily remember, but would be difficult for someone else to guess. For example, you should not use your mother's maiden name or the name of your elementary school as a safe word; these are easily guessed or found through public records online. If you find yourself contacted by someone claiming to be a friend or family member, but you do not recognize the method they are using to contact you, ask them for the safe word. If they do not know it or respond incorrectly, do not interact with them further.
