Deepfakes: why we should be concerned
Posted 16 Sep in Industry Knowledge
You have probably heard the term “deepfake” in the news recently as governments around the world attempt to address this technology, which has the potential to spread fake news, steal identities and exploit celebrities. Adult content platforms have responded strongly to the presence of deepfakes by not allowing content in which performers have been artificially generated. Yet according to a recent worldwide survey, 71% of people reported not knowing what a deepfake is. In this piece, I will talk about what makes a deepfake different from other altered images, the risks of deepfake technology and what is being done to combat it.
WHAT IS A DEEPFAKE, AND HOW IS IT CREATED?
Not all artificial intelligence is alike, and to understand deepfakes it helps to understand the difference between machine learning and deep learning. While some artificial intelligence tools use machine learning to carry out their tasks, others use deep learning. Machine learning is when a program learns from the data it is given using algorithms written by its human creators; although the program can learn without being explicitly programmed for every case, it still works within a scaffold provided by those creators. Deep learning, by contrast, uses layered structures of algorithms loosely modelled on the way a human brain works, which makes it a significant step up in power from machine learning. And the term “deepfake”? It comes from the fact that these images are created by programs that utilise deep learning.
It is true that faked and doctored images are all over the internet nowadays. The difference between these and deepfakes is that images and videos produced by deepfake technology are extremely difficult to discern as fake. Unlike photo-altering tools built for entertainment, deepfakes are designed to be as real as possible. The technology used to create them relies on a system of two algorithms (a pairing known in the field as a generative adversarial network): one builds the image, while the other judges how realistic it looks and feeds that judgement back to the first so it can improve. It is this process, continually focused on making the image as real as possible, that sets deepfakes apart from other image manipulation technology.
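For readers curious about what that two-algorithm setup looks like in practice, here is a minimal sketch of the idea using PyTorch. Everything in it is an illustrative assumption: the toy data (random numbers standing in for real images), the tiny network sizes and the training length are chosen purely to show the back-and-forth between the builder and the judge, and bear no resemblance to the scale of real deepfake systems.

```python
# A minimal sketch of the two-network ("generative adversarial") setup
# described above. All sizes and data here are toy assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

# The first network: builds a candidate "image" (here just a vector) from noise.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# The second network: judges how real a sample looks (outputs a realism score).
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0          # stand-in for real training images
    fake = generator(torch.randn(32, latent_dim))   # candidate fakes built from noise

    # Train the judge: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the builder: try to make the judge score its output as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key point is in the last few lines: the builder is rewarded only when it fools the judge, so every round of training pushes its output closer to something indistinguishable from the real thing.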
RISK OF DEEPFAKES
The major risk of deepfakes is that, because the technology is so sophisticated, it can be difficult to tell a fake image or video from a real one. Furthermore, with the technology becoming accessible to ordinary people outside specialist artificial intelligence circles, a far larger pool of people can now create false and misleading content. Because the vast majority of people get their information about current events, and form their opinions, from internet-based content, the dissemination of fake content (particularly when it involves people in positions of power) can be extremely dangerous.
When it comes to adult content, allowing deepfakes onto platforms carries a wide range of consequences and risks. The performers whose content has been used to create the deepfake have not consented to that use, and consent is the backbone of all adult content. Using a person’s image to create pornographic content without their permission, whether for profit or for other exploitative purposes, violates their consent and cannot be tolerated. Regardless of your opinion on adult content, non-consensual material is a violation, and in a growing number of jurisdictions it is unlawful; given the power of deepfake technology, platforms must take a strong stance against it.
COMBATING DEEPFAKE TECHNOLOGY
Deep learning has a range of applications that could genuinely benefit people, but the creation of damaging content is not one of them. The technology, however, has developed faster than the structures needed to support and control it at a societal level. Government policy has a key role to play here, alongside educating internet users and improving media literacy. The balance between encouraging technological advances and mitigating their real-world consequences will always be tricky to manage. Existing laws can be used to challenge the use and dissemination of deepfakes, even though many of them were not created for that purpose. Ironically, AI itself can be trained to recognise fake and altered images using cues undetectable to the human eye, so artificially intelligent tools can be turned against the proliferation of deepfakes.
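To make that last point concrete, here is a rough sketch of what a detection tool looks like underneath: a classifier trained to label images as real or fake. Again, everything in the snippet is an assumption made for illustration; the random stand-in images, the tiny network and the training loop are nothing like a production detector, which would be trained on large labelled collections of genuine photos and known deepfakes.

```python
# A minimal sketch of the detection idea: train a small classifier to score
# images as real or fake. The network, data and labels below are toy stand-ins.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                            # single "fakeness" logit
)
optimiser = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    # Stand-ins for batches of images with labels (1 = known fake, 0 = real).
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()

    loss = loss_fn(detector(images), labels)
    optimiser.zero_grad(); loss.backward(); optimiser.step()
```

Trained on real examples rather than random noise, a classifier like this can pick up on statistical artefacts of the generation process that a human viewer would never notice.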
Deepfake technology is an area to watch in the coming years, as the technology develops and real-world responses are established to address it.
Rem Sequence is an Australian adult content creator, blogger and internationally published alt model. She has a background in psychology, philosophy and political science and has worked in health and sex education, youth work and trauma counseling for almost two decades.