This Streamlit application is designed to both generate and detect deepfakes. It enables the creation of deepfakes in audio, image, and video formats, and features a detection system specifically for image-based deepfakes. The application relies on machine learning models for both generation and detection tasks.
- Deepfake Generation: Users can create deepfakes in various media formats, enhancing their understanding of how deepfake technology works.
- Deepfake Detection: The app provides tools to detect image-based deepfakes, helping users identify manipulated content.
- User-Friendly Interface: Built with Streamlit, the app offers a clean and intuitive interface for easy navigation and operation.
- Streamlit: For creating a responsive web application.
- Python: The primary programming language used for both backend and integration of machine learning models.
- Machine Learning Models: Advanced models for accurate generation and detection of deepfakes.
The main goal of this project is to educate and inform users about the capabilities and risks associated with deepfake technology, providing practical tools for generating and detecting deepfakes responsibly.
For consistency, we recommend using virtualenv or conda together with pip and a requirements.txt file to keep dependencies in sync. With pip and virtualenv installed on your system, follow these steps:
1. Clone the repository:
   `git clone https://github.com/aminatouseyeup/Deepfakes.git`
2. In your project folder, create the virtual environment (Python 3.7):
   `virtualenv venv`
3. Activate your virtual environment:
   `venv\Scripts\activate`
   - 3.1. Install Microsoft Visual C++ 14.0 or greater.
   - 3.2. Install and configure ffmpeg.
4. Install all required packages:
   `pip install -r requirements.txt`
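Steps 3.1 and 3.2 are easy to overlook, so a small sanity-check script like the one below can confirm the prerequisites before you install the Python packages. This is an illustrative sketch, not part of the project; the tool names checked are assumptions based on the steps above.

```python
import shutil
import sys


def check_prerequisites(required_tools=("ffmpeg", "git")):
    """Return the subset of required command-line tools missing from PATH."""
    return [tool for tool in required_tools if shutil.which(tool) is None]


if __name__ == "__main__":
    # The project is set up against Python 3.7; warn on other versions.
    if sys.version_info[:2] != (3, 7):
        print(f"Note: Python 3.7 is recommended, found "
              f"{sys.version_info.major}.{sys.version_info.minor}")
    missing = check_prerequisites()
    if missing:
        print("Missing tools:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```

Run it from the project folder before `pip install`; an empty "missing" list means ffmpeg and git are reachable on your PATH.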
At this point, download the pre-trained models from the links below:
- Download the repo https://github.com/misbah4064/Real-Time-Voice-Cloning.git and place it in the `deepfake-audio-generator` folder.
- Download the file `pretrained.zip` from https://drive.google.com/uc?id=1n1sPXvT34yXFLT47QZA6FIRGrwMeSsZc
- Copy `pretrained.zip` into the `Real-Time-Voice-Cloning` folder downloaded earlier and unzip it.
- Download the file from https://drive.google.com/file/d/1krOLgjW2tAPaqV-Bw4YALz0xT5zlb5HF/view
- Copy it into the `deepfake-image-swap` folder.
- Download the weights from https://drive.google.com/uc?id=1zqa0la8FKchq62gRJMMvDGVhinf3nBEx&export=download
- Rename the file to `model_weights.tar`.
- Copy it into the `deepfake-video-generator` folder.
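Because the assets above land in three different folders, a quick check that everything is in place can save a failed first launch. The sketch below is an assumption-based helper (not part of the repository): the paths mirror the steps above, and the image-swap file is only checked at the folder level since its final filename is not specified.

```python
from pathlib import Path

# Expected locations of the downloaded assets, as described in the
# setup steps. Adjust ROOT if your clone lives elsewhere.
ROOT = Path(".")
EXPECTED_PATHS = [
    ROOT / "deepfake-audio-generator" / "Real-Time-Voice-Cloning",
    ROOT / "deepfake-image-swap",
    ROOT / "deepfake-video-generator" / "model_weights.tar",
]


def missing_assets(paths=EXPECTED_PATHS):
    """Return the expected paths that do not exist yet."""
    return [p for p in paths if not p.exists()]


if __name__ == "__main__":
    for p in missing_assets():
        print(f"Missing: {p}")
```

If the script prints nothing, all expected files and folders were found and you can launch the app.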
Now you are ready to launch the application with the command `streamlit run main.py`.
This project builds on several external repositories and resources that significantly contributed to its development:

- https://github.com/cdenq/deepfake-image-detector?tab=readme-ov-file
- https://github.com/sudouser2010/python-ninjas/blob/main/jupyter-notebooks/2023/deep_fake_video/deep_fake_video.ipynb
- https://github.com/misbah4064/Real-Time-Voice-Cloning.git