SpearCopy is an educational and auditing tool designed to clone websites locally and capture credentials entered into forms. It is intended for controlled environments such as internal network testing; once a site has been cloned, it can be served without Internet access.
⚠️ Disclaimer: This project is intended only for educational purposes and testing in controlled environments. Misuse of this tool is strictly prohibited. I am not responsible for any damage caused by improper use of this software.
- 📋 Website cloning: Downloads the full content of a webpage, including HTML, CSS, JavaScript, images, and other resources, preserving the original structure.
- 🦝 Credential capture: Modifies forms to capture and log entered data in JSON format.
- 💻 Local HTTP server: Hosts the cloned site locally for testing.
- 💀 Customizable payload: Includes an editable JavaScript payload to tailor phishing behavior.
- 📚 Download management: Avoids duplicate downloads and organizes resources in folders based on the target URL.
- Python 3.9 or higher.
- Dependencies listed in `requirements.txt` (install them with `pip install -r requirements.txt`).
- Clone a website and start the server:

  ```bash
  python main.py start <url>
  ```

  - Downloads the specified website into `tmp/<hash>/public`.
  - Starts a local HTTP server at `http://localhost:8080` to serve the cloned site.
- Clean the temporary directory:

  ```bash
  python main.py clean
  ```

  - Deletes all files and folders under the `tmp` directory.
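The `clean` command amounts to wiping everything under `tmp/`. If you ever need to do the same thing manually from Python, a minimal sketch (not the project's actual implementation):

```python
import shutil
from pathlib import Path

def clean_tmp(tmp_dir: str = "tmp") -> None:
    """Remove every cloned site and its logs under tmp/,
    then recreate the empty directory."""
    root = Path(tmp_dir)
    if root.exists():
        shutil.rmtree(root)
    root.mkdir(parents=True, exist_ok=True)
```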
When running the `start` command, the cloned site is saved in the following structure:

```
tmp/
└── <hash_md5>/
    ├── public/            # Contains the cloned website.
    │   ├── index.html     # Main cloned page.
    │   ├── css/           # Local or remote CSS files.
    │   ├── js/            # Local or remote JavaScript files.
    │   └── ...            # Other resources (images, videos, etc.).
    └── logs/              # Folder for captured data logs.
        └── <log_id>.json
```
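The `<hash_md5>` placeholder suggests each clone's directory name is an MD5 hex digest. Assuming it is computed from the target URL (an assumption — check the source for the exact input string), you could locate a clone programmatically like this:

```python
import hashlib
from pathlib import Path

def site_dir(url: str, base: str = "tmp") -> Path:
    """Guess the clone directory for a URL, assuming the folder
    name is the MD5 hex digest of the URL itself (unverified)."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return Path(base) / digest / "public"
```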
Each submitted form is logged as a JSON file in the `logs` folder. An example log looks like this:

```json
{
  "local_url": "http://evil.com/index.html",
  "remote_url": "http://realwebsite.com/index.html",
  "address": "192.168.0.10",
  "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.5993.94 Safari/537.36",
  "data": {
    "username": "test_user",
    "password": "1234"
  }
}
```
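Since every log is a standalone JSON file with this schema, collecting all captured entries is just a matter of walking the `logs` folders. A small sketch (the glob pattern follows the directory layout shown above; field names follow the example log):

```python
import json
from pathlib import Path

def collect_logs(tmp_dir: str = "tmp"):
    """Yield (source_ip, form_data) for every captured submission
    found under tmp/<hash_md5>/logs/*.json."""
    for log_file in Path(tmp_dir).glob("*/logs/*.json"):
        entry = json.loads(log_file.read_text(encoding="utf-8"))
        yield entry.get("address"), entry.get("data", {})

for address, data in collect_logs():
    print(address, data)
```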
The credential-capturing behavior is defined in the `capture.js` file. By default, it intercepts form submissions, prevents them from being sent, and logs the captured data via the `/capture` endpoint.

You can edit the `capture.js` file to customize the phishing behavior.
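On the server side, the `/capture` endpoint only has to accept the JSON that `capture.js` posts and write it out as a log file. The project's real server code may differ; as an illustration, a hypothetical minimal handler (paths and names are assumptions, not SpearCopy's actual implementation):

```python
import json
import uuid
from http.server import BaseHTTPRequestHandler
from pathlib import Path

class CaptureHandler(BaseHTTPRequestHandler):
    # Hypothetical log location; the real tool writes under tmp/<hash_md5>/logs.
    logs_dir = Path("tmp/example/logs")

    def do_POST(self):
        if self.path != "/capture":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        entry = json.loads(self.rfile.read(length))  # JSON posted by the payload
        self.logs_dir.mkdir(parents=True, exist_ok=True)
        log_file = self.logs_dir / f"{uuid.uuid4().hex}.json"
        log_file.write_text(json.dumps(entry, indent=2), encoding="utf-8")
        self.send_response(204)  # empty reply; the cloned page stays unchanged
        self.end_headers()
```

Serving it is one line: `HTTPServer(("localhost", 8080), CaptureHandler).serve_forever()` (with `from http.server import HTTPServer`).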
- Run the command to clone a website:

  ```bash
  python main.py start https://example.com
  ```

  This will download the site and serve it at `http://localhost:8080`. All files will be stored in `tmp/<hash_md5>/public`.

- Open the cloned site in a browser.

- Submit data into a form and check the console or the logs stored in `tmp/<hash_md5>/logs`.

- To clean up temporary files:

  ```bash
  python main.py clean
  ```
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
Give a ⭐️ if this project helped you! Or buy me a latte 🙌 on Ko-fi.