This repository contains the final group project for the Advanced Computer Vision course. Our project explores two core computer vision challenges applied to an art dataset:
- Style Transfer: applying the style of a reference artwork (e.g., Cubism) to a target image
- Fake Image Detection: classifying whether an image was generated by AI or created by a human artist
For each task, we develop and compare two models: a Champion model (best performing) and a Challenger model (alternative approach), with the goal of deploying the top models for real-time usage via a web app.
We use deep learning to transfer artistic styles to input images.
Models:
- Champion: VGG-based Neural Style Transfer using a pretrained model and a Cubism-style reference image
- Challenger: Custom CNN architecture trained from scratch
This task focuses on visual fidelity, stylization quality, and processing speed.
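As a rough illustration of the Champion approach, the sketch below computes the content and style losses from pretrained VGG-19 features. The layer indices and loss weights are assumptions for demonstration, not the project's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen, pretrained VGG-19 feature extractor
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2 (assumed choice)
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1 (assumed choice)

def extract_features(x):
    """Run x through VGG and collect activations at the chosen layers."""
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram_matrix(feat):
    """Channel-wise feature correlations, used for the style loss."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer_loss(generated, content_img, style_img,
                        content_weight=1.0, style_weight=1e5):
    """Weighted sum of content loss and Gram-matrix style loss (weights are illustrative)."""
    gen_c, gen_s = extract_features(generated)
    tgt_c, _ = extract_features(content_img)
    _, tgt_s = extract_features(style_img)
    content_loss = sum(F.mse_loss(gen_c[i], tgt_c[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram_matrix(gen_s[i]), gram_matrix(tgt_s[i]))
                     for i in STYLE_LAYERS)
    return content_weight * content_loss + style_weight * style_loss
```

In the classic optimization-based formulation, this loss is minimized directly over the pixels of the generated image, which is initialized from the content image.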
We classify whether an artwork is AI-generated or human-made using two architectures and two dataset configurations (pure vs. hybrid).
Models:
- Champion: Pre-trained DINOv2 with a custom classification head (a linear layer mapping the embedding to logits; see the sketch below)
- Challenger: EfficientNet
Evaluation includes accuracy, F1-score, ROC-AUC, and performance on a hybrid dataset.
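A minimal sketch of the Champion classifier: a frozen DINOv2 backbone with a single linear layer that maps the image embedding to one logit for the real-vs-AI decision. Loading the backbone via torch.hub and the ViT-S/14 variant are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FakeDetector(nn.Module):
    """Frozen DINOv2 backbone + linear head producing a single logit."""
    def __init__(self, embed_dim=384):  # 384 is the ViT-S/14 embedding size
        super().__init__()
        # Assumed loading path: the official facebookresearch/dinov2 torch.hub entry point
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.head = nn.Linear(embed_dim, 1)  # logit for "AI-generated"

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)        # (B, embed_dim) CLS embedding
        return self.head(feats).squeeze(1)  # (B,) raw logits for BCEWithLogitsLoss
```

Accuracy, F1-score, ROC-AUC, and ROC curves can then be computed from the sigmoid of these logits on both the pure and hybrid test sets, e.g. with scikit-learn's f1_score, roc_auc_score, and roc_curve.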
- Preprocessing: Image normalization, resizing, label encoding
- Training: Implemented in PyTorch with early stopping and learning-rate scheduling (see the training sketch after this list)
- Loss Functions:
- Style Transfer: Content loss + Style loss
- Fake Detection: Binary Cross-Entropy Loss
 
- Evaluation: Visual comparison for style transfer; quantitative metrics and ROC curves for fake detection
- Deployment: Streamlit app
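To make the pipeline items above concrete, here is a hedged sketch of the fake-detection training loop: resize/normalize preprocessing, Binary Cross-Entropy loss, a ReduceLROnPlateau scheduler, and patience-based early stopping. The loader names, learning rate, patience, and ImageNet normalization statistics are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Preprocessing: resize + normalize (ImageNet statistics assumed); labels are encoded 0 = human, 1 = AI
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def train(model, train_loader, val_loader, epochs=50, patience=5):
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2)
    best_val, epochs_without_improvement = float("inf"), 0

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels.float())
            loss.backward()
            optimizer.step()

        # Validation loss drives both the LR scheduler and early stopping
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for images, labels in val_loader:
                val_loss += criterion(model(images), labels.float()).item()
        val_loss /= len(val_loader)
        scheduler.step(val_loss)

        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
            torch.save(model.state_dict(), "best_model.pt")  # keep the best checkpoint
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stopping
```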
The final models were deployed via a Streamlit application with optional ngrok or localtunnel integration for live public demos. The web app supports:
- Uploading new images for style transfer
- Uploading test images to classify as real or AI-generated
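The snippet below is a minimal Streamlit sketch of the classification side of the app. It assumes the FakeDetector class and preprocess transform from the sketches above, a saved checkpoint named best_model.pt, and a 0.5 decision threshold; all of these are illustrative, not the exact app code.

```python
import streamlit as st
import torch
from PIL import Image

st.title("AI vs. Human Art Detector")

@st.cache_resource
def load_model():
    # Assumed: FakeDetector from the classifier sketch and a checkpoint saved during training
    model = FakeDetector()
    model.load_state_dict(torch.load("best_model.pt", map_location="cpu"))
    return model.eval()

uploaded = st.file_uploader("Upload an artwork", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Uploaded image")

    x = preprocess(image).unsqueeze(0)  # same preprocessing as in training
    with torch.no_grad():
        prob_ai = torch.sigmoid(load_model()(x)).item()

    label = "AI-generated" if prob_ai >= 0.5 else "Human-made"
    st.write(f"Prediction: {label} (p_AI = {prob_ai:.2f})")
```

Running `streamlit run app.py` serves the interface locally; ngrok or localtunnel can then expose that local port for a public demo.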