Not scraping images or messages #455
Comments
Thank you for the quick reply. I tried both methods and it is still the same result: it only scrapes the profile image and header. I deleted all the cache and the model cache in .data, and even used the first method on a new sub and an existing sub, and it is still giving me the same result.
Which commit exactly?
Seems to have just started happening for me a few days ago (I didn't notice until I did another run/pull today). It's only pulling videos, and I'm not certain if it's all of them yet (I need to do an audit). It's definitely not pulling images, or any content in messages. I can't be sure when it started, but it had to have been a few days ago / late last week.
Hmm, actually, I have an example where even videos weren't pulled (new subscription as of today). Trying this specific example with

Update: Nope, that didn't do it. Even with a force it didn't download / re-download everything.

Update 2: Never mind, it works with
Same issue |
Can you try logging out and logging back in?
Here's the log file. Please let me know if this suffices.
Well, according to the script, all the downloads you want have already been marked as downloaded. You have to use --dupe --after 2000.
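For readers who hit the same wall, the two flags suggested above would be combined into a single run, roughly as sketched below. This is a hedged sketch: the `ofscraper` entry point and the flag spellings come from this thread, not from verified documentation, and the exact invocation may differ between versions.

```shell
# Hedged sketch, assuming the flags behave as described in this thread:
# --dupe       : don't skip items the database has already marked as downloaded
# --after 2000 : only consider posts after the given date; "2000" here means
#                "everything since the year 2000", i.e. effectively all posts
ofscraper --dupe --after 2000
```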
This is just one of the logs. It still doesn't help that I can't scrape anything with a new sub. And I'm sorry, I don't know what you mean by using --dupe --after 2000.
Here's the script run I just did with the new sub I subscribed to yesterday. Besides only getting her profile pic and header, it didn't scrape anything else. This is how I found out the script wasn't scraping any images. Thank you for looking into this!
Not sure what happened, but all the downloads you want are already marked as downloaded; you need to force a rescan and redownload with
I'm going to chime in and say that I too have noticed that in some cases some posts aren't being listed or downloaded at all. I tried purging the database and using --dupe, but got the same result. The only way I could get it to download the missing posts was running post_check mode and selecting every row in the table to be downloaded. Sometimes the difference is pretty significant; in one case nearly 100 posts were missing out of a total of around 140. At first I thought it may have been skipping duplicates, but as it turns out that is not the case. Images were affected most, but many videos were also affected by the bug.
I personally found that even though I was running "pip install --upgrade ofscraper", I was still running an old version that would only download videos. I downloaded the latest version from the release page, and that appears to be downloading all images now.
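As a general sanity check for the stale-install situation described above (this is standard pip usage, not anything specific to ofscraper), you can confirm which version pip actually has installed and force a clean reinstall. The `--force-reinstall` and `--no-cache-dir` options are standard pip flags.

```shell
# Show the version pip believes is installed
pip show ofscraper

# Force pip to reinstall the latest release even if it thinks
# the package is already up to date, bypassing the download cache
pip install --upgrade --force-reinstall --no-cache-dir ofscraper
```

If the version reported by `pip show` differs from what the tool prints at startup, you are likely running a different interpreter or environment than the one pip installed into.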
The 3.12.1 Windows build has the partial-download issue outside of post_check mode for me, but only for some users for whatever reason; others are fine.
Please open another issue with a log of an affected user, if possible.
The force reset of cache in 3.12.2 and beyond should fix some of these issues |
Hi there, I downloaded the latest commit from yesterday. I finally got past authentication and got the script to run regularly. However, I noticed that the script is scraping videos OK but not scraping any images in posts or in messages. I don't have any errors or anything to share here, as the script just runs normally.
For example, there are models that I just subscribed to today, and it's not scraping anything but the profile and those images. Another model that I'm subbed to gets no images scraped from posts, only videos.
There are two screenshots I could take while the script was running for a model I newly subbed to today. However, nothing was scraped besides the profile image and the header.
Help, please?
Thanks!
Bump