Replies: 16 comments
-
Today, and right at this moment, it's working (here). The settings affect the crawlers for both services. The default settings are, in absolute terms, a bit too high for Twitter's API limits, but they work for normal crawling/downloading because of the time spent between requests. There is room for improvement. Contributions are welcome.
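The point about defaults working "because of time spent between requests" boils down to simple throttling: if requests are spaced out inside a time window, the effective rate stays below the API ceiling even when the configured maximum looks high. Here is a minimal sliding-window limiter sketch in Python; the class name and structure are illustrative, not TumblThree's actual implementation:

```python
import time

class WindowRateLimiter:
    """Allow at most `max_requests` per `window_seconds` (sliding window).

    Illustrative sketch only; TumblThree's real throttling code differs.
    """

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = []  # monotonic times of recent requests

    def acquire(self):
        """Block until a request is allowed; return the wait time used."""
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        self.timestamps = [t for t in self.timestamps
                           if now - t < self.window_seconds]
        wait = 0.0
        if len(self.timestamps) >= self.max_requests:
            # Wait until the oldest request leaves the window.
            wait = self.window_seconds - (now - self.timestamps[0])
            time.sleep(max(wait, 0.0))
        self.timestamps.append(time.monotonic())
        return wait
```

With, say, `WindowRateLimiter(10, 60.0)`, any burst beyond 10 requests in a 60-second span blocks until the oldest request leaves the window, which is why slow real-world crawling rarely hits the nominal limit.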
-
I have actually been trying to update one user that was already downloaded once, two weeks ago. The error happens during the first minute of running. Then I get the message
-
Please open this blog in your browser and tell me when the first two posts were posted.
-
At the moment I don't have a clue why it's crawling that much on this blog.
-
No, I have almost everything on default settings. The only things I have changed in the software are under:
General:
Connection:
Blog:
-
It seems some error occurs during the crawl process that keeps it from updating.
-
This is the error in
-
This blog downloads without problems here. Even if I try to emulate your situation by adapting the settings and the blog file accordingly, it downloads the posts up to the one from last time and then stops. I don't know what the difference to your system could be. You could back up the blog's download folder and its two blog files. Then you can add the blog again and see whether it works again and downloads the missing new posts. Later you can close the app and merge the backed-up files and the already downloaded entries in "blog"_files.twitter from the backup into the current one (just all entries; a few duplicates are ok).
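The merge step described above can be scripted. This sketch assumes the "blog"_files.twitter registry is a plain list with one entry per line, which is an assumption about the file format, not something confirmed in this thread, and it assumes the app is closed while the files are touched. The helper name `merge_entries` is hypothetical:

```python
def merge_entries(current_path, backup_path):
    """Append entries from the backup file that are missing from the
    current one.

    ASSUMPTION: one entry per line in both files; the real
    "blog"_files.twitter format may differ.
    """
    with open(current_path, encoding="utf-8") as f:
        current = f.read().splitlines()
    with open(backup_path, encoding="utf-8") as f:
        backup = f.read().splitlines()

    seen = set(current)
    merged = current + [line for line in backup if line not in seen]

    with open(current_path, "w", encoding="utf-8") as f:
        f.write("\n".join(merged) + "\n")
    return merged
```

Duplicates are filtered here for tidiness, though per the comment above a few duplicates would be harmless.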
-
Report from start to end:
Conclusions:
Log:
-
Ok, but now we are talking about a different thing, aren't we? It's no longer about downloading a few dozen recent posts, but about downloading historic posts (or rather complete blogs). The download of the "post lists" counts towards the limit, whether a post's media is downloaded or skipped.
-
To my understanding: I see no difference between updating an already downloaded blog and a completely new download. In other words, you will never be able to update/download the second blog in the download queue if the first user has a large amount of posts. The problem with updating a blog would not be a problem if you only got the recent posts between the last run and now.
Problem summary:
-
First, you experience, or rather describe, something that I don't see here. It looks like most other users can update their existing blogs too.
That's exactly what we're doing, precisely.
Not automatically and unattended, yes. You can, for example, remove this blog from the download queue, which stops its crawler and continues with the next one. Let me summarize what I get (and probably others too):
The last point needs to be fixed, so that all posts up to the limit are downloaded and the blog is then marked as completely downloaded.
If you know how to fix it, you are welcome to do so (or share it). @Hrxn @desbest @cr1zydog
-
I've never used Twitter with this app before, so my own experience here is a little limited. That said, what you state here is obviously true:
The third point is the real issue, as I understand it, and yes, this is a limitation due to how Twitter works.
-
I can't download any blogs, new or old, whether they have few posts or many.
-
I had this problem several months ago, but it hasn't bothered me since, and I didn't change anything other than applying the routine TumblThree updates. I catch up with all my Tumblr blogs once a month and add any newly discovered ones. I'm now following 257 Tumblr blogs (I know, I'm hooked!), and the last catch-up on the first of the month was 147 GB and 404,000 files. It took almost 24 hours to harvest everything, but ran perfectly.
-
Describe the bug
Still being Rate Limited on the Twitter API, with a suggestion to lower the connections in Settings. This, however, makes no difference at all. I tried as low as 10 "Numbers of connections in 60s" with only 1 "Concurrent connection". To my understanding of the Twitter API rate limits (https://developer.twitter.com/en/docs/twitter-api/rate-limits), this shouldn't be an issue?
This also raises the question whether the settings only affect the Tumblr API. Should both Tumblr and Twitter really be treated under the same settings and name?
And shouldn't there also be a way to authenticate a Twitter account? This would allow you to crawl users that only allow followers.
Desktop (please complete the following information):
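Putting rough numbers on the settings from the report above: "10 connections in 60s" with 1 concurrent connection bounds the crawler at 150 requests per 15-minute window, which is the window size Twitter's rate-limit documentation uses. A quick arithmetic sketch; the 900 figure is illustrative of a typical documented per-endpoint ceiling, not a guaranteed value:

```python
def requests_per_window(conns_per_60s, window_minutes=15):
    """Upper bound on requests the crawler can issue in one rate-limit
    window, given the "Numbers of connections in 60s" setting.
    Concurrency does not change this per-window total."""
    return conns_per_60s * window_minutes

issued = requests_per_window(10)  # 10 per minute for 15 minutes -> 150

# Illustrative per-endpoint ceiling; real limits vary by endpoint and
# by user vs. app authentication.
ENDPOINT_LIMIT = 900
assert issued < ENDPOINT_LIMIT
```

If requests at this pace still come back rate limited, the cause is probably something other than the connection settings alone, which matches the reporter's observation that lowering them made no difference.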