Questions, Feedback, and Suggestions #4 #5262

Open
mikf opened this issue Mar 1, 2024 · 204 comments

@mikf
Owner

mikf commented Mar 1, 2024

Continuation of the previous issue as a central place for any sort of question or suggestion not deserving its own separate issue.

Links to older issues: #11, #74, #146.

@BakedCookie

For most sites I'm able to sort files into year/month folders like this:

"directory": ["{category}", "{search_tags}", "{date:%Y}", "{date:%m}"]

However, for redgifs it doesn't look like there's a date keyword available for directory; there's only a date keyword available for filename. Is this an oversight?

@mikf
Owner Author

mikf commented Mar 2, 2024

Yep, that's a mistake that happened when adding support for galleries in 5a6fd80.
Will be fixed with the next git push.

edit: 82c73c7

@taskhawk

taskhawk commented Mar 6, 2024

There's a typo in extractor.reddit.client-id & .user-agent:

"I'm not a rebot"

@the-blank-x
Contributor

There's also another typo in extractor.reddit.client-id & .user-agent: "reCATCHA"

@biggestsonicfan

Can you grab all the media from quoted tweets? Example.

mikf added a commit that referenced this issue Mar 7, 2024
mikf added a commit that referenced this issue Mar 7, 2024 (9fd851c)

@mikf
Owner Author

mikf commented Mar 7, 2024

Regarding typos, thanks for pointing them out.
I would be surprised if there aren't at least 10 more somewhere in this file.

@biggestsonicfan
This is implemented as a search for quoted_tweet_id:… on Twitter's end.
I've added an extractor for it similar to the hashtags one (40c0553), but it only does said search under the hood.

@BakedCookie

BakedCookie commented Mar 7, 2024

Normally %-encoded characters in the URL get converted nicely when running gallery-dl, eg.

https://gelbooru.com/index.php?page=post&s=list&tags=nighthawk_%28circle%29
gives me a nighthawk_(circle) folder

but for this url:
https://gelbooru.com/index.php?page=post&s=list&tags=shin%26%23039%3Bya_%28shin%26%23039%3Byanchi%29

I'm getting a shin&#039;ya_(shin&#039;yanchi) folder. Shouldn't I be getting a shin'ya_(shin'yanchi) folder instead?

EDIT: Actually, I think there's just something wrong with that URL. I had it saved for a long time and searching that tag normally gives a different URL (https://gelbooru.com/index.php?page=post&s=list&tags=shin%27ya_%28shin%27yanchi%29). I still got valid posts from the weird URL so I didn't think much of it.

@mikf
Owner Author

mikf commented Mar 7, 2024

%28 and so on are URL escaped values, which do get resolved.
&#039; is the HTML escaped value for '.

You could use {search_tags!U} to convert them.
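
For example, applying the conversion to the directory format from earlier (a sketch; !U unescapes HTML entities before the path is built):

"directory": ["{category}", "{search_tags!U}", "{date:%Y}", "{date:%m}"]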

@taskhawk

taskhawk commented Mar 8, 2024

Is there support to remove metadata like this?

gallery-dl -K https://www.reddit.com/r/carporn/comments/axo236/mean_ctsv/

...
preview['images'][N]['resolutions'][N]['height']
  144
preview['images'][N]['resolutions'][N]['url']
  https://preview.redd.it/mcerovafack21.jpg?width=108&crop=smart&auto=webp&s=f8516c60ad7fa17c84143d549c070738b8bcc989
preview['images'][N]['resolutions'][N]['width']
  108
...

Post-processor:

"filter-metadata":
    {
      "name": "metadata",
      "mode": "delete",
      "event": "prepare",
      "fields": ["preview[images][0][resolutions]"]
    }

I've tried a few variations but no dice.

"fields": ["preview[images][][resolutions]"]
"fields": ["preview[images][N][resolutions]"]
"fields": ["preview['images'][0]['resolutions']"]

@YuanGYao

YuanGYao commented Mar 8, 2024

Hello, I left a comment in #4168. Does the _pagination method of the WeiboExtractor class in weibo.py return when data["list"] is an empty list?
When I used gallery-dl to batch-download Weibo album pages, the downloads also came out incomplete.
Testing on the web page, I found that Weibo's getImageWall API sometimes returns an empty list before all images have loaded. I think this may be what causes gallery-dl to terminate the download early.

@mikf
Owner Author

mikf commented Mar 8, 2024

@taskhawk
fields selectors are quite limited and can't really handle lists.
You might want to use a Python post processor (example) and write some code that does this:

def remove_resolutions(metadata):
    for image in metadata.get("preview", {}).get("images", ()):
        image.pop("resolutions", None)

(untested; the .get/.pop calls guard against posts where preview or images is missing)

@YuanGYao
Yes, the code currently stops when Weibo's API returns no more results (empty list).
This is probably not ideal, as I've hinted at in #4168 (comment)

@YuanGYao

YuanGYao commented Mar 9, 2024

@mikf
Well, I think for Weibo's album page, since_id should be used to determine whether the images have fully loaded.
I updated my comment in #4168 (comment) and attached the response returned by Weibo's getImageWall API.
I think this should help solve the problem.

@BakedCookie

Not sure if I'm missing something, but are directory-specific configurations exclusive to running gallery-dl via the executable?

Basically, I have a directory for regular tags, and a directory for artist tags. For regular tags I use "directory": ["{category}", "{search_tags}", "{date:%Y}", "{date:%m}"] since the tag number is manageable. For artist tags though, there's way more of them so this "directory": ["{category}", "{search_tags[0]!u}", "{search_tags}", "{date:%Y}", "{date:%m}"] makes more sense.

So right now the only way I know to get this per-directory configuration to work is to copy the gallery-dl executable everywhere I want to use a master-configuration override. Am I missing something? It feels like there should be a better way.

@Hrxn
Contributor

Hrxn commented Mar 11, 2024

Huh? No, the configuration works always in the same way. You're simply using different configuration files?

@BakedCookie

@Hrxn

From the readme:

When run as executable, gallery-dl will also look for a gallery-dl.conf file in the same directory as said executable.

It is possible to use more than one configuration file at a time. In this case, any values from files after the first will get merged into the already loaded settings and potentially override previous ones.

I want to override my master configuration %APPDATA%\gallery-dl\config.json in specific directories with a local gallery-dl.conf but it seems like that's only possible with the standalone executable.

@taskhawk

taskhawk commented Mar 11, 2024

You can load additional configuration files from the console with:

-c, --config FILE           Additional configuration files

You just need to specify the path to the file and any options there will overwrite your main configuration file.
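
For example (the override path here is illustrative):

gallery-dl -c ./artist-override.conf "https://gelbooru.com/index.php?page=post&s=list&tags=ARTIST"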

Edit: From my understanding, yeah, automatic loading of local config files in each directory is only possible by having the standalone executable in each directory. Are different directory options the only thing you need?

@BakedCookie

@taskhawk

Thanks, that's exactly what I was looking for! Guess I didn't read the documentation thoroughly enough.

For now the only thing I'd want to override is the directory structure for artist tags. I don't think it's possible to determine from the metadata alone if a given tag is the name of an artist or not, so I thought the best way to go about it is to just have a separate directory for artists, and use a configuration override. So yeah, loading that override with the -c flag works great for that purpose, thanks again!

@taskhawk

taskhawk commented Mar 11, 2024

You kinda can, but you need to enable tags for Gelbooru in your configuration to get them, which will require an additional request:

    "gelbooru": {
      "directory": {
        "search_tags in tags_artists": ["{category}", "{search_tags[0]!u}", "{search_tags}", "{date:%Y}", "{date:%m}"],
        ""                           : ["{category}", "{search_tags}", "{date:%Y}", "{date:%m}"]
      },
      "tags": true
    },

Set "tags": true in your config and run a test with gallery-dl -K "https://gelbooru.com/index.php?page=post&s=list&tags=TAG" so you can see the tags_* keywords.

Of course, this depends on the artists being correctly tagged. Not sure if it happens on Gelbooru, but at least on other boorus and booru-like sites I've come across posts with the artist tagged as a general tag instead of an artist tag. Another limitation is that your search can only include one artist at a time; more than one would require a more complex expression to check that all tags are present in tags_artists.

What I do instead is that I inject a keyword to influence where it will be saved, like this:

gallery-dl -o keywords='{"search_tags_type":"artists"}' "https://gelbooru.com/index.php?page=post&s=list&tags=ARTIST"

And in my config I have

    "gelbooru": {
      "directory": ["boorus", "{search_tags_type}", "{search_tags}"]
    },

You can have:

    "gelbooru": {
      "directory": {
        "search_tags_type == 'artists'": ["{category}", "{search_tags[0]!u}", "{search_tags}", "{date:%Y}", "{date:%m}"],
        ""                             : ["{category}", "{search_tags}", "{date:%Y}", "{date:%m}"]
      }
    },

You can do this for other tag types, like general, copyright, characters, etc.

Because it's a chore to type that option every time, I made a wrapper script, so I just call it like this (artists is my default):

~/script.sh "TAG"

For other tag types I can do:

~/script.sh --copyright "TAG"
~/script.sh --characters "TAG"
~/script.sh --general "TAG"

@BakedCookie

Thanks for pointing out there's a tags option available for the gelbooru extractor. I already used it in the kemono extractor to get the name of the artist, but it didn't occur to me that gelbooru might also have such an option (and just accepted that the tags aren't categorized).

For artists I store all the URLs in their respective gelbooru.txt, rule34.txt, etc. files like so:

https://gelbooru.com/index.php?page=post&s=list&tags=john_doe
https://gelbooru.com/index.php?page=post&s=list&tags=blue-senpai
https://gelbooru.com/index.php?page=post&s=list&tags=kaneru
...

And then just run gallery-dl -c gallery-dl.conf -i gelbooru.txt. Since the search_tags ends up being the artist anyway, getting tags_artists is probably not worth the extra request. Same for general tags, and copyright tags, in their respective directories. With this workflow I can't immediately see where I'd be able to utilize keyword injection, but it's definitely a useful feature that I'll keep in mind.

@Wiiplay123
Contributor

When I'm making an extractor, what do I do if the site doesn't have different URL patterns for different page types? Every single page is just a numerical ID that could be a forum post, image, blog post, or something completely different.

@mikf
Owner Author

mikf commented Mar 19, 2024

@Wiiplay123 You handle everything with a single extractor and decide what type of result to return on the fly. The gofile code is a good example for this I think, or aryion.
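
In gallery-dl terms, the pattern looks roughly like this (a loose sketch, not the actual gofile/aryion code; the site, URL pattern, and page checks are all made up):

from gallery_dl.extractor.common import Extractor, Message

class NumericPageExtractor(Extractor):
    """Hypothetical site where every URL is just example.org/<id>"""
    category = "example"
    pattern = r"(?:https?://)?example\.org/(\d+)"

    def __init__(self, match):
        Extractor.__init__(self, match)
        self.page_id = match.group(1)

    def items(self):
        url = "https://example.org/" + self.page_id
        page = self.request(url).text

        # decide on the fly what kind of page this ID points to
        if "<article" in page:
            # forum/blog post: hand it off for other extractors to process
            yield Message.Queue, url + "/attachments", {}
        else:
            # image page: emit the file directly
            data = {"id": self.page_id, "extension": "jpg"}
            yield Message.Directory, data
            yield Message.Url, url + "/image.jpg", data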

@I-seah

I-seah commented Mar 20, 2024

Hi, what options should I use in my config file to change the format of dates in metadata files? I would like to use "%Y-%m-%dT%H:%M:%S%z" for the values of "date" and "published" (from coomer/kemono downloads).

And would it also be possible to do this for json files that ytdl creates? I downloaded some videos with gallery-dl but the dates got saved as "upload_date": "20230910" and "timestamp": 1694344011, so I think it might be better to convert the timestamp to a date to get a more precise upload time, but I'm not sure if it's possible to do that either.

@xzorby

xzorby commented Aug 9, 2024

How do I make sure reddit GIFs are downloaded using yt-dlp? I've set extractor.reddit.videos to "ytdl", but it looks like gallery-dl still downloads reddit animated GIFs directly as if they were images.

yt-dlp does seem to work correctly, because some redgifs videos are downloaded to a ytdl folder (although most are downloaded using the redgifs downloader). I want to download .mp4 files instead of .gif and I've set the appropriate ytdl format settings under extractor.ytdl.format (also under reddit>ytdl) but it seems reddit GIF urls are not being sent to ytdl.

@xzorby

xzorby commented Aug 11, 2024

I have put an ffmpeg postprocessor in gallery-dl to convert the GIFs locally, which works, but now I'm having issues with my archive database.

For 32 subreddits, gallery-dl only downloads new files, as expected. But for 3 subreddits, some 300-400 files are downloaded every time, going back months, and for these, newly downloaded files are not being saved to the database. This is with archive-mode set to "memory".

I thought something in the database had been corrupted, but PRAGMA integrity_check; returns "ok" and I can't see anything obviously wrong with the database.

Using a backup sqlite3 archive from 3 weeks ago, only new files from the past 3 weeks are downloaded for those 3 "broken" subreddits, but newly downloaded files still are not saved to the database.

What's going on?

Edit 4 (or 5? I lost track): I think I have it fixed now. I set archive-mode back to the default, started with a clean database file, dumped the old database content into an SQL script, and imported it back into the clean database file.

@biggestsonicfan

If I import gallery-dl as a Python dependency, can I use its filename sanitization function?

@Skyofflad

I use a shadowsocks proxy to circumvent bans in my country. All of the sites can be viewed in browser, but gallery-dl throws errors for some of them (like kemono and furaffinity).
Maybe I need to change the user-agent?
Does anyone have experience with this?

@Hrxn
Contributor

Hrxn commented Aug 27, 2024

@biggestsonicfan Yes, you can import and use functions from gallery-dl just as you like.

@fireattack
Contributor

fireattack commented Aug 27, 2024

I use a shadowsocks proxy to circumvent bans in my country. All of the sites can be viewed in browser, but gallery-dl throws errors for some of them (like kemono and furaffinity). Maybe I need to change the user-agent? Does anyone have experience with this?

You need to be more specific. What's your command? g-dl supports --proxy and I've used it before without issues. Post verbose log.

@Skyofflad

You need to be more specific. What's your command? g-dl supports --proxy and I've used it before without issues. Post verbose log.

@fireattack, I set the proxy in the config file: "proxy": {"https": "socks5://127.0.0.1:1080"}. It worked previously, but now some sites refuse the connection.

Furaffinity:

gallery-dl -K -v https://www.furaffinity.net/user/zeusdex
gallery-dl: Version 1.27.3
gallery-dl: Python 3.8.10 - Linux-5.15.0-118-generic-x86_64-with-glibc2.29
gallery-dl: requests 2.32.3 - urllib3 2.2.1
gallery-dl: Configuration Files ['${HOME}/.config/gallery-dl/config.json']
gallery-dl: Starting KeywordJob for 'https://www.furaffinity.net/user/zeusdex'
furaffinity: Using FuraffinityUserExtractor for 'https://www.furaffinity.net/user/zeusdex'
furaffinity: This extractor only spawns other extractors and does not provide any metadata on its own.
furaffinity: Showing results for 'https://www.furaffinity.net/gallery/zeusdex/' instead:

furaffinity: Using FuraffinityGalleryExtractor for 'https://www.furaffinity.net/gallery/zeusdex/'
furaffinity: Loading cookies from 'cookies/fa-cookies.txt'
urllib3.connectionpool: Starting new HTTPS connection (1): www.furaffinity.net:443
furaffinity: SOCKSHTTPSConnectionPool(host='www.furaffinity.net', port=443): Max retries exceeded with url: /gallery/zeusdex/1/ (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x7f8033656f70>: Failed to establish a new connection: [Errno -2] Name or service not known')) (1/11)

Kemono:

downloader.http: SOCKSHTTPSConnectionPool(host='n1.kemono.su', port=443): Max retries exceeded with url: /data/2a/bf/2abfdd2dcf45858b7b0f13776d81b1cdd3c82955b627532faf7567cb2b7f9781.jpg (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x7feec89a70d0>: Failed to establish a new connection: [Errno -2] Name or service not known')) (1/21)

@mikf
Owner Author

mikf commented Aug 27, 2024

@Skyofflad
You need to use socks5h:// as the scheme for your SOCKS proxy server to use it for DNS requests as well. With socks5://, DNS requests are done over your regular connection.
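
i.e. in the config from above:

"proxy": {"https": "socks5h://127.0.0.1:1080"}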

@Skyofflad

@mikf, thanks, it works

@throwaway242685

hi, so, hm... are there any plans to add basic support for Facebook?

@mikf
Owner Author

mikf commented Aug 29, 2024

Facebook

There's an open pull request: #5626
I just need to review and merge it ...

@klazoklazo

I'm trying to download galleries from furaffinity but it seems to be ignoring all NSFW posts. I synced my profile's cookies with gallery-dl but it seems to continue ignoring NSFW posts unless I manually define the "a" and "b" cookies in the JSON itself.

Is there a way to get gallery-dl to automatically recognize these cookies from my profile? Or am I missing something here?

@Coro365

Coro365 commented Sep 9, 2024

Hello.
I think that currently, when a pixiv user changes their id name (user[account]) or screen name (user[name]), gallery-dl re-downloads all of that user's images. It would be helpful if there were a feature to avoid this, e.g. by setting the following:

"pixiv": {"directory": ["pixiv", "{user[id]}{ignore} {user[name]} {user[account]}"]}

In the above example, if a directory starting with {user[id]} already exists, it would be used; otherwise a {user[id]} {user[name]} {user[account]} directory would be created and used.

Thank you!

@biggestsonicfan

This is a postprocessor I use for every gallery type:

        "json_metadata":
        {
            "name": "metadata",
            "event": "prepare",
            "mode": "json",
            "directory": "json",
            "extension-format": "json"
        }

I have now run into a situation where I updated a gallery to which I am no longer subscribed, and all json files were replaced. The problem is that the json data contained the passwords to access the content of each post. So now each post is re-locked until I resubscribe and redownload the json metadata.

Is there a flag with which I can say "if exists, do not overwrite" for a json post-processor?

@mikf
Owner Author

mikf commented Sep 10, 2024

@klazoklazo
What do you mean by "I synced my profile's cookies with gallery-dl"? --cookies-from-browser? It sometimes fails to grab all cookies, especially for Chrome. Try exporting them to a cookies.txt file with a browser addon and loading them from there, or do it manually like you already said you did. These cookies usually never expire (although they did for me quite recently).
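
Loading an exported file would then look something like this in the config (path is illustrative):

"cookies": "~/cookies/furaffinity.txt"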

@Coro365
This situation is one of the reasons why --download-archive exists, so you don't re-download files even if their filesystem paths change.
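
For example (archive path is illustrative):

gallery-dl --download-archive ~/archive.sqlite3 "https://www.pixiv.net/en/users/USER_ID"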

@biggestsonicfan
https://gdl-org.github.io/docs/configuration.html#metadata-skip
(This option was added in 00f0233)
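
With that option, the post-processor from above becomes (a sketch; untested):

"json_metadata":
{
    "name": "metadata",
    "event": "prepare",
    "mode": "json",
    "directory": "json",
    "extension-format": "json",
    "skip": true
}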

@klazoklazo

@mikf
I synced my profile's cookies with gallery-dl through the config JSON, via

"cookies": ["firefox", "/home/klazo/.librewolf/h4kfijr8.default-default"],
"cookies-update": true,

This seems to work for every other website I poll through gallery-dl, except for Fur Affinity where it doesn't grab any NSFW images.

I'd like to be able to sync all my cookies with my browser automatically if possible since, like you said, even if it takes a while they do expire eventually, and I'd rather not be taken by surprise.

@biggestsonicfan

@mikf
Oh, that'll work a treat! Still, if I do end up subscribing to someone on a platform where free/preview metadata was downloaded, I will need to edit the post-processor or remove the existing metadata. I was thinking maybe a size comparison of the metadata downloaded by gallery-dl vs. what's on disk could do the trick: theoretically the "bigger" file would contain the paywalled data (text/links/whatever). But if new fields were added, that could push the size of a free/locked post's metadata over the previously downloaded paid metadata and overwrite it. It's a tricky scenario that I'll keep in the back of my mind to figure out the best solution, but for now manually making sure I have the paid data is fine. Thanks!

@Coro365

Coro365 commented Sep 11, 2024

@mikf
Thanks for the reply.
I had overlooked that.
Thanks kindly, I'll give it a try.

@raz3x

raz3x commented Sep 12, 2024

Guys, I need some help. I tried to look for this but I couldn't find anything similar. Maybe there is and I'm just a noob, so I apologize in advance.

Here's my situation:

Let's say I downloaded some artworks from "random website A" a while ago.
But as I got to know gallery-dl more, the way I named my files got better organized.
So I basically use "-o skip=false" now in order to redownload some stuff with the new settings.
Everything is fine until a few things go wrong here and there, and a few files aren't downloaded due to connection issues or something.
This could be easily solved just by re-entering the same command line to download what is missing.
But the thing is that I'm using "-o skip=false" because of my already existing archive, and it ends up downloading everything again.
And if I remove "-o skip=false", gallery-dl will take the archive in consideration and skip everything.

tldr: I would like to know if there is a way to skip files based on what's inside the folder instead of the archive. Just like yt-dlp for instance. Or is wiping my archive the only way?

Thanks for reading through.

UPDATE: I got this. So I basically edited "archive.sqlite3" with sqlitebrowser. Deleted all the entries related to the artist. Next time I entered the command line, it basically recognized what was already in the folder and downloaded what was missing. Maybe there is an easier way? I don't know, but it worked like a charm.

@Hrxn
Contributor

Hrxn commented Sep 12, 2024

@raz3x What cache?
Do you mean the archive option?

This is similar to the --download-archive option of yt-dlp, except it's more advanced (uses a database) and much more flexible.

@raz3x

raz3x commented Sep 12, 2024

@Hrxn

Yeah, that's what I meant. My bad haha. I will edit the original post.

@taskhawk

One question, why was the following line:

self.out.skip(pathfmt.path)

moved after the if-block in commit 3595721#diff-805418c86a6e54601f79e880a0a58749fbc92607592a0d4f73d1e0bc2c8e56f1?

I'm manipulating the output of gallery-dl to give me a bit more information, and after upgrading to a later version my output broke. I'm wondering if it will cause any issues if I just move the line back in my local copy. I think what's happening is that the code in the if-block is printing a newline for each file somewhere in there.

@docholllidae

docholllidae commented Sep 18, 2024

is it possible to specify multiple cookies in the config such that g-dl will cycle through them as needed?

eg:
when downloading a list of twitter users with cookie[0], if the profile is private and c[0] doesn't have access, then g-dl will try with c[1], then c[2], etc

i'm also curious if it would be possible to randomly cycle through a cookie list to help prevent account bans, e.g. when downloading instagram:
feed an array of cookies into the config, and when downloading from a list of URLs it will randomly choose a cookie each time it starts a new extraction/input URL

@biggestsonicfan

Heavily related to my previous post, I've now encountered a new patron who edits the text, image attachment, and file attachment of a single post to update rewards from month to month. Since they do change the title of the post, my filename schema shouldn't match and it might be redownloaded as a new post. I won't know until next month, but I guess I'll cross that bridge when I get there.

How expensive would it be, computationally, to check specific fields within json dumps to determine whether an enumerated file should be downloaded or not?

@throwaway242685

throwaway242685 commented Sep 23, 2024

is there a way to make gallery-dl stop/exit when cookies get expired, even when there aren't any errors?

there are times when my IG cookies get expired but it doesn't show me any errors, so it just keeps downloading files, lol.

this only works when there are explicit errors:

"error:NotFoundError|AuthorizationError|HttpError|HTTP redirect to login page": "exit 0"

@topchaser

I am getting the error pixiv: Unable to download work 59915441 ('sanity_level' warning) when I try to download this link (NSFW, but you cannot see it unless logged in):
https://www.pixiv.net/en/artworks/59915441

I see many mentions of this error:
https://github.com/mikf/gallery-dl/issues?q=sanity+level+warning

but I read through many of them trying to understand what to do, and I cannot figure it out. Will someone please tell me how to fix this?

Also, just to vent, I had no idea how long this had been happening, or if any of my attempts to download pixiv profiles prior had been subject to this. I can't retroactively check any logs, since I think I used to have logs, but it would cause redownloading profiles to skip media it already downloaded, which annoyed me. I didn't know if I could disable that specifically, so I just gave up on having logs. So, I potentially am missing media when I intended to get everything. I am a bit sad about it. Also, the "logs" I am describing might actually be something entirely different, and might not have told me of this error anyway. I don't know. I barely manage to get gallery-dl working for myself, so it working at all is essentially where my knowledge on the program ends.

@biggestsonicfan

Also, just to vent, I had no idea how long this had been happening...

I've come across this too often. Regular auditing of your archives sucks but is almost a necessary thing to do if you want to make sure you have it all. I'd recommend polishing up on some Python skills, and while you don't have to work with gallery-dl's code itself necessarily, you can write your own little audit scripts as needed. I wish we all were at a point where we could say a program is bulletproof, but not knowing everyone's scenarios and every gallery type out there throws curveballs and exceptions into the mix.

@topchaser

@biggestsonicfan part of the problem was I delayed updating to Windows 10 for a very long time, so the cmd window allowing a seemingly infinite amount of text (or at least enough that it dwarfs Windows 7's, which didn't even allow a gallery-dl -K command to be fully displayed) is by all accounts extremely new to me relative to the years I've been using gallery-dl. It now would be no issue to just scroll up in the command window before I close it, but before, I had to babysit it in the present, without letting it scroll too far before checking on it again, since I didn't (and still don't) know if I can even keep a log of everything I've downloaded, to retroactively check for errors if ever I so chose. I don't know if I have it in me to commit to anything much, especially considering that even reading the existing issues on my immediate problem is something I gave up on after trying to make sense of them for maybe half an hour. But it would be in my best interest to do so, of course. For now I will just scroll up in my cmd windows before I close them, I guess. It is so easy to do, I should've been doing it since I updated to Windows 10.

@topchaser

Trying to download this:
https://misskey.gg/notes/9yp3zt35c3

using:
gallery-dl misskey:https://misskey.gg/notes/9yp3zt35c3

produces this error:
[downloader.http][warning] ('Connection broken: IncompleteRead(0 bytes read, 58762 more expected)', IncompleteRead(0 bytes read, 58762 more expected)) (1/5)

until it hits 5/5 then fails. It happens for all misskey.gg links. In contrast, misskey.io links work without even needing to preface the link with "misskey:". For example:
https://misskey.io/notes/9ru7yqi5u4j6070a

Is there anything I can do to make misskey.gg links work?
