
Commit ca57dbb

Update README for v1.2.0
1 parent 45631b1 commit ca57dbb


README.md

Lines changed: 38 additions & 8 deletions
@@ -11,19 +11,19 @@ Written in Python using PySide6.
 ## Features
 
 - Keyboard-friendly interface for fast tagging
-- Tag autocomplete based on your most used tags
-- Integrated token counter
-- Searchable list of all used tags
-- Filter images by tag
-- Rename or delete all instances of a tag
+- Tag autocomplete based on your own most-used tags
+- Integrated Stable Diffusion token counter
+- Batch tag renaming and deleting
+- BLIP-2 caption generation
 - Automatic dark mode based on system settings
 
 ## Installation
 
 The easiest way to use the application is to download the latest release from
 the [releases page](https://www.github.com/jhc13/taggui/releases).
-Choose the appropriate executable file for your operating system.
-The file can be run directly without any additional dependencies.
+Choose the appropriate `.zip` file for your operating system, extract it
+wherever you want, and run the executable file inside.
+No additional dependencies are required.
 
 Alternatively, you can install manually by cloning this repository and
 installing the dependencies in `requirements.txt`.
@@ -40,10 +40,40 @@ Any changes you make to the tags are also automatically saved to these `.txt`
 files.
 
 You can change the settings in `File` -> `Settings`.
-Panes can be resized, undocked, and moved around.
+Panes can be resized, undocked, moved around, or placed on top of each
+other to create a tabbed interface.
+
+## BLIP-2 Captioning (New in v1.2.0)
+
+In addition to manual tagging, you can use the BLIP-2 model to automatically
+generate captions for your images inside TagGUI.
+GPU generation requires a compatible NVIDIA GPU, and CPU generation is also
+supported.
+
+To use the feature, select the images you want to caption in the image list,
+then click the `Caption With BLIP-2` button in the BLIP-2 Captioner pane.
+You can select a single image to get a caption for that image, or multiple
+images to batch generate captions for all of them.
+It can take up to several minutes to download and load the model when you first
+use it, but subsequent generations will be much faster.
+
+You can put some text inside the `Start caption with:` box to make the model
+generate captions that start with that text.
+For example, you can write `A photo of a person wearing` to get captions that
+describe the clothing of the subject.
+Additional generation parameters such as the minimum number of tokens and the
+repetition penalty can be viewed and changed by clicking the
+`Show Advanced Settings` button.
+If you want to know more about what each parameter does, you can read the
+[Hugging Face documentation](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig).
 
 ## Controls
 
+- Focus the image list: `Alt`+`L`
+- Focus the `Add Tag` box: `Alt`+`A`
+- Focus the `Search Tags` box: `Alt`+`S`
+- Focus the `Caption With BLIP-2` button: `Alt`+`C`
+
 ### Images pane
 
 - Previous / next image: `Up` / `Down` arrow keys
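
For readers curious what the BLIP-2 captioning feature added above does under the hood, here is a minimal sketch of BLIP-2 caption generation using the Hugging Face `transformers` library. The checkpoint name, image path, and parameter values are illustrative assumptions, not TagGUI's actual code; `min_new_tokens` and `repetition_penalty` are standard `GenerationConfig` parameters of the kind exposed under `Show Advanced Settings`, and the text prompt plays the role of the `Start caption with:` box.

```python
# Minimal sketch of BLIP-2 captioning with Hugging Face transformers.
# The checkpoint, image path, and parameter values are assumptions for
# illustration; TagGUI's internal implementation may differ.
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

checkpoint = "Salesforce/blip2-opt-2.7b"  # assumed BLIP-2 checkpoint

# Use a CUDA GPU when available; otherwise generate on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# The first run downloads the model to the local cache, which can take
# several minutes; subsequent runs load it from disk.
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=dtype).to(device)

image = Image.open("image.jpg")  # hypothetical image path
# Optional prefix, analogous to the `Start caption with:` box: the model
# generates a caption that continues from this text.
prompt = "A photo of a person wearing"
inputs = processor(images=image, text=prompt,
                   return_tensors="pt").to(device, dtype)

# Generation parameters like those under `Show Advanced Settings`; see the
# GenerationConfig documentation linked in the README section above.
output_ids = model.generate(**inputs, max_new_tokens=50,
                            min_new_tokens=10, repetition_penalty=1.5)
caption = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(caption.strip())
```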
