Replies: 4 comments 2 replies
---
I've created a small reproduction: https://github.com/jawngee/slow-vips-ops-repro/tree/main. I've tested it on a couple of different machines to the same effect.
---
This might be a question for https://github.com/davidbyttow/govips: e.g. davidbyttow/govips#289 will be preventing certain optimisations/caching from occurring, and davidbyttow/govips#197 will be required to switch from sequential to random access. You may find the latter more appropriate given the large Gaussian blur operation.
---
As well as Lovell's excellent advice, you can speed your code up a bit with some small refactoring. I made a tiny test framework in Python:

```python
#!/usr/bin/env python3

import sys
import pyvips
import timeit

def blur(filename):
    image = pyvips.Image.new_from_file(filename, access="sequential")
    image = image.gaussblur(18)
    data = image.write_to_buffer(".jpg", Q=100)

def pix(filename):
    image = pyvips.Image.new_from_file(filename, access="sequential")
    image = image.resize(1.0 / 16).resize(16)
    data = image.write_to_buffer(".jpg", Q=100)

def pix_and_blur(filename):
    image = pyvips.Image.new_from_file(filename, access="sequential")
    image = image.gaussblur(18)
    image = image.resize(1.0 / 16).resize(16)
    data = image.write_to_buffer(".jpg", Q=100)

def time(statement):
    t = timeit.timeit(statement, number=5, globals=globals())
    print(f"{statement} took {t:.1f}s")

time("blur(sys.argv[1])")
time("pix(sys.argv[1])")
time("pix_and_blur(sys.argv[1])")
```

Which I think matches your dockerfile. With a debug build of libvips I see:
The "computed 880%" etc. is a debug feature of libvips which counts the pixels that pass through each operation against the expected number of pixels in the image -- you can see that running a large-radius gaussblur before a resize causes almost 9x overcomputation of the earlier stages. I'd first swap the two resizes for shrink and zoom:
These are low-level operations that do block averaging and block zooming:

https://www.libvips.org/API/current/libvips-resample.html#vips-shrink
https://www.libvips.org/API/current/libvips-conversion.html#vips-zoom

I see:
For the pathological case (blur, then shrink, then zoom) you have a few choices. The simplest is to cache after the gaussblur, e.g.:
This will render the blur to memory, then do the shrink and zoom. It gets the time down to 0.9s, but it will use quite a bit of memory for the cache. Alternatively, you can use random access mode (as Lovell said):

```python
def pix_and_blur(filename):
    image = pyvips.Image.new_from_file(filename)
    image = image.gaussblur(18)
    image = image.shrink(16, 16).zoom(16, 16)
    data = image.write_to_buffer(".jpg", Q=100)
```

That gives 1.7s, but again will raise memory use, or maybe (for very large images) disc use.
---
Just for fun, I made a tiny thing to blur a circular area of an image:

```python
#!/usr/bin/env python3

import sys
import pyvips

def blur(filename, x, y, r):
    """blur a circular area at (x, y), radius r.
    """
    image = pyvips.Image.new_from_file(filename)

    # we blur the whole image, make a mask for the circle, then use ifthenelse
    # to pick pixels from either the blurred version or the original
    blur = image.gaussblur(18)

    # bounding box of the circle
    circle_image = image.crop(x - r, y - r, r * 2, r * 2)
    circle_blur = blur.crop(x - r, y - r, r * 2, r * 2)

    # make an ideal circular mask
    mask = pyvips.Image.mask_ideal(r * 2, r * 2, 1,
                                   uchar=True, reject=True, optical=True)

    # and mask pixels ... we could soften the edge, but maybe a hard edge
    # actually looks nicer
    masked = mask.ifthenelse(circle_blur, circle_image)

    # and insert the masked image back into the original
    return image.insert(masked, x - r, y - r)

image = blur(sys.argv[1], int(sys.argv[3]), int(sys.argv[4]), int(sys.argv[5]))
image.write_to_file(sys.argv[2])
```

With this test image:

I see:

To make:
---
**Describe the bug**
We are redacting faces from images by obscuring them with blurring and/or pixelation.
If we do just blur or we do just pixelation, the whole export finishes within a second.
If we do both, the whole export takes several minutes.
I don't know the internals of libvips, but I'm assuming it doesn't actually perform all of the requested operations until export, so I don't believe it's the export itself that is slow.
This issue only seems to appear in a Docker image.
**To Reproduce**
Steps to reproduce the behavior:
This is what the Go code looks like:
The `GaussianBlur()` is calling `sharpen_image()` under the covers, and the `Resize()` is calling `resize_image()`.
The source images are 3840x2560, but that doesn't seem to have any bearing, as the same delay happens with much smaller images.
**Expected behavior**
For it not to take 3 minutes.
**Environment**

**Additional context**
Of course this issue doesn't exist when running locally on macOS during development. It's only when running in a docker image that the issue pops up because that's the way the universe seemingly operates.
The base image for the Dockerfile looks like:
Maybe I'm missing a package?