Vello anti-aliased rendering doesn't seem fine enough; how can it be improved? #592
I believe there are two things going on here. It's part of our longer-term roadmap to address them and improve quality, but in the short term I think we'll stick with the basic spec for antialiasing, which is essentially to do the AA calculations in sRGB colorspace, and do compositing after antialiased path rendering.

The first of the issues is the choice of colorspace for doing the antialiasing. The most defensible from a physical rendering perspective is a linear colorspace. For a simple black-on-white vector shape, that's approximately equivalent to applying a gamma curve of 2.2, which I've done below.

Earlier versions of piet-gpu in fact did this, doing all alpha compositing in a linear sRGB space, then converting to device sRGB at the end. There are two problems with this. The first is that you get alpha compositing results that don't match expected results. In particular, if you composite a black and a white layer at 50% alpha, you get 0.5 linear light intensity, which is 0.735 in device sRGB. This particular issue can be addressed by effectively doing the compositing at a higher resolution (using alpha rules appropriate for the document), then downsampling in a linear color space. That's more computationally intensive. The second is that doing compositing in the "correct" space does not always look nicer. In particular, looking at the visual examples of MPVG, it is clear that much of the black-on-white text looks anemic and spindly. I believe that a good solution to this problem will involve "stem thickening" to counteract this tendency. And in fact, that is one of the motivations of the recent stroke expansion work, though we have not yet enabled thickening of filled shapes in the pipeline.

A second and related problem is the choice of box filter for reconstruction. According to the theory of Mitchell and Netravali, with follow-up work by Nehab and Hoppe, there is no single ideal sampling filter, only a tradeoff space, with blurriness, ringing, and aliasing as the three corners of a triangle. The box filter is commonly used because it is computationally efficient, but it also represents an appealing point in this tradeoff space for most 2D vector graphics, including most font rendering. The exception is very thin lines, where aliasing is visible as a stair-stepped appearance. In the limit of a very thin line, a box sampling filter is equivalent to a non-antialiased line of single-pixel width (but with a much lower alpha opacity to compensate for the effective fraction of the pixel covered). One approach is to use a sampling filter that moves a little toward blurriness and away from aliasing in this tradeoff space - a tent filter is a good choice. That's also more computationally intensive, and leads to quality degradation for vector content that isn't thin strokes.

My sense of the best way to reconcile all this is to do some preprocessing of the scene, increasing stroke width for very thin strokes while also decreasing the alpha to avoid changing the perceived darkness (probably not all the way to preserving total intensity). And indeed, a single-pixel-wide line sampled with a box filter renders to a similar result as a thin line sampled with a tent filter. Very likely, some combination of increasing stroke width for strokes and applying stem thickening to fills will yield the best perceived quality and minimize discrepancies between the two primitives.
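As a rough illustration of the colorspace point above (this is not Vello pipeline code; the helper functions are just standard sRGB transfer-curve math), here is a minimal sketch of why compositing 50% black over white in linear light encodes to roughly 0.735 in sRGB, rather than the 0.5 you get when compositing directly in device sRGB:

```rust
/// Decode an sRGB-encoded channel value (0..=1) to linear light.
fn srgb_to_linear(c: f32) -> f32 {
    if c <= 0.04045 {
        c / 12.92
    } else {
        ((c + 0.055) / 1.055).powf(2.4)
    }
}

/// Encode a linear-light channel value (0..=1) back to sRGB.
fn linear_to_srgb(c: f32) -> f32 {
    if c <= 0.003_130_8 {
        c * 12.92
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    }
}

/// Source-over compositing of a single channel: `src` over `dst` at `alpha`.
fn over(src: f32, dst: f32, alpha: f32) -> f32 {
    src * alpha + dst * (1.0 - alpha)
}

fn main() {
    let (black, white, alpha) = (0.0_f32, 1.0_f32, 0.5_f32);

    // Compositing directly in device sRGB gives the "expected" mid gray.
    let in_srgb = over(black, white, alpha);

    // Compositing in linear light, then encoding, gives a noticeably lighter value.
    let in_linear = over(srgb_to_linear(black), srgb_to_linear(white), alpha);
    let encoded = linear_to_srgb(in_linear);

    println!("composited in sRGB:   {in_srgb:.3}"); // 0.500
    println!("composited in linear: {encoded:.3}"); // ~0.735
}
```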
I have ideas on how to do compositing free of conflation artifacts, which would fully address being able to do alpha compositing of document colors and antialiasing in different color spaces. I should write that up as an issue or a design document. My current thinking is that it makes sense to take this on after sparse strip path rendering, as it would potentially be a lot more efficient to render the path once and handle the compositing sparsely. Sorry if this is not immediately useful for just trying to get good quality out of the renderer as it exists today, but hopefully it helps explain what's going on, and seeing these examples does help motivate the more sophisticated rendering ideas. And maybe there are some things to try, in particular using thicker strokes at partial alpha.
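To make the "thicker strokes at partial alpha" suggestion concrete, here is a hypothetical preprocessing sketch (not part of Vello's API; `Stroke`, `widen_thin_stroke`, `min_width`, and `exponent` are illustrative names) that clamps sub-pixel stroke widths up while scaling alpha down so the perceived darkness stays roughly constant:

```rust
/// Illustrative stroke parameters; a real scene would carry paths, paints, etc.
struct Stroke {
    width: f32, // stroke width in device pixels
    alpha: f32, // stroke opacity in 0..=1
}

/// Widen strokes thinner than `min_width` pixels, compensating with lower alpha.
fn widen_thin_stroke(stroke: &mut Stroke, min_width: f32, exponent: f32) {
    if stroke.width < min_width {
        let coverage_ratio = stroke.width / min_width;
        stroke.width = min_width;
        // exponent = 1.0 would preserve total intensity exactly; values a bit
        // below 1.0 leave the stroke slightly darker, i.e. mild stem thickening.
        stroke.alpha *= coverage_ratio.powf(exponent);
    }
}

fn main() {
    let mut hairline = Stroke { width: 0.4, alpha: 1.0 };
    widen_thin_stroke(&mut hairline, 1.0, 0.8);
    println!("width = {:.2}, alpha = {:.2}", hairline.width, hairline.alpha);
}
```

This mirrors what the commenters below report doing by hand: replacing a sub-pixel border with a full-pixel one at partial alpha.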
Thanks for the detailed explanation!
Makes sense.
This is definitely actionable.
Setting a partial alpha on the border color does indeed visually improve the quality. Thanks!
Please click on each of the following three screenshots to compare: relative to Chrome and Femtovg, the Vello rendering shows obvious small jagged edges on the beard.
PS: Femtovg's rendering also had small jaggies, similar to Vello, but after changing the dpi_factor to 1.0, it no longer seems inferior to Chrome: https://github.com/femtovg/femtovg/blob/4e40a61f824a8ea1bd361b4d227a2187e912124a/examples/svg.rs#L145