
Tatham upstream merges #7

Open · wants to merge 1,447 commits into master
Conversation

@snelg commented Oct 5, 2020

I've been keeping my own fork of your code up-to-date with Simon Tatham's updates over the years. Lots of little bugfixes, but he also added a new game at some point.

But at some point I also merged in a branch of some other guy's additional puzzles for the collection:
https://github.com/x-sheep/puzzles-unreleased

Since you hadn't updated in years, I didn't think to keep a completely separate branch without those additional "x-sheep" puzzles.
I did, however, keep my iOS build file changes in a separate branch. That separate branch is the one I've been using to build and then locally install on my iPhone and iPad.
So this PR should produce a clean build that only incorporates 5 years' worth of upstream merges. It does have the x-sheep files in the source, but they're not incorporated into the build.

If you're interested in those additional puzzles, I can make a different PR that includes the changes to Puzzles.xcodeproj/project.pbxproj I've been using to include them in the build.

If you're interested in the upstream merges but not the x-sheep puzzles, and you'd prefer a nice clean PR without the extra garbage, then I can make a cleaner PR for you.

If you're not interested in any of this at all, then I'll just say "thanks" for creating this iOS build in the first place :)

@mld2443 commented Nov 20, 2020

It would make my absolute day if this got merged in. I've been playing these puzzles for 12 years (😳) now; I really like Palisade and would love to try out those others!

@Svedalrain commented

@ghewgill Any chance of this happening?

@Svedalrain commented

@snelg What barriers are there to putting this out as a second app?

@snelg (Author) commented Dec 25, 2020

> @snelg What barriers are there to putting this out as a second app?

If this was Android, then I'd be fine tossing out an .apk file into the wild for people to install on their own.
Since iOS is such a walled garden, though, I'd pretty much have to go through the rigmarole of formally registering and publishing on the App Store. That's more hassle than I need in my life right now :)

@mld2443 commented May 19, 2021

> @snelg What barriers are there to putting this out as a second app?
>
> If this was Android, then I'd be fine tossing out an .apk file into the wild for people to install on their own.
> Since iOS is such a walled garden, though, I'd pretty much have to go through the rigmarole of formally registering and publishing on the App Store. That's more hassle than I need in my life right now :)

I'm interested, is it really that challenging? (other than the iOS developer license fee)

It's been 7 months; pretty sure @ghewgill has no interest in updating.

@snelg (Author) commented May 19, 2021

> I'm interested, is it really that challenging? (other than the iOS developer license fee)

Not really "challenging," just annoying :)

@ghewgill (Owner) commented

Sorry all, not really challenging, but my Mac died last year and I just got a shiny new M1 Mini. So I now have the ability to update this; I just have to find the time in between everything else.

@NLZ commented Sep 15, 2022

Not sure if you would be willing to make the effort, but I would be really happy if you could release your version in the App Store.

bjh21 and others added 21 commits April 8, 2023 20:08
Some nonogram implementations allow zero clues so that a row or column
with a single zero clue is equivalent to one with no clues; that is, it
has no black squares in it.  Pattern, however, doesn't interpret them
like this and treats a puzzle with a zero clue as insoluble, so it's
not helpful to permit them.

Permitting zero clues also confuses Pattern's memory allocation so
that it can suffer a buffer overrun.  As an example, before this
commit a build with AddressSanitizer would report a buffer overrun
with the description "1:0/0.0" because it tries to put two clues in a
row that can have a maximum of one.
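
As an illustration of the fix's shape (not the actual Pattern code; the function and the exact description grammar here are assumed), rejecting a zero clue during description validation looks something like this:

#include <ctype.h>
#include <stdlib.h>

/* Illustrative sketch only: while validating one run of clues (clues
 * separated by '.', runs separated by '/'), reject any clue of zero so
 * that a description like "1:0/0.0" is reported as invalid instead of
 * overflowing the clue arrays later on. */
static const char *validate_clue_run(const char *p)
{
    while (*p && *p != '/') {
        if (atoi(p) <= 0)
            return "Clue numbers must be at least 1";
        while (isdigit((unsigned char)*p))
            p++;
        if (*p == '.')
            p++;            /* separator between clues within this run */
    }
    return NULL;            /* this run is acceptable */
}
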
In commit 8d66475 I added the Hats grid type to Loopy, and
mentioned in the commit message that I was very pleased with the
algorithm I came up with.

In fact, I was so pleased with it that I've decided it deserves a
proper public writeup. So I've spent the Easter weekend producing one:

  https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/aperiodic-tilings/

In this commit I adjust the header comments in both penrose.c and
hat.c to refer to the article (replacing a previous comment in
penrose.c that pointed to a much less polished page containing a copy
of my jotting-grade personal notes that I sent James Harvey once).
Also, I added some code to hatgen.c to output Python hat descriptions
in a similar style to hat-test, which I used to generate a couple of
the more difficult diagrams in the new article, and didn't want to lose.
I just found these self-tests lying around in mines.c under an #ifdef
that nobody ever enables. Let's put them somewhere more sensible! We
already have a separate tool for working with the obfuscation system
in a puzzle-independent way, and it seems reasonable to put them in
there.
unfinished/separate.c had its own declaration of divvy_rectangle(),
duplicating the one in puzzles.h. Probably that was where the
declaration originally lived, before I moved it out into the main
header.
I'm going to work towards turning 'dsf' into an opaque type, so that I
can improve its implementation without breaking clients. The first
step is to deal manually with puzzles that currently reuse a single
array of ints for multiple purposes, some of which are dsf and some
are not.

In divvy_rectangle_attempt, 'tmp' was used as an int array and later a
dsf, and I've made a new 'tmpdsf' to be the latter.

In Dominosa, solver->pc_scratch2 was sometimes a dsf, and now there's
a new solver->dsf_scratch.

In Map, parse_edge_list() needed a dsf internally and then never
exported it; it expected to be passed an array of 2*w*h ints and used
the second half as a dsf. Now it expects only w*h ints, and allocates
its own dsf internally, freeing it again before returning.

And in Tents, find_errors() was allocating a single block of 2*w*h
ints and using the second half of it as a dsf, apparently just to save
one malloc. Now we malloc and free the dsf separately.
No functional change: currently, this just wraps the previous sfree
call.
Previously we were duplicating the contents of a dsf using straight-up
memcpy. Now there's a dsf_copy function wrapping the same memcpy.

For the moment, this still has to take a size parameter, because the
size isn't stored inside the dsf itself. But once we make a proper
data type, it will be.
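
For reference, a wrapper of this shape (a sketch; the real function's exact signature may differ), with the size still passed in by the caller for now:

#include <string.h>

/* Sketch: copy one dsf's contents into another of the same size. For the
 * moment the caller supplies the element count; once the dsf becomes a
 * proper data type that knows its own size, this parameter disappears. */
void dsf_copy(int *to, const int *from, int size)
{
    memcpy(to, from, size * sizeof(int));
}
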
All remaining cases where a dsf was allocated via snewn(foo, int) are
removed by this change.
The expression 'dsf[foo] >> 2' already has a sensible wrapper
function, but Filling wasn't bothering to call it.
In this commit, 'DSF' is simply a typedef for 'int', so that the new
declaration form 'DSF *' translates to the same type 'int *' that dsfs
have always had. So all we're doing here is mechanically changing type
declarations throughout the code.
This makes good on all the previous preparatory commits, which I did
separately so that each one individually has a reasonably readable
diff, and all the mechanical changes are separated out from the
rewrites that needed actual thought.

Still no functional change, however: the DSF type wraps nothing but
the same int pointer that 'DSF *' used to store directly.
This permits bounds-checking of all inputs to dsf_canonify and
dsf_merge, so that any out-of-range values will provoke assertion
failure instead of undefined behaviour.
Now that the dsf knows its own size internally, there's no need to
tell it again when one is copied or reinitialised.

This makes dsf_init much more about *re*initialising a dsf, since now
dsfs are always allocated using a function that will initialise them
anyway. So I think it deserves a rename.
print_dsf was declared in puzzles.h, but never called, and its
definition was commented out. So it probably wouldn't still have
worked anyway. The other commented-out printfs in that file don't look
very useful either, and they just mean more stuff will need messing
about with as I continue to refactor.
Not very interesting, but the idiom for checking equivalence via two
calls to dsf_canonify is cumbersome enough to be worth abbreviating.
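
For illustration (the wrapper's exact name is assumed here, not quoted from the header), the before and after look like:

#include <stdbool.h>
#include "puzzles.h"    /* DSF and the dsf_* functions */

/* Before: the long-winded idiom, two canonify calls and a comparison. */
static bool same_class_old(DSF *dsf, int i, int j)
{
    return dsf_canonify(dsf, i) == dsf_canonify(dsf, j);
}

/* After: one call to a wrapper that states the intent directly
 * (the name dsf_equivalent is assumed). */
static bool same_class_new(DSF *dsf, int i, int j)
{
    return dsf_equivalent(dsf, i, j);
}
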
This is preparing to separate out the auxiliary functionality, and
perhaps leave space for making more of it in future.

The previous name 'edsf' was too vague: the 'e' stood for 'extended',
and didn't say anything about _how_ it was extended. It's now called a
'flip dsf', since it tracks whether elements in the same class are
flipped relative to each other. More importantly, clients that are
going to use the flip tracking must say so when they allocate the dsf.

And Keen's need to track the minimal element of an equivalence class
is going to become a non-default feature, so there needs to be a new
kind of dsf that specially tracks those, and Keen will have to call it.

While I'm here, I've renamed the three dsf creation functions so that
they start with 'dsf_' like all the rest of the dsf API.
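
A rough sketch of what a client of the reorganised API might look like; the function names below are assumed from the description rather than quoted from the header:

#include <stdbool.h>
#include <stdio.h>
#include "puzzles.h"    /* function names below are assumed, not quoted */

void flip_dsf_example(void)
{
    /* A client that wants flip tracking must say so when allocating. */
    DSF *dsf = dsf_new_flip(16);

    /* Put 3 and 7 in the same class, with 7 flipped relative to 3. */
    dsf_merge_flip(dsf, 3, 7, true);

    /* Ask for 7's canonical element, and whether 7 is flipped relative
     * to that canonical element. */
    bool inverse;
    int canon = dsf_canonify_flip(dsf, 7, &inverse);
    printf("canonical element %d, flipped: %s\n",
           canon, inverse ? "yes" : "no");

    dsf_free(dsf);
}
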
This rewrite improves the core data structure implementation in two
ways. Firstly, when merging two equivalence classes, we check their
relative sizes, and choose the larger class's canonical element to be
the overall root of the new class tree. This minimises the number of
overlong paths to the root after the merge. Secondly, we defer path
compression until _after_ the two classes are merged, rather than do
it beforehand (via using edsf_canonify as a subroutine) and then have
to do it wastefully again afterwards.

The size-based root selection was what we _used_ to do, and delivers
the better asymptotic performance. I reverted it so that Keen could
track the min of each equivalence class. But since then I've realised
you can have the asymptotic goodness _and_ min-tracking if you store
the minima separately from the main data structure. So now Keen does
that, and other clients don't have to pay the cost.

Similarly, the flip tracking is now a cost that only users of flip
dsfs have to pay, because a normal one doesn't store that information
at all.
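
Not the actual dsf.c code, but a generic sketch of the two ideas just described: union by size, with path compression deferred until after the two roots have been linked. Keen's min-tracking would then live in a separate per-root array, updated at the same point the sizes are combined.

/* Generic sketch, not Puzzles' real dsf implementation. parent[] holds
 * each element's parent (roots point to themselves); size[] is
 * meaningful only at roots. */

static int find_root(int *parent, int v)
{
    while (parent[v] != v)
        v = parent[v];
    return v;
}

static void merge(int *parent, int *size, int a, int b)
{
    int ra = find_root(parent, a);
    int rb = find_root(parent, b);
    int root;

    if (ra == rb) {
        root = ra;
    } else {
        /* Union by size: hang the smaller class off the larger one, so
         * that tree depth grows only logarithmically. */
        if (size[ra] < size[rb]) {
            int tmp = ra; ra = rb; rb = tmp;
        }
        parent[rb] = ra;
        size[ra] += size[rb];
        root = ra;
    }

    /* Deferred path compression: now that the final root is known, walk
     * up from both starting points once and point everything at it. */
    while (parent[a] != root) {
        int next = parent[a];
        parent[a] = root;
        a = next;
    }
    while (parent[b] != root) {
        int next = parent[b];
        parent[b] = root;
        b = next;
    }
}
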
Easier to do this all in one go after I've finished messing about.
If you configure a Linux build of Puzzles with -fsanitize=address, it
will fail during the icons build, because the icon-maker programs leak
memory when they're run, and by default, this causes ASan to report
all the memory leaks at the end of the program *and then exit 1*.

I don't think 'just fix the memory leaks' is a viable answer. _Some_
of the leaks come from the Puzzles code, and could be fixed by being
more punctilious about freeing everything before exiting (although
that is not necessary for any actually sensible purpose and would
_only_ serve to stop Leak Sanitiser complaining). But some are outside
the Puzzles code completely, apparently from fontconfig. So even if we
fixed all the leaks we could find, we wouldn't prevent the failure
status.

When I want to build with ASan, I've been working around this by
setting ASAN_OPTIONS=detect_leaks=0 in the environment before running
the build command. Easier all round if we just do that _inside_ the
cmake setup.
sgtatham and others added 30 commits March 1, 2024 12:31
I updated Emscripten recently, to version 3.1.54. The result was that
none of the WASM puzzles run any more, because they produce a stack
trace at startup with the error message "to do setValue(i64) use
WASM_BIGINT".

I don't fully understand this. The stack trace comes via a JS wrapper
around a WASM-compiled function called __main_argc_argv, which sounds
like something in the Emscripten library startup. Somewhere in there
it goes via _js_get_date_64(), which tries to write an i64 into the
WASM memory array, which hits this abort in setValue().

Web searching for the error message turned up
godotengine/godot#88594 which gave me the clue
that '-s WASM_BIGINT' is a command-line setting for the Emscripten
linker. And indeed, setting that makes the startup-time error go away
and the puzzles run again. But it also causes older versions of all
browsers to be unsupported, presumably on the grounds that they don't
have whatever WASM bigint feature this flag causes the code to use.

I don't really understand what's going on here. I assume
_js_get_date_64 is being called to handle a 64-bit time_t. But the
Puzzles code doesn't work with time_t at all, so this is entirely in
the Emscripten standard library. And if the pre-main() startup code
needs a 64-bit integer write, which won't work without this flag, then
surely _nothing_ would work without this flag, and surely someone
would have noticed that, and made that flag the default? This all
seems confusing and I wonder if I've misunderstood something.

However, if I don't fix it in _some_ way, the web puzzles will stay
out of action for days and I'll get lots of email complaints. So
here's something that makes them basically work again, and maybe we
can figure out the rest of the story later.
That's what the comment said I'd left it at, but in some kind of
editing goof, I failed to actually do so.
The foosolver.exe binaries aren't delivered out of the end of
Buildscr, so there's no point wasting time on signing them. Signing is
slow in wall-clock time (you have to wait for a timestamp server), so
this should significantly reduce overall build time.
When a drag started outside the grid (or no drag has started yet),
ensure the drag state in game_ui says so and bail out accordingly.

Previously, such a drag would manipulate the tile the last valid drag
started from, if any, else segfault.

Also, allow drags that start on-grid and then go off-grid to continue
rotating.
There are many places in the code which use the (x & ~MOD_MASK) idiom. This
new macro makes it possible to refactor them in the future.
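
The macro is along these lines (the name used here is an assumption, not quoted from puzzles.h):

/* Assumed name and definition, shown for illustration. */
#define STRIP_BUTTON_MODIFIERS(button) ((button) & ~MOD_MASK)

/* so that a call site which used to read
 *     if ((button & ~MOD_MASK) == CURSOR_UP) ...
 * can become
 *     if (STRIP_BUTTON_MODIFIERS(button) == CURSOR_UP) ...
 */
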
Previously, a button code of ('\t' | MOD_SHFT) from a frontend would have
the MOD_SHFT flag stripped by midend_process_key, resulting in a bare '\t'
passed to the backend. Now, this combination is allowed to pass through to
the backend. This will be used in the keyboard interface to Untangle, which
I'm about to add.
According to
https://mail.gnome.org/archives/gtk-list/1999-August/msg00145.html
pressing Shift+Tab generates a special keyval of ISO_Left_Tab, without
a GDK_SHIFT_MASK applied. We now handle this case so that the midend
receives ('\t' | MOD_SHFT) as intended. This will be used by the
upcoming Untangle keyboard interface.
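
Roughly what the handling amounts to in the GTK front end's key handler (a sketch, not the exact patch):

#include <gtk/gtk.h>
#include "puzzles.h"    /* MOD_SHFT */

/* Sketch: translate the Tab variants of a GDK key event into the button
 * code passed to the midend. GDK reports Shift+Tab as the dedicated
 * keysym ISO_Left_Tab with no GDK_SHIFT_MASK set, so map it back to
 * '\t' | MOD_SHFT explicitly. Returns -1 for keys this doesn't handle. */
static int translate_tab(const GdkEventKey *event)
{
    if (event->keyval == GDK_KEY_ISO_Left_Tab)
        return '\t' | MOD_SHFT;
    if (event->keyval == GDK_KEY_Tab || event->keyval == GDK_KEY_KP_Tab)
        return (event->state & GDK_SHIFT_MASK) ? ('\t' | MOD_SHFT) : '\t';
    return -1;
}
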
The cursor keys navigate amongst the points. CURSOR_SELECT toggles dragging;
CURSOR_SELECT2 and the Tab key cycle through the points.

The cursor navigation scheme jumps to the nearest point within the quadrant
of the cursor direction; this seems to yield fairly intuitive gameplay.
Unfortunately, the "quadrant-nearest-neighbors" digraph produced by this
scheme is not necessarily fully reciprocal; that is, pressing opposite
cursor keys in sequence does not always return to the original point. There
doesn't seem to be any immediately obvious way around this.

As for connectivity (i.e. whether all points are reachable from any given
point), I could not find a counterexample, but I don't yet have a formal
proof that this is the case in general. Hence, I've added the ability to
cycle through all the points with Tab. (This will probably also be useful
in conjunction with the "Numbers" point drawing preference.)
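
A sketch of the quadrant-nearest selection described above (illustrative only; the point storage and names don't match Untangle's real data structures):

#include <stdlib.h>

/* Illustrative sketch of "jump to the nearest point within the quadrant
 * of the cursor direction". (dx,dy) is the unit direction of the cursor
 * key pressed; a candidate lies in that quadrant if its displacement's
 * dominant axis agrees with that direction. Returns -1 if no point lies
 * in the quadrant. Names and storage are hypothetical, not Untangle's. */
static int nearest_in_quadrant(const long *xs, const long *ys, int npoints,
                               int cur, int dx, int dy)
{
    int i, best = -1;
    long bestdist = 0;

    for (i = 0; i < npoints; i++) {
        long ex, ey, dist;
        if (i == cur)
            continue;
        ex = xs[i] - xs[cur];
        ey = ys[i] - ys[cur];
        /* Quadrant test: the displacement must point the same way as the
         * cursor direction, and that axis must be the dominant one. */
        if (dx && !(ex * dx > 0 && labs(ex) >= labs(ey)))
            continue;
        if (dy && !(ey * dy > 0 && labs(ey) >= labs(ex)))
            continue;
        dist = ex * ex + ey * ey;
        if (best == -1 || dist < bestdist) {
            best = i;
            bestdist = dist;
        }
    }
    return best;
}
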
This refactors all instances of bitwise-ANDs with `~MOD_MASK'. There is
a handful of more complex instances I left unchanged (in cube.c, midend.c,
and twiddle.c), since those AND with `~MOD_MASK | MOD_NUM_KEYPAD' or
similar. I don't think it's worth writing a macro for those cases.

Also document this new macro's usage in devel.but.
…ions.

C99 reserves the mem* namespace for future expansion. Some Rockbox
targets had issues with memswap() conflicting with another definition,
so fix that.
…uilds.

This was breaking Rockbox builds due to the lack of vprintf().
This is useful on smaller screens, where the default-size pencil marks may
be too small to be visible by default.
This adds a "half-grid" cursor mode, settable via a preference, which
doubles the resolution of the keyboard cursor, so that it can be over a
square center, vertex, or edge. The cursor select buttons then toggle the
edge directly under the cursor.

There are two advantages to this new behavior. First, the game can now be
played with only the cursor keys, doing away with the need to hold Control
or Shift, which are not currently emulated on Rockbox. And second, the new
interface is arguably more discoverable than the legacy mode,
which is still retained as a user preference.
Last November in commit 08365fb I added a VERSIONINFO resource
to the Windows puzzle binaries, with three of the four integer
components of the binary version number taken from the year, month and
day of the build date. The header file #define of those integers was
made by a Perl one-liner which just split up $(!builddate) into the
right groups of digits.

But it didn't trim leading zeroes. So the build failed today because
the month component of the version number was '08', which isn't a
valid C integer literal (the leading 0 means octal, but 8 isn't an
octal digit), and presumably therefore not valid according to llvm-rc
either. I have to assume that the previous months have all worked
because 01, ..., 07 _are_ valid octal integer literals and still mean
the right things.

I'm not 100% satisfied with this explanation, because surely the same
argument applied to the day field should have meant my builds failed
on the 8th and 9th of every month since I added this code last
November! But I don't have any evidence left over to show why it
_didn't_ fail. Perhaps I've upgraded llvm-rc past a relevant bug fix
in the last month, or something.
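
The C rule at fault, in miniature (macro names here are just for illustration):

/* A leading 0 makes an integer literal octal, and 8 is not an octal
 * digit, so the first line fails to compile; 01 through 07 happen to
 * have the same values as 1 through 7, which is why earlier months
 * slipped through. Macro names are illustrative. */
#define VER_MONTH 08   /* error: invalid digit "8" in octal constant */
#define VER_DAY   07   /* legal octal, equal to decimal 7 by luck    */
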
Both Inertia and Twiddle previously included static implementations of this
exact same function, which was passed to `qsort()` as a comparator. With
this change, a single global implementation is provided in misc.c, which
will hopefully reduce code duplication going forward.

I'm refactoring this in preparation for the upcoming fallback polygon fill
function, which I'm about to add.
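
The shared helper is the standard qsort() comparator for ints, along these lines (its actual name in misc.c may differ):

#include <stdlib.h>

/* Standard qsort() comparator for ints: negative/zero/positive according
 * to the ordering of the two elements. Written with comparisons rather
 * than subtraction so it cannot overflow. */
static int compare_ints(const void *av, const void *bv)
{
    int a = *(const int *)av, b = *(const int *)bv;
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}

/* Usage: qsort(array, n, sizeof(int), compare_ints); */
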
Previously, this function's documentation just stated that `coords`
contained the X and Y coordinates of the polygon's vertices, without
specifying in what order they were listed. Thus, it would have been
reasonable to (wrongly) assume that all the X coordinates were listed
in the first half of the array, followed by all the Y coordinates.

Now it's clear that each point's X and Y coordinates are stored
directly next to each other, and in that order.
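
For example, a triangle with vertices (10,10), (50,10) and (30,40) is passed like this (the sketched draw_polygon() call's other arguments are indicative only):

/* Each point's X and Y sit next to each other, X first, rather than all
 * the Xs followed by all the Ys. */
static const int coords[6] = {
    10, 10,     /* first vertex  */
    50, 10,     /* second vertex */
    30, 40,     /* third vertex  */
};
/* draw_polygon(dr, coords, 3, fillcolour, outlinecolour); */
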
This changes the drawing API so that implementations receive a
`drawing *` pointer with each call, instead of a `void *` pointer as
they did previously. The `void *` context pointer has been moved to be
a member of the `drawing` structure (which has been made public), from
which it can be retrieved via the new `GET_HANDLE_AS_TYPE()` macro. To
signal this breaking change to downstream front end authors, I've
added a version number to the `drawing_api` struct, which will
hopefully force them to notice.

The motivation for this change is the upcoming introduction of a
draw_polygon_fallback() function, which will use a series of calls to
draw_line() to perform software polygon rasterization on platforms
without a native polygon fill primitive. This function is fairly
large, so I desired that it not be included in the binary
distribution, except on platforms which require it (e.g. my Rockbox
port). One way to achieve this is via link-time optimization (LTO,
a.k.a. "interprocedural optimization"/IPO), so that the code is
unconditionally compiled (preventing bit-rot) but only included in the
linked executable if it is actually referenced from elsewhere.
Practically, this precludes the otherwise straightforward route of
including a run-time check of the `draw_polygon` pointer in the
drawing.c middleware. Instead, Simon recommended that a front end be
able to set its `draw_polygon` field to point to
draw_polygon_fallback(). However, the old drawing API's semantics of
passing a `void *` pointer prevented this from working in practice,
since draw_polygon_fallback(), implemented in middleware, would not be
able to perform any drawing operations without a `drawing *` pointer;
with the new API, this restriction is removed, clearing the way for
that function's introduction.

This is a breaking change for front ends, which must update their
implementations of the drawing API to conform. The migration process
is fairly straightforward: every drawing API function which previously
took a `void *` context pointer should be updated to take a `drawing *`
pointer in its place. Then, where each such function would have
previously casted the `void *` pointer to a meaningful type, they now
instead retrieve the context pointer from the `handle` field of the
`drawing` structure. To make this transition easier, the
`GET_HANDLE_AS_TYPE()` macro is introduced to wrap the context pointer
retrieval (see below for usage).

As an example, an old drawing API function implementation would have
looked like this:

void frontend_draw_func(void *handle, ...)
{
    frontend *fe = (frontend *)handle;
    /* do stuff with fe */
}

After this change, that function would be rewritten as:

void frontend_draw_func(drawing *dr, ...)
{
    frontend *fe = GET_HANDLE_AS_TYPE(dr, frontend);
    /* do stuff with fe */
}

I have already made these changes to all the in-tree front ends, but
out-of-tree front ends will need to follow the procedure outlined
above.

Simon pointed out that changing the drawing API function pointer
signatures to take `drawing *` instead of `void *` results only in a
compiler warning, not an outright error. Thus, I've introduced a
version field to the beginning of the `drawing_api` struct, which will
cause a compilation error and hopefully force front ends to notice
this. This field should be set to 1 for now. Going forward, it will
provide a clear means of communicating future breaking API changes.
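
For orientation only, the two pieces described above might look roughly like this (assumed shapes; consult puzzles.h for the real declarations):

/* Assumed shapes, for orientation only. */
struct drawing_api {
    int version;    /* set to 1 for this revision of the drawing API */
    void (*draw_text)(drawing *dr, int x, int y, int fonttype, int fontsize,
                      int align, int colour, const char *text);
    /* ...the rest of the drawing function pointers, each now taking a
     * drawing * as its first argument... */
};

/* Retrieve the front end's context pointer stored in the drawing object. */
#define GET_HANDLE_AS_TYPE(dr, type) ((type *)((dr)->handle))
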
I'm guessing that front end authors might look here to investigate a
build failure due to the recent changes to the drawing API.

(This unfortunately needs to be a separate commit from its parent --
otherwise it would need to reference its own hash!)
This adds a portable, scanline-based polygon filling algorithm, which
fills a polygon by drawing a collection of adjacent horizontal lines.

This change is motivated by the Rockbox port's current lack of a true
polygon fill capability. Until now, it attempted to approximate a
polygon fill by performing a series of triangle fills, but this worked
reliably only for convex polygons. I originally considered making this
new rasterizer part of the Rockbox front end itself, but I ultimately
decided that it made more sense to include it here, in the Puzzles
distribution, where other platforms may benefit from it in the future.

No in-tree front ends use this new function quite yet, but I plan to
follow this commit with a compile-time option to force front ends to
use it for testing.
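
In outline, the approach looks something like this (a simplified even-odd sketch built on draw_line() and the snewn()/sfree() helpers, not the committed implementation):

#include <stdlib.h>
#include "puzzles.h"

/* Simplified sketch of a scanline polygon fill: for each row spanned by
 * the polygon, collect the X positions where polygon edges cross that
 * row, sort them, and fill between alternate pairs with horizontal
 * lines. This is an even-odd fill built purely on draw_line(), so it
 * needs no native polygon primitive. */
static int cmp_int(const void *av, const void *bv)
{
    int a = *(const int *)av, b = *(const int *)bv;
    return (a > b) - (a < b);
}

static void scanline_fill(drawing *dr, const int *coords, int npoints,
                          int colour)
{
    int i, y, ymin = coords[1], ymax = coords[1];
    int *xs = snewn(npoints, int);

    for (i = 1; i < npoints; i++) {
        if (coords[2*i+1] < ymin) ymin = coords[2*i+1];
        if (coords[2*i+1] > ymax) ymax = coords[2*i+1];
    }

    for (y = ymin; y <= ymax; y++) {
        int nxs = 0;
        for (i = 0; i < npoints; i++) {
            int j = (i + 1) % npoints;
            int x0 = coords[2*i], y0 = coords[2*i+1];
            int x1 = coords[2*j], y1 = coords[2*j+1];
            /* Count each edge at its lower endpoint but not its upper
             * one, so vertices shared by two edges aren't counted twice. */
            if ((y0 <= y && y1 > y) || (y1 <= y && y0 > y))
                xs[nxs++] = x0 + (int)((long)(y - y0) * (x1 - x0) / (y1 - y0));
        }
        qsort(xs, nxs, sizeof(int), cmp_int);
        for (i = 0; i + 1 < nxs; i += 2)
            draw_line(dr, xs[i], y, xs[i+1], y, colour);
    }

    sfree(xs);
}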

This new polygon drawing code also comes with its own standalone
driver code to test it out in isolation. This code currently relies on
SDL 2.0 to present a GUI window to the user, which unfortunately adds
a build-time dependency. To lessen the impact of this change, this
program is gated behind a CMake build option. To use it, run:

$ cmake -DBUILD_SDL_PROGRAMS=true
This new option, when enabled, forces the in-tree front ends
(Emscripten, GTK, NestedVM, OS X, and Windows) to use the recently
introduced draw_polygon_fallback() in place of their native
draw_poly(). This will enable easy testing of this function in the
future.

This new option is off by default. To enable it, run CMake as:

$ cmake -DUSE_DRAW_POLYGON_FALLBACK=on

Note that I did _not_ update the Postscript frontend (ps.c) to use
this new option, as I don't think draw_polygon_fallback() would work
at all in Postscript, where the drawing units are no longer guaranteed
to be pixels. The envisioned use case for this option is a developer
testing changes to this function for sanity and/or performance, which
I only foresee happening on a standard GUI front end.
I had tried to make this comment refer to the commit in which the
change in question (the new drawing API semantics) was introduced, but
something about Simon applying the patch caused its hash to change.

Now that the commit in question is in Simon's branch, it should remain
constant forever.
The lack of this pair of parens triggered a compiler warning, which
turned into an error at -Werror. Oops - I thought I'd folded that fix
in before pushing Franklin's series.
The text format includes newline characters that weren't being
included in the buffer length calculation.  Fix the calculation and
assert before returning that the string offset matches the calculated
length.
This fixes a build failure on Ubuntu 24.04, whose javac won't go back
as far as 1.7 any more. Bump to 1.8.
This week I expanded that comment into a blog post:

  https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/findloop/

which improves on the comment in three ways:

  1. diagrams

  2. adds a further reason why the footpath-dsf algorithm was
     unsatisfactory, pointed out by a Mastodon comment after I
     published the original version of the blog post

  3. adds the punchline that the loop tracing approach _could_ have
     been made to work after all!

So I've deleted the comment and replaced it with a link to the article.