We had exception specs on the C++ interface, but C++11 deprecates them and
some compilers have never honoured them. Remove all the specs.
Thanks Lovell.
See https://github.com/jcupitt/libvips/issues/362
we had both a class method bandjoin and an instance method ibandjoin:
Vips.Image.bandjoin([i1, i2, i3..])
i1.ibandjoin([i2, i3..])
this was confusing and annoying ... get rid of the class one and just
use bandjoin everywhere, so this is now the way to do it:
i1.bandjoin([i2, i3..])
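for example (a minimal sketch, shown with the modern pyvips binding rather
than the gi overrides of the time; black() just makes placeholder images):

    import pyvips

    i1 = pyvips.Image.black(16, 16)
    i2 = pyvips.Image.black(16, 16)
    i3 = pyvips.Image.black(16, 16)

    # bandjoin is now always an instance method: i1 supplies the first
    # band, the images in the list follow
    out = i1.bandjoin([i2, i3])
    print(out.bands)  # 3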
I remember @benvanik, a friend and colleague who worked on our DZI renderer (Seadragon), telling us that it's better for GPUs to have power-of-two sized tiles, so 254px + 2 × 1px of tile overlap gives you 256 pixels.
> To keep my tiles nicely sized, I shrink in instead of expand out – that means my tiles are really 254×254 with a 1px border, so the images are 256x256px with 254×254 of useful imagery. If you wanted larger tiles, you’d go 510×510 with 1px border making 512×512 tiles.
> — Source: http://www.noxa.org/blog/2009/11/29/megatextures-in-webgl-2/
There was a mixup with the previous fix to dzsave overlap handling,
correct it and update the test suite.
In the previous revision, dzsave overlapped tiles by overlap but still sized
them at tile_size. In fact, tiles should be sized as (tile_size + overlap *
2), i.e. tile_size is the number of unique pixels per tile.
See https://github.com/jcupitt/libvips/issues/357
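so, for example (a sketch with the pyvips binding; the filename is
illustrative), tile_size 254 and overlap 1 means 254 unique pixels plus a
1px border on each side, i.e. 256×256 tile images:

    import pyvips

    im = pyvips.Image.black(2048, 2048)

    # tile_size counts unique pixels, so each tile file is written as
    # tile_size + 2 * overlap pixels across: 254 + 2 * 1 = 256
    im.dzsave("out", tile_size=254, overlap=1)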
oh argh class and instance methods are in the same namespace, so we have
to rename the instance one as ibandjoin
also, start adding a test for arrayjoin
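the test will be something along these lines (a hedged sketch, again with
pyvips; the real test lives in the test suite):

    import pyvips

    tiles = [pyvips.Image.black(8, 8) for _ in range(4)]

    # arrayjoin is a class method: lay the four tiles out in a 2x2 grid
    grid = pyvips.Image.arrayjoin(tiles, across=2)
    assert grid.width == 16 and grid.height == 16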
we now have VIPS_MAX_COORD for the maximum image dimension, set to 10
million pixels ... we could go up to 2 billion (the signed 32-bit limit),
but 10 million seems a reasonable max, at least for now
see https://github.com/jcupitt/libvips/issues/355
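in practice the limit just means oversized dimensions are rejected up
front, roughly like this (a sketch, assuming the dimension check is
reachable from the pyvips binding):

    import pyvips

    # either axis beyond VIPS_MAX_COORD (10 million) should fail cleanly
    try:
        pyvips.Image.black(20000000, 1)
    except pyvips.Error as err:
        print("rejected:", err)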