warping, blending, scaling, passive 3D
Image Geometry Correction (Image Warping) is the process of digitally manipulating image data so that the image's projection precisely maps to a specific projection surface or shape.  Image Geometry Correction compensates for the spatial distortion created by off-axis projector placement or a non-flat screen surface by applying a pre-compensating inverse distortion to the image in the digital domain.  This technology can also be used to create special-effect distortions, although it is more commonly applied to correct for a symmetrical screen shape.
Image Geometry Correction is implemented either by graphics processing or by video signal processing. Graphics processing techniques harness the host computer's graphics IC or sub-system to perform Geometry Correction using the Texture Mapping hardware developed for computer games.  Image Geometry Correction performed with this technology is inexpensive to implement, but is limited to images that exist within the computer.  Video-signal-based Image Geometry Correction applies real-time digital filtering to an incoming video signal, enabling a broadcast or live signal to be geometry corrected without any computer preparation, delay or degradation.  Both techniques involve real-time execution of a spatial transformation from the input image to the output image. In both cases the spatial transformation must be pre-defined for the particular desired geometry, and may be calculated by several different methods, including automated camera-feedback methods.
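The spatial transformation described above is commonly evaluated by inverse mapping: for each output pixel, look up where in the source image it came from. The following is a minimal sketch of that idea in Python with NumPy; the function name `warp_image`, the nearest-neighbour sampling, and the 2-pixel shift used as a stand-in "distortion" are all illustrative assumptions, not part of any FPS product.

```python
import numpy as np

def warp_image(src, inverse_map):
    """Apply a pre-defined spatial transformation by inverse mapping:
    for each output pixel (x, y), sample the source at inverse_map(x, y)."""
    h, w = src.shape[:2]
    dst = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            sx, sy = inverse_map(x, y)
            # Nearest-neighbour sampling for brevity; real-time warping
            # hardware uses multi-tap filtering to avoid aliasing.
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                dst[y, x] = src[sy, sx]
    return dst

# Illustrative "distortion": pre-compensate a 2-pixel horizontal offset.
src = np.arange(16, dtype=np.uint8).reshape(4, 4)
shifted = warp_image(src, lambda x, y: (x - 2, y))
```

Real processors replace the per-pixel Python loop with dedicated filtering hardware, but the structure (output-driven lookup through a pre-defined map) is the same.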
FPS' Image AnyPlace and Image AnyPlace-200 are stand-alone Video Signal Processors.  Signal Processing based Geometry Correction is the most flexible form of this technology, enabling the correction of images that originate from ANY graphics controller platform.  
The simplest application of Image Geometry Correction is Keystone Distortion Correction, which lets users adjust the image both vertically and horizontally, although the keystone correction built into most projectors currently on the market offers only a limited range of adjustment.  For more complex distortion correction onto regular surfaces (e.g., spheres and cylinders) and irregular surfaces (e.g., architectural pillars), an external processor such as the Image AnyPlace-200 is required.  
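Keystone distortion from an off-axis projector can be modeled as a planar homography, so the pre-compensating inverse distortion is itself a homography. The sketch below shows the standard homogeneous-coordinate mapping; the specific matrix value (0.002) is an arbitrary illustrative choice, not a real calibration result.

```python
import numpy as np

def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative inverse homography for a vertical keystone: the
# perspective term H[2,1] makes the scale depend on y, producing
# the trapezoidal pre-distortion that cancels off-axis projection.
H_inv = np.array([
    [1.0, 0.0,   0.0],
    [0.0, 1.0,   0.0],
    [0.0, 0.002, 1.0],
])

# A pixel on the top row (y = 0) is unmoved; one near the bottom
# is scaled toward the origin, pre-squeezing that part of the image.
x0, y0 = apply_homography(H_inv, 100.0, 0.0)
x1, y1 = apply_homography(H_inv, 100.0, 500.0)
```

A homography handles flat-screen keystone; the curved and irregular surfaces mentioned above need more general per-pixel maps, which is where an external processor comes in.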

Edge Blending is a technology that enables the outputs of multiple projectors to be combined seamlessly.  Two or more projectors are used to illuminate a larger projection surface than can be effectively illuminated by a single projector.  The resulting total projection may have a higher resolution and a different aspect ratio than any component projector.  The area where the projectors' outputs overlap (called the Blend Region) requires special treatment to ensure that it is invisible.  Edge Blending is implemented by performing a real-time proportional multiplication on the pixels in the Blend Region using video signal processing technology (which may be present in the projector, the source, or a separate processor).    
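The "proportional multiplication" in the Blend Region can be sketched as a per-pixel attenuation ramp. The example below assumes a linear falloff in light output and a display gamma of 2.2; both numbers and the function name `blend_ramp` are illustrative assumptions, not a description of any particular product's blend curve.

```python
import numpy as np

def blend_ramp(width, blend, gamma=2.2):
    """Per-pixel attenuation for the right edge of one projector's output.
    Pixels in the blend region are multiplied by a ramp falling from 1 to 0,
    so two overlapped, mirror-image ramps sum to full brightness.  The
    1/gamma exponent pre-compensates for the fact that projectors add
    light linearly while the signal is gamma-encoded."""
    ramp = np.ones(width)
    t = np.linspace(1.0, 0.0, blend)           # linear falloff in light output
    ramp[width - blend:] = t ** (1.0 / gamma)  # encode into signal domain
    return ramp

left = blend_ramp(10, 4)          # right edge of the left projector
right = blend_ramp(10, 4)[::-1]   # mirrored ramp: left edge of the right projector

# In the 4-pixel overlap, the two projectors' *light* contributions
# (signal raised back to gamma) should sum to full brightness:
overlap = left[-4:] ** 2.2 + right[:4] ** 2.2
```

In practice the ramp shape and gamma are tuned per installation, which is one reason the blend must live in a processor that also knows the screen geometry.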
Since a successful blend depends on the ability to map the blended image perfectly to the screen, Edge Blending is dependent on proper Image Geometry Correction.   Edge Blending located in either the image source or the projector requires perfect projector positioning and a perfectly positioned, flat screen.  (Also, on any curved screen it is much more effective to perform Edge Blending BEFORE Image Geometry Correction, rendering projector-based Edge Blending unusable.)   Combining Edge Blending in the same Video Signal Processor that performs Image Geometry Correction enables much greater flexibility in achieving invisible Edge Blends.

Flexible Picture Systems Diagram