ScummVM RGB color progress blog

July 6, 2009

Updated API reference

Filed under: Uncategorized — Upthorn @ 4:31 am

Note: the following is copied from the source of the wiki reference article I just wrote for it.


Introduction

This page is a reference for developers implementing the Truecolor API in their engine and backend modules. It provides a complete specification of API requirements and suggestions, as well as a protocol for engines and backends to follow during setup.

NOTE: This API was designed with backwards compatibility in mind for engines that use 8-bit graphics only. If your engine only uses 256-color graphics, you should not have to change anything, as long as the engine’s ENABLE_RGB_COLOR setting matches the backend’s during compilation, so that functions link properly.

Truecolor API specifications

Engine specifications

  • Engines capable of some, but not all RGB formats must use Graphics::findCompatibleFormat with OSystem::getSupportedFormats() as the first parameter, and the engine’s supported format list as the second parameter.
  • Lists of formats supported by the engine must be in descending order of preference. That is, the first value is the most wanted, and the last value is the least wanted.
  • Engines capable of any RGB format must use the first item in the backend’s getSupportedFormats() list to generate graphics.
  • Engines with static graphical resources should use the backend’s preferred format, and convert resources upon loading, but are not required to.
  • Engines which do not require the backend to handle alpha should not use XRGB1555 or XBGR1555 modes, but may do so.
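The matching rule described above can be sketched in a self-contained way. Plain integers stand in for Graphics::PixelFormat values here; the real Graphics::findCompatibleFormat compares actual PixelFormat objects, but the selection logic is the same.

```cpp
#include <list>

// Plain integers stand in for Graphics::PixelFormat values in this sketch.
typedef int Format;
static const Format kCLUT8 = 0, kRGB565 = 1, kRGB555 = 2, kRGBA4444 = 3;

// Walk the backend's list (already in descending order of preference) and
// return the first entry the engine also supports; fall back to CLUT8 when
// nothing matches.
Format findCompatibleFormat(const std::list<Format> &backend,
                            const std::list<Format> &frontend) {
	std::list<Format>::const_iterator i, j;
	for (i = backend.begin(); i != backend.end(); ++i)
		for (j = frontend.begin(); j != frontend.end(); ++j)
			if (*i == *j)
				return *i;
	return kCLUT8;
}
```

Because the backend list is ordered by backend preference, the first hit is automatically the best format both sides can agree on.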

Backend specifications

  • When no format has been requested, or no game is running, the return value of getScreenFormat must equal that of Graphics::PixelFormat::createFormatCLUT8().
  • If a requested format can not be set up, the backend must revert to 256 color mode (that is, Graphics::PixelFormat::createFormatCLUT8()).
  • Backends must not change the return value of getScreenFormat outside of initBackend, initSize, or endGFXTransaction.
  • Backends must be able to display 256 color cursors even when the screen format is set differently.
  • Backends supporting GFX transactions must return kTransactionFormatNotSupported from endGFXTransaction when screen format change fails.
  • Backends must place the highest color mode that is supported by the backend and the hardware at the beginning of the list returned by getSupportedFormats.
  • Backends should support graphics in RGB(A) color order, even if their hardware uses a different color order, but are not required to.
  • Backends supporting color order conversion with limited hardware may use Graphics::crossBlit, but are strongly recommended to use platform-optimized code.

Truecolor API initialization protocol

Engine initialization protocol

NOTE: This API was designed with backwards compatibility in mind for engines that use 8-bit graphics only. If your engine does not make use of per-pixel RGB color graphics, you should not have to change anything, as long as ENABLE_RGB_COLOR is set in configuration during compilation, so that functions link properly.

  1. Init with desired pixel format
    • If your engine can only produce graphics in one RGB color format, initialize a Graphics::PixelFormat to the desired format, and call initGraphics with a pointer to that format as the fourth parameter.
      • For instance, if your engine can only produce graphics in RGB555, you would write Graphics::PixelFormat myFormat(2, 5, 5, 5, 0, 10, 5, 0, 0);
    • If your engine can easily support any RGB mode (for instance if it converts from YUV), call initGraphics with NULL for the fourth parameter.
    • If your engine can support more than one RGB mode, but not all of them…
      1. Produce a Common::List of Graphics::PixelFormat objects describing the supported formats. This list must be in order of descending preference, so that the most desired format is first, and the least desired is last.
      2. Call initGraphics with this list of formats as the fourth parameter.
  2. Check the return value of OSystem::getScreenFormat() to see if setup of your desired format was successful. If setup was not successful, it will return Graphics::PixelFormat::createFormatCLUT8().
    • If the setup was not successful, and your engine cannot run in 256 colors, display an error and return.
    • Otherwise, initialize your engine to use the pixel format that getScreenFormat returned, and run normally.

Example

Here is an example of a simple engine that uses the best color depth available to display and color-cycle this gradient:

[Image: gradient rendered in RGB565]

Common::Error QuuxEngine::run() {
	Graphics::PixelFormat ourFormat;

	// Request the backend to initialize a 640 x 480 surface with the best available format.
	initGraphics(640, 480, true, NULL);

	// If our engine could only handle one format, we would specify it here instead of asking the backend:
	// 	// RGB555
	// 	ourFormat = Graphics::PixelFormat(2, 5, 5, 5, 0, 10, 5, 0, 0);
	// 	initGraphics(640, 480, true, &ourFormat);

	// If our engine could handle only a few formats, this would look quite different:
	//  	Common::List<Graphics::PixelFormat> ourFormatList;
	//
	// 	// RGB555
	// 	ourFormat = Graphics::PixelFormat(2, 5, 5, 5, 0, 10, 5, 0, 0);
	// 	ourFormatList.push_back(ourFormat);
	//
	// 	// XRGB1555
	// 	ourFormat = Graphics::PixelFormat(2, 5, 5, 5, 1, 10, 5, 0, 15);
	// 	ourFormatList.push_back(ourFormat);
	//
	// 	// Use the best format which is compatible between our engine and the backend
	// 	initGraphics(640, 480, true, ourFormatList);

	// Get the format the system was able to provide
	// in case it cannot support that format at our requested resolution
	ourFormat = _system->getScreenFormat();

 	byte *offscreenBuffer = (byte *)malloc(640 * 480 * ourFormat.bytesPerPixel);

 	if (ourFormat.bytesPerPixel == 1) {
		// Initialize palette to simulate RGB332

		// If our engine had no 256 color mode support, we would error out here:
		//  	return Common::kUnsupportedColorMode;

		byte palette[1024];
		memset(palette, 0, sizeof(palette));

		byte *dst = palette;
		for (byte r = 0; r < 8; r++) {

			for (byte g = 0; g < 8; g++) {

				for (byte b = 0; b < 4; b++) {

					dst[0] = r << 5;
					dst[1] = g << 5;

					dst[2] = b << 6;
					dst[3] = 0;

					dst += 4;
				}
			}
		}

		_system->setPalette(palette,0,256);
	}

	uint32 t = 0;

	// Create a mask to limit the color from exceeding the bitdepth
	// The result is equivalent to:
	// 	uint32 mask = 0;
	// 	for (int i = ourFormat.bytesPerPixel; i > 0; i--) {
	// 		mask <<= 8;
	// 		mask |= 0xFF;
	// 	}
	uint32 mask = 0xFFFFFFFF >> (32 - (ourFormat.bytesPerPixel << 3));

	// Repeat this until the event manager tells us to stop
	while (!shouldQuit()) {

		// Keep t from exceeding the number of bits in each pixel.
		// I think this is faster than "t %= (ourFormat.bytesPerPixel * 8);" would be.
		t &= (ourFormat.bytesPerPixel << 3) - 1;

		// Draw the actual gradient
		for (int16 y = 0; y < 480; y++) {

			uint8 *dst = offscreenBuffer + (y * 640 * ourFormat.bytesPerPixel);

			for (int16 x = 0; x < 640; x++) {

				uint32 color = (x * y) & mask;
				if (t)
					color = (color << t) | (color >> ((ourFormat.bytesPerPixel << 3) - t));

				// Currently we have to jump through hoops to write variable-length data in an endian-safe manner.
				// In a real-life implementation, it would probably be better to have an if/else-if tree or
				// a switch to determine the correct WRITE_UINT* function to use in the current bitdepth.
				// Though, something like this might end up being necessary for 24-bit pixels, anyway.

#ifdef SCUMM_BIG_ENDIAN
				for (int i = 0; i < ourFormat.bytesPerPixel; i++) {

					dst[ourFormat.bytesPerPixel - 1 - i] = color & 0xFF;

					color >>= 8;
				}
				dst += ourFormat.bytesPerPixel;

#else
				for (int i = ourFormat.bytesPerPixel; i > 0; i--) {

					*dst++ = color & 0xFF;
					color >>= 8;

				}
#endif
			}
		}
		// Copy our gradient to the screen. The pitch of our image is the width * the number of bytes per pixel.
		_system->copyRectToScreen(offscreenBuffer, 640 * ourFormat.bytesPerPixel, 0, 0, 640, 480);

		// Tell the system to update the screen.
		_system->updateScreen();

		// Get new events from the event manager so the window doesn't appear non-responsive.
		parseEvents();

		// Wait a semi-arbitrary length in order to animate fluidly, but not insanely fast
		_system->delayMillis(66);

		// Increment our time variable, which doubles as our bit-shift counter.
		t++;
	}
	free(offscreenBuffer);

	return Common::kNoError;
}

Backend initialization protocol

  1. During first initialization, set the value that getScreenFormat returns to Graphics::PixelFormat::createFormatCLUT8()
  2. When initSize is called, attempt to set screen format with the PixelFormat pointed to by the format parameter
    • If format is NULL, use Graphics::PixelFormat::createFormatCLUT8()
    • If requested screen format is supported, attempt to set screen up with it.
      • If setup is unsuccessful, fall back to previous color mode and set the value that getScreenFormat returns accordingly.
        • Note: During game initialization, this must always result in a fall-back to 256 color mode, with getScreenFormat returning a value equivalent to Graphics::PixelFormat::createFormatCLUT8(). Any other result is only possible if the same game has already run initSize with a different format, and is trying to switch formats during runtime.
      • If setup is successful, update the value that getScreenFormat returns to the value that was requested.
        • If format is supported by backend but not directly in hardware, ensure that graphics are converted in copyRectToScreen
    • If requested screen format is not supported, continue running in 256 color mode.
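The fallback rules above can be condensed into a short sketch. MyBackend and trySetVideoMode are hypothetical names used only for illustration; an int stands in for Graphics::PixelFormat, with 0 playing the role of CLUT8.

```cpp
// Stand-in for Graphics::PixelFormat; 0 represents CLUT8.
typedef int Format;
static const Format kCLUT8 = 0;

// MyBackend and trySetVideoMode are hypothetical illustrations,
// not part of the real OSystem API.
struct MyBackend {
	Format _screenFormat;
	MyBackend() : _screenFormat(kCLUT8) {}   // step 1: start in CLUT8

	// Pretend the hardware rejects format 99 and accepts everything else.
	bool trySetVideoMode(Format f) { return f != 99; }

	void initSize(const Format *format) {
		Format requested = format ? *format : kCLUT8; // NULL means CLUT8
		Format previous = _screenFormat;
		if (trySetVideoMode(requested))
			_screenFormat = requested;   // success: report the new format
		else
			_screenFormat = previous;    // failure: keep reporting the old one
	}

	Format getScreenFormat() const { return _screenFormat; }
};
```

During first game initialization `previous` is necessarily CLUT8, so a failed setup reports CLUT8, exactly as the note above requires; only a later runtime format switch can fall back to something else.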

Complete API reference

New functions

OSystem

  • Graphics::PixelFormat OSystem::getScreenFormat(void)
    • Returns the pixel format currently accepted for graphics from the engine.
  • Common::List<Graphics::PixelFormat> OSystem::getSupportedFormats(void)
    • Returns a list of all the pixel formats the backend can accept graphics in.
    • The first item in this list must be the highest color graphics mode supported by the backend which is directly supported by the hardware.
    • The remainder of the list must be in order of descending preference, such that the last item in the list is the format in which the backend performs worst.
    • Backends which do not support fast conversion must put all modes directly supported in hardware (and CLUT8) before modes that will require conversion during copyRectToScreen.
    • Backends which support fast conversion should put larger color spaces before smaller color spaces, but are not required to.

Graphics::PixelFormat

  • inline Graphics::PixelFormat Graphics::findCompatibleFormat(Common::List<Graphics::PixelFormat> backend, Common::List<Graphics::PixelFormat> frontend)
    • Returns the first entry on the backend list that also occurs in the frontend list, or CLUT8 if there is no matching format.
  • inline Graphics::PixelFormat(void)
    • creates an uninitialized PixelFormat.
  • inline Graphics::PixelFormat(byte BytesPerPixel, byte RBits, byte GBits, byte BBits, byte ABits, byte RShift, byte GShift, byte BShift, byte AShift)
    • creates an initialized PixelFormat.
    • [_]Bits is the width in bits of the relevant channel
      • RBits = red bits, GBits = green bits, BBits = blue bits, ABits = alpha bits.
    • [_]Shift is the position (starting from 0) of the least significant bit of the relevant channel, which is equal to the bit shift required to place a value into that channel.
      • In RGB565, RShift is 11, GShift is 5, and BShift is 0.
      • In RGBA4444, RShift is 12, GShift is 8, BShift is 4, and AShift is 0.
  • static inline Graphics::PixelFormat Graphics::PixelFormat::createFormatCLUT8(void)
    • creates a PixelFormat set to indicate 256 color paletted mode
    • This method is provided for convenience, and is equivalent to initializing a Graphics::PixelFormat with a bytedepth of 1 and all component bits and shifts set to 0.
      • Which would be accomplished normally via Graphics::PixelFormat(1,0,0,0,0,0,0,0,0);
    • Because this method is static, it can be called without creating a PixelFormat first
      • For instance, if (format == NULL) newFormat = Graphics::PixelFormat::createFormatCLUT8();
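To illustrate the [_]Bits/[_]Shift semantics described above, here is a small self-contained sketch that packs 8-bit channels into an arbitrary format. SimpleFormat and packPixel are hypothetical stand-ins for illustration, not part of the API.

```cpp
#include <stdint.h>

// Hypothetical stand-in mirroring the bits/shift fields described above.
struct SimpleFormat {
	uint8_t rBits, gBits, bBits, aBits;
	uint8_t rShift, gShift, bShift, aShift;
};

// Pack full 8-bit channels into the described format: drop the low-order
// bits of each channel, then place it at its shift position.
uint32_t packPixel(const SimpleFormat &f, uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
	return ((uint32_t)(r >> (8 - f.rBits)) << f.rShift)
	     | ((uint32_t)(g >> (8 - f.gBits)) << f.gShift)
	     | ((uint32_t)(b >> (8 - f.bBits)) << f.bShift)
	     | (f.aBits ? ((uint32_t)(a >> (8 - f.aBits)) << f.aShift) : 0);
}
```

With RGB565 described as {5, 6, 5, 0, 11, 5, 0, 0}, packing pure red yields 0xF800, matching the RShift = 11 example above.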

Miscellaneous

  • bool Graphics::crossBlit(byte *dst, const byte *src, int dstpitch, int srcpitch, int w, int h, Graphics::PixelFormat dstFmt, Graphics::PixelFormat srcFmt)
    • blits a rectangle from a “surface” in srcFmt to a “surface” in dstFmt
    • returns false if the blit fails (due to unsupported format conversion)
    • returns true if the blit succeeds
    • can convert the rectangle in place if src and dst are the same, and srcFmt and dstFmt have the same bytedepth
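As a rough illustration of the per-pixel work crossBlit performs, here is a self-contained conversion from RGB565 to 32-bit XRGB8888. The function name and the fixed format pair are assumptions for this sketch; the real crossBlit is generic over its two PixelFormat parameters.

```cpp
#include <stdint.h>

// Illustrative fixed-format blit of the kind crossBlit performs generically:
// expand RGB565 source pixels to 32-bit XRGB8888.
void blit565To8888(uint8_t *dst, const uint8_t *src,
                   int dstPitch, int srcPitch, int w, int h) {
	for (int y = 0; y < h; y++) {
		const uint16_t *s = (const uint16_t *)(src + y * srcPitch);
		uint32_t *d = (uint32_t *)(dst + y * dstPitch);
		for (int x = 0; x < w; x++) {
			uint16_t p = s[x];
			uint32_t r = (p >> 11) & 0x1F, g = (p >> 5) & 0x3F, b = p & 0x1F;
			// Replicate the high bits into the low bits of each expanded
			// channel so that full white stays full white.
			d[x] = ((r << 3 | r >> 2) << 16) | ((g << 2 | g >> 4) << 8) | (b << 3 | b >> 2);
		}
	}
}
```

A platform-optimized override would typically replace this inner loop with SIMD or hardware blits, which is why the spec only recommends crossBlit as a baseline.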

Modified functions

engine

  • void initGraphics(int width, int height, bool defaultTo1xScaler, const Graphics::PixelFormat *format)
    • Now takes a format parameter, which is a pointer to a requested pixelformat
    • Uses top item in backend’s getSupportedFormats list if format is NULL
    • Now displays a warning if it receives OSystem::kTransactionFormatNotSupported in return from endGFXTransaction
    • Now overloaded to simplify initialization for the three engine types:
  • void initGraphics(int width, int height, bool defaultTo1xScaler)
    • A wrapper which passes a pointer to the result of Graphics::PixelFormat::createFormatCLUT8() as the format
  • void initGraphics(int width, int height, bool defaultTo1xScaler, const Common::List<Graphics::PixelFormat> &formatList)
    • A wrapper which passes a pointer to the return value of Graphics::findCompatibleFormat(OSystem::getSupportedFormats(), formatList) as the format

OSystem

  • virtual void OSystem::initSize(uint width, uint height, Graphics::PixelFormat *format = NULL)
    • Can now take a format parameter, which is a pointer to a requested pixelformat, and defaults to NULL
    • Uses 256 color mode if format is NULL
  • OSystem::TransactionError OSystem::endGFXTransaction(void)
    • Must now return kTransactionFormatNotSupported if the backend fails in an attempt to initialize to a new pixel format during a GFX transaction.

CursorMan

  • void Graphics::CursorManager::pushCursor(const byte *buf, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor, int targetScale, Graphics::PixelFormat *format)
    • Can now take a format parameter, which is a pointer to a Graphics::PixelFormat describing the pixel format of the cursor graphic, and defaults to NULL.
    • Uses 256 color mode if format is NULL
  • void Graphics::CursorManager::replaceCursor(const byte *buf, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor, int targetScale, Graphics::PixelFormat *format)
    • Can now take a format parameter, which is a pointer to a Graphics::PixelFormat describing the pixel format of the cursor graphic, and defaults to NULL.
    • Uses 256 color mode if format is NULL
  • Graphics::CursorManager::Cursor(const byte *data, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor = 0xFFFFFFFF, int targetScale = 1, Graphics::PixelFormat format = Graphics::PixelFormat::createFormatCLUT8())
    • Can now take a format parameter, which is a Graphics::PixelFormat describing the pixel format of the cursor graphic, and defaults to 256 color mode.

Modified Types

  • enum Common::Error
    • Now includes a kUnsupportedColorMode value, for engines which get unsupported pixel formats after a format change request fails.
  • enum OSystem::TransactionError
    • Now includes a kTransactionFormatNotSupported value, for backends to announce failure to supply game screen with requested pixel format.

17 Comments

  1. Very nice!

    Note: I still think that most (all?) of the PixelFormat::createFormat??? methods should be gone. To quote a statement on good API design: “API Should Be As Small As Possible But No Smaller”. In this particular case, these methods serve no real purpose. So, keep only those which are used in at least 3 different places (that’s a *very* low bound, btw; normally I’d say 5 or 10). Rationale: (a) it is easy to add stuff, but hard to remove it later on; (b) these functions will all be used very rarely, probably each in at most 1 place. Conclusion: They are not needed. (c) Adding more funcs clutters the interfaces and makes it harder to learn and understand them.

    Comment by Max Horn — July 6, 2009 @ 1:40 pm | Reply

    • I am commenting here to note that I have acted on this and edited the post to reflect the outcome of that action, as without that context your comment would appear to be criticizing a feature that doesn’t exist.

      Comment by scummvmupthorn09 — July 7, 2009 @ 12:48 pm | Reply

  2. Wondering about this line:

    Check the return value of OSystem::getScreenFormat() to see if setup of your desired format was successful. If setup was not successful, it will return Graphics::PixelFormat::createFormatCLUT8();

    Actually that’s not quite correct, or rather not consistent with our transaction and rollback support. If there was a video mode set up before, a backend with rollback support should just try to set the old mode up again and *not* CLUT8.

    Maybe you have some more insights on how to solve the consistency clash.

    Comment by LordHoto — July 7, 2009 @ 12:31 pm | Reply

    • Because the backend is required to operate in CLUT8 outside of a game, the prior mode should always be CLUT8.
      I suppose that to support this I would have to require that games do not change format during operation, but that is arbitrary. Rather, I should specify that this applies only to first initialization, and change the spec such that initSize reverts to the prior format on failure, rather than CLUT8.

      Comment by scummvmupthorn09 — July 7, 2009 @ 12:52 pm | Reply

  3. I’m a bit confused about the convertScreenRect() function. I’m assuming this is to be used by
    the engine for pre-converting resources, right? But in that case it means that the engine has
    selected a different pixel format than the one used in its resource data, so we have potentially
    three pixel formats involved:

    Engine resources -> OSystem pixel format -> Hardware pixel format
    Format A         -> Format B             -> Format C

    * The engine has asked the backend for format B. Thus, pixels submitted to the backend
    needs to be in Format B. If format A differs from format B, the engine first needs
    to convert from format A to format B.

    * The OSystem has been asked for format B. It can not refuse it except by reverting to CLUT8 mode.
    If it accepts, and the hardware format C is different, the backend needs to convert from format B
    to format C.

    * The engine has no way of knowing Format C. The backend has no way of knowing Format A.

    Given this, I don’t see how convertScreenRect() is supposed to be used. It takes only one format parameter
    “hwFormat” (Format C?), the other format is implicit. The backend knows only two formats (B and C)
    implicitly. So, since C->C would be a no-operation, the only conversion it could reasonably perform
    is B->C. But why would a function to perform a conversion B->C be a part of the OSystem API? That
    is something which is completely internal to the backend. Nobody but the backend can use the result, since
    only the backend can use data in format C (if it differs from B).

    Comment by Marcus Comstedt — July 9, 2009 @ 5:14 am | Reply

    • If you read the spec, you should see that convertScreenRect is meant to be used by backends for post-conversion. I provided it on the assumption that, unless some such method already existed, many backends would simply not provide any conversion. That said, it is the part of the API that I am least confident about.

      Comment by scummvmupthorn09 — July 9, 2009 @ 10:25 am | Reply

      • If that’s the case, I wonder why this function exists at all; it looks similar to crossBlit (it *is* actually even implemented using crossBlit by default). I would rather have the backends call that directly, instead of going through an unnecessary virtual method, in my eyes.

        Comment by LordHoto — July 9, 2009 @ 10:41 am

      • Ok, but then there are two things that don’t really make sense:

        1) There is no reason for the method to be virtual. If the backend does not want to
        use the provided method, it can use a different algorithm, either inlined, as
        a method with another name (of its own choosing) or even by a non-virtual overload
        of the default method. Having a virtual method only makes sense if somebody else
        is supposed to call it.

        2) A “conversion method that already exists” has no particular reason to live in the
        OSystem class. A more logical place to have it would be in the PixelFormat class.
        Then you could use it to convert between any combination of pixel formats, like
        for example in a pre-converting engine. E.g. to convert from pixel format A to B,
        you would do A->convertScreenRect(…., B).

        Comment by Marcus Comstedt — July 9, 2009 @ 10:42 am

      • The only reason for the virtual function in the OSystem class is so that backends can override it with hardware-optimized versions as they are strongly recommended, but not required, to do.
        But, it may be better just to provide crossBlit and let them write their own function entirely when crossBlit is too slow for their hardware.
        To be honest, I’m surprised Max didn’t come out strongly against doing it this way, like with the PixelFormat::createFormat* functions.

        Comment by scummvmupthorn09 — July 9, 2009 @ 10:53 am

      • (Since I can’t reply to your reply I’m using this indentation level):

        “The only reason for the virtual function in the OSystem class is so that backends can override it with hardware-optimized versions as they are strongly recommended, but not required, to do.”

        There is still no reason to have this method at all because of this (and also no reason to have it virtual). If a backend is fine with “crossBlit” it can just use it directly. If a backend needs something custom it can write one and use that directly.

        Thus there is no need to keep the function signature (and more over no need for the virtual bit) in OSystem. Since the implementation can just use the proper function directly, without going through a virtual function call.

        Comment by LordHoto — July 9, 2009 @ 11:00 am

      • But there is no _point_ in the backend overriding the method. Why should it override
        the existing method instead of simply not using it, and using its own implementation
        instead? All you get is a pointless vcall.

        Comment by Marcus Comstedt — July 9, 2009 @ 11:02 am

      • Yeah, that’s what I wasn’t sure about. I guess that is definitely a negative then, so I will remove the function.

        Comment by scummvmupthorn09 — July 9, 2009 @ 11:03 am

  4. // Currently we have to jump through hoops to write variable-length data in an endian-safe manner.
    // In a real-life implementation, it would probably be better to have an if/else-if tree or
    // a switch to determine the correct WRITE_UINT* function to use in the current bitdepth.
    // Though, something like this might end up being necessary for 24-bit pixels, anyway.

    I don’t get the following code. Usually graphics data is in native endian format, thus there is no reason to have special cases for BIG and LITTLE endian. Still if it isn’t native format, the backend should do the conversion internally. Or at least supply a function telling the user, which endian the data has to be passed in. (Actually the endian should not matter at all for 32bpp, since there you can just exchange the masks and everything is fine, the only place where it’s needed is probably 16bpp and an unpadded 24bpp mode).

    Just about the unpadded 24bpp mode as wjp also suggested multiple times in our IRC channel, we do not want unpadded 24bpp modes.

    Comment by LordHoto — July 9, 2009 @ 6:56 am | Reply

    • The salient data can be 1 byte, 2 bytes, 3 bytes, or 4 bytes. The data type used is always 4 bytes.
      If the data we want is smaller than the size of the data type, in big endian, the first two bytes will be empty, but in little endian, it will be the last two.
      That is the reason for the following code.

      Comment by scummvmupthorn09 — July 9, 2009 @ 10:28 am | Reply

      • Ah right, I missed that bit, thanks for clearing that up. Yeah, in the end definitely the WRITE_UINT## (or just a plain uint16 * etc.) access should be used. Actually, I would vote for using different example code, to be honest; I doubt engines will access the graphics data that way. So we should rather have a real-world example in there, instead of something made up.

        Comment by LordHoto — July 9, 2009 @ 10:35 am

  5. I’m still not happy with: Common::List OSystem::getSupportedFormats(void).

    At least the SDL backend has no means of telling any natively supported format, since with SDL you can only know *after* you set up the video mode which color masks are used. This might be fine on desktop machines, which are usually fast enough for conversion anyway. But since SDL is also used on Symbian and WinCE, and I expect at least high end WinCE devices to support 16bpp, this might be bad. If I remember correctly, we had for example one user who had a BGR WinCE video mode, which resulted in wrong GUI colors. (Can’t remember if this was on the forums or on the bug tracker or in IRC.)

    Now “getSupportedFormats” (or rather the WinCE backend) has no way to tell the color order before initializing the hardware mode, which would be best for engines doing YUV -> RGB conversion.

    Thus I would propose at least to *consider* something like SDL does. This means maybe a way of only asking for a given color depth, not caring about the real color order along with it. That might be useful at least for all engines out there doing YUV -> RGB conversion, since they could create data in the real native mode.

    Maybe this idea was considered, but dropped for not being clean enough. I would still like to hear about it, since it *is* a problem affecting devices like WinCE, where hardware format creation might help.

    Comment by LordHoto — July 9, 2009 @ 7:06 am | Reply

    • After a short talk on IRC here’s my real proposal:

      Change OSystem::initSize to:

      – Automatically choose a mode when “format” is NULL, this should be a hardware format of the backend
      -> this should then in the end be similar to SDL_SetVideoMode with the SDL_ANYFORMAT flags
      – Otherwise use the mode the user requests

      To remove the need for updating all engines ::initGraphics calls do the following:

      – Change ::initGraphics to always pass the “format” pointer directly to OSystem::initSize
      – Add a wrapper ::initGraphics taking no “format” pointer at all, which will always initialize CLUT8
      -> that can be done for example via calling ::initGraphics with a CLUT8 pixel format
      -> convert all 8bpp engines to use that (actually this will have the same API as the old ::initGraphics, thus there should be no need for a conversion).

      Comment by LordHoto — July 9, 2009 @ 10:56 am | Reply

