I would like a simple way of displaying this depth data as a texture, but I seem to be botching the parameters to glTexImage2D. I have an std::vector<uint16_t> depth_buffer that, on a frame-by-frame basis, holds the depth measurements coming from a Kinect. There are exactly 640 x 480 of them, one depth measurement per pixel. If the world went my way, the call would be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 640, 480, 0, GL_LUMINANCE16, GL_UNSIGNED_SHORT, depth_buffer.data());
where internalFormat (the third parameter) is GL_LUMINANCE16 because the values are 16-bit unsigned integers, format is the same because that is exactly how the data is coming in, and type is GL_UNSIGNED_SHORT because these are shorts, not bytes.
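In case the surrounding state matters, the texture setup itself is just the usual boilerplate. This is roughly what I have (the variable name is mine, nothing else is special):

GLuint depth_tex = 0;                                                // re-uploaded from depth_buffer every frame
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);                               // rows are tightly packed 16-bit samples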
Surprisingly, if I change it to be
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 640, 480, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, depth_buffer.data());
where internalFormat is still GL_LUMINANCE16, format is just GL_LUMINANCE without the 16, and type is GL_UNSIGNED_BYTE, then I at least get something on screen. Things are clearly being skipped (with a one-byte type the driver only consumes half of each row's worth of data, and it treats the low and high bytes of every sample as separate pixels), but simply switching the type to GL_UNSIGNED_SHORT doesn't cut it.
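To spell out the obvious variant: keeping format as plain GL_LUMINANCE and only swapping the type back to GL_UNSIGNED_SHORT, i.e.

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 640, 480, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, depth_buffer.data());

is one of the combinations that gives me the solid black / solid white result mentioned below.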
Depending on which documentation you read, format (the second GL_LUMINANCE) may or may not accept the 16 suffix (anybody know why? experimentation seems to bear this out). But my chief concern here is why GL_UNSIGNED_SHORT seems to be invalid (I get either all black or all white) depending on the internalFormat/format combination.
I've tried an obscene number of combinations here and am looking for the right approach. Anybody have some advice for achieving this? I'm not opposed to using FBOs, but would really like to avoid them if possible...since this definitely should be doable.
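For reference, this is the kind of per-frame CPU conversion I'm trying to avoid having to write (a rough sketch; depth8 would just be a scratch buffer on my side):

std::vector<uint8_t> depth8(depth_buffer.size());
for (size_t i = 0; i < depth_buffer.size(); ++i)
    depth8[i] = static_cast<uint8_t>(depth_buffer[i] >> 8);   // keep only the high byte
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 640, 480, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, depth8.data());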