image scaling with shader?

andrews:

I am trying to render a 16-bit image. The scaling function for the 16-bit-to-8-bit conversion is set up in the shader. ReaderWriterGDAL is modified as follows:


unsigned short *gray = new unsigned short[target_width * target_height];
unsigned short *alpha = new unsigned short[target_width * target_height];

// Initialize the alpha values to the 16-bit maximum (memset writes 0xFF
// into every byte, so each unsigned short becomes 0xFFFF).
memset(alpha, 0xFF, target_width * target_height * sizeof(unsigned short));

bandGray->RasterIO(GF_Read, off_x, off_y, width, height, gray, target_width, target_height, GDT_UInt16, 0, 0);

if (bandAlpha)
{
    bandAlpha->RasterIO(GF_Read, off_x, off_y, width, height, alpha, target_width, target_height, GDT_UInt16, 0, 0);
}

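// allocateImage() with GL_RGBA / GL_UNSIGNED_SHORT reserves 8 bytes per
// pixel; the internal texture format still has to be set separately on the
// image for the 16-bit data to survive the texture upload.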
image = new osg::Image;
image->allocateImage(tile_size, tile_size, 1, GL_RGBA, GL_UNSIGNED_SHORT);
memset(image->data(), 0, image->getImageSizeInBytes());

for (int src_row = 0, dst_row = tile_offset_top;
    src_row < target_height;
    src_row++, dst_row++)
{
    for (int src_col = 0, dst_col = tile_offset_left;
        src_col < target_width;
        ++src_col, ++dst_col)
    {
        // data() returns unsigned char*, so "+ 1" would advance one byte, not
        // one 16-bit channel; cast to unsigned short* and index channels.
        unsigned short* dst = reinterpret_cast<unsigned short*>(image->data(dst_col, dst_row));
        dst[0] = gray[src_col + src_row * target_width];
        dst[1] = gray[src_col + src_row * target_width];
        dst[2] = gray[src_col + src_row * target_width];
        dst[3] = alpha[src_col + src_row * target_width];
    }
}

image->flipVertical();

delete []gray;
delete []alpha;


I am getting a blank image layer. I am not clear about how to create an osg::Image with the unsigned short type, and I am not sure what went wrong in the code.
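For reference, this is the minimal standalone allocation I would expect to work, stripped of the GDAL reads (a sketch only; the 256x256 size and the flat mid-gray value are placeholders):

#include <osg/Image>

osg::ref_ptr<osg::Image> test = new osg::Image;
test->allocateImage(256, 256, 1, GL_RGBA, GL_UNSIGNED_SHORT);

// data() returns unsigned char*, so cast to unsigned short* before doing
// per-channel 16-bit stores.
unsigned short* p = reinterpret_cast<unsigned short*>(test->data());
for (int i = 0; i < 256 * 256; ++i)
{
    p[4 * i + 0] = p[4 * i + 1] = p[4 * i + 2] = 32768; // mid-gray
    p[4 * i + 3] = 0xFFFF;                              // fully opaque
}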
andrews:

Re: image scaling with shader?

I had missed setting the internal texture format to GL_RGBA16UI_EXT. With that set it displays the 16-bit image, but many pixels appear saturated; it looks like there is an internal conversion to BYTE somewhere. I think ImageUtils always creates images with GL_UNSIGNED_BYTE inside createEmptyImage.

If I change that, will it display the 16-bit data without any degradation?
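If it helps, this is the combination I plan to try next (a sketch, not verified yet; GL_RGBA16 is the normalized alternative if the unnormalized integer path is not actually required):

// Integer path: an integer internal format such as GL_RGBA16UI_EXT must be
// paired with the *_INTEGER pixel format at upload time, and the texture has
// to be sampled with a usampler2D in the shader.
image->allocateImage(tile_size, tile_size, 1, GL_RGBA_INTEGER_EXT, GL_UNSIGNED_SHORT);
image->setInternalTextureFormat(GL_RGBA16UI_EXT);

// Normalized path: GL_RGBA16 keeps 16 bits per channel, works with a plain
// sampler2D, and delivers values to the shader as floats in [0, 1].
// image->allocateImage(tile_size, tile_size, 1, GL_RGBA, GL_UNSIGNED_SHORT);
// image->setInternalTextureFormat(GL_RGBA16);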