
Swapchain and Frame stream

TheHellBox edited this page Apr 30, 2019 · 5 revisions

To render an image, we have to work with the swapchain and the frame stream. A swapchain is a list of images to render into, so let's start with it.

Swapchain

The first thing you have to know is that swapchain creation is expensive, so you should create the swapchain only once and then reuse it as much as possible. The best time to initialize it is in the RUNNING session state. Here is example code to create a swapchain:

pub fn create_swapchain(&mut self){
    let swapchain_formats = self.session.enumerate_swapchain_formats().unwrap();
    // GL_RGBA8 (0x8058) is an OpenGL constant defined elsewhere in the project
    if !swapchain_formats.contains(&GL_RGBA8) {
        for format in swapchain_formats{
            println!("Format: {:04x}", format);
        }
        panic!("XR: Cannot use OpenGL GL_RGBA8 swapchain format");
    }

    let swapchain_create_info: xr::SwapchainCreateInfo<xr::OpenGL> = xr::SwapchainCreateInfo{
        create_flags: xr::SwapchainCreateFlags::EMPTY,
        usage_flags: xr::SwapchainUsageFlags::COLOR_ATTACHMENT | xr::SwapchainUsageFlags::SAMPLED,
        format: GL_RGBA8,
        sample_count: 1,
        // NOTE: Use correct resolution
        width: 800,
        height: 600,
        face_count: 1,
        array_size: 1,
        mip_count: 1
    };
    self.swap_chain = Some(self.session.create_swapchain(&swapchain_create_info).unwrap());
}
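The format check above is all-or-nothing: if GL_RGBA8 is missing, the code panics. A more forgiving approach is to pick the first format the runtime supports from an ordered preference list. Here is a minimal sketch of that selection logic, independent of any OpenXR types (the function name and the sRGB fallback are assumptions, not part of the original code):

```rust
/// Pick the first preferred format that the runtime supports.
/// `supported` would come from `session.enumerate_swapchain_formats()`.
fn choose_swapchain_format(supported: &[u32], preferred: &[u32]) -> Option<u32> {
    preferred
        .iter()
        .copied()
        .find(|format| supported.contains(format))
}

fn main() {
    // Standard OpenGL internal-format constants.
    const GL_RGBA8: u32 = 0x8058;
    const GL_SRGB8_ALPHA8: u32 = 0x8C43;

    // Formats as a runtime might report them.
    let supported = [GL_SRGB8_ALPHA8, GL_RGBA8];
    // Prefer sRGB, fall back to plain RGBA8.
    let preferred = [GL_SRGB8_ALPHA8, GL_RGBA8];

    let format = choose_swapchain_format(&supported, &preferred)
        .expect("no usable swapchain format");
    println!("{:04x}", format); // prints "8c43"
}
```

This way the panic only fires when none of the acceptable formats are available, instead of when one specific format is missing.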

Now, you probably want to render something into it. The algorithm is simple:

  • Enumerate the swapchain images
  • Acquire the index of the next image with the acquire_image function
  • From the enumerated images, take the one at the index returned by acquire_image
  • Wait for the image with wait_image

Or, if we write it as code:

pub fn get_swapchain_image(&mut self) -> Option<u32>{
    let swapchain = self.swap_chain.as_mut()?;
    let images = swapchain.enumerate_images().unwrap();
    let image_id = swapchain.acquire_image().unwrap();
    swapchain.wait_image(xr::Duration::INFINITE).unwrap();
    let image = images[image_id as usize];
    Some(image)
}

Now for rendering: since we are using glium, you construct a Texture2d from the swapchain image and use a SimpleFrameBuffer to render into it. Here is example code:

pub fn draw(&mut self){
    use glium::texture::{Texture2d, DepthTexture2d, DepthFormat, UncompressedFloatFormat, MipmapsOption, Dimensions};

    let swapchain_image = self.xr.get_swapchain_image();
    if let Some(swapchain_image) = swapchain_image{
        self.xr.frame_stream_begin();
        println!("Rendering!");
        // Wrap the raw GL texture id from the swapchain in a glium texture.
        // `false` means glium does not own the texture and will not delete it.
        let color = unsafe{
            Texture2d::from_id(
                &self.context,
                UncompressedFloatFormat::U8U8U8U8,
                swapchain_image,
                false,
                MipmapsOption::NoMipmap,
                Dimensions::Texture2d{width: 800, height: 600}
            )
        };
        let depthtexture = DepthTexture2d::empty_with_format(&self.context, DepthFormat::F32, MipmapsOption::NoMipmap, 800, 600).unwrap();
        let mut target = glium::framebuffer::SimpleFrameBuffer::with_depth_buffer(&self.context, &color, &depthtexture).unwrap();
        target.clear_color_and_depth((1.0, 0.0, 1.0, 1.0), 1.0);
        self.xr.release_swapchain_image();
        self.xr.frame_stream_end();
    }
}

After you have rendered into an image, you need to release it. You can do it like this: swapchain.release_image().unwrap();
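Acquire, wait, and release have to be called in that order on every frame; calling them out of order is a runtime error. Here is a minimal sketch of that lifecycle as a plain state machine (the enum and method names are hypothetical, purely to illustrate the legal call order):

```rust
#[derive(Debug, PartialEq)]
enum ImageState {
    Released, // image belongs to the runtime
    Acquired, // acquire_image succeeded, not yet waited on
    Waited,   // wait_image returned, safe to render into
}

struct SwapchainImage {
    state: ImageState,
}

impl SwapchainImage {
    fn acquire(&mut self) -> Result<(), &'static str> {
        match self.state {
            ImageState::Released => { self.state = ImageState::Acquired; Ok(()) }
            _ => Err("acquire called on an image that is still in flight"),
        }
    }
    fn wait(&mut self) -> Result<(), &'static str> {
        match self.state {
            ImageState::Acquired => { self.state = ImageState::Waited; Ok(()) }
            _ => Err("wait called before acquire"),
        }
    }
    fn release(&mut self) -> Result<(), &'static str> {
        match self.state {
            ImageState::Waited => { self.state = ImageState::Released; Ok(()) }
            _ => Err("release called before wait"),
        }
    }
}

fn main() {
    let mut image = SwapchainImage { state: ImageState::Released };
    image.acquire().unwrap();
    image.wait().unwrap();
    // ... render here ...
    image.release().unwrap();
    // Out-of-order calls are caught:
    assert!(image.wait().is_err());
}
```

The real enforcement is done by the OpenXR runtime, not by your code; this sketch only shows which orderings succeed.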

Frame Stream

The frame stream has 3 states:

  • Wait
  • Begin
  • End

In the wait state, you get the predicted display time, which you must later pass when ending the frame stream.

To begin the frame stream, you can use this example:

let state = self.frame_stream.wait().unwrap();
self.predicted_display_time = state.predicted_display_time;
self.frame_stream.begin().unwrap();

While starting the frame stream is easy, ending it is harder. First, you need a subimage for each eye, and a swapchain to build them from. You can construct them like this:

let eye_rect = xr::Rect2Di{
    offset: xr::Offset2Di{
        x: 0,
        y: 0
    },
    // NOTE: Use actual resolution
    extent: xr::Extent2Di{
        width: 800,
        height: 600
    }
};

let left_subimage: xr::SwapchainSubImage<xr::OpenGL> = openxr::SwapchainSubImage::new()
    .swapchain(swap_chain)
    .image_rect(eye_rect);
let right_subimage: xr::SwapchainSubImage<xr::OpenGL> = openxr::SwapchainSubImage::new()
    .swapchain(swap_chain)
    .image_rect(eye_rect);

Then construct a projection view for each eye:

let projection_view_left = xr::CompositionLayerProjectionView::new().sub_image(left_subimage);
let projection_view_right = xr::CompositionLayerProjectionView::new().sub_image(right_subimage);
let views = [projection_view_left, projection_view_right];

The next step is getting a projection. It's easy, just use the following code:

let projection = xr::CompositionLayerProjection::new().views(&views);

And now we can finally end the frame stream:

self.frame_stream.end(time, xr::EnvironmentBlendMode::OPAQUE, &[&projection]).unwrap();
