HemiStereo SDK
The C++ Software Development Kit for the HemiStereo 3D sensing device.
Using the HemiStereo SDK


If you want to use the SDK on the HemiStereo device, please follow this guide first and then continue with the next section. For all other cases, download the SDK installer from here and follow the installation instructions.

Sample simple_capture

The HemiStereo SDK comes with a few sample applications. You can find the samples under <HemiStereoSDKInstallDir>/share/osp/samples. For example, the simple_capture sample is an application that stores images captured by the sensor at a given interval. In this guide, we will use this sample to show you how to access the HemiStereo sensor and capture images and 3D scenes from it.


Inside the simple_capture directory, we find simple_capture.cpp, which contains the C++ source code. Let's start with the main function:

```cpp
if (!parseCmdLineArgs(argc, argv))
    return 1;

std::unique_ptr<osp::Device> device;
if (config::device == "auto") {
    // scan available devices
    std::vector<osp::Device::Info> devices = osp::DeviceManager::get().getDevices();
    if (devices.empty()) {
        std::cout << "No devices found." << std::endl;
        return 1;
    }
    // connect to the first device
    device = osp::DeviceManager::get().request(devices[0], config::tls);
} else {
    // connect to device
    device = osp::DeviceManager::get().request(config::device, config::tls);
}
if (!device) {
    std::cerr << "Device " << config::device << " not available." << std::endl;
    return 1;
}
```

First, the command line parameters are parsed and, depending on them, the device is selected. The DeviceManager singleton is responsible for managing the devices; you can get a reference to it using the static method get(). The DeviceManager is able to search for devices on the local network using UDP broadcasts. To get a list of all found devices, getDevices() can be used. request(...) establishes a connection to a device and returns a unique_ptr to the Device class. The first parameter of request is the device information, which can be of type Device::Info or of type std::string representing an IP address or hostname. The second parameter is a flag that enables or disables TLS encryption. If the connection fails, an empty pointer is returned.

Once we have a connection to the device, we can continue. It is possible to define a password to restrict access to the device. If a password was set, we need to unlock the device first:

```cpp
// login if password is required
auto info = device->info();
if (info.passwordRequired) {
    std::cout << "Password: ";
    std::cin >> creds.password;
    auto loginStatus = device->login(creds);
    if (!loginStatus.ok()) {
        std::cout << "Login failed!" << std::endl;
        return 1;
    }
}
```

First, we check the passwordRequired flag to see whether password protection is enabled. If it is, we show a password prompt and let the user enter the password. Then we call login() to unlock the device.

Now we are ready to use the device functions. In this example, we first set the capture mode:

```cpp
// set capture mode
```

HemiStereo supports two capture modes: RAW and RGBD. Setting the capture mode to RAW instructs the device to send only the images from the cameras; no stereo processing is done in that mode. RGBD enables the stereo processing: in that mode, an ideal omnidirectional image (without lens distortions) and a pixel-equivalent distance map are transferred from the device. This mode also provides access to the point cloud, which is calculated on the client to save bandwidth. In this example, the capture mode can be set by the user via a command line parameter. Next, we start the device using the start() method and run our processing function:

```cpp
// start streaming from device
osp::Status status = device->start();
if (!status.ok())
    return 1;

// register signal handler
std::signal(SIGTERM, signalHandler);
std::signal(SIGINT, signalHandler);

// start main loop
process(device.get());
```

As you can see, we also connect a signal handler to some system signals. It simply sets a global variable to stop the processing loop:

```cpp
void signalHandler(int signal)
{
    running = false;
}
```

When the running flag is set to false, the processing function stops and the device is released:

```cpp
// stop streaming from device
// close device connection
```

Now let's see what happens inside the process function:

```cpp
void process(osp::Device *device)
{
    size_t frameCount = 0;
    running = true;
    while (running) {
        auto frame = device->getFrame();
        if (!frame)
            continue;
        std::string frameCountStr = numToStr(frameCount);
        if (!frame->image.empty()) {
            boost::filesystem::create_directories(config::outputDir + "/image");
            saveImage(config::outputDir + "/image/" + frameCountStr + ".png", frame->image);
        }
        if (!frame->distanceMap.empty()) {
            boost::filesystem::create_directories(config::outputDir + "/distance");
            saveDistanceMap(config::outputDir + "/distance/" + frameCountStr + ".png", frame->distanceMap);
        }
        if (!frame->sourceImage0.empty()) {
            boost::filesystem::create_directories(config::outputDir + "/image_0");
            saveImage(config::outputDir + "/image_0/" + frameCountStr + ".png", frame->sourceImage0);
        }
        if (!frame->sourceImage1.empty()) {
            boost::filesystem::create_directories(config::outputDir + "/image_1");
            saveImage(config::outputDir + "/image_1/" + frameCountStr + ".png", frame->sourceImage1);
        }
        if (!frame->sourceImage2.empty()) {
            boost::filesystem::create_directories(config::outputDir + "/image_2");
            saveImage(config::outputDir + "/image_2/" + frameCountStr + ".png", frame->sourceImage2);
        }
        std::cout << "Recorded image " << frameCountStr << std::endl;
        ++frameCount;
    }
}
```

First, the frameCount and running variables are initialized. Then a while loop is started that runs as long as running is true. Inside the loop, a frame is received from the device using the getFrame() method. The frame is of type Frame, a struct containing some metadata and the images:

```cpp
struct Frame
{
    Metadata metadata;
    Matrix<uint8_t, 3> sourceImage0;
    Matrix<uint8_t, 3> sourceImage1;
    Matrix<uint8_t, 3> sourceImage2;
    Matrix<uint8_t, 3> image;
    Matrix<uint16_t, 1> distanceMap;
    Matrix<Point3d<float>> pointcloud;
};
```

Depending on the capture mode, the matrices may or may not contain data. Therefore, we check whether each matrix is non-empty and save the image if so. The functions for saving images and saving the distance map differ slightly:

```cpp
void saveImage(const std::string &filepath, const osp::Matrix<uint8_t, 3> &image)
{
    auto cvMatRgb = osp::matrixToCvMat(image);
    cv::Mat cvMatBgr;
    cv::cvtColor(cvMatRgb, cvMatBgr, cv::COLOR_RGB2BGR);
    cv::imwrite(filepath, cvMatBgr);
}

void saveDistanceMap(const std::string &filepath, const osp::Matrix<uint16_t, 1> &distanceMap)
{
    cv::Mat distanceMapCv = osp::matrixToCvMat(distanceMap);
    cv::Mat distanceMapColored;
    distanceMapCv.convertTo(distanceMapColored, CV_8U, 255. / 5000.);
    cv::applyColorMap(255 - distanceMapColored, distanceMapColored, cv::COLORMAP_JET);
    cv::imwrite(filepath, distanceMapColored);
}
```

We use OpenCV for writing the image files. To convert between an osp::Matrix and a cv::Mat, we provide some helper functions defined in osp/types/mathelper.h. Here, we use matrixToCvMat to convert to a cv::Mat. The images coming from the sensor device are in RGB order; because OpenCV uses BGR order, a call to cv::cvtColor is necessary. For saving the distance map, a color map is applied to make it displayable by common image applications. In the example, the JET colormap is mapped between 0 and 5000 millimeters.

Building the source

The CMakeLists.txt

Next to the source file, there is a CMakeLists.txt, which contains the build instructions used by CMake. CMake is a popular cross-platform tool for building software. It uses compiler-independent configuration files to generate compile instructions for the target platform. If you do not have CMake installed on your PC, please download it from here or install it using your distribution's package manager.

Let's have a look at the CMakeLists.txt:

```cmake
cmake_minimum_required(VERSION 3.5)
project(simple_capture)
# find osp
# find Boost
find_package(Boost COMPONENTS filesystem program_options)
# find OpenCV
find_package(OpenCV REQUIRED core imgproc imgcodecs)
# create executable
add_executable(${PROJECT_NAME} simple_capture.cpp)
# link dependencies
```
The first line defines a minimum CMake version; if an older CMake is used, it will throw an error. Then the project name is defined, in our case simple_capture. Our sample depends on several libraries: osp (the package name of the HemiStereo libraries), Boost, and OpenCV, so we instruct CMake to find these libraries. After finding them, add_executable is called; its first parameter is the target's name, and all others are paths to the source files. The last step is to link the dependencies to our target. To achieve this, one has to call target_link_libraries with the target and all dependencies. Note that the libraries are passed as imported targets, which carry the paths to the libraries, their include directories, and their own dependencies. Therefore we don't have to define the include directories ourselves, which makes them easier to handle. If you use a library that doesn't provide imported targets, you need to pass its include directories to the target_include_directories CMake function.
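
The find/link lines elided from the listing could look roughly as follows; the osp target name is an assumption and may differ in your SDK version:

```cmake
# find osp (assumed CONFIG-mode package; set osp_DIR if it is not found)
find_package(osp REQUIRED)

# link dependencies as imported targets (osp::osp is an assumed target name)
target_link_libraries(${PROJECT_NAME}
    osp::osp
    Boost::filesystem
    Boost::program_options
    ${OpenCV_LIBS}
)
```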

Installing dependencies

To build the application, we have to install some dependencies. As we can see in the CMakeLists.txt, the sample depends on two third-party libraries: Boost and OpenCV. Because we do not ship these libraries with the SDK, you have to download and install them. We also need CMake. So let's install the dependencies:


On Ubuntu, you can install them using the package manager:

```shell
> sudo apt-get update
> sudo apt-get install \
    cmake-gui \
    libopencv-dev \
    libboost-filesystem-dev \
    libboost-program-options-dev
```

On Windows, you can download prebuilt Boost binaries from the following links:

Running CMake

If everything is installed, you should find the CMake GUI in the program menu. After starting it, you can set the source directory; this is the directory containing the CMakeLists.txt. The build directory is where the binaries will be generated; you can set it to a path of your choice. After setting the paths, please click Configure. A new window opens where you can configure the generator. On Linux, you can select Unix Makefiles and use the default native compilers. On Windows, you should select your Visual Studio edition. You can also define a toolkit version; note that the SDK libraries are compiled with toolkit v141, so choose at least this toolkit. After that, you can continue and CMake will start to configure the project. CMake may complain that it cannot find some dependencies. You can help it by defining the following variables in the CMake GUI:

- `osp_DIR` -> `<HemiStereoSDKInstallDir>/lib/cmake/osp`
- `OpenCV_DIR` -> the directory containing `OpenCVConfig.cmake` in the OpenCV directory
- `BOOST_ROOT` -> `<BoostInstallDir>`

Note: On Windows, compilation sometimes fails due to wrong paths to the shared Boost libraries. If that happens with your Boost version, please enable the Boost_USE_STATIC_LIBS option to use the static libraries.
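
As a sketch, the same option can also be set directly in the CMakeLists.txt, as long as it appears before Boost is located (Boost_USE_STATIC_LIBS is a standard variable of CMake's FindBoost module):

```cmake
# force linking against the static Boost libraries
set(Boost_USE_STATIC_LIBS ON)
find_package(Boost COMPONENTS filesystem program_options)
```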

Please run Configure again after setting these variables. If everything is configured correctly, click on Generate to generate the build files.

Compiling the Code

After running CMake, you will find project files for your build tool in the build directory. Depending on the CMake generator settings, you can compile the code in one of the following ways:

Unix Makefiles

Open a terminal and run the following commands:

```shell
> cd <BuildDirectory>
> make
```
Visual Studio 20xx

CMake generates a solution file *.sln, which you can open in Visual Studio. Then, just use Visual Studio to compile the code.

Running the application

After compilation, an executable is generated in the build directory. Open a terminal and run the following command:

#### Linux

```shell
> ./simple_capture
```

#### Windows

```shell
> simple_capture.exe
```

If Windows complains about missing DLLs, copy them to the directory containing the executable.