Volume 21, Number 2 (2013)

  • Item
    A GPGPU-based Pipeline for Accelerated Rendering of Point Clouds
    (Václav Skala - UNION Agency, 2013) Günther, Christian; Kanzok, Thomas; Linsen, Lars; Rosenthal, Paul; Skala, Václav
    Direct rendering of large point clouds has become common practice in architecture and archaeology in recent years. Due to the high point density, no mesh is reconstructed from the scanned data; instead, the points are rendered directly as primitives of a graphics API like OpenGL. However, these APIs and the hardware they are built on have been optimized to process triangle meshes. Although current API versions provide extensive control over the hardware, e.g. through shaders, some hardware components concerned with the rasterization of primitives are still hidden from the programmer. In this paper we show that it can be beneficial for point primitives to abandon the standard graphics APIs and switch directly to a GPGPU API like OpenCL.
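The core operation such a GPGPU point-rendering pipeline must reimplement is the per-point depth test that the fixed-function rasterizer normally provides. Below is a minimal serial sketch in Python, not the authors' OpenCL kernel: the pinhole camera, buffer size, and point data are invented, and the nearest-point-wins update stands in for what a kernel would do with atomic min operations on packed depth values.

```python
import numpy as np

def rasterize_points(points, depth, width=4, height=4, f=1.0):
    """Splat 3D points into a depth buffer, keeping the nearest point
    per pixel -- the z-test a compute kernel would have to emulate
    once the fixed-function rasterizer is abandoned."""
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        # Pinhole projection to integer pixel coordinates.
        u = int((f * x / z + 0.5) * width)
        v = int((f * y / z + 0.5) * height)
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z  # atomic min in a real parallel kernel
    return depth

depth = np.full((4, 4), np.inf)
pts = [(0.0, 0.0, 2.0), (0.0, 0.0, 1.0)]  # two points on the same ray
rasterize_points(pts, depth)
```

In a real OpenCL implementation the loop body becomes the kernel, with one work-item per point and depth packed into an integer so the comparison and store are a single atomic operation.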
  • Item
    Optimizing Multiple Camera Positions for the Deflectometric Measurement of Multiple Varying Targets
    (Václav Skala - UNION Agency, 2013) Lobachev, Oleg; Schmidt, Martin; Guthe, Michael; Skala, Václav
    We present a device for the detection of hail dents in passenger cars. For this purpose we have constructed a new multi-camera deflectometric setup for large specular objects. Deflectometric measurements impose strict constraints on how cameras can be placed, for instance angular restrictions and distance limitations. An important trait of our system is its static setup: we use a single camera configuration for all objects to be scanned. We render the camera images and analyze them for the deflectometric requirements in order to optimize the camera placement with respect to multiple parameters, the most important being the positions of the cameras, since the reflections of the patterns should be clearly visible. The camera parameters are computed using a global optimization procedure for which we efficiently generate a good starting configuration. We introduce an empiric quality measure for a particular camera configuration and present both visual and quantitative results for the generated camera placement. This configuration was then used to build the actual device.
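The two-stage strategy described here, a coarse sweep to generate a good starting configuration followed by local refinement, can be sketched as follows. This is a one-dimensional toy, not the paper's optimizer: the quality measure below (counting targets within an invented 30-degree viewing limit) only mirrors the role of the authors' empiric measure.

```python
import math

def quality(cam_angle, targets):
    """Stand-in quality measure: reward a camera whose viewing angle
    to each target stays within a deflectometric limit (here 30 deg).
    The paper's empiric measure is more involved."""
    limit = math.radians(30)
    return sum(1.0 for t in targets if abs(cam_angle - t) < limit)

def optimize(targets, steps=360):
    # Coarse sweep efficiently generates a good starting configuration...
    best = max(range(steps),
               key=lambda i: quality(2 * math.pi * i / steps, targets))
    angle = 2 * math.pi * best / steps
    # ...which a finer local search (stand-in for the paper's global
    # optimization procedure) then refines.
    for step in (0.01, 0.001):
        while quality(angle + step, targets) > quality(angle, targets):
            angle += step
        while quality(angle - step, targets) > quality(angle, targets):
            angle -= step
    return angle
```

With several cameras and real placement constraints the search space grows quickly, which is why a good starting configuration matters.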
  • Item
    Marker-less Facial Motion Capture based on the Parts Recognition
    (Václav Skala - UNION Agency, 2013) Akagi, Yasuhiro; Furukawa, Ryo; Sagawa, Ryusuke; Ogawara, Koichi; Kawasaki, Hiroshi; Skala, Václav
    Motion capture methods are used to capture facial motion for creating 3D animations and for recognizing facial expressions. Since facial motion consists of non-rigid deformations of the skin, it is difficult to track a point on the face over time. Therefore, a number of marker-based methods have been proposed to solve this problem. However, since it is difficult to place markers on a face, and the markers obscure its actual texture, marker-based capture methods are hard to apply. To overcome this problem, we propose a marker-less motion capture method for facial motions. Since the thickness of the skin varies across the facial parts, the motion characteristics of each part vary as well. These characteristics make the non-rigid tracking problem more difficult. To address this, we recognize five types of facial parts (nose, mouth, eye, cheek and obstacle) from the 3D points of a face using a Random Forest classifier. After the recognition of the facial parts, we track the motion of each part using a non-rigid registration algorithm based on a Gaussian Mixture Model. Since the motions of the parts are detected independently, we integrate them as 3D shape deformations to track the motion of points on the whole face. For the integration we adopt a Free-Form Deformation technique based on Radial Basis Functions. This deformation method deforms 3D shapes seamlessly from pairs of key points: a set of points on a source face and the corresponding points on a target shape, which are detected by the non-rigid registration algorithm. Finally, we represent the motion of the face as the deformation from the face of the initial frame to the other frames. Our results show that the proposed method detects the motion of the face more accurately.
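The RBF-based integration step admits a compact sketch: given key-point correspondences (which the paper obtains from the non-rigid registration), solve for radial-basis weights that reproduce the key-point displacements, then evaluate the interpolated displacement field on all points. The Gaussian kernel and its width below are illustrative choices, not the paper's.

```python
import numpy as np

def rbf_deform(sources, targets, points, sigma=1.0):
    """Deform `points` so that each source key point moves exactly to
    its target, blending the displacements smoothly with Gaussian
    radial basis functions -- a minimal stand-in for an RBF-based
    free-form deformation."""
    sources, targets, points = map(np.asarray, (sources, targets, points))
    # Solve for weights that reproduce the key-point displacements.
    d2 = ((sources[:, None] - sources[None, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    weights = np.linalg.solve(phi, targets - sources)
    # Apply the interpolated displacement field to all points.
    d2p = ((points[:, None] - sources[None, :]) ** 2).sum(-1)
    return points + np.exp(-d2p / (2 * sigma ** 2)) @ weights
```

Because the Gaussian kernel matrix is positive definite for distinct key points, the solve always succeeds and the deformation interpolates the correspondences exactly while falling off smoothly away from them.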
  • Item
    A Muscle Model for Enhanced Character Skinning
    (Václav Skala - UNION Agency, 2013) Ramos, Juan; Larboulette, Caroline; Skala, Václav
    This paper presents a novel method for deforming the skin of 3D characters in real-time using an underlying set of muscles. We use a geometric model based on parametric curves to generate the various shapes of the muscles. Our new model includes tension under isometric and isotonic contractions, a volume preservation approximation, as well as a visually accurate sliding movement of the skin over the muscles. The deformation of the skin is done in two steps: first, a skeleton subspace deformation is computed from the bone movements; then, vertex displacements are added due to the deformation of the underlying muscles. We have tested our algorithm with a GPU implementation. The basis of the parametric primitives that serve for the muscle shape definition is stored in a cache. For a given frame, the shape of each muscle as well as its associated skin displacement are defined by only the spline control points and the muscle’s new length. The data structure to be sent to the GPU is thus small, avoiding the data transfer bottleneck between the CPU and the GPU. Our technique is suitable both for applications where accurate skin deformation is desired and for video games or virtual environments where fast computation is necessary.
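The two-step deformation can be sketched per vertex: a skeleton subspace deformation (linear blend of bone transforms) followed by a muscle-driven displacement. The scalar bulge term and fixed displacement direction below are invented simplifications; in the paper the displacement comes from the spline-based muscle shapes.

```python
import numpy as np

def skin_vertex(v, bone_transforms, weights, muscle_bulge):
    """Two-step skinning sketch: (1) skeleton subspace deformation as
    a weighted blend of 4x4 bone transforms, (2) an added displacement
    driven by muscle contraction (here simply along +z)."""
    v_h = np.append(v, 1.0)  # homogeneous coordinates
    # Step 1: blend the bone transforms acting on the vertex.
    blended = sum(w * (M @ v_h) for w, M in zip(weights, bone_transforms))
    skinned = blended[:3]
    # Step 2: add the muscle-driven displacement.
    return skinned + np.array([0.0, 0.0, muscle_bulge])
```

In the GPU version described by the authors, step 2 is evaluated from cached spline bases plus the per-frame control points and muscle length, which keeps the CPU-to-GPU upload small.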
  • Item
    Multiple Live Video Environment Map Sampling
    (Václav Skala - UNION Agency, 2013) Nikodým, Tomáš; Havran, Vlastimil; Bittner, Jiří; Skala, Václav
    We propose a framework that captures multiple high dynamic range environment maps and decomposes them into sets of directional light sources in real-time. The environment maps, captured and processed on stand-alone devices (e.g. a Nokia N900 smartphone), are available to rendering engines via a server that provides wireless access. We compare three different importance sampling techniques in terms of the quality of the sampling pattern, temporal coherence, and performance. Furthermore, we propose a novel idea of merging the directional light sources from multiple cameras by interpolation. We then discuss the pros and cons of using multiple cameras.
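One simple way to decompose an environment map into directional lights is luminance-proportional importance sampling via an inverse CDF lookup. The paper compares three techniques; the sketch below is only one generic scheme, with an invented toy map, and omits the mapping from pixel to world direction.

```python
import numpy as np

def sample_lights(env, n, rng):
    """Draw n directional light sources from an environment map with
    probability proportional to pixel luminance (inverse-CDF sampling).
    Returns pixel rows, columns, and importance weights."""
    lum = env.ravel().astype(float)
    cdf = np.cumsum(lum) / lum.sum()
    picks = np.searchsorted(cdf, rng.random(n))
    rows, cols = np.unravel_index(picks, env.shape)
    # Weight each light by 1 / (n * pick probability) so the estimate
    # of total radiance stays unbiased.
    weights = lum.sum() / (n * lum[picks])
    return rows, cols, weights
```

For live video, temporal coherence matters as much as pattern quality: resampling each frame independently makes the lights flicker, which is one motivation for comparing sampling strategies and for merging lights across cameras.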
  • Item
    Interaction with Dynamic Large Bodies in Efficient, Real-Time Water Simulation
    (Václav Skala - UNION Agency, 2013) Kellomäki, Timo; Skala, Václav
    Water is an important part of nature. Interactively simulating large areas of flowing water would be a welcome addition to many virtual worlds, but the simulation is computationally demanding. Another problem is combining the simulation with rigid bodies, which are the most common interaction solution in virtual worlds. Heightfield water simulation is fast, but is especially hard to couple with rigid bodies: usually, water simply flows through the bodies. We propose a method that generalizes the extremely fast virtual pipe method to handle large, dynamic bodies. Our method diverts water around the objects. This enables us, for example, to dynamically build and destroy dams on rivers in a large virtual world.
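The virtual pipe method stores a heightfield plus fluxes between neighboring columns; flux grows with the height difference, then heights are updated from the fluxes. The 1D sketch below is a caricature of the divert-around idea, not the paper's 2D scheme: pipes into cells occupied by a body are simply closed, so water piles up against the body instead of flowing through it.

```python
def pipe_step(height, flux, solid, dt=0.1, k=1.0):
    """One virtual-pipe update on a 1D heightfield. flux[i] carries
    water from column i to column i+1; solid marks columns occupied
    by a rigid body, whose pipes are closed."""
    n = len(height)
    for i in range(n - 1):
        if solid[i] or solid[i + 1]:
            flux[i] = 0.0  # pipe blocked by the body
        else:
            # Flux accelerates with the height difference.
            flux[i] += dt * k * (height[i] - height[i + 1])
    for i in range(n - 1):
        # Move water along the pipes; total volume is conserved.
        height[i] -= dt * flux[i]
        height[i + 1] += dt * flux[i]
    return height
```

Because each column only exchanges water with its neighbors, the update parallelizes trivially, which is what makes the pipe method fast enough for large virtual worlds.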
  • Item
    GPU real time hatching
    (Václav Skala - UNION Agency, 2013) Suarez, Jordane; Belhadj, Farès; Boyer, Vincent; Skala, Václav
    Hatching is a shading technique in which tone is represented by a series of strokes or lines. A drawing using this technique should follow three criteria: the lighting, the object geometry, and its material. These criteria respectively provide the tone, the geometric motif orientation, and the geometric motif style. We present a real-time GPU approach for hatching strokes over arbitrary surfaces. Our method is based on a coherent and consistent model texture mapping and takes these three criteria into account. The triangle adjacency primitive is used to provide a coherent stylization over the model. Our model computes hatching parameters per fragment according to the light direction and the geometry, and generates the hatching rendering taking into account these parameters and a lighting model. Dedicated textures can easily be created off-line to depict material properties for any kind of object. As our GPU model is designed to deal with texture resolutions, consistent mapping and geometry in object space, it provides real-time rendering while avoiding popping and shower-door effects.
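The tone criterion maps lighting to stroke density: the darker the diffuse tone at a fragment, the denser the hatching. A minimal sketch of that per-fragment selection is below; the Lambertian tone and the six-level quantization are illustrative assumptions, and texture lookup and stroke orientation are omitted entirely.

```python
def hatching_level(normal, light_dir, num_levels=6):
    """Map diffuse lighting to a discrete hatching texture level:
    level 0 = densest strokes (darkest tone), num_levels - 1 = blank
    (brightest). This is the tone-selection role a fragment shader
    would play; the inputs are assumed unit vectors."""
    # Lambertian tone, clamped to [0, 1].
    dot = sum(n * l for n, l in zip(normal, light_dir))
    tone = max(0.0, min(1.0, dot))
    # Quantize the tone into a hatching level.
    return min(int(tone * num_levels), num_levels - 1)
```

In a shader, the chosen level would index into pre-made stroke textures (the dedicated off-line textures the abstract mentions), with orientation taken from the geometry rather than from screen space, which is what avoids the shower-door effect.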