Nowadays almost everyone has a virtual character in a virtual environment of some type. Social networking websites, for instance, let you interact with others and play games with friends virtually, which until a few years ago was not possible without meeting physically. Today the web is taking us into a new era that is immensely immersive and intelligent, with striking similarities to the real world we live in. Creating such an experience poses two challenges: first, to create virtual environments that are no different from the real world we live in (earth!), and second, to enable experiences within those environments that feel life-like.
To address these challenges, Intel Corporation brought some of the best visual computing researchers together in 2011 to form the Intel Science and Technology Center for Visual Computing (ISTC-VC). Researchers from the Center are already having an impact: they wrote 25% of the papers published at SIGGRAPH 2012, describing research that will fundamentally change the nature of visual computing over the next 5 to 10 years. This clearly demonstrates the progress Intel and the ISTC researchers are making in visual computing.
Imagine being able to adapt your virtual avatar's walking style depending on whether it is walking on a country road or a slippery slope. Imagine seeing the yarn-level details of woven fabrics while shopping in a virtual store whose modern interior includes all the visually important effects (drapes, sofas, clothes, a wine glass on a table with all its reflections) and is no different from a store in the real world. Last but not least, imagine hearing realistic, life-like sounds resulting from interactions between objects both tiny and large.
The ISTC-VC papers published at SIGGRAPH 2012 also describe research that makes content creation a breeze. ISTC-VC research promises to make it possible for the average user to include characters in their virtual environments that carry out human tasks autonomously: one simply gives a character a goal such as "move over to the chair and sit down" and the software associated with the character figures out how to achieve it in a natural, human-like fashion. https://www.box.com/s/d1ed902a25c1ce263195
ISTC-VC research also promises to enable the average computer user to create sophisticated content by manipulating the large amounts of modeling data available on the web. Research is uncovering how to recognize similarities between content and then organize it into databases that the average user can manipulate intuitively to create new content derived from existing content. The following three papers all contribute to establishing this capability: creating, editing, and comparing 3D shapes.
One exciting new technology allows content creators to de-animate only part of a moving video, for example to emphasize a particular moment in a life-time adventure or to draw attention to an item of interest in an advertisement.
Read on for individual paper topics and how their research will enhance virtual environments.
- The first technique that does not rely on motion-capture data to achieve locomotion for virtual characters
- Demonstrates biologically based actuators and objectives that match real human-like gaits
- The technique automatically synthesizes walking and running controllers and operates without the use of motion-capture data.
- Humans learn many movements simply by observation because they are able to generalize the style and repurpose it for new tasks such as striking.
- Such generalization is a challenge for character animation as it requires users/programmers to “spell out” the desired behavior with a comprehensive set of motions and extensive programming.
- In this paper, the authors present a new technique that animates characters performing user-specified tasks by generalizing from a small number of example motion clips to a continuous space of stylistically consistent motions.
- This paper investigates focal points, or "Schelling points," on 3D surfaces to develop a model of salience. Predicting salience is a fundamental problem in applications such as shape matching, and in online games where winning depends on players selecting the most salient points without any communication among themselves.
- The paper presents a technique that builds a model of salience for 3D surface meshes based on a social/psychological definition, also known as "common knowledge," of which feature points on a surface stand out.
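The idea of turning independent picks into a salience score can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's actual model (which learns to predict Schelling points from surface properties); here salience is simply the empirical frequency with which players picked each vertex.

```python
from collections import Counter

def schelling_salience(picks, num_vertices):
    """Toy estimate of per-vertex salience from independent player picks.

    picks: list of vertex indices, one per (player, shape) trial.
    Returns per-vertex scores that sum to 1.
    """
    counts = Counter(picks)
    total = len(picks)
    return [counts.get(v, 0) / total for v in range(num_vertices)]

# Three of four players picked vertex 2; one picked vertex 0.
scores = schelling_salience([2, 2, 0, 2], num_vertices=4)
# scores == [0.25, 0.0, 0.75, 0.0]
```

A real model would then regress these observed frequencies against geometric features so salience can be predicted on unseen shapes.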
- Simulating knitted cloth with yarn-level detail has remained a challenge in the computer animation industry. There is a lot of guesswork involved in understanding how cloth moves depending on its stitching style.
- This paper describes the first modeling technique that can efficiently create yarn-level models of knitted clothing with a rich variety of patterns that would be completely impractical to model using traditional techniques.
- By leveraging the six-step process described in the paper, an artist can create a rich variety of knitting patterns and full-scale garments for virtual characters.
- HDR is used in imaging and photography to allow a greater dynamic range between the lightest and darkest areas of an image, representing more accurately the range of intensity levels found in real scenes.
- When traditional photographs are printed, the range of brightness can be heavily compressed, and the result can look flat. This paper presents a solution for viewing high-dynamic-range (HDR) images using a reflective sheet of paper, glossy ink, and a torch light illuminating the paper. With the proposed technique, one can get a better sense of the range of brightness in the scene and adjust it by moving the light or the paper.
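The flattening that compression causes can be illustrated with a standard global tone-mapping operator (a common baseline, not the paper's reflective-print technique): mapping luminance L to L/(1+L) squeezes very bright values together, which is exactly the detail the proposed physical display lets the viewer recover by moving the light.

```python
def tone_map(luminance):
    """Simple global operator L/(1+L): compresses unbounded HDR
    luminance into [0, 1), flattening bright regions together."""
    return [L / (1.0 + L) for L in luminance]

hdr = [0.01, 1.0, 100.0, 10000.0]   # six orders of magnitude
ldr = tone_map(hdr)
# 100.0 and 10000.0 both land within 1% of 1.0, so the
# distinction between "bright" and "very bright" is lost.
```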
- The appearance of strokes in digital drawing depends not only on the 2D path of the stylus but also on the pressure, tilt, and rotation of the instrument.
- Digital artists often use a high-quality stylus coupled to a virtual brush to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive input devices.
- This paper describes a method whereby a non-expert draws a 2D query stroke using an inexpensive input device, and missing hand gestures are synthesized based on a library of examples supplied by an artist. The resulting marks follow the trajectory drawn by the non-expert but convey the gestural style of the artist.
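As a rough sketch of the example-based idea (hypothetical code, not the paper's algorithm), one could resample strokes by arc length and transfer the gesture parameters, here a pressure profile, of the nearest library stroke onto the novice's trajectory:

```python
import math

def resample(stroke, n):
    """Resample a polyline of (x, y) points to n points evenly
    spaced by arc length."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1]
    out = []
    for i in range(n):
        t = total * i / (n - 1)
        j = max(k for k in range(len(d)) if d[k] <= t)
        j = min(j, len(stroke) - 2)
        seg = d[j + 1] - d[j] or 1.0  # guard zero-length segments
        a = (t - d[j]) / seg
        (x0, y0), (x1, y1) = stroke[j], stroke[j + 1]
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

def match_style(query, library, n=16):
    """Return the pressure profile of the library stroke whose
    resampled shape is closest to the query stroke."""
    q = resample(query, n)
    def dist(entry):
        pts = resample(entry["points"], n)
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in zip(q, pts))
    return min(library, key=dist)["pressure"]

library = [
    {"points": [(0, 0), (1, 0)], "pressure": [0.2, 0.9]},  # horizontal flick
    {"points": [(0, 0), (0, 1)], "pressure": [0.5, 0.5]},  # vertical stroke
]
match_style([(0, 0), (2, 0)], library)  # -> [0.2, 0.9]
```

The published method goes further, synthesizing full hand-gesture trajectories rather than copying one example's parameters, but the lookup-by-shape structure is the same in spirit.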
- With traditional techniques it is difficult to render a scene that involves specular light paths, such as a tabletop seen through a drinking glass sitting on it. Yet glossy objects are used everywhere in the animation industry and have become important materials that need realistic rendering in virtual scenes.
- This paper presents manifold exploration, a new way of handling specular paths in rendering. The technique builds on the observation that the sets of paths contributing to the image naturally form manifolds in path space, which can be explored locally by a simple equation-solving iteration.
- Computing the visually important effects in a modern interior is challenging. Such a scene may include fabrics such as drapes, sofas, and clothes, where volumetric representations capture yarns at micron resolution; polished metals in appliances like refrigerators and faucets; wood in furniture and floors; complex lighting fixtures; and building materials like granite and marble. Rendering it requires unified support for light transport with high-gloss reflections, subsurface scattering, and complex light sources.
- Today, designers choose existing ray tracing and light transport algorithms that fit within their computational budget, which often results in noisy images even after long computation times.
- This paper presents novel strategies for handling light paths for efficient generation of low noise images.
- The creation of compelling three-dimensional content is a central problem in computer graphics. Many applications such as games and virtual worlds require large collections of three-dimensional shapes for populating environments, and modeling each shape individually is impractical.
- Tools for automatic synthesis of shapes from complex real-world domains must understand what characterizes the structure of shapes within such domains. Developing formal models of this structure is challenging, since shapes in many real-world domains exhibit complex relationships between their components.
- The focus of this paper is on designing a compact representation of these relationships that can be learned without supervision from a limited number of examples. The key idea in the design of the model is to relate probabilistic relationships between geometric and semantic properties of shape components to learned latent causes of structural variability, both at the level of individual component categories and at the level of the complete shape.
Synthesizing Open Worlds with Constraints Using Locally Annealed Reversible Jump MCMC
No preprint available
- Large collections of 3D models are now commonly available via many public repositories, opening new possibilities for data mining, visualization, sorting, comparing, and synthesizing new models. However, exploring such collections remains challenging: while most online databases make it easy to select sets of similar models using text-based filtering, understanding the similarities and differences within such collections is difficult because most 3D models are not stored in a consistent orientation.
- This paper presents a new analysis tool and exploration interface for 3D model collections. The tool lets users directly specify regions of interest (ROI) on example shapes in order to guide subsequent exploration actions. It is more robust and efficient than existing alternatives, and it provides interactive exploration of large model collections by using fuzzy correspondences to support view alignment, ROI-based similarity search, and faceted exploration.
- The large-scale motion of an object can sometimes make it difficult to see its finer-scale, internal motions, or those of nearby objects.
- In this paper, we present a semi-automated technique for selectively de-animating or removing the large-scale motions of one or more objects. The user draws a small set of strokes indicating the regions of the objects that should be immobilized and our algorithm warps the video to remove the gross motion of these regions while leaving finer-scale, relative motions intact.
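A crude stand-in for the core idea (hypothetical; the paper uses a content-aware warp guided by user strokes, not a global translation) is to shift each frame so a tracked region stays at its first-frame position:

```python
def deanimate(track):
    """Per-frame translation that immobilizes a tracked region:
    offset each frame so the region's tracked center stays where
    it was in frame 0. A toy proxy for the paper's warp, which
    removes gross motion while preserving fine relative motion."""
    x0, y0 = track[0]
    return [(x0 - x, y0 - y) for (x, y) in track]

# Tracked center drifts right and slightly down over three frames.
offsets = deanimate([(10, 5), (12, 5), (15, 6)])
# offsets == [(0, 0), (-2, 0), (-5, -1)]
```

Applying each offset to its frame cancels the region's gross motion; everything moving relative to that region (the finer-scale motion) remains visible.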
- Many materials have textures (fine-scale, high-frequency variations) organized in distinctly recognizable spatial patterns (large-scale, low-frequency variations). For example, tiger pelts have fine-scale fur textures organized in large-scale striped patterns; floor carpets have fine-scale weave textures organized in large-scale ornamental patterns; and brick walls have fine-scale mud textures organized in large-scale block patterns. The key challenge in modeling these patterns is to provide a way for the user to guide the synthesis process, that is, to specify what spatial patterns should appear in the output image.
- This paper presents the idea that representations in symmetry space are a natural way to describe spatial patterns in many real-world textures. It also provides a framework for investigating this idea, including a variety of methods for symmetry representation, objective function specification, and image optimization. Different combinations of these methods are shown to be useful for symmetry transfer and symmetry processing.
- Woven fabrics have a wide range of appearances determined by their small-scale 3D structure. Building yarn-level models with existing techniques is challenging: the process is manually intensive and often fails to capture the naturally arising irregularities that contribute significantly to the overall appearance of the cloth. Existing techniques also do not automatically adapt to different fabric designs and rely solely on scanned samples.
- To overcome these limitations, the paper presents a novel approach to creating models of woven cloth that takes user-specified fabric designs and produces models that correctly capture the yarn-level structural details of the cloth.
- Today, creating 3D environments is difficult because it is hard to create new shapes and to adapt existing shapes to new ones. The structure-aware shape editing tools available today only let the user edit high-level shape properties; they cannot alter the topology of the object.
- This paper presents a three-step structure-adaptive shape editing tool that lets users edit topology while maintaining global characteristics.
Simple Formulas For Quasiconformal Plane Deformations
No preprint available
- The advent of sophisticated photo editing software has made it increasingly easier to manipulate digital images. Often visual inspection cannot definitively distinguish the resulting forgeries from authentic photographs. In response, forensic techniques have emerged to detect geometric or statistical inconsistencies that result from specific forms of photo manipulation.
- This paper describes a new forensic technique that focuses on geometric inconsistencies that arise when fake reflections are inserted into a photograph or when a photograph containing reflections is manipulated.
- Self-collision detection (SCD) methods are widely used in computer graphics and engineering to enable realistic simulation of self-contact for highly deformable objects. Various methods have been devised to accelerate the numerous triangle-triangle overlap tests required to simulate realistic object movement.
- This paper describes a method that accelerates self-collision detection (SCD) for a deforming triangle mesh by exploiting the idea that a mesh cannot self-collide unless it deforms enough. Unlike prior work on subspace self-collision culling which is restricted to low-rank deformation subspaces, our energy-based approach supports arbitrary mesh deformations while still being fast.
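The culling idea can be sketched as follows. This is a simplified illustration: the paper certifies a threshold on a deformation energy, whereas here a maximum vertex displacement from the rest pose stands in for that energy, and the safe threshold is assumed to have been precomputed for the mesh.

```python
import math

def needs_self_collision_test(rest, current, safe_threshold):
    """Energy-style culling sketch: if no vertex has moved more than
    safe_threshold from the rest pose, assume the mesh cannot
    self-collide and skip the expensive triangle-triangle tests.
    (Max displacement is a toy proxy for the paper's certified
    deformation-energy bound.)"""
    worst = max(math.dist(r, c) for r, c in zip(rest, current))
    return worst > safe_threshold

rest  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = [(0.05, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
needs_self_collision_test(rest, moved, safe_threshold=0.1)  # False: cull
```

The payoff is that frames with small deformations, which dominate most simulations, pay only this cheap check instead of the full broad- and narrow-phase pipeline.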
Motion-Driven Concatenative Synthesis of Cloth Sounds.
No preprint available
- The rattling of coins dropped on a table, the jingling of a set of car keys and the cascade of noise from a shattering pane of glass are all familiar sound phenomena. Simulation of rigid-body dynamics for scenarios such as these is a widely studied field in the computer animation community.
- Rigid-body impacts produce sound primarily due to two sources: “ringing noise” and “acceleration noise”. Ringing noise refers to sound due to object vibrations. Acceleration noise, on the other hand, is produced when objects undergo large rigid-body accelerations. If a body experiences acceleration over a sufficiently short time scale, the resulting pressure disturbance in the surrounding medium is perceived as sound. While current rigid-body sound models synthesize convincing ringing noise, no efficient models exist for synthesizing sound due to acceleration noise. Consequently, synthesized rigid-body impact sounds tend to have an incorrect initial attack and lack the “crispness” characteristic of real impact sounds.
- To address this limitation, this paper proposes a simple and efficient model for acceleration noise that can be easily integrated with existing rigid-body sound pipelines for a realistic experience.
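A toy sketch of acceleration noise (hypothetical, not the paper's model): treat an impact as a half-sine acceleration pulse delivering a given velocity change, and take the radiated pressure as proportional to the acceleration, a dipole-like far-field approximation.

```python
import math

def acceleration_click(delta_v, dt, rate=44100):
    """Toy acceleration-noise pulse: a half-sine acceleration of
    duration dt whose integral equals the velocity change delta_v,
    with radiated pressure taken proportional to acceleration.
    Returns pressure samples at the given sample rate."""
    n = max(1, int(dt * rate))
    # Half-sine A*sin(pi*t/dt) integrates to A*2*dt/pi == delta_v.
    peak = delta_v * math.pi / (2.0 * dt)
    return [peak * math.sin(math.pi * i / n) for i in range(n + 1)]

click = acceleration_click(delta_v=1.0, dt=0.001)
# a ~1 ms burst: shorter contact times give a taller, sharper pulse,
# which is the "crisp" initial attack that ringing-only models miss
```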