Laplacian Damping for Projective Dynamics


Jing Li
University of Utah
 
Tiantian Liu
University of Pennsylvania
 
Ladislav Kavan
University of Utah
 


Our method makes the ribbon coil in a natural, fabric-like fashion, similar to ribbon motions seen in rhythmic gymnastics performances. In contrast, PBD damping introduces an unnatural early rotation of the ribbon's tail, as if the ribbon were a rigid bar rather than fabric. The arrows point out the differences in the motion of the ribbon tail.



Abstract

Damping is an important ingredient in physics-based simulation of deformable objects. Recent work introduced fast simulation methods such as Position Based Dynamics and Projective Dynamics. The explicit velocity damping methods currently used with Position Based Dynamics or Projective Dynamics are simple and fast, but have limitations: they may damp global motion or non-physically transport velocities throughout the simulated object. More advanced damping models do not share these limitations, but are slow to evaluate, defeating the benefits of fast solvers such as Projective Dynamics. We present a new damping model specifically designed for Projective Dynamics, which provides the quality of advanced damping models while adding only minimal computational overhead. The key idea is to define the damping forces using Projective Dynamics' Laplacian matrix. In a number of simulation examples we show that this damping model works very well in practice. When combined with a modified Projective Dynamics solver that uses a non-dissipative implicit midpoint integrator, our damping method provides fully user-controllable damping, allowing the user to quickly produce visually pleasing and vivid animations.
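To make the key idea concrete: in Projective Dynamics, the system matrix takes the form M/h^2 + L, where L = sum_i w_i S_i^T A_i^T A_i S_i is a constraint-weighted Laplacian. The following minimal Python sketch is not the paper's implementation; it assumes a damping force of the form f_damp = -gamma * L v, with gamma a hypothetical user parameter, and illustrates the behavior on a chain of spring-connected particles:

import numpy as np

def spring_laplacian(n, w):
    # Constraint-weighted graph Laplacian for a chain of n particles with
    # uniform spring weight w -- a stand-in for the Projective Dynamics
    # matrix L = sum_i w_i S_i^T A_i^T A_i S_i.
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += w
        L[i + 1, i + 1] += w
        L[i, i + 1] -= w
        L[i + 1, i] -= w
    return L

n, h, gamma = 5, 0.01, 0.1      # particle count, time step, damping strength (illustrative values)
m = np.ones(n)                  # unit masses
L = spring_laplacian(n, w=100.0)

v = np.zeros(n)
v[0] = 1.0                      # only the first particle moves initially

# Laplacian damping penalizes *relative* velocities between connected
# particles: the entries of f_damp sum to zero, so linear momentum is preserved.
f_damp = -gamma * (L @ v)
v += h * f_damp / m             # explicit update, just to show the effect
print(f_damp)                   # nonzero only where neighboring velocities disagree

Because constant velocity fields lie in the null space of L, a rigid translation (v identical across all particles) produces zero damping force; this is the property that distinguishes Laplacian-based damping from explicit velocity damping, which damps global motion as well.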






Publication

Jing Li, Tiantian Liu, Ladislav Kavan. Laplacian Damping for Projective Dynamics. VRIPHYS [Honorable Mention], 2018.  


Links and Downloads

Paper

 
BibTeX



Acknowledgements

We thank Junior Rojas, Saman Sepehri, and Cem Yuksel for many inspiring discussions. We also thank Yasunari Ikeda for help with hair rendering, Nathan Marshak and Dimitar Dinev for proofreading, and Shirley Han and Jessica Hair for narrating the accompanying video. This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-1617172 and IIS-1622360. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge the support of Activision and a hardware donation from NVIDIA Corporation.