Webinar

Asiagraphics Web Seminar

(AG Webinar)

Mission: The AG webinar (held monthly) aims to showcase exciting research results, inspire and motivate new research, and create a regular recurring opportunity for the Asiagraphics community to meet and exchange ideas.

Format: Each AG webinar is a 1.5-hour live session with 1-2 talks followed by Q&A, held on a Tuesday evening (Asian time) near the end of the month. Audience members can watch the live talks and post questions on YouTube or Bilibili during and right after the talks; the session chair will then relay the questions to the speakers.

Copyright: All AG webinar talks will be recorded and made available on both YouTube and Bilibili (see links at the end of this page). The recorded videos are owned by the corresponding speakers and may only be used for study and teaching (i.e., non-commercial purposes).

Working Team: The AG Webinar is organized by Ligang Liu (team chair), Xiao-Ming Fu (secretary), Yuki Koyama, and Minhyuk Sung. If you want to nominate a speaker or provide feedback, please feel free to contact us at asiagraphics.ag@gmail.com.

Join the live talks via: YouTube or Huya


AG Webinar Session 30

Date: Friday, December 20, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Zhonggui Chen, Xiamen University, China

Talk 1

Title:

Voronoi diagram of curves and surfaces – challenges and algorithms

Speaker:

Ramanathan Muthuganapathy
IITM, Chennai, India
Professor

Abstract:

The Voronoi diagram (VD) is one of the fundamental geometric structures and has found applications in fields ranging from engineering to biology. However, computing VDs, and developing algorithms for them, has proven to be quite a challenge when the inputs are represented as exact curves or surfaces. This talk will highlight the challenges of computing VDs for such domains and discuss algorithms that have overcome some of them.
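
For reference, the standard definition (not specific to the talk): given input sites $C = \{c_1, \dots, c_n\}$, which may be points, curves, or surfaces, the Voronoi cell of $c_i$ is

$$ V(c_i) = \{\, x \in \mathbb{R}^d \;:\; d(x, c_i) \le d(x, c_j) \ \ \forall j \ne i \,\}, $$

where $d(x, c_i)$ is the minimum distance from $x$ to the site. For curved sites this distance function is no longer a simple quadratic, which is one source of the difficulty the talk addresses.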

Speaker bio:

Dr. Ramanathan Muthuganapathy (Raman) is a professor in the Department of Engineering Design, Indian Institute of Technology Madras (IITM), Chennai, India. His primary interest is geometry computing: developing algorithms for geometric structures such as Voronoi diagrams and for problems such as reconstruction, denoising, and sketch cleaning. More recently, his focus has extended to deep learning for CAD, the design and analysis of musical instruments, and VR/XR/MR for different applications. He serves on the program committees of several leading geometry conferences and was technical papers co-chair of SMI 2022. He received the Best Paper Award at SPM 2005 and an Honorable Mention at SMI 2019.

Talk 2

Title:

Meshless Power Diagrams

Speaker:

Yanyang Xiao
Nanchang University, China
Assistant Professor

Abstract:

The computation of power diagrams (or weighted Voronoi diagrams) is a fundamental task in computational geometry and computer graphics. We lift the weighted seeds to a set of points in a space one dimension higher, in a way that differs from existing liftings; the power cells can then be obtained directly by intersecting the Voronoi cells of the lifted points with the original space. In this talk, I will present the resulting k-nearest-neighbor-based method for constructing power diagrams.
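
As orientation, below is a minimal numpy sketch of the classical lifting construction that the abstract contrasts with: a seed $(p, w)$ is lifted to $(p, \sqrt{w_{\max} - w})$ one dimension up, and the nearest lifted seed in Euclidean distance is exactly the seed of smallest power distance $\lVert x - p \rVert^2 - w$. The talk proposes a different lifting and replaces the brute-force query below with k-nearest-neighbor searches; all names here are mine, not the speaker's.

```python
import numpy as np

def power_cell_labels(queries, seeds, weights):
    """Label each query point with the index of the power cell containing it,
    via the classical lifting to one dimension higher (brute-force sketch).
    queries: (m, d) points, seeds: (n, d) points, weights: (n,) scalars."""
    w_max = weights.max()
    # lift each weighted seed (p, w) to (p, sqrt(w_max - w)) in R^{d+1}
    lifted = np.hstack([seeds, np.sqrt(w_max - weights)[:, None]])
    # embed queries in the hyperplane x_{d+1} = 0
    q = np.hstack([queries, np.zeros((len(queries), 1))])
    # nearest lifted seed in Euclidean distance == smallest power distance
    d2 = ((q[:, None, :] - lifted[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# usage: label a 64x64 grid against 5 random weighted seeds in 2D
rng = np.random.default_rng(0)
seeds, weights = rng.random((5, 2)), rng.random(5) * 0.1
grid = np.stack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)), -1)
labels = power_cell_labels(grid.reshape(-1, 2), seeds, weights)
```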

Speaker bio:

Dr. Yanyang Xiao is an assistant professor in the School of Mathematics and Computer Sciences at Nanchang University, China. He received his Ph.D. degree in computer science from Xiamen University, China, in 2020, under the supervision of Prof. Zhonggui Chen and Prof. Cheng Wang. His research interests include computer graphics, image processing, and point cloud processing.


AG Webinar Session 29

Date: Tuesday, November 12, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Miao Wang, Beihang University, China

Talk 1

Title:

XR in Sports: Enhancing Broadcasting, Spectating, and Training through Immersive Technology

Speaker:

Stefanie Zollmann
University of Otago, New Zealand
Associate Professor

Abstract:

Extended Reality (XR) offers a wide range of possibilities for enhancing sports broadcasting, live spectating, and sports training by integrating real-time data, immersive visuals, and interactive experiences into the sports landscape. This talk will discuss how we use XR technology to elevate the experience of watching live sports while also providing advanced tools for athlete training and performance analysis.

For sports broadcasting and live spectating, XR brings real-time data overlays—such as player statistics, tactical insights, and performance analytics—into the viewing experience, allowing fans to gain a richer understanding of the game. On-site situated visualization offers perspectives beyond traditional broadcasts, blending virtual information with live action to create an interactive, engaging environment for on-site spectators.

In sports training, XR interfaces can reduce athletes’ cognitive load by delivering instructional information precisely where it is needed. Using mobile AR for sports capture and replay, athletes gain immersive, on-demand self-training resources as well as personalized feedback that allows them to work on skill development even when coaches are not available on-site. This approach supports athletes in building a deeper understanding of complex skills and techniques independently, enhancing both comprehension and execution over time.

This talk will highlight the versatile applications of XR in both spectating and training, emphasizing how XR innovations can redefine engagement and performance in the sports industry. By examining the intersection of these applications, we can envision the future of XR in sports and the unique immersive experiences it brings to fans and athletes.

Speaker bio:

Stefanie Zollmann is an Associate Professor in the School of Computing at the University of Otago in New Zealand, where she co-leads the Visual Computing Otago research group. Before starting at Otago in 2016, she worked as a senior developer at Animation Research Ltd on eXtended Reality visualization, computer graphics, video processing, and computer-vision-based tracking technology for sports broadcasting. She also worked for Daimler and Graz University of Technology. Her main research is in the field of visual computing, at the intersection of traditional computer graphics, computer vision, machine learning, visualization, and human-computer interaction. Her research focus is on eXtended Reality (XR) for sports and media, situated visualization techniques for augmented reality, and novel methods for capturing content for immersive experiences. Stefanie serves on the editorial boards of IEEE Transactions on Visualization and Computer Graphics (TVCG) and Computers & Graphics. She was a program chair of IEEE ISMAR in 2019 and 2020 as well as of IEEE VR 2024.

Talk 2

Title:

Computational Assemblies: Bridging the Gap from Concept to Production

Speaker:

Ziqi Wang
EPFL/HKUST
Assistant Professor

Abstract:

Assemblies are ubiquitous in our daily lives, with applications ranging from small toys to large buildings. By combining parts of simpler shapes, assemblies enable the creation of complex structures, an approach extensively adopted in construction and manufacturing industries.

Assembly was once a labor-intensive manual process, and advancements in automation have led to the increased use of robots for assembly tasks. Still, designing and fabricating complex assemblies poses a significant intellectual challenge: current design and fabrication workflows require designers to specify numerous low-level details, which limits creativity and efficiency. To address these challenges, my research focuses on developing a new end-to-end concept-to-production workflow called computational assemblies. In this talk, I will showcase how new computational algorithms are opening doors to:

  1. Computational design of infinitely reusable structural systems.
  2. Computational analysis of topological interlocking assemblies.
  3. Computational fabrication of complex assemblies using augmented reality (AR) and robotics.

Speaker bio:

Ziqi Wang is a postdoctoral researcher jointly appointed at the Creative Computation Lab and Sycamore at École Polytechnique Fédérale de Lausanne (EPFL), supervised by Prof. Stefana Parascho and Prof. Maryam Kamgarpour. Before this, he was a postdoctoral researcher at the Computational Robotics Lab, ETH Zurich, advised by Prof. Stelian Coros. He completed his Ph.D. at the Geometric Computing Laboratory (GCM) at EPFL in 2021, guided by Prof. Mark Pauly. He received his bachelor’s degree in mathematics in 2017 from the University of Science and Technology of China (USTC). His research interests focus on geometric modeling, digital fabrication, and robotic assembly. This Fall, he will join the Hong Kong University of Science and Technology as an assistant professor. He is seeking PhD students interested in digital fabrication and robotic assembly for Fall 2025.


AG Webinar Session 28

Date: Tuesday, October 8, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Sida Peng, Zhejiang University, China

Talk 1

Title:

Scaling Up Data for Enhancing 3D Learning and Generation

Speaker:

Yang Liu
Microsoft Research Asia
Principal Researcher

Abstract:

The success of scaling laws in language and 2D vision fields has demonstrated that both model and data scaling are crucial to the performance of learning models. However, in the realm of 3D, the scarcity of 3D data and the gap between synthetic and real 3D data present notable challenges for 3D learning. In this talk, I will share our thoughts and efforts to scale up 3D data, including multi-source pretraining for 3D indoor scene understanding and the use of in-the-wild images for indoor scene generation.

Speaker bio:

Yang Liu is a Principal Researcher at Microsoft Research Asia. He previously served as a Postdoctoral Researcher at LORIA/INRIA. Dr. Liu earned his Ph.D. from the University of Hong Kong in 2008, following his M.S. and B.S. degrees in Computational Mathematics from the University of Science and Technology of China in 2003 and 2000, respectively. His research interests include 3D modeling, geometry processing, and 3D vision, with a recent focus on learning-based 3D processing, understanding, and generation. Dr. Liu has been an editorial board member for ACM Transactions on Graphics (ToG), IEEE Transactions on Visualization and Computer Graphics (TVCG), and IEEE Computer Graphics and Applications (CG&A). He has also served as Program and General Co-Chair for the GMP and SMI conferences.

Talk 2

Title:

From High-fidelity 3D Generative Models to Dynamic Embodied Learning

Speaker:

Ziwei Liu
Nanyang Technological University, Singapore
Assistant Professor

Abstract:

Beyond the confines of flat screens, 3D generative models are crucial for creating immersive experiences in virtual reality, not only for human users but also for robotics. Virtual environments and real-world simulators, often composed of complex 3D/4D assets, benefit significantly from the accelerated creation enabled by 3D generative AI. In this talk, we will introduce our latest research progress on 3D generative models for objects, avatars, scenes, and motions, including 1) large-scale 3D scene generation, 2) high-quality 3D diffusion for PBR assets, 3) high-fidelity 3D avatar generation, and 4) egocentric motion learning.

Speaker bio:

Ziwei Liu is currently an Assistant Professor at Nanyang Technological University, Singapore. His research revolves around computer vision, machine learning, and computer graphics. He has published extensively at top-tier conferences and journals in relevant fields, including CVPR, ICCV, ECCV, NeurIPS, ICLR, SIGGRAPH, TPAMI, TOG, and Nature Machine Intelligence. He is the recipient of the PAMI Mark Everingham Prize, MIT TR Innovators under 35 Asia Pacific, the ICBS Frontiers of Science Award, a CVPR Best Paper Award Candidate, and the Asian Young Scientist Fellowship. He serves as an Area Chair of CVPR, ICCV, ECCV, NeurIPS, and ICLR, as well as an Associate Editor of IJCV.


AG Webinar Session 27

Date: Tuesday, June 25, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Keke Tang, Guangzhou University, China

Talk 1

Title:

Recent Results from the Visual Computing and Computer Graphics Lab @ Ben Gurion University

Speaker:

Andrei Sharf
Ben-Gurion University, Israel
Associate Professor

Abstract:

In this talk, we present three interdisciplinary contributions in image recovery, shape recognition, and video prediction using deep learning. First, we introduce a deep neural network designed to recover characters and symbols from corrupted archaeological artifacts, such as palimpsests and petroglyphs, by segmenting and inferring missing parts of heavily degraded data. Due to limited annotated ground-truth data, we also develop data augmentation tools to enhance training. This work bridges the disciplines of archaeology, computer vision, and artificial intelligence. Second, we explore the power of abstract human sketches in conveying high-level 2D shape semantics. We introduce OneSketch, a crowd-sourced dataset of minimal one-line sketches, and a neural network that learns sketch-to-shape relations, enabling accurate differentiation and retrieval of 2D objects from simple sketches. This project combines insights from cognitive science, art, and machine learning. Third, we present PhyLoNet, an extension of PhyDNet for long-term future frame prediction in videos. By disentangling physical dynamics from other information and introducing a novel relative flow loss, PhyLoNet achieves high accuracy and quality in predicting future frames of natural motion datasets. We demonstrate the effectiveness of these methods through extensive experiments and evaluations, underscoring the importance of interdisciplinary approaches in advancing deep learning applications.

Speaker bio:

Andrei Sharf is an Associate Professor in the Computer Science Department at Ben-Gurion University. He previously served as a Visiting Associate Professor at the Shenzhen Institute of Advanced Technology (SIAT) of the Chinese Academy of Sciences and as a Postdoctoral Researcher at the School of Computer Science at UC Davis, USA. In 2012, Sharf received the Eurographics Young Researcher Award for his contributions to 3D point clouds and related problems. He leads the Lab of Visual Computing and Computer Graphics at Ben-Gurion University and heads the Computer Games track in the Computer Science Department.

His research interests encompass a wide range of cutting-edge topics in computer graphics, geometry processing, interactive techniques, and 3D modeling. He is also deeply involved in the development and application of deep learning algorithms to these areas, pushing the boundaries of what is possible in visual computing. Through his work, he aims to create groundbreaking methods and tools that enhance our ability to process and understand complex visual data.

Talk 2

Title:

CNC Flank Milling of Freeform Surfaces with Customized Tools

Speaker:

Pengbo Bo
Harbin Institute of Technology, Weihai, China
Professor

Abstract:

CNC flank milling is an advanced technique for machining the surfaces of industrial components. In the milling process, the machine tool is positioned so that the tool's surface of revolution forms a contact curve with the target surface. This wide-path milling enhances efficiency compared to point milling. However, the curve-contact constraint is highly restrictive, making path planning for freeform surfaces challenging.
In this talk, we present a series of works on the path planning of CNC flank milling. We discuss an optimization framework and the initialization of motion paths. To provide more degrees of freedom in path planning, we treat the shape of the machine tool as a variable in the optimization. Additionally, we explore the extension of CNC flank milling to trochoidal milling of 3D cavities. Real machining experiments are conducted, and comparisons to the results of commercial software are provided.
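
In broad strokes (notation mine, not taken from the talk), path planning of this kind can be cast as a least-squares problem over the tool motion and, for customized tools, the tool profile:

$$ \min_{\mathbf{a}(t),\; r(\cdot)} \;\sum_{t}\sum_{i} \Big( \operatorname{dist}\big(\mathbf{a}_i(t),\, S\big) - r(s_i) \Big)^{2}, $$

where $\mathbf{a}_i(t)$ are sample points on the tool axis at time $t$, $r(s_i)$ is the tool radius at axial parameter $s_i$ (a constant for cylindrical tools, an optimization variable when the tool shape itself is optimized), and $S$ is the target surface. Driving the residual to zero makes the tool's surface of revolution touch $S$ along a curve, which is exactly the contact constraint described above.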

Speaker bio:

Pengbo Bo is a professor at the School of Computer Science and Technology, Harbin Institute of Technology, Weihai. He obtained both his Bachelor's and Master's degrees in Computer Science from Shandong University and his Ph.D. in Computer Science from the University of Hong Kong. He completed postdoctoral research at the University of Hong Kong and the Visual Computing Center of KAUST. His research interests include geometric modeling, computer graphics, and computer-aided design. He received the Gaheon Award in 2017 and the Best Paper Award at SPM 2020.


AG Webinar Session 26

Date: Friday, April 26, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Haisen Zhao, Shandong University, China

Talk 1

Title:

Empowering Deep Modeling of 3D Point Clouds: From Representation, Learning Mechanism, to Loss Function

Speaker:

Junhui Hou
City University of Hong Kong, China
Associate Professor

Abstract:

3D point cloud data are becoming increasingly popular in various emerging applications, such as the metaverse, autonomous driving, and computer animations/games, as they provide an explicit representation of the geometric structures of objects and scenes. While deep learning has achieved great success in 2D image and video processing, designing efficient yet effective deep architectures and loss functions for 3D point cloud data is difficult, and as a result the representation capability of existing deep architectures is limited. In this presentation, I will showcase our endeavors to push the boundaries of this field, from the fundamental representation, through a cross-modal learning mechanism, to efficient yet effective loss functions. These new perspectives are poised to unlock numerous possibilities in deep 3D point cloud data modeling.
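
For orientation only: a standard baseline loss that work in this space commonly builds on is the Chamfer distance between point sets. A minimal numpy sketch (this is the textbook loss, not the loss proposed in the talk):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (m, 3) and Q (n, 3):
    each point is matched to its nearest neighbor in the other set."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # (m, n) squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# usage on two random point clouds
rng = np.random.default_rng(0)
P, Q = rng.random((128, 3)), rng.random((256, 3))
print(chamfer_distance(P, Q))
```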

Speaker bio:

Junhui Hou is an Associate Professor with the Department of Computer Science, City University of Hong Kong. His research interests include multi-dimensional visual computing, such as light field, hyperspectral, geometry, and event data. He received the Early Career Award (3/381) from the Hong Kong Research Grants Council in 2018. He has served or is serving as an Associate Editor for IEEE TIP, TVCG, and TCSVT.

Talk 2

Title:

Towards the High-Fidelity and Real-Time Dynamic View Synthesis

Speaker:

Sida Peng
Zhejiang University, China
Assistant Professor

Abstract:

Dynamic view synthesis is a long-standing problem in the fields of computer graphics and computer vision, and is important for many applications, such as immersive telepresence, sports broadcasting, and virtual reality. In this talk, I will present our recent work on improving the quality and speed of dynamic view synthesis. We design new sampling and pre-computation strategies to boost the rendering speed, and introduce an image-blending technique to achieve photorealistic rendering. Our work achieves state-of-the-art performance in dynamic view synthesis.

Speaker bio:

Sida Peng is an Assistant Professor at the School of Software Technology, Zhejiang University. He received his Ph.D. degree from the College of Computer Science and Technology at Zhejiang University in 2023. His research interest lies in building the next-generation volumetric media for end users and the neural simulator for intelligent systems. His work has been recognized with several awards, including a CVPR Best Paper Candidate, the 2020 CCF-CV Excellent Young Researcher Award, and the 2022 Apple Scholar in AI/ML.

AG Webinar Session 25

Date: Tuesday, February 27, 2024
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Ye Pan, Shanghai Jiao Tong University, China

Talk 1

Title:

Placing and Animating Virtual Avatars in Dissimilar Environment

Speaker:

Sung-Hee Lee
KAIST, South Korea
Professor

Abstract:

In this talk, we address the challenges of 3D telepresence and avatar-mediated augmented reality telepresence in dissimilar indoor environments. Rapidly developing technologies have enabled geographically separated users to interact through virtual avatars, but maintaining the semantics of users’ positions in diverse indoor spaces presents a significant challenge. To tackle this, we present novel methods for determining avatar positions and retargeting users’ deictic motions in real-time. We conduct a user survey to understand preferred avatar placements and identify attributes such as interpersonal relation and spatial characteristics. Leveraging this data, we train a neural network to predict similarity between placements and develop a method to preserve semantic placement across different spaces. Additionally, we propose a neural network-based framework for real-time retargeting of users’ deictic motions to avatars in dissimilar environments. Our framework translates sparse tracking signals of users’ motions to natural avatar motions, accommodating various user sizes. We demonstrate the effectiveness of our methods through a prototype AR-based telepresence system and user evaluations.

Speaker bio:

Sung-Hee Lee is a Professor at the Graduate School of Culture Technology at KAIST. His research focus is on modeling and animation of digital humans, avatars, and virtual characters for applications in VR/AR, telepresence, computer games, and computer animation. He obtained his Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2008, following his B.S. and M.S. degrees in Mechanical Engineering from Seoul National University, Korea, in 1996 and 2000, respectively. Prior to joining KAIST in 2013, he was an Assistant Professor at Gwangju Institute of Science and Technology (GIST). He received the Outstanding Ph.D. in Computer Science and Northrop Grumman Outstanding Graduate Research Award from UCLA in 2009. He has been honored with Achievement Awards from the Korea Society of Computer Graphics in 2016 and 2020, as well as Research Innovation Awards from the KAIST College of Liberal Arts and Convergence Science in 2017 and 2019. He serves as an Associate Editor for IEEE Transactions on Visualization and Computer Graphics (TVCG) and the Computer Animation and Virtual Worlds journal. He has served numerous conferences, including Pacific Graphics 2023 as Conference Chair, CASA 2021 as Program Chair, SCA 2019 as Conference Chair, and the Korea CG Society Conferences 2018-2019 as Organizing Chair.

Talk 2

Title:

Toward Immersive and Natural Interactions in Large-Scale Virtual Environments

Speaker:

Miao Wang
Beihang University, China
Associate Professor

Abstract:

Immersiveness and interactivity are pivotal characteristics of virtual reality (VR). In large-scale VR applications, users expect to move freely and interact naturally within virtual environments that far exceed the physical constraints of actual spaces. However, discrepancies in scale and structure between the limited physical environment in which users are situated and the virtual scenes they inhabit, as well as among the physical environments of distributed users, may lead to inconsistent semantics in spatial interactions and mismatched collaborative interaction contexts for multiple users, significantly diminishing both immersion and interaction efficiency. In this talk, I will introduce our group's work on natural-interaction-oriented locomotion redirection methods and an open-source framework. Additionally, I will discuss our exploration of methods for contextual semantic association with virtual objects within the scene, as well as our research on efficient collaborative interaction and roaming in blended virtual reality environments. These efforts are propelling us toward breaking free from the limitations the physical environment imposes on immersive scene interaction.
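
As background on redirection gains (standard notation from the redirected-walking literature, not necessarily the talk's): a rotation gain $g_R$, a translation gain $g_T$, and a curvature radius $r$ map real motion to virtual motion as

$$ \theta_{\text{virt}} = g_R\,\theta_{\text{real}}, \qquad d_{\text{virt}} = g_T\,d_{\text{real}}, \qquad \kappa = 1/r, $$

where the curvature gain $\kappa$ bends a physically straight walk into a virtual arc; keeping the gains within perceptual detection thresholds lets users roam a large virtual space inside a small physical room.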

Speaker bio:

Miao Wang is an Associate Professor at the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University. His research interests include redirected locomotion and immersive experience in virtual environments, virtual character reconstruction and animation, and neural scene representations and synthesis for VR. Miao completed his Ph.D. in Computer Science at Tsinghua University in 2016 and pursued postdoctoral research at the same institution before joining Beihang University in 2018. He has published over 40 papers in venues such as ACM TOG, IEEE TVCG, and IEEE TIP. Since 2020, he has served as a program committee member for the IEEE VR and ISMAR conferences, and he was an organizing co-chair for ChinaVR 2023.

AG Webinar Session 24

Date: Tuesday, November 21, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Shi-Sheng Huang, Beijing Normal University, China

Talk 1

Title:

Curved Origami Design

Speaker:

Jun Mitani
University of Tsukuba, Japan
Professor

Abstract:

While origami is an art form that creates various shapes from a single sheet of paper, its engineering potential has attracted significant attention, leading to extensive research. Folding, the fundamental operation in origami, typically involves straight folds that tend to enclose flat areas. However, paper can be flexibly bent to create curved surfaces, known as developable surfaces. By incorporating curved folds, it is possible to create shapes composed of piecewise developable surfaces, offering a range of modeling possibilities similar to origami with straight folds. However, research on curved folding has not been as widespread. In this talk, the speaker will present a range of his research on curved folding, including artistic forms that incorporate curved folds and their corresponding design methods.

Speaker bio:

Jun Mitani is a professor of Information and Systems at the University of Tsukuba. He received his Ph.D. in engineering from the University of Tokyo in 2004 and has held his present post since April 2015. His research interests center on computer graphics, in particular geometric modeling techniques and their application to origami design. His origami artworks are characterized by three-dimensional shapes with smooth curved surfaces. His main books are "3D Origami Art" and "Curved Origami Design". In 2010, through an exchange with ISSEY MIYAKE, he contributed to the launch of the new 132 5. fashion brand. He also cooperated in the design of the origami used in the movies "Shin Godzilla" (2016) and "Death Note: Light up the NEW world" (2016). His unique origami has been well received around the world, and he has received invitations to hold workshops and exhibitions in Germany, Switzerland, Italy, Israel, and many other countries. His work inspired the design of the trophy for the Player of the Match winner of each game at the Rugby World Cup 2019. His major awards include the Microsoft Research Japan Informatics Research Award (2012) and the 2nd Japan Society for Graphic Science Award (2007). He was appointed as a Japan Cultural Envoy by the Agency for Cultural Affairs in 2019, visiting eight Asian countries in November and December of that year.

Talk 2

Title:

From 3D Models to Papercraft: The Proxies of Developability

Speaker:

Qing Fang
University of Science and Technology of China, China
Postdoctoral Researcher

Abstract:

Papercraft models are constructed from sheets of heavy paper that are cut out, folded, scored, and glued together. As each piece must be developable, there is inevitably some approximation error between the craft model and the input shape. To control this error, developability-enhanced methods are applied. In this talk, I will present our recent progress on papercraft design algorithms, focusing on optimizing developability proxies on complex digital models, including edge-oriented discrete developability and Gaussian curvature on signed distance fields (SDFs). Additionally, I will discuss the limitations of current methods and potential future work in this area.
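
As background (standard notation, shown for orientation; the talk's edge-oriented energy is a different proxy): the most common discrete developability measure at an interior mesh vertex $v$ is the angle defect, i.e. the integrated Gaussian curvature

$$ K(v) \;=\; 2\pi \;-\; \sum_{f \ni v} \theta_f(v), $$

where $\theta_f(v)$ is the corner angle of triangle $f$ at $v$. A patch is developable (flattenable without stretch) exactly when $K(v) = 0$ at every interior vertex, so developability-enhanced methods drive such proxies toward zero while bounding the approximation error.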

Speaker bio:

Qing Fang is a Postdoctoral Researcher in the School of Mathematical Sciences, University of Science and Technology of China. He received his Ph.D. in 2021 from University of Science and Technology of China. His research interests include geometric processing and computational fabrication.

AG Webinar Session 23

Date: Tuesday, October 17, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Ziqi Wang, ETH Zurich

Talk 1

Title:

Artistic robotic drawing on large, non-planar canvas

Speaker:

Young J. Kim
Ewha Womans University, South Korea
Professor

Abstract:

Throughout the history of art, artists have continuously embraced technological advancements, incorporating machines and innovative technologies into their creative endeavors to push the boundaries of traditional art forms. Today, contemporary artists are at the forefront of exploring new avenues for creative expression, with robots emerging as a significant medium for artistic innovation. In this presentation, we will introduce our recent robotic drawing projects that extend the world of art. These robotic systems are designed to produce artistic drawings on large, non-planar canvases, employing techniques such as impedance control, non-conformal mapping, and coverage planning. Furthermore, we will delve into the realm of TSP-pen art, an artistic form that involves the creation of images using piecewise-continuous line segments, showcasing how our robotic systems can extend this concept to achieve quality results. Lastly, we will unveil a stroke-based robotic drawing system that not only produces high-quality drawings but also mimics the behaviors of a human artist, demonstrating the exciting possibilities at the intersection of art and technology.

Speaker bio:

Young J. Kim is a full professor of computer science and engineering at Ewha Womans University. His research interests include interactive computer graphics, computer games, robotics, AI, haptics, and geometric modeling. He has published more than 100 papers in leading conferences and journals in these fields. He received best paper awards at the ACM Solid Modeling Conference in 2003 and the International CAD Conference in 2008, and the best poster award at the Geometric Modeling and Processing Conference in 2006. He was selected as best research faculty and an Ewha fellow in 2008 and 2016, and as the best lecturer at the Ewha ELTEC engineering college in 2023. He received outstanding research case awards from the Korean National Research Foundation and the Korean Ministry of Knowledge and Economy in 2008 and 2011. He is currently the president of the Korea Computer Graphics Society, a vice president of Asiagraphics, and an executive committee member of Eurographics. He serves on the editorial boards of Computer Animation and Virtual Worlds, the International Journal of Computational Geometry and Applications, and Advances in Robotics Research. Since 2022, he has served as a review board (RB) member of the National Research Foundation of Korea.

Talk 2

Title:

Algorithmic planning for robotic assembly of building structures

Speaker:

Yijiang Huang
ETH Zurich
Postdoctoral Researcher

Abstract:

How can we enable robots to build houses for us? Can they build structures that are impossible to build by humans? Answers to these questions are hidden in the process of programming or planning the robots to achieve our high-level assembly goal. This planning process reveals an intricate interplay between robot reachability, structural stability, and task assignment.

In this talk, I will present our work on automated planning approaches to program robot builders and assign material resources in architectural-scale experiments. We demonstrate that these algorithms enhance design-build flexibility by (1) enabling robotic assembly of arbitrary design inputs, (2) reducing wasted programming effort for new robot fabrication processes, and (3) allowing designs responsive to an upcycled material inventory.

Speaker bio:

Yijiang Huang is a Postdoctoral Researcher in the Department of Computer Science at ETH Zurich. His research sits at the intersection of architecture, computing, and robotics, aiming to make design and construction more connected. Yijiang completed his Ph.D. in the Building Technology Program at MIT's Department of Architecture. Before MIT, he studied applied math and did research in computer graphics at the University of Science and Technology of China, where he received his Bachelor of Science.

AG Webinar Session 22

Date: Friday, September 15, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Tao Yu, Tsinghua University, China

Talk 1

Title:

Monocular Camera based High-fidelity Digital Human Modeling and Animation

Speaker:

Juyong Zhang
University of Science and Technology of China, China
Professor

Abstract:

Traditional digital human modeling and animation methods rely on expensive acquisition equipment, complex production processes, and a large amount of manual interaction by professional staff, which greatly limits their wide application. The 3DV group at USTC has conducted research on monocular-camera-based high-fidelity digital human modeling and animation, toward the goal of "digitalizing everyone in the world". In this talk, I will share our research on high-fidelity 3D head modeling, audio-driven talking heads, and clothed human modeling and animation.

Speaker bio:

Juyong Zhang is a professor in the School of Mathematical Sciences at the University of Science and Technology of China. He received his BS degree from the University of Science and Technology of China in 2006 and his Ph.D. degree from Nanyang Technological University, Singapore. His research interests include computer graphics, 3D computer vision, and numerical optimization. He serves as an associate editor for IEEE Transactions on Multimedia and IEEE Computer Graphics and Applications.

Talk 2

Title:

Digital Human Modeling with Light

Speaker:

Shunsuke Saito
Meta Reality Labs Research, USA
Research Scientist

Abstract:

Leveraging light in various ways, we can observe and model physical phenomena or states which may not be possible to observe otherwise. In this talk, I will introduce our recent exploration on digital human modeling with different types of light. First, I will present our recent work on the modeling of relightable human heads, hands, and accessories. In particular, we will take a deep dive into our advancement in a capture system as well as learning algorithms that enable the real-time and photorealistic rendering of dynamic humans with global light transport. Then, I will also present our recent work on 3D hair reconstruction with X-rays. Image-based hair reconstruction is an extremely challenging task due to the limited observation of hair interior. To address this, we propose a fully automatic hair reconstruction method by utilizing computed tomography (CT). We show that our approach achieves high-fidelity reconstruction of 3D hair strands for a wide variety of hair styles, which are ready for downstream applications such as rendering and simulation.

Speaker bio:

Shunsuke Saito is a Research Scientist at Meta Reality Labs Research in Pittsburgh. He obtained his PhD degree at the University of Southern California. Prior to USC, he was a Visiting Researcher at the University of Pennsylvania in 2014. He obtained his BE (2013) and ME (2014) in Applied Physics at Waseda University. His research lies at the intersection of computer graphics, computer vision, and machine learning, centered around digital humans, 3D reconstruction, and performance capture. His work has been published in SIGGRAPH, SIGGRAPH Asia, NeurIPS, ECCV, ICCV, and CVPR, two of which were nominated for the CVPR Best Paper Award (2019, 2021). His real-time volumetric teleportation work also won the Best in Show award at SIGGRAPH 2020 Real-Time Live!

AG Webinar Session 21

Date: Thursday, June 29, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Pengbo Bo, Harbin Institute of Technology, Weihai, China

Talk 1

Title:

From Curved to Flat and Back Again: Mesh Processing for Fabrication

Speaker:

Mirela Ben-Chen
Technion-Israel Institute of Technology, Israel
Associate Professor

Abstract:

Assume that for a craft project you were given a task: create a (doubly) curved surface. What are your options? With applications varying from art to health care to architecture, making shapes is a fundamental problem. In this talk we will explore the challenges of creating curved shapes from different materials, and describe the math and practice of a few solutions. We will additionally consider the limitations of existing approaches, and discuss a few open problems.

Speaker bio:

Prof. Ben-Chen is an Associate Professor at the Center for Graphics and Geometric Computing of the CS Department at the Technion. She received her Ph.D. from the Technion in 2009, was a Fulbright postdoc at Stanford from 2009 to 2012, and started as an Assistant Professor at the Technion in 2012. Prof. Ben-Chen is interested in modeling and understanding the geometry of shapes. She uses mathematical tools such as discrete differential geometry, numerical optimization, and harmonic analysis for applications such as animation, shape analysis, fluid simulation on surfaces, and computational fabrication. Prof. Ben-Chen has won an ERC Starting Grant, the Henry Taub Prize for Academic Excellence, the Science Prize of the German Technion Society, and multiple best paper awards.

Talk 2

Title:

Piecewise Developable Approximations for Triangular Meshes

Speaker:

Xiao-Ming Fu
University of Science and Technology of China, China
Associate Professor

Abstract:

Shape modeling is fundamental for many computer graphics, engineering, and architecture applications. In manufacturing-related applications, modeling a shape with developable surfaces provides an opportunity to reduce manufacturing and construction costs because only flat pieces of material need to be folded, bent, or rolled. Since most shapes are not globally developable, we discuss how to automatically model shapes with piecewise developable patches. In this talk, I will introduce our latest progress in piecewise developable approximations of triangular meshes.

Speaker bio:

Xiao-Ming Fu is an associate professor at the School of Mathematical Sciences, University of Science and Technology of China. He received a BSc degree in 2011 and a PhD degree in 2016 from University of Science and Technology of China. His research interests include geometric processing and computer-aided geometric design. His research work can be found at his research website: https://ustc-gcl-f.github.io/.

AG Webinar Session 20

Date: Tuesday, April 25, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Bo Ren, Nankai University, China

Talk 1

Title:

Simulating Complex Flows for Foods and Painterly Drawings

Speaker:

Yonghao Yue
Aoyama Gakuin University (AGU), Japan
Professor

Abstract:

I would like to share some of our work on simulating fluid-like foods, e.g., creams and sauces, as well as on mimicking artistic painterly drawings. We will start by discussing the elasto-viscoplastic Herschel-Bulkley model for modeling everyday fluid-like foods, their simulation using the material point method, and how to model their mixtures. Then, moving on to the topic of non-photorealistic rendering, we consider how to mimic the brushstroke styles seen in painterly drawings through modeling and learning the flows of the strokes.
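
For context, the Herschel-Bulkley model generalizes Newtonian flow with a yield stress and a power-law exponent. In one common scalar form (notation assumed here),

$$ \tau \;=\; \tau_y \;+\; k\,\dot{\gamma}^{\,n} \quad \text{for } \tau > \tau_y, \qquad \dot{\gamma} = 0 \ \text{otherwise}, $$

where $\tau$ is the shear stress, $\tau_y$ the yield stress below which the material holds its shape (why whipped cream keeps its peaks), $k$ the consistency, $\dot{\gamma}$ the shear rate, and $n$ the flow index ($n<1$ shear-thinning, $n=1$ Bingham plastic, $n>1$ shear-thickening). The elasto-viscoplastic variant in the talk couples this with an elastic response below yield.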

Speaker bio:

Yonghao Yue is a Professor leading the Computer Graphics Lab at the Department of Integrated Information Technology, Aoyama Gakuin University (AGU). Before joining AGU, he worked at The University of Tokyo and Columbia University. He received his Ph.D. in Information Science and Technology from The University of Tokyo in 2011. His research interests lie primarily on the mathematical side of computer graphics, covering physically based simulation and design.

Talk 2

Title:

Computational Design of Physical Systems with Solid-Fluid Coupling

Speaker:

Tao Du
Tsinghua University, China
Assistant Professor

Abstract:

Physical systems with solid-fluid coupling are widespread in nature and have inspired various engineering designs and applications. Designing such systems for extreme performance is challenging due to their intricate dynamics. This talk will present our recent work on modeling, simulating, and optimizing physical systems with solid-fluid coupling, drawing inspiration from physics simulation, machine learning, and numerical optimization. We demonstrate our computational methods on the design of multiple solid-fluid systems, including microfluidic devices, soft underwater robots, and aerial vehicles.

Speaker bio:

Tao Du is an Assistant Professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. His research combines physics simulation, numerical analysis, and machine learning to understand physical systems. His research works have been published at top-tier graphics, learning, and robotics conferences and have been covered by major technology media outlets. Before joining Tsinghua, Tao Du obtained his M.S. in Computer Science at Stanford University and completed his Ph.D. in Computer Science at MIT.

AG Webinar Session 19

Date: Tuesday, March 28, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Lin Lu, Shandong University, China

Talk 1

Title:

Computational Design of Geometric Puzzles

Speaker:

Peng Song
Singapore University of Technology and Design, Singapore
Assistant Professor

Abstract:

Geometric puzzles are nontrivial geometric problems that challenge our ingenuity. The task of solving such a puzzle is to put together the puzzle pieces to form a meaningful 3D shape. Traditionally, the design of a new geometric puzzle requires hours or even days of mental work by a skilled professional. With recent advances in digital fabrication, there is an interest in and need for designing personalized geometric puzzles for general users such as puzzle enthusiasts, collectors, and players. To address this challenge, researchers in computer graphics have developed computational techniques for designing a variety of geometric puzzles. In this talk, I will review state-of-the-art works on the computational design of geometric puzzles and introduce our recent works on this topic. In particular, I will focus on a specific class of geometric puzzles called interlocking puzzles, and formulate their design as an assembly-aware shape decomposition problem. I will introduce computational approaches for designing two different kinds of interlocking puzzles, and show how these approaches enable the design of interlocking puzzles that cannot be achieved with previous methods.

Speaker bio:

Peng Song is an Assistant Professor of Computer Science at Singapore University of Technology and Design (SUTD). Prior to joining SUTD in 2019, he was a research scientist at EPFL, Switzerland, and an Associate Researcher at University of Science and Technology of China. He received his PhD from Nanyang Technological University, Singapore in 2013. His research interests lie in computer graphics, with a particular focus on geometry modeling and computational design. He received SIGGRAPH Technical Papers Award Honorable Mention in 2022, co-organized a virtual seminar series on Computational Fabrication in 2021 and 2022, and has served on the program committee of many international conferences including SIGGRAPH Asia, Pacific Graphics, and Symposium on Solid and Physical Modeling (SPM).

Talk 2

Title:

Efficient Design and Fabrication for Complex Geometries

Speaker:

Haisen Zhao
Shandong University, China
Professor

Abstract:

This talk will introduce my research on computer graphics for intelligent manufacturing, which aims to produce intelligent computational tools for the new industrial revolution. I have worked on the following topics: (1) To improve the iterative efficiency of geometric design and manufacturing, we introduced a compact representation based on a domain-specific language, together with a multi-objective optimization method balancing material usage, time cost, fabrication precision, and geometric design. (2) To improve the manufacturing efficiency and quality of complex geometries, we proposed a decomposition method based on set-cover theory for setup planning in CNC machining; we also presented a novel space-filling curve achieving global continuity and low curvature, used in additive and subtractive manufacturing. (3) To precisely control the physical properties of geometric microstructures, we proposed a tightly coupled optimization between physical properties and geometric structure. Finally, a brief introduction to future work will be given.

Speaker bio:

Haisen Zhao is a professor at the School of Computer Science and Technology, Shandong University. His research interest lies in computer graphics and its applications in digital fabrication. He completed his Ph.D. at Shandong University (2018) under the supervision of Baoquan Chen, and received his Master's and Bachelor's degrees from Shandong University in 2014 and 2011, respectively. He was a postdoctoral researcher at the University of Washington, working with Prof. Adriana Schulz, from 2019 to 2021, and at the Institute of Science and Technology Austria (IST Austria), working with Prof. Bernd Bickel, from 2021 to 2022. He received the First Prize of the Shandong Natural Science Award in 2020 and the CCF Doctoral Dissertation Award in 2019.

AG Webinar Session 18

Date: Tuesday, February 28, 2023
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Bin Wang, Beijing Institute for General Artificial Intelligence, China

Talk 1

Title:

Periodic Autoencoder for Character Animation and Isotropic ARAP energy using Cauchy-Green invariants

Speaker:

Taku Komura
The University of Hong Kong, China
Professor

Abstract:

In this talk, I will present our recent works about virtual avatars and physically-based animation.

First, I will talk about the Periodic Autoencoder (PAE), which can learn periodic features from large unstructured motion datasets in an unsupervised manner. The character movements are decomposed into multiple latent channels that capture the non-linear periodicity of different body segments while progressing forward in time. Our method extracts a multi-dimensional phase space from full-body motion data, which effectively clusters animations and produces a manifold in which computed feature distances provide a better similarity measure than in the original motion space to achieve better temporal and spatial alignment. We demonstrate that the learned periodic embedding can significantly help to improve neural motion synthesis in a number of tasks, including diverse locomotion skills, style-based movements, dance motion synthesis from music, synthesis of dribbling motions in football, and motion query for matching poses within large animation databases.
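
To make the decomposition concrete, here is a minimal, non-differentiable numpy sketch of the per-channel sinusoidal parameterization (amplitude, frequency, offset, phase) that the PAE learns. The actual model extracts these with convolutional encoders and a differentiable FFT layer during training, so this is an illustration only; all names are mine:

```python
import numpy as np

def channel_phase_params(x, dt):
    """Estimate amplitude, frequency, offset, and phase of the dominant
    periodic component of one latent channel via a simple FFT heuristic."""
    n = len(x)
    offset = x.mean()
    spec = np.fft.rfft(x - offset)
    freqs = np.fft.rfftfreq(n, d=dt)
    k = np.argmax(np.abs(spec[1:])) + 1           # dominant non-DC bin
    amplitude = 2.0 * np.abs(spec[k]) / n
    phase = np.angle(spec[k])                     # phase at the dominant bin
    return amplitude, freqs[k], offset, phase

# usage on a synthetic "latent channel": 2 s of 60 fps motion
t = np.arange(0.0, 2.0, 1.0 / 60.0)
x = 0.5 * np.sin(2 * np.pi * 3.0 * t + 0.7) + 0.1
print(channel_phase_params(x, dt=1.0 / 60.0))     # ~ (0.5, 3.0, 0.1, ...)
```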

Next, I will present a novel isotropic ARAP energy formulation based on Cauchy-Green (CG) invariants. It had been believed that an explicit formulation of the isotropic ARAP energy using Cauchy-Green invariants is not possible due to a rotation-polluted trace term. Our analysis reveals the relationship between the CG invariants and the trace term to be a polynomial whose roots equate to the trace term, and whose derivatives give rise to closed-form expressions for the Hessian that guarantee positive semi-definiteness, enabling fast and concise Newton-type implicit time integration. A consequence of this analysis is a novel analytical formulation for computing the rotations and singular values of deformation-gradient tensors without explicit or numerical factorization, which is significant: it yields up to a 3.5x speedup and faster energy-function evaluation, reducing solver time.
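
To fix notation (mine, not necessarily the talk's): with deformation gradient $\mathbf{F}$ and polar decomposition $\mathbf{F} = \mathbf{R}\mathbf{S}$, the isotropic ARAP energy is

$$ \Psi_{\mathrm{ARAP}}(\mathbf{F}) \;=\; \lVert \mathbf{F} - \mathbf{R} \rVert_F^2 \;=\; \operatorname{tr}(\mathbf{C}) \;-\; 2\operatorname{tr}(\mathbf{S}) \;+\; d, \qquad \mathbf{C} = \mathbf{F}^{\top}\mathbf{F}, $$

where $d$ is the dimension. Here $\operatorname{tr}(\mathbf{C})$ is a Cauchy-Green invariant, but $\operatorname{tr}(\mathbf{S}) = \sum_i \sigma_i$ is the rotation-polluted trace term; the observation described above is that it satisfies a polynomial in the CG invariants, e.g. in 2D $\big(\operatorname{tr}\mathbf{S}\big)^2 = \operatorname{tr}(\mathbf{C}) + 2\det(\mathbf{F})$ for $\det\mathbf{F} \ge 0$, so the energy and its derivatives can be written in closed form without numerical factorization.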

Speaker bio:

Taku Komura joined The University of Hong Kong in 2020. Before that, he worked at the University of Edinburgh (2006-2020), City University of Hong Kong (2002-2006), and RIKEN (2000-2002). He received his BSc, MSc, and PhD in Information Science from the University of Tokyo. His research has focused on data-driven character animation, physically-based character animation, crowd simulation, 3D modelling, cloth animation, anatomy-based modelling, and robotics. Recently, his main research interests have been physically-based animation and the application of machine learning techniques to animation synthesis. He received the Royal Society Industry Fellowship (2014) and the Google AR/VR Research Award (2017).

Talk 2

Title:

Co-speech gesture synthesis and generative motion controllers

Speaker:

Libin Liu
Peking University, China
Assistant Professor

Abstract:

Generating realistic human behaviors is a fundamental problem in computer animation and also one of the most demanding techniques in many emerging fields such as digital humans and metaverse. There has been tremendous progress in this area in the past years, partially thanks to the rapid advancement in deep learning and reinforcement learning. In this talk, I will briefly introduce two of our recent works on this topic, both published in SIGGRAPH Asia 2022. In the first work, we present a novel co-speech gesture synthesis framework that achieves convincing results on both rhythm and semantics. We devise an explicit rhythm-based generation scheme to ensure the temporal coherence between the vocalization and gestures. We also develop a disentanglement mechanism that builds correspondence between the speech and motion at different levels of features to achieve semantics-aware gesture generation. In the second work, we propose ControlVAE, a VAE-based generative control policy for physically simulated characters and robots. With a model-based reinforcement learning scheme, the policy effectively embeds a large variety of motion skills into a rich and versatile latent space, which allows efficient learning of downstream tasks such as interactive control of the character’s action and response to unexpected perturbations.

Speaker bio:

Libin Liu is an Assistant Professor at Peking University. Before joining Peking University, he was the Chief Scientist of DeepMotion Inc. and had been a postdoc researcher at Disney Research and the University of British Columbia. He received his Ph.D. degree in computer science in 2014 from Tsinghua University. His research interests include character animation, physics-based simulation, motion control, and related areas such as reinforcement learning and deep learning. He served on the program committees of all major computer graphics conferences, including ACM SIGGRAPH (North America/Asia), Pacific Graphics, ACM SIGGRAPH/Eurographics Symposium on Computer Animation, etc.

AG Webinar Session 17

Date: Friday, December 23, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Ran Yi, Shanghai Jiao Tong University, China

Talk 1

Title:

Televerse: Teleport to the Augmented Real-World driven by 3i innovation (#immersive, #interactive, #intelligent)

Speaker:

Taehyun James (TJ) Rhee
Victoria University of Wellington, New Zealand
Associate Professor

Abstract:

Computer graphics (CG) and visual effects (VFX) enable seamless blending between computer-generated imagery and recorded real footage. Recent advancements in real-time technologies are driving the transition from offline post-production to real time. Immersive media technologies transform the end-user experience from observing a story to feeling a high sense of presence within it. High-speed networking changes media distribution from pre-recorded media to live streaming. Modern AI contributes automatic pipelines and smart solutions for better human-media interaction.

This talk will introduce our research in real-time live visual effects, immersive telepresence, augmented telecollaboration, volumetric environment capturing and modelling, and appearance modelling and reconstruction, which has been driven by 3i (immersive, interactive, intelligent) innovation and by interdisciplinary research across computer graphics, vision, data science, machine learning, and more.

We will further discuss the convergence of the 3i innovations, introduce the concept of augmented telepresence, and present a new framework and platform, "televerse", which gives users the illusion of virtually teleporting, augmenting their telepresence to communicate with people at a distance. Potential applications and future extensions are discussed alongside our recent case studies with public end-users.

Speaker bio:

Taehyun James (TJ) Rhee is the Director of the Computational Media Innovation Centre, an Associate Professor (tenured full professor in the US system) in the Faculty of Engineering, a co-founder of the Computer Graphics degrees at the School of Engineering and Computer Science at Victoria University of Wellington, New Zealand, and a founder of the mixed reality start-up DreamFlux. He has worked in the immersive and interactive technology sector for over 25 years, across academia and industry. At Samsung (1996-2012), he was a Principal Researcher and General Manager leading Computer Graphics and Medical Physics Research at the Samsung Advanced Institute of Technology (SAIT), and a Senior Researcher and Senior Manager at the Research Innovation Centre of Samsung Electronics. He served as general chair for Pacific Graphics 2020-2021, XR chair for SIGGRAPH Asia 2018, and an executive committee member of the Asiagraphics Association.

Talk 2

Title:

Capturing and Displaying 3D Avatars in Immersive Environments

Speaker:

Ye Pan
Shanghai Jiao Tong University, China
Associate Professor

Abstract:

The use of a self-avatar representation in the metaverse has been shown to have important effects on user behavior. Various studies have demonstrated that avatars exhibiting higher levels of visual quality or tracking quality (e.g., eye tracking, facial expression, and finger tracking) can potentially communicate more of the subtleties of human nonverbal communication, enhancing the perceived authenticity of the interaction. However, providing a self-avatar also raises problems, owing to the uncanny valley and various discrepancies. In this talk, I will discuss different avatar representations within AR/VR/MR and demonstrate the effectiveness of various self-avatars in social interaction.

Speaker bio:

Ye Pan is an Associate Professor at Shanghai Jiao Tong University, where she leads the "Character Lab", which focuses on using virtual environments and computer graphics technologies to advance character animation and avatar performance, and thus improve user experience. She previously worked at Disney Research Los Angeles, and received her Master's and PhD in computer science from University College London. She has served as an Associate Editor of the International Journal of Human-Computer Studies and as a regular member of IEEE Virtual Reality program committees.

AG Webinar Session 16

Date: Wednesday, November 30, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Xuejin Chen, University of Science and Technology of China, China

Talk 1

Title:

Controllable Generative Models: The Quest for Photorealism Reloaded

Speaker:


Dani Lischinski
The Hebrew University of Jerusalem, Israel
Professor

Abstract:

The synthesis of images that possess photo-like realism has been a long-standing grand challenge in the field of computer graphics. Thanks to the astounding success of deep neural models over the last few years, it might appear that this quest has finally been accomplished. Indeed, the quality and ease of photorealistic image synthesis has taken a giant leap forward, and images generated by GANs and denoising diffusion models are often indistinguishable from real photographs. In this talk, though, I would like to argue that the quest is still there, but the emphasis is now shifting from flawlessly imitating reality towards providing users with means to control the outcome, and doing so in an intuitive and predictable fashion.

Specifically, I will describe some of our recent results on analyzing and using the latent spaces of StyleGAN to manipulate generated and real images. First, I will show that the space of channel-wise style parameters, which we refer to as StyleSpace, is significantly more disentangled than the other latent spaces explored by previous works. I will also describe methods we have developed for discovering a large collection of style channels, each of which is shown to control a distinct visual attribute in a highly disentangled manner. Next, I will describe several ways of leveraging the power of recent Contrastive Language-Image Pre-training (CLIP) models in order to develop a text-based interface for StyleGAN image manipulation that does not require an extensive a priori analysis. Finally, I will demonstrate how combining Denoising Diffusion models with CLIP guidance enables performing local edits of generic natural images.
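
A minimal sketch (NumPy only) of the kind of channel-wise edit described above. The generator, layer names, channel index, and edit strength are all placeholders: an actual implementation would load a pretrained StyleGAN, compute its per-layer style vectors, and decode the edited styles back into an image.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-layer style parameters ("StyleSpace") of one generated
# image: a dict mapping layer name -> style vector, one scalar per channel.
styles = {f"layer_{i}": rng.standard_normal(512) for i in range(10)}

def edit_style_channel(styles, layer, channel, delta):
    """Shift a single style channel; in StyleSpace, individual channels are
    reported to control distinct visual attributes in a disentangled way."""
    edited = {k: v.copy() for k, v in styles.items()}
    edited[layer][channel] += delta
    return edited

# Hypothetical edit: push channel 45 of layer_6 by +3.0.
edited_styles = edit_style_channel(styles, "layer_6", 45, delta=3.0)
# A real pipeline would now run the StyleGAN synthesis network on
# `edited_styles` to obtain the manipulated image.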

Speaker bio:

Dani Lischinski is a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel. He received his PhD from Cornell University in 1994, and was a postdoc at the University of Washington until 1996. In 2002/3, he spent a sabbatical year at Pixar Animation Studios. In 2012 he received the Eurographics Outstanding Technical Contributions Award, and he served as the Technical Papers Chair for SIGGRAPH Asia 2017. His areas of interest span a wide variety of topics in the fields of computer graphics, image and video processing, and computer vision. Most of his recent work involves deep neural networks and their applications in graphics and vision.

Talk 2

Title:

Audio-driven Talking Face Video Generation

Speaker:

Ran Yi
Ran Yi

Shanghai Jiao Tong University, China

Assistant Professor

Abstract:

Audio-driven talking face video generation has attracted much attention recently and has a wide range of applications, such as bandwidth-limited video transmission, virtual newsreaders, and role-playing game generation. In this talk, I will present our recent works on audio-driven realistic talking face generation and artistic talking face generation. For audio-driven realistic talking face generation, we focus on learning personalized head movements and address the one-to-many problem of speech-to-head-pose mapping. For artistic talking face generation, we focus on how to generate an artistic talking-face video of line drawings or cartoons from a single face photo and a speech signal.

Speaker bio:

Ran Yi is currently an Assistant Professor with the Department of Computer Science and Engineering, Shanghai Jiao Tong University. She received the BEng and PhD degrees from Tsinghua University, China, in 2016 and 2021, respectively. Her research interests include computer vision, computer graphics, and computational geometry. She has published over 30 papers in major international journals and conferences.

AG Webinar Session 15

Date: Tuesday, October 25, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Fanglue Zhang, Victoria University of Wellington, New Zealand

Talk 1

Title:

3D Reconstruction and Morphological Analysis for Neurons from Large-scale Microscopy Images

Speaker:

Xuejin Chen
Xuejin Chen

University of Science and Technology of China, China

Professor

Abstract:

The fast development of nanometer-resolution electron microscopy (EM) imaging technology allows observation of neurons at the level of synapses. However, the massive data volume of nanometer-resolution EM images brings significant challenges for cell segmentation and analysis. Since annotating volumetric data for training deep-learning models is laborious and tedious, we investigate self-supervised learning for 3D neuron segmentation from volumetric microscopy images and for morphological representation of large-scale neuron datasets. In this talk, I will first introduce our 3D neuron reconstruction approach that integrates structural guidance and 3D segmentation within a GAN framework. Second, I will introduce our work on self-supervised learning of morphological representations for neuron skeletons and 3D EM segments using contrastive learning. I will also discuss the challenges in full-brain neuron reconstruction and tracing.
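
As a concrete reference point for the contrastive-learning component mentioned above, here is a minimal InfoNCE-style loss in PyTorch. The encoder and the neuron-specific augmentations from the talk are not reproduced; the random tensors merely stand in for embeddings of two augmented views of the same samples.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Matching rows are positives; all other pairs act as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) scaled cosine similarities
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for encoded neuron skeletons.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2).item())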

Speaker bio:

Xuejin Chen is currently a Professor at the University of Science and Technology of China. She received her B.S. and Ph.D. degrees in electronic circuits and systems from the University of Science and Technology of China in 2003 and 2008, respectively. From 2008 to 2010, she worked as a post-doctoral scholar in the Department of Computer Science at Yale University. Her research interests include 3D modeling, geometry processing, and biomedical image analysis. She has authored or co-authored over 70 articles in these areas and received the Honorable Mention Award from the journal Computational Visual Media in 2019 and the Second Prize in the MitoEM Challenge at ISBI 2021.

Talk 2

Title:

Computational Cameras and Displays Pro: Incorporating Optics and Machine Intelligence for Visual Computing Systems

Speaker:

Yifan Peng
Yifan (Evan) Peng

The University of Hong Kong, China

Assistant Professor

Abstract:

From cameras to displays, visual computing systems are becoming ubiquitous in our daily life. However, their underlying design principles have stagnated after decades of evolution. Existing imaging devices require dedicated hardware that is not only complex and bulky, but also yields only suboptimal results in certain visual computing scenarios. This shortcoming is due to a lack of joint design between hardware and software and, importantly, impedes the delivery of a vivid 3D visual experience on displays. By bridging advances in computer science and optics with extensive machine intelligence strategies, my work engineers physically compact, yet functionally powerful imaging solutions of cameras and displays for applications in photography, wearable computing, IoT products, autonomous driving, medical imaging, and VR/AR/MR.

In this talk, I will describe two classes of computational imaging modalities. Firstly, in Deep Optics, we jointly optimize lightweight diffractive optics and differentiable image processing algorithms to enable high-fidelity imaging in domain-specific cameras. Additionally, I will discuss Neural Holography, which also applies the unique combination of machine intelligence and physics to solve long-standing problems of computer-generated holography. Specifically, I will describe several holographic display architectures that leverage the advantages of camera-in-the-loop optimization and neural network model representation to deliver full-color, high-quality holographic images. Driven by trending machine intelligence, these hardware-software co-designed imaging solutions can unlock the full potential of traditional cameras and displays.
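
The following toy PyTorch sketch illustrates the joint hardware-software optimization idea behind Deep Optics and camera-in-the-loop training: a differentiable point spread function (PSF) and a small reconstruction network are optimized end to end against the same loss. The PSF parameterization, the network, and the random training scenes are illustrative stand-ins, not the talk's actual optics models.

import torch
import torch.nn.functional as F

psf_logits = torch.zeros(1, 1, 7, 7, requires_grad=True)   # learnable "optics"
recon_net = torch.nn.Conv2d(1, 1, 5, padding=2)            # learnable decoder
opt = torch.optim.Adam([psf_logits, *recon_net.parameters()], lr=1e-2)

for step in range(200):
    scene = torch.rand(4, 1, 32, 32)                        # random training scenes
    psf = torch.softmax(psf_logits.flatten(), 0).view(1, 1, 7, 7)  # energy-preserving PSF
    measured = F.conv2d(scene, psf, padding=3)              # differentiable image formation
    measured = measured + 0.01 * torch.randn_like(measured) # simulated sensor noise
    restored = recon_net(measured)
    loss = F.mse_loss(restored, scene)                      # joint hardware/software objective
    opt.zero_grad(); loss.backward(); opt.step()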

Speaker bio:

Dr. Yifan “Evan” Peng joined the University of Hong Kong (HKU) as an Assistant Professor in 2022. Before that, he worked for over three years as a Postdoctoral Research Scholar at Stanford University. Dr. Peng received his Ph.D. in Computer Science from the University of British Columbia (UBC) and both his M.Sc. and B.E. in Optical Science and Engineering from Zhejiang University (ZJU).

Dr. Peng’s research interest lies at the unique intersection of computer graphics, computer vision, optics, and artificial intelligence, in particular the joint design of hardware and software for intelligent visual computing systems. Specifically, he and his team have leveraged AI advances to bridge the long-standing gap between device and algorithm design, with the potential to revolutionize the camera and display industry. Dr. Peng also contributes to the graphics and optics communities by serving in Program Committee and/or Session Chair roles for multiple IEEE, ACM SIGGRAPH, SPIE, SID, and Optica events.

AG Webinar Session 14

Date: Tuesday, September 27, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Lin Gao, Institute of Computing Technology, CAS, China

Talk 1

Title:

Exploring “WOW!” with Single Images

Speaker:

Yuki Endo
Yuki Endo

University of Tsukuba, Japan

Assistant Professor

Abstract:

Recent advances in deep learning have successfully overcome the limitations of conventional image editing techniques and led to exciting applications that create amazing content from only single images. However, applying general deep neural networks to specific tasks does not always work due to problem complexity, so carefully incorporating the respective domain knowledge into models is crucial. In this talk, I will present how we have addressed this challenge over the past several years in image synthesis and editing with single images. The first topic is low dynamic range (LDR)-to-high dynamic range (HDR) inference. To reduce the complexity of inferring 32-bit HDR images, we estimate multiple LDR images for reconstructing HDR images rather than directly estimating HDR images. The second topic is landscape animation generation, in which motion and appearance are trained separately to reduce the complexity of video training. Finally, I will explain our recent project on controlling StyleGAN image layout via latent code manipulation. Our method does not directly move latent codes as in previous works but indirectly manipulates them following user annotations specified on the image, enabling intuitive editing. We hope that our insights will facilitate future research by providing novel deep learning solutions to explore “WOW!” with single images.
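
For the LDR-to-HDR strategy above, the final reconstruction step is essentially the classical exposure-bracket merge, sketched below in NumPy. The learned network that predicts the multiple LDR images is not shown, and the hat-shaped weighting and linear merge are common textbook choices rather than the paper's exact formulation.

import numpy as np

def merge_ldr_to_hdr(ldr_images, exposure_times):
    """ldr_images: list of (H, W) arrays in [0, 1]; exposure_times: seconds."""
    acc = np.zeros_like(ldr_images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(ldr_images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
        acc += w * (img / t)                # radiance estimate from this exposure
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)

# Toy usage: three synthetic exposures of the same scene.
scene = np.random.rand(64, 64)
times = [0.25, 1.0, 4.0]
ldrs = [np.clip(scene * t, 0.0, 1.0) for t in times]
hdr = merge_ldr_to_hdr(ldrs, times)
print(hdr.shape)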

Speaker bio:

Yuki Endo is an assistant professor at the University of Tsukuba. Previously, he worked as a researcher at NTT Laboratories from 2012 to 2016. He received his B.S., M.S., and Ph.D. degrees in engineering from the University of Tsukuba in 2010, 2012, and 2017, respectively. His research topics center on image synthesis, image editing, data mining, and machine learning. He is currently interested in applying machine learning to explore exciting image editing applications.

Talk 2

Title:

Face Video Generation and Editing

Speaker:

Yu-Kun Lai
Yu-Kun Lai

Cardiff University, UK

Professor

Abstract:

Face videos are widely used in many applications. However, recording an ideal face video (e.g., of someone talking) is not a trivial task. To reduce the effort, my talk will focus on our recent research in the following two areas: talking face video generation, which automatically produces a talking face video driven by audio signals, and face video editing, which modifies a given face video to satisfy user needs. For the former task, we consider generating both normal talking videos and stylized ones, where it is more challenging to maintain temporal coherence while ensuring high-quality stylization. For face video editing, we propose a sketch-based approach for intuitive manipulation that addresses different types of editing manipulations and the fusion of multiple manipulations.

Speaker bio:

Yu-Kun Lai is a Professor and Director of Research in the School of Computer Science & Informatics, Cardiff University, UK. He received his bachelor’s and PhD degrees from Tsinghua University, China, in 2003 and 2008, respectively. He has been working on computer graphics, geometry processing, and computer vision for over 15 years, and has published over 100 papers in major international journals (including 57 papers in ACM TOG/IEEE TPAMI/TVCG/TIP/IJCV) and over 70 major international conference papers (including 21 papers in CVPR/ICCV). He is an associate editor of the Computer Graphics Forum and The Visual Computer journals. He was also conference co-chair of the Eurographics Symposium on Geometry Processing (SGP) 2014, conference co-chair of the International Conference on Computational Visual Media (CVM) 2016, program co-chair of the Eurographics Workshop on 3D Object Retrieval (3DOR) 2021, and a program committee member of many international conferences.

AG Webinar Session 13

Date: Tuesday, August 30, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Juyong Zhang, University of Science and Technology of China, China

Talk 1

Title:

3D Shape Parsing from Point Clouds

Speaker:

Jianmin Zheng
Jianmin Zheng

Nanyang Technological University, Singapore

Professor

Abstract:

With advances in 3D acquisition technologies, point clouds are easily generated and have become a widely adopted 3D data representation. However, the unordered and unstructured nature of point clouds makes it difficult to perform high-level manipulation and easy editing of their underlying geometries. There is a great demand for converting point clouds into high-level shape representations that help understand the shapes, support the re-creation of new products, and facilitate practical applications. This talk presents some research in this direction. In particular, a three-level structure called CSG-Stump is introduced, which describes the combination of a shape's underlying constituent modeling primitives in a simple and regular manner and is thus learning-friendly. Then two networks are presented: CSG-Stump Net, which generates a CSG representation of a shape from point clouds, and ExtrudeNet, which uses machine learning to reverse engineer the sketch-and-extrude modeling process of a shape in an unsupervised fashion. These networks can be used for 3D shape parsing from point clouds.
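
As background for the CSG representations above, the sketch below shows how a CSG tree can be evaluated on signed distance functions: union is a pointwise minimum, intersection a pointwise maximum, and difference a maximum with a negated operand. CSG-Stump's learned three-level structure is more specific than this; the snippet only illustrates the underlying Boolean operations.

import numpy as np

def sphere(center, radius):
    return lambda p: np.linalg.norm(p - center, axis=-1) - radius

def union(a, b):        return lambda p: np.minimum(a(p), b(p))
def intersection(a, b): return lambda p: np.maximum(a(p), b(p))
def difference(a, b):   return lambda p: np.maximum(a(p), -b(p))

# Hypothetical shape: two overlapping spheres minus a third.
shape = difference(union(sphere(np.array([0.0, 0.0, 0.0]), 1.0),
                         sphere(np.array([0.8, 0.0, 0.0]), 1.0)),
                   sphere(np.array([0.4, 0.0, 0.0]), 0.5))

p = np.array([[0.4, 0.0, 0.9]])
print(shape(p))   # negative inside the shape, positive outside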

Speaker bio:

Jianmin Zheng is a professor in the School of Computer Science and Engineering at Nanyang Technological University (NTU), Singapore. He received his BS and PhD from Zhejiang University. Prior to joining NTU in 2003, he was a post-doc research associate at Brigham Young University and a professor at Zhejiang University. His research areas include computer graphics, geometric modelling, reality computing, AR/VR, visualization, and AI for design and 3D printing. He has published more than 200 papers in international journals and conferences, and he is a co-inventor of T-spline technologies that have been used in the CAD and CAE industries. He is currently the programme director for the ML/AI research pillar under the HP-NTU Digital Manufacturing Corporate Lab.

Talk 2

Title:

Neural Rendering for Novel View Appearance, Semantic, and Content Synthesis

Speaker:

Yiyi Liao
Yiyi Liao

Zhejiang University, China

Assistant Professor

Abstract:

Photorealistic visual content and 3D assets are essential for graphics and vision, with many applications in gaming, simulation, and virtual reality. Creating such visual content manually is extremely time-consuming and requires the concerted effort of many 3D artists. Recent advances in neural rendering, e.g., NeRF, have demonstrated impressive results for reconstructing such 3D visual content from the real world via end-to-end training. However, scaling neural rendering to graphics and vision applications still faces several challenges, including slow rendering speed, the absence of semantic information, and the lack of novel content creation. In this talk, I will present our recent progress in tackling these challenges, including KiloNeRF for fast rendering, Panoptic NeRF for rendering semantic labels, and GRAF and VoxGRAF for creating novel content using 3D-aware generative models.
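
For reference, the volume-rendering step shared by NeRF-style methods such as those above can be sketched in a few lines of NumPy: densities and colors sampled along a ray are alpha-composited into one pixel color. How the samples are produced (e.g., by the thousands of tiny MLPs in KiloNeRF) is abstracted away here.

import numpy as np

def composite(sigmas, colors, deltas):
    """sigmas: (S,) densities; colors: (S, 3); deltas: (S,) segment lengths."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # final pixel color

# Toy ray with 64 random samples.
S = 64
rgb = composite(np.random.rand(S), np.random.rand(S, 3), np.full(S, 0.05))
print(rgb)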

Speaker bio:

Yiyi Liao is an assistant professor at Zhejiang University. Prior to that, she worked as a postdoc at the MPI for Intelligent Systems and the University of Tübingen. She received her Ph.D. degree from Zhejiang University in 2018 and her B.S. degree from Xi’an Jiaotong University in 2013. Her research interest lies in 3D computer vision, including 3D scene understanding, 3D reconstruction, and 3D controllable image synthesis.

AG Webinar Session 12

Date: Sunday, July 31, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Qilin Sun, The Chinese University of Hong Kong, Shenzhen, China

Talk 1

Title:

Find the gap: Learning 3D representation from 2D image collections

Speaker:

Xin Tong
Xin Tong

Microsoft Research Asia (MSRA)

Partner research manager

Abstract:

3D deep learning has demonstrated its advantages in many 3D graphics applications. However, compared to images and videos, which can be easily acquired from the real world, modeling or capturing 3D datasets (e.g., shapes and material maps) is still a difficult task, which limits the scale of the 3D datasets available for 3D deep learning.

In this talk, I will introduce our explorations in the last several years on how to utilize 2D image collections in 3D deep learning. By bridging the gap between 2D images and 3D representations, we believe that this method will release the power of deep learning and enable new solutions for 3D content creation.

Speaker bio:

Dr. Xin Tong is a partner research manager at Microsoft Research Asia (MSRA) and the leader of the graphics group in MSRA. His research interests cover many topics in computer graphics and computer vision, including appearance modeling and rendering, texture synthesis, light transport analysis, 3D deep learning, performance capture and facial animation, as well as graphics systems. Xin has published more than 120 papers in top computer graphics and vision journals and conferences, including 55 ACM SIGGRAPH/TOG papers. He has served as an associate editor of computer graphics journals (ACM TOG, IEEE TVCG, CGF) and as a paper committee member of ACM SIGGRAPH/SIGGRAPH Asia, Eurographics, and Pacific Graphics. He is an associate editor of IEEE CG&A, CVMJ, and Visual Informatics. Xin obtained his Ph.D. degree in computer graphics from Tsinghua University in 1999 and his B.S. and M.S. degrees in computer science from Zhejiang University in 1993 and 1996, respectively.

Talk 2

Title:

Differentiable Computational Imaging with Light

Speaker:

Seung-hwan Baek
Seung-hwan Baek

POSTECH, South Korea

Assistant Professor

Abstract:

Modern camera systems have evolved to effectively capture light and have become essential tools for many applications. Developing such imaging systems has commonly required hand-crafted or heuristic rules set by human experts, with post-processing algorithms devised in isolation from the imaging-system design. This results in sub-optimal performance and fundamentally limits application to new problems. In this talk, I will present our work on capturing, analyzing, and exploiting overlooked dimensions of light waves via end-to-end imaging system designs, from optics to reconstruction algorithms. We demonstrate that this joint design approach allows for understanding the high-dimensional visual information of the real world originating from the complex interplay between light, material appearance, and geometry.

Speaker bio:

Seung-hwan Baek is an assistant professor at POSTECH. Before joining POSTECH, he worked as a post-doctoral research associate at Princeton University and holds a Ph.D. degree in Computer Science from KAIST. His research interests lie in computer graphics and computer vision with a particular focus on computational imaging and display. His work aims to capture, model, and analyze the high-dimensional visual information of the real world originating from complex interplay between light, material appearance, and geometry. To this end, he designs end-to-end computational imaging and display systems for fundamental scientific analysis as well as diverse application domains.

AG Webinar Session 11

Date: Tuesday, June 28, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Xiaopei Liu, ShanghaiTech University, China

Talk 1

Title:

Fun with Fluids

Speaker:

Yoshinori Dobashi
Yoshinori Dobashi

Hokkaido University, Japan

Professor

Abstract:

We see fluids everywhere in our daily life, and their appearance attracts many people, including researchers, due to their complicated and interesting motions. Visual simulation of fluid phenomena has thus become one of the most important research topics in computer graphics; examples of such phenomena include water, fire, and smoke. These methods numerically solve the Navier-Stokes equations to synthesize realistic appearances and motions. We have also been working on applications of fluid simulation. Two problems with fluid simulation are its expensive computational cost and its limited directability; we mainly focus on the latter. Generating desired visual effects with numerical fluid simulation is usually difficult: the user has to rerun the simulation repeatedly until he or she obtains the desired appearance and motion. In this talk, I will first introduce fluid simulation briefly along with our applications of it. Then, I will talk about our approach for improving directability, including inverse cloud simulation, modeling of fluids from images, and editing of simulated fluid data.

Speaker bio:

Yoshinori Dobashi has been a professor at Hokkaido University in the Graduate School of Information Science and Technology, Japan, since 2020. His research interests center on computer graphics, including lighting simulation, fluid simulation, digital fabrication, and sound synthesis. He received his BE, ME, and Ph.D. in Engineering in 1992, 1994, and 1997, respectively, from Hiroshima University. He worked at Hiroshima City University from 1997 to 2000 as a research associate. His work has received awards from Eurographics, the Society for Art and Science, and others. He received the Commendation for Science and Technology from the Minister of Education, Culture, Sports, Science and Technology in 2014.

Talk 2

Title:

Modeling Fluid-solid Mixture with Smoothed Particle Hydrodynamics

Speaker:

Bo Ren
Bo Ren

Nankai University, China

Associate Professor

Abstract:

In computer graphics, particle-based discretization is commonly adopted for fluid simulations. Astonishing results have been achieved in various research works and industrial applications. However, there is still a way to go in using particle-based methods to simulate real-world multi-fluid mixtures, especially non-interfacial flows where different phases actually mix together with concentration changes during the fluid motion. Such “miscible” behaviors are important in diffusion, extraction, dissolution, chemical reactions, and porous capillary actions, among others. In this talk, I will discuss how we can use the Smoothed Particle Hydrodynamics (SPH) method to reproduce such effects. First, I will provide an introduction to the theoretical foundation, the mixture model. Then, I will use recent works to demonstrate progress over its original shortcomings. Finally, I will talk about how we can exploit the theory for a more universal simulation that handles different physical laws together.
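
As a reference point for the SPH machinery underlying the mixture model, the snippet below computes the basic SPH density estimate rho_i = sum_j m_j W(|x_i - x_j|, h) in NumPy. The poly6 kernel and all parameters are standard textbook choices, not the talk's exact setup.

import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel in 3D (Mueller et al. 2003); zero beyond radius h."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[mask]**2) ** 3
    return w

def sph_density(positions, masses, h=0.1):
    """Density at each particle from all neighbors (brute force, O(N^2))."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * poly6(r, h)).sum(axis=1)

pos = np.random.rand(200, 3) * 0.5
rho = sph_density(pos, np.full(200, 0.02))
print(rho.mean())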

Speaker bio:

Bo Ren received his B.S. and Ph.D. degrees from Tsinghua University (Beijing, China) in 2010 and 2015, respectively. He is currently an associate professor at the College of Computer Science, Nankai University (Tianjin, China). His current research interests lie in learning-based/physically-based simulation, and 3D scene geometry reconstruction and analysis.

AG Webinar Session 10

Date: Tuesday, May 31, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Tuanfeng Y. Wang, Adobe Research

Talk 1

Title:

Non-Photorealistic Vision and Graphics

Speaker:

Ariel Shamir
Ariel Shamir

Reichman University, Israel

Professor

Abstract:

Computer vision and graphics algorithms for both analysis and synthesis have developed considerably in recent years due to advancements in neural networks and deep learning methods. Nevertheless, these algorithms concentrate mainly on photorealistic inputs and outputs. In this talk, I will present several efforts to advance the state of the art on non-photorealistic (NPR) visual content such as animations, cartoons, and even art paintings. The main challenges stem from the differences of these domains in subject, appearance, variance, and abstraction. I will show how learning correct representations as well as domain adaptation techniques enable tracking, segmentation, and landmark detection in NPR domains, and allow the synthesis of abstract visual depictions.

Speaker bio:

Prof. Ariel Shamir is the former Dean of the Efi Arazi School of Computer Science at Reichman University (the Interdisciplinary Center) in Israel. He received his Ph.D. in computer science in 2000 from the Hebrew University of Jerusalem, and spent two years as a postdoc at the University of Texas at Austin. Prof. Shamir has numerous publications and a number of patents, and was named one of the most highly cited researchers on the Thomson Reuters list in 2015. He has broad commercial experience consulting for various companies, including Disney Research, Mitsubishi Electric, PrimeSense (now Apple), Verisk, Donde (now Shopify), and more. Prof. Shamir specializes in computer graphics, image processing, and machine learning. He is a member of the ACM SIGGRAPH, IEEE Computer, AsiaGraphics, and EuroGraphics associations.

Talk 2

Title:

Neural Representation and Rendering of 3D Real-world Scenes

Speaker:

Lingjie Liu
Lingjie Liu

Max Planck Institute for Informatics, Germany

Postdoctoral Research Fellow

Abstract:

High-quality reconstruction and photo-realistic rendering of real-world scenes are two important tasks that have a wide range of applications in AR/VR, movie production, games, and robotics. These tasks are challenging because real-world scenes contain complex phenomena, such as occlusions, motions and interactions. Approaching these tasks using classical computer graphics techniques is a highly difficult and time-consuming process, which requires complicated capture procedures, manual intervention, and a sophisticated global illumination rendering process. In this talk, I will introduce our recent work that integrates deep learning techniques into the traditional graphics pipeline for modelling humans and static scenes in an automatic way. Specifically, I will talk about creating photo-realistic animatable human characters from only RGB videos, high-quality reconstruction and fast novel view synthesis of general static scenes from RGB image inputs, and scene generation with a 3D generative model. Finally, I will discuss challenges and opportunities in this area for future work.

Speaker bio:

Lingjie Liu is the incoming Aravind K. Joshi endowed Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania, where she will be leading the Computer Graphics Lab. Currently, Lingjie Liu is a Lise Meitner Postdoctoral Research Fellow working with Prof. Christian Theobalt in the Visual Computing and AI Department at Max Planck Institute for Informatics. She received her Ph.D. degree at the University of Hong Kong in 2019. Before that, she got her B.Sc. degree in Computer Science at Huazhong University of Science and Technology in 2014. Her research interests include neural scene representations, neural rendering, human performance modeling and capture, and 3D reconstruction.

AG Webinar Session 9

Date: Tuesday, April 26, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Yu-Kun Lai, Cardiff University, UK

Talk 1

Title:

Semantic Image Editing using GANs

Speaker:

Peter Wonka
Peter Wonka

KAUST, Saudi Arabia

Professor

Abstract:

In this talk, I will discuss recent papers from our group about semantic image editing using GANs. I will discuss embedding algorithms and various algorithms for manipulating latent representations. Applications of this work are attribute-based editing, hairstyle editing, style transfer, and domain adaptation. All discussed algorithms will use StyleGAN2.

Speaker bio:

Peter Wonka is a Full Professor in Computer Science at King Abdullah University of Science and Technology (KAUST) and Interim Director of the Visual Computing Center (VCC). He received his doctorate in computer science from the Technical University of Vienna, as well as a Master of Science in Urban Planning from the same institution. After his PhD, Dr. Wonka worked as a postdoctoral researcher at the Georgia Institute of Technology and as faculty at Arizona State University. His research publications tackle various topics in computer vision, computer graphics, remote sensing, image processing, visualization, and machine learning. His current research focus is on deep learning, generative models, and 3D shape analysis and reconstruction.

Talk 2

Title:

Geometric Modeling from Flat Sheet Material

Speaker:

Caigui Jiang
Caigui Jiang

Xi’an Jiaotong University, China

Professor

Abstract:

In this presentation, I will talk about our recent work on geometric modeling based on planar materials. There are several related works and I will focus on two of them:

1. Quad-Mesh Based Isometric Mappings and Developable Surfaces. We discretize isometric mappings between surfaces as correspondences between checkerboard patterns derived from quad meshes. This method captures the degrees of freedom inherent in smooth isometries and enables a natural definition of discrete developable surfaces. This definition, which is remarkably simple, leads to a class of discrete developables which is much more flexible in applications than previous concepts of discrete developables. In this work, we employ optimization to efficiently compute isometric mappings, conformal mappings and isometric bending of surfaces. We perform geometric modeling of developables, including cutting, gluing and folding. The discrete mappings presented here have applications in both theory and practice: We propose a theory of curvatures derived from a discrete Gauss map as well as a construction of watertight CAD models consisting of developable spline surfaces.

2. Shape-morphing mechanical metamaterials. Small-scale cut and fold patterns imposed on sheet material enable its morphing into three-dimensional shapes. This manufacturing paradigm has been receiving much attention in recent years and poses challenges in both fabrication and computation. It is intimately connected with the interpretation of patterned sheets as mechanical metamaterials, typically of negative Poisson's ratio. Here we present an affirmative solution to a fundamental geometric question, namely the targeted programming of a shape morph. We use optimization to compute kirigami patterns that realize a morph between shapes, in particular between a flat sheet and a surface in space. The shapes involved can be arbitrary; in fact we are able to approximate any mapping between shapes whose principal distortions do not exceed certain bounds. This amounts to a solution of the so-called inverse problem for kirigami cut and fold patterns. The methods we employ include a differential-geometric interpretation of the morph, besides drawing on recent progress in geometric computing.

Speaker bio:

Caigui Jiang is a professor at the Institute of Artificial Intelligence and Robotics of Xi’an Jiaotong University (XJTU) in China. Before that, he worked as a research scientist and postdoc at the Visual Computing Center (VCC) of King Abdullah University of Science and Technology (KAUST), at ICSI, UC Berkeley, and at the Max Planck Institute for Informatics. He obtained his Ph.D. in 2016 from KAUST under the supervision of Prof. Dr. Helmut Pottmann and Prof. Dr. Peter Wonka. He received his B.S. and M.S. degrees from XJTU in 2008 and 2011, respectively. His research interests are in geometric modeling, geometry processing, architectural geometry, computer graphics, and computer vision.

AG Webinar Session 8

Date: Tuesday, March 29, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Xianzhi Li, Huazhong University of Science and Technology, China

Talk 1

Title:

Shape-Inspired Architectural Design

Speaker:

Pedro Sander
Pedro Sander

The Hong Kong University of Science and Technology, China

Professor

Abstract:

Various techniques have been proposed to improve the level of automation in different stages of architectural design. In this talk, I will present our work on an interactive interface, along with optimization algorithms, for designing early-stage symbolic architecture. The architect specifies simple shape requirements by inputting a few binary images that resemble the shape of the building from different viewpoints. Our optimizer uses these to generate a conceptual 3D design that is guided by various aesthetic and structural requirements. I will also discuss our approach for planning the inner space of the given architectural model. The architect specifies idealized requirements on the key functional rooms using instances of simple 3D primitive shapes; the pose and location of each instance are parameters of the optimization. By coupling this process with the shape-inspired exterior design process, we can construct good initial conceptual designs. Several examples are presented to illustrate the methodology and results of these approaches. User studies based on a proposed dataset and interviews with domain experts have been carried out to demonstrate the usability and effectiveness of our system and algorithms.

Speaker bio:

Pedro V. Sander received a Bachelor of Science in Computer Science from Stony Brook University in 1998, and Master of Science and Doctor of Philosophy degrees from Harvard University in 1999 and 2003, respectively. He was a senior member of the Application Research Group of ATI Research, where he conducted real-time rendering and general-purpose computation research with latest generation and upcoming graphics hardware. Currently, he is a Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. His research interests lie mostly in real-time rendering, graphics hardware, geometry processing, and imaging. Prof. Sander has been a member of multiple ACM SIGGRAPH and ACM SIGGRAPH Asia paper committees and the Courses Chair of SIGGRAPH Asia 2011. He has co-organized the top rendering and interactive graphics conferences, including I3D (general co-chair in 2014 and papers co-chair in 2015), and EGSR (papers co-chair in 2017). He has served as an Associate Editor of Graphical Models (GMOD), Computer Graphics Forum (CGF), and IEEE Transactions on Visualization and Computer Graphics (TVCG).

Talk 2

Title:

Synthesizing Dynamic Human Appearance

Speaker:

Tuanfeng Y. Wang
Tuanfeng Y. Wang

Adobe Research

Research Scientist

Abstract:

Synthesizing the dynamic appearances of humans in motion plays a central role in applications such as AR/VR and video editing. While many recent methods have been proposed to tackle this problem, handling loose garments with complex textures and high dynamic motion still remains challenging. In this talk, I will introduce a video-based appearance synthesis method that tackles such challenges and demonstrates high-quality results for in-the-wild videos that have not been shown before. Another key challenge of learning the dynamics of the appearance lies in the requirement of a prohibitively large amount of observations. I will show how we address this issue with a compact motion representation by enforcing equivariance. Such a representation is learned from the spatial and temporal derivatives of the 3D body surface and can be used to render high fidelity time-varying appearance.

Speaker bio:

Tuanfeng Y. Wang is a Research Scientist at Adobe Research. Before that, he was a lead researcher at miHoYo. He received his PhD from University College London, advised by Niloy J. Mitra, and his B.S. from the University of Science and Technology of China, advised by Ligang Liu. Tuanfeng’s research interests include clothes modeling, human animation, neural rendering, geometry processing, etc.

AG Webinar Session 7

Date: Tuesday, February 22, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Yuki Koyama, AIST, Japan

Talk 1

Title:

Neural Indoor Scene Rendering with Reflections

Speaker:

Weiwei Xu
Weiwei Xu

College of Computer Science and Technology, Zhejiang University, China

Professor

Abstract:

Neural rendering, a recently popular rendering scheme, produces images from encoded neural scene representations. Its advantages are robustness to geometric noise and the ability to exploit priors learned from training data. This talk describes a novel scalable neural rendering pipeline for indoor scenes with reflections. We make substantial progress on three problems in indoor scene rendering, namely depth and reflection reconstruction, view selection, and temporally coherent rendering with various reflections. The rendering quality considerably outperforms that of state-of-the-art IBR and neural rendering algorithms.

Speaker bio:

Weiwei Xu is currently a full professor at the State Key Lab of CAD&CG, Zhejiang University. He was a researcher in the Internet Graphics Group at Microsoft Research Asia from 2005 to 2012 and a post-doc researcher at Ritsumeikan University in Japan for one and a half years. He received his Ph.D. degree in computer graphics from Zhejiang University, Hangzhou, and his B.S. and Master's degrees in computer science from Hohai University in 1996 and 1999, respectively. His main research interests include 3D reconstruction, image-based rendering, and virtual reality. He has published more than 90 papers, including 30 papers in ACM TOG, Science Robotics, CVPR, ICCV, AAAI, and IEEE TVCG. He received the Outstanding Young Researcher Award from NSFC in 2013.

Talk 2

Title:

Discovering the Compositional Structure in 3D Shapes – From Supervised to Unsupervised Learning

Speaker:

Minhyuk Sung
Minhyuk Sung

KAIST, South Korea

Assistant professor

Abstract:

3D data matching the actual form of a physical object enables a direct representation of the compositional structure of the object, which is essential in many applications in graphics, vision, and robotics, such as 3D modeling/editing, object detection, and robot interaction. However, discovering the compositional structure from raw 3D data is challenging since it requires substantial supervision in learning, and even when supervision is given, it needs to be carefully applied.

In this talk, Minhyuk Sung will introduce three learning-based methods of discovering the compositional structure of shapes. He will first discuss a supervised neural network detecting geometric primitives from a point cloud. Even when supervision is given, a direct regression of the primitive parameters does not provide better results than an unsupervised estimation. He will explain how the prediction power of neural networks can make the best synergy with estimation in a carefully designed end-to-end learning network. Second, he will introduce a self-supervised method of learning the compositional structure from deformation. He will propose a conditional generative model producing possible deformations of a shape and show how the compositional structure can emerge from learning the disentanglement of possible shape variations. Lastly, he will introduce another self-supervised method of learning semantic part decomposition from language descriptions. He will discuss how an attention model finding a shape matching a query sentence can be designed to discover semantic parts while learning attention.

Speaker bio:

Minhyuk Sung is an assistant professor in the School of Computing at KAIST (also affiliated with the Graduate School of AI). Before joining KAIST, he was a Research Scientist at Adobe Research. He received his Ph.D. from Stanford University under the supervision of Professor Leonidas J. Guibas. His research interests lie in vision, graphics, and machine learning, with a focus on 3D geometric data processing. His work has been selected as one of six SIGGRAPH Asia 2017 papers featured in a press release. He received his M.Sc. and B.Sc. from KAIST.

AG Webinar Session 6

Date: Tuesday, January 18, 2022
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Ying He, Nanyang Technological University, Singapore

Talk 1

Title:

Data-driven Sketch Interpretation

Speaker:

Hongbo FU
Hongbo Fu

School of Creative Media, City University of Hong Kong, Hong Kong S.A.R., China

Professor

Abstract:

Freehand sketching provides an easy tool for communication between people. While human viewers can easily interpret the semantics of a freehand sketch, it is often difficult to teach machines to understand sketches like we do, mainly because of different levels of abstraction, drawing styles, and various sources of drawing errors. In this talk, I will introduce how data-driven approaches can help us address various sketch understanding tasks, including sketch classification, sketch segmentation and labeling, 3D interpretation of freehand sketches, and sketch-based image generation.

Speaker bio:

Hongbo Fu is a Professor in the School of Creative Media, City University of Hong Kong. Before joining CityU, he had postdoctoral research training at the Imager Lab, University of British Columbia, Canada, and the Department of Computer Graphics, Max-Planck-Institut für Informatik, Germany. He received a PhD degree in computer science from the Hong Kong University of Science and Technology in 2007 and a BS degree in information sciences from Peking University, China, in 2002. His primary research interests fall in the fields of computer graphics and human-computer interaction. His research has led to over 100 scientific publications, including over 50 technical papers published at SIGGRAPH/SIGGRAPH Asia/TOG/TVCG/CHI/UIST. His recent works have received the Best Demo awards at the SIGGRAPH Asia Emerging Technologies program in both 2013 and 2014, and Best Paper awards from CAD/Graphics 2015 and UIST 2019.

He was the Organization Co-Chair of Pacific Graphics 2012; the Program Chair/Co-chair of CAD/Graphics 2013 & 2015, SIGGRAPH Asia 2013 (Emerging Technologies) & 2014 (Workshops), Pacific Graphics 2018, and Computational Visual Media 2019; and the Conference Chair of SIGGRAPH Asia 2016 and Expressive 2018. He was on the SIGGRAPH Asia Conference Advisory Group and the Expressive conference steering committee, and is currently Vice-Chairman of the Asia Graphics Association. He has served as an Associate Editor of The Visual Computer, Computers & Graphics, and Computer Graphics Forum.

Talk 2

Title:

Human-in-the-Loop Preferential Bayesian Optimization for Visual Design

Speaker:

Yuki Koyama
Yuki Koyama

National Institute of Advanced Industrial Science and Technology (AIST) & Graphinica, Inc., Japan

Researcher

Abstract:

Visual design often involves searching for an optimal parameter set that produces a subjectively preferable design. However, this optimization problem is not trivial to solve with typical optimization algorithms since the objective function is human preference and thus requires special treatment. In this talk, I will introduce preferential Bayesian optimization (PBO), a powerful technique to aid this task. PBO is a human-in-the-loop Bayesian optimization method specialized for relative preference oracles (i.e., which option is liked the most). It models the latent preference in a probabilistic manner and generates effective preference queries to human evaluators based on the preference model. Then, I will explain two of my recent works [SIGGRAPH 2017; SIGGRAPH 2020], which build on PBO and achieve even better sample efficiency by combining it with tailored user interactions.
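
A heavily simplified, NumPy-only sketch of the preferential-optimization loop is shown below: a latent utility over a one-dimensional design parameter is fitted to pairwise "A is preferred to B" answers with a Bradley-Terry likelihood, and each iteration pairs the model's current favorite against the best design seen so far. Real PBO uses a Gaussian-process preference model and principled acquisition functions; every name and constant here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 15)                 # RBF features for the utility model
feat = lambda x: np.exp(-((x - centers) ** 2) / 0.02)
utility = lambda x: w @ feat(x)                     # modeled latent preference
true_pref = lambda x: -(x - 0.63) ** 2              # hidden "human taste" (simulated oracle)

w = np.zeros_like(centers)                          # utility weights
duels, x_best = [], rng.random()                    # (winner, loser) pairs; best so far

for it in range(30):
    # Acquisition (greedy here): probe the model's favorite among random samples.
    probes = rng.random(50)
    x_new = probes[np.argmax([utility(p) for p in probes])]
    # Preference oracle: in practice, a human answers "which one do you prefer?".
    winner, loser = (x_new, x_best) if true_pref(x_new) > true_pref(x_best) else (x_best, x_new)
    duels.append((winner, loser))
    x_best = winner
    # Refit the Bradley-Terry model by gradient ascent on the log-likelihood.
    for _ in range(200):
        g = np.zeros_like(w)
        for a, b in duels:
            d = feat(a) - feat(b)                   # winner-minus-loser features
            g += d * (1.0 - 1.0 / (1.0 + np.exp(-(w @ d))))  # grad of log sigmoid(w.d)
        w = w + 0.5 * g / len(duels)

print("estimated best design parameter:", round(x_best, 3))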

Speaker bio:

Yuki Koyama is a Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). He received his Ph.D. from The University of Tokyo in 2017, advised by Prof. Takeo Igarashi. His research fields are computer graphics and human-computer interaction. In particular, he is interested in enhancing design activities using computational techniques such as mathematical optimization. Since 2021, he has also been working at Graphinica (a Japanese animation studio), where he aims to bridge art and technology in animation production. He was awarded the JSPS Ikushi Prize (2017) and the Asia Graphics Young Researcher Award (2021).

AG Webinar Session 5

Date: Wednesday, December 22, 2021
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Peng Song, Singapore University of Technology and Design, Singapore

Talk 1

Title:

Deep 3D Sensing Pipeline: Feature Learning, Preprocessing, Understanding, and Applications

Speaker:

Chi-Wing (Philip) FU
Chi-Wing Fu

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong S.A.R., China

Professor

Abstract:

3D sensing and understanding aim to capture the geometric and semantic information of the real world to support many applications, such as robotics, autonomous driving, and AR interactions. In the past years, my research team has put great effort into designing and developing methods that support the 3D sensing and data processing pipeline, ranging from low-level processing (e.g., upsampling and denoising) to high-level 3D understanding (e.g., semantic segmentation and 3D object detection). In this talk, I will first give an overview of the overall 3D sensing pipeline, which includes four parts: deep feature extraction, data preprocessing, 3D understanding, and applications. Then, I will briefly discuss some of my works in the pipeline and showcase some of the downstream applications that we have developed in, e.g., robotics, 3D shape generation, and direct-hand AR interactions.

Speaker bio:

Chi-Wing Fu is currently a full professor at the Chinese University of Hong Kong. He has served as an associate editor of IEEE Computer Graphics & Applications and Computer Graphics Forum, co-chair of the SIGGRAPH Asia 2016 Technical Briefs and Posters program, a panel member of the SIGGRAPH 2019 Doctoral Consortium, and a program committee member of various research conferences, including SIGGRAPH Asia Technical Briefs, SIGGRAPH Asia Emerging Technologies, IEEE Visualization, AAAI, CVPR, IEEE VR, VRST, Pacific Graphics, GMP, etc. His recent research interests include point cloud processing, 3D vision, computational fabrication, user interaction, and data visualization.

Talk 2

Title:

Robust 3D Point Cloud Analysis via Deep Learning Approaches

Speaker:

Xianzhi LI
Xianzhi Li

School of Computer Science and Technology, Huazhong University of Science and Technology, China

Associate Professor

Abstract:

Deep neural networks for 3D point cloud analysis have drawn a lot of interest in recent years and have been actively applied to many 3D applications. However, the robustness of most existing works relies on assumptions, for example, that 3D objects are aligned with the gravity direction or that there are sufficient labeled training samples. Without these assumptions, many methods may not function as expected. In this talk, I will introduce two of my works that focus on robust 3D point cloud analysis. First, I will talk about how to extract robust rotation-invariant features from point clouds with arbitrary orientations. Then, I will discuss how to extract reliable features from point clouds in an unsupervised manner to facilitate the detection of distinctive regions on 3D shapes. Multiple applications of each work will be demonstrated to show the effectiveness and superiority of our approaches.

Speaker bio:

Xianzhi Li recently joined the School of Computer Science and Technology, Huazhong University of Science and Technology (HUST) as an associate professor. Prior to that, she was a postdoctoral fellow at the Chinese University of Hong Kong (CUHK) and the Hong Kong Center for Logistics Robotics. She received a PhD degree in computer science from the CUHK in 2020. Her research interests are in 3D vision and computer graphics, with a recent focus on developing machine learning methods to advance the processing, analysis, and understanding of 3D point cloud data.

Talk 3

Title:

3D Geometry Learning from Upsampling to Generation

Speaker:

Ruihui LI
Ruihui Li

Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong S.A.R., China

Postdoctoral fellow

Abstract:

Low-cost 3D sensors have spurred the development of learning techniques for 3D analysis. However, due to the complexity and cost of collecting and annotating 3D data, the quality and quantity of existing 3D datasets are still limited. In this talk, I will present our efforts to enhance point cloud representations and enrich 3D data. First, considering that raw point clouds produced by 3D sensors are often sparse, I will present a series of point cloud upsampling frameworks that produce a denser and more faithful representation of the underlying surface. Then, I will introduce an unsupervised framework for high-quality shape generation and manipulation. Such enhancement and enrichment of the original data promotes the effective use of 3D point clouds for downstream analysis and general processing.

Speaker bio:

Ruihui Li is currently a postdoctoral fellow at the Chinese University of Hong Kong (CUHK) and will soon join Hunan University as an associate professor. Before that, he received his Ph.D. degree from the Department of Computer Science and Engineering at CUHK in June 2021. His research interests are in deep geometry learning, generative modeling, 3D vision, and computer graphics, and he is particularly interested in 3D reconstruction and generation with high controllability. Most of his publications have appeared in top-tier journals and conferences, such as SIGGRAPH, TVCG, CVPR (oral), ICCV, etc.

AG Webinar Session 4

Date: Tuesday, November 30, 2021
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Jue Wang, Tencent AI Lab, China

Talk 1

Title:

Single Image Defocus Deblurring

Speaker:

Seungyong Lee
Seungyong Lee

POSTECH, South Korea

Professor

Abstract:

Defocus blur occurs when the light rays from a point in the scene form a circle of confusion (COC) on the camera sensor. Defocus deblurring aims to restore an all-in-focus image from a defocused image. In this talk, I will introduce my two recent works on single image defocus deblurring. The first is a novel end-to-end learning-based approach equipped with a novel Iterative Filter Adaptive Network (IFAN) that is specifically designed to handle spatially varying and large defocus blur; it also contains a training scheme based on defocus disparity estimation and reblurring. The second is a novel deep learning approach for single image defocus deblurring based on inverse kernels. It proposes a kernel-sharing parallel atrous convolutional (KPAC) block specifically designed by incorporating the property of inverse kernels for defocus deblurring. Experimental results demonstrate that these approaches achieve state-of-the-art performance on real-world images.
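
The following PyTorch sketch illustrates the kernel-sharing atrous idea: one learned kernel is applied at several dilation rates, so the same filter can respond to differently sized defocus blur. The real KPAC block contains additional attention and fusion components not shown here; the class name and all hyperparameters are illustrative.

import torch
import torch.nn.functional as F

class SharedKernelAtrous(torch.nn.Module):
    def __init__(self, channels=32, k=3, dilations=(1, 2, 3)):
        super().__init__()
        # One kernel, shared across all dilation rates.
        self.weight = torch.nn.Parameter(torch.randn(channels, channels, k, k) * 0.02)
        self.dilations = dilations

    def forward(self, x):
        outs = []
        for d in self.dilations:
            pad = d * (self.weight.shape[-1] // 2)   # keep spatial size
            outs.append(F.conv2d(x, self.weight, padding=pad, dilation=d))
        return torch.stack(outs, 0).mean(0)          # naive fusion (KPAC learns this)

feat = torch.randn(1, 32, 64, 64)
print(SharedKernelAtrous()(feat).shape)              # torch.Size([1, 32, 64, 64])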

Speaker bio:

Seungyong Lee is a professor of computer science and engineering at Pohang University of Science and Technology (POSTECH), Korea. He received a PhD degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1995. From 1995 to 1996, he worked at City College of New York as a postdoctoral researcher. Since 1996, he has been a faculty member of POSTECH, where he leads Computer Graphics Group. During his sabbatical years, he worked at MPI Informatik (2003-2004) and Creative Technologies Lab at Adobe Systems (2010-2011). His technologies on image deblurring and photo upright adjustment have been transferred to Adobe Creative Cloud and Adobe Photoshop Lightroom. His current research interests include image and video processing, deep learning based computational photography, and 3D scene reconstruction.

Talk 2

Title:

Knowledge-Driven Deep Image/Video Restoration Networks

Speaker:

Jinshan Pan
Jinshan Pan

School of Computer Science and Engineering, Nanjing University of Science and Technology, China

Professor

Abstract:

Recent years have witnessed significant advances in image/video restoration due to effective deep neural networks. However, most existing approaches mainly rely on large-capacity deep models, and their network designs do not fully explore the properties of the image/video degradation process or the domain knowledge of image/video restoration problems. In this talk, we will first revisit statistical prior modeling-based image/video restoration methods. Then, we will discuss how to exploit physics models and prior knowledge to constrain deep neural networks for better image/video restoration. Instead of simply increasing the capacity of the deep models, the proposed neural networks constrained by physics models and prior knowledge are more compact and perform favorably against state-of-the-art methods on several image/video restoration tasks.

Speaker bio:

Jinshan Pan is a professor in the School of Computer Science and Engineering, Nanjing University of Science and Technology. He received his PhD degree in computational mathematics from Dalian University of Technology and was a joint-training PhD student at the University of California, Merced. His research interests mainly include image/video restoration, enhancement, and related vision problems. He serves as an area chair for CVPR 2022.

AG Webinar Session 3

Date: Tuesday, October 26, 2021
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Nobuyuki Umetani, The University of Tokyo, Japan

Talk 1

Title:

Human-in-the-loop Creative AI

Speaker:

Takeo Igarashi
Takeo Igarashi

Department of Computer Science, The University of Tokyo, Japan

Professor

Abstract:

Generative models that apply deep learning to the generation of content such as images and sound are attracting attention. However, the generative process using deep learning is a black box, which makes it difficult for humans to understand and control. In this talk, I will introduce methods for human intervention in and control of such generative processes, with examples in generative models of images, 3D models, and acoustic signals.

Speaker bio:

Takeo Igarashi is a Professor of Computer Science Department at The University of Tokyo. His research interest is in user interfaces and interactive computer graphics. He has received several awards including the ACM SIGGRAPH 2006 Significant New Researcher Award, ACM CHI Academy award 2018, and the Asia Graphics 2020 Outstanding Technical Contributions Award. He served as a program co-chair for ACM UIST 2013, a conference co-chair for ACM UIST 2016, technical papers chair for SIGGRAPH ASIA 2018, and technical program co-chair for ACM CHI 2021.

Talk 2

Title:

Shape Manipulation via Reinforcement Learning

Speaker:

Ruizhen Hu
Ruizhen Hu

College of Computer Science & Software Engineering, Shenzhen University, China

Associate Professor

Abstract:

Solving shape generation and manipulation problems using neural networks is a new trend in computer graphics. However, for many shape manipulation tasks, there is more than one optimal solution, or ground-truth solutions are difficult to obtain, which makes it hard to adopt supervised learning methods. Instead, learning a good policy that can make sequential decisions to find any optimal solution becomes a better fit. In this talk, I will introduce several shape manipulation tasks, including translate-and-pack, grasp-and-place, and reshaping 2D or 3D shapes, and show how they are solved using reinforcement learning methods.

Speaker bio:

Ruizhen Hu is an Associate Professor at Shenzhen University, China. She received her Ph.D. from the Department of Mathematics, Zhejiang University. Before that, she spent two years visiting Simon Fraser University, Canada. Her research interests are in computer graphics, with a recent focus on applying machine learning to advance the understanding and modeling of visual data. She received the Asia Graphics Young Researcher Award in 2019. She has served as a program co-chair for SMI 2020, and is an editorial board member of The Visual Computer and IEEE CG&A.

AG Webinar Session 2

Date: Tuesday, September 28, 2021
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Hongbo Fu, City University of Hong Kong, China

Talk 1

Title:

Generating and Editing Ultrarealistic Faces

Speaker:

Daniel Cohen-Or
Daniel Cohen-Or

Department of Computer Science, Tel Aviv University, Israel

Professor

Abstract:

StyleGAN has recently been established as the state-of-the-art unconditional generator, synthesizing images of phenomenal realism and fidelity, particularly for human faces. With its rich semantic space, many works have attempted to understand and control StyleGAN’s latent representations with the goal of performing image manipulations. To perform manipulations on real images, however, one must learn to “invert” the GAN and encode the image into StyleGAN’s latent space, which remains a challenge. In this talk, I will discuss recent techniques and advancements in GAN Inversion and explore their importance for real image editing applications. In addition, going beyond the inversion task, I will demonstrate how StyleGAN can be used for performing a wide range of image editing tasks.
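
The optimization-based flavor of GAN inversion discussed above can be sketched in a few lines of PyTorch: a latent code is optimized so that the generator's output matches a target image. The tiny `generator` below is a placeholder module; real methods invert a pretrained StyleGAN and typically add perceptual (e.g., LPIPS) losses and latent-space regularizers.

import torch

generator = torch.nn.Sequential(          # stand-in for a pretrained generator
    torch.nn.Linear(512, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 3 * 32 * 32)
)
target = torch.rand(3 * 32 * 32)           # the real image to invert (flattened)

w = torch.zeros(512, requires_grad=True)   # latent code to recover
opt = torch.optim.Adam([w], lr=0.05)

for step in range(300):
    loss = torch.nn.functional.mse_loss(generator(w), target)  # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

# `w` can now be edited (e.g., moved along semantic directions) and re-decoded
# to obtain a manipulated version of the original image.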

Speaker bio:

Daniel Cohen-Or is a professor in the School of Computer Science at Tel Aviv University. He received his B.Sc. cum laude in both mathematics and computer science (1985) and his M.Sc. cum laude in computer science (1986) from Ben-Gurion University, and his Ph.D. from the Department of Computer Science at the State University of New York at Stony Brook (1991). He has served on the editorial boards of a number of international journals and on the program committees of many international conferences. He received the Eurographics Outstanding Technical Contributions Award in 2005 and The People's Republic of China Friendship Award in 2013. In 2015 he was named a Thomson Reuters Highly Cited Researcher.

He received the ACM SIGGRAPH Computer Graphics Achievement Award in 2018, the Kadar Family Award for Outstanding Research in 2019, and the Eurographics Distinguished Career Award in 2020. His research interests are in computer graphics, in particular synthesis, processing, and modeling techniques.

Talk 2

Title:

Deep Face Generation and Editing with Sketches

Speaker:

Lin Gao
Lin Gao

Institute of Computing Technology, Chinese Academy of Sciences, China

Associate Professor

Abstract:

Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches. The tool is easy to use even for non-artists, while still supporting fine-grained control of shape details.

To gain more control over the generated results, one possible approach is to apply existing disentanglement methods to separate face images into geometry and appearance representations. We therefore propose DeepFaceEditing, a structured disentanglement framework specifically designed for face images, which supports face generation and editing with decoupled control of geometry and appearance. We exploit sketches to help extract a better geometry representation, which also supports intuitive geometry editing via sketching. The resulting method can either extract both the geometry and appearance representations from face images, or directly extract the geometry representation from face sketches. These representations allow users to easily edit and synthesize face images with decoupled control of their geometry and appearance.
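
The disentangled control described above can be pictured as two encoders feeding one decoder. The toy module below is only a schematic of that data flow; the names, layer choices, and 64x64 output are illustrative assumptions of this page, not the DeepFaceEditing architecture.

    # Schematic of geometry/appearance disentanglement: a geometry code from a
    # sketch and an appearance code from a photo are decoded jointly.
    import torch
    import torch.nn as nn

    class FaceSynthesizer(nn.Module):
        def __init__(self, dim=256):
            super().__init__()
            self.geom_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))  # sketch -> geometry
            self.app_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))   # photo -> appearance
            self.decoder = nn.Linear(2 * dim, 3 * 64 * 64)                   # codes -> image

        def forward(self, sketch, photo):
            g = self.geom_enc(sketch)   # editable via sketching
            a = self.app_enc(photo)     # swappable independently of g
            return self.decoder(torch.cat([g, a], dim=-1)).view(-1, 3, 64, 64)

Keeping g fixed while replacing a changes the appearance without altering the sketched geometry, which is the decoupled editing behavior the abstract describes.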

Speaker bio:

Lin Gao received his bachelor's degree in mathematics from Sichuan University and his PhD in computer science from Tsinghua University. He spent one year as a visiting professor at RWTH Aachen University. He is currently an Associate Professor at the Institute of Computing Technology, Chinese Academy of Sciences. He has been awarded a Newton Advanced Fellowship from the Royal Society (2019) and the Asia Graphics Association Young Researcher Award (2020). His research interests include computer graphics, geometric processing, and visual media computing.

AG Webinar Session 1

Date: Monday, August 30, 2021
Time: 11:00am UTC/GMT | 07:00pm (Beijing, Singapore) | 08:00pm (Seoul, Tokyo)
Chair: Ligang Liu, University of Science and Technology of China

Talk 1

Title:

Studies on 3D Reconstruction

Speaker:

Wenping Wang
Wenping Wang

Department of Visualization, Texas A&M University, USA

Professor, Department Head

Abstract:

In this talk on 3D reconstruction, I will first present a new pipeline for scanning and reconstructing sherds (i.e., ceramic fragments excavated at archeological sites) at high throughput, a long-standing problem in digitization for archeology. Existing image acquisition systems typically take several minutes to scan a single sherd, so they are impractical for field studies, which usually produce hundreds of sherds every day. Our image acquisition system is capable of scanning over a thousand pieces per day (in eight hours). The system is not only efficient but also portable, affordable, and accurate: the acquired images allow fast and accurate 3D reconstruction of the sherds, with an accuracy within 0.2 mm. The system was deployed in archeological fields in Armenia this summer and demonstrated the expected efficacy and robustness.

As the second topic, I will present a novel neural rendering method, called NeuS, for reconstructing 3D objects from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR [Niemeyer et al., 2020] and IDR [Yariv et al., 2020], fail for complex objects with severe self-occlusion because they are prone to getting stuck in local minima. Meanwhile, recent neural rendering methods for novel view synthesis, such as NeRF [Mildenhall et al., 2020] and its variants, use volume rendering to achieve more robust optimization, even for highly complex scenes; however, they cannot extract high-quality surfaces because they lack surface constraints. Inspired by NeRF, we introduce the missing surface constraints by representing a surface as the zero-level set of a signed distance function (SDF) and devise a new volume rendering method to learn this neural SDF representation. Extensive experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
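
The core of the weighting scheme is compact enough to sketch: SDF samples along a ray are pushed through a logistic CDF and converted to opacities, which are then alpha-composited. The snippet below follows the weight construction from the NeuS paper; the tensor shapes and epsilon guards are assumptions of this page.

    # NeuS-style conversion of SDF samples along a ray into rendering weights.
    # sdf: (num_rays, num_samples) signed distances at sorted sample points;
    # s:   sharpness of the logistic CDF (learned in the actual method).
    import torch

    def neus_weights(sdf, s):
        phi = torch.sigmoid(s * sdf)  # logistic CDF Phi_s applied to the SDF
        alpha = ((phi[:, :-1] - phi[:, 1:]) / (phi[:, :-1] + 1e-7)).clamp(min=0.0)
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-7], dim=-1),
            dim=-1)[:, :-1]           # accumulated transmittance along the ray
        return alpha * trans          # weights; color = (weights * colors).sum(dim=1)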

Speaker bio:

Prof. Wenping Wang conducts research in computer graphics, computer visualization, computer vision, robotics, medical image processing, and geometric modeling. He has published over 170 journal papers in these fields and given 40 invited talks at international conferences. Prof. Wang is or has been a journal associate editor of Computer Aided Geometric Design (CAGD), Computers & Graphics (CAG), Computer Graphics Forum (CGF), IEEE Transactions on Visualization and Computer Graphics (TVCG, 2008-2012), IEEE Transactions on Computers (TC), and IEEE Computer Graphics and Applications (CG&A), and has chaired 20 international conferences, including Pacific Graphics 2012, the ACM Symposium on Physical and Solid Modeling (SPM) 2013, SIGGRAPH Asia 2013, and the Geometry Summit 2019. Prof. Wang received the Outstanding Achievement Award in Computer Graphics of China in 2016 and the John Gregory Memorial Award for contributions in geometric modeling in 2017. He is the Founding Chairman of the Asia Graphics Association and an IEEE Fellow.

Talk 2

Title:

Interactive Design Optimization in Computational Fabrication

Speaker:

Nobuyuki Umetani
Nobuyuki Umetani

Creative Informatics Department, The University of Tokyo, Japan

Associate Professor

Abstract:

Designing functional 3D objects remains a time-consuming task: the designer needs to carefully optimize an object's performance, which often can be evaluated only through expensive simulation. Leveraging the power of machine learning, we can now drastically accelerate various kinds of simulations for 3D shape design: trained on prior real-world or simulated examples of existing shapes, a model can instantly predict simulation results for a novel input shape. In this talk, I will describe several interactive approaches that integrate physical simulation into geometric modeling to actively support creative design processes. The importance of interactivity in design systems will be discussed in various practical contexts, including structurally robust design, musical instrument design, garment design, electric circuit design, and aerodynamic design.
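
At its core, the acceleration the abstract describes is regression from design parameters to simulated performance: train offline on precomputed simulations, then answer design queries instantly. Below is a hedged sketch with a cheap synthetic stand-in for the expensive simulator; the two design parameters and the choice of regressor are assumptions of this page.

    # Surrogate-model sketch: learn to predict a simulation output from design
    # parameters. The "simulator" below is a synthetic stand-in; in practice it
    # would be an expensive FEM/CFD run.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def expensive_simulation(params):
        # Stand-in for, e.g., a drag or stiffness computation.
        return np.sin(params[:, 0]) + params[:, 1] ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 2))   # offline: sample the design space
    y = expensive_simulation(X)             # offline: run the slow simulator

    surrogate = GradientBoostingRegressor().fit(X, y)

    # Interactive phase: each design edit gets near-instant feedback.
    new_design = np.array([[0.3, -0.5]])
    print(surrogate.predict(new_design))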

Speaker bio:

Nobuyuki Umetani is an associate professor at the University of Tokyo. Previously, he was a research scientist at Autodesk Research, leading the Design and Fabrication group. The principal research question he addresses through his studies is how to integrate real-time physical simulation into interactive geometric modeling procedures to facilitate creativity. He is broadly interested in physics simulation, especially the finite element method, applied to computer animation, biomechanics, and mechanical engineering.


Playback of Previous Talks

Recordings of all previous talks are available on Youtube and Bilibili.


Contact

If you want to nominate a speaker or provide feedback, please contact us at asiagraphics.ag@gmail.com.