We are thrilled to announce an insightful seminar featuring cutting-edge research in the field of Computational Sciences. This event will be held in a hybrid format, providing both virtual and in-person attendance options to suit your preferences.
HPCC Special Seminar: Decoding Computational Challenges
Date and Time: Saturday, March 2 · 10:00 – 12:00 (time zone: Asia/Ho_Chi_Minh)
Location: Hybrid – in person at our iST Office, or via Google Meet
Video call link: https://meet.google.com/xac-taqa-ozt
Investigators: Quang Minh Nguyen, Hoang Huy Nguyen, Bao Thach
The Optimal Transport (OT) problem originated from the need to find the optimal cost of transporting mass from one distribution to another. However, standard OT rests on the restrictive assumption that the input measures are normalized to unit mass, which motivated the development of Unbalanced Optimal Transport (UOT) and Partial Optimal Transport (POT) between two measures of possibly different masses. Serving as geometrically meaningful and powerful metrics for comparing measures, these OT variants have found widespread applications in statistics and machine learning (ML), such as color transfer, graph neural networks, graph matching, partial covering, point set registration, and robust estimation. Nevertheless, this power has been both a blessing and a curse: OT-based metrics have been pervasively adopted to achieve competitive performance in various ML tasks for their flexibility, yet this comes with a heavy computational price tag. Consequently, accelerating the computation of OT variants has been an important and ongoing problem.
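To make the computational side concrete, below is a minimal NumPy sketch of the Sinkhorn algorithm, a standard entropic solver from this literature (not the investigators' specific method); the histograms, cost matrix, and regularization value are toy choices for illustration.

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, n_iters=1000):
    """Entropy-regularized OT between histograms a and b with cost matrix C.

    Returns a transport plan P whose row sums match a and column sums match b.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # scale columns toward marginal b
        u = a / (K @ v)           # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy example: transport between two 3-bin histograms on a line.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
x = np.arange(3.0)
C = (x[:, None] - x[None, :]) ** 2   # squared-distance cost
P = sinkhorn(a, b, C)
print(P.sum(axis=1))  # ≈ a
print(P.sum(axis=0))  # ≈ b
```

Standard (balanced) OT enforces both marginals exactly, which is precisely the unit-mass restriction that UOT and POT relax by penalizing or truncating the marginal constraints.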
In this talk, we first review OT, UOT, and POT in turn, together with their algorithmic solvers and applications to ML/AI, and present the investigators' recent contributions to the computational UOT and POT literature. Nevertheless, despite the variety of practical computational tools for these OT variants, especially in CPU settings, performance benchmarking of the parallelized versions of these solvers in GPU settings remains limited. Furthermore, the rapid growth of AI application scale leads to increasing interest in understanding and comparing the performance of different solvers at scale. The first and main goal of this research is thus to extensively benchmark state-of-the-art OT solvers in GPU settings and analyze their performance. In large-scale multi-GPU computation, one long-standing challenge is the potential failure of processors, which invalidates the whole algorithmic execution. Building on the recent success of coding theory in providing fault tolerance for distributed computation, the second goal of this research is to develop and/or apply coding strategies that augment the existing algorithms with an efficient resilience mechanism.
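As a concrete illustration of the coding-theoretic idea, here is a minimal NumPy sketch of coded matrix multiplication with a simple parity code: three workers compute block products, and any two results suffice to recover the full product. The worker names and the particular code are illustrative assumptions, not the strategies this research will necessarily adopt.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2))

# Split A row-wise and add one coded (parity) block: a simple (3, 2) code.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "parity": A1 + A2}

# Each of three workers computes its block-times-B product.
results = {name: blk @ B for name, blk in tasks.items()}

# Suppose worker "w2" fails: recover A2 @ B from the two surviving results.
del results["w2"]
A2B = results["parity"] - results["w1"]
AB = np.vstack([results["w1"], A2B])

print(np.allclose(AB, A @ B))  # the full product survives one worker failure
```

The same principle, with more sophisticated (e.g., MDS) codes, tolerates multiple failures at a small redundancy cost, which is what makes it attractive for long-running multi-GPU OT computations.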
Investigators: Bao Thach, Quang Minh Nguyen
Shape servoing, a robotic task dedicated to controlling objects toward desired goal shapes, is a promising approach to deformable object manipulation. A key obstacle, however, is its reliance on the specification of a goal shape. This goal has been obtained either through a laborious domain knowledge engineering process or by manually manipulating the object into the desired shape and capturing the goal shape at that moment, both of which are impractical in many robotic applications.
In this project, we aim to solve this problem by developing a novel neural network that learns deformable object goal shapes directly from human demonstrations. A promising research direction involves the use of diffusion probabilistic models, which have been widely utilized in the computer vision community for generating 3D point clouds. Drawing inspiration from the diffusion process in nonequilibrium thermodynamics, we conceptualize point clouds as particles within a thermodynamic system interacting with a heat bath. This interaction leads the particles to move from their initial distribution towards a noise distribution. Consequently, the essence of point cloud generation lies in learning the reverse diffusion process, which converts the noise distribution back to the original distribution that mirrors the desired deformable object goal shapes. We plan to demonstrate the effectiveness of our method across various robotic tasks, both in simulation and on a physical robot.
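The thermodynamic analogy above can be sketched numerically. Below is a minimal NumPy illustration of the closed-form forward (noising) process used in standard denoising diffusion models, applied to a toy point cloud; the sphere "goal shape," the variance schedule, and all constants are illustrative assumptions, and the learned reverse process the project targets is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "goal shape": N points on the unit sphere, standing in for an object's
# point cloud captured from a human demonstration.
N = 1024
x0 = rng.standard_normal((N, 3))
x0 /= np.linalg.norm(x0, axis=1, keepdims=True)

# Linear variance schedule beta_1..beta_T, as in standard diffusion models.
T = 100
betas = np.linspace(1e-4, 0.2, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x_mid = q_sample(x0, T // 2)   # partially noised cloud
x_T = q_sample(x0, T - 1)      # nearly pure Gaussian noise

# As t -> T, alpha_bar -> 0: the particles forget the shape and approach N(0, I).
print(alpha_bar[-1])
```

Point cloud generation then amounts to learning a network that inverts this process step by step, mapping samples of the noise distribution back to plausible goal shapes.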
Investigators: Quang Truong, Bao Thach
Graph Neural Networks (GNNs) offer powerful tools for graph representation learning, but their expressivity is intrinsically limited by the 1-Weisfeiler-Lehman (1-WL) isomorphism test. Although there are other GNNs upper-bounded by the k-WL test, their computational demands often hinder practical application. Topological Deep Learning (TDL), an emergent subfield of Geometric Deep Learning, presents a promising alternative for balancing graph expressivity with computational efficiency. Recent studies demonstrate the potential of TDL models, which are provably strictly more powerful than the 1-WL test and no less powerful than the 3-WL test. TDL’s core innovation lies in generalizing the conventional message-passing mechanism through topological concepts. In the projects below, we investigate the capability of TDL on other tasks while maintaining a feasible computational complexity.
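To make the 1-WL limitation concrete, here is a minimal pure-Python sketch of the 1-WL test (color refinement), together with a classic pair of non-isomorphic graphs it cannot distinguish; the adjacency-dict encoding and hashing scheme are illustrative choices.

```python
from collections import Counter

def wl_colors(adj, n_iters=3):
    """1-WL color refinement: return the multiset of final node colors."""
    colors = {v: 0 for v in adj}                    # uniform initial coloring
    for _ in range(n_iters):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())

# A 6-cycle and two disjoint triangles: non-isomorphic, yet both 2-regular,
# so 1-WL assigns them identical color multisets.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(triangles))  # True: 1-WL cannot tell them apart
```

Any standard message-passing GNN inherits this blind spot, whereas higher-order (e.g., topological) models can separate such pairs, which is the expressivity gap TDL exploits.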
Yet, to date, the potential of TDL models has been primarily explored within the domain of graph classification; their efficacy in other tasks remains an open area of inquiry. As the design of GNNs for node and link prediction poses certain challenges, a key research question is whether topological models yield performance gains in these tasks by leveraging higher-order information.
Another research direction focuses on reducing computational complexity. The computational expense of TDL models, stemming from graph lifting and higher-order message passing, motivates the exploration of knowledge distillation from TDL models to GNNs. This would yield a smaller model that can be flexibly applied to downstream tasks. The distillation could follow either:
– Student-Teacher Paradigms: transfer higher-order knowledge captured by TDL ‘teachers’ to GNN ‘students’.
– Gradient Matching Approaches: optimize GNNs to emulate representational spaces induced by TDL models.
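The student-teacher paradigm above can be sketched with the standard soft-target distillation loss: a KL divergence between temperature-softened teacher and student predictions. The NumPy code below is a generic illustration of that loss, not a committed design for the TDL-to-GNN setting; the toy per-node logits are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class distributions.

    This is the soft-target term a GNN 'student' would minimize to mimic a
    TDL 'teacher'; in practice it is added to the usual task loss.
    """
    p = softmax(teacher_logits, T)          # teacher's softened distribution
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)) * T * T)

rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 4))       # toy per-node class logits
print(distill_loss(teacher, teacher))       # 0.0 when the student matches exactly
print(distill_loss(rng.standard_normal((8, 4)), teacher) > 0)
```

The gradient-matching alternative would instead align the students' and teachers' parameter gradients or representation geometry rather than their output distributions.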
To attend this engaging seminar, join our Google Meet using your BKNetID (HCMUT email) for virtual participation, or visit our iST Office to take part in person.
Don’t miss the opportunity to be part of this intellectually stimulating event, where experts from diverse backgrounds converge to discuss the forefront of computational sciences!