Sifei Liu

sifeil@nvidia.com

NVIDIA LPR
NVIDIA




https://researchid.co/sifeil

EDUCATION

PhD, University of California, Merced

RESEARCH INTERESTS

Computer Vision, Machine Learning

Scopus Publications: 55
Scholar Citations: 6482
Scholar h-index: 31
Scholar i10-index: 50

SCOPUS PUBLICATIONS

  • Zero-shot Pose Transfer for Unrigged Stylized 3D Characters
    Jiashun Wang, Xueting Li, Sifei Liu, Shalini De Mello, Orazio Gallo, Xiaolong Wang, and Jan Kautz

    IEEE
    Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it does not need rigging, nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT/

  • Self-Supervised Super-Plane for Neural 3D Reconstruction
    Botao Ye, Sifei Liu, Xueting Li, and Ming-Hsuan Yang

    IEEE
    Neural implicit surface representation methods show impressive reconstruction results but struggle to handle texture-less planar regions that widely exist in indoor scenes. Existing approaches that address this leverage image priors, which require assistive networks trained with large-scale annotated datasets. In this work, we introduce a self-supervised super-plane constraint by exploring the free geometry cues from the predicted surface, which can further regularize the reconstruction of plane regions without any other ground truth annotations. Specifically, we introduce an iterative training scheme in which (i) grouping pixels into super-planes (analogous to super-pixels) and (ii) optimizing the scene reconstruction network via a super-plane constraint are conducted progressively. We demonstrate that the model trained with super-planes surprisingly outperforms the one using conventional annotated planes, as each super-plane statistically occupies a larger area and leads to more stable training. Extensive experiments show that our self-supervised super-plane constraint significantly improves 3D reconstruction quality, even more than using ground truth plane segmentation. Additionally, the plane reconstruction results from our model can be used for auto-labeling for other vision tasks. The code and models are available at https://github.com/botaoye/S3PRecon.

  • Affordance Diffusion: Synthesizing Hand-Object Interactions
    Yufei Ye, Xueting Li, Abhinav Gupta, Shalini De Mello, Stan Birchfield, Jiaming Song, Shubham Tulsiani, and Sifei Liu

    IEEE
    Recent successes in image synthesis are powered by large-scale diffusion models. However, most methods are currently limited to either text- or image-conditioned generation for synthesizing an entire image, texture transfer or inserting objects into a user-specified region. In contrast, in this work we focus on synthesizing complex interactions (i.e., an articulated hand) with a given object. Given an RGB image of an object, we aim to hallucinate plausible images of a human hand interacting with it. We propose a two-step generative approach: a LayoutNet that samples an articulation-agnostic hand-object-interaction layout, and a ContentNet that synthesizes images of a hand grasping the object given the predicted layout. Both are built on top of a large-scale pretrained diffusion model to make use of its latent representation. Compared to baselines, the proposed method is shown to generalize better to novel objects and perform surprisingly well on out-of-distribution in-the-wild scenes of portable-sized objects. The resulting system allows us to predict descriptive affordance information, such as hand articulation and approaching orientation.

  • Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models
    Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, and Shalini De Mello

    IEEE
    We present ODISE: Open-vocabulary DIffusion-based panoptic SEgmentation, which unifies pre-trained text-image diffusion and discriminative models to perform open-vocabulary panoptic segmentation. Text-to-image diffusion models have the remarkable ability to generate high-quality images with diverse open-vocabulary language descriptions. This demonstrates that their internal representation space is highly correlated with open concepts in the real world. Text-image discriminative models like CLIP, on the other hand, are good at classifying images into open-vocabulary labels. We leverage the frozen internal representations of both these models to perform panoptic segmentation of any category in the wild. Our approach outperforms the previous state of the art by significant margins on both open-vocabulary panoptic and semantic segmentation tasks. In particular, with COCO training only, our method achieves 23.4 PQ and 30.0 mIoU on the ADE20K dataset, with 8.3 PQ and 7.9 mIoU absolute improvement over the previous state of the art. We open-source our code and models at https://github.com/NVlabs/ODISE.

  • Deblurring Dynamic Scenes via Spatially Varying Recurrent Neural Networks
    Wenqi Ren, Jiawei Zhang, Jinshan Pan, Sifei Liu, Jimmy S. J. Ren, Junping Du, Xiaochun Cao and Ming-Hsuan Yang


    Deblurring images captured in dynamic scenes is challenging because the motion blur is spatially varying, caused by camera shake and object movement. In this paper, we propose a spatially varying neural network to deblur dynamic scenes. The proposed model is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the spatially varying weights for the RNN. As a result, the RNN is spatially aware and can implicitly model the deblurring process with spatially varying kernels. To better exploit the properties of the spatially varying RNN, we develop both one-dimensional and two-dimensional RNNs for deblurring. The third component, based on a CNN, reconstructs the final deblurred feature maps into a restored image. In addition, the whole network is end-to-end trainable. Quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against the state-of-the-art deblurring algorithms.
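
A minimal sketch of the spatially varying recurrence described in the entry above, for illustration only: a left-to-right 1D scan whose per-pixel recurrence weights come from a small CNN. The layer sizes, the single scan direction, and the gating form are assumptions for the sketch, not the authors' released architecture (which uses several scan directions plus CNN encoder/decoder stages).

```python
import torch
import torch.nn as nn

class SpatiallyVaryingRNN1D(nn.Module):
    """Left-to-right recurrent filter with per-pixel weights predicted by a CNN.
    Hypothetical layer sizes; a sketch of the idea, not the paper's model."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # CNN that predicts one recurrence weight per pixel and channel,
        # squashed to (0, 1) so the recursion stays stable.
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature maps produced by an encoder CNN.
        w = self.weight_net(feats)           # per-pixel recurrence weights
        h = feats[..., 0]                    # initialize with the first column
        outputs = [h]
        for x in range(1, feats.shape[-1]):  # scan left -> right
            h = w[..., x] * h + (1.0 - w[..., x]) * feats[..., x]
            outputs.append(h)
        return torch.stack(outputs, dim=-1)  # (B, C, H, W), recursively filtered

if __name__ == "__main__":
    out = SpatiallyVaryingRNN1D(channels=16)(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```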

  • Correction to: Learning Contrastive Representation for Semantic Correspondence (International Journal of Computer Vision, (2022), 10.1007/s11263-022-01602-y)
    Taihong Xiao, Sifei Liu, Shalini De Mello, Zhiding Yu, Jan Kautz, and Ming-Hsuan Yang

    Springer Science and Business Media LLC

  • Learning Contrastive Representation for Semantic Correspondence
    Taihong Xiao, Sifei Liu, Shalini De Mello, Zhiding Yu, Jan Kautz, and Ming-Hsuan Yang

    Springer Science and Business Media LLC
    Dense correspondence across semantically related images has been extensively studied, but still faces two challenges: 1) large variations in appearance, scale and pose exist even for objects from the same category, and 2) labeling pixel-level dense correspondences is labor intensive and infeasible to scale. Most existing methods focus on designing various matching modules using fully-supervised ImageNet pretrained networks. On the other hand, while a variety of self-supervised approaches are proposed to explicitly measure image-level similarities, correspondence matching at the pixel level remains under-explored. In this work, we propose a multi-level contrastive learning approach for semantic matching, which does not rely on any ImageNet pretrained model. We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects, while the performance can be further enhanced by regularizing cross-instance cycle-consistency at intermediate feature levels. Experimental results on the PF-PASCAL, PF-WILLOW, and SPair-71k benchmark datasets demonstrate that our method performs favorably against the state-of-the-art approaches. The source code and trained models will be made available to the public.

  • Learning Continuous Environment Fields via Implicit Functions


  • Autoregressive 3D Shape Generation via Canonical Mapping
    An-Chieh Cheng, Xueting Li, Sifei Liu, Min Sun, and Ming-Hsuan Yang

    Springer Nature Switzerland

  • Scraping Textures from Natural Images for Synthesis and Editing
    Xueting Li, Xiaolong Wang, Ming-Hsuan Yang, Alexei A. Efros, and Sifei Liu

    Springer Nature Switzerland

  • CoordGAN: Self-Supervised Dense Correspondences Emerge from GANs
    Jiteng Mu, Shalini De Mello, Zhiding Yu, Nuno Vasconcelos, Xiaolong Wang, Jan Kautz, and Sifei Liu

    IEEE
    Recent advances show that Generative Adversarial Networks (GANs) can synthesize images with smooth variations along semantically meaningful latent directions, such as pose, expression, layout, etc. While this indicates that GANs implicitly learn pixel-level correspondences across images, few studies explored how to extract them explicitly. In this work, we introduce Coordinate GAN (CoordGAN), a structure-texture disentangled GAN that learns a dense correspondence map for each generated image. We represent the correspondence maps of different images as warped coordinate frames transformed from a canonical coordinate frame, i.e., the correspondence map, which describes the structure (e.g., the shape of a face), is controlled via a transformation. Hence, finding correspondences boils down to locating the same coordinate in different correspondence maps. In CoordGAN, we sample a transformation to represent the structure of a synthesized instance, while an independent texture branch is responsible for rendering appearance details orthogonal to the structure. Our approach can also extract dense correspondence maps for real images by adding an encoder on top of the generator. We quantitatively demonstrate the quality of the learned dense correspondences through segmentation mask transfer on multiple datasets. We also show that the proposed generator achieves better structure and texture disentanglement compared to existing approaches. Project page: https://jitengmu.github.io/CoordGAN/
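
The correspondence lookup described above ("locating the same coordinate in different correspondence maps") can be written in a few lines. A hedged sketch, assuming each correspondence map is an (H, W, 2) tensor of canonical coordinates; the layout and the brute-force nearest-neighbor search are illustrative, not CoordGAN's released code.

```python
import torch

def find_correspondence(corr_a: torch.Tensor, corr_b: torch.Tensor,
                        pixel_a: tuple) -> tuple:
    """Return the pixel in image B whose canonical coordinate is closest to
    that of the query pixel in image A. corr_a, corr_b: (H, W, 2) maps."""
    query = corr_a[pixel_a[0], pixel_a[1]]      # canonical coordinate of the query
    dist = ((corr_b - query) ** 2).sum(dim=-1)  # (H, W) squared distances
    idx = int(torch.argmin(dist))               # flat index of the best match
    return divmod(idx, corr_b.shape[1])         # (row, column) in image B

if __name__ == "__main__":
    corr_a, corr_b = torch.rand(64, 64, 2), torch.rand(64, 64, 2)
    print(find_correspondence(corr_a, corr_b, (10, 20)))
```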

  • GroupViT: Semantic Segmentation Emerges from Text Supervision
    Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, and Xiaolong Wang

    IEEE
    Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision. We open-source our code at https://github.com/NVlabs/GroupViT.

  • Contrastive Syn-to-Real Generalization


  • Hierarchical Contrastive Motion Learning for Video Action Recognition


  • Coupled Segmentation and Edge Learning via Dynamic Graph Propagation


  • Learning 3D Dense Correspondence via Canonical Point Autoencoder


  • Video Matting via Consistency-Regularized Graph Neural Networks
    Tiantian Wang, Sifei Liu, Yapeng Tian, Kai Li, and Ming-Hsuan Yang

    IEEE
    Learning temporally consistent foreground opacity from videos, i.e., video matting, has drawn great attention due to the blossoming of video conferencing. Previous approaches are built on top of image matting models, which fail to maintain temporal coherence when adapted to videos. They either utilize optical flow to smooth frame-wise predictions, where the performance depends on the selected optical flow model, or naively combine feature maps from multiple frames, which does not model well the correspondence of pixels in adjacent frames. In this paper, we propose to enhance the temporal coherence by Consistency-Regularized Graph Neural Networks (CRGNN) with the aid of a synthesized video matting dataset. CRGNN utilizes Graph Neural Networks (GNN) to relate adjacent frames such that pixels or regions that are incorrectly predicted in one frame can be corrected by leveraging information from its neighboring frames. To generalize our model from synthesized videos to real-world videos, we propose a consistency regularization technique to enforce consistency on the alpha and foreground when blending them with different backgrounds. To evaluate the efficacy of CRGNN, we further collect a real-world dataset with annotated alpha mattes. Compared with state-of-the-art methods that require hand-crafted trimaps or backgrounds for model training, CRGNN generates favorable results with the help of an unlabeled real training dataset. The source code and datasets are available at https://github.com/TiantianWang/VideoMattingCRGNN.git.

  • Video Autoencoder: self-supervised disentanglement of static 3D structure and motion
    Zihang Lai, Sifei Liu, Alexei A. Efros, and Xiaolong Wang

    IEEE
    A video autoencoder is proposed for learning disentangled representations of 3D structure and camera pose from videos in a self-supervised manner. Relying on temporal continuity in videos, our work assumes that the 3D scene structure in nearby video frames remains static. Given a sequence of video frames as input, the video autoencoder extracts a disentangled representation of the scene including: (i) a temporally-consistent deep voxel feature to represent the 3D structure and (ii) a 3D trajectory of camera pose for each frame. These two representations will then be re-entangled for rendering the input video frames. This video autoencoder can be trained directly using a pixel reconstruction loss, without any ground truth 3D or camera pose annotations. The disentangled representation can be applied to a range of tasks, including novel view synthesis, camera pose estimation, and video generation by motion following. We evaluate our method on several large-scale natural video datasets, and show generalization results on out-of-domain images. Project page with code: https://zlai0.github.io/VideoAutoencoder.

  • Learning Continuous Image Representation with Local Implicit Image Function
    Yinbo Chen, Sifei Liu and Xiaolong Wang


    How to represent an image? While the visual world is presented in a continuous manner, machines store and see images in a discrete way with 2D arrays of pixels. In this paper, we seek to learn a continuous representation for images. Inspired by the recent progress in 3D reconstruction with implicit neural representation, we propose the Local Implicit Image Function (LIIF), which takes an image coordinate and the 2D deep features around the coordinate as inputs and predicts the RGB value at that coordinate as output. Since the coordinates are continuous, LIIF can be presented in arbitrary resolution. To generate the continuous representation for images, we train an encoder with LIIF representation via a self-supervised task with super-resolution. The learned continuous representation can be presented in arbitrary resolution, even extrapolating to ×30 higher resolution, where the training tasks are not provided. We further show that LIIF representation builds a bridge between discrete and continuous representation in 2D; it naturally supports learning tasks with size-varied image ground-truths and significantly outperforms the alternative of resizing the ground-truths. Our project page with code is at https://yinboc.github.io/liif/.
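
As a rough illustration of the LIIF decoding step described above, here is a short PyTorch sketch: an MLP maps a locally sampled deep feature plus the query's offset from that feature's cell center to an RGB value, so the image can be queried at any continuous coordinate. The layer sizes, nearest-cell sampling, and coordinate convention are assumptions for the sketch, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicitImageFunction(nn.Module):
    """Sketch of LIIF-style decoding: (local feature, relative coord) -> RGB.
    Hypothetical sizes; not the authors' released model."""

    def __init__(self, feat_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat_map: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) features from any encoder.
        # coords:   (B, N, 2) continuous query coordinates in [-1, 1], ordered (x, y).
        B, C, H, W = feat_map.shape
        # Sample the nearest latent code for each query coordinate.
        grid = coords.view(B, 1, -1, 2)                     # (B, 1, N, 2)
        feat = F.grid_sample(feat_map, grid, mode="nearest",
                             align_corners=False)           # (B, C, 1, N)
        feat = feat.squeeze(2).permute(0, 2, 1)             # (B, N, C)
        # Offset between the query and the center of its nearest feature cell.
        cell = torch.tensor([2.0 / W, 2.0 / H], device=coords.device)
        centers = (torch.floor((coords + 1.0) / cell) + 0.5) * cell - 1.0
        rel = coords - centers                               # (B, N, 2)
        return self.mlp(torch.cat([feat, rel], dim=-1))      # (B, N, 3) RGB

if __name__ == "__main__":
    rgb = LocalImplicitImageFunction(feat_dim=64)(
        torch.randn(1, 64, 48, 48), torch.rand(1, 1000, 2) * 2.0 - 1.0)
    print(rgb.shape)  # torch.Size([1, 1000, 3])
```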

  • Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
    Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang


    Synthesizing 3D human motion plays an important role in many graphics applications as well as understanding human activity. While many efforts have been made on generating realistic and natural human motion, most approaches neglect the importance of modeling human-scene interactions and affordance. On the other hand, affordance reasoning (e.g., standing on the floor or sitting on the chair) has mainly been studied with static human pose and gestures, and it has rarely been addressed with human motion. In this paper, we propose to bridge human motion synthesis and scene affordance reasoning. We present a hierarchical generative framework to synthesize long-term 3D human motion conditioning on the 3D scene structure. Building on this framework, we further enforce multiple geometry constraints between the human mesh and scene point clouds via optimization to improve realistic synthesis. Our experiments show significant improvements over previous approaches on generating natural and physically plausible human motion in a scene.

  • Learning to Track Instances without Video Annotations
    Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Humphrey Shi, and Jan Kautz


    Tracking segmentation masks of multiple instances has been intensively studied, but still faces two fundamental challenges: 1) the requirement of large-scale, frame-wise annotation, and 2) the complexity of two-stage approaches. To resolve these challenges, we introduce a novel semi-supervised framework by learning instance tracking networks with only a labeled image dataset and unlabeled video sequences. With an instance contrastive objective, we learn an embedding to discriminate each instance from the others. We show that even when only trained with images, the learned feature representation is robust to instance appearance variations, and is thus able to track objects steadily across frames. We further enhance the tracking capability of the embedding by learning correspondence from unlabeled videos in a self-supervised manner. In addition, we integrate this module into single-stage instance segmentation and pose estimation frameworks, which significantly reduces the computational complexity of tracking compared to two-stage networks. We conduct experiments on the YouTube-VIS and PoseTrack datasets. Without any video annotation efforts, our proposed method can achieve comparable or even better performance than most fully-supervised methods.

  • Self-Supervised Object Detection via Generative Image Synthesis
    Siva Karthik Mustikovela, Shalini De Mello, Aayush Prakash, Umar Iqbal, Sifei Liu, Thu Nguyen-Phuoc, Carsten Rother, and Jan Kautz

    IEEE
    We present SSOD – the first end-to-end analysis-by-synthesis framework with controllable GANs for the task of self-supervised object detection. We use collections of real-world images without bounding box annotations to learn to synthesize and detect objects. We leverage controllable GANs to synthesize images with pre-defined object properties and use them to train object detectors. We propose a tight end-to-end coupling of the synthesis and detection networks to optimally train our system. Finally, we also propose a method to optimally adapt SSOD to an intended target data without requiring labels for it. For the task of car detection, on the challenging KITTI and Cityscapes datasets, we show that SSOD outperforms the prior state-of-the-art purely image-based self-supervised object detection method Wetectron. Even without requiring any 3D CAD assets, it also surpasses the state-of-the-art rendering-based method Meta-Sim2. Our work advances the field of self-supervised object detection by introducing a successful new paradigm of using controllable GAN-based image synthesis for it and by significantly improving the baseline accuracy of the task. We open-source our code at https://github.com/NVlabs/SSOD.

  • Semi-supervised 3D hand-object poses estimation with interactions in time
    Shaowei Liu, Hanwen Jiang, Jiarui Xu, Sifei Liu and Xiaolong Wang


    Estimating 3D hand and object pose from a single image is an extremely challenging problem: hands and objects are often self-occluded during interactions, and the 3D annotations are scarce as even humans cannot directly label the ground-truths from a single image perfectly. To tackle these challenges, we propose a unified framework for estimating the 3D hand and object poses with semi-supervised learning. We build a joint learning framework where we perform explicit contextual reasoning between hand and object representations. Going beyond limited 3D annotations in a single image, we leverage the spatial-temporal consistency in large-scale hand-object videos as a constraint for generating pseudo labels in semi-supervised learning. Our method not only improves hand pose estimation on a challenging real-world dataset, but also substantially improves object pose estimation, which has fewer ground-truths per instance. By training with large-scale diverse videos, our model also generalizes better across multiple out-of-domain datasets. Project page and code: https://stevenlsw.github.io/Semi-Hand-Object

  • Regularizing Meta-learning via Gradient Dropout
    Hung-Yu Tseng, Yi-Wen Chen, Yi-Hsuan Tsai, Sifei Liu, Yen-Yu Lin and Ming-Hsuan Yang


    With the growing attention on learning-to-learn new tasks using only a few examples, meta-learning has been widely used in numerous problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are insufficient training tasks for the meta-learners to generalize. Although existing approaches such as Dropout are widely used to address the overfitting problem, these methods are typically designed for regularizing models of a single task in supervised training. In this paper, we introduce a simple yet effective method to alleviate the risk of overfitting for gradient-based meta-learning. Specifically, during the gradient-based adaptation stage, we randomly drop the gradient in the inner-loop optimization of each parameter in deep neural networks, such that the augmented gradients improve generalization to new tasks. We present a general form of the proposed gradient dropout regularization and show that this term can be sampled from either the Bernoulli or Gaussian distribution. To validate the proposed method, we conduct extensive experiments and analysis on numerous computer vision tasks, demonstrating that the gradient dropout regularization mitigates the overfitting problem and improves the performance upon various gradient-based meta-learning frameworks.
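
The core operation described above (randomly dropping inner-loop gradients with a Bernoulli or Gaussian mask) fits in a few lines. A minimal sketch, assuming a MAML-style inner loop; the function name, masking details, and the clamping of the Gaussian variant are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def gradient_dropout(params, drop_prob: float = 0.1, mode: str = "bernoulli"):
    """Mask the already-computed gradients of `params` in place.
    Sketch only: call between loss.backward() and the inner-loop update."""
    for p in params:
        if p.grad is None:
            continue
        if mode == "bernoulli":
            # Keep each gradient entry with probability (1 - drop_prob).
            mask = torch.bernoulli(torch.full_like(p.grad, 1.0 - drop_prob))
        else:
            # Gaussian variant: a soft keep-mask centred at (1 - drop_prob).
            mask = torch.clamp(
                torch.randn_like(p.grad) * drop_prob + (1.0 - drop_prob), 0.0, 1.0)
        p.grad.mul_(mask)

# Hypothetical usage inside one inner-loop adaptation step:
#   inner_loss.backward()
#   gradient_dropout(model.parameters(), drop_prob=0.1, mode="bernoulli")
#   inner_optimizer.step()
```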

  • Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning
    Xiang Wang, Sifei Liu, Huimin Ma and Ming-Hsuan Yang


    Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training. Recent methods have exploited classification networks to localize objects by selecting regions with strong response. While such response maps provide only sparse information, there exist strong pairwise relations between pixels in natural images, which can be utilized to propagate the sparse map to a much denser one. In this paper, we propose an iterative algorithm to learn such pairwise relations, which consists of two branches: a unary segmentation network that learns the label probabilities for each pixel, and a pairwise affinity network that learns an affinity matrix and refines the probability map generated from the unary network. The refined results by the pairwise network are then used as supervision to train the unary network, and the procedures are conducted iteratively to obtain better segmentation progressively. To learn reliable pixel affinity without accurate annotation, we also propose to mine confident regions. We show that iteratively training this framework is equivalent to optimizing an energy function with convergence to a local minimum. Experimental results on the PASCAL VOC 2012 and COCO datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods.
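
To show the refinement step above in code form, here is a heavily simplified sketch in which a learned affinity matrix is row-normalized and used to propagate the unary network's label probabilities into a denser map; the shapes and the propagation rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def refine_with_affinity(prob: torch.Tensor, affinity: torch.Tensor,
                         num_iters: int = 3) -> torch.Tensor:
    """Propagate per-pixel label probabilities through a pairwise affinity.
    prob: (N, K) probabilities for N pixels and K classes; affinity: (N, N) >= 0."""
    # Row-normalize so each pixel aggregates a convex combination of its neighbors.
    trans = affinity / affinity.sum(dim=1, keepdim=True).clamp(min=1e-8)
    for _ in range(num_iters):
        prob = trans @ prob  # denser, smoother probability map
    return prob

if __name__ == "__main__":
    prob = torch.rand(100, 21)        # e.g. 21 PASCAL VOC classes
    affinity = torch.rand(100, 100)
    print(refine_with_affinity(prob, affinity).shape)  # torch.Size([100, 21])
```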

RECENT SCHOLAR PUBLICATIONS

  • RegionGPT: Towards Region Understanding Vision Language Model
    Q Guo, S De Mello, H Yin, W Byeon, KC Cheung, Y Yu, P Luo, S Liu
    arXiv preprint arXiv:2403.02330 2024

  • Pose Transfer for Three-Dimensional Characters Using a Learned Shape Code
    X Li, S Liu, S De Mello, O Gallo, J Wang, J Kautz
    US Patent App. 18/110,287 2024

  • Learning and propagating visual attributes
    S Liu, S De Mello, V Jampani, J Kautz, X Li
    US Patent 11,907,846 2024

  • Generalizable One-shot 3D Neural Head Avatar
    X Li, S De Mello, S Liu, K Nagano, U Iqbal, J Kautz
    Advances in Neural Information Processing Systems 36 2024

  • RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos
    H Xia, Y Fu, S Liu, X Wang
    arXiv preprint arXiv:2401.12592 2024

  • Three-dimensional object reconstruction from a video
    X Li, S Liu, K Kim, S De Mello, J Kautz
    US Patent 11,880,927 2024

  • Agg: Amortized generative 3d gaussians for single image to 3d
    D Xu, Y Yuan, M Mardani, S Liu, J Song, Z Wang, A Vahdat
    arXiv preprint arXiv:2401.04099 2024

  • Colmap-free 3d gaussian splatting
    Y Fu, S Liu, A Kulkarni, J Kautz, AA Efros, X Wang
    arXiv preprint arXiv:2312.07504 2023

  • A unified approach for text-and image-guided 4d scene generation
    Y Zheng, X Li, K Nagano, S Liu, O Hilliges, S De Mello
    arXiv preprint arXiv:2311.16854 2023

  • Image processing using coupled segmentation and edge learning
    Z Yu, R Huang, W Byeon, S Liu, G Liu, T Breuel, A Anandkumar, J Kautz
    US Patent 11,790,633 2023

  • 3D Reconstruction with Generalizable Neural Fields using Scene Priors
    Y Fu, S De Mello, X Li, A Kulkarni, J Kautz, X Wang, S Liu
    arXiv preprint arXiv:2309.15164 2023

  • Segmentation using an unsupervised neural network training technique
    V Jampani, WC Hung, S Liu, P Molchanov, J Kautz
    US Patent 11,748,887 2023

  • Learning dense correspondences for images
    S Liu, J Mu, S De Mello, Z Yu, J Kautz
    US Patent App. 17/929,182 2023

  • Three-dimensional object reconstruction from a video
    X Li, S Liu, K Kim, S De Mello, J Kautz
    US Patent 11,704,857 2023

  • Performing semantic segmentation training with image/text pairs
    J Xu, S De Mello, S Liu, W Byeon, T Breuel, J Kautz
    US Patent App. 17/853,631 2023

  • Tuvf: Learning generalizable texture uv radiance fields
    AC Cheng, X Li, S Liu, X Wang
    arXiv preprint arXiv:2305.03040 2023

  • Learning contrastive representation for semantic correspondence
    T Xiao, S Liu, S De Mello, Z Yu, J Kautz
    US Patent App. 17/412,091 2023

  • Self-supervised hierarchical motion learning for video action recognition
    X Yang, X Yang, S Liu, J Kautz
    US Patent 11,594,006 2023

  • Training object detection systems with generated images
    SK Mustikovela, S De Mello, A Prakash, U Iqbal, S Liu, J Kautz
    US Patent App. 17/361,202 2023

  • Self-supervised super-plane for neural 3d reconstruction
    B Ye, S Liu, X Li, MH Yang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 2023

MOST CITED SCHOLAR PUBLICATIONS

  • A face antispoofing database with diverse attacks
    Z Zhang, J Yan, S Liu, Z Lei, D Yi, SZ Li
    2012 5th IAPR international conference on Biometrics (ICB), 26-31 2012
    Citations: 882

  • Generative face completion
    Y Li, S Liu, J Yang, MH Yang
    Proceedings of the IEEE conference on computer vision and pattern 2017
    Citations: 743

  • Learning continuous image representation with local implicit image function
    Y Chen, S Liu, X Wang
    Proceedings of the IEEE/CVF conference on computer vision and pattern 2021
    Citations: 491

  • Low-light image enhancement via a deep hybrid network
    W Ren, S Liu, L Ma, Q Xu, X Xu, X Cao, J Du, MH Yang
    IEEE Transactions on Image Processing 28 (9), 4364-4375 2019
    Citations: 402

  • Groupvit: Semantic segmentation emerges from text supervision
    J Xu, S De Mello, S Liu, W Byeon, T Breuel, J Kautz, X Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 2022
    Citations: 319

  • Learning affinity via spatial propagation networks
    S Liu, S De Mello, J Gu, G Zhong, MH Yang, J Kautz
    Advances in Neural Information Processing Systems 30 2017
    Citations: 280

  • Deep cascaded bi-network for face hallucination
    S Zhu, S Liu, CC Loy, X Tang
    Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The 2016
    Citations: 274

  • Learning linear transformations for fast image and video style transfer
    X Li, S Liu, J Kautz, MH Yang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 2019
    Citations: 236

  • Learning dual convolutional neural networks for low-level vision
    J Pan, S Liu, D Sun, J Zhang, Y Liu, J Ren, Z Li, J Tang, H Lu, YW Tai, ...
    Proceedings of the IEEE conference on computer vision and pattern 2018
    Citations: 208

  • Open-vocabulary panoptic segmentation with text-to-image diffusion models
    J Xu, S Liu, A Vahdat, W Byeon, X Wang, S De Mello
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 2023
    Citations: 189

  • Learning recursive filters for low-level vision via a hybrid neural network
    S Liu, J Pan, MH Yang
    Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The 2016
    Citations: 178

  • Structured face hallucination
    CY Yang, S Liu, MH Yang
    Proceedings of the IEEE conference on computer vision and pattern 2013
    Citations: 172

  • Joint-task self-supervised learning for temporal correspondence
    X Li, S Liu, S De Mello, X Wang, J Kautz, MH Yang
    Advances in Neural Information Processing Systems 32 2019
    Citations: 160

  • Multi-objective convolutional learning for face labeling
    S Liu, J Yang, C Huang, MH Yang
    Proceedings of the IEEE conference on computer vision and pattern 2015
    Citations: 158

  • Self-supervised single-view 3d reconstruction via semantic consistency
    X Li, S Liu, K Kim, S De Mello, V Jampani, MH Yang, J Kautz
    Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 2020
    Citations: 147

  • Unsupervised domain adaptation for face recognition in unlabeled videos
    K Sohn, S Liu, G Zhong, X Yu, MH Yang, M Chandraker
    Proceedings of the IEEE International Conference on Computer Vision, 3210-3218 2017
    Citations: 143

  • Scops: Self-supervised co-part segmentation
    WC Hung, V Jampani, S Liu, P Molchanov, MH Yang, J Kautz
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 2019
    Citations: 137

  • Context-aware synthesis and placement of object instances
    D Lee, S Liu, J Gu, MY Liu, MH Yang, J Kautz
    Advances in neural information processing systems 31 2018
    Citations: 116

  • Learning linear transformations for fast arbitrary style transfer
    X Li, S Liu, J Kautz, MH Yang
    arXiv preprint arXiv:1808.04537 2018
    Citations: 113

  • Putting humans in a scene: Learning affordance in 3d indoor environments
    X Li, S Liu, K Kim, X Wang, MH Yang, J Kautz
    Proceedings of the IEEE/CVF conference on computer vision and pattern 2019
    Citations: 97