Recent Publications since 2015
2022
Xu, L., Wang, Z., Wu, B., & Lui, S. (2022). MDAN: Multi-level Dependent Attention Network for visual emotion analysis. Accepted at CVPR 2022 (to appear).
2021
Zhuang, X., Yu, H., Zhao, W., Jiang, T., Hu, P., Lui, S., & Zhou, W. (2021). KaraTuner: Towards end-to-end natural pitch correction for singing voice in karaoke. arXiv preprint arXiv:2110.09121.
Hu, S., Liang, B., Chen, Z., Lu, X., Zhao, E., & Lui, S. (2021). Large-scale singer recognition using deep metric learning: An experimental study. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE. doi: 10.1109/IJCNN52387.2021.9533911.
Zhuang, X., Jiang, T., Chou, S. Y., Wu, B., Hu, P., & Lui, S. (2021, June). Litesing: Towards fast, lightweight and expressive singing voice synthesis. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 7078-7082). IEEE.
Zeng, Y., Xiao, Z., Hung, K. W., & Lui, S. (2021). Real-time video super resolution network using recurrent multi-branch dilated convolutions. Signal Processing: Image Communication, 93, 116167.
Xiao, Z., Zhang, Z., Hung, K. W., & Lui, S. (2021). Real-time video super-resolution using lightweight depthwise separable group convolutions with channel shuffling. Journal of Visual Communication and Image Representation, 75, 103038.
2020
Hu, S., Zhang, B., Liang, B., Zhao, E., & Lui, S. (2020). Phase-aware music super-resolution using generative adversarial networks. In INTERSPEECH 2020. arXiv preprint arXiv:2010.04506.
Jin, C., Wang, T., Liu, S., Tie, Y., Li, J., Li, X., & Lui, S. (2020). A transformer-based model for multi-track music generation. International Journal of Multimedia Data Engineering and Management (IJMDEM), 11(3), 36-54.
Lin, K. W. E., Balamurali, B. T., Koh, E., Lui, S., & Herremans, D. (2020). Singing voice separation using a deep convolutional neural network trained by ideal binary mask and cross entropy. Neural Computing and Applications, 32(4), 1037-1050.
2019
Agres, K., Lui, S., & Herremans, D. (2019, August). A novel music-based game with motion capture to support cognitive and motor function in the elderly. In 2019 IEEE Conference on Games (CoG) (pp. 1-4). IEEE.
Balamurali, B. T., Lin, K. E., Lui, S., Chen, J. M., & Herremans, D. (2019). Toward robust audio spoofing detection: A detailed comparison of traditional and learned features. IEEE Access, 7, 84229-84241.
Zhao, D., Lee, J. S. A., Tan, C. T., Dancu, A., Lui, S., Shen, S., & Mueller, F. F. (2019, June). GameLight: Gamification of the outdoor cycling experience. In Companion Publication of the 2019 Designing Interactive Systems Conference (pp. 73-76).
Hee, H. I., Balamurali, B. T., Karunakaran, A., Herremans, D., Teoh, O. H., Lee, K. P., … & Chen, J. M. (2019). Development of machine learning for asthmatic and healthy voluntary cough sounds: A proof of concept study. Applied Sciences, 9(14), 2833.
2018
Agus, N., Anderson, H., Chen, J. M., Lui, S., & Herremans, D. (2018). Minimally simple binaural room modeling using a single feedback delay network. Journal of the Audio Engineering Society, 66(10), 791-807.
Agus, N., Anderson, H., Chen, J. M., Lui, S., & Herremans, D. (2018). Perceptual evaluation of measures of spectral variance. The Journal of the Acoustical Society of America, 143(6), 3300-3311.
Upadhyay, R., & Lui, S. (2018, January). Foreign English accent classification using deep belief networks. In 2018 IEEE 12th International Conference on Semantic Computing (ICSC) (pp. 290-293). IEEE.
2017
Anderson, H., Agus, N., Chen, J. M., & Lui, S. (2017). Modeling the Proportion of Early and Late Energy in Two-Stage Reverberators. Journal of the Audio Engineering Society, 65(12), 1017-1031.
Lui, S., & Grunberg, D. (2017, December). Using skin conductance to evaluate the effect of music silence to relieve and intensify arousal. In 2017 International Conference on Orange Technologies (ICOT) (pp. 91-94). IEEE.
Fang, J., Grunberg, D., Lui, S., & Wang, Y. (2017, December). Development of a music recommendation system for motivating exercise. In 2017 International Conference on Orange Technologies (ICOT) (pp. 83-86). IEEE.
Hee, H. I., Chen, J., & Lui, S. (2017). Intuitive Interactive Platform for Preoperative Communication Between Hospital and Patients/Caregivers: Towards Community Partnership for Peri-Operative Person-Based Healthcare Model. Iproceedings, 3(1), e8425.
Agus, N., Anderson, H., Chen, J. M., & Lui, S. (2017, April). Energy-based binaural acoustic modeling. Technical Report 1, Singapore University of Technology and Design. https://istd.sutd.edu.sg/research/technicalreports/energy-based-binaural-acoustic-modeling
Lin, K. W. E., Anderson, H., So, C., & Lui, S. (2017). Sinusoidal Partials Tracking for Singing Analysis Using the Heuristic of the Minimal Frequency and Magnitude Difference. In INTERSPEECH (pp. 3038-3042).
2016
Khwaja, M. K., Vikash, P., Arulmozhivarman, P., & Lui, S. (2016). Robust phoneme classification for automatic speech recognition using hybrid features and an amalgamated learning model. International Journal of Speech Technology, 19(4), 895-905.
Lee, H., Yoong, A. C. H., Lui, S., Vaniyar, A., & Balasubramanian, G. (2016, November). Design exploration for the "squeezable" interaction. In Proceedings of the 28th Australian Conference on Computer-Human Interaction (pp. 586-594).
2015
Tan, C. T., Byrne, R., Lui, S., Liu, W., & Mueller, F. (2015). JoggAR: a mixed-modality AR approach for technology-augmented jogging. In SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications (pp. 1-1).
Anderson, H., Lin, K. W. E., So, C., & Lui, S. (2015, October). Flatter frequency response from feedback delay network reverbs. In ICMC.
Trochidis, K., & Lui, S. (2015, June). Modeling affective responses to music using audio signal analysis and physiology. In International symposium on computer music multidisciplinary research (pp. 346-357). Springer, Cham.
Anderson, H., Lin, K. W. E., Agus, N., & Lui, S. (2015, May). Major thirds: A better way to tune your iPad. In NIME (pp. 365-368).
Leslie, G., Picard, R., & Lui, S. (2015). An EEG and Motion Capture Based Expressive Music Interface for Affective Neurofeedback. In Proc. 1st Int. BCMI Workshop.
Lui, S. (2015, May). Generate expressive music from picture with a handmade multi-touch music table. In NIME (pp. 374-377).
Hoon, L. T., Vuyyuru, M. R., Kumar, T. A., & Lui, S. (2015). Binaural Navigation for the Visually Impaired with a Smartphone. In ICMC.