January 17 to 19, 2022
Indian Institute of Technology Madras
Sensing technologies play an important role in realizing smart and sustainable buildings. Sensors of different modalities are part of building infrastructures like lighting, HVAC and surveillance. Data from such multi-modal sensors can be used to realize new and improved building applications using AI approaches involving advanced signal processing and machine learning. In this tutorial, we will cover the following topics:
- Smart building applications like lighting/HVAC controls, system monitoring and diagnostics, space management, and location-based services;
- Sensor system architectures;
- Sensor data analytics and machine learning based sensor data processing techniques.
Ashish Pandharipande (Senior Member, IEEE) received the M.S. degrees in electrical and computer engineering, and mathematics and the Ph.D. degree in electrical and computer engineering from the University of Iowa, Iowa City, in 2000, 2001, and 2002, respectively. Since 2002, he has held positions as a Postdoctoral Researcher with the University of Florida, a Senior Researcher with the Samsung Advanced Institute of Technology, and a Senior Scientist with Philips Research. He has held visiting positions at AT&T Laboratories, NJ, USA, and the Department of Electrical Communication Engineering, Indian Institute of Science, Bengaluru, India. He is currently a Lead Research and Development Engineer with Signify (formerly Philips Lighting), Eindhoven, The Netherlands. His research interests are in sensing, networking and controls, data analytics, and applications in smart lighting systems, energy management, and cognitive wireless systems. He is a Senior Editor of IEEE Signal Processing Letters, a Topical Area Editor of the IEEE Sensors Journal, and an Associate Editor of Lighting Research & Technology and the IEEE Journal of Biomedical and Health Informatics.
Avik Santra (Senior Member, IEEE) received the M.S. (Hons.) degree in signal processing from the Indian Institute of Science, Bengaluru, in 2010. He currently leads the research and development of signal processing and deep learning algorithms/solutions for radar and depth sensors for human–machine interface applications at Infineon, Neubiberg. Earlier in his career, he worked as a System Engineer on LTE/4G modems at Broadcom Communications, and as a Research Engineer developing cognitive radars at Airbus. He is the author of the book Deep Learning Applications of Short-Range Radars, published by Artech House. He has filed over 50 patents and published more than 35 research articles on radar waveform design, radar signal processing, and radar machine/deep learning. He has received several outstanding reviewer awards and is a reviewer for various IEEE and Elsevier journals.
Challenges in the semiconductor industry often lead to complex scenarios in which engineers and scientists struggle to realize optimal solutions. For example, selecting the optimal tracking parameters for a radar-based application is nontrivial and requires considerable expertise and manual effort. The same holds in hardware design, where different hardware component configurations must be selected. In this context, Deep Reinforcement Learning can support the algorithm designer in approaching high-dimensional, non-linear problems. In this tutorial, we will concentrate on solutions for high-dimensional action spaces and their impact on tasks in the semiconductor industry, such as design automation and radar signal processing.
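The idea of learning a good parameter setting by trial and error can be illustrated with a deliberately small sketch. This is not material from the tutorial: it uses a tabular, one-step (bandit-style) Q-learning loop over a discretized parameter grid, with an invented reward that peaks near a hidden "optimum" setting; the tutorial's subject, deep RL, replaces the table with a neural network to cope with high-dimensional action spaces.

```python
import numpy as np

# Toy illustration (assumptions: a single tunable "tracking parameter",
# a hidden optimum of 0.7, and a noisy reward that peaks at the optimum).
rng = np.random.default_rng(0)
params = np.linspace(0.1, 1.0, 10)   # candidate parameter values (actions)
optimum = 0.7                        # hidden best setting (made up here)

q = np.zeros(len(params))            # single-state Q-table, one entry per action
eps, alpha = 0.2, 0.1                # exploration rate, learning rate

for step in range(2000):
    # epsilon-greedy action selection
    a = rng.integers(len(params)) if rng.random() < eps else int(np.argmax(q))
    # reward: higher when the chosen parameter is near the optimum, plus noise
    reward = -abs(params[a] - optimum) + 0.05 * rng.standard_normal()
    # incremental Q-update (no next state in this one-step setting)
    q[a] += alpha * (reward - q[a])

best = params[int(np.argmax(q))]
print(f"learned parameter: {best:.1f}")
```

The loop converges on the neighborhood of the hidden optimum without ever being told where it is, which is the property that makes RL attractive when the mapping from parameters to performance is too complex to model analytically.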
Lorenzo Servadei received his Ph.D. degree from the Johannes Kepler University Linz, in collaboration with Infineon Technologies AG. During his Ph.D. studies, his research focused on hardware optimization with machine learning. He is currently a Senior Staff Machine Learning Engineer in the Advanced AI group at Infineon Technologies AG and lectures on machine learning at the Technical University of Munich. He is a member of both IEEE and ACM.
Photoacoustic tomography (PAT) is a non-ionizing imaging modality capable of acquiring high-contrast, high-resolution images based on optical absorption at depths greater than traditional optical imaging techniques. As PAT technology matures, its mainstream clinical acceptance is hindered by several practical considerations and limitations associated with the instrumentation and data acquisition. Common challenges include having a limited number of available acoustic detectors and a reduced "view" of the imaging target, which result in the acquisition of incomplete data. Forming an image with classical reconstruction methods from incomplete data results in image artifacts that degrade image quality. Advanced methods such as iterative reconstruction are effective in reducing and removing artifacts but are also computationally expensive and cannot be used for real-time imaging. Deep learning has the potential to be an effective and computationally efficient alternative to state-of-the-art iterative methods. Having such a method would enable improved image quality, real-time PAT image rendering, and more accurate image interpretation and quantification. This tutorial will provide an overview of PAT and its various applications. Then we will discuss conventional image reconstruction schemes, followed by a detailed description of several deep learning techniques that enhance image quality in a post-processing manner or provide an end-to-end solution for directly reconstructing images from raw channel data. Finally, the tutorial session will conclude with a discussion on how deep learning techniques might be applied to image reconstruction and interpretation for diagnostic ultrasound.
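The post-processing idea mentioned above can be sketched at toy scale. This is not the tutorial's method: it stands in a learned linear operator (fit by least squares on paired corrupted/clean data) for the CNN that a deep-learning approach would use, and a random perturbation of the identity stands in for the artifact-producing limited-view acquisition. The structure, learning a mapping from artifact-laden reconstructions back to clean images from training pairs, is the same.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout): supervised
# "post-processing" reconstruction. We fit a linear operator W mapping
# artifact-corrupted images back to clean ones from paired training data;
# deep-learning approaches replace W with a CNN such as a U-Net.
rng = np.random.default_rng(1)
n = 16                                   # flattened image size (toy scale)

# A fixed "artifact" operator standing in for limited-view degradation
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)

clean = rng.standard_normal((200, n))    # synthetic "ground-truth" images
corrupted = clean @ A.T                  # simulated artifact-laden inputs

# Least-squares fit of the post-processing operator W: corrupted -> clean
W, *_ = np.linalg.lstsq(corrupted, clean, rcond=None)

# Apply to a held-out sample
test_clean = rng.standard_normal(n)
restored = (test_clean @ A.T) @ W
err = np.linalg.norm(restored - test_clean) / np.linalg.norm(test_clean)
print(f"relative restoration error: {err:.3f}")
```

Because the toy degradation is linear and noise-free, the fitted operator restores the held-out sample almost exactly; real limited-view PAT artifacts are non-linear in effect and spatially varying, which is why deep networks are used in practice.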
Parag V. Chitnis is an Associate Professor in the Department of Bioengineering at George Mason University (GMU). He is also a founding member of the Center for Adaptive Systems in Brain-Body Interactions. Dr. Chitnis received M.S. and Ph.D. degrees in mechanical engineering from Boston University in 2002 and 2006, respectively. His dissertation focused on experimental studies of acoustic shock waves for therapeutic applications. After a two-year postdoctoral fellowship at Boston University involving a study of acoustically driven bubble dynamics, Dr. Chitnis joined Riverside Research as a Staff Scientist in 2008. Here, he used ultrasound and photoacoustic imaging of mouse embryos to research the interplay between the developing cardiovascular and central nervous systems. Since joining GMU in 2014, his research (supported by NIH, NSF, DARPA, and DoD) has focused on wearable sensors, therapeutic ultrasound and neuromodulation, photoacoustic microscopy, and deep-learning strategies for photoacoustic tomography. For his contributions in teaching, research and service to the university, Dr. Chitnis was nominated by GMU for the Outstanding Faculty Award (Rising Star Category), State Council of Higher Education for Virginia, and selected as a State Finalist in 2017. He currently serves as an Associate Editor for Ultrasonic Imaging and a reviewer for NIH and NSF grant panels.
IoMT stands for the Internet of Medical Things: a combination of wearable, healthcare, and medical devices, along with applications that connect healthcare information systems through networking technologies. It is a very large market, reported to be worth USD 22.5 billion in 2016 and expected to reach USD 142.45 billion by 2026.
Wearable and medical devices can collect, analyze, and send data across the web using this technology. It can connect both digital devices, such as heart monitors, and non-digital devices, such as patient beds, to the internet. IoMT will transform the future of the healthcare industry by providing the world with smart digital solutions at the consumers' convenience.
An increase in world population, along with a significant aging portion, is forcing rapid rises in healthcare costs. The healthcare system is going through a transformation in which continuous monitoring of inhabitants is possible even without hospitalization. Advances in sensing technologies, embedded systems, wireless communication technologies, nano-technologies, and miniaturization make it possible to develop smart medical systems that monitor human activities continuously. Wearable sensors continuously monitor physiological parameters and also detect symptoms of abnormal and/or unforeseen situations that need immediate attention, so that necessary help can be provided in times of dire need. The tutorial will review the latest reported systems and trends in wearable and medical devices for monitoring human activities, and the issues to be addressed to tackle the challenges.
Biography: Subhas holds a B.E.E. (gold medallist), M.E.E., Ph.D. (India) and Doctor of Engineering (Japan). He has over 31 years of teaching, industrial and research experience.
Currently, he is a Professor of Mechanical/Electronics Engineering at Macquarie University, Australia, and the Discipline Leader of the Mechatronics Engineering Degree Programme. His fields of interest include smart sensors and sensing technology, instrumentation techniques, wireless sensor networks (WSN), the Internet of Things (IoT), mechatronics, robotics, and health monitoring. He has supervised over 40 postgraduate students and over 100 Honours students.
He has published over 400 papers in international journals and conference proceedings, written ten books and fifty-two book chapters, and edited eighteen conference proceedings. He has also edited thirty-five books with Springer-Verlag and thirty-two journal special issues. He has organized over 20 international conferences as General Chair/Co-Chair or Technical Programme Chair. He has delivered 378 presentations, including keynote, invited, tutorial, and special lectures.
He is a Fellow of the IEEE (USA), a Fellow of the IET (UK), and a Fellow of the IETE (India). He is a Topical Editor of the IEEE Sensors Journal, and an Associate Editor of the IEEE Transactions on Instrumentation and Measurement and IEEE Reviews in Biomedical Engineering. He is a Distinguished Lecturer of the IEEE Sensors Council from 2017 to 2022. He is the Founding Chair of the IEEE Sensors Council NSW chapter.
More details are available at: