EPFL-CIS and RIKEN-AIP Joint Workshop on Machine Learning and Artificial Intelligence

 Featuring ML-AI talks from Switzerland and Japan | March 9 – 10, 2023 | Hybrid

This event is part of a series of institutional exchanges between RIKEN-AIP and EPFL-CIS that began in 2020, based on a Memorandum of Understanding between the two institutions aimed at establishing a long-term relationship.

Date: March 9 and 10, 2023

Location: Hybrid. EPFL speakers and audience on-site (BM 5202); RIKEN speakers will join either on-site in person or online (Zoom).


 DAY 1: March 9

“Optimization Challenges in Robust Machine Learning”

Abstract: Thanks to neural networks (NNs), faster computation, and massive datasets, machine learning (ML) is under increasing pressure to provide automated solutions to ever harder real-world tasks, beyond human performance and with ever faster response times, owing to the potentially huge technological and societal benefits. Unsurprisingly, the NN learning formulations present a fundamental challenge to the back-end learning algorithms despite their scalability, in particular due to the existence of traps in the non-convex optimization landscape, such as saddle points, that can prevent algorithms from obtaining “good” solutions.

In this talk, we describe our recent research that has demonstrated that the non-convex optimization dogma is false by showing that scalable stochastic optimization algorithms can avoid traps and rapidly obtain locally optimal solutions. Coupled with the progress in representation learning, such as over-parameterized neural networks, such local solutions can be globally optimal.

Unfortunately, this talk will also demonstrate that the central min-max optimization problems in ML, such as generative adversarial networks (GANs), robust reinforcement learning (RL), and distributionally robust ML, contain spurious attractors that do not include any stationary points of the original learning formulation. Indeed, we will describe how algorithms are subject to a grander challenge, including unavoidable convergence failures, which could explain the stagnation in their progress despite the impressive earlier demonstrations. We will conclude with promising new preliminary results from our recent progress on some of these difficult challenges.

Bio: Volkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara, Turkey, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta, GA in 2005. He was a Research Scientist with the University of Maryland, College Park from 2006-2007 and also with Rice University in Houston, TX, from 2008-2009. Currently, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne and an Amazon Scholar. His research interests include machine learning, signal processing theory, optimization theory and methods, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research Award in 2018, the IEEE Signal Processing Society Best Paper Award in 2016, a Best Paper Award at CAMSAP in 2015, a Best Paper Award at SPARS in 2009, and an ERC Consolidator Grant in 2016 as well as an ERC Starting Grant in 2011.

“Physics-Informed Deep Learning Approach for Modeling Crustal Deformation”

Abstract: The movement and deformation of the Earth’s crust and upper mantle provide critical insights into the evolution of earthquake processes and future earthquake potentials. Crustal deformation can be modeled by dislocation models that represent earthquake faults in the crust as defects in a continuum medium. In this talk, I will introduce a novel physics-informed deep learning approach to model crustal deformation due to earthquakes. Neural networks can represent continuous displacement fields in arbitrary geometrical structures and mechanical properties of rocks by incorporating governing equations and boundary conditions into a loss function. The polar coordinate system is introduced to accurately model the displacement discontinuity on a fault as a boundary condition. I will show the validity and usefulness of this approach through example problems with strike-slip faults. This approach has a potential advantage over conventional approaches in that it could be straightforwardly extended to high-dimensional, anelastic, nonlinear, and inverse problems.


10:00 – 10:30 Coffee break

“Learning equilibria in multi-agent systems”

Abstract: A rising challenge in control of large-scale control systems such as the electricity and the transportation networks is to address autonomous decision making of interacting agents, i.e. the subsystems, with local objectives while ensuring global system safety and performance. In this setting, a Nash equilibrium is a stable solution outcome in the sense that no agent finds it profitable to unilaterally deviate from her decision. Due to geographic distance, privacy concerns or simply the scale of these systems, each agent can only base her decision on local measurements. Hence, a fundamental question is: do agents learn to play a Nash equilibrium strategy based only on local information? I will discuss conditions under which we have an affirmative answer to this question and will present algorithms that achieve this learning task.
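The abstract above asks whether agents can learn a Nash equilibrium from local information only. As a minimal, hypothetical illustration (not the speaker's algorithm), the NumPy sketch below runs gradient play in a two-player quadratic game with assumed costs J1 and J2: each agent descends the gradient of her own cost in her own decision only, and the iterates converge to the unique Nash equilibrium.

```python
import numpy as np

def gradient_play(x0, lr=0.1, iters=300):
    """Decentralized gradient play in a two-player game with the
    hypothetical quadratic costs
      J1(x1, x2) = (x1 - 1)^2 + x1 * x2
      J2(x1, x2) = (x2 + 1)^2 + x1 * x2.
    Each agent uses only the gradient of her own cost in her own
    decision variable -- purely local information."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g1 = 2 * (x[0] - 1) + x[1]   # agent 1's local gradient dJ1/dx1
        g2 = 2 * (x[1] + 1) + x[0]   # agent 2's local gradient dJ2/dx2
        x = x - lr * np.array([g1, g2])
    return x

# The unique Nash equilibrium solves 2(x1 - 1) + x2 = 0 and
# 2(x2 + 1) + x1 = 0, i.e. (x1, x2) = (2, -2).
x = gradient_play([0.0, 0.0])
```

Convergence here relies on the game being strongly monotone (the pseudo-gradient Jacobian [[2, 1], [1, 2]] is positive definite); the talk addresses the much harder settings where such conditions must be established.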

Bio: Maryam Kamgarpour holds a Doctor of Philosophy in Engineering from the University of California, Berkeley and a Bachelor of Applied Science from University of Waterloo, Canada. Her research is on safe decision-making and control under uncertainty, game theory and mechanism design, mixed integer and stochastic optimization and control. Her theoretical research is motivated by control challenges arising in intelligent transportation networks, robotics, power grid systems, financial markets and healthcare. She is the recipient of NASA High Potential Individual Award, NASA Excellence in Publication Award, the European Union (ERC) Starting Grant and NSERC Discovery Accelerator Grant.

“Noise Robust Classification”

Abstract: Supervised learning from noisy output is one of the classic problems in machine learning. While this task is relatively straightforward in regression, since independent additive noise cancels out with big data, classification from noisy labels is still a challenging research topic. Recently, it has been shown that when the noise transition matrix, which specifies the label flipping probability, is available, the bias caused by label noise can be eliminated by appropriately correcting the loss function. However, when the noise transition matrix is unknown, which is often the case in practice, estimating it from noisy labels alone is not straightforward due to its non-identifiability. In this talk, I will give an overview of recent advances in classification from noisy labels, including joint estimation of the noise transition matrix and a classifier, analysis of identifiability conditions, and extension to instance-dependent noise.
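The loss correction mentioned in the abstract can be sketched concretely. Below is a minimal NumPy illustration of the standard backward correction (not necessarily the speaker's exact formulation), assuming the noise transition matrix T is known: weighting the per-class losses by the rows of T⁻¹ makes the corrected loss an unbiased estimate of the clean loss.

```python
import numpy as np

def backward_corrected_loss(probs, noisy_labels, T):
    """Backward loss correction: with a known noise transition matrix
    T[i, j] = P(noisy label j | clean label i), weight the per-class
    cross-entropy losses by T^{-1}; in expectation over the label noise,
    the corrected loss equals the loss on clean labels."""
    per_class_loss = -np.log(probs + 1e-12)   # (n, K): loss if the clean label were k
    T_inv = np.linalg.inv(T)
    # Corrected loss for observed label y~: sum_k T_inv[y~, k] * loss_k
    corrected = (T_inv[noisy_labels] * per_class_loss).sum(axis=1)
    return corrected.mean()

# Toy setup: 2 classes with 20% symmetric label flipping.
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
probs = np.array([[0.9, 0.1],     # model's predicted class probabilities
                  [0.2, 0.8]])
noisy = np.array([0, 1])          # observed (possibly flipped) labels
loss = backward_corrected_loss(probs, noisy, T)
```

The unbiasedness follows from T @ T⁻¹ = I: averaging the corrected losses over the noise distribution of the label recovers exactly the clean-label loss.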

Bio: Masashi Sugiyama received his Ph.D. in Computer Science from the Tokyo Institute of Technology in 2001. He has been a professor at the University of Tokyo since 2014, and also the director of the RIKEN Center for Advanced Intelligence Project (AIP) since 2016. His research interests include theories and algorithms of machine learning. In 2022, he received the Award for Science and Technology from the Japanese Minister of Education, Culture, Sports, Science and Technology. He was program co-chair of the Neural Information Processing Systems (NeurIPS) conference in 2015, the International Conference on Artificial Intelligence and Statistics (AISTATS) in 2019, and the Asian Conference on Machine Learning (ACML) in 2010 and 2020. He is (co-)author of Machine Learning in Non-Stationary Environments (MIT Press, 2012), Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012), Statistical Reinforcement Learning (Chapman & Hall, 2015), and Machine Learning from Weak Supervision (MIT Press, 2022).

“Building Blocks for Collaborative and Decentralized Machine Learning”

12:00 – 14:00 Lunch break (Lunch not included for attendees)

“Music Structure Analysis based on Transformers”

Abstract: We present a time-span tree leveled by the length of the time span. Using the time-span tree of the Generative Theory of Tonal Music, it is possible to reduce notes in a melody, but this is difficult to automate because the priority order of the branches to be reduced is not defined. A similar problem arises in the automation of time-span analysis and melodic morphing. Therefore, we propose a method for defining a total priority order according to the length of the time span of each branch in a time-span tree. In the experiment, we confirmed that melodic morphing and deep learning of time-span tree analysis can be carried out automatically using the proposed method.

Bio: Masatoshi Hamanaka received his Ph.D. from the University of Tsukuba, Japan, in 2003. He is currently the leader of the Music Information Intelligence Team at the RIKEN Center for Advanced Intelligence Project. His research interests are music information technology, biomedical applications, and unmanned aircraft systems. He received the Journal of New Music Research Distinguished Paper Award in 2005, the SIGGRAPH 2019 Emerging Technologies Laval Virtual Revolution Research Jury Prize in 2019, and the IJCAI-19 Most Entertaining Video Award in 2019.

“A Tractable Barycenter for Probability Measures in Machine Learning”

Abstract: We introduce a formulation of entropic Wasserstein barycenters that enjoys favorable optimization, approximation, and statistical properties. This barycenter is defined as the unique probability measure that minimizes the sum of entropic optimal transport costs with respect to a family of given probability measures, plus an entropy term. We show that (i) this notion of barycenter is debiased, in the sense that it is a better approximation of the unregularized Wasserstein barycenter than the naive entropic Wasserstein barycenter; (ii) it can be estimated efficiently from samples (as measured in relative entropy); and (iii) it lends itself naturally to a grid-free optimization algorithm which, in the mean-field limit, converges globally at an exponential rate.
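The building block of the barycenter objective above is the entropic optimal transport cost. As background (this is the standard Sinkhorn scheme, not the debiased barycenter algorithm of the talk), here is a minimal NumPy sketch computing the entropic OT coupling between two discrete measures on an assumed toy grid.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic optimal transport via Sinkhorn iterations: returns the
    coupling P minimizing <P, C> + eps * KL(P | a b^T) subject to the
    marginal constraints P 1 = a and P^T 1 = b."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)         # match the column marginal
        u = a / (K @ v)           # match the row marginal
    return u[:, None] * K * v[None, :]

# Two uniform measures on a 5-point grid with squared-distance cost.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(5, 0.2)
P = sinkhorn(a, b, C)
```

The barycenter formulation of the talk sums such entropic costs over a family of measures and adds an entropy term; the Sinkhorn-style alternating updates remain the computational core.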


“Information geometry and optimal transport framework for Gaussian processes”

Abstract: Information geometry (IG) and Optimal transport (OT) have been attracting much research attention in various fields, in particular machine learning and statistics. In this talk, we present results on the generalization of IG and OT distances for finite-dimensional Gaussian measures to the setting of infinite-dimensional Gaussian measures and Gaussian processes. Our focus is on the Entropic Regularization of the 2-Wasserstein distance and the generalization of the Fisher-Rao distance and related quantities. In both settings, regularization leads to many desirable theoretical properties, including in particular dimension-independent convergence and sample complexity. The mathematical formulation involves the interplay of IG and OT with Gaussian processes and the methodology of reproducing kernel Hilbert spaces (RKHS). All of the presented formulations admit closed form expressions that can be efficiently computed and applied practically. The theoretical formulations will be illustrated with numerical experiments on Gaussian processes.

Bio: Minh Ha Quang is currently a unit leader at RIKEN-AIP (RIKEN Center for Advanced Intelligence Project) in Tokyo, Japan, where he leads the Functional Analytic Learning Unit. He received the PhD degree in Mathematics from Brown University (RI, USA), with the dissertation written under the supervision of Stephen Smale. Before joining AIP, he was a researcher at the Italian Institute of Technology in Genova, Italy. His current research focuses on functional analytic and geometrical methods in machine learning and statistics.

15:30 – 16:00 Coffee break

“Attempts at Taming Language Model Behavior”

Bio: Robert West is a tenure-track assistant professor of computer science at EPFL (the Swiss Federal Institute of Technology, Lausanne), where he heads the Data Science Lab (dlab). His research aims to make sense of large amounts of data by developing and applying algorithms and techniques in natural language processing, machine learning, and computational social science. Typically, the data he works with is generated by humans (e.g., natural language or behavioral traces), and frequently it is collected on the Web (e.g., using wikis, online news, social media, server logs, online games). Bob received his PhD in Computer Science from Stanford University (2016), his MSc from McGill University, Canada (2010), and his undergraduate degree from Technische Universität München, Germany (2007). He is a Wikimedia Foundation Research Fellow, an Associate Editor of ICWSM and EPJ Data Science, and a co-founder of the Wiki Workshop and the Applied Machine Learning Days. His work has won several awards, including best/outstanding paper awards at ICWSM’21, ICWSM’19, and WWW’13, a Google Faculty Research Award, a Facebook Research Award, and the ICWSM’22 Adamic–Glance Distinguished Young Researcher Award.

“Adversarial robustness”

Abstract: When we deploy models trained by standard training (ST), they work well on natural test data. However, those models cannot handle adversarial test data (also known as adversarial examples) that are algorithmically generated by adversarial attacks. An adversarial attack is an algorithm which applies specially designed tiny perturbations to natural data to transform them into adversarial data, in order to mislead a trained model into giving wrong predictions. Adversarial robustness is aimed at improving the robust accuracy of trained models against adversarial attacks, which can be achieved by adversarial training (AT). What is AT? Given the knowledge that the test data may be adversarial, AT carefully simulates some adversarial attacks during training. Thus, the model has already seen many adversarial training data in the past, and hopefully it can generalize to adversarial test data in the future. AT has two purposes: (1) correctly classify the data (same as ST) and (2) make the decision boundary thick so that no data lie near the decision boundary. In this talk, I will introduce how to leverage adversarial attacks/training for evaluating/enhancing the reliability of AI-powered tools.
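The "specially designed tiny perturbations" of the abstract are typically produced by projected gradient descent (PGD). As a hedged illustration on an assumed toy model (a logistic-regression classifier, chosen so the gradient is available in closed form), the NumPy sketch below ascends the loss while staying inside an L-infinity ball around the clean input.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """PGD attack on a logistic-regression model p(y=1|x) = sigmoid(w.x + b):
    repeatedly step in the sign of the input-gradient of the loss, then
    project back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w                         # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within the eps-ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.0
x = rng.normal(size=5)   # a clean input with label y = 1
y = 1.0
x_adv = pgd_attack(x, y, w, b)
```

Adversarial training then simply minimizes the loss at such attacked points `x_adv` instead of (or in addition to) the clean points `x`.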

Bio: Jingfeng Zhang is a researcher in the “Imperfect Information Learning Team” at RIKEN-AIP, supervised by Prof. Masashi Sugiyama. Prior to RIKEN-AIP, Jingfeng obtained his Ph.D. degree (in 2020) under Prof. Mohan Kankanhalli at the School of Computing, National University of Singapore, and his Bachelor’s degree (in 2016) at Taishan College, Shandong University, China. Jingfeng is the recipient of the Strategic Basic Research Programs ACT-X 2021–2023 funding, a JSPS Grant-in-Aid for Scientific Research (KAKENHI) for Early-Career Scientists 2022–2023, and the RIKEN Ohbu Award 2022. Jingfeng serves as a reviewer for prestigious ML conferences such as ICLR, ICML, and NeurIPS. Jingfeng’s long-term research interest is making artificial intelligence safe for human beings.

“Linearization and Identification of Multiple-Attractor Dynamical Systems through Laplacian Eigenmaps”

Location: Starling Hotel

 DAY 2: March 10

“Random subspace methods for non-convex optimization”

Abstract: In this talk, we present a randomized subspace regularized Newton method for a non-convex function. We show that our method has global convergence under appropriate assumptions, and its convergence rate is the same as that of the full regularized Newton method. Furthermore, we can obtain a local linear convergence rate, under some additional assumptions, and prove that this rate is the best we can hope when using random subspace.
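To make the idea concrete, here is a naive NumPy sketch of a randomized subspace regularized Newton step (a simplified caricature, not the paper's exact method or assumptions): at each iteration a random matrix P spans the subspace, the regularized Newton system is solved only in that subspace, and the iterate moves along P.

```python
import numpy as np

def random_subspace_newton(f_grad, f_hess, x0, dim_sub=2, reg=1e-3,
                           iters=500, seed=0):
    """At each step, draw a random d x s matrix P, solve the regularized
    Newton system restricted to span(P), and update along P @ step.
    Each subproblem is only s-dimensional, so a step costs far less
    than a full Newton step when s << d."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x.size
    for _ in range(iters):
        P = rng.normal(size=(d, dim_sub)) / np.sqrt(dim_sub)
        g = P.T @ f_grad(x)                 # gradient projected onto the subspace
        H = P.T @ f_hess(x) @ P             # Hessian restricted to the subspace
        step = np.linalg.solve(H + reg * np.eye(dim_sub), g)
        x = x - P @ step
    return x

# Toy strongly convex quadratic f(x) = 0.5 x^T A x, minimized at x = 0.
A = np.diag([1.0, 2.0, 3.0, 4.0])
x = random_subspace_newton(lambda x: A @ x, lambda x: A,
                           np.ones(4), dim_sub=2)
```

On this convex toy problem every step is a guaranteed descent step, and randomizing the subspace lets the iterates reach the minimizer even though each step only explores two of the four dimensions; the talk's contribution concerns the much subtler non-convex analysis and rates.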

Bio: I graduated from the French engineering school Ensimag in 2007. In 2010, I did a master’s internship at the Optimization Laboratory of Kyoto University. During my Ph.D. (completed in 2013), I worked on robust optimization under the supervision of Marie-Christine Costa and Alain Billionnet, at the CEDRIC and UMA laboratories.
I did a postdoc at LIX – École Polytechnique, under the supervision of Leo Liberti, working on network optimization and bilevel programming for the SoGrid project, as well as on new probabilistic methods based on the measure-concentration phenomenon for solving large-scale optimization problems. I did another postdoc at ENSTA-ParisTech, working on a PGMO project about robust Steiner tree problems. In January 2017, I joined Huawei Technologies as a research scientist and worked on optimization problems in networks; I then moved to Japan and worked as an R&D engineer on machine learning at a Japanese company. Since September 2018, I have been working at RIKEN-AIP as a researcher in the Continuous Optimization Team led by Prof. Takeda.

“Equivariance and universal approximation for geometric point clouds”

Abstract: As with many fields of science, machine learning has become an essential part of the toolbox for modeling matter at the atomic scale, with many frameworks having become well-established, and many more being developed in new research directions.
The most effective frameworks treat atomic structures as point clouds, and incorporate fundamental physical principles, such as symmetry, locality, and hierarchical decompositions of the interactions between atoms, in the construction of the ML model.
I will present a general framework that unifies several of the most recent developments in the field, including the representation of structures in terms of systematically convergent atom-centered correlations of the neighbor density, as well as equivariant message-passing schemes that automatically build descriptors with equivalent information content.
Rationalizing the structure of equivariant models reveals some limitations, including the existence of configurations that cannot be distinguished by certain classes of symmetric models, and strategies to address them, building accurate and interpretable models that are capable of universal approximation. 

Bio: Michele Ceriotti received his Ph.D. in Physics from ETH Zürich. He spent three years in Oxford as a Junior Research Fellow at Merton College. Since 2013 he has led the Laboratory for Computational Science and Modeling in the Institute of Materials at EPFL, which focuses on method development for atomistic materials modeling based on statistical mechanics and machine learning. He is especially proud of his contributions to the development of several open-source software packages, including http://ipi-code.org and http://chemiscope.org, and of serving the atomistic modeling community as an associate editor of the Journal of Chemical Physics, as a moderator of the physics.chem-ph section of the arXiv, and as an editorial board member of Physical Review Materials.

10:00 – 10:30 Coffee break

“Representation power and optimization ability of neural networks”

Abstract: In this presentation, I will provide an overview of recent developments of our theoretical work on representation power and optimization ability of neural networks. In the first half, I will present a nonparametric estimation analysis of transformer networks in a sequence-to-sequence problem. Transformer networks are the fundamental model for recent large language models. They can handle long input sequences and avoid the curse of dimensionality with variable input dimensions. We show that they can adapt to the smoothness property of the true function, even when the smoothness towards each coordinate varies for different inputs. In the latter half, we consider a mean field Langevin dynamics for optimizing mean field neural networks. We present a convergence analysis of space-time discretized dynamics with a stochastic gradient approximation.

Bio: Taiji Suzuki is currently an Associate Professor in the Department of Mathematical Informatics at the University of Tokyo. He also serves as the team leader of the “Deep learning theory” team at RIKEN-AIP. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2009. He worked as an assistant professor in the Department of Mathematical Informatics, the University of Tokyo, between 2009 and 2013, and then was an associate professor in the Department of Mathematical and Computing Science, Tokyo Institute of Technology, between 2013 and 2017. He has a broad research interest in statistical learning theory on deep learning, kernel methods and sparse estimation, and stochastic optimization for large-scale machine learning problems. He has served as an area chair of premier conferences such as NeurIPS, ICML, ICLR, and AISTATS, and as a program chair of ACML. He received the Outstanding Paper Award at ICLR in 2021, the MEXT Young Scientists’ Prize, and the Outstanding Achievement Award in 2017 from the Japan Statistical Society.

“Open-world learning for biomedicine”

Abstract: Biomedical data poses multiple hard challenges that break conventional machine learning assumptions. In this talk, I will highlight the need to transcend our prevalent machine learning paradigm and methods to enable them to become the driving force of new scientific discoveries. I will present machine learning methods that have the ability to bridge heterogeneity of individual biological datasets by transferring knowledge across datasets, with a unique ability to discover novel, previously uncharacterized phenomena. I will discuss the biological findings enabled by these methods and the conceptual shift they bring in annotating comprehensive single-cell atlas datasets.

Bio: Maria Brbic is an Assistant Professor of Computer Science and, by courtesy, of Life Sciences at the Swiss Federal Institute of Technology, Lausanne (EPFL). She develops new machine learning methods and applies her methods to advance biology and biomedicine. Her methods have been used by global cell atlas consortia efforts aiming to create reference maps of all cell types with the potential to transform biomedicine, including the Human BioMolecular Atlas Program (HuBMAP) and the Fly Cell Atlas consortium. Prior to joining the EPFL faculty in 2022, Maria was a postdoctoral fellow at Stanford University, Department of Computer Science, and was a member of the Chan Zuckerberg Biohub at Stanford. Maria received her Ph.D. from the University of Zagreb in 2019 while also conducting research at Stanford University as a Fulbright Scholar and at the University of Tokyo. She was named a rising star in EECS by MIT in 2021.

“Statistical Inference for Neural Network-based Image Segmentation”

Abstract: Although a vast body of literature relates to image segmentation methods that use deep neural networks (DNNs), less attention has been paid to assessing the statistical reliability of segmentation results. In the absence of statistical reliability, it is difficult to manage the risk of obtaining incorrect segmentation results, which might be harmful when they are used in high-stakes decision-making, such as medical diagnoses or automatic driving. In this talk, I will interpret the segmentation results as hypotheses driven by DNN (called DNN-driven hypotheses) and introduce a method to quantify the reliability of these hypotheses within a statistical hypothesis testing framework.

Bio: Vo Nguyen Le Duy is a postdoctoral researcher at RIKEN-AIP, working under the supervision of Prof. Ichiro Takeuchi. He received his B.S. degree from Danang University of Science and Technology, Vietnam in 2017, followed by his M.S. degree and Ph.D. degree from Nagoya Institute of Technology, Japan in 2020 and 2022, respectively. His research interest is developing reliable artificial intelligence systems.

12:00 – 14:00 Lunch break (Lunch not included for attendees)

“Causal Network Inference”

Abstract: We consider the problem of learning the causal structure of a system from observational data. Constraint-based methods are one of the main approaches for solving this problem, but the existing methods are either computationally impractical when dealing with large graphs or lacking completeness guarantees. We propose a novel computationally efficient recursive constraint-based method that is sound and complete. The key idea of our approach is that at each iteration a specific type of variable is identified and removed. This allows us to learn the structure efficiently and recursively, as this technique reduces both the number of required conditional independence (CI) tests and the size of the conditioning sets.
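The speaker's contribution is a recursive method that identifies and removes specific variables; as a simpler, generic illustration of the constraint-based idea it builds on, here is a NumPy sketch of PC-style skeleton discovery, assuming linear-Gaussian data so that conditional independence can be tested via partial correlations (the threshold and toy graph are illustrative choices, not from the talk).

```python
import numpy as np
from itertools import combinations

def ci_test(data, i, j, cond, thresh=0.1):
    """Partial-correlation CI test: declare X_i independent of X_j given
    X_cond when the magnitude of their partial correlation (read off the
    inverse correlation matrix) falls below thresh."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx].T))
    pcorr = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    return abs(pcorr) < thresh

def pc_skeleton(data, max_cond=1):
    """Constraint-based skeleton discovery: start from the complete graph
    and delete edge i-j whenever some conditioning set of size up to
    max_cond renders i and j independent."""
    d = data.shape[1]
    edges = {frozenset(e) for e in combinations(range(d), 2)}
    for size in range(max_cond + 1):
        for e in list(edges):
            i, j = tuple(e)
            others = [k for k in range(d) if k not in e]
            for cond in combinations(others, size):
                if ci_test(data, i, j, cond):
                    edges.discard(e)
                    break
    return edges

# Toy chain X0 -> X1 -> X2: X0 and X2 are marginally dependent,
# but independent given the mediator X1.
rng = np.random.default_rng(1)
x0 = rng.normal(size=5000)
x1 = x0 + 0.5 * rng.normal(size=5000)
x2 = x1 + 0.5 * rng.normal(size=5000)
data = np.column_stack([x0, x1, x2])
skeleton = pc_skeleton(data)
```

The cost driver visible even in this sketch is the loop over conditioning sets; the talk's recursive removal of variables is precisely aimed at shrinking both the number of CI tests and the conditioning-set sizes.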

Bio: Negar Kiyavash is the chair of Business Analytics (BAN) at École polytechnique fédérale de Lausanne (EPFL) at the College of Management of Technology. Prior to joining EPFL, she was a faculty member at the University of Illinois, Urbana-Champaign, and at Georgia Institute of Technology. Her research interests are broadly in the area of statistical learning and applied probability with special focus on network inference and causality. She is a recipient of the NSF CAREER and AFOSR YIP awards.

“SAM as an Optimal Relaxation of Bayes”

Abstract: Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. In this talk, I will show how SAM can be interpreted as optimizing a relaxation of the Bayes objective where the expected negative-loss is replaced by the optimal convex lower bound, obtained by using the so-called Fenchel biconjugate. The connection enables a new Adam-like extension of SAM to automatically obtain reasonable uncertainty estimates, while sometimes also improving its accuracy.

Bio: Thomas Möllenhoff received his PhD in Informatics from the Technical University of Munich in 2020. Since then, he has been a post-doctoral researcher in the Approximate Bayesian Inference Team at RIKEN-AIP. During his PhD, Thomas worked on nonconvex optimization methods for image processing and computer vision. His recent research focuses on improving deep learning via Bayesian principles. His awards include a “Best Paper Honorable Mention” at CVPR 2016 and first place at the NeurIPS 2021 “Challenge on Approximate Inference in Bayesian Deep Learning”.

“Non-convex optimization when the solution is not unique: a kaleidoscope of favorable conditions”

Abstract: Classical optimization algorithms can see their local convergence rates deteriorate when the Hessian at the optimum is singular. The latter is inescapable when the optima are non-isolated. Yet, several algorithms behave perfectly nicely even when optima form a continuum (e.g., due to overparameterization). This has been studied through various lenses, including the Polyak-Lojasiewicz inequality, Quadratic Growth, the Error Bound, and (less so) through a Morse-Bott property. I will present work with Quentin Rebjock showing tight links between all of these.

Bio: Nicolas Boumal is assistant professor of mathematics at EPFL, and an associate editor of the journal Mathematical Programming. He explores geometry, symmetry and statistics in optimization to tackle nonconvexity, as part of an ERC Starting Grant funded by SERI. Nicolas has contributed to several modern theoretical advances in Riemannian optimization. He wrote a book on this topic, and is a lead-developer of the award-winning toolbox Manopt, which facilitates experimentation with optimization on manifolds.

15:30 – 16:00 Coffee break

“Efficient machine learning with tensor networks”

Abstract: Tensor Networks (TNs) are factorizations of high-dimensional tensors into networks of many low-dimensional tensors, which have been studied in quantum physics, high-performance computing, and applied mathematics. In recent years, TNs have been increasingly investigated and applied to machine learning and signal processing, owing to their significant advantages in handling large-scale and high-dimensional problems, model compression in deep neural networks, and efficient computations for learning algorithms. This talk aims to present some recent progress of TN technology applied to machine learning from the perspectives of basic principles and algorithms, novel approaches in unsupervised learning, tensor completion, multi-modal learning, and various applications in DNNs, CNNs, RNNs, etc.
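The factorization idea behind TNs can be illustrated with the tensor-train (TT) decomposition, one of the simplest tensor networks. The NumPy sketch below (a textbook TT-SVD on an assumed toy tensor, not code from the talk) factors a 3-way tensor into a chain of 3-way cores via repeated truncated SVDs of unfoldings.

```python
import numpy as np

def tensor_train(T, max_rank):
    """TT-SVD sketch: factor a d-way tensor into a chain ('train') of
    3-way cores G_k of shape (r_{k-1}, n_k, r_k) using sequential
    rank-truncated SVDs of the remaining unfolding."""
    cores, r = [], 1
    dims = T.shape
    M = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, len(s))
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        r = rk
        M = (s[:rk, None] * Vt[:rk]).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the chain of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A rank-1 tensor a ⊗ b ⊗ c is represented exactly with tiny TT-ranks.
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 3.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tensor_train(T, max_rank=2)
```

The compression advantage named in the abstract shows up in the storage count: the full tensor has n₁n₂n₃ entries, while the cores hold only Σₖ rₖ₋₁ nₖ rₖ numbers, which is dramatically smaller for high-order tensors with small TT-ranks.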

Bio: Qibin Zhao received the Ph.D. degree in computer science from Shanghai Jiao Tong University, China, in 2009. He was a research scientist at the RIKEN Brain Science Institute from 2009 to 2017. In 2017 he joined the RIKEN Center for Advanced Intelligence Project as a unit leader (2017–2019), and he is currently the team leader of the Tensor Learning Team. His research interests include machine learning, tensor factorization and tensor networks, computer vision, and brain signal processing. He has published more than 150 scientific papers in international journals and conferences and two monographs on tensor-network-based methods. He serves as an Action Editor for Neural Networks and Transactions on Machine Learning Research, as well as an Area Chair for top-tier ML conferences such as NeurIPS, ICML, ICLR, AISTATS, etc.

“Low-rank dynamical training of feed-forward neural networks”

Abstract: Neural networks have recently found tremendous interest in a large variety of applications. However, their memory and computational footprint can make them impractical in settings with limited computational resources. In the present contribution, a brief recapitulation on recent developments for dynamical low-rank approximation is presented. Then, based on the novel rank-adaptive unconventional robust numerical integrator for dynamical low-rank approximation, a novel algorithm(DLRT) for finding and efficiently training feed-forward neural networks having low-rank weight matrices is introduced. It is illustrated that up to a prescribed tolerance parameter, the proposed algorithm dynamically adapts during the training phase the ranks of the weight matrices of the neural network, reducing the overall time and memory resources required by both the training and the evaluating process. Furthermore, up to numerical errors, the DLRT algorithm is shown to preserve the monotonic decrease of the loss-function along the low-rank approximations. The efficiency and accuracy of the proposed method is illustrated through a variety of numerical experiments on fully-connected and convolutional networks.The present contribution is based on a joint work with Steffen Schotthöfer(KIT), Emanuele Zangrando(GSSI), Jonas Kusch(University of Innsbruck), and Francesco Tudisco(GSSI).
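The basic payoff of low-rank weights can be seen without the DLRT machinery. The NumPy sketch below (a fixed-rank toy, not the rank-adaptive integrator of the talk; all sizes and learning rates are illustrative assumptions) trains the factors U and V of a weight matrix W = U V by gradient descent, storing (m + n)·r numbers instead of m·n.

```python
import numpy as np

def train_low_rank(X, Y, rank, lr=0.02, epochs=2000, seed=0):
    """Train the factors U (m x r) and V (r x n) of W = U @ V by gradient
    descent on the squared loss ||X W - Y||^2 / (2N), never materializing
    a full-rank weight matrix during training."""
    rng = np.random.default_rng(seed)
    m, n, N = X.shape[1], Y.shape[1], X.shape[0]
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(rank, n))
    for _ in range(epochs):
        R = (X @ U @ V - Y) / N      # scaled residual
        gU = X.T @ R @ V.T           # dL/dU
        gV = U.T @ X.T @ R           # dL/dV
        U, V = U - lr * gU, V - lr * gV
    return U, V

# Toy regression whose true map W* itself has rank 2.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))
X = rng.normal(size=(200, 6))
Y = X @ W_true
U, V = train_low_rank(X, Y, rank=2)
```

DLRT goes beyond this fixed-rank sketch by integrating the training dynamics directly on the low-rank manifold and growing or shrinking the rank on the fly, up to the prescribed tolerance.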

“AI for Social Good – Dementia EEG Neurobiomarker Elucidation with Network Analysis of Time Series and Subsequent Machine Learning Model Application”

Abstract: Modern neurotechnology research employing state-of-the-art machine learning (ML) algorithms within the so-called ‘AI for social good’ domain contributes to improving the well-being of individuals with a disability. Using digital health technologies, home-based self-diagnostics, or cognitive-decline management approaches with neurobiomarker feedback enables the elderly to remain independent and improve their daily life. We report research results related to early-onset dementia neurobiomarkers to scrutinize cognitive-behavioral intervention management and digital non-pharmacological therapies. The EEG responses are analyzed in a framework of network neuroscience and topological data analysis (TDA) techniques applied to EEG time series for evaluation and to confirm the initial hypothesis of possible ML application modeling mild cognitive impairment prediction. The proposed experimental tasks in the current pilot study showcase the critical utilization of artificial intelligence for early-onset dementia prognosis in the elderly. We report best median accuracies well above 90% for linear SVM and deep fully-connected neural network classifier models in leave-one-subject-out cross-validation, which presents very encouraging results for the binary task of healthy cognitive aging versus MCI stages using TDA features applied to brainwave time series patterns captured from a four-channel EEG wearable.

Bio: Tomasz M. Rutkowski received his M.Sc. in Electronics and Ph.D. in Telecommunications and Acoustics from Wroclaw University of Technology, Poland, in 1994 and 2002, respectively. He received postdoctoral training at the Multimedia Laboratory, Kyoto University, and from 2005-2011 he worked as a research scientist at RIKEN Brain Science Institute, Japan. From 2011-2016 Tomasz served as an assistant professor at the University of Tsukuba and a visiting scientist at RIKEN Brain Science Institute. He served as a visiting lecturer at The University of Tokyo and was also a member of an AI startup in Tokyo. He is a research scientist at the RIKEN Center for Advanced Intelligence Project (AIP) and a research fellow at The University of Tokyo and Nicolaus Copernicus University. Tomasz’s research interests include computational neuroscience, especially brain-computer interfacing (BCI), computational modeling of evoked brain processes and awareness, and AI applications for dementia biomarker elucidation. He received The BCI Annual Research Award 2014 for the project “Airborne Ultrasonic Tactile BCI” and a nomination for the award in 2016. He also promotes diversity in research by serving as a juror for the Maria Sklodowska-Curie Prize for Young Female Scientists in Japan. More information is available at http://tomek.bci-lab.info/


Organising Committee:
Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.)
Prof. Masashi Sugiyama, Director, RIKEN Center for Advanced Intelligence Project
Dr Jan Kerschgens, EPFL (CIS)


Any questions regarding the event? Please contact us:
[email protected]