Transitioning from narrow artificial intelligence to artificial general intelligence will require incorporating additional fundamental learning principles that evolved in biologically intelligent systems. One such principle is lifelong learning: the ability to use incoming data to improve performance on essentially all tasks, both past and present, without catastrophically forgetting anything important. We provide a general framework in which an intelligent agent can perform lifelong learning, and then propose a concrete algorithm, which generalizes decision forests, to achieve it. Theory, simulations, and real data applications demonstrate the power of this approach.
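The abstract does not spell out the algorithm, but the general shape of lifelong learning with forest-style learners can be sketched: each task induces a partition of the input space, and every task's partition is re-voted on every other task's data, so new tasks can improve old ones and vice versa. Everything below (the `LifelongForest` name, the random-stump partitions, the toy tasks) is an illustrative assumption, not the paper's actual method.

```python
import random
from collections import defaultdict

random.seed(0)

def make_stumps(data, n_stumps=8):
    """A per-task 'transformer': random axis-aligned splits fit to one task's data."""
    stumps = []
    for _ in range(n_stumps):
        f = random.randrange(len(data[0][0]))           # random feature index
        lo = min(x[f] for x, _ in data)
        hi = max(x[f] for x, _ in data)
        stumps.append((f, random.uniform(lo, hi)))      # random threshold in range
    return stumps

def cell(stumps, x):
    """Map a point to the partition cell induced by the stumps."""
    return tuple(x[f] > t for f, t in stumps)

class LifelongForest:
    def __init__(self):
        self.transformers = []   # one partition per task seen so far
        self.tasks = []          # stored training data per task
        self.voters = {}         # (transformer_id, task_id) -> cell -> class counts

    def add_task(self, data):
        self.tasks.append(data)
        self.transformers.append(make_stumps(data))
        # Cross-wire: every transformer votes on every task, giving both
        # forward transfer (old partitions help the new task) and backward
        # transfer (the new partition re-votes on old tasks).
        for r, stumps in enumerate(self.transformers):
            for t, task_data in enumerate(self.tasks):
                if (r, t) in self.voters:
                    continue
                counts = defaultdict(lambda: defaultdict(int))
                for x, y in task_data:
                    counts[cell(stumps, x)][y] += 1
                self.voters[(r, t)] = counts

    def predict(self, x, task_id):
        totals = defaultdict(float)
        for r, stumps in enumerate(self.transformers):
            counts = self.voters[(r, task_id)][cell(stumps, x)]
            n = sum(counts.values())
            if n == 0:                       # cell unseen during training: skip
                continue
            for y, c in counts.items():
                totals[y] += c / n           # average posteriors across partitions
        if not totals:                       # fall back to the task's majority class
            ys = [y for _, y in self.tasks[task_id]]
            return max(set(ys), key=ys.count)
        return max(totals, key=totals.get)

def toy_task(rule, n=200):
    pts = [(random.random(), random.random()) for _ in range(n)]
    return [(p, int(rule(p))) for p in pts]

lf = LifelongForest()
lf.add_task(toy_task(lambda p: p[0] > 0.5))   # task 0: split on the x-axis
lf.add_task(toy_task(lambda p: p[1] > 0.5))   # task 1: split on the y-axis
pred0 = lf.predict((0.9, 0.1), task_id=0)
pred1 = lf.predict((0.1, 0.9), task_id=1)
```

After both tasks are added, each prediction pools votes from both partitions, which is the sense in which data from one task can improve performance on another.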
I received a B.S. degree from the Department of Biomedical Engineering (BME) at Washington University in St. Louis, MO in 2002, an M.S. degree from the Department of Applied Mathematics & Statistics (AMS) at Johns Hopkins University (JHU) in Baltimore, MD in 2009, and a Ph.D. degree from the Department of Neuroscience at JHU in 2009. I was a Postdoctoral Fellow in AMS@JHU from 2009 until 2011, at which time I was appointed an Assistant Research Scientist and became a member of the Institute for Data Intensive Science and Engineering. I spent two years at the Information Initiative at Duke University before coming home to my current appointment as Assistant Professor in BME@JHU, where I am core faculty in both the Institute for Computational Medicine and the Center for Imaging Science, as well as a member of the Kavli Neuroscience Discovery Institute. I married my kindergarten sweetheart in the summer of 2014; we had our first child in 2017 and a second in 2019.