Informed Machine Learning

Auton Lab research frequently involves ways to incorporate expert knowledge into AI systems. This ranges from research on how to have experts efficiently label vast amounts of data, to incorporating feedback in active learning frameworks, to formal verification of model adherence to domain-specific constraints and design specifications. We take AI beyond the comfort zone of purely data-driven approaches: standard AI relies primarily on what can be learned from data, yet data is only a limited projection of reality. The Auton Lab is pursuing multiple avenues to make AI and ML smarter.

Highlighted Work

  • Weak Supervision
    - Many state-of-the-art models have a voracious appetite for labeled data, which is hard to satisfy in contexts where subject matter experts are the only people capable of providing annotations. The weak supervision paradigm replaces labeling of individual data samples with the creation of labeling functions; a minimal labeling-function sketch follows this list. Auton Lab work expands this paradigm to increase the efficiency and flexibility of the data programming framework.
  • Active Learning
    - Guiding which data points to label or investigate next by incorporating expert feedback into the learning loop, including active search for rare but valuable items.
  • Semi-supervised Learning
    - Combining small amounts of labeled data with large pools of unlabeled data to improve models when expert annotation is scarce or expensive.
  • Principle-Driven AI
    - Leveraging domain knowledge, first principles of physics, chemistry, and biology, and common sense, and demonstrating common sense in the resulting models. The utility of machine learning is that it learns useful policies from data, but how to incorporate domain-specific constraints into the training process remains an open question; a minimal constraint-penalty sketch follows this list. The Auton Lab helps SMEs codify their knowledge in ways that inform the model fitting process, including physics-informed algorithms, as well as the testing process, including model-centric verification of adherence to design specifications and statistical evaluation of business metrics.
  • Introspective AI
    - Models should be conscious of their own decision logic and able to admit what they can and cannot do. This includes building systems that exhibit algorithmic fairness.
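
The following is a minimal, illustrative sketch of the labeling-function idea behind data programming, assuming a toy spam-detection task; the function names, labels, and majority-vote combination are stand-ins for illustration, not the Auton Lab's actual framework or any specific library's API.

```python
# Minimal sketch of the data-programming idea: experts write labeling
# functions (heuristics) instead of labeling individual examples.
# All names here are illustrative; a real system would learn a label
# model rather than use a simple majority vote.
from collections import Counter

ABSTAIN, SPAM, NOT_SPAM = -1, 1, 0

def lf_contains_link(text):            # heuristic 1: links are suspicious
    return SPAM if "http" in text else ABSTAIN

def lf_mentions_prize(text):           # heuristic 2: prize offers are suspicious
    return SPAM if "prize" in text.lower() else ABSTAIN

def lf_short_greeting(text):           # heuristic 3: short greetings look benign
    return NOT_SPAM if len(text.split()) < 4 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_mentions_prize, lf_short_greeting]

def weak_label(text):
    """Combine labeling-function votes; return ABSTAIN if no function fires."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("Claim your prize at http://example.com"))  # -> 1 (SPAM)
print(weak_label("See you soon"))                            # -> 0 (NOT_SPAM)
```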
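
Below is a hedged sketch of one way domain constraints can inform model fitting: adding a penalty term to the training loss. The monotonicity constraint, the network, the synthetic data, and the penalty weight are all illustrative assumptions, not the lab's actual physics-informed algorithms.

```python
# Illustrative sketch (not the lab's code): encode a domain constraint as a
# penalty added to the data-fit loss, so the fitted model respects expert
# knowledge. The assumed constraint is monotonicity: predictions should not
# decrease as the input's first feature increases.
import torch

model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

X = torch.rand(256, 3)
y = 2.0 * X[:, :1] + 0.1 * torch.randn(256, 1)    # toy data: increasing in feature 0

for step in range(200):
    pred = model(X)
    data_loss = torch.mean((pred - y) ** 2)

    # Constraint penalty: finite-difference check that nudging feature 0
    # upward never lowers the prediction (hinge on violations only).
    X_shift = X.clone()
    X_shift[:, 0] += 0.05
    violation = torch.relu(model(X) - model(X_shift))
    constraint_loss = torch.mean(violation)

    loss = data_loss + 10.0 * constraint_loss      # penalty weight is a design choice
    opt.zero_grad()
    loss.backward()
    opt.step()
```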

Supporting Scientific Discovery

Research that helps scientists uncover structure and rare events in large datasets, turning raw observations into leads for discovery.

Highlighted Work

  • Data Mining
    - Leveraging hidden structure in data informs design decisions in modeling paradigms and expands the set of possibilities for building useful models.
  • Anomaly Detection
    - Capturing rare events in data provides leads for investigation and helps explain the environmental conditions under which a model performs well or poorly; a minimal detection sketch follows this list.
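
As a concrete illustration of the Anomaly Detection item above, here is a minimal sketch that flags rare events with an off-the-shelf detector so experts can investigate them; the synthetic data, contamination rate, and choice of scikit-learn's IsolationForest are assumptions for illustration only.

```python
# Minimal sketch: flag rare events with an off-the-shelf detector so that
# experts can investigate them. Data and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))         # bulk of the data
rare = rng.normal(6.0, 0.5, size=(5, 2))             # a handful of rare events
X = np.vstack([normal, rare])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)                          # -1 marks anomalies

print("flagged indices:", np.where(flags == -1)[0])  # leads for investigation
```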

Pragmatic Deep Learning

Research at the intersection of the benefits of deep learning and the real-world constraints that limit the effectiveness of standard methods.

Highlighted Work

  • Hybrid Forecasting
    - Incorporating exogenous variables to reduce forecasting error, and learning hierarchical, structural elements of data to aid in forecasting; an exogenous-input sketch follows this list.
  • Deep Survival Analysis
    - Predicting time to failure under censored outcomes and discovering effective interventions under assumptions of heterogeneity; a censored-likelihood sketch follows this list.
  • Bayesian Deep Learning
    - The Auton Lab works on methods to build autoencoders with fewer parameters than deep network alternatives, trading some fidelity for scalability and enabling accurate forecasting over long windows into the future. Variational autoencoders in this line of work break the assumption of independence among subsequent layers in the network, which reduces model complexity while maintaining performance and cutting training time and resource consumption; a minimal VAE sketch follows this list.
  • Deep Reinforcement Learning
    - Sample-efficient reinforcement learning, and learning with genetic curriculum.
  • Deep Creativity
    - Style infusion and morphing of images.
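
The sketch below illustrates the Hybrid Forecasting idea of feeding exogenous variables into a forecaster, assuming a toy demand series driven by temperature and a plain linear autoregressive model; it is a stand-in under those assumptions, not the lab's hybrid architectures.

```python
# Illustrative sketch: augment an autoregressive forecaster with an exogenous
# variable (a synthetic temperature series) to reduce forecast error.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
T = 300
temperature = 10 + 5 * np.sin(np.arange(T) * 2 * np.pi / 24)   # exogenous driver
demand = 100 + 3 * temperature + rng.normal(0, 1, T)           # target series

lags = 3
X, y = [], []
for t in range(lags, T):
    X.append(np.r_[demand[t - lags:t], temperature[t]])        # lagged target + exogenous input
    y.append(demand[t])
X, y = np.array(X), np.array(y)

model = LinearRegression().fit(X[:-50], y[:-50])
rmse = np.sqrt(np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))
print(f"holdout RMSE with exogenous input: {rmse:.2f}")
```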
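
For the Deep Survival Analysis item, here is a minimal sketch of training a neural time-to-event model with a censored negative log-likelihood; the exponential hazard assumption, the network size, and the synthetic data are illustrative simplifications, not the lab's models.

```python
# Illustrative sketch: a neural time-to-event model trained with a censored
# negative log-likelihood. An exponential hazard is assumed for brevity.
import torch

net = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.randn(200, 5)                       # covariates
time = torch.rand(200, 1) * 10 + 0.1          # observed time (toy)
event = (torch.rand(200, 1) < 0.7).float()    # 1 = failure observed, 0 = censored

for step in range(300):
    log_rate = net(x)                         # log hazard rate per subject
    rate = torch.exp(log_rate)
    # Exponential model: log-likelihood = event*log(rate) - rate*time
    # (censored subjects contribute only the survival term -rate*time).
    nll = -(event * log_rate - rate * time).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()
```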
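
For the Bayesian Deep Learning item, the following is a compact sketch of a standard variational autoencoder objective (reconstruction plus KL regularization); it is a generic stand-in for the lab's scalable autoencoder work, not its actual models, and the layer sizes and data are assumptions.

```python
# Minimal sketch of a standard variational autoencoder objective:
# reconstruction loss plus a KL regularizer on the latent code.
import torch

class TinyVAE(torch.nn.Module):
    def __init__(self, d_in=20, d_z=4):
        super().__init__()
        self.enc = torch.nn.Linear(d_in, 2 * d_z)   # outputs mean and log-variance
        self.dec = torch.nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.randn(128, 20)

for step in range(200):
    recon, mu, logvar = vae(x)
    rec_loss = ((recon - x) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - torch.exp(logvar)).mean()
    loss = rec_loss + kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```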

Making AI Usable & Accessible

AI research that is informed by constant exposure to real-world, domain-specific constraints including resource limits, privacy considerations, and user trust & understanding.

Highlighted Work

  • Automated Machine Learning
    - AutonML augments the capacity of data scientists by automating searches for plausible modeling process designs; a toy pipeline-search sketch follows this list. It can help address shortages of qualified personnel and boost the productivity of current staff by automatically learning what is learnable from data.
  • Distributed AI
    - Federated learning supports machine learning in a distributed manner: models are trained on local data, and only updates to the global model parameters are shared; a federated-averaging sketch follows this list.
  • Efficient Data Structures and Learning Algorithms
    - Intelligent data structures can support fast queries for information that would otherwise take a long time to compute, such as temporal scans and robustness guarantees; a prefix-sum sketch follows this list. We also work on making existing learning paradigms more efficient and scalable.
  • Reinforcement Learning
    - Current trends in reinforcement learning require massive amounts of data and compute power. Work at the Auton Lab makes RL much more efficient and accessible, letting researchers push its limits and answer new questions without requiring massive computing infrastructure.
  • Explainable AI
    - If understanding, performance, and trust are integral to the adoption of AI in new, mission-critical fields, a model's inability to rationalize its behavior is rate-limiting. If users cannot supervise AI systems, there is a non-trivial chance that AI will inflict otherwise easily preventable harm on humans. The Auton Lab develops a variety of tools intended to give developers of AI systems a better understanding of what their models actually learn.
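
As a toy illustration of the Automated Machine Learning item, the sketch below scores a few candidate modeling pipelines by cross-validation and keeps the best; the candidate set and scikit-learn components are assumptions, and this is not AutonML's actual interface or search strategy.

```python
# Toy stand-in for automated pipeline search: score candidate pipelines by
# cross-validation and keep the best. Candidates here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "scaled_logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)),
    "random_forest": make_pipeline(RandomForestClassifier(n_estimators=200, random_state=0)),
}

scores = {name: cross_val_score(p, X, y, cv=5).mean() for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(f"best pipeline: {best} (cv accuracy {scores[best]:.3f})")
```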
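
For the Distributed AI item, here is a minimal federated-averaging sketch in which each client trains on its local data and only parameter updates are aggregated into the global model; the linear model, client data, and round count are illustrative assumptions.

```python
# Minimal sketch of federated averaging: clients train locally and only
# parameter updates are aggregated, so raw data never leaves the client.
import copy
import torch

def local_update(global_model, data, targets, epochs=1, lr=0.1):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = torch.nn.functional.mse_loss(model(data), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_average(states):
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = torch.nn.Linear(4, 1)
clients = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]   # local datasets

for round_ in range(5):
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(states))             # aggregate updates
```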
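
For the Efficient Data Structures item, this small sketch shows how a precomputed prefix-sum array answers temporal window queries in constant time; it is only a simple stand-in for the richer structures the lab builds, and the series and window bounds are illustrative.

```python
# Minimal sketch: precompute prefix sums so any temporal window total can be
# answered in O(1) instead of rescanning the series each time.
import numpy as np

values = np.random.default_rng(2).random(1_000_000)      # long time series
prefix = np.concatenate(([0.0], np.cumsum(values)))      # one O(n) pass

def window_sum(start, end):
    """Sum of values[start:end] in constant time."""
    return prefix[end] - prefix[start]

print(window_sum(250_000, 750_000))
```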