These are research areas of particular interest to the lab that present many challenges in real-world applications. These topics will be the focus of the hackAuton.
1. AI Safety
Trustworthiness, Interpretability, AI x Policy
AI has the potential to revolutionize many aspects of our lives, from healthcare and transportation to entertainment and finance. However, with this great power comes great responsibility. Ensuring that AI is safe and aligned with human values is crucial to avoid unintended consequences and negative impacts on society.
- Deepfakes: Deepfakes are AI-generated videos or images that can be used to manipulate public opinion or defame individuals. They have already been used to spread false information and create fake news. A recent NPR article highlights technical and policy challenges.
- Autonomous Vehicles: Self-driving cars are becoming more common on our roads, but there are still concerns about their safety and ability to make ethical decisions in complex situations. The National Highway Traffic Safety Administration identifies autonomous vehicle safety as one of four core topics for technology and innovation.
- Bias: AI algorithms can be biased if they are trained on biased data. This can lead to discrimination and unfair treatment of certain groups. The National Institute of Standards and Technology comments on the scope and challenge of AI bias.
- Explainability and Interpretability: Certain applications of AI must justify critical decisions or earn the trust of users through transparency. Meanwhile, popular focus continues to trend toward increasingly large and complex black-box models. IBM discusses the value of explainable artificial intelligence (XAI).
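As a concrete illustration of the bias concern above, one common starting point is to measure whether a model's positive-decision rate differs across groups (a demographic parity check). The sketch below uses purely hypothetical predictions and group labels; real audits would use held-out data and domain-appropriate fairness metrics.

```python
# Minimal sketch: demographic parity check for a binary classifier.
# The predictions and group labels below are hypothetical illustration data.

def demographic_parity_gap(predictions, groups):
    """Return the absolute difference in positive-prediction rates
    between the two groups present in `groups`."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Hypothetical model outputs (1 = positive decision) for two groups.
preds  = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap does not by itself prove discrimination, but it flags a disparity worth investigating against the training data and task context.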
2. Green/Sustainable AI/ML
Resource-constrained AI. How can we leverage smarter AI without consuming compute and energy resources at unsustainable rates?
As models have grown larger and their use has become more widespread, concerns about sustainability have emerged. Ensuring long-term cost-effectiveness and minimizing environmental impact requires exploring ways to get smarter AI from fewer resources.
- Energy Consumption: For large models, both training and inference consume substantial amounts of energy. This is costly, limiting the ways these models can be practically deployed. Energy consumption is also associated with carbon emissions. An article by MIT Technology Review summarizes energy and emissions concerns.
- Hardware: Expensive hardware such as GPUs or TPUs is required for large models. A CNBC article notes that the primary data center GPU from NVIDIA costs $10,000, and more recent high-end chips can cost several times that. Frequent upgrades and replacements also result in electronic waste (e-waste) that poses environmental hazards.
- Data Costs: Collecting and preparing a sufficient amount of training data also comes with labor, computation, and storage costs. Techniques that reduce data-related costs can improve the sustainability of developing AI systems.
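To make the energy-consumption concern above concrete, a rough back-of-envelope footprint estimate can be computed from GPU count, runtime, and grid carbon intensity. All constants in this sketch (power draw, PUE, carbon intensity) are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of training energy and emissions.
# All constants below are illustrative assumptions, not measured values.

GPU_POWER_KW = 0.3        # assumed average draw per GPU (300 W)
CARBON_KG_PER_KWH = 0.4   # assumed grid carbon intensity (kg CO2e / kWh)

def training_footprint(num_gpus, hours, pue=1.5):
    """Estimate energy (kWh) and emissions (kg CO2e) for a training run.
    `pue` is the data-center Power Usage Effectiveness multiplier,
    accounting for cooling and other overhead beyond the GPUs."""
    energy_kwh = num_gpus * GPU_POWER_KW * hours * pue
    emissions_kg = energy_kwh * CARBON_KG_PER_KWH
    return energy_kwh, emissions_kg

# Hypothetical run: 64 GPUs for two weeks (336 hours).
energy, co2 = training_footprint(num_gpus=64, hours=336)
print(f"{energy:,.0f} kWh, {co2:,.0f} kg CO2e")
```

Even this crude model makes the trade-off visible: doubling model-training time or GPU count doubles the estimated emissions, which motivates techniques like early stopping, distillation, and efficient architectures.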
3. Reducing ML Development Cycle Time
How can we get fast prototypes? Reactionary AI? Generalized problem solving?
Building specialized AI systems requires a large amount of time and effort from a limited pool of experts. Improving accessibility to AI by reducing these barriers and avoiding common errors can make AI more impactful in a more diverse range of applications.
- Automatic Design: Automated Machine Learning (AutoML) and automatic hyperparameter tuning can dramatically reduce design effort and development time. For example, see the Auton Lab’s own AutoML system, AutonML.
- Reactionary AI: Given ever-changing environments, how can we create ML technology that can recognize and react to fundamental shifts? How can models learn from a small number of counter-examples?
- Generalized Problem Solving: Developing general solutions can reduce effort for specialized tasks, especially by reducing the amount and quality of data required for success in the target tasks. For example, transfer learning adapts models from one task to another related task, while foundation models are purpose-built to be fine-tuned for a wide range of tasks.
- Collaboration and Resource Sharing: Establishing open platforms for collaboration and knowledge sharing among ML practitioners, researchers, and industry experts accelerates learning and promotes best practices. Sharing code, datasets, and research findings can help others build upon existing work, reducing duplication of effort.
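The automatic-design idea above can be sketched with the simplest form of automated hyperparameter tuning: random search over a configuration space. The objective below is a hypothetical stand-in for a validation score; in practice it would come from training and evaluating a real model (and a system like AutonML automates far more than this).

```python
import random

# Minimal sketch of automated hyperparameter tuning via random search.
# `validation_score` is a hypothetical stand-in objective; in a real
# pipeline it would train a model and return its validation-set score.

def validation_score(learning_rate, num_layers):
    # Hypothetical surface peaking at learning_rate=0.01, num_layers=4.
    return -((learning_rate - 0.01) ** 2) * 1e4 - (num_layers - 4) ** 2

def random_search(trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform sample
            "num_layers": rng.randint(1, 8),
        }
        score = validation_score(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print("best config found:", best_cfg)
```

Random search is a deliberately simple baseline; AutoML systems layer model selection, pipeline construction, and smarter search strategies (e.g. Bayesian optimization) on top of the same loop.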
4. Patient Safety
How can we ensure models are not racially biased?
What is Patient Safety? The discipline aims to prevent and reduce risks, errors, and harm that occur to patients when they seek care.
Five Problem Categories:
- Medication (44%): Examples: Wrong drug, patient, dose, route, time. Delirium or other change in mental status. Significant hypoglycemia. Acute kidney injury.
- Patient (23%): Examples: Pressure injury. Blood clots (VTE and PE). Fall or trauma with injury.
- Procedure/Surgery (22%): Examples: Intestinal perforation. Excessive bleeding. Pneumothorax.
- Infection (11%): Examples: Respiratory infection. Surgical site infection. CLABSI.
- Diagnostic Error: Examples: Missed, delayed, or wrong diagnoses produced by AI/ML models.