Welcome to the Collaborative, Robust & Explainable AI-based Decision-making Lab (CARE.AI)

The CARE.AI lab aims to build collaborative and robust decision-making systems that augment human intelligence.

Vision and Mission

AI decision-making systems are being successfully prototyped in a wide variety of domains, including mobility, logistics, safety, security, and public health. However, most of these systems work independently of humans and of other AI systems. For these systems to be useful and integrated with humans to solve real use cases, two key challenges must be addressed:

  1. AI systems have to work seamlessly with other AI systems and with humans in the loop.
  2. Humans should be able to trust AI systems. This requires robustness and easy understanding (explainability) of AI decision making.

To address these two challenges, the CARE.AI lab builds collaborative and robust decision-making systems that augment human intelligence.

Focus Areas (current)

The current focus of the CARE.AI lab is on problems where AI systems assist non-expert humans, either through training or through situated assistance.