AI and Machine Learning are evolving rapidly, driven by advances in data, computing, and algorithms. Current research in this area centers on large language and foundation models, with efforts to improve their efficiency, automation, and integration into real-world tasks. Key areas include fine-tuning models for enterprise applications, advancing AI safety and trust, and exploring deep learning approaches such as self-supervised learning, reinforcement learning, and generative AI. These innovations are shaping the future of AI across industries, from natural language processing to ethical considerations.
## AI Algorithms

### Projects

#### 2025
| Project | RPI Principal Investigators | IBM Principal Investigators |
|---|---|---|
| Theoretical and Algorithmic Foundations of In-Context Learning and Chain of Thought Using Properly Trained Transformer Models | Meng Wang | Songtao Lu, Pin-Yu Chen, Xiaodong Cui |
| Key-Value Cache Compression for Memory-Efficient Large Language Model Inference | Mohammad Mohammadi Amiri | Pin-Yu Chen, Tejaswini Pedapati, Subhajit Chaudhury |
| Associative Energy-based Diffusion Models for Generative AI | Mohammed J. Zaki | Dmitry Krotov, Rogerio S. Feris |
| Fast Inference and Alignment for Large Multi-modal Models | Koushik Kar, Tianyi Chen | Parikshit Ram, Nathalie Baracaldo, Yi Zhou, Horst Samulowitz, Ken Wong |
| Meta-Transfer-Learning for Tabular Data Distillation, Generation, and Predictive Modeling | Oshani Seneviratne | Horst Samulowitz, Yi Zhou, Parikshit Ram |
| Interpretable Foundation Models for General-Purpose Time Series Analysis | Agung Julius | Lam Nguyen |
| Strategic AI: Enhancing Large Language Model Agents with Decision-Making Algorithms for Advanced Reasoning and Planning | Santiago Paternain | Dharmashankar Subramanian |
| Large Language Models as Planning Domain Model Generators | Selmer Bringsjord | Kavitha Srinivas, Harsha Kokel, Michael Katz, Shirin Sohrabi |
#### 2024
| Project | RPI Principal Investigators | IBM Principal Investigators |
|---|---|---|
| Energy Transformer for Foundational Models | Mohammed Zaki | Dmitry Krotov, Benjamin Hoover, Hendrik Strobelt |
| FIT: Fast Inference using Transformer models | Koushik Kar, Tianyi Chen | Parikshit Ram, Nathalie Baracaldo, Yi Zhou, Soham Dan, Horst Samulowitz |
| Foundational Models for Understanding Tabular Data Through AI Automation | Jianxi Gao | Kavitha Srinivas, Tejaswini Pedapati, Horst Samulowitz, Pin-Yu Chen |
| Multi-Objective Training of Foundation Acoustic Models for Automatic Speech Recognition | Tianyi Chen, Mei Si | Xiaodong Cui, Brian Kingsbury, Songtao Lu |
| Resource-Effective Fine-Tuning of Large Language Models | Mohammad Mohammadi Amiri | Pin-Yu Chen, Tejaswini Pedapati, Subhajit Chaudhury |
| Testing LLM Safety via Causal Reasoning | Ali Tajer | Prasanna Sattigeri, Dennis Wei, Dmitriy Katz-Rogozhnikov |
| Theoretical and Algorithmic Foundations of In-Context Learning Using Properly Trained Transformer Models | Meng Wang | Songtao Lu, Pin-Yu Chen |
| Unlearning: Dynamics of Membership Privacy and Inference Attacks Against Large Language Models | Lei Yu, Malik Magdon-Ismail | Nathalie Baracaldo, Ling Cai |
| Control-Based Reinforcement Learning | Santiago Paternain | Mark Squillante, Chai Wah Wu |
| Correctors and Selectors: Building An Ecosystem for LLM Alignment | Alex Gittens | Mikhail Yurochkin, Mayank Agarwal |
| Data Distillation in Tabular Data: A Foundation Model Approach | Oshani Seneviratne, Inwon Kang | Horst Samulowitz, Parikshit Ram, Yi Zhou |