Research scientists and faculty at Rensselaer collaborate with IBM researchers on projects that push the frontiers of Artificial Intelligence.
Research

AI and Machine Learning are rapidly evolving, driven by advances in data, computing, and algorithms. Research for 2023-2024 focuses on enhancing large language and foundation models, with efforts to improve their efficiency, automation, and integration into real-world tasks; a brief illustrative sketch follows the project list below.
- Associative Energy-based Diffusion Models for Generative AI
- Control-Based Reinforcement Learning
- Correctors and Selectors: Building An Ecosystem for LLM Alignment
- Data Distillation in Tabular Data: A Foundation Model Approach
- Energy Transformer for Foundational Models
- Fast Inference and Alignment for Large Multi-modal Models
- FIT: Fast Inference using Transformer models
- Foundational Models for Understanding Tabular Data Through AI Automation
- Interpretable Foundation Models for General-Purpose Time Series Analysis
- Key-Value Cache Compression for Memory-Efficient Large Language Model Inference
- Large Language Models as Planning Domain Model Generators
- Meta-Transfer-Learning for Tabular Data Distillation, Generation, and Predictive Modeling
- Multi-Objective Training of Foundation Acoustic Models for Automatic Speech Recognition
- Resource-Effective Fine-Tuning of Large Language Models
- Strategic AI: Enhancing Large Language Model Agents with Decision-Making Algorithms for Advanced Reasoning and Planning
- Testing LLM Safety via Causal Reasoning
- Theoretical and Algorithmic Foundations of In-Context Learning and Chain of Thought Using Properly Trained Transformer Models
- Theoretical and Algorithmic Foundations of In-Context Learning Using Properly Trained Transformer Models
- Unlearning: Dynamics of Membership Privacy and Inference Attacks Against Large Language Models
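
To give a sense of the scale behind efficiency-oriented projects such as "Key-Value Cache Compression for Memory-Efficient Large Language Model Inference", the back-of-the-envelope sketch below estimates the key-value cache footprint of a decoder-only transformer during autoregressive decoding. The model dimensions are illustrative assumptions, not figures from any listed project.

```python
# Back-of-the-envelope KV-cache size for a decoder-only transformer.
# All model dimensions below are illustrative assumptions.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch_size, bytes_per_elem=2):
    """Memory held by the key/value cache during autoregressive decoding.

    The leading factor of 2 accounts for storing both keys and values;
    bytes_per_elem=2 corresponds to fp16/bf16 activations.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# Hypothetical 7B-class configuration: 32 layers, 32 KV heads of size 128.
full = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                      seq_len=4096, batch_size=8)
print(f"uncompressed KV cache: {full / 2**30:.1f} GiB")

# A scheme that keeps roughly a quarter of the entries, or quantizes them
# to 4-bit, shrinks the footprint proportionally.
print(f"4x-compressed KV cache: {full / 4 / 2**30:.1f} GiB")
```

At the assumed dimensions the cache alone occupies about 16 GiB before any compression, which is the memory pressure that cache-compression and quantization approaches target.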

Deep neural networks (DNNs) have driven significant breakthroughs but also raised concerns due to their increasing computational and energy demands. This research focuses on hardware-software co-design strategies to improve AI efficiency across platforms such as data centers, edge, and embedded devices; a small code sketch after the project list illustrates one such technique.
- Algorithmic Innovations and Architectural Support towards In-Memory Training on Analog AI Accelerators
- Bringing AI Intelligence to 5G/6G Edge Platform
- Closing the Accuracy Gap in Analog In-memory Training: Device-dependent Algorithms and Hyperparameter Search
- Efficient Deployment of Large Language Model over Heterogeneous Computing Systems
- Holistic Algorithm-Architecture Co-Design of Approximate Computing for Scalable Foundation Models
- Low-precision Distributed Accelerated Methods and Library Development for Training and Fine-tuning Foundation Models
- Low-precision Second-order-type Distributed Methods for Training and Fine-tuning Foundation Models
- Model Optimization and Hardware-aware Neural Architecture Search for Spatiotemporal Data Mining
- Optimization of Hardware-based Neural Network Accelerators for Fluorescence Lifetime in Biomedical Applications
- Structured & Robust Neural Network Pruning on Low-Precision Hardware for Guaranteed Learning Performance for Complex Time-Series Datasets
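
As a small illustration of the software half of hardware-software co-design, loosely related to the low-precision projects listed above, the sketch below applies generic symmetric int8 quantization to a weight matrix and reports the storage savings and reconstruction error. It is a textbook example, not code from any of these projects.

```python
import numpy as np

# Minimal symmetric per-tensor int8 quantization of a weight matrix.
# Generic illustration of low-precision inference; values are synthetic.

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)

scale = np.max(np.abs(w)) / 127.0            # map the largest weight to +/-127
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

rel_err = np.linalg.norm(w - w_dequant) / np.linalg.norm(w)
print(f"storage: {w.nbytes / 2**20:.0f} MiB fp32 -> {w_int8.nbytes / 2**20:.0f} MiB int8")
print(f"relative reconstruction error: {rel_err:.4f}")
```

Per-tensor scaling is the simplest scheme; practical kernels typically move to per-channel or per-group scales, lower bit widths, and quantization-aware training, where hardware support becomes the limiting factor.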

The Semiconductor Technology Track focuses on two key areas essential for the continued scaling of semiconductor systems: BEOL interconnect technologies and chiplet technologies. Research in this area includes both experimental work on material development and theoretical studies using AI/ML and simulations to improve interconnect performance; a simple worked estimate follows the project list.
- Anisotropic Thermal Resistance Characterization of Si, BEOL, and underfill layers using 3ω Joule heating thermometry and exploratory non-destructive Scanning Thermal Microscopy / Multiscale Thermal Modeling of 3D ICs
- Control of orientation and handedness of nanoscale topological and directional interconnect conductors on amorphous SiO2
- Control of orientation and handedness of nanoscale topological interconnect conductors on amorphous SiO2
- Discovering topological materials for BEOL interconnects using first-principles calculations and machine learning
- E-beam glancing angle scattering for hybrid bonding surface planarity measurement
- High resolution x-ray imaging for 3D-HI chiplet non-destructive internal metal joints inspection
- Intermetallic compounds for high-conductivity interconnects
- Meta-Learned Digital Twins for Circuit Design in Chiplet Integration
- Molecular engineering of metal/low-k interfaces for Cu interconnects
- Molecular nanoengineering of post-Cu-metal/dielectric interfaces for nanodevice wiring
- Validation of multiscale 3D chiplet stack thermal model with temperature-dependent materials through 3ω measurements
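
To connect the thermal-modeling entries above (for example, the 3ω thermometry and multiscale chiplet-stack work) to a concrete calculation, the sketch below performs a one-dimensional series thermal-resistance estimate through a simplified die stack. All thicknesses, conductivities, and power numbers are illustrative assumptions, not measured values from these projects.

```python
# Minimal 1D series thermal-resistance estimate for a chiplet-like stack.
# Layer thicknesses and conductivities are illustrative assumptions only.

layers = [
    # (name, thickness [m], thermal conductivity [W/(m*K)])
    ("Si die",    100e-6, 150.0),
    ("BEOL",       10e-6,   2.0),
    ("underfill",  30e-6,   0.5),
]

power = 50.0               # W dissipated by the die (assumed)
area = 1e-4                # 1 cm^2 footprint (assumed)
heat_flux = power / area   # W/m^2, assuming uniform 1D conduction

total = 0.0
for name, t, k in layers:
    dT = heat_flux * t / k            # temperature drop across the layer
    total += dT
    print(f"{name:10s} dT = {dT:6.2f} K")

print(f"{'total':10s} dT = {total:6.2f} K")
```

Even this toy estimate shows the low-conductivity BEOL and underfill layers dominating the temperature rise, which is why layer-by-layer thermal characterization matters for 3D integrated circuits.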

Quantum computing is rapidly advancing as a powerful complement to high-performance digital computing. The focus will be on accelerating knowledge transfer between IBM and Rensselaer, advancing quantum computing as a research tool, and supporting workforce development and education; a minimal circuit example follows the project list.
- Benchmarking Quantum Computational Methods for Thermo-Chemical Processes
- Educating the Quantum Future: Filling the Pipeline from Middle School to the Workforce
- Enhancing Gate Fidelity and Speed Through AI-Driven Optimization for Hubbard Model Computation
- HPC-assisted Hybrid Classical Quantum System for Drug Discovery and Development
- Integration and Scheduling of High Performance Computing (HPC) and Quantum Computing Workflows
- Quantum Computing Exploration to Advance Supply Chain
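
As a minimal, generic example of the circuit-level experimentation that underpins this track (assuming a Python environment with Qiskit and the qiskit-aer simulator installed; this is a standard Bell-state demo, not code from any listed project):

```python
# Generic two-qubit Bell-state circuit, assuming Qiskit with the
# qiskit-aer simulator installed. Illustrative demo only.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)            # put qubit 0 into superposition
qc.cx(0, 1)        # entangle qubit 1 with qubit 0
qc.measure_all()

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)      # expected: roughly equal '00' and '11' outcomes
```

The same circuit could be submitted to IBM quantum hardware through the Qiskit Runtime service instead of the local simulator, which is the kind of workflow the HPC-quantum integration projects above address at much larger scale.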