Research scientists and faculty at Rensselaer join forces with IBM researchers to collaborate on projects that push the frontiers of Artificial Intelligence.
Research
AI and Machine Learning are rapidly evolving, driven by advances in data, computing, and algorithms. Research for 2023-2024 focuses on enhancing large language and foundation models, with efforts to improve their efficiency, automation, and integration into real-world tasks.
- A Code Knowledge Graph for Planning Data Science Experiments
- A Framework for Automating Decentralized Training of Foundation Models
- Accelerated and Compressed Distributed Stochastic Optimization for Deep Learning
- Active Learning for Automated Decision-Making
- Active Learning of Adversarial Attack Boundaries
- Advancing LLM Reasoning via Intrinsic and Integrative Capabilities
- AI Models for Curation of Threat Intelligence
- AI Safeguards Using Agentic AI
- Anomaly Detection on Knowledge Graphs
- Associative Energy-based Diffusion Models for Generative AI
- Asynchronous and adaptive stochastic approximation methods for accelerating deep learning
- AutoDML: A Framework for Automating Decentralized Machine Learning
- Automated Design and Optimization of Enterprise-Scale AI Agent Systems
- Capacity Limited Reinforcement Learning in Minds and Machines
- Combining Learning and Reasoning for Embedding Ethical Properties in AI Group Decision Making
- Communicating Generative Models: Multi-Agent Causal Representation Learning for Coordinated Decision-Making
- Composable Systems
- Control-Based Reinforcement Learning
- Correctors and Selectors: Building An Ecosystem for LLM Alignment
- Data Distillation in Tabular Data: A Foundation Model Approach
- Data Recovery and Subspace Clustering from Quantized and Corrupted Measurements
- Deep Causal Representation Learning Towards Generalizable, Explainable, and Fair AI Systems
- Deep Learning for Trust in Cybersecurity
- Energy Transformer for Foundational Models
- Enhancing Efficiency and Robustness Simultaneously in Processing Deep Neural Networks
- Explainable Transfer Learning
- Exploration of Artificial Intelligence Approaches to Earth Observing Remote Sensing
- Extracting Types from Python Machine Learning Libraries
- Fairness Auditor: Stress-Testing AI Fairness Methodologies Using Synthetic Data
- Fast Inference and Alignment for Large Multi-modal Models
- Fast Learning of Neural Network Models with Provable Generalizability
- FIT: Fast Inference using Transformer models
- Foundational Models for Understanding Tabular Data Through AI Automation
- GATOR: The Goal-oriented Autonomous Dialogue System
- Holistic Alignment of Agentic LLM Systems via Lightweight System-Level Objectives
- Improving Generalization and Abstraction in Deep Reinforcement Learning
- Interpretable Failure Prediction Algorithm for Time Series Data
- Interpretable Foundation Models for General-Purpose Time Series Analysis
- Interpretable Similarity Metric Learning
- Joint Domain Generalization and Algorithm Robustness for Trusted AI
- Key-Value Cache Compression for Memory-Efficient Large Language Model Inference
- Large Language Models as Planning Domain Model Generators
- Large-Scale Foundation Acoustic Modeling for Automatic Speech Recognition
- Latent Representation and Tiered Indexing for Scalable and Efficient Data Product Creation from Large Data Lakes
- Learning and Embedding Ethical Guidelines in Group Decision-Making AI
- Manifold-Structured Latent Space for Deep Generative Modeling
- Meta-Transfer-Learning for Tabular Data Distillation, Generation, and Predictive Modeling
- Multi-Objective Training of Foundation Acoustic Models for Automatic Speech Recognition
- Neural Memories for Text and Knowledge Graphs
- Neural Memories: Distributed Representations and Associative Retrieval
- Novel Diffusion and Flow-based Generative Language Models via Associative Memories
- Provably Efficient Reinforcement Learning via Neuro-Symbolic Representations
- Quickest Failure Prediction Algorithm for High Dimensional Time Series Data
- Resource-Effective Fine-Tuning of Large Language Models
- Rethinking Retrieval Signals via Hybrid Retrieval Heads
- Robustness of Causal Bandits
- SafeR: Automating Safe Reinforcement Learning
- Secure and Robust Cross-Silo Vertical Federated Learning
- Self-Supervision Method for Natural Language Processing and Applications
- Semantic shift as a measure of bias with applications to detection, explanation, and mitigation of misinformation
- Signal Temporal Logic Neural Network (STL-NN): A Neuro-Symbolic Framework for Human-Interpretable Machine Learning
- Smart Contracts Augmented with Learning and Semantics
- Strategic AI: Enhancing Large Language Model Agents with Decision-Making Algorithms for Advanced Reasoning and Planning
- Sufficiently Accurate Model Based Reinforcement Learning
- Systematic Failure Analysis for LLM Agents: Taxonomy, Attribution, and Reflection
- Tentacular AI (TAI)
- Testing LLM Safety via Causal Reasoning
- Theoretical and Algorithmic Foundations of In-Context Learning and Chain of Thought Using Properly Trained Transformer Models
- Time Series Data Agent: Enabling Multipurpose Foundation Models for Multimodal Data
- Towards a General Framework
- Training Neural Networks with Few-Shot Data & Applications to AI Automation
- Unlearning: Dynamics of Membership Privacy and Inference Attacks Against Large Language Models
Deep neural networks (DNNs) have driven significant breakthroughs but also raised concerns due to their increasing computational and energy demands. This research focuses on hardware-software co-design strategies to improve AI efficiency across platforms like data centers, edge, and embedded devices.
- Algorithmic Innovations and Architectural Support towards In-Memory Training on Analog AI Accelerators
- AutoComp: Automated Compression & Deployment for Foundation Models
- Bringing AI Intelligence to 5G/6G Edge Platform
- Closing the Accuracy Gap in Analog In-memory Training: Device-dependent Algorithms and Hyperparameter Search
- Co-Designing Analog AI System and Accelerator for Large Foundation Models
- Efficient Chiplet-based Memory Architecture for AI Hardware Accelerator
- Efficient Deployment of Large Language Models over Heterogeneous Computing Systems
- Efficient Hardware Acceleration of CoFrNets
- Enabling Efficient Inference and High Accuracy by Exploring Novel Linear-type Attention and KV Cache Optimization
- Exploring Analog-Aware Learning and Architectures with Hardware Support for Next-Generation Foundation Models
- Hardware–Software Co-Design for Unified Pruning and Mixed-Precision Compression of Vision–Language Model
- Hardware–Software Co-Design of Efficient Spatiotemporal Transformers and Mixture-of-Experts on IBM Hardware
- Holistic Algorithm-Architecture Co-Design of Approximate Computing for Scalable Foundation Models
- Integrated Sensing and Communication with AI-RAN Platform
- KV-Cache Management for Improving Run-Time Efficiency of Large Reasoning Models
- Low-precision Distributed Accelerated Methods and Library Development for Training and Fine-tuning Foundation Models
- Low-precision Second-order-type Distributed Methods for Training and Fine-tuning Foundation Models
- Model Optimization and Hardware-aware Neural Architecture Search for Spatiotemporal Data Mining
- Optimization of Hardware-based Neural Network Accelerators for Fluorescence Lifetime in Biomedical Applications
- Structured & Robust Neural Network Pruning on Low-Precision Hardware for Guaranteed Learning Performance for Complex Time-Series Datasets
The Semiconductor Technology Track focuses on two key areas essential for the continued scaling of semiconductor systems: BEOL interconnect technologies and chiplet technologies. Research in this area includes both experimental work on material development and theoretical studies using AI/ML and simulations to improve interconnect performance.
- Anisotropic Thermal Resistance Characterization of Si, BEOL, and Underfill Layers Using 3ω Joule Heating Thermometry and Exploratory Non-Destructive Scanning Thermal Microscopy / Multiscale Thermal Modeling of 3D ICs
- Computation-Guided Discovery of High-Thermal Conductivity, Low-k Dielectrics for Advanced Node Technologies
- Control of orientation and handedness of nanoscale topological and directional interconnect conductors on amorphous SiO2
- Control of orientation and handedness of nanoscale topological interconnect conductors on amorphous SiO2
- Control of orientation and handedness of nanoscale Weyl interconnect metals on amorphous SiO2
- Discovering topological materials for BEOL interconnects using first-principles calculations and machine learning
- E-beam glancing angle scattering for hybrid bonding surface planarity measurement
- Glancing electron diffraction study of Cu pads surface texture for hybrid bonding
- High-Resolution X-ray Imaging for 3D-HI Chiplet Non-Destructive Internal Metal Joints Inspection
- Intermetallic compounds for high-conductivity interconnects
- Meta-Learned Digital Twins for Circuit Design in Chiplet Integration
- Molecular engineering of metal/low-k interfaces for Cu interconnects
- Molecular nanoengineering of post-Cu-metal/dielectric interfaces for nanodevice wiring
- Molecularly engineered liner/low-k interfaces and linerless Cu interconnects
- New materials for high-conductivity interconnects
- Validation of multiscale 3D chiplet stack thermal model with temperature-dependent materials through 3ω measurements
- Validation of size-dependent and interface resistance models with 3ω measurements
Quantum computing is rapidly advancing as a powerful tool for high-performance digital computing applications. The focus will be on accelerating knowledge transfer between IBM and Rensselaer, advancing quantum computing as a research tool, and supporting workforce development and education.
- A Hybrid Quantum-Classical Approach to Solve Semidefinite Programming Problems
- Benchmarking Quantum Computational Methods for Thermochemical Processes
- Educating the Quantum Future: Filling the Pipeline from Middle School to the Workforce
- Enhancing Gate Fidelity and Speed Through AI-Driven Optimization for Hubbard Model Computation
- Harnessing Quantum Hysteresis in Spin Materials
- HPC-assisted Hybrid Classical Quantum System for Drug Discovery and Development
- Hybrid Quantum-Classical Workflows for Modeling Multi-Body Interactions in Disordered Materials
- Integration and Scheduling of High Performance Computing (HPC) and Quantum Computing Workflows
- Materials Quantum Intelligence: Creating an HPC-Assisted Quantum Machine Learning Framework for Materials Design
- Quantum Approximate Optimization for Secure and Resilient Supply Chains
- QRMI-DA: Robust Direct Access Middleware and Hybrid Workflows for Quantum-Centric Supercomputing
- Quantum Computing Exploration to Advance Supply Chain
- Quantum Computing for Predicting Lithium-Ion Mobility
- Quantum optimization algorithm for combinatorial composite material design
- Quantum-Enhanced Bayesian Inference for Inverse Problems and Digital Twins in Complex Dynamical Systems
- Quantum–Classical Optimization and Edge–Quantum Collaboration for Next-Generation Transportation Systems
- Reliable Quantum Simulation of Fluid Dynamics on Quantum Hardware
- Stochastic hydrodynamics for designing non-equilibrium many-body systems
- Topology-Aware Circuits and Structure-Aware Optimizers for Phase Transitions
- Utility-scale Spin-Boson QCSC Simulation