Kaustubh Shivdikar

Boston, MA

Hi, I am Kaustubh, a Ph.D. candidate in computer engineering in the NUCAR lab at Northeastern University, advised by David Kaeli. My research focuses on designing hardware accelerators for sparse graph workloads.

My expertise lies in:

  • Computer Architecture Simulator Design
  • Graph Neural Network Accelerators
  • Sparse Matrix Accelerators
  • Homomorphic Encryption Accelerators
  • GPU Kernel Design

Contact:

  • shivdikar.k [at] northeastern [dot] edu
  • mail [at] kaustubh [dot] us

[ResearchGate] [Google Scholar]


[Resume]


Education

  • PhD - Computer Engineering, Northeastern University [May 2024]
  • MS - Electrical and Computer Engineering, Northeastern University [May 2021]
  • BS - Electrical Engineering, Veermata Jijabai Technological Institute [May 2016]

Work


Recent News

  • Sept 2023: Served on the HPCA 2024 Program Committee as a reviewer
  • Feb 2023: Served on the HPCA 2023 Best Paper Committee
  • June 2022: Mentored Lina Adkins for the GNN Acceleration project in the REU-Pathways program
  • May 2022: Served as Submission Chair for the HPCA 2023 conference
  • Jan 2020: Taught the GPU Programming course at NEU
  • April 2019: Graduate Innovator Award at the RISE 2019 Research Expo for our poster Pi-Tiles
  • Nov 2018: Mentored the NEU team for the Student Cluster Contest at Super Computing Conference 2018
  • April 2018: Best Poster Award at the RISE 2018 Research Expo for our poster The Prime Hexagon
  • Nov 2017: Joined the NEU team for the Student Cluster Contest at Super Computing Conference 2017


Publications

MaxK-GNN: Towards Theoretical Speed Limits for Accelerating Graph Neural Networks Training


(ASPLOS 2024) [PDF] [RG]

Abstract
In the acceleration of deep neural network training, the graphics processing unit (GPU) has become the mainstream platform. GPUs face substantial challenges on Graph Neural Networks (GNNs), such as workload imbalance and memory access irregularities, leading to underutilized hardware. Existing solutions such as the PyG, DGL (with cuSPARSE), and GNNAdvisor frameworks partially address these challenges. However, the memory traffic involved in Sparse-Dense Matrix-Matrix Multiplication (SpMM) is still significant. We argue that drastic performance improvements can only be achieved by vertically optimizing across algorithm and system innovations, rather than treating speedup optimization as an "afterthought" (i.e., (i) given a GNN algorithm, designing an accelerator, or (ii) given hardware, mainly optimizing the GNN algorithm).

In this paper, we present MaxK-GNN, an advanced high-performance GPU training system integrating algorithm and system innovations. (i) We introduce the MaxK nonlinearity, provide a theoretical analysis of MaxK nonlinearity as a universal approximator, and present the Compressed Balanced Sparse Row (CBSR) format, designed to store the data and indices of the feature matrix after the nonlinearity; (ii) we design a coalescing-enhanced forward computation with a row-wise product-based Sparse Matrix-Matrix Multiplication (SpGEMM) kernel, using CBSR for input feature-matrix fetching and strategically placing a sparse output accumulation buffer in shared memory; (iii) we develop an optimized backward computation with an outer product-based Sampled Sparse Matrix-Dense Matrix Multiplication (SSpMM) kernel.

We conduct extensive evaluations of MaxK-GNN and report the end-to-end system run-time. Experiments show that our MaxK-GNN system approaches the theoretical speedup limit given by Amdahl's law. We achieve accuracy comparable to existing GNNs at significantly higher speed: 3.22x/4.24x speedup (vs. theoretical limits of 5.52x/7.27x) on Reddit compared to the DGL and GNNAdvisor implementations. Our implementation can be found on GitHub.
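
Intuitively, the MaxK nonlinearity keeps only the k largest entries of each feature-vector row and zeroes out the rest, which is what makes the feature matrix sparse enough for the CBSR format to pay off. A minimal PyTorch sketch of that row-wise top-k operation (an illustration of the idea only, not the paper's fused CUDA kernel; the function name maxk and the shapes below are assumptions):

    import torch

    def maxk(x: torch.Tensor, k: int) -> torch.Tensor:
        # Keep the k largest values in each row; zero out everything else.
        vals, idx = torch.topk(x, k, dim=-1)
        out = torch.zeros_like(x)
        out.scatter_(-1, idx, vals)  # each row now has exactly k nonzeros
        return out

    x = torch.randn(4, 16)           # e.g., 4 nodes with 16 features each
    y = maxk(x, k=4)                 # 75% of every row is zeroed out
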
[Figure: Training dataflow of a single MaxK-based GNN layer. In the backward computation, the transposed CSC format is equivalent to the original CSR format.]

Authors: Hongwu Peng, Xi Xie, Kaustubh Shivdikar, MD Amit Hasan, Shaoyi Huang, Omer Khan, Caiwen Ding, David Kaeli


GME: GPU-based Microarchitectural Extensions to Accelerate Homomorphic Encryption


(IEEE/ACM MICRO 2023) [PDF] [RG] [Slides]

Abstract
Fully Homomorphic Encryption (FHE) enables the processing of encrypted data without decrypting it. FHE has garnered significant attention over the past decade as it supports secure outsourcing of data processing to remote cloud services. Despite its promise of strong data privacy and security guarantees, FHE introduces a slowdown of up to five orders of magnitude as compared to the same computation using plaintext data. This overhead is presently a major barrier to the commercial adoption of FHE.

In this work, we leverage GPUs to accelerate FHE, capitalizing on a well-established GPU ecosystem available in the cloud. We propose GME, which combines three key microarchitectural extensions along with a compile-time optimization to the current AMD CDNA GPU architecture. First, GME integrates a lightweight on-chip compute unit (CU)-side hierarchical interconnect to retain ciphertext in cache across FHE kernels, thus eliminating redundant memory transactions. Second, to tackle compute bottlenecks, GME introduces special MOD-units that provide native custom hardware support for modular reduction operations, one of the most commonly executed sets of operations in FHE. Third, by integrating the MOD-unit with our novel pipelined 64-bit integer arithmetic cores (WMAC-units), GME further accelerates FHE workloads by 19%. Finally, we propose a Locality-Aware Block Scheduler (LABS) that exploits the temporal locality available in FHE primitive blocks. Incorporating these microarchitectural features and compiler optimizations, we create a synergistic approach achieving average speedups of 796x, 14.2x, and 2.3x over Intel Xeon CPU, NVIDIA V100 GPU, and Xilinx FPGA implementations, respectively.

Authors: Kaustubh Shivdikar, Yuhui Bao, Rashmi Agrawal, Michael Shen, Gilbert Jonatan, Evelio Mora, Alexander Ingare, Neal Livesay, José L. Abellán, John Kim, Ajay Joshi, David Kaeli



Accelerating Finite Field Arithmetic for Homomorphic Encryption on GPUs


(IEEE MICRO 2023) [PDF] [RG]



Accelerating Polynomial Multiplication for Homomorphic Encryption on GPUs


(SEED 2022) [PDF] [Slides] [RG]

Abstract
Homomorphic Encryption (HE) enables users to securely outsource both the storage and computation of sensitive data to untrusted servers. Not only does FHE offer an attractive solution for security in cloud systems, but lattice-based FHE systems are also believed to be resistant to attacks by quantum computers. However, current FHE implementations suffer from prohibitively high latency. For lattice-based FHE to become viable for real-world systems, it is necessary for the key bottlenecks, particularly polynomial multiplication, to be highly efficient.

In this paper, we present a characterization of GPU-based implementations of polynomial multiplication. We begin with a survey of modular reduction techniques and analyze several variants of the widely-used Barrett modular reduction algorithm. We then propose a modular reduction variant optimized for 64-bit integer words on the GPU, obtaining a 1.8x speedup over the existing comparable implementations.
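
For context, the classic Barrett reduction that these variants build on replaces the division in x mod q with a multiplication by a precomputed reciprocal and a shift, which maps well onto GPU integer units. A minimal Python sketch of the textbook algorithm (not the paper's GPU-optimized variant; the example modulus below is an assumption):

    def barrett_reduce(x, q, n, mu):
        # mu = floor(2^(2n) / q), precomputed once per modulus; valid for 0 <= x < q*q.
        t = (x * mu) >> (2 * n)   # cheap estimate of floor(x / q), off by at most 2
        r = x - t * q             # r is congruent to x mod q, and r < 3q
        while r >= q:             # so at most two correction subtractions
            r -= q
        return r

    q = (1 << 61) - 1                     # example 61-bit Mersenne-prime modulus
    n = q.bit_length()
    mu = (1 << (2 * n)) // q
    x = (q - 5) * (q - 7)
    assert barrett_reduce(x, q, n, mu) == x % q
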


Next, we explore the following GPU-specific improvements for polynomial multiplication targeted at optimizing latency and throughput: 1) We present a 2D mixed-radix, multi-block implementation of NTT that results in a 1.85x average speedup over the previous state-of-the-art. 2) We explore shared memory optimizations aimed at reducing redundant memory accesses, further improving speedups by 1.2x. 3) Finally, we fuse the Hadamard product with neighboring stages of the NTT, reducing the twiddle factor memory footprint by 50%. By combining our NTT optimizations, we achieve an overall speedup of 123.13x and 2.37x over the previous state-of-the-art CPU and GPU implementations of NTT kernels, respectively.
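
To make the pipeline concrete: NTT-based polynomial multiplication consists of two forward NTTs, a pointwise (Hadamard) product, and one inverse NTT. Below is a compact textbook radix-2 Python sketch over the NTT-friendly prime 998244353 (a single-threaded illustration, unlike the paper's 2D mixed-radix, multi-block GPU kernels):

    P = 998244353                    # NTT-friendly prime: P - 1 = 119 * 2^23

    def ntt(a, invert=False):
        # Iterative radix-2 Cooley-Tukey NTT over Z_P; len(a) must be a power of two.
        n = len(a)
        j = 0
        for i in range(1, n):        # bit-reversal permutation
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                a[i], a[j] = a[j], a[i]
        length = 2
        while length <= n:
            w_len = pow(3, (P - 1) // length, P)   # 3 is a primitive root mod P
            if invert:
                w_len = pow(w_len, P - 2, P)
            for start in range(0, n, length):
                w = 1
                for k in range(start, start + length // 2):
                    u, v = a[k], a[k + length // 2] * w % P
                    a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                    w = w * w_len % P
            length <<= 1
        if invert:
            n_inv = pow(n, P - 2, P)
            for i in range(n):
                a[i] = a[i] * n_inv % P
        return a

    def poly_mul(f, g):
        n = 1
        while n < len(f) + len(g) - 1:
            n <<= 1
        fa, ga = f + [0] * (n - len(f)), g + [0] * (n - len(g))
        ntt(fa), ntt(ga)
        h = [x * y % P for x, y in zip(fa, ga)]    # the Hadamard product step
        return ntt(h, invert=True)[:len(f) + len(g) - 1]

    assert poly_mul([1, 2], [3, 4]) == [3, 10, 8]  # (1+2x)(3+4x) = 3+10x+8x^2
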

[Figure: FHE protects against network insecurities in untrusted cloud services, enabling users to securely offload sensitive data.]

Authors: Kaustubh Shivdikar, Gilbert Jonatan, Evelio Mora, Neal Livesay, Rashmi Agrawal, Ajay Joshi, José L. Abellán, John Kim, David Kaeli



JAXED: Reverse Engineering DNN Architectures Leveraging JIT GEMM Libraries


(SEED 2021) [PDF] [Slides] [Poster] [RG]

Abstract
General matrix multiplication (GEMM) libraries on x86 architectures have recently adopted Just-in-Time (JIT) based optimizations to dramatically reduce the execution time of small and medium-sized matrix multiplication. The exploitation of the latest CPU architectural extensions, such as AVX2 and AVX-512, is the target of these optimizations. Although JIT compilers can provide impressive speedups to GEMM libraries, they expose a new attack surface through the built-in JIT code caches. These software-based caches allow an adversary to extract sensitive information through carefully designed timing attacks. The attack surface of such libraries has become more prominent due to their widespread integration into popular Machine Learning (ML) frameworks such as PyTorch and TensorFlow.


In our paper, we present a novel attack strategy for JIT-compiled GEMM libraries called JAXED. We demonstrate how an adversary can exploit the GEMM library's vulnerable state management to extract confidential CNN model hyperparameters. We show that using JAXED, one can successfully extract the hyperparameters of models with fully-connected layers with an average accuracy of 92%. Further, we demonstrate our attack against the final fully connected layer of 10 popular DNN models. Finally, we perform an end-to-end attack on MobileNetV2, on both the convolution and FC layers, successfully extracting model hyperparameters.
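
The underlying observation is that a JIT GEMM library compiles a kernel the first time it sees a given matrix shape and caches the generated code, so a fast call reveals that some earlier computation already used that shape. A toy Python model of this code-cache state leak (purely illustrative; the names jit_gemm and probe, the sleep-based cost model, and the timing threshold are assumptions, not any real library's API):

    import time

    jit_cache = {}                       # stand-in for the library's JIT code cache

    def jit_gemm(shape):
        # First call for a shape pays a "compilation" cost; later calls hit the cache.
        if shape not in jit_cache:
            time.sleep(0.01)             # model JIT code-generation latency
            jit_cache[shape] = True

    def probe(shape):
        # Attacker times a call: a fast return means the victim already used `shape`.
        t0 = time.perf_counter()
        jit_gemm(shape)
        return time.perf_counter() - t0 < 0.005

    jit_gemm((64, 128))                  # victim runs a layer of hidden size 128
    print(probe((64, 128)))              # True  -> shape was cached: hyperparameter leaked
    print(probe((64, 256)))              # False -> shape not seen before
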

[Figure: Attack surface. After the victim's execution, information about the victim's model hyperparameters remains in the JIT code cache; the attacker probes this cache through its own ML model and observes timing information to determine the victim's hyperparameters.]

Authors: Malith Jayaweera, Kaustubh Shivdikar, Yanzhi Wang, David Kaeli



GNNMark: A benchmark suite to characterize graph neural network training on GPUs


(ISPASS 2021) [PDF] [RG]

Abstract
Graph Neural Networks (GNNs) have emerged as a promising class of Machine Learning algorithms to train on non-Euclidean data. GNNs are widely used in recommender systems, drug discovery, text understanding, and traffic forecasting. Due to their energy efficiency and high-performance capabilities, GPUs are a natural choice for accelerating the training of GNNs. Thus, we want to better understand the architectural and system-level implications of training GNNs on GPUs. Presently, there is no benchmark suite available designed to study GNN training workloads.


In this work, we address this need by presenting GNNMark, a feature-rich benchmark suite that covers the diversity present in GNN training workloads, datasets, and GNN frameworks. Our benchmark suite consists of GNN workloads that utilize a variety of different graph-based data structures, including the homogeneous, dynamic, and heterogeneous graphs commonly used in the application domains mentioned above. We use this benchmark suite to explore and characterize GNN training behavior on GPUs. We study a variety of aspects of GNN execution, including both compute and memory behavior, highlighting major bottlenecks observed during GNN training. At the system level, we study various aspects, including the scalability of training GNNs across a multi-GPU system, as well as the sparsity of the data encountered during training. The insights derived from our work can be leveraged by both hardware and software developers to improve the performance of GNN training on GPUs.

[Figure: Graph Neural Network analysis]

Authors: Trinayan Baruah, Kaustubh Shivdikar, Shi Dong, Yifan Sun, Saiful A Mojumder, Kihoon Jung, José L. Abellán, Yash Ukidave, Ajay Joshi, John Kim, David Kaeli



SMASH: Sparse Matrix Atomic Scratchpad Hashing


(MS Thesis, 2021) [PDF] [RG]

Abstract
Sparse matrices, more specifically Sparse Matrix-Matrix Multiply (SpGEMM) kernels, are commonly found in a wide range of applications, spanning from graph-based path-finding to machine learning algorithms (e.g., neural networks). A particular challenge in implementing SpGEMM kernels has been the pressure placed on DRAM memory. One approach to tackle this problem is to use an inner-product method for the SpGEMM kernel implementation. While the inner product produces fewer intermediate results, it can end up saturating the memory bandwidth, given the high number of redundant fetches of the input matrix elements. Using an outer product-based SpGEMM kernel can reduce redundant fetches, but at the cost of increased overhead due to extra computation and memory accesses for producing/managing partial products.


In this thesis, we introduce a novel SpGEMM kernel implementation based on the row-wise product approach. We leverage atomic instructions to merge intermediate partial products as they are generated. The use of atomic instructions eliminates the need to create partial product matrices, thus eliminating redundant DRAM fetches.
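
In essence, the row-wise (Gustavson) formulation builds each output row as a sparse combination of rows of B, merging partial products into a small per-row accumulator as they are produced; in SMASH that accumulator is an on-chip scratchpad hash table updated with atomic instructions. A minimal single-threaded Python sketch of the dataflow over zero-based CSR inputs (illustrative only; a dict stands in for the scratchpad hash table):

    def spgemm_rowwise(A_indptr, A_indices, A_data, B_indptr, B_indices, B_data):
        # Row-wise product: C[i,:] = sum over k of A[i,k] * B[k,:].
        C = []
        for i in range(len(A_indptr) - 1):
            acc = {}                                 # hash-based accumulator for row i
            for a in range(A_indptr[i], A_indptr[i + 1]):
                k, a_val = A_indices[a], A_data[a]
                for b in range(B_indptr[k], B_indptr[k + 1]):
                    j = B_indices[b]
                    acc[j] = acc.get(j, 0.0) + a_val * B_data[b]   # merge in place
            C.append(sorted(acc.items()))            # row i as (column, value) pairs
        return C

    # 2x2 example: A = [[1, 2], [0, 3]], B = [[4, 0], [5, 6]], so A @ B = [[14, 12], [15, 18]]
    print(spgemm_rowwise([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0],
                         [0, 1, 3], [0, 0, 1], [4.0, 5.0, 6.0]))
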

To evaluate our row-wise product approach, we map an optimized SpGEMM kernel to a custom accelerator designed to accelerate graph-based applications. The targeted accelerator is an experimental system named PIUMA, being developed by Intel. PIUMA provides several attractive features, including fast context switching, user-configurable caches, globally addressable memory, non-coherent caches, and asynchronous pipelines. We tailor our SpGEMM kernel to exploit many of the features of the PIUMA fabric.

This thesis compares our SpGEMM implementation against prior solutions, all mapped to the PIUMA framework. We briefly describe some of the PIUMA architecture features and then delve into the details of our optimized SpGEMM kernel. Our SpGEMM kernel can achieve 9.4x speedup as compared to competing approaches.

[Figure: The SMASH algorithm]



Student cluster competition 2018, team northeastern university: Reproducing performance of a multi-physics simulations of the Tsunamigenic 2004 Sumatra Megathrust earthquake on the AMD EPYC 7551 architecture


(SC 2018) [PDF] [RG]

Abstract
Abstract
This paper evaluates the reproducibility of a Supercomputing 17 paper titled Extreme Scale Multi-Physics Simulations of the Tsunamigenic 2004 Sumatra Megathrust Earthquake. We evaluate reproducibility on a significantly smaller computer system than used in the original work. We found that we were able to demonstrate reproducibility of the multi-physics simulations on a single-node system, as well as confirm multi-node scaling. However, reproducibility of the visual and geophysical simulation results was inconclusive due to issues related to input parameters provided to our model. The SC 17 paper provided results for both CPU-based simulations as well as Xeon Phi-based simulations. Since our cluster uses NVIDIA V100s for acceleration, we are only able to assess the CPU-based results in terms of reproducibility.
[Figure: Horizontal seafloor displacement simulation]

Authors: Chris Bunn, Harrison Barclay, Anthony Lazarev, Toyin Yusuf, Jason Fitch, Jason Booth, Kaustubh Shivdikar, David Kaeli



Speeding up DNNs using HPL based Fine-grained Tiling for Distributed Multi-GPU Training


(BARC 2018) [PDF] [RG]



Video steganography using encrypted payload for satellite communication


(Aerospace Conference 2017) [PDF] [RG]



Missing 'Middle Scenarios': Uncovering Nuanced Conditions in Latin America's Housing Crisis


(Cityscape 2017) [PDF] [RG]


Dynamic power allocation using Stackelberg game in a wireless sensor network


(Aerospace Conference 2016) [PDF] [RG]



Automatic image annotation using a hybrid engine


(Indicon 2015) [PDF] [RG]


Posters

  • FHE [PDF]
  • JAXED [PDF]
  • Pi-Tiles (Graduate Innovator Award) [PDF]
  • The Prime Hexagon (Best Poster Award) [PDF]


What is KTB Wiki?

KTB Wiki, because the best way to store your knowledge is in an indexed SQL database.

This website was built on KTB Wiki. KTB Wiki is my side project, an attempt to consolidate the knowledge gained during my Ph.D. journey. Though many other platforms provide a similar service, the process of creating KTB Wiki was a learning experience, as it taught me concepts of indexing, load balancing, and in-memory file systems. KTB Wiki was built using MediaWiki and is intended for research purposes only.


Interesting Reads