Kaustubh Shivdikar
Revision as of 12:57, 20 August 2022
I am a Ph.D. candidate in the NUCAR lab at Northeastern University, working under the guidance of David Kaeli. My research focuses on designing hardware accelerators for sparse graph workloads.
My expertise lies in:
- Computer Architecture Simulator Design
- Graph Neural Network Accelerators
- Sparse Matrix Accelerators
- Homomorphic Encryption Accelerators
- GPU Kernel Design
Contact: shivdikar.k [at] northeastern [dot] edu, mail [at] kaustubh [dot] us
Education
- PhD - Computer Engineering, Northeastern University [Expected Fall 2023]
- MS - Electrical and Computer Engineering, Northeastern University [May 2021]
- BS - Electrical Engineering, Veermata Jijabai Technological Institute [May 2016]
Work
- Summer-Fall 2020 Co-op: Parallel Computing Lab @ Intel Labs with Fabrizio Petrini.
- Summer-Fall 2019 Co-op: Parallel Computing Lab @ Intel Labs with Fabrizio Petrini.
- Summer-Fall 2018 Co-op: Mobile Robotics @ Omron Adept with George Paul.
Recent News
- June 2022: Mentored Lina Adkins on the GNN Acceleration project in the REU-Pathways program
- May 2022: Served as Submission Chair for the HPCA 2023 conference
- Jan 2020: Taught the GPU Programming course at NEU
- April 2019: Received the Graduate Innovator Award at the RISE 2019 Research Expo for our poster Pi-Tiles
- Nov 2018: Mentored the NEU team for the Student Cluster Competition at the Supercomputing Conference 2018
- April 2018: Received the Best Poster Award at the RISE 2018 Research Expo for our poster The Prime Hexagon
- Nov 2017: Joined the NEU team for the Student Cluster Competition at the Supercomputing Conference 2017
Publications
Accelerating Polynomial Multiplication for Homomorphic Encryption on GPUs
(SEED 2022) [PDF]
Abstract
Fully Homomorphic Encryption (FHE) enables users to securely outsource both the storage and computation of sensitive data to untrusted servers. Not only does FHE offer an attractive solution for security in cloud systems, but lattice-based FHE systems are also believed to be resistant to attacks by quantum computers. However, current FHE implementations suffer from prohibitively high latency. For lattice-based FHE to become viable for real-world systems, it is necessary for the key bottlenecks---particularly polynomial multiplication---to be highly efficient.
In this paper, we present a characterization of GPU-based implementations of polynomial multiplication. We begin with a survey of modular reduction techniques and analyze several variants of the widely-used Barrett modular reduction algorithm. We then propose a modular reduction variant optimized for 64-bit integer words on the GPU, obtaining a 1.8x speedup over the existing comparable implementations.
Figure: FHE protects against network insecurities in untrusted cloud services, enabling users to securely offload sensitive data.
Authors: Kaustubh Shivdikar, Gilbert Jonatan, Evelio Mora, Neal Livesay, Rashmi Agrawal, Ajay Joshi, José L. Abellán, John Kim, David Kaeli
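The abstract above singles out Barrett modular reduction as the key primitive behind fast polynomial multiplication. As a hedged illustration only, here is a minimal single-word Barrett reduction sketch in C++ for a 62-bit modulus (it relies on the GCC/Clang unsigned __int128 extension); the modulus, word size, and structure are illustrative and do not reproduce the paper's GPU-optimized variant.

```cpp
#include <cstdint>
#include <cstdio>

using u64  = std::uint64_t;
using u128 = unsigned __int128;  // GCC/Clang extension

// Precomputed Barrett constant mu = floor(2^124 / q) for a 62-bit modulus q
// (k = 62, so 2k = 124; mu fits in 64 bits because q >= 2^61).
struct BarrettParams {
    u64 q;   // modulus, assumed to satisfy 2^61 <= q < 2^62 (illustrative choice)
    u64 mu;  // floor(2^124 / q)
};

BarrettParams barrett_precompute(u64 q) {
    u128 numerator = (u128)1 << 124;
    return {q, (u64)(numerator / q)};
}

// Reduce a product x < q^2 (e.g., of two coefficients < q) modulo q:
// estimate the quotient with shifts and one wide multiply, then correct.
u64 barrett_reduce(u128 x, const BarrettParams& p) {
    u64 x_hi  = (u64)(x >> 61);                    // floor(x / 2^(k-1))
    u64 q_hat = (u64)(((u128)x_hi * p.mu) >> 63);  // approximate quotient, never too large
    u64 r     = (u64)(x - (u128)q_hat * p.q);      // r < 3q, fits in 64 bits
    while (r >= p.q) r -= p.q;                     // at most two corrections
    return r;
}

int main() {
    // Illustrative 62-bit modulus (not a parameter from the paper).
    BarrettParams p = barrett_precompute((1ULL << 61) + 0x1234567ULL);
    u64 a = 0x0123456789ABCDEFULL % p.q;
    u64 b = 0x0FEDCBA987654321ULL % p.q;
    u64 r = barrett_reduce((u128)a * b, p);
    std::printf("(a*b) mod q = %llu\n", (unsigned long long)r);
    return 0;
}
```

The two trailing conditional subtractions are the standard correction step: the shifted-multiply estimate of the quotient can undershoot the true quotient by at most two.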
JAXED: Reverse Engineering DNN Architectures Leveraging JIT GEMM Libraries
(SEED 2021) [PDF]
Abstract
General matrix multiplication (GEMM) libraries on x86 architectures have recently adopted Just-in-time (JIT) based optimizations to dramatically reduce the execution time of small and medium-sized matrix multiplication. The exploitation of the latest CPU architectural extensions, such as the AVX2 and AVX-512 extensions, is the target for these optimizations. Although JIT compilers can provide impressive speedups to GEMM libraries, they expose a new attack surface through the built-in JIT code caches. These software-based caches allow an adversary to extract sensitive information through carefully designed timing attacks. The attack surface of such libraries has become more prominent due to their widespread integration into popular Machine Learning (ML) frameworks such as PyTorch and TensorFlow.
Figure: Attack surface. After the victim’s execution, the victim leaves behind information about its model hyperparameters in the JIT code cache. The attacker probes this JIT code cache through the attacker’s ML model and observes timing information to determine the victim’s model hyperparameters.
Authors: Malith Jayaweera, Kaustubh Shivdikar, Yanzhi Wang, David Kaeli
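To make the timing side channel described above concrete, the following self-contained C++ toy simulates a JIT code cache keyed by GEMM shape: a cache miss pays a fake one-time compilation cost, so an attacker probing candidate layer widths can tell which shape the victim already used. The cache, shapes, and latency threshold are all made up for illustration; this is neither the JAXED attack code nor any real GEMM library's API.

```cpp
#include <chrono>
#include <cstdio>
#include <initializer_list>
#include <string>
#include <thread>
#include <unordered_set>

// Toy model of a process-shared JIT code cache keyed by GEMM shape (m, n, k).
// In a real library the "compile" step is x86 code generation; here it is
// simulated with a sleep so the hit/miss timing gap is easy to observe.
static std::unordered_set<std::string> g_jit_cache;

static std::string shape_key(int m, int n, int k) {
    return std::to_string(m) + "x" + std::to_string(n) + "x" + std::to_string(k);
}

// Simulated JIT-backed GEMM entry point: a cache miss pays a one-time
// "compilation" cost, a cache hit does not.
static void jit_gemm(int m, int n, int k) {
    std::string key = shape_key(m, n, k);
    if (g_jit_cache.find(key) == g_jit_cache.end()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));  // fake codegen
        g_jit_cache.insert(key);
    }
    // (the actual multiply is omitted; it is irrelevant to the cache-timing signal)
}

static double time_call_ms(int m, int n, int k) {
    auto t0 = std::chrono::steady_clock::now();
    jit_gemm(m, n, k);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    // Victim: runs a model whose hidden-layer width (here 512) fixes a GEMM shape.
    jit_gemm(64, 512, 784);

    // Attacker: probes candidate widths and infers the victim's width from which
    // probe returns quickly (its kernel was already in the JIT cache).
    const double threshold_ms = 1.0;  // hit/miss classification threshold
    for (int width : {128, 256, 512, 1024}) {
        double ms = time_call_ms(64, width, 784);
        std::printf("width %4d: %.2f ms -> %s\n", width, ms,
                    ms < threshold_ms ? "cached (likely used by victim)" : "not cached");
    }
    return 0;
}
```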
GNNMark: A benchmark suite to characterize graph neural network training on GPUs
(ISPASS 2021) [PDF]
Abstract
Graph Neural Networks (GNNs) have emerged as a promising class of Machine Learning algorithms to train on non-Euclidean data. GNNs are widely used in recommender systems, drug discovery, text understanding, and traffic forecasting. Due to the energy efficiency and high-performance capabilities of GPUs, GPUs are a natural choice for accelerating the training of GNNs. Thus, we want to better understand the architectural and system-level implications of training GNNs on GPUs. Presently, there is no benchmark suite available designed to study GNN training workloads.

In this work, we address this need by presenting GNNMark, a feature-rich benchmark suite that covers the diversity present in GNN training workloads, datasets, and GNN frameworks. Our benchmark suite consists of GNN workloads that utilize a variety of different graph-based data structures, including homogeneous graphs, dynamic graphs, and heterogeneous graphs commonly used in a number of application domains that we mentioned above. We use this benchmark suite to explore and characterize GNN training behavior on GPUs. We study a variety of aspects of GNN execution, including both compute and memory behavior, highlighting major bottlenecks observed during GNN training. At the system level, we study various aspects, including the scalability of training GNNs across a multi-GPU system, as well as the sparsity of data encountered during training. The insights derived from our work can be leveraged by both hardware and software developers to improve both the hardware and software performance of GNN training on GPUs.
Figure: Graph Neural Network analysis
Authors: Trinayan Baruah, Kaustubh Shivdikar, Shi Dong, Yifan Sun, Saiful A Mojumder, Kihoon Jung, José L. Abellán, Yash Ukidave, Ajay Joshi, John Kim, David Kaeli
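As a rough illustration of the kernel mix that makes GNN training interesting to characterize, the sketch below runs one GCN-style layer on the CPU: a sparse neighbor aggregation over a CSR adjacency matrix followed by a dense feature transform and ReLU. The tiny graph, feature sizes, and layer form are illustrative and are not taken from GNNMark.

```cpp
#include <cstdio>
#include <vector>

// CSR adjacency matrix of an unweighted graph (row i lists node i's neighbors).
struct Graph {
    int n;
    std::vector<int> row_ptr, col;
};

// One GCN-style layer: H_out = ReLU(A * H_in * W).
// The A * H_in step is a sparse-dense multiply (SpMM); H * W is a dense GEMM.
std::vector<float> gcn_layer(const Graph& g,
                             const std::vector<float>& h, int f_in,
                             const std::vector<float>& w, int f_out) {
    // Sparse aggregation: agg[i] = sum over neighbors j of h[j].
    std::vector<float> agg(g.n * f_in, 0.0f);
    for (int i = 0; i < g.n; ++i)
        for (int e = g.row_ptr[i]; e < g.row_ptr[i + 1]; ++e)
            for (int f = 0; f < f_in; ++f)
                agg[i * f_in + f] += h[g.col[e] * f_in + f];

    // Dense transform + ReLU: out[i] = max(0, agg[i] * W).
    std::vector<float> out(g.n * f_out, 0.0f);
    for (int i = 0; i < g.n; ++i)
        for (int o = 0; o < f_out; ++o) {
            float acc = 0.0f;
            for (int f = 0; f < f_in; ++f)
                acc += agg[i * f_in + f] * w[f * f_out + o];
            out[i * f_out + o] = acc > 0.0f ? acc : 0.0f;
        }
    return out;
}

int main() {
    // Triangle graph over nodes 0-1-2; 2 input features, 2 output features.
    Graph g{3, {0, 2, 4, 6}, {1, 2, 0, 2, 0, 1}};
    std::vector<float> h = {1, 0,  0, 1,  1, 1};   // 3 nodes x 2 features
    std::vector<float> w = {1, -1,  0.5f, 0.5f};   // 2 x 2 weight matrix
    std::vector<float> out = gcn_layer(g, h, 2, w, 2);
    for (int i = 0; i < 3; ++i)
        std::printf("node %d: [%.2f, %.2f]\n", i, out[i * 2], out[i * 2 + 1]);
    return 0;
}
```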
SMASH: Sparse Matrix Atomic Scratchpad Hashing
(MS Thesis, 2021) [PDF]
Abstract
This thesis compares our SpGEMM implementation against prior solutions, all mapped to the PIUMA framework. We briefly describe some of the PIUMA architecture features and then delve into the details of our optimized SpGEMM kernel. Our SpGEMM kernel can achieve a 9.4x speedup as compared to competing approaches.
Figure: The SMASH Algorithm
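For context on the kernel the thesis optimizes, here is a scalar CPU sketch of hash-based SpGEMM (C = A x B, both in CSR form), where each output row is accumulated in a hash table keyed by column index. It only mirrors the general hashing idea; the atomic, scratchpad-resident design evaluated on PIUMA in the thesis is not reproduced here.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

// Minimal CSR sparse matrix: row_ptr has rows+1 entries; col/val hold nonzeros.
struct CSR {
    int rows = 0, cols = 0;
    std::vector<int> row_ptr, col;
    std::vector<double> val;
};

// Hash-accumulator SpGEMM (C = A * B): each output row of C is built in a
// per-row hash table keyed by column index, accumulating partial products.
// (Columns within a row come out in hash order, not sorted.)
CSR spgemm_hash(const CSR& A, const CSR& B) {
    CSR C;
    C.rows = A.rows;
    C.cols = B.cols;
    C.row_ptr.assign(A.rows + 1, 0);
    for (int i = 0; i < A.rows; ++i) {
        std::unordered_map<int, double> acc;  // column index -> accumulated value
        for (int a = A.row_ptr[i]; a < A.row_ptr[i + 1]; ++a) {
            int k = A.col[a];
            double av = A.val[a];
            for (int b = B.row_ptr[k]; b < B.row_ptr[k + 1]; ++b)
                acc[B.col[b]] += av * B.val[b];
        }
        for (const auto& kv : acc) {
            C.col.push_back(kv.first);
            C.val.push_back(kv.second);
        }
        C.row_ptr[i + 1] = (int)C.col.size();
    }
    return C;
}

int main() {
    // 2x2 example: A = [[1, 2], [0, 3]], B = [[4, 0], [0, 5]]
    CSR A{2, 2, {0, 2, 3}, {0, 1, 1}, {1, 2, 3}};
    CSR B{2, 2, {0, 1, 2}, {0, 1}, {4, 5}};
    CSR C = spgemm_hash(A, B);
    for (int i = 0; i < C.rows; ++i)
        for (int p = C.row_ptr[i]; p < C.row_ptr[i + 1]; ++p)
            std::printf("C(%d,%d) = %g\n", i, C.col[p], C.val[p]);
    return 0;
}
```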
Student cluster competition 2018, team northeastern university: Reproducing performance of a multi-physics simulations of the Tsunamigenic 2004 Sumatra Megathrust earthquake on the AMD EPYC 7551 architecture
(SC 2018) [PDF]
Abstract
This paper evaluates the reproducibility of a Supercomputing 17 paper titled Extreme Scale Multi-Physics Simulations of the Tsunamigenic 2004 Sumatra Megathrust Earthquake. We evaluate reproducibility on a significantly smaller computer system than used in the original work. We found that we were able to demonstrate reproducibility of the multi-physics simulations on a single-node system, as well as confirm multi-node scaling. However, reproducibility of the visual and geophysical simulation results was inconclusive due to issues related to input parameters provided to our model. The SC 17 paper provided results for both CPU-based simulations as well as Xeon Phi-based simulations. Since our cluster uses NVIDIA V100s for acceleration, we are only able to assess the CPU-based results in terms of reproducibility.
Figure: Horizontal seafloor displacement simulation
Authors: Chris Bunn, Harrison Barclay, Anthony Lazarev, Toyin Yusuf, Jason Fitch, Jason Booth, Kaustubh Shivdikar, David Kaeli
Speeding up DNNs using HPL based Fine-grained Tiling for Distributed Multi-GPU Training
(BARC 2018) [PDF]
Video steganography using encrypted payload for satellite communication
(Aerospace Conference 2017) [PDF]
Missing 'Middle Scenarios': Uncovering Nuanced Conditions in Latin America's Housing Crisis
(Cityscape 2017) [PDF]
Dynamic power allocation using Stackelberg game in a wireless sensor network
(Aerospace Conference 2016) [PDF]
Automatic image annotation using a hybrid engine
(Indicon 2015) [PDF]
Posters
- JAXED
- Pi-Tiles
- The Prime Hexagon
What is KTB Wiki?
KTB Wiki, because the best way to store your knowledge is in an indexed SQL database.
This website was built on KTB Wiki. KTB Wiki is my side project, an attempt to consolidate the knowledge gained during my Ph.D. journey. Though many other platforms provide a similar service, building KTB Wiki was a learning experience: it taught me concepts of indexing, load balancing, and in-memory file systems. KTB Wiki was built using MediaWiki and is intended for research purposes only.
Interesting Reads