Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page that is not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in arXiv, submitted to Conference on Machine Learning and Systems (MLSys), 2022
This paper proposes RCD-SGD, a resource-constrained distributed SGD scheme for heterogeneous environments based on submodular partitioning.
Recommended citation:
```
@misc{https://doi.org/10.48550/arxiv.2211.00839,
  doi       = {10.48550/ARXIV.2211.00839},
  url       = {https://arxiv.org/abs/2211.00839},
  author    = {He, Haoze and Dube, Parijat},
  keywords  = {Machine Learning (cs.LG), Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
  title     = {RCD-SGD: Resource-Constrained Distributed SGD in Heterogeneous Environment via Submodular Partitioning},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Zero v1.0 Universal}
}
```
http://academicpages.github.io/files/RCD-SGD-2022ICASSP.pdf
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3). http://academicpages.github.io/files/paper3.pdf
Published in ICML, to be submitted, 2022
Elastic Averaging SGD (EASGD) and Leader Gradient Descent (LSGD) can accelerate the convergence of centralized distributed SGD and lead to faster training in terms of both wall-clock time and number of epochs. However, neither algorithm can be applied to state-of-the-art decentralized distributed SGD frameworks, which alleviate communication congestion by abandoning the centralized parameter server. In this paper, we propose Adjacent Leader Decentralized Gradient Descent (AL-DSGD), which accelerates the convergence of state-of-the-art decentralized frameworks. The main idea of AL-DSGD is to assign weights to neighboring learners according to their performance when averaging, and to apply a corrective force dictated by the currently best-performing neighbor during training. A convergence analysis demonstrates the faster convergence, and experiments on a suite of datasets and deep neural networks validate the theoretical analysis and show that AL-DSGD speeds up training. Finally, we developed a general and concise PyTorch distributed training framework in which any distributed machine learning system (synchronous or asynchronous, centralized or decentralized distributed SGD) can be implemented easily.
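As a rough illustration of the averaging step described above (not the authors' implementation; the loss-based weighting, function names, and pull coefficient here are assumptions for the sketch), the following PyTorch-style snippet mixes a worker's parameters with its neighbors' parameters using performance-dependent weights and adds a corrective pull toward the currently best-performing neighbor:
```python
import torch

def al_dsgd_style_average(local_params, neighbor_params, neighbor_losses,
                          pull_coeff=0.1):
    """Hypothetical sketch of performance-weighted neighbor averaging.

    local_params    : list of torch.Tensor, the local worker's parameters
    neighbor_params : list of parameter lists, one per adjacent worker
    neighbor_losses : list of float, recent loss of each neighbor (lower = better)
    pull_coeff      : strength of the corrective pull toward the best neighbor
    """
    # Turn losses into averaging weights: lower-loss (better) neighbors get
    # larger weights. The softmax over negative losses is an assumption.
    scores = torch.tensor([-l for l in neighbor_losses], dtype=torch.float32)
    weights = torch.softmax(scores, dim=0)

    # Index of the currently best-performing neighbor.
    best = int(torch.argmin(torch.tensor(neighbor_losses)))

    new_params = []
    for i, p_local in enumerate(local_params):
        # Weighted average: half the mass on the local parameter, the rest
        # spread over neighbors according to their performance weights.
        mixed = 0.5 * p_local
        for w, nbr in zip(weights, neighbor_params):
            mixed = mixed + 0.5 * w * nbr[i]
        # Corrective force: pull toward the best-performing neighbor.
        mixed = mixed + pull_coeff * (neighbor_params[best][i] - p_local)
        new_params.append(mixed)
    return new_params
```
In a real decentralized run, neighbor_params and neighbor_losses would arrive via peer-to-peer exchanges with the adjacent workers on the communication topology rather than being passed in locally.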
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.