Aaron Sidford

About

Aaron Sidford is an assistant professor in the Department of Management Science and Engineering at Stanford University, with a courtesy appointment in the Department of Computer Science and an affiliation with the Institute for Computational and Mathematical Engineering (ICME). He is affiliated with the Stanford Theory Group and the Stanford Operations Research Group, and his PhD dissertation received an ACM Doctoral Dissertation Award, Honorable Mention. Email: sidford@stanford.edu.

Research

My research is on the design and theoretical analysis of efficient algorithms and data structures. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures, and more broadly in the intersection of algorithms, statistics, optimization, and machine learning. I develop new iterative methods and dynamic algorithms that complement each other: many of these optimization algorithms solve a sequence of smaller subproblems whose solutions can be maintained via dynamic data structures. In each setting the goal is to provide faster exact and approximate algorithms.

Selected results

Linear programming. A new algorithm for solving linear programs: given a linear program with $n$ variables, $m > n$ constraints, and bit complexity $L$, the algorithm runs in $\tilde{O}(\sqrt{n}\,L)$ iterations, each consisting of solving $\tilde{O}(1)$ linear systems and additional nearly linear time computation. A minimal interior-point sketch illustrating this iteration structure appears below.
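To make the iteration structure concrete, here is a minimal, textbook-style primal-dual path-following sketch in Python. It is not the algorithm from the work above: the function name ipm_lp, the fixed centering parameter, and the naive all-ones starting point are illustrative assumptions, and no attempt is made to achieve the $\tilde{O}(\sqrt{n}\,L)$ iteration bound.

```python
import numpy as np

def ipm_lp(A, b, c, eps=1e-8, max_iters=500):
    """Toy primal-dual path-following method for min c^T x s.t. Ax = b, x >= 0.
    Each iteration solves a single linear system (the Newton step on the
    central path), mirroring the 'one system per iteration' structure above."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)  # naive interior start
    for _ in range(max_iters):
        mu = x @ s / n                      # duality measure
        if mu < eps:
            break
        sigma = 0.5                         # centering parameter (fixed, toy)
        r_p = b - A @ x                     # primal residual
        r_d = c - A.T @ y - s               # dual residual
        r_c = sigma * mu - x * s            # complementarity residual
        # Eliminate dx, ds to get the normal equations (A D A^T) dy = rhs,
        # with D = diag(x / s); this is the one linear solve per iteration.
        d = x / s
        rhs = r_p + A @ (d * r_d) - A @ (r_c / s)
        dy = np.linalg.solve(A @ (d[:, None] * A.T), rhs)
        ds = r_d - A.T @ dy
        dx = (r_c - x * ds) / s
        # Step size keeping (x, s) strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < -1e-12
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x

# Example: min x0 + x1 subject to x0 + x1 + x2 = 1, x >= 0 (optimum at (0, 0, 1)).
A = np.array([[1.0, 1.0, 1.0]])
print(ipm_lp(A, np.array([1.0]), np.array([1.0, 1.0, 0.0])))
```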
Directed Laplacians. With Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao, we show how to solve directed Laplacian systems in nearly-linear time.

Submodular function minimization and matroid optimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang, we prove lower bounds against any deterministic SFM algorithm. On the algorithmic side, this line of work improves upon the previous best known running times of $O(nr^{1.5} T_{\mathrm{ind}})$ due to Cunningham in 1986 and $\tilde{O}(n^2 T_{\mathrm{ind}} + n^3)$ due to Lee, Sidford, and Wong in 2015, where $T_{\mathrm{ind}}$ denotes the cost of an independence-oracle query.

Lower bounds for first-order methods. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods.

Memory lower bounds. Efficient Convex Optimization Requires Superlinear Memory, co-authored with Gregory Valiant, Annie Marsden, and Vatsal Sharan, received the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022).

Acceleration. An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter.

Private optimization. With Hilal Asi, Yair Carmon, and Arun Jambulapati: nearly linear time algorithms for differentially private stochastic convex optimization (DP-SCO) in low-dimension settings.

Streaming matching. Streaming matching (and optimal transport) in $\tilde{O}(1/\epsilon)$ passes and $O(n)$ space, with Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian.

Matrix games. A collection of variance-reduced and coordinate methods for solving matrix games, with simplex or Euclidean ball domains.

Multi-agent reinforcement learning. Computing stationary solutions for multi-agent RL is hard: CCE for simultaneous games and NE for turn-based games are both PPAD-hard.

Averaging in stochastic optimization. With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli, this work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD); a small simulation illustrating iterate averaging appears below.
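As a small empirical illustration of why iterate averaging helps, the following sketch runs SGD on a toy streaming least-squares problem and compares the last iterate with the Polyak-Ruppert running average. The problem instance, step size, and names (sample, sgd_with_averaging) are illustrative assumptions; the referenced work gives the precise theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy streaming least squares: minimize E[(a^T w - b)^2] over fresh samples.
d = 10
w_true = rng.normal(size=d)

def sample():
    a = rng.normal(size=d)
    return a, a @ w_true + 0.1 * rng.normal()   # noisy linear measurement

def sgd_with_averaging(steps=20000, lr=0.01):
    """SGD with Polyak-Ruppert (iterate) averaging: return both the final
    iterate and the running average of all iterates.  The averaged iterate
    typically has much lower variance than the last iterate."""
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, steps + 1):
        a, b = sample()
        grad = 2.0 * (a @ w - b) * a            # stochastic gradient
        w -= lr * grad
        w_avg += (w - w_avg) / t                # running average of iterates
    return w, w_avg

w_last, w_bar = sgd_with_averaging()
print("last-iterate error:", np.linalg.norm(w_last - w_true))
print("averaged error:   ", np.linalg.norm(w_bar - w_true))
```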
Teaching

CS265/CME309: Randomized Algorithms and Probabilistic Analysis, Fall 2019.

Here are some lecture notes that I have written over the years. Some I am still actively improving, and all of them I am happy to continue polishing; I hope you enjoy the content as much as I enjoyed teaching the classes, and if you have questions or feedback on the notes, feel free to email me.

Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization.

Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization: I used these notes to accompany the course Discrete Mathematics and Algorithms. The emphasis is on providing mathematical tools for combinatorial optimization.

I also lectured in a summer program on continuous optimization, which began with a primer week on the basics (July 26 - July 30) followed by two weeks of more advanced talks; the courses included my Introduction to Optimization Theory, Lap Chi Lau's Convexity and Optimization, and a course by Nisheeth Vishnoi. Earlier, I was a teaching assistant for Prof. Erik Demaine's data structures course at MIT, together with Timothy Kaler.

Advising

I regularly advise Stanford students from a variety of departments, and I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.

Talks

SHUFE, Oct. 2022
Algorithm Seminar, Google Research, Oct. 2022
Young Researcher Workshop, Cornell ORIE, Apr. 2022
Learning and Games Program, Simons Institute, Sept. 2021
Young Researcher Workshop, Cornell ORIE, Sept. 2021
ACO Student Seminar, Georgia Tech, Dec. 2019
NeurIPS Spotlight presentation, 2019
Publications and Preprints

Publications are listed in reverse chronological order.

The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, and Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022).
Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. In International Conference on Machine Learning (ICML 2022) (arXiv). A new Catalyst framework with a relaxed error condition for faster finite-sum and minimax solvers.
Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. In Conference on Learning Theory (COLT 2022). Best Paper Award.
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods. In Conference on Learning Theory (COLT 2022) (arXiv).
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan (alphabetical authorship).
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. In International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022) (arXiv).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
Algorithmic Trade-offs for Girth Approximation in Undirected Graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
Stochastic Bias-Reduced Gradient Methods. With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021) (arXiv).
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021) (arXiv).
Minimum Cost Flows, MDPs, and $\ell_1$-Regression in Nearly Linear Time for Dense Instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021) (arXiv).
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time. In Symposium on Foundations of Computer Science (FOCS 2020); invited to the special issue (arXiv before merge).
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020) (arXiv).
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020) (arXiv).
Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).

Additional publications and preprints:

Maximum Flow and Minimum-Cost Flow in Almost Linear Time. In Symposium on Foundations of Computer Science (FOCS 2022); Best Paper Award.
Lower Bounds for Finding Stationary Points II: First-Order Methods.
Variance Reduction for Matrix Games.
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
Singular Value Approximation and Reducing Directed to Undirected Graph Sparsification.
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes. In Symposium on Discrete Algorithms (SODA 2018) (arXiv).
Efficient $\tilde{O}(n/\epsilon)$ Spectral Sketches for the Laplacian and its Pseudoinverse. In Symposium on Discrete Algorithms (SODA 2018).
Stability of the Lanczos Method for Matrix Function Approximation. In Symposium on Discrete Algorithms (SODA 2018).
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu. In Symposium on Foundations of Computer Science (FOCS 2016). DOI: 10.1109/FOCS.2016.69.
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. In Symposium on Foundations of Computer Science (FOCS 2013).

Several of the entries above (Laplacian system solvers, spectral sketches, stationary distributions) revolve around solving linear systems in graph Laplacians; a small illustration of the underlying problem follows below.
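The sketch below only sets up and solves such a system for an undirected graph with off-the-shelf conjugate gradient; the nearly-linear-time solvers in the publications above are far more sophisticated, and the graph size and density here are arbitrary illustrative choices.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Random sparse symmetric adjacency matrix = an undirected weighted graph.
A = sparse.random(200, 200, density=0.05, random_state=0)
A = A + A.T
L = laplacian(A)                 # graph Laplacian L = D - A

# L is singular with nullspace spanned by the all-ones vector (when the
# graph is connected), so project b onto the range of L first.
b = rng.normal(size=200)
b -= b.mean()

x, info = cg(L, b)               # conjugate gradient: one matvec per iteration
print("cg converged:", info == 0, "residual:", np.linalg.norm(L @ x - b))
```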