Before Stanford, I worked with John Lafferty at the University of Chicago.
In each setting we provide faster exact and approximate algorithms.
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
Stanford, CA 94305
[c7] Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian: Private Convex Optimization in General Norms.
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."
Faster Energy Maximization for Faster Maximum Flow.
Email: [last name]@stanford.edu, where [last name] = sidford.
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms, which I created.
Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant: Efficient Convex Optimization Requires Superlinear Memory.
with Aaron Sidford
I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms.
Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. Computing maximum flows with augmenting electrical flows.
Email: [name]@stanford.edu
"Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)."
Etude for the Park City Math Institute Undergraduate Summer School.
[pdf] Journal of Machine Learning Research, 2017 (arXiv).
My PhD dissertation, Algorithmic Approaches to Statistical Questions, 2012.
In International Conference on Machine Learning (ICML 2016).
We organize regular talks; if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).
BayLearn, 2019.
"Computing a stationary solution for multi-agent RL is hard: indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard."
Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2018.
Unlike previous ADFOCS, this year the event will take place over the span of three weeks.
Neural Information Processing Systems (NeurIPS, Spotlight), 2019: Variance Reduction for Matrix Games.
2022 - current: Assistant Professor, Georgia Institute of Technology (Georgia Tech). 2022: Visiting researcher, Max Planck Institute for Informatics.
"A short version of the conference publication under the same title."
Fall '22: 8803 - Dynamic Algebraic Algorithms; a small tool to obtain upper bounds of such algebraic algorithms.
Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in \(\tilde{O}(\sqrt{n}\,L)\) iterations, each consisting of solving \(\tilde{O}(1)\) linear systems and additional nearly linear time computation (a toy illustration of this iteration structure follows below).
The design of algorithms is traditionally a discrete endeavor.
Some I am still actively improving and all of them I am happy to continue polishing. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
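To make the iteration structure in the linear-programming claim above concrete, here is a minimal sketch of a textbook log-barrier interior-point method, in which each Newton step amounts to solving one linear system. It is only a generic illustration of that structure, not the \(\tilde{O}(\sqrt{n}\,L)\)-iteration algorithm itself; the function name, the toy instance, and all parameter choices are assumptions made for this example.

```python
import numpy as np

def lp_barrier(c, A, b, x, t=1.0, mu=10.0, newton_steps=20, outer=8):
    """Minimize c @ x subject to A @ x <= b with a basic log-barrier method.
    Each Newton step solves a single linear system, mirroring the
    'iterations that each solve a few linear systems' structure."""
    for _ in range(outer):
        for _ in range(newton_steps):
            s = b - A @ x                      # slacks, must stay positive
            grad = t * c + A.T @ (1.0 / s)     # gradient of t*c@x - sum(log s)
            hess = A.T @ np.diag(1.0 / s**2) @ A
            dx = np.linalg.solve(hess, -grad)  # the linear system per iteration
            step = 1.0
            while np.any(b - A @ (x + step * dx) <= 0):
                step *= 0.5                    # backtrack to stay strictly feasible
            x = x + step * dx
        t *= mu                                # tighten the barrier
    return x

# toy LP: minimize x1 + x2 over the box [-1, 1]^2 (optimum at (-1, -1))
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.ones(4)
c = np.array([1., 1.])
print(lp_barrier(c, A, b, x=np.zeros(2)))
```

The quoted result is about driving down the number of such iterations; this sketch only shows what a single iteration (one linear-system solve plus a step) looks like.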
Given an independence oracle, we provide an exact \(O(nr\log r\cdot T_{\mathrm{ind}})\) time algorithm (a short sketch of the independence-oracle model follows below).
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.
My broad research interest is in theoretical computer science, and my focus is on fundamental mathematical problems in data science at the intersection of computer science, statistics, optimization, biology, and economics.
We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in \(\epsilon\) better than \(\epsilon^{-8/5}\), which is within \(\epsilon^{-1/15}\log\frac{1}{\epsilon}\) of the best known rate for such methods.
Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for Convex Optimization.
Conference on Learning Theory (COLT), 2022: RECAPP: Crafting a More Efficient Catalyst for Convex Optimization.
ICML Workshop on Reinforcement Learning Theory, 2021: Variance Reduction for Matrix Games.
Huang Engineering Center
International Conference on Machine Learning (ICML), 2022: Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space.
Selected recent papers.
I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford in the Operations Research group.
Stanford University [pdf] [poster]
"A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."
Here are some lecture notes that I have written over the years.
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford.
I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees.
Applying this technique, we prove that any deterministic SFM algorithm ...
Theses are protected by copyright.
I also completed my undergraduate degree (in mathematics) at MIT.
Nearly Optimal Communication and Query Complexity of Bipartite Matching.
My research focuses on AI and machine learning, with an emphasis on robotics applications.
Improved Lower Bounds for Submodular Function Minimization.
I am a senior researcher in the Algorithms group at Microsoft Research Redmond.
AISTATS, 2021. SODA 2023: 4667-4767.
"Team-convex-optimization for solving discounted and average-reward MDPs!"
Neural Information Processing Systems (NeurIPS, Oral), 2019: A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.
My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.
[pdf] Fourier Transformation at a Representation, Annie Marsden.
Russell Lyons and Yuval Peres, Probability on Trees and Networks.
Simple MAP inference via low-rank relaxations.
Neural Information Processing Systems (NeurIPS), 2014.
Aaron Sidford is an Assistant Professor of Management Science and Engineering at Stanford University, where he also has a courtesy appointment in Computer Science and an affiliation with the Institute for Computational and Mathematical Engineering (ICME).
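As context for the independence-oracle model in the matroid result above, the sketch below runs the classic greedy algorithm for a maximum-weight independent set in a single (graphic) matroid, where the algorithm's only access to the matroid is through independence queries. It is not the matroid-intersection algorithm from the cited work; the toy graph, weights, and function names are illustrative assumptions.

```python
# Independence oracle for the graphic matroid of a small graph: a set of
# edges is independent iff it is acyclic. The oracle is the only access the
# algorithm has to the matroid, mirroring the T_ind cost model above.
def is_independent(edge_subset, n=5):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # adding this edge closed a cycle
        parent[ru] = rv
    return True

def greedy_max_weight_basis(edges, weights):
    """Classic matroid greedy: scan elements by decreasing weight and keep
    each one whose addition preserves independence (one oracle call each)."""
    chosen = []
    for e in sorted(edges, key=lambda e: -weights[e]):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
weights = {(0, 1): 5, (1, 2): 4, (2, 0): 3, (2, 3): 2, (3, 4): 1}
print(greedy_max_weight_basis(edges, weights))  # a maximum-weight spanning forest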
Van Vu, Professor, Yale. Verified email at yale.edu.
172 Gates Computer Science Building, 353 Jane Stanford Way, Stanford University.
To appear in Neural Information Processing Systems (NeurIPS), 2022: Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching.
The Complexity of Infinite-Horizon General-Sum Stochastic Games. Yujia Jin, Vidya Muthukumar, Aaron Sidford. Innovations in Theoretical Computer Science (ITCS).
Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. Advances in Neural Information Processing Systems (NeurIPS 2022).
Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur. Advances in Neural Information Processing Systems (NeurIPS).
In Symposium on Foundations of Computer Science (FOCS 2022); In International Conference on Machine Learning (ICML 2022); In Conference on Learning Theory (COLT 2022); In International Colloquium on Automata, Languages and Programming (ICALP 2022); In Symposium on Theory of Computing (STOC 2022); In Symposium on Discrete Algorithms (SODA 2022); In Advances in Neural Information Processing Systems (NeurIPS 2021); In Conference on Learning Theory (COLT 2021); In International Conference on Machine Learning (ICML 2021); In Symposium on Theory of Computing (STOC 2021); In Symposium on Discrete Algorithms (SODA 2021); In Innovations in Theoretical Computer Science (ITCS 2021); In Conference on Neural Information Processing Systems (NeurIPS 2020); In Symposium on Foundations of Computer Science (FOCS 2020); In International Conference on Artificial Intelligence and Statistics (AISTATS 2020); In International Conference on Machine Learning (ICML 2020); In Conference on Learning Theory (COLT 2020); In Symposium on Theory of Computing (STOC 2020); In International Conference on Algorithmic Learning Theory (ALT 2020); In Symposium on Discrete Algorithms (SODA 2020); In Conference on Neural Information Processing Systems (NeurIPS 2019); In Symposium on Foundations of Computer Science (FOCS 2019); In Conference on Learning Theory (COLT 2019); In Symposium on Theory of Computing (STOC 2019); In Symposium on Discrete Algorithms (SODA 2019); In Conference on Neural Information Processing Systems (NeurIPS 2018); In Symposium on Foundations of Computer Science (FOCS 2018); In Conference on Learning Theory (COLT 2018); In Symposium on Discrete Algorithms (SODA 2018); In Innovations in Theoretical Computer Science (ITCS 2018); In Symposium on Foundations of Computer Science (FOCS 2017); In International Conference on Machine Learning (ICML 2017); In Symposium on Theory of Computing (STOC 2017); In Symposium on Foundations of Computer Science (FOCS 2016); In Symposium on Theory of Computing (STOC 2016); In Conference on Learning Theory (COLT 2016); In International Conference on Machine Learning (ICML 2016); In International Conference on Machine Learning (ICML 2016).
with Yair Carmon, Arun Jambulapati and Aaron Sidford.
Winter 2020: Teaching assistant for EE364a: Convex Optimization I, taught by John Duchi. Fall 2018: Teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis; Fall 2019: taught by Greg Valiant.
I received a B.S. in Mathematics and B.A. in Chemistry at the University of Chicago.
Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University.
The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
[name] = yangpliu
Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence; Maximum Flow and Minimum-Cost Flow in Almost Linear Time; Online Edge Coloring via Tree Recurrences and Correlation Decay; Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao; Discrepancy Minimization via a Self-Balancing Walk; Faster Divergence Maximization for Faster Maximum Flow.
Previously, I was a visiting researcher at the Max Planck Institute for Informatics and a Simons-Berkeley Postdoctoral Researcher.
With Cameron Musco and Christopher Musco.
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
My interests are in the intersection of algorithms, statistics, optimization, and machine learning.
David P. Woodruff.
I graduated with a PhD from Princeton University in 2018.
My research was supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship from 2018-2021, and by a Google PhD Fellowship from 2022-2023.
[pdf] Sequential Matrix Completion.
Email: sidford@stanford.edu.
Aaron Sidford is part of Stanford Profiles, the official site for faculty, postdoc, student, and staff information (Expertise, Bio, Research, Publications, and more).
[pdf] [poster] with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford, 2021.
"We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian.
[pdf] Navajo Math Circles Instructor.
with Yair Carmon, Danielle Hausler, Arun Jambulapati and Aaron Sidford.
D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford.
I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
In particular, this work presents a sharp analysis of: (1) mini-batching, a method of averaging many stochastic gradient estimates into a single step (a toy illustration follows below).
Best Paper Award.
In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS).
I am fortunate to be advised by Aaron Sidford.
I received my PhD from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner.
CSE 535: Theory of Optimization and Continuous Algorithms.
Anup B. Rao.
Aaron Sidford (sidford@stanford.edu). Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O), which I taught in Fall 2019.
Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.
Yin Tat Lee and Aaron Sidford. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations.
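As a toy illustration of the mini-batching mentioned above, here is plain mini-batch stochastic gradient descent for least squares regression, where each update averages the gradients of a randomly sampled batch of examples. This is the standard textbook scheme, not the specific parallelized method analyzed in the cited work; the synthetic data and all hyperparameters are assumptions chosen for the example.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size=32, lr=0.1, epochs=50, seed=0):
    """Mini-batch SGD for least squares: each step averages the stochastic
    gradients of `batch_size` randomly sampled examples."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)  # averaged gradient estimate
            w -= lr * grad
    return w

# synthetic least squares instance
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1000)
print(np.linalg.norm(minibatch_sgd(X, y) - w_true))  # small recovery error
```

Larger batches reduce the variance of each update at the cost of fewer updates per pass over the data, which is roughly the trade-off such sharp analyses quantify.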
Towards this goal, some fundamental questions need to be solved, such as how machines can learn models of their environments that are useful for performing tasks.
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu: Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. DOI: 10.1109/FOCS.2016.69 (a baseline illustration of the stationary-distribution problem follows below).
The site facilitates research and collaboration in academic endeavors.
2019 (and hopefully 2022 onwards, Covid permitting). For more information please watch this and please consider donating here!
Lower bounds for finding stationary points II: first-order methods. with Yair Carmon, Kevin Tian and Aaron Sidford.
"Faster algorithms for separable minimax, finite-sum and separable finite-sum minimax."
We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points of smooth, non-convex high-dimensional functions using first-order methods.
Gary L. Miller, Carnegie Mellon University. Verified email at cs.cmu.edu.
(arXiv pre-print) arXiv | pdf. Annie Marsden, R. Stephen Berry.
"I am excited to push the theory of optimization and algorithm design to new heights!" Assistant Professor Aaron Sidford speaks at ICME's Xpo event.
[pdf] with Yang P. Liu and Aaron Sidford.
[pdf] Done under the mentorship of M. Malliaris.
"An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know smoothness parameter!" In submission.
My long-term goal is to bring robots into human-centered domains such as homes and hospitals.
MS&E welcomes new faculty member, Aaron Sidford!
I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications.
Management Science & Engineering
"General variance reduction framework for solving saddle-point problems & improved runtimes for matrix games."
Thesis, 2016. [pdf]
Oral Presentation for Misspecification in Prediction Problems and Robustness via Improper Learning.
In particular, it achieves nearly linear time for DP-SCO in low-dimension settings.
Publications and Preprints.
Symposium on Foundations of Computer Science (FOCS), 2020: Efficiently Solving MDPs with Stochastic Mirror Descent.
"A low-bias low-cost estimator of subproblem solution suffices for acceleration!"
The authors of most papers are ordered alphabetically.
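For context on the stationary-distribution problem in the "Faster Algorithms for Computing the Stationary Distribution" citation above, here is the textbook baseline: power iteration on the transition matrix of a toy Markov chain. The cited paper is about much faster algorithms; this sketch, with its illustrative transition matrix and tolerance, is not taken from that work.

```python
import numpy as np

def stationary_distribution(P, iters=10_000, tol=1e-12):
    """Baseline power iteration: repeatedly apply the row-stochastic
    transition matrix P until the distribution stops changing."""
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        new = pi @ P
        if np.linalg.norm(new - pi, 1) < tol:
            break
        pi = new
    return pi

# lazy random walk on a 3-cycle
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(stationary_distribution(P))  # uniform: [1/3, 1/3, 1/3]
```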
Prior to that, I received an MPhil in Scientific Computing at the University of Cambridge on a Churchill Scholarship, where I was advised by Sergio Bacallado.
Yin Tat Lee and Aaron Sidford.
The Complexity of Infinite-Horizon General-Sum Stochastic Games; The Complexity of Optimizing Single and Multi-player Games; A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions; On the Sample Complexity for Average-reward Markov Decision Processes; Stochastic Methods for Matrix Games and its Applications; Acceleration with a Ball Optimization Oracle; Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
IEEE, 147-156.
Aaron Sidford, Stanford University. Verified email at stanford.edu.
With Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff.
In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford.
I enjoy understanding the theoretical ground of many algorithms that are of practical importance.
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)!
Research interests: data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.
I hope you enjoy the content as much as I enjoyed teaching the class; if you have questions or feedback on the notes, feel free to email me.
I was fortunate to work with Prof. Zhongzhi Zhang.
With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023).
AISTATS, 2021.
About Me.
"A nearly matching upper and lower bound for constant error here!"
[pdf] [poster] With Bill Fefferman, Soumik Ghosh, Umesh Vazirani, and Zixin Zhou (2022).
"Sample complexity for average-reward MDPs?"
Another research focus is optimization algorithms.
with Yair Carmon, Aaron Sidford and Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2015); In Conference on Learning Theory (COLT 2015); In International Conference on Machine Learning (ICML 2015); In Innovations in Theoretical Computer Science (ITCS 2015); In Symposium on Foundations of Computer Science (FOCS 2013); In Symposium on the Theory of Computing (STOC 2013); Book chapter in Building Bridges II: Mathematics of Laszlo Lovasz, 2020; Journal of Machine Learning Research, 2017.
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM); a small illustration of the SFM problem itself follows below.
BayLearn, 2021: On the Sample Complexity of Average-reward MDPs.
Personal Website.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022: Stochastic Bias-Reduced Gradient Methods.
Full CV is available here.
Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.
Yang P. Liu, Aaron Sidford: CoRR abs/2101.05719 (2021). Department of Mathematics.
Prof. Erik Demaine. TAs: Timothy Kaler, Aaron Sidford. (Sample frame from lecture videos.) Data structures play a central role in modern computer science.
Before attending Stanford, I graduated from MIT in May 2018.
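To make the submodular function minimization (SFM) problem referenced above concrete, the sketch below brute-forces the minimizer of a small graph cut function, a standard example of a submodular function. It illustrates only the problem, not the lower-bound construction or any fast SFM algorithm; the toy graph and helper names are assumptions for the example.

```python
from itertools import combinations

# Undirected graph on vertices {0,...,4}. The cut function F(S) = number of
# edges with exactly one endpoint in S is submodular; SFM asks for a minimizer.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
vertices = list(range(5))

def cut_value(S):
    S = set(S)
    return sum((u in S) != (v in S) for u, v in edges)

# Exponential-time baseline: try every nonempty proper subset. (For the cut
# function the unconstrained minimizers are the trivial sets S = {} and S = V,
# so restricting to proper nonempty subsets recovers the minimum cut.)
best = min(
    ((cut_value(S), S) for k in range(1, len(vertices))
     for S in combinations(vertices, k)),
    key=lambda t: t[0],
)
print(best)  # (1, (4,)) -- a minimum cut of value 1
```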
University of Cambridge MPhil. [pdf] [poster]
Aaron Sidford joins Stanford's Management Science & Engineering department, launching new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms."
However, many advances have come from a continuous viewpoint.
Yujia Jin.
Slides from my talk at ITCS.
This is the academic homepage of Yang Liu (I publish under Yang P. Liu).
I am an assistant professor in the department of Management Science and Engineering and the department of Computer Science at Stanford University.
ReSQueing Parallel and Private Stochastic Convex Optimization.
[5] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian.
Dynamic algorithms (data structures) that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system; a classic example of maintaining a linear-system solution under updates is sketched below.
We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.
SHUFE, where I was fortunate to be advised by Prof. Dongdong Ge.
Sequential Matrix Completion.
Yu Gao, Yang P. Liu, Richard Peng: Faster Divergence Maximization for Faster Maximum Flow, FOCS 2020.
"Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains."
Annie Marsden.
2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 424-433; SIAM Journal on Optimization 28 (2), 1751-1772; Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms; 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 1049-1065; 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 147-156; Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing; MB Cohen, YT Lee, C Musco, C Musco, R Peng, A Sidford; Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science; Advances in Neural Information Processing Systems 31; M Kapralov, YT Lee, CN Musco, CP Musco, A Sidford; SIAM Journal on Computing 46 (1), 456-477; P Jain, S Kakade, R Kidambi, P Netrapalli, A Sidford; MB Cohen, YT Lee, G Miller, J Pachocki, A Sidford; Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing; International Conference on Machine Learning, 2540-2548; P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford; 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 230-249; Mathematical Programming 184 (1-2), 71-120; P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford; International Conference on Machine Learning, 654-663; Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms; D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford.
Path finding methods for linear programming: Solving linear programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) iterations and faster algorithms for maximum flow; Accelerated methods for nonconvex optimization; An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations; A faster cutting plane method and its implications for combinatorial and convex optimization; Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems; A simple, combinatorial algorithm for solving SDD systems in nearly-linear time; Uniform sampling for matrix approximation; Near-optimal time and sample complexities for solving Markov decision processes with a generative model; Single pass spectral sparsification in dynamic streams; Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification; Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization; Accelerating stochastic gradient descent for least squares regression; Efficient inverse maintenance and faster algorithms for linear programming; Lower bounds for finding stationary points I; Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm; Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions; Competing with the empirical risk minimizer in a single pass; Variance reduced value iteration and faster algorithms for solving Markov decision processes; Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation.
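As a classic example of the "maintain the solution of a linear system" idea referenced above, the sketch below uses the Sherman-Morrison formula to update a maintained inverse after a rank-one change in \(O(n^2)\) time instead of refactoring from scratch. This is a standard identity, not any particular data structure from the works listed here; the test matrix and names are illustrative.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Maintain the inverse of A under the rank-one update A <- A + u v^T
    in O(n^2) time, rather than O(n^3) to refactor from scratch."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n)) + n * np.eye(n)    # well-conditioned test matrix
b = rng.normal(size=n)
A_inv = np.linalg.inv(A)

u, v = rng.normal(size=n), rng.normal(size=n)  # a single dynamic update
A_inv = sherman_morrison_update(A_inv, u, v)

# the maintained solution matches a from-scratch solve of the updated system
print(np.allclose(A_inv @ b, np.linalg.solve(A + np.outer(u, v), b)))
```

The same maintained inverse immediately yields the updated solution \(A^{-1}b\), which is the kind of quantity such dynamic algorithms keep current as the underlying matrix changes.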
You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers).
This improves upon the previous best known running times of \(O(nr^{1.5}\,T_{\mathrm{ind}})\) due to Cunningham in 1986 and \(\tilde{O}(n^{2}T_{\mathrm{ind}}+n^{3})\) due to Lee, Sidford, and Wong in 2015.
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems.
In Sidford's dissertation, Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier.
[pdf] [talk] Aaron Sidford.
[pdf] [talk] [poster] arXiv preprint arXiv:2301.00457, 2023.
Before joining Stanford in Fall 2016, I was an NSF post-doctoral fellow at Carnegie Mellon University; I received a Ph.D. in mathematics from the University of Michigan in 2014, and a B.A.
If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together.
Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms.
Aaron Sidford, Assistant Professor of Management Science and Engineering and of Computer Science.
Contact information. Administrative contact: Jackie Nguyen, Administrative Associate.
Many of my results use fast matrix multiplication.
Eigenvalues of the Laplacian and their relationship to the connectedness of a graph; a small numerical illustration follows below.
Faculty Spotlight: Aaron Sidford.
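A small numerical illustration of the Laplacian fact mentioned above: for an undirected graph, the multiplicity of the eigenvalue 0 of the Laplacian \(L = D - A\) equals the number of connected components. The code below checks this on a toy graph with two components; the helper name and example graph are assumptions for the illustration.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

# two components: a triangle {0, 1, 2} and an edge {3, 4}
L = laplacian(5, [(0, 1), (1, 2), (2, 0), (3, 4)])
eigs = np.linalg.eigvalsh(L)
print(np.round(eigs, 6))                     # [0, 0, 2, 3, 3]
# multiplicity of eigenvalue 0 = number of connected components
print(int(np.sum(np.isclose(eigs, 0.0))))    # -> 2
```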