
Aaron Sidford
Stanford University

If you see any typos or issues, feel free to email me.

I have the great privilege and good fortune of advising several PhD students. I have also had the great privilege and good fortune of advising the following PhD students who have now graduated: Kirankumar Shiragur (co-advised with Moses Charikar) - PhD 2022, AmirMahdi Ahmadinejad (co-advised with Amin Saberi) - PhD 2020, and Yair Carmon (co-advised with John Duchi) - PhD 2020.

Related lecture notes: Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for Convex Optimization.

Earlier papers appeared in FOCS 2015, COLT 2015, ICML 2015, ITCS 2015, FOCS 2013, and STOC 2013, along with a book chapter in Building Bridges II: Mathematics of Laszlo Lovasz (2020) and an article in the Journal of Machine Learning Research (2017).

Selected publications (2016-2017):
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions, With Yair Carmon, John C. Duchi, and Oliver Hinder, In International Conference on Machine Learning (ICML 2017) (arXiv)
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu, In Symposium on Theory of Computing (STOC 2017)
Subquadratic Submodular Function Minimization, With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong, In Symposium on Theory of Computing (STOC 2017) (arXiv)
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu, In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv)
Geometric Median in Nearly Linear Time, With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016) (arXiv)
With Alina Ene, Gary L. Miller, and Jakub Pachocki
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm, With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli, In Conference on Learning Theory (COLT 2016) (arXiv) (a minimal sketch of Oja's update appears after this list)
Principal Component Projection Without Principal Component Analysis, With Roy Frostig, Cameron Musco, and Christopher Musco, In International Conference on Machine Learning (ICML 2016) (arXiv)
Faster Eigenvector Computation via Shift-and-Invert Preconditioning, With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis
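To give a flavor of the Streaming PCA entry above: Oja's algorithm maintains a single unit vector and performs one stochastic power-iteration step per sample. The sketch below is a minimal illustration; the constant step size and the synthetic data are assumptions for this example, not the tuned schedule analyzed in the paper.

```python
import numpy as np

def oja_top_eigvec(stream, d, lr=0.01):
    """Estimate the top eigenvector of E[x x^T] from a stream of samples x.

    Minimal sketch of Oja's classical update; `lr` is an illustrative
    constant, not the paper's near-optimal step-size schedule.
    """
    w = np.random.randn(d)
    w /= np.linalg.norm(w)
    for x in stream:
        w += lr * x * (x @ w)   # stochastic power-iteration step: w += lr * (x x^T) w
        w /= np.linalg.norm(w)  # project back onto the unit sphere
    return w

# Usage: recover the top principal direction of a synthetic distribution.
rng = np.random.default_rng(0)
cov = np.diag([5.0, 1.0, 0.5])
samples = rng.multivariate_normal(np.zeros(3), cov, size=20000)
v = oja_top_eigvec(iter(samples), d=3)
print(np.abs(v))  # should be close to (1, 0, 0)
```

Each update touches only one sample, which is why the method runs in a single pass with O(d) memory.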
I am affiliated with the Stanford Theory Group and the Stanford Operations Research Group. My dissertation received an ACM Doctoral Dissertation Award, Honorable Mention.

CONTACT INFORMATION: Assistant Professor of Management Science and Engineering and of Computer Science. Administrative Contact: Jackie Nguyen, Administrative Associate. Huang Engineering Center, Stanford, CA 94305.

I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.

Selected recent papers:
Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, With Jack Murtagh, Omer Reingold, and Salil P. Vadhan, In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv)
Singular Value Approximation and Reducing Directed to Undirected Graph Sparsification
With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian. Improves stochastic convex optimization in parallel and DP settings; in particular, it achieves nearly linear time for DP-SCO in low-dimensional settings.

Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(√n · L) iterations, each consisting of solving Õ(1) linear systems and additional nearly linear time computation.
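Written out as a worked comparison (an illustrative restatement, under the assumption that "nearly linear" means Õ(nnz(A)) work per iteration; Õ hides polylogarithmic factors):

```latex
% Total cost of the interior-point method described above:
% iteration count times per-iteration cost.
\[
  \underbrace{\tilde{O}(\sqrt{n}\,L)}_{\text{iterations}}
  \;\times\;
  \underbrace{\bigl(\tilde{O}(1)\ \text{linear-system solves}
      \;+\; \tilde{O}(\mathrm{nnz}(A))\ \text{additional work}\bigr)}_{\text{cost per iteration}}
\]
```

For context, classical path-following interior-point methods need on the order of √m · L iterations, so improving √m to √n matters precisely when the number of constraints far exceeds the number of variables.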
Lower Bounds for Finding Stationary Points II: First-Order Methods, With Yair Carmon, John C. Duchi, and Oliver Hinder
With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023)

Faculty Spotlight: Aaron Sidford - Management Science and Engineering

Selected works (via Google Scholar):
Path Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow
Accelerated Methods for Nonconvex Optimization
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time
Uniform Sampling for Matrix Approximation
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model (a minimal sketch of the generative-model setting appears after this list)
Single Pass Spectral Sparsification in Dynamic Streams
Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification
Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization
Accelerating Stochastic Gradient Descent for Least Squares Regression
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming
Lower Bounds for Finding Stationary Points I
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
Competing with the Empirical Risk Minimizer in a Single Pass
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation
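The generative-model setting in the MDP entry above assumes sample access to next states s' ~ P(· | s, a). Here is a minimal sampled value-iteration loop in that model; the function name, sample counts, and iteration counts are illustrative assumptions, not the paper's variance-reduced method, which achieves near-optimal sample complexity.

```python
import numpy as np

def sampled_value_iteration(sample_next_state, n_states, n_actions, reward,
                            gamma=0.9, n_samples=100, n_iters=50, seed=0):
    """Approximate value iteration using only a generative model.

    `sample_next_state(s, a, rng)` draws s' ~ P(. | s, a); `reward` is an
    (n_states, n_actions) array. Parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    v = np.zeros(n_states)
    for _ in range(n_iters):
        q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                # Monte Carlo estimate of E[v(s')] via the generative model.
                nxt = np.array([sample_next_state(s, a, rng)
                                for _ in range(n_samples)])
                q[s, a] = reward[s, a] + gamma * v[nxt].mean()
        v = q.max(axis=1)  # Bellman backup on the sampled Q-values
    return v, q.argmax(axis=1)
```

The point of the sharper analyses is that naive per-(s, a) sampling like this is wasteful; reusing samples across iterations (variance reduction) cuts the total sample count.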
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.

My research is on the design and theoretical analysis of efficient algorithms and data structures.
"About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data."
"Collection of new upper and lower sample complexity bounds for solving average-reward MDPs."
Aleksander Mądry: Generalized Preconditioning and Network Flow Problems.

Teaching:
CME 305/MS&E 316: Discrete Mathematics and Algorithms. Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms; emphasis will be on providing mathematical tools for combinatorial optimization.
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms, which I created. Some I am still actively improving, and all of them I am happy to continue polishing.
22nd Max Planck Advanced Course on the Foundations of Computer Science: we will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.

CV (last updated 01-2022): PDF

Parallelizing Stochastic Gradient Descent for Least Squares Regression: in particular, this work presents a sharp analysis of (1) mini-batching, a method of averaging many stochastic gradient samples per update (a minimal mini-batch SGD sketch follows below).
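A minimal sketch of the mini-batching idea just described, for the least squares objective min_w (1/2n)||Xw - y||^2; the step size, batch size, and step count are illustrative constants, not the sharp choices from the paper's analysis.

```python
import numpy as np

def minibatch_sgd_least_squares(X, y, batch_size=32, lr=0.1, n_steps=2000, seed=0):
    """Mini-batch SGD for least squares: each step averages `batch_size`
    stochastic gradients, reducing the variance of the update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        idx = rng.integers(0, n, size=batch_size)   # sample a mini-batch
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch_size    # averaged stochastic gradient
        w -= lr * grad
    return w

# Usage: recover a planted linear model from noisy observations.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(5000)
w_hat = minibatch_sgd_least_squares(X, y)
print(np.linalg.norm(w_hat - w_true))  # small recovery error
```

Larger batches allow larger stable step sizes and parallelize trivially, which is the trade-off the sharp analysis quantifies.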
Selected papers and talks:
The Complexity of Infinite-Horizon General-Sum Stochastic Games, Yujia Jin, Vidya Muthukumar, Aaron Sidford, Innovations in Theoretical Computer Science (ITCS 2023) [pdf] [poster]
Optimal and Adaptive Monteiro-Svaiter Acceleration, Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, Advances in Neural Information Processing Systems (NeurIPS 2022)
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood, Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, Advances in Neural Information Processing Systems (NeurIPS 2022)
Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff, Innovations in Theoretical Computer Science (ITCS 2018)

Additional papers appeared in FOCS, STOC, SODA, ITCS, COLT, ICML, NeurIPS, AISTATS, ALT, and ICALP between 2016 and 2022, as well as in NeurIPS 2014 and FOCS 2013.

From Improved Lower Bounds for Submodular Function Minimization: applying this technique, we prove that any deterministic SFM algorithm …

"Sample complexity for average-reward MDPs?"
"Computing stationary solutions for multi-agent RL is hard: indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard." (BayLearn, 2019)
"A special case where variance reduction can be used for nonconvex optimization (monotone operators)."

The design of algorithms is traditionally a discrete endeavor. My broad research interest is in theoretical computer science, and my focus is on fundamental mathematical problems in data science at the intersection of computer science, statistics, optimization, biology, and economics. My research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high dimensional geometry (CV). Algorithms, Optimization, and Numerical Analysis.

Here is a slightly more formal third-person biography, and here is a recent-ish CV.

Additional titles (via Google Scholar): The Complexity of Infinite-Horizon General-Sum Stochastic Games; The Complexity of Optimizing Single and Multi-player Games; A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions; On the Sample Complexity for Average-reward Markov Decision Processes; Stochastic Methods for Matrix Games and its Applications; Acceleration with a Ball Optimization Oracle; Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.

Variance Reduction for Matrix Games, ICML Workshop on Reinforcement Learning Theory, 2021
On the Sample Complexity of Average-reward MDPs, BayLearn, 2021
Coordinate Methods for Matrix Games, Neural Information Processing Systems (NeurIPS, Oral), 2020

This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications (a generic accelerated-gradient sketch follows below).
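For reference, here is the basic gradient-only acceleration template that such Hessian-free methods build on. This is a generic Nesterov-style sketch, not the nonconvex method from the paper above, which adds negative-curvature detection and restarting on top of ideas like these; the step size and step count are illustrative.

```python
import numpy as np

def accelerated_gradient_descent(grad, x0, lr=0.1, n_steps=500):
    """Nesterov-style accelerated gradient descent using only gradient calls
    (hence "Hessian free"): a gradient step from an extrapolated point,
    followed by a momentum update."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_steps):
        x_next = y - lr * grad(y)                        # gradient step at the look-ahead point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2        # momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x

# Usage: minimize a simple smooth quadratic f(x) = x^T A x / 2 - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b
x_star = accelerated_gradient_descent(grad, np.zeros(2), lr=0.09)
print(x_star, np.linalg.solve(A, b))  # the two should roughly agree
```

The step size must stay below the inverse smoothness constant (here 1/10), which is why lr=0.09 is used in the example.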
Aaron Sidford is an Assistant Professor in the departments of Management Science and Engineering and Computer Science at Stanford University. I regularly advise Stanford students from a variety of departments.

Publications:
The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin, Vidya Muthukumar, Aaron Sidford, To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv)
Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv)
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood, With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur
Improved Lower Bounds for Submodular Function Minimization, With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang, In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv)
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, With Yair Carmon, Arun Jambulapati, and Yujia Jin, International Conference on Machine Learning (ICML 2022) (arXiv)
Efficient Convex Optimization Requires Superlinear Memory, With Annie Marsden, Vatsal Sharan, and Gregory Valiant, Conference on Learning Theory (COLT 2022)
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method, Conference on Learning Theory (COLT 2022) (arXiv)
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales, With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, With Arun Jambulapati, Yujia Jin, and Kevin Tian, International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv)
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary, With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers, With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng, In Symposium on Theory of Computing (STOC 2022) (arXiv)
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian, In Symposium on Discrete Algorithms (SODA 2022) (arXiv)
Algorithmic trade-offs for girth approximation in undirected graphs, With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick, In Symposium on Discrete Algorithms (SODA 2022)
Computing Lewis Weights to High Precision, With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan
Stochastic Bias-Reduced Gradient Methods, With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv)
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, In Conference on Learning Theory (COLT 2021) (arXiv)
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood, With Nima Anari, Moses Charikar, and Kirankumar Shiragur
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs, In International Conference on Machine Learning (ICML 2021) (arXiv)
Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances, With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Theory of Computing (STOC 2021) (arXiv)
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers, In Symposium on Discrete Algorithms (SODA 2021) (arXiv)
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration, In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv) (a minimal extragradient sketch appears after this list)
Acceleration with a Ball Optimization Oracle, With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian, In Conference on Neural Information Processing Systems (NeurIPS 2020)
Instance Based Approximations to Profile Maximum Likelihood, In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv)
Large-Scale Methods for Distributionally Robust Optimization, With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution)
High-precision Estimation of Random Walks in Small Space, With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan, In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv)
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs, With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Foundations of Computer Science (FOCS 2020)
With Yair Carmon, Yujia Jin, and Kevin Tian
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time, Invited to the special issue (arXiv before merge)
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity, In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv)
Efficiently Solving MDPs with Stochastic Mirror Descent, In International Conference on Machine Learning (ICML 2020) (arXiv)
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond, With Oliver Hinder and Nimit Sharad Sohoni, In Conference on Learning Theory (COLT 2020) (arXiv)
Solving Tall Dense Linear Programs in Nearly Linear Time, With Jan van den Brand, Yin Tat Lee, and Zhao Song, In Symposium on Theory of Computing (STOC 2020)
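To illustrate the extragradient template behind entries like Relative Lipschitzness in Extragradient Methods above, here is a minimal sketch for the bilinear matrix game min_x max_y x^T A y over probability simplices. The Euclidean steps, simplex projection, and constant step size are illustrative assumptions; the papers use more refined mirror/proximal setups and step sizes.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0)

def extragradient_matrix_game(A, lr=0.1, n_steps=2000):
    """Extragradient for min_x max_y x^T A y: a look-ahead gradient step,
    then a real step using the gradients at the look-ahead point.
    Averaged iterates approximate an equilibrium."""
    m, n = A.shape
    x, y = np.full(m, 1 / m), np.full(n, 1 / n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(n_steps):
        # 1) look-ahead step
        x_half = project_simplex(x - lr * (A @ y))
        y_half = project_simplex(y + lr * (A.T @ x))
        # 2) actual step, using gradients evaluated at the look-ahead point
        x = project_simplex(x - lr * (A @ y_half))
        y = project_simplex(y + lr * (A.T @ x_half))
        x_avg += x
        y_avg += y
    return x_avg / n_steps, y_avg / n_steps

# Usage: matching pennies; the equilibrium is uniform play for both sides.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = extragradient_matrix_game(A)
print(x_bar, y_bar)  # both should be close to (0.5, 0.5)
```

The look-ahead step is what distinguishes extragradient from plain gradient descent-ascent, which can cycle on bilinear games rather than converge.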