# Aaditya Ramdas

## Papers (76 papers)

On the bias, risk and consistency of sample means in multi-armed bandits.

A Higher-Order Kolmogorov-Smirnov Test.

A sequential algorithm for false discovery rate control on directed acyclic graphs

The limits of distribution-free conditional predictive inference

ADDIS: an adaptive discarding algorithm for online FDR control with conservative nulls.

The bias of the sample mean in multi-armed bandits can be positive or negative.

Function-Specific Mixing Times and Concentration Away from Equilibrium

Are sample means in multi-armed bandits positively or negatively biased?

Conformal Prediction Under Covariate Shift.

SAFFRON: an adaptive algorithm for online control of the false discovery rate.

Iterative Methods for Solving Factorized Linear Systems.

Asynchronous Online Testing of Multiple Hypotheses.

On kernel methods for covariates that are rankings

The power of online thinning in reducing discrepancy

Towards "simultaneous selective inference": post-hoc bounds on the false discovery proportion

Uniform, nonparametric, non-asymptotic confidence sequences

Decoding from Pooled Data: Phase Transitions of Message Passing.

A framework for Multi-A(rmed)/B(andit) testing with online FDR control.

On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests.

Online control of the false discovery rate with decaying memory.

DAGGER: A sequential algorithm for FDR control on DAGs.

QuTE: Decentralized multiple testing on sensor networks with false discovery rate control.

Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy.

A Unified Treatment of Multiple Testing with Prior Knowledge

More powerful and flexible rules for online FDR control with memory and weights

STAR: A general interactive framework for FDR control under structural constraints

Rows versus Columns: Randomized Kaczmarz or Gauss–Seidel for Ridge Regression

The p‐filter: multilayer false discovery rate control for grouped hypotheses

A unified treatment of multiple testing with prior knowledge using the p-filter.

Towards a deeper geometric, analytic and algorithmic understanding of margins.

Asymptotic behavior of $\ell_p$-based Laplacian regularization in semi-supervised learning.

Universality of Mallows' and degeneracy of Kendall's kernels for rankings.

Minimax Lower Bounds for Linear Independence Testing.

Classification Accuracy as a Proxy for Two Sample Testing.

Sequential Nonparametric Testing with the Law of the Iterated Logarithm.

Convergence Properties of the Randomized Extended Gauss–Seidel and Kaczmarz Methods

Nonparametric Independence Testing for Small Sample Sizes.

Fast Two-Sample Testing with Analytic Representations of Probability Measures

Regularized brain reading with shrinkage and smoothing

Margins, Kernels and Non-linear Smoothed Perceptrons.

An Analysis of Active Learning with Uniform Feature Noise.

Stein Shrinkage for Cross-Covariance Operators and Kernel Independence Testing

Algorithmic Connections between Active Learning and Stochastic Convex Optimization.

Optimal rates for stochastic convex optimization under Tsybakov noise condition.

Optimal Stochastic Convex Optimization Through The Lens Of Active Learning