Unless I'm confused about probability (which is very possible), I have most of a combinatorial solution to this problem. The short answer is: don't bother, because it's bloody awful (except maybe for the purpose of proving Dan wrong with some slightly-less-awful estimation, but I'm not even quite to the point where that's possible yet).
The long answer follows.
First, the thing I might be confused about: you can simulate choosing $n$ numbers uniformly from an interval $[0,\alpha]$ by first choosing the largest of them (say $y$) with a pdf on $[0,\alpha]$ proportional to $y^{n-1}$, and then picking the remaining $n-1$ uniformly from the interval $[0,y]$. If this is false, my entire argument is hosed, and it's the kind of thing I get confused about easily, but I'm pretty sure it's true.
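If you'd rather sanity-check that claim numerically than take my word for it, here's a quick Monte Carlo sketch (my own illustration, not part of the argument; it assumes numpy, and `n`, `alpha`, and the trial count are arbitrary choices). It draws $n$ points both ways and compares the means of the order statistics, which should agree up to sampling noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, trials = 5, 1.0, 200_000

# Scheme A: draw all n points uniformly from [0, alpha].
direct = rng.uniform(0, alpha, size=(trials, n))

# Scheme B: draw the maximum y with density proportional to y^(n-1) on [0, alpha]
# (inverse CDF: y = alpha * U^(1/n)), then the other n-1 points uniformly from [0, y].
y = alpha * rng.uniform(size=trials) ** (1 / n)
rest = y[:, None] * rng.uniform(size=(trials, n - 1))
two_step = np.column_stack([y, rest])

# Mean of each order statistic under both schemes; these should match up to noise.
print(np.sort(direct, axis=1).mean(axis=0))
print(np.sort(two_step, axis=1).mean(axis=0))
```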
Now, if the number you're adding to $S$ is ever larger than something you removed, you can totally ignore it, thereby reducing the problem to something like its instance with $n-1$ numbers. (I say "something like" because you're more likely to generate something that gets thrown away, since you're picking from a larger interval. This is a pain to fix, so I'll just worry about how long it takes to get rid of the first guy.)
This leads to the following way of rephrasing the problem:
1. Generate the largest number $y_0$ of $n$ things in $[0,1]$. Put it in $D$.
2. With probability $1-y_0$, be done (i.e., reduced to $n-1$ things). Otherwise, generate the largest number $y_1$ of $n$ things in $[0,y_0]$.
3. With probability $1-y_1$, be done. Otherwise, generate the largest number $y_2$ of $n$ things in $[0,y_1]$.
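If you want to see this procedure in action, here's a small simulation sketch (again just my own illustration, assuming numpy; the function name and parameters are made up). It uses the fact that the largest of $n$ uniforms on $[0,y]$ can be drawn as $y\,U^{1/n}$, and counts how many $y_k$'s get generated before we stop:

```python
import numpy as np

def steps_to_eliminate(n, rng):
    """Follow the rephrased procedure: draw y_0 as the max of n uniforms on [0, 1],
    then at each stage stop with probability 1 - y, otherwise draw the max of n
    uniforms on [0, y]. Returns how many y's were generated before stopping."""
    y = rng.uniform() ** (1 / n)       # y_0: largest of n uniforms on [0, 1]
    steps = 1
    while rng.uniform() < y:           # continue with probability y
        y *= rng.uniform() ** (1 / n)  # next y: largest of n uniforms on [0, y]
        steps += 1
    return steps

rng = np.random.default_rng(1)
n = 5
samples = [steps_to_eliminate(n, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))     # empirical expected number of steps
```

For small $n$ the empirical average should land comfortably below the $n+1$ bound discussed below.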
So we just need to evaluate the expected value $E_k$ of $1-y_k$ for each $k$. This is given by
$$
E_k=\frac{\int_{y_0=0}^1 \int_{y_1=0}^{y_0} \dots \int_{y_k=0}^{y_{k-1}} (1-y_k) y_k^{n-1}y_{k-1}^{n-1} \dots y_1^{n-1}y_0^{n-1} \, dy_k \, dy_{k-1} \dots dy_1 \, dy_0}{\int_{y_0=0}^1 \int_{y_1=0}^{y_0} \dots \int_{y_k=0}^{y_{k-1}} y_k^{n-1}y_{k-1}^{n-1} \dots y_1^{n-1}y_0^{n-1} \, dy_k \, dy_{k-1} \dots dy_1 \, dy_0}
$$
which ends up giving $1-\prod_{i=1}^{k+1} \frac{in}{in+1}$ (the product runs to $k+1$, since there are $k+1$ nested integrals). So, if we've reached the $k^{\rm th}$ step (the one that generates $y_{k-1}$), we'll continue to the $(k+1)^{\rm st}$ step with probability $1-E_{k-1}=\prod_{i=1}^k \frac{in}{in+1}$; you can use this to find the expected time it takes to eliminate a number from consideration.
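To spell out where that product comes from: the nested integrals telescope from the inside out, each successive integration contributing one factor, e.g.
$$
\int_{y_k=0}^{y_{k-1}} y_k^{n-1}\,dy_k=\frac{y_{k-1}^{n}}{n},\qquad
\int_{y_{k-1}=0}^{y_{k-2}} y_{k-1}^{n-1}\cdot\frac{y_{k-1}^{n}}{n}\,dy_{k-1}=\frac{y_{k-2}^{2n}}{n\cdot 2n},\qquad\dots
$$
so the denominator comes out to $1/\prod_{i=1}^{k+1} in$. The numerator is the denominator minus the same integral with an extra factor of $y_k$, which telescopes the same way to $1/\prod_{i=1}^{k+1} (in+1)$; dividing gives the product above.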
In particular, the odds of continuing never exceed $\frac{n}{n+1}$, so the expected number of steps before we lose a number is definitely less than $\sum_{k\ge 0}\left(\frac{n}{n+1}\right)^k = n+1$.
(Unfortunately, once we lose a number this analysis doesn't quite apply. Intuitively, though, it should only be off by some constant factor, and so in particular the expected number of steps required to finish should still be finite. Making this rigorous is left as an exercise to the reader.)