Recall that our inferences are usually based on integrals of the posterior, e.g. a marginal distribution:
$P(\Theta_0=\theta_0|D) = \int d\theta_1\,P(\Theta_0=\theta_0,\Theta_1=\theta_1|D)$,
where $D$ is our data, $\Theta$ are parameters and $\theta$ is a particular parameter value. In an MCMC analysis, we produce a chain where the density of points is proportional to the posterior, and then approximate these marginals as Monte Carlo integrals,
$\int d\theta_1\,P(\Theta_0\approx\theta_0,\Theta_1=\theta_1|D) \approx \sum_{i:\,\theta_0^{(i)}\approx\theta_0} 1 =$ the number of samples in the $\Theta_0$ bin containing $\theta_0$, i.e. the height of an ordinary histogram (up to normalization).
Note that this is the simplest example of a Monte Carlo integral: replacing an integration by a weighted sum over samples randomly generated from a helpful distribution, where in this case the leftover weight is unity.
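As a minimal sketch of this step in Python (assuming the chain is stored as a NumPy array with one column per parameter; the array, its shape, and the toy numbers here are illustrative, not from the text):

```python
import numpy as np

# Hypothetical chain: an (n_samples, 2) array whose columns hold the
# theta_0 and theta_1 values of each sample, with point density
# proportional to P(Theta_0, Theta_1 | D).
rng = np.random.default_rng(0)
chain = rng.multivariate_normal(mean=[0.0, 1.0],
                                cov=[[1.0, 0.5], [0.5, 2.0]],
                                size=10_000)

# Marginalizing over theta_1 amounts to histogramming the theta_0 column:
# each bin count is the unweighted Monte Carlo sum above, and
# density=True supplies the overall normalization.
counts, edges = np.histogram(chain[:, 0], bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# (centers, counts) now approximates P(Theta_0 = theta_0 | D).
```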
Suppose that, after doing our MCMC analysis, additional information comes along and we want to produce a combined inference. This could be in the form of new data or new prior information, but for concreteness, say that there is a new data set, $D_2$, and write the original data as $D_1$. Assuming independence of the two data sets, the combined posterior is
$P(\Theta_0=\theta_0|D_1,D_2) \propto \int d\theta_1\,P(D_1|\Theta_0=\theta_0,\Theta_1=\theta_1)\,P(D_2|\Theta_0=\theta_0,\Theta_1=\theta_1)\,P(\Theta_0=\theta_0,\Theta_1=\theta_1)$.
We can either run a brand new chain incorporating both likelihoods, or notice that the samples from the first experiment can simply be re-weighted in the Monte Carlo integration above:
$P(\Theta_0\approx\theta_0|D_1,D_2) \propto \sum_{i:\,\theta_0^{(i)}\approx\theta_0} P(D_2|\Theta_0=\theta_0^{(i)},\Theta_1=\theta_1^{(i)})$,
where $i$ runs over the samples of the original chain.
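A minimal sketch of this re-weighting, reusing the toy chain from the sketch above; the form of the $D_2$ likelihood is an assumed toy model, not anything specified in the text:

```python
import numpy as np

# Toy chain from the earlier sketch (columns: theta_0, theta_1).
rng = np.random.default_rng(0)
chain = rng.multivariate_normal(mean=[0.0, 1.0],
                                cov=[[1.0, 0.5], [0.5, 2.0]],
                                size=10_000)

# Hypothetical new-data likelihood: suppose D_2 constrains theta_1 to lie
# near 1.2 with Gaussian width 0.3 (a made-up model for illustration).
def log_like_d2(theta0, theta1):
    return -0.5 * ((theta1 - 1.2) / 0.3) ** 2

# Evaluate the new likelihood at every original sample and use it as an
# importance weight; subtracting the max before exponentiating avoids
# underflow and is absorbed by the histogram normalization.
log_w = log_like_d2(chain[:, 0], chain[:, 1])
weights = np.exp(log_w - log_w.max())

# The weighted histogram of the theta_0 column is the re-weighted Monte
# Carlo sum: an approximation to P(Theta_0 = theta_0 | D_1, D_2).
counts, edges = np.histogram(chain[:, 0], bins=50,
                             weights=weights, density=True)
```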