# Deriving the Ideal Gas Law: A Statistical Story

The ideal gas law is $PV=NkT$, where $P$ is the pressure, $V$ is the volume, $N$ is the number of particles, $k=1.38\times10^{-23}\,\mathrm{m}^2\,\mathrm{kg}\,\mathrm{s}^{-2}\,\mathrm{K}^{-1}$ is Boltzmann's constant, and $T$ is the temperature. It constitutes one of the simplest and most widely applied “equations of state” in all of physics, and is (or will become) incredibly familiar to any student of not only physics but also related fields such as chemistry and various engineering disciplines.

Recently, I’ve come to appreciate the ideal gas law as a very good illustrative example of what statistical mechanics is capable of. In the eighteenth and nineteenth centuries (during the Industrial Revolution), the field of thermodynamics arose in pursuit of answers to how much energy can be extracted from systems. At the time, the notion of temperature was hotly debated, with one ostensibly reasonable theory being that the flow of a fluid, called “caloric,” facilitated heat transfer. It should be appreciated just how many important results could be derived by our predecessors without precise knowledge of the microscopic physics involved.

Statistical mechanics arises from a desire to understand how thermodynamics arises from microscopic physics, either classical or quantum. While classical thermodynamics is able to achieve a great deal, it is philosophically difficult to reconcile the microscopic and macroscopic worlds without some statistical framework that can translate the former to the latter. Moreover, it allows us to determine the thermodynamic behavior of new systems that are hard to reason about a priori—how, for example, should we expect the stars in a star cluster to behave? What about a complicated, novel quantum field theory?

In this article, I will present a number of different derivations of the ideal gas law within the framework of statistical mechanics. In the process, I hope to motivate the different ensembles of statistical mechanics (which ultimately just encode the behavior of possibly large systems for which only a small amount of bulk information is known). I also hope that an overview of these derivations will be a useful introduction for current physics students (or other interested people) to a notoriously (but arguably needlessly) opaque field.

## A hand-wavy, “kinetic” derivation

Before we proceed with derivations from statistical mechanics, I think that there is utility in understanding at a heuristic and intuitive level what is being stated by the ideal gas law. An ideal gas is a simplified model which treats a gas as a collection of very small, non-interacting particles (they do not collide with each other) which exert forces on the sides of a container (e.g., a box) whenever they collide with them. The pressure is a reflection of this force, and is (by definition) force per unit area. This can go up if (1) the number of particles per unit volume is increased (so that more particles are around to hit the sides of the box), or if (2) the typical velocities (read: temperature) of the particles are increased (so that each collision of a particle with the sides of the box imparts more momentum on average).

We can start by saying that the pressure can also be thought of as the normal momentum flux, or the amount of momentum perpendicular to the sides of the box which moves through a surface per unit area (this is equivalent to the force per unit area, since force is momentum per time). We can write

$\mathrm{pressure} \simeq \textnormal{momentum density}\times\textnormal{velocity} \simeq (\textnormal{mass density}\times\textnormal{velocity})\times\textnormal{velocity}$

For particles with mass $m$, we can write

$P \simeq \frac{mN}{V}\times v^2$

At a temperature $T$, the typical kinetic energy of a particle is $\frac{1}{2}mv^2\sim kT$ (up to numerical constants). This means that $v^2\sim kT/m$, and, substituting, we have

$P \simeq \frac{mN}{V}\times\frac{kT}{m}$

or

$PV\simeq NkT$

In fact, kinetic theory can be used to show that this approximate expression is exact: the numerical factor that ultimately ends up in front is $1$. Remarkably, the law does not depend on the mass $m$ of the particles (which cancels out of the equation)—simple measurements of these bulk quantities are not sufficient to find out what the masses of the particles in a gas are, and somehow, at least through these bulk equilibrium quantities, the gas does not appear to care what the particle mass is.
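To make this heuristic concrete, here is a quick numerical sketch (a toy check of my own, not part of any derivation): in equilibrium each velocity component is Gaussian with variance $kT/m$, and the momentum flux through a wall is $P=(N/V)\,m\langle v_x^2\rangle$, which should come out to $NkT/V$.

```python
import numpy as np

# Toy check: estimate ideal-gas pressure as a momentum flux.
# Assumed setup: units with k = m = 1, an arbitrary number density N/V,
# and velocity components drawn from the equilibrium Gaussian of variance kT/m.
rng = np.random.default_rng(0)

k, m, T = 1.0, 1.0, 2.0
n_density = 0.5                      # N/V, chosen arbitrarily
vx = rng.normal(0.0, np.sqrt(k * T / m), size=1_000_000)

P_estimate = n_density * m * np.mean(vx**2)   # P = (N/V) m <vx^2>
P_ideal = n_density * k * T                   # ideal gas law: NkT/V
print(P_estimate, P_ideal)                    # agree to a fraction of a percent
```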

## How to think about statistical mechanics

Statistical mechanics (as mentioned above) is a formalism for transforming microscopic physical laws into thermodynamics. Classical thermodynamics, in turn, is based on the determination of so-called thermodynamic potentials, of which the most famous is energy $E$. An infinitesimal change of energy $dE$ is related to changes in entropy $dS$, volume $dV$, and particle number $dN$ as

$dE = TdS - PdV + \mu dN$

where $\mu$ is the chemical potential, and is the amount of energy needed to add an additional particle to the system. This is commonly called the first law of thermodynamics (and is a restatement of the conservation of energy).

Another such potential is the entropy $S$, which curiously was already understood to be an extensive quantity on the basis of thermodynamic arguments alone. An infinitesimal change in the entropy $dS$ can be seen from the above to be

$dS = \frac{1}{T}dE + \frac{P}{T}dV - \frac{\mu}{T}dN$

The other “thermodynamic potentials” are obtained by performing a mathematical operation called a “Legendre transform,” whose interpretation is very neat and summarized well here. For example, we can define the Helmholtz free energy as $F\equiv E-TS$. While this definition doesn’t seem very motivated at first, we can see upon finding $dF$ why we have done this:

$dF = d(E-TS) = dE - (TdS + SdT) = TdS - PdV + \mu dN - (TdS + SdT) = -SdT - PdV + \mu dN$

We see in the differential that the effect of subtracting $TS$ from the energy was to “switch around” $T$ and $S$ within $dF$, with the necessary addition of a minus sign. Similarly, we can define other thermodynamic potentials which are useful in different contexts:

$dH = d(E + PV) = TdS + VdP + \mu dN$ (enthalpy)

$dG = d(E - TS + PV) = -SdT + VdP + \mu dN$ (Gibbs free energy)

$d\Omega = d(E - TS - \mu N) = -SdT - PdV - Nd\mu$ (grand potential)

The differential quantities which naturally appear in the above expressions should be interpreted as the “natural” quantities of the potential. While that seems a bit hand-wavy, I mean that, for example, if you were doing an experiment where you had control over the temperature, pressure, and particle number (as chemists often do), it would probably be most useful for you to reason about your system through the Gibbs free energy, where $dT$, $dP$, and $dN$ appear.

Moreover (and relatedly), the knowledge of any of these potentials as a function of their natural variables will encode all the macroscopic thermodynamical quantities. For example, if we had knowledge of the energy function $E=E(S,V,N)$, then we could use the multivariate chain rule to write

$dE = \left( \frac{\partial E}{\partial S} \right)_{V,N}dS + \left( \frac{\partial E}{\partial V} \right)_{S,N}dV + \left( \frac{\partial E}{\partial N} \right)_{S,V}dN$

where subscripts indicate that a partial derivative is being taken with the subscripted variables fixed (for example, $\left( \frac{\partial E}{\partial V} \right)_{S,N}$ is the volume derivative of the energy where entropy and particle number are held fixed). At the same time, since we know that $dE = TdS - PdV + \mu dN$, we can match the coefficients of $dS$, $dV$, and $dN$ in the two expressions to find

$\left( \frac{\partial E}{\partial S} \right)_{V,N} = T$

$\left( \frac{\partial E}{\partial V} \right)_{S,N} = -P$

$\left( \frac{\partial E}{\partial N} \right)_{S,V} = \mu$

Hence, by taking derivatives of our known energy function $E(S,V,N)$, we can easily find the temperature $T$, pressure $P$, and chemical potential $\mu$ as a function of $S$, $V$, and $N$. This framework can be generalized to other systems. For example, for a one-dimensional problem (e.g., rubber band), we may instead write

$dE = TdS - fdL$

whereby $\left( \frac{\partial E}{\partial L} \right)_{S} = -f$. For a more extreme example, one can consult Wikipedia and write for a black hole

$dE = \frac{\kappa}{8\pi}dA + \Omega dJ + \Phi dQ$

where $\kappa$ is its surface gravity, $A$ is its surface area, $\Omega$ is its angular velocity (of rotation), $J$ is its angular momentum, $\Phi$ is its electrostatic potential, and $Q$ is its charge. By analogy, we can see in exactly what sense Stephen Hawking meant that the entropy of a black hole scales with its area, and that its temperature scales with its surface gravity.

As a fun aside, it turns out that performing a Legendre transform with respect to all of a system's extensive variables is “not allowed” in the sense that it will actually cause the quantity to vanish entirely:

$0 = -SdT + VdP - Nd\mu$

This result is called the Gibbs-Duhem relation.

### Fantastic ensembles and where to find them

Heuristically, most, if not all, equilibrium statistical mechanics problems go as follows: we count the number of microscopic states (“microstates”) which are compatible with a given macroscopic state, and then straightforwardly translate that result into a thermodynamic potential which is most natural to that system. Upon obtaining that thermodynamic potential, we can then use rules from classical thermodynamics to find out any thermodynamical quantity about our system that we want.

A fundamental assumption of statistical mechanics is the principle of indifference, a (sometimes debated) postulate that two microscopic configurations of a system which have the same total energy will be equally probable. A shuffled deck of cards is unlikely to be in order, not because any particular disordered arrangement is more likely than the ordered one, but rather because there are many more disordered arrangements than ordered ones. From this perspective, it is a little bit clearer why solving problems in this field almost always involves counting states, sometimes weighted by some quantity to account for coupling to a heat or particle bath.

Statistical mechanics reasons about ensembles, which can be thought of as probability distributions over a given system where certain macroscopic quantities are known (possibly including, for example, the temperature). Alternatively, they can be thought of as a large collection of macroscopically (but not necessarily microscopically) identical systems. The sense in which the systems are macroscopically identical depends on which type of ensemble is being used (i.e., which macroscopic quantities are shared in common between the systems which makes them “macroscopically identical”).

However, an important fact must be stated: for systems with very many particles, the ensembles ought to agree. On a macroscopic level, an isolated system ought to behave identically to the same system which is in contact with a heat bath of the same temperature. Even though, strictly speaking, the latter is coupled to another system and the former is not, the lack of heat flow means that the systems should be macroscopically indistinguishable. The same goes for a particle bath with the same chemical potential as the system.

I will briefly summarize below the three most common statistical ensembles which are introduced in usual courses on statistical mechanics (although this is not an exhaustive list of all ensembles which could exist). There are some very common tricks to doing problems in all of the ensembles, some of which I will also describe below. The ultimate hope is that the way to utilize those tricks will become clear in the concrete example of an ideal gas.

### Microcanonical Ensemble

In the microcanonical ensemble, we know a system’s total energy $E$, its volume $V$, and the number of particles $N$ within it. As these can be considered the “natural variables” of the ensemble, the “natural potential” of the microcanonical ensemble is the entropy $S=S(E,V,N)$.

In such problems, we will seek to count the total number of states $W$ with energy of exactly $E$ or, in a classical problem where the set of such states has measure zero, with energy within the interval $[E,E+\delta E]$ (i.e., very close to $E$). In other words, in the case of a discrete set of states, we would like to find

$W=\sum_{\textnormal{states}\,i}\Delta(E_i-E)$

where

$\Delta(x)=\begin{cases}1 & x=0\\ 0 & x\neq0\end{cases}$

In the case of continuous states, for example of $N$ particles in three dimensions, we would like to integrate over phase space (noting that the “phase space volume” of a single state is $h$ per conjugate coordinate pair, i.e., $h^{3N}$ for $N$ particles in three dimensions):

$W=\delta E\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\delta(H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)-E)$

where $\delta(x)$ is the Dirac delta distribution and $H$ is the Hamiltonian (in many usual cases, the energy function). While this seems like a disgusting integral, the interpretation is that we want to pick out of our phase space only those states which have energy $E$, which in all cases I can think of is a $(6N-1)$-dimensional surface in phase space. The “integral” in many problems is done geometrically, if it can be done at all. In many cases, it is more straightforward to find the number of states $\tilde{W}$ with energy $\leq E$ (rather than $=E$), or

$\tilde{W}=\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\Theta(E-H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N))$

where $\Theta(x)$ is the Heaviside step function. Then we can find $W$ by differentiating $\tilde{W}$:

$W=\frac{d\tilde{W}}{dE}\delta E$

After finding $\tilde{W}$ (the microscopic, “counting” part of statistical mechanics), we simply translate to the thermodynamic potential (entropy) using the definition of entropy:

$S=k\ln W$

and from here can use classical thermodynamics to find out everything else.

### Canonical Ensemble

In the canonical ensemble, we again know the system’s volume $V$ and $N$ (as in the microcanonical ensemble), but instead of knowing the total energy of the system, we only know the average energy $\bar{\epsilon}$ per particle (which, for fixed underlying physics, is equivalent to knowing $T$). The temperature is often written using the variable $\beta=1/kT$. This has the interpretation of a system being in contact with a heat bath of temperature $T$.

Again, we are required to count the number of states, but this time with higher energy states being exponentially down-weighted (see, e.g., here for a satisfying derivation from Lagrange multipliers). In the discrete case, we have

$Z=\sum_{\textnormal{states}\,i}\exp\left(-\beta E_i\right)$

This time, the sum $Z$ (the partition function) appears to be very similar to the sum $W$ in the microcanonical ensemble, but with weights $\exp\left(-\beta E_i\right)$ (Boltzmann factors) rather than $\Delta(E_i-E)$. Similarly, in the continuous (three-dimensional) case, we have

$Z=\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\exp\left(-\beta H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)\right)$

Note that, if the system comprises many particles with identical behaviors whose energies sum together, i.e., $E=E_1+E_2+\ldots+E_N$ (discrete) or $H(\vec{x}_1,\ldots,\vec{x}_N,\vec{p}_1,\ldots,\vec{p}_N)=h(\vec{x}_1,\vec{p}_1)+h(\vec{x}_2,\vec{p}_2)+\ldots+h(\vec{x}_N,\vec{p}_N)$ (continuous), then the sum/integral decouples, and we often have

$Z=Z_1^N$

where $Z_1$ is the partition function of only a single such particle.
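As a sanity check on this factorization, we can brute-force a toy example (made-up discrete energy levels, nothing specific to a gas): summing Boltzmann factors over every joint state of $N$ independent, distinguishable particles should give exactly $Z_1^N$.

```python
import itertools
import math

# Toy check that Z = Z_1^N for independent, distinguishable particles.
# The energy levels and beta below are arbitrary choices.
beta = 0.7
levels = [0.0, 1.0, 2.5]
N = 3

# Brute force: sum exp(-beta * E_total) over all joint states...
Z_joint = sum(
    math.exp(-beta * sum(state))
    for state in itertools.product(levels, repeat=N)
)
# ...and compare with the single-particle partition function to the Nth power.
Z_1 = sum(math.exp(-beta * e) for e in levels)
print(Z_joint, Z_1**N)   # identical up to floating-point roundoff
```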

Because the natural variables for this ensemble are now $T$, $V$, and $N$, the natural potential for the ensemble is $F=F(T,V,N)$, which can be found via

$F=-kT\ln Z$

Again, after obtaining $F(T,V,N)$, we are home free, and, by means of taking derivatives, can proceed to find out everything of (equilibrium) thermodynamic relevance in our system.

To reiterate, even though in this case the total energy of the system $E$ is not precisely fixed, it will only experience small fluctuations about its expectation value in the case of a realistically large system.

### Grand Canonical Ensemble

In the grand canonical ensemble, we know the average energy per particle $\bar{\epsilon}$ as well as the system volume $V$ (as in the canonical ensemble), but now, in place of knowing the total particle number precisely, we instead know the average particle number $\bar{N}$ (equivalent to knowing $\mu$). This ensemble has the interpretation of dealing with a system which is in contact with both a heat bath of temperature $T$ as well as a particle bath with chemical potential $\mu$.

Counting the states this time requires computation of the so-called grand canonical partition function $\mathcal{Z}$, which in the discrete case takes the form

$\mathcal{Z}=\sum_{\textnormal{states}\,i}\exp\left(-\beta(E_i-\mu N)\right)$

where the Boltzmann factors $\exp\left(-\beta E_i\right)$ have been replaced with the aesthetically similar Gibbs factors $\exp\left(-\beta(E_i-\mu N)\right)$. In the continuous case, we not only integrate over phase space but sum over all possible particle numbers:

$\mathcal{Z}=\sum^\infty_{N=0}\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\exp\left(-\beta\left(H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)-\mu N\right)\right)$

In both cases, we can factor $\exp\left(\beta\mu N\right)$ out of the energy sum/integral, and recognize the remaining summand/integrand as the canonical partition function $Z_N$ for the system at a given particle number $N$:

$\mathcal{Z}=\sum^\infty_{N=0}\exp\left(\beta\mu N\right)Z_N$

Hence, if the canonical problem has already been solved, it is often not too difficult at least in principle to extend it to the grand canonical case.

The natural variables $T$, $V$, and $\mu$ now imply that the natural potential is the grand potential $\Omega=\Omega(T,V,\mu)$, given by

$\Omega=-kT\ln\mathcal{Z}$

in a similar manner to $F=F(T,V,N)$ given the canonical partition function $Z$ in the canonical ensemble.

Once again, even though the particle number is allowed to fluctuate here, in realistically large systems the total particle number $N$ will only experience small fluctuations about its large expectation value, determined by $\mu$.

## Deriving the ideal gas law in the classical (continuous) case

Since all the ensembles should yield the same results in macroscopically large systems, all three ensembles described above can be used to derive the ideal gas law (although not necessarily with the same ease).

Furthermore, we can approach this problem both from classical (continuous) physics (which involves integrating over phase space) or take the high temperature limit of the quantum (discrete) case (which involves summing over states). Both of these approaches should, again, give the same answer as they should converge to the same physics at the regimes that we care about.

### Microcanonical Ensemble: Classical Case

We will seek to find the number of states $W$ with total energy $E$. This process is made easier if we first instead find the number of states $\tilde{W}$ with total energy less than $E$:

$\tilde{W}=\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\Theta(E-H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N))$

Since the positions play no part in the energy of a particle within an ideal gas, the position integrals $\int d^3\vec{x}_1d^3\vec{x}_2\cdots d^3\vec{x}_N$ are easy: they factor out as $V^N$, since the integrand doesn’t depend on them. We have then taken care of half of the integrals and are reassuringly left with

$\tilde{W}=\frac{V^N}{h^{3N}}\int d^3\vec{p}_1d^3\vec{p}_2\cdots d^3\vec{p}_N\Theta(E-H(\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N))$

We may be tempted to evaluate the integral above by hand, but that would probably result in many empty boxes of chalk filled with tears.

Instead, we can find the number of states geometrically, by noting that the Hamiltonian (energy) for such a system is simply the sum of the kinetic energies:

$H(\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)=\frac{p_{1x}^2}{2m}+\frac{p_{1y}^2}{2m}+\frac{p_{1z}^2}{2m}+\ldots+\frac{p_{Nx}^2}{2m}+\frac{p_{Ny}^2}{2m}+\frac{p_{Nz}^2}{2m}$

The hypersurface $H(\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)=E$ can be written as

$p_{1x}^2+p_{1y}^2+p_{1z}^2+\ldots+p_{Nx}^2+p_{Ny}^2+p_{Nz}^2=2mE$

which looks awfully similar to the formula for a sphere (cf. $x^2+y^2+z^2=a^2$). In fact, this is a $3N$-dimensional sphere with radius $\sqrt{2mE}$, where typically $N\gtrsim10^{23}$ particles (the $(N=10^{23})$-dimensional hypersphere concept was memed into oblivion in Berkeley’s statistical mechanics course the year before I took it; our semester was less fun and we instead spent our time being lame and doing problem sets and stuff).

Nevertheless, we trust that mathematicians have taken care of this problem, and indeed a quick look at Wikipedia reveals that a $k$-dimensional hypersphere with radius $a$ has a hypervolume

$\mathcal{V}_k=\frac{\pi^{k/2}}{\Gamma(k/2+1)}a^k$

where $k=3N\gtrsim10^{23}$ and $\Gamma(x)$ is the gamma function, as nature intended. We can use this formula to evaluate the momentum integrals, which yields

$\tilde{W}=\frac{V^N}{h^{3N}}\mathcal{V}_k=\frac{V^N}{h^{3N}}\frac{\pi^{3N/2}}{\Gamma(3N/2+1)}(2mE)^{3N/2}$
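(If the hypersphere formula feels too magical, it can be spot-checked by Monte Carlo for a small dimension; the following aside of my own uses $k=5$, chosen arbitrarily, since $k\sim10^{23}$ is somewhat out of reach.)

```python
import numpy as np
from math import gamma, pi

# Monte Carlo spot check of V_k = pi^(k/2) / Gamma(k/2 + 1) * a^k for small k.
rng = np.random.default_rng(1)
k, a, n = 5, 1.0, 1_000_000

# The fraction of uniform points in the cube [-a, a]^k that land inside the
# k-ball, times the cube volume (2a)^k, estimates the ball's hypervolume.
pts = rng.uniform(-a, a, size=(n, k))
inside = np.sum(pts**2, axis=1) <= a**2
V_mc = inside.mean() * (2 * a)**k

V_formula = pi**(k / 2) / gamma(k / 2 + 1) * a**k
print(V_mc, V_formula)   # agree to well under a percent
```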

Then we can find $W$ by differentiating

$W=\frac{d\tilde{W}}{dE}\delta E=\delta E\,\frac{3N}{2}\frac{2mV^N}{h^{3N}}\frac{\pi^{3N/2}}{\Gamma(3N/2+1)}(2mE)^{3N/2-1}$

This can then be translated into an entropy which, using the rules for logarithms, cleans up immensely:

$S=k\ln W=Nk\ln V+\left(\frac{3}{2}N-1\right)k\ln E+g(N)$

where $g(N)$ is a hideous function of $N$ which, fortunately, we do not need to worry about so long as it turns out that all of our partial derivatives from now on are taken with $N$ fixed. Note that, since $N\gg1$, this is practically

$S=k\ln W=Nk\ln V+\frac{3}{2}Nk\ln E+g(N)$

Then we can write

$dS=\frac{1}{T}dE+\frac{P}{T}dV-\frac{\mu}{T}dN=\left( \frac{\partial S}{\partial E} \right)_{V,N}dE+\left( \frac{\partial S}{\partial V} \right)_{E,N}dV+\left( \frac{\partial S}{\partial N} \right)_{E,V}dN$

We can then find the pressure by identifying

$\frac{P}{T}=\left( \frac{\partial S}{\partial V} \right)_{E,N}=\frac{Nk}{V}$

which yields the ideal gas law:

$PV=NkT$
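The chain of steps above (write down $S$, take partial derivatives, identify $T$ and $P$) is mechanical enough to hand to a computer algebra system. Here is a sketch using sympy, keeping only the $V$- and $E$-dependent terms of the entropy as above:

```python
import sympy as sp

# Symbolic check of the microcanonical route to the ideal gas law.
E, V, N, kB = sp.symbols('E V N k', positive=True)

# Entropy of the ideal gas, dropping the N-only terms g(N).
S = N * kB * sp.log(V) + sp.Rational(3, 2) * N * kB * sp.log(E)

T = 1 / sp.diff(S, E)      # from (dS/dE)_{V,N} = 1/T
P = T * sp.diff(S, V)      # from (dS/dV)_{E,N} = P/T

print(sp.simplify(P * V - N * kB * T))   # 0, i.e., PV = NkT
```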

### Canonical Ensemble: Classical Case

We saw that one difficulty with the microcanonical ensemble was that the integrals were too hard to do algebraically (at least if one wants to still be alive afterwards). The fundamental issue is that, when we fix the total energy of the system $E$, we force the energy of any given particle to depend on all the others. For example, if particle $1$ turns out to be hogging most of the energy, then the rest of the particles must necessarily have much less energy, and the integrals are coupled accordingly.

In contrast, if we take the canonical approach of only fixing the average particle energy $\bar{\epsilon}$, we no longer have this issue, since particle $1$ hogging all the energy does not at all affect the values of energy allowed for the other particles (which still hover around $\bar{\epsilon}$). We expect (and will come to see) that the integrals decouple appropriately.

We can compute the partition function which is analogous to state counting once again:

$Z=\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\exp\left(-\beta H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)\right)$

Again, since the energy of the gas does not depend on the location of the particles but only their momenta, we can quickly evaluate the position integrals:

$Z=\frac{V^N}{h^{3N}}\int d^3\vec{p}_1d^3\vec{p}_2\cdots d^3\vec{p}_N\exp\left(-\beta\left(\frac{p_{1x}^2}{2m}+\frac{p_{1y}^2}{2m}+\frac{p_{1z}^2}{2m}+\ldots+\frac{p_{Nx}^2}{2m}+\frac{p_{Ny}^2}{2m}+\frac{p_{Nz}^2}{2m}\right)\right)$

where we have substituted in the Hamiltonian for an ideal gas. Notably, the exponential can be broken up as follows:

$Z=\frac{V^N}{h^{3N}}\int d^3\vec{p}_1d^3\vec{p}_2\cdots d^3\vec{p}_N\exp\left( -\frac{\beta p_{1x}^2}{2m} \right)\exp\left( -\frac{\beta p_{1y}^2}{2m} \right)\exp\left( -\frac{\beta p_{1z}^2}{2m} \right)\ldots\exp\left( -\frac{\beta p_{Nx}^2}{2m} \right)\exp\left( -\frac{\beta p_{Ny}^2}{2m} \right)\exp\left( -\frac{\beta p_{Nz}^2}{2m} \right)$

We see that each factor in the integrand only depends on one component of one particle’s momentum, so we can pull each factor out of all the integrals that don’t depend on it, and we will obtain

$Z=\frac{V^N}{h^{3N}}\left(\int dp_{1x}\exp\left( -\frac{\beta p_{1x}^2}{2m} \right)\right)\left(\int dp_{1y}\exp\left( -\frac{\beta p_{1y}^2}{2m} \right)\right)\left(\int dp_{1z}\exp\left( -\frac{\beta p_{1z}^2}{2m} \right)\right)\left(\int dp_{Nx}\exp\left( -\frac{\beta p_{Nx}^2}{2m} \right)\right)\left(\int dp_{Ny}\exp\left( -\frac{\beta p_{Ny}^2}{2m} \right)\right)\left(\int dp_{Nz}\exp\left( -\frac{\beta p_{Nz}^2}{2m} \right)\right)$

We can recognize this as $3N$ copies of the same (Gaussian) integral:

$Z=\frac{V^N}{h^{3N}}\left(\int dp\exp\left( -\frac{\beta p^2}{2m} \right)\right)^{3N}=\frac{V^N}{h^{3N}}\left(\frac{2\pi m}{\beta}\right)^{3N/2}=\frac{V^N}{h^{3N}}\left(2\pi mkT\right)^{3N/2}$

If we had been clever from the start, we could have recognized that the total partition function $Z$ decouples into $N$ identical smaller partition functions:

$Z=Z_1^N$

where $Z_1$, the partition function for just one of these identical particles, is simply given by

$Z_1=\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}$
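As a numerical sanity check of this closed form (in assumed units with $m=kT=h=1$ and an arbitrary $V$), we can do the Gaussian momentum integral by brute force on a grid:

```python
import numpy as np

# Numerical check of Z_1 = (V/h^3) (2 pi m k T)^(3/2).
# Units with m = kT = h = 1 are assumed; V is an arbitrary choice.
m, kT, h, V = 1.0, 1.0, 1.0, 2.0
beta = 1.0 / kT

# One 1D Gaussian momentum integral, evaluated as a Riemann sum
# (the integrand is utterly negligible beyond |p| = 20 in these units).
p = np.linspace(-20.0, 20.0, 200_001)
dp = p[1] - p[0]
gauss = np.sum(np.exp(-beta * p**2 / (2 * m))) * dp

Z1_numeric = (V / h**3) * gauss**3                  # three identical integrals
Z1_formula = (V / h**3) * (2 * np.pi * m * kT)**1.5
print(Z1_numeric, Z1_formula)
```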

Anyway, we can now translate to the macroscopic picture by deriving the Helmholtz free energy:

$F=-kT\ln Z=-NkT\left( \ln V + \frac{3}{2}\ln T + \frac{3}{2}\ln\frac{2\pi mk}{h^2} \right)$

We can then write

$dF = -SdT - PdV + \mu dN = \left( \frac{\partial F}{\partial T} \right)_{V,N}dT + \left( \frac{\partial F}{\partial V} \right)_{T,N}dV + \left( \frac{\partial F}{\partial N} \right)_{T,V}dN$

Aiming for the pressure once again, we can identify

$-P = \left( \frac{\partial F}{\partial V} \right)_{T,N} = -\frac{NkT}{V}$

which, once again, leads to the ideal gas law:

$PV=NkT$
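As in the microcanonical case, the derivative bookkeeping can be verified symbolically; here is a small sympy sketch, writing $\ln Z = N \ln Z_1$ directly:

```python
import sympy as sp

# Symbolic check of the canonical route to the ideal gas law.
T, V, N, kB, m, h = sp.symbols('T V N k m h', positive=True)

Z1 = (V / h**3) * (2 * sp.pi * m * kB * T)**sp.Rational(3, 2)
F = -N * kB * T * sp.log(Z1)     # F = -kT ln Z with Z = Z1^N

P = -sp.diff(F, V)               # from (dF/dV)_{T,N} = -P
print(sp.simplify(P * V - N * kB * T))   # 0, i.e., PV = NkT
```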

### Grand Canonical Ensemble: Classical Case

If we insisted in allowing the particle number to vary, we could, in fact, verify that we would achieve the same ideal gas law even if, say, the system were coupled to a large particle bath. We can work in the grand canonical ensemble, where now rather than knowing $N$ we instead know the average number of particles $\bar{N}$ (parameterized by $\mu$).

We now start by performing the relevant “state counting,” which in this case involves deriving the grand canonical partition function:

$\mathcal{Z}=\sum^\infty_{N=0}\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\exp\left(-\beta\left(H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)-\mu N\right)\right)$

We recognize that the integral has a factor $\exp\left( \beta\mu N \right)$ which is constant with respect to the integration variables, and so we can factor it out:

$\mathcal{Z}=\sum^\infty_{N=0}e^{\beta\mu N}\frac{1}{h^{3N}}\int d^3\vec{x}_1d^3\vec{p}_1d^3\vec{x}_2d^3\vec{p}_2\cdots d^3\vec{x}_Nd^3\vec{p}_N\exp\left(-\beta\left(H(\vec{x}_1,\vec{x}_2,\ldots,\vec{x}_N,\vec{p}_1,\vec{p}_2,\ldots,\vec{p}_N)\right)\right)$

Before doing literally anything else (even the trivial volume integrals), we can step back and recognize that the integral within the sum over $N$ is oddly familiar. In fact, the canonical partition function $Z_N$ at a fixed number $N$ appears in the sum. Conveniently, we already know what this is, and can substitute accordingly:

$\mathcal{Z}=\sum^\infty_{N=0}e^{\beta\mu N}\frac{V^N}{h^{3N}}\left(2\pi mkT\right)^{3N/2}$

Noting that everything in the summand is exponentiated to the $N$th power,

$\mathcal{Z}=\sum^\infty_{N=0}\left(e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}\right)^N$

we recognize that the grand canonical partition function is, in fact, a geometric series (which converges so long as $e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}<1$, i.e., for sufficiently negative chemical potential):

$\mathcal{Z} = \frac{1}{1 - e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}}$

We can then relate this to the “natural potential,” in this case the grand potential $\Omega$, as

$\Omega = -kT\ln\mathcal{Z} = kT\ln\left( 1 - e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2} \right)$

We note that

$d\Omega = -SdT - PdV - Nd\mu = \left( \frac{\partial \Omega}{\partial T} \right)_{V,\mu}dT + \left( \frac{\partial \Omega}{\partial V} \right)_{T,\mu}dV + \left( \frac{\partial \Omega}{\partial \mu} \right)_{T,V}d\mu$

We can again identify $P$ via

$-P = \left( \frac{\partial \Omega}{\partial V} \right)_{T,\mu} = -\frac{kT}{1 - e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}}e^{\beta\mu}\frac{1}{h^3}\left(2\pi mkT\right)^{3/2}$

This is ostensibly a very ugly formula, in which we would like to eliminate $\mu$ in favor of $N$. Fortunately, this can be done with the further identification

$-N = \left( \frac{\partial \Omega}{\partial \mu} \right)_{T,V} = -\frac{1}{1 - e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}}e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}$

We see that the mess

$\frac{1}{1 - e^{\beta\mu}\frac{V}{h^3}\left(2\pi mkT\right)^{3/2}}e^{\beta\mu}\frac{1}{h^3}\left(2\pi mkT\right)^{3/2} = \frac{N}{V}$

appears in our expression for $P$, which yields

$-P = -\frac{NkT}{V}$

and so, once again, we find the ideal gas law:

$PV=NkT$
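Since the elimination of $\mu$ is exactly the sort of bookkeeping that is easy to get wrong by hand, here is a sympy sketch of the same three steps (grand potential, two partial derivatives, cancellation of $\mu$):

```python
import sympy as sp

# Symbolic check of the grand canonical route to the ideal gas law.
T, V, mu, kB, m, h = sp.symbols('T V mu k m h', positive=True)
beta = 1 / (kB * T)

# The geometric-series ratio appearing in the grand partition function.
x = sp.exp(beta * mu) * (V / h**3) * (2 * sp.pi * m * kB * T)**sp.Rational(3, 2)
Omega = kB * T * sp.log(1 - x)

P = -sp.diff(Omega, V)     # from (dOmega/dV)_{T,mu} = -P
N = -sp.diff(Omega, mu)    # from (dOmega/dmu)_{T,V} = -N

print(sp.simplify(P - N * kB * T / V))   # 0, i.e., PV = NkT
```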

## Deriving the ideal gas law in the quantum (discrete) case

Quantum mechanics reduces to classical mechanics in the macroscopic limit—Ehrenfest’s theorem states that the expectation values of quantum mechanics follow the usual classical equations of motion. Thus, we should not be particularly surprised that we can also derive the ideal gas law from a quantum mechanical perspective, with states now being discrete rather than part of a continuous phase space.

However, curiously, from a quantum mechanical perspective, the usual Boltzmann distribution is actually only a high temperature limit of the Fermi-Dirac or Bose-Einstein distributions, and does not arise on its own from any natural-looking partition function. In the Fermi-Dirac distribution (which describes fermions), two particles cannot share the same state. In the Bose-Einstein distribution (which describes bosons), they assuredly can (and, compared to the classical picture, are more zealous to do so particularly at low temperatures).

In the following derivations, I will be considering bosons, since counting states while also excluding those where multiple particles share a given state is somewhat more complicated. I will only perform the derivation up to the point where the mathematics becomes identical to the classical case, which must occur at some point for the physics that occurs to be identical.

This section will use some basic facts from quantum mechanics, in particular that the energy of a particle in an infinite square well with dimensions $L_x$, $L_y$, and $L_z$ is

$\epsilon = \frac{\hbar^2}{2m}(k_x^2+k_y^2+k_z^2)$

where the wavenumbers $k_x$, $k_y$, and $k_z$ can only take discrete values

$k_x=\frac{n_x\pi}{L_x}$

$k_y=\frac{n_y\pi}{L_y}$

$k_z=\frac{n_z\pi}{L_z}$

where $n_x,n_y,n_z=1,2,3,\ldots$.

### Microcanonical Ensemble: Quantum Case

We would like to count all of the states where the total energy is equal to the numerical value $E$:

$W = \sum_{\textnormal{states}\,i}\Delta(E_i-E)$

As in the continuous case, we may instead consider all the states with energy $\leq E$:

$\tilde{W} = \sum_{\textnormal{states}\,i}\Theta(E-E_i)$

where $\Theta(x)$ is, again, the Heaviside step function. Ironically, the next step involves approximating the sum as an integral, in the limit where the states are very close together—this step makes it much more plausible that the classical and quantum procedures will ultimately end up giving the same answer.

The total energy of the system can be written as

$E = \frac{\hbar^2}{2m}\left( k_{1x}^2 + k_{1y}^2 + k_{1z}^2 + \ldots + k_{Nx}^2 + k_{Ny}^2 + k_{Nz}^2 \right)$

which can again be rearranged into the equation for a $3N$-dimensional sphere:

$k_{1x}^2 + k_{1y}^2 + k_{1z}^2 + \ldots + k_{Nx}^2 + k_{Ny}^2 + k_{Nz}^2 = \frac{2mE}{\hbar^2}$

In the single particle $k$-space, we can work out that the “volume” of a single state is given by incrementing $n_x$, $n_y$, and $n_z$, and find that it is $(\pi/L_x)(\pi/L_y)(\pi/L_z)=\pi^3/V$ (the volume of the box is $V=L_xL_yL_z$)—a state in the total $k$-space involving all particles thus has volume $\pi^{3N}/V^N$. We can use the hypersphere hypervolume formula to perform the sum:

$\tilde{W} = \frac{1}{2^{3N}}\frac{1}{\left( \pi^{3N}/V^N \right)}\frac{\pi^{3N/2}}{\Gamma(3N/2+1)}\left( \frac{2mE}{\hbar^2} \right)^{3N/2}$

where the factor of $1/2^{3N}$ arises because we only consider positive values of each of the $3N$ wavenumbers $k_i$ (for example, in two dimensions, we would integrate over only the quarter of the circle lying in the first quadrant). In fact, simplifying a bit using $h=2\pi\hbar$, we have

$\tilde{W} = \frac{V^N}{h^{3N}}\frac{\pi^{3N/2}}{\Gamma(3N/2+1)}(2mE)^{3N/2}$

which is exactly the same as the classical result. The rest of the analysis is identical to the classical microcanonical derivation, which can be found above.
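As a sanity check on the counting argument, we can brute-force the single-particle case ($3N=3$): count lattice points $(n_x, n_y, n_z)$, all $\geq 1$, inside a sphere of dimensionless radius $R = \sqrt{2mE}\,L/(\pi\hbar)$ in $n$-space, and compare against one octant of the sphere's volume, $\frac{1}{8}\cdot\frac{4}{3}\pi R^3$. (The function name and radii below are my own illustrative choices.)

```python
import math

def count_states(R):
    """Count triples (nx, ny, nz), all >= 1, with nx^2 + ny^2 + nz^2 <= R^2."""
    R2, total = R * R, 0
    for nx in range(1, int(R) + 1):
        for ny in range(1, int(R) + 1):
            rem = R2 - nx * nx - ny * ny
            if rem >= 1:
                total += math.isqrt(rem)  # number of valid nz values
    return total

for R in (10, 50, 200):
    exact = count_states(R)
    approx = math.pi * R**3 / 6       # (1/8) * (4/3) * pi * R^3
    print(f"R = {R:4d}: counted {exact}, octant volume {approx:.0f}")
```

The count approaches the octant volume as $R$ grows (the discrepancy is a surface term of order $R^2$), which is exactly the regime relevant to macroscopic systems.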

One important note is that, in this case, the phase space volume $h^{3N}$ of a single state arises naturally from counting quantum states, whereas this result was put in by hand in the classical case. Fortuitously, in the classical derivation, we would have been able to derive the ideal gas law with any normalization of phase space, and the only quantity affected would be the chemical potential (since it is defined via a derivative with respect to $N$).

### Canonical Ensemble: Quantum Case

In the canonical ensemble, we would like to compute the partition function $Z$, this time making explicit use of the decoupling of the particles that comes with knowing $\bar{E}$ (equivalently, the temperature) rather than $E$:

$Z = Z_1^N$

where

$Z_1 = \sum^\infty_{n_x=1}\sum^\infty_{n_y=1}\sum^\infty_{n_z=1}\exp\left(-\frac{\beta\pi^2\hbar^2}{2m}\left( \frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2} \right)\right)$

We see that the sum in $Z_1$ breaks up into the product of three smaller, nearly identical sums:

$Z_1 = \left( \sum^\infty_{n_x=1}\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_x^2}n_x^2\right)\right)\left( \sum^\infty_{n_y=1}\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_y^2}n_y^2\right)\right)\left( \sum^\infty_{n_z=1}\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_z^2}n_z^2\right)\right)$

The summands are Gaussians in $n_i$ of width $\sigma = \frac{L_i}{\pi\hbar}\sqrt{mkT}$, and the sums are adequately approximated as Gaussian integrals so long as $\sigma\gg1$, which holds for any macroscopically sized box at all but the most extreme low temperatures. Then we must perform an integral over half of a Gaussian:

$Z_1 = \left( \int^\infty_0du\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_x^2}u^2\right)\right)\left( \int^\infty_0du\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_y^2}u^2\right)\right)\left( \int^\infty_0du\exp\left(-\frac{\beta\pi^2\hbar^2}{2mL_z^2}u^2\right)\right)$

which evaluates to

$Z_1 = \left( \frac{1}{2}\frac{L_x}{\hbar}\sqrt{\frac{2}{\pi}mkT} \right)\left( \frac{1}{2}\frac{L_y}{\hbar}\sqrt{\frac{2}{\pi}mkT} \right)\left( \frac{1}{2}\frac{L_z}{\hbar}\sqrt{\frac{2}{\pi}mkT} \right) = \frac{1}{8}\frac{V}{\hbar^3}\left( \frac{2}{\pi}mkT \right)^{3/2}$

This can be rewritten, using $h=2\pi\hbar$, as

$Z_1 = \frac{V}{h^3}\left(2\pi mkT\right)^{3/2}$

and the steps that follow are identical to the classical case.
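Two quick numerical checks of this section (all numbers below are illustrative choices of mine, not values from the text): first, that the sum over $n$ really is well approximated by the half-Gaussian integral $\sigma\sqrt{\pi/2}$ once $\sigma\gg1$; second, that the two closed forms given above for $Z_1$ agree.

```python
import math

def gaussian_sum(sigma, terms=100000):
    """Sum_{n=1}^{terms} exp(-n^2 / (2 sigma^2))."""
    return sum(math.exp(-n * n / (2.0 * sigma * sigma)) for n in range(1, terms))

# (1) Sum vs. half-Gaussian integral for increasing widths sigma
for sigma in (1.0, 10.0, 1000.0):
    integral = sigma * math.sqrt(math.pi / 2)
    print(f"sigma = {sigma:7.1f}: sum = {gaussian_sum(sigma):10.4f}, "
          f"integral = {integral:10.4f}")

# (2) The two closed forms for Z_1, with helium-like mass at 300 K in 1 cm^3
hbar = 1.054571817e-34
h = 2 * math.pi * hbar
k = 1.380649e-23
m, T, V = 6.6e-27, 300.0, 1e-6

z1_product = (1 / 8) * (V / hbar**3) * (2 * m * k * T / math.pi) ** 1.5
z1_closed = (V / h**3) * (2 * math.pi * m * k * T) ** 1.5
print(z1_product, z1_closed)   # agree to floating-point precision
```

For $\sigma=1000$ the sum and the integral differ only by about $1/2$ (the usual endpoint correction), a relative error of a few parts in $10^4$, while for $\sigma=1$ the approximation is visibly poor, illustrating why the macroscopic limit is needed.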

### Grand Canonical Ensemble: Quantum Case

In the grand canonical ensemble, we use the trick that we can decompose the grand canonical partition function $\mathcal{Z}$ into the sum of canonical partition functions $Z_n$ at fixed $N$:

$\mathcal{Z} = \sum^\infty_{N=0}e^{\beta\mu N}Z_N$

However, because we have already found that the quantum mechanical canonical partition function $Z_N$ is identical to its classical counterpart, the quantum version of $\mathcal{Z}$ must be identical to its classical version as well, and hence by this point we have already arrived at exactly the same result as in the classical case.
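The bookkeeping of this step can be sketched numerically. Assuming, as in the standard classical treatment, the Gibbs-corrected canonical partition function $Z_N = Z_1^N/N!$, the sum over $N$ exponentiates to $\ln\mathcal{Z} = zZ_1$ with fugacity $z = e^{\beta\mu}$, and the ideal gas law follows from $PV = kT\ln\mathcal{Z}$ together with $\bar{N} = z\,\partial\ln\mathcal{Z}/\partial z$. (The values of $Z_1$ and $z$ below are arbitrary illustrative choices.)

```python
import math

def grand_partition(z, Z1, nmax=200):
    """Truncated sum over N of z^N Z_1^N / N!, accumulated term by term."""
    total, term = 0.0, 1.0
    for N in range(nmax + 1):
        total += term
        term *= z * Z1 / (N + 1)
    return total

Z1 = 50.0   # hypothetical single-particle partition function
z = 0.3     # hypothetical fugacity exp(beta * mu)

ln_calZ = math.log(grand_partition(z, Z1))

# Nbar = z d(ln calZ)/dz, estimated with a small finite difference in z
eps = 1e-6
Nbar = z * (math.log(grand_partition(z + eps, Z1)) - ln_calZ) / eps

print(ln_calZ, Nbar)   # both come out to z * Z1 = 15.0
```

Since $\ln\mathcal{Z}$ and $\bar{N}$ coincide, $PV = kT\ln\mathcal{Z} = \bar{N}kT$ drops out immediately, which is the grand canonical route to the ideal gas law.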

## Summary

Equilibrium statistical mechanics comprises a set of tools for converting microscopic physics into macroscopic physics, in the absence of most of the information which describes the state of the system. When I first learned statistical mechanics, these tools seemed very ad hoc and disjoint from each other, but they in fact fit together into an elegant, (debatably) coherent framework. When I now think about statistical mechanics, I have the following table in mind:

| Ensemble | What is held fixed | Central quantity |
| --- | --- | --- |
| Microcanonical | $E$, $V$, $N$ | number of states $W$ |
| Canonical | $T$, $V$, $N$ | partition function $Z$ |
| Grand canonical | $T$, $V$, $\mu$ | grand partition function $\mathcal{Z}$ |

Keeping in mind that the ultimate purpose here is to convert microscopic physics into macroscopic physics, we can derive insightful facts about the behavior of large systems, such as the ideal gas law, using any number of the tools presented in a typical undergraduate statistical mechanics course.

All roads lead to Rome.