Sunday, May 16, 2010

Quantum electrodynamics

Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. QED was developed by a number of physicists, beginning in the late 1920s. In essence, it describes how light and matter interact. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons. Physicist Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1]

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum.

The word 'quantum' is Latin, meaning "how much" (neut. sing. of quantus "how great").[2] The word 'electrodynamics' was coined by André-Marie Ampère in 1822.[3] The word 'quantum', as used in physics to denote a discrete, countable amount, was first used by Max Planck in 1900 and reinforced by Einstein in 1905 with his term light quanta.

Quantum theory began in 1900, when Max Planck assumed that energy is quantized in order to derive a formula predicting the observed frequency dependence of the energy emitted by a black body. This dependence is completely at variance with classical physics. In 1905, Einstein explained the photoelectric effect by postulating that light energy comes in quanta, later called photons. In 1913, Bohr invoked quantization in his proposed explanation of the spectral lines of the hydrogen atom. In 1924, Louis de Broglie proposed a quantum theory of the wave-like nature of subatomic particles. The phrase "quantum physics" was first employed in Johnston's Planck's Universe in Light of Modern Physics. These theories, while they fit the experimental facts to some extent, were strictly phenomenological: they provided no rigorous justification for the quantization they employed.

Modern quantum mechanics was born in 1925 with Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave mechanics and the Schrödinger equation, a non-relativistic approximation to de Broglie's (1924) relativistic approach. Schrödinger subsequently showed that these two approaches were equivalent. In 1927, Heisenberg formulated his uncertainty principle, and the Copenhagen interpretation of quantum mechanics began to take shape. Around this time, Paul Dirac, in work culminating in his 1930 monograph, finally joined quantum mechanics and special relativity, pioneered the use of operator theory, and devised the bra-ket notation widely used since. In 1932, John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces. This and other work from the founding period remains valid and widely used.

Quantum chemistry began with Walter Heitler and Fritz London's 1927 quantum account of the covalent bond of the hydrogen molecule. Linus Pauling and others contributed to the subsequent development of quantum chemistry.

The application of quantum mechanics to fields rather than single particles, resulting in what are known as quantum field theories, began in 1927. Early contributors included Dirac, Wolfgang Pauli, Victor Weisskopf, and Pascual Jordan. This line of research culminated in the 1940s in the quantum electrodynamics (QED) of Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, for which Feynman, Schwinger and Tomonaga received the 1965 Nobel Prize in Physics. QED, a quantum theory of electrons, positrons, and the electromagnetic field, was the first satisfactory quantum description of a physical field and of the creation and annihilation of quantum particles.

QED involves a covariant and gauge invariant prescription for the calculation of observable quantities. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. The renormalization procedure for eliminating the awkward infinite predictions of quantum field theory was first implemented in QED. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". (Feynman, 1985: 128)

QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the mid-1970s through the work of H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Physical interpretation of QED

In classical optics, light travels over all allowed paths, and their interference yields Fermat's principle. Similarly, in QED, light (or any other particle, like an electron or a proton) passes over every possible path allowed by apertures or lenses. The observer (at a particular location) simply detects the mathematical result of all wave functions added up, as a sum of all line integrals. In other interpretations, paths are viewed as non-physical, mathematical constructs that are equivalent to other, possibly infinite, sets of mathematical expansions. As with the paths of nonrelativistic quantum mechanics, the different configuration contributions to the evolution of the quantum field describing light do not necessarily fulfill the classical equations of motion. So according to the path formalism of QED, one could say light can go slower or faster than c, but will travel at velocity c on average.[4]

Physically, QED describes charged particles (and their antiparticles) interacting with each other by the exchange of photons. The magnitude of these interactions can be computed using perturbation theory; these rather complex formulas have a remarkable pictorial representation as Feynman diagrams. QED was the theory to which Feynman diagrams were first applied. These diagrams were invented on the basis of Lagrangian mechanics. Using a Feynman diagram, one considers every possible path between the start and end points. Each path is assigned a complex-valued probability amplitude, and the actual amplitude we observe is the sum of all amplitudes over all possible paths. The paths with stationary phase contribute most, since they lack destructive interference from neighboring counter-phase paths; this results in the stationary classical path between the two points.

QED doesn't predict what will happen in an experiment, but it can predict the probability of what will happen in an experiment, which is how (statistically) it is experimentally verified. Predictions of QED agree with experiments to an extremely high degree of accuracy: currently about 10⁻¹² (and limited by experimental errors); for details see precision tests of QED. This makes QED one of the most accurate physical theories constructed thus far.

Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The strange theory of light and matter, a classic non-mathematical exposition of QED from the point of view articulated above.

A simple but detailed description of QED, on the lines of Feynman's book

The key components of Feynman's presentation of QED are three basic actions.

  • A photon goes from one place and time to another place and time.
  • An electron goes from one place and time to another place and time.
  • An electron emits or absorbs a photon at a certain place and time.
[Figure: Feynman diagram components]

These actions are represented in a form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line, a straight line and a junction of two straight lines and a wavy one. These may all be seen in the adjacent diagram.

It is important not to over-interpret these diagrams. Nothing is implied about how a particle gets from one point to another. The diagrams do not imply that the particles are moving in straight or curved lines. They do not imply that the particles are moving with fixed speeds. The fact that the photon is, by convention, represented by a wavy line rather than a straight one does not imply that it is thought to be more wavelike than an electron. The images are just symbols to represent the actions above: photons and electrons do, somehow, move from point to point, and electrons, somehow, emit and absorb photons. We do not know how these things happen, but the theory tells us about the probabilities of these things happening. With modern graphics software, as opposed to blackboard and chalk, it would probably be better for a beginner to be presented with fuzzy lines, to avoid the natural assumption that physics regards particles as moving along trajectories, like bullets.

As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities that tell us about the probabilities. If a photon moves from one place and time - in shorthand, A - to another place and time - in shorthand, B - the associated quantity is written in Feynman's shorthand as P(A to B). The similar quantity for an electron moving from C to D is written E(C to D). The quantity that tells us about the probability for the emission or absorption of a photon he calls 'j'. This is related to, but not the same as, the measured electron charge 'e'.

Now the theory of QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks, and then using the probability-quantities to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while making the assumption that the quantities mentioned above are just our everyday probabilities. (A simplification of Feynman's book.) Later on this will be corrected to include specifically quantum mathematics, following Feynman.

The basic rules of probabilities that will be used are that a) if an event can happen in a variety of different ways then its probability is the SUM of the probabilities of the possible ways and b) if a process involves a number of independent subprocesses then its probability is the PRODUCT of the component probabilities.

Here is an example to show how things work. Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). Then we ask, 'What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?' The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probabilities of each of these subprocesses - E(A to C) and P(B to D) - we would expect to calculate the probability of both happening by multiplying them, using rule b) above. This gives a simple estimated answer to our question.

[Figure: Compton scattering]

But there are other ways in which the end result could come about. The electron might move to a place and time E where it absorbs the photon; then move on before emitting another photon at F; then move on to C where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probabilities of each of the individual actions: three electron actions, two photon actions and two vertices - one emission and one absorption. We would expect to find the total probability by multiplying the probabilities of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probabilities for all the alternatives for E and F. (This is not elementary in practice, and involves integration.) But there is another possibility: the electron first moves to G where it emits a photon which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again we can calculate the probability of these possibilities (for all points G and H). We then have a better estimate of the total probability by adding the probabilities of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.
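
As a toy illustration of how rules a) and b) combine, here is a short Python sketch of the estimate just described. Every function and number in it is an invented placeholder (real QED uses complex amplitudes computed from the theory, and the sum over intermediate positions is really an integral); it only shows the bookkeeping structure.

import itertools

def E(a, b):   # electron "probability" for moving a -> b (made-up value)
    return 0.5

def P(a, b):   # photon "probability" for moving a -> b (made-up value)
    return 0.4

j = 0.1        # emission/absorption "probability" at a vertex (made-up value)

# Simplest process: electron A -> C and photon B -> D (rule b: multiply).
total = E("A", "C") * P("B", "D")

# First correction: absorb at E, emit at F, summed over a small toy grid of
# intermediate points (rule a: add the alternatives).
grid = ["p1", "p2", "p3"]
for e_pt, f_pt in itertools.product(grid, repeat=2):
    total += (E("A", e_pt) * P("B", e_pt) * j    # electron A -> E, absorbs photon from B
              * E(e_pt, f_pt) * j                # electron E -> F, emits a new photon
              * E(f_pt, "C") * P(f_pt, "D"))     # electron F -> C, photon F -> D

print(total)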

Do we stop with those diagrams? No; there are an infinite number of other intermediate processes in which more and more photons are absorbed and/or emitted. Associated with each of these possibilities there is a Feynman diagram which helps us to keep track of them. Clearly there is going to be a lot of computing involved in calculating the resulting probabilities, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as you want to the original question. And that is the basic approach of QED. To calculate the probability of ANY interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves a calculation, with definite rules, to find the associated probability. By adding the probabilities of each diagram we can find the total probability.

That basic scaffolding remains when we move to a quantum description. But there are a number of important detailed changes. The first is that whereas we might expect, in our everyday life, that there would be some constraints on the points to which a particle can move, that is NOT true in full quantum electrodynamics. There is a certain possibility of an electron or photon at A moving as a basic action to any other place and time in the universe! That includes places that could only be reached at speeds greater than that of light, and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)

The second important change has to do with the probability quantities. It has been found that the quantities which we have to use to represent the probabilities are not the usual real numbers we use for probabilities in our everyday world, but complex numbers which are called probability amplitudes. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams which are actually simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude-arrows are fundamental to the description of the world given by quantum theory. No satisfactory reason has been given for why they are needed. But pragmatically we have to accept that they are an essential part of our description of all quantum phenomena. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the SQUARE of the length of the corresponding amplitude-arrow.

The rules as regards adding or multiplying, however, are the same as above. Where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes.

How are two arrows added or multiplied? (These are familiar operations in the theory of complex numbers.) The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the start of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
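
Since Feynman's arrows are just complex numbers in disguise, both operations can be checked in a few lines of Python; the lengths and angles below are arbitrary examples, not physical amplitudes.

import cmath

a1 = cmath.rect(0.5, cmath.pi / 4)    # arrow of length 0.5, turned 45°
a2 = cmath.rect(0.3, -cmath.pi / 6)   # arrow of length 0.3, turned -30°

total = a1 + a2      # "sum": place them tip-to-tail and draw the resultant
product = a1 * a2    # "product": lengths multiply, angles add

assert abs(abs(product) - 0.5 * 0.3) < 1e-12                           # lengths multiply
assert abs(cmath.phase(product) - (cmath.pi/4 - cmath.pi/6)) < 1e-12   # angles add

probability = abs(total) ** 2   # probability = square of the final arrow's length
print(probability)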

That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough, because it fails to take into account the fact that both photons and electrons can be polarised, which is to say that their orientations in space and time have to be taken into account. Therefore P(A to B) actually consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity "j", which may have to be rotated by a multiple of 90° for some polarisations; this is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarised is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi-Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse - the negative - of the first. The simplest case would be two electrons starting at A and B and ending at C and D. The amplitude would be calculated as the "difference", E(A to C) × E(B to D) − E(A to D) × E(B to C), where we would expect, from our everyday idea of probabilities, that it would be a sum.
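
A hedged sketch of that rule in Python, with made-up complex amplitudes standing in for the electron quantity E(X to Y): exchanging the two final electrons flips the sign of the second term.

amps = {("A", "C"): 0.6 + 0.2j, ("A", "D"): 0.1 + 0.5j,   # invented values,
        ("B", "C"): 0.3 - 0.4j, ("B", "D"): 0.7 + 0.1j}   # not real propagators

def E(start, end):
    return amps[(start, end)]

# Direct pairing minus the exchanged pairing (Fermi-Dirac statistics):
amplitude = E("A", "C") * E("B", "D") - E("A", "D") * E("B", "C")
print(abs(amplitude) ** 2)   # the associated probability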


Theory of everything

The theory of everything (TOE) is a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and, ideally, has predictive power for the outcome of any experiment that could be carried out in principle. Initially, the term was used with an ironic connotation to refer to various overgeneralized theories. For example, a great-grandfather of Ijon Tichy—a character from a cycle of Stanisław Lem's science fiction stories of 1960s—was known to work on the "General Theory of Everything". Physicist John Ellis[1] claims to have introduced the term into the technical literature in an article in Nature in 1986.[2] Over time, the term stuck in popularizations of quantum physics to describe a theory that would unify or explain through a single model the theories of all fundamental interactions of nature.

There have been many theories of everything proposed by theoretical physicists over the last century, but none has been confirmed experimentally. The primary problem in producing a TOE is that the accepted theories of quantum mechanics and general relativity are hard to combine. Their mutual incompatibility argues that they are incomplete, or at least not fully understood taken individually. (For more, see unsolved problems in physics).

Based on theoretical holographic principle arguments from the 1990s, many physicists believe that 11-dimensional M-theory, which is described in many sectors by matrix string theory and in many other sectors by perturbative string theory, is the complete theory of everything, although there is no widespread consensus and M-theory is not a completed theory but rather an approach for producing one.

Modern physics

In current mainstream physics, a Theory of Everything would unify all the fundamental interactions of nature, which are usually considered to be four in number: gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force. Because the weak force can transform elementary particles from one kind into another, the TOE should yield a deep understanding of the various different kinds of particles as well as the different forces. The expected pattern of theories is:

Theory of Everything
    Gravity
    Electronuclear force (GUT)
        Strong force: SU(3)
        Electroweak force: SU(2) × U(1)
            Weak force: SU(2)
            Electromagnetism: U(1)
                Electric force
                Magnetic force

In addition to the forces listed here, modern cosmology might require an inflationary force, dark energy, and also dark matter composed of fundamental particles outside the scheme of the standard model. The existence of these has not been proven, and there are alternative theories such as modified Newtonian dynamics.

Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have a mass of about 100 GeV, whereas the photon, which carries the electromagnetic force, is massless. At higher energies Ws and Zs can be created easily and the unified nature of the force becomes apparent. Grand unification is expected to work in a similar way, but at energies of the order of 10¹⁶ GeV, far greater than could be reached by any possible Earth-based particle accelerator. By analogy, unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV.

It may seem premature to be searching for a TOE when there is as yet no direct evidence for an electronuclear force and, in any case, there are many different proposed GUTs. In fact the name deliberately suggests the hubris involved. Nevertheless, most physicists believe this unification is possible, partly due to the past history of convergence towards a single theory. Supersymmetric GUTs seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and the inflationary force may be related to GUT physics (although it does not seem to form an inevitable part of the theory). And yet GUTs are clearly not the final answer. Both the current standard model and proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies. Furthermore, the inconsistency between quantum mechanics and general relativity implies that one or both of these must be replaced by a theory incorporating quantum gravity.

The mainstream theory of everything at the moment is superstring theory / M-theory; current research on loop quantum gravity may eventually play a fundamental role in a TOE, but that is not its primary aim.[9] These theories attempt to deal with the renormalization problem by setting up some lower bound on the length scales possible. String theories and supergravity (both believed to be limiting cases of the yet-to-be-defined M-theory) suppose that the universe actually has more dimensions than the easily observed three of space and one of time. The motivation behind this approach began with the Kaluza-Klein theory, in which it was noted that applying general relativity to a five-dimensional universe (with the usual four dimensions plus one small curled-up dimension) yields the equivalent of the usual general relativity in four dimensions together with Maxwell's equations (electromagnetism, also in four dimensions). This has led to efforts to work with theories with a large number of dimensions in the hope that this would produce equations similar to known laws of physics. The notion of extra dimensions also helps to resolve the hierarchy problem, which is the question of why gravity is so much weaker than any other force. The common answer involves gravity leaking into the extra dimensions in ways that the other forces do not.

In the late 1990s, it was noted that one problem with several of the candidates for theories of everything (but particularly string theory) was that they did not constrain the characteristics of the predicted universe. For example, many theories of quantum gravity can create universes with arbitrary numbers of dimensions or with arbitrary cosmological constants. Even the "standard" ten-dimensional string theory allows the "curled up" dimensions to be compactified in an enormous number of different ways (one estimate is 10⁵⁰⁰), each of which corresponds to a different collection of fundamental particles and low-energy forces. This array of theories is known as the string theory landscape.

A speculative solution is that many or all of these possibilities are realised in one or another of a huge number of universes, but that only a small number of them are habitable; hence the fundamental constants of the universe would ultimately be the result of the anthropic principle rather than a consequence of the theory of everything. This anthropic approach is often criticised because, since the theory is flexible enough to encompass almost any observation, it cannot make useful (as in original, falsifiable, and verifiable) predictions. In this view, string theory would be considered a pseudoscience, where an unfalsifiable theory is constantly adapted to fit the experimental results.


Relativistic electromagnetism

Relativistic electromagnetism is a modern teaching strategy for developing electromagnetic field theory from Coulomb’s law and Lorentz transformations. Though Coulomb’s law expresses action at a distance, it is an easily understood electric force principle. The more sophisticated view of electromagnetism expressed by electromagnetic fields in spacetime can be approached by applying spacetime symmetries. In certain special configurations it is possible to exhibit magnetic effects due to disparity of charge density in various simultaneous hyperplanes. This approach to physics education and the education and training of electrical and electronics engineers was pioneered by Edward M. Purcell (1965), Jack R. Tessman (1966), W.G.V. Rosser (1968), Anthony French (1968), and Dale R. Corson & Paul Lorrain (1970). This approach provides some preparation for magnetic forces involved in the Biot-Savart Law, Ampère's law, and Maxwell's equations.

Most of Purcell's explanation is based on using the Lorentz contraction factor:

 \sqrt{1 - v^2/c^2}
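
To see why the effect is nonetheless observable, it helps to put numbers in. The deviation of this factor from 1 at a typical electron drift speed is so small that it underflows ordinary floating-point subtraction, so the Python sketch below (drift speed assumed, not taken from the text) evaluates the leading-order term v²/2c² directly:

c = 299_792_458.0    # speed of light, m/s
v = 0.01             # ~1 cm/s, a typical electron drift speed in a wire

# sqrt(1 - v^2/c^2) differs from 1 by about v^2/(2 c^2) when v << c
deviation = v**2 / (2 * c**2)
print(deviation)     # ~5.6e-22

That a factor this close to 1 produces everyday magnetic forces is possible only because the enormous electric charges in ordinary matter cancel almost perfectly, as the chapter below explains.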

Classical Electrodynamics

The scientist William Gilbert proposed, in his De Magnete (1600), that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle, but the link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment.[1] Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.

An accurate theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.

One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)

In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa, firmly showing that they are two sides of the same coin; hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity.)

ELECTROMAGNETISM

11.1 More About the Magnetic Field





a / The pair of charged particles, as seen in two different frames of reference.



b / A large current is created by shorting across the leads of the battery. The moving charges in the wire attract the moving charges in the electron beam, causing the electrons to curve.



c / A charged particle and a current, seen in two different frames of reference. The second frame is moving at velocity v with respect to the first frame, so all the velocities have v subtracted from them. (As discussed in the main text, this is only approximately correct.)

11.1.1 Magnetic forces


In this chapter, I assume you know a few basic ideas about Einstein's theory of relativity,
as described in section 7.1.
Unless your typical workday involves rocket ships or particle accelerators,
all this relativity stuff might sound like a description of some bizarre
futuristic world that is completely hypothetical. There is, however, a relativistic
effect that occurs in everyday life, and it is obvious and dramatic: magnetism.
Magnetism, as we discussed previously, is an interaction between a moving
charge and another moving charge,
as opposed to electric forces, which act between any pair of charges, regardless of their
motion. Relativistic effects are weak for speeds that are small compared to the speed
of light, and the average speed at which electrons drift through a wire is quite
low (centimeters per second, typically), so how can relativity be behind an impressive
effect like a car being lifted by an electromagnet hanging from a crane? The key is
that matter is almost perfectly electrically neutral, and electric forces therefore
cancel out almost perfectly. Magnetic forces really aren't very strong, but electric
forces are even weaker.


What about the word “relativity” in the name of the theory?
It would seem problematic if moving charges interact differently than stationary charges,
since motion is a matter of opinion, depending on your frame of reference.
Magnetism, however, comes not to destroy relativity but to fulfill it. Magnetic interactions
must exist according to the theory of relativity. To understand how this can be,
consider how time and space behave in relativity. Observers in different frames of reference
disagree about the lengths of measuring sticks and the speeds of clocks, but the laws
of physics are valid and self-consistent in either frame of reference.
Similarly, observers in different frames of reference disagree about what electric and magnetic
fields and forces there are, but they agree about concrete physical events.
For instance, figure a/1 shows two particles, with opposite charges,
which are not moving at a particular moment in time. An observer in this frame of reference
says there are electric fields around the particles, and predicts that as time goes on, the
particles will begin to accelerate towards one another, eventually colliding.
A different observer, a/2, says the particles are moving. This observer
also predicts that the particles will collide, but explains their motion in terms of both
an electric field, E, and a magnetic field, B. As we'll see shortly, the
magnetic field is required in order to maintain consistency between the predictions made
in the two frames of reference.


To see how this really works out, we need to find a nice simple example that is
easy to calculate. An example like figure a is not easy
to handle, because in the second frame of reference, the moving charges
create fields that change over time at any given location. Examples like
figure b are easier, because there is a steady flow of charges, and
all the fields stay the same over time.
What is remarkable about this demonstration is that there can be no electric fields
acting on the electron beam at all, since the total charge density throughout the wire
is zero. Unlike figure a/2, figure b is purely magnetic.


To see why this must occur based on relativity, we make the mathematically idealized model
shown in figure c. The charge by itself is like one of the electrons
in the vacuum tube beam of figure b, and a
pair of moving, infinitely long line charges has been substituted for the wire. The electrons in a real wire
are in rapid thermal motion, and the current is created only by a slow drift superimposed
on this chaos. A second deviation from reality is that in the real experiment, the protons
are at rest with respect to the tabletop, and it is the electrons that are in motion, but in
c/1 we have the positive charges moving in
one direction and the negative ones moving
the other way. If we wanted to, we could construct a third frame of reference in which the
positive charges were at rest, which would be more like the frame of reference fixed to the
tabletop in the real demonstration. However, as we'll see shortly, frames
c/1 and c/2 are designed so that they are
particularly easy to analyze. It's important to note that even though the two line charges
are moving in opposite directions, their currents don't cancel. A negative charge moving
to the left makes a current that goes to the right, so in frame c/1,
the total current is twice that contributed by either line charge.


Frame 1 is easy to analyze because the charge densities of the two line charges cancel out,
and the electric field experienced by the lone charge is therefore zero:

E_1 = 0


In frame 1, any force experienced by the lone charge must therefore be attributed solely
to magnetism.


Frame 2 shows what we'd see if we were observing all this from a frame of reference moving
along with the lone charge.
Why don't the charge densities also cancel in this frame?
Here's where the relativity comes in. Relativity tells us that moving objects
appear contracted to an observer who is not moving along with them.
Both line charges are in motion in both frames of reference, but in frame 1, the
line charges were moving at equal speeds, so their contractions were equal, and their
charge densities canceled out. In frame 2, however, their speeds are unequal. The positive
charges are moving more slowly than in frame 1, so in frame 2 they are less contracted.
The negative charges are moving more quickly, so their contraction is greater now.
Since the charge densities don't cancel, there is an electric field in frame 2, which
points into the wire, attracting the lone charge. Furthermore, the attraction felt
by the lone charge must be purely electrical, since the lone charge is at rest in this
frame of reference, and magnetic effects occur only between moving charges and other
moving charges.


To summarize, frame 1 displays a purely magnetic attraction, while in frame 2 it is
purely electrical.


A common source of confusion in this argument is that it seems as though the excess of
negative electric charge must have been stolen from some other part of the wire, leaving that
part with a net positive charge. That wouldn't make sense, because there is no physical
reason why one part of the wire would behave differently than any other. The flaw in this
reasoning has to do with the fact that simultaneity is not well defined in relativity.
In frame c/1, suppose that we label all the positive charges with
integers, and likewise all the negative ones, so that the positive charge labeled 42 is on top of
the negative charge labeled 42, and so on. In this frame of reference, every charge has a partner that cancels it,
and the net charge everywhere is zero. If simultaneity were a valid concept in relativity, then
not only would 42's pairing with 42, and 43's pairing with 43, occur simultaneously in frame c/1,
but these same pairings would occur all at the same time in frame c/2. But observers
in different frames of reference do not agree on simultaneity. For simplicity, let's imagine that
the Lorentz contractions are such that the spacing between the negative charges in frame c/2
is exactly half as much as the spacing between the positive charges. Then we may have negative charge number 42
paired up with positive charge 21, and negative charge 44 paired with positive charge 22, while negative charge 43
has no partner.


Now we can calculate the force in frame 2, and equating it to the force in frame 1, we
can find out how much magnetic force occurs.
To keep the math simple, and to keep from assuming too much about your knowledge
of relativity, we're going to carry out this whole calculation in the approximation
where all the speeds are fairly small compared to the speed of light. For instance, if
we find an expression such as (v/c)² + (v/c)⁴, we will assume that the fourth-order
term is negligible by comparison. This is known as a calculation “to leading order
in v/c.” In fact, I've already used the leading-order approximation twice
without saying so! The first time I used it implicitly was in figure c,
where I assumed that the velocities of the two line charges were u-v and -u-v.
Relativistic velocities don't just combine by simple addition and subtraction like
this, but this is an effect we can ignore in the present approximation. The second
sleight of hand occurred when I stated that we could equate the forces in the two
frames of reference. Force, like time and distance, is distorted relativistically
when we change from one frame of reference to another. Again, however, this is an effect
that we can ignore to the desired level of approximation.


Let ±λ be the charge per unit length of each line charge without relativistic
contraction, i.e., in the frame moving with that line charge.
Using the approximation γ = (1 − v²/c²)^(−1/2) ≈ 1 + v²/2c² for v << c, the
total charge per unit length in frame 2 is

λ_total,2 ≈ λ[1 + (u−v)²/2c²] − λ[1 + (−u−v)²/2c²]
          = −2λuv/c².





Let R be the distance from the line charge to the lone charge.
Applying Gauss' law to a cylinder of radius R centered on the line charge,
we find that the magnitude of the electric field experienced by the lone charge
in frame 2 is



E = 4kλuv/(c²R),


and the force acting on the lone charge q is



F = 4kλquv/(c²R).


In frame 1, the current is I = 2λ_1u (see homework problem 5),
which we can approximate
as I = 2λu, since the current, unlike λ_total,2, doesn't
vanish completely without the relativistic effect.
The magnetic force on the lone charge q due to the current I is



F = 2kIqv/(c²R).
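
As a numerical sanity check (with arbitrary illustrative values, not taken from the text), the following Python snippet confirms that the frame-2 electric force and the frame-1 magnetic force agree once I = 2λu is substituted:

k = 8.9875517873681764e9    # Coulomb constant, N·m²/C²
c = 299_792_458.0           # speed of light, m/s
lam, u = 1e-6, 0.05         # line-charge density (C/m) and speed (m/s), arbitrary
q, v, R = 1.6e-19, 0.02, 0.01   # lone charge (C), its speed (m/s), distance (m)

F_electric_frame2 = 4 * k * lam * q * u * v / (c**2 * R)

I = 2 * lam * u
F_magnetic_frame1 = 2 * k * I * q * v / (c**2 * R)

assert abs(F_electric_frame2 - F_magnetic_frame1) <= 1e-12 * F_electric_frame2
print(F_electric_frame2)    # both expressions give the same (tiny) force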






d / The right-hand relationship between the velocity of a positively charged particle, the magnetic field through which it is moving, and the magnetic force on it.



e / The unit of magnetic field, the tesla, is named after Serbian-American inventor Nikola Tesla.



f / A standard dipole made from a square loop of wire shorting across a battery. It acts very much like a bar magnet, but its strength is more easily quantified.



g / A dipole tends to align itself to the surrounding magnetic field.



h / The m and A vectors.



i / The torque on a current loop in a magnetic field. The current comes out of the page, goes across, goes back into the page, and then back across the other way in the hidden side of the loop.



j / A vector coming out of the page is shown with the tip of an arrowhead. A vector going into the page is represented using the tailfeathers of the arrow.



k / Dipole vectors can be added.



l / An irregular loop can be broken up into little squares.



m / The magnetic field pattern around a bar magnet is created by the superposition of the dipole fields of the individual iron atoms. Roughly speaking, it looks like the field of one big dipole, especially farther away from the magnet. Closer in, however, you can see a hint of the magnet's rectangular shape. The picture was made by placing iron filings on a piece of paper, and then bringing a magnet up underneath.

11.1.2 The magnetic field


Definition in terms of the force on a moving particle


With electricity, it turned out to be useful to define an electric field
rather than always working in terms of electric forces. Likewise, we want
to define a magnetic field, B. Let's look at the result of the preceding subsection
for insight. The equation



F = 2kIqv/(c²R)


shows that when we put a moving charge
near other moving charges, there is an extra magnetic force on it, in addition to
any electric forces that may exist. Equations for electric forces always have a factor
of k in front --- the Coulomb constant k is called
the coupling constant for
electric forces. Since magnetic effects are relativistic in origin, they end up
having a factor of k/c2 instead of just k. In a world where the speed of light
was infinite, relativistic effects, including magnetism, would be absent, and the
coupling constant for magnetism would be zero. A cute feature of the metric system
is that we have k/c² = 10⁻⁷ N⋅s²/C² exactly,
as a matter of definition.
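
A one-line check of that "cute feature" (using the pre-2019 SI definitions the text assumes, under which c and μ₀ are exact):

c = 299_792_458.0            # m/s, exact by definition
k = 8.9875517873681764e9     # Coulomb constant, N·m²/C²
print(k / c**2)              # 1e-07, the coupling constant for magnetism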


Naively, we could try to work by analogy with the electric field, and define
the magnetic field as the magnetic force per unit charge. However, if we think
of the lone charge in our example as the test charge, we'll find that this
approach fails, because the force depends not just on the test particle's charge,
but on its velocity, v, as well. Although we only carried out calculations for
the case where the particle was moving parallel to the wire, in general this velocity
is a vector, v, in three dimensions. We can also anticipate that the magnetic
field will be a vector. The electric and gravitational fields are vectors, and we
expect intuitively based on our experience with magnetic compasses that a magnetic field
has a particular direction in space. Furthermore, reversing the current I in our
example would have reversed the force, which would only make sense if the magnetic
field had a direction in space that could be reversed. Summarizing, we think there
must be a magnetic field vector B, and the force on a test particle moving
through a magnetic field is proportional both to the B vector
and to the particle's own v vector. In other words, the magnetic force vector F is
found by some sort of vector multiplication of the vectors v and B.
As proved on page 856, however, there is only one physically
useful way of defining such a multiplication, which is the cross product.




We
therefore define the magnetic field vector, B, as the vector that determines
the force on a charged particle according to the following rule:



F = qv × B    [definition of the magnetic field]





From this definition, we see that the magnetic field's units are
N⋅s/C⋅m, which are usually abbreviated as
teslas, 1 T=1 N⋅s/C⋅m.
The definition implies a right-hand-rule relationship
among the vectors, figure d, if the charge q is positive, and
the opposite handedness if it is negative.
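
The defining equation is easy to exercise numerically. A minimal Python sketch, with example values assumed for the charge, velocity, and field:

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q = 1.6e-19               # positive charge, C
v = (1.0e5, 0.0, 0.0)     # velocity along x, m/s
B = (0.0, 0.0, 1.5)       # field along z, T

F = tuple(q * comp for comp in cross(v, B))
print(F)   # (0.0, -2.4e-14, 0.0): force along -y, as the right-hand rule predicts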


This is not just a definition but a bold prediction! Is it really true
that for any point in space, we can always find a vector B that successfully
predicts the force on any passing particle, regardless of its charge and
velocity vector? Yes --- it's not obvious that it can be done, but
experiments verify that it can. How? Well for example, the cross product of parallel vectors
is zero, so we can try particles moving in various directions, and hunt for the
direction that produces zero force; the B vector lies along that line, in
either the same direction the particle was moving, or the opposite one.
We can then go back to our data from one of the other cases, where the
force was nonzero, and use it to choose between these two directions and find
the magnitude of the B vector. We could then verify that this vector
gave correct force predictions in a variety of other cases.


Even with this empirical reassurance, the meaning of this equation is
not intuitively transparent,
nor is it practical in most cases to measure a magnetic field this way. For these
reasons, let's look at an alternative method of defining the magnetic field which,
although not as fundamental or mathematically simple, may be more appealing.


Definition in terms of the torque on a dipole


A compass needle in a magnetic field experiences a torque which tends to align
it with the field. This is just like the behavior of an electric dipole in
an electric field, so we consider the compass needle to be a
magnetic dipole.
In subsection 10.1.3 on
page 526,
we gave an alternative definition of the
electric field in terms of the torque on an electric dipole.


To define the strength of a magnetic field, however, we need
some way of defining the strength of a test dipole, i.e., we
need a definition of the magnetic dipole moment. We could
use an iron permanent magnet constructed according to
certain specifications, but such an object is really an
extremely complex system consisting of many iron atoms, only
some of which are aligned with each other. A more fundamental standard
dipole is a square current loop. This could be a little
resistive circuit consisting of a square of wire shorting across a battery, f.


Applying F = qv × B, we
find that such a loop, when placed in a magnetic
field, g,
experiences a torque that tends to align its plane so
that its interior “face” points in a certain direction.
Since the loop is symmetric, it doesn't care if we rotate it
like a wheel without changing the plane in which it lies. It
is this preferred facing direction that we will end up
using as our alternative definition of the magnetic field.


If the loop is out of alignment with the
field, the torque on it is proportional to the amount of
current, and also to the interior area of the loop. The
proportionality to current makes sense, since magnetic
forces are interactions between moving charges, and current
is a measure of the motion of charge. The proportionality to
the loop's area is also not hard to understand, because
increasing the length of the sides of the square increases
both the amount of charge contained in this circular
“river” and the amount of leverage supplied for making
torque. Two separate physical reasons for a proportionality
to length result in an overall proportionality to length
squared, which is the same as the area of the loop. For
these reasons, we define the magnetic dipole moment of a
square current loop as

m = IA ,


where the direction of the vectors is defined as shown in figure h.



We can now give an alternative definition of the magnetic
field:




The magnetic field vector, B, at any location in space is
defined by observing the torque exerted on a magnetic test
dipole m_t consisting of a square current loop. The
field's magnitude is



|B| = τ/(|m_t| sin θ),


where θ is the angle between the dipole vector and the field.
This is equivalent to the vector cross product

τ = m_t × B.





Let's show that this is consistent with the previous definition, using the
geometry shown in figure i. The velocity vectors that point in
and out of the page are shown using the convention defined
in figure j.
Let the mobile charge carriers in the wire have linear
density λ, and
let the sides of the loop have
length h, so that we have I = λv and
m = h²λv. The only nonvanishing torque comes from the forces on the
left and right sides. The currents in these sides are perpendicular to the field,
so the magnitude of the cross product F=qv×B is simply
|F|=qvB. The torque supplied by each of these forces
is r×F, where the lever arm r has length h/2,
and makes an angle θ with respect to the force vector. The magnitude of the total torque
acting on the loop is therefore






|τ| = 2(h/2)|F| sin θ
    = hqvB sin θ,

and substituting q = λh and v = m/(h²λ), we have

|τ| = h(λh)(m/(h²λ))B sin θ
    = mB sin θ,








which is consistent with the second definition of the field.
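
The same consistency can be verified numerically. In the Python sketch below (all input values arbitrary), the torque computed from the forces on the two sides matches mB sin θ:

import math

h, I, B, theta = 0.1, 2.0, 0.5, math.radians(30)   # side (m), current (A), field (T), angle

m = I * h**2                      # dipole moment of the square loop, m = IA
tau_dipole = m * B * math.sin(theta)

F_side = I * h * B                # |F| = qvB with q = λh and v = I/λ
tau_forces = 2 * (h / 2) * F_side * math.sin(theta)

assert math.isclose(tau_dipole, tau_forces)
print(tau_dipole)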


It undoubtedly seems artificial to you that we have discussed dipoles only in
the form of a square loop of current. A permanent magnet, for example, is made
out of atomic dipoles, and atoms aren't square! However, it turns out that the
shape doesn't matter. To see why this is so, consider the additive property of
areas and dipole moments, shown in figure k. Each of the square
dipoles has a dipole moment that points out of the page. When they are placed
side by side, the currents in the adjoining sides cancel out, so they are equivalent
to a single rectangular loop with twice the area. We can break down
any irregular shape into little squares, as shown in figure l,
so the dipole moment of any planar current loop can be calculated based on its area,
regardless of its shape.


Example 1: The magnetic dipole moment of an atom


Let's make an order-of-magnitude estimate of the magnetic dipole moment of an atom.
A hydrogen atom is about 10⁻¹⁰ m in diameter, and the electron moves at speeds
of about 10⁻² c. We don't know the shape of the orbit, and indeed it turns out that
according to the principles of quantum mechanics, the electron doesn't even have a well-defined
orbit, but if we're brave, we can still estimate the dipole moment using the
cross-sectional area of the atom, which will be on the order of
(10⁻¹⁰ m)² = 10⁻²⁰ m².
The electron is a single particle, not a steady current, but again we throw caution to
the winds, and estimate the current it creates as e/Δt,
where Δt, the time for one orbit, can be estimated by dividing the
size of the atom by the electron's velocity. (This is only a rough estimate,
and we don't know the shape of the orbit, so it would be silly, for instance,
to bother with multiplying the diameter by π based on our intuitive visualization
of the electron as moving around the circumference of a circle.)
The result for the dipole moment is m ∼ 10⁻²³ A⋅m².
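
The same estimate, transcribed into Python (rounded constants, same rough assumptions as above):

d = 1e-10            # atom diameter, m
v = 1e-2 * 3e8       # electron speed, ~10⁻² c
e = 1.6e-19          # elementary charge, C

A = d**2             # cross-sectional area, ~1e-20 m²
dt = d / v           # rough time for one orbit
I = e / dt           # current due to the circulating electron
print(I * A)         # ~5e-23 A·m², i.e. of order 10⁻²³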


Should we
be impressed with how small this dipole moment is, or with how big it is, considering
that it's being made by a single atom?
Very large or very small numbers are never very interesting by themselves. To get a
feeling for what they mean, we need to compare them to something else. An interesting
comparison here is to think in terms of the total number of atoms in a typical object,
which might be on the order of 10²⁶ (Avogadro's number). Suppose we had this
many atoms, with their moments all aligned. The total dipole moment would be on the
order of 10³ A⋅m², which is a pretty big number. To get
a dipole moment this strong using human-scale devices,
we'd have to send a thousand amps of current through a
one-square meter loop of wire! The insight to be gained here is that, even in
a permanent magnet, we must not have all the atoms perfectly aligned, because that
would cause more spectacular magnetic effects than we really observe. Apparently, nearly
all the atoms in such a magnet are oriented randomly, and do not contribute to the
magnet's dipole moment.



Discussion Questions



The physical situation shown in figure c on page 604
was analyzed entirely in terms of forces. Now let's go back and think about it in terms of fields.
The charge by itself up above the wire is like a test charge, being used to determine the magnetic
and electric fields created by the wire. In figures c/1 and
c/2, are there fields that are purely electric or purely magnetic? Are there
fields that are a mixture of E and B? How does this compare with the forces?





Continuing the analysis begun in discussion question A, can we come up
with a scenario involving some charged particles such that the fields are purely magnetic
in one frame of reference but a mixture of E and B in another frame?
How about an example where the fields are purely electric in one frame, but mixed in
another? Or an example where the fields are purely electric in one frame, but purely
magnetic in another?

Wednesday, August 26, 2009

Electrostatics

Electrostatics is the branch of science that deals with the phenomena arising from stationary or slowly moving electric charges.

Since classical antiquity it was known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word for amber, ήλεκτρον (electron), was the source of the word 'electricity'. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law. Even though electrostatically induced forces seem to be rather weak, the electrostatic force between, for example, the electron and the proton that together make up a hydrogen atom is about 40 orders of magnitude stronger than the gravitational force acting between them.

Electrostatic phenomena include many examples as simple as the attraction of the plastic wrap to your hand after you remove it from a package, to the apparently spontaneous explosion of grain silos, to damage of electronic components during manufacturing, to the operation of photocopiers. Electrostatics involves the buildup of charge on the surface of objects due to contact with other surfaces. Although charge exchange happens whenever any two surfaces contact and separate, the effects of charge exchange are usually only noticed when at least one of the surfaces has a high resistance to electrical flow. This is because the charges that transfer to or from the highly resistive surface are more or less trapped there for a long enough time for their effects to be observed. These charges then remain on the object until they either bleed off to ground or are quickly neutralized by a discharge: e.g., the familiar phenomenon of a static 'shock' is caused by the neutralization of charge built up in the body from contact with nonconductive surfaces.


Fundamental concepts

Coulomb's law

The fundamental equation of electrostatics is Coulomb's law, which describes the force between two point charges. The magnitude of the electrostatic force between two point electric charges Q_1 and Q_2 is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them:

F = \frac{Q_1Q_2}{4\pi\varepsilon_0 r^2}\ ,

where ε0 is the electric constant, a defined value:

 \varepsilon_0 \ \stackrel{\mathrm{def}}{=}\ \frac{1}{\mu_0 c_0^2} = 8.854\,187\,817 \times 10^{-12} in A²·s⁴·kg⁻¹·m⁻³ (equivalently C²·N⁻¹·m⁻² or F·m⁻¹).
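
As a quick check of the claim above that the electron-proton electric force beats gravity by about 40 orders of magnitude, here is a short Python computation (standard constants, rounded; the distance cancels in the ratio):

import math

eps0 = 8.854187817e-12     # electric constant, F/m
G = 6.674e-11              # gravitational constant, m³ kg⁻¹ s⁻²
e = 1.602e-19              # elementary charge, C
m_e, m_p = 9.109e-31, 1.673e-27   # electron and proton masses, kg
r = 5.3e-11                # Bohr radius, m (any r gives the same ratio)

F_coulomb = e**2 / (4 * math.pi * eps0 * r**2)
F_gravity = G * m_e * m_p / r**2
print(F_coulomb / F_gravity)      # ~2e39, i.e. roughly 40 orders of magnitude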

The electric field

The electric field (in units of volts per meter) at a point is defined as the force (in newtons) per unit charge (in coulombs) on a charge at that point:

\vec{F} = q\vec{E}\,

From this definition and Coulomb's law, it follows that the magnitude of the electric field E created by a single point charge Q is:

E = \frac{Q}{4\pi\varepsilon_0 r^2}.

Gauss's law

Gauss' law states that "the total electric flux through a closed surface is proportional to the total electric charge enclosed within the surface". The constant of proportionality is the permittivity of free space.

Mathematically, Gauss's law takes the form of an integral equation:

\oint_S\varepsilon_0\vec{E} \cdot\mathrm{d}\vec{A} = \int_V\rho\,\mathrm{d}V.

Alternatively, in differential form, the equation becomes

\vec{\nabla}\cdot\varepsilon_0\vec{E} = \rho.

Poisson's equation

The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential φ and the charge density ρ:

{\nabla}^2 \phi = - {\rho\over\varepsilon_0}.

This relationship is a form of Poisson's equation, where \varepsilon_0 is the vacuum permittivity.

Laplace's equation

In the absence of unpaired electric charge, the equation becomes

{\nabla}^2 \phi = 0,

which is Laplace's equation.
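
Laplace's equation is also easy to solve numerically. Below is a minimal finite-difference sketch in Python (grid size, boundary values, and iteration count are arbitrary choices): each interior point is repeatedly replaced by the average of its four neighbours, which is the discrete statement of \nabla^2 \phi = 0.

import numpy as np

n = 50
phi = np.zeros((n, n))
phi[0, :] = 1.0            # top edge held at 1 V; the other edges at 0 V

for _ in range(2000):      # Jacobi relaxation toward the harmonic solution
    phi[1:-1, 1:-1] = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                       phi[1:-1, :-2] + phi[1:-1, 2:]) / 4.0

print(phi[n // 2, n // 2])  # ~0.25 at the center, as symmetry requires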

The electrostatic approximation

The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational:

\vec{\nabla}\times\vec{E} = 0.

From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:

{\partial\vec{B}\over\partial t} = 0.

In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst case change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored.

Electrostatic potential

Because the electric field is irrotational, it is possible to express it as the gradient of a scalar function, called the electrostatic potential (differences in which are measured as voltage). The electric field, E, points from regions of high potential, φ, to regions of low potential, expressed mathematically as

\vec{E} = -\vec{\nabla}\phi.

The electrostatic potential at a point can be defined as the amount of work per unit charge required to bring a small test charge from infinity to that point.
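
The gradient relation can be checked numerically against the point-charge case: differentiating φ(r) = Q/(4πε0 r) should recover the field Q/(4πε0 r²). A sketch, again reusing EPSILON_0 from the Coulomb's law example:

    # Check E = -d(phi)/dr for the point-charge potential.
    import numpy as np

    Q = 1e-6
    r = np.linspace(0.1, 1.0, 2000)
    phi = Q / (4 * np.pi * EPSILON_0 * r)
    E_numeric = -np.gradient(phi, r)
    E_analytic = Q / (4 * np.pi * EPSILON_0 * r ** 2)
    print(np.abs(E_numeric - E_analytic).max() / E_analytic.max())  # small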

Triboelectric series

The triboelectric effect is a type of contact electrification in which certain materials become electrically charged when they are brought into contact with a different material and then separated. One of the materials acquires a positive charge, and the other acquires an equal negative charge. The polarity and strength of the charges produced differ according to the materials, surface roughness, temperature, strain, and other properties. Amber, for example, can acquire an electric charge by friction with a material like wool. This property, first recorded by Thales of Miletus, was the first electrical phenomenon investigated by man. Other examples of materials that can acquire a significant charge when rubbed together include glass rubbed with silk, and hard rubber rubbed with fur.

Electrostatic generators

The presence of a surface charge imbalance means that objects will exhibit attractive or repulsive forces. This imbalance, which yields static electricity, can be generated by touching two differing surfaces together and then separating them, owing to contact electrification and the triboelectric effect. Rubbing two nonconductive objects generates a great amount of static electricity; this is not just the result of friction, since two nonconductive surfaces can become charged merely by being placed one on top of the other. Because most surfaces have a rough texture, however, charging takes longer through contact alone than through rubbing, which increases the amount of adhesive contact between the two surfaces. Usually insulators, i.e., substances that do not conduct electricity, are good at both generating and holding a surface charge; examples include rubber, plastic, glass, and pith. Conductive objects only rarely generate charge imbalance, except, for example, when a metal surface is impacted by solid or liquid nonconductors. The charge transferred during contact electrification is stored on the surface of each object. Static electric generators, devices which produce very high voltage at very low current and are used for classroom physics demonstrations, rely on this effect.

Note that the presence of electric current does not detract from electrostatic forces, sparking, corona discharge, or other such phenomena; both can exist simultaneously in the same system.

See also: Friction machines, Wimshurst machines, and Van de Graaff generators.

Charge neutralization

Natural electrostatic phenomena are most familiar as an occasional annoyance in seasons of low humidity, but can be destructive and harmful in some situations (e.g. electronics manufacturing). When working in direct contact with integrated circuit electronics (especially delicate MOSFETs), or in the presence of flammable gas, care must be taken to avoid accumulating and suddenly discharging a static charge (see electrostatic discharge).

Charge induction

Charge induction occurs when a negatively charged object repels electrons from the surface of a second object, creating a region in the second object that is more positively charged. An attractive force is then exerted between the objects. For example, a rubbed balloon will stick to a wall: the free electrons at the wall's surface are repelled by the negative balloon, leaving a positively charged wall surface that is attracted to the balloon.

'Static' electricity

Before 1832, when Michael Faraday published the results of his experiments on the identity of electricities, physicists thought that "static electricity" was somehow different from other electrical charges. Faraday proved that the electricity induced by a magnet, voltaic electricity produced by a battery, and static electricity are all the same.

Static electricity is usually caused when certain materials are rubbed against each other, like wool on plastic or the soles of shoes on carpet. The process causes electrons to be pulled from the surface of one material and relocated on the surface of the other material.

A static shock occurs when the surface of the second material, negatively charged with electrons, touches a positively charged conductor, or vice versa.

Static electricity is a build-up of electric charge on two objects that have become separated from each other. It is commonly exploited in xerography, air filters, and some automotive paints. Small electrical components can easily be damaged by static electricity, so component manufacturers use a number of antistatic devices to avoid this.

Static electricity and chemical industry

When different materials are brought together and then separated, an accumulation of electric charge can occur which leaves one material positively charged while the other becomes negatively charged. The mild shock that you receive when touching a grounded object after walking on carpet is an example of excess electrical charge accumulating in your body from frictional charging between your shoes and the carpet. The resulting charge build-up upon your body can generate a strong electrical discharge. Although experimenting with static electricity may be fun, similar sparks create severe hazards in those industries dealing with flammable substances, where a small electrical spark may ignite explosive mixtures with devastating consequences.

A similar charging mechanism can occur within low-conductivity fluids flowing through pipelines, a process called flow electrification. Fluids with low electrical conductivity (below 50 picosiemens per meter, pS/m) are called accumulators; fluids with conductivities above 50 pS/m are called non-accumulators. In non-accumulators, charges recombine as fast as they are separated, so electrostatic charge generation is not significant. In the petrochemical industry, 50 pS/m is the recommended minimum value of electrical conductivity for adequate removal of charge from a fluid.

An important concept for insulating fluids is the static relaxation time. This is analogous to the time constant (tau) of an RC circuit: for an insulating material, it is the permittivity (the static dielectric constant times \varepsilon_0) divided by the electrical conductivity of the material. For hydrocarbon fluids, it is sometimes approximated by dividing the number 18 by the electrical conductivity of the fluid in pS/m. Thus a fluid with an electrical conductivity of 1 pS/m has an estimated relaxation time of about 18 seconds. The excess charge within a fluid is almost completely dissipated after 4 to 5 relaxation times, or about 90 seconds for the fluid in this example.
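
The rule of thumb above translates directly into a few lines of Python; the function name is ours, for illustration only:

    # Relaxation-time rule of thumb: tau ~ 18 / conductivity (pS/m), in seconds.
    def relaxation_time_s(conductivity_pS_per_m):
        return 18.0 / conductivity_pS_per_m

    tau = relaxation_time_s(1.0)   # the 1 pS/m fluid from the example above
    print(tau, 5 * tau)            # ~18 s, and ~90 s for near-complete dissipation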

Charge generation increases at higher fluid velocities and larger pipe diameters, becoming quite significant in pipes 8 inches (200 mm) or larger. Static charge generation in these systems is best controlled by limiting fluid velocity. The British standard BS PD CLC/TR 50404:2003 (formerly BS-5958-Part 2), Code of Practice for Control of Undesirable Static Electricity, prescribes velocity limits. Because water has a large impact on the dielectric constant, the recommended velocity for hydrocarbon fluids containing water is limited to 1 m/s.

Bonding and earthing are the usual ways by which charge buildup can be prevented. For fluids with electrical conductivity below 10 pS/m, bonding and earthing are not adequate for charge dissipation, and anti-static additives may be required.

Electrostatic induction in commercial applications

The principle of electrostatic induction has been harnessed to beneficial effect in industry for many years, beginning with the introduction of electrostatic industrial painting systems for the economical and even application of enamel and polyurethane paints to consumer goods, including automobiles, bicycles, and other products.