Free Electromagnetic Energy In Theory

Aether Theory and the Road to Energy Abundance

By Arend Lammertink, MScEE.
<lamare{at}gmail{dot}com>

About

This article describes some fundamental theory about a wonderful energy source that is available everywhere in the Universe: the (static) Electric and Magnetic fields, and their practical application in a number of free energy devices.


Table of contents

Foreword : (single page)

Chapter 1 : Conservation of Energy

Chapter 2 : Tesla's 'Wheelwork of Nature'

Chapter 3 : Unification of Physics

Chapter 4 : The remarkable properties of Water

Chapter 5 : todo


Foreword

This document is being submitted in memory of Stanley Meyer and Gerrit Stokreef. Stanley Meyer was a great tinkerer who dared to challenge the powers that were and paid for it with his life. Gerrit Stokreef was one of my neighbors in the place where I grew up. It was a very warm neighborhood filled with loving, honest people who all had to work hard to make a living. Gerrit was always there when you needed him; he just never said "No". He lent me his oscilloscope years ago. I hardly used it until after he passed away and left it to me. Now I know he lost his fight to cancer because the powers that were didn't want us to use the cures invented by Royal Rife. But the rules of the game have changed now. The genie is out of the bottle, folks, and there is no way to put it back in there. May Stan's dream finally be realized and may there be peace on this planet, because when there's no need for oil anymore, which will put the powers that were out of business, who in his right mind would ever fight a war again?

While studying various articles and discussions about Free Energy, I noticed some striking similarities between a number of systems, notably those made by John Bedini as well as Stan Meyer's Water Fuel Cell. At some point, it occurred to me that there might be a common explanation behind these different systems, which all appear to be some form of (electrolytic) capacitor. In various discussions at the Energetic Forum I have made an attempt to formulate a theory to explain a number of phenomena that have been reported in relation to these systems. Since the relevant information has been scattered all over the forum, it is my intention to bring all that information together and assemble it here.

I hope that this information is helpful to those who are better experimenters than I am, so that this technology will be further developed in the spirit of open source. I hope other engineers and scientists will study this article and the referenced material and make products that put this technology in the hands of the people of this planet, so that disasters like the one in the Gulf of Mexico never have to happen again. I also hope that none of this will ever be patented, because this technology is worth the most when it is actually used, not when it is put behind bars because of greed and selfishness. Haven't we had enough of that by now?

Power to the people! (pun intended)

Chapter 1: Conservation of Energy

According to the law of conservation of energy it is impossible to create energy out of nothing:

The law of conservation of energy is an empirical law of physics. It states that the total amount of energy in an isolated system remains constant over time (is said to be conserved over time). A consequence of this law is that energy can neither be created nor destroyed: it can only be transformed from one state to another. The only thing that can happen to energy in a closed system is that it can change form: for instance chemical energy can become kinetic energy.

The foundation of the law of conservation of energy lies in Newton's third law:

To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.

In essence, energy (work) is an integration (summation) of a force acting between two bodies - or particles, or even the fundamental 'God' particles, or whatever the aether/medium may be composed of - over the effect of that force: the movement of a body over a certain distance, or a displacement in or of the aether/medium.
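
As a minimal numerical illustration of this definition (the spring constant and displacement below are arbitrary example values, not taken from the text), the work done by a force can be obtained by summing force times displacement over many small steps:

    import numpy as np

    # Hypothetical example: work done compressing a linear spring, F(x) = k*x
    k = 50.0                            # assumed spring constant, N/m
    x = np.linspace(0.0, 0.1, 1000)     # displacement path from 0 to 0.1 m
    F = k * x                           # force at each point along the path

    # Trapezoidal sum of F over the displacement, i.e. the integral of F dx
    W = float(np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(x)))
    print(W)                            # ~0.25 J, matching the analytic 0.5*k*x^2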

In other words, energy is in essence a measure of the effect of the interaction between two bodies/particles and/or the medium. The fundamental point is that it is always a measure of the effect of an interaction. And since action equals minus reaction, there can be no other way than that energy is always conserved. After all, as Tesla said, something cannot act upon nothing:

It might be inferred that I am alluding to the curvature of space supposed to exist according to the teachings of relativity, but nothing could be further from my mind. I hold that space cannot be curved, for the simple reason that it can have no properties. It might as well be said that God has properties. He has not, but only attributes and these are of our own making. Of properties we can only speak when dealing with matter filling the space. To say that in the presence of large bodies space becomes curved, is equivalent to stating that something can act upon nothing. I, for one, refuse to subscribe to such a view.

It is this law of conservation of energy that causes any device which appears to produce "useful work" without the use of a visible or obvious energy source to be considered "impossible" and dismissed as perpetual motion (WikiPedia, version of August 2010):

Perpetual motion describes hypothetical machines that once started operate or produce useful work indefinitely. This definition has been expanded to include any machine that produces more work or energy than it consumes, whether or not it can operate indefinitely. Despite that[sic] fact that such machines are not possible within the framework of our current formulation of physical law the pursuit of perpetual motion remains popular.

However, even though the law of conservation is correct, this does not mean it is impossible to create "machines that once started operate or produce useful work indefinitely" at all, provided you do not take the word 'indefinitely' too literally. But what this is really about, is the second part: "any machine that produces more work or energy than it consumes". Yes, this is correct, you cannot build a machine that produces energy out of nothing, you can only make a machine that uses some (external) energy source to do useful work. The current WikiPedia revision on perpetual motion (April 20th 2015) is much more nuanced:

Perpetual motion is motion that continues indefinitely without any external source of energy. This is impossible to ever achieve because of friction and other sources of energy loss. A perpetual motion machine is a hypothetical machine that can do work indefinitely without an energy source. This kind of machine is impossible, as it would violate the first or second law of thermodynamics.

In a way, the change in WikiPedia's description of perpetual motion illustrates that, regarding this matter, things are not as easy as they seem, and that the detail regarding the use of an external energy source is indeed an important distinction to make.

Either way, in most cases, we can use the energy source of choice more or less directly, like burning fuel, and we don't count the energy we have to spend in order to get our energy source. But of course it also takes energy to drill a hole in the earth in order to extract oil for making fuel. So, in essence, the fuel supply chain ("a machine") as a whole provides more energy than it consumes, that is, the energy needed to make fuel is less than the energy released when burning the final product, the fuel.

To continue in this line of thinking, the ground source heat pump is a perfect example of a machine that uses a certain amount of energy in order to extract energy from some other external energy source provided by nature, heat naturally stored in the ground:

Ground source heat pumps, which are also referred to as Geothermal heat pumps, typically have higher efficiencies than air-source heat pumps. This is because they draw heat from the ground or groundwater which is at a relatively constant temperature all year round below a depth of about thirty feet (9 m).
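
A rough numerical sketch of this idea (the coefficient of performance and power figures below are assumed for illustration only, not taken from any cited source): the electrical work we pay for is only a fraction of the heat delivered, with the balance drawn from the ground.

    # Hypothetical ground source heat pump example (all numbers assumed)
    electrical_input_kw = 2.0                        # work we pay for
    cop = 4.0                                        # assumed coefficient of performance
    heat_delivered_kw = cop * electrical_input_kw    # heat delivered into the house
    heat_from_ground_kw = heat_delivered_kw - electrical_input_kw

    print(heat_delivered_kw)     # 8.0 kW delivered
    print(heat_from_ground_kw)   # 6.0 kW supplied "for free" by the ground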

Of course, we can apply this same principle in various ways, if we can find an appropriate external energy source provided by nature, preferably free of charge. Fortunately, an energy source exists that is available everywhere in the universe for free. It's an energy source that could in theory provide limitless energy without any pollution whatsoever, if only we could find a way to utilize it. This energy source is most generally known under the name "zero-point energy":

Zero-point energy, also called quantum vacuum zero-point energy, is the lowest possible energy that a quantum mechanical physical system may have; it is the energy of its ground state. All quantum mechanical systems undergo fluctuations even in their ground state and have an associated zero-point energy, a consequence of their wave-like nature. The uncertainty principle requires every physical system to have a zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure at any temperature because of its zero-point energy.
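
To put a number on this, the ground-state (zero-point) energy of a quantum harmonic oscillator is E0 = (1/2)hf; the 1 THz oscillation frequency below is just an assumed example value:

    h = 6.62607015e-34      # Planck constant, J*s
    f = 1.0e12              # assumed oscillation frequency: 1 THz

    E0 = 0.5 * h * f        # zero-point energy of the oscillator's ground state
    print(E0)               # ~3.3e-22 J, i.e. about 2 meV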

So far, we have not covered anything controversial. We have covered the fundamental law of conservation of energy, we have established that because of that one cannot build devices which produce energy out of nothing and we have identified zero-point energy as an energy source that could be used, in principle, as far as the law of conservation of energy is concerned, that is.

However, the question remains whether or not this is actually possible in practice, the so-called Utilization controversy (WikiPedia):

As a scientific concept, the existence of zero-point energy is not controversial. However, the ability to harness zero point energy for useful work is considered pseudoscience by the scientific community at large.

The source for the 'pseudoscience' label is a document by the US Army National Ground Intelligence Center (copy):

While it is tempting to think that the energy of the vacuum in its abundance might somehow be harvested for our general use, this is sadly not possible. Extracting energy from a ground-state system would imply that the resulting system would have a lower energy, which is a nonsensical concept given that the system is (by definition) already at its lowest energy state. Forays into "free energy" inventions and perpetual-motion machines using ZPE are considered by the broader scientific community to be pseudoscience.

Let's first examine the conclusions of the US Army document:

ZPE has been a controversial topic similar to cold fusion and antigravity for a number of years because of the hope it creates for "free energy" and grandiose solutions to the world's energy problems. This hope has made it sometimes difficult to separate the hype spread by pseudoscientists and inventors from very real and noncontroversial application potential of the small-scale forces generated by the Casimir effect stemming from vacuum energy for nanoscale devices. While pockets of research in the field do exist, those with any promise for military technologies of tomorrow are less likely to affect space travel and more likely to affect future nanoscale devices.

Let's get this straight: according to this official US Army document, "a nonsensical concept given that the system is (by definition) already at its lowest energy state" has "very real and noncontroversial application potential".

Provided it is only applied at the nanoscale, of course.

But what about the microscale? After all, the Casimir effect, which according to that same document does have very real and noncontroversial application potential, has been demonstrated at the microscale by Lamoreaux at the University of Washington:

Demonstration of the Casimir Force in the 0.6 to 6 um Range
The vacuum stress between closely spaced conducting surfaces, due to the modification of the zero-point fluctuations of the electromagnetic field, has been conclusively demonstrated. The measurement employed an electromechanical system based on a torsion pendulum. Agreement with theory at the level of 5% is obtained.
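
To get a feel for the magnitudes involved, the idealized parallel-plate Casimir pressure P = pi^2 * hbar * c / (240 * d^4) can be evaluated over roughly this separation range (note this is the textbook parallel-plate expression, not the sphere-plate geometry Lamoreaux actually used):

    import math

    hbar = 1.054571817e-34     # reduced Planck constant, J*s
    c = 2.99792458e8           # speed of light, m/s

    def casimir_pressure(d):
        """Ideal parallel-plate Casimir pressure (Pa) at plate separation d (m)."""
        return math.pi**2 * hbar * c / (240.0 * d**4)

    for d_um in (0.6, 1.0, 6.0):
        d = d_um * 1e-6
        print(d_um, "um ->", casimir_pressure(d), "Pa")
    # roughly 1e-2 Pa at 0.6 um, 1.3e-3 Pa at 1 um, 1e-6 Pa at 6 um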

Now what is the Casimir effect?

http://en.wikipedia.org/wiki/Casimir_effect

In quantum field theory, the Casimir effect and the Casimir–Polder force are physical forces arising from a quantized field. They are named after the Dutch physicist Hendrik Casimir.
The typical example is of two uncharged metallic plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field also means that there is no field between the plates, and no force would be measured between them. When this field is instead studied using the QED vacuum of quantum electrodynamics, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force—either an attraction or a repulsion depending on the specific arrangement of the two plates.

Note that in the quantum electrodynamics interpretation, there is no fundamental understanding of the phenomenon. While it is clear that some kind of kinetic field actually exists which causes the Casimir effect, this field is attributed to virtual photons, which by definition do not actually exist. Still, the need for photons to describe the phenomenon implies that some kind of oscillating field is involved, one which has been shown to have real, measurable effects. And if it has real physical effects, the logical conclusion is that it also has a real, physical origin, and thus we can come to the following hypothesis:

A real, physical field of force with an oscillating nature exists, which among other things causes the Casimir effect.

In other words: we postulate that the field causing the Casimir effect is not virtual (and thus actually non-existent), but very real and physically existing. And since it is of an oscillating nature, it is a kinetic field of force and not a static one.

The US Army document also contains an alternate view:

As stated in the conclusion, ZPE has met with much controversy and debate. An alternate view of the topic is provided below by an analyst at Defense Intelligence Agency (DIA)
"The topic of successfully exploiting zero point energy (ZPE) has importance because it represents a high-risk/high pay-off technology. This is not pseudo-science but a very serious discipline where very serious research is underway worldwide that range from investigating the Casimir effect, finding new alternative sources of energy, and developing a means of future long-range space travel. Efforts are currently underway at a U.S. aerospace corporation to include creating hardware to investigate using ZPE to provide energy. Finally, one would like to see experimental data and, hopefully, replication of such experiments representative of 'good' science. However, the amount of U.S. research dollars spent in this endeavor is abysmal such that even the simplest experiment cannot be performed. Although we are aware of only modest funding worldwide for this type of research, the Intelligence Community should monitor the more controversial aspects of ZPE, or we may miss an important foreign innovational leap forward, thereby leaving us vulnerable to technology surprise."
Note: This alternate view was provided by an analyst of DIA and represents the view of this one analyst. It does not represent a DIA position.

Apparently, within "three letter US agencies" there is still room for debate on this one...

Wikipedia also refers to another document, a NASA contractor report:

According to a NASA contractor report, "the concept of accessing a significant amount of useful energy from the ZPE gained much credibility when a major article on this topic was published in Aviation Week & Space Technology (1 March 2004), a leading aerospace industry magazine".

This report reads (page 66):

One can put forth the hypothesis that ZPE is potentially such an energy source that can possibly explain the "excess output" inventors have claimed to observe. The calculated spatial density of ZPE is incomprehensibly large (as described in Section 4.1). If these calculated values are correct and a very small fraction of ZPE could be obtained in a system output, then this output could readily exceed the conventional types of energy entering the system. The working hypothesis in this report is that (excluding claims associated with poor measurements and intentional fraud) ZPE has been demonstrated a number of times, and that an examination of what is common or similar between claimed technologies could lead to a theoretical understanding of the science involved. Once the underlying scientific theory is understood, it may be possible to derive a short-list of "principles" that could be used to develop ZPE technology.

This seems like a very sensible working hypothesis to me, which I fully subscribe to. Furthermore, the idea to examine what is common or similar between a number of claimed technologies in order to come to a theoretical understanding of the science involved is exactly the purpose of this work.

However, doing so is not an easy task. It involves returning to the roots of our current scientific models, correcting some fundamental errors which were introduced in the 20th century, and dealing with rather nonsensical predictions of Quantum Mechanics, such as alleged "entanglement" and an alleged curving of space-time itself. By returning to an aether theory, whereby we model the aether as a compressible gas/fluid, we can finally come to the long-sought 'Unification of Physics' and thus to a workable theoretical understanding of the science involved.

But before we do that, we shall first examine the nature of the electric field, thereby assuming that the static electric field, whatever it may actually be, is a kinetic force caused by some kind of movement with a finite speed.

Chapter 2: Tesla's 'Wheelwork of Nature'

Throughout space there is energy. Is this energy static or kinetic? If static our hopes are in vain; if kinetic — and this we know it is, for certain — then it is a mere question of time when men will succeed in attaching their machinery to the very wheelwork of nature. - Nikola Tesla, 1892

Static or kinetic?

From the assumption that the electric field propagates at a finite speed, we explain that a circulation of energy between the vacuum and the propagating field(s) exists and is therefore part of ZPE.

Tom Bearden has made a number of videos as well as an article (copy) in which he explains how he thinks electrical circuits are actually powered.

Here's a simple explanation of what powers every electrical circuit. When we crank the shaft of the generator and rotate it, the rotation transforms the input "mechanical" energy into internal "magnetic field" energy. In that little part of the circuit that is between the terminals of the generator and inside it, the magnetic field energy is dissipated on the charges right there, to do work on them. This work (expending the magnetic energy) forces the negative charges in one direction, and the positive charges in the other direction. [...] That's all that rotating the shaft of the generator accomplishes. None of that input shaft energy was transformed into EM energy and sent out down the powerline, as electrical engineers assume.
Not to worry, energy does get sent down the power line but not from the generator shaft energy or its transduction. Essentially then, all the energy we put into the shaft of the generator is dissipated inside the generator itself, to push the positive charges in one direction and the negative charges in the other. The separation of the charges forms what is called a "dipole" (opposite charges separated from each other a bit). That is all that the generator does. That is all that burning all that coal or oil or gas does. It heats a boiler to make steam, so that the steam runs a steam turbine attached to the shaft of the generator, and turns it -- and therefore forcing those charges apart and making that dipole between the terminals of the generator.

This is a very important principle to understand, even though Bearden is a bit off, IMHO, and it is very hard to get this straight. It does take energy to separate the charges, and that energy is used to change the configuration of the electric field. The field is not the same before and after a separation of charges has been done, so the applied energy is converted into a form of energy that can perhaps be described as a stress, a disturbance, of the overall electric field. And when the charges flow through the circuit, one way or the other, the same amount of energy is released to the circuit as the amount of energy needed to separate the charges. If all the energy we put into the shaft of the generator really were "dissipated inside the generator itself", big generators would heat up like hellfire.

Imagine a room with a fan and a door. When the door is opened, the airflow, wind, generated by the fan pushes against the door and tries to shut it. While opening the door, you have to push it against the air flow, which costs you energy. You can get that same amount of energy back, when you use the pressure of the airflow pushing against the door to do work, like cracking a peanut. However, the fan is not powered by the energy you have spent to open the door, it is a separate energy flow that is powered by something else. In this analogy, the door stands for the charges (mass) that move around and can be used to do work while the airflow (wind) stands for the electric field that causes the charges to move around. The only thing is that the door is the fan. So, we get all those little fandoors we can push around and as long as we keep using the same fandoors to create the airflow and to do the work, we will never ever be able to extract more energy from the airflow than we have spent ourselves to open the door.

So, these fandoors (charges) are really wonderful things. You open the door and mother nature (the vacuum) spins the fan and gives you a flow of energy you can use. Now the good news is that you can not only use this free energy to get your door shut again to do work, you can also use it to push on your neighbour's door. The bad news is that your neighbour's door also has its own fan, which has the nasty habit of blowing in the other direction, that is, it will oppose your airflow, which makes it very hard and certainly not straightforward to get a foot between these doors and keep the air flowing without paying for it. So, if you had the idea of taking an electret, a piece of permanently polarized material that continuously emits an electric field (the airflow) for free, to induce a current in a nearby wire, you're in trouble. The charges inside the wire will oppose this external field and neutralize it faster than you can blink your eye, and then the party is over. So much for that one.

So, are the engineers right and is Bearden wrong after all?

Well, the engineers are right in that you do convert mechanical energy into potential electric energy by opening the door against the airflow. But Bearden is right that the dipole that has been created is an energy source. That energy source puts out energy in the form of an electric field, real energy that is converted from ZPE or whatever into a "static" electric field, mostly to be sent into space without ever being used, except for that part that is needed to close the door again.

To sum this up: besides the energies that are normally considered, there is a second energy flow that is totally being ignored. And that is interesting, because if the law of conservation practically holds for the first flow (the opening and closing of the door), it means we can use this second, hidden energy flow (the fan) for free! This also means that electrical circuits can never be considered "isolated systems", so if you want to throw "law of conservation" arguments into the equation, you have to make damn sure that whatever energy is exchanged by the electric field with the environment can be neglected in the case at hand. In other words: electrical circuits are always interacting with the environment, even though you can often ignore that when doing energy conservation calculations. But let's read a little bit further in Bearden:

So we "see" the dipole as if it were just sitting there and pouring out real EM energy continuously, in all directions, like a spray nozzle or giant energy gusher. We don't see the input energy from the vacuum at all! But it's there, and it's well-known in particle physics. It's just that electrical engineers -- particularly those that have designed and built all our electrical power systems for more than a century -- do not know it.
So, according to proven particle physics and a Nobel Prize, the easiest thing in all the world is to extract EM energy from the vacuum. All you wish. Anywhere in the universe. For free. Just pay a little bit once, to make a little dipole, and that silly thing is like a great oil well you just successfully drilled that has turned into a mighty gusher of oil without you having to pump it. The dipole just sits there and does its thing, and it pours energy out forever, for free, as long as that dipole continues to exist.

Well, it may be right that particle physics says it's easy to extract EM energy from the vacuum, but that does not tell us how we can use that, nor how we can engineer systems that are able to make use of this unknown, or better: overlooked, territory. Where is that energy? Where does it come from and where does it go?

The answer to these questions can be found in the paper Conversion of the Vacuum-energy of electromagnetic zero point oscillations into Classical Mechanical Energy by the German professor Claus Turtur. In the chapter "A circulation of energy of the electrostatic field" (pages 10-14) he makes a straightforward calculation of the energy density of the static electric field surrounding a point charge, using nothing more than Coulomb's law and the known propagation speed of the electric field, the speed of light, and shows that there must be some kind of energy circulation between the vacuum and charge carriers:

If electrostatic fields propagate with the speed of light, they transport energy, because they have a certain energy density. It should be possible to trace this transport of energy if is really existing. That this is really the case can be seen even with a simple example regarding a point charge, as will be done on the following pages. When we trace this energy, we come to situation, which looks paradox at the very first glance, but the paradox can be dissolved, introducing a circulation of energy. This is also demonstrated on the following pages.
The first aspect of the mentioned paradox regards the emission of energy at all. If a point charge (for instance an elementary charge) exists since a given moment in time, it emits electric field and field’s energy from the time of its birth without any alteration of its mass. The volume of the space filled with this field increases permanently during time and with it the total energy of the field. But from where does this “new energy” originate? For the charged particle does not alter its mass (and thus its energy), the “new energy” can not originate from the particle itself. This means: The charged particle has to be permanently supplied with energy from somewhere. The situation is also possible for particles, which are in contact with nothing else but only with the vacuum. The consequence is obvious: The particle can be supplied with energy only from the vacuum. This sounds paradox, so it can be regarded as the first aspect of the mentioned paradox. But it is logically consequent, and so we will have to solve it later.

[...]

Important is the conclusion, which can be found with logical consequence:
On the one hand the vacuum (= the space) permanently supplies the charge with energy (first paradox aspect), which the charge (as the field source) converts into field energy and emits it in the shape of a field. On the other hand the vacuum (= the space) permanently takes energy away from the propagating field, this means, that space gets back its energy from field during the propagation of the field. This indicates that there should be some energy inside the “empty” space, which we now can understand as a part of the vacuum-energy. In section 3, we will understand this energy more detailed.
But even now, we can come to the statement:
During time, the field of every electric charge (field source) increases. Nevertheless the space (in the present work the expressions “space” and “vacuum” are use as synonyms) causes a permanent circulation of energy, supplying charges with energy and taking back this energy during the propagation of the fields. This is the circulation of energy, which gave the title for present section 2.2.
This leads us to a new aspect of vacuum-energy:
The circulating energy (of the electric field) is at least a part of the vacuum-energy. We found its existence and its conversion as well as its flow. On the basis of this understanding it should be possible to extract at least a part of this circulating energy from the vacuum – in section 4 a description is given of a possible method how to extract such energy from the vacuum.
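
The kind of bookkeeping Turtur starts from can be sketched with nothing more than Coulomb's law and the standard field energy density u = (1/2)*eps0*E^2; the charge and radius below are assumed example values:

    import math

    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    q = 1.602176634e-19        # example charge: one elementary charge, C

    def energy_density(r):
        """Energy density (J/m^3) of the Coulomb field at distance r (m)."""
        E = q / (4.0 * math.pi * eps0 * r**2)
        return 0.5 * eps0 * E**2

    def field_energy_outside(r0):
        """Total field energy (J) stored outside radius r0: q^2 / (8*pi*eps0*r0)."""
        return q**2 / (8.0 * math.pi * eps0 * r0)

    print(energy_density(1e-10))         # energy density at 0.1 nm from the charge
    print(field_energy_outside(1e-10))   # ~1.15e-18 J (about 7.2 eV) beyond 0.1 nm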

So there we are. Unless we are to assume that the static electric field propagates with an infinite speed, the static electric field (the airflow in our fandoor analogy) is on the one hand powered by the vacuum and on the other hand it powers the vacuum. And at least part of the energy in space / the vacuum, referred to by names such as "Zero Point Energy" (ZPE), virtual particle flux, the Dirac sea, Orgone, etc., is not only fueled by the electric field, it is continuously converted back into an electric field by each and every charged particle in the Universe, which makes the electric field a source of energy from a practical point of view, just like the light coming from our Sun.

(Note: an observed output of about 150 nW in comparison to a loss of about 3 nW; see the excerpt from Psiram below.)

https://www.psiram.com/en/index.php/Claus_Wilhelm_Turtur

[Image: Turtur-Rotor, Turtur's electrostatic fan wheel motor, tested in vacuum]

Between April and December 2008, Turtur conducted privately funded experiments on a "fan wheel motor" invented by him which, in his opinion, was powered by inexhaustible vacuum energy, but which at the same time required applying a high voltage (1-30 kV), something which, however, plays no part in the Casimir effect. Without high voltage, the impeller would not move. A successful replication of his experiment by other scientists is unknown as of yet (December 2009). The Austrian Harald Chmela (Borderlands), at the suggestion of Martin Tajmar, attempted a replication in a vacuum but failed[12].

Turtur used several slightly differing designs. Aluminium foil glued to balsa wood is used as material for the propeller, which floats in a water bath on small pieces of styrofoam, to which it is connected by a conductive element. Due to the high voltage between the electrically conductive impeller and an oppositely charged plate, Coulomb forces arise which turn the fan to a position of minimum energy (the direction of rotation is undetermined at first). Afterwards, the fan is expected to start rotating. The direction of rotation is said to be always the same, while the angular velocity is said to depend on the high voltage applied.

According to his own estimates, an observed output of 150 nW (nanowatt) of the engine in air and water bath was seen, with rotation times of 1-16 minutes at a few kilovolts. Details of the high voltage power supply usage are unknown, but he mentions a current limit of 50 µA for his vacuum experiments. A high voltage power supply built by Turtur was said to have been used in the experiments, and Turtur stated that he was not able to keep the output voltage constant. A replica by an Italian inventor at 38 kV yielded fluctuating high voltage currents of up to 7 mA[13]. Later experiments in vacuum with a bath of vacuum oil (a special kind of oil) are said to have required a higher voltage of 16-30 kV and yielded just an average current of 0.1 pA in vacuum (about 3 nW of power), with additional peaks of several picoampere. Rotation was said to be slower in vacuum, with a circulation time of 2 to 3 hours.
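
The nanowatt figures quoted above follow directly from P = V*I; as a quick check using the numbers given in the text (the 2 kV figure for the air test is an assumed round value standing in for "a few kilovolts"):

    # Quick check of the electrical powers quoted above (P = V * I)
    p_vacuum = 30e3 * 0.1e-12     # 30 kV at an average current of 0.1 pA
    print(p_vacuum)               # 3e-09 W, i.e. about 3 nW, as stated

    # For comparison, the claimed mechanical output in air was ~150 nW, while
    # the supply's 50 uA current limit at a few kilovolts allows an electrical
    # input on the order of a tenth of a watt:
    p_limit = 2e3 * 50e-6         # assumed 2 kV at the 50 uA limit
    print(p_limit)                # 0.1 W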

Vacuum tests: at the Otto-von-Guericke University in Magdeburg, Turtur conducted experiments in vacuum in cooperation with the local technician Wolfram Knapp, after critics had pointed out that his construction merely showed Biefeld-Brown effects. He put the impeller into a sour cream cup of the brand Milbona, which floated in oil. The impeller was connected to the high voltage power supply by a wire. According to his own report, the rotation speed decreased. A pressure of 10^-3 to 10^-5 millibar was applied; a further decrease of pressure would have resulted in boiling oil and was avoided. The vacuum oil used was of the type "Ilmvac, LABOVAC-12S", with a vapor pressure of 10^-8 mbar and a viscosity of 94 mPoise at 40 °C. Mechanical power output in vacuum was not measured, and experiments were not conducted without an oil or water bath.

According to his own report[14], no ongoing rotation over an arbitrary number of revolutions was seen in vacuum and the number of revolutions was not reproducible.


The implications of that are staggering. It means that the law of conservation of energy does not apply to 'isolated' electrical systems, because they are not actually isolated. After all, Turtur shows that energy is being extracted from the active vacuum by each and every charged particle and thus every electrical system in existence in the Universe.

Interestingly, Nikola Tesla already said the exact same thing in 1891:

Nature has stored up in the universe infinite energy. The eternal recipient and transmitter of this infinite energy is the ether. The recognition of the existence of ether, and of the functions it performs, is one of the most important results of modern scientific research. The mere abandoning of the idea of action at a distance, the assumption of a medium pervading all space and connecting all gross matter, has freed the minds of thinkers of an ever present doubt, and, by opening a new horizon—new and unforeseen possibilities—has given fresh interest to phenomena with which we are familiar of old.

Based on all this, it is clear that we need to look at electrical systems in a different way; we need a way of thinking that accounts for the energy source that is really powering our systems. In a way, we need a change in our models similar to the change from Newtonian to quantum mechanics. While Newtonian mechanics can still be used in mechanical engineering most of the time, at some point it is no longer valid.

In the same way, the current electrical engineering model is fine for most applications where it suffices to consider only the door part of our fandoor analogy, that is, by considering electrical systems basically as an analogy of hydraulics, which is literally just a variation of Newtonian mechanics. However, if you want to be able to utilize the energy source the electric field provides, there just ain't no way to do that without taking the energy exchange between an electrical system and the vacuum completely into account. And that means we have to go back to field theory instead of describing our systems in terms of discrete components, the so-called lumped element models, especially in the case where we are dealing with resonating coils. This point is explained by James and Kenneth Corum in Tesla Coils and the Failure of Lumped-Element Circuit Theory:

In the following note, we will show why one needs transmission line analysis (or Maxwell's equations) to model these electrically distributed structures. Lumped circuit theory fails because it's a theory whose presuppositions are inadequate. Every EE in the world was warned of this in their first sophomore circuits course.
All those handbook formulas that people use for inductance, L, inherently assume applications at frequencies so low that the current distribution along the coil is uniform. The real issue is that migrating voltage nodes and loops are not a property of lumped-circuit elements - they are the directly observable consequence of velocity inhibited wave interference on the self-resonant coil. Lumped element representations for coils require that the current is uniformly distributed along the coil - no wave interference and no standing waves can be present on lumped elements.
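
As a generic illustration of why the modelling choice matters (this is not the Corums' own analysis; the inductance, capacitance, wire length and velocity factor below are assumed round numbers), compare a lumped LC estimate of a coil's resonant frequency with a simple quarter-wave transmission-line estimate:

    import math

    c = 2.99792458e8               # speed of light, m/s

    # Lumped-element estimate: treat the coil as a pure L with a stray C
    L = 100e-6                     # assumed inductance, H
    C = 20e-12                     # assumed stray/self capacitance, F
    f_lumped = 1.0 / (2.0 * math.pi * math.sqrt(L * C))

    # Distributed estimate: treat the winding as a quarter-wave resonator
    wire_length = 50.0             # assumed total wire length, m
    velocity_factor = 0.8          # assumed propagation velocity relative to c
    f_quarter_wave = velocity_factor * c / (4.0 * wire_length)

    print(f_lumped / 1e6)          # ~3.6 MHz from the lumped model
    print(f_quarter_wave / 1e6)    # ~1.2 MHz from the distributed model

For these assumed values the two estimates disagree by a factor of a few, illustrating how strongly the answer depends on whether the coil is treated as a lumped element or as a distributed structure.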

So, we need to consider the fields, and that also means we need to realise that the nature of these fields is dynamic and not static. In the old Newtonian model, we consider the voltage across an impedance to be the cause for a current to occur, which in our fandoor analogy would be the pressure that the door "feels" being exerted by the airflow on its surface, while in reality it is the airflow (the electric field) that acts upon the door and not the pressure itself. In other words, it seems like the "pressure" the electric field exerts on our components is static, hence the name "static electric field", while in actual reality this force is a dynamic force: something flows along the surface that creates the pressure. Tesla already realised this in 1892:

There is no doubt that with the enormous potentials obtainable by the use of high frequencies and oil insulation luminous discharges might be passed through many miles of rarefied air, and that, by thus directing the energy of many hundreds or thousands of horse-power, motors or lamps might be operated at considerable distances from stationary sources. But such schemes are mentioned merely as possibilities. We shall have no need to transmit power at all. Ere many generations pass, our machinery will be driven by a power obtainable at any point of the universe. This idea is not novel. Men have been led to it long ago by instinct or reason; it has been expressed in many ways, and in many places, in the history of old and new. We find it in the delightful myth of Antheus [Antaeus], who derives power from the earth; we find it among the subtle speculations of one of your splendid mathematicians and in many hints and statements of thinkers of the present time. Throughout space there is energy. Is this energy static or kinetic? If static our hopes are in vain; if kinetic — and this we know it is, for certain — then it is a mere question of time when men will succeed in attaching their machinery to the very wheelwork of nature.

It is nothing less than a shame that, even more than a hundred years later, we still burn fossil fuel for our energy, basically because of arrogance, selfishness and ignorance. Still, the question remains the same. It is a mere question of time... Anyhow, there basically is a deeper cause we have to account for: the electric field itself, which is present everywhere in the Universe. With that in mind, we continue with Bearden:

The external (attached) circuits and power lines etc. catch some of that available EM energy flowing through space (generally flowing parallel to the wires but outside them). Some of the flowing energy is intercepted and diverted into the wires themselves, to power up the internal electrons and force them into currents, thus powering the entire power line and all its circuits.
However, the power system engineers use just one kind of circuit. In the standard "closed current loop" circuit, all the "spent electrons" (spent after giving up their excess energy in the loads, losses, etc.) are then forcibly "rammed" back through that little internal section between the ends of the source dipole (between the terminals). These "rammed" electrons smash the charges in the dipole away, and destroy the dipole then and there.
It can easily be shown that half the "caught" energy in the external circuit is used to destroy that source dipole, and nothing else.
For more than a century, our misguided engineers have thus used a type of circuit that takes half of the energy it catches, and uses that half to destroy the source dipole that is actually extracting the EM energy from the vacuum and pouring it out of the terminals for that power line to "catch" in the first place! The other half of the "caught energy" in the powerline is used to power the external loads and losses.
So half the caught energy in the power line is used to kill the source dipole (kill the free energy gusher), and less than half is used to power the loads. It follows that our electrical engineers are trained to use only those power circuits that kill themselves (kill their gushing free energy from the vacuum) faster than they can power their loads.
Well, to get the energy gusher going again, the dipole has to be restored in order to extract the energy and pour it out again.
So we have to pay to crank the shaft of that generator some more, to turn that generator some more, so that we can dissipate some more magnetic energy to re-make the dipole. We have to work on that shaft at least as much as the external circuit worked on that source dipole to destroy it. So we have to "input more shaft energy" to the generator than the external power system uses to power its loads. Since we pay for the input shaft energy, we have to keep on burning that coal, oil, and gas etc. to do so.
All our electrical power systems are "suicidal" vacuum-powered systems, freely extracting their useful EM energy from the seething vacuum, but deliberately killing themselves faster than they power their loads.
All that the burning of all that coal, oil, gas, etc. accomplishes is to continually remake the source dipole, which our engineers ensure will then be killed by the system itself faster than the system gives us work in the load.

Now isn't that interesting, half the caught energy in the power line is used to kill the source dipole, and less than half is used to power the loads? Think about it, how can that be?

There is an essential difference between the Newtonian analogy we use in electrical engineering (closed circuits) and the actual reality. The analogy of a capacitor in hydraulics (the Newtonian analogy) is a piston moving back and forth in a closed cylinder wherein gas is pressurized. And here's the difference: imagine moving the piston inwards, pressurizing the gas, and putting the thing on your workbench. The piston will immediately move back, because of the gas pressure. Now charge a capacitor and put it on your workbench. See the difference? The capacitor will just sit there, keeping its charge. In other words: our hydraulic analogy is unstable, it 'wants' to release its energy, while our actual electrical component is stable when 'pressurized'. It will only 'release' its energy when something external is done to it. It has to be disturbed, because the charges in a capacitor actually attract one another, which makes them like to stay where they are. So, when 'discharging' a capacitor, as a matter of fact, these attraction forces have to be overcome. And that does not release energy at all; it costs energy to do that. So, it actually takes the same amount of energy to charge a capacitor as it takes to discharge the capacitor.
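
For reference, the energy involved in charging (and later discharging) a capacitor is the standard E = (1/2)*C*V^2; the capacitance and voltage below are arbitrary example values:

    C = 100e-6      # assumed capacitance: 100 uF
    V = 12.0        # assumed charging voltage: 12 V

    E = 0.5 * C * V**2     # energy stored by separating the charges
    print(E)               # 7.2e-3 J, released again when the capacitor is
                           # discharged through a load (ignoring losses)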

It is undoubtedly because of this that Steinmetz wrote, as early as the beginning of the twentieth century:

"Unfortunately, to large extent in dealing with dielectric fields the prehistoric conception of the electrostatic charge (electron) on the conductor still exists, and by its use destroys the analogy between the two components of the electric field, the magnetic and the dielectric, and makes the consideration of dielectric fields unnecessarily complicated. There is obviously no more sense in thinking of the capacity current as current which charges the conductor with a quantity of electricity, than there is of speaking of the inductance voltage as charging the conductor with a quantity of magnetism. But the latter conception, together with the notion of a quantity of magnetism, etc., has vanished since Faraday's representation of the magnetic field by lines of force."

So, while it may seem that the conservation law holds when considering electrical circuits in their 'prehistoric' analogy, in actual truth this is only the case because the interactions with the environment, the active vacuum, balance one another out. In reality, twice the amount of work has been done than seems to have been done!

Summary

Any charge continuously emits an energy field, an electric field, spreading with the speed of light, which is the real energy source that makes our circuits run. This energy field, generated by the charges in our wires, is not created out of thin air. Since there is a continuous flow of energy out of every charge, there also is a continuous flow of energy going into every charge. And that is where the energy eventually comes from: right from the vacuum itself. For our purposes, it doesn't really matter how the energy that ends up in the electric field is being taken out of the vacuum. It may be ZPE, it may be a "virtual particle flux", it may be anything. It doesn't matter, because we don't need to know.

All we need to know is that somehow, some form of energy flows into each and every charge in the universe, and this energy flow is continuously converted into an outflowing electric energy field by each and every charge in the universe, 24/7, 365 days a year, for free.

And this is the basic concept to understand. The electric field comes for free, as long as you keep the charges separated and don't disturb them.

So, where does all this leave us? We can spend the effort of turning the shaft of a generator, which will separate the charges in the system we want to power and create a dipole. When we do this, we do not actually store energy in the dipole; we change the configuration of the electric field. When we subsequently send those same charges through the system we want to power, it is the active vacuum, the environment, which is kind enough to provide us with the energy that is needed to kill the dipole we have created to be able to power our load, and with the energy to actually power our load as well. As we have seen, this is an exercise with a closed wallet from our point of view. The load receives the exact same amount of energy that we have put into the system ourselves as mechanical energy, apart from the losses. So, all things considered, the Newtonian analogy we use in electrical engineering is perfectly valid and applicable. Except for one tiny little detail.

We change the configuration of the electric field when we operate an electrical circuit, and since we eventually get the same amount of energy back through our load while doing this, this means we can actually manipulate the electric field for free, just by powering our circuits the way we always do. Get the point? While we are opening and closing our fandoor, we influence the airflow in our neighborhood without having to pay a dime for that in terms of energy! That means we can manipulate our neighbor's fandoor for free. So, all we need to do is figure out how to use our free manipulative power to put the fandoors in our neighborhood to work, such that it is the environment that delivers the energy to power the neighbor's load, just as it powers our load. In other words: we have to manipulate the electric field in such a way that charge carriers in the environment of our systems are moved around so that they perform useful work, in such a way that it isn't us who provide the energy, but someone else: the electric field itself. That means most of all that we have to make sure that those neighboring charges don't end up in our circuit, since then they will kill our dipole and we will have to pay the price, and secondly that we have to make sure that we don't disturb the charge carriers that make up our voltage source.

Let's take a look at how three inventors managed to do just that by using the power of resonance. You can find that part after the intermezzo with some interesting references.

Chapter 3: Unification of Physics

"It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do" - Albert Einstein.

The dual slit experiment and the wave particle duality principle

This wave-particle duality gives us a major point regarding ZPE. Because all known particles are electromagnetic waves, it follows that even at absolute zero the particles themselves *must* oscillate and therefore emit an electrostatic field as well as a magnetic field.

So, when considering Zero Point Energy from the perspective of movements of the particles, one does not take the oscillations which make up the particles themselves into account. How could an electromagnetic wave stop oscillating and radiating energy at absolute zero?

Another point is the question of the weak and strong nuclear forces. How can there be any forces other than the electromagnetic acting on electromagnetic waves?

David LaPoint's clip with steel balls rotating under magnets

-> spinning plasma

https://thesingularityeffect.wordpress.com/physics/what-i-think-the-primer-fields/some-questions-and-answers-from-david-lapoint/

http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality http://en.wikipedia.org/w/index.php?title=Wave%E2%80%93particle_duality&oldid=659839762

Wave–particle duality is the concept that every elementary particle or quantic entity exhibits the properties of not only particles, but also waves. It addresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects.

[...]

The idea of duality originated in a debate over the nature of light and matter that dates back to the 17th century, when Christiaan Huygens and Isaac Newton proposed competing theories of light: light was thought either to consist of waves (Huygens) or of particles (Newton). Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.

[...]

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter, not just light, has a wave-like nature; he related wavelength (denoted as λ), and momentum (denoted as p):
    \lambda = \frac{h}{p}
This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = \tfrac{E}{c} and the wavelength (in a vacuum) by λ = \tfrac{c}{f}, where c is the speed of light in vacuum.
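
As a worked example of the de Broglie relation (the electron speed below is just an assumed, non-relativistic value):

    h = 6.62607015e-34       # Planck constant, J*s
    m_e = 9.1093837015e-31   # electron mass, kg
    v = 1.0e6                # assumed electron speed, m/s (non-relativistic)

    p = m_e * v              # momentum
    wavelength = h / p       # de Broglie wavelength
    print(wavelength)        # ~7.3e-10 m, i.e. about 0.7 nm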

[...]

Treatment in modern quantum mechanics
Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation, so another kind of wave equation is used.
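
For reference, the differential equation referred to above, the time-dependent Schrödinger equation for a particle of mass m in a potential V, reads:

    i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r},t) + V(\mathbf{r}) \, \psi(\mathbf{r},t)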

Because all known matter (particles) has a wave-like nature, described by electrodynamic theory, I propose the following two hypotheses:

1. All matter is the manifestation of a localized electrodynamic phenomenon;

2. There exists but one fundamental interaction: the electrodynamic, i.e. electromagnetic, interaction.

So, what I am saying is that all known particles ARE some kind of local electromagnetic wave phenomenon and that therefore a fully developed electromagnetic theory should be capable of describing and predicting all known physical phenomena, thus integrating all known physics within a Unified Theory requiring only one fundamental interaction.


Scientific theory:

A scientific theory is a well-substantiated explanation of some aspect of the natural world that is acquired through the scientific method and repeatedly tested and confirmed through observation and experimentation. As with most (if not all) forms of scientific knowledge, scientific theories are inductive in nature and aim for predictive power and explanatory capability.
The strength of a scientific theory is related to the diversity of phenomena it can explain, and to its elegance and simplicity (Occam's razor). As additional scientific evidence is gathered, a scientific theory may be rejected or modified if it does not fit the new empirical findings; in such circumstances, a more accurate theory is then desired, free of confirmation bias. In certain cases, the less-accurate unmodified scientific theory can still be treated as a theory if it is useful (due to its sheer simplicity) as an approximation under specific conditions (e.g. Newton's laws of motion as an approximation to special relativity at velocities which are small relative to the speed of light).
Scientific theories are testable and make falsifiable predictions. They describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry (e.g. electricity, chemistry, astronomy).

[...]

The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis.

http://en.wikipedia.org/wiki/Falsifiability

Falsifiability or refutability of a statement, hypothesis, or theory is an inherent possibility to prove it to be false. A statement is called falsifiable if it is possible to conceive an observation or an argument which proves the statement in question to be false. In this sense, falsify is synonymous with nullify, meaning not "to commit fraud" but "show to be false". Some philosophers argue that science must be falsifiable.
For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as All swans are white, yet it is logically possible to falsify it by observing a single black swan. Thus, the term falsifiability is sometimes synonymous to testability. Some statements, such as It will be raining here in one million years, are falsifiable in principle, but not in practice.
The concern with falsifiability gained attention by way of philosopher of science Karl Popper's scientific epistemology "falsificationism". Popper stresses the problem of demarcation—distinguishing the scientific from the unscientific—and makes falsifiability the demarcation criterion, such that what is unfalsifiable is classified as unscientific, and the practice of declaring an unfalsifiable theory to be scientifically true is pseudoscience. The question is epitomized in the famous saying of Wolfgang Pauli that if an argument fails to be scientific because it cannot be falsified by experiment, "it is not only not right, it is not even wrong!"

There are two known types of electromagnetic waves:

1. the transverse wave;

2. the vortex-based wave.

http://en.wikipedia.org/wiki/Optical_vortex

An optical vortex (also known as a screw dislocation or phase singularity) is a zero of an optical field, a point of zero intensity. Research into the properties of vortices has thrived since a comprehensive paper by John Nye and Michael Berry, in 1974,[1] described the basic properties of "dislocations in wave trains". The research that followed became the core of what is now known as "singular optics".

[...]

In an optical vortex, light is twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light, with a dark hole in the center. This corkscrew of light, with darkness at the center, is called an optical vortex.
The vortex is given a number, called the topological charge, according to how many twists the light does in one wavelength. The number is always an integer, and can be positive or negative, depending on the direction of the twist. The higher the number of the twist, the faster the light is spinning around the axis. This spinning carries orbital angular momentum with the wave train, and will induce torque on an electric dipole.

[...]

An optical singularity is a zero of an optical field. The phase in the field circulates around these points of zero intensity (giving rise to the name vortex). Vortices are points in 2D fields and lines in 3D fields (as they have codimension two). Integrating the phase of the field around a path enclosing a vortex yields an integer multiple of 2π. This integer is known as the topological charge, or strength, of the vortex.

[...]

A q-plate is a birefringent liquid crystal plate with an azimuthal distribution of the local optical axis, which has a topological charge q at its center defect. The q-plate with topological charge q can generate a ±2q charge vortex based on the input beam polarization.
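To make the phase-integration definition quoted above concrete, here is a minimal numerical sketch in Python (the synthetic field is invented purely for illustration): it builds a scalar field (x + iy)^m, which carries a vortex of charge m on its axis, and recovers the topological charge by summing the wrapped phase differences around a closed loop enclosing the singularity.

import numpy as np

def topological_charge(field, cx, cy, radius, samples=400):
    """Estimate the topological charge of a 2D complex field by integrating
    the phase along a circle of the given radius around (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = cx + radius * np.cos(angles)
    ys = cy + radius * np.sin(angles)
    phases = np.angle(field(xs, ys))
    # Wrapped phase differences around the closed loop sum to 2*pi*charge.
    dphi = np.angle(np.exp(1j * np.diff(np.append(phases, phases[0]))))
    return np.sum(dphi) / (2.0 * np.pi)

# Synthetic vortex of charge m = 3: the amplitude vanishes on the axis and
# the phase winds three times per revolution around it.
m = 3
vortex = lambda x, y: (x + 1j * y) ** m

print(topological_charge(vortex, 0.0, 0.0, 1.0))   # ~3.0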

Paul Stowe

http://vixra.org/pdf/1310.0237v1.pdf

in this model which is founded upon Maxwell’s, charge itself is a basic oscillation of momentum at each and every point in the field and with units of kg/sec we finally realize that the charge to mass ratio is simply the oscillation’s frequency ν'

Div, Grad, Curl and the Fundamental Forces

In this model any field gradient (Grad) results in a perturbative force. Likewise the point divergence (Div) in the quantity we call Charge. Finally the net circulation (Curl) at any point defines the magnetic potential. Electric and magnetic effects are both well-defined and almost completely quantified by Maxwell's 1860-61 work On Physical Lines of Force. It is interesting that in this model (an extension of his) the electric potential (E) has units of velocity (m/sec) and the magnetic potential (B) is dimensionless. This, along with Maxwell's original work, could help shed light on the actual physical mechanisms involved in the creation of both forces. Inspection strongly suggests that both the electric and magnetic forces are Bernoulli flow induced effects. In this view opposing currents reduce the net flow velocity between the vortices, increasing pressure and creating an apparent repulsive force. Complementary currents increase the net velocity, resulting in an apparent attraction.

Gravity as the Gradient of Electric Field

If the electric potential (E) is a net speed, its gradient will be an acceleration:

g = ∇(½E²)   (Eq. 35)

Since this potential is squared, the sign of E does not matter and the gradient vector is always directed towards the point of highest intensity. This provides a natural explanation for the singular attractive nature of the gravitational force.
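As a rough numerical illustration of the claim quoted above (the one-dimensional field profile below is invented purely for this example and is not taken from Stowe's paper): take a scalar "electric potential" E(x) with units of velocity, form the gradient of E²/2, and the resulting acceleration points towards the region of highest intensity from both sides, regardless of the sign of E.

import numpy as np

# Invented 1D profile: a localized "potential" E(x) in m/s, peaked at x = 0.
x = np.linspace(-10.0, 10.0, 2001)            # metres
E = 5.0 * np.exp(-x**2 / 4.0)                 # m/s

# Acceleration as the gradient of E^2/2 (cf. Eq. 35 above).
a = np.gradient(0.5 * E**2, x)                # m/s^2

# It points towards x = 0 from both sides ...
print(np.all(a[x < -0.1] > 0), np.all(a[x > 0.1] < 0))   # True True
# ... and flipping the sign of E changes nothing, since only E^2 enters.
print(np.allclose(a, np.gradient(0.5 * (-E)**2, x)))     # True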


Dear Jim,

All right, I'll take the bait.

( I am referring to your presentation on YouTube: )

So, let's go.

First of all, your presentation is a much simplified schematic overview of the real experiment and therefore omits essential details. However, the root of the problem with the interpretation quantum mechanics gives to this experiment is the denial of the existence of longitudinal dielectric (Tesla) wave phenomena in current physics. The root of this misconception can be found in the Maxwell equations describing the EM fields. You see, the fields are depicted as being caused by charge carriers, which would be protons and electrons, which would themselves be EM wave phenomena as shown by the dual slit experiment. In other words: the current Maxwell equations describe EM phenomena as being caused by EM waves, which essentially mixes up cause and effect. A chicken-and-egg problem is introduced here, which should not be there.

When we correct the Maxwell equations for this fundamental problem, we essentially end up with descriptions of waves as would occur in a fluid, which we used to call the aether. And with that, we can also do away with Einstein's relativity theory, because we have no need for the Lorentz transform since the Maxwell equations "without charge or current" transform perfectly under the good old Galilean transform, as shown by Dr. C.K. Thornhill:

http://www.etherphysics.net/CKT4.pdf

And in fact, contrary to popular belief, the incorrectness of the relativity theory is confirmed by one of the leading experts in GPS technology, Ron Hatch:

http://www.youtube.com/watch?v=CGZ1GU_HDwY

When we take a closer look at the original Maxwell papers and introduce a compressible aether instead of an incompressible one, we can come to a proper foundation for a "theory of everything" explaining not only the magnetic field as a rotational force, but also explain gravity as being the gradient of the Electric field, as has been done by Paul Stowe:

http://vixra.org/abs/1310.0237

Note that at this point, we have already done away with gravity as being one of the fundamental forces. However, we still have a problem with the "weak and strong" interactions, because if all matter is an electromagnetic phenomenon, there can be no other fundamental forces but the electromagnetic ones. In other words: the forces keeping an atom together MUST also be electromagnetic in nature. And this can also be experimentally shown, as has been done by David LaPoint:

Getting back to the dual slit experiment, one of the most fundamental assumptions underneath the currently accepted interpretation is that "photons" or "particles" are being emitted at random times from any given source. However, that would lead to particles falling on the slits which would not be in phase, and therefore we could not get an interference pattern. In other words: the emission of particles/photons from any real, physical source MUST be occurring along a deterministic process and not a random process. Said otherwise: the atoms making up any real, physical source MUST be vibrating in resonance in order to get a nice interference pattern. This can be made obvious by considering the 21 cm hydrogen line ( http://en.wikipedia.org/wiki/Hydrogen_line ). It is ludicrous to assume photons with a wavelength of no less than 21 cm can be caused by changes of energy states of individual atoms occurring at random moments. In other words: there is no question that these phenomena are the result of some kind of resonance occurring within your photon/particle source.

Now of course if you have a resonating photon/particle source, which acts as an antenna, you will get the so-called "near field" as well as the so-called "far field":

http://en.wikipedia.org/wiki/Near_and_far_field

These have thus far not been properly explained. Quantum Mechanics resorts to the invention of "virtual" - by definition non-existing - photons in order to straighten things out:

"In the quantum view of electromagnetic interactions, far-field effects are manifestations of real photons, whereas near-field effects are due to a mixture of real and virtual photons. Virtual photons composing near-field fluctuations and signals, have effects that are of far shorter range than those of real photons."

However, since we know that the magnetic field is a rotational force, we can deduce that any photon or particle (the "far field"), which is an EM wave phenomenon, contains some kind of (magnetic) vortex, one way or the other, and therefore is not a "real" transverse wave. So, the difference between the near and far fields in reality is simply that the near field is a real (surface) transverse wave, while the far field is made up of particles/photons characterized by the existence of some kind of (magnetic) vortex keeping the photon/particle together.

Since in the transition from the near field to the far field the propagation mode of the phenomena changes from "transverse" to "particle" mode, it is clear that a transverse wave on the surface of some kind of material, no matter in what way it has been induced, can radiate and/or absorb "photon/particle mode" wave phenomena, since an antenna works both ways, as a transmitter and as a receiver.

However, when we have a transverse wave propagating along the surface of some material, we also have an associated dielectric (pressure) type wave, the longitudinal wave, which propagates at a speed of sqrt(2) times the speed of light through the vacuum. Of course, this propagation mode also propagates energy. Energy which is not being accounted for in the standard model, hence the need for the invention of "dark" matter and energy in order to fill in the gaps.

So, what we are really looking at with the dual slit experiment source is a source of BOTH "particle/photon" modes of EM radiation AND longitudinal dielectric waves, which interact with one another. When longitudinal dielectric waves interact with a material capable of resonating at the frequency of these waves, it is pretty obvious that transverse surface waves are induced, which can in turn also emit "photon/particle" mode waves.

As I already explained, photons/particles are characterized by the existence of some kind of rotational, magnetic vortex, which cannot pass the slits. And therefore, AFTER the slits, we are left ONLY with the longitudinal phenomena and NO EM wave. These are introduced at the surface of the screen and/or at your "atom counter", both surfaces acting as receiving and transmitting antennas at the same time. Now whenever you take energy from the waves propagating around your slits ("counter on"), you influence the resonance taking place within the experiment. When you do not take any energy away ("counter off"), this influence is no longer present. And therefore you get the result that switching the counter on or off influences your experiment.

Any questions??


Very interesting. A leading expert in GPS stuff disagreeing with relativity:

http://www.youtube.com/watch?v=CGZ1GU_HDwY

RON HATCH: Relativity in the Light of GPS | EU 2013

Natural Philosophy Alliance Conference (NPA20), July 10-13, 2013, College Park, Maryland.

Perhaps you've already heard that GPS, by the very fact that it WORKS, confirms Einstein's relativity; also that Black Holes must be real. But these are little more than popular fictions, according to the distinguished GPS expert Ron Hatch. Here Ron describes GPS data that refute fundamental tenets of both the Special and General Relativity theories. The same experimental data, he notes, suggests an absolute frame with only an appearance of relativity.

Ron has worked with satellite navigation and positioning for 50 years, having demonstrated the Navy's TRANSIT System at the 1962 Seattle World's Fair. He is well known for innovations in high-accuracy applications of the GPS system including the development of the "Hatch Filter" which is used in most GPS receivers. He has obtained over two dozen patents related to GPS positioning and is currently a member of the U.S National PNT (Positioning Navigation and Timing) Advisory Board. He is employed in advanced engineering at John Deere's Intelligent Systems Group.

Koevavla:

Hi Arend,

What a wonderfully good presentation. The only thing I don't quite understand is that 's' factor, but apart from that I agree 100% with Ron Hatch.

The measurement results of Roland de Witte also agree well with Ron's theory: a variable speed of light in connection with a one-way speed-of-light measurement. If the frequency of falling light remains unchanged, then the wavelength, and thus also the speed of light, would in fact have to decrease somewhat towards the Earth's surface. I heard about this idea years ago via Vesselin Petkov. I think Ron is very good scientifically.

A missed opportunity for Ron is explaining the bending of light by gravity. Ron states quite clearly that a gravitational field reduces the speed of light somewhat, and if you combine this with the bending of light as it travels through a gravitational field, then you can picture the aether density as a variable electromagnetic polarizability, just like air or water.

The aether apparently has an electromagnetic property, and so the aether can also be influenced by means of electrodynamic effects. And that is mankind's greatest secret.


Ron Hatch

http://www.worldsci.org/php/index.php?tab0=Scientists&tab1=Scientists&tab2=Display&id=257

''Biography

Ronald Ray Hatch, born in Freedom, Oklahoma, now of Wilmington, California, received his Bachelor of Science degree in physics and math in 1962 from Seattle Pacific University. He worked at Johns Hopkins Applied Physics Lab, Boeing and Magnavox as Principal Scientist, before becoming a Global Positioning System (GPS) consultant. In 1994 he joined Jim Litton, K. T. Woo, and Jalal Alisobhani in starting what is now NavCom Technology, Inc. He has served a number of roles within the Institute of Navigation (ION), including Chair of the Satellite Division, President and Fellow. Hatch received the Johannes Kepler Award from the Satellite Division and the Colonel Thomas Thurlow Award from the ION. He has been awarded twelve patents either as inventor or co-inventor, most of which relate to GPS, about which he is one of the world's premier specialists. He is well known for his work in navigation and surveying via satellite.''

In a pair of articles, Hatch shows how GPS data provides evidence against, not for, both special and general relativity: "Relativity and GPS," parts I and II, Galilean Electrodynamics, V6, N3 (1995), pp. 51-57; and V6, N4 (1995), pp. 73-78. In his 1992 book, Escape From Einstein, Hatch presents data contradicting the special theory of relativity, and promotes a Lorentzian alternative described as an ether gauge theory.
Escape from Einstein
Einstein's fame can, to some extent, be ascribed to the fact that he originated a theory which, though contrary to common sense, was in remarkable agreement with the experimental data. Ron Hatch claims there is increasingly precise data which contradicts the theory. But he does not stop there. He offers an alternative - an ether gauge theory, which offers an unparalleled, common-sense explanation of the experimental data. The new theory is distinguished by:
* a return to time simultaneity, even though clocks (mechanical and biological) can run at different rates
* the replacement of the Lorentz transformations with gauge transformations (scaled Galilean transformations)
* a unification of the electromagnetic and gravitational forces
* a clear explanation of the source of inertia
* a clear and consistent explanation of the physics underlying the equivalence principle
In addition to the above, a comprehensive review of the experimental record shows that the new ether gauge theory agrees with experiment better than the special theory. This releases everyone from the necessity of accepting a nonsensical theory which denies the common, ordinary sense of elapsed time. Rather than curved space, the ether gauge theory postulates an elastic ether. This results in relatively minor modifications to the general theory mathematics, but with significant interpretational differences.

http://www.gps.gov/governance/advisory/members/hatch/

Ron Hatch is an expert in the use of GPS for precision farming, as well as other applications. Currently a consultant to John Deere, he was formerly the Director of Navigation Systems Engineering and Principal and co-founder of NavCom Technology, Inc., a John Deere company. That company provides a commercially operated differential GPS augmentation service to the agriculture industry and other high accuracy users.
Throughout his 30-year career in satellite navigation systems with companies such as Boeing and Magnavox, Hatch has been noted for his innovative algorithm design for Satellite Navigation Systems. He has consulted for a number of companies and government agencies developing dual-frequency carrier-phase algorithms for landing aircraft, multipath mitigation techniques, carrier phase measurements for real time differential navigation at the centimeter level, algorithms and specifications for Local Area Augmentation System, high-performance GPS and communication receivers, and Kinematic DGPS. In addition to the Hatch-Filter Technique, Hatch has obtained numerous patents and written many technical papers involving innovative techniques for navigation and surveying using the TRANSIT and GPS navigation satellites, authored Escape From Einstein in which he challenges competing relativity and other theories, and contributed significantly to the advancement of satellite navigation.
In 1994, Hatch received the Johannes Kepler Award from the Institute of Navigation for sustained and significant contributions to satellite navigation.

http://ivanik3.narod.ru/GPS/Hatch/relGPS.pdf

copy: http://www.tuks.nl/pdf/Reference_Material/Ronald_Hatch/Hatch-Relativity_and_GPS-II_1995.pdf

''"Relativistic" effects within the Global Positioning System (GPS) are addressed.
Hayden has already provided an introduction to GPS, so the characteristics of the system are not reviewed.
''There are three fundamental effects, generally described as relativistic phenomena, which affect GPS. These are: (1) the effect of source velocity (GPS satellite) and receiver velocity upon the satellite and receiver clocks; (2) the effect of the gravitational potential upon satellite and receiver clocks; and (3) the effect of receiver motion upon the signal reception time (Sagnac effect). There are a number of papers which have been written to explain these valid effects in the context of Einstein's relativity theories. However, quite often the explanations of these effects are patently incorrect. As an example of incorrect

explanation, Ashby [2] in a GPS World article, "Relativity and GPS," gives an improper explanation for each of the three phenomena listed above.''

The three effects are discussed separately and contrasted with Ashby's explanations. But the Sagnac effect is shown to be in conflict with the special theory. A proposed resolution of the conflict is offered. The Sagnac effect is also in conflict with the general theory, if the common interpretation of the general theory is accepted. The launch of GPS Block II satellites capable of intersatellite communication and tracking will provide a new means for a giant Sagnac test of this general theory interpretation. Other general theory problems are reviewed and a proposed alternative to the general theory is also offered.

http://en.wikipedia.org/wiki/Fictitious_force

A fictitious force, also called a pseudo force,[1] d'Alembert force[2][3] or inertial force,[4][5] is an apparent force that acts on all masses whose motion is described using a non-inertial frame of reference, such as a rotating reference frame. The force F does not arise from any physical interaction between two objects, but rather from the acceleration a of the non-inertial reference frame itself.

[...]

Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m.
A fictitious force on an object arises when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. As a frame can accelerate in any arbitrary way, so can fictitious forces be as arbitrary (but only in direct response to the acceleration of the frame). However, four fictitious forces are defined for frames accelerated in commonly occurring ways: one caused by any relative acceleration of the origin in a straight line (rectilinear acceleration);[8] two involving rotation: centrifugal force and Coriolis force; and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur. Gravitational force would also be a fictitious force based upon a field model in which particles distort spacetime due to their mass.

[...]

Fictitious forces and work
Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider a person in a rotating chair holding a weight in his outstretched arm. If he pulls his arm inward, from the perspective of his rotating reference frame he has done work against centrifugal force. If he now lets go of the weight, from his perspective it spontaneously flies outward, because centrifugal force has done work on the object, converting its potential energy into kinetic. From an inertial viewpoint, of course, the object flies away from him because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than an inertial one.
Gravity as a fictitious force

Main article: General relativity

The notion of "fictitious force" comes up in general relativity.[15][16] All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity.[17] This led Albert Einstein to wonder whether gravity was a fictitious force as well. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to an inertial reference frame (the equivalence principle). Following up on this insight, Einstein was able to formulate a theory with gravity as a fictitious force; attributing the apparent acceleration of gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity.

http://arxiv.org/ftp/physics/papers/0204/0204044.pdf

Abstract: There exists some confusion, as evidenced in the literature, regarding the nature of the gravitational field in Einstein's General Theory of Relativity. It is argued here that this confusion is a result of a change in interpretation of the gravitational field. Einstein identified the existence of gravity with the inertial motion of accelerating bodies (i.e. bodies in free-fall) whereas contemporary physicists identify the existence of gravity with space-time curvature (i.e. tidal forces). The interpretation of gravity as a curvature in space-time is an interpretation Einstein did not agree with.

For more than a century, millions of researchers have worked to implement the quantum idea in different fields of science. This work proved to be so extensive and fruitful that few have ever had the time to think about whether this quantum idea is really consistent with experimental reality.

Around 2005, I wanted to publish a scientific paper based on the idea that the ionization energy variation contradicts the quantum hypothesis. As expected, no scientific journal was interested in publishing such a paper, even though the data in the paper could be verified by a layman without any scientific background.

Is there anything simpler than a linear dependency between two quantities in a graph for drawing a clear conclusion?

Someone might ask himself rhetorically… how could it be possible not to publish such a paper?

The answer is very simple: how could a referee deny all of his or her scientific activity and bluntly say that not only he or she, but millions of people, have been working on a new theory of epicycles in science?

Therefore the idea was published in an Atomic structure book in 2007:

http://elkadot.com/index.php/en/books/atomic/ionization-energy-variation

Later the concept was revised and improved, published in another book about chemistry in 2009, and presented to discussion groups in 2009 with the following formulation:

The neglected ionization energy variation for isoelectronic series can reveal more useful information about electron structure; the problem is that these data are in contradiction with current quantum theory. The quantum predictions for work function values are in contradiction with experiments; for metals, the ionization energy and the work function should be equal, but in reality they are not.

For other classes of compounds quantum mechanics again fails to predict anything. A striking example is the case of metallic oxides having work function values smaller than those of metals. It is outrageous how a covalent or ionic bond liberates electrons more easily than a metallic bond in the framework of current physics.

http://elkadot.com/index.php/en/books/chemistry/ionization-energy-and-work-function

People who haven't learned from past experiences will always have the tendency to repeat the same errors. I will paraphrase a famous economist who said that in a free market economy there is an invisible hand pushing things forward; in science the opposite is true: the invisible hand of an entire system thinks that sweeping something under the carpet will maintain the current status quo.

Will it be so or will not be so?!

Best regards,

Sorin

--

Thanks very much, this is very interesting.

I wrote down my thoughts on QM some time ago:

http://www.tuks.nl/wiki/index.php/Main/QuestioningQuantumMechanics

QM is fundamentally flawed. It starts off with the famous dual slit experiment, whereby they eventually conclude that the change of orbit of a *single* electron at a *random* moment emits a photon, which is somehow automagically in phase with the other photons emitted by the light source, since otherwise we would not see an interference pattern:

http://en.wikipedia.org/wiki/Electromagnetic_radiation#Particle_model_and_quantum_theory

"As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies."

http://en.wikipedia.org/wiki/Emission_spectrum "The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference."

The ridiculousness of the idea of radiation being caused by single transitions of an electron at random moments pokes one in the eye when considering the 21 cm hydrogen line:

http://en.wikipedia.org/wiki/Hydrogen_line

"The hydrogen line, 21 centimeter line or HI line refers to the electromagnetic radiation spectral line that is created by a change in the energy state of neutral hydrogen atoms. This electromagnetic radiation is at the precise frequency of 1420.40575177 MHz, which is equivalent to the vacuum wavelength of 21.10611405413 cm in free space. This wavelength or frequency falls within the microwave radio region of the electromagnetic spectrum, and it is observed frequently in radio astronomy, since those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light.

The microwaves of the hydrogen line come from the atomic transition between the two hyperfine levels of the hydrogen 1s ground state with an energy difference of 5.87433 µeV.[1] The frequency of the quanta that are emitted by this transition between two different energy levels is given by Planck's equation."
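A quick arithmetic check of the numbers quoted above, using Planck's relation f = ΔE/h and λ = c/f:

# Verify the quoted hydrogen-line numbers from the hyperfine energy splitting.
h = 6.62607015e-34           # J*s, Planck's constant
c = 299792458.0              # m/s, speed of light
eV = 1.602176634e-19         # J per electronvolt

delta_E = 5.87433e-6 * eV    # 5.87433 micro-eV hyperfine splitting
f = delta_E / h              # Hz
wavelength = c / f           # m

print(f / 1e6)               # ~1420.4 MHz
print(wavelength * 100)      # ~21.1 cm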

http://en.wikipedia.org/wiki/Atomic_radius

"Under most definitions the radii of isolated neutral atoms range between 30 and 300 pm (trillionths of a meter), or between 0.3 and 3 angstroms. Therefore, the radius of an atom is more than 10,000 times the radius of its nucleus (1–10 fm),[2] and less than 1/1000 of the wavelength of visible light (400–700 nm)."

This means that the radius of the largest atoms is less than 1/70 millionth of the wavelength of the hydrogen line!

While it is hard to believe that it is widely accepted that single atoms can emit photons with wavelengths which are up to 1000 times larger than the size of an atom, it is beyond belief and totally ridiculous to assume a single transition in a body can emit a "photon" with a wavelength which is more than 70 million times as large as the body itself.
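The size comparison itself is a one-line calculation, taking the 300 pm upper end of the quoted range of atomic radii:

# Ratio of the 21 cm hydrogen-line wavelength to a large atomic radius.
wavelength = 0.211           # m
atomic_radius = 300e-12      # m (300 pm)

print(wavelength / atomic_radius)    # ~7e8: roughly seven hundred million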

The only reasonable explanation for such a phenomenon to occur is that the transitions of (the electrons around) the atoms occur in phase and in resonance (aka "deterministic") with one another. In other words: no randomness at all!

"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."

http://en.wikipedia.org/wiki/Work_function

"In solid-state physics, the work function (sometimes spelled workfunction) is the minimum thermodynamic work (i.e. energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface. Here "immediately" means that the final electron position is far from the surface on the atomic scale, but still too close to the solid to be influenced by ambient electric fields in the vacuum. The work function is not a characteristic of a bulk material, but rather a property of the surface of the material (depending on crystal face and contamination)."

From your article:

"As is observed, as a general rule, the ionization energies for all metals are greater then work function."

This is not illogical. Since metals are conducting materials, the loss of negative charge because of the movement of an electron out of the metal can be compensated for by the movement of free electrons into the partial "hole" left by the leaving electron. This causes the departing electron to be less attracted by the bulk metal compared to what would happen if the loss of charge in the lattice could not be compensated for.

"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."

That's very interesting. I'm not an expert on metallic oxides, but I am a bit familiar with Aluminum Oxide and the working of electrolytic capacitors, whereby you have a dielectric layer of Aluminum Oxide between your positive plate and the electrolyte. On the negative plate, usually also Aluminum, there is also a very thin layer of Aluminum Oxide, an insulator. Yet, the negative plate conducts electricity very well into the electrolyte filling up the space between the plates.

So, for this particular metallic oxide, we are dealing with a dielectric:

http://en.wikipedia.org/wiki/Dielectric

"A dielectric material (dielectric for short) is an electrical insulator that can be polarized by an applied electric field. When a dielectric is placed in an electric field, electric charges do not flow through the material as they do in a conductor, but only slightly shift from their average equilibrium positions causing dielectric polarization. Because of dielectric polarization, positive charges are displaced toward the field and negative charges shift in the opposite direction. This creates an internal electric field that reduces the overall field within the dielectric itself."

[...]

"Dipolar polarization

Dipolar polarization is a polarization that is either inherent to polar molecules (orientation polarization), or can be induced in any molecule in which the asymmetric distortion of the nuclei is possible (distortion polarization). Orientation polarization results from a permanent dipole, e.g., that arising from the 104.45° angle between the asymmetric bonds between oxygen and hydrogen atoms in the water molecule, which retains polarization in the absence of an external electric field. The assembly of these dipoles forms a macroscopic polarization."

So, for Aluminum Oxide, we are dealing with a material which is both an insulator and polarizable and thus also (to a certain degree) capable of making up for the loss of charge because of a leaving electron, albeit more slowly.

Another point is the conductivity of heat by the material. Metals are excellent heat conductors IIRC, so if you were to heat up a part of a metal, the heat would quickly conduct away into the bulk metal and thus one would need more thermal energy to liberate an electron from a metal in comparison with a less heat conducting material, which might very well be the case for metallic oxides.

So, I would guess that those materials which are on the one hand electrically insulating and polarizable (dielectrics) and on the other hand bad heat conductors would have the lowest work function values, because they a) can compensate for "charge loss" and b) can be "locally heated".


“Most of what Lammertink writes seems like nonsense to me (longitudinal EM waves have never been demonstrated), but I don't know enough about EM theory.”

http://en.wikipedia.org/wiki/Laser

“Some applications of lasers depend on a beam whose output power is constant over time. Such a laser is known as continuous wave (CW). Many types of lasers can be made to operate in continuous wave mode to satisfy such an application. Many of these lasers actually lase in several longitudinal modes at the same time, and beats between the slightly different optical frequencies of those oscillations will in fact produce amplitude variations on time scales shorter than the round-trip time (the reciprocal of the frequency spacing between modes), typically a few nanoseconds or less.”

http://en.wikipedia.org/wiki/Longitudinal_mode

“A longitudinal mode of a resonant cavity is a particular standing wave pattern formed by waves confined in the cavity. The longitudinal modes correspond to the wavelengths of the wave which are reinforced by constructive interference after many reflections from the cavity’s reflecting surfaces. All other wavelengths are suppressed by destructive interference.

A longitudinal mode pattern has its nodes located axially along the length of the cavity. Transverse modes, with nodes located perpendicular to the axis of the cavity, may also exist.

[...]

A common example of longitudinal modes are the light wavelengths produced by a laser. In the simplest case, the laser’s optical cavity is formed by two opposed plane (flat) mirrors surrounding the gain medium (a plane-parallel or Fabry–Pérot cavity). The allowed modes of the cavity are those where the mirror separation distance L is equal to an exact multiple of half the wavelength, λ.”

I could swear that what I am reading here is that lasers work with longitudinal waves, and that transverse modes might perhaps also exist…

Hmmm.
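Spelled out with numbers (the cavity length below is an illustrative value of my own, not taken from the quoted articles): the condition L = m·λ/2 from the quote above means the allowed longitudinal modes are spaced in frequency by c/2L, and the beats between neighbouring modes occur on the round-trip time scale 2L/c mentioned in the laser article.

# Longitudinal modes of a plane-mirror cavity: L = m * lambda / 2.
c = 299792458.0              # m/s
L = 0.30                     # m, cavity length (illustrative value)

mode_spacing = c / (2 * L)   # Hz, frequency spacing between neighbouring modes
round_trip = 2 * L / c       # s, its reciprocal: the round-trip time
print(mode_spacing / 1e6)    # ~500 MHz
print(round_trip * 1e9)      # ~2 ns

# A few allowed wavelengths near 633 nm, all satisfying L = m * lambda / 2:
m0 = round(2 * L / 633e-9)
for m in range(m0 - 1, m0 + 2):
    print(m, 2 * L / m)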


And if you then picture the design of a laser for a moment, you in fact also have a waveguide with it, but one for optical frequencies.

http://en.wikipedia.org/wiki/Laser

“The optical resonator is sometimes referred to as an “optical cavity”, but this is a misnomer: lasers use open resonators as opposed to the literal cavity that would be employed at microwave frequencies in a maser. The resonator typically consists of two mirrors between which a coherent beam of light travels in both directions, reflecting back on itself so that an average photon will pass through the gain medium repeatedly before it is emitted from the output aperture or lost to diffraction or absorption.”

Now with an EM waveguide you have a metal wall, where the presence of an alternating E-field induces eddy currents in the wall of the guide, and these in turn give rise to a magnetic field. But with a laser you only have two mirrors, and thus no guide wall that introduces a magnetic field by means of induction.

In short: longitudinal EM waves have indeed been demonstrated without the propagation taking place through the movement of charge carriers, although in a waveguide this is a TM mode and thus not a pure “Tesla” LD wave. The absence of a metal wall in a laser, together with the references to a “longitudinal” mode, therefore does indeed indicate that with a laser we are dealing with a pure longitudinal dielectric wave, such a Tesla wave.

And if those indeed propagate at pi/2 times c, and this has not been taken into account in optical research, then you would expect that somewhere an anomaly can be found that could confirm all of this.


This is quite an interesting anomaly, though:

http://en.wikipedia.org/wiki/Dispersion_%28optics%29

-:- The group velocity vg is often thought of as the velocity at which energy or information is conveyed along the wave. In most cases this is true, and the group velocity can be thought of as the signal velocity of the waveform. In some unusual circumstances, called cases of anomalous dispersion, the rate of change of the index of refraction with respect to the wavelength changes sign, in which case it is possible for the group velocity to exceed the speed of light (vg > c). Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.[3] Recently, it has become possible to create gases in which the group velocity is not only larger than the speed of light, but even negative. In these cases, a pulse can appear to exit a medium before it enters.[4] Even in these cases, however, a signal travels at, or less than, the speed of light, as demonstrated by Stenner, et al.[5] -:-

This really is quite fascinating. A pulse can propagate faster than light (or even have a negative velocity), but a signal cannot. So in what way does a pulse actually differ from a signal?
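To see where such group velocities come from, here is a small sketch with a made-up Lorentz-oscillator medium (toy parameters of my own, not data from the cited experiments): it evaluates the group index n_g = n + ω·dn/dω around an absorption resonance, where dn/dω changes sign, so that v_g = c/n_g can exceed c or even become negative.

import numpy as np

c = 299792458.0

def lorentz_index(omega, omega0, gamma, strength):
    """Real refractive index of a single Lorentz oscillator (toy model)."""
    chi = strength / (omega0**2 - omega**2 - 1j * gamma * omega)
    return np.real(np.sqrt(1.0 + chi))

# Made-up medium parameters, chosen only to make the effect clearly visible.
omega0 = 2 * np.pi * 2.0e14        # resonance frequency (rad/s)
gamma = 2 * np.pi * 1.0e12         # linewidth (rad/s)
strength = 5.0e27                  # oscillator strength

omega = np.linspace(0.9 * omega0, 1.1 * omega0, 20001)
n = lorentz_index(omega, omega0, gamma, strength)

# Group index n_g = n + omega * dn/domega, so v_g = c / n_g.
n_g = n + omega * np.gradient(n, omega)

print(n_g.min() < 0)                    # True: a negative group velocity region exists
print(((0 < n_g) & (n_g < 1)).any())    # True: a region with v_g > c (but positive) exists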

http://scienceblog.com/light.html

“They were also able to create extreme conditions in which the light signal travelled faster than 300 million meters a second. And even though this seems to violate all sorts of cherished physical assumptions, Einstein needn’t move over – relativity isn’t called into question, because only a portion of the signal is affected.”

Right. Nothing going on. Nothing to see here. Only a portion of the signal travels faster than c. But which portion, then?

“To succeed commercially, a device that slows down light must be able to work across a range of wavelengths, be capable of working at high bit-rates and be reasonably compact and inexpensive.”

Right. It concerns only a portion of the signal, and there is a limited bandwidth within which faster-than-light propagation is possible.

Wikipedia once more:

“Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.”

That is interesting. So there is a specific, medium-dependent resonance frequency at which faster-than-light propagation occurs, exactly as I have argued also happens in the RF case with antennas.

And what I have also argued there is that it is very difficult to measure longitudinal waves. And that is exactly what they are running into here:

Scienceblog: “Light signals race down the information superhighway at about 186,000 miles per second. But information cannot be processed at this speed, because with current technology light signals cannot be stored, routed or processed without first being transformed into electrical signals, which work much more slowly.”

Here is a paper by Thévenaz that may give more details:

http://infoscience.epfl.ch/record/161515/files/04598375.pdf

To be continued…

http://www.tuks.nl/ Arend Lammertink

Particularly interesting. The mechanism it seems to revolve around is called Brillouin scattering:

http://en.wikipedia.org/wiki/Brillouin_scattering

-:- As described in classical physics, when the medium is compressed its index of refraction changes, and a fraction of the traveling light wave, interacting with the periodic refraction index variations, is deflected as in a three-dimensional diffraction grating. Since the sound wave, too, is travelling, light is also subjected to a Doppler shift, so its frequency changes. -:-

Note that what is involved here is a mechanical compression of the medium, and as far as is clear to me at this point, this is done with the help of sound waves.

From the article in my previous post: -:- Among all parametric processes observed in silica, stimulated Brillouin scattering (SBS) turns out to be the most efficient. In its most simple configuration the coupling is realized between two optical waves propagating exclusively in opposite directions in a single mode fibre, through the stimulation by electrostriction of a longitudinal acoustic wave that plays the role of the idler wave in the interaction [4]. This stimulation is efficient only if the two optical waves show a frequency difference giving a beating interference resonant with an acoustic wave (that is actually never directly observed). This acoustic wave in turn induces a dynamic Bragg grating in the fibre core that diffracts the light from the higher frequency wave back into the wave showing the lower frequency. -:-
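For a feel for the numbers involved: the acoustic wave stimulated in SBS in silica fibre sits around 10 to 11 GHz at telecom wavelengths. The sketch below simply evaluates the usual estimate ν_B = 2·n·v_a/λ with typical textbook values for silica; these values are assumptions of mine, not numbers taken from the Thévenaz paper.

# Brillouin frequency shift in silica fibre: nu_B = 2 * n * v_a / lambda.
n = 1.45                  # refractive index of silica (typical value)
v_acoustic = 5960.0       # m/s, longitudinal acoustic velocity in silica (typical)
wavelength = 1550e-9      # m, optical wavelength in vacuum

nu_B = 2.0 * n * v_acoustic / wavelength
print(nu_B / 1e9)         # ~11 GHz: the beat between the two optical waves must
                          # sit near this frequency for the scattering to be stimulated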

An interesting detail is that this concerns silica. This is of course just glass, but it is a silicon oxide, and silicon is a semiconductor. And what is used here is the crystalline form:

http://nl.wikipedia.org/wiki/Siliciumdioxide

-:- Silicon (di)oxide, or silica, is the best-known oxide of silicon.

In nature it occurs in various forms, both crystalline and non-crystalline (amorphous). Quartz is an example of crystalline silica; other examples are cristobalite and tridymite. Opal is an example of amorphous silica, as is quartz that has been fused together by extreme heat (quartz glass). -:-

The article by Stenner, “The speed of information in a ‘fast-light’ optical medium”, about group velocity etc. can be found here:

http://www.phy.duke.edu/research/photon/qelectron/pubs/StennerNatureFastLight.pdf

“One consequence of the special theory of relativity is that no signal can cause an effect outside the source light cone, the space-time surface on which light rays emanate from the source [1]. Violation of this principle of relativistic causality leads to paradoxes, such as that of an effect preceding its cause [2]. Recent experiments on optical pulse propagation in so-called ‘fast-light’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values [3]—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations [5], which would preserve causality. Here we find that the time to detect information propagating through a fast-light medium is slightly longer than the time required to detect the same information travelling through a vacuum, even though v_g in the medium vastly exceeds c. Our observations are therefore consistent with relativistic causality and help to resolve the controversies surrounding superluminal pulse propagation.”

A very small example of how things really stand at the established journals is this piece from the abstract of Stenner's article, in Nature no less:

“Recent experiments on optical pulse propagation in so-called ‘fastlight’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations, which would preserve causality.”

Now the group velocity v_g is the velocity of the envelope of a propagating signal, which is normally where the information resides. Wikipedia has a nice picture with an animation in which the group velocity is negative with respect to the phase velocity, the carrier wave:

http://en.wikipedia.org/wiki/Group_velocity

The carrier (phase velocity) moves to the left, the envelope (group velocity) moves to the right. And now Nature manages to publish an article in which it is claimed that a negative group velocity would amount to a violation of causality, because the signal is present at the "output" before it is at the "input".

Aren't these simply reasoning errors at Sesame Street level?

I mean: if the envelope moves in the opposite direction to the carrier, then that envelope simply enters your optical fiber from the other side and thus comes out again some time later, at the side that you regard as the input from the carrier's point of view. So the thing simply moves backwards with respect to the propagation of the carrier, and therefore it is first present at the end that you label as the output from the carrier's point of view, and only later at the end that you label as the input from the carrier's point of view…
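That backwards-moving envelope is easy to reproduce with just two sine waves whose frequencies and wavenumbers are chosen (arbitrarily, purely for illustration) so that Δω/Δk is negative: the carrier then moves one way at (ω1+ω2)/(k1+k2) while the beat envelope drifts the other way at Δω/Δk.

import numpy as np

# Two plane waves sin(k1*x - w1*t) + sin(k2*x - w2*t) with dw/dk < 0.
k1, w1 = 10.0, 101.0
k2, w2 = 11.0, 99.0

v_phase = (w1 + w2) / (k1 + k2)   # ~ +9.5: the carrier moves in the +x direction
v_group = (w2 - w1) / (k2 - k1)   # = -2.0: the envelope moves in the -x direction
print(v_phase, v_group)

# Using sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2), the |cos| factor is the
# envelope; track where its peak sits at a few moments in time.
x = np.linspace(-20.0, 20.0, 40001)
for t in (0.0, 0.5, 1.0):
    envelope = np.abs(2.0 * np.cos(0.5 * ((k2 - k1) * x - (w2 - w1) * t)))
    print(t, round(float(x[np.argmax(envelope)]), 3))   # ~0.0, -1.0, -2.0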

Summary of the above: I note that it is not in dispute that it is possible to achieve a group velocity greater than c with light in an optical fiber.

The article by Thévenaz contains an interesting detail:

“The maximal advancement attained 14.4 ns, to be compared to the 10 ns of normal propagation time in the 2 metre fibre. This is a situation of a negative group velocity and literally it means that the main feature of the signal exits the fibre before entering it.”

Again, the same reasoning error, but that aside.

BTW: they refer to this article for more detail: http://infoscience.epfl.ch/record/128303/files/ApplPhysLett_87_081113.pdf

Anyway, the point is that we see a maximum speed factor of 1.44 here, quite close to the previously mentioned pi/2 for the case of longitudinal dielectric (or electro"static") LD waves.
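The 1.44 factor quoted above follows directly from the two numbers in the Thévenaz paper; for comparison, pi/2 is about 1.57:

import math

# Numbers quoted from the Thévenaz paper: a 2 metre fibre, 10 ns of normal
# propagation time, and a maximal signal advancement of 14.4 ns.
fibre_length = 2.0          # m
normal_time = 10e-9         # s
advancement = 14.4e-9       # s

print(fibre_length / normal_time)   # 2e8 m/s: the normal group speed (~c/1.5)
print(advancement / normal_time)    # 1.44: the speed factor referred to above
print(math.pi / 2)                  # ~1.5708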

Now the question ultimately is what the nature of that envelope wave is. Is it indeed a longitudinal dielectric wave, or is it something else after all?

OK. Now I have argued that something special is going on with a dipole with a length of pi/2 times 1/2 lambda. Because there are wave reflections at the ends of the dipole, you get a standing wave, in which the total round-trip phase shift of the EM field is 90 degrees.

However, this is not the whole story. The electric field is related to voltages and the magnetic field is related to the flow of current. Now at the end of an antenna you have the situation that no current can flow, but the voltage can vary freely.

In other words: for the E-field the end of an antenna is open, but for the magnetic field it is closed.

And that means that the two components of the wave each get a different phase shift at the ends of the dipole, just as with an open or fixed end of a rope that you set swinging:

http://electron9.phys.utk.edu/phys136d/modules/m9/film.htm

“A wave pulse, which is totally reflected from a rope with a fixed end is inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is π (180°).

A wave pulse, which is totally reflected from a rope with a loose end is not inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is zero. When a periodic wave is totally reflected, then the incident wave and the reflected wave travel in the same medium in opposite directions and interfere.”
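The two reflection rules quoted above are easy to reproduce numerically; the toy one-dimensional wave simulation below (my own illustration, not taken from the quoted page) sends a Gaussian pulse down a string and checks the sign of the reflected pulse for a fixed end versus a free end.

import numpy as np

def reflect_pulse(end="fixed", n=400, steps=800):
    """Leapfrog scheme for the 1D wave equation: a Gaussian pulse travels to
    the right and reflects off the far end ('fixed' or 'free')."""
    c, dx = 1.0, 1.0 / n
    dt = 0.5 * dx / c                       # Courant number 0.5, stable
    x = np.linspace(0.0, 1.0, n + 1)

    u = np.exp(-((x - 0.3) / 0.05) ** 2)                  # pulse at t = 0
    u_prev = np.exp(-((x - 0.3 + c * dt) / 0.05) ** 2)    # same pulse at t = -dt

    for _ in range(steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = 0.0                      # left end held fixed
        if end == "fixed":
            u_next[-1] = 0.0                 # fixed far end: u = 0
        else:
            u_next[-1] = u_next[-2]          # free far end: du/dx = 0
        u_prev, u = u, u_next
    return u

for end in ("fixed", "free"):
    u = reflect_pulse(end)
    peak = u[np.abs(u).argmax()]             # signed amplitude of the reflected pulse
    print(end, round(float(peak), 2))
# fixed -> negative (inverted), free -> positive (not inverted)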

So here we have the peculiar situation that one component of the field is reflected inverted at the end of the dipole (the B-field) while the other is not (the E-field).

If we may believe the measurements of Dollard et al., then you would expect that, when you work this situation out analytically, you will see a significant difference in the resulting E-field strength and B-field strength compared to the situation where you take a dipole with a length of 1/2 lambda.

And in addition you would expect to find a group velocity of pi/2 times c…

Chapter 4: The remarkable properties of Water

(TO DO)

http://jes.ecsdl.org/content/99/1/30.abstract

An electrochemical theory is proposed for rectification, as exemplified by the tantalum (or aluminum) electrolytic rectifier and capacitor. A detailed consideration of the mechanism of formation of the oxide film which constitutes the rectification barrier leads to the conclusion that this barrier consists of an electrolytic polarization, in the form of a concentration gradient of excess metal ions, permanently fixed or “frozen” in position in an otherwise insulating matrix of electrolytically‐formed oxide. The physical structure which has been described functions as (a) a current‐blocking ionic space charge or (b) a current‐passing electronic semiconductor, depending solely upon the direction of the applied voltage. The movement of electrons only is required. An explanation for breakdown of the barrier at excessively high voltages is suggested. This explanation may be applicable to dielectric breakdown of other kinds.