Aether Theory and the Road to Energy Abundance

By Arend Lammertink, MScEE.
<lamare{at}gmail{dot}com>
About
This article describes some fundamental theory about a wonderful energy source that is available everywhere in the Universe: the (static) Electric and Magnetic fields, and its practical application in a number of free energy devices.
Table of contents
Foreword : (single page)
Chapter 1 : Conservation of Energy
Chapter 2 : Tesla's 'Wheelwork of Nature'
Chapter 3 : Unification of Physics
Chapter 4 : The remarkable properties of Water
Chapter 5 : todo
Foreword
This document is being submitted in memory of Stanley Meyer and Gerrit Stokreef. Stanley Meyer was a great tinkerer who dared to challenge the powers that were and paid for it with his life. Gerrit Stokreef was one of my neighbors in the place where I grew up. It was a very warm neighborhood filled with loving, honest people who all had to work hard to make a living. Gerrit was always there when you needed him; he just never said "No". He lent me his oscilloscope years ago. I hardly used it until after he passed away and left it to me. Now I know he lost his fight to cancer because the powers that were didn't want us to use the cures invented by Royal Rife. But the rules of the game have changed now. The genie is out of the bottle, folks, and there is no way to put it back in there. May Stan's dream finally be realized and may there be peace on this planet, because when there's no need for oil anymore, which will put the powers that were out of business, who in his right mind would ever fight a war again?
While studying various articles and discussions about Free Energy, it struck me that there were some striking similarities between a number of systems, notably those made by John Bedini as well as Stan Meyer's Water Fuel Cell. At some point, it occurred to me that there might be a common explanation behind these different systems, which all appear to be some form of (electrolytic) capacitor. In various discussions at the Energetic Forum I have made an attempt to formulate a theory to explain a number of phenomena that have been reported in relation to these systems. Since the relevant information has been scattered all over the forum, it is my intention that all that information be brought together and assembled here.
I hope that this information is helpful to those people who are better experimenters than I am, so that this technology will be further developed in the spirit of open source. I hope other engineers and scientists will study this article and the referenced material and make products that put this technology in the hands of the people of this planet, so that disasters like the one in the Gulf of Mexico never have to happen again. I also hope that none of this will ever be patented, because this technology is worth the most when it is actually used, not when it is put behind bars because of greed and selfishness. Haven't we had enough of that by now?
Power to the people! (pun intended)
Chapter 1: Conservation of Energy
According to the law of conservation of energy it is impossible to create energy out of nothing:
The fundamental foundation for the law of conservation of energy lies in Newton's third law:
In essence, energy (work) is an integration (summation) of a force enacted between two bodies - or particles, or even the fundamental 'God' particles or whatever the aether/medium may be composed of - over the effect of that force: the movement of a body over a certain distance, or a displacement in/of the aether/medium.
In other words, energy is in essence a measurement of the effect of the interaction between two bodies/particles and/or the medium. The fundamental point is that it is always a measurement of the effect of an interaction. And since action equals minus reaction, there can be no other way than that energy is always conserved. Because after all, as Tesla said, something cannot act upon nothing:
It is this law of conservation of energy that causes any device which appears to produce "useful work" without the use of a visible or obvious energy source to be considered "impossible" and done away with as perpetual motion (version of August 2010):
However, even though the law of conservation is correct, this does not mean it is impossible to create "machines that once started operate or produce useful work indefinitely" at all, provided you do not take the word 'indefinitely' too literally. But what this is really about, is the second part: "any machine that produces more work or energy than it consumes". Yes, this is correct, you cannot build a machine that produces energy out of nothing, you can only make a machine that uses some (external) energy source to do useful work. The current WikiPedia revision on perpetual motion (April 20th 2015) is much more nuanced:
In a way, the change in the description of perpetual motion over at WikiPedia illustrates that things are not as simple as they seem in this matter, and that the detail regarding the use of an external energy source is indeed an important distinction to make.
Either way, in most cases, we can use the energy source of choice more or less directly, like burning fuel, and we don't count the energy we have to spend in order to get our energy source. But of course it also takes energy to drill a hole in the earth in order to extract oil for making fuel. So, in essence, the fuel supply chain ("a machine") as a whole provides more energy than it consumes, that is, the energy needed to make fuel is less than the energy released when burning the final product, the fuel.
To continue in this line of thinking, the ground source heat pump is a perfect example of a machine that uses a certain amount of energy in order to extract energy from some other external energy source provided by nature, heat naturally stored in the ground:
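To make the bookkeeping concrete, here is a minimal sketch of the energy balance of such a ground source heat pump. The numbers (1 kWh of electricity, a coefficient of performance of 4) are simply assumed for the sake of illustration:

```python
# Illustrative energy balance of a ground source heat pump.
# The numbers below are assumptions for the sake of the example, not measured data.

electrical_input_kwh = 1.0   # energy we pay for (drives the compressor)
cop = 4.0                    # assumed coefficient of performance

heat_delivered_kwh = electrical_input_kwh * cop                    # heat arriving in the house
heat_from_ground_kwh = heat_delivered_kwh - electrical_input_kwh   # drawn from the external source

print(f"Paid for:          {electrical_input_kwh:.1f} kWh")
print(f"Delivered as heat: {heat_delivered_kwh:.1f} kWh")
print(f"Taken from ground: {heat_from_ground_kwh:.1f} kWh")
# Energy is conserved: the 'extra' 3 kWh is drawn from the heat stored in the
# ground by nature, not created out of nothing.
```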
Of course, we can apply this same principle in various ways, if we can find an appropriate external energy source provided by nature, preferably free of charge. Fortunately, an energy source exists that is available everywhere in the universe for free. It's an energy source that could in theory provide limitless energy without any pollution whatsoever, if only we could find a way to utilize it. This energy source is most generally known under the name "zero-point energy":
So far, we have not covered anything controversial. We have covered the fundamental law of conservation of energy, we have established that because of that one cannot build devices which produce energy out of nothing and we have identified zero-point energy as an energy source that could be used, in principle, as far as the law of conservation of energy is concerned, that is.
However, the question remains whether or not this is actually possible in practice, the so-called Utilization controversy (WikiPedia):
The source for the 'pseudoscience' label is a document by the US Army National Ground Intelligence Center (copy):
Let's first examine the conclusions of the US Army document:
Let's get this straight: according to this official US Army document, "a nonsensical concept given that the system is (by definition) already at its lowest energy state" has "very real and noncontroversial application potential".
Provided it is only applied at the nanoscale, of course.
But what about the microscale? After all, the Casimir effect, which according to that document does have very real and noncontroversial application potential, has been demonstrated at the microscale by Lamoreaux at the University of Washington:
Now what is the Casimir effect?
http://en.wikipedia.org/wiki/Casimir_effect
Note that in the quantum electrodynamics interpretation, there is no fundamental understanding of the phenomenon. While it is clear that some kind of kinetic field actually exists which causes the Casimir effect, this field is being attributed to virtual photons, which by definition do not actually exist. The need for using photons to describe the phenomenon implies that some kind of oscillating field is involved, which has been shown to have real, measurable effects. However, if it has real physical effects, the logical conclusion would be that it also has a real, physical origin, and thus we can come to the following hypothesis:
In other words: we postulate that the field causing the Casimir effect is not virtual (and thus actually non-existent) in nature, but very real and physically existing. And since it is of an oscillating nature it is a kinetic field of force and not a static one.
The US Army document also contains an alternate view:
Apparently, within "three letter US agencies" there is still room for debate on this one...
Wikipedia also refers to another document, a NASA contractor report:
This report reads (page 66):
This seems like a very sensible working hypothesis to me, which I fully subscribe to. Furthermore, the idea to examine what is common or similar between a number of claimed technologies in order to come to a theoretical understanding of the science involved is exactly the purpose of this work.
However, doing so is not an easy task. It involves returning to the roots of our current scientific models, correcting some fundamental errors which have been introduced in the 20th century, dealing with rather nonsensical predictions of Quantum Mechanics, such as alleged "entanglement", and an alleged curving of space-time itself. By returning to an aether theory, whereby we model the aether as being a compressible gas/fluid, we can finally come to the long sought for 'Unification of Physics' and thus come to a workable theoretical understanding of the science involved.
But before we do that, we shall first examine the nature of the electric field, thereby assuming that the static electric field, whatever it may actually be, is a kinetic force caused by some kind of movement with a finite speed.
Chapter 2: Tesla's 'Wheelwork of Nature'
Static or kinetic?
From the assumption that the electric field propagates at a finite speed, we explain that a circulation of energy between the vacuum and the propagating field(s) exists and is therefore part of the ZPE.
Tom Bearden has made a number of videos as well as an article (copy) in which he explains how he thinks electrical circuits are actually powered.
This is a very important principle to understand, even though Bearden is a bit off, IMHO, and it is very hard to get this straight. It does take energy to separate the charges and that energy is used to change the configuration of the electric field. The field is not the same before and after a separation of charges has been done, so the applied energy is converted into a form of energy that can perhaps be described as a stress, a disturbance, of the overall electric field. And when the charges flow through the circuit, one way or the other, the same amount of energy is released to the circuit as the amount of energy needed to separate the charges. If really "all the energy we put into the shaft of the generator" were "dissipated inside the generator itself", big generators would heat up like hellfire.
Imagine a room with a fan and a door. When the door is opened, the airflow, wind, generated by the fan pushes against the door and tries to shut it. While opening the door, you have to push it against the air flow, which costs you energy. You can get that same amount of energy back, when you use the pressure of the airflow pushing against the door to do work, like cracking a peanut. However, the fan is not powered by the energy you have spent to open the door, it is a separate energy flow that is powered by something else. In this analogy, the door stands for the charges (mass) that move around and can be used to do work while the airflow (wind) stands for the electric field that causes the charges to move around. The only thing is that the door is the fan. So, we get all those little fandoors we can push around and as long as we keep using the same fandoors to create the airflow and to do the work, we will never ever be able to extract more energy from the airflow than we have spent ourselves to open the door.
So, these fandoors (charges) are really wonderful things. You open the door and mother nature (the vacuum) spins the fan and gives you a flow of energy you can use. Now the good news is that you can not only use this free energy to get your door shut again to do work, you can also use it to push on your neighbour's door. The bad news is that your neighbour's door also has its own fan, which has the nasty habit of blowing in the other direction, that is, it will oppose your airflow, which makes it very hard and certainly not straightforward to get a foot between these doors and keep the air flowing without paying for it. So, if you may have had the idea of taking an electret, a piece of permanently polarized material that continuously emits an electric field (the airflow) for free, to induce a current in a nearby wire, you're in trouble. The charges inside the wire will oppose this external field and neutralize it faster than you can blink your eye and then the party is over. So much for that one.
So, are the engineers right and is Bearden wrong after all?
Well, the engineers are right in that you do convert mechanical energy into potential electric energy by opening the door against the airflow. But Bearden is right that the dipole that has been created is an energy source. That energy source puts out energy in the form of an electric field, real energy that is converted from ZPE or whatever into a "static" electric field, mostly to be sent into space without ever being used, except for that part that is needed to close the door again.
To sum this up: besides the energies that are normally considered, there is a second energy flow that is totally being ignored. And that is interesting, because if the law of conservation practically holds for the first flow (the opening and closing of the door) it means we can use this second, hidden, energy flow (the fan) for free! This also means that electrical circuits can never ever be considered "isolated systems", so if you want to throw "law of conservation" stuff into the equation, you have to make damn sure that whatever energy is being exchanged by the electric field with the environment can be neglected in the case at hand. In other words: electrical circuits are always interacting with the environment, even though you can often ignore that when doing energy conservation calculations. But let's read a little bit further in Bearden:
Well, it may be right that particle physics says it's easy to extract EM energy from the vacuum, but that does not tell us how we can use that, nor how we can engineer systems that are able to make use of this unknown, or better: overlooked, territory. Where is that energy? Where does it come from and where does it go?
The answer to these questions can be found in the paper Conversion of the Vacuum-energy of electromagnetic zero point oscillations into Classical Mechanical Energy by the German Professor Claus Turtur. In the chapter "A circulation of energy of the electrostatic field" (pages 10-14) he makes a straightforward calculation of the energy density of the static electric field surrounding a point charge, using nothing more than Coulomb's law and the known propagation speed of the electric field, the speed of light, and shows that there must be some kind of energy circulation between the vacuum and charge carriers:
[...]
So there we are. Unless we are to assume that the static electric field propagates with an infinite speed, the static electric field (the airflow in our fandoor analogy) is on the one hand powered by the vacuum and on the other hand it powers the vacuum. And at least part of the energy in space / the vacuum, referred to with names as "Zero Point Energy" (ZPE), virtual particle flux, the Dirac sea, Orgone, etc., is not only fueled by the electric field, it is continuously converted back into an electric field by each and every charged particle in the Universe, which makes the electric field a source of energy from a practical point of view, just like the light coming from our Sun.
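For those who want to follow the spirit of Turtur's argument numerically, here is a minimal sketch. It only assumes the classical energy density of the Coulomb field and the premise that this field streams outward at the speed of light; the charge value and the radii are example numbers of my own, not taken from Turtur's paper:

```python
import math

# Sketch of the bookkeeping behind Turtur's argument for a point charge:
# take the classical field energy density and assume the field propagates
# outward at the speed of light, then compute the power crossing a sphere.

eps0 = 8.854e-12      # F/m, vacuum permittivity
c    = 2.998e8        # m/s, speed of light
q    = 1.602e-19      # C, elementary charge (example charge)

def field_energy_density(r):
    """Classical energy density of the Coulomb field at radius r (J/m^3)."""
    E = q / (4 * math.pi * eps0 * r**2)
    return 0.5 * eps0 * E**2

def power_through_sphere(r):
    """Energy per second crossing a sphere of radius r, if the field moves outward at c."""
    return field_energy_density(r) * c * 4 * math.pi * r**2

for r in (1e-10, 1e-9, 1e-8):     # a few example radii in metres
    print(f"r = {r:.0e} m : P = {power_through_sphere(r):.3e} W")
# The computed power falls off as 1/r^2, so more energy per second crosses an
# inner sphere than an outer one; this is the discrepancy Turtur resolves by
# postulating a circulation of energy between the charge and the vacuum.
```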
150 nW in comparison to loss of 3 nW
https://www.psiram.com/en/index.php/Claus_Wilhelm_Turtur
Turtur-Rotor / electrostatic fan wheel motor of Turtur, test in vacuum
Between April and December 2008, Turtur conducted privately funded experiments on a "fan wheel motor" invented by him which, in his opinion, was powered by inexhaustible vacuum energy, but at the same time required applying high voltage (1-30 kV), which, however, plays no role in the Casimir effect. Without high voltage, the impeller would not move. A successful replication of his experiment by other scientists is unknown as of yet (December 2009). The Austrian Harald Chmela (Borderlands), at the suggestion of Martin Tajmar, attempted a replication in a vacuum but failed[12].
Turtur used several slightly differing designs. Aluminium foil glued to balsa wood is used as material for the propeller, which floats in a water bath on small styrofoam, to which it is connected by a conductive element. Due to the high voltage between the electrically conductive impeller and a diametrically charged plate, Coulomb forces arise which turn the fan to a position of minimum energy (the direction of rotation is undetermined at first). Afterwards, the fan is expected to start rotating. The direction of rotation is said to be always the same, while the angular velocity is said to depend on the high voltage applied.
According to his own estimates, an observed output of 150 nW (nanowatt) of the engine in air and water bath, with rotation times of 1-16 minutes, was seen at a few kilovolts. The power consumption of the high voltage supply is unknown, but he mentions a current limit of 50 µA for his vacuum experiments. A high voltage power supply built by Turtur was said to have been used in the experiments, and Turtur stated that he was not able to keep the output voltage constant. A replica by an Italian inventor at 38 kV yielded fluctuating high voltage currents of up to 7 mA[13]. Later experiments in vacuum with a bath of vacuum oil (a special kind of oil) are said to have required a higher voltage of 16-30 kV and yielded just an average current of 0.1 pA in vacuum (about 3 nW of power), with additional peaks of several picoampere. Rotation speed was said to be slower in vacuum, with a circulation time of 2 to 3 hours.
Vacuum tests: at Otto-von-Guericke University in Magdeburg, Turtur conducted experiments in vacuum in cooperation with the local technician Wolfram Knapp, after critics had pointed out that his construction just showed Biefeld-Brown effects. He put the impeller into a sour cream cup of the brand Milbona, which floated in oil. The impeller was connected to the high voltage power supply by a wire. According to his own report, rotation speed decreased. A pressure of 10^-3 to 10^-5 millibar was applied; a further decrease of pressure would have resulted in boiling oil and was avoided. The vacuum oil used was of the type "Ilmvac, LABOVAC-12S", with a vapor pressure of 10^-8 mbar and a viscosity of 94 mPoise at 40 °C. Mechanical power output in vacuum was not measured, and experiments were also not conducted without an oil or water bath.
According to his own report[14], no ongoing rotation over an arbitrary number of revolutions was seen in vacuum and the number of revolutions was not reproducible.
The implications of that are staggering. It means that the law of conservation of energy does not apply to 'isolated' electrical systems, because they are not actually isolated. After all, Turtur shows that energy is being extracted from the active vacuum by each and every charged particle and thus every electrical system in existence in the Universe.
Interestingly, Nikola Tesla already said the exact same thing in 1891:
Based on all this, it is clear that we need to look at electrical systems in a different way, we need a way of thinking that does account for the energy source that is really powering our systems. In a way, we need a similar change in our models as the change from Newton to quantum mechanics. While Newtonian mechanics can still be used in mechanical engineering most of the time, at some point they are no longer valid.
In the same way, the current electrical engineering model is fine for most applications where it suffices to consider only the door part of our fandoor analogy, that is, by considering electrical systems basically as an analogy of hydraulics, which is literally just a variation of Newtonian mechanics. However, if you want to be able to utilize the energy source the electric field provides, there just ain't no way to do that without taking the energy exchange between an electrical system and the vacuum completely into account. And that means we have to go back to field theory instead of describing our systems in terms of concrete components, the so-called lumped element models, especially in the case we are dealing with resonating coils. This is explained by James and Kenneth Corum in Tesla Coils and the Failure of Lumped-Element Circuit Theory:
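As a rough, self-contained illustration of the same point (my own back-of-the-envelope sketch, not taken from the Corum paper), one can use the common engineering rule of thumb that a component may only be treated as a lumped element while its conductor length stays well below a quarter wavelength at the operating frequency:

```python
# Rough check of when lumped-element circuit theory stops being adequate for a coil.
# Rule of thumb (an engineering assumption, not a quote from the Corum paper):
# a winding can be treated as "lumped" only while its total wire length stays
# well below a quarter wavelength at the operating frequency.

c = 2.998e8  # m/s

def quarter_wavelength(freq_hz):
    return c / freq_hz / 4

# (frequency in Hz, total wire length of the winding in metres) - assumed, illustrative values
coils = {
    "50 Hz power transformer": (50.0,   500.0),
    "Tesla coil secondary":    (250e3, 1000.0),
}

for name, (f, wire_len) in coils.items():
    qw = quarter_wavelength(f)
    distributed = wire_len > 0.1 * qw
    print(f"{name:24s}: lambda/4 = {qw:12.1f} m, wire = {wire_len:7.1f} m -> "
          + ("distributed resonator, lumped model fails" if distributed else "lumped model is fine"))
```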
So, we need to consider the fields and that also means we need to realise that the nature of these fields is dynamic and not static. In the old Newtonian model, we consider the voltage across an impedance to be the cause for a current to occur, which in our fandoor analogy would be the pressure that the door "feels" being enacted by the airflow on its surface, while in reality it is the airflow (the electric field) that acts upon the door and not the pressure itself. In other words, it seems like the "pressure" the electric field enacts on our components is static, hence the name "static electric field", while in actual reality this force is a dynamic force: something flows along the surface and creates the pressure. Tesla already realised this in 1892:
It is nothing less than a shame that even more than a hundred years later, we still burn fossil fuel for our energy, basically because of arrogance, selfishness and ignorance. Still, the question remains the same. It is a mere question of time... Anyhow, there basically is a deeper cause we have to account for: the electric field itself, which is present everywhere in the Universe. With that in mind, we continue with Bearden:
Now isn't that interesting, half the caught energy in the power line is used to kill the source dipole, and less than half is used to power the loads? Think about it, how can that be?
There is an essential difference between the Newtonian analogy we use in electrical engineering (closed circuits) and the actual reality. The analogy of a capacitor in hydraulics (Newtonian analogy) is a piston moving back and forth in a closed cylinder wherein gas is pressurized. And here's the difference: imagine moving the piston inwards, pressurizing the gas, and put the thing on your workbench. The piston will immediately move back, because of the gas pressure. Now charge a capacitor and put it on your workbench. See the difference? The capacitor will just sit there, keeping its charge. In other words: our hydraulic analogy is unstable, it 'wants' to release its energy, while our actual electrical component is stable when 'pressurized'. It will only 'release' its energy when something external is being done. It has to be disturbed, because the charges in a capacitor actually attract one another, which makes them like to stay where they are. So, when 'discharging' a capacitor, as a matter of fact, these attraction forces have to be overcome. And that does not release energy at all, it costs energy to do that. So, it actually takes the same amount of energy to charge a capacitor as the amount of energy it takes to discharge the capacitor.
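As a side note, standard circuit theory itself already contains a curious factor of two at this point: when a capacitor C is charged to voltage V from an ideal constant-voltage source through a resistor, the source delivers C·V² of energy, only half of which ends up stored in the capacitor, while the other half is dissipated in the resistor, whatever its value. Here is a minimal numerical sketch of that textbook result, with assumed example values:

```python
# Charging a capacitor C to voltage V from an ideal source through a resistor R:
# the source delivers C*V^2, the capacitor ends up storing C*V^2/2, and C*V^2/2
# is dissipated in R regardless of its value. Verified here by a small numerical
# integration of the RC charging curve.

V, C, R = 10.0, 1e-6, 100.0      # assumed example values: 10 V, 1 uF, 100 ohm
dt = 1e-8                        # integration time step (s)
t_end = 10 * R * C               # integrate well past five time constants

q = 0.0                          # charge on the capacitor
e_source = 0.0                   # energy delivered by the source
e_resistor = 0.0                 # energy dissipated in the resistor
t = 0.0
while t < t_end:
    i = (V - q / C) / R          # charging current
    e_source += V * i * dt
    e_resistor += i * i * R * dt
    q += i * dt
    t += dt

e_cap = 0.5 * q * q / C          # energy finally stored in the capacitor
print(f"source delivered : {e_source:.3e} J  (~ C*V^2   = {C*V*V:.3e} J)")
print(f"stored in cap    : {e_cap:.3e} J  (~ C*V^2/2 = {0.5*C*V*V:.3e} J)")
print(f"lost in resistor : {e_resistor:.3e} J")
```

Whether one interprets that 'missing' half the way Bearden does in the passage above is of course a separate question.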
It is undoubtedly because of this that Steinmetz wrote, already in the beginning of the twentieth century:
So, while it may seem that the conservation law holds when considering electrical circuits in their 'prehistoric' analogy, in actual truth this is only the case because the interactions with the environment, the active vacuum, balance one another out. In reality, twice the amount of work has been done than seems to have been done!
Summary
Any charge continuously emits an energy field, an electric field, spreading with the speed of light, which is the real energy source that makes our circuits run. This energy field, generated by the charges in our wires, is not created out of thin air. Since there is a continuous flow of energy out of every charge, there also is a continuous flow of energy going into every charge. And that is where the energy eventually comes from, right from the vacuum itself. For our purposes, it doesn't really matter how the energy that ends up in the electric field is being taken out of the vacuum. It may be ZPE, it may be a "virtual particle flux", it may be anything. It doesn't matter, because we don't need to know.
All we need to know is that somehow, some form of energy flows into each and every charge in the universe and this energy flow is continuously converted into an outflowing electric energy field by each and every charge in the universe, 24/7, 365 days a year, for free.
And this is the basic concept to understand. The electric field comes for free, as long as you keep the charges separated and don't disturb them.
So, where does all this leave us? We can spend the effort of turning the shaft of a generator, which will separate the charges in the system we want to power and creates a dipole. When we do this, we do not actually store energy in the dipole, we change the configuration of the electric field. When we subsequently send those same charges through the system we want to power, it is the active vacuum, the environment, which is kind enough to provide us with the energy that is needed to kill the dipole we have created to be able to power our load, and with the energy to actually power our load as well. As we have seen, this is an exercise with a closed wallet from our point of view. The load receives the exact same amount of energy that we have put into the system ourselves as mechanical energy, apart from the losses. So, all things considered, the Newtonian analogy we use in electrical engineering is perfectly valid and applicable. Except for one tiny little detail.
We change the configuration of the electric field when we operate an electrical circuit, and since we eventually get the same amount of energy back through our load while doing this, this means we can actually manipulate the electric field for free, just by powering our circuits the way we always do. Get the point? While we are opening and closing our fandoor, we influence the airflow in our neighbourhood without having to pay a dime for that in terms of energy! That means we can manipulate our neighbor's fandoor for free. So, all we need to do is figure out how to use our free manipulative power to put the fandoors in our neighborhood to work such that it is the environment that delivers the energy to power the neighbor's load, just as it powers our load. In other words: we have to manipulate the electric field in such a way that charge carriers in the environment of our systems are moved around in such a way that they perform useful work, in such a way that it isn't us that provides the energy, but someone else: the electric field itself. That means most of all that we have to make sure that those neighboring charges don't end up in our circuit, since then they will kill our dipole and we will have to pay the price, and secondly that we have to make sure that we don't disturb the charge carriers that make up our voltage source.
Let's take a look at how three inventors managed to do just that by using the power of resonance. You can find that part after the intermezzo with some interesting references.
Chapter 3: Unification of Physics
The dual slit experiment and the wave particle duality principle
This gives us a major point regarding ZPE. Because all known particles are Electromagnetic waves, it follows that even at absolute zero the particles themselves *must* oscillate and therefore emit an electrostatic field as well as a magnetic field.
So, when considering Zero Point Energy from the perspective of movements of the particles, one does not take the oscillations which make up the particles themselves into account. How could an electromagnetic wave stop oscillating and radiating energy at absolute zero?
Another point is the question of weak and strong nuclear forces. How can there be any other forces but the electromagnetic working on electromagnetic waves?
-> spinning plasma
http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality http://en.wikipedia.org/w/index.php?title=Wave%E2%80%93particle_duality&oldid=659839762
[...]
[...]
\lambda = \frac{h}{p}
[...]
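For reference, this is the de Broglie relation: the wavelength equals Planck's constant divided by the momentum. A minimal sketch evaluating it for an electron at a couple of assumed example speeds:

```python
# The de Broglie relation lambda = h / p, evaluated for a couple of examples.
# The speeds are illustrative assumptions.

h  = 6.626e-34   # J*s, Planck constant
me = 9.109e-31   # kg, electron rest mass

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

print("electron at 1e6 m/s:", de_broglie_wavelength(me, 1e6), "m")   # ~7.3e-10 m
print("electron at 1e7 m/s:", de_broglie_wavelength(me, 1e7), "m")   # ~7.3e-11 m
```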
Because all known matter (particles) has a wave-like nature, described by electrodynamic theory, I propose the following two hypotheses:
1. All matter is the manifestation of a localized electrodynamic phenomenon;
2. There exists but one fundamental interaction: the electrodynamic, i.e. electromagnetic, interaction.
So, what I am saying is that all known particles ARE some kind of local electromagnetic wave phenomenon and that therefore a fully developed electromagnetic theory should be capable of describing and predicting all known physical phenomena, thus integrating all known physics within a Unified Theory requiring only one fundamental interaction.
[...]
http://en.wikipedia.org/wiki/Falsifiability
There are two known types of electromagnetic waves:
1. the transverse wave;
2. the vortex-based wave.
http://en.wikipedia.org/wiki/Optical_vortex
[...]
[...]
[...]
Paul Stowe
http://vixra.org/pdf/1310.0237v1.pdf
Div, Grad, Curl and the Fundamental Forces
In this model any field gradient (Grad) results in a perturbative force. Likewise the point divergence (Div) results in the quantity we call Charge. Finally the net circulation (Curl) at any point defines the magnetic potential. Electric and magnetic effects are both well-defined and almost completely quantified by Maxwell's 1860-61 work On Physical Lines of Force. It is interesting that in this model (an extension of his) the electric potential (E) has units of velocity (m/sec) and the magnetic potential (B) is dimensionless. This, along with Maxwell's original work, could help shed light on resolving the actual physical mechanisms involved in the creation of both forces. Inspection strongly suggests that both the electric and magnetic forces are Bernoulli flow induced effects. In this view opposing currents reduce the net flow velocity between the vortices, increasing pressure and creating an apparent repulsive force. Complementary currents increase the net velocity, resulting in an apparent attraction.
Gravity as the Gradient of Electric Field
If the electric potential (E) is a net speed, its gradient will be an acceleration:
(Eq. 35)
Since this potential is squared the sign of E does not matter and the gradient vector is always directed towards the point of highest intensity. This provides a natural explanation for the singular attractive nature of the gravitational force.
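As a quick dimensional check of this claim (taking Stowe's statement that the electric potential E carries units of velocity, m/s):

\left[ \nabla \tfrac{1}{2} E^2 \right] = \frac{(\mathrm{m/s})^2}{\mathrm{m}} = \mathrm{m/s^2}

so the gradient of the squared potential does indeed carry the dimensions of an acceleration, which is at least dimensionally consistent with reading it as the gravitational acceleration in this model.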
Dear Jim,
All right, I'll take the bait.
( I am referring to your presentation on YouTube: )
So, let's go.
First of all, your presentation is a much simplified schematic overview of the real experiment and therefore omits essential details. However, the root of the problem with the interpretation quantum mechanics gives to this experiment is the denial of the existence of longitudinal dielectrical (Tesla) wave phenomena in current physics. The root of this misconception can be found in the Maxwell equations describing the EM fields. You see, the fields are depicted as being caused by charge carriers, which would be protons and electrons, which would be EM waves as shown by the dual slit experiment. In other words: the current Maxwell equations describe EM phenomena as being caused by EM waves, which essentially mixes up cause and effect. A chicken-and-egg problem is introduced here, which should not be there.
When we correct the Maxwell equations for this fundamental problem, we essentially end up with descriptions of waves as would occur in a fluid, which we used to call the aether. And with that, we can also do away with Einstein's relativity theory, because we have no need for the Lorentz transform since the Maxwell equations "without charge or current" transform perfectly under the good old Galilean transform, as shown by Dr. C.K. Thornhill:
http://www.etherphysics.net/CKT4.pdf
And in fact, contrary to popular belief, the incorrectness of the relativity theory is confirmed by one of the leading experts in GPS technology, Ron Hatch:
http://www.youtube.com/watch?v=CGZ1GU_HDwY
When we take a closer look at the original Maxwell papers and introduce a compressible aether instead of an incompressible one, we can come to a proper foundation for a "theory of everything" explaining not only the magnetic field as a rotational force, but also explain gravity as being the gradient of the Electric field, as has been done by Paul Stowe:
http://vixra.org/abs/1310.0237
Note that at this point, we have already done away with gravity as being one of the fundamental forces. However, we still have a problem with the "weak and strong" interactions, because if all matter is an electromagnetic phenomenon, there can be no other fundamental forces but the electromagnetic ones. In other words: the forces keeping an atom together MUST also be electromagnetic in nature. And this can also be experimentally shown, as has been done by David LaPoint:
Getting back to the dual slit experiment, one of the most fundamental assumptions underneath the currently accepted interpretation is that "photons" or "particles" are being emitted at random times from any given source. However, that would lead to particles falling on the slits which would not be in phase and therefore we cannot get an interference pattern. In other words: the emission of particles/photons from any real, physical source MUST be occurring along a deterministic process and not a random process. Said otherwise: the atoms making up any real, physical source MUST be vibrating in resonance in order to get a nice interference pattern. This can be made obvious by considering the 21 cm hydrogen line ( http://en.wikipedia.org/wiki/Hydrogen_line ). It is ludicrous to assume photons with a wavelength of no less than 21 cm can be caused by a change of energy states of individual atoms occurring at random moments. In other words: there is no question that these phenomena are the result of some kind of resonance occurring within your photon/particle source.
Now of course if you have a resonating photon/particle source, which acts as an antenna, you will get the so-called "near field" as well as the so-called "far field":
http://en.wikipedia.org/wiki/Near_and_far_field
These have thus far not been properly explained. Quantum Mechanics resorts to the invention of "virtual" - by definition non-existing - photons in order to straighten things out:
"In the quantum view of electromagnetic interactions, far-field effects are manifestations of real photons, whereas near-field effects are due to a mixture of real and virtual photons. Virtual photons composing near-field fluctuations and signals, have effects that are of far shorter range than those of real photons."
However, since we know that the magnetic field is a rotational force, we can deduce that any photon or particle (the "far field"), which is an EM wave phenomenon, contains some kind of (magnetic) vortex, one way or the other, and therefore is not a "real" transverse wave. So, the difference between the near and far fields in reality is simply that the near field is a real (surface) transverse wave, while the far field is made up of particles/photons characterized by the existence of some kind of (magnetic) vortex keeping the photon/particle together.
Since in the transition from the near field to the far field the propagation mode of the phenomenon changes from "transverse" to "particle" mode, it is clear that a transverse wave on the surface of some kind of material, no matter in what way it has been induced, can radiate and/or absorb "photon/particle mode" wave phenomena, since an antenna works both ways, as a transmitter and a receiver.
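For orientation, the conventional antenna-theory estimates of where the near field ends and the far field begins are easy to compute. The sketch below simply uses the textbook rules of thumb (roughly lambda/2pi for an electrically small antenna, 2D²/lambda for a large aperture) with assumed example numbers; it makes no claim about the interpretation of the near field given above:

```python
import math

# Conventional near-field / far-field boundary estimates from antenna theory.
# These are the standard textbook rules of thumb, with assumed example values.

c = 2.998e8  # m/s

def reactive_near_field_radius(freq_hz):
    """Approximate outer edge of the reactive near field of a small antenna: lambda / (2*pi)."""
    return c / freq_hz / (2 * math.pi)

def fraunhofer_distance(freq_hz, aperture_m):
    """Conventional start of the far field for an antenna of largest dimension D: 2*D^2 / lambda."""
    lam = c / freq_hz
    return 2 * aperture_m**2 / lam

print("1 GHz small antenna, reactive near field ends around",
      round(reactive_near_field_radius(1e9), 3), "m")          # ~0.048 m
print("10 GHz dish of 0.5 m, far field starts around",
      round(fraunhofer_distance(10e9, 0.5), 1), "m")           # ~16.7 m
```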
However, when we have a transverse wave propagating along the surface of some material, we also have an associated dielectric (pressure) type wave, the longitudinal wave, which propagates at a speed of sqrt(2) times the speed of light through vacuum. Of course, this propagation mode also propagates energy. Energy which is not being accounted for in the standard model, hence the need for the invention of "dark" matter and energy in order to fill out the gaps.
So, what we are really looking at with the dual slit experiment source is a source of BOTH "particle/photon" modes of EM radiation AND longitudinal dielectric waves, which interact with one another. When longitudinal dielectric waves interact with a material capable of resonating at the frequency of these waves, it is pretty obvious that transverse surface waves are induced, which can also emit "photon/particle" mode waves in their turn.
As I already explained, photons/particles are characterized by the existence of some kind of rotational, magnetic, vortex, which cannot pass the slits. And therefore, AFTER the slits, we are left ONLY with the longitudinal phenomena and NO EM wave. These are introduced at the surface of the screen and/or at your "atom counter", both surfaces acting as receiving and transmitting antennas at the same time. Now whenever you take energy from the waves propagating around your slits ("counter on"), you influence the resonance taking place within the experiment. When you do not take any energy away ("counter off"), this influence is no longer present. And therefore you get the result that switching the counter on or off influences your experiment.
Any questions??
Very interesting. A leading expert in GPS stuff disagreeing with relativity:
http://www.youtube.com/watch?v=CGZ1GU_HDwY
RON HATCH: Relativity in the Light of GPS | EU 2013
Natural Philosophy Alliance Conference (NPA20), July 10-13, 2013, College Park, Maryland
Perhaps you've already heard that GPS, by the very fact that it WORKS, confirms Einstein's relativity; also that Black Holes must be real. But these are little more than popular fictions, according to the distinguished GPS expert Ron Hatch. Here Ron describes GPS data that refute fundamental tenets of both the Special and General Relativity theories. The same experimental data, he notes, suggests an absolute frame with only an appearance of relativity.
Ron has worked with satellite navigation and positioning for 50 years, having demonstrated the Navy's TRANSIT System at the 1962 Seattle World's Fair. He is well known for innovations in high-accuracy applications of the GPS system including the development of the "Hatch Filter" which is used in most GPS receivers. He has obtained over two dozen patents related to GPS positioning and is currently a member of the U.S National PNT (Positioning Navigation and Timing) Advisory Board. He is employed in advanced engineering at John Deere's Intelligent Systems Group.
Koevavla:
Hi Arend,
What a great presentation. I just don't quite understand that 's' factor, but otherwise I agree 100% with Ron Hatch.
The measurement results of Roland de Witte also agree well with Ron's theory: a variable speed of light in connection with a one-way speed-of-light measurement. If the frequency of falling light remains unchanged, then in fact the wavelength, and therefore also the speed of light, should decrease somewhat towards the Earth's surface. I heard this idea years ago via Vesselin Petkov. I think Ron is very good scientifically.
A missed opportunity for Ron is explaining the deflection of light by gravity. Ron clearly states that a gravitational field reduces the speed of light somewhat, and if you combine this with the bending of light when it travels through a gravitational field, then you can think of the aether density as a variable electromagnetic polarizability, just like air or water.
The aether apparently has an electromagnetic property, and therefore the aether can also be influenced by means of electrodynamic effects. And that is mankind's greatest secret.
Ron Hatch
http://www.worldsci.org/php/index.php?tab0=Scientists&tab1=Scientists&tab2=Display&id=257
Ronald Ray Hatch, born in Freedom, Oklahoma, now of Wilmington, California, received his Bachelor of Science degree in physics and math in 1962 from Seattle Pacific University. He worked at Johns Hopkins Applied Physics Lab, Boeing and Magnavox as Principal Scientist, before becoming a Global Positioning System (GPS) consultant. In 1994 he joined Jim Litton, K. T. Woo, and Jalal Alisobhani in starting what is now NavCom Technology, Inc. He has served a number of roles within the Institute of Navigation (ION), including Chair of the Satellite Division, President and Fellow. Hatch received the Johannes Kepler Award from the Satellite Division and the Colonel Thomas Thurlow Award from the ION. He has been awarded twelve patents either as inventor or co-inventor, most of which relate to GPS, about which he is one of the world's premier specialists. He is well known for his work in navigation and surveying via satellite.
http://www.gps.gov/governance/advisory/members/hatch/
http://ivanik3.narod.ru/GPS/Hatch/relGPS.pdf
copy: http://www.tuks.nl/pdf/Reference_Material/Ronald_Hatch/Hatch-Relativity_and_GPS-II_1995.pdf
[...] Ashby [2] in a GPS World article, "Relativity and GPS," gives an improper explanation for each of the three phenomena listed above.
http://en.wikipedia.org/wiki/Fictitious_force
[...]
[...]
http://arxiv.org/ftp/physics/papers/0204/0204044.pdf
For more than a century, millions of researchers have worked to implement the quantum idea in different fields of science. This work proved to be so extensive and fruitful that few have ever had the time to think about whether this quantum idea is actually consistent with experimental reality.
Around 2005, I wanted to publish a scientific paper based on the idea that ionization energy variation contradicts the quantum hypothesis. As expected, no scientific journal was interested in publishing such a paper, even though the data in the paper could be verified by a layman without any scientific background.
Is there anything simpler than a linear dependency between two quantities in a graph for drawing a clear conclusion?
Someone might ask rhetorically… how could it be possible not to publish such a paper?
The answer is very simple: how could a referee deny all of his own scientific activity and say bluntly that not only he or she but millions of people have been working on a new epicycle theory in science?
Therefore the idea was published in an Atomic structure book in 2007:
http://elkadot.com/index.php/en/books/atomic/ionization-energy-variation
Later the concept was revised and improved, published in another book about chemistry in 2009, and advertised in discussion groups in 2009 with the following formulation:
The neglected ionization energy variation for isoelectronic series can reveal more useful information about electron structure; the problem is that these data are in contradiction with the current quantum theory. The quantum predictions for work function values are in contradiction with experiments; for metals, ionization energy and work function must be equal, but in reality they are not.
For other classes of compounds, quantum mechanics again fails to predict anything. A striking example is the case of metallic oxides having work function values smaller than those of metals. It is outrageous how a covalent or ionic bond liberates electrons more easily than a metallic bond in the frame of current physics.
http://elkadot.com/index.php/en/books/chemistry/ionization-energy-and-work-function
People who haven't learned from past experiences will always have the tendency to repeat the same errors. I will paraphrase a famous economist who said that in a free market economy there is an invisible hand pushing things forward; in science the opposite is true: the invisible hand of an entire system thinks that pushing something under the carpet will maintain the status quo.
Will it be so or will not be so?!
Best regards,
Sorin
--
Thanks very much, this is very interesting.
I wrote down my thoughts on QM some time ago:
http://www.tuks.nl/wiki/index.php/Main/QuestioningQuantumMechanics
QM is fundamentally flawed. It starts off with the famous dual slit experiment, whereby they eventually conclude that the change of orbit of a *single* electron at a *random* moment emits a photon which is subsequently emitted in such a way that it is automagically in phase with the other photons emitted by the light source, since otherwise we would not see an interference pattern:
http://en.wikipedia.org/wiki/Electromagnetic_radiation#Particle_model_and_quantum_theory
"As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies."
http://en.wikipedia.org/wiki/Emission_spectrum "The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference."
The ridiculousness of the idea of radiation being caused by single transitions of an electron at random moments becomes glaring when considering the 21 cm Hydrogen line:
http://en.wikipedia.org/wiki/Hydrogen_line
"The hydrogen line, 21 centimeter line or HI line refers to the electromagnetic radiation spectral line that is created by a change in the energy state of neutral hydrogen atoms. This electromagnetic radiation is at the precise frequency of 1420.40575177 MHz, which is equivalent to the vacuum wavelength of 21.10611405413 cm in free space. This wavelength or frequency falls within the microwave radio region of the electromagnetic spectrum, and it is observed frequently in radio astronomy, since those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light.
The microwaves of the hydrogen line come from the atomic transition between the two hyperfine levels of the hydrogen 1s ground state with an energy difference of 5.87433 µeV.[1] The frequency of the quanta that are emitted by this transition between two different energy levels is given by Planck's equation."
http://en.wikipedia.org/wiki/Atomic_radius
"Under most definitions the radii of isolated neutral atoms range between 30 and 300 pm (trillionths of a meter), or between 0.3 and 3 angstroms. Therefore, the radius of an atom is more than 10,000 times the radius of its nucleus (1–10 fm),[2] and less than 1/1000 of the wavelength of visible light (400–700 nm)."
This means that the radius of the largest atoms is less than 1/70 millionth of the wavelength of the hydrogen line!
While it is hard to believe that it is widely accepted that single atoms can emit photons with wavelengths up to 1000 times larger than the size of an atom, it is beyond belief and totally ridiculous to assume a single transition in a body can emit a "photon" with a wavelength more than 70 million times as large as the body itself.
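A quick arithmetic check of the numbers used in this argument (Planck's relation for the frequency, the resulting wavelength, and its ratio to the quoted upper end of atomic radii):

```python
# Quick arithmetic check of the numbers quoted above for the 21 cm hydrogen line.

h  = 6.626e-34        # J*s, Planck constant
c  = 2.998e8          # m/s, speed of light
eV = 1.602e-19        # J per electronvolt

delta_E = 5.87433e-6 * eV          # energy difference between the two hyperfine levels
freq = delta_E / h                 # Planck's relation E = h*f
wavelength = c / freq

atom_radius = 300e-12              # upper end of the atomic radii quoted above (300 pm)

print(f"frequency  : {freq:.4e} Hz   (quoted: 1420.40575177 MHz)")
print(f"wavelength : {wavelength:.4f} m    (quoted: ~21.106 cm)")
print(f"wavelength / largest atomic radius : {wavelength / atom_radius:.2e}")
# The wavelength comes out several hundred million times the radius of even the
# largest atoms, which is the disproportion the argument above rests on.
```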
The only reasonable explanation for such a phenomenon to occur is that the transitions of (the electrons around) the atoms occur in phase and in resonance (aka "deterministic") with one another. In other words: no randomness at all!
"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."
http://en.wikipedia.org/wiki/Work_function
"In solid-state physics, the work function (sometimes spelled workfunction) is the minimum thermodynamic work (i.e. energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface. Here "immediately" means that the final electron position is far from the surface on the atomic scale, but still too close to the solid to be influenced by ambient electric fields in the vacuum. The work function is not a characteristic of a bulk material, but rather a property of the surface of the material (depending on crystal face and contamination)."
From your article:
"As is observed, as a general rule, the ionization energies for all metals are greater then work function."
This is not illogical. Since metals are conducting materials, the loss of negative charge because of the movement of an electron out of the metal can be compensated for by the movement of free electrons into the partial "hole" left by the leaving electron. This causes the departing electron to be less attracted by the bulk metal compared to what would happen if the loss of charge in the lattice could not be compensated for.
"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."
That's very interesting. I'm not an expert on metallic oxides, but I am a bit familiar with Aluminum Oxide and the working of electrolytic capacitors, whereby you have a dielectric layer of Aluminum Oxide between your positive plate and the electrolyte. On the negative plate, usually also Aluminum, there is also a very thin layer of Aluminum Oxide, an insulator. Yet, the negative plate conducts electricity very well into the electrolyte filling up the space between the plates.
So, for this particular metallic oxide, we are dealing with a dielectric:
http://en.wikipedia.org/wiki/Dielectric
"A dielectric material (dielectric for short) is an electrical insulator that can be polarized by an applied electric field. When a dielectric is placed in an electric field, electric charges do not flow through the material as they do in a conductor, but only slightly shift from their average equilibrium positions causing dielectric polarization. Because of dielectric polarization, positive charges are displaced toward the field and negative charges shift in the opposite direction. This creates an internal electric field that reduces the overall field within the dielectric itself."
[...]
"Dipolar polarization
Dipolar polarization is a polarization that is either inherent to polar molecules (orientation polarization), or can be induced in any molecule in which the asymmetric distortion of the nuclei is possible (distortion polarization). Orientation polarization results from a permanent dipole, e.g., that arising from the 104.45° angle between the asymmetric bonds between oxygen and hydrogen atoms in the water molecule, which retains polarization in the absence of an external electric field. The assembly of these dipoles forms a macroscopic polarization."
So, for Aluminum Oxide, we are dealing with a material which is both an insulator and polarizable, and thus also (to a certain degree) capable of making up for the loss of charge because of a leaving electron, albeit more slowly.
Another point is the conductivity of heat by the material. Metals are excellent heat conductors IIRC, so if you were to heat up a part of a metal, the heat would quickly conduct away into the bulk metal and thus one would need more thermal energy to liberate an electron from a metal in comparison with a less heat conducting material, which might very well be the case for metallic oxides.
So, I would guess that those materials which are on the one hand electrically insulating and polarizable (dielectrics) and on the other hand bad heat conductors would have the lowest work function values, because they a) can compensate for "charge loss" and b) can be "locally heated".
“Most of what Lammertink writes seems like nonsense to me (longitudinal EM waves have never been demonstrated), but I don't know enough about EM theory.”
http://en.wikipedia.org/wiki/Laser
“Some applications of lasers depend on a beam whose output power is constant over time. Such a laser is known as continuous wave (CW). Many types of lasers can be made to operate in continuous wave mode to satisfy such an application. Many of these lasers actually lase in several longitudinal modes at the same time, and beats between the slightly different optical frequencies of those oscillations will in fact produce amplitude variations on time scales shorter than the round-trip time (the reciprocal of the frequency spacing between modes), typically a few nanoseconds or less.”
http://en.wikipedia.org/wiki/Longitudinal_mode
“A longitudinal mode of a resonant cavity is a particular standing wave pattern formed by waves confined in the cavity. The longitudinal modes correspond to the wavelengths of the wave which are reinforced by constructive interference after many reflections from the cavity’s reflecting surfaces. All other wavelengths are suppressed by destructive interference.
A longitudinal mode pattern has its nodes located axially along the length of the cavity. Transverse modes, with nodes located perpendicular to the axis of the cavity, may also exist.
[...]
A common example of longitudinal modes are the light wavelengths produced by a laser. In the simplest case, the laser’s optical cavity is formed by two opposed plane (flat) mirrors surrounding the gain medium (a plane-parallel or Fabry–Pérot cavity). The allowed modes of the cavity are those where the mirror separation distance L is equal to an exact multiple of half the wavelength, λ.”
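A minimal sketch of the mode arithmetic in that quote, for a plane-mirror Fabry-Perot cavity of assumed length (refractive index taken as 1):

```python
# Longitudinal modes of a simple Fabry-Perot cavity, as described in the quote
# above: the mirror separation L must be an exact multiple of half a wavelength,
# so the allowed frequencies are f_m = m * c / (2 * n * L).

c = 2.998e8  # m/s

def mode_frequency(m, cavity_length_m, n=1.0):
    return m * c / (2 * n * cavity_length_m)

def mode_spacing(cavity_length_m, n=1.0):
    return c / (2 * n * cavity_length_m)

L = 0.30   # assumed 30 cm cavity
print(f"mode spacing for a {L*100:.0f} cm cavity: {mode_spacing(L):.3e} Hz")   # ~0.5 GHz
# Near an optical frequency of ~4.74e14 Hz (a 633 nm HeNe line) the mode index m
# is of order one million, and neighbouring modes differ by only ~0.5 GHz, which
# produces the 'beats' mentioned in the earlier laser quote.
print(f"example mode index near 633 nm: m ~ {round(4.74e14 / mode_spacing(L))}")
```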
I could swear I'm reading here that lasers work with longitudinal waves and that transverse modes might perhaps also exist…
Hmmm.
And if you then call to mind the design of a laser, you essentially also have a waveguide, but one for optical frequencies.
http://en.wikipedia.org/wiki/Laser
“The optical resonator is sometimes referred to as an “optical cavity”, but this is a misnomer: lasers use open resonators as opposed to the literal cavity that would be employed at microwave frequencies in a maser. The resonator typically consists of two mirrors between which a coherent beam of light travels in both directions, reflecting back on itself so that an average photon will pass through the gain medium repeatedly before it is emitted from the output aperture or lost to diffraction or absorption.”
Now with an EM waveguide you have a metal wall, in which the presence of an alternating E-field induces eddy currents in the waveguide wall, which in turn give rise to a magnetic field. But with a laser you only have two mirrors and thus no waveguide wall that introduces a magnetic field through induction.
In short: longitudinal EM waves have indeed been demonstrated without the propagation taking place through the movement of charge carriers, albeit that in a waveguide this is a TM mode and thus not a pure "Tesla" LD wave. The absence of a metal wall in a laser, together with the references to a "longitudinal" mode, therefore does indicate that with a laser we are dealing with a pure longitudinal dielectric wave, such a Tesla wave.
And if these indeed propagate at pi/2 times c, and this has not been taken into account in optical research, then you would expect an anomaly to be found somewhere that could confirm this.
And this is indeed a very interesting anomaly:
http://en.wikipedia.org/wiki/Dispersion_%28optics%29
-:- The group velocity vg is often thought of as the velocity at which energy or information is conveyed along the wave. In most cases this is true, and the group velocity can be thought of as the signal velocity of the waveform. In some unusual circumstances, called cases of anomalous dispersion, the rate of change of the index of refraction with respect to the wavelength changes sign, in which case it is possible for the group velocity to exceed the speed of light (vg > c). Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.[3] Recently, it has become possible to create gases in which the group velocity is not only larger than the speed of light, but even negative. In these cases, a pulse can appear to exit a medium before it enters.[4] Even in these cases, however, a signal travels at, or less than, the speed of light, as demonstrated by Stenner, et al.[5] -:-
This is quite fascinating. A pulse can propagate faster than light (or even have a negative velocity), but a signal cannot. So how exactly does a pulse differ from a signal?
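As a rough sketch of where such a faster-than-light group velocity comes from (my own illustration, using the standard group-index relation, not anything from the quoted article): the group velocity follows from the refractive index n and its slope dn/dlambda, and near an absorption resonance that slope can push the group index below one, or even negative:

    # Minimal sketch: group velocity from the group index n_g = n - lambda * dn/dlambda.
    # The numerical values below are illustrative assumptions, not measured data.
    c = 299792458.0   # speed of light in vacuum [m/s]

    def group_velocity(n, dn_dlambda, wavelength):
        n_g = n - wavelength * dn_dlambda   # group index
        return c / n_g                      # exceeds c (or goes negative) when n_g < 1 (or < 0)

    # normal dispersion: dn/dlambda < 0  ->  v_g < c
    print(group_velocity(1.45, -2.0e4, 1.55e-6))
    # anomalous dispersion near an absorption resonance: dn/dlambda >> 0  ->  v_g > c
    print(group_velocity(1.45, 5.0e5, 1.55e-6))
    # even steeper slope: group index negative  ->  negative group velocity
    print(group_velocity(1.45, 2.0e6, 1.55e-6))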
http://scienceblog.com/light.html
“They were also able to create extreme conditions in which the light signal travelled faster than 300 million meters a second. And even though this seems to violate all sorts of cherished physical assumptions, Einstein needn’t move over – relativity isn’t called into question, because only a portion of the signal is affected.”
Right. Nothing going on. Nothing to see here. Only a portion of the signal travels faster than c. But which portion, then?
“To succeed commercially, a device that slows down light must be able to work across a range of wavelengths, be capable of working at high bit-rates and be reasonably compact and inexpensive.”
Right. It concerns a portion of the signal, and there is a limited bandwidth within which faster-than-light propagation is possible.
Wikipedia once more:
“Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.”
That is interesting. So there is a specific, medium-dependent resonance frequency at which faster-than-light propagation occurs, exactly as I have argued also happens in the RF case with antennas.
And what I have also argued there is that it is very difficult to measure longitudinal waves. And that is exactly what they run into here:
Scienceblog: “Light signals race down the information superhighway at about 186,000 miles per second. But information cannot be processed at this speed, because with current technology light signals cannot be stored, routed or processed without first being transformed into electrical signals, which work much more slowly.”
This is a paper by Thévenaz that may give more details:
http://infoscience.epfl.ch/record/161515/files/04598375.pdf
To be continued…
Very interesting. The mechanism this appears to revolve around is called Brillouin scattering:
http://en.wikipedia.org/wiki/Brillouin_scattering
-:- As described in classical physics, when the medium is compressed its index of refraction changes, and a fraction of the traveling light wave, interacting with the periodic refraction index variations, is deflected as in a three-dimensional diffraction grating. Since the sound wave, too, is travelling, light is also subjected to a Doppler shift, so its frequency changes. -:-
Note that this involves a mechanical compression of the medium, and as far as I can tell at this point this is done by means of sound waves.
From the article in my previous post: -:- Among all parametric processes observed in silica, stimulated Brillouin scattering (SBS) turns out to be the most efficient. In its most simple configuration the coupling is realized between two optical waves propagating exclusively in opposite directions in a single mode fibre, through the stimulation by electrostriction of a longitudinal acoustic wave that plays the role of the idler wave in the interaction [4]. This stimulation is efficient only if the two optical waves show a frequency difference giving a beating interference resonant with an acoustic wave (that is actually never directly observed). This acoustic wave in turn induces a dynamic Bragg grating in the fibre core that diffracts the light from the higher frequency wave back into the wave showing the lower frequency. -:-
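For a sense of scale, the frequency difference at which that beating becomes resonant with the acoustic wave (the Brillouin shift) follows from the acoustic velocity in the fibre. A minimal sketch with typical textbook values for silica fibre at 1550 nm (these numbers are my own illustration, not taken from the paper):

    # Minimal sketch: Brillouin frequency shift nu_B = 2 * n * v_a / lambda.
    # Typical textbook values for silica fibre at 1550 nm (illustrative assumptions):
    n = 1.45               # refractive index of the fibre core
    v_a = 5960.0           # acoustic (sound) velocity in silica [m/s]
    wavelength = 1.55e-6   # optical wavelength [m]
    nu_B = 2 * n * v_a / wavelength
    print(nu_B / 1e9, "GHz")   # roughly 11 GHz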
An interesting detail is that this involves silica. That is of course simply glass, but it is a silicon oxide, and silicon is a semiconductor. And what is used here is the crystalline form:
http://nl.wikipedia.org/wiki/Siliciumdioxide
-:- Silicon (di)oxide, or silica, is the best-known oxide of silicon.
In nature it occurs in various forms, both crystalline and non-crystalline (amorphous). Quartz is an example of crystalline silica; other examples are cristobalite and tridymite. Opal is an example of amorphous silica, as is quartz fused together by extreme heat (fused quartz). -:-
The article by Stenner, "The speed of information in a 'fast-light' optical medium", about group velocity and so on, can be found here:
http://www.phy.duke.edu/research/photon/qelectron/pubs/StennerNatureFastLight.pdf
“One consequence of the special theory of relativity is that no signal can cause an effect outside the source light cone, the space-time surface on which light rays emanate from the source [1]. Violation of this principle of relativistic causality leads to paradoxes, such as that of an effect preceding its cause [2]. Recent experiments on optical pulse propagation in so-called ‘fast-light’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values [3]—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations [5], which would preserve causality. Here we find that the time to detect information propagating through a fast-light medium is slightly longer than the time required to detect the same information travelling through a vacuum, even though v_g in the medium vastly exceeds c. Our observations are therefore consistent with relativistic causality and help to resolve the controversies surrounding superluminal pulse propagation.”
A small example of how things really stand at the established journals is this passage from the abstract of Stenner's article, in Nature no less:
“Recent experiments on optical pulse propagation in so-called ‘fastlight’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations, which would preserve causality.”
Now the group velocity v_g is the velocity of the envelope of a propagating signal, which is normally where the information resides. Wikipedia has a nice animated figure in which the group velocity is negative relative to the phase velocity, the carrier wave:
http://en.wikipedia.org/wiki/Group_velocity

The carrier (phase velocity) moves to the left, the envelope (group velocity) moves to the right. And yet Nature manages to publish an article claiming that a negative group velocity would amount to a violation of causality, because the signal is present at the "output" before it is present at the "input".
Aren't these simply reasoning errors at Sesame Street level?
I mean: if the envelope moves in the opposite direction to the carrier, then that envelope simply enters your optical fibre from the other side and comes out again some time later, namely at the side you consider the input from the carrier's point of view. The thing simply moves backwards relative to the propagation direction of the carrier, so it is first located at the end you would label the output as seen from the carrier, and only later at the end you would label the input as seen from the carrier…
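The point is easy to check numerically. A minimal sketch (my own illustration, not taken from the Wikipedia page) that superposes two sine waves such that their mean frequency and wavenumber give a positive phase velocity while their differences give a negative group velocity:

    # Minimal sketch (illustrative values): superpose two waves so that the
    # carrier moves in the +x direction while the beat envelope moves in -x.
    k1, w1 = 10.0, 10.0     # wavenumber and angular frequency of wave 1
    k2, w2 = 11.0,  9.0     # wave 2: slightly higher k, slightly lower w

    v_phase = (w1 + w2) / (k1 + k2)   # carrier (phase) velocity: about +0.90, moves right
    v_group = (w2 - w1) / (k2 - k1)   # envelope (group) velocity: -1.00, moves left
    print("phase velocity:", v_phase, " group velocity:", v_group)

    # The sum cos(k1*x - w1*t) + cos(k2*x - w2*t) factors into a fast carrier
    # times a slow beat envelope; one envelope maximum sits at x = v_group * t,
    # so it walks backwards while the carrier crests move forwards.
    for t in (0.0, 1.0, 2.0):
        print("t =", t, " envelope maximum at x =", v_group * t)

The envelope maximum indeed moves backwards while the carrier crests move forwards, which is all that a negative group velocity means here.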
To summarize the above: I note that it is not in dispute that a group velocity greater than c can be achieved with light in optical fibre.
The article by Thévenaz contains an interesting detail:
“The maximal advancement attained 14.4 ns, to be compared to the 10 ns of normal propagation time in the 2 metre fibre. This is a situation of a negative group velocity and literally it means that the main feature of the signal exits the fibre before entering it.”
Again, the same reasoning error, but that aside.
BTW: they refer to this article for more detail: http://infoscience.epfl.ch/record/128303/files/ApplPhysLett_87_081113.pdf
Anyway, the point is that here we see a maximum velocity factor of 1.44, fairly close to the previously mentioned pi/2 for the case of longitudinal dielectric (or electro"static") LD waves.
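The comparison being made here, in numbers (the 14.4 ns and 10 ns are quoted from the Thévenaz paper; the rest is simple arithmetic on my part):

    import math

    # Numbers quoted from the Thevenaz paper: 14.4 ns maximum advancement,
    # 10 ns normal propagation time through the 2 metre fibre.
    advancement_ns = 14.4
    normal_time_ns = 10.0

    velocity_factor = advancement_ns / normal_time_ns    # the 1.44 referred to above
    print("velocity factor:", velocity_factor)
    print("pi/2 for comparison:", math.pi / 2)            # ~1.5708

    # The 14.4 ns advancement also implies an effective transit time of
    # 10 - 14.4 = -4.4 ns, i.e. the negative group velocity the paper describes.
    print("effective transit time [ns]:", normal_time_ns - advancement_ns)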
The ultimate question, then, is what the nature of that envelope wave is. Is this indeed a longitudinal dielectric wave, or is it something else after all?
OK. Now I have argued that something special happens with a dipole whose length is pi/2 times 1/2 lambda. Because wave reflections occur at the ends of the dipole, you get a standing wave, in which the total round-trip phase shift of the EM field is 90 degrees.
This is not the whole story, however. The electric field is related to voltages and the magnetic field is related to the flow of current. At the end of an antenna you have the situation that no current can flow, while the voltage can vary freely.
In other words: for the E-field the end of an antenna is open, but for the magnetic field it is closed.
And that means that the two components of the wave each get a different phase shift at the ends of the dipole, just as with the open or fixed end of a rope that you set swinging:
http://electron9.phys.utk.edu/phys136d/modules/m9/film.htm
“A wave pulse, which is totally reflected from a rope with a fixed end is inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is π (180°).
A wave pulse, which is totally reflected from a rope with a loose end is not inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is zero. When a periodic wave is totally reflected, then the incident wave and the reflected wave travel in the same medium in opposite directions and interfere.”
So here we have the peculiar situation that one component of the field (the B-field) is reflected inverted at the end of the dipole, while the other (the E-field) is not.
If we are to believe the measurements by Dollard and others, then you would expect that, when you work this situation out analytically, you will see a significant difference between the resulting E-field strength and B-field strength compared with the situation where you take a dipole with a length of 1/2 lambda.
And on top of that you would expect to find a group velocity of pi/2 times c…
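Before working that out properly, here is a minimal numerical sketch of the superposition argument (my own construction under crude simplifications, not a full antenna model), comparing a 1/2 lambda element with a (pi/2) * (1/2 lambda) element:

    import numpy as np

    # Minimal sketch (my own construction, not a full antenna model): superpose
    # an incident wave with its reflection from the end of the element at x = L,
    # using no inversion for the E-component (open end for voltage) and inversion
    # for the B-component (closed end for current), then compare the resulting
    # standing-wave field strengths for the two element lengths.
    def standing_field_rms(L, n=2000):
        k, w = 2 * np.pi, 2 * np.pi           # wavelength = 1, period = 1
        x = np.linspace(0.0, L, n)            # position along the element
        t = np.linspace(0.0, 1.0, n)          # one full period
        X, T = np.meshgrid(x, t)
        incident = np.cos(k * X - w * T)
        reflected = np.cos(k * (2 * L - X) - w * T)   # wave reflected at x = L
        E = incident + reflected     # no inversion -> E antinode at the end
        B = incident - reflected     # inversion    -> B node at the end
        return np.sqrt(np.mean(E**2)), np.sqrt(np.mean(B**2))

    for L in (0.5, (np.pi / 2) * 0.5):
        E_rms, B_rms = standing_field_rms(L)
        print(f"element length {L:.4f} lambda:  E_rms = {E_rms:.3f}   B_rms = {B_rms:.3f}")

In this toy superposition the E and B standing-wave strengths come out equal for the 1/2 lambda element, but differ slightly for the pi/2 case, which at least points in the direction of the asymmetry argued above.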
Chapter 4: The remarkable properties of Water
(TO DO)
http://jes.ecsdl.org/content/99/1/30.abstract