FEET Chapter 3

Chapter 3: Unification of Physics

"It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do" - Albert Einstein.

The dual slit experiment and the wave-particle duality principle

This gives us a major point regarding Zero Point Energy (ZPE). Because all known particles are electromagnetic waves, it follows that even at absolute zero the particles themselves *must* oscillate and therefore emit an electrostatic field as well as a magnetic field.

So, when considering Zero Point Energy from the perspective of the movements of the particles, one does not take into account the oscillations which make up the particles themselves. How could an electromagnetic wave stop oscillating and radiating energy at absolute zero?

Another point is the question of the weak and strong nuclear forces. How could any force other than the electromagnetic act on electromagnetic waves?

David LaPoint clip with steel balls rotating under magnets

-> spinning plasma

https://thesingularityeffect.wordpress.com/physics/what-i-think-the-primer-fields/some-questions-and-answers-from-david-lapoint/

http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality http://en.wikipedia.org/w/index.php?title=Wave%E2%80%93particle_duality&oldid=659839762

Wave–particle duality is the concept that every elementary particle or quantic entity exhibits the properties of not only particles, but also waves. It addresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects.

[...]

The idea of duality originated in a debate over the nature of light and matter that dates back to the 17th century, when Christiaan Huygens and Isaac Newton proposed competing theories of light: light was thought either to consist of waves (Huygens) or of particles (Newton). Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.

[...]

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter, not just light, has a wave-like nature; he related wavelength (denoted as λ), and momentum (denoted as p):
    \lambda = \frac{h}{p}
This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = \tfrac{E}{c} and the wavelength (in a vacuum) by λ = \tfrac{c}{f}, where c is the speed of light in vacuum.
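To make the scale concrete, here is a minimal sketch (my own illustration, using standard constants) that evaluates λ = h/p for an electron and for a macroscopic object, showing why the wave properties of macroscopic bodies go undetected, as noted above:

    # De Broglie wavelength: lambda = h / p (standard constants)
    h = 6.62607015e-34       # Planck constant, J*s
    m_e = 9.1093837015e-31   # electron mass, kg
    c = 2.99792458e8         # speed of light, m/s

    def de_broglie(mass_kg, velocity_ms):
        """Wavelength in metres for a particle of given mass and speed."""
        return h / (mass_kg * velocity_ms)

    # Electron at 1% of light speed: wavelength on the atomic scale.
    print(de_broglie(m_e, 0.01 * c))   # ~2.4e-10 m
    # A 1 g object at 1 m/s: wavelength absurdly small, hence undetectable.
    print(de_broglie(1e-3, 1.0))       # ~6.6e-31 m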

[...]

Treatment in modern quantum mechanics
Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation and are instead described by another wave equation.

Because all known matter (particles) has a wave-like nature, described by electrodynamic theory, I propose the following two hypotheses:

1. All matter is the manifestation of a localized electrodynamic phenomenon;

2. There exists but one fundamental interaction: the electrodynamic, i.e. electromagnetic, interaction.

So, what I am saying is that all known particles ARE some kind of local electromagnetic wave phenomenon, and that therefore a fully developed electromagnetic theory should be capable of describing and predicting all known physical phenomena, thus integrating all known physics into a Unified Theory requiring only one fundamental interaction.


Scientific theory:

A scientific theory is a well-substantiated explanation of some aspect of the natural world that is acquired through the scientific method and repeatedly tested and confirmed through observation and experimentation. As with most (if not all) forms of scientific knowledge, scientific theories are inductive in nature and aim for predictive power and explanatory capability.
The strength of a scientific theory is related to the diversity of phenomena it can explain, and to its elegance and simplicity (Occam's razor). As additional scientific evidence is gathered, a scientific theory may be rejected or modified if it does not fit the new empirical findings; in such circumstances, a more accurate theory is then desired and free of confirmation bias. In certain cases, the less-accurate unmodified scientific theory can still be treated as a theory if it is useful (due to its sheer simplicity) as an approximation under specific conditions (e.g. Newton's laws of motion as an approximation to special relativity at velocities which are small relative to the speed of light).
Scientific theories are testable and make falsifiable predictions. They describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry (e.g. electricity, chemistry, astronomy).

[...]

The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis.

http://en.wikipedia.org/wiki/Falsifiability

Falsifiability or refutability of a statement, hypothesis, or theory is an inherent possibility to prove it to be false. A statement is called falsifiable if it is possible to conceive an observation or an argument which proves the statement in question to be false. In this sense, falsify is synonymous with nullify, meaning not "to commit fraud" but "show to be false". Some philosophers argue that science must be falsifiable.
For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as All swans are white, yet it is logically possible to falsify it by observing a single black swan. Thus, the term falsifiability is sometimes synonymous to testability. Some statements, such as It will be raining here in one million years, are falsifiable in principle, but not in practice.
The concern with falsifiability gained attention by way of philosopher of science Karl Popper's scientific epistemology "falsificationism". Popper stresses the problem of demarcation—distinguishing the scientific from the unscientific—and makes falsifiability the demarcation criterion, such that what is unfalsifiable is classified as unscientific, and the practice of declaring an unfalsifiable theory to be scientifically true is pseudoscience. The question is epitomized in the famous saying of Wolfgang Pauli that if an argument fails to be scientific because it cannot be falsified by experiment, "it is not only not right, it is not even wrong!"

There are two known types of electromagnetic waves:

1. the transverse wave;

2. the vortex-based wave.

http://en.wikipedia.org/wiki/Optical_vortex

An optical vortex (also known as a screw dislocation or phase singularity) is a zero of an optical field, a point of zero intensity. Research into the properties of vortices has thrived since a comprehensive paper by John Nye and Michael Berry, in 1974,[1] described the basic properties of "dislocations in wave trains". The research that followed became the core of what is now known as "singular optics".

[...]

In an optical vortex, light is twisted like a corkscrew around its axis of travel. Because of the twisting, the light waves at the axis itself cancel each other out. When projected onto a flat surface, an optical vortex looks like a ring of light, with a dark hole in the center. This corkscrew of light, with darkness at the center, is called an optical vortex.
The vortex is given a number, called the topological charge, according to how many twists the light makes in one wavelength. The number is always an integer, and can be positive or negative, depending on the direction of the twist. The higher the number of twists, the faster the light is spinning around the axis. This spinning carries orbital angular momentum with the wave train, and will induce torque on an electric dipole.

[...]

An optical singularity is a zero of an optical field. The phase in the field circulates around these points of zero intensity (giving rise to the name vortex). Vortices are points in 2D fields and lines in 3D fields (as they have codimension two). Integrating the phase of the field around a path enclosing a vortex yields an integer multiple of 2\pi. This integer is known as the topological charge, or strength, of the vortex.
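As an illustration of this definition, the following sketch (my own construction, not from the article) builds a synthetic vortex field and recovers its topological charge by integrating the phase of the field around a closed loop:

    import numpy as np

    def topological_charge(field, center=0.0, radius=1.0, samples=400):
        """Winding number of a complex field: integrate its phase around a
        circle enclosing the vortex and divide the total change by 2*pi."""
        t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        z = center + radius * np.exp(1j * t)
        phases = np.angle(field(z))
        # Unwrap so the accumulated phase is continuous around the loop.
        dphi = np.diff(np.unwrap(np.concatenate([phases, phases[:1]])))
        return round(dphi.sum() / (2.0 * np.pi))

    # A charge-3 vortex centred at the origin: the field behaves like z**3.
    print(topological_charge(lambda z: z**3))   # -> 3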

[...]

A q-plate is a birefringent liquid crystal plate with an azimuthal distribution of the local optical axis, which has a topological charge q at its center defect. The q-plate with topological charge q can generate a \pm 2q charge vortex based on the input beam polarization.

Paul Stowe

http://vixra.org/pdf/1310.0237v1.pdf

in this model which is founded upon Maxwell’s, charge itself is a basic oscillation of momentum at each and every point in the field and with units of kg/sec we finally realize that the charge to mass ratio is simply the oscillation’s frequency ν'

Div, Grad, Curl and the Fundamental Forces

In this model any field gradient (Grad) results in a perturbative force. Likewise the point divergence (Div) in the quantity we call Charge. Finally, the net circulation (Curl) at any point defines the magnetic potential. Electric and magnetic effects are both well-defined and almost completely quantified by Maxwell's 1860-61 work On Physical Lines of Force. It is interesting that in this model (an extension of his) the electric potential (E) has units of velocity (m/sec) and the magnetic potential (B) is dimensionless. This, along with Maxwell's original work, could help shed light on the actual physical mechanisms involved in the creation of both forces. Inspection strongly suggests that both the electric and magnetic forces are Bernoulli flow induced effects. In this view opposing currents reduce the net flow velocity between the vortices, increasing pressure and creating an apparent repulsive force. Complementary currents increase net velocity, resulting in an apparent attraction.

Gravity as the Gradient of Electric Field

If the electric potential (E) is a net speed, its gradient will be an acceleration:

    g = \nabla E^2    (Eq. 35)

Since this potential is squared, the sign of E does not matter and the gradient vector is always directed towards the point of highest intensity. This provides a natural explanation for the exclusively attractive nature of the gravitational force.
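A minimal numerical sketch of this claim (my own illustration; it assumes Stowe's electric potential E is a scalar speed field and uses a hypothetical 1-D profile):

    import numpy as np

    # Hypothetical 1-D electric potential E(x) in m/s, peaked at x = 0.
    # Illustrates the claim: grad(E^2) is an acceleration that points
    # toward the intensity maximum regardless of the sign of E.
    x = np.linspace(-5.0, 5.0, 1001)
    for sign in (+1.0, -1.0):
        E = sign * np.exp(-x**2)       # flipping the sign of E
        g = np.gradient(E**2, x)       # units: (m/s)^2 / m = m/s^2
        print(g[250] > 0, g[750] < 0)  # points toward x = 0: True True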


Dear Jim,

All right, I'll take the bait.

( I am referring to your presentation on YouTube: )

So, let's go.

First of all, your presentation is a much simplified schematic overview of the real experiment and therefore omits essential details. However, the root of the problem with the interpretation quantum mechanics gives to this experiment is the denial of the existence of longitudinal dielectric (Tesla) wave phenomena in current physics. The root of this misconception can be found in the Maxwell equations describing the EM fields. You see, the fields are depicted as being caused by charge carriers, which would be protons and electrons, which would themselves be EM waves as shown by the dual slit experiment. In other words: the current Maxwell equations describe EM phenomena as being caused by EM waves, which essentially mixes up cause and effect. A chicken-and-egg problem is introduced here, which should not be there.

When we correct the Maxwell equations for this fundamental problem, we essentially end up with descriptions of waves as would occur in a fluid, which we used to call the aether. And with that, we can also do away with Einstein's relativity theory, because we have no need for the Lorentz transform, since the Maxwell equations "without charge or current" transform perfectly under the good old Galilean transform, as shown by Dr. C.K. Thornhill:

http://www.etherphysics.net/CKT4.pdf

And in fact, contrary to popular belief, the incorrectness of the relativity theory is confirmed by one of the leading experts in GPS technology, Ron Hatch:

http://www.youtube.com/watch?v=CGZ1GU_HDwY

When we take a closer look at the original Maxwell papers and introduce a compressible aether instead of an incompressible one, we can come to a proper foundation for a "theory of everything" explaining not only the magnetic field as a rotational force, but also explain gravity as being the gradient of the Electric field, as has been done by Paul Stowe:

http://vixra.org/abs/1310.0237

Note that at this point, we have already done away with gravity as being one of the fundamental forces. However, we still have a problem with the "weak and strong" interactions, because if all matter is an electromagnetic phenomenon, there can be no other fundamental forces but the electromagnetic ones. In other words: the forces keeping an atom together MUST also be electromagnetic in nature. And this can also be experimentally shown, as has been done by David LaPoint:

Getting back to the dual slit experiment, one of the most fundamental assumptions underneath the currently accepted interpretation is that "photons" or "particles" are being emitted at random times from any given source. However, that would lead to particles falling on the slits which would not be in phase, and therefore we could not get an interference pattern. In other words: the emission of particles/photons from any real, physical source MUST occur via a deterministic process and not a random process. Said otherwise: the atoms making up any real, physical source MUST be vibrating in resonance in order to get a nice interference pattern. This can be made obvious by considering the 21 cm hydrogen line ( http://en.wikipedia.org/wiki/Hydrogen_line ). It is ludicrous to assume photons with a wavelength of no less than 21 cm can be caused by changes of energy states of individual atoms occurring at random moments. In other words: there is no question that these phenomena are the result of some kind of resonance occurring within your photon/particle source.
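The phase-sum arithmetic behind this argument can be checked directly; in the sketch below (my own illustration) N emitters are summed at a screen point with random versus identical phases, giving intensities of order N versus N²:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000   # number of emitting atoms

    # Random emission phases: intensities add, giving a pattern-free
    # background of order N.
    random_phases = rng.uniform(0.0, 2.0 * np.pi, N)
    I_random = np.abs(np.exp(1j * random_phases).sum()) ** 2

    # In-phase ("resonant") emission: amplitudes add, so the bright
    # fringes of an interference pattern scale as N^2.
    I_coherent = np.abs(np.exp(1j * np.zeros(N)).sum()) ** 2

    print(f"random phases:   intensity ~ {I_random:.0f} (order N = {N})")
    print(f"coherent phases: intensity = {I_coherent:.0f} (N^2 = {N**2})")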

Now of course if you have a resonating photon/particle source, which acts as an antenna, you will get the so-called "near field" as well as the so-called "far field":

http://en.wikipedia.org/wiki/Near_and_far_field

These have thus far not been properly explained. Quantum Mechanics resorts to the invention of "virtual" - by definition non-existent - photons in order to straighten things out:

"In the quantum view of electromagnetic interactions, far-field effects are manifestations of real photons, whereas near-field effects are due to a mixture of real and virtual photons. Virtual photons composing near-field fluctuations and signals, have effects that are of far shorter range than those of real photons."

However, since we know that the magnetic field is a rotational force, we can deduce that any photon or particle (the "far field"), which is an EM wave phenomenon, contains some kind of (magnetic) vortex, one way or the other, and therefore is not a "real" transverse wave. So, the difference between the near and far fields in reality is simply that the near field is a real (surface) transverse wave, while the far field is made up of particles/photons characterized by the existence of some kind of (magnetic) vortex keeping the photon/particle together.

Since in the transition from the near field to the far field the propagation mode of the phenomena changes from "transverse" to "particle" mode, it is clear that a transverse wave on the surface of some kind of material, no matter in what way it has been induced, can radiate and/or absorb "photon/particle mode" wave phenomena, since an antenna works both ways, as a transmitter and as a receiver.

However, when we have a transverse wave propagating along the surface of some material, we also have an associated dielectric (pressure) type wave, the longitudinal wave, which propagates at a speed of sqrt(2) times the speed of light through vacuum. Of course, this propagation mode also propagates energy. Energy which is not being accounted for in the standard model, hence the need for the invention of "dark" matter and energy in order to fill in the gaps.

So, what we are really looking at with the dual slit experiment source is a source of BOTH "particle/photon" modes of EM radiation AND longitudinal dielectric waves, which interact with one another. When longitudinal dielectric waves interact with a material capable of resonating at the frequency of these waves, it is pretty obvious that transverse surface waves are induced, which can in their turn also emit "photon/particle" mode waves.

As I already explained, photons/particles are characterized by the existence of some kind of rotational magnetic vortex, which cannot pass the slits. And therefore, AFTER the slits, we are left ONLY with the longitudinal phenomena and NO EM wave. These are introduced at the surface of the screen and/or at your "atom counter", both surfaces acting as receiving and transmitting antennas at the same time. Now whenever you take energy from the waves propagating around your slits ("counter on"), you influence the resonance taking place within the experiment. When you do not take any energy away ("counter off"), this influence is no longer present. And therefore you get the result that switching the counter on or off influences your experiment.

Any questions??


Very interesting. A leading expert in GPS stuff disagreeing with relativity:

http://www.youtube.com/watch?v=CGZ1GU_HDwY

RON HATCH: Relativity in the Light of GPS | EU 2013

Natural Philosophy Alliance Conference (NPA20) July 10 - 13, 2013 -- College Park, Maryland To register go to www.worldnpa.org (Difficulties registering? Contact davidharrison@thunderbolts.info)

Perhaps you've already heard that GPS, by the very fact that it WORKS, confirms Einstein's relativity; also that Black Holes must be real. But these are little more than popular fictions, according to the distinguished GPS expert Ron Hatch. Here Ron describes GPS data that refute fundamental tenets of both the Special and General Relativity theories. The same experimental data, he notes, suggests an absolute frame with only an appearance of relativity.

Ron has worked with satellite navigation and positioning for 50 years, having demonstrated the Navy's TRANSIT System at the 1962 Seattle World's Fair. He is well known for innovations in high-accuracy applications of the GPS system including the development of the "Hatch Filter" which is used in most GPS receivers. He has obtained over two dozen patents related to GPS positioning and is currently a member of the U.S National PNT (Positioning Navigation and Timing) Advisory Board. He is employed in advanced engineering at John Deere's Intelligent Systems Group.

Koevavla:

Hi Arend,

What a terrifically good presentation. I just don't quite understand that 's' factor, but otherwise I agree 100% with Ron Hatch.

The measurement results of Roland de Witte also agree well with Ron's theory: a variable speed of light in connection with a one-way speed-of-light measurement. If the frequency of falling light remains unchanged, then the wavelength, and thus also the speed of light, should in fact decrease somewhat towards the Earth's surface. I heard this idea years ago via Vesselin Petkov. I consider Ron very good scientifically.

A missed opportunity for Ron is explaining the bending of light by gravity. Ron states very clearly that a gravitational field somewhat reduces the speed of light, and if you combine this fact with the bending of light as it travels through a gravitational field, then you can picture the aether density as a variable electromagnetic polarizability, just like air or water.

The aether apparently has an electromagnetic property, and so the aether can also be influenced by means of electrodynamic effects. And that is mankind's greatest secret.


Ron Hatch

http://www.worldsci.org/php/index.php?tab0=Scientists&tab1=Scientists&tab2=Display&id=257

''Biography

Ronald Ray Hatch, born in Freedom, Oklahoma, now of Wilmington, California, received his Bachelor of Science degree in physics and math in 1962 from Seattle Pacific University. He worked at Johns Hopkins Applied Physics Lab, Boeing and Magnavox as Principal Scientist, before becoming a Global Positioning System (GPS) consultant. In 1994 he joined Jim Litton, K. T. Woo, and Jalal Alisobhani in starting what is now NavCom Technology, Inc. He has served a number of roles within the Institute of Navigation (ION), including Chair of the Satellite Division, President and Fellow. Hatch received the Johannes Kepler Award from the Satellite Division and the Colonel Thomas Thurlow Award from the ION. He has been awarded twelve patents either as inventor or co-inventor, most of which relate to GPS, about which he is one of the world's premier specialists. He is well known for his work in navigation and surveying via satellite.''

In a pair of articles, Hatch shows how GPS data provides evidence against, not for, both special and general relativity: "Relativity and GPS," parts I and II, Galilean Electrodynamics, V6, N3 (1995), pp. 51-57; and V6, N4 (1995), pp. 73-78. In his 1992 book, Escape From Einstein, Hatch presents data contradicting the special theory of relativity, and promotes a Lorentzian alternative described as an ether gauge theory.
Escape from Einstein
Einstein's fame can, to some extent, be ascribed to the fact that he originated a theory which, though contrary to common sense, was in remarkable agreement with the experimental data. Ron Hatch claims there is increasingly precise data which contradicts the theory. But he does not stop there. He offers an alternative - an ether gauge theory, which offers an unparalleled, common-sense explanation of the experimental data. The new theory is distinguished by:
* a return to time simultaneity, even though clocks (mechanical and biological) can run at different rates
* the replacement of the Lorentz transformations with gauge transformations (scaled Galilean transformations)
* a unification of the electromagnetic and gravitational forces
* a clear explanation of the source of inertia
* a clear and consistent explanation of the physics underlying the equivalence principle
In addition to the above, a comprehensive review of the experimental record shows that the new ether gauge theory agrees with experiment better than the special theory. This releases everyone from the necessity of accepting a nonsensical theory which denies the common, ordinary sense of elapsed time. Rather than curved space, the ether gauge theory postulates an elastic ether. This results in relatively minor modifications to the general theory mathematics - but with significant interpretational differences.

http://www.gps.gov/governance/advisory/members/hatch/

Ron Hatch is an expert in the use of GPS for precision farming, as well as other applications. Currently a consultant to John Deere, he was formerly the Director of Navigation Systems Engineering and Principal and co-founder of NavCom Technology, Inc., a John Deere company. That company provides a commercially operated differential GPS augmentation service to the agriculture industry and other high accuracy users.
Throughout his 30-year career in satellite navigation systems with companies such as Boeing and Magnavox, Hatch has been noted for his innovative algorithm design for Satellite Navigation Systems. He has consulted for a number of companies and government agencies developing dual-frequency carrier-phase algorithms for landing aircraft, multipath mitigation techniques, carrier phase measurements for real time differential navigation at the centimeter level, algorithms and specifications for Local Area Augmentation System, high-performance GPS and communication receivers, and Kinematic DGPS. In addition to the Hatch-Filter Technique, Hatch has obtained numerous patents and written many technical papers involving innovative techniques for navigation and surveying using the TRANSIT and GPS navigation satellites, authored Escape From Einstein in which he challenges competing relativity and other theories, and contributed significantly to the advancement of satellite navigation.
In 1994, Hatch received the Johannes Kepler Award from the Institute of Navigation for sustained and significant contributions to satellite navigation.
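As an aside, the "Hatch Filter" mentioned above is the standard carrier-smoothing filter for GPS code pseudoranges; a minimal sketch of the usual recursion (my own formulation, not Hatch's code):

    def hatch_filter(pseudoranges, carrier_phases, window=100):
        """Carrier-smoothed code pseudoranges (classic Hatch filter form).

        pseudoranges   -- noisy code measurements, in metres
        carrier_phases -- carrier-phase ranges, in metres (ambiguous by a
                          constant, but their differences are very precise)
        window         -- smoothing window length N
        """
        smoothed = [pseudoranges[0]]
        for k in range(1, len(pseudoranges)):
            n = min(k + 1, window)
            # Propagate the previous estimate with the carrier delta,
            # then blend in 1/n of the new (noisy) code measurement.
            propagated = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
            smoothed.append(propagated + (pseudoranges[k] - propagated) / n)
        return smoothed

The idea is that the noisy code measurement supplies the absolute range, while the precise carrier-phase deltas supply its variation over time.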

http://ivanik3.narod.ru/GPS/Hatch/relGPS.pdf

copy: http://www.tuks.nl/pdf/Reference_Material/Ronald_Hatch/Hatch-Relativity_and_GPS-II_1995.pdf

''"Relativistic" effects within the Global Positioning System (GPS) are addressed.
Hayden has already provided an introduction to GPS, so the characteristics of the system are not reviewed.
''There are three fundamental effects, generally described as relativistic phenomena, which affect GPS. These are: (1) the effect of source velocity (GPS satellite) and receiver velocity upon the satellite and receiver clocks; (2) the effect of the gravitational potential upon satellite and receiver clocks; and (3) the effect of receiver motion upon the signal reception time (Sagnac effect). There are a number of papers which have been written to explain these valid effects in the context of Einstein's relativity theories. However, quite often the explanations of these effects are patently incorrect. As an example of incorrect explanation, Ashby [2] in a GPS World article, "Relativity and GPS," gives an improper explanation for each of the three phenomena listed above.''

The three effects are discussed separately and contrasted with Ashby's explanations. But the Sagnac effect is shown to be in conflict with the special theory. A proposed resolution of the conflict is offered. The Sagnac effect is also in conflict with the general theory, if the common interpretation of the general theory is accepted. The launch of GPS Block II satellites capable of intersatellite communication and tracking will provide a new means for a giant Sagnac test of this general theory interpretation. Other general theory problems are reviewed and a proposed alternative to the general theory is also offered.
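For scale, the textbook magnitudes of effects (1) and (2) are easy to compute; the sketch below (my own, standard SR/GR formulas, ignoring Earth's rotation) reproduces the well-known net offset of roughly +38 microseconds per day for a GPS satellite clock:

    import math

    c  = 2.99792458e8        # speed of light, m/s
    GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
    R_earth = 6.371e6        # mean Earth radius, m
    r_sat   = 2.656e7        # GPS orbital radius (semi-major axis), m

    v_sat = math.sqrt(GM / r_sat)                  # circular orbital speed
    velocity_term = -0.5 * v_sat**2 / c**2         # clock slowed by motion
    gravity_term  = GM * (1/R_earth - 1/r_sat) / c**2  # clock sped up by altitude

    rate = velocity_term + gravity_term            # fractional rate offset
    print(f"fractional rate offset: {rate:.3e}")
    print(f"accumulated per day: {rate * 86400 * 1e6:.1f} microseconds")  # ~ +38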

http://en.wikipedia.org/wiki/Fictitious_force

A fictitious force, also called a pseudo force,[1] d'Alembert force[2][3] or inertial force,[4][5] is an apparent force that acts on all masses whose motion is described using a non-inertial frame of reference, such as a rotating reference frame. The force F does not arise from any physical interaction between two objects, but rather from the acceleration a of the non-inertial reference frame itself.

[...]

Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m.
A fictitious force on an object arises when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. As a frame can accelerate in any arbitrary way, so can fictitious forces be as arbitrary (but only in direct response to the acceleration of the frame). However, four fictitious forces are defined for frames accelerated in commonly occurring ways: one caused by any relative acceleration of the origin in a straight line (rectilinear acceleration);[8] two involving rotation: centrifugal force and Coriolis force; and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur. Gravitational force would also be a fictitious force based upon a field model in which particles distort spacetime due to their mass.
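The commonly occurring cases listed here all follow from one kinematic identity; a compact sketch (standard classical mechanics, my own code) that evaluates them for a mass described in a rotating frame:

    import numpy as np

    def fictitious_forces(m, omega, domega_dt, r, v, a_origin):
        """Sum of the four named fictitious forces on a mass m whose motion
        is described in a non-inertial (accelerating, rotating) frame.
        omega, domega_dt -- frame angular velocity and its rate of change
        r, v             -- position and velocity measured in the rotating frame
        a_origin         -- rectilinear acceleration of the frame's origin
        """
        rectilinear = -m * a_origin
        centrifugal = -m * np.cross(omega, np.cross(omega, r))
        coriolis    = -2.0 * m * np.cross(omega, v)
        euler       = -m * np.cross(domega_dt, r)
        return rectilinear + centrifugal + coriolis + euler

    # 1 kg mass at rest 1 m from the axis of a frame spinning at 2 rad/s:
    w = np.array([0.0, 0.0, 2.0])
    print(fictitious_forces(1.0, w, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                            np.zeros(3), np.zeros(3)))   # centrifugal: [4. 0. 0.]

Note that every term is proportional to m, which is exactly the property the quoted passage points to.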

[...]

Fictitious forces and work
Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider a person in a rotating chair holding a weight in his outstretched arm. If he pulls his arm inward, from the perspective of his rotating reference frame he has done work against centrifugal force. If he now lets go of the weight, from his perspective it spontaneously flies outward, because centrifugal force has done work on the object, converting its potential energy into kinetic. From an inertial viewpoint, of course, the object flies away from him because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than an inertial one.
Gravity as a fictitious force

Main article: General relativity

The notion of "fictitious force" comes up in general relativity.[15][16] All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity.[17] This led Albert Einstein to wonder whether gravity was a fictitious force as well. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to an inertial reference frame (the equivalence principle). Following up on this insight, Einstein was able to formulate a theory with gravity as a fictitious force; attributing the apparent acceleration of gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity.

http://arxiv.org/ftp/physics/papers/0204/0204044.pdf

Abstract: There exists some confusion, as evidenced in the literature, regarding the nature of the gravitational field in Einstein’s General Theory of Relativity. It is argued here that this confusion is a result of a change in interpretation of the gravitational field. Einstein identified the existence of gravity with the inertial motion of accelerating bodies (i.e. bodies in free-fall) whereas contemporary physicists identify the existence of gravity with space-time curvature (i.e. tidal forces). The interpretation of gravity as a curvature in space-time is an interpretation Einstein did not agree with.

For more than a century, millions of researchers have worked to implement the quantum idea in different fields of science. This work proved to be so extensive and fruitful that few have ever had the time to consider whether this quantum idea is actually consistent with experimental reality.

Around 2005, I wanted to publish a scientific paper based on the idea that the variation of ionization energy contradicts the quantum hypothesis. As expected, no scientific journal was interested in publishing such a paper, even though the data in the paper could be verified by a layman without any scientific background.

Is there anything simpler than a linear dependency between two quantities in a graph for drawing a clear conclusion?

Someone might ask himself rhetorically… how could it be possible not to publish such a paper?

The answer is very simple: how could a referee deny all of his own scientific activity and bluntly say that not only he, but millions of people, have been working on a new theory of epicycles in science?

Therefore the idea was published in a book on atomic structure in 2007:

http://elkadot.com/index.php/en/books/atomic/ionization-energy-variation

Later the concept was revised and improved, published in another book, about chemistry, in 2009, and advertised in discussion groups in 2009 with the following formulation:

The neglected ionization energy variation for isoelectronic series can reveal more useful information about electron structure; the problem is that these data are in contradiction with current quantum theory. The quantum predictions for work function values are in contradiction with experiments: for metals, the ionization energy and the work function must be equal, but in reality they are not.

For other classes of compounds quantum mechanics again fails in its predictions. A striking example is the case of metallic oxides having work function values smaller than those of metals. Within the frame of current physics it is outrageous that a covalent or ionic bond should liberate electrons more easily than a metallic bond.

http://elkadot.com/index.php/en/books/chemistry/ionization-energy-and-work-function

People who haven’t learned from past experience will always tend to repeat the same errors. I will paraphrase a famous economist who said that in a free market economy there is an invisible hand pushing things forward; in science the opposite is true: the invisible hand of an entire system thinks that sweeping things under the carpet will maintain the current status quo.

Will it be so, or will it not?!

Best regards,

Sorin

--

Thanks very much, this is very interesting.

I wrote down my thoughts on QM some time ago:

http://www.tuks.nl/wiki/index.php/Main/QuestioningQuantumMechanics

QM is fundamentally flawed. It starts off with the famous dual slit experiment, whereby they eventually conclude that a *single* electron changing orbit at a *random* moment emits a photon in such a way that it is automagically in phase with the other photons emitted by the light source, since otherwise we would not see an interference pattern:

http://en.wikipedia.org/wiki/Electromagnetic_radiation#Particle_model_and_quantum_theory

"As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies."

http://en.wikipedia.org/wiki/Emission_spectrum "The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference."

The ridiculousness of the idea of radiation being caused by single transitions of an electron at random moments pokes one in the eye when considering the 21 cm hydrogen line:

http://en.wikipedia.org/wiki/Hydrogen_line

"The hydrogen line, 21 centimeter line or HI line refers to the electromagnetic radiation spectral line that is created by a change in the energy state of neutral hydrogen atoms. This electromagnetic radiation is at the precise frequency of 1420.40575177 MHz, which is equivalent to the vacuum wavelength of 21.10611405413 cm in free space. This wavelength or frequency falls within the microwave radio region of the electromagnetic spectrum, and it is observed frequently in radio astronomy, since those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light.

The microwaves of the hydrogen line come from the atomic transition between the two hyperfine levels of the hydrogen 1s ground state with an energy difference of 5.87433 µeV.[1] The frequency of the quanta that are emitted by this transition between two different energy levels is given by Planck's equation."

http://en.wikipedia.org/wiki/Atomic_radius

"Under most definitions the radii of isolated neutral atoms range between 30 and 300 pm (trillionths of a meter), or between 0.3 and 3 angstroms. Therefore, the radius of an atom is more than 10,000 times the radius of its nucleus (1–10 fm),[2] and less than 1/1000 of the wavelength of visible light (400–700 nm)."

This means that the radius of the largest atoms is less than 1/70-millionth of the wavelength of the hydrogen line!
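These numbers are straightforward to verify; a quick consistency check (standard constants, my own sketch) of the quoted frequency, wavelength, energy and the size ratio used here:

    h = 6.62607015e-34    # Planck constant, J*s
    c = 2.99792458e8      # speed of light, m/s
    eV = 1.602176634e-19  # J per electron-volt

    f = 1420.40575177e6   # hydrogen line frequency, Hz
    wavelength = c / f
    print(f"wavelength: {wavelength * 100:.6f} cm")        # ~21.106 cm
    print(f"photon energy: {h * f / eV * 1e6:.4f} ueV")    # ~5.8743 ueV

    r_atom = 300e-12      # radius of the largest atoms, ~300 pm
    print(f"wavelength / atomic radius: {wavelength / r_atom:.1e}")  # ~7e8

The ratio comes out near 7 x 10^8, so the "more than 70 million times" figure used below is a conservative bound.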

While it is hard to believe that it is widely accepted that single atoms can emit photons with wavelengths up to 1000 times larger than the size of an atom, it is beyond belief and totally ridiculous to assume a single transition in a body can emit a "photon" with a wavelength which is more than 70 million times as large as the body itself.

The only reasonable explanation for such a phenomenon to occur is that the transitions of (the electrons around) the atoms occur in phase and in resonance (aka "deterministic") with one another. In other words: no randomness at all!

"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."

http://en.wikipedia.org/wiki/Work_function

"In solid-state physics, the work function (sometimes spelled workfunction) is the minimum thermodynamic work (i.e. energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface. Here "immediately" means that the final electron position is far from the surface on the atomic scale, but still too close to the solid to be influenced by ambient electric fields in the vacuum. The work function is not a characteristic of a bulk material, but rather a property of the surface of the material (depending on crystal face and contamination)."

From your article:

"As is observed, as a general rule, the ionization energies for all metals are greater then work function."

This is not illogical. Since metals are conducting materials, the loss of negative charge caused by the movement of an electron out of the metal can be compensated for by the movement of free electrons into the partial "hole" left by the leaving electron. This causes the departing electron to be less attracted to the bulk metal compared to what would happen if the loss of charge in the lattice could not be compensated for.
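To put rough numbers on the general rule quoted above, a tiny sketch with approximate literature values (my addition; the values are indicative only):

    # Approximate literature values in eV (indicative only).
    data = {          # metal: (first ionization energy, work function)
        "Cs": (3.89, 2.14),
        "W":  (7.98, 4.55),
    }
    for metal, (ie, wf) in data.items():
        print(f"{metal}: ionization energy {ie} eV > work function {wf} eV")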

"A striking example is the case of metallic oxides having work functions values smaller then metals. It is outrageous how a covalent or ionic bound liberate electrons easier then a metallic bound in frame of actual physics."

That's very interesting. I'm not an expert on metallic oxides, but I am a bit familiar with Aluminum Oxide and the working of electrolytic capacitors, whereby you have a dielectric layer of Aluminum Oxide between your positive plate and the electrolyte. On the negative plate, usually also Aluminum, there is also a very thin layer of Aluminum Oxide, an insulator. Yet, the negative plate conducts electricity very well into the electrolyte filling the space between the plates.

So, for this particular metallic oxide, we are dealing with a dielectric:

http://en.wikipedia.org/wiki/Dielectric

"A dielectric material (dielectric for short) is an electrical insulator that can be polarized by an applied electric field. When a dielectric is placed in an electric field, electric charges do not flow through the material as they do in a conductor, but only slightly shift from their average equilibrium positions causing dielectric polarization. Because of dielectric polarization, positive charges are displaced toward the field and negative charges shift in the opposite direction. This creates an internal electric field that reduces the overall field within the dielectric itself."

[...]

"Dipolar polarization

Dipolar polarization is a polarization that is either inherent to polar molecules (orientation polarization), or can be induced in any molecule in which the asymmetric distortion of the nuclei is possible (distortion polarization). Orientation polarization results from a permanent dipole, e.g., that arising from the 104.45° angle between the asymmetric bonds between oxygen and hydrogen atoms in the water molecule, which retains polarization in the absence of an external electric field. The assembly of these dipoles forms a macroscopic polarization."

So, for Aluminum Oxide, we are dealing with a material which is both an insulator and polarizable, and thus also (to a certain degree) capable of making up for the loss of charge caused by a leaving electron, albeit more slowly.

Another point is the conduction of heat by the material. Metals are excellent heat conductors IIRC, so if you were to heat up a part of a metal, the heat would quickly conduct away into the bulk metal, and thus one would need more thermal energy to liberate an electron from a metal in comparison with a less heat-conducting material, which might very well be the case for metallic oxides.

So, I would guess that those materials which are on the one hand electrically insulating and polarizable (dielectrics) and on the other hand bad heat conductors would have the lowest work function values, because they a) can compensate for "charge loss" and b) can be "locally heated".


“Most of what Lammertink writes seems like nonsense to me (longitudinal EM waves have never been demonstrated), but I do not know enough about EM theory.”

http://en.wikipedia.org/wiki/Laser

“Some applications of lasers depend on a beam whose output power is constant over time. Such a laser is known as continuous wave (CW). Many types of lasers can be made to operate in continuous wave mode to satisfy such an application. Many of these lasers actually lase in several longitudinal modes at the same time, and beats between the slightly different optical frequencies of those oscillations will in fact produce amplitude variations on time scales shorter than the round-trip time (the reciprocal of the frequency spacing between modes), typically a few nanoseconds or less.”

http://en.wikipedia.org/wiki/Longitudinal_mode

“A longitudinal mode of a resonant cavity is a particular standing wave pattern formed by waves confined in the cavity. The longitudinal modes correspond to the wavelengths of the wave which are reinforced by constructive interference after many reflections from the cavity’s reflecting surfaces. All other wavelengths are suppressed by destructive interference.

A longitudinal mode pattern has its nodes located axially along the length of the cavity. Transverse modes, with nodes located perpendicular to the axis of the cavity, may also exist.

[...]

A common example of longitudinal modes are the light wavelengths produced by a laser. In the simplest case, the laser’s optical cavity is formed by two opposed plane (flat) mirrors surrounding the gain medium (a plane-parallel or Fabry–Pérot cavity). The allowed modes of the cavity are those where the mirror separation distance L is equal to an exact multiple of half the wavelength, λ.”

I could swear I am reading here that lasers work with longitudinal waves, and that transverse modes might perhaps also exist…

Hmmm.
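The quoted mode condition (mirror spacing L equal to an integer number of half wavelengths) fixes the allowed frequencies, with a spacing of c/2L; a small sketch (standard formula, my own code):

    c = 2.99792458e8   # m/s

    def longitudinal_modes(cavity_length_m, center_wavelength_m, count=3):
        """Cavity frequencies near a given wavelength with L = m * lambda / 2."""
        m0 = round(2 * cavity_length_m / center_wavelength_m)
        return [m * c / (2 * cavity_length_m) for m in range(m0, m0 + count)]

    # A 30 cm HeNe-style cavity around 633 nm: mode spacing c/2L = 500 MHz.
    modes = longitudinal_modes(0.30, 633e-9)
    print(f"spacing: {(modes[1] - modes[0]) / 1e6:.0f} MHz")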


And if you then call to mind the design of a laser, you effectively also have a waveguide, but one for optical frequencies.

http://en.wikipedia.org/wiki/Laser

“The optical resonator is sometimes referred to as an “optical cavity”, but this is a misnomer: lasers use open resonators as opposed to the literal cavity that would be employed at microwave frequencies in a maser. The resonator typically consists of two mirrors between which a coherent beam of light travels in both directions, reflecting back on itself so that an average photon will pass through the gain medium repeatedly before it is emitted from the output aperture or lost to diffraction or absorption.”

Now, in an EM waveguide you have a metal wall, where the presence of an alternating E-field induces eddy currents in the waveguide wall, which in turn give rise to a magnetic field. But in a laser you only have two mirrors, and thus no waveguide wall that introduces a magnetic field through induction.

In short: longitudinal EM waves have indeed been demonstrated without the propagation taking place through the movement of charge carriers, albeit that in a waveguide this is a TM mode and thus not a pure "Tesla" LD wave. The absence of a metal wall in a laser, together with the references to a "longitudinal" mode, therefore does indeed indicate that in a laser we are dealing with a pure longitudinal dielectric wave, such a Tesla wave.

And if these indeed propagate at pi/2 times c, and this has not been taken into account in optical research, then you would expect an anomaly to be found somewhere that could confirm all this.


This is a very interesting anomaly indeed:

http://en.wikipedia.org/wiki/Dispersion_%28optics%29

-:- The group velocity vg is often thought of as the velocity at which energy or information is conveyed along the wave. In most cases this is true, and the group velocity can be thought of as the signal velocity of the waveform. In some unusual circumstances, called cases of anomalous dispersion, the rate of change of the index of refraction with respect to the wavelength changes sign, in which case it is possible for the group velocity to exceed the speed of light (vg > c). Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.[3] Recently, it has become possible to create gases in which the group velocity is not only larger than the speed of light, but even negative. In these cases, a pulse can appear to exit a medium before it enters.[4] Even in these cases, however, a signal travels at, or less than, the speed of light, as demonstrated by Stenner, et al.[5] -:-
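The relation at work in this passage is the standard group-velocity formula v_g = c / (n - λ·dn/dλ): when dn/dλ is large near a resonance, the group index can drop below 1 or go negative. A numerical sketch (my own, with a made-up, purely illustrative index profile):

    import numpy as np

    c = 2.99792458e8

    # Made-up refractive index with a narrow resonance at 500 nm, giving a
    # steep dispersion region (illustrative only, not a real material).
    lam = np.linspace(480e-9, 520e-9, 2001)
    d = lam - 500e-9
    n = 1.5 + 2e-11 * d / (d**2 + (2e-9)**2)

    n_group = n - lam * np.gradient(n, lam)   # group index n - lambda*dn/dlambda
    i0 = np.argmin(np.abs(d))
    print(f"group index at resonance: {n_group[i0]:.2f}")   # ~ -1: negative v_g
    frac = ((n_group > 0) & (n_group < 1)).mean()
    print(f"fraction of band with v_g > c: {frac:.2f}")

A group index between 0 and 1 means v_g = c/n_g exceeds c, and a negative group index means the envelope peak appears to exit before it enters, which is exactly the regime the quoted passage describes.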

This is quite fascinating. A pulse can propagate faster than light (or even have a negative velocity), but a signal cannot. So in what way does a pulse actually differ from a signal?

http://scienceblog.com/light.html

“They were also able to create extreme conditions in which the light signal travelled faster than 300 million meters a second. And even though this seems to violate all sorts of cherished physical assumptions, Einstein needn’t move over – relativity isn’t called into question, because only a portion of the signal is affected.”

Right. Nothing going on. Nothing to see here. Only a portion of the signal travels faster than c. But which portion, then?

“To succeed commercially, a device that slows down light must be able to work across a range of wavelengths, be capable of working at high bit-rates and be reasonably compact and inexpensive.”

Right. It concerns a portion of the signal, and there is a limited bandwidth within which faster-than-light propagation is possible.

Wikipedia once more:

“Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.”

That is interesting. So there is a specific, medium-dependent resonance frequency at which faster-than-light propagation occurs, exactly as I have argued also happens in the RF case with antennas.

And what I have also argued there is that it is very difficult to measure longitudinal waves. And that is exactly what people run into here:

Scienceblog: “Light signals race down the information superhighway at about 186,000 miles per second. But information cannot be processed at this speed, because with current technology light signals cannot be stored, routed or processed without first being transformed into electrical signals, which work much more slowly.”

This is a paper by Thévenaz that may give more details:

http://infoscience.epfl.ch/record/161515/files/04598375.pdf

To be continued…

http://www.tuks.nl/ Arend Lammertink

Particularly interesting. The mechanism that seems to be central here is called Brillouin scattering:

http://en.wikipedia.org/wiki/Brillouin_scattering

-:- As described in classical physics, when the medium is compressed its index of refraction changes, and a fraction of the traveling light wave, interacting with the periodic refraction index variations, is deflected as in a three-dimensional diffraction grating. Since the sound wave, too, is travelling, light is also subjected to a Doppler shift, so its frequency changes. -:-

Note that this concerns a mechanical compression of the medium, and as far as is clear to me now, this is done with the aid of sound waves.

From the article in my previous post: -:- Among all parametric processes observed in silica, stimulated Brillouin scattering (SBS) turns out to be the most efficient. In its most simple configuration the coupling is realized between two optical waves propagating exclusively in opposite directions in a single mode fibre, through the stimulation by electrostriction of a longitudinal acoustic wave that plays the role of the idler wave in the interaction [4]. This stimulation is efficient only if the two optical waves show a frequency difference giving a beating interference resonant with an acoustic wave (that is actually never directly observed). This acoustic wave in turn induces a dynamic Bragg grating in the fibre core that diffracts the light from the higher frequency wave back into the wave showing the lower frequency. -:-

An interesting detail is that this concerns silica. This is of course simply glass, but it is a silicon oxide, and silicon is a semiconductor. And what is used here is the crystalline form:

http://nl.wikipedia.org/wiki/Siliciumdioxide

-:- Silicon (di)oxide or silica is the best-known oxide of silicon.

In nature it occurs in various forms, both crystalline and non-crystalline (amorphous). Quartz is an example of crystalline silica; other examples are cristobalite and tridymite. Opal is an example of amorphous silica, as is quartz fused together by extreme heat (fused quartz). -:-

Stenner's article "The speed of information in a 'fast-light' optical medium", about group velocity etc., can be found here:

http://www.phy.duke.edu/research/photon/qelectron/pubs/StennerNatureFastLight.pdf

“One consequence of the special theory of relativity is that no signal can cause an effect outside the source light cone, the space-time surface on which light rays emanate from the source [1]. Violation of this principle of relativistic causality leads to paradoxes, such as that of an effect preceding its cause [2]. Recent experiments on optical pulse propagation in so-called ‘fast-light’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values [3]—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations [5], which would preserve causality. Here we find that the time to detect information propagating through a fast-light medium is slightly longer than the time required to detect the same information travelling through a vacuum, even though v_g in the medium vastly exceeds c. Our observations are therefore consistent with relativistic causality and help to resolve the controversies surrounding superluminal pulse propagation.”

A small example of how things really stand at the established journals is this piece from the abstract of Stenner's article, in Nature no less:

“Recent experiments on optical pulse propagation in so-called ‘fastlight’ media—which are characterized by a wave group velocity v_g exceeding the vacuum speed of light c or taking on negative values—have led to renewed debate about the definition of the information velocity v_i. One view is that v_i = v_g (ref. 4), which would violate causality, while another is that v_i = c in all situations, which would preserve causality.”

Now, the group velocity v_g is the velocity of the envelope of a propagating signal, which is normally where the information resides. Wikipedia has a nice animated picture in which the group velocity is negative with respect to the phase velocity, the carrier wave:

http://en.wikipedia.org/wiki/Group_velocity

The carrier (phase velocity) moves to the left, while the envelope (group velocity) moves to the right. And yet Nature manages to publish an article in which it is claimed that a negative group velocity would be a violation of causality, because the signal is present at the "output" before it is at the "input".

Aren't these simply reasoning errors of Sesame Street level?

I mean: if the envelope moves in the opposite direction to the carrier, then that envelope simply enters your optical fibre from the other side, and thus exits again some time later, namely at the side you regard as the input from the carrier's point of view. The thing simply moves backwards relative to the propagation of the carrier, and so it is first present at the end that you label as the output from the carrier's perspective, and only later at the end that you label as the input from the carrier's perspective…
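The carrier/envelope distinction argued about here can be made concrete with the textbook two-wave superposition, where the envelope moves at Δω/Δk and the carrier at ω_avg/k_avg, and the two velocities can indeed have opposite signs. A minimal sketch (my own):

    # Two cosines: cos(k1*x - w1*t) + cos(k2*x - w2*t)
    #   = 2 * cos(k_avg*x - w_avg*t) * cos(dk*x - dw*t)
    # carrier speed = w_avg / k_avg, envelope (group) speed = dw / dk.
    k1, w1 = 10.0, 10.0
    k2, w2 = 11.0, 9.0    # frequency *falls* with k: anomalous case

    k_avg, w_avg = (k1 + k2) / 2, (w1 + w2) / 2
    dk, dw = (k2 - k1) / 2, (w2 - w1) / 2

    print(f"carrier (phase) velocity:  {w_avg / k_avg:+.3f}")  # +0.905
    print(f"envelope (group) velocity: {dw / dk:+.3f}")        # -1.000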

Summary of the foregoing: I note that it is not in dispute that it is possible to achieve a group velocity greater than c with light in optical fibre.

Thévenaz's article contains an interesting detail:

“The maximal advancement attained 14.4 ns, to be compared to the 10 ns of normal propagation time in the 2 metre fibre. This is a situation of a negative group velocity and literally it means that the main feature of the signal exits the fibre before entering it.”

Again, the same reasoning error, but that aside.

BTW: they refer to this article for more detail: http://infoscience.epfl.ch/record/128303/files/ApplPhysLett_87_081113.pdf

Anyway, the point is that we see here a maximum speed factor of 1.44, fairly close to the previously mentioned pi/2 for the case of longitudinal dielectric (or electro"static") LD waves.

The ultimate question now is what the nature of that envelope wave is. Is this indeed a longitudinal dielectric wave, or is it something else after all?

OK. Now, I have argued that something special is going on with a dipole with a length of pi/2 times 1/2 lambda. Because there are wave reflections at the ends of the dipole, you get a standing wave, in which the total round-trip phase shift of the EM field is 90 degrees.

However, this is not the whole story. The electric field is related to voltages, and the magnetic field is related to the flow of current. At the end of an antenna you have the situation that no current can flow, while the voltage can vary freely.

In other words: for the E-field the end of an antenna is open, but for the magnetic field it is closed.

And that means that the two components of the wave acquire different phase shifts at the ends of the dipole, just as with a loose/fixed end of a rope that you set swinging:

http://electron9.phys.utk.edu/phys136d/modules/m9/film.htm

“A wave pulse, which is totally reflected from a rope with a fixed end is inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is π (180°).

A wave pulse, which is totally reflected from a rope with a loose end is not inverted upon reflection. The phase shift of the reflected wave with respect to the incident wave is zero. When a periodic wave is totally reflected, then the incident wave and the reflected wave travel in the same medium in opposite directions and interfere.”
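The two quoted boundary conditions can be reproduced with the standard method-of-images construction; a sketch (my own) showing the reflected pulse inverted at a fixed end and upright at a loose end:

    import numpy as np

    def reflected_pulse(x0, t, boundary, c=1.0, width=0.5):
        """1-D Gaussian pulse running toward a boundary at x = 0.
        Method of images: a fixed end adds an inverted image pulse,
        a loose (free) end adds an upright one."""
        x = np.linspace(0.0, 10.0, 501)       # physical region is x > 0
        incident = np.exp(-((x - (x0 - c * t)) / width) ** 2)
        sign = -1.0 if boundary == "fixed" else +1.0
        image = sign * np.exp(-((x - (c * t - x0)) / width) ** 2)
        return x, incident + image

    # After the bounce (t > x0/c) only the reflected pulse remains in x > 0:
    for boundary in ("fixed", "loose"):
        x, y = reflected_pulse(x0=5.0, t=8.0, boundary=boundary)
        peak = y[np.argmax(np.abs(y))]
        print(boundary, "end -> reflected peak amplitude:", f"{peak:+.2f}")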

So here we have the peculiar situation that one component of the field is reflected inverted at the end of the dipole (the B-field) while the other is not (the E-field).

If we may believe the measurements of Dollard et al., you would expect that, when you work this situation out analytically, you will see a significant difference in the resulting E-field strength and B-field strength compared with the situation in which you take a dipole with a length of 1/2 lambda.

And in addition, you would expect to find a group velocity of pi/2 times c…