
Sunday, November 28, 2010

problems and problems solved


Dear Reader,

We're having problems with the instruments to record waveforms.  It may take a while to get this fixed or replaced.  I'll get back here asap.


And again readers, all,

I've decided to keep this information here.  Problems are fixed and all is very much on track and results more amazing than ever.  Just a quick view of our wonderful little rig - here it is.  And at the moment it's cooking from a 12 volt supply with zero discharge from that battery.  A full report to follow.

Kindest regards and may you all be as happy as we are

Sorry.  Yet another qualification.  I will NOT have the report ready for tomorrow or even today.  More likely by the end of next week or sometime before Christmas.  It needs to be perfectly presented.  And perfection takes time.  I'm only certain that it will 'rock'.   So for those that need some good news - take heart. 

Thursday, November 25, 2010

another upbeat update and more to come


Dear Reader - anyone who's following this blog - I have some preliminary and very good news and will post the data here tomorrow morning.  Right now I need sleep.

It seems that we've got an extraordinary coefficient of performance - as there's zero discharge measured from the battery and a 'too hot to handle' condition on our resistor.  On the morrow we'll be posting up all the graphs on the control wattages and the data from these last tests.  We still need to rerun these and similar tests - but it all does appear to be very repeatable.  And we still need some overview from our experts, but I suspect that will be some time in the offing as we're to submit a more comprehensive set of data than I'll be posting in the morning.
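For anyone wanting to check this kind of claim for themselves, here is a minimal sketch of how a coefficient of performance is typically computed: integrate the sampled battery voltage and current to get the energy drawn, estimate the heat out calorimetrically, and divide.  Every number below is made up purely for illustration - none of it is our measured data.

```python
# Hypothetical illustration of a coefficient-of-performance (COP) calculation.
# All numbers are invented for the example; they are not measurements.

def energy_from_samples(volts, amps, dt):
    """Trapezoidal integration of instantaneous power V*I over time step dt (seconds)."""
    powers = [v * i for v, i in zip(volts, amps)]
    return sum((p0 + p1) / 2 * dt for p0, p1 in zip(powers, powers[1:]))

# Five one-second samples of battery voltage and net current drawn (illustrative).
volts = [12.1, 12.1, 12.0, 12.1, 12.1]
amps  = [0.50, 0.48, 0.51, 0.49, 0.50]

energy_in = energy_from_samples(volts, amps, dt=1.0)   # joules drawn from the battery

# Heat dissipated in the load, e.g. from a calorimetric measurement:
# Q = m * c * dT  (mass in kg, specific heat in J/(kg*K), temperature rise in K)
heat_out = 0.25 * 4186 * 0.03   # 250 g of water warmed by 0.03 K

cop = heat_out / energy_in
print(f"energy in: {energy_in:.1f} J, heat out: {heat_out:.1f} J, COP = {cop:.2f}")
```

A COP above 1 in a real test is only meaningful if the current samples capture the full discharge and recharge cycles and the calorimetry accounts for all losses.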

It's blown me away.  There were early indications of this - some weeks ago - but the numbers are being firmed up and it seems that we really do not need to stray too far from standard elements and standard technology.  That was always a concern.  It seems that we just need to switch that current and switch it fast.  What a pleasure.  More to the point - there appears to be no NOISE.  I'm actually beginning to think that this may yet be easily applied to standard technologies using standard components.  The students are brilliant.

Watch this space.  LOL

Kindest regards,

BTW I'll be going 'public' tomorrow - when and if I find the right threads and forums.  Not sure where to go but presumably I'll be able to post either the data or a link.  And then there's - and - anyway, I'll ask around.  Meanwhile let your friends know.  This is a replication with a vengeance - and everyone dedicated to open-sourcing this knowledge will be able to exploit it as and where they can.  SO EASY TO REPLICATE.  Thank you God.

a universal structure

19 Script 3 (draft)

We know the forces as the strong and weak nuclear forces, the electromagnetic force and gravity.  What this study will show is that one only needs the magnetic field and its three potential dimensions to explain all those forces.  But to do this one first needs to locate that area of space where those dimensions would be manifest.  Where - in space - do we find these fields of magnetic dipoles?

Picture, if you will, in your mind's eye, a great big torus.  That torus comprises strings upon strings - necklaces of these very small and very fast and very cold magnetic dipoles.  They are so long that they wrap themselves around the entire universe.  They are so numerous that they fill the smallest regions and corners of space.  They are so fast that they are entirely invisible.  Yet this is the background, the scaffolding, the warp and weft of an almost solid cloth that holds the tapestry of matter.  It is invisible to light.  But it is the thing that moves and carries light inside its perfectly geometrical shape.  Put your finger on any part of that vacuum of space and it will be entirely undetectable except that it will be cold.  Very, very cold.  And in essence it is simply a jolly big toroidal magnetic field defining the shape and boundaries of the known universe.

Now picture God reaching into that structure with a pair of scissors, and He then cuts one of those strings.  The necklace unravels, tumbles out of that orderly structure and it becomes a pile of beads that fall together - attracted to each other as would any magnets be attracted.  But tumbling.  They fall out of that orderly formation, that string.  And they then tumble into a localised area of space.  It puts one in mind of nebulae - those vast cloud structures that are seen to give birth to stars and possibly whole star systems and galaxies.

We need to go back to Bell's requirement for symmetry.  If, in the breaking of one indescribably long string, all those little beads fell out of their cold fast small state to become hot and slow and big - then an equal number of those little beads would have become equally colder and smaller and faster than the beads in the magnetic field itself.  So.  Here's the thing.  We have one level of size that we can relate to - and that's determined by light and whether or not light can interact with that 'thing'.  When it can't, we have a second level of size that we can only speculate about - as it remains hidden and invisible.  Then we have a third level of size that is just so small that it cannot be reached by the hidden fields of the magnetic dipoles, and it can hardly occupy any space at all as all of that space has been defined by magnetic fields.  It would share the same dimensions of time as the hot, big dipoles that we can see.  But it would be entirely out of reach of either our own reality or the reality of the magnetic fields.  Having no volume and occupying no space, it would, nevertheless, share a time dimension with our own time dimensions - a kind of momentary co-incidence with our own realities.

So.  Potentially we have these dimensions.  We have our own reality, which has the three dimensions of length, breadth and depth and its own relative time reference.  Then we have the magnetic fields, which share our spatial dimensions but have an entirely different time frame which, being faster than light, would be in advance of our own time frame.  Then we have this third reality that, curiously, has only one dimension of space - in that it is extant at all - and another, more certain, dimension of time, that time frame co-incident with our own.  Four dimensions to our realities, four dimensions to the magnetic field, and two dimensions to this third reality - and one has a total of 10 dimensions that would then describe that entirety.  And all share a co-incidence with space but none share an identical time frame.

In effect this would then suggest that the universe may be a 10-dimensional binary system - entirely described by our string theorists - but here pointing to varying properties of scale in speed and size and temperature.  The rule being that potentially the dipole itself can reach any extreme of three distinct properties, being heat, velocity and volume.  The correspondence is this.  It is as hot and big - or as cold and small - as is determined by its velocity.  In the same way it is as big and fast or small and slow as is determined by its temperature.  And therefore it is also as fast and hot or slow and cold as determined by its volume.  Know any one of its properties and it describes the others.  Just bear this in mind.  In a field condition it would not be manifest in our own dimensions.  But outside a field condition it would indeed be evident.  It is proposed that it's evident when things glow with warmth or simply catch alight.

Wednesday, November 24, 2010

2 the structure of the field

18 Script 2

The questions then are these.  What shape would the field take and what precisely would be the type and kind of particles that make up the field?  Here the solution was found in a simple rule of correspondence.  In effect, everything is the sum of its parts.  Take any three dimensional object, be it a brick or a stick of wood - then what we see and measure of the object itself is simply a collection or congregation of atoms and molecules that are somehow bound together to create the visible, identifiable object itself.  Break down the object, grind it down to its very smallest parts, and we'd be left with a puddle of atoms that were previously assembled and bound into that shape.  In the same way the proposal is that we take our 'clues' from what is known of the magnetic field and build from there.

The first point is that the field seems to comprise what Faraday referred to as 'lines of force'.  In effect the proposal was that the magnetic field comprises lines that move from one side of a permanent magnet to the other, north to south.  If the field comprises particles then these lines of force would, in turn, comprise particles.  And if there is a distinct north and south pole to each permanent magnet - then, following that same correspondence, the particles would each have a north and a south pole.  Effectively each would be a magnetic dipole.

As to their shape?  We know that we only need to look to symmetry, and this because of the conclusions to Bell's theorems which, loosely paraphrased, state that 'the statistical predictions of the quantum theories ... cannot be upheld with local hidden variables'.  All he was pointing to is this.  On a deep, a profound and fundamental level there has to be absolute correspondence - absolute symmetry.  Else matter would not be able to manifest in a stable and coherent way.  In effect he proved that if nature was not that economical and exact with all her rules - if she was not that precise on the very, very small scale - then we would not have this manifest assembly of our structured universe and its miracles of matter presented as it is - one thing distinct from another.  If all was variable then all would be chaos.  The most perfectly structured, the most perfectly symmetrical shape is a sphere.  So.  I modestly propose that, just perhaps, the basic particle, many of which make up a field, is also shaped as a sphere.  A perfectly round bead.  A ball.  And one half of that ball would be a north and the other half a south.  That way the two charge potentials would be locked in a single particle, forever married and neutral - but having the precise differences in charge to respond to each other and to all the other particles in the field.

Then the assembly of those particles is relatively straightforward.  They would align as magnets align.  Head to toe.  North to south.  That would form a long string.  And for absolute balance and symmetry, those strings would then close their open ends to form a circle.  I have no idea how long each string needs to be to then form that closed loop or that necklace.  Nor how many necklaces would then make up a field.  But I am reasonably satisfied that to fill all that 'space', that volume of the field itself, it would probably require a variety of lengths, and those lengths would logically correspond to the shape of the field as a whole.  (2 Riaan's picture of the single to multiple lines of force from a magnet)  And when one introduces differing lengths to the strings then one also introduces a partial imbalance.  One string is marginally different to an associated string.  This would inevitably result in 'like charge' aligning with like charge.  And this, in turn, would induce a repulsive moment when the two particles would move apart from each other.  And that movement would induce a 'like movement' in the entire string.  One particle cannot simply move in space if it's fixed inside a line of like particles.  They would all move, one step forward, say.  And this would therefore result in an orbit of the entire necklace.  And in the process of describing that orbit, other particles in that necklace would move towards other like charges in neighbouring strings.  And the same repulsions would induce more and more movements through more and more necklaces throughout the field.  Eventually all those strings would orbit - all in a shared direction or with a shared justification - and this then would account for the extraordinary velocity of the particles in the magnetic field.  Everything would be spinning at pace and in one direction.

But to analyse the basic properties of the string it is evident that there are various potential spatial dimensions to this.  A single string in the form of a necklace would be one dimensional, having only length.  (1 Riaan's picture of the necklace)  Many strings forming a series of concentric circles - something like a saucer - would be a two dimensional field having width and breadth but no depth.

Many saucers piled, one on top the other, would be a three dimensional field.  (Riaan's picture of the torus)

So here's the thing.  Each particle is neutral comprising as is here proposed a magnetic dipole.  Each particle has a field justification which then proposes that the particle itself has one of its two potential charges.  But each orbit cancels out the potential charge in the field making the entire field absolutely neutral.  One can then say that a neutral particle has a justification in a neutral field determined by the orbit of that entire field.

an aside


Dear Reader,

It's been an absolute pleasure to write here on this blog of mine.  Not only am I not defending my corner and getting bogged down and delayed by endless irrelevancies, but I'm actually enjoying the sheer creativity of this exercise, its varied aspects of theorising and testing - and then the pleasure in composing this blog itself.  It drifts between this and that - and no doubt will test everyone's tolerance when it is finally made public.  But it has the advantage of catching what's appropriate, albeit in a rather eccentric sequence.

Perhaps what I need to explain is the delay in that heat 'profiling' which, apparently, is better described as the heating characteristics of the element that we're studying at the moment.  Those poor students' time is heavily constrained by the bureaucratic demands of getting permission and license to study in this country of ours - as they are not South African residents.  This is halting progress to a certain extent.  But the data for those 'characteristics' have now been captured and we're simply getting them put into a graphic form for easy reference.  We've also started on the switching circuit - but these are early steps.  At this stage we're simply trying to determine the optimised frequency for that particular element.  When we have a better handle on this then we'll upload all that data against all those tests and - hopefully - those who are sufficiently interested will be able to double check our numbers.

Meanwhile - I'll spend these long and wakeful nights of mine developing the script for that video.  It will also be a guide into the thesis for those of you who are interested.  And, by the way, I have not forgotten the need to transpose either the TIE paper or the Quantum paper onto this blog.  But I'll need to get someone to do that for me and - so far - have not found the right person.

Kindest regards,

Tuesday, November 23, 2010

1 the boundary constraint

16 - Script
The intention in the following series of videos is to share some aspects of a magnetic field model.  Hopefully then it can be better understood and its concepts more widely applied.  It is proposed that this may assist those many experimentalists who proliferate our free energy forums.  Hopefully it may give some kind of theoretical framework.  And if this is presumptuous then the defense is that all those efforts are seemingly advanced on a haphazard basis and are variously confused by overly complex coils and coil windings that daily grow ever more complex.  It seems that all are searching for some elusive and magic properties assumed to be associated with the Toroidal Power Unit.  Perhaps this may help to return that focus back to some simple fundamental concepts that appear to have been overlooked.
Faraday's 'lines of force' from a magnetic field are evident when the magnetic force from a permanent magnet is exposed to iron filings.  The filings then adjust their positions to describe a linear array that is understood to correspond to that field alignment.  What is clear is that the filings are induced to alter their position which implies that energy has been transferred from the field to the filings.  What is not so clear are the actual properties in both the filings and the magnetic field that enable that energy exchange.  It's here proposed that this exchange is managed in a hidden dimension and that this same hidden force is the agent of all energy interactions.

To get a clearer picture of this one must first propose that all that is visible or measurable in our four dimensions of space and time are only made visible and measurable through the properties of light.  Light is known to move at a velocity of a little under 300 000 kilometres per second. If anything at all exceeded that velocity then light itself would not find it.  The analogy is this.  We can see the balloon being blown by the wind.  We cannot see the wind.

In effect, there may be something hidden, some particle or field of particles, that moves light itself.  In which case, if there is an interaction between light and a field then that interaction may be evident, but the actual agent of that interaction will remain as hidden as the properties of wind itself.  Inasmuch as it's proposed that this field may move the particle, the field itself may move faster than the particle, in the same way that the wind moves faster than the balloon that it blows.  Therefore one can propose that there may be those particles or those fields that exceed light speed.  But if there were such particles or such fields then they would remain invisible - or dark.  So.  It is proposed that there may be particles that exceed light speed.  But if such particles existed then they would be beyond our frame of reference - a theoretical possibility at best.  The object here is to argue the existence of that field and the particle that comprises that field.

Imagine that we've got a machine that can propel stones, and it throws stones inside a vacuum so there are no possible disturbances to that throw other than the pull of gravity.  Then assume that the machine throws those stones with a set and predetermined force.  Then the rule is this.  The smaller the stone, the further the throw.  And correspondingly, the bigger the stone, the nearer the throw.  That's logical.  But what if the stone was too small to be detected by that machine - or too big to be thrown at all?  At either extreme the machine can no longer interact with that stone.  That's proposed as a boundary constraint.  In other words - we need a certain size to enable any kind of interaction at all.  We know that light can interact with atoms.  We rely on that interaction to make our universe visible.  Therefore one can conclude that light itself is within the boundary constraint of the atom.  But.  If there were forces that were moving light itself - then we know nothing of those forces or their particles to determine their own boundary constraints.
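The stone-throwing analogy can be put in rough numbers.  Assuming, purely for illustration, that the machine imparts a fixed impulse to every stone it can grip, the launch speed is v = impulse/mass, so the vacuum range v²·sin(2θ)/g grows as the stone shrinks - until the stone falls outside the machine's interaction limits altogether.  The cutoff masses below are arbitrary stand-ins for that boundary constraint:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
IMPULSE = 2.0     # fixed impulse delivered by the machine, kg*m/s (illustrative)
ANGLE = math.radians(45)

def throw_range(mass):
    """Range of a stone of the given mass (kg) launched in vacuum at 45 degrees.
    Returns None outside the machine's 'boundary constraint' - stones it
    cannot detect (too small) or cannot move appreciably (too big)."""
    if mass < 0.01 or mass > 10.0:   # illustrative interaction limits
        return None
    v = IMPULSE / mass               # fixed impulse: smaller stone, faster launch
    return v * v * math.sin(2 * ANGLE) / G

for m in (0.005, 0.05, 0.5, 5.0, 50.0):
    r = throw_range(m)
    print(f"mass {m:6.3f} kg -> " + (f"range {r:8.2f} m" if r is not None else "no interaction"))
```

The two `None` cases are the point of the analogy: inside the limits, smaller always means further; outside them, there is simply no interaction at all.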

Now.  Our scientists have actually photographed an electron.  It looks something like this.

In other words it's discontinuous.  Apparently the particle itself dips in and out of focus.  It first dies and then gets reborn.  It appears from nothing and then disappears into nothing.  It comes and goes.  And the question is: where in space does it come from and where in space does it then go to?  And could it be that it has two innate potentials?  The one is when it is manifest in our measurable dimensions - able to be photographed at all.  And then a second state when it's not manifest - when it can't even be reached by light.  The one moment it's visible.  The next moment it's dark.

It is in that dark - that non-manifest - condition that the particle is proposed to interact with a background field.  And it's also suggested that this field - this hidden force - moves the particle - all particles - but in a different time frame or a different time dimension to that which we can either see or measure.  Effectively - when it disappears, the particle would be moving at a greater velocity - or it would have reached a smaller size - or both - and that would then put it within the reach or influence of the boundary constraints of that field.  And it seems that this hidden field must also share our dimensions of space, as both the particle's appearance and disappearance are localised in space.

Our time frame is therefore determined by the speed of light. Any velocity greater than 300 000 KPS or 'C' as it's referred to - and it becomes invisible to light - or it becomes 'dark'.  And anything within or slower than C and it would, inevitably, be visible to light.  So.  Light is simply the boundary limit to our measurable dimensions.  This in turn, suggests that there may be more than one time frame albeit that those different time frames share the same dimensions of space.

With this in mind, the idea is to somehow unravel the properties of a magnetic field - based on the following assumptions.  The field has material properties, or is particulate.  Those particles move at a velocity that exceeds light speed.  The field is the agent or 'carrier' of all energy.  The actual energy exchange is determined outside our time frame and it precedes our own.  The effect of that exchange is measurable and visible in our own time frame.

Monday, November 22, 2010

general update


OK Readers, all - I'm nearly there.  All I need to do is get the TIE paper scanned and posted here and then I think I've made the most of the 'due record' and historical events that are appropriate.  The students have nearly completed the heat profiling on that 'standard immersion heater' element and our hope, during the coming week, is to complete the switching tests.  We need to find a means of recording that data - and where to post it - but the hope is that I'll be able to give links here if I can't actually upload the data directly.  We also need to establish a format for that presentation.  Maybe on or around the 26th?  I see a busy week ahead.

I will then post those test results onto two forums - subject to the permission of their owners.  And then too, I'll go public with this blog of mine.  There is a danger however.  Our 'base line' tests were started on a standard element - simply to establish how far away from 'standard' one would need to move to get a realistic application established.  No sooner had we posted a picture of that first element than Glen posted an identical picture and predated that post by 2 days.  My concern is that he'll do the same with this published data of ours.  If he does, then I will have to call a halt to the public display of this and simply keep it under wraps until the work is entirely finished.

I now need to develop a script for the 3D video presentation of the thesis - that I've been detailed to do.  Fortunately I've found a narrator.  At last.  One of our students.  With his voice in mind - it should be easier to compose that script.  I've had a case of 'writer's block' - sort of.  I just needed to find that diction and voice to find the appropriate style of text.  And I think I'm there.  I'm going to develop that script on this blog of mine.  So.  It's likely to be changed and much edited.  But working this publicly also forces me to apply stricter editing standards - which are much needed.

So.  Onwards and Upwards.

Kindest regards,

BTW - the thread that was due for 'deletion'.  It seems that it will NOT be deleted after all.  Just locked.  This is much appreciated.  Also, in case that decision is ever reversed, CLaNZeR has allowed this to be kept on record on his own forum.  Not sure how to access that link but hopefully you can all manage it - should it ever be required.

And there's that link to my thread that has definitely survived the week.  LOL

Sunday, November 21, 2010

good reason for great hopes


So, dear Reader,  that's the evidence.  It was firstly my own thesis - albeit rough and raw and confined to concepts.  I had to find a means to expose all that energy that I know is there.  The only way I could do this, definitively, was to prove it against electric current flow - because that much was doable.  Twelve years ago - I discussed this with two expert theoretical physicists.  They agreed that the proposed circuit would determine that thesis.  They even suggested their lab technician do the test.  He declined.  He was not prepared to get involved with that test. 

And here again - at the risk of boring you all - is the thinking.  In classical power analysis the assumption is made that energy is only available from a supply source - like a battery.  So.  In terms of classical prediction, the heat that is dissipated throughout the circuit will exactly correlate to the energy supplied.  If the heat that is measured is less than the energy that is supplied then that difference may be considered to be 'stored' energy.  Yet these experiments show that SO MUCH energy is stored that it can return it all back to the battery that supplied it - and yet it can cook those resistors and transistors and sundry components in that circuit.  Clearly there's some error in classical prediction.  And clearly there is energy being generated rather than stored - else so much MORE energy would not be available to recharge the supply.
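The classical bookkeeping described above can be written out as a simple ledger: energy delivered by the supply equals heat dissipated plus energy stored in reactive components (for an inductor, E = ½LI²).  A minimal sketch, with made-up numbers used only to show the arithmetic:

```python
# Classical energy ledger for a simple switched inductive circuit.
# All figures are illustrative, not measurements.

def inductor_energy(L, I):
    """Energy stored in an inductor: E = 0.5 * L * I^2 (joules)."""
    return 0.5 * L * I * I

supplied   = 100.0                            # joules delivered by the battery
stored     = inductor_energy(L=0.01, I=2.0)   # 0.5 * 0.01 * 2^2 = 0.02 J
dissipated = supplied - stored                # classical prediction: the rest appears as heat

print(f"supplied {supplied} J = dissipated {dissipated} J + stored {stored} J")
```

The experimental claim, in these terms, is that the measured heat exceeds what this ledger allows - which is why the measurement method matters so much.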

And for those of you who, like me, are unschooled in the sciences - this is what it means.  It means that provided you have a voltage supply source, some kind of battery or some kind of link to a utility supplier - then you can make jolly good use of all that voltage - and simply 'give it back' without it actually costing you anything at all.  But here's the thing.  It's hardly fair to expect our utility suppliers to supply all that voltage in the first instance without charging you something.  That's a fair cop.  However, if you simply apply batteries to substitute that supply source then...?  Indeed you could manage to heat your homes, cook your food, light your lights - with your OWN battery supply source.

As I see it - all that is needed is a continual trickle charge to the batteries from the supply grid - and then let your batteries pump all that potential to 'heat your hearths' and do the necessary.  They'll pretty well do all that with just the smallest amount of energy required to keep up their voltage potential.  That's certainly the evidence in these tests.  If your utility bill cost you say 100 dollars, pounds, whatever, then that trickle charge will barely cost you 10.  And if you used a solar panel to substitute that utility supplier for that small trickle charge - then you'd need only one small panel.  Then you could unhook yourself from that utility supplier - and kiss that dependency 'good bye'.  What a pleasure.

In terms of the fuller requirements here?  We're exploring them.  One thing that is clearly evident is that we need to 'switch' the current - in order to give all that returning energy a chance to 'do its thing'.  That switch is called a transistor.  And right now - there is no transistor that is robust enough to handle all that returning voltage.  It spikes and it spikes big.  It's the difference between lighting a slow burning fuse and the explosion that then launches a rocket.  Or it's the difference between an early rumble from a volcano and the 'blast' that comes from its subsequent explosion.  We need that transistor.  And we need our manufacturers to make it.  And right now - they see no reason for all that extra research expense.  Hopefully our own research will be enough to motivate them.

But all looks promising.
Kindest regards,

Saturday, November 20, 2010

test objectives outlined by Harvey Gramm


Dear Reader,

The following extract was authored by Harvey Gramm.  It was included in the introduction to the first paper submitted to the IEEE and was added as an appendix 1 in the paper that was subsequently submitted to TIE.

I am including it here for three reasons.  The first reason is that Harvey claims that he simply does not understand the thesis.  Yet this extract shows a remarkable level of understanding.  If he does not, or did not, understand it - then he was, notwithstanding, able to give an exceptionally articulate account of it.  Here is that evidence.

The second reason that I'm publishing it is because Harvey is on record as denying that the spirit of collaboration was ever there.  It is true that the text of those papers was variously written only by myself, Donovan and himself.  But that early spirit of collaboration was always there - alive and well - and if Harvey was intending to harm that effort then that intention was well hidden.  The inclusion of A Gardiner, A Palise and S Windisch in the collaboration was simply intended to broaden that global open source representation.  I trusted that they would support the objects of the paper and invited them to join.  In retrospect it was an appalling decision to invite them at all.  So much say for so little contribution.  What was I thinking?

The third reason is a chapter all on its own.  This extract - this introduction - was absolutely in the opening paragraphs of that earlier submission - loud and clear and in full view.  It's the required proof that the thesis was always in the main body of the work that was submitted to the IEEE.  In other words - here is the evidence.  For those readers who eventually get here and who don't know of this relevance - it is this.  The thesis was also in the introduction to that second, much contended paper, submitted to TIE.  But while the thesis was more fully explained in the appendix to that first paper, we were obliged to take out the appendix in our TIE submission as it referenced the actual name of one of the authors, being myself.  This was NOT allowed when submitting for review.  Therefore I had to give a more comprehensive overview of the thinking that required this predicted result in all that experimental evidence.  The absurdity is this.  Those contenders, being H Gramm, G Lettenmaier, A Palise, Gardiner and Windisch, ALL agreed to what was written.  What's more, they gave their written approval.  I submitted that they could leave my name off the paper - in which case they could include the reference.  Or they could include my name, in which case we had to add that explanation in the revised introduction.  They unanimously stated that they preferred me to leave my name in the paper with its required amendment.

The paper was NEVER designed as an account of an anomaly - but rather as the result of a prediction in the context or frame of that thesis.  It is every theorist's dream that there is a definitive prediction in terms of a thesis.  I had that evidence.  And they all effectively and publicly denied me this right.  Under the spurious pretext of preferring to consider it an anomaly.  And THAT after the event.  God alone knows why.  And all those shouts, all that denial, is not only a breach of their undertaking but a breach of good faith towards the advancement of aether energy and technology.  It is the theme of the thesis and the dream of many of us enthusiasts.  Since the thesis is considerably more significant than these 'effects' - which are somewhat crude considering the potential inherent in that 'field model' - then indeed, they have all been rather prodigal both with my early trust and with this potential benefit.  Certainly there is an apparent indifference to concerns for the furtherance of this study in 'dark energy' applications.  What may well have launched us into an early advancement and recognition of that model resulted in a fiasco.  And the advancement of the thesis is really required - on just so many levels - not least of which being that it appears to be correct.

What I need to add is this.  My rights to publish this extract and any part of the work in that paper are protected.  In terms of the rules applied to all collaborative works - any or all the authors may publish where they will - provided only that they compensate all authors in the event that there is a payment for that publication.


The following exercise is intended as a broad brushstroke description of the non classical properties of current flow that were tested in the experiment described herein.

The classical approach to current flow recognizes that charge motion is predominantly that of electric charge. The aspect of this thesis that is considered appropriate to this submission relates to current flow. It proposes that current flow comprises the motion of magnetic charge which, in turn, comprises elementary magnetic bipolar particles. In classical terms, these particles would align with Faraday’s Lines of Force and therefore the number of lines that exist through a particular real or imaginary surface would still be represented as magnetic flux, while the particles themselves, in distribution along those lines, represent the magnetic field.

It is proposed that these fields are extraneous to the atomic structure of matter and are thought to play a critical part in binding atoms and molecules into gross identifiable matter. Further, the particles obey an immutable imperative to move towards a condition of balance or zero net magnetic charge. Given a source material with an ionized charge imbalance which is measured as a potential difference, and given a closed circuit electromagnetic material path, these particles will return to the source material with the necessary charge to neutralize that imbalance, in anti-phase or opposite polarity to the first cycle.

While this is substantially in line with classical assumption as it relates to the transfer of charge, the distinction is drawn that the energy that is then transferred to such electromagnetic components is able to regenerate a secondary cycle of current flow in line with electromagnetic laws. This energy is then not limited to the quotient of stored energy delivered during the first cycle, as presumed by classical theory. Instead it is dependent on the circuit components’ material characteristics and the means by which those materials balance a charge put upon them. Therefore there is a real energy potential in the secondary cycle which would reflect in a measured improvement to the performance coefficient of the circuit arrangement. This enhanced performance coefficient may be at the expense of the bonding of the material in the circuit components. In a worst case condition, this energy may be released as is observed in an exploding wire that is put under extreme charge conditions due to excessive current flow. In a best case condition, the energy is released gradually over time and results in fatigue to those components. This paper addresses an application of the gradual release.

Harvey Gramm - author

Friday, November 19, 2010

the eternal dilemma


Dear Reader,

I don't know how to describe what little insights I have.  I get deeply embarrassed when I think of my presumptions in trying to explain anything at all - let alone something of such profound importance as this field.  And all with so little evident schooling or ability.  And then I'm caught up again in the beauty and simplicity of all those patterns that I'm compelled to make yet another attempt to describe it all.  The solutions are classical.  Yet it seems that I can do this vision, this solution, no justice at all.  This is my Hell.  It holds me locked in a dilemma that has dogged my best efforts for all these many years. 

The real problem is this.  I know of no-one who has proposed such eccentric properties for a single particle - that it can have a field condition with properties that are entirely reversed from its 'out of field' condition.  It's not the same thing as holding an electron bound in a bubble chamber.  It's far, far stranger than that.  The proposal is that the particle in the field is cold and fast and small - and out of the field it becomes hot and slow and big.  And then there are other required eccentricities.  Correspondences.  Synchronicities.  It needs must sustain a field condition that is mathematically perfect.  Yet that very perfection intrinsically generates its required imperfection.  It repels and attracts - both.  The strings are held bound - locked in a formation as strong as Sydney Harbour Bridge.  South to North, head to toe, in an uncompromising military formation.  And in a line that can be as short as the space between two atoms or as long as the entire length and breadth of our universe.  And all sizes in between.  In that head to toe formation - that necklace - that linear formation, it is propelled into an orbit at extraordinary velocities by the sheer repulsion to all those other strings in that field formation.  Yet within that repulsion is enough attraction to hold the field bound.  Perfect charge distribution in whole and in part.  Those long necklaces group together.  Chokers of pearls piled against more chokers of pearls.  Break those strings and the entire formation collapses in a cascading miracle of matter made manifest in our own time frame.  We see this as sparks from a fire, the glow in flux, the vast clouds of our nebulae.  And then that glow fades - the fire cools, the nebulae recongregate - all against varying times that span infinity itself.  Those miraculous little pearls cool.  They regain their formation and their velocities.  
They again dip back to enjoy a field condition where they simply hide outside our time frame and disappear from our world.  No longer are they in our dimensions.  And then they busily engage with each other in that field condition.  A structured background as perfectly assembled as a sonnet - and as breathtakingly economical as a haiku.  As classical as is required for perfect conservation of energy.

So.  How does one reduce such a vision into the dry and accurate language of applied physics?  The concept itself may very well be reduced to a mathematical formula.  But at what price?  I would hope that the field itself can be conceptualised.  That way it can be better shared by many.  Else it may drift into the dry abstractions that our physicists require - which will devolve and damage this vision and condemn it to obscurity.  And it is a classical solution.  It seems that Einstein did well to object to our quantum resolutions.  If any of these insights are correct then God indeed does not play dice.


abstract and introduction to the paper authored in open source collaboration and submitted to TIE


Dear Reader,  

Again for the record, I am here intending to copy some parts of that paper submitted to TIE - the one that was so heavily contended.  What follows is the text that was authored by myself and Donovan Martin and which was open for edit and comment by all the authors including Harvey Gramm.  In point of fact none of the remaining authors outside of Harvey Gramm - and this includes Glen Lettenmaier - made any material contribution to the text.  Glen simply conducted the tests under Donovan's guidance and my own, and according to the requirements proposed by both Open Source members and Harvey Gramm.  I will give TIE a copy of the paper representing a full replication of our earlier tests, published in the QUANTUM October edition 2002 - when this has been scanned and can be reproduced together with the details of the collaborators' names.  The following posts are intended to represent that part of the text of the paper that represented my own contributions.  It is not intended or implied that this is the entire text of the paper, missing as it does the text and contributions of Harvey Gramm. 

The paper is lengthy and will be added to here - but I'm not sure of the limitations on these post lengths and will have to determine this on a 'trial and error' basis. 

Kind regards,

This experiment is designed to test the predictions of a thesis that determines material hidden properties of charge in circuit components. A MOSFET switching circuit is applied in series with an inductive resistive load, and an interactively tuned duty cycle on the gate then enables an aperiodic, self-oscillating frequency.  Subject to overlying harmonics, this is seen to improve the circuit’s coefficient of performance above four.  The thesis proposes that this level of efficiency is due to the induced transients where the resultant current flow emanates from the circuit components. It is proposed that these have an alternate material source of charge to that of the supply. This energy is further proposed to be the source of the anomalous heat signatures, as the circuit components enable this charge flow through the battery supply, thereby also enabling a conservation of charge.


THE following tests were designed to evaluate a thesis that predicted anomalous heat signatures on an inductive resistor placed in series with a switching circuit. The thesis is developed from a non classical magnetic field model but a full description of this falls outside the scope of this submission. What is pertinent here is some overview of that thesis as it applies to current flow. The following paragraph is intended as a broad brushstroke description of this and is further clarified in Appendix I.

The model proposes that charge has the property of mass with the material properties of velocities and thermal capacities associated with that mass. These particles do not conform to the standard model and remain hidden within three dimensional solid or liquid objects or amalgams. They are extraneous to the atom itself and only interact with the atomic energy levels that, in turn, comprise independent fields of the same fundamental particle. These extraneous fields are responsible for the bound condition of the amalgam. This interaction between the fields and the atoms’ energy levels results in a balanced distribution of charge throughout the amalgam. Measurable voltage reflects a transitional state of imbalance throughout these binding fields that, subject to circuit conditions, then move that charge through available conductive and inductive paths to reestablish a charge balance. In effect the circuit components that enable the flow of charge from a supply source are themselves able to generate a flow of current depending on the strength of that applied potential difference and the material properties of the circuit components. Therefore both inductive and conductive circuit components have a potential to generate current flow in line with Inductive Laws.

(This reference to the thesis was included because TIE would not allow reference to any of the authors' names prior to review, to ensure absolute impartiality in that review process.  The previous submissions of this paper to the IEEE included a direct link to that thesis, and my name associated, as it is, with this - as the IEEE do NOT have this preclusion in their review process.  In other words, the thesis had ALWAYS been a part of every submission.  And much required.  We needed to show that the results of these tests were not of an anomalous nature.  Lest the reviewers assumed that we were pointing to a 'freak of nature' rather than to something that was both predicted and indeed repeatable.  This was an essential part of our submission, as it was not expected that any reviewed journal would publish a mere anomaly.  We therefore had to rewrite the paper to TIE to include a synopsis of that thesis, else the paper would have lost this advantage.  This inclusion of the thesis became the 'theme' of Harvey Gramm's complaint to all the collaborators, where he seriously proposed to them that I was hijacking Glen's replication to promote my own work.  And what followed were those mutterings - both loud and public, by both S Windisch and A Palise, added to the excessive parade of injury and indignation by Glen Lettenmaier - that the work SHOULD HAVE BEEN PROMOTED AS AN ANOMALY. 

Sadly and unfortunately none of them, none of these so-called promoters of clean green - with the sole exception of Harvey Gramm - realised this.  And Harvey Gramm was careful to advise all the collaborators that he could convince that the paper COULD indeed be published as an anomaly.  And it seemed an easy task to convince them, and thereby achieve the required alienation of myself in that collaboration, as none of them seemed to realise that it was ALWAYS referenced in the introduction of our previous submissions.  I often wonder if those collaborators even understood most of the text in either paper.  Certainly, on the face of it, it seems not.)

Classical assumption requires an equivalence in the transfer of electric energy based as it is on the concept of a single supply source. Therefore voltage measured away from the supply on circuit components is seen to be stored energy delivered during closed circuit conditions of a switching cycle. The distinction is drawn that if indeed, the circuit components are themselves able to generate a current flow from potential gradients, then under open circuit conditions, that energy may be added to the sum of the energy on the circuit thereby exceeding the limit of energy available from the supply. Therefore if more energy is measured to be dissipated at a load than is delivered by the supply, then that evidence will be consistent with this thesis. The experimental evidence does indeed, conform to this prediction.

This submission details the experimental apparatus, the applied measurements protocol and the data from a test that is designed to adequately assess the data as it relates to the thesis. It is considered that this submission of the experimental results will allow a wide dissemination both of the experiment and some consideration of questions relating to these anomalies, as being preferred and required.

The circuit is designed to enable a secondary current flow that is induced from the collapsing fields over the resistor during the ‘off’ period of the duty cycle, as a result of counter electromotive force (CEMF). This induces a flow of current in anti-phase to the initial current from the source, and this is seen to return to the battery supply source to recharge it. The performance coefficient is enhanced through an applied duty cycle that allows the circuit components to oscillate at a naturally recurring frequency. This is referred to herein as a preferred mode of oscillation which, in turn, results in an aperiodic, self-regulated, resonating frequency. Distinctive harmonics are evident in the waveform and these are seen to be a required condition of the circuit’s enhanced performance as it relates to the efficiency of the recharge cycle over the battery. However the precise parameters of the duty cycle, determined by adjustment of the potentiometer at the gate of the MOSFET transistor, are found to be both critical and elusive.

The fact that these benefits to an enhanced coefficient may have been overlooked under usual applications can be attributed to the narrowness of the range required for this setting. Under usual applications such aperiodicity is considered undesirable and therefore systematically factored out of standard switched applications.

Also included is a discussion on ‘meshed currents’ that are evident and a detailed account of the data analysis that was applied to all measurements. A series of related tests are appended that variously record the progress of the applied test parameters and the improved methods of measurements as the knowledge of the application unfolded. This schedule includes an evaluation of the inductance required on the load resistor to optimize the effect, as well as an evaluation of the comparative diameters of that resistor to determine optimized conditions. Other tests include the measurements that were performed to address a variety of concerns including grounding problems, voltage differentials and applied high frequencies without the required harmonics. These have been appended, together with an overview of the thesis relating to this effect, for both purposes of record and to afford a fuller evaluation as required.

The test that is described herein has results that appear to be consistent with the predictions of that thesis. The returning current from CEMF is seen to reduce the battery discharge rate while sustaining a higher level of energy dissipated at the load. This has a resulting advantage to the coefficient of performance. Indeed, the actual measurements indicate a potential for an absolute conservation of charge at the supply. The conclusions to the tests include a broad discussion of the potential of this technology and indicate a need for expert evaluation of both the results and the theoretical paradigms that predicted the results.

thank you Coast to Coast - and David Davey for your help


Dear Reader,

I need to pay tribute to our team member - this marvelous LeCroy - which is making due record of all our test results.  Many thanks for your contribution here to all our efforts to David, to Coast to Coast and to LeCroy.  All are very much appreciated.

Kindest regards

protocols applied to heat profiling on our tests


Dear Reader,

I'm simply trying to bend my mind around the logic required for the control on these tests that we're conducting.  We're attempting to establish the heat profile of that standard type resistor.  I'm writing this down because it sometimes happens that I then better understand the logic.  Here's the set up. 

We've contained the element inside water, within some hefty insulation, so that the heat is more or less trapped.  Now.  Our advice or instruction is to apply a series of graduated voltages to that rig - from small voltages of, say, 10 volts, to 12 volts - 16 volts - 24 volts and upwards - in a series of independent tests.

Then we record the temperature rise, say every 10 minutes, on each of those control tests.  That way we establish the graph - a time line against the record of temperature rise over time - which can then be plotted on 'x' and 'y' axes, I think it's termed.  And we allow each control test to run until the temperature has reached a predetermined maximum of, say, 60 degrees centigrade.

Then we apply standard protocols to those controls.  The applied voltage divided by the Ohmage of the resistor determines the rate of amperage.  Then, the applied volts multiplied by that rate of amperage flow - volts times amps, or 'vi' - determines the wattage empirically.  And then that wattage times the time taken to reach the predetermined temperature equates to the actual number of joules required to heat that body of water from ambient room temperature to that 60 degrees, say.  That rate of temperature rise will then precisely relate to the amount of energy dissipated to heat the water to that level, expressed as power and related to the caloric values of the test.
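As a rough illustration of this arithmetic, here is a minimal sketch in Python; the resistance, voltage and run-time figures are invented for the example and are not the rig's actual values:

```python
# Control-test arithmetic: amps from volts and resistance, watts from
# 'vi', and joules from watts times the time taken to reach the target
# temperature.  All numbers below are illustrative placeholders.

def control_energy(volts, ohms, hours_to_target):
    """Return (watts, joules) for one control run."""
    amps = volts / ohms                      # rate of current flow
    watts = volts * amps                     # volts times amps - 'vi'
    joules = watts * hours_to_target * 3600  # 1 watt = 1 joule/second
    return watts, joules

watts, joules = control_energy(volts=12.0, ohms=10.0, hours_to_target=6.0)
print(f"{watts:.1f} W dissipated; {joules:.0f} J to reach the target")
```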

In effect, if it takes 2 watts 12 hours to heat it to 60 degrees, and if it takes 4 watts 6 hours to heat it to 60 degrees, then it should take 3 watts 8 hours to heat it to that same level.  In effect we have a graphic indication of what energy is applied and dissipated which then, in turn, relates to the applied wattage delivered against that temperature graph.  Not only will we be able to determine the caloric values of those measured tests - but with those graphs we will be able to determine any arbitrary or 'in between' value by reference to those results that are then empirically evident.
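The 'in between' values fall out of a single constant, assuming the water always needs the same number of joules to reach the target temperature.  A sketch under that assumption, with made-up control figures:

```python
# Estimate hours-to-target for an arbitrary wattage, assuming the
# joule requirement taken from one control run holds for all of them.

def hours_to_target(watts, control_watts, control_hours):
    """watts * hours is constant, so solve for the unknown hours."""
    watt_hours = control_watts * control_hours  # the fixed requirement
    return watt_hours / watts

# If 2 W takes 12 hours (24 Wh), then:
print(hours_to_target(4.0, control_watts=2.0, control_hours=12.0))  # 6.0
print(hours_to_target(3.0, control_watts=2.0, control_hours=12.0))  # 8.0
```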

This test is perfect.  In effect - when we apply this to a measure of our supplied energy from the battery supply source, then it will relate to the rate of delivery determined by the amperage measured across the shunt, multiplied by the voltage of the battery - volts times amps, or 'vi'.  This wattage times the time it takes to heat that water to the equivalent 60 degrees will then be compared to the standard control test, and any increase in the rate at which it heats that water from that applied wattage will be a gain of our experiment over the control.  I see it now.
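That comparison can be sketched the same way; the shunt amperage, battery voltage and control joules below are placeholder values, not measurements:

```python
# Gain of the experiment over the control: heat delivered (per the
# control calibration) divided by energy drawn from the battery.

def gain_over_control(batt_volts, shunt_amps, hours_to_target, control_joules):
    supplied_joules = batt_volts * shunt_amps * hours_to_target * 3600
    return control_joules / supplied_joules

# e.g. the control says 311 040 J are needed; the experiment draws
# 12 V at 0.5 A for the 6 hours it takes to reach the same temperature.
gain = gain_over_control(12.0, 0.5, 6.0, 311040.0)
print(f"gain over control: {gain:.2f}")
```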

But there's a caveat.  We also need to gauge those smaller wattage values that will never be sufficient to heat that water to any discernible or measurable value.  Here the proposal is to take the readings of voltages applied from 2 volts upwards, with the probe positioned so that it touches the actual body of the resistor, and without immersing the resistor in water.  Here the object will be to apply a series of lower voltages and let the resistor stabilise at whatever temperature it reaches - against ambient room temperature.  The same protocols then apply - but this will determine the wattage only with reference to ambient room temperature.  Because of the small heat values associated with this lower wattage, it will be the only reasonable means of determining those wattage values that we anticipate on the experiment itself.
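For these low-wattage runs, the stabilised temperature above ambient becomes the lookup key.  A sketch with invented calibration points:

```python
# Map a stabilised temperature rise above ambient back to watts by
# linear interpolation between control points.  The calibration table
# here is invented for illustration; real entries would come from the
# graduated low-voltage control runs.

calibration = [(0.4, 2.0), (0.9, 4.5), (1.6, 8.0), (2.5, 12.5)]  # (W, dT)

def watts_from_delta_t(delta_t):
    """Linearly interpolate dissipated watts from a measured rise."""
    pts = sorted(calibration, key=lambda p: p[1])
    for (w0, t0), (w1, t1) in zip(pts, pts[1:]):
        if t0 <= delta_t <= t1:
            return w0 + (w1 - w0) * (delta_t - t0) / (t1 - t0)
    raise ValueError("delta_t outside the calibrated range")

print(watts_from_delta_t(6.25))
```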

Personally - I see no difference between the two tests other than the fact that the one dissipates its energy into water - which becomes the standard reference - and the other dissipates its energy into the surrounding atmosphere.  But the former test has the advantage of a reference to its actual intended application.  And in as much as the dispersion of that heat is more general - less localised - it is probably a fairer indication of the total heat dissipated.  This the more so as it is understood that on our earlier tests the resistor has localised areas of heat that can vary. 


Thursday, November 18, 2010

a belated tribute to our scientists


Dear Reader,

I have slept almost 14 hours - with a one hour break divided into two short wakes of less than half an hour each.  More to the point - I've woken up to find none of those alarming email notifications '***** has replied' with that dreaded link directly back to another flamed thread.  It's the first time in over 3 months that I've missed that faint early morning light where I see our wild syringa tree silhouetted against the sky.  It's also the first time in months that I've missed seeing the full promise of day announce itself in great washes and varieties of red.  And I missed those horrible hours before this - from midnight to early dawn - where I struggled to explain one thing after another, within the limits of my poor skills and huge efforts.  I am delighting in this removal from public comment that I seem to have achieved.  And the greatest delight is that I have watched the stats on this blog of mine.  They are that manageably small that I feel I'm moving about - incognito, so to speak - recording what needs must.  And all this being done outside the glare of all that highly polarised attention.  Long may this last.  Where I asked, before, that you tell everyone about this new vehicle of mine - this blog.  Well.  Now I would much prefer it that you keep it secret.  Certainly for the time being.

My last upload was my article on 'more inconvenient truths' and I've just re-read this.  It's rather outspoken but was written in an explosion of anger after reading the absurdities on 'what is electric current flow' and sundry other mutterings and mouthings from some rather pretentious posters.  Also, in fairness, it all needed to be said.  But I'm not sure that it warranted that level of criticism.  What I would like to mention - belatedly and much needed - is a tribute to our scientists - those exceptional theorists who, through the ages, have taken us from the study of the wheel to the study of quantum physics.  They have all diligently applied their exceptional work, led as they were and are by our even more exceptional Greats.  Our progress in all matters scientific is entirely due to their hard work and their amazing skills at measurement and observation.  To me their greatest miracle is that they have unfolded the properties, not only of the atom, but in an even greater miracle of observation - they then unfolded the actual properties - almost the entire mug shot - of the atoms' constituent particles.  Consider this scale of small.  Whole galaxies of atoms could fit on the tip of a needle.  And then if that doesn't leave one with the mouth agape at the wonder of it all - consider this also.  They also showed us their particles - the population and its distribution, so to speak, of the atom itself.  And the particles within the atom are just fractions of a fraction of the size of that body - that atomic geography.  We are here talking about a scale of small that quite simply beggars belief.  And this unfolding, these amazing insights, could not have been done without their genius and their applied disciplines - those extraordinary applications of observation and measurement applied both from empirical evidence and from the logic of math.  I most certainly owe them.  We all do.  We owe them everything that we know about science.  
We owe them a tribute of sincere thanks for the miracles of explanation and breadth of knowledge that they have progressed.  And I, personally, owe them.  Hugely.  I owe them a debt of gratitude for learning something of their amazing work and for the passionate interest it has afforded me, albeit rather late in life. 

Indeed I have no quarrel with scientists and their exceptional abilities.  I only quarrel with some of their theories - based as they are on the problems that I've recounted hereunder.

Kindest regards,

Wednesday, November 17, 2010

the inconvenient truth relating to our philosophies on science


The Moving Finger writes; and, having writ,
   Moves on: nor all thy Piety nor Wit
   Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
Omar Khayyam

If we could see gravitons we’d know everything about gravity.  If we could see electrons we’d know everything about electricity.  If we could see the interaction of particles with each other then we’d know everything about the strong and weak nuclear forces.  We can’t see them.  We can’t even see an atom.  And we certainly can’t see the forces to explain them.  We can only speculate.  And when and if we do speculate then we’re no longer being scientific.  We’re being philosophical. 

The confusions that have been visited on this noble art of science are based on the philosophical reach that science is now trying to usurp.  A scientist does not have the disciplines of logic that are required for philosophy any more than a philosopher has the required acuity of observation and measurement that a scientist has.  The difference is only in this.  A philosopher does not, as a rule, dabble in science.  But our scientists are shamelessly dabbling in philosophies.  And it is all being done with such a disgraceful parade of poor logic that, in the fullness of time, these last pages of its history are likely to remain a source of more than a little embarrassment.  Whole chapters of scientific progress – based on nothing but pure speculation and the accidental use of concepts that partially work and partially don’t work.  And all of it presented with a kind of intellectual flourish – a parade of self-aggrandisement that would rival the pride of Lucifer himself. 

What I find disgraceful, what is entirely inexcusable is that all this bad logic is hidden behind an obscure, in fact, an entirely incomprehensible techno-babble.  Terms are presented as acronyms and all is justified in the language of algebra.  Complex equations drift into ever greater complexities that would confuse God himself.  And all is intended simply to hide the manifold confusions that actually bedevil science itself.

It is possibly understandable that our experts feel required to explain ‘all’.  But these explanations are drifting into realms of obscurity that have nothing to do with reason or logic or common sense or indeed science or philosophy.  It has simply become pretension.  What’s euphemistically referenced as theory is actually just obscure gibberish masquerading as deep intellectual knowledge.  It makes the toes curl.  One must be ‘trained’ in science – of necessity.  It is not meant to be understood - certainly not as propounded by our experts.  Their intention is to flaunt a familiarity with complex abstractions.  And to own up to a lack of understanding would be to let the side down – to somehow admit to the disgrace of not actually being able to see the emperor’s new clothes. 

Let’s explore some of the confusions – let’s actually focus on the bare facts - on some of those manifold contradictions which our mainstream experts defend.  Starting with current flow.  Now.  We all know that electrical engineering is the applied knowledge of the electromagnetic force – so ably unfolded by Faraday and quantified by Maxwell.  And so widely applied in today’s technological revolution.  Our satellites, our trips to distant planets and more to come.  Our internet – our computers – our cars – our measuring instruments, and on and on.  Examples of their skills are evident everywhere.

And yet.  Amongst all those able, those skilled engineers – the vast majority will insist that electricity is the result of electrons moving through their circuits in the form of current flow.  No matter that Pauli’s insights depended on the simple fact that electrons do not share a path.  No matter that we have never been able to get electrons to move in the same direction without forcing them by the application of some very real energy.  No matter that electrons have a like charge and we could not get them to co-operate with each other in a shared environment any more than we can get the souths of two magnets to co-operate.  No matter that no-one has ever found ‘spare’ electrons inside circuit wiring.

And if the glove still doesn’t fit – then try another explanation.  We are now told that the actual current flow is the result of one valence electron somehow influencing a neighbouring electron – in a kind of domino effect.    Now we’ve got over the ‘shared path’ problem and that ‘no loss of electrons’ number.  This would certainly account for current flow.  But the problem is this.  Our scientists know the speed at which one valence electron would influence another valence electron.  And it would take up to half an hour for it to travel through the average two meters of circuit wire before it would reach the light to light it or to reach the kettle to heat it.  There would be a required delay between the switching of the switch and the lighting of the light to get that process started.  But, in all other respects it could – otherwise – have been a reasonable explanation.  But it’s self-evidently spurious.   

So.  If that glove doesn’t fit then try yet another.  We all know that if electrons were the actual ‘thing’ that was transferred from our generators by our utility supply sources, then those generators would need to supply an almost inexhaustible amount of electrons that somehow turn into photons that also somehow light whole cities – all of them linked, as is often the case, to a single supply grid.  The truth is that no utility supply source would be able to access that many electrons.

So.  Again.  Another glove.  Another qualification.   We are then told that actually the electrons themselves are ‘free floating’ and they intrude into the material of the conductive wiring.  They do not come from the supply source itself.  Which also means that these electrons that are somehow detached from any particular ‘home’ – are floating about in the air belonging to no atoms – just free for the taking.  And we must now get our heads around the problem that not only is our atmosphere saturated with these previously undetected little numbers but that they can move into the circuitry – all over the place, straight through the heavy barriers of insulation which was first applied to prevent this from happening, precisely because it’s impossible for electrons to breach this insulating material.

Challenge any scientist, any chemist, on any of these points and, in the unlikely event that they continue the conversation, they will do so in a loud voice and with more than a hint of exasperation.  What gets me every time is their usual defence, based as it is on the statement that I should not question 'what has been known and used for centuries now'.  Somehow this is sufficient justification.  And God alone knows why, because it certainly is not logical.  I would modestly propose that, in the light of so much improbability, current flow – whatever else it is – is NOT the flow of electrons, nor, as I've seen suggested even on these forums, the flow of protons, or ions, or anything at all that belongs to the atom.  Else it would be logically evident.  And it is not.

Then to attend to other confusions, especially as they relate to gravity.  Gravity – a weak force – apparently permeates the universe and acts as a kind of 'glue' on matter.  It only attracts.  It never repels.  If, indeed, all began as a Big Bang, then all that energy will systematically deplete until there is a kind of Big Crunch – where all disappears into the void that preceded that bang.  Just as the electron is the 'carrier' of electrical energy, the graviton is philosophised to carry the gravitational energy.  But the graviton has never been seen.  Yet all is explained as if such a particle were extant.  Millions of dollars, euros, rupees, whatever, have been spent on trying to find some evidence of it in the vast space-time continuum around us and beyond us – in those seemingly infinite reaches of space.

Where is the evidence of this little particle?  Not even the faintest of faint ripples has been found.  Not a whisper.  Not a shadow.  Notwithstanding which, we're assured that this lack of evidence is actually not a problem.  It is not considered sufficient reason to preclude the particle nor to discontinue the experiments.  We are told to ignore the 'absence of evidence'.  A trivial requirement, a small stepping stone.  Because eventually this required evidence must surely come to hand.  And until then – in its absence – the particle is to be regarded and referenced as a FACT.  This because our philosophical scientists no longer require evidence to support a theory.  It's enough to just balance those interminable equations – those indecipherable and incomprehensible sums.

Now.  While it is understood that gravity is attractive – and ONLY attractive – to all matter, for some reason our universe is not drifting towards a Big Crunch.  On the contrary.  Space is EXPANDING.  And this is now also referenced as FACT.  It seems that it's enough for two schools to have reached the identical conclusion to establish a new scientific reality.  No-one questions the logic that supported this conclusion.  But there's a small caveat.  The galaxies and stars and planets are not expanding.  It's the actual space between them that – like poor little Alice stuck inside a rabbit hole – is growing ever bigger and bigger.  And all this space is expanding at a predictable rate and is responsible for systematically propelling great clumps of matter apart from other great clumps of matter – all at a consistent and quantifiable velocity.

Those who subscribe to this new evidence are careful NOT to reference the evidence of galaxies colliding – as this would put paid to their sums.  And those who do not subscribe carefully avoid referencing these same galactic collisions – for the same but opposite reasons.  I'll get back to this point.  But for now the point is this.  If space is expanding, and yet galaxies collide, then either that expansion is not smooth or the galaxies themselves drift through space at varying velocities – which would introduce a marvel of chaos to the otherwise seemingly ordered and structured condition of our universe.

Then more confusions.  We are told that nothing can exceed light speed unless it also had infinite mass.  Really?  In which case how does that explain why photons, which have no mass, are able to travel at light speed?  And then what does one do with that famous equation, E = mc^2?  If the photon's mass is zero, then zero times any value – greater or smaller than 1 – remains ZERO.  Where then is all this energy that moves a photon at light speed?  The truth of the matter is that science took a wrong turn somewhere and is reluctant to 'go back', so to speak.  Somewhere, somehow, the answers that were given as an explanation for all the forces were based on some erroneous foundation – a flaw in the structure.  And I would humbly suggest that this may have everything to do with the need to speculate on the properties of forces that remain invisible and particles that can only be studied by inference.
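For the record, the equation quoted here is the special case the textbooks give for a body at rest; the fuller textbook relation, which mainstream physics applies to the massless photon, reads:

```latex
% Full energy-momentum relation; E = mc^2 is the p = 0 special case.
E^2 = (pc)^2 + (mc^2)^2
% With m = 0 this reduces to E = pc: the photon's energy is said to be
% carried entirely by its momentum p.
```

Whether that answer resolves the confusion is, of course, another question.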

One of the more intriguing obsessions of our mainstream scientists is their interest in particle manifestations.  The neutrinos are the smallest, and they're also considered to be stable.  But these little numbers could just as easily be seen as a really small photon or a really small electron – and the electron neutrino, like the electron, theoretically also has its antiparticle, its twin.  These, together with the photon, the electron and the proton, are the only stable particles.  And they're considered to be infinitely stable – which is a really long time.

But the thing is this.  All other particles – whatever their frequency, their mass or lack of it, their charge, whatever – last for really small fractions of time.  Their duration can be measured in quadrillionths of a second, or quintillionths, and so on – getting progressively smaller and progressively more improbable.  Here's the puzzle.  For some reason, when one slams one particle into another inside a bubble chamber, then from the interaction of two stable particles comes this 'particle zoo'.  It's been described as the creation of a really complex fruit salad from the chance meeting of two fruits.  Those myriad particles that manifest for such a brief moment of time simply decay.  They disappear back into the vacuum of space.  And the proposal is that somehow these manifest particles are the product of that interaction.  It's so energetic that it would be absurd to balance out the energies in terms of thermodynamic laws.

Matter here has multiplied – inexplicably and exponentially.  Strawberries, plums, apricots, pineapples, grapes, quinces, oranges, apples, and on and on – from the chance interaction of a banana with a small tomato.  So our scientists put paid to that energy equivalence – that all-important sum that dominates science in every other respect – and simply look at the conclusion of the experiment, at what happens after the manifest miracle of so much coming from so little.  And inasmuch as the final product of that interaction is less than the manifest particles that decay, what is left is precisely the right combination of particles to evidence a perfect conservation of charge.  One can almost hear the sigh of relief.

No-one – notwithstanding the evidence of this manifest matter in all its varieties, and that variety is widely considered to be potentially infinite – not one of them has suggested that, just perhaps, they are disturbing some kind of matter in the field that holds these particles.  Why is this not considered?  Could it not be that, in the moment of interaction, all that becomes manifest may be those particles in the field that were first invisible – and after impact become visible, and then decay?  That way – and only in that way – would they be able to argue conservation of anything at all.

This is the blind spot, the weak spot – the Achilles heel of our scientists.  There is an evident need, or a compulsion, to uphold one inviolate truth regardless of how poorly it fits with the evidence.  According to mainstream science, energy cannot be created.  And NOTHING can exceed light speed.  My own question is this.  How would we be able to measure anything at all that exceeded light speed?  In our visible dimensions, light is the limit of our measuring abilities.  It's the gold standard.  Actually, it's all we've got.  We've nothing smaller and nothing faster to compare it against.  If anything moved faster than the speed of light, then light itself would NEVER be able to find it.  It would, effectively, be invisible.

Which brings me round to my favourite topic and to another 'inconvenient truth' – to borrow a phrase from Al Gore.  Around about the time when Heisenberg and Bohr were forging the foundations of quantum mechanics, Zwicky, a Swiss astronomer working in America, saw something that was only enabled by new-found access to new and improved telescopes.  What became evident were galaxies, in their millions, where prior to this there was nothing known beyond our Milky Way galaxy.  And what was also evident was that the mass measured in the galaxies was simply NOT enough to hold those galactic structures together.  If gravitational principles were to be universally upheld, then by rights those great big star structures should have unravelled, or should be unravelling.  Neither was evident.  He then superimposed the requirement for what he called 'missing matter'.

Over time those early results have been systematically ratified and refined.  In effect, many scientists – our leaders in the field of astrophysics – have proved, conclusively, that galaxies themselves are held bound by what is now referred to as dark matter, with the expansion attributed to what is proposed to be dark energy.  In effect, they've uncovered a new, hitherto unknown FORCE.  No longer are there four forces.  There appears to be every evidence of a fifth force – and, like a fifth column, it's well hidden but pervasive.  But the new and insuperable puzzle is this.  It's invisible.  Yet it's everywhere.  And we have no reason to doubt this evidence.  Our scientists' ability to measure and observe is unquestionably exact.  But, yet again, they then nose-dive into yet another explanation for the inexplicable.  All around are frantically searching for its particle – the 'darkon' equivalent of the 'graviton'.  We are back in an Alice in Wonderland world – looking at an upside-down reality, a bizarre universe that must, first and foremost, obey any and every rule that our mainstream scientists propose – no matter their inherent contradictions.

Why should the particle be visible?  Is this still to do with the obsessive requirement to disallow faster-than-light speed?  Are we getting ready – set, go – to confuse the hell out of another hundred years or more of theoretical physics, simply to adhere to relativity concepts?  Has the time not come, with respect, when we can concentrate on 'field' physics and explore its implications – rather than impose on a field a construct of known particles, when not one of those particles is able to constitute a field?  No known stable particles are able to move together.  Electrons and protons are, effectively, monopoles.  Free neutrons decay within about fifteen minutes.  Photons irradiate outwards and can only share a path when their rays are deflected unnaturally.  Nothing known is capable of sustaining a field condition.  So WHY do our learned and revered insist on imposing a standard particle construct on a field?  It is the quintessential condition of forcing a square peg into a round hole – of fitting one incorrect fact into another incorrect fact, in another endless circular argument.

Again, with respect, has the time not come – is it not, in fact, LONG overdue – to revisit not so much our answers, which are increasingly shown to be incorrect, but our questions about physics?  I personally think that time would be well spent in exploring the conditions required for a sustained field.  And I think the evidence is now overwhelming that the field itself holds matter – and, for obvious reasons, this unhappy, this uncomfortable, this inconvenient truth needs to be fully explored.  Just perhaps a whole world exists out there that remains out of touch with our actual realities.  It leads – we follow.  It proceeds in one time frame, and we interact with it in another.  That way – with just that one small inclusion into our theoretical constructs – we would be able to reconcile so much with what is evident.
I suspect it's our aether energies – and reference to this has long been considered politically incorrect.  Perhaps the time has come for this poor, abused concept to be revisited and revitalised by our theoreticians.  Certainly we may then salvage some logical coherence that is entirely absent from current thinking.