Post in Quantum Computing group on Facebook on 13th March, 2019
<Well I had a eureka moment last night just like Archimedes (and this by the way is why AI will never surpass human intelligence; they can't sleep and dream; machines have no unconscious mind). I had a dream about these strange triangle shapes that were differentiating each other up to 10 times. Woke up and couldn't make head nor tail of it. Got to thinking about it and realized they were <BRA KET> Dirac formalism and they were performing QM matrix calculations on themselves. I realized that a quantum computer performs complex QM linear algebra and matrix math to solve Schrödinger's multi-particle time-dependent wave equation at the speed of light. We gotta do away with all thought of binary numbers. Think solitons with geometric phase that represent Heisenberg-Dirac-Pauli matrices, manipulated by holonomic quantum gates that may or may not use Boolean algebra to do complex calculus: differentiation, integrals, Fourier transforms etc. But wait - there's more. A quantum computer can also solve Einstein's time-dependent field equations using 4x4 matrices that represent space-time vectors and Christoffel symbols. Eventually quantum computers will be able to simulate the universe. So the bottom line is forget about all this binary qubit stuff. Think linear algebra and matrix math, and in the short term start thinking how you could make a quantum computer that will perform all the basic operations of MATLAB at the speed of light.>
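To make the bra-ket matrix math concrete, here is a minimal numpy sketch (purely illustrative, my own toy example): kets as column vectors, Pauli matrices as operators, and expectation values computed as <bra|operator|ket>.

```python
import numpy as np

# Pauli matrices: the Heisenberg-Dirac-Pauli operators referred to above
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A ket |psi> is a column vector; the bra <psi| is its conjugate transpose
ket = np.array([[1], [1j]], dtype=complex) / np.sqrt(2)
bra = ket.conj().T

# The <BRA|operator|KET> matrix calculation: expectation values
for name, op in (("sx", sx), ("sy", sy), ("sz", sz)):
    print(name, (bra @ op @ ket).item().real)
# prints sx 0.0, sy 1.0, sz 0.0 for this particular state
```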
Post in Quantum Computing on Facebook on 17th March, 2019
This paper, Solving matrix equations in one step with cross-point resistive arrays, involves memristors, which basically exploit memory effects in electric current. Years ago Roger Penrose found quantum effects in microtubules. I posted a few days ago a paper about biophotons and microtubules: "Our theoretical analysis indicates that the interaction of biophotons and microtubules causes transitions/fluctuations of microtubules between coherent and incoherent states." I have also come across a theory that these microtubules act as optical fibers enabling light signals to pass to the various sections of the brain, which gives the brain phenomenal parallel processing capacity. Electrical signals are too slow for signals to pass between widely separated parts of the brain for parallel processing. This paper about solving matrix equations in one step also involves parallel processing. What I would like to propose is that these 'quantum effects' that Penrose found and these "transitions/fluctuations of microtubules between coherent and incoherent states" are actually a form of memristor for light, similar to the memristors for electric current. Here is a relevant sentence from the abstract of a paper: <To facilitate cytoplasmic remodeling and timely responses to cell signaling events, microtubules depolymerize and repolymerize rapidly at their ends.>
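To make the cross-point idea concrete, here is a minimal numpy sketch (my own idealization, not the paper's actual feedback circuit): the array's conductances store the matrix, Ohm's and Kirchhoff's laws do the multiplication in one parallel analog step, and the paper's feedback configuration effectively performs the inverse.

```python
import numpy as np

# A cross-point array of memristors: each cross-point stores one matrix
# entry as a programmed conductance (siemens).
G = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.5, 0.3],
              [0.1, 0.3, 2.0]])  # must be well-conditioned in practice

# Forward operation (Ohm + Kirchhoff): applied voltages V give currents I = G @ V
V = np.array([0.1, 0.2, 0.3])
I = G @ V   # one parallel analog step

# Inverse operation (what the feedback amplifiers in the paper settle to):
# given target currents I, the circuit finds V such that G V = I.
V_solved = np.linalg.solve(G, I)
print(np.allclose(V_solved, V))  # True: the array has "solved" G V = I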
Post in Quantum Computing on Facebook on 19th March, 2019
Well I can't help thinking that these ultrashort laser pulses are what I call solitons, and their 'specially designed donut-shaped intensity profile' has got something to do with their geometric phase!!! <As a rule of thumb, it can occur whenever there are at least two parameters characterizing a wave in the vicinity of some sort of singularity or hole in the topology; two parameters are required because either the set of nonsingular states will not be simply connected, or there will be nonzero holonomy.> A donut-shaped intensity profile = singularity or hole in the topology!?
<Using ultrashort laser pulses, scientists printed optical microdisk lasers in thin perovskite films coated above a glass substrate. The produced perovskite lasers can be used in photonic computers of the future and more widely—to provide the operation of photonic circuits in ultrafast data processing systems.
"We used femtosecond laser pulses with a specially designed donut-shaped intensity profile. The direct impact of a low-intense pulse train on a thin halide perovskite film allows to imprint the disks with a diameter down to 2 microns. The imprinted disks have smooth facets while the femtosecond pulse processing ensures minimized thermal impact of the perovskite>
Read more at: Microlasers for photonic computing of the future
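The geometric phase I keep referring to can be computed directly. A minimal numpy sketch (my own toy example, nothing to do with the perovskite paper): the discrete Berry phase picked up by a spin-1/2 ground state as its field direction is carried around a closed loop, which comes out to minus half the enclosed solid angle — a holonomy in exactly the sense of the quote above.

```python
import numpy as np

def ground_state(theta, phi):
    # Spin-1/2 state aligned with a field pointing in direction (theta, phi)
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

theta = np.pi / 3          # the field traces a cone at this polar angle
phis = np.linspace(0, 2 * np.pi, 400)
states = [ground_state(theta, p) for p in phis]

# Discrete Berry phase: minus the arg of the product of overlaps around the loop
overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
berry = -np.angle(np.prod(overlaps))

solid_angle = 2 * np.pi * (1 - np.cos(theta))   # solid angle the loop encloses
print(berry, -solid_angle / 2)   # both ~ -pi/2: holonomy = -half solid angle
```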
THE MAGIC PROPERTIES OF CARBON FOR QUANTUM COMPUTING
An article in New Scientist, Quantum X-ray machine takes razor sharp pictures with less radiation, explains how to split X-rays by passing them thru a diamond, after which the entanglement properties of quantum particles mean the photons in one stream will have the same quantum properties as the photons in the other streams. They do this by directing the X-rays thru a diamond, which is pure carbon. In my video above, How to make a quantum computer, I explain how the DNA is essentially made up of carbon interspersed with hydrogen, nitrogen and oxygen atoms. The point is that the DNA presents essentially as a carbon-based semiconducting nanowire. It is suggested that this new research on splitting X-rays by shining them thru a diamond is essentially the same process as what is occurring in the DNA, where biophotons (UV and visible light) are being split and redirected as part of quantum computing processes. https://www.newscientist.com/article/2214790-quantum-x-ray-machine-takes-razor-sharp-pictures-with-less-radiation/
Post in Quantum Computing group on Facebook 1st March, 2019.
It's just dawned on me the striking similarities between the 'two-slits' experiment and the DNA Phantom Effect. They both involve an experimenter shining light thru an apparatus and observing the wave pattern that appears on a screen behind. In other words they both involve this fundamental mystery of wave-particle duality. I have just realized that in both experiments we actually get to 'see' the probability waves of QM. When you think of it, all the wave theory of light based on Fourier maths is in the nature of probability waves. The wave nature of light is what 'probably' happens, and then miraculously when you observe a photon you find, wonder of wonders, that is what actually happens. For instance photons don't spread out in all directions from a bright orb billions of light years away and then travel indefinitely in perfectly straight lines, sinusoidally oscillating from -1 to +1 on a Cartesian grid with no mass and no energy thru a void, and then arrive in perfectly parallel 'rays' at the retina of our eyes so we see a perfectly coherent bright orb. Real physical photons would surely be no longer parallel, would be entering our eyes at an angle, and should have spread out. I could list another dozen mysteries about light. The bottom line is the 'wave-particle' duality mystery is precisely mysterious because we actually 'see' the probability waves of QM (if the experimenter doesn't make an observation then the probability waves travel thru both slits). And in the same way the DNA Phantom Effect is 'nature' enabling us to 'see' that the probability waves of QM are also occurring in the DNA.
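To make the 'probability waves on the screen' concrete, here is a minimal numpy sketch of the standard two-slit pattern (all parameters illustrative): two complex amplitudes superposed, then squared to give the intensity you would see on the screen.

```python
import numpy as np

lam = 500e-9        # wavelength (green light), metres
d = 50e-6           # slit separation
L = 1.0             # distance from slits to screen
x = np.linspace(-0.05, 0.05, 2001)   # positions on the screen

# Path lengths from each slit to each screen point
r1 = np.sqrt(L**2 + (x - d / 2)**2)
r2 = np.sqrt(L**2 + (x + d / 2)**2)

# Superpose the two complex amplitudes, then take |amplitude|^2
amplitude = np.exp(2j * np.pi * r1 / lam) + np.exp(2j * np.pi * r2 / lam)
intensity = np.abs(amplitude)**2     # Born rule: probability ~ |psi|^2

# Fringe spacing is lam * L / d = 10 mm for these numbers
print(x[np.argmax(intensity)])       # a bright fringe sits at the centre
```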
Post in Quantum Computing group on Facebook on 2nd March, 2019
This entanglement experiment in New Scientist raises some interesting questions for quantum computing. Two entangled photons were sent off to labs A0 and B0. They were measured by another set of entangled photons, A0Test and A0System, and B0Test and B0System. Then one original photon and the A0System photon were sent on to lab A1, and the other original photon and the B0System photon were sent to lab B1. Researchers claim the observer at A1 got different results from the 'observer' at A0, and the observer at B1 got different results from the 'observer' at B0. In fact the 'observer' at A0 and B0 was not a conscious experimenter, the conscious experimenter at A1 and B1 was choosing randomly to measure either the original photon or the system photon, and the conscious experimenters at A1 and B1 got consistent 'results' with each other in as much as they both significantly differed from the measurements of the 'observer' at A0 and B0. All this proves, in my opinion, is that there is only one measurement within the meaning of QM, at the end of the process. Presumably in quantum computing there will be an enormously large number of interactions like this between entangled photons before some sort of a 'result' appears on the screen. If the assumption made in this experiment were correct, that an observation within the meaning of QM occurs every time there is an interaction between two photons such that the state of one photon changes the state of the other, and the result of that 'observation' can disagree with 'observations' further down the line, then you may as well close up shop right now cos quantum computing will never be possible. What would have been startling is if the A1 results were 100% consistent with the A0 'results' and the B1 results diverged significantly from the B0 'results'. Then we would have to demolish the Copenhagen school and poor old Niels Bohr would turn over in his grave.
Post in Quantum Computing group on Facebook on 2nd March, 2019.
Well I've figured out this question of 'probability' waves in light within the meaning of quantum mechanics. Light occupies a unique position in as much as a single photon is clearly a 'particle' within the meaning of quantum mechanics and yet we can actually see light thru our sense of vision in the macroscopic world. In other words we do not need a measuring instrument to see light. So in the macroscopic world Fourier wave mechanics apply, which will predict how light travels with 100% probability. It is not necessary to square the Fourier integral with its complex conjugate (Dirac's <BRA KET>) in order to get a 'real' probability, and nor is it necessary to normalize the integral. Because the probability waves in the normal Earth-based macroscopic world have 100% probability they 'appear' real. You can actually see them and predict them with certainty. Then comes Special Relativity and most especially the position of the observer. Special relativity states that objects travelling near the speed of light will appear different to a stationary observer here on Earth. It also predicts that clocks will stop in an object travelling at the speed of light. So to an observer here on Earth it will appear that light from a distant galaxy has taken 10 billion years to get here according to Fourier wave mechanics, but from the point of view of the photon itself time is simply not a factor. From the point of view of the photon itself it has traversed those 10 billion light years of space instantly and it is as young and as fresh as the instant it was emitted. When it comes to observing photons in the microscopic quantum world, that is to say photons that we can't actually 'see' and now need a measuring instrument to give us a result, then quantum probability rules apply. I puzzled over that finding that light slows down to 15 miles an hour in a Bose-Einstein condensate. What does that even mean? Light travelling at 15 miles an hour. Then I realized that they didn't actually 'see' the light travelling at 15 miles an hour. There was a probability wave within the meaning of QM that told them the probability that light would travel at 15 miles an hour at near absolute zero, and there was a measuring instrument that gave them that 'result'. The point I'm trying to make is that all wave theories about light are probability waves. But different probabilities apply depending on where you are and what you're trying to do. I think anyone trying to build a quantum computer in a Bose-Einstein condensate will have to factor this in.
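For anyone who wants to see the squaring-and-normalizing step I'm referring to, here is a minimal numpy sketch (wave packet parameters are arbitrary): build a complex wave, multiply it by its complex conjugate to get a real probability density, and normalize so the total probability is 1.

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# A complex Gaussian wave packet (a "probability wave")
psi = np.exp(-x**2 / 4) * np.exp(1j * 2.5 * x)

# Born rule: probability density is psi times its complex conjugate
density = (np.conj(psi) * psi).real

# Normalize so the probabilities integrate to 1 (the <BRA|KET> = 1 condition)
norm = np.sqrt(np.sum(density) * dx)
psi /= norm
print(np.sum((np.conj(psi) * psi).real) * dx)   # ~1.0
```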
Post in Quantum Computing group on Facebook on 2nd March, 2019.
Sorry to be doing all these posts but I've got quite a bit to say, and if you make these posts too long nobody reads them. For my first post we need a definition of entangled photons. These are photons for which, from their own point of view, no time has passed. Even if they are sent to the two ends of the universe, from their own point of view no time has passed. So in that experiment the original entangled photons went from their generation point to labs A0 and B0 and on to labs A1 and B1, and from their own point of view no time passed; the only measurement took place at labs A1 and B1, and those measurements would have been identical. At labs A0 and B0 a new set of entangled photons was generated, a test photon and a system photon, and the system photon was then sent on with the original photon. Also from the point of view of the system photon no time has passed. The only point in the whole system where time passes is the minuscule amount of time to generate the test and system photons. So the system photon is ever so slightly out of phase with the original photon. From the point of view of all photons no time has passed, but when the measurements were made by the conscious observers at labs A1 and B1, their measurements differed slightly from hypothetical measurements that would have been made at labs A0 and B0. The point being that there was no observation and no measurement at A0 and B0 by a conscious experimenter, so in the system there was only one point of view - the point of view of the entangled photons themselves, and for them no time had passed. The fact that the system and original photons are slightly out of phase only becomes a factor when the measurement is made by the conscious observer, and that is why they got a slight discrepancy in results. We can now solve quickly the second issue. The photons operating in a quantum computer in a Bose-Einstein condensate only have their own point of view, so for them no time passes cos they're travelling at the speed of light. They'll do what they do instantaneously. But if a conscious observer decided to open the can and measure the speed of those photons, the measurement would show that they're travelling at 15 mph.
Post in Quantum Computing group on Facebook on 3rd March, 2019.
Before someone says to me wait a minute, there are lots of particles that are entangled that aren't travelling at the speed of light, I'm gonna head you off at the pass. Photons play out in the macroscopic world, so you need an explanation for entanglement of photons in standard physics, and that's special relativity. Atomic and subatomic particles that are entangled don't need an explanation cos they just form part of a multiparticle wave function, and when you collapse the wave function to observe one of them you will automatically get the state of the other. That's just standard Copenhagen. I think in a quantum computer you will need both forms of entanglement. For instance you generate highly polarized (horizontal or vertical) entangled laser photons, one of which reads the data and the other manipulates the nucleus of the hydrogen atom as well as pushing the electron of the hydrogen atom into the conduction band. When that electron falls back into its hole it will emit a single spectral-line photon which will read precisely whether that electron is up or down. I think the laser light manipulating the nucleus and reading the data has to be polarized horizontal or vertical cos they constitute orthogonal vectors that will read 0 or 1 (up or down spin). And I think you're gonna be able to use the DNA molecule itself as the semiconductor. It's my belief that visible laser light will push the hydrogen electron only into the conduction band. So all you gotta do is direct the laser light at eight hydrogen atoms in the DNA molecule and you will get a reading, in spectral-line photons, of an 8-bit binary number.
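As a toy illustration of the readout I am proposing (random outcomes stand in for the actual spectral-line measurements, which of course don't exist yet), eight up/down readings interpreted as one 8-bit binary number:

```python
import random

# Toy model of the proposed readout: eight hydrogen atoms, each
# returning spin up (1) or spin down (0) via its spectral-line photon.
random.seed(7)
spins = [random.choice([0, 1]) for _ in range(8)]   # stand-in measurements

# Interpret the eight up/down readings as one 8-bit binary number
byte_value = int("".join(str(s) for s in spins), 2)
print(spins, "->", byte_value)   # some value in 0-255
```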
Post in Quantum Computing group on Facebook on 5th March, 2019.
In previous posts I described how I thought you can make a quantum computer by generating two entangled laser photons, one of which reads the input data while the other is directed at 8 hydrogen atoms that then emit the spectral lines of an 8-bit binary number. In other words this would be a solitonic (solitary) wave packet consisting of 8 spectral lines that contains an 8-bit binary number. This soliton would have non-adiabatic non-abelian geometric phase that contains information. This could then go thru these holonomic quantum gates that would execute some sort of algorithm or program. It would be a very nice concept if at the end of that process the geometric phase of that soliton has been changed into a different 8-bit binary number that represents the result. It's my feeling that a quantum computer must execute an algorithm or a program instantaneously, like Shor's algorithm. I don't see how you could have RAM in the conventional sense in a quantum computer. If you've followed my ravings up to now, one more crazy thought won't hurt. I've got this crazy notion that the result is still entangled with the other laser photon that read the input data, and it now changes the hard drive data. The only way this could be justified is if the entire computing process represents a wave function that does not actually collapse until a result appears on the screen for a conscious human computer geek to read, and that computer geek actually forms part of the wave function as well. In other words a quantum computer conceptually is just a very sophisticated measuring instrument. All the mysteries that apply to measuring conventional quantum processes will also apply to that quantum computer. You will think I am crazy but Niels Bohr would be proud of me!
Post in Quantum Computing group on Facebook on 8th March, 2019.
"One can recover the information dropped into the black hole by doing a massive quantum calculation on these outgoing Hawking photons," said Norman Yao, a UC Berkeley assistant professor of physics.
<The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength.>
This raises some nice issues about my new hobby horse, the probability waves of light. We'll take this as a pure thought experiment and assume that Einstein's field equations accurately reflect the fabric of space-time and that inside that black hole there is some sort of 4-dimensional milieu that's accelerating (the field equations are second-order differential equations). We drop a 3-D qubit that is stationary in there and then we measure the 'outgoing Hawking photons', that is to say we make an observation which collapses a wave function and we observe a photon in a particular state of polarization. We are entitled to surmise that this accurately reflects whether the 3-D qubit is now spin-up or spin-down in the 4-D space-time milieu. If nothing else it reflects the probability waves of all of physics. It's no more 'improbable' than the theory that light travels 13.5 billion years using 3-D Fourier wave mechanics at a constant speed thru 4-D space-time that is accelerating, and that this actually gives us the dimensions of a real physical universe.
Post in Quantum Computing group on Facebook 10th March, 2019
"and it's probable that there is some secret here which remains to be discovered" quote by C.S. Pierce at head of Eugene Wigner's article The Unreasonable Effectiveness of Mathematics in the Physical Sciences. I don't think you are going to make a quantum computer without discovering that 'secret'. And the secret is (drum roll) all of calculus is in the nature of probability waves whether in the macroscopic or the quantum world. By virtue of Planck's constant we know that all of matter is composed of discreet chunks ie. all of the physical world is discontinuous. Yet so much of physics involves differentiating waves that are 'real' and can't actually be differentiated because they are discontinuous. Once you realize that all of calculus involves probability wave functions and the only difference between quantum and macroscopic waves is the latter have 100% probability and therefore appear real, then we solve the 'observation' question in quantum mechanics. The fact is all of the 'physical sciences' require an 'observer'. A scientist comes up with a hypothesis, does the math, then goes and 'observes' whether the hypothesis is true or false. Scientists don't realize that the 'observation problem' applies in the macroscopic world because they have collapsed a wave function that has 100% probability of predicting the outcome of the observation.
ONCOGENES RESPONSIBLE FOR SPECIES DIFFERENTIATION?
In earlier blogs you will see that the essential problem for Neo-Darwinism is that there is only a 1-2% difference in DNA sequence between human and chimpanzee. At the molecular level, genome and proteins, the human and the chimpanzee are more similar than sibling species, and yet the phenotypes of human and chimp are so vastly different that taxonomically they are placed not only in different genera but in different families.
In his new book The Demon in the Machine, Paul Davies proposes that oncogenes may be responsible for differentiation of the species. Oncogenes are also the genes implicated in the development of cancer:
<Another reason that evolution hasn’t eliminated cancer is because of the link with embryogenesis. It has been known for thirty years that some oncogenes play a crucial role in development; eliminating them would be catastrophic. Normally, these developmental genes are silenced in the adult form, but if something reawakens them cancer results – an embryo gone wrong developing in adult tissue. The writer George Johnson summarizes this well by referring to tumours as the ‘embryo’s evil twin’. Significantly, the early stages of an embryo are when the organism’s basic body plan is laid down, representing the earliest phase of multicelled life. When the cancer switch is flipped, there will be systematic disruption in both the genetic and epigenetic regulators of information flow, as the cells recapitulate the very different circumstances of early embryo development. This will involve both changes to the way regulatory genes are wired together and changes in patterns of gene expression. Our research group is trying to find information signatures of these changes. We hope it will prove possible to identify distinct ‘informational hallmarks’ of cancer to go alongside the physical hallmarks I mentioned – a software indicator of cancer initiation that may precede the clinically noticeable changes in cell and tissue morphology, thus providing an early warning of trouble ahead.>
It should be a simple matter to compare the oncogenes of human and chimpanzee and see if there are any significant differences that might explain how humans and chimps can be so similar in genotype and yet so different in phenotype.
In addition, Paul Davies in his book describes the massive electrical activity both in cancerous tissue and in embryogenesis:
<Altering the electrical properties of so-called instructor cells had a dramatic effect, causing the pigmented cells to go crazy, spreading cancer-like into distant regions of the embryo. One perfectly normal tadpole developed a metastatic melanoma entirely from the electrical disruption, in the absence of any carcinogens or mutations. That tumours may be triggered purely epigenetically contradicts the prevailing view that cancer is a result of genetic damage, a story that I shall take up later in the chapter. All this was remarkable enough. But an even bigger surprise lay in store. In a different experiment at Tufts University, devised by Dany Adams, a microscope was fitted with a time-lapse camera to produce a movie of the shifting electric patterns during the development of Xenopus embryos. What it showed was spectacular. The movie began with a wave of enhanced electrical polarization sweeping across the entire embryo in about fifteen minutes. Then various patches and spots of hyperpolarization and depolarization appeared and became enfolded as the embryo reorganized its structure. The hyperpolarized regions marked out the future mouth, nose, ears, eyes and pharynx. By altering the patterns of these electrical domains and tracing how the ensuing gene expression and face patterning changed, the researchers concluded that the electrical patterns pre-figure structures scheduled to emerge much later in development, most strikingly in the face of the frog-to-be. Electrical pre-patterning appears to guide morphogenesis by somehow storing information about the three-dimensional final form and enabling distant regions of the embryo to communicate and make decisions about large-scale growth and morphology.>
In my article The Evolution of Consciousness on this website I argue that the DNA is a semiconducting nanowire, and that communication in the nucleus of the cell is mediated by UV light, aka biophotons, aka optogenetics. Quite simply, a semiconducting nanowire will emit UV light when the electrons in the conduction band fall back into their holes in the valence band. A recent study notes some curious facts about the electromagnetic properties of DNA. For example, linker DNA is said to "zig-zag" back and forth between "stacks" of these mini-coils, while the histone cores of the mini-coils are reported to link with each other. There is said to be a "permanent dipole moment" between each mini-coil that generates "electric dipolar oscillation" between them. The capacity for mutual induction of electromotive force (emf) in the nucleosomal fiber would be virtually infinite. In addition, the current that has been detected in the nucleosomal fiber is "oscillating"; that is to say, it is an alternating current with frequencies between 2 and 50 MHz.
In other words the genome is capable of generating its own electricity, and this electrical activity mediates communication in the cell thru UV photons, aka optogenetics. We find that oncogenes play a prominent role in both embryogenesis and cancer. It seems likely that these three processes are linked in a fundamental way: the oncogenes direct the electrical activity to produce coherent UV photons that control embryogenesis, and corrupted oncogenes produce incoherent UV photons that send out scrambled signals to the cells, which results in cancer. Quite simply, the proper program they run in embryogenesis goes haywire. Which leads us back to Paul Davies' initial thesis in his book that all of life is information processing.
A review I wrote for Paul Davies' new book The Demon in the Machine. This book has the potential to overturn Neo-Darwinism. But you have to know how to interpret the book. Essentially, intelligence and consciousness are in the DNA in its own right as an organism. The DNA is 'thoughtfully' orchestrating its own evolution.
THE BIG QUESTION
LIFE’S SECRET INGREDIENT: A RADICAL THEORY OF WHAT MAKES THINGS ALIVE
In his new book The Demon in the Machine, Paul Davies attempts to provide an answer to Schrödinger's question "What is life?", posed in a series of famous lectures that he delivered in Dublin in 1943. An auspicious date, because unbeknownst to him, across the Irish Sea in England, Alan Turing had just made the first computer. In those lectures Schrödinger, as one of the founding fathers of quantum mechanics, was obviously angling to somehow reconcile quantum mechanics with organic matter; he was not optimistic and actually thought that some 'new physics' would have to be developed to explain life.
Paul Davies offers as an answer to Schrödinger's question that energy can in some way be equated with, or be responsible for, information; that the energy in a system directly encapsulates the information in the system. However when he comes to giving precise details about how the energy in a biological system can in some way generate the information in the system, he reverts to conventional biological energy that drives cellular processes, namely ATP. Nowhere in his book does Davies point out that Schrödinger's own famous wave equation is precisely about the calculation of energy in a system. Schrödinger's wave equation is a complex differential equation that enables a physicist to calculate the energy in a system. The hydrogen molecule, which consists of two hydrogen atoms bonded together, is roughly the largest system for which Schrödinger's equation can be solved without heavy approximation, but the fact is that every molecule, whether organic or inorganic, has this wave equation, including the extremely complex aperiodic DNA molecule.
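To illustrate the point that the wave equation is literally an energy calculation, here is a minimal sketch (my own toy example, not from Davies' book): the 1D time-independent Schrödinger equation for a harmonic well, discretized on a grid and diagonalized to yield the system's energy levels (in units where ħ = m = 1).

```python
import numpy as np

# Time-independent Schrodinger equation -psi''/2 + V psi = E psi,
# discretized on a grid (hbar = m = 1, harmonic well V = x^2 / 2).
n = 1000
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
V = 0.5 * x**2

# Kinetic term: the second derivative becomes a tridiagonal matrix
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:4]
print(energies)   # ~ [0.5, 1.5, 2.5, 3.5]: the exact ladder E_k = k + 1/2
```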
Davies describes in detail how ATP drives a certain cellular component. The 'information' he gives comes as a result of a biologist 'observing' this process using, no doubt, a variety of sophisticated measuring instruments, and just a basic knowledge of the principles of quantum mechanics should alert him to the fact that this act of observation has resulted in a collapse of the wave function not only at the DNA level but at the cellular level. Perhaps that is not the right terminology: it involves the collapse of a wave function that encompasses the genome, the cell, the measuring equipment and the observing biologist. If you are looking for 'information' to answer the question 'What is life?' you need go no further than this one act of observation. There is more information here than all the computers in the world working in parallel could compute.
While we are on the subject of computers processing 'information', I read Davies' book carefully thru from beginning to end, and not once did I come across the word semiconductor. May I humbly suggest to him that Schrödinger's famous question 'What is life?' can actually be answered in one word - carbon. Carbon, like silicon, has four electrons in its valence shell and is a classic semiconductor. In 1943 however Schrödinger did not know this. The new physics that Schrödinger was seeking to explain life is simply semiconducting technology, which leads to electronics and nanotechnology and ultimately to information technology. The result is that Davies correctly answers Schrödinger's question without actually explaining precisely how that could be so. Davies resorts to complex self-regulating 'up-down' neural networks that somehow produce all this coherent 'information' that we take to be the real world, whereas all that was required to convince us that biology is about information is to point out that the DNA is essentially a carbon nanowire.
Well, perhaps not simply that; he would also have to explain how the DNA could act as a semiconducting nanowire. And herein lies the other conundrum. After having read his book thru carefully from start to finish, I did not come across the word 'optogenetics'. The fact is that there are thousands of research papers in mainstream journals detailing how the DNA absorbs electromagnetic radiation, everywhere from UV light down to ELF radio waves (aka brainwaves), and Davies as a physicist will surely know that when a semiconducting nanowire absorbs electromagnetic radiation it pushes electrons out of the valence band and into the conduction band. He will also know that when these electrons fall back into their 'holes' in the valence band they emit electromagnetic radiation (usually in the UV to visible light range, aka biophotons).
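The arithmetic behind that last point is the standard photon-energy relation λ = hc/E. A short sketch with illustrative gap values (assumptions for the sake of the calculation, not measured DNA values) shows why falling across an electron-volt-scale gap lands you in the visible-to-UV range:

```python
# Photon wavelength emitted when an electron falls back across a band gap:
# lambda = h * c / E_gap
h = 6.626e-34      # Planck's constant, J s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

for gap_eV in [1.8, 3.1, 4.4]:   # illustrative gaps, not measured DNA values
    lam_nm = h * c / (gap_eV * eV) * 1e9
    print(f"{gap_eV} eV gap -> {lam_nm:.0f} nm")
# 1.8 eV -> ~689 nm (red), 3.1 eV -> ~400 nm (violet), 4.4 eV -> ~282 nm (UV)
```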
Effectively Davies has correctly answered Schrödinger's question 'What is life?' without knowing how or why. We shall call it inspiration. Davies has always impressed me as more than just a popularizer of science, but as a philosopher, dare I say a prophet. I distinctly remember one of his earlier books proposing that the universe is the 'mind of God', which impressed me then, although he fell short of recognizing that the universe is indeed a virtual reality of mental construct, and that our reality is no more than a sustained and consistent dream.
I particularly commend Davies for his 'radical' attempt to question Neo-Darwinism. He quotes with approval Nobel Prize winner Barbara McClintock's observation that the DNA seems to be 'thoughtfully' orchestrating its own evolution, which clearly implies that both 'consciousness' and 'intelligence' are in the DNA as an organism in its own right. Davies puts this forward as life being about 'information', and once we understand that the DNA is actually a semiconducting carbon nanowire it's easy to see how this could be so. Indeed Schrödinger himself put forward the same proposition in a later series of lectures, Mind and Matter, delivered at Trinity College, Cambridge in 1956. I'm surprised that Davies did not mention this, as in those lectures Schrödinger also offers an early theory of mutation based on the proposition that the DNA is a semiconducting nanowire.
In his treatment of Neo-Darwinism, Davies is also aware that there is only a 1-2% difference in DNA sequence between human and chimpanzee, and yet the phenotypes of human and chimp are vastly different. It has been said of human and chimp that at the molecular level, genome and proteins, they are even more similar than sibling species, yet taxonomically human and chimp are not only in different genera but in different families. Davies recognizes that Neo-Darwinism is clearly wrong, or at least not the whole story. Davies suggests that epigenetic factors may be responsible for the profound difference in phenotype between human and chimp, and thus raises the spectre of Lamarckism as a more satisfactory explanation for evolution than Neo-Darwinism. As a mainstream, internationally known scientist mouthing such a heresy, he has earned my undying respect and admiration. However I would point out to him that epigenetic factors affect the expression of genes, and if it was truly epigenetic factors that caused the very profound difference in phenotype between human and chimp, then this would be reflected in profound differences in the proteins.
This does not appear to be the case with simple proteins, where there is a one-to-one relationship between DNA sequence and amino acid sequence, but there is certainly here an area of enquiry to see how complex proteins that are synthesized from more than one gene compare in human and chimp. Indeed, if epigenetic factors are at work then this is most likely where they would show up.
I can't remember the last time I read a book that I couldn't put down, but Paul Davies' book The Demon in the Machine is such a book. New Scientist has described his theory as 'radical' and indeed it is. I detect in this book a complete paradigm shift. Paul Davies has sufficient stature in the scientific community that if he cared to write a sequel and develop his 'inspiration' further, and perhaps call it The Ghost in the Machine, he could find himself on the same pedestal that Schrödinger himself occupies.
Bradley York Bartholomew
AN OPEN LETTER TO PAUL DAVIES AUTHOR OF THE DEMON IN THE MACHINE
Dear Professor Davies, I have a few concerns which tend to make me doubt that the universe is physical, and to suspect that it is actually a virtual reality. I am hoping that you can convince me that the universe really is physical, because sometimes I think I must be going crazy. Although your new book claims that all of life is information processing, so maybe you have a few thoughts yourself that the universe could be computer generated.
You will see from earlier posts that the issue is a research article published in 2005 that found that there is a 1-2% difference in DNA sequence between humans and chimps and yet 80% of the proteins are different. From the limited number of comparisons that were made at that time, it emerged that for most of these proteins the differences were very small, say 2%, between human and chimp proteins. The article states that these differences were too small to account for the difference in phenotype between humans and chimps. The fact that there was such a small difference in DNA sequence, and such a small difference in the quality not the quantity of the proteins in the chimp, suggested that all is as it should be and Neo-Darwinism could stand. The article suggested that the difference in phenotype (which I put at about 60% at least) must be due to small differences in a few regulatory genes in early development. Indeed they must be small differences, because there is only a 1-2% difference in DNA sequence overall, and this is already needed to account for the differences in 80% of the proteins.
So I have now identified 10 genes that are expressed in the mammalian placenta and I am going to compare in the gene databases the proteins synthesized from these genes. I suspect however that again I will find that there is about a 1% difference in the DNA sequence and a 2-3% difference in the amino acid sequence of the proteins. My own theory about this is that the fact that there is a small percentage of difference in so many proteins (80%) does indeed account for the fact that there is a 60% (at least) difference in phenotype between human and chimp. Take a simple example, spelled out in the toy calculation below: if there is a 2% difference in each of 80% of the proteins in two species, then that could arguably account for an 80 x 2 = 160% difference in phenotype between the two species. This is not strictly a formal mathematical permutation or combination, but still, as a matter of common sense it could account for the fact that there is a 60% (at least) difference in phenotype between human and chimp. Basically, very small differences in a large proportion of all the proteins in an organism are responsible for its phenotype.
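For what it's worth, here is that back-of-envelope arithmetic as a toy calculation (it reproduces my heuristic only; it is not a formal genetic model):

```python
# Back-of-envelope arithmetic from the letter (a heuristic, not a model):
# if 80% of proteins each differ by ~2%, the aggregate "difference budget"
# appealed to above is 80 * 2 = 160 percentage points.
proteins_differing_pct = 80    # % of proteins that differ between species
per_protein_diff_pct = 2       # ~% difference within each such protein

aggregate = proteins_differing_pct * per_protein_diff_pct / 100 * 100
print(aggregate)   # 160.0 "percentage points" of cumulative difference
```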
All of which means that six million years ago, in just one generation, all these small insignificant mutations must have occurred simultaneously for the human being to differentiate from the chimpanzee. The image above, which presents the standard theory of Neo-Darwinism that the human gradually evolved over six million years and started to stand upright, simply doesn't stack up with the fact that the differences in DNA sequence and in the great bulk of the proteins in human and chimp are insignificantly small. It doesn't stack up because if these insignificantly small mutations happened randomly, in dribs and drabs over millions of years, then there could not have been a complete differentiation between the two species. They would have been able to continue to interbreed, and the fossil record would show all sorts of intermediate hybrids. The only way to account for the 60% (at least) difference in phenotype between human and chimp is if all the insignificantly small mutations happened at once to create two different creatures. It is submitted that this demonstrates that Neo-Darwinism is clearly wrong, and if you can't accept that then surely you must concede that Neo-Darwinism offers no explanation for the fact that two creatures so similar in their genome and proteins could be two separate species so totally different and distinct in their phenotype.
In fact the orthodox explanation, that a small difference in a few developmental genes in embryogenesis is responsible for the differentiation of human and chimp species, would be the strongest argument possible for intelligent design, for these same small differences in only one or a few regulatory genes would be responsible for the differentiation of all the mammal species, and these could not possibly be random chemical mutations.
SCHRÖDINGER’S QUANTUM THEORY OF MUTATION VS. NEODARWINISM
TEN GENES EXPRESSED IN THE PLACENTA
Global gene expression analysis and regulation of the principal genes expressed in bovine placenta in relation to the transcription factor AP-2 family
We detected gestational-stage-specific gene expression profiles in bovine placentomes using a combination of microarray and in silico analysis. In silico analysis indicated that the AP-2 family may be a consensus regulator for the gene cluster that characteristically appears in bovine placenta as gestation progresses. In particular, TFAP2A and TFAP2B may be involved in regulating binucleate cell-specific genes such as CSH1, some PAG or SULT1E1. These results suggest that the AP-2 family is a specific transcription factor for clusters of crucial placental genes. This is the first evidence that TFAP2A may regulate the differentiation and specific functions of BNC in bovine placenta.
A Human Placenta-specific ATP-Binding Cassette Gene (ABCP) on Chromosome 4q22 That Is Involved in Multidrug Resistance
We characterized a new human ATP-binding cassette (ABC) transporter gene that is highly expressed in the placenta. The gene, ABCP, produces two transcripts that differ at the 5′ end and encode the same 655-amino acid protein. The predicted protein is closely related to the Drosophila white and yeast ADP1 genes and is a member of a subfamily that includes several multidrug resistance transporters. ABCP, white, and ADP1 all have a single ATP-binding domain at the NH2 terminus and a single COOH-terminal set of transmembrane segments. ABCP maps to human chromosome 4q22, between the markers D4S2462 and D4S1557, and the murine gene (Abcp) is located on chromosome 6, 28-29 cM from the centromere. ABCP defines a new syntenic segment between human chromosome 4 and mouse chromosome 6. The abundant expression of this gene in the placenta suggests that the protein product has an important role in transport of specific molecule(s) into or out of this tissue.
Identification of a novel member of the TGF-beta superfamily highly expressed in human placenta
While conducting a gene discovery effort targeted to transcripts of the prevalent and intermediate frequency classes in placenta throughout gestation, we identified a novel member of the TGF-β superfamily that is expressed at high levels in human placenta. Hence, we named this factor 'Placental Transforming Growth Factor Beta' (PTGFB). The full-length sequence of the 1.2-kb PTGFB mRNA has the potential of encoding a putative pre-pro-PTGFB protein of 295 amino acids and a putative mature PTGFB protein of 112 amino acids. Multiple sequence alignments of PTGFB and representative members of all TGF-β subfamilies evidenced a number of conserved residues, including the seven cysteines that are almost invariant in all members of the TGF-β superfamily. The single-copy PTGFB gene was shown to be composed of only two exons of 309 bp and 891 bp, separated by a 2.9-kb intron. The gene was localized to chromosome 19p12-13.1 by fluorescence in-situ hybridization. Northern analyses revealed a complex tissue-specific pattern of expression and a second transcript of 1.9 kb that is predominant in adult skeletal muscle. Most importantly, the 1.2-kb PTGFB transcript was shown to be expressed in placenta at much higher levels than in any other human fetal or adult tissue surveyed.
Expression of P-glycoprotein in Human Placenta: Relation to Genetic Polymorphism of the Multidrug Resistance (MDR)-1 Gene
To evaluate whether mutations in the human multidrug resistance (MDR)-1 gene correlate with placental P-glycoprotein (PGP) expression, we sequenced the MDR-1 cDNA and measured PGP expression by Western blotting in 100 placentas obtained from Japanese women. When genotype results were compared between Caucasians and Japanese, ethnic differences in the frequency of polymorphism in the MDR-1 gene were suspected.
Human epidermal growth factor receptor cDNA sequence and aberrant expression of the amplified gene in A431 epidermoid carcinoma cells
The complete 1,210-amino acid sequence of the human epidermal growth factor (EGF) receptor precursor, deduced from cDNA clones derived from placental and A431 carcinoma cells, reveals close similarity between the entire predicted v-erb-B oncogene product and the receptor transmembrane and cytoplasmic domains.
Differential expression of HLA-E, HLA-F, and HLA-G transcripts in human tissue
The data presented here demonstrate that the HLA-G class I gene is unique among the members of the human class I gene family in that its expression is restricted to extraembryonic tissues during gestation. Furthermore, the pattern of HLA-G expression in these tissues changes as gestation proceeds. During the first trimester HLA-G is expressed within the placenta and not within the extravillous membrane. At term, the pattern of HLA-G expression is reversed: the extravillous membrane expresses HLA-G while the placenta does not. Another non-HLA-A, -B, -C class I gene, HLA-E, is also expressed by extraembryonic tissues. Unlike HLA-G, HLA-E is expressed by both placenta and extravillous membrane at first trimester and at term. These results raise the intriguing possibility that the HLA-G-encoded molecule has a role in embryonic development and/or the fetal-maternal immune response.
Cloning of a New Member of the Insulin Gene Superfamily (INSL4) Expressed in Human Placenta
A new member of the insulin gene superfamily was identified by screening a subtracted cDNA library of first-trimester human placenta and, hence, was tentatively named early placenta insulin-like peptide (EPIL). In this paper, we report the cloning and sequencing of the EPIL cDNA and the EPIL gene (INSL4). Comparison of the deduced amino acid sequence of the early placenta insulin-like peptide revealed significant overall and structural homologies with members of the insulin-like hormone superfamily. Moreover, the organization of the early placenta insulin-like gene, which is composed of two exons and one intron, is similar to that of insulin and relaxin. By in situ hybridization, the INSL4 gene was assigned to band p24 of the short arm of chromosome 9. RT-PCR analysis of EPIL tissue distribution revealed that its transcripts are expressed in the placenta and uterus.
Identification of a novel MHC class I gene, Mamu-AG, expressed in the placenta of a primate with an inactivated G locus.
In this study, we report the identification of a novel nonclassical MHC class I locus expressed in the placenta of the rhesus monkey, Mamu-AG (Macaca mulatta-AG). Although unrelated to HLA-G, Mamu-AG encodes glycoproteins with all of the characteristics of HLA-G. These Mamu-AG glycoproteins are limited in their diversity, possess truncated cytoplasmic domains, are the products of alternatively spliced mRNAs, and their expression is restricted to the placenta. Taken together, these data suggest that convergent evolution may have resulted in the expression of a unique nonclassical MHC class I molecule in the rhesus monkey placenta, and that the common structural features of Mamu-AG and HLA-G may be functionally significant.
Secreted placental alkaline phosphatase: a powerful new quantitative indicator of gene expression in eukaryotic cells
This paper describes a novel eukaryotic reporter gene, secreted alkaline phosphatase (SEAP). In transient expression experiments using transfected mammalian cells, we demonstrate that SEAP yields results that are qualitatively and quantitatively similar, at both the mRNA and protein levels, to parallel results obtained using established reporter genes.
PPARγ Is Required for Placental, Cardiac, and Adipose Tissue Development
The nuclear hormone receptor PPARγ promotes adipogenesis and macrophage differentiation and is a primary pharmacological target in the treatment of type II diabetes. Here, we show that PPARγ gene knockout results in two independent lethal phases. Initially, PPARγ deficiency interferes with terminal differentiation of the trophoblast and placental vascularization, leading to severe myocardial thinning and death by E10.0. Supplementing PPARγ null embryos with wild-type placentas via aggregation with tetraploid embryos corrects the cardiac defect, implicating a previously unrecognized dependence of the developing heart on a functional placenta.
Human cholesterol side-chain cleavage enzyme, P450scc: cDNA cloning, assignment of the gene to chromosome 15, and expression in the placenta
P450scc cDNA was used to probe DNA from a panel of mouse-human somatic cell hybrids, showing that the single human P450scc gene lies on chromosome 15. The human P450scc gene is expressed in the placenta in early and midgestation; primary cultures of placental tissue indicate P450scc mRNA accumulates in response to cyclic AMP.