Monday, August 31, 2009

Emergence versus reductionism

In the final chapter of his book, A Different Universe, Bob Laughlin states:
while a simple and absolute law, such as hydrodynamics, can evolve from the deeper laws underneath, it is at the same time independent of them, in that it would be the same even if the deeper laws were changed.
Thinking through these effects seriously moves one to ask which law is the more ultimate, the details from which everything flows or the transcendent, emergent law they generate. That question is semantic and thus has no absolute answer, but it is clearly a primitive version of the moral conundrum raised by the alleged subordination of the laws of living to the laws of chemistry and physics. It shows allegorically how a person could easily master one and learn nothing whatsoever about the other. The epistemological barrier is not mystical but physical.
The conflict between these two conceptions of the ultimate, the laws of the parts or the laws of the collective, is very ancient and not resolvable in a few minutes’ reflection or a casual conversation. One might say it represents the tension between two poles of thought, which drives the process of understanding the world the way the tension between the tonic and dominant drives a classical sonata. At any one time in history a given pole may be stronger than the other, but its predominance is only temporary, for the essence of the plot is the conflict itself.

Saturday, August 29, 2009

Watching Mott and Hubbard kill quasi-particles

This morning I gave a seminar, "Destruction of quasi-particles near the Mott insulator transition" to the Condensed Matter Physics group at University of Toronto. A few points I tried to emphasize in the talk were:

The frequency dependent optical conductivity is a powerful probe of many-body effects in strongly correlated electron materials. In particular, for a wide range of materials one observes a significant redistribution of spectral weight, with only a small amount of weight in the Drude peak, which often only exists at temperatures much less than the energy scales associated with the band structure.
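As a reminder of what is meant quantitatively by "spectral weight", the textbook Drude form and the f-sum rule (standard results, not specific to the materials discussed in the talk) are

\sigma_1(\omega) = \frac{n e^2 \tau / m}{1 + (\omega\tau)^2}, \qquad \int_0^\infty \sigma_1(\omega)\, d\omega = \frac{\pi n e^2}{2m}.

In a strongly correlated metal only a small fraction of this total weight remains in the Drude peak; the rest is transferred to higher energies (e.g., the Hubbard bands).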

The absence of a Drude peak is associated with destruction of Fermi-liquid quasi-particles. Other signatures of a bad metal include a non-monotonic temperature dependence of the resistivity, thermopower, and Hall constant.



Dynamical mean-field theory gives a quantitative description of the redistribution of spectral weight in organic charge transfer salts near the Mott-Hubbard insulating phase.

The talk was largely based on this PRL, a combined theory and experiment work.

Friday, August 28, 2009

Quantifying quantum decoherence of electronic excitations of molecules in condensed phases

This afternoon I am giving my talk at the conference. Most of it is based on this review article. The main ideas I want to get across are:

-the main source of decoherence of electronic excitations in large molecules is the dielectric relaxation of the environment

-this can be quantified in terms of a spectral density, many of which have been determined experimentally from dynamic Stokes shift measurements

-the relevant time scales are tens to hundreds of femtoseconds

Thursday, August 27, 2009

Dial a quantum state!

Today John Martinis presented some amazing experimental results showing how, using superconducting phase qubits coupled to a transmission line, one can synthesize arbitrary superpositions of photon number states. The figure below is taken from a recent Nature paper.


Movies can be seen at the Martinis group (UCSB) site.

Wednesday, August 26, 2009

Is optimal control quantum, semi-classical, or classical?

I had a great meeting with Paul Brumer today, where he answered many of the questions I posted previously about quantum control. Here are just a few of the things I learnt:

In a 2005 PRL, Paul and Hoki did a careful analysis of experiments on the optimum pulses for photo-isomerisation of the dye NK88 in methanol. They found that the optimum pulses corresponded to an incoherent pump-dump scenario and that quantum interference effects were absent.

In J. Chem. Phys. papers in 2005 and 2006, Christopher, Shapiro, and Brumer considered the pulse sequences needed to optimise internal conversion from S2 to S1 in pyrazine by considering the full time evolution of an effective Hamiltonian for these two states, taking into account 24 vibrational modes. They found "active control over internal conversion so as to almost completely suppress the process over time scales of ~50–100 fs [well in excess of the natural internal conversion times (~20 fs)] or to accelerate it to complete internal conversion in less than 5 fs". One thing I want to understand better is how this relates to a conical intersection picture for internal conversion.

A recent experimental paper in PNAS found quantum coherence did not play a significant role in the isomerisation of retinal in bacteriorhodopsin, presenting a different view from a 2006 Science paper from Miller's group in Toronto.

Tuesday, August 25, 2009

Trying to disentangle my incoherent thoughts

Here are a few notes and comments on some of today's talks at the Conference on Quantum Information and Quantum Control in Toronto.

Andrew White (University of Queensland)
Quantum Chemistry on a Quantum Computer: First Steps and Prospects

He showed some nice pictures of potential energy surfaces. In passing I mention John Polanyi, a long-time faculty member in Chemistry at U. Toronto, who shared a Nobel Prize in 1986 for illuminating the significance of such surfaces for reaction kinetics.

Essentially the work described seems to be diagonalising a 2x2 matrix on a quantum computer (by the phase estimation algorithm). It was not clear from the talk how the matrix elements of this matrix were evaluated, since they involve performing various real space integrals (i.e., matrix elements) of the real space Hamiltonian. Practical quantum chemists would say that evaluating such integrals is an essential part of a real calculation. Even disregarding this issue, calculations with more realistic basis sets will require many more qubits. Hence, I wonder if a better direction for such simulations of quantum systems is to focus on simulating systems with small Hilbert spaces interacting with an environment. The simplest such model Hamiltonian would be the spin-boson model. Simulating the quantum dynamics of this on a classical computer is a real challenge, but a quantum computer simulation could have the significant advantage that an artificial source of decoherence would have few cost overheads...
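For definiteness, the spin-boson model I have in mind is the standard one (textbook form, in my notation):

H = \frac{\epsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x + \frac{1}{2}\sigma_z \sum_i c_i q_i + \sum_i \left( \frac{p_i^2}{2 m_i} + \frac{1}{2} m_i \omega_i^2 q_i^2 \right), \qquad J(\omega) = \frac{\pi}{2} \sum_i \frac{c_i^2}{m_i \omega_i}\, \delta(\omega - \omega_i).

All of the effects of the environment on the two-level system are encoded in the spectral density J(omega), which is what one would want a quantum simulation to emulate.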

Shohini Ghose (Wilfrid Laurier University)
Entanglement and nonlocality in multiqubit pure states

This is based on a recent PRL.

Consider pairs of qubits in a pure state. Then the states are entangled if and only if they violate Bell-type inequalities. [If the state is mixed then Werner showed there exist states which are entangled but do not violate a Bell inequality.]

For 3 qubits one can use the 3-particle tangle (introduced by Coffman, Kundu, and Wootters) to quantify entanglement, and there is an inequality due to Svetlichny which is the 3-qubit generalisation of the Bell-CHSH inequality.
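For reference, the residual (three-particle) tangle of Coffman, Kundu, and Wootters is defined in terms of the concurrence C as

\tau_{ABC} = C^2_{A(BC)} - C^2_{AB} - C^2_{AC},

which quantifies the entanglement of qubit A with the pair BC that is not accounted for by its pairwise entanglement.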

A few really striking aspects of the results presented (they are "counter-intuitive" because they are different from what occurs in the 2 qubit case):
  • There exist tripartite entangled states that do not violate the Svetlichny inequality.
  • The tangle is not a smooth function of the state coefficients.
Talking to Shohini afterwards she drew my attention to a paper by Cai et al. which introduces a measure for true 2N-particle entanglement. I am particularly interested in this because I want to quantify the amount of entanglement in the resonating valence bond state of benzene.

Chris Monroe (University of Maryland)
Quantum Networks with Ions, Phonons, and Photons

Chris showed how to simulate a 3 spin Ising model in a transverse field. The spin-spin interaction is mediated by phonons (i.e., relative motion of the ions). Given the state of ion trap technology, extending this to as many as 10 spins should be feasible. This should make possible some of the simulations that Gerard Milburn and I considered several years ago with a student, John Paul Barjaktarevic, and described here. These have the significant advantage that one does not have to worry about Trotter decomposition.
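For reference, the simulated model is (in one common convention; which Pauli components carry the coupling and the transverse field varies between papers)

H = \sum_{i<j} J_{ij}\, \sigma^x_i \sigma^x_j + B \sum_i \sigma^y_i ,

with the J_{ij} set by the phonon-mediated interaction.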

Well, it's midnight in Toronto and 2 pm in Brisbane... time to go back to bed and try to get over the jet lag...

James Bond meets Niels Bohr!

At the conference today my UQ colleague Andrew White pointed out that if you type "Daniel James quantum" into Google the first two entries that appear are for the conference chairman, who rates higher than Daniel Craig as James Bond in Quantum of Solace!

Andrew then showed a clip from the movie which showed James Bond interrogating a poor "graduate student" who says:
I answered all your questions. I told you everything you wanted to know about quantum.
I should also mention that Daniel Craig actually stars as Werner Heisenberg in a PBS production of the play, Copenhagen! You can see some of the video here, and a key scene about the uncertainty principle.

Getting more out of quantum control

A problem is that one does not know the Hamiltonian of the system (this is true not just for a system as complex as a protein but even for a small organic molecule in the gas phase), and so a priori one cannot predict what the optimal laser pulse sequence will be to steer the photochemical reaction. This might make the problem seem hopeless, but in 1992 Judson and Rabitz proposed a very clever solution: optimise the pulse sequence using a learning algorithm that iteratively improves the control scheme. This has now been successfully implemented by many experimental groups. A comprehensive review of what has been done in condensed phases is here.
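Here is a minimal sketch, in the spirit of Judson and Rabitz, of such a closed-loop learning strategy: a genetic-style search over pulse-shaper phases. The function measured_yield is a hypothetical placeholder for the laboratory feedback signal (branching ratio, fluorescence, ...); the numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def measured_yield(phases):
    # stand-in for the experimental feedback; a toy landscape with a known optimum at 0.3
    return -np.sum((phases - 0.3) ** 2)

n_pixels, pop_size = 32, 40                       # pulse-shaper pixels, population size (assumed)
population = rng.uniform(-np.pi, np.pi, (pop_size, n_pixels))
for generation in range(100):
    fitness = np.array([measured_yield(p) for p in population])
    parents = population[np.argsort(fitness)[-pop_size // 2:]]   # keep the best half
    children = parents + rng.normal(0, 0.1, parents.shape)       # mutate to explore nearby pulses
    population = np.vstack([parents, children])
best = population[np.argmax([measured_yield(p) for p in population])]
print(measured_yield(best))   # converges towards the optimum without ever knowing the Hamiltonian

The point of the design is exactly the one made above: the loop never needs the Hamiltonian, only the measured outcome.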

Hence, the optimal (and anti-optimal) pulse sequence contains a significant amount of information about the system. I wonder whether there is a way to convert this into information about the Hamiltonian of the system, for example, details of the ground state and excited state potential energy surfaces. For a reaction which passes through a conical intersection, surely the optimal pulse sequence defines a wave packet at the Franck-Condon point on the excited state surface with a momentum that points towards the conical intersection, i.e., it tells us exactly which vibrational modes comprise the "reaction co-ordinate". The wave packet shape and speed may be optimised to minimise intersystem crossing at the conical intersection.

The figure below, taken from a paper by Hunt and Robb, shows the relevant potential energy surfaces for photoisomerisation of a model cyanine dye.

Monday, August 24, 2009

Some questions about quantum control

I am struggling my way through trying to figure out what quantum control is really about and how quantum it is. On the plane to Toronto I read some of the book by Shapiro and Brumer. A few questions I have include:

When can quantum control be semi-classical?

Why is it possible to perform quantum control in condensed phase systems at room temperature?

How much decoherence is needed to destroy it?

The basic idea of quantum control is to use multiple coherent laser pulses to create an initial state which is a coherent superposition of several quantum states. This state then evolves in a manner where the final products (branching ratios) depend on the relative phase and amplitude of the initial laser pulses.

One example is the Tannor-Rice scheme for the reaction
A+BC -> AB + C
which is shown in the figure below.

By the Franck-Condon principle absorption of a photon by the system in the electronic and vibrational ground state will produce a Gaussian vibrational wavepacket in the excited electronic state.
However, if one applies the relevant pulse sequence one can produce an excited state which has a vibrational wave packet with a net momentum to the left or the right.
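Schematically (my notation, not taken from the book), the shaped pulse prepares a superposition of excited-state vibrational eigenstates |n> with real amplitudes c_n and controllable phases phi_n,

|\psi\rangle = \sum_n c_n e^{i\phi_n} |n\rangle, \qquad \langle \psi | \hat{p} | \psi \rangle = \sum_{n \neq m} c_n c_m\, e^{i(\phi_m - \phi_n)} \langle n | \hat{p} | m \rangle ,

so the net momentum of the packet depends on the relative phases phi_m - phi_n imprinted by the pulse (the diagonal terms vanish for bound eigenstates).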

A similar issue arises with photo-isomerisation. Usually this occurs via a conical intersection between two potential energy surfaces.
One should be able to enhance the photo-isomerisation yield by producing an excited state in which the vibrational wave packet is a coherent state with a momentum pointing towards the conical intersection.

Saturday, August 22, 2009

Max Delbruck: master of biophysics

Max Delbruck (1906-1981) started his career as a physicist but switched to biology. This week in PHYS3170 we saw how using simple arguments Delbruck was able to propose that the carrier of genetic information must be a single molecule.

About 20 years ago I heard a speaker refer to a commencement (i.e., graduation ceremony) address by Delbruck at Caltech in 1978, entitled The Arrow of Time: Beginning and End. I always wanted to track down a copy, and a few years ago I was delighted to find that Delbruck's son Tobi has a site with an annotated copy of the original text of the address.

Thursday, August 20, 2009

Quasi-particles in the pseudogap state

What is the relationship between the pseudogap and the superconducting gap in the underdoped cuprates?

A really nice ARPES (Angle-Resolved PhotoEmission) study by Shi et al. shows that in an underdoped LSCO sample (Tc = 30 K) they are one and the same.
The figure at right shows the superconducting gap Delta (filled blue dots) at T = 12 K as a function of phi, the angle around the Fermi surface; phi = 45 degrees is the node.
The open green circles are at T=49 K, i.e., well above Tc and into the pseudogap state.
Gamma1 is the electron lifetime as a function of angle at 12 K.

This is all consistent with perfect particle-hole symmetry and the quasi-particles in the pseudogap state being like Bogoliubov quasiparticles.
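For reference (standard textbook forms, not taken from the paper), the d-wave gap and the Bogoliubov quasiparticle dispersion being compared to are

\Delta(\phi) = \Delta_0 \cos(2\phi), \qquad E_{\mathbf{k}} = \sqrt{\xi_{\mathbf{k}}^2 + \Delta_{\mathbf{k}}^2},

so the gap vanishes at the node phi = 45 degrees, and particle-hole symmetry corresponds to E being unchanged when xi changes sign.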

This follows an earlier paper by Norman et al., who compared ARPES spectra to those predicted by the one-electron Green's function for six alternative theories of the pseudogap state (d-density wave, nesting density wave, differing Luttinger surface, energy displaced node, ...). The only one consistent with experiment is a d-wave superconducting fluctuation model, provided one has different lifetimes in the particle-hole and particle-particle channels.

As an aside, I note this shows the value of the method of alternative hypotheses.

Trying to shed light on organic LED's?

The system below is just one example of an organometallic phosphorescent system which is the basis of some LEDs.

Some of the questions that interest me are:

Suppose we ignore spin-orbit coupling and the environment (either a solvent or a thin film): what are the quantum numbers and energies of the different excited states?

If the complex has C3v symmetry, excited states can be labelled by A1, A2, and E (the irreps of the group). Spin rotational invariance means the states will be either singlets or triplets.

What is their oscillator strength? Which states can be identified with features in the optical absorption spectrum? and with the emission spectrum?

How much mixing between the ligand centred (LC) and metal-to-ligand-charge transfer (MLCT) states occurs due to hybridization of metal orbitals and ligand pi and pi* orbitals?

How does the large spin-orbit coupling on the metal core change things?
This will mix singlet and triplet states, particularly states with significant metal character.
It will also allow intersystem crossing (i.e., transitions from singlet to triplet states and vice versa) and will give "triplet" states oscillator strength.

How does the environment change things?
It breaks the C3v symmetry and tends to localise excitations on the ligands.

What determines the quantum efficiency of phosphorescence? What processes compete with phosphorescence? What is the dynamics following photoexcitation?

It must be something like:

A1 (ground state) + photon ->
A1 or E (depending on polarisation of light) "singlet" ->
"MLCT" "singlet" via internal conversion (and a conical intersection), symmetry? ->
"MLCT" "triplet" via spin-orbit coupling (i.e., intersystem crossing), symmetry? ->
"LC" "triplet" with some "singlet" character from spin-orbit coupling-> A1 (ground state) + photon

There are well-defined selection rules from group theory for spin-orbit coupling and internal conversion. This may help identify the symmetry of the different states.
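Group theory bookkeeping of this kind is easy to automate. Below is a minimal sketch (my own, not from any particular package) that reduces direct products of C3v irreps using the standard reduction formula; a transition is symmetry-allowed when the product of the initial state, the operator, and the final state contains A1.

import numpy as np

# C3v character table: rows = irreps (A1, A2, E), columns = classes (E, 2C3, 3 sigma_v)
irreps = {"A1": [1, 1, 1], "A2": [1, 1, -1], "E": [2, -1, 0]}
class_sizes = np.array([1, 2, 3])
order = class_sizes.sum()   # group order h = 6

def decompose(chi):
    # reduction formula: n_i = (1/h) sum_c g_c chi(c) chi_i(c)
    chi = np.array(chi)
    return {name: int(round(np.sum(class_sizes * chi * np.array(c)) / order))
            for name, c in irreps.items()}

def direct_product(a, b):
    return decompose(np.array(irreps[a]) * np.array(irreps[b]))

print(direct_product("E", "E"))    # {'A1': 1, 'A2': 1, 'E': 1}
print(direct_product("A2", "E"))   # {'A1': 0, 'A2': 0, 'E': 1}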

Can it be made any simpler?

Tuesday, August 18, 2009

Finding the protons in proteins

X-ray crystallography has proven to be an extremely powerful tool to determine the structure of proteins. However, it does have several limitations:
  • real biology happens in water not in a crystal (How do we know the protein structure is the same in a crystal as in the native state in water?)
  • one can't see the location of protons (many important biochemical processes involve proton transfer and so knowing where the protons are is extremely important)
An alternative probe is neutron scattering. It does not have as high a resolution as X-ray scattering, but one can see deuterium (which can be substituted for protons). It can also see water molecules.
A recent PNAS paper, Low-barrier hydrogen bond in photoactive yellow protein, illustrates the power and importance of the technique. Using neutron scattering they were able to identify the positions of 819 H atoms out of 942. More importantly, they showed that the hydrogen bond between the chromophore and the carboxylic acid group of the Glu46 residue was particularly short. The authors propose that in the excited state the fast relaxation of this bond into a normal hydrogen bond is the trigger for the photo-signal.

[I stumbled across this paper while looking at the Protein Data Bank as part of the PHYS3170 course]

I am curious as to how this relates to an earlier JACS paper by Gerrit Groenhof et al., which uses an extensive quantum-classical molecular dynamics study to argue that the Arg52 residue controls the photo-isomerisation process.

Monday, August 17, 2009

Dancing with the molecules

Here are some things I learnt from chapter 3 of Biological Physics by Nelson.

Biological question: Why is the nanoworld so different from the macroworld?
Physical idea: Everything is (thermally) dancing.

Activation barriers control reaction rates.
i.e., the rate of a typical chemical reaction depends on temperature largely via a factor of exp(-Eb/k_B T), where the activation energy Eb is independent of temperature, characteristic of the reaction, and usually of the order of an eV (about 10^{-19} J). Note that this energy scale is roughly 40 times k_B T at room temperature. Knowing this helps one understand why genetic information must be coded at the molecular level. (Schrodinger emphasized this in What is life?, which heavily influenced James Watson).
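A quick numerical check of how dramatic this suppression factor is (my numbers, for illustration):

import numpy as np

k_B_T = 0.025                     # eV at room temperature (~300 K)
for E_b in [0.5, 1.0, 1.5]:       # activation barriers in eV
    print(E_b, np.exp(-E_b / k_B_T))
# ~2e-9, ~4e-18, ~9e-27: modest changes in the barrier change the rate by many orders of magnitude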

Early in the twentieth century, T.H. Morgan performed a series of experiments on genetic linkage that led him to deduce that genetic factors (alleles) must be encoded in a linear sequence. By the 1940s there were partial maps of the fruit fly genome:


In the 1930s Muller and Timofeeff found that the frequency with which a specific mutation (of fruit flies) occurred varied linearly with the total X-ray exposure of the system. Max Delbruck was able to deduce from this that genes are single molecules.

How fast does water forget?

This should be read in conjunction with my earlier post on Quantum decoherence in water.

Looking forward to my visit to Toronto next week, I have been reading a couple of beautiful ultrafast spectroscopy papers from a collaboration between the Miller (Toronto) and Nibbering (Berlin) groups. In a 2005 Nature paper, Cowan et al. showed that in pure water an excitation of the OH stretch loses its memory within about 50 fsec, faster than in any other liquid. They suggested that librational motions (restricted rotations due to hydrogen bonding) are key to understanding this.

In a more recent PNAS article, Kraemer et al. measured the temperature dependence (from 274 to 340 K) and found that the polarisation anisotropy decay did not vary with temperature, whereas the population lifetime increased by about 50 per cent.

The main questions I have are:

Is the population lifetime largely due to intermolecular Forster Resonant Energy Transfer?
What lifetimes do Forster's expressions give?

Can the spectral diffusion and dephasing be largely described by the expressions below together with the spectral density J(omega) from my previous post?
This can be tested by comparing the isotope and temperature dependence of the frequency dependent dielectric constant of bulk water (and particularly the features associated with librations).

The spectral density from the dielectric continuum model mentioned in the previous post is

The real part of the phase of the off-diagonal part of the density matrix (for the nu=0,1 vibrational states) describes quantum decoherence:
The spectral diffusion is described by the time derivative of the imaginary part of the phase of the off-diagonal part of the density matrix:
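For reference, the standard forms of these expressions in the independent boson model are (my notation; prefactors and sign conventions may differ from those in the review article):

J(\omega) \propto \frac{(\Delta \mu)^2}{a^3}\, \mathrm{Im}\!\left[\frac{\epsilon(\omega)-1}{2\epsilon(\omega)+1}\right],

|\rho_{01}(t)| = |\rho_{01}(0)|\, e^{-\Gamma(t)}, \qquad \Gamma(t) = \int_0^\infty d\omega\, \frac{J(\omega)}{\omega^2} \coth\!\left(\frac{\hbar\omega}{2 k_B T}\right) (1 - \cos\omega t),

\Phi(t) = \int_0^\infty d\omega\, \frac{J(\omega)}{\omega^2} (\omega t - \sin\omega t), \qquad \text{spectral diffusion} \sim \frac{d\Phi}{dt},

where Delta mu is the change in the dipole moment of the chromophore on excitation and a is the Onsager cavity radius.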

Saturday, August 15, 2009

Quantum control

In a week I am heading off to Toronto for a Conference on Quantum Information and Quantum Control. I will give a talk on quantum decoherence in biomolecules. There are a number of speakers I am looking forward to hearing and meeting. Hopefully, I will post something about their work soon...

What is quantum control? Wikipedia has a short but helpful entry:

Coherent control is a quantum mechanical based method for controlling dynamical processes with light, employing quantum interference phenomena which are controlled by shaping the phase of laser pulses.

Friday, August 14, 2009

Quantum decoherence in water

Water is an amazing substance which has many unique properties. Some of these properties are essential for the functionality of biomolecules.
Previously, I mentioned how it plays a key role in determining the spectral density which describes how electronic excitations in biomolecules decohere.
If one takes the simplest possible Onsager-type continuum dielectric model, where an electric dipole is placed inside a spherical cavity, then the spectral density can be related to the frequency dependent dielectric function, epsilon(omega). For more on this see this review article.
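A minimal sketch of the idea (my own toy model, not the Hsu-Song-Marcus calculation): evaluate the Onsager-type combination Im[(epsilon(omega)-1)/(2 epsilon(omega)+1)] for a single Debye relaxation, using approximate literature values for water.

import numpy as np

eps_s, eps_inf, tau_D = 78.3, 4.9, 8.3e-12   # static and high-frequency dielectric constants, Debye time (s)
omega = np.logspace(9, 14, 500)              # angular frequency (rad/s)
eps = eps_inf + (eps_s - eps_inf) / (1 - 1j * omega * tau_D)   # single Debye mode; exp(-i omega t) convention
J_over_omega = np.imag((eps - 1) / (2 * eps + 1)) / omega      # proportional to J(omega)/omega; prefactor and sign convention omitted
# A real calculation uses the measured eps(omega), whose librational and intramolecular resonances
# produce the shoulder near 800 cm^-1 discussed below; a single Debye mode only captures the
# low-frequency part of the curve.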

The figure below [from a nice paper by Hsu, Song, and Marcus] shows a plot of -J(omega)/omega using the measured epsilon(omega) for liquid water at room temperature.

What is the origin of the large shoulder around 800 cm-1 (corresponding to a time scale of about 40 fsec)?

It is due to the "librational" motion of the water molecules. This is a rotational motion of an individual water molecule which is restricted by hydrogen bonding to the four surrounding water molecules. A nice animation of the librational motion of water is here.

In heavy water, D2O, the librational frequency is decreased by a factor of about two.

The librational motion makes an important contribution to the heat capacity of water and ice, as described in the classic book of Eisenberg and Kauzmann.

Wednesday, August 12, 2009

Biological physics vs. Biophysics

On the PHYS3170 blog, one of the students, Alex, posted a perceptive question about clarifying the relation between biophysics and biological physics.

Certainly, different people will have different definitions. But, the question is worth thinking about if it helps clarify different approaches (which there certainly are).
Similar issues arise with physical chemistry vs. chemical physics.

Much of biophysics seems to be concerned with applying physics techniques to understand biological systems and processes.

Biological physics is particularly interesting when studying biological systems illuminates physical concepts and principles which hold independent of the system under study.
A master at this is John Hopfield, best known for his work on neural networks, which is now applied in computer science.

Another example is how studying the protein folding problem led to the energy landscape theory of Joseph Bryngelson and Peter Wolynes. Their approach introduced a principle of minimal frustration and the notion of "folding funnel" energy landscapes, which allow a protein to fold via a large number of pathways. Such ideas of rugged energy landscapes are also relevant in non-biological systems such as glasses and artificial neural networks.

Two articles that discuss some of these ideas are a short Physics Today piece by Hopfield and a Reviews of Modern Physics paper by Austin, Frauenfelder, and Wolynes.

Thermal conductivity, spinons, and visons

Patrick Lee sent me the following comment on an earlier post on thermal conductivity and visons and asked me to post it.

Ross, I am very glad you brought up the issue of the low temperature thermal conductivity in the ET salt spin liquid. I have been concerned about the data below 0.17 K (T^2 = 0.03) for some time. I think you will find a comparison with the data on the 9 K ET superconductor [Belin, Behnia and Deluzet, PRL 81, 4728 (1998)] amusing.

[This powerpoint slide shows the relevant data, reproduced below, sorry for the small size]

First, because they used different units, the numbers on the y-axis of the superconductor data should be multiplied by 0.1 to compare with the right panel. The point is that the absolute value is very comparable, as pointed out by Mike. More remarkable is that both sets of data show a hump starting below Tc = 9 K for the superconductor and To = 6 K for the spin liquid. The low temperature data shown in the lower panels are also very similar. Behnia stopped his measurement below T = 0.17 K. If we ignore the spin liquid data below the same temperature, one would be tempted to draw a straight line which extrapolates to a finite value of kappa(T)/T = 0.03 W/(K^2 m), very close to that for the superconductor. To me it is truly amazing that a superconductor and an insulator look so much alike!

What does this all mean? The hump in the superconductor is well understood to have two possible origins. The phonon has a longer mean free path due to the gapping of quasiparticles and the quasiparticles may have increased their contribution to the thermal conductivity due to its longer lifetime. The only way to disentangle them is via thermal Hall measurements. Because the hump looks so much like that of the superconductor, my own inclination is to explore a fermionic spinon explanation before considering “more exotic” objects such as visons. It seems natural to replace the words electron and superconducting by spinon and spinon pairing in the insulator. This has led us to propose a thermal Hall measurement for the insulator (Katsura, Nagaosa and Lee, arXiv: 0904.3227). Yes, the spinons do see an external magnetic field via the spin chirality (or gauge field).

As to the low temperature behavior, I do not understand the experiment well enough to challenge the data. I hope the experiment will be repeated by others. At the same time, it will be useful if Dr. Yamashita will show us data on the superconductor below 0.17 K. This may help answer some of the questions concerning thermal decoupling.

Tuesday, August 11, 2009

How can it be?

Ch. 2 of Biological Physics by Nelson reviews a myriad of exquisite biological structures and processes. It is easy to get lost in the detail. But, he ends nicely, emphasizing the need to focus on the question:

How can all these "miraculous" processes occur?

Is there some common underlying physics?

The key physical phenomena we will need to understand are
  • specificity
  • self-assembly
  • active transport
Other things to keep in mind and pay attention to are

-the hierarchy of length scales
-the hierarchy of energy scales associated with different types of bonding
-water, with its unique properties, is crucial, e.g., the hydrophobic interaction drives many processes
-structure determines property which determines function

Frustration produces a valence bond state

Ever since Anderson's seminal RVB Science paper in 1987, many theoretical papers on quantum magnetism have considered valence bond ground states for frustrated quantum antiferromagnets. However, there has been a paucity of experimental realisations.

But, a few years ago Tamura, Nakao, and Kato clearly observed such a state in the Mott insulating phase of an organic charge transfer salt based on the Pd(dmit)2 molecule.

The definitive experimental signatures were:
  • a structural phase transition associated with dimerisation of the pairs of molecules associated with each spin
  • a gap in the magnetic susceptibility below the transition
The magnitude of the gap (between 30 and 50 K) can be compared to the exchange interaction J of about 250 K estimated from the temperature dependence of the magnetic susceptibility.

This all fits nicely with the results of calculations (using series expansion techniques) on the relevant Heisenberg model on an anisotropic triangular lattice.
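For concreteness, the model is the Heisenberg model on the anisotropic triangular lattice,

H = J \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J' \sum_{\langle ij \rangle'} \mathbf{S}_i \cdot \mathbf{S}_j ,

where the two sums run over the two inequivalent sets of bonds of the lattice (which set of bonds is labelled J and which J' varies between papers, hence the note about notation below).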

If one is close to the isotropic triangular lattice, i.e., J'/J is between 0.65 and 0.95, then the ground state has valence bond order and there is a spin gap with the magnitude shown below.
[To compare notation, J'/J = J1/J2.] The magnitude of the gap, (0.1-0.2)J, is consistent with the experiment.

Later I will write about how one can apply pressure and produce superconductivity. This really shows the interplay between frustration, quantum magnetism, and superconductivity anticipated by Anderson.

Some of the above figures were taken from a nice talk by Subir Sachdev.

Sunday, August 9, 2009

Keeping track of everything without accountants


Biological question:
How do cells organise their myriad ongoing chemical processes and reactants?

Physical ideas:
a. Bilayer membranes self-assemble from their component molecules; the cell uses them to partition itself into separate compartments.
b. Cells use active transport to bring synthesized materials to particular destinations.
c. Biochemical processes are highly specific. Most are mediated by enzymes, which select one particular target molecule and leave the rest alone.

Nelson, Biological Physics, page 37

Saturday, August 8, 2009

Managing people

Last week I had to go to a workshop on managing staff. This is required at UQ for all staff who perform annual staff appraisals.
On this topic two resources I have found helpful are a short DK book and an article in the Harvard Business Review, "What great managers do":
there is one quality that sets truly great managers apart from the rest: They discover what is unique about each person and then capitalize on it. Average managers play checkers, while great managers play chess. The difference? In checkers, all the pieces are uniform and move in the same way; they are interchangeable. ... In chess, each type of piece moves in a different way, and you can’t play if you don’t know how each piece moves.
I thank my wonderful wife for bringing these resources to my attention.

Friday, August 7, 2009

Charge mobility in dendrimers



This post was stimulated by a talk at last week's COPE meeting and by looking at this 2001 Phys. Rev. B paper from the groups of Samuel and Burn.


In the attached rough notes I try to provide a framework to answer questions such as:

What determines the charge mobility in an array of these systems? How can it be maximised?

What is the relative importance of the conjugated and non-conjugated components of the dendrimer?

I take it the non-conjugated surface groups are required to make these systems soluble.
It seems that the mobility falls off exponentially fast with the length of the non-conjugated molecules at the surface (roughly an order of magnitude for every carbon in the chain?).

Understanding the notes may be made easier by reading some of my earlier organic electronics posts, especially this one on Hush-Marcus electron transfer theory.
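As a rough guide to why the mobility falls off exponentially, here is a sketch of a Marcus-type hopping rate with a transfer integral that decays exponentially with the length of the saturated spacer. All of the parameter values (lam, dG, H0, beta) are assumed, purely illustrative numbers, not fits to the dendrimer data.

import numpy as np

hbar, k_B_T = 6.582e-16, 0.025        # eV*s, eV at room temperature
lam, dG = 0.3, 0.0                    # reorganisation energy and driving force (eV), assumed
H0, beta = 0.05, 1.0                  # transfer integral prefactor (eV) and decay constant per CH2, assumed
for n_CH2 in range(0, 6):
    H = H0 * np.exp(-beta * n_CH2)    # transfer integral decays with spacer length
    k = (2 * np.pi / hbar) * H**2 / np.sqrt(4 * np.pi * lam * k_B_T) \
        * np.exp(-(dG + lam)**2 / (4 * lam * k_B_T))
    print(n_CH2, f"{k:.2e}")
# since k ~ H^2, each added CH2 reduces the rate by exp(2*beta) ~ 7, i.e., roughly an order
# of magnitude per carbon, consistent with the trend described above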

Wednesday, August 5, 2009

Some key ideas for biophysics from basic physics and chemistry

Section 1.5 of Biological Physics by Nelson lists

1.5.1 Molecules are small

He describes how in 1773, Benjamin Franklin estimated the linear size of a single molecule from observing that one teaspoon of oil covered half an acre of pond!
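The arithmetic behind the estimate (my numbers: a teaspoon is roughly 5 mL and half an acre is roughly 2000 m^2) gives a film thickness

t \approx \frac{V}{A} \approx \frac{5 \times 10^{-6}\ \mathrm{m}^3}{2 \times 10^{3}\ \mathrm{m}^2} \approx 2.5\ \mathrm{nm},

which is about the length of a single oil molecule.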

1.5.2 Molecules are particular spatial arrangements of atoms

1.5.3 Molecules have well-defined internal energies

1.5.4 Low-density gases obey a universal law

Boltzmann's constant is universal, i.e., it does not depend on the chemical or structural details of the system being described.

At room temperature

k_B T = 4.1 pN nm (most important formula in this book!)

I tend to think k_B T = 25 meV since this is a useful scale in solid state physics.
But, Nelson deliberately writes it in terms of a force scale (picoNewton) and a length scale (nanometer) that are relevant to the physics of cells.
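The two numbers are, of course, the same quantity in different units:

k_B T \simeq 4.1 \times 10^{-21}\ \mathrm{J} = 4.1\ \mathrm{pN\,nm} = \frac{4.1 \times 10^{-21}}{1.6 \times 10^{-19}}\ \mathrm{eV} \approx 25\ \mathrm{meV}.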

How to do better on exams (and discover new physical laws)

This is the title of section 1.4 in Phil Nelson's book, Biological Physics.
I really like the way he uses subsection titles which convey useful information. Here are some:

1.4.1 Most physical quantities carry dimensions

1.4.2 Dimensional analysis can help you catch errors and recall definitions

1.4.3 Dimensional analysis can also help you formulate hypotheses

I think something that we just have to keep "hammering" students on is keeping track of units at each step of a calculation and canceling them out.

The ten commandments of protein folding

It is always interesting to see what former students are up to. 15 years ago Bosco Ho was in an undergraduate Quantum Mechanics class I taught at UNSW.

While trying to find an answer to a student question on the PHYS3170 blog,
I found Bosco's Ten Commandments of Protein Folding.

Tuesday, August 4, 2009

Can we see visons?

The organic charge transfer salt kappa-(ET)2Cu2(CN)3 has attracted a lot of attention the past few years because there is significant experimental evidence that the ground state of the Mott insulating phase is a spin liquid.

I have been reading a very interesting theoretical paper by Qi, Xu, and Sachdev that presents a highly original (and exotic) explanation for thermal conductivity and nuclear magnetic resonance experiments on this material.

At low temperatures the NMR relaxation rate 1/T1 goes to zero as T^eta with eta ~ 1.5, and the thermal conductivity kappa(T)/T goes to zero in an activated fashion with a gap of about 0.5 K.

The authors propose the ground state is a Z2 spin liquid close to a quantum critical point with quasiparticles that are spin-1/2 bosons (spinons) and spinless bosons (visons).
It is shown that the spinons dominate the NMR relaxation and the visons the thermal conductivity.
The visons form a dilute Boltzmann gas with a bandwidth of about 8 K, which the authors claim corresponds to the peak observed in the heat capacity and thermal conductivity. Note that this bandwidth is only about 3 per cent of the exchange interaction J, which sets the energy scale for the spinons.

The visons correspond to low-energy singlet excitations and can be viewed as vortices in the Z2 gauge field
associated with a liquid of resonating valence bonds.

These are bold hypotheses.

I worry about how robust the thermal conductivity data is. Is there any chance that at these low temperatures the suppression is due to a decoupling of the magnetic excitations from the phonons, as was observed in cuprates and explained by Mike Smith? The experimentalists claim not.
The thermal conductivity data is inconsistent with the heat capacity data, i.e.,
kappa(T)/T does not extrapolate to a non-zero value. So at least one of them must be wrong.

Monday, August 3, 2009

Static versus dynamic correlations in quantum chemistry

On Friday we had a great visit from Weitao Yang. One thing (among many others) I learnt from him was his definition of static versus dynamic correlations in quantum chemistry. This is something that people talk about, but I have had trouble finding a clear definition that I can understand.
Weitao illustrated this using the two-site Hubbard model and the hydrogen molecule.

Near the equilibrium geometry of the molecule, U/t is not large and a Hartree-Fock wavefunction is qualitatively but not quantitatively correct. The corrections to this are dynamical correlations.

At large separations, U/t is large, the wavefunction approaches a Heitler-London state, and a Hartree-Fock wavefunction is qualitatively incorrect, e.g., it claims the H atoms will not bond to each other! This is static correlation.
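A minimal sketch of the illustration (my parametrisation; signs and conventions vary): exact diagonalisation of the two-site Hubbard model at half filling in the singlet sector, compared with the restricted Hartree-Fock energy E_HF = -2t + U/2.

import numpy as np

def exact_energy(t, U):
    # singlet sector: covalent state coupled to the symmetric ionic state
    H = np.array([[0.0, -2 * t],
                  [-2 * t, U]])
    return np.linalg.eigvalsh(H)[0]

t = 1.0
for U in [1.0, 4.0, 20.0]:            # small U ~ near equilibrium; large U ~ stretched bond
    E_exact = exact_energy(t, U)
    E_hf = -2 * t + U / 2
    print(U, E_exact, E_hf, E_hf - E_exact)
# the correlation energy E_hf - E_exact grows with U/t; at large U the exact energy approaches
# the Heitler-London value ~ -4 t^2/U while Hartree-Fock is qualitatively wrong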

Somehow I wonder if there is a way to quantify this distinction in terms of entanglement measures.

Why physicists and biologists need each other


I really enjoyed reading the rest of Chapter 1 of Biological Physics in preparation for Thursday's class discussion. Section 1.3 has a nice discussion of the relationship between the physical and biological sciences. Physicists seek the universal and simple in any system. In contrast, biologists, when confronted with the complexities of the biosphere, are more likely to emphasise "frozen accidents of history" and focus on details.

Figure 1.4 is nice but does not scan well and so I don't reproduce it here.

How does one synthesize these complementary approaches? First appreciate the value of each. Nelson suggests 3 steps for scientific advance:
"a) select a simplified but real model system for study
b) represent this system by a mathematical model with as few parameters and variables as possible.
c) deduce from the mathematical model some nonobvious, quantitative, and experimentally testable predictions."
He emphasises that a) and b) are inductive whereas c) is deductive.
a) and b) require a thorough knowledge of the biology.
Physicists need to be wary of proposing models that "lead to a large body of both theory and experiment culminating in irrelevant results."

Nelson points out that the best models may lead "to postulating entities whose very existence wasn't obvious from the observed phenomena."
Historical examples of this include:
Max Delbruck's deduction of the existence of a hereditary molecule (chapter 3)

the discovery of ion pumps and ion channels in cells (chapters 11, 12)

George Gamow's proposal to find the genetic code