Wednesday, September 30, 2015

A very useful formula: diagonalisation of 2 x 2 matrix

It is amazing to me how much of theoretical physics and chemistry comes down to diagonalising a 2 x 2 Hermitian matrix!
e.g. from Marcus-Hush theory of electron transfer to the BCS theory of superconductivity...

Yet I find I am often scrambling to get the algebra right. Finally, I have written down the eigenvalues and eigenvectors in a form that I find the most useful. I give them below (partly so I won't have to keep finding them...). The version below is actually taken from this paper
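For reference, here is the standard result in a generic notation (which may differ from that used in the paper). For the Hermitian matrix

H = ( ε_1  Δ ; Δ  ε_2 ),  Δ real,

the eigenvalues are

E_± = (ε_1 + ε_2)/2 ± √[ ((ε_1 − ε_2)/2)² + Δ² ],

and the corresponding eigenvectors are

|+⟩ = cos θ |1⟩ + sin θ |2⟩,  |−⟩ = −sin θ |1⟩ + cos θ |2⟩,  with tan 2θ = 2Δ/(ε_1 − ε_2).

(For complex Δ = |Δ| e^{iφ}, replace Δ by |Δ| above and attach a phase e^{−iφ} to the |2⟩ components.)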

Monday, September 28, 2015

What gets measured gets managed

I sometimes hear this is the first axiom of management.
It features at the end of an interesting article in The Economist, Digital Taylorism: A modern version of “scientific management” threatens to dehumanise the workplace

The article is stimulated by a recent controversial New York Times article that chronicled the way that Amazon treats its employees.

Just a few quotes to stimulate you to look at the Economist article
The reaction to the Times piece shows that digital Taylorism is just as unpopular as its stopwatch-based predecessor. Critics make some powerful points. “Gobbetising” knowledge jobs limits a worker’s ability to use his expertise creatively, they argue. Measuring everything robs jobs of their pleasure. Pushing people to their limits institutionalises “burn and churn”. Constant peer-reviews encourage back-stabbing. Indeed, some firms that graded their staff, including Microsoft, General Electric and Accenture, concluded that it is counter-productive, and dropped it.
Mr Pentland’s sociometric badges have produced some counter-intuitive results: for example, in a study of 80 employees in a Bank of America call centre, he found that the most successful teams were the ones that spent more time doing what their managers presumably didn’t want them to do: chatting with each other.
I wonder how this relates to universities.
Is it fair to say that “good” universities are a long way from Amazon?
Or is metric madness coupled with managerialism taking over?

Identity checks for theoretical physicists

How can a member of the general public find out if someone is a real theoretical physicist?
The end of this amusing clip from The Big Bang Theory gives the answer.

Saturday, September 26, 2015

If my name is on it then I need to sign off on it!

It is surprising and disappointing to me how this issue continues to come up. Hence, I thought I would write a post about it.

Here are a few concrete examples.
  • A student submits an abstract to a conference without the permission of co-authors.
  • A university vice-president signs a letter of support for the application of a faculty member for a prestigious fellowship. The letter was actually written by the applicant and contains ridiculous claims about the international status of the applicant.
  • The senior author of a paper allows a postdoc to submit a paper even though she has not actually looked at the manuscript because she is so “busy.” The manuscript is full of typos and the referencing is poor.
  • A Ph.D student submits a dissertation proposal to a departmental review committee even though he has not shown the document to his advisor.
  • An advisor shows the results of a student at a conference without letting them know.
  • A student gives a terrible talk at a conference. The advisor never saw a practice run of the talk or the slides beforehand.
Such incidents may reflect equally poorly on both the senior and junior people involved.
They vary from the unethical to the unprofessional to just lacking common courtesy.

On the one hand, I don’t think senior people should be micro-managing control freaks who are constantly reviewing and checking everything their “underlings” do.
The amount of liberty and trust that I give students, postdocs, and collaborators is based on my prior experience with them. For some I say, “I want you to give a practice talk before the conference” or “Don’t resubmit the paper without showing me the final version”. For others I give more liberty. I trust them and am willing to be responsible for any failings.

The key is open and honest communication.
When in doubt, don’t be hesitant to ask.
Always keep everyone involved in the loop. Too much communication and information is always better than too little.

Thursday, September 24, 2015

Should course pre-requisites be enforced?

This is a question that is sometimes discussed.
Can you study physics (biochemistry) if you have not taken a calculus (chemistry) course?
Should you be allowed to?

At UQ we actually have some solid data that allows a more meaningful discussion about the issue. The following text is taken from a recent review of the Bachelor of Science at UQ (page 125).
As some, but by no means all, students are aware, for about 15 years the university has not enforced completion of prerequisites. That is, students are free to enrol in any courses they choose, irrespective of whether or not they have previously enrolled in, or passed, any prerequisites.  Students generally are not actively advised that prerequisites are effectively optional. However, any student who explicitly asks is usually advised to complete prerequisites, and that they take a significant risk if they attempt a course without the necessary background knowledge. The lack of enforcement of prerequisites means that a significant number of students ignore the advice to complete prerequisites and proceed with advanced courses without having done the prerequisite(s). 
This results in difficulties for academics teaching the advanced courses and for the students attempting to catch up on the knowledge and/or skills they lack. There is also some concern about a duty of care to students: should they be allowed to enter a course without having completed a prerequisite when data suggest that a significant number will fail? This issue may become even more prominent in a deregulated market, with higher fees. A counter argument to this is that students can and should take responsibility for their own decisions. Provided the advice they receive is accurate, timely and clear, they can decide for themselves whether or not to follow the advice. 
To help answer the question of whether or not prerequisites should be enforced, an analysis of a number of second and third year courses was undertaken. Figure 114 shows student failure rates in such courses, broken down by whether or not students had completed the recommended prerequisite course(s).
The data show that the proportion of students attempting these courses without having completed the prerequisites ranges from 14% to 34%, with the average around 25%. There is a clear advantage to having completed the prerequisite, with lower failure rates amongst such students in all courses. Despite this, on average, 75% of the students who have not completed a prerequisite passed the follow-on course. In all cases, more than 50% of such students passed the course. In one case, almost 90% of such students passed the course.
This puts a somewhat positive spin on the problem. I think the fact that in many of the courses 30-40% of the students without the pre-requisite failed is a concern. A lot of time (both student and teacher) is being wasted. In particular, I know faculty who are frustrated by the demands, complaints, and questions of students who have not taken pre-requisites.

I don't have a strong view on this. I think skipping a pre-requisite is o.k. for gifted and motivated students who fully understand they will have to do significant work. However, for most students, particularly mediocre ones, I think it is a bad idea. Thus, I think pre-requisites should only be waived on a case-by-case basis. Unfortunately, that is labour intensive.

What about other institutions?
Based on a brief literature search I only found one paper that had looked at the issue.
Minimal Impact of Organic Chemistry Prerequisite on Student Performance in Introductory Biochemistry.

What do you think? Should pre-requisites be enforced?

Wednesday, September 23, 2015

A danger of students and postdocs getting career advice from faculty

It is natural that students and postdocs should come to faculty members to get career advice. Should I do a Ph.D? Should I do a postdoc? Should I leave academia?

Some faculty give excellent and balanced advice. Particularly, they give students a realistic picture of the [low] chances of a Ph.D (and postdoc) leading to an academic career, particularly at a leading university in the Western world. A good place to start the discussion is the data here. A related statistic to consider is that of the local department. What is the ratio of the Ph.D graduation rate to the faculty hiring rate? For example, in the School of Mathematics and Physics at UQ we currently have close to 100 enrolled Ph.D students. That means we graduate about 20-25 per year. We probably hire faculty at roughly the rate of 0-4 per year.

Unfortunately, some faculty don't exactly go out of their way to inform students and postdocs of these painful realities and/or they give the impression, either directly or subtly, that jobs outside academia are somehow second-rate.

However, I think there may be an additional subtle underlying psychological problem. This exists even when faculty give realistic and sober advice. The mere existence of a faculty member gives the message, particularly to wishful thinkers:
"I made it. I beat the odds. So you can you."

Monday, September 21, 2015

Emergence and singular asymptotic expansions, II

When is a phenomenon truly emergent?
Is there some objective quantitative criterion that one might use to decide?
This is an issue because sometimes discussions of emergence are pretty fuzzy and even flaky.

Here I want to highlight an article by Michael Berry that I mentioned in passing in a previous post.

I highly recommend the article as I think it has a very important insight: singular asymptotic expansions provide a concrete criterion for emergence.

Berry considers the specific problem:

He then discusses these examples in detail, including discussions of the asymptotic expansions.

I recommend reading this article before the one by Hans Primas (reviewed in the previous post) as the latter is more technical and philosophical than Berry's.

One thing I think this highlights is that the problem of emergence in quantum systems is neither more nor less challenging or interesting than in classical systems, something I argued before.

I have one minor addition to Berry. In quantum many-body systems the singular parameter delta may not just be 1/N, where N is the number of particles. It can also be the coupling constant, lambda. Emergent phenomena are associated with non-perturbative effects. Concrete examples are the BCS theory of superconductivity and the Kondo effect. In both there is an emergent energy scale proportional to exp(-1/lambda). There is no convergent expansion in powers of lambda; the Taylor series around lambda = 0 is singular.
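To make this concrete, the standard weak-coupling results (up to prefactors and conventions) are

Δ_BCS ∼ ħω_D exp[−1/(N(0)V)],  k_B T_K ∼ D exp[−1/(ρJ)],

where N(0)V and ρJ play the role of the dimensionless coupling lambda. The function f(λ) = exp(−1/λ) is the textbook example of the singularity: f and all its derivatives vanish as λ → 0⁺, so the Taylor series about λ = 0 is identically zero and contains no information about the emergent energy scale.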

Thursday, September 17, 2015

Desperately seeking triplet superconductors, II

Previously, I posted about the tricky problem of establishing experimentally that the Cooper pairs in an unconventional superconductor are in a spin triplet state. One basic (but far from definitive) signature is that the upper critical magnetic field is larger than the Clogston-Chandrasekhar limit [this is often called the Pauli paramagnetic limit, but I think that is a scientific misnomer].
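For orientation, the limit follows from equating the superconducting condensation energy with the Zeeman energy gained by the Pauli-paramagnetic normal state. For a weak-coupling BCS singlet superconductor,

μ_B H_P = Δ_0/√2,  with Δ_0 = 1.76 k_B T_c,  giving  H_P [tesla] ≈ 1.8 T_c [K].

A triplet superconductor with a suitably oriented d-vector pays no such Zeeman penalty, which is why an upper critical field well above this value is suggestive of triplet pairing.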

I am particularly interested in this problem because of recent theoretical work showing how triplet superconductivity may arise in a particular quasi-one-dimensional metal.

A new family of materials A2Cr3As3 [A=K,Rb,Cs] is attracting significant interest because some experiments show the desired high upper critical field.

I think the "first" paper is a Phys. Rev. X article with the title Superconductivity in Quasi-One-Dimensional K2Cr3As3 with Significant Electron Correlations

I am slowly trying to work through some of the literature. Here are a few observations. I welcome comments and corrections.

Sample quality.
This is usually a big issue in newly discovered strongly correlated electron materials. Unfortunately, this does not stop the rush to publish and make bold claims.
Many of the reported measurements are on polycrystalline samples not single crystals. A "Note added" in the Phys. Rev. X article concedes that the linear in T resistivity that they observed in polycrystalline samples is not seen by other authors. Nevertheless the abstract states
A linear temperature dependence of resistivity in a broad temperature range from 7 to 300 K is observed, which suggests non-Fermi liquid behavior.

Furthermore, this preprint notes
The different low temperature behavior observed in samples which have deteriorated after being exposed to air, emphasises that it is necessary to properly handle the samples prior to being measured because the A2Cr3As3 compounds are extremely air sensitive and evidence for nodal superconductivity from penetration depth measurements is only observed in the samples which display a sharp superconducting transition.

Quasi-one-dimensionality.
This is subtle. The crystal structure does contain chains of Cr3As3 with C_3 symmetry. [Double walled sub-nanotubes!]
However, electronic structure calculations suggest one three-dimensional Fermi surface sheet in addition to two quasi-one-dimensional Fermi surface sheets.
Unfortunately, this is not stopping people from already claiming experimental results "consistent with a Tomonaga-Luttinger liquid."

Strong correlations.
The abstract of the Phys. Rev. X article states
The material has a large electronic specific-heat coefficient of  70–75  mJ K−2 mol−1, indicating significantly strong electron correlations.
This is a weak statement. This coefficient is certainly large compared to elemental metals. However, the key issue is how large is this value compared to the value found from the density of states at the Fermi energy calculated in a "weakly correlated" band structure method such as a DFT approximation. In the text the authors report that the enhancement calculated this way is slightly larger than three. Some might say this is "moderate" rather than "strong" correlations.
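For reference, the standard logic of the comparison: in Fermi liquid theory the linear specific heat coefficient is

γ = (π²/3) k_B² N(E_F),

where N(E_F) is the total density of states at the Fermi energy, so the ratio γ_exp/γ_band = m*/m_band measures the interaction-driven mass enhancement. An enhancement of about three is indeed modest compared to, say, heavy fermion compounds, where it can exceed one hundred.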

Hund's rule coupling and minimal model effective Hamiltonian.
A three band model has been constructed and found to exhibit triplet p_z superconductivity at the RPA level.

NMR and spin fluctuations.
The measurements have been interpreted as evidence for Luttinger liquid behaviour, antiferromagnetic spin fluctuations, and unconventional superconductivity. On the second, it is a pity they don't have Knight shift data. Then one could have discussed the magnitude of the Korringa ratio.

Proximity to magnetism and possible parent compounds.
One might consider K2Cr3As3 as an electron doped version of KCr3As3. The latter has been studied both experimentally (the magnetic susceptibility exhibits Curie-Weiss behaviour, suggesting the presence of antiferromagnetically coupled local moments; it remains metallic with no superconductivity) and theoretically (leading to a "spin tube" model). There are some interesting and subtle issues associated with the coupling of the Cr spins within the triangles, somewhat reminiscent of an organic system some of my UQ colleagues have been studying.
Mike Norman briefly mentions K2Cr3As3 in a nice Physics article discussing the broader context of the interplay of helical magnetism and superconductivity in (three-dimensional) CrAs and MnP. Aside: They were the first Cr and Mn compounds ever found to superconduct.

Spin-locked superconductivity?
A paper reporting measurements of the anisotropic upper critical magnetic field up to 60 tesla claims that
The paramagnetically limited behavior of H∥c2(T) is inconsistent with triplet superconductivity but suggests a form of singlet superconductivity with the electron spins locked onto the direction of Cr chains.
I don't follow this at all. Surely, if you have a spin singlet there is no preferred direction for the electron spins. I must be missing something. Can someone explain?

Spin-orbit coupling
Electronic structure calculations suggest that 
Despite of the relatively small atomic numbers, the antisymmetric spin-orbit coupling splitting is sizable (≈ 60 meV) on the 3D Fermi surface sheet as well as on one of the quasi-1D sheets.
I welcome discussion as I am finding my way. 

Tuesday, September 15, 2015

Quantum biology?: the vitalism of Bohr, Schrodinger and Wigner

Ernst Mayr was one of the leading evolutionary biologists in the twentieth century and was influential in the development of the modern philosophy of biology. He particularly emphasised the importance of emergence and the limitations of reductionism. In his book, This is Biology: the Science of the Living World Mayr has the following paragraphs that are embarrassing to physicists.
Before turning to the organicist paradigm which replaced both vitalism and physicalism, we might note in passing a rather peculiar twentieth-century phenomenon-the development of vitalistic beliefs among physicists. Niels Bohr was apparently the first to suggest that special laws not found in inanimate nature might operate in organisms. He thought of these laws as analogous to the laws of physics except for their being restricted to organisms. Erwin Schrodinger and other physicists supported similar ideas. Francis Crick (1966) devoted a whole book to refuting the vitalistic ideas of the physicists Walter Elsasser and Eugene Wigner. It is curious that a form of vitalism survived in the minds of some reputable physicists long after it had become extinct in the minds of reputable biologists. 
A further irony, however, is that many biologists in the post-1925 period believed that the newly discovered principles of physics, such as the relativity theory, Bohr's complementarity principle, quantum mechanics, and Heisenberg's indeterminacy principle, would offer new insight into biological processes. In fact, so far as I can judge, none of these principles of physics applies to biology. In spite of Bohr's searching in biology for evidence of complementarity, and some desperate analogies to establish this, there really is no such thing in biology as that principle. The indeterminacy of Heisenberg is something quite different from any kind of indeterminacy encountered in biology.
Bohr's speculative foray into biology was not isolated. He made many highly speculative statements about the implications of quantum theory for other disciplines (including politics and religion) that were accepted uncritically and used inappropriately by some postmodernists. Mara Beller has chronicled these excesses in a provocative Physics Today article, "The Sokal Hoax: At Whom Are We Laughing?", and a book, Quantum Dialogue: the making of a revolution.

The hubris of physicists does not diminish with time. Earlier I posted about how (not) to break into a new field.

Monday, September 14, 2015

Against standardised CV formats

Increasingly people are asked in specific contexts to provide their CV in a very specific format. Previously, I noted the bizarre recent requirements of the NSERC in Canada.

At UQ there is a standard form "Academic portfolio of Achievement" that has to be completed for annual appraisals, applications for promotion, tenure, and sabbatical.
This is particularly arduous for senior people who have to do it for the first time. One cannot simply "cut and paste" a list of 100 publications.

Arguably, the value of every applicant putting their CV in the same format is that it makes it much easier for a committee to find specific information they require and to compare candidates.

Nevertheless, I used to think this was a bad idea because it simply wastes a lot of time as people reformat their CV for each new situation. However, I now think that there are other reasons why a standard format CV is a bad idea. Allowing applicants to write a CV in their chosen format may actually reveal something useful about the applicant, particularly their values and priorities.

To be specific, consider the following pieces of information that an applicant may or may not include in a CV.

  1. A clear, coherent, brief, and accessible statement about their specific research accomplishments.
  2. Lists of journal impact factors to 3 decimal places.
  3. A very long and detailed analysis of how they perform with regard to certain citation metrics.
  4. A list of courses they have taught and novel approaches they may have taken.
  5. No citation information at all (for a senior scientist).
  6. Scores for student teaching evaluations to several "significant" figures.
  7. Current employment of former students and postdocs.
  8. Long lists of "research interests", "computer skills", hobbies, ...
  9. Information about their high school academic record.

Including some of these would impress me. Including others would create a negative impression.

Aside: For people starting out, John Wilkins has a nice model CV for the first job after Ph.D.

Friday, September 11, 2015

Emergence and singular asymptotic expansions

Seth Olsen kindly lent me his copy of Chemistry, Quantum Mechanics, and Reductionism by Hans Primas, published in 1981. It has a Foreword by Paul Feyerabend.
[Primas died last October and there will be a symposium in his honour later this year]
This is a book I had wanted to read for a while since I had seen it referenced in various philosophical contexts. Besides some deep philosophy he has lots of polemical statements about theoretical chemistry.

Wanting to find an electronic version I could copy choice quotes from led me to a denser, broader, and more recent (1998) article, Emergence in exact natural science.

Here I mention a few highlights.
emergence and theory reduction are related.
Theory reduction is the process where a more general theory, such as quantum mechanics or special relativity, "reduces" in a particular mathematical limit to a less general theory such as classical mechanics. This is a subtle philosophical problem that is arguably poorly understood both by scientists [who oversimplify or trivialise it] and philosophers [who sometimes overstate the problem]. The subtleties arise because the two different theories usually involve concepts that are "incommensurate" with one another.
the distinction inside/outside is not covered by the most fundamental context-independent natural laws (first principles of physics). 
Here Primas stresses the sometimes "arbitrary" value judgements that are made in distinguishing a "system" and its "environment". This involves distinguishing "patterns" and invoking "symmetry breaking".  He introduces notions of topology to try and make such distinctions more rigorous. I found this too technical to appreciate.
 Many inter-theoretical relations can be mathematically described by asymptotic expansions. Singular asymptotic expansions are never uniformly convergent in the intrinsic topology of the basic theory. This nonuniformity is not a disaster but an indication that the limiting case represents a caricature, suppressing irrelevant details and enhancing contextually relevant features. The discontinuous change in the limit leads to a discontinuous change in the semantics and therewith to a description in a new language in terms of emergent properties. In the same sense as a photograph can never replace a brilliant caricature, an asymptotic description can – for the intended purpose – be more adequate than the exact description.
Michael Berry also has a 1994 article that takes a similar point of view.
The assertions  
“something consists of elementary systems”,   
“something can be decomposed into “elementary systems”, 
“something can be described in terms of “elementary systems”, 
are not equivalent. 
When a light wave passes an object, a typical discontinuity – called the shadow – can be observed. However, in Maxwell’s electrodynamics – the fundamental theory for the propagation of light – shadows do not exist. Maxwell’s electrodynamics is governed by partial differential equations which have only continuous solutions. The discontinuities associated with shadows appear only in geometric optics, the limiting case of vanishing wavelength λ, λ → 0.
He discusses at length how the notion of "molecular structure" in chemistry is an emergent concept. This relates to the issue of quantum entanglement between electrons and nuclei.
In a quantum theoretical description the molecular shape emerges by abstracting from the actually existing Einstein–Podolsky–Rosen correlations between the electrons and the nuclei. Historically, the structure concept has been introduced into quantum chemistry by the so-called Born-Oppenheimer approximation. But this terminology is misleading since the main issue is not an approximation, but the breaking of a holistic symmetry. A more proper appreciation of the Born–Oppenheimer-description stresses its singular nature: it is an expansion about the singular point of infinite nuclear masses. An asymptotic expansion can be formulated in terms of the ratio ε = (m/M)^{1/4}, where m is the mass of an electron and M is a mean nuclear mass of the molecular system. In the limiting case ε = 0 the holistic correlations between nuclei and electrons are suppressed so the description of a molecule reduces to the description of the motion of electrons in the electric field of a classical nuclear framework. In this description the molecular structure is a property described by an emergent classical observable. The singular limiting case ε = 0 leads to a discontinuous change in the description and is the starting point for an asymptotic expansion in terms of the emergent property at higher levels of description.
He then gives another example that was new to me.
The transition from the more fundamental Lorentz-relativistic quantum mechanics to Galilei-relativistic quantum mechanics is governed by the contraction of the Lorentz group to the Galilei group – a highly singular limit. While the Lorentz group is semisimple, the Galilei group is not but has a more complicated mathematical structure. The emergent quantity associated with this contraction is the mass in the sense of a classical observable (which commutes with all other observables and can therefore be treated as a real parameter).
He also discusses how the concept of temperature is emergent, emphasising the centrality of the zeroth law of thermodynamics. 

Thursday, September 10, 2015

An important but basic skill: bringing a paper to publication

In trying to turn research into an actual journal publication there are several stages at which the process can stall or be significantly delayed (sometimes by months or years).

* Combining, selecting, and condensing some specific research results into a publon with a well defined message.

* Writing a rough first draft.

* Polishing the draft into an acceptable form for submission to a journal.

* Revising and resubmitting the paper, possibly to a different journal, if rejected from one.

Moving beyond these obstacles can be a significant struggle even for senior scientists. Furthermore,  junior collaborators can be frustrated and anxious as they wait for action. Their survival and careers depend on getting papers published in a timely manner. I even know of cases of students who did not get a Ph.D because a manuscript or draft thesis just sat on the desk of their advisor.
I am also struck by the fact that I know senior people who have impressive publication records but if you talk to their collaborators, both senior and junior, you will hear how this basic skill has not been learnt or mastered.

Aside: I myself am far from perfect. I have not been as quick as I could/should be with some of my collaborations. I still have two papers on the arXiv that have never been published and currently have at least three manuscripts stalled on my laptop. I do take a little consolation/excuse from the fact that these are single author papers and so I am the only one suffering.

I don't have simple solutions but do offer several suggestions. The first is by far the most important.

You must learn to do this yourself. Don't wait for others to do it for you. Take charge. Be responsible.

Don't be shy about bugging your collaborators to move things along.
I know this can be difficult for junior people [graduate students and postdocs] from countries and cultures that are overly deferential to seniority and authority figures. Don't just email.
Knocking on doors and talking in person helps... even if that means getting on a plane to the other side of the world. I am struck by how some of my collaborations move forward just because my co-authors know that I am about to visit.

Consider your own possible underlying psychological issues such as perfectionism, procrastination, lack of confidence, laziness, or fear of offending authority.

I welcome other suggestions on how to develop this important but basic skill.

Tuesday, September 8, 2015

One mutation may drive you insane or make you smarter

I like to say, "physicists don't care about the details, chemists say the details do matter, and biologists say the details are a matter of life and death."

Bill Parson recently told me about a striking example of how one mutation [substituting a single amino acid for a different one in a protein] can really change things, for better or for worse. I reproduce below the relevant paragraph from this paper.

Crystal Structures of Human 108V and 108M Catechol O-Methyltransferase 
K. Rutherford, I. Le Trong, R.E. Stenkamp, W.W. Parson

Human COMT contains a common polymorphism at residue 108, which can be either valine (V) or methionine (M) [22-24]. Approximately 25% of U.S. and Northern European Caucasians are homozygous for the 108M allele, which is much less common in African and Asian populations [25,26]. The 108M allele has been linked with increased risk for breast cancer [27-30], obsessive–compulsive disorder [31,32], some manifestations of schizophrenia [33-37], and increased sensitivity to pain [38,39], but it has also been linked with improved prefrontal cognitive function, especially in working memory [40,41]. It appears to be linked particularly strongly to neuropsychiatric dysfunction in velocardiofacial syndrome, a chromosomal disorder resulting from the deletion of multiple contiguous genes, including COMT, from one copy of chromosome 22 [37,42,43]. A haplotype involving the 108V allele and two noncoding COMT polymorphisms has been associated with increased risk for schizophrenia, possibly because it results in diminished mRNA expression or translation [44-48].

One might wonder about connections with VHL disease, which it has been suggested may play a role in the Hatfield-McCoy feud. I first learned about this famous family feud in the book Outliers.

Monday, September 7, 2015

How robust are your tight-binding model parameters?

In a previous post I discussed the problem of extracting reliable parameters for tight-binding (and Hubbard) models from ab initio band structure calculations. My comments then were influenced by the figure below, which has now appeared on the arXiv in a short review by Anthony Jacko.

First, the band structure for a specific organic material was calculated using a density functional theory (DFT) based approximation. The energy dispersion relations were then fit to a tight-binding model involving 8 different hopping integrals, t0, t1, ..., t7.

The horizontal axis indexes the 8 integrals, the vertical axis shows their values determined from a range of different fits, using slightly different fitting methods and different runs of the fitting algorithm. 

Note the significant differences.
Thus, caution is in order if one uses the common practise of simply performing one fit [which may look impressive to the naked eye].
Jacko notes that this is like getting the elephant's trunk to wiggle.
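The sloppiness of such fits can be illustrated with a toy one-dimensional example: fitting the same dispersion over different ranges of k-points, with a little noise, gives noticeably different longer-range hoppings even though both fits reproduce the band well. This is only a cartoon of the problem, not Jacko's actual procedure; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" 1D tight-binding dispersion with three hoppings (hypothetical values, in eV)
t_true = np.array([0.10, 0.04, 0.01])

def dispersion(k, t):
    # E(k) = -2 * sum_n t_n cos(n k) for a 1D chain with longer-range hopping
    return -2 * sum(tn * np.cos((n + 1) * k) for n, tn in enumerate(t))

def fit_hoppings(k, E, n_params):
    # Linear least-squares fit of E(k) to a cosine (hopping) basis
    basis = np.column_stack([-2 * np.cos((n + 1) * k) for n in range(n_params)])
    t_fit, *_ = np.linalg.lstsq(basis, E, rcond=None)
    return t_fit

# Fit 1: k-points across the whole Brillouin zone
k_full = np.linspace(-np.pi, np.pi, 200)
E_full = dispersion(k_full, t_true) + 1e-3 * rng.standard_normal(k_full.size)
t_full = fit_hoppings(k_full, E_full, 6)

# Fit 2: same model and noise level, but only k-points near the zone centre
k_part = np.linspace(-0.3 * np.pi, 0.3 * np.pi, 200)
E_part = dispersion(k_part, t_true) + 1e-3 * rng.standard_normal(k_part.size)
t_part = fit_hoppings(k_part, E_part, 6)

# Both fits look fine to the eye, but the hopping parameters disagree
print(t_full[:3], t_part[:3])
```

The point of the toy model: the cosine basis functions are nearly degenerate over a restricted k-window, so tiny amounts of noise get amplified into large spreads in the fitted hoppings.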

As Jacko stresses in the review, the most reliable and physically transparent way to determine the tight-binding hopping integrals is to construct Wannier orbitals and then directly calculate the integrals. The results for that procedure are shown in green. The red curve is the global best fit.

Saturday, September 5, 2015

The challenge of excited state proton transfer

What is excited state proton transfer (ESPT)?
Consider a hydrogen bond A-H...B in a molecular system.
Suppose the system absorbs a photon (usually in the visible to near UV range) and undergoes a transition to an electronic excited state. In most cases A-H is an organic molecule containing conjugated bonds and the transition is a pi to pi* transition. Then on the time scale of picoseconds [within a factor of one thousand] the proton transfers from the donor A to the acceptor B,
i.e. (A-H)*...B evolves to something like (A-)*...(H-B)+.
If A and B are part of the same molecule then this is intramolecular ESPT.
If A and B are distinct molecules then this is intermolecular ESPT.
If A-H is dissolved in water, and significant ESPT occurs then A-H is called a photoacid.

I have started to work on this rich and diverse subject.
My goal is to develop several simple diabatic state models that might give a more unified picture of the phenomena and provide some physical insight. Given the chemical complexity, this may be a mistake, reflecting a physicist's naivety and/or hubris. But I am encouraged by the "success" of the simple two diabatic state model that I have promoted for hydrogen bonding (and proton transfer) in the ground state.

I am working my way through the extensive chemical literature and so here is my attempt to organise some of what I have learnt. Comments and corrections are particularly welcome.

In a short review [focusing mostly on solvent effects] from 1986 Michael Kasha presented the following picture. It shows the energy of the ground state (S_0) and the excited state (S_1) as a function of the hydrogen co-ordinate Q_H. For example this might be an OH stretch.

One can clearly see that in the excited state proton transfer is both energetically and kinetically more favourable. What might a diabatic state model look like?
The ground state surface could be described in terms of the usual two diabatic states: A-H,B- and A-,H-B. Similarly, the excited state surface could be described in terms of a separate but analogous model involving two diabatic states that differ by transfer of a proton.
The difference between the two models is simply the relative energy of the two diabatic states, i.e. the relative proton affinity of the donor and acceptor is reversed between the ground and excited states.
Furthermore, the barrier to proton transfer could be reduced, or even removed, if the coupling of the two diabatic states increases in the excited electronic state. This could happen if the donor-acceptor distance is reduced in the excited state.
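This barrier-reduction mechanism is easy to see in a minimal numerical sketch of a two-diabatic-state model. Everything below is illustrative: harmonic diabats, made-up energies and couplings, and a dimensionless proton co-ordinate Q.

```python
import numpy as np

# Two-diabatic-state sketch of proton transfer (all parameters hypothetical)
Q = np.linspace(-1.5, 1.5, 601)

def adiabatic_ground(Q, eps, delta):
    # Diabatic states: harmonic wells centred at Q = -1 (A-H...B) and Q = +1 (A-...H-B),
    # offset in energy by eps and coupled by delta
    E1 = 0.5 * (Q + 1.0) ** 2
    E2 = 0.5 * (Q - 1.0) ** 2 + eps
    # Lower eigenvalue of the 2x2 Hamiltonian [[E1, delta], [delta, E2]]
    return 0.5 * (E1 + E2) - np.sqrt(0.25 * (E1 - E2) ** 2 + delta ** 2)

def barrier(E):
    # Barrier height relative to the deeper minimum, from the interior maximum
    i_max = np.argmax(E[100:-100]) + 100  # avoid the edges of the grid
    return E[i_max] - E.min()

weak = adiabatic_ground(Q, eps=0.0, delta=0.05)
strong = adiabatic_ground(Q, eps=0.0, delta=0.4)

# Increasing the diabatic coupling lowers (and can eventually remove) the transfer barrier
print(barrier(weak), barrier(strong))
```

Changing the sign of eps between the ground- and excited-state models captures the reversal of the relative proton affinity described above.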

This natural "explanation" of ESPT was widely promoted for a long time, probably going back to Weller in 1952. The basic idea is that in the excited state there is charge redistribution leading to weakening of the O-H bond, making it easy for the H to "pop off". A related claim is that in a photoacid the pKa of the excited state is much less than that of the ground state.

However, there are multiple problems with the picture presented above.

A. It is arguably not really an explanation but a description. It almost says "ESPT happens because ESPT happens." Specifically, it does not really explain why the relative energy of the donor and acceptor diabatic states reverses upon photo excitation.

B. It assumes there is no relationship (or interaction) between the ground and excited electronic states. In reality they can be intimately connected. Striking examples include that of twin states or resonance assisted H-bonds, such as in malonaldehyde.

C. Based on the energy surfaces above, Förster presented a simple equation relating the S0-S1 energy difference (and the associated absorption and emission frequencies) of the two tautomers [i.e. molecules differing in the location of the proton] to the pKa's [a measure of acidity] in the ground and excited states.
However, Tolbert and Solntsev report many violations of this equation.
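For concreteness, the Förster cycle estimate can be sketched numerically: a thermodynamic cycle gives pKa* - pKa = (E_base - E_acid)/(kT ln 10), where E_acid and E_base are the S0 to S1 transition energies of the protonated and deprotonated forms. The wavenumbers below are purely illustrative, not taken from any specific molecule.

```python
import numpy as np

# Förster cycle estimate of the excited-state pKa shift (illustrative numbers only)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light, cm/s
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 298.0            # temperature, K

def pKa_shift(nu_acid_cm, nu_base_cm):
    # pKa* - pKa from the S0 -> S1 transition wavenumbers (cm^-1) of acid and conjugate base
    dE = h * c * (nu_base_cm - nu_acid_cm)
    return dE / (kB * T * np.log(10))

# If the deprotonated form absorbs 3000 cm^-1 to the red of the acid,
# the excited state is a much stronger acid (pKa* well below pKa)
print(pKa_shift(30000.0, 27000.0))
```

A red shift of a few thousand wavenumbers translates into a pKa drop of several units, which is why modest spectral shifts can turn a weak acid into a strong photoacid.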

D. High-level quantum chemistry calculations for specific molecules that do exhibit ESPT find that, for some of them, there is little charge redistribution in the excited state relative to the ground state; more importantly, the proton affinity does not necessarily change significantly.

E. It may be omitting a role for different excited states (e.g. charge transfer states or n-pi* states) and conical intersections.

D. and E. are emphasised in this calculation by Granucci, Hynes, Millié, and Tran-Thi.

E. is emphasised by Sobolewski and Domcke who present the diabatic state picture below for cases where the proton transfer is coupled to an electron transfer.

A particularly interesting and widely studied case of ESPT is in the green fluorescent protein (GFP). More on that later...

I thank Seth Olsen for introducing me to some of the literature. If some of the above is not as coherent as it might be that is because of my limited reading and understanding. But, I think it also reflects the diversity of the subject and the lack of a comprehensive picture.

I welcome comments.

Thursday, September 3, 2015

A transition in university values: from scholarship to money to status

It is hard to make meaningful or reliable generalisations about social trends in a complex world. But, I do want to try. In particular, I would like to suggest that the values that drive university decisions [e.g. about hiring, promotions, and allocation of resources] have shifted in the last twenty years. Here are some potted historical observations, based largely on Australian and US universities.

The scholarship era (roughly before the 1960s).
People were hired and promoted largely based on letters of reference that evaluated the scholarly contributions of the individual. The emphasis was on quality not quantity.
Student tuition was either affordable (in the USA) or non-existent (in Australia).
Most administrators were faculty (many on secondment, i.e. temporary) with distinguished scholarly records. The disparity between faculty and senior administrator salaries was small.
Departments across the university had roughly equal influence and status. In particular, the humanities [history, literature, philosophy....] were respected and valued.
The only people getting grants were those who really needed them and it was easy to get them. Research groups were small.

The money era (roughly the 70s to 90s)
This coincided with the rise of MBAs and neoliberalism.
The number of "research universities" dramatically increased. Higher education became "massified" and a "sector" in the economy. Australian universities received significant income from international students. Departments fought each other for EFTSUs [Equivalent Full Time Student Units] because these determined departmental income. The best curriculum for students [e.g. engineering students taking physics courses or chemistry students taking mathematics courses] became of tangential importance. In the USA the total funding income of an individual had a significant effect on hiring, tenure, and promotion decisions. Publication rates and total "outputs" became important.
Administration became a [highly paid] career trajectory. Faculty became a minority among the university employees.
The internal influence of the humanities declined because they did not bring much money into the university. Science and engineering had much more clout.

The status era (the 21st century)
This coincided with a rise in metrics, rankings, and luxury journals.
Grants are no longer all equal. Getting a grant is difficult, and so just getting one is important for your "status" and career, even if you don't really need it, or if the dollar amount is relatively small. Furthermore, some grants have a higher status than others, particularly those with low success rates. In Australia a Future Fellowship helps you get promoted and in the USA an NSF CAREER award helps you get tenure. It is not just the money. It is the status.
The humanities have regained some status and influence because their faculty can win prizes, publish books with Oxford and Cambridge UP, or win "prestigious" fellowships.
I think basic science has also increased its influence and status.
[Personally, my career struggled in Australia in the 90s and took off after 2000 and I think this is largely due to an environmental transition not my own merits. My grant income or scientific output did not change significantly during this time].
"High profile" faculty may not "pay their way" in terms of grant or student income, but they are perceived (arguably wrongly) to help climb the rankings. Faculty who teach large numbers of students [which generates significant income] or get large $ industrial grants are appreciated less. Letters of reference play a much less influential role. Sometimes they are not even called for or if they are they may not even be read.

I freely acknowledge that scholarship, money, and status are not completely decoupled from one another. But, the question is which is the dominant value.

What do you think? Are these reasonable historical observations?

Wednesday, September 2, 2015

There is no metal-insulator transition in extremely large magnetoresistance materials

There is currently a lot of interest in layered materials with extremely large magnetoresistance [XMR], partly stimulated by a Nature paper last year.
The figure below shows the data from that paper, which is my main focus in this post.

A recent PRL contains the following paragraph

A striking feature of the XMR in WTe2 is the turn-on temperature behavior: in a fixed magnetic field above a certain critical value Hc, a turn-on temperature T* is observed in the R(T) curve, where it exhibits a minimum at a field-dependent temperature T*. At T&lt;T*, the resistance increases rapidly with decreasing temperature while at T&gt;T*, it decreases with temperature [2]. This turn-on temperature behavior, which is also observed in many other XMR materials such as graphite [19,20], bismuth [20], PtSn4 [21], PdCoO2 [22], NbSb2 [23], and NbP [24], is commonly attributed to a magnetic-field-driven metal-insulator transition and believed to be associated with the origin of the XMR [10,19,20,23,25].

My main point is that this temperature dependence and the "turn-on" has a very simple physical explanation: it is purely a result of the strong temperature dependence of the charge carrier mobility (scattering rate), which is reflected in the temperature dependence of the zero field resistance.
It is completely unnecessary to invoke a metal-insulator transition.
The "turn on" is really a smooth crossover.
I made this exact same point in a post last year about PdCoO2  and in this old paper.

Following the discussion [especially equation (1)] in the Nature paper, consider a semi-metal that has equal densities of electrons and holes (n=p). For simplicity assume they have the same temperature dependent mobility mu(T). Then the total resistivity in a magnetic field B is given by

rho(T,B) = [1/(2 n e)] [1/mu(T) + mu(T) B^2]

Differentiating this expression with respect to temperature T, at fixed B, one finds that the resistance has a minimum at a temperature T* given by

mu(T*) B = 1
Further justification for this point of view should come from a Kohler plot:
A plot of the ratio of the rho(T,B)/rho(T,B=0) versus B/rho(T,B=0) should be independent of temperature.
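A minimal numerical sketch of this two-band compensated model makes the point. The mobility function and carrier density below are illustrative, not fitted to WTe2 or any other material.

```python
import numpy as np

# Two-band compensated semimetal: n = p, equal mobilities (all parameters illustrative).
# rho(T,B) = (1/(2 n e)) * (1/mu(T) + mu(T) B^2), so the resistance minimum ("turn-on")
# occurs where mu(T*) B = 1 -- a smooth crossover, not a metal-insulator transition.

e = 1.602176634e-19   # electron charge, C
n = 1e26              # carrier density, m^-3 (illustrative)

def mu(T):
    # Strongly temperature-dependent mobility, mimicking phonon-limited scattering
    return 100.0 / (1.0 + (T / 10.0) ** 3)   # m^2 / V s

def rho(T, B):
    return (1.0 / (2 * n * e)) * (1.0 / mu(T) + mu(T) * B ** 2)

T = np.linspace(1.0, 100.0, 2000)
B = 0.5   # tesla
T_star = T[np.argmin(rho(T, B))]

# The numerically located resistance minimum satisfies mu(T*) B = 1
print(T_star, mu(T_star) * B)
```

Because rho(T,B)/rho(T,0) = 1 + [mu(T) B]^2 depends on B only through B/rho(T,0), this model automatically obeys Kohler's rule, which is why the Kohler plot is the natural test of this interpretation.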

In specific materials there will be further complications associated with spatial anisotropy, unequal and temperature-dependent electron and hole densities, tilted Weyl cones, chiral anomalies, .... However, the essential physics should be the same.

XMR is due to simple (boring old) physics: extremely large mobilities at low temperatures are due to very clean samples and in some cases, near perfect compensation of electron and hole densities.