Friday, February 28, 2014

A model public lecture

Scientists giving public lectures face a formidable challenge. The lecture needs to be interesting, exciting, and accessible to a broad audience. Hopefully, the speaker can communicate something not just about science but also about how science is done.

Yesterday I attended a very nice public lecture at UQ. Professor Ullrich Steiner from Cambridge spoke on How Nature Makes Materials. It nicely bridged physics, chemistry, and biology. His work on photonic structures in nature is described here.

The plant seed shown on the right is particularly amazing. Steiner and colleagues found a fifty-year-old one in a museum in Cambridge.

One thing, among others, that I appreciated was the lack of hype and the sober assessment of what his work in biomimetics has achieved. Sometimes it has provided insight into how biological systems make and utilise specific materials. Some of the biomimetic materials and structures his group has made have some of the desirable features of their natural counterparts. But most are not yet commercially viable.

One minor comment.
In the introduction and motivation the notion was presented that evolution has produced optimal structures. I disagree with this commonly promoted claim.
Evolution does not optimise everything.
It just produces structures that work well enough, together with many other components, to increase the probability of survival.

Thursday, February 27, 2014

Some basics for protecting your mental health

Late last year I gave a talk in Canberra for a group of scientists at CSIRO [Australia's government industrial research organisation]. This got a lot of positive feedback, and the associated blog post got a lot of page views. Several people told me the slide below is very helpful and so I post it here.
A group at CSIRO at Dutton Park [just over the river from UQ] has asked me to come and speak next month on the issue.

I would rather be getting science speaking invitations, but if this is helpful to others I am happy to do it.

Wednesday, February 26, 2014

Two historical questions about incoherent excitations

I believe that one of the most important concepts in quantum many-body physics is that of quasi-particles and the associated incoherent excitations. Often the one-particle spectral function can be written in the form

A(k, omega) = Z_k delta(omega - epsilon_k) + A_incoh(k, omega)

where the first term is a well-defined peak associated with quasi-particles, with total spectral weight Z_k.
The second term describes incoherent excitations: it has a weak dependence on the momentum k and, as a function of omega, is a broad distribution, in contrast to the sharp quasi-particle peak.
Furthermore, due to a sum rule [conservation of particle number] the total spectral weight of the incoherent part is 1-Z_k.

I think this equation is one of the most profound and important results in quantum many-body theory.
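
The sum rule can be checked with a toy numerical example [my own illustration with made-up parameters, not taken from any real material]: a Lorentzian quasi-particle peak carrying weight Z plus a flat incoherent background carrying the remaining 1 - Z.

```python
import numpy as np

# Toy spectral function: Lorentzian quasi-particle peak of weight Z
# plus a flat incoherent background of weight 1 - Z (all numbers invented).
Z, eps_k, gamma, W = 0.3, 0.5, 0.05, 10.0

def A(omega):
    qp = (Z / np.pi) * gamma / ((omega - eps_k) ** 2 + gamma ** 2)
    incoh = ((1 - Z) / W) * (np.abs(omega) < W / 2)
    return qp + incoh

omega = np.linspace(-50.0, 50.0, 200001)
total = np.sum(A(omega)) * (omega[1] - omega[0])
# Sum rule [particle number conservation]: total spectral weight is unity.
print(round(total, 2))  # -> 1.0
```

Sharpening or broadening the quasi-particle peak redistributes the weight in omega, but the total integral stays pinned at one.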

Some of this is illustrated in the figure below taken from a Nature Physics commentary by Nandini Trivedi.
The above equation and concepts have come to the fore over the past two decades due to extensive studies of strongly correlated electron materials, particularly via dynamical mean-field theory and experimental ARPES [Angle-Resolved PhotoEmission Spectroscopy] studies.

I have two historical questions I am struggling to find answers for:

1. When and by whom was the equation above first clearly written down and elucidated?

I suspect sometime in the 1950s or 1960s by Landau, Pines, Nozieres, Kadanoff, Baym, or Hubbard?
I looked in AGD [Abrikosov, Gorkov, and Dzyaloshinski] and several other old books but could not find it.

2. When was the first time that the incoherent part of the spectral function was definitively observed in an experiment?

By this I don't mean just seeing some background [that could be noise], but actually showing that the incoherent background has the weight 1-Z_k. I presume an ARPES experiment in the past two decades.

I should know this, but just can't quickly find the answer.

Tuesday, February 25, 2014

A simple model potential energy surface for double proton transfer

I love simple models.

There is a very nice paper
Correlated double-proton transfer. I. Theory
Zorka Smedarchina, Willem Siebrand, and Antonio Fernández-Ramos

It considers an incredibly simple potential energy surface to describe double proton transfer.
x_1 is the (dimensionless) position of one proton relative to the middle of its donor and acceptor.
x_2 is the corresponding position for the second proton.
The first term describes a quartic potential with an energy barrier for transfer of the proton between the donor and acceptor.

The dimensionless parameter G describes the extent of correlation or coupling between the two hydrogen bonds. The coupling term is chosen to have the important property that it is symmetric in the two co-ordinates but sensitive to their sign. This is an important difference from earlier [rather nice] work by Benderskii et al., who considered competition between two-dimensional quantum tunneling paths [instantons] associated with concerted and sequential transfer.

Three types of potential energy surface (PES) can occur, depending on the value of G.
The three cases shown below correspond to
a) 0 < G < 1/2
b) 1/2 < G < 1
c) G > 1.
The coordinates are x_a = (x_1 - x_2)/2 and x_s = (x_1 + x_2)/2. The horizontal line between the left and right minima corresponds to a concerted path.
For the top PES a sequential proton transfer is possible via one of the two intermediate (INT) states.

These three cases correspond to the different types of PES's seen for double proton transfer for different molecules.
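
The change of PES topology with coupling strength can be illustrated numerically. Since the model equation [an image in the original post] is not reproduced here, the sketch below uses a guessed functional form: two quartic double wells coupled by a bilinear term -2*G*x1*x2 [symmetric under exchange of the coordinates but sensitive to their relative sign]. The G values at which the topology changes differ from those in the paper, but the qualitative behaviour is of the same kind: weak coupling leaves two stable states plus two intermediate (INT) minima, while strong coupling removes the intermediates.

```python
import numpy as np

def pes(x1, x2, G):
    # Hypothetical PES: a quartic double well for each proton plus a
    # bilinear coupling -2*G*x1*x2 (not the paper's exact form).
    return (1 - x1**2)**2 + (1 - x2**2)**2 - 2 * G * x1 * x2

def count_local_minima(G, n=201, L=1.6):
    # Count strict local minima of the PES on the interior of a grid.
    xs = np.linspace(-L, L, n)
    V = pes(xs[:, None], xs[None, :], G)
    inner = V[1:-1, 1:-1]
    is_min = np.ones_like(inner, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            is_min &= inner < V[1 + di:n - 1 + di, 1 + dj:n - 1 + dj]
    return int(is_min.sum())

# Weak coupling: two stable states plus two INT minima; strong coupling:
# the intermediates have disappeared and only the two stable states remain.
print(count_local_minima(0.1), count_local_minima(1.5))
```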

Monday, February 24, 2014

Teaching innovation: one step forward, one step backward

It is great to try new things in teaching. We desperately need to, if we are honest about how little many students actually learn, particularly with traditional modes of delivery. Technology also makes all sorts of new things possible.

People will often promote innovations; but sometimes, a few years later, it turns out they don't work as well as they once did or as well as was hoped. Yet I suspect that sometimes, because of disappointment or embarrassment, the proponents are a bit coy about publicising such regressions.
So in the interest of transparency, and to promote discussion, here are a couple of mine. They both relate to a course, PHYS4030: Condensed Matter Physics, that I have taught on and off for the past ten years. It is a final-year undergraduate course that covers approximately half the material in Ashcroft and Mermin.

A few years ago I introduced three innovations. All three seemed to work for a while.

Formative and summative assessment following the example of another undergraduate course I was involved in. Students must complete a certain minimum amount of work [attendance, writing on the course blog, assignments, ...] to pass the course but this has little effect on their final grade.

A course blog that students must post and comment on several times a week. I did this because it had worked earlier in a biophysics reading course I taught.

Students give a talk on a recent research paper [often from a luxury journal] that relates to the course.

I won't be doing any of the three this year.
Why? Basically, for the same reason as
Confession of an Ivy League teaching assistant: Here’s why I inflated grades
It is not worth the hassle of dealing with student complaints.

It seems some students think they should get "credit" for all the work they do. Some read the course profile to mean they could make all their posts on the blog in the last week. Some didn't see why they should have to attend the paper talks of their classmates. Some also had rather different views to me about what grade they should get for their talks.

So, for now I am reverting to the old traditional assessment: exams and assignments.

I welcome comments and suggestions.

Friday, February 21, 2014

Extracting the self energy from ARPES

I read an interesting PRL
High-Energy Anomaly in the Band Dispersion of the Ruthenate Superconductor
H. Iwasawa, Y. Yoshida, I. Hase, K. Shimada, H. Namatame, M. Taniguchi, and Y. Aiura

They perform ARPES [Angle Resolved Photoemission Spectroscopy] on strontium ruthenate [Sr2RuO4]. Some of the main results are shown below [the vertical scale is energy].
The key issue is understanding how the measured quasi-particle dispersion (left panel) differs from the band structure calculated from LDA [Local Density Approximation of Density Functional Theory (DFT)].
Where the two curves cross is the "high energy anomaly". This is very much related to "kinks" and "waterfalls" in the cuprates, as I discussed in an earlier post.

The spectrum is compared to a very simple model self energy (right panel) that is consistent with Fermi liquid theory and includes a "cut off" energy scale associated with the underlying interactions [bosons?, magnons?, electron-electron?] that are the origin of the self energy.
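
The basic extraction can be sketched in a few lines [this is a schematic of the standard procedure with invented numbers, not the authors' actual analysis]: the real part of the self-energy is the difference between the measured quasi-particle dispersion and the bare [LDA-like] band at the same momentum.

```python
import numpy as np

# Linear bare band and a quasi-particle band renormalised by Z (toy numbers).
v_bare, Z = 2.0, 0.5
v_qp = Z * v_bare

k = np.linspace(-0.5, 0.0, 100)   # momenta below k_F = 0
omega_qp = v_qp * k               # "measured" quasi-particle dispersion
eps_bare = v_bare * k             # bare [LDA-like] band at the same momenta
re_sigma = omega_qp - eps_bare    # Re Sigma(omega) along the dispersion

# For a linear band, d(Re Sigma)/d(omega) = 1 - 1/Z, so the quasi-particle
# weight is recovered from Z = 1 / (1 - slope).
slope = np.polyfit(omega_qp, re_sigma, 1)[0]
print(round(1.0 / (1.0 - slope), 3))  # recovers Z = 0.5
```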

The solid black curve in the right panel above is from a theoretical calculation [self-consistent perturbation theory and DMFT on the relevant multi-band Hubbard model with Hund's rule coupling]. [A 2000 PRL by Liebsch and Lichtenstein].
It is very impressive that this agrees with the experiment.
This shows how good both ARPES and Dynamical Mean-Field Theory are getting.

I found some of the discussion of theory in the paper poor and confusing. For example, I failed to see how Zhang-Rice singlets are relevant to the ruthenates.
But the biggest concern was that they make a big deal of the value of the energy of the anomaly and try to compare it to other energy scales such as the Hubbard U and the Hund's rule coupling J. This is simplistic. The whole point of the work described in my earlier post is that this energy scale is emergent and does not have a simple relationship with the energy scales of the underlying interactions.

I would also have liked to see a comparison with the recent LDA+DMFT calculations of Jernej Mravlje and collaborators that I discussed here.

Thursday, February 20, 2014

What is the minimal one-band Hamiltonian for sodium cobaltates?

I would say an ionic Hubbard model on the triangular lattice.

About a decade ago sodium cobaltate [NaxCoO2] was the "flavour of the month" when it came to strongly correlated electron materials. And then along came the iron pnictide superconductors....

The cobalt ions within a layer of the crystal structure form a triangular lattice and the sodium ions donate electrons to conducting layers. Hence, it is natural to consider a doped Hubbard model on a triangular lattice as the simplest possible effective Hamiltonian for these materials.
This led to numerous studies of this model. Today most studies of this model will also claim relevance to sodium cobaltates. I disagree.

The sodium ions play a significant role that cannot be neglected: they actually modify the intra-layer electronic structure. Specifically, they spatially order in a manner dependent on the doping level.
This is unlike the case of the cuprates where the atoms between layers [and dopants] are merely spacers and do not change with doping.

Jaime Merino, Ben Powell, and I discussed this in a series of papers, discussed in an earlier post. In a 2006 PRB we considered the simple one band Hubbard model and pointed out that it could not describe all the cobaltates properties. The figure below shows how the spatial ordering of the sodium ions at different dopings produces different site energies on the triangular lattice.
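
For concreteness, the generic form of such a model is the standard ionic Hubbard Hamiltonian [written here from memory, not copied from our paper], with the site energies epsilon_i on the triangular lattice set by the sodium ordering pattern:

```latex
H = -t \sum_{\langle ij \rangle, \sigma} \left( c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_i n_{i\uparrow} n_{i\downarrow}
    + \sum_{i\sigma} \epsilon_i \, n_{i\sigma}
```

The pattern of the epsilon_i changes with the sodium doping x, which is precisely why a single doped Hubbard model [all epsilon_i equal] is insufficient.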

Tuesday, February 18, 2014

Reviewing grants: a report from the coal face

I recently reviewed a bunch of grant applications from several different countries, and so I thought it might be interesting to compare the approaches of different funding agencies and offer some general comments.

First, I don't review everything I am asked to. It just takes too much time. But I do try to be a good citizen. I do make a particular effort in two cases: when I really like the work of the person or when I think the proposal is shoddy and should not be funded, but may have a chance because of luck/politics/hype.
Lately I am receiving a lot more proposals. I fear this may be because of my increased profile due to this blog.

Getting international expert reviews for funding agencies is an increasing challenge. Yet it is absolutely crucial to making sure that money is allocated in the best manner, particularly given low success rates, and the level of complexity and specialisation of proposals. This is particularly important for small countries, such as Australia, as there may be few locals who can really evaluate the feasibility and worth of a specialised proposal.
Nevertheless, for some of the reasons discussed below, funding agencies are not helping themselves. Goodwill is wearing thin.

I spend at most 1-2 hours on each proposal, including getting passwords, downloading documents, and writing the report. [Sorry if this offends you]. Hence, every minute counts. If funding agency websites are hard to navigate or the proposal has a lot of statistical fluff and bureaucratic mumbo jumbo to wade through it reduces the time I spend on actually evaluating the science.
I also don't want to spend pages reading about why climate change or Moore's law necessitates new technologies. I want to know what science you are going to do and why you are the best person to do it.

Important point for investigators.
Make sure the first page contains a very clear statement about what you are actually planning to do.

So here are a few random observations. I list the countries' agencies in decreasing order of ease of evaluation.
I think all required one to read and agree to some ridiculously long statement about confidentiality, conflicts of interest, lack of affiliation with the Nazi party, commitment to diversity, ....

The proposal was the shortest, but contained enough information. There was little mumbo jumbo.

A program manager sent me the proposal with a form to complete. I did not have to mess with a website. The first few pages of the proposal were unnecessary, containing all sorts of bureaucratic mumbo jumbo in both German and English.

Sometimes I have significant problems getting the proposal to print off the website.
Most of the grant money seems to go on overhead and summer salaries. More than $100K per year to support one graduate student!
They seem to have so little money now for simple old fashioned single investigator curiosity driven research. Everything has some strings/target attached [education, outreach, energy, nano, ....]
It is refreshing that there seems to be a recognition of quality over quantity. No discussion of metrics. Publication rates [a few per year] that would be unacceptable in Australia seem to be o.k. The CV was only 2 pages, including publications. I felt the 15 pages of science was too long, reading like a review article, with extra pages containing 100-plus references. The one-page Data Management Plan was mumbo jumbo to me.

I found the EPSRC website hard to navigate. At first I could not even find where the proposal was I needed to evaluate. There was a lot of bureaucratic stuff that meant little to me and so it was not clear to me why I even needed to see it. But again I had to waste time trying to figure out if it was relevant or not.

Australia. ARC.
The administrative part of the proposals is too long. At most 10 per cent of the pages are actually about science. The level of hype, both about claimed commercial applications and the quality of the investigators, is often breathtaking and on average significantly exceeds that of other countries. Repeated hyperbole such as "prestigious, outstanding, cutting edge, world class, ..." quickly becomes tiresome.

Canada. NSERC.
I am glad Australia is not the worst!
I had to download 6 or 7 different documents. There was all this jargon about HQPs [what?]. After a while I figured out these were "Highly Qualified Personnel". The investigator CV had to be in some standard Canadian web form that looked like it would have taken a week for the investigator to enter on the web! The science was discussed in a breathtaking 2? pages!
One feature I found interesting was reading the PI's description of how she/he ran their research group. Given the goal of developing HQPs [!] this seemed appropriate.

A couple of final comments.

Grading on the curve.
Not all countries require a numerical or letter grade, either overall or for different parts.
I think grades are generally problematic, particularly for foreign referees.
You really need to know what grade is necessary to get funding.
For a while in Australia, only proposals ranked in the top 2 per cent were funded. But this was twenty per cent of proposals! [Research is meant to be rational!]
If a rigorous German paid you the compliment of ranking you as "Very Good" and in the top 20 per cent, it was the kiss of death.
Now Australia sends [at least to local assessors] a bunch of proposals to review. Obviously this is an onerous task, but at least proposals get ranked relative to one another.

Letters of support.
Some of the proposals contain letters of support from the university or collaborators. I generally find these meaningless. They are probably drafted by the PI. Levels of commitment are mostly platitudes. They just look like more paperwork for everyone.

So here is my concrete proposal.
Reviewers should just receive minimal information: scientific proposal, a brief CV, budget summary, and past funding.

I welcome discussion and encourage others to share their experiences.

Monday, February 17, 2014

Three types of double proton transfer

I previously posted about how double proton transfer is a concrete example of a chemical reaction that can occur via either a concerted or a sequential process.
Precisely defining this question and answering it is a subtle issue.

There is a nice classification of types of potential energy surfaces for double proton transfer, summarised on the website of Antonio Fernandez-Ramos. It is based on a very simple model potential energy surface described here and compared to surfaces from computational quantum chemistry [at the DFT level] here [source of the figures below].

There are three qualitatively different potential energy surfaces, depending on the strength of the coupling of the motion of the two protons.

(1) One transition state and two minima, as in the formic acid dimer;

(2) Two equivalent transition states, one maximum, and two minima, as in the pyrazole dimer;

(3) Four transition states, one maximum, and four minima, as in porphine.

I am pretty happy because I am developing a simple diabatic state model that captures all of the above cases.

Saturday, February 15, 2014

How quickly should you leave academia?

I was asked to comment on the blog post Get a PhD—but leave academia as soon as you graduate by
Allison Schrager. She completed a Ph.D in economics at Columbia University and now works in the financial industry. It is worth reading and pondering. Overall, I like the post for a number of reasons, but would add some qualifiers.

Schrager nicely highlights
  • The reality, painful to many, that only a very small fraction of Ph.D's will end up as tenured faculty.
  • Adjunct teaching positions [part-time faculty and short-term contracts] are a career dead end.
  • The value of a good Ph.D, both in terms of the educational value and the enjoyment that it can provide.
  • The unfortunate fact that some faculty have the view that industry offers second-rate careers and soft intellectual challenges. Yet her experience shows this is far from the case.
  • Most Ph.D programs prepare people poorly for looking for jobs outside academia.
  • The transition from academia to industry can be difficult and painful.
The post is actually not as rigid as the title suggests. She actually says
"If you don’t graduate with a solid academic job or compelling post-doc, give up on the dream as quickly as possible."

Aside: Allison Schrager also has two other nice posts that I enjoyed reading and recommend:
Confession of an Ivy League teaching assistant: Here’s why I inflated grades
I was skeptical but now I’m convinced: Math matters 

Here are my qualifiers.

Ultimately these questions are very personal. You need to weigh up your own personal values and goals. Different people will make different decisions and they should.
It isn't one size fits all. Unfortunately, it is human nature to want validation for our own decisions. Consequently, sometimes people will basically say "you should do what I did".

Don't just assume the grass is greener outside academia, as I argued in a post, Should I change jobs?

Consider the value of quitting.

The situation in science is quite different from that in economics and the humanities because of the common availability of postdoctoral positions. For science, I think the question is more: should I leave academia after zero, two, four, six, or ten years of postdoctoral positions?
Again, I don't think there is a universal answer. It is personal. But, you do need to be well informed and realistic.

Friday, February 14, 2014

Berry's curvature eludes experimental signatures

An important discovery of the past decade is that the Berry phase and associated geometric curvature may play a role in the electronic properties of many solids. [A nice review is here].
The curvature enters the semi-classical equations for the electron dynamics in a magnetic field. This gives rise to some forms of the anomalous Hall effect.
One might also expect the curvature to be readily manifested in orbital magnetoresistive properties.
However, for subtle reasons this turns out not to be the case.
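
For reference, the semiclassical equations of motion referred to above take the standard form [as in the review mentioned above], with the Berry curvature Omega(k) entering through the anomalous velocity term:

```latex
\dot{\mathbf{r}} = \frac{1}{\hbar} \frac{\partial \varepsilon(\mathbf{k})}{\partial \mathbf{k}}
                   - \dot{\mathbf{k}} \times \boldsymbol{\Omega}(\mathbf{k}),
\qquad
\hbar \dot{\mathbf{k}} = -e \left( \mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B} \right)
```

The second term in the first equation is the anomalous velocity responsible for the intrinsic contribution to the anomalous Hall effect.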

Tony Wright and I just published a paper
Signatures of the Berry curvature in the frequency dependent interlayer magnetoresistance in tilted magnetic fields

The abstract is below with the result that I found the most surprising [and discouraging] in bold.
We show that in a layered metal, the angle dependent, finite frequency, interlayer magnetoresistance is altered due to the presence of a non-zero Berry curvature at the Fermi surface. At zero frequency, we find a conservation law which demands that the 'magic angle' condition for interlayer magnetoresistance extrema as a function of magnetic field tilt angle is essentially both field and Berry curvature independent. In the finite frequency case, however, we find that surprisingly large signatures of a finite Berry curvature occur in the periodic orbit resonances. We outline a method whereby the presence and magnitude of the Berry curvature at the Fermi surface can be extracted.
The frequency experiments we propose are doable but challenging and involve subtle effects. 

One simple effect we did not explicitly highlight in the paper but should have.
The curvature is chiral, i.e. it has a unique direction perpendicular to a two-dimensional system. Thus, reversing the direction of a tilted magnetic field will change the magnitude of the cyclotron frequency.

Thursday, February 13, 2014

What should be the order of authors on a conference poster or talk?

I welcome discussion on this point.
I don't think it is as sensitive or as important a topic as the author order on papers.

With regard to paper authorship, my general rule is that the person who does the bulk of the work, including actually writing the paper, should be the first author.
Doug Natelson has a good post on co-authorship, that I largely agree with. My only difference is that I am not really convinced that good practice prevails in the majority of circumstances. I fear there are increasing numbers of co-authors, particularly senior ones, with marginal contributions.

But, what about conference talks and posters? Many of these are based on work that is already or about to be published. Should the author order be identical to that of the associated papers? I am not sure it should necessarily be. My tentative view is that the person who writes and submits the abstract and actually prepares and presents the poster/talk should be the first author. Perhaps they should also highlight their own contributions in their presentation.

I am also not sure you should have co-authors of a conference presentation who have not looked over the abstract and the presentation beforehand.

What about conference proceedings? I think they are a waste of time and should be abolished.

I am not sure this is a particularly important issue but it may be worth discussing.

Wednesday, February 12, 2014

Are ultracold atomic gases strongly correlated systems?

I recently heard a talk by someone working on cold atoms who kept saying again and again that these were strongly correlated systems. I may have missed it, but the justification was never clear. This got me wondering: what criteria would I use as a signature of strong correlations?

Here is my tentative answer motivated by strongly correlated electron materials.

A key signature of strong correlations is a significant redistribution of spectral weight [i.e., the many-body eigenvalue spectrum] compared to the corresponding non-interacting electron problem.

Common phenomena associated with this redistribution are
  • the emergence of new low-energy scales [e.g. Kondo temperature]
  • large renormalisation of quasi-particle energies [heavy fermions]
  • separation of the energy scales for spin and charge excitations
  • incoherent spectral features [Hubbard "bands"] 
  • breakdown of quasi-particle approximations [bad metals]
This redistribution is usually poorly [never?] described by weak coupling theories, static mean-field theories or the RPA [Random Phase Approximation].
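
As a minimal illustration [my own toy example, not taken from any of the work discussed here]: for the Hubbard "atom" [a single site with interaction U, at half filling], the quasi-particle weight vanishes and all of the spectral weight sits in two incoherent "Hubbard bands" at omega = ±U/2. A few lines of exact diagonalisation show this:

```python
import numpy as np

# Hubbard "atom": H = U n_up n_dn - mu (n_up + n_dn), with mu = U/2.
# Fock basis ordering: |0>, |up>, |dn>, |up,dn>. U = 4 is an arbitrary example.
U = 4.0
E = np.array([0.0, -U / 2, -U / 2, 0.0])   # energies of the four Fock states

# Annihilation operator for a spin-up electron in this basis.
c_up = np.zeros((4, 4))
c_up[0, 1] = 1.0   # c_up |up>    = |0>
c_up[2, 3] = 1.0   # c_up |up,dn> = |dn>  (sign irrelevant for the weights)

gs = [1, 2]        # two degenerate ground states |up>, |dn>, energy -U/2

# Lehmann representation of the spin-up spectral function, averaged over
# the degenerate ground states: a dictionary of poles {omega: weight}.
poles = {}
for g in gs:
    for m in range(4):
        w_rem = c_up[m, g] ** 2 / len(gs)      # electron removal |<m|c|g>|^2
        if w_rem > 0:
            omega = E[g] - E[m]
            poles[omega] = poles.get(omega, 0.0) + w_rem
        w_add = c_up[g, m] ** 2 / len(gs)      # electron addition |<m|c^+|g>|^2
        if w_add > 0:
            omega = E[m] - E[g]
            poles[omega] = poles.get(omega, 0.0) + w_add

# All weight sits at omega = -U/2 and +U/2; none at omega = 0, so Z = 0.
print(sorted(poles.items()))
```

Adding a hopping to neighbouring sites restores a coherent quasi-particle peak with 0 < Z < 1, with the rest of the weight remaining in the Hubbard bands.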

I attempt to illustrate this idea with two figures below. The first color shaded plot shows the one-particle spectral density calculated from LDA+DMFT [Local Density Approximation for DFT (Density Functional Theory) + Dynamical Mean-Field Theory] for the parent compound of the iron pnictide superconductors.
The dashed lines are the band structure calculated from pure LDA [i.e. not including the strong correlation effects captured by DMFT].
The Figure is taken from a PRL by Haule, Shim, and Kotliar.

The figure below shows the spectral density measured by ARPES [Angle Resolved PhotoEmission Spectroscopy] for the iron pnictide LaOFeP. The solid red lines are the band structure calculated from LDA.
This is Figure 6 in a recent review from Z.X. Shen's group.

In the absence of strong correlations all of the spectral weight would lie on top of the band structure.
The important point is that it does not.

In the cuprates these effects are even more dramatic.

So what about cold atomic gases?

For fermionic systems I have not seen much discussion of redistribution of spectral weight.
Often a quasi-particle picture and mean-field techniques are used in theoretical calculations.

Chris Vale's group has done a beautiful series of experiments measuring dynamical spin and density correlation functions for a strongly interacting system with a BEC-BCS crossover. The figure below is taken from this PRL. One does see differences between the density [D] and spin [S] correlation functions and there is a redistribution of spectral weight. But, to me at least, it does not appear as dramatic as in strongly correlated electron materials.

For bosons near the Mott transition there has been some discussion of the spectral weight redistribution [see this PRA and references therein].

I think the condensate fraction in the first cold atom BECs was close to unity. In contrast, in superfluid helium-4 the fraction is only about 10 per cent. The vanishing of the condensate fraction as one approaches the Mott insulator has been observed, but seems to be captured by a mean-field theory.
For reference, in cuprate superconductors the superfluid density can be much less than the charge density.

So, my questions are:

Is large redistribution of spectral weight the best signature of strong correlations?

In what cold atom systems does one see the largest redistribution?

Tuesday, February 11, 2014

Is "Teaching and Learning" a tautology?

Please bear with me. I just don't get it.

Universities used to have teaching committees and lecture rooms. Now they have committees, rooms, policies, and Vice-Presidents for "Teaching and Learning".

"Teaching and learning" seems to be a tautology to me. If students aren't learning then you are not teaching. Teaching is not giving lectures, setting assignments, and marking exams. Teaching only happens when someone learns something.

I think replacing "teaching" by "teaching and learning" has noble goals. It is trying to make this important point that many traditional "teaching" methods are ineffective and don't result in students learning much. Teaching and learning have to go together. You can't have one without the other. But, that is also why I think this redundant nomenclature may degenerate into a silly marketing exercise. 

Friday, February 7, 2014

Quantifying many-body effects in organic photovoltaics

Most papers about organic photovoltaics are full of discussion about HOMO's and LUMO's, their relative energies and spatial extents.  
In the early days of this blog, I asked Am I HOMO- and LUMO-phobic?

Molecular orbitals are beautiful intuitive concepts that are extremely valuable for qualitative understanding. However, they do not exist, i.e., there is no way to measure one, even in principle.
Furthermore, for typical organic molecules used in organic photonics and electronics the one-electron energies associated with these orbitals usually do not give reliable estimates of physically observable energies [associated with true many-body states] such as the ionisation energy, electron affinity, optical energy gap....
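
To make the distinction concrete, the observable energies are differences of many-body total energies of the N-electron molecule, not one-electron orbital eigenvalues:

```latex
\mathrm{IP} = E(N-1) - E(N), \qquad
\mathrm{EA} = E(N) - E(N+1), \qquad
E_{\mathrm{gap}}^{\mathrm{fund}} = \mathrm{IP} - \mathrm{EA}, \qquad
E_{\mathrm{gap}}^{\mathrm{opt}} = E_{\mathrm{gap}}^{\mathrm{fund}} - E_B
```

where E_B is the exciton binding energy. The difference between the fundamental and optical gaps is exactly why the two must be distinguished in materials design.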

I was pleased to see that the above issues are nicely explained and quantified in a recent paper

Reassessing the use of one-electron energetics in the design and characterization of organic photovoltaics
Brett M. Savoie, Nicholas E. Jackson, Tobin J. Marks, and Mark A. Ratner
We present results showing that common approximations employed in the design and characterization of organic photovoltaic (OPV) materials can lead to significant errors in widely adopted design rules. First, we assess the validity of the common practice of using HOMO and LUMO energies in place of formal redox potentials to characterize organic semiconductors. We trace the formal justification for this practice and survey its limits in a way that should be useful for those entering the field. We find that while the HOMO and LUMO energies represent useful descriptive approximations, they are too quantitatively inaccurate for predictive material design. Second, we show that the excitonic nature of common organic semiconductors makes it paramount to distinguish between the optical and electronic bandgaps for materials design. Our analysis shows that the usefulness of the “LUMO–LUMO Offset” as a design parameter for exciton dissociation is directly tied to the accuracy of the one-electron approximation. In particular, our results suggest that the use of the “LUMO–LUMO Offset” as a measure of the driving force for exciton dissociation leads to a systematic overestimation that should be cautiously avoided.
Some of these issues were also highlighted in earlier work, led by my UQ colleague Ben Powell, but not referenced.

Thursday, February 6, 2014

The problem with D-wave's "quantum" computer

Someone from the company D-Wave is coming to UQ [not the physics department] next week to give a seminar. Their claim of producing the first commercial quantum computer has met with considerable skepticism. This has been led by Scott Aaronson, on his blog. It contains some nice detailed and thoughtful discussion of the relevant scientific issues.

But here is the key concern, that I fully agree with,
As I’ve said many times, I’d support even the experiments that D-Wave was doing, if D-Wave and its supporters would only call them for what they were: experiments.  Forays into the unknown.  Attempts to find out what happens when a particular speculative approach is thrown at NP-hard optimization problems.  It’s only when people obfuscate the results of those experiments, in order to claim something as “commercially useful” that quite obviously isn’t yet, that they leave the realm of science, and indeed walk straight into the eager jaws of skeptics ...
I think this reflects larger problems in society: once the [money] tail wags the donkey, problems occur.

At the end of the day, there is no real evidence that
- there is any large-scale quantum entanglement in the D-Wave devices [there is some indirect evidence, i.e., not a violation of Bell inequalities, of small-scale entanglement];
- they have achieved any real speedup over what a classical computer can do [see this preprint].

Wednesday, February 5, 2014

Quantum fluctuations protect your genetic code

Yesterday I read an interesting paper
Enol Tautomers of Watson−Crick Base Pair Models Are Metastable Because of Nuclear Quantum Effects
Alejandro Pérez, Mark Tuckerman, Harold Hjalmarson, and Anatole von Lilienfeld

A key to the double helix structure of DNA and its ability to provide reliable stable storage of genetic information is hydrogen bonding between base pairs [G-C and A-T].
However, it is possible to switch around the positions of the protons on each of the base pairs, producing different tautomers of T, A, C, and G.
This is an example of double proton transfer.

This could lead to problems with correctly storing genetic information. 
An important question concerns just how rare this is. For example, what is the free energy of these tautomers relative to the Watson-Crick ones?
Over the past two decades a number of classical molecular dynamics simulations, using potentials derived from quantum chemistry, suggested that these tautomers of DNA could be a problem.

Well, the paper above actually shows/argues [based on ab initio path integral molecular dynamics] that the quantum motion of the protons destabilises the mutant tautomers.
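
For a sense of scale [my own back-of-the-envelope estimate with illustrative free energies, not numbers from the paper]: at equilibrium, the relative population of a tautomer lying Delta G above the Watson-Crick form is exp(-Delta G / k_B T), so a few kcal/mol of destabilisation makes an enormous difference to how rare the mutant form is.

```python
import math

RT = 0.593  # k_B T in kcal/mol at 298 K
for dG in (5.0, 10.0, 15.0):   # illustrative free-energy differences
    p = math.exp(-dG / RT)
    print(f"Delta G = {dG:4.1f} kcal/mol -> relative population ~ {p:.1e}")
```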

Monday, February 3, 2014

The art of writing effective figure captions

If you want people to look at your paper you need to spend significant time coming up with an engaging title and abstract.

If you want people to keep reading and engage with the scientific content you need to produce clear figures and write effective figure captions. This is not easy. Here are a few suggestions that bear in mind the following reality.

Most people will look at the figures to decide whether or not they think the paper is worth reading. Some will do just that. Thus, the main messages need to be contained in the figures and they need to be quickly ascertained.

1. Try to begin the caption with a short title sentence that summarises the main point of the figure.
Why are you including the figure in the paper? Hopefully, not because "I took lots of data" or "I did lots of calculations." What do we really learn from the figure?
For example "Resistivity exceeds Mott-Ioffe-Regel limit at high temperatures", rather than "Resistivity versus temperature."

2. Make sure the caption is self-contained. The reader should be able to understand the figure without having to refer to the text to find definitions of notation, parameter values, or what the significance of the figure is. For example, don't include acronyms [DMFT, MP2, ARPES, ....] unless they are defined. If the reader has to work really hard to understand the figure they will quickly lose interest.

3. Re-write and re-write. Get feedback. It is hard work.

Generally, the above means that captions can be quite long, e.g. a solid paragraph. Occasionally referees may not like this and say some of the details should be relegated to the main text. I disagree.

I learnt the above from John Wilkins, my postdoctoral advisor. He said the first step in writing a PRL is to work on the figures and captions.