Thursday, April 27, 2017

Is it an Unidentified Superconducting Object (USO)?

If you look at the arXiv and in Nature journals, you will find a continuing stream of papers claiming to observe superconductivity in some new material.
There is a long history of such claims, and it is worth considering the wise observations that Robert Cava made back in 1997 in a tutorial lecture.
It would have been useful indeed in the early days of the field [cuprate superconductors] to have set up a "commission" to set some minimum standard of data quality and reproducibility for reporting new superconductors. An almost countless number of "false alarms" have been reported in the past decade, some truly spectacular. Koichi Kitazawa from the University of Tokyo coined these reports "USOs", for Unidentified Superconducting Objects, in a clever cross-cultural double entendre likening them to UFOs (Unidentified Flying Objects, which certainly are their equivalent in many ways) and to "lies" in the Japanese translation of USO. 
These have caused great excitement on occasion, but more often distress. It is important, however, to keep in mind what a report of superconductivity at 130K in a ceramic material two decades ago might have looked like to rational people if it came out of the blue sky with no precedent. That having been said, it is true that all the reports of superconductivity in new materials which were later confirmed to be true did conform to some minimum standard of reproducibility and data quality. I have tried to keep up with which of the reports have turned out to be true and which haven't. 
There have been two common problems: 
1. Experimental error- due, generally, to inexperienced investigators unfamiliar with measurement methods or what is required to show that a material is superconducting. This has become more rare as the field matures. 
[n.b. you really need to observe both zero resistivity and the Meissner effect].
2. "New" superconductors are claimed in chemical systems already known to have superconductors containing some subset of the components. This is common even now, and can be difficult for even experienced researchers to avoid. The previously known superconductor is present in small proportions, sometimes in lower Tc form due to impurities added by the experimentalist trying to make a new compound. In a particularly nasty variation on this, sometimes extra components not intentionally added are present - such as Al from crucibles or CO2 from exposure to air some time during the processing. I wish I had a dollar for every false report of superconductivity in a Nb containing oxide where the authors had unintentionally made NbN in small proportions.
There is also an interesting article about the Schon scandal, in which Paul Grant claims:
During my research career in the field of superconducting materials, I have documented many cases of an 'unidentified superconducting object' (USO), only one of which originated from an industrial laboratory, eventually landing in Physical Review Letters. But USOs have had origins in many universities and government laboratories. Given my rather strong view of the intrinsic checks and balances inherent in industrial research, the misconduct that managed to escape notice at Bell Labs is even more singular.

Monday, April 24, 2017

Have universities lost sight of the big questions and the big picture?

Here are some biting critiques of some of the "best" research at the "best" universities, by several distinguished scholars.
The large numbers of younger faculty competing for a professorship feel forced to specialize in narrow areas of their discipline and to publish as many papers as possible during the five to ten years before a tenure decision is made. Unfortunately, most of the facts in these reports have neither practical utility nor theoretical significance; they are tiny stones looking for a place in a cathedral. The majority of ‘empirical facts’ in the social sciences have a half-life of about ten years.
Jerome Kagan [Harvard psychologist], The Three Cultures Natural Sciences, Social Sciences, and the Humanities in the 21st Century
[I thank Vinoth Ramachandra for bringing this quote to my attention].
[The distinguished philosopher Alasdair] MacIntyre provides a useful tool to test how far a university has moved to this fragmented condition. He asks whether a wonderful and effective undergraduate teacher who is able to communicate how his or her discipline contributes to an integrated account of things – but whose publishing consists of one original but brilliant article on how to teach – would receive tenure. Or would tenure be granted to a professor who is unable or unwilling to teach undergraduates, preferring to teach only advanced graduate students and engaged in ‘‘cutting-edge research.’’ MacIntyre suggests if the answers to these two inquiries are ‘‘No’’ and ‘‘Yes,’’ you can be sure you are at a university, at least if it is a Catholic university, in need of serious reform. I feel quite confident that MacIntyre learned to put the matter this way by serving on the Appointment, Promotion, and Tenure Committee of Duke University. I am confident that this is the source of his understanding of the increasing subdisciplinary character of fields, because I also served on that committee for seven years. During that time I observed people becoming ‘‘leaders’’ in their fields by making their work so narrow that the ‘‘field’’ consisted of no more than five or six people. We would often hear from the chairs of the departments that they could not understand what the person was doing, but they were sure the person to be considered for tenure was the best ‘‘in his or her field."
Stanley Hauerwas, The State of the University, page 49.

Are these reasonable criticisms of the natural sciences?

Wednesday, April 19, 2017

Commercialisation of universities

I find the following book synopsis rather disturbing.
Is everything in a university for sale if the price is right? In this book, the author cautions that the answer is all too often "yes." Taking the first comprehensive look at the growing commercialization of our academic institutions, the author probes the efforts on campus to profit financially not only from athletics but increasingly, from education and research as well. He shows how such ventures are undermining core academic values and what universities can do to limit the damage. 
Commercialization has many causes, but it could never have grown to its present state had it not been for the recent, rapid growth of money-making opportunities in a more technologically complex, knowledge-based economy. A brave new world has now emerged in which university presidents, enterprising professors, and even administrative staff can all find seductive opportunities to turn specialized knowledge into profit. 
The author argues that universities, faced with these temptations, are jeopardizing their fundamental mission in their eagerness to make money by agreeing to more and more compromises with basic academic values. He discusses the dangers posed by increased secrecy in corporate-funded research, for-profit Internet companies funded by venture capitalists, industry-subsidized educational programs for physicians, conflicts of interest in research on human subjects, and other questionable activities. 
While entrepreneurial universities may occasionally succeed in the short term, reasons the author, only those institutions that vigorously uphold academic values, even at the cost of a few lucrative ventures, will win public trust and retain the respect of faculty and students. Candid, evenhanded, and eminently readable, Universities in the Marketplace will be widely debated by all those concerned with the future of higher education in America and beyond.
What is most disturbing is that the author of Universities in the Marketplace: The Commercialization of Higher Education is Derek Bok, former President of Harvard, the richest university in the world!

There is a helpful summary and review of the book here. A longer review compares and contrasts the book to several others addressing similar issues.

How concerned should we be about these issues?

Thursday, April 13, 2017

Quantum entanglement technology hype


Last month The Economist had a cover story and large section on commercial technologies based on quantum information.

To give the flavour here is a sample from one of the articles
Very few in the field think it will take less than a decade [to build a large quantum computer], and many say far longer. But the time for investment, all agree, is now—because even the smaller and less capable machines that will soon be engineered will have the potential to earn revenue. Already, startups and consulting firms are springing up to match prospective small quantum computers to problems faced in sectors including quantitative finance, drug discovery and oil and gas. .... Quantum simulators might help in the design of room-temperature superconductors allowing electricity to be transmitted without losses, or with investigating the nitrogenase reaction used to make most of the world’s fertiliser.
I know people are making advances [which are interesting from a fundamental science point of view] but it seems to me we are a very long way from doing anything cheaper [both financially and computationally] than a classical computer.

Doug Natelson noted that at the last APS March Meeting, John Martinis said that people should not believe the hype, even from him!

Normally The Economist gives a hard-headed analysis of political and economic issues. I might not agree with it [it is too neoliberal for me] but at least I trust it to give a rigorous and accurate analysis. I found this section to be quite disappointing. I hope uncritical readers don't start throwing their retirement funds into start-ups that are going to develop the "quantum internet" because they believe that this is going to be as important as the transistor (a claim the article ends with).

Maybe I am missing something.
I welcome comments on the article.

Tuesday, April 11, 2017

Should we fund people or projects?

In Australia, grant reviewers are usually asked to score applications according to three aspects: investigator, project, and research environment. These are usually weighted by something like 40%, 40%, and 20%, respectively. Previously, I wrote about why I think the research environment aspect is problematic.

I struggle to see why investigator and project should have equal weighting. For example, consider the following caricatures.

John writes highly polished proposals with well defined projects on important topics. However, he has limited technical expertise relevant to the ambitious goals in the proposal. He also tends to write superficial papers on hot topics.

Joan is not particularly well organised and does not write polished proposals. She does not plan her projects but lets her curiosity and creativity lead her. Although she does not write a lot of papers she has a good track record of moving into new areas and making substantial contributions.

This raises the question of whether we should forget the project dimension of funding altogether. Suppose you had the following extreme system. You just give the "best" people a grant for three years and they can do whatever they want. Three years later they apply again and are evaluated based on what they have produced. This would encourage more risk-taking and save a lot of time in the grant preparation and evaluation process.

Are there any examples of this kind of "no strings attached" funding? The only examples I can think of are MacArthur Fellows and Royal Society Professorships. However, these are really for stellar senior people.

What do you think?

Thursday, April 6, 2017

Do you help your students debug codes?

Faculty vary greatly in their level of involvement with the details of the research projects of the undergrads, Ph.D students, and postdocs they supervise. Here are three different examples based on real senior people.

A. gives the student or postdoc a project topic and basically does not want to talk to them again until they bring a draft of a paper.

B. talks to their students regularly but boasts that they have not looked at a line of computer code since they became a faculty member. It is the sole responsibility of students to write and debug code.

C. is very involved. One night before a conference presentation they stayed up until 3 am trying to debug a student's code in the hope of getting some more results to present the next day.

Similar issues arise with analytical calculations or getting experimental apparatus to work.

What is an appropriate level of involvement?
On the one hand, it is important that students take responsibility for their projects and learn to solve their own problems.
On the other hand, faculty can speed things along and sometimes quickly find "bugs" because of experience. Also, a more "hands-on" approach gives a better feel for how well the student knows what they are doing and whether they are checking things.
It is fascinating and disturbing to me that in the Schon scandal, Batlogg confessed that he never went in the lab and so did not realise there was no real experiment.

I think there is no clear cut answer. Different people have different strengths and interests (both supervisors and students). Some really enjoy the level of detail and others are more interested in the big picture.
However, I must say that I think A. is problematic.
Overall, I am closer to B. than C., but this has varied depending on the person involved, the project, and the technical problems.

What do you think?

Tuesday, April 4, 2017

Some awkward history

I enjoyed watching the movie Hidden Figures. It is based on a book that recounts the little-known history of the contributions of three African-American women to NASA and the first manned space flights in the 1960s. The movie is quite entertaining and moving while raising significant issues about racism and sexism in science. I grimaced at some of the scenes. On the one hand, some would argue we have come a long way in fifty years. On the other hand, we should be concerned about how the rise of Trump will play out in science.


One minor question I have is how much of the math on the blackboards is realistic?



Something worth considering is the extent to which the movie fits the too-common white savior narrative, as highlighted in a critical review, by Marie Hicks.

Saturday, April 1, 2017

A fascinating thermodynamics demonstration: the drinking bird

I am currently helping teach a second year undergraduate course Thermodynamics and Condensed Matter Physics. For the first time I am helping out in some of the lab sessions. Two of the experiments are based on the drinking bird.



This illustrates two important topics: heat engines and liquid-vapour equilibria.

Here are a few observations, in random order.

* I still find it fascinating to watch. Why isn't it a perpetual motion machine?

* Several things are more surprising:
a. it operates on such a small temperature difference (see the estimate after this list),
b. there is a temperature difference between the head and the bulb at all,
c. it is so sensitive to perturbations, such as warming with your fingers or changes in humidity.

* It took me quite a while to understand what is going on, which makes me wonder about the students doing the lab. How much are they following the recipe and saying the mantra...

* I try to encourage the students to think critically and scientifically about what is going on, asking some basic questions, such as "How do you know the head is cooler than the bulb? What experiment can you do right now to test your hypothesis? How can you test whether evaporative cooling is responsible for cooling the head?" Such an approach is briefly described in this old paper.

* Understanding and approximately quantifying the temperature of the head involves the concepts of humidity, wet-bulb temperature, and a psychrometric chart. Again, I find this challenging.

* This lab is a great example of how you don't necessarily need a lot of money and fancy equipment to teach a lot of important science and skills.
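* A rough back-of-the-envelope estimate of why such a small temperature difference suffices (assuming the standard working fluid, dichloromethane, with a latent heat L of about 28 kJ/mol): near liquid-vapour equilibrium the Clausius-Clapeyron relation gives

\frac{\Delta P}{P} \simeq \frac{L \, \Delta T}{R T^2} \approx 0.04 \ \mathrm{K}^{-1} \times \Delta T

at T of about 300 K. So a head-bulb temperature difference of only a degree or so produces a pressure difference of a few per cent, enough to push the liquid up the tube.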

Thursday, March 30, 2017

Perverse incentives in academia

According to Wikipedia, "A perverse incentive is an incentive that has an unintended and undesirable result which is contrary to the interests of the incentive makers. Perverse incentives are a type of negative unintended consequence."

There is an excellent (but depressing) article
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition
Marc A. Edwards and Siddhartha Roy

I learnt of the article via a blog post summarising it, Every attempt to manage academia makes it worse.

Incidentally, Edwards is a water quality expert who was influential in exposing the Flint Water crisis.

The article is particularly helpful because it cites a lot of literature concerning the problems. It contains the following provocative table. I also like the emphasis on ethical behaviour and altruism.


It is easy to feel helpless. However, the least you can do is stop looking at metrics when reviewing grants, job applicants, and tenure cases. Actually read some of the papers and evaluate the quality of the science. If you don't have the expertise, then you should not be making the decision, or you should seek expert review.

Tuesday, March 28, 2017

Computational quantum chemistry in a nutshell

To the uninitiated (and particularly physicists), computational quantum chemistry can seem to be a bewildering zoo of multi-letter acronyms (CCSD(T), MP4, aug-cc-pVTZ, ...).

However, the basic ingredients and key assumptions can be simply explained.

First, one makes the Born-Oppenheimer approximation, i.e. one assumes that the positions of the N_n nuclei in a particular molecule are classical variables [R is a 3N_n-dimensional vector] while the electrons are quantum. One wants to find the energy eigenvalues of the N electrons. The corresponding Hamiltonian and Schrodinger equation are
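written, in atomic units and standard notation (with r denoting the 3N electron coordinates and Z_A the nuclear charges), as

H(R) = -\sum_{i=1}^{N} \frac{1}{2}\nabla_i^2 - \sum_{i=1}^{N}\sum_{A=1}^{N_n} \frac{Z_A}{|\mathbf{r}_i - \mathbf{R}_A|} + \sum_{i<j} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|} + \sum_{A<B} \frac{Z_A Z_B}{|\mathbf{R}_A - \mathbf{R}_B|},

H(R)\, \Psi_n(r; R) = E_n(R)\, \Psi_n(r; R).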


The electronic energy eigenvalues E_n(R) define the potential energy surfaces associated with the ground and excited states. From the ground state surface one can understand most of chemistry! (e.g., molecular geometries, reaction mechanisms, transition states, heats of reaction, activation energies, ....)
As Laughlin and Pines say, the equation above is the Theory of Everything!
The problem is that one can't solve it exactly.

Second, one chooses whether one wants to calculate the complete wave function for the electrons or just the local charge density (one-particle density matrix). The latter is what one does in density functional theory (DFT). I will just discuss the former.

Now we want to solve this eigenvalue problem on a computer and the Hilbert space is huge, even for a simple molecule such as water. We want to reduce the problem to a discrete matrix problem. The Hilbert space for a single electron involves a wavefunction in real space and so we want a finite basis set of L spatial wave functions, "orbitals". Then there is the many-particle Hilbert space for N-electrons, which has dimensions of order L^N. We need a judicious way to truncate this and find the best possible orbitals.

The single-particle orbitals can be introduced through the associated creation and annihilation operators (the a's), giving the Hamiltonian in terms of one- and two-electron integrals.
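In standard (chemists') notation, with spin index sigma, it reads

H = \sum_{ij,\sigma} h_{ij}\, a^\dagger_{i\sigma} a_{j\sigma} + \frac{1}{2} \sum_{ijkl,\sigma\sigma'} (ij|kl)\, a^\dagger_{i\sigma} a^\dagger_{k\sigma'} a_{l\sigma'} a_{j\sigma},

where the h_{ij} are one-electron (kinetic plus electron-nucleus) integrals and

(ij|kl) = \int d^3r_1\, d^3r_2\, \frac{\phi_i^*(\mathbf{r}_1)\phi_j(\mathbf{r}_1)\, \phi_k^*(\mathbf{r}_2)\phi_l(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}.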

The two-electron integrals (ij|kl) include the Coulomb integrals (ii|jj) and the exchange integrals (ij|ji).
Computing them efficiently is a big deal.
In semi-empirical theories one neglects many of these integrals and treats the others as parameters that are determined from experiment.
For example, if one only keeps a single term (ii|ii) one is left with the Hubbard model!

Equivalently, the many-particle wave function can be written in this form.
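In the truncated space it is a linear combination of Slater determinants (electronic configurations) built from the orbitals,

|\Psi\rangle = \sum_I c_I\, |\Phi_I\rangle,

where the c_I are variational coefficients.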

Now one makes two important choices of approximations.

1. atomic basis set
One picks a small set of orbitals centered on each of the atoms in the molecule. Often these have the traditional s-p-d-f rotational symmetry and a Gaussian dependence on distance.

2. "level of theory"
This concerns how one solves the many-body problem, or equivalently how one truncates the Hilbert space of electronic configurations, or equivalently what approximate variational wavefunction one uses. Examples include Hartree-Fock (HF), second-order perturbation theory (MP2), a Gutzwiller-type wavefunction (CC = Coupled Cluster), or a Complete Active Space (CAS(K,L)), where the highest and lowest energy orbitals are treated at the HF level and exact diagonalisation is performed for a small subset of K electrons in L orbitals.
Full CI (configuration interaction) is exact diagonalisation. This is only possible for very small systems.
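To make these two choices concrete, here is a minimal sketch (my illustration, using the open-source PySCF package; the water geometry, basis sets, and methods shown are just examples):

from pyscf import gto, scf, mp, cc

# Choice 1: atomic basis set (Gaussian orbitals centred on each atom)
mol = gto.M(
    atom="O 0.000 0.000 0.000; H 0.757 0.586 0.000; H -0.757 0.586 0.000",
    basis="cc-pVDZ",  # repeat with sto-3g, cc-pVTZ, ... as a convergence test
)

# Choice 2: "level of theory" (how electron correlation is treated)
hf = scf.RHF(mol).run()     # Hartree-Fock
pt2 = mp.MP2(hf).run()      # second-order perturbation theory
ccsd = cc.CCSD(hf).run()    # coupled cluster with singles and doubles

print(hf.e_tot, pt2.e_tot, ccsd.e_tot)  # total energies in Hartree

Comparing how the numbers change as the basis and method are varied is exactly the kind of "convergence" test advocated in point 4 below.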

The many-body wavefunction contains many variational parameters: both the coefficients in front of the atomic orbitals that define the molecular orbitals, and the coefficients in front of the Slater determinants that define the electronic configurations.

Obviously, one hopes that the larger the atomic basis set and the "higher" the level of theory (i.e. the treatment of electron correlation), the closer one moves to reality (experiment). I think Pople first drew a diagram such as the one below (taken from this paper).


However, I stress some basic points.

1. Given how severe the truncation of the Hilbert space of the original problem is, one would not necessarily expect to get anywhere near reality. The pleasant surprise for the founders of the field was that even with 1950s computers one could get interesting results. Although the electrons are strongly correlated (in some sense), Hartree-Fock can sometimes be useful. It is far from obvious that one should expect such success.

2. The convergence to reality is not necessarily uniform.
This gives rise to Pauling points: "improving" the approximation may give worse answers.

3. The relative trade-off between the horizontal and vertical axes is not clear and may be context dependent.

4. Any computational study should have some "convergence" tests. i.e. use a range of approximations and compare the results to see how robust any conclusions are.

Thursday, March 23, 2017

Units! Units! Units!

I am spending more time with undergraduates lately: helping in a lab (scary!), lecturing, marking assignments, supervising small research projects, ...

One issue keeps coming up: physical units!
Many of the students struggle with this. Some even think it is not important!

This matters in a wide range of activities.

  • Giving a meaningful answer for a measurement or calculation. This includes canceling out units.
  • Using dimensional analysis to find possible errors in a calculation or formula (a toy sketch of this is given after the list).
  • Writing equations in dimensionless form to simplify calculations, whether analytical or computational.
  • Making order of magnitude estimates of physical effects.
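As a toy sketch of the second point (written purely for illustration), one can track the exponents of the SI base units through a formula and flag a dimensional mismatch:

# Represent a quantity's units by exponents of the SI base units (kg, m, s)
KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def mul(a, b):
    # multiplying quantities adds the unit exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # dividing quantities subtracts the unit exponents
    return tuple(x - y for x, y in zip(a, b))

velocity = div(M, S)                               # m/s
kinetic_energy = mul(KG, mul(velocity, velocity))  # kg m^2 s^-2
assert kinetic_energy == (1, 2, -2), "dimensional error!"
print("m v^2 has the units of energy:", kinetic_energy)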

Any others you can think of?

Any thoughts on how we can do better at training students to master this basic but important skill?

Tuesday, March 21, 2017

Emergence frames many of the grand challenges and big questions in universities

What are the big questions that people are (or should be) wrestling within universities?
What are the grand intellectual challenges, particularly those that interact with society?

Here are a few. A common feature of those I have chosen is that they involve emergence: complex systems consisting of many interacting components produce new entities and there are multiple scales (whether length, time, energy, the number of entities) involved.

Economics
How does one go from microeconomics to macroeconomics?
What is the interaction between individual agents and the surrounding economic order?
A recent series of papers (see here and references therein) has looked at how the concept of emergence played a role in the thinking of Friedrich Hayek.

Biology
How does one go from genotype to phenotype?
How do the interactions between many proteins produce a biochemical process in a cell?


The figure above shows a protein interaction network and is taken from this review.

Sociology
How do communities and cultures emerge?
What is the relationship between human agency and social structures?

Public health and epidemics
How do diseases spread and what is the best strategy to stop them?

Computer science
Artificial intelligence.
Recently it was shown how Deep learning can be understood in terms of the renormalisation group.

Community development, international aid, and poverty alleviation
I discussed some of the issues in this post.

Intellectual history
How and when do new ideas become "popular" and accepted?

Climate change

Philosophy
How do you define consciousness?

Some of the issues are covered in the popular book, Emergence: the connected lives of Ants, Brains, Cities, and Software.
Some of these phenomena are related to the physics of networks, including scale-free networks. The most helpful introduction I have read is a Physics Today article by Mark Newman.
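As a minimal illustration (assuming the Python package networkx is installed), one can generate a scale-free network by preferential attachment and look at its degree distribution:

import networkx as nx

# Barabasi-Albert preferential attachment: each new node links to m existing
# nodes, chosen with probability proportional to their current degree.
G = nx.barabasi_albert_graph(n=10000, m=3, seed=1)

# For a scale-free network the degree distribution has a power-law tail.
for k, count in enumerate(nx.degree_histogram(G)):
    if count > 0:
        print(k, count)

Plotting count against k on a log-log scale shows the heavy tail characteristic of the networks discussed in Newman's article.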

Given this common issue of emergence, I think there are some lessons (and possibly techniques) these fields might learn from condensed matter physics. It is arguably the field that has been the most successful at understanding and describing emergent phenomena. I stress that this is not hubris. This success is not because condensed matter theorists are smarter or more capable than people working in other fields. It is because the systems are "simple" enough, and (sometimes) have a clear enough separation of scales, that they are more amenable to analysis and controlled experiments.

Some of these lessons are "obvious" to condensed matter physicists. However, I don't think they are necessarily accepted by researchers in other fields.

Humility.
These are very hard problems, progress is usually slow, and not all questions can be answered.

The limitations of reductionism.
Trying to model everything by computer simulations which include all the degrees of freedom will lead to limited progress and insight.

Find and embrace the separation of scales.
The renormalisation group provides a method to systematically do this. A recent commentary by Ilya Nemenman highlights some recent progress and the associated challenges.

The centrality of concepts.

The importance of critically engaging with experiment and data.
They must be the starting and end point. Concepts, models, and theories have to be constrained and tested by reality.

The value of simple models.
They can give significant insight into the essentials of a problem.

What other big questions and grand challenges involve emergence?

Do you think condensed matter [without hubris] can contribute something?

Saturday, March 18, 2017

Important distinctions in the debate about journals

My post, "Do we need more journals?" generated a lot of comments, showing that the associated issues are something people have strong opinions about.

I think it important to consider some distinct questions that the community needs to debate.

What research fields, topics, and projects should we work on?

When is a specific research result worth communicating to the relevant research community?

Who should be co-authors of that communication?

What is the best method of communicating that result to the community?

How should the "performance" and "potential" of individuals, departments, and institutions be evaluated?

A major problem for science is that over the past two decades the dominant answer to the last question (metrics such as Journal "Impact" Factors and citations) is determining the answer to the other questions. This issue has been nicely discussed by Carl Caves.
The tail is wagging the dog.

People flock to "hot" topics that can produce quick papers, may attract a lot of citations, and are beloved by the editors of luxury journals. Results are often obtained and analysed in a rush, not checked adequately, and presented in the "best" possible light with a bias towards exotic explanations. Co-authors are sometimes determined by career issues and the prospect of increasing the probability of publication in a luxury journal, rather than by scientific contribution.

Finally, there is a meta-question that is in the background. The question is actually more important but harder to answer.
How are the answers to the last question being driven by broader moral and political issues?
Examples include the rise of the neoliberal management class, treatment of employees, democracy in the workplace, inequality, post-truth, the value of status and "success", economic instrumentalism, ...

Thursday, March 16, 2017

Introducing students to John Bardeen

At UQ there is a great student physics club, PAIN. Their weekly meeting is called the "error bar." This Friday they are having a session on the history of physics and asked faculty if any would talk "about interesting stories or anecdotes about people, discoveries, and ideas relating to physics."

I thought for a while and decided on John Bardeen. There is a lot I find interesting. He is the only person to receive two Nobel Prizes in Physics. Arguably, the discoveries associated with both prizes (the transistor, BCS theory) are of greater significance than the average Nobel. There is also his difficult relationship with Shockley, who in some sense became the founder of Silicon Valley.

Here are my slides.


In preparing the talk I read the interesting articles in the April 1992 issue of Physics Today that was completely dedicated to Bardeen. In his article David Pines, says
[Bardeen's] approach to scientific problems went something like this: 
  • Focus first on the experimental results, by careful reading of the literature and personal contact with members of leading experimental groups. 
  • Develop a phenomenological description that ties the key experimental facts together. 
  • Avoid bringing along prior theoretical baggage, and do not insist that a phenomenological description map onto a particular theoretical model. Explore alternative physical pictures and mathematical descriptions without becoming wedded to a specific theoretical approach. 
  • Use thermodynamic and macroscopic arguments before proceeding to microscopic calculations. 
  • Focus on physical understanding, not mathematical elegance. Use the simplest possible mathematical descriptions. 
  • Keep up with new developments and techniques in theory, for one of these could prove useful for the problem at hand. 
  • Don't give up! Stay with the problem until it's solved. 
In summary, John believed in a bottom-up, experimentally based approach to doing physics, as distinguished from a top-down, model-driven approach. To put it another way, deciding on an appropriate model Hamiltonian was John's penultimate step in solving a problem, not his first.
With regard to "interesting stories or anecdotes about people, discoveries, and ideas relating to physics," what would you talk about?

Wednesday, March 15, 2017

The power and limitations of ARPES

The past two decades have seen impressive advances in Angle-Resolved PhotoEmission Spectroscopy (ARPES). This technique has played a particularly important role in elucidating the properties of the cuprates and topological insulators. ARPES allows measurement of the one-electron spectral function, A(k,E), something that can be calculated from quantum many-body theory. Recent advances have included the development of laser-based ARPES, which makes synchrotron time unnecessary.
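For reference, up to dipole matrix elements and the Fermi function, the measured intensity is set by

A(\mathbf{k}, E) = -\frac{1}{\pi}\, \mathrm{Im}\, G(\mathbf{k}, E),

where G is the retarded one-electron Green's function; in a Fermi liquid a quasi-particle appears as a sharp peak in A(k,E) carrying spectral weight Z.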

A recent PRL shows the quality of data that can be achieved.

Orbital-Dependent Band Narrowing Revealed in an Extremely Correlated Hund’s Metal Emerging on the Topmost Layer of Sr2RuO4 
Takeshi Kondo, M. Ochi, M. Nakayama, H. Taniguchi, S. Akebi, K. Kuroda, M. Arita, S. Sakai, H. Namatame, M. Taniguchi, Y. Maeno, R. Arita, and S. Shin

The figure below shows a colour density plot of the intensity [related to A(k,E)] along a particular direction in the Brillouin zone.  The energy resolution is of the order of meV, something that would not have been dreamed of decades ago.
Note how the observed dispersion of the quasi-particles is much smaller than that calculated from DFT, showing how strongly correlated the system is.

The figure below shows how with increasing temperature a quasi-particle peak gradually disappears, showing the smooth crossover from a Fermi liquid to a bad metal, above some coherence temperature.
The main point of the paper is that the authors are able to probe just the topmost layer of the crystal and that the associated electronic structure is more correlated (the bands are narrower and the coherence temperature is lower) than the bulk.
Again it is impressive that one can make this distinction.

But this does highlight a limitation of ARPES, particularly in the past. It is largely a surface probe and so one has to worry about whether one is measuring surface properties that are different from the bulk. This paper shows that those differences can be significant.

The paper also contains DFT+DMFT calculations which are compared to the experimental results.

Monday, March 13, 2017

What do your students really expect and value?

Should you ban cell phones in class?

I found this video quite insightful. It reminded me of the gulf between me and some students.



It confirmed my policy of not allowing texting in class. Partly this is to force students to be more engaged. But it is also to make students think about whether they really need to be "connected" all the time.

What is your policy on phones in class?

I think that the characterisation of "millennials" may be a bit harsh and too one-dimensional, although I did encounter some of the underlying attitudes in a problematic class a few years ago; reading a Time magazine cover article was helpful then.
I also think that this is not a good characterisation of many of the students who make it as far as advanced undergraduate or PhD programs. By then many of the narcissistic and entitled have self-selected out. It is just too much hard work.

Friday, March 10, 2017

Do we really need more journals?

NO!

Nature Publishing Group continues to spawn "Baby Natures" like crazy.

I was disappointed to see that Physical Review is launching a new journal Physical Review Materials. They claim it is to better serve the materials community. I found this strange. What is wrong with Physical Review B? It does a great job.
Surely, the real reason is that APS wants to compete with Nature Materials [a front for mediocrity and hype], which has a big Journal Impact Factor (JIF).
On the other hand, if the new journal could put Nature Materials out of business I would be very happy. At least the journal would be run and controlled by real scientists and not-for-profit.

So I just want to rant two points I have made before.

First, the JIF is essentially meaningless, particularly when it comes to evaluating the quality of individual papers. Even if one believes citations are some sort of useful measure of impact, one should look at the distribution, not just the mean. The distribution for Nature Chemistry is shown below.


Note how the distribution is highly skewed, being dominated by a few highly cited papers. More than 70 per cent of papers score less than the mean.
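As a toy illustration of how this arises (the lognormal distribution here is an assumption for illustration, not a fit to the Nature Chemistry data):

import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed "citation counts" for 100,000 hypothetical papers
citations = rng.lognormal(mean=1.5, sigma=1.2, size=100000)
frac_below_mean = np.mean(citations < citations.mean())
print(f"fraction of papers cited less than the mean: {frac_below_mean:.2f}")  # roughly 0.7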

Second, the problem is that people are publishing too many papers. We need fewer journals, not more!
Three years ago, I posted about how I think journals are actually redundant and gave a specific proposal for how to move towards a system that produces better science (more efficiently) and more accurately evaluates the quality of individuals' contributions.

Getting there will obviously be difficult. However, initiatives such as SciPost and PLOS ONE are steps in a positive direction.
Meanwhile, those of us evaluating the "performance" of individuals can focus on real science and not all this nonsense beloved by many.

Wednesday, March 8, 2017

Is complexity theory relevant to poverty alleviation programs?

For me, global economic inequality is a huge issue. A helpful short video describes the problem.
Recently, there has been a surge of interest among development policy analysts in how complexity theory may be relevant to poverty alleviation programs.

On an Oxfam blog there is a helpful review of three books on complexity theory and development.
I recently read some of one of these books, Aid on the Edge of Chaos: Rethinking International Cooperation in a Complex World, by Ben Ramalingham.

Here is some of the publisher blurb.
Ben Ramalingam shows that the linear, mechanistic models and assumptions on which foreign aid is built would be more at home in early twentieth century factory floors than in the dynamic, complex world we face today. All around us, we can see the costs and limitations of dealing with economies and societies as if they are analogous to machines. The reality is that such social systems have far more in common with ecosystems: they are complex, dynamic, diverse and unpredictable. 
Many thinkers and practitioners in science, economics, business, and public policy have started to embrace more 'ecologically literate' approaches to guide both thinking and action, informed by ideas from the 'new science' of complex adaptive systems. Inspired by these efforts, there is an emerging network of aid practitioners, researchers, and policy makers who are experimenting with complexity-informed responses to development and humanitarian challenges. 
This book showcases the insights, experiences, and often remarkable results from these efforts. From transforming approaches to child malnutrition, to rethinking processes of economic growth, from building peace to combating desertification, from rural Vietnam to urban Kenya, Aid on the Edge of Chaos shows how embracing the ideas of complex systems thinking can help make foreign aid more relevant, more appropriate, more innovative, and more catalytic. Ramalingam argues that taking on these ideas will be a vital part of the transformation of aid, from a post-WW2 mechanism of resource transfer, to a truly innovative and dynamic form of global cooperation fit for the twenty-first century.
The first few chapters give a robust and somewhat depressing critique of the current system of international aid. He then discusses complexity theory and finally specific case studies.
The Table below nicely contrasts two approaches.

A friend who works for a large aid NGO told me about the book and described a workshop (based on the book) that he attended where the participants even used modeling software.

I have mixed feelings about all of this.

Here are some positive points.

Any problem in society involves a complex system (i.e. many interacting components). Insights, both qualitative and quantitative, can be gained from "physics"-type models. Examples I have posted about before include the statistical mechanics of money and the universality in probability distributions for certain social quantities.

Simplistic mechanical thinking, such as that associated with Robert McNamara in Vietnam and then at the World Bank, is problematic and needs to be critiqued. Even a problem as "simple" as replacing wood-burning stoves turns out to be much more difficult and complicated than anticipated.

A concrete example discussed in the book is that of positive deviance, which takes its partial motivation from power laws.

Here are some concerns.

Complexity theory suffers from being oversold. It certainly gives important qualitative insights and concrete examples in "simple" models. However, to what extent complexity theory can give a quantitative description of real systems is debatable. This is particularly true of the idea of "the edge of chaos" that features in the title of the book. A less controversial title would have replaced this with simply "emergence", since that is a lot of what the book is really about.

Some of the important conclusions of the book could be arrived at by different, more conventional routes. For example, a major point is that "top down" approaches are problematic. This is where some wealthy Westerners define a problem, define the solution, then provide the resources (money, materials, and personnel) and impose the solution on local poor communities. A more "bottom up" or "complex adaptive systems" approach is where one consults with the community, gets them to define the problem and brainstorm possible solutions, gives them ownership of implementing the project, and adapts the strategy in response to trials. One can come to this same approach if one's starting point is simply humility and respect for the dignity of others. We don't need complexity theory for that.

The author makes much of the story of Sugata Mitra, whose TED talk, "Kids can teach themselves", has more than a million views. He put some computer terminals in a slum in India and claims that poor, uneducated kids taught themselves all sorts of things, illustrating "emergent" and "bottom up" solutions. It is a great story. However, it has received some serious criticism, which is not acknowledged by the author.

Nevertheless, I recommend the book and think it is a valuable and original contribution about a very important issue.

Monday, March 6, 2017

A dirty secret in molecular biophysics

The past few decades have seen impressive achievements in molecular biophysics that are based on two techniques that are now commonplace.

Using X-ray crystallography to determine the detailed atomic structure of proteins.

Classical molecular dynamics simulations.

However, there is a fact that is not as widely known and acknowledged as it should be. These two complementary techniques have an unhealthy symbiotic relationship.
Protein crystal structures are often "refined" using molecular dynamics simulations.
The "force fields" used in the simulations are often parametrised using known crystal structures!

There are at least two problems with this.

1. Because the methods are not independent of one another, one cannot claim that, because in a particular case they give the same result, one has achieved something, in particular a "confirmation" of the validity of a result.

2. Classical force fields do not necessarily give a good description of the finer details of chemical bonding, something that is intrinsically quantum mechanical. The active sites of proteins are "special" by definition. They are finely tuned to perform a very specific biomolecular function (e.g. catalysis of a specific chemical reaction or conversion of light into electrical energy). This is particularly true of hydrogen bonds, where bond-length differences of less than 1/20 of an Angstrom can make a huge difference to a potential energy surface.

I don't want to diminish or put down the great achievements of these two techniques. We just need to be honest and transparent about their limitations and biases.

I welcome comments.

Friday, March 3, 2017

Science is told by the victors and Learning to build models

A common quote about history is that "History is written by the victors". The over-simplified point is that sometimes the losers of a war are obliterated (or at least lose power) and so don't have the opportunity to tell their side of the story. In contrast, the victors want to propagate a one-sided story about their heroic win over their immoral adversaries. The origin of this quote is debatable but there is certainly a nice article where George Orwell discusses the problem in the context of World War II.

What does this have to do with teaching science?
The problem is that textbooks present nice clean discussions of successful theories and models that rarely engage with the complex and tortuous path that was taken to get to the final version.
If the goal is "efficient" learning and minimisation of confusion this is appropriate.
However, we should ask whether this is the best way for students to actually learn how to DO and understand science.

I have been thinking about this because this week I am teaching the Drude model in my solid state physics course. Because of its simplicity and success, it is an amazing and beautiful theory. But, it is worth thinking about two key steps in constructing the model; steps that are common (and highly non-trivial) in constructing any theoretical model in science.

1. Deciding which experimental observables and results one wants to describe.

2. Deciding which parameters or properties will be ingredients of the model.

For 1. it is Ohm's law, Fourier's law, the Hall effect, the Drude peak, UV transparency of metals, the Wiedemann-Franz law, magnetoresistance, the thermoelectric effect, specific heat, ...

For 2. one starts with only conduction electrons (not valence electrons or ions), no crystal structure or chemical detail (except valence), and focuses on averages (velocity, scattering time, density) rather than standard deviations, ...
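To get a feel for what such a minimal model can do, here is a back-of-the-envelope sketch (textbook values for copper, quoted from memory) that extracts the scattering time from the measured resistivity using sigma = n e^2 tau / m:

# Drude estimate of the electron scattering time in copper at room temperature
e = 1.602e-19   # electron charge (C)
m = 9.109e-31   # electron mass (kg)
n = 8.5e28      # conduction electron density of Cu (m^-3), one electron per atom
rho = 1.7e-8    # resistivity of Cu at room temperature (ohm m)

tau = m / (n * e**2 * rho)  # from sigma = 1/rho = n e^2 tau / m
print(f"scattering time ~ {tau:.1e} s")  # about 2.5e-14 s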

In hindsight, it is all "obvious" and "reasonable" but spare a thought for Drude in 1900. It was only 3 years after the discovery of the electron, before people were even certain that atoms existed, and certainly before the Bohr model...

This issue is worth thinking about as we struggle to describe and understand complex systems such as society, the economy, or biological networks. One can nicely see 1. and 2. above in a modest and helpful article by William Bialek, Perspectives on theory at the interface of physics and biology.

Tuesday, February 28, 2017

The value of vacations

This is the first week of classes for the beginning of the academic year.

In preparation for a busy semester, I took last week off work (my last four posts were automated) and visited my son in Canberra (where I grew up) and spent some time hiking in one of my favourite places, Kosciusko National Park. One photo is below. This reminded me of the importance of vacations and down time, of the therapeutic value of the nature drug, and of turning off your email occasionally.


Above Lake Albina on the main range.

Friday, February 24, 2017

Excellent notes on the Quantum Hall Effect

In the condensed matter theory group at UQ we regularly run reading groups, where we work through a book, review article, or some lecture notes. This is particularly important as our PhD students don't take any courses.

Currently we are working through some nice lecture notes on the Quantum Hall effect, written by David Tong. They are very accessible and clear, particularly in putting the QHE in the context of topology, edge states, Berry's phase, Chern insulators, TKNN, ...

On his website he also has lectures on a wide range of topics from kinetic theory to string theory.

Wednesday, February 22, 2017

Desperately seeking Weyl semi-metals. 2.

Since my previous post about the search for a Weyl semimetal in pyrochlore iridates (such as R2Ir2O7, where R = rare earth), I have read two more interesting papers on the subject.

Metal-Insulator Transition and Topological Properties of Pyrochlore Iridates 
Hongbin Zhang, Kristjan Haule, and David Vanderbilt

Using a careful DMFT+DFT study they are able to reproduce experimental trends across the series, R=Y, Eu, Sm, Nd, Pr, Bi.

They show that when the self-energy due to interactions is included, the band structure is topologically trivial, contrary to the 2010 proposal based on DFT+U.

They also find that the quasi-particle weight is quite small (about 0.1 for R=Sm, Nd and 0.2 for Pr). This goes some way towards explaining the fact that the infrared conductivity gives an extremely small Drude weight (about 0.05 electrons per unit cell), a puzzle I highlighted in my first post.

Field-induced quantum metal–insulator transition in the pyrochlore iridate Nd2Ir2O7 
Zhaoming Tian, Yoshimitsu Kohama, Takahiro Tomita, Hiroaki Ishizuka, Timothy H. Hsieh, Jun J. Ishikawa, Koichi Kindo, Leon Balents, and Satoru Nakatsuji

The authors make much of two things.

First, the relatively low magnetic field (about 10 Tesla) required to induce the transition from the magnetic insulator to the metallic phase. Specifically, the relevant Zeeman energy is much smaller than the charge gap in the insulating phase.
However, one might argue that the energy scale one should be comparing to is the thermal energy associated with the magnetic transition temperature.

Second, the novelty of this transition.
However, in 2001 a somewhat similar transition was observed in the organic charge transfer salt lambda-(BETS)2FeCl4. It is even more dramatic because it undergoes a field-induced transition from a Mott insulator to a superconductor. The physics is also quite similar in that it can be described by a Hubbard-Kondo model, where local moments are coupled to interacting delocalised electrons.

Monday, February 20, 2017

Senior faculty position in Experimental Condensed Matter available at UQ

My department has just advertised a faculty position. 

I will be interested to see how many applicants want to escape Trumpland for sunny Queensland [which BTW has excellent gun control and national health care...].


Friday, February 17, 2017

A new picture of unconventional superconductivity

Three key ideas concerning unconventional superconductors are the following.

1. s-wave and p-wave pairing (in momentum space) are associated with spin singlet and spin triplet pairing, respectively. This can be shown with minimal assumptions (no spin-orbit coupling and spatial inversion symmetry): the Pauli principle requires the pair wavefunction to be antisymmetric under exchange, so even-parity (s-wave) orbital pairing must combine with a spin singlet and odd-parity (p-wave) pairing with a spin triplet.

2. If superconductivity is seen in proximity to an ordered phase (e.g. ferromagnetism or antiferromagnetism) with a quantum critical point (QCP) then the pairing can be "mediated" by low energy fluctuations (e.g. magnons) associated with the ordering.

3. Non-fermi liquid behaviour may be seen in the quantum critical region about the QCP.

However, an interesting paper shows that the first two of these are not necessarily true.

Superconductivity from Emerging Magnetic Moments 
Shintaro Hoshino and Philipp Werner

They find spin triplet superconductivity with s-wave symmetry. This arises because there is more than one orbital per site and due to the Hund's rule coupling spin triplets can form on a single site.

They also find the pairing is strongest near the "spin freezing crossover" which is associated with the "Hund's metal", i.e. the bad metal arising from the Hund's rule interaction, and has certain "non-Fermi liquid" properties.

The results are summarised in the phase diagrams below, which have a striking similarity to various experimental phase diagrams that are usually interpreted in terms of point 2 above.
However, all the theory is DMFT and so there are no long wavelength fluctuations.


Tuesday, February 14, 2017

Four subcultures of the university

A while back I was in a discussion about "What is the culture of the university? What would a sociologist or anthropologist say?"

I thought about this quite a while and came to the conclusion that most universities (particularly research universities in the Western world) do not have a single culture, but rather four distinct subcultures.

First, let me make an observation about modern cosmopolitan cities: New York, Brisbane, Bangalore, Paris, London, ... Within each city, there can co-exist several distinct social groups and subcultures, e.g. African-American, Jewish, homeless, business elite, Muslim, WASPs, Hispanic, ...
Culture is not just about what kind of restaurants they eat at. It concerns values.
Although they may occupy the same physical space (and to a certain extent the same political and economic space), the values of these communities are often distinctly different. If you don't believe this, I suggest you talk to someone from one community who has married someone (or tried to) from a different community. Or to someone who has changed their religion from that of one community to another. These cross-cultural actions can be traumatic and divisive. There are small groups of people who may bridge more than one subculture, but they are in a minority. In reality, the amount of meaningful engagement and communication between the communities can be extremely small. Previously, I posted about what happens when the conflicting values of faculty and students collide.

So here are my four subcultures of the university.
I am deliberately being provocative and extreme to make the point that the university is more fractured than some realise or might acknowledge.

Scholars, monks, and nuns.
This consists of most faculty, graduate students, and a few "nerdy" undergraduates, such as those in special honours programs. They love learning and understanding things. Money is not so important. Some will happily work long hours because they love what they are doing. Research should not have to be justified in pragmatic economic terms. They think students should come to university to "expand their minds", not to get a piece of paper or a job. The university has intrinsic value.

Undergrads and party animals.
This sub-culture is provocatively captured in the novel, I am Charlotte Simmons by Tom Wolfe
According to Wikipedia
“Despite Dupont’s [the university] elite status, in the minds of its students, sex, alcohol, and social status rule the day. The student culture is focused upon gaining material wealth, physical pleasure, and a well-placed social status; academics are only important insofar as they help achieve these goals.”
Many undergraduates may not be party animals. Many are not as privileged as Dupont students. But the majority (and their parents) still have a completely functional view of education: it is a means towards employment and social advancement.

The neoliberal management class.
This is not just the very highly paid senior managers but also the massive support staff that goes with them. Keep in mind that at most universities more than half of the staff are not doing any teaching or research. The four key values are management, money, metrics, and marketing. Neoliberalism is like a religion: it defines rationality and morality. It is not to be questioned.

The invisible underclass.
This includes the cafeteria workers, janitors, "adjunct faculty" on short-term teaching contracts, and unpaid "visiting scholars" from the Majority World. They are poorly paid, have uncertain employment, and virtually no voice. Their main value is survival. Yet the university would grind to a halt without them. A testimony to their invisibility is that I did not include them in the original version of this post. However, I then read a moving New York Times article by Rosa Ines Rivera, a Harvard cafeteria worker, and an article about a Singapore student group that ran a special event to honor janitors at their university.

What do you think? Is this characterisation reasonable?

Friday, February 10, 2017

Instability of the Fermi liquid near the Mott transition

In the metallic state of many strongly correlated electron materials, Fermi liquid properties are only observed at relatively low temperatures, at a scale (the coherence temperature T_coh) that can be orders of magnitude less than the Fermi temperature that is estimated from the relevant electronic band structure. Above T_coh one observes a "bad metal" and the absence of quasi-particles.

These features are nicely captured by Dynamical Mean-Field Theory (DMFT).
An interesting question is whether this low-temperature scale can be captured in simpler theories.

Alejandro Mezio and I just finished a paper


The phase diagram at half filling is shown below. Note how near the Mott insulator T_coh is orders of magnitude smaller than W/2, the scale of the Fermi temperature for U=0. It is also much smaller than this scale multiplied by Z, the band renormalisation due to interactions.
We welcome comments.

Wednesday, February 8, 2017

Emergence of the Hubbard bands near the Mott transition

Dynamical Mean-Field Theory (DMFT) has given many insights into the Mott metal-insulator transition in strongly correlated electron materials. In the metallic phase, DMFT nicely describes the interplay between the quasi-particles associated with Fermi liquid behaviour and the Hubbard bands that also exist in the insulating phase. DMFT gives a first-order phase transition and captures the emergence of bad metallic behaviour and the associated transfer of spectral weight.

On the downside, DMFT is computationally expensive, particularly close to the Mott transition, as it requires the solution of a self-consistent Anderson impurity problem. [If Quantum Monte Carlo is used, one also has to do a tricky analytic continuation from imaginary time.] When married with atomistic electronic structure calculations (such as those based on Density Functional Theory (DFT)), DMFT becomes even more expensive. Sometimes I also feel DMFT can be a bit of a "black box."

Slave boson mean-field theory (SBMT) (and equivalently the Gutzwiller approximation (GA) to the Gutzwiller variational wave function) is computationally cheaper and also gives some insight. However, these approaches only describe the quasi-particles, completely miss the Hubbard bands and the associated physics, and give a second-order phase transition. This is sometimes known as the Brinkman-Rice picture.
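For the one-band Hubbard model at half filling, the Gutzwiller approximation gives the well-known Brinkman-Rice form of the quasi-particle weight,

Z = 1 - \left(\frac{U}{U_c}\right)^2,

which vanishes continuously at a critical interaction U_c; there is no trace of the Hubbard bands.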

There is a nice preprint that solves these problems.

Emergent Bloch Excitations in Mott Matter 
Nicola Lanatà, Tsung-Han Lee, Yongxin Yao, Vladimir Dobrosavljević

In addition to the physical orbitals they introduce "ghost orbitals" that are dispersionless (i.e. a flat band) and non-interacting. The Gutzwiller variational wave function is then constructed to include these "ghost orbitals", which makes it possible to capture the charge fluctuations in the physical orbitals.
One sees that the Hubbard bands emerge naturally (and, as a bonus, they are dispersive) provided one includes at least two ghost orbitals in the metallic phase and one in the Mott insulating phase.
There is a simple "conservation" of the number of bands at play here: three bands in the metal (the quasi-particle band plus the lower and upper Hubbard bands) and two in the insulator (just the Hubbard bands).
The authors state that the Mott transition is a topological transition because of the change in the number of bands.

The metallic (insulating) phase is characterised by three (two) variational parameters.

The results compare well, both qualitatively and quantitatively, with DMFT.

The figure below shows plots of the spectral function A(E,k) for different values of the Hubbard U. The Mott transition occurs at U=2.9.
The color scale plots are DMFT results.
The green curves are from the ghost orbital approach, with the size of the points proportional to the spectral weight of the pole in the one-electron Green's function.

Monday, February 6, 2017

A changing dimension to public outreach about science

I think it is worth noting that there are many distinct goals for public outreach activities concerning science. These include the following:

Show that science is fun, cool, and beautiful.

Teach about science, both with regard to how it is done and what we know from it.

Recruit students to study science, possibly at a particular institution.

Lobby for increased funding for science.

Enhance the public visibility of a specific institution (lab, university).

Defend scientific knowledge as reliable. 
This is particularly true of areas which have become politicised (partisan) such as climate change, childhood vaccinations, and evolutionary biology, and for which there are significant enterprises promoting "denial", "skepticism", or "alternative" views.

First, given these distinct goals, I think one needs to design activities that are tailored to a specific goal. Previously, I have discussed the problem of doing demonstrations for school kids that actually teach something about science rather than being like a magic show.
Perhaps a single activity can achieve more than one goal, but I think that is unlikely.

Second, what is interesting and of great concern is that the last goal is a relatively new one. There are now sizeable (and sometimes very vocal) sections of the community who think science cannot be trusted. This is well highlighted in a recent Op-Ed piece in the New York Times, A Scientist's March on Washington is a Bad Idea by Robert S. Young. I agree with his argument that given the nature of the problem a march may be counter-productive, particularly as it will be painted as just another "liberal" political lobby group. A better strategy is for scientists to engage with a diverse range of community groups at a more grass roots level. Sometimes this means using subtle and diplomatic strategies such as described in this NYT article and by Katharine Hayhoe.

Third, this problem is very challenging because it is part of a much larger political and social problem, particularly in the USA. There is now a significant fraction of the population who have become disenfranchised from, and distrustful of, a broad range of public institutions: government, multi-national companies, universities, mainstream media, Wall Street, "elites", ... and science gets lumped in with all that.


Postscript (April 25).
The Marches for Science have now happened around the world. There are broader concerns, beyond those raised by the NYT article, that are eloquently presented by Vinoth Ramachandra.

Friday, February 3, 2017

Should you put "theory" or "experiment" in the title of your paper?

A referee for a recent paper, entitled "Effect of hydrogen bonding on infrared absorption intensity", suggested that we should add "theory" to the title since the chosen title could equally be that of an experimental paper. In the end, we declined, but we did make it clearer in the abstract that the paper was purely theoretical.

I thought this was an interesting issue that I had not thought about explicitly before. If you look at the titles of papers, it is true that it is sometimes not clear whether a paper is theoretical, experimental, or joint theory and experiment. This is particularly true of theory papers with titles such as "Property X of material ABC" or experimental papers with titles such as "Strong electron correlations in materials class Y". To experts who are working on the same topic, or who know the authors, it may be obvious. But to others, it may not be so obvious.

Does it matter?
Surely if the abstract makes it clear then it is o.k.?
[Again, it is amazing how for some abstracts in luxury journals you have to read practically to the last sentence to figure it out. This is because experimental papers can be clothed in theoretical hype.]

The suggestion prompted me to do two things.
First, I looked through the titles of most of my papers and found that the only ones which contained "theory" were those which referred to a particular technique, e.g. "Dynamical mean-field theory" or "linear spin wave theory".

Second, I looked at the titles of some famous papers, such as BCS and by P.W. Anderson.
BCS is "Theory of superconductivity" and the abstract begins "A theory...".
PWA does have "theory" in some papers but not others.

The only conclusion I came to from all of this is that we should work hard on the titles (and abstracts) of our papers, since the title may determine whether or not they are read.

Maybe it is tangential, but it also reminded me that like Anderson, I am largely against combined theory and experiment papers.

What do you think? Does it matter?

Monday, January 30, 2017

The challenge of multiferroism in organic Mott insulators

A theoretical picture of the Mott insulating phase of organic charge transfer salts [such as (BEDT-TTF)2X] is that they can be described by a single-band Hubbard model on an anisotropic triangular lattice at half filling. The spin excitations can then be described by the corresponding Heisenberg model. In these models, each lattice site corresponds to a single anti-bonding orbital on a pair (dimer) of BEDT-TTF molecules. Thus the internal structure of the dimer, and the corresponding two-band Hubbard model at three-quarters filling, is "integrated out", leaving a one-band picture.
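To illustrate that last step, here is a minimal sketch (a standard toy estimate, with illustrative numbers of my own) of the effective on-site repulsion the dimer inherits. For a two-site Hubbard dimer with intradimer hopping t_b and intramolecular repulsion U, the effective U of the dimer, U_eff = E(2) + E(0) - 2E(1), tends to 2|t_b| when U is large.

import numpy as np

# Hubbard dimer: two molecular orbitals with hopping t_b and on-site repulsion U.
# Exact ground-state energies for N = 0, 1, 2 electrons:
#   E(0) = 0,   E(1) = -|t_b|,   E(2) = (U - sqrt(U^2 + 16 t_b^2)) / 2   (singlet)
# The effective on-site repulsion of the dimer treated as a single site is
#   U_eff = E(2) + E(0) - 2 E(1),  which tends to 2|t_b| as U -> infinity.
t_b = 0.25                       # intradimer hopping (eV), illustrative value
for U in (1.0, 2.0, 5.0, 20.0):  # intramolecular repulsion (eV), illustrative values
    E1 = -abs(t_b)
    E2 = 0.5 * (U - np.sqrt(U**2 + 16 * t_b**2))
    U_eff = E2 - 2 * E1
    print(f"U = {U:5.1f} eV:  U_eff = {U_eff:.3f} eV   (2|t_b| = {2*abs(t_b):.3f} eV)")

The scale of U_eff for the one-band model is thus set by the intradimer hopping rather than by the bare molecular U.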

However, there are some dielectric relaxation experiments that can be interpreted as inconsistent with the picture above.
The key question is whether there is charge order within the dimer, in particular, does it have a net dipole moment?
A 2010 theory paper by Hotta proposed this, together with an effective Hamiltonian for the Mott insulating phase in which the spins on the dimers and the dipoles are coupled together. She suggested that a spin liquid phase could be driven by the dipoles, rather than by spin frustration.
This picture also leads to the possibility of a multiferroic phase, in which magnetic order and ferroelectric order coexist.

There are two helpful recent reviews, presenting alternative views of the experiments.

Dielectric spectroscopy on organic charge-transfer salts
P Lunkenheimer and A Loidl

Ferroelectricity in molecular solids: a review of electrodynamic properties 
S Tomić and M Dressel

The figure below shows experimental measurements from this paper. (The authors of the first review above and Hotta are co-authors.) The figure shows the temperature dependence of the real part of the dielectric constant at different frequencies. Note how it becomes very large at low frequencies (almost static) near about 25 K, which coincides with the temperature at which this organic charge transfer salt becomes antiferromagnetic (with weak ferromagnetism due to spin canting).


The above dielectric behaviour is similar to what one sees at a ferroelectric transition.

However, one should be cautious about this interpretation for multiple reasons. These are tricky experiments.

Dielectric spectroscopy is a bulk probe, not a microscopic one. One is not measuring the electric dipole moment of a single unit cell but rather the electric polarisation of a bulk crystal that has surfaces and contains defects and impurities. For example, charge accumulation at the sample surface can enhance the measured dielectric constant and lead to significant frequency dependence, even when the actual material has no intrinsic frequency dependence. (This is known as Maxwell-Wagner polarisation or the space-charge effect.)
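To illustrate why this matters, here is a minimal sketch of the standard two-layer (series capacitor) Maxwell-Wagner model, with made-up parameters: a thin insulating surface layer in series with a weakly conducting bulk produces an apparent dielectric constant that is large and strongly frequency dependent at low frequencies, even though the intrinsic permittivities are modest.

import numpy as np

# Two-layer (series) Maxwell-Wagner model: bulk crystal + thin insulating surface layer.
# All parameter values below are hypothetical and only for illustration.
eps0 = 8.854e-12                      # vacuum permittivity (F/m)
f_layer = 0.01                        # surface layer is 1% of the sample thickness
eps_bulk, sigma_bulk = 10.0, 1e-4     # bulk: relative permittivity, conductivity (S/m)
eps_layer, sigma_layer = 5.0, 0.0     # surface layer: insulating

def eps_apparent(freq_hz):
    """Apparent (measured) complex relative permittivity of the stack."""
    w = 2 * np.pi * freq_hz
    e_b = eps_bulk - 1j * sigma_bulk / (eps0 * w)
    e_l = eps_layer - 1j * sigma_layer / (eps0 * w)
    return 1.0 / ((1 - f_layer) / e_b + f_layer / e_l)

for freq in (1e1, 1e3, 1e5, 1e7):
    print(f"{freq:8.0e} Hz: eps' = {eps_apparent(freq).real:7.1f}")
# At low frequency eps' -> eps_layer / f_layer ~ 500, far above the intrinsic value of 10.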

There are reports of significant sample dependence; the dielectric constant can vary by up to two orders of magnitude!

The origin of the dependence of the results on the direction of the electric field is not clear (at least to me). One usually finds the largest effects when the electric field is parallel to the least conducting direction (i.e. perpendicular to the layers) in the crystal.

The magnitude of the electric dipole moment that one deduces from the magnitude of the dielectric constant (by fitting the temperature dependence to a Curie form, as in the dashed line in the figure above) is an order of magnitude larger than the moment on single dimers that is deduced from infrared (IR) measurements. This last discrepancy is emphasized by the authors of the second review above.

(IR measures the vibrational frequencies of the BEDT-TTF molecules; spectral shifts are correlated with the charge density on the molecule. Splitting of spectral lines corresponds to the presence of charge order, as discussed here.)
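For concreteness, here is a minimal sketch of the kind of Curie analysis referred to two paragraphs above, with made-up data; the Curie-Weiss form, the Langevin-Debye relation C = N p^2 / (3 eps0 kB), and the dipole (dimer) density N are my assumptions for illustration, not values from the paper.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dielectric constant versus temperature above the transition.
T = np.array([30., 35., 40., 50., 60., 80., 100.])      # K
eps = np.array([450., 230., 160., 95., 70., 47., 37.])  # eps'(T), made-up values

def curie_weiss(T, eps_inf, C, T_cw):
    return eps_inf + C / (T - T_cw)

popt, _ = curve_fit(curie_weiss, T, eps, p0=[10., 2000., 25.])
eps_inf, C, T_cw = popt

# Langevin-Debye: C = N p^2 / (3 eps0 kB)  =>  p = sqrt(3 eps0 kB C / N).
eps0, kB = 8.854e-12, 1.381e-23      # SI units
N = 1e27                             # dipole (dimer) density per m^3, hypothetical
p = np.sqrt(3 * eps0 * kB * C / N)
print(f"C = {C:.0f} K, T_CW = {T_cw:.1f} K, p = {p / 3.336e-30:.2f} Debye")

The discrepancy highlighted above is between a dipole moment p extracted in this way and the much smaller moment per dimer inferred from the IR spectra.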

If one does accept that charge order occurs, further questions that arise include:

How do we know that the charge order is occurring within the dimers not between dimers?

Are these dielectric properties necessary or relevant for the insulating, superconducting, and magnetic properties (antiferromagnetism or spin liquid), or are they just a second-order effect (correlation rather than causation)?

What is the relevant effective Hamiltonian in the Mott insulating phase?

How is this similar to, and different from, multiferroic behaviour in inorganic materials?

What role does spin-orbit coupling [and specifically the Dzyaloshinskii-Moriya interaction] play?

What experimental signatures could be considered a "smoking gun" for the presence of electric dipoles on single dimers?

How does one understand the different experiments which probe the system on very different time scales?

Friday, January 27, 2017

What are the biggest discoveries in solid state electronic technology?

Watching an excellent video about the invention of the transistor stimulated me to think about other big discoveries and inventions in solid state technology.

Who would have thought that such a huge device would become the basis of an amazing revolution (technological, economic, and even social)?



In particular, which are the most ubiquitous ones?
For which devices did both theory and experiment play a role, as they did for the transistor?

I find it worthwhile to think about this for two reasons. First, this semester I am again teaching solid state physics and it is nice to motivate students with examples.
Second, there is too much hype about basic research in materials and device physics that glosses over the formidable technical and economic obstacles to new materials and devices becoming ubiquitous. Can history give us some insight as to what is realistic?

Here is a preliminary list of some solid state devices that are ubiquitous.

transistor

inorganic semiconductor photovoltaic cell

liquid crystal display

semiconductor laser

optical fiber

giant magnetoresistance used in hard disk drives

blue LED used in solid state lighting

lithium battery

Some of these feature in a nice brochure produced by the US National Academy of Sciences.

Here are a few that might belong on the list, but I am not sure, as I think they are more niche applications with limited commercial success. Of course, that may change...

thermoelectric refrigerators

organic LEDs

superconductors (in MRI magnets and as passive filters in mobile phone relay towers)

Is graphene in any commercial device?

What would you add or subtract from the list?

Wednesday, January 25, 2017

Tuning the electronic ground state of organic crystals by isotope substitution

One puzzle concerning organic charge transfer salts (such as those based on the BEDT-TTF molecule) is how the Mott metal-insulator transition can be tuned by substituting hydrogen with deuterium. I find it particularly puzzling because the relevant hydrogen bonds are weak and so one does not expect significant isotope effects.
Similar concerns are relevant to cases of isotopic polymorphism [where the actual crystal structure changes] in molecular crystals such as pyridine.

I recently came across a nice example that I do understand.

Hydrogen-Bond-Dynamics-Based Switching of Conductivity and Magnetism: A Phase Transition Caused by Deuterium and Electron Transfer in a Hydrogen-Bonded Purely Organic Conductor Crystal 
Akira Ueda, Shota Yamada, Takayuki Isono, Hiromichi Kamo, Akiko Nakao, Reiji Kumai, Hironori Nakao, Youichi Murakami, Kaoru Yamamoto, Yutaka Nishio, and Hatsumi Mori


The key to understanding how H/D substitution changes the electronic state is that there is a hydrogen bond between two of the organic molecules with an oxygen-oxygen distance of 2.45 A. As highlighted (and explained) in this paper, around this distance the geometric isotope effect is largest (on deuteration the oxygen-oxygen distance increases to almost 2.5 A), leading to a significant change in the energy barrier for proton transfer.
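A back-of-the-envelope way to see why deuteration matters so much here (my own illustration, not a calculation from the paper): in a harmonic approximation the zero-point energy of the stretch mode scales as 1/sqrt(reduced mass), so the O-D zero-point energy is lower than the O-H one by a few tens of meV, which is significant when the proton-transfer barrier is itself small.

import numpy as np

# Harmonic zero-point energy E_ZPE = (1/2) h c nu, with nu proportional to 1/sqrt(mu),
# where mu is the reduced mass of the X-H(D) stretch. All numbers are illustrative.
h, c = 6.626e-34, 2.998e10              # Planck constant (J s), speed of light (cm/s)
nu_OH = 2500.0                          # cm^-1, a red-shifted O-H stretch in a strong H bond
mu_OH = 16.0 * 1.0 / (16.0 + 1.0)       # O-H reduced mass (amu)
mu_OD = 16.0 * 2.0 / (16.0 + 2.0)       # O-D reduced mass (amu)
nu_OD = nu_OH * np.sqrt(mu_OH / mu_OD)
for label, nu in [("O-H", nu_OH), ("O-D", nu_OD)]:
    zpe_meV = 0.5 * h * c * nu / 1.602e-19 * 1e3
    print(f"{label}: nu = {nu:6.0f} cm^-1,  ZPE = {zpe_meV:5.0f} meV")
# The deuteron sits roughly 40 meV lower in the well, so it is more easily localised on
# one oxygen, which feeds back into the O-O distance (the geometric isotope effect) and
# the barrier for proton transfer.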

The figure below nicely shows, using DFT-based calculations and the measured crystal structures for both isotopes at two different temperatures, how the barrier changes, leading to a change in the charge state of the molecules.
The H and D isotopes are at the top and the bottom, respectively.