Tuesday, August 30, 2016

Bad metals, Mott insulators, and superconductivity in fullerenes

Last week in Ljubljana, I had a nice discussion with Denis Arčon about this paper concerning fullerenes, A3C60 where A = alkali metal.

Optimized unconventional superconductivity in a molecular Jahn-Teller metal
Ruth H. Zadik, Yasuhiro Takabayashi, Gyöngyi Klupp, Ross H. Colman, Alexey Y. Ganin, Anton Potočnik, Peter Jeglič, Denis Arčon, Péter Matus, Katalin Kamarás, Yuichi Kasahara, Yoshihiro Iwasa, Andrew N. Fitch, Yasuo Ohishi, Gaston Garbarino, Kenichi Kato, Matthew J. Rosseinsky and Kosmas Prassides

This is a rich system and is summarised in the (temperature vs. volume) phase diagram below. Superconductivity appears in proximity to a Mott (Jahn-Teller) insulator.

The JT metal is a bad metal. The novel signature here is that, because the electrons are almost localised on individual molecules, there is a Jahn-Teller effect. This is seen in the Fano line shape of the associated vibrational spectra.

Aside: I have often wondered about a good theoretical description of the Fano line shape for vibrational spectra in metals because it is quite common in organic charge transfer salts. There is an old theory by Michael Rice.  However, it does not even mention Fano. 
Yesterday, Darko Tanaskovic brought to my attention a nice paper which explicitly relates the Rice theory, via the relevant Feynman diagrams, to the Fano form for the spectral density. (See especially Section III.)

Charged-phonon theory and Fano effect in the optical spectroscopy of bilayer graphene 
 E. Cappelluti, L. Benfatto, M. Manzardo, and A. B. Kuzmenko

For these fullerenes the minimal effective Hamiltonian is a three-band Hubbard model with Hund's rule coupling and electron-phonon interaction (which leads to the Jahn-Teller effect on isolated C60 molecules). Extensive calculations based on Dynamical Mean-Field Theory (DMFT) describe this phase diagram and have been reviewed by Massimo Capone, Michele Fabrizio, Claudio Castellani, and Erio Tosatti.

Friday, August 26, 2016

What are the worst nightmare materials?

Not all materials are equal. Over the years I have noticed that there are certain materials that are rich, complex, and controversial.

Common problems (opportunities) are that it is extremely hard to control their chemical composition; they may have many competing ground states, a tendency to inhomogeneity and instability, structural phase transitions, and sensitivity to impurities (especially oxygen and water); and surface and bulk properties can be significantly different. One never knows quite which material system is being measured, regardless of what authors and enthusiasts may claim.

Consequently, these materials can be an abundant source of spurious experimental results leading to endless debates about their validity and possible exotic theoretical interpretation.

Pessimist's view: the material is a minefield for both experimentalists and theorists and with time the "exciting" results will disappear. They are a scientific nightmare. Be skeptical. Avoid.

Optimist's view: this is exciting science and there are promising technological applications. Jump in. With time we will sort out all the details.

Here are some of my candidates for the "best/worst" nightmare materials I have encountered.

Cerium oxides: controlling the stoichiometry is very tricky, and chemical and physical properties vary significantly with oxygen content. Yet because of (or in spite of?!) this, they have significant industrial applications...

Water: polywater, "memory", and the "liquid-liquid" transition in the supercooled phase....

1T-TaS2: it undergoes multiple charge density wave transitions as the temperature is lowered, there is "star of David" charge density wave order with a thirteen (!) site unit cell, a Mott insulator transition, superconductivity upon doping, and ultrafast electrical switching behaviour, ...

Purple bronze, Li0.9Mo6O17: superconductivity, non-Fermi liquid, large thermopower, ...


What do you think are the "best" nightmare materials?

Wednesday, August 24, 2016

Subtleties in the theory of the diamagnetic susceptibility of metals

A magnetic field can couple to the electrons by two distinct mechanisms: by the Zeeman effect associated with the spin of the electrons and via the orbital motion of the electrons.

In the absence of spin-orbit coupling the Zeeman effect is isotropic in the direction of the magnetic field and leads to Pauli paramagnetism.

The orbital motion leads to Landau diamagnetism; for free electrons with a parabolic dispersion (and mass m) in three dimensions its magnitude is one-third of (and opposite in sign to) the Pauli susceptibility.

What happens for a parabolic band with effective mass m*?
The Pauli susceptibility scales as m*/m, while the Landau susceptibility scales as m/m*. Thus in semiconductors (where m* can be much less than m) the latter can become dominant.
In a simple Fermi liquid enhancing the interactions will make the spin susceptibility even more dominant over the orbital susceptibility.
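These mass-ratio scalings can be checked with a few lines of Python. This is my own sketch (not from any of the papers above), using the standard result chi_Landau = -(1/3)(m/m*)^2 chi_Pauli, which follows from chi_Pauli being proportional to m* together with the scalings stated above:

```python
def landau_over_pauli(mass_ratio):
    """Ratio chi_Landau / chi_Pauli for a single parabolic band.

    mass_ratio = m*/m. For free electrons (m* = m) this is -1/3;
    its magnitude grows as (m/m*)**2 when m* is small.
    """
    return -(1.0 / 3.0) * (1.0 / mass_ratio) ** 2

# Free electrons: diamagnetic term is one-third of the Pauli term.
print(landau_over_pauli(1.0))   # -> -0.333...

# Light-mass semiconductor, m* = 0.1 m (an illustrative value):
# the orbital diamagnetism now dominates by a factor of about 30.
print(landau_over_pauli(0.1))   # -> -33.3...
```

The sign convention here is that a negative ratio means the orbital contribution is diamagnetic.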

What happens in the presence of a band structure?
This problem was "solved" by Peierls in 1933, leading to this formula.

Aside: this is the paper where he introduced the famous Peierls factor for effect of an orbital magnetic field on a tight-binding Hamiltonian.

However, according to two recent papers there is more to the story.

Geometric orbital susceptibility: quantum metric without Berry curvature
Frédéric Piéchon, Arnaud Raoux, Jean-Noël Fuchs, Gilles Montambaux

Orbital Magnetism of Bloch Electrons I. General Formula
Masao Ogata and Hidetoshi Fukuyama

What happens in the presence of electron-electron interactions?
This is the question I am ultimately interested in. Clearly in a Fermi liquid regime, strong correlations will enhance m*/m (or equivalently reduce the effective band width) and reduce the relative importance of the orbital susceptibility. However, in other regimes, such as in a bad metal, it is not clear. One treatment of the effect of spin fluctuations is here.

Tuesday, August 23, 2016

Violation of AdS-CFT bounds on the shear viscosity

Tomorrow I am giving a seminar on the absence of quantum limits to the shear viscosity in the Theoretical Physics department at the Stefan Institute in Ljubljana, Slovenia.

Here is the current version of the slides.
The main results are in this paper.

This is Lake Bled, a popular tourist destination outside the city.

Friday, August 19, 2016

Signatures of strong vs. weak coupling in the superconducting phase?

Superconductivity in strongly correlated systems such as cuprates, organic charge transfer salts, and the Hubbard model presents the following interesting puzzle or challenge.

On the experimental side the superconducting phase can extend from a region of strong correlation (close proximity to the Mott insulator) to one of weak correlation (a Fermi liquid metal with a small mass enhancement).

On the theoretical side, one can obtain the d-wave superconducting state from a weak coupling approach (renormalisation group or random phase approximation) or a strong coupling approach such as an RVB variational wave function.
Aside: This also relates to the challenge/curse of intermediate coupling.

Given that in the two extremes the superconducting state emerges as an instability from two very different metallic states, the questions are:
What signatures or properties does the superconducting state (or "mechanism") have of these two distinct regimes (strong vs. weak coupling)?
Is it even possible that there is actually a phase transition (or at least a crossover) between different superconducting states?

Here is a partial answer, following this paper
Energetics of superconductivity in the two-dimensional Hubbard model 
E. Gull and A. J. Millis

In the weak coupling regime (smaller U, higher doping) the superconducting state becomes stable (as for traditional BCS theory) due to the fact that the potential energy decreases by more than the increase in kinetic energy.
In contrast, in the strong coupling regime (large U, lower doping, in the pseudogap region) the opposite occurs. The superconducting state becomes stable because the kinetic energy decreases by more than the increase in potential energy.
This is summarised in the figure below.

Aside: note how the condensation energy (the energy difference) is much less than the absolute values of the kinetic and potential energies. This highlights how, as is often the case in strongly correlated systems, there is a very subtle energy competition. This is one reason why theory is so hard and why one can observe many competing phases.

I thank Andre-Marie Tremblay, Peter Hirschfeld and other Aspen participants for stimulating this post.

Thursday, August 18, 2016

Signatures of strong electron correlations in the Hall coefficient of organic charge transfer salts

Superconducting organic charge transfer salts exhibit many signatures of strong electron correlations: Mott insulator, bad metal, renormalised Fermi liquid, ...

Several times recently I have been asked about the Hall coefficient. There really is little experimental data. More is needed. But, here is a sample of the data for the metallic phase.
Generally, increasing pressure reduces correlations and moves the system away from the Mott insulator. Almost all of these materials are at half filling, and at high pressures there is a well-defined Fermi surface, clearly seen in angle-dependent magnetoresistance and quantum oscillation experiments.

The figure below is taken from this paper. At low temperatures the Hall coefficient is weakly temperature dependent and has a value consistent with the charge carrier density, i.e., what one expects in a Fermi liquid. However, above about 30 K, which is roughly the coherence temperature corresponding to the crossover to a bad metal, R_H decreases significantly, and appears to change sign.
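For orientation, the low-temperature Fermi-liquid value can be estimated from R_H = 1/(ne). The sketch below is mine, and the carrier density is a hypothetical round number (one carrier per ~1 nm^3 of unit-cell volume), not a value taken from the paper:

```python
e_charge = 1.602e-19  # electron charge in C

def hall_coefficient(n):
    """Free-carrier (Fermi-liquid) Hall coefficient R_H = 1/(n e), in m^3/C."""
    return 1.0 / (n * e_charge)

# Illustrative density: one carrier per ~1 nm^3 of unit-cell volume
# (a hypothetical round number, not taken from the experimental data).
n = 1.0e27  # carriers per m^3
print(hall_coefficient(n))  # ~6e-9 m^3/C
```

A strongly temperature-dependent R_H, as in the data, is precisely what this simple free-carrier formula cannot produce.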

The next data is from this paper and shows measurements on two different samples of the same material.
Note how in the two samples, for a pressure of 4 kbar, the temperature dependence and magnitude are not the same. This should be a point of concern about the reliability of the measurements.
But, broadly one sees again a significant temperature dependence, particularly on the scale of the coherence temperature.

Finally, the data below is from a recent PRL, and is for a material that is argued to be away from half filling (doped with 0.11 holes per lattice site (dimer)).

At high pressures there are a large number of charge carriers and weak temperature dependence, consistent with a Fermi liquid with a "large" Fermi surface.
However, at low pressure (i.e. when the metal is more correlated) the Hall coefficient becomes large and temperature dependent.

I thank Jure Kokalj, Jernej Mravlje, Peter Prelovsek, and Andre-Marie Tremblay for stimulating discussions about the data.

I welcome any comments.
Later I will post about the theoretical issues.

Monday, August 15, 2016

Aspen versus Telluride

The Aspen Center for Physics is a unique and wonderful institution offering relaxed and stimulating workshops in the midst of great scenery. It has been the setting for many famous collaborations and papers.
Maybe it is an apocryphal story, but I heard that the theoretical chemists got jealous and so started the Telluride Science Research Center.

This (northern) summer I was privileged to spend time at both, and so I offer some friendly comparisons. Both are excellent and so if you have opportunity to attend either, I would encourage you to.

Participants.
This is highly selective and mostly restricted to faculty, with a few postdocs. Workshops are small, with typically only about twenty participants. For Telluride you have to be invited; for Aspen you apply and are then selected.

Workshop length.
For Telluride most workshops run for 5 days. For Aspen they run for 3-4 weeks and participants must come for a minimum of two weeks. Apparently, in the good old days people used to stay for longer.

Format.
For Telluride this is closer to a small conference with many talks during each day; although some mornings or afternoons, and sometimes whole days, are free. In contrast, in Aspen there are usually at most a couple of hours of talks, and sometimes none, on each day. The emphasis is really on informal interactions.

Housing.
In both cases this is arranged by the Center. In Aspen it is subsidised by an NSF grant and so more affordable (e.g. $75 per week for "bachelor" housing = shared apartment).

Organisation.
Both Centers are run by very professional staff who take care of all the logistics, so the organisers' sole responsibility is selecting participants and setting the program. Thus, if you want to organise a small workshop this is a very easy way to do it.

Facilities.
Aspen has its own building with offices, so all participants have a desk in a shared office. Telluride meets in a local school and there are no desks for participants, which is fine since the programs are so busy.

Powerpoint vs. blackboards.
Something unique about Aspen is that most talks are on a blackboard. Generally, only experimentalists are allowed to use PowerPoint. I really think this is a very positive thing, as it significantly increases clarity and focuses talks on the key points.

Local scenery.
Although it is spectacular in both towns, I do think that Telluride is superior, because you can see massive snow-covered peaks from within the town.

The gondola.
Again Telluride wins. The gondola is free. Most days I take it to the top of the mountain just to bask in the views. In Aspen I have never taken the gondola because I am too cheap...

Hiking.
In both towns there are nice short hikes literally from the town. Both have trails along the river running through the town. However, for Telluride there are serious hikes you can do starting from the town or the top of the gondola. For Aspen, you have to drive out of town or pay to get the bus to Maroon Bells, which takes about an hour.

Altitude sickness.
Both towns are above 8,000 feet and so this is not unusual. It is strange that I have been to Telluride six times and never had a problem, but on my last two visits to Aspen I did have a mild case. One important preventative measure is to drink lots of water.

Travel and accessibility.
The scenic locations in the Rocky mountains come with a cost. Neither is easy or cheap to get to. For both, one may have to fly through Denver, where flight delays and missed connections are not unusual. Some participants drive from Denver.

Public lectures.
Both Centers run regular evening lectures throughout the summer, given by participants. These are often quite well attended by the local community and tourists. Given the demographics of both towns (the rich and powerful) I think this is a wise investment. You never know if the next Moore, Gates, or Kavli will be in the audience...

In Aspen there is a weekly colloquium, given by someone from one of the current workshops, that all participants are required to attend, in the hope of encouraging interaction between workshops. In the past two weeks I heard two excellent talks on biological physics, by K.C. Huang and Lucy Colwell.
Telluride does not do this. Maybe it should.

Physics versus chemistry.
Most of the Telluride workshops are on chemistry or biology, with a smattering on materials science, involving physicists. As far as I am aware Aspen doesn't do much to encourage interactions between physics and chemistry. I think both Centers could benefit from trying to facilitate this more.

Saturday, August 13, 2016

Diminishing returns and opportunity costs in science

Consider scientific productivity as a problem in economics. One has a limited amount of resources (time, money, energy, political capital) and one wants to maximise the scientific output. Here I want to stress that the real output is scientific understanding. This is not the same as numbers of papers, grants, citations, conferences, newspaper articles, ...

The limited amount of resources is relevant at all scales: individual, research group, fields of research, departments, institutions, funding agencies, ...

As time passes one needs to face the problem of diminishing returns with increased resources. Consider the following diverse set of situations.

Adding extra parameters to a theoretical model.

Continuing to work on developing a theory without advances.

Calculating higher order corrections to a theory in the hope of getting better agreement with experiment.

Applying for an extra grant.

Taking on another student.

In quantum chemistry using a larger basis set or a higher level of theory (i.e. more sophisticated treatment of correlations).

Developing new exchange-correlation functionals for density functional theory (DFT).

Trying to improve an experimental technique.

Repeating measurements or calculations in the hope of finding errors.

When one starts out it is never clear that these efforts will bear fruit. Sometimes they do. Sometimes they don't. But inevitably, I think one has to face the law of diminishing returns.

These thoughts were stimulated by two events in the last week. One was reading Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law by Peter Woit. The second was being part of a workshop on superconductivity that featured many discussions about the high-Tc cuprate superconductors.
The book chronicles how in spite of thousands of papers over the past thirty years high energy theory has not really produced any ideas beyond the standard model that are relevant to experiment, or even a theory that is coherent.
I don't think the cuprates as a field are in such dire straits. There are real experiments and concrete theoretical calculations. But it may be debatable whether we are gaining significant new insights. This is a hard problem on which we have made some real progress, but will we make more?

Even when one is making advances one needs to consider the useful economic concept of opportunity cost: if the resources were directed elsewhere would one produce greater scientific gains? This again applies at all scales, from personal to funding agencies.

So how does one decide to move on? When is it time to quit?
I think there is a highly subjective and personal element to deciding at what point one is at the point of diminishing returns.

One also needs to be careful because there are plenty of times in the history of science where individuals persevered for many years without progress, but eventually had a breakthrough.
e.g. Watson and Crick, John Kendrew and the first protein crystal structure, theory of superconductivity, ...

I welcome suggestions.
How do you decide when you are at the point of diminishing returns?
How do you decide when a research field or topic is at that point?

Friday, August 12, 2016

Strong correlations and thermal expansion in iron based superconductors

There is a nice preprint
Strong Correlations, Strong Coupling and s-wave Superconductivity in Hole-doped BaFe2As2 Single Crystals 
F. Hardy, A. E. Böhmer, L. de' Medici, M. Capone, G. Giovannetti, R. Eder, L. Wang, M. He, T. Wolf, P. Schweiss, R. Heid, A. Herbig, P. Adelmann, R. A. Fisher, C. Meingast

The figures below summarise some of the key physics. The top is the phase diagram.

The bottom shows the specific heat coefficient gamma as a function of alkali metal content (Cs to Rb to K), and then of fractional K content (doping x).

Note that

a. The black curve shows values from density functional theory (DFT) based calculations. The blue points are experimental data, which are as much as an order of magnitude larger, reflecting strong correlations.

b. As one goes K to Rb to Cs the correlations are enhanced, somehow reflecting the "negative pressure" associated with the increasing ion size.

c. The experimental trend is captured nicely by calculations using slave spins (SS) to treat the relevant multi-band Hubbard model with Hund's rule coupling and band structure from DFT.

The thermal expansion alpha is particularly interesting because it is dominated by electronic effects (unlike in most metals) and shows a coherent-incoherent crossover from a Fermi liquid (where alpha/T is constant) to a bad metal at a temperature T*.
As one goes K to Rb to Cs alpha/T is enhanced reflecting the increased correlations.

One reason I am particularly interested in the manifestation of strong correlations in the thermal expansion is that this also occurs in organic charge transfer salts, as discussed at length in a recent paper I published with Jure Kokalj. But we did struggle to obtain a detailed quantitative description of the experiments, partly because of the crystallographic complexity.
It would be nice to see if a DMFT + LDA treatment of the relevant model for these iron compounds could describe the data above.

I thank Christoph Meingast for bringing this work to my attention and helpful discussions about it.

Wednesday, August 10, 2016

Science shows for kids

On Sunday I went to the Science Street Fair hosted by the Aspen Science Center. It featured booths from a diverse range of organisations, many offering hands-on activities for children.
I was on the lookout for new ideas for demonstrations to do with kids. A new one for my dry ice repertoire is the smoke ring device featured in the video below.

There were public performances by Doctor Kaboom and Mr. Freeze from Fermilab.

One challenge of such performances is to go beyond "wow" and "gee whiz" to trying to teach something about how science works.
Dr. Kaboom tries to do this by testing a hypothesis about why the catapult was invented (video). However, I thought it was a little drawn out and was not sure if the point got through.

Mr. Freeze has a host of demonstrations based on liquid nitrogen. The one with the exploding cardboard box is pretty cool (video).

He also has a nice demonstration to show how the volume of a gas is about one thousand times larger than the volume of the liquid of the same amount of material. This involves using a 44 gallon garbage bag, shown below.

The demonstration is important and useful for at least two reasons.
For kids' demonstrations, this fact is the key to many demonstrations involving rockets or explosions. One example is baking soda rockets, which are based on the production of CO2 gas.
For undergraduates, this thousand fold difference is the basis for using the Clausius-Clapeyron relation to explain why the slope of liquid-gas phase boundaries is much less than solid-liquid phase boundaries in pressure-temperature diagrams.
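Both points can be checked with a back-of-the-envelope calculation. The Python sketch below is mine; it uses approximate textbook values for water and the Clausius-Clapeyron relation dP/dT = L/(T dV):

```python
R = 8.314  # gas constant, J/(mol K)

# Water as the example (approximate textbook values).
T_boil = 373.15   # K
P_atm = 1.013e5   # Pa
v_liq = 1.8e-5    # molar volume of liquid water, m^3/mol
v_gas = R * T_boil / P_atm  # ideal-gas molar volume at the boiling point

ratio = v_gas / v_liq
print(ratio)  # ~1.7e3: the gas occupies roughly a thousand times the liquid volume

# Liquid-gas boundary: the large volume change makes the slope shallow.
L_vap = 4.07e4  # latent heat of vaporisation, J/mol
slope_lg = L_vap / (T_boil * (v_gas - v_liq))
print(slope_lg)  # ~3.6e3 Pa/K

# Solid-liquid boundary: the tiny volume change makes the slope nearly
# vertical (and negative for water, since ice is less dense than the liquid).
L_fus = 6.01e3    # latent heat of fusion, J/mol
dv_fus = -1.6e-6  # m^3/mol on melting
slope_sl = L_fus / (273.15 * dv_fus)
print(slope_sl)  # ~ -1.4e7 Pa/K
```

The two slopes differ by roughly four orders of magnitude, which is why the liquid-gas boundary looks almost flat next to the solid-liquid boundary in a pressure-temperature diagram.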

A piece of trivia I learnt: Fermilab uses thousands of gallons of liquid nitrogen per day, but this is less than McDonald's!

I find it a little ironic that one major part of Fermilab's public outreach involves condensed matter demonstrations.

Tuesday, August 9, 2016

Overdoped cuprates are marginal Fermi liquids

I am giving a talk tomorrow at the Superconductivity workshop at the Aspen Center for Physics.
Here is the current version of the slides. I will only cover the first half of the slides in the talk. The rest are from a longer version.

Often it is claimed that the overdoped cuprates are Fermi liquids. However, work with Jure Kokalj and Nigel Hussey has shown that a wide range of experimental results can be described in terms of an electronic self energy that includes a marginal Fermi liquid component with the same angular dependence as the pseudogap, i.e. there are cold spots near the nodes of the superconducting state.
What is particularly interesting to me is that this shows that even in the overdoped region one sees precursors of the distinct signatures of the strange metal and the pseudogap regions, which occur at optimal doping and underdoping, respectively.

The talk is largely based on this PRL and this PRB.

Sunday, August 7, 2016

Ahmed Zewail (1946-2016): father of femtosecond chemistry

The New York Times has an obituary for Ahmed Zewail who died this week. He received the Nobel Prize in Chemistry for work that used ultrafast lasers to probe the dynamics of chemical reactions and the associated potential energy surfaces. This is all standard today. However, before Zewail, many reaction mechanisms and the associated surfaces were just theoretical constructs and conjectures.
I often use the picture below from one of his papers, which I posted about years ago.

I also posted about a nice article about the future of chemical physics and a Nature column about the importance of basic science and how to cultivate it. His wisdom needs to be heeded.

The NYT obituary points out how after the Nobel, Zewail took on an admirable challenge that was greater than anything he had tackled in science: the promotion of scientific research and education in the Arab world, and particularly in his native Egypt. I really hope he will have a significant legacy there. In this vein, Margaret Warner has a nice tribute to Zewail on the PBS site.

Friday, August 5, 2016

Deducing broken rotational symmetry from angle-dependent magnetoresistance

There is an interesting preprint
Broken rotational symmetry on the Fermi surface of a high-Tc superconductor 
B. J. Ramshaw, N. Harrison, S. E. Sebastian, S. Ghannadzadeh, K. A. Modic, D. A. Bonn, W. N. Hardy, Ruixing Liang, P. A. Goddard

They measure the interlayer magnetoresistance as function of magnetic field direction (see below) and from this deduce that the C4 symmetry of the crystal is broken to C2 in the charge density wave phase that occurs in the pseudogap region.

They then compare their experimental results to a calculation that uses a coherent three-dimensional Fermi surface (reconstructed due to the CDW) and a Boltzmann equation.

One might be concerned about the use of a three-dimensional Fermi surface because
a. the CDW correlation length between the layers is small
b. the interlayer charge transport is not necessarily coherent.

However, based on work I did long ago with Perez Moses and Malcolm Kennett [see for example this paper], I think the theoretical results are robust to these concerns. What we showed is that for the two contrasting situations shown below, the angle-dependent magnetoresistance is identical.

The top shows a coherent three-dimensional Fermi surface.
The bottom shows two layers that are coherently coupled together. The interlayer momentum is conserved in hopping between the layers.
One does not need coherence over more than two layers.

Another minor comment is that the authors did most of their calculations numerically. However, I think a lot can be done analytically using the expression below (from the Kennett paper), simplifying for the case of an isotropic scattering time (tau) and the low-field limit (omega_c tau much less than one).

I thank Sam Lederer and Steve Hayden for bringing this work to my attention and asking about these issues.

Wednesday, August 3, 2016

Superconductivity in Aspen

For the next two weeks I am at the Aspen Center for Physics participating in a workshop on Superconductivity. A blog for the meeting captures its flavour, spanning a diverse range of systems and debates. I was not here for the first two weeks. Here are two related experimental results for the underdoped cuprates that have generated a lot of discussion.

1. A charge density wave (CDW) phase.

This has been observed directly with X-rays. The figure below is taken from this paper.

2. A jump in the charge carrier density versus doping.

Hall resistance measurements at high magnetic fields imply that for small doping the charge density scales with the doping p [p=0 corresponds to the Mott insulator that occurs at half filling] and at higher dopings, 1+p. This is summarised in the figure below from this paper.

A few comments.

1. Is the CDW phase relevant to understanding the pseudogap, superconductivity, and the strange metal phase?
There is debate about this. On the one hand it does compete with superconductivity. It could provide the much sought after quantum critical point below the superconducting dome in the phase diagram. CDW fluctuations could produce the pseudogap and/or strange metal properties. On the other hand, it may just be an "artefact" due to residual interactions that only become important when the magnetic field suppresses the interactions that determine the zero field phase diagram.
Why does it generate so much interest?
Well it is a concrete result and we are desperate for them and some clue to these long standing puzzles.

2. The large Fermi surface and carrier density equal to 1+p at large doping is what one expects from a simple Fermi liquid and Luttinger's theorem. The fact that at small doping the charge density is proportional to p may have a boring explanation or an interesting one.
Boring: the antiferromagnetic (AFM) order that occurs for small p reconstructs the Fermi surface due to the periodicity associated with AFM.
Interesting: this is a strongly correlated effect associated with doping a Mott insulator.

3. In a strongly correlated metal deducing information about the charge carrier density and the Fermi surface from measurements of the Hall coefficient and/or the thermoelectric power is subtle, as emphasised by Shastry, and discussed here.

Tuesday, August 2, 2016

Two results from quantum chemistry that physicists should/might worry about

Solid state physicists love model effective Hamiltonians such as Hubbard, Heisenberg, Anderson, and Holstein because they are simple but have rich properties, can explain diverse phenomena, and present a significant intellectual challenge.

However, it is worth considering two key assumptions. The first is embedded in the model and the second in approximate solutions.
Computational quantum chemistry does raise some questions that physicists rarely seem to think about. They are hard and scary.

1. Rigid orbitals.
Each lattice site is associated with some sort of atomic or molecular orbital. A beauty of the models is one does not have to know exactly what this orbital is.
Now consider different energy eigenstates of the model. These only differ in the orbital occupations and in the coefficients in the superposition of Slater determinants (or of creation and annihilation operators). The localised orbitals do not change.
Similarly, when one changes lattice or vibrational co-ordinates in a Holstein or Su-Schrieffer-Heeger model, one assumes the orbitals associated with the lattice sites don't change.

2. Strong correlations can be captured with just a few Slater determinants.

Physicists like simple variational wave functions such as Gutzwiller, RVB, ...

Yet if one considers the treatment of simple molecules by high level quantum chemistry methods one finds two things need to be taken into account to obtain an accurate description of strong electron correlations.

A. Orbital relaxation can be significant.
If one considers the orbitals (whether localised or delocalised) that appear in the many-body wave function for different eigenstates (e.g. singlet, triplet, low-lying excited states) one sees these orbitals can be quite different. This is why one has methods such as the Breathing Orbital Valence Bond method, reviewed here. It discusses specific examples of how the quality of a wave function can be improved by using different orbitals in the covalent and ionic parts of a wave function, and as bond lengths change.

If one wants to think about parametrising an effective Hamiltonian for the cuprates, such as the t-J model this paper is relevant.

Heisenberg exchange enhancement by orbital relaxation in cuprate compounds 
A.B. van Oosten, R. Broer, W.C. Nieuwpoort

B. Many Slater determinants are often required.

Chemists are very good at performing calculations that involve even hundreds of thousands of Slater determinants (configurations).
Consider the case of benzene. If one wants to get accurate energies for the ground and low-lying excited states, one finds that working with delocalised molecular orbitals requires literally hundreds of thousands of Slater determinants.
However, if one works instead with localised orbitals and valence bond theory one finds one can use a handful of Slater determinants. For example, the ground state can be described in terms of the five structures below. Eighty per cent of the weight is in the first two. Their superposition forms the legendary resonating valence bond (RVB) structure. However, I want to stress that 20 per cent of the weight is in the rest, and this is not a trivial amount.

I doubt these issues matter if one just uses effective model Hamiltonians to obtain physical insights, describe qualitative behaviour and trends, and make semi-quantitative comparisons with experiment. However, at some point one is not going to be able to get a detailed quantitative description of real materials.