Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net



Friday 17 November 2017

The Dark Side of Galaxy Evolution (IV)

So here we are at last : the fourth and final part of my little introductory series to galaxy evolution. After this point normal blog services will be resumed, though since I don't have any followers I imagine no-one cares. Oh well. This fourth lecture is unavoidably subject to far more of my own biases than the others. I worry that I may, perhaps, have gone a little bit too far at one point. But what's done is done and I stand by my conclusion. So I'll not try to make amends here but instead continue to report a reasonably accurate transcript of what I said. Hell, in a regular blog post I'd probably be a lot more forceful anyway. As always, if you want the original PowerPoint slides you can get them here, but this post contains all the same material. 


LECTURE 4 : All Hope Abandon, Ye Who Enter Here



It seems appropriate to end this course with a look at some of the current problems in galaxy evolution. We've looked at what we know, how we know, and why we know our main conclusion, so let's finish with a look at what we don't yet know, and how this field may progress in the coming years. Sometimes, the list of problems can seem overwhelming. Amongst others :
  • Missing satellites
  • Planes of satellites
  • "Too big to fail"
  • Missing baryons
  • The origin of UDGs
  • Bizarre objects
  • Dark galaxies
  • The Tully-Fisher relation and MDAR
  • Dark matter versus MOND
Given all of these issues, people seem to divide into two camps : either the situation is hopeless and we need to throw out our most basic ideas and come up with something new, or, equally extreme, everything is fine and we'll soon have all the answers with just a few more minor adjustments to our theories.

If I'd given this course a few years ago, I definitely would have been more in favour of the first camp than the latter. But now it seems to me that the situation has changed, and with the advent of these new all-singing all-dancing simulations with baryonic physics, I lean more towards the second group. At the same time, the sudden, unexpected discovery of things like UDGs cautions us that the Universe doesn't always cooperate. Similarly, some of the more extreme outliers might just be weird objects, but they could also herald something much more dramatic. So while it does seem to me that we're making real progress, we should be wary of becoming over-confident. I'll therefore be ending this course on a note of cautious optimism.

I'll be covering most of the issues on the above list. This is a lot to get through, so let's get right to it.


1 : Missing Satellites

I covered this a bit last time, but it's worth recapping here and going into a bit more detail because this is the issue in galaxy evolution and cosmology today. It's also now known to be just a small part of a much larger problem, as I'll describe.

The missing satellite problem is of course this tension between exciting theoretical expectation and the grim and disappointing observational reality :


In this case, we were expecting around 500 satellite galaxies of the Milky Way but we actually find about 50. This comes from these big n-body simulations which use pure dark matter. Here's an example, the Via Lactea simulation from 2006, which shows the formation of a Milky Way-mass object :


It's beautiful to watch : an intricate network of filaments with a massive halo slowly assembling itself at the nexus of the web. The assembly history is complex, with the halo growing by mergers but with some sub-halos just being tidally disrupted by their encounters. Over time, the filamentary pattern disappears on the smaller scales, but when the view zooms back out again you see that the filaments are still present on larger scales.

This problem was first quantified independently by Moore et al. and Klypin et al. in 1999. Here's one of Moore's simulations :


And here's what we actually see around the Milky Way :


Two things are immediately obvious. First, the number of satellite galaxies around the Milky Way is far less than the simulations predict. Second, their distribution is totally different. In the simulations the result is a slightly squashed spheroid - it's not perfectly spherical, but to a very good approximation the satellites are distributed isotropically. That's not at all what's actually observed, with the real satellites being found in a very narrow plane. But more on this later. For now, let's look at the difference in the sheer numbers before we worry about locations.

It's important to note that this problem depends strongly on the mass-resolution of the simulations. Here's Via Lactea's low resolution run :


And here's their high resolution version, using 27 times more particles :


More resolution means you can resolve the formation of smaller objects, resulting in far more substructure - something like 50,000 subhalos from the final Via Lactea simulation. Not all of these would be detectable though; most would be so small that they could never have enough stars (if any at all) to be visible. For comparison, Moore's simulation used 1 million particles, Via Lactea 1 (in 2006) used 200 million, and Via Lactea 2 (in 2007) used 1 billion. So a factor of 1,000 increase in resolution in 8 years.

It's inconvenient having to run these big numerical simulations, so Klypin parameterised the results in an analytical formula. The distribution of the subhalos is typically measured using the circular velocity functions I've shown previously, since this is a good proxy for dynamical mass and can be measured directly for objects in our Local Group. Klypin's formula, as far as I understand it, is a simple empirical fit to the simulation data without any theoretical basis behind it, but it's been shown to work on larger scales than groups as well. Also note from the graph that the discrepancy between theory and observations is worse at low velocities.

This is slightly different from the previous versions I've shown as it's a cumulative function, so at any point it shows the number of subhalos at or above that velocity, rather than in some bin centred on that velocity.
Klypin's formula (as re-expressed in a slightly more useful way by Sternberg, McKee & Wolfire 2002) is as follows :
It's pretty straightforward to use. N refers to the number of subhalos which have vcirc above a specified value, found within a distance d of a parent halo of mass Mvir. Just to emphasise, Mvir refers to the parent halo while vcirc relates to the subhalos. It's worth playing around with this to see how strongly it depends on the input parameters. We'll be using it in anger later on.
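To make the cumulative convention concrete : given a list of subhalo circular velocities, N(≥ v) just counts everything at or above each threshold. Here's a minimal sketch - the toy sample below is randomly generated purely for illustration (it is not Klypin's fit, and the slope used to draw it is an arbitrary choice) :

```python
import numpy as np

def cumulative_velocity_function(vcirc, v_grid):
    """Empirical cumulative function : N(>= v) for each v in v_grid.

    vcirc  : array of subhalo circular velocities (km/s)
    v_grid : velocities at which to evaluate the counts
    """
    vcirc = np.asarray(vcirc)
    return np.array([(vcirc >= v).sum() for v in v_grid])

# A toy subhalo sample, drawn from a steep power law so that small subhalos
# vastly outnumber big ones. Sample and slope are invented for illustration.
rng = np.random.default_rng(42)
v = 10.0 * rng.pareto(2.75, size=5000) + 10.0

grid = np.array([10.0, 20.0, 40.0, 80.0])
N = cumulative_velocity_function(v, grid)
print(N)  # non-increasing by construction : each count includes all faster subhalos
```

The key property is that N(≥ 10 km/s) counts the entire sample, and each higher threshold can only shrink the count - unlike a binned (differential) function.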

Since those first simulations, observations have somewhat alleviated the problem, with deeper imaging - principally through the SDSS - discovering more faint galaxies. We shouldn't necessarily expect them to solve the problem completely : no survey is currently searching the entire sky, and the new discoveries are very hard to detect (as discussed last time).

Footprint of the SDSS.
But they do seem to have helped quite significantly. Here's a slightly more recent velocity function from Simon & Geha 2007 :

The solid line is from Via Lactea, the classical dwarfs are shown with open squares and the new discoveries as solid squares. They've clearly helped quite a bit at the low velocity end, with the discrepancy down to a factor of three at vcirc > 10 km/s. But it's still a factor of ten at the lowest velocities, and even when corrections are made for the survey's limited coverage and other completeness effects, it doesn't seem possible to solve the problem completely. And also take a look at the higher velocity region. Here (up as far as 30 km/s) the discrepancy hasn't been changed at all by the new findings, which didn't find any larger satellite galaxies. I'll come back to that issue later.


Missing satellite galaxies

The velocity function is normally used to parameterise this problem because it's a good proxy for dynamical mass and you can measure it directly for both the simulations and observations (but see below), giving a like-for-like comparison. The disadvantage is that while that's possible in the Local Group, it's much harder to get the sensitivity and kinematic measurements needed for more distant environments. So instead people use luminosity functions. These aren't as well-suited as velocity functions, but they're very much easier to do because you just need photometry. Here's an example from Sandage, Binggeli & Tammann 1985 for the Virgo cluster :


Apparent and absolute magnitude are directly equivalent here because all the galaxies are at the same distance in the cluster. Since this was from 1985, the missing satellite problem was not yet a thing. Fortunately, by chance the "divergent" line has a slope of -2, which is what the standard model has since predicted. So clearly there's a very strong disagreement between theory and observation.
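As a quick aside on why this equivalence works : one constant distance modulus maps apparent to absolute magnitude for the whole cluster. A sketch (the ~16.5 Mpc Virgo distance is a commonly quoted value, assumed here rather than taken from the lecture) :

```python
import math

def distance_modulus(d_mpc):
    """m - M = 5 log10(d / 10 pc), with d given in Mpc."""
    d_pc = d_mpc * 1.0e6
    return 5.0 * math.log10(d_pc / 10.0)

# One constant offset converts every Virgo member's apparent magnitude to
# an absolute one, so the shape of the luminosity function is unchanged :
mu = distance_modulus(16.5)
print(round(mu, 2))
```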

But luminosity functions have many more uncertainties associated with them than velocity functions. They're biased against low surface brightness galaxies, as we've seen. They have issues with the Zone of Avoidance of the Milky Way and incomplete sky coverage, though that can be corrected for. They require internal extinction corrections, since the galaxies' own dust causes them to appear fainter than they really are. And the faint-end slope is also known to be (weakly) morphologically-dependent. More fundamentally, as discussed last time, it's very, very hard to predict how optically bright a halo should be based on models which only use dark matter. Nevertheless, multiple teams have examined this issue using different models to convert the dark to luminous mass, and the problem hasn't gone away.

Fortunately observational advances mean we can now start to directly measure the velocity function out to much larger distances. This is largely thanks to improved HI surveys such as ALFALFA (see lecture 1), which has measured the kinematics of ~20,000 galaxies out to distances ~200 Mpc. Which means we're now talking about the full gamut of galaxy environments, from voids to clusters and everything in between. Here's the velocity function according to ALFALFA :



Which reveals exactly the same discrepancy between theory (blue line) and observations (black circles). And here's another one from Klypin et al. 2015 :



Exactly the same discrepancy. So now we have evidence from multiple teams using different CDM simulations and different observations, all showing the same problem. Now, velocity functions aren't completely without their own issues - they're just much less important than the uncertainties involved in dealing with luminosity. Tidal encounters and ram pressure stripping can disturb the gas so that its kinematics might not be such a good measure of true circular velocity after all, but that effect should be limited to a small fraction of the sample. More problematic is that complex inclination-based selection effect I described in detail last time. But it's difficult to see how that could have caused the very dramatic disagreement we see between theory and observation. So the problem does seem to be real and not due to measurement or simulation errors, despite the many uncertainties associated with both.

Perhaps more importantly, we now know that the problem exists across multiple environments, and most galaxies in these samples aren't themselves satellites. So we don't just have a missing satellite problem, we have a full-on missing galaxy problem. Were it the case that only our Milky Way had this issue, then we could probably get away with blaming it on some peculiar but local environmental effect. But it seems to occur everywhere, suggesting that perhaps there's a very serious problem with the physics of the models.


Missing galaxies : possible solutions

Before we rush to claim the death of a theory, we should note that there's still quite a bit of wiggle room that might let us use less drastic measures. For example, perhaps we've overestimated the masses of the giant halos, which would overpredict the number of associated satellites. I'll come back to how these estimates are made a bit later (it's not as simple as just using the dynamical mass directly from the observations), but it seems unlikely we could have got things this badly wrong : giant galaxies tend to be relatively easy to observe.

A more promising idea is that we've underestimated the true masses of the smaller halos, in which the baryonic physics might play more of a key role. In particular, one idea is that maybe the gas doesn't probe as much of the dark matter halo for small galaxies as we've been assuming (recall that we can only directly measure mass as far as the baryons extend - we can't get the total halo mass directly because we don't know how much further it continues). Dwarf galaxies often have rotation curves which are rising and haven't flattened off, which lends credence to this idea.
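To see why a rising rotation curve implies an underestimated halo, consider a standard NFW profile, whose circular velocity keeps climbing out to roughly two scale radii. A sketch in arbitrary units (the half-scale-radius gas extent is an assumption chosen purely for illustration) :

```python
import numpy as np

def vcirc_nfw(r, rs=1.0, rho0=1.0, G=1.0):
    """Circular velocity of an NFW halo, in arbitrary units.

    M(<r) = 4 pi rho0 rs^3 [ ln(1 + r/rs) - (r/rs) / (1 + r/rs) ]
    v^2   = G M(<r) / r
    """
    x = r / rs
    m_enc = 4.0 * np.pi * rho0 * rs**3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m_enc / r)

r = np.linspace(0.05, 20.0, 2000)  # radii in units of the scale radius
v = vcirc_nfw(r)
v_max = v.max()                    # the curve peaks at roughly 2.2 scale radii

# If the gas disc only reaches ~0.5 scale radii (an assumed, illustrative
# extent), the outermost measured velocity falls short of the halo's peak :
v_gas_edge = vcirc_nfw(np.array([0.5]))[0]
print(v_gas_edge / v_max)
```

On this toy profile the shortfall is roughly 20% in velocity, hence about a third in the inferred dynamical mass - and the truncated curve is still rising at its last point, just as observed in many dwarfs.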

Recently, Brooks et al. 2017 have tried using one of these big simulations with full baryonic physics included to study this idea. They conclude that this does play a significant role. At low masses, the gas doesn't probe the full rotation curve so it underestimates the true rotational velocity of the dark matter halo.


This is a very appealing idea. The major caveat is that when you examine the paper in detail, it becomes clear just how complex the baryonic physics is (e.g. supernovae and star formation feedback that ejects material) and how much they have to tweak it. And we all know now about the dangers of complexity.



Another possibility is that satellite galaxies might be tidally disrupted, including their dark matter halos, so that the smallest structures don't actually exist. But this is something I'll describe in more detail later, so for now I'll just mention that this idea only works for satellites - it doesn't work for galaxies in the field.

The baryonic physics might also render fewer small galaxies detectable than we expect. Brooks et al. again :



Where F_luminous is the fraction of subhalos of a given circular velocity that have > 10^6 solar masses of stars, which they say should be detectable with current surveys. In combination with the gas rotation underestimating the true rotation, this could account for the missing galaxies problem. But again, remember the elephant : these ideas are very interesting, but it's far too early to say if they're right.

A related popular idea is that early feedback in dwarf galaxies might have ejected their gas and so halted further star formation. Thus again the dark halos should exist, they just don't contain any stars. "Squelching" is the idea that this feedback happened in the era of reionisation, when highly luminous Population III stars heated much of the gas in the early Universe. Dwarf galaxies would have lost it, but it would have been more difficult to do for giant galaxies which have much higher escape velocities. 
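The underlying comparison is just escape velocity versus the thermal speed of photoheated gas. A rough sketch, with halo masses and radii that are illustrative guesses rather than measured values :

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_escape(m_sun, r_kpc):
    """Escape velocity v_esc = sqrt(2 G M / r), returned in km/s."""
    return math.sqrt(2.0 * G * m_sun / r_kpc)

# Illustrative guesses, not measured values :
v_dwarf = v_escape(1.0e8, 2.0)     # a small pre-reionisation halo
v_giant = v_escape(1.0e12, 200.0)  # a Milky Way-ish halo

# Gas photoheated to a few x 10^4 K moves at roughly its sound speed,
# ~15 km/s : comparable to the dwarf's escape velocity but negligible
# compared to the giant's.
print(round(v_dwarf), round(v_giant))
```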

Simon & Geha explored this possibility. They found that this can work (red, blue and dotted curves below), but they're careful to note that there are many caveats. Everything has to be just right : the mass of the Milky Way, the luminosity of young stars, the formation process of satellite galaxies, etc. It's a matter of careful fine tuning, rather than a single mechanism that would naturally solve it. It also has the problem that it predicts dwarf galaxies should have relatively simple star formation histories, essentially forming all their stars early on, which is contrary to the complex histories observed in real dwarfs.


A more radical possibility is that maybe the dark matter isn't cold as is often supposed. If it has just a little bit of its own velocity dispersion (or is even weakly self-interacting), that can help prevent the formation of the smallest substructures without - hopefully - affecting the giant galaxies too much. But can this work ? Different groups find different results. Here's the plot from ALFALFA :


Which is in pretty good agreement with the observations. But Klypin et al. 2015, who try using various kinds of warm dark matter, find very different results, and none of them are compatible with the observations :


Why the strong difference between the two groups ? I don't know. I'm really not expert enough to hazard a guess at what's gone wrong, let alone judge who's right and who's wrong.

Before moving on to another topic, there's one final aspect of this problem that needs to be looked at.


Too big to fail ?



"Too big to fail" is a phrase more normally associated with the financial sector. The idea was that during the crisis of 2008, some banks, which were doing a lousy job and were getting into severe difficulties, were just too important for the economy. They were failing, but they were too big to be allowed to fail, so governments intervened with massive cash injections and suchlike.

The idea in extragalactic astronomy is that maybe we can appeal to complex baryonic physics to explain the missing galaxy problem at very low masses, but this doesn't work for objects at higher masses. Remember, there's still a discrepancy between theory and observation for objects of quite high rotation speeds, and new observations haven't improved that situation at all. Essentially, some objects are so massive that there's no way they should be able to avoid accumulating enough gas to form a visible amount of stars... they shouldn't be able to fail, but they fail anyway. Or more simply, the missing galaxy problem extends even to objects of quite high circular velocities.

This is expressed much more subtly in the original paper by Boylan-Kolchin et al. 2011, but that's the essence of it. Here's their money plot :


Strictly speaking this problem is about the densities of the galaxies, which is why they plot radius as a function of circular velocity (the predicted densities of the simulated galaxies are much higher than what's observed in satellites of the Milky Way). In the plot, the filled circles are galaxies found in simulations, whereas the grey band shows observational data. What you see is that a significant fraction of simulated galaxies lie well outside this grey band at quite high circular velocities, from ~30 all the way up to 90 km/s. It's very difficult to see what could have caused objects this massive to "fail", if you count a galaxy as successful only if it forms as many stars as possible - but more on that later. See also Papastergis et al. 2015 for a discussion on whether this problem exists just for the Milky Way or also in the general field.
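To see why this is a density problem : a circular velocity measured at a given radius fixes the mean enclosed density, via M(<r) = v^2 r / G. A minimal sketch :

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mean_density(vcirc_kms, r_kpc):
    """Mean density enclosed within r, implied by the circular velocity there.

    M(<r) = v^2 r / G  and  rho_bar = 3 M / (4 pi r^3), in Msun / kpc^3.
    """
    m_enclosed = vcirc_kms ** 2 * r_kpc / G
    return 3.0 * m_enclosed / (4.0 * math.pi * r_kpc ** 3)

# The same circular velocity reached at half the radius implies four times
# the mean density - the sense in which simulated subhalos are "too dense" :
ratio = mean_density(30.0, 0.5) / mean_density(30.0, 1.0)
print(ratio)
```

So a point's position in the radius-velocity plane is really a density measurement, which is why the simulated points sitting above the grey band means the simulated halos are denser than the observed satellites.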

What are the possible solutions ? Again, appealing to the baryonic physics might have enough wiggle-room - see Brooks et al. 2017 and also Verbeke et al. 2017 (though that paper is quite a bit harder). It seems that a combination of factors may be at work, with the gas not extending as far as is normally assumed so it doesn't probe the full rotation curve. Gas discs of dwarfs, in this model, are quite a bit thicker than in giant galaxies because of supernovae feedback causing them to "puff up". Our estimates of the densities may therefore be inaccurate. Or perhaps the interaction of the dark matter and the baryons is more complex than we realise, with the baryons somehow disrupting the formation of the smallest halos. Or maybe these halos do exist but really have "failed"... much more on that later.


2 : Planes of satellites

All of this very naturally and obviously leads me on to discuss the movie 300, this really stupid film about the Spartans at the battle of Thermopylae. What on Earth am I on about ? Well, apart from needing a joke every 20 minutes or so to keep everyone awake, this movie has many quotes which are historically accurate and relevant for extragalactic astronomy, such as :



And, on being told that the Persian arrows would blot out the Sun :



Well, okay, maybe these two aren't very important for extragalactic astronomy (though they are historically accurate : despite the stupidity of the movie, these are things the Spartans are supposed to have said). But there is one quote which is relevant, though it didn't make it into the film because it's not quite so dramatic :



The idea being that the Spartans are such good fighters that they don't care how many the enemy are, they just need to know where they are so they can go and beat 'em (or alternatively that the Spartans fight with cleverer strategies than their enemies). Of course, for extragalactic astronomy this is about the planes of satellites problem. Here's the Milky Way - an artist's impression - shown with its attendant satellites as grey spheres :



Remember the simulations predicted that they should be in a slightly flattened spheroid, but in reality what we see is quite a narrow plane. This can be seen if we restrict ourselves to the brightest classical satellites but also (as here) if we include the fainter recent discoveries. It can also be seen looking at the distribution of stellar streams, though I don't count this as independent evidence - once you've got a plane of satellites, their interactions must occur within that plane and so stellar streams must be located in that plane almost by definition.

This is a genuinely very interesting feature and it needs an explanation. But there have also been claims that planes are common features found around other nearby galaxies - and if that's the case it would be a real challenge for the standard model.

Now, here I must make a confession and a warning. When I'm preparing these lectures I obviously choose topics I already know something about, because starting from scratch would simply be too much work. But I always go back and check the source literature, both to make sure that I've got the numbers right and that I haven't had some mistaken idea stuck in my head for whatever reason. Usually, this means I revise the number or make other adjustments - small changes, nothing more.

Not so in this case. Here I have to say that when I read the original papers, I had many "what the hell ?!?" moments. Frankly, I think this field is complete nonsense. I suppose the professional, responsible thing to do would be to present both points of view in an impartial way and let you make up your own minds. Fortunately for you, I'm not gonna do that. I'm gonna give you my honest opinion, which should be at least more entertaining. But if you want a different perspective I strongly suggest that you consult the various links provided and also, if at all possible, try and discuss this with someone else. Because I won't (can't) claim that I'm not biased about this area.

OK, so, one of the most popular claims for another plane of satellites is around Andromeda. This is a field of study that's only really possible for our nearest neighbours because of the sensitivity requirements, and the so-called Great Plane of Andromeda is the most well-studied of these other features. So here's a plot from Ibata et al. 2013, showing their survey field and their detected satellite galaxies :



Which looks pretty random; you could probably draw a whole bunch of linear features through those data points if you wanted. But note that these papers claiming to have found planes never show you this unbiased view. They always show you the selection after they've made it, so that you're immediately biased towards seeing whatever they want you to see. So for these presentations I've gone and removed their colour scheme so you can see the data without that preconception of what they think is present.

But what this survey also did was to measure the distances to the galaxies. Now the first thing I would have done in that situation is to plot the distribution in 3D. That's what's physically meaningful - true spatial position. But that's not what the team said they did. They claim that the first thing they did was to plot the distribution of galaxies as they would appear on the sky to an observer living in the Andromeda galaxy*. And if you do that, you get this :

* "Claim" is maybe a bit unfair of me. I don't believe for one second that the first thing they did was plot the Andromeda-centric sky distribution, because that's just silly. But in terms of searching for planes, then this must be true, as we'll see.
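For what it's worth, the Andromeda-centric projection is trivial to compute, and writing it down makes the objection concrete : the distance information is simply thrown away. A sketch using randomly generated toy positions rather than the real satellite data :

```python
import numpy as np

def m31_centric_sky(positions):
    """Project 3D positions (relative to M31, arbitrary axes) onto the sky
    as seen by an observer at M31's centre. Returns longitude and latitude
    in degrees. Crucially, the distances are discarded - any such projection
    throws away the third dimension.
    """
    x, y, z = positions.T
    r = np.linalg.norm(positions, axis=1)
    lon = np.degrees(np.arctan2(y, x))
    lat = np.degrees(np.arcsin(z / r))
    return lon, lat

# Toy example : an isotropic cloud of 30 fake "satellites" (random numbers,
# not the real survey data).
rng = np.random.default_rng(1)
pts = rng.normal(size=(30, 3)) * 150.0  # kpc, hypothetical
lon, lat = m31_centric_sky(pts)
print(lon.shape, lat.shape)
```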



That's quite a bit more interesting than the first view. The distribution is clearly not isotropic, with a big thick band running through the middle. The paper is a little bit unclear as to how they then identify this band. First they say the identification is visual (by eye), but in an appendix they say that they use a structure-finding algorithm. Whichever method, the feature they identify is highlighted in red here :



Now this becomes a bit strange. There doesn't seem to be any good reason to avoid including A2, A15 and A10 (the three galaxies just south of the main band) - those are part of what drew my attention to the band in the first place. And it's not really obvious why A22 (far left) has been excluded either. You might also be wondering why A12 and A27 (extreme left and right) have been included. Well, that's because they lie close to a great circle fitted through the main band :



But this is biasing the result. It's selecting what they think should be there, not what the data itself actually shows : if you select galaxies based purely and literally on the distribution in the above plot, you'd get something different. If we now return to the original sky view, here's what they get :



Quite a narrow band running across the sky. If we use the galaxies that I would have (subjectively) identified, then we see a much thicker structure :



Which is barely different from the original distribution ! Those minor differences in one projection make a big difference to the other. Or perhaps not, because the really key thing is the true 3D space - if the distances are correct, then this whole structure could be much thinner than it appears from any one viewpoint. My point is that there's no good reason to do the selection in this way - it's just replacing one projection with another. There's nothing physically significant about viewing the sky from Andromeda. What you want to do is look at the distribution of satellites in 3D. When you do that for the Milky Way galaxies the plane becomes much more obvious than in a sky plot. What happens for the M31 plane ?
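Measuring a plane in 3D is not hard, incidentally : the standard trick is that the shortest principal axis of the point cloud (from an SVD) is the plane normal, and the rms distance along it is the plane's thickness. A sketch on toy data :

```python
import numpy as np

def plane_thickness(points):
    """Rms thickness of the best-fitting plane through a 3D point cloud.

    The plane passes through the centroid; its normal is the right singular
    vector with the smallest singular value (the shortest principal axis).
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    normal = vt[-1]
    return np.sqrt(np.mean((centred @ normal) ** 2))

rng = np.random.default_rng(0)
# A genuinely flattened toy distribution versus an isotropic one :
flat = rng.normal(size=(50, 3)) * np.array([100.0, 100.0, 10.0])
blob = rng.normal(size=(50, 3)) * 100.0
print(plane_thickness(flat) < plane_thickness(blob))
```

A real plane shows up as a thickness much smaller than the cloud's overall extent, without having to cherry-pick which points to include.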



Oh yes, wow, look at that nice clear plane, it's sooo obvious. Come on people. There's no plane here. You wouldn't spot a plane in a million years looking at that. Here it is with their plane highlighted :



Yes, OK, they've selected something. But as far as I can tell this appears to be an essentially random selection of objects within an isotropic cloud that happen to look like a plane from an arbitrary viewing angle. If we show the angle at which the plane appears most "clearly" :



Then there are at least three blue objects supposedly not in the plane for no good reason. And yes, you can see something in the middle from the red objects. But you could equally decide that there was another, parallel plane on the left - and even one on the right if you were feeling generous ! If you subdivide your sample enough, you can find as many planes as you like.
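That last point is easy to back up numerically : draw points isotropically, exhaustively search subsets for the thinnest best-fit plane, and a thin "plane" always emerges - indeed any three points define a perfect one. A toy sketch (nothing astronomical about the numbers) :

```python
import numpy as np
from itertools import combinations

def thinnest_subset_rms(points, k):
    """Search every k-point subset for the one with the thinnest best-fit
    plane (via SVD through the subset's centroid); return that rms thickness."""
    best = np.inf
    for idx in combinations(range(len(points)), k):
        sub = points[list(idx)]
        centred = sub - sub.mean(axis=0)
        _, _, vt = np.linalg.svd(centred)
        rms = np.sqrt(np.mean((centred @ vt[-1]) ** 2))
        best = min(best, rms)
    return best

# 12 points drawn isotropically - no plane has been put in by hand :
rng = np.random.default_rng(3)
pts = rng.normal(size=(12, 3)) * 100.0

all_rms = thinnest_subset_rms(pts, 12)  # the full sample's best-fit plane
sub_rms = thinnest_subset_rms(pts, 7)   # cherry-pick the "planar" seven
tri_rms = thinnest_subset_rms(pts, 3)   # any three points : a perfect plane
print(sub_rms < all_rms, tri_rms < 1e-9)
```

Selecting the subset because it is thin guarantees a thin result; the three-point case is just the limit of that logic.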

It gets slightly better when they account for the kinematics, but honestly not that much. Knowing the velocity of the satellites they can work out if they're moving towards or away from M31. Here's the system seen at right-angles to the view above :



North of M31, most of the red objects are moving in one direction with respect to M31, while to the south most of them are moving in the opposite direction. This is a signature consistent with rotation. Now, if they'd merely claimed that the thinnest structure they'd detected showed evidence of rotation, I'd buy that. I might have some reservations, but I'd be interested and I'd provisionally accept it. The problem is that's not what they do. They use rotation as independent evidence for the existence of the thin plane in the first place. And this is not correct, because if you select objects by rotation, then you have to include several of the blue objects in the above plot, making the plane much fatter. It's the same situation as for the case of selecting by structure, which as we saw gave a very different result to what they choose. And if you select by a combination of both, then you can play all kinds of statistical games.
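For reference, the bookkeeping behind such a rotation signature is straightforward : count the fraction of satellites consistent with a coherent approaching-on-one-side, receding-on-the-other pattern. A sketch with invented numbers, purely to show the accounting :

```python
import numpy as np

def rotation_coherence(z_north, v_los):
    """Fraction of satellites whose line-of-sight velocities fit a coherent
    pattern : approaching on one side of the host, receding on the other.

    z_north : signed position along the projected north-south axis (kpc)
    v_los   : velocity relative to the host (km/s)
    """
    sense_a = (np.sign(z_north) != np.sign(v_los)).mean()  # north approaching
    sense_b = (np.sign(z_north) == np.sign(v_los)).mean()  # north receding
    return max(sense_a, sense_b)

# Invented numbers, purely to show the accounting :
z = np.array([50.0, 80.0, -40.0, -90.0, 30.0, -60.0])
v = np.array([-70.0, -30.0, 55.0, 60.0, -20.0, -45.0])
coh = rotation_coherence(z, v)
print(coh)
```

The catch described above applies here too : a high coherence fraction for a pre-selected subset is not independent evidence that the subset is special.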

Ultimately the problem is that statistical selection procedure, which is designed to select the thinnest structure. And that's a crazy way of selecting things, because they are deliberately selecting what they want to find. If they'd used some truly independent physical parameter - say, brightness - and then found that all the brightest galaxies were found in a plane (as happens for the Milky Way), that would be far more convincing*. They select the thinnest structure because it's the thinnest structure, not because they have any independent reason to suspect it exists. It reminds me very much of an xkcd webcomic. The word "tautology" just means unnecessary repetition, so :


* Addendum to the blog post not found in the original lecture : they actually explicitly state that this is not the case, that there is no difference in physical properties between the galaxies in and out of the "plane" ! I mean, good grief...



I repeat : you can't just find a thin structure by searching for the thinnest structure, because if you do that, there's no way you can fail to find a thin structure ! Now, I'll stop short of saying I definitely don't think there's a plane around M31, but I think the evidence is at best marginal and certainly doesn't warrant the enormous amount of investigation that's been invested in it. I'm not quite convinced it doesn't exist, but I'm certainly not in the least bit convinced that it does exist (if that makes sense). But in other cases it gets even worse.


Are planes a problem ?

As I mentioned, if planes are in fact common around most galaxies and not unique to the Milky Way, then that becomes a very interesting challenge for the standard model. But are they ? We can only do this for nearby galaxies, but there are a few other claims. There are actually claims for multiple planes around M31, which got me so irritated that I couldn't bring myself to read the paper - I only made the suggestion of multiple planes earlier as a joke, but some people apparently take it seriously. Sigh. But it gets even worse. There are also claims for two planes around the nearby elliptical galaxy Centaurus A. OK, so let's have a look at that, skipping straight to the 3D unbiased view :



I mean, WHERE ?!?! Come on people, what are you doing ??! Well, let's look at their selection :



And just how do they select these features ? I don't know, because the paper doesn't specify. They just say that they select them, as though it was obvious. Aaaaaaaaarrrggghhhh.



OK, so if the subjective selection procedures are no good, do they at least have some good statistical measure ? No, they don't. The problem is that this is incredibly vulnerable to the effects of completeness. Here's their view which they say shows the "planes" most clearly :



Apparently one plane is formed of the "group" to the north and the other from the "group" to the south, which isn't even that obvious here. But watch what happens if I add a single other galaxy at the right spot :



That double structure completely disappears. I'm not saying that's what would happen in reality - it is possible that a deeper survey might uncover galaxies that actually strengthen the appearance of two planes. All I'm saying is that if your conclusion is so incredibly vulnerable to a single errant data point, then you shouldn't be publishing this in peer-reviewed journal articles.

Sigh. I just don't understand how anyone can look at a data set like this and think, "yep, there's a plane there", I really don't. But let's say for the sake of argument that they are real and I've made a terrible mistake somewhere. Would they be a problem for the standard model ?

No - not necessarily. Admittedly different groups claim different statistics for the frequency of planes in simulations. The most enthusiastic claim that they're incredibly rare in standard model simulations, << 1%, a figure I don't think is credible based on the statistical methods we've seen so far. Other groups find 2%-10%, which seems more plausible. Let's go with that 2%, which is still low but not terrifyingly low. Does that mean the Milky Way plane is worryingly unusual ? To understand this, we have to bring in all of the statistical methods I was describing last time.

On the one hand, we've got the Copernican Principle, which says basically :



Or slightly more accurately, that we shouldn't assume we're in a special or privileged position, because if we do that we can explain any problem by random chance without trying to understand the physics. But on the other hand, we've got the flaw of averages :



The average observer doesn't exist; every galaxy has some weird deviations. And we know that this is true because it's saved lives.

How do we reconcile these two contrasting ideas ? In my view, what you should do is begin by assuming that you are an average observer, or at least close to the average. That's as good an assumption as any for the initial guess. But once you have evidence that you're not typical, you take account of that - you don't go on insisting that you're an average observer when you know this is not the case.

What I mean by this is that it might be very misleading to say the Milky Way is one of only 2% of galaxies which have planes, because this might not be making a like-for-like comparison. For instance we know that the Milky Way isn't an average galaxy. For starters it's a spiral galaxy, and it's also a particularly massive galaxy. It lives in a group at the intersection of filaments. We need to account for all of these particular characteristics and compare with other, similar galaxies if we're going to have a fair result.

An analogy : suppose you go for a walk and find a tree that's 90 metres tall. Compared to all other trees, this is very unusual, because most trees aren't anywhere near 90 metres tall. But if that tree is a giant redwood, then comparing it to all other trees would be very silly, because you already know that redwoods are especially tall - it's them you should be comparing it to, not the general tree population.



What might be going on is that planes may form preferentially in certain environments. So if our Milky Way is in one of those environments, it wouldn't be at all surprising that it has a plane of satellites even if that environment is unusual. And simulations have found that plane formation isn't entirely random. The figure below is from Buck et al. 2015, who found preferential plane formation in massive galaxies with an early formation epoch (they even show some suggestion of rotation). They suggest that there's also independent evidence that this is the case for the Milky Way, and that planes are partially formed as a result of the satellites infalling along filaments. So when you do the statistics properly and make a really like-for-like comparison (no-one has actually done this yet, mind you), I don't think it's at all implausible to suggest that the Milky Way is one of not 2%, but something very much higher - say, 50% or more.



What's even worse is that sometimes people assume that the plane formation must only occur due to random chance in the standard model. If that was the case, and if the M31 plane is real, then the chances of both planes existing becomes very small indeed since you could multiply the low-value probabilities together. But this assumption isn't justified - M31 and the Milky Way live in the same environment, and we've already seen one environmentally-dependent mechanism for plane formation in the standard model. So the plane formation may well not be independent, in which case - as we know from high school maths - we can't multiply the probabilities together, because that would be simply wrong. Assuming that they're independent is a major assumption which I don't think is fair to make.
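Just to make the point concrete, here's a toy calculation. The numbers are entirely made up - they're only there to show how a shared environment breaks the independence assumption :

```python
# Toy numbers (purely illustrative) : suppose 20% of galaxy pairs sit in
# "plane-friendly" environments where each galaxy hosts a plane with
# probability 0.5 ; elsewhere the probability is only 0.01. The two
# galaxies in a pair share one environment, so their planes are NOT
# independent events.
p_env = 0.2
p_plane_friendly, p_plane_hostile = 0.5, 0.01

# Marginal probability that any one galaxy has a plane :
p_single = p_env * p_plane_friendly + (1 - p_env) * p_plane_hostile

# Naive "independent" estimate for both galaxies having planes :
p_naive = p_single ** 2

# Correct joint probability, conditioning on the shared environment :
p_joint = (p_env * p_plane_friendly ** 2
           + (1 - p_env) * p_plane_hostile ** 2)

print(f"P(one plane)        = {p_single:.3f}")
print(f"naive P(both) = P^2 = {p_naive:.4f}")
print(f"actual P(both)      = {p_joint:.4f}")
```

With these numbers the true joint probability comes out about four times larger than the naive product, so multiplying the Milky Way and M31 probabilities together could easily make the situation look far less likely than it really is.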


Statistics is not the whole story !

Understanding the statistical biases and methods is great, and will help you interpret data correctly. But it won't much help you to formulate a physical theory. You still need to understand the physical processes at work. So what might be at work to form planes of satellites ? We've seen that it might have something to do with filaments, but there are more specific proposals as to how they might form.

One idea is that a tidal encounter might be responsible. We saw in the first lecture how these can create complex structures by removing gas and stars from galaxies. In some cases, the amount of material removed can be so large that it can form self-gravitating structures called tidal dwarf galaxies. Many such candidates are known within streams; a few might exist in relative isolation.



The candidates within streams, at least, are pretty convincing. They are self-bound by their baryonic mass and require little or no dark matter, making them very different to "normal" galaxies. It seems that many such objects can be produced per interaction, and of course they will naturally lie in planes since tidal encounters tend to create linear-ish plumes of material. What's not clear is if these objects can be stable on long timescales, or if large numbers can be produced on polar orbits. The difficulty there would be in removing enough material with an encounter perpendicular to the plane of rotation of the parent galaxy, which requires a strong interaction that might heavily disrupt the disc.

A very interesting but more radical version of this idea is that maybe all of the satellites of the Milky Way are tidal dwarfs. Although they seem to be dominated by dark matter because of their high velocity dispersions, we might be being fooled - they could be in the process of a rather slow evaporation. I think this is a very interesting and novel approach, but it requires essentially abandoning the standard model altogether. We'll look at that idea a bit more later on.

A less radical prospect is that maybe the tidal encounter of two massive galaxies could disrupt their initially-isotropic attendant clouds of satellites. This is an idea which seems to have been given little attention, but at least one simulation suggests it's possible. The animations below show one model from this paper, though they also explore others. What you see is two galaxies merging, with the one on the right starting with its own satellite cloud but the other not possessing any. What you can see (first the face-on view) is that the initially small satellite cloud is dramatically enlarged by the encounter.


Perhaps more importantly, watch what happens if we look at the edge-on view. The satellite "cloud" is actually a squashed spheroid (not very different from the standard model distributions). The interaction doesn't make it any thinner, but it does make it much larger along the other two axes. So the cloud becomes a thin plane by being stretched. In other models, the authors demonstrated that satellites are preferentially destroyed if they're initially orbiting in a certain direction, hence this process could also explain why the planes seem to be rotating.



Now this model is still at a very preliminary stage. It has only dark matter and stars. It has a merger rather than a mere encounter. It's uncertain if it's self-consistent with the large n-body standard model simulations. And it doesn't say if such planes can be formed on polar orbits. But it does, I think, show the very intriguing result that encounters between galaxies can change the distribution of their satellites, and again points to the fact that the formation of planes may not be independent.

So are planes a problem ? No, because they don't exist. But even if they did, they would not pose the huge challenge to the standard model that certain people insist they do.

A more detailed follow-up to the original Ibata paper on the M31 plane is presented by Conn et al. 2013. For more on forming planes in the standard model, see Sawala et al. 2016. For the frequency of planes in the standard model see Cautun et al.  2015. And for a radically different view of planes to mine, see pretty much anything by Marcel Pawlowski.


3 : Into Darkness

That's more than enough about planes of satellites so it's time to move on to another topic. I introduced Ultra Diffuse Galaxies last time, but now let's take a look at them in a bit more detail. As with the missing satellite problem, these may be just one aspect of a larger issue.

Let's recap these objects very briefly. We know that they're found in large numbers in clusters like Coma...



... and also Fornax. They're also found in Virgo but in very low numbers. We're not sure why Virgo has so few, but it might be an observational limit. We also know that they're found in groups, where they tend to be bluer and more irregular, e.g. :



And recently they've also been found in Hickson Compact Groups like this one :



Which are again somewhat bluer and more irregular than the ones generally detected in clusters. We also know from Leisman et al. 2017 that they're found in isolation :



Here you can also see we have HI measurements but I'll get to that in a moment. That they're found in isolation is important, because we can't attribute their formation to some weird environmental dependence - e.g. they can't be some peculiar form of tidal debris from interactions within a cluster, since that can't happen in the field environment.

In terms of their basic properties, their stellar masses occupy quite a narrow range, 10^7 - 10^8 solar masses. They're found everywhere on the colour-magnitude diagram and have a wide range of morphologies, as we've seen. Their physical size is roughly 1-10 kpc, comparable to the Milky Way or even larger in some cases. Their gas contents vary more strongly than their stellar masses, ranging from 10^7 up to a few times 10^9 solar masses. That's a respectable amount of gas, comparable to that in the Milky Way, but apparently they've done a lousy job of converting that gas into stars.

Undoubtedly the most important question is their total mass : are they huge dwarfs or crouching giants ? Do they help alleviate the missing satellite problem or just make it even worse ? The recent gas measurements help to start to answer that, but they also show that this process is not so straightforward. Consider, for example, those four detections in an HCG :


Dynamical mass estimates range from deep within the dwarf regime right up to 10^11 solar masses, on the border between dwarf and giant. The problem is very similar to what we discussed earlier. We can only directly measure the dynamical mass as far as the baryons extend, and in the case of line widths we have to assume some size for the HI disc rather than measuring it directly.
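To see how sensitive the line-width method is to that assumed size, here's a rough sketch. The line width, inclination and radii below are made-up illustrative numbers, not values from the paper :

```python
import math

# Rough dynamical mass from an unresolved HI line width. The point is
# that the radius R of the HI disc must be *assumed*, and the mass
# scales linearly with it : M_dyn ~ v^2 R / G, with v = W / (2 sin i).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

def mdyn(width_kms, incl_deg, radius_kpc):
    v = (width_kms * 1e3) / (2.0 * math.sin(math.radians(incl_deg)))
    return v**2 * (radius_kpc * KPC) / G / M_SUN

# Same line width and inclination, different assumed disc sizes :
for r in (2, 10, 30):
    print(f"R = {r:2d} kpc -> M_dyn ~ {mdyn(200, 60, r):.1e} Msun")
```

Doubling the assumed radius doubles the mass, so an unresolved detection can slide an object between the dwarf and giant regimes without any change in the data.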

Fortunately we've got those three objects which do have resolved HI measurements, so in those cases we have proper rotation curves :


Which all show a maximum dynamical mass of 10^10 solar masses : on the upper end of the dwarf regime, but still definitely not giants. Note also how flat the rotation curves are at the outer edge. The authors note that if they'd followed the usual procedures for calculating the total mass of the halo, which account for the baryons not probing the full rotation curve, they'd have calculated a mass of 10^11 solar masses. But they note that because the rotation curves are so flat, that would imply a huge extent of the dark matter halo, which is a heck of an extrapolation.

Other galaxies have also had their mass estimates subject to detailed analyses by other methods. Overall, the view seems to be that the majority of these UDGs are huge dwarfs. For example, consider VCC 1287, one of the few known UDGs in the Virgo cluster :



VCC 1287 is the smudgy-thing in the centre. The highlighted points are objects identified as globular clusters associated with the galaxy. These are identified spectroscopically, i.e. by redshift determination, so they are definitely associated with the galaxy and aren't more distant background objects.

By measuring the velocity difference of the globular clusters with respect to VCC 1287, it's possible to construct something similar to a rotation curve (though it's actually a measure of dispersion, not rotation). This has led to a wide range of mass estimates, from 4x10^9 (directly measured) up to 8x10^10 (extrapolated) solar masses - a variation of a factor of 20. In either case though, this object seems to be firmly established as a dwarf. Even so, this object is really interesting. It seems to be exceptionally dark matter dominated, far more so than other objects of similar baryonic mass. Here's the mass using another technique based on counting the number of globular clusters :




And here it is using dynamical mass estimates, which still show a deviation albeit with quite a bit more uncertainty :



But the dwarfs aren't having it all their own way. A few objects do seem to be plausible candidates for giant objects. The best-known of these is Dragonfly 44 in the Coma cluster :



The authors used a similar procedure to VCC 1287. These extrapolated total masses rely on a procedure known as halo abundance matching. I won't go into details, but it's basically a way to take the baryonic mass content of a galaxy, compare it with numerical simulations and infer the total mass. In this case, Dragonfly 44 has a (minimum) measured total mass of 7x10^9 but a total extrapolated mass of 8x10^11 solar masses, a variation of a factor of over 100.


Halo abundance matching procedure for Dragonfly 44 (filled squares) and VCC 1287 (open squares). This compares the measured total mass at a given radius to establish the likely total halo mass (black curves, with grey error ranges).

That upper limit would be comparable to the Milky Way, making it very much a giant galaxy. In the press release the authors are keen to stress this, saying things like "it's the first object of a new class of galaxies" and so on. But in the actual paper they're much more careful to note this upper mass relies on a very large extrapolation. I'd treat it with some caution, especially noting the Leisman result of those very flat rotation curves. If it is a massive galaxy, then it's very exciting indeed. Although the missing galaxy problem extends to quite high circular velocities, it doesn't extend past the "knee" of the function - for very massive objects the theory and observations seemed to be in good agreement; we thought we knew the basics of how galaxy formation worked for the most massive objects. Now it looks like we might have a whole new problem on our hands. 

Right now it's hard to say if this really is a problem or not. I think we just don't have enough data yet to make a robust conclusion either way - we're going to have to wait for more measurements.


Dark galaxies

The existence of these large numbers of low surface brightness galaxies raises the possibility that, by extension, there could also exist no surface brightness galaxies. These "dark galaxies" might have accumulated just enough gas to be detectable to radio surveys (otherwise they're a bit useless) but not enough to trigger star formation. As a radio astronomer, I like this possibility because I hate star formation. Stars are just a waste of precious neutral hydrogen gas and if we don't stop it then we're going to run out.


The idea does make a certain kind of sense. Presumably, not every halo will accumulate exactly the same baryon fraction. And observationally there seems to be a density threshold for star formation. Combine this with the missing satellite problem and suddenly it looks very interesting. The difficult bit is establishing quantitatively if objects below the star formation density threshold could be detectable to radio surveys on long timescales. Many such objects have been proposed over the years, so I'll just mention a few of them here. 


Keenan's Ring

Let's return to one of my favourite galaxies, M33. M33 has its own missing satellite problem. Klypin's formula predicts we should be able to detect about 25 companions, but at most it has one observational candidate. And that one is pretty lousy - it's miles away from M33, almost closer to M31.



Remember that Klypin's formula lets us do a bit better than just a number - it gives the number above a specified circular velocity.  Here's what we expect to find :


But you might also remember from the first lecture that we detected a population of HI clouds around M33. The HI measurements allow us to measure the line width of each cloud, so if we assume that this represents circular velocity we get this :


The observations are in pretty darn good agreement with the theory ! If anything we find slightly more clouds than predicted. But as usual, there are caveats. In this case the HI measurements are pretty well resolved, so we can see how the velocity varies across each cloud. And it's unclear if this is consistent with rotation, so it might not be measuring circular velocity at all - it could be turbulent or other unstable motions; the clouds may not be self-bound. It would also be a bit strange if M33 happened to be the only galaxy where all of its satellites had somehow remained dark. But it's interesting all the same.
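If you want to see the shape of this comparison, here's a sketch with a generic power law standing in for Klypin's formula - the normalisation, reference velocity, slope and "observed" line widths are all placeholders of my own, not his actual fit or the real M33 data :

```python
# Sketch of comparing a predicted cumulative satellite count with observed
# line widths. N(>v) = N0 * (v/v0)**(-alpha) is a generic power law standing
# in for Klypin's formula ; n0, v0 and alpha are placeholder values chosen
# so that N(>10 km/s) ~ 25, the number quoted above for M33.
def n_above(v_kms, n0=25.0, v0=10.0, alpha=2.75):
    return n0 * (v_kms / v0) ** (-alpha)

# Hypothetical cloud line widths (km/s), treated as circular velocities :
observed_widths = [12, 15, 18, 22, 25, 30]
for v_min in (10, 15, 20):
    predicted = n_above(v_min)
    observed = sum(1 for w in observed_widths if w > v_min)
    print(f"v > {v_min} km/s : predicted ~{predicted:4.1f}, observed {observed}")
```

The comparison is always done cumulatively like this because the predicted counts rise steeply towards low velocities, where the observations are least complete.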




You might also remember that I mentioned Keenan's Ring, this giant HI cloud (number 31 above) that's as large as the disc of M33. This is a really interesting object. Together with Wright's Cloud, it's very much larger than the other detected objects in this region, suggesting that there might be multiple formation mechanisms at work. This makes this area a tremendously complicated place to study. 

Keenan's Ring is also especially interesting because it has a clear velocity gradient across it, with one side moving at about 30 km/s different to the other. It's not a very high gradient, but it's clearly visible. It suggests that the Ring is a single coherent object and not a chance alignment of smaller clouds. Could the Ring be a dark galaxy ? Probably not - the velocity gradient isn't very large, and its velocity structure isn't really consistent with what you'd expect for rotation. The main lesson from this object is that there are still major discoveries lurking even within our Local Group. We have no idea how to explain this object - none of the proposed explanations seem to work very well.



HI1225+01



Another, more famous object has remained unexplained for much longer. HI1225+01, sometimes known as the Giovanelli & Haynes cloud, was discovered accidentally back in 1989 using Arecibo, and mapped shortly thereafter. This is a much more massive object than Keenan's Ring (~10^6 solar masses) - at about 5x10^9 solar masses it's of comparable HI mass to the Milky Way. It's a nice linear feature and unlike Keenan's Ring it does seem to have ordered motions. In fact it has a flat rotation curve, just like a normal galaxy ! Its rotation speed isn't very much, mind you, only about 60 km/s... but that's still enough that it would require about 3.6x10^10 solar masses of dark matter (so a Mdyn/MHI ratio of over 7).
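As a sanity check, we can roughly reproduce those numbers with the simplest possible dynamical mass estimate, M ≈ v²R/G. The ~50 kpc radius is my own illustrative choice for the scale being probed, not a figure from the papers :

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

v = 60e3           # rotation speed, m/s (from the flat rotation curve)
R = 50 * KPC       # illustrative radius, NOT a measured value
m_hi = 5e9         # HI mass in solar masses, as quoted above

# Enclosed dynamical mass for circular motion : M = v^2 R / G
m_dyn = v**2 * R / G / M_SUN
print(f"M_dyn ~ {m_dyn:.1e} Msun")
print(f"M_dyn / M_HI ~ {m_dyn / m_hi:.1f}")
```

This gives a dynamical mass of around 4x10^10 solar masses and a Mdyn/MHI ratio of about 8, in the same ballpark as the quoted values.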


So the dark galaxy interpretation seems plausible, but of course there are some caveats. First, the thing is enormous. At close to 200 kpc across it would be one of the largest galaxies ever discovered, absolutely dwarfing the Milky Way. Second, it's not completely optically dark : a faint, fairly pathetic-looking galaxy was discovered in the northern part of the stream.



Well... it's not that pathetic an object. It's a lovely little galaxy, very blue and irregular. And it's far too small to be the source of all the HI in the rest of the cloud. Using the equations at the end of the second lecture, it should only be able to account for 20% of the gas in the stream. Even with the uncertainties of the deficiency equation, that's far too low for it to be a plausible source of all that gas. And there isn't even any other large galaxy nearby that could have caused a tidal disturbance anyway.



So what the hell is HI1225+01 ? Despite nearly 30 years of study, including more detailed mapping with the VLA, we just don't know. Maybe it's just one of those weird exotic objects that aren't very significant. Perhaps it's something more. No-one really seems to have much of a clue.


The AGES Virgo clouds


We have follow-up observations, so we know they're real.
These are the objects that I've based most of my career on thus far. They're eight low-mass (2x10^7 solar masses) clouds in the Virgo cluster, relatively isolated with no obvious parent galaxies nearby, and unresolved to Arecibo, meaning that they're less than 17 kpc across. The most interesting thing about these objects is their high line widths : two are very narrow at just 30 km/s, but the others are up to 180 km/s. That's almost like detecting a giant galaxy but without anything visible in the optical. One of the most common interpretations is that objects like this might be some form of tidal debris, but they seem to be too isolated for that. And we were able to show in detail that it's very hard to form features with such high line widths through tidal encounters.

Recently we obtained VLA data of these objects. That should give us a much better idea (when I learn how to reduce it !) of whether these objects are really rotating, as their line widths suggest, or doing something else. For now, perhaps the strangest feature of these objects is that they don't sit on the Tully-Fisher relation. So what is the Tully-Fisher relation and why does that matter ?


The Tully-Fisher Relation

Discovered in 1977, the TFR is an empirical relation between the circular velocity of a galaxy and its absolute brightness. In some ways that's not too surprising : rotation is a good measure of total mass, and more massive galaxies should contain more stars almost by definition. More surprising is the precise form of the relation, which I'll describe more in a moment.

Traditionally the TFR is used as a redshift-independent distance estimate. Knowing the rotation speed you can predict the intrinsic brightness, and by measuring the apparent magnitude and combining it with the distance modulus equation (see lecture 2) you can get the distance. Of course measuring the line width means you have to measure the redshift anyway, but what this method allows you to do is calculate the peculiar velocity of the galaxy.
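Schematically, the method looks like this. The TFR slope and zero point below are placeholders, not a real calibration (real applications use carefully calibrated coefficients for a particular photometric band), and the galaxy itself is invented :

```python
import math

def absolute_mag_from_velocity(v_kms, slope=-7.5, zero_point=-4.0):
    # Hypothetical TFR calibration : brighter (more negative M) for
    # faster rotators. Coefficients are illustrative only.
    return slope * math.log10(v_kms) + zero_point

def distance_mpc(apparent_mag, absolute_mag):
    # Distance modulus : m - M = 5 log10(d / 10 pc)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5) / 1e6

v_rot = 220.0   # corrected circular velocity, km/s
m_app = 10.4    # extinction-corrected apparent magnitude
cz = 1800.0     # measured recession velocity, km/s
H0 = 70.0       # Hubble constant, km/s/Mpc

M_abs = absolute_mag_from_velocity(v_rot)
d = distance_mpc(m_app, M_abs)
v_pec = cz - H0 * d   # peculiar velocity : redshift minus Hubble flow
print(f"M = {M_abs:.2f}, d = {d:.1f} Mpc, v_pec = {v_pec:+.0f} km/s")
```

The redshift only enters at the final step : the distance itself comes purely from the rotation speed and the apparent brightness, which is what makes the peculiar velocity measurable at all.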

The TFR, like everything, requires some additional corrections. Getting the circular velocity needs the inclination angle. You also want to correct for line broadening due to instrumental effects and other measurement errors (though these corrections are usually small). And of course you have to correct the optical magnitude for extinction. But this is all relatively simple, and consequently the errors are small.

Where it gets weird is that the scatter in the relationship is so small that it's been claimed it's consistent with zero, which means the intrinsic scatter must be at most extremely low. Now, in the plot below :

From Stacy McGaugh's website. I don't know why he likes these strange aspect ratios instead of nice square plots, but he does.
... it may not look like the scatter is so low. In fact there seems to be a distinctly non-linear relationship at the lower velocity end, where the scatter is considerably higher.

This led Stacy McGaugh to make a very interesting discovery. He found that the relationship was more fundamental than rotation speed and luminosity - it's actually about total baryonic mass. The deviant points in the above relationship are all particularly gas-rich objects. When you plot total baryonic mass (stars plus gas) instead of luminosity, you find that all those objects sit nicely on the TFR after all.

And that's really weird. It's actually easy to show what the slope of the TFR should be... but unfortunately, it's also easy to show that it should have a strong scatter. To do this, we need to do a bit of maths. It's quite straightforward, but it's one of those cases which is easy when someone shows you how but probably not so obvious if they don't.

Let's start with the traditional TFR, the relationship between circular velocity and stellar luminosity. From simple Newtonian dynamics we know that :

v^2 = G M_tot / R

But we don't care about constants for this exercise - we just want to see how things scale. Re-arranging slightly :

M_tot ∝ v^2 R

Which doesn't look much like the TFR yet, but now we need to do one of those weird mathematical tricks. What we want is to get the right hand side in terms of observable quantities. We know that L*(M/L) = M by definition, but let's keep the M/L term so we don't have to cancel. Writing Q for the ratio of dark to luminous mass :

M_tot ∝ Q (M/L) L

So we could substitute this for M_tot in our velocity equation if we wanted, but let's push on and also substitute for R. Knowing how the surface brightness Σ relates to luminosity, and ignoring pesky constants like pi :

Σ = L / R^2 , so R = (L/Σ)^1/2

Now we can substitute for both M and R in our velocity equation, which gives :

v^4 ∝ Q^2 (M/L)^2 L Σ

But since in the TFR we have L = f(v), let's re-arrange :

L ∝ v^4 / [ Q^2 (M/L)^2 Σ ]

So that L goes as v to the fourth power. This gives excellent agreement with the observed slope of the TFR ! And we didn't have to use surface brightness - we only did that because we wanted the classical, optical TFR. If we use surface baryon density instead of brightness then it's trivial to show that :

M_bar ∝ v^4 / ( Q^2 Σ )

Which is of course the baryonic TFR.
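We can quickly check the scaling numerically. Holding Q, M/L and the surface density fixed (the particular values below are arbitrary), the circular velocity should grow as the fourth root of the luminosity :

```python
# Numerical sanity check of the TFR scaling : with Q, M/L and the surface
# density held fixed, v should grow as L^(1/4). Units are arbitrary since
# all the physical constants have been thrown away.
def v_circ(L, Q=10.0, m_over_l=2.0, sigma=1.0):
    M_tot = Q * m_over_l * L        # total mass scales with luminosity
    R = (L / sigma) ** 0.5          # radius from fixed surface density
    return (M_tot / R) ** 0.5       # v^2 = M/R in these throwaway units

v1, v16 = v_circ(1.0), v_circ(16.0)
print(f"L x16 -> v x{v16 / v1:.2f}")   # expect 16^(1/4) = 2.00
```

Of course, the whole point of the argument above is that there's no reason for Q and Σ to stay fixed from galaxy to galaxy, which is exactly why the observed tightness of the relation is so strange.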

We've reproduced the slope of the TFR. Great. The problem is that it's not just dependent on circular velocity, but also on these two other terms Q and Σ. Q is the dark matter to baryon ratio and Σ is the baryonic surface density. There's no good reason to suppose these terms would be correlated. Or to put it another way, galaxies which have a different surface density should lie on a different TFR unless there is an exactly compensating change in M/L ratio. It's as if the total mass - i.e. the dark matter - somehow "knows about" the baryonic properties (i.e. the surface density), not just in a broad sort of statistical way but very precisely indeed, as though the baryons and dark matter were conspiring to create a precise relation where none should exist. Galaxies which vary by a factor of 10 in rotation speed, 50 in physical size and ten thousand in stellar or total baryonic content all lie precisely on the same relation. And that's just weird.

Other strange "conspiracies" are also known, which are held in varying levels of importance depending on who you talk to. I don't want to go through them all in detail, but these include :
  • The halo conspiracy. This refers to the fact that rotation curves always tend to be flat. Personally I'm unconvinced that this is such a big problem : dark matter simulations seem to explain this pretty well, and there's quite a strong observed scatter in the slopes anyway.
  • Rotation curve "wiggles". Rotation curves aren't perfectly smooth but tend to have little wiggles in them, which seem to correspond to the local baryonic density. Which is strange because the dark matter should be mass dominant and control the shape of the curve, not the baryons. This is interesting, although as an observer I'm concerned about the errors in these measurements.
  • The mass-discrepancy acceleration relation. But I'll look at that in detail later.
Interestingly, the MOND theory (that replaces dark matter with modified gravity, which I'll describe later) actually predicts the low scatter of the TFR naturally. Recent standard model simulations also claim to reproduce the TFR, but with the usual caveats about complexity. And there are some other concerns about the TFR which should be mentioned. The low scatter is, perhaps, not as low as is sometimes claimed. Here's the (standard optical) TFR from AGES :

Where the high scatter is likely the result of my dodgy processing but that's not terribly important. Blue objects are normal galaxies within the Virgo cluster. Black points are galaxies in the same field behind the cluster. But highly deficient galaxies do not lie on this standard TFR, and they deviate too strongly to be explained by any measurement problems.

Similarly, early-type disc (lenticular) galaxies also lie off the TFR in a similar location :

For a simple straight line this relation is remarkably versatile ! It's one of my favourite relations. As well as telling us the distance and perhaps giving us a clue to fundamental physics, it can also tell us about galaxy evolution, pointing, perhaps, to morphological evolution driven by gas loss. What might be happening here is that there's been so much gas lost that it no longer probes the full rotation curve, so that the gas disc is confined to the very innermost regions of the galaxies. But the AGES clouds are known to deviate in the opposite way, and that's very much harder to explain :

These are upper limits. Remember, circular velocities are always greater than the observed line width because of the inclination correction. So galaxies to the left of the TFR might have inclination measurement problems. But that doesn't help for objects on the opposite side : if anything, their true deviation should be even greater ! And the other solution - that these clouds have more baryons than we observe - doesn't seem to work, because they'd have to have a lot more baryonic matter than we've detected. It's hard to see what form this could take where we wouldn't have detected it.
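The inclination argument is easy to verify. The observed line width relates to the true circular velocity as roughly W = 2v sin(i), so correcting for inclination can only ever increase the velocity, never decrease it (the numbers here are just for illustration) :

```python
import math

# Inclination correction : W = 2 v sin(i), so v = W / (2 sin i) >= W / 2.
def v_corrected(width_kms, incl_deg):
    return width_kms / (2.0 * math.sin(math.radians(incl_deg)))

W = 100.0   # illustrative observed line width, km/s
for i in (90, 60, 30):
    print(f"i = {i:2d} deg -> v = {v_corrected(W, i):.0f} km/s")
# Edge-on (i = 90) gives the minimum possible velocity ; any lower
# inclination only pushes it higher. That's why deviants on the
# high-width side of the TFR can't be rescued by inclination errors.
```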

This deviation gets even stronger if we use the baryonic form of the TFR :


And there are some hints - only hints, mind you - that some galaxies may be very strongly deviant in the other direction. Here are some of those UDGs with gas from the Leisman paper :


I'm very cautious about mentioning this, because of the inclination angle problem. I also asked Luke Leisman about it, and he's even more cautious than I am because of the inclination angle and selection effects. I only mention it at all because, having looked at the data, it seems to me that at least some of these galaxies look pretty close to edge-on, meaning that the inclination correction couldn't bring them back into agreement with the TFR. If true, this means the low scatter of the TFR may be breaking down at lower masses. But it's too early to tell as yet; we need better optical data and much more HI data.


The Mass Discrepancy Acceleration Relation

The main puzzle of the Tully-Fisher relation is its low scatter. Along with the other issues mentioned, it seems that there is a "conspiracy" between the dark and luminous matter. In essence, knowing the baryon content we can determine the dark matter content. Or more specifically, and more strangely, knowing the baryonic acceleration we can predict the acceleration due to dark matter.

That's the essence of the MDAR (this term may be inaccurate : elsewhere it's called the Radial Acceleration Relation, or the Mass Discrepancy Relation, or the Mass Discrepancy Radial Acceleration Relation; I'm going with MDAR as that seems to be the term that's in vogue). It's an attempt to cut through all the complexities of the TFR and rotation curve wiggles and whatnot and reduce everything back down to physics : acceleration versus acceleration, like for like. 

Consider a baryonic particle at some point within a galaxy. Knowing the mass of the baryons wherever they're detected, and the position within the galaxy, you can predict how fast the particle should be accelerating. In a disc galaxy you can reduce this to the circular acceleration to make things easier. And from observations, you can directly measure their actual circular acceleration. So you can compare the theoretical baryonic acceleration (gbar) with the actual observed acceleration (gobs). And that gives you the MDAR.
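In code, the two quantities being compared look like this, for a toy point-mass "galaxy". Real analyses integrate over the full baryonic mass distribution, of course, and the numbers below are invented :

```python
# The two accelerations compared in the MDAR, for a toy point-mass model.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

M_bar = 5e10 * M_SUN   # toy baryonic mass interior to the test particle
R = 10 * KPC           # radius of the test particle's orbit
v_obs = 180e3          # its measured circular speed, m/s

g_bar = G * M_bar / R**2   # predicted acceleration from baryons alone
g_obs = v_obs**2 / R       # actual (observed) centripetal acceleration
print(f"g_bar = {g_bar:.2e} m/s^2, g_obs = {g_obs:.2e} m/s^2")
print(f"mass discrepancy g_obs/g_bar = {g_obs / g_bar:.2f}")
```

Each measured point along each galaxy's rotation curve gives one (g_bar, g_obs) pair, which is why a modest sample of galaxies can populate the whole plot.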



That plot comes from the "discovery" paper by McGaugh et al. 2016. Admittedly, it's a very interesting plot with wonderfully low scatter. Unfortunately the authors decided to declare this "discovery" to be tantamount to a new law of nature, which is a crazy thing to say in a paper. I mean, it's fine to hold whatever private opinion you like of the importance of your own research. But you don't go around telling people that you're tantamount to the next Einstein, because that's just silly.

Incidentally, this wasn't actually the discovery at all. It had previously been reported in Wu & Kroupa 2015, but they didn't make such a fuss about it, so no-one noticed. Subsequently a more detailed follow-up paper was produced by Lelli et al. 2017, who again annoyingly insisted on calling it a "new law of nature". Apart from that one statement, the Lelli paper is really very good [I don't recall if I actually said this in the lecture, but I certainly meant to]. Which is a shame, because if you make such an obnoxious claim, you're going to get attacked for it. They could easily have avoided all the nasty things people said about the paper subsequently.

Anyway, the trend is very clear, with low scatter. The plot comprises data from many different galaxies, but each galaxy contributes many points, since you can measure the acceleration at different locations within each galaxy. So you can get low accelerations both in dwarf galaxies (because of their low masses) and within giant galaxies (at their outer edges). It appears to be a real, physical relation, not some effect of selecting particular objects. It's not that there's a one-to-one trend (you can see the data doesn't follow that dotted line); what's important is that there's a trend at all, and that its scatter is so low. What's very interesting is that it was predicted by MOND decades ago; I couldn't find a plot showing this, but the claim is uncontested so I have no reason to doubt it.
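For reference, the one-parameter fitting function McGaugh et al. use to describe the relation is g_obs = g_bar / (1 − e^(−√(g_bar/g†))), with a single fitted acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m/s². A quick sketch of its two limits (function name is my own) :

```python
import math

G_DAGGER = 1.2e-10  # the fitted acceleration scale from McGaugh et al. 2016, m/s^2

def mdar_fit(g_bar, g_dagger=G_DAGGER):
    # The fitting function from the paper :
    #   g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger)))
    # At high accelerations g_obs -> g_bar (purely Newtonian, no
    # discrepancy); at low accelerations g_obs -> sqrt(g_bar * g_dagger),
    # i.e. a large apparent mass discrepancy.
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / g_dagger)))

print(mdar_fit(1e-8) / 1e-8)    # high acceleration : ratio close to 1
print(mdar_fit(1e-13) / 1e-13)  # low acceleration : ratio much greater than 1
```

The single smooth transition between the two regimes, controlled by one acceleration scale, is exactly why the plot curves away from the dotted one-to-one line at the low-acceleration end.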

But is this evidence against the standard model ? No. Within days of the McGaugh paper, dark matter groups responded with their own simulations showing that they found exactly the same trend. That's far too quick to have set up a simulation specially designed to show this, so it must have been there in the data all along - it's just that no-one thought it was very important. The first result was by Keller & Wadsley 2017 (it was formally published in 2017, but a preprint was on arXiv within 5 days of the McGaugh paper).


As you can see, it reproduces the observations almost perfectly - with even less scatter than in reality. Shortly afterwards, another, independent team (Ludlow et al. 2017) published their own set of simulations. And again they found that the relation dropped out naturally from their models.


So MOND might have predicted it, but CDM explains it equally well. There's a paper by Navarro et al. 2017 which attempts to explain why this trend occurs in CDM. The reasons are somewhat subtle, but essentially it comes down to the scaling relations governing which haloes host galaxies and where they can form. Anyway, it doesn't look like this trend is nearly as important as its "discoverers" claimed - you can't use it to discriminate between MOND and CDM in any way.

(Note that Milgrom wrote an irritated response to the McGaugh paper and a furious rebuttal of the Keller & Wadsley paper before it was fully published, though pretty much all of his objections were addressed in their final version - see this link for more details.)

At least, that's true for the high-mass end (or rather, high accelerations). It's still possible that something more interesting might be going on at the low-mass end. The Lelli paper notes that at low accelerations, the scatter increases significantly and the objects (for which, here, you can only measure a single point per galaxy) are systematically offset from the main relation.



These "high quality" measurements refer to the fact that it's difficult to get a proper rotation curve for the faintest dwarf spheroidals. This sub-sample of the data comprises the objects for which the authors believe the measurements are the most reliable, though you can see the error bars are large. They're unsure if the higher scatter is real, but the offset from the main relation is potentially interesting.

Or perhaps not, because a paper by Fattahi et al. 2017 (still under review) claims that actually their CDM simulations reproduce even this feature :


Which gives a pretty good agreement for both the scatter and the offset at the low-acceleration end. The offset by itself wouldn't necessarily be interesting, but the scatter also points to the relation not really being an effect of fundamental physics - it's difficult to imagine that acceleration could work differently in different galaxies (although more on that in a moment).

So it looks as if even this isn't a problem for CDM. The reason I mention it at all is because Lelli et al. make one other interesting remark. They note that if you change the plot slightly, the deviant dwarfs fall back on to the same relation as the giant galaxies. The change is to modify the baryonic acceleration to include the acceleration due to the host halo (all the dwarf spheroidals here are satellites of more massive galaxies).


Which is definitely interesting. No-one's tried doing this plot for CDM simulations (as far as I'm aware), so we don't know what might happen here. Still, overall, the MDAR doesn't look like it's a great way to choose between different theories.


MOND

I've mentioned MOND a few times already, so I should probably go into it in a bit more detail. Keep in mind that I'm not at all an expert in this, so follow the links for better information. I can't say I'm a fan of MOND, though. It seems to me that its supporters too often fall for the fallacy that evidence against one theory automatically constitutes evidence in favour of another, and are prone to making claims against CDM which are stronger than the evidence warrants. So keep that in mind for this final section.

The basic idea of MOND is that instead of invoking unknown dark matter, we need to modify our theory of gravity. In this model gravity behaves in the classical Newtonian way at high accelerations, but transitions to a different behaviour at low accelerations.
Unfortunately, this is much more subtle than simply changing the force law by adding some constant or similar modification. In MOND, the distribution of matter becomes far more important than with Newtonian gravity : you can't just do the vector sum of forces at a point. The reasons for this are not intuitive, but Chris Mihos has a wonderful page in which he imagines trying to explain why this happens to Arnold Schwarzenegger - I highly recommend it.
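To give a rough flavour of the regime change - with the warning that this completely ignores the non-linearity just mentioned, so it's only a toy for an idealised spherical case - here's a sketch using the commonly-used "simple" interpolating function μ(x) = x/(1+x) :

```python
import math

A0 = 1.2e-10    # MOND acceleration scale, m/s^2
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def mond_accel(g_newton, a0=A0):
    # For the "simple" interpolating function mu(x) = x/(1+x),
    # solving a * mu(a/a0) = g_newton for a gives the quadratic root :
    #   a = (g_N + sqrt(g_N^2 + 4 * g_N * a0)) / 2
    # At high g_N this tends to g_N (Newtonian); at low g_N it tends
    # to sqrt(g_N * a0), i.e. gravity falls off as 1/r, not 1/r^2.
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4 * g_newton * a0))

def rotation_speed(M, r):
    # Circular speed around an isolated point mass M at radius r.
    g_n = G * M / r**2
    return math.sqrt(mond_accel(g_n) * r)

kpc, M_sun = 3.086e19, 1.989e30
M = 1e11 * M_sun
# Far outside the mass, the curve flattens towards (G*M*A0)**0.25,
# around 200 km/s for these numbers - flat rotation curves for free.
for r_kpc in (50, 100, 200):
    print(round(rotation_speed(M, r_kpc * kpc) / 1e3), "km/s")
```

That asymptotic speed, v⁴ = G M a₀, is also exactly a baryonic Tully-Fisher relation, which is why MOND supporters regard the TFR as such strong evidence. But again : in any realistic, non-spherical mass distribution you cannot use this simple algebraic shortcut, which is where all the subtlety comes in.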



This means that MOND can in principle even explain things like the Bullet Cluster that we examined last time - it's not so easy to intuitively predict how lensing should work in MOND. The problem is that there isn't a unique theory of MOND that's compatible with general relativity, which you need for lensing calculations. So no-one knows whether or not it really can explain the Bullet Cluster, because we don't know what it predicts.

This was an attempt to explain the Bullet Cluster by Angus, Famaey & Zhao 2006, using a then-popular version of MOND that attempted to reconcile it with relativity. This is a pretty horrible paper, in the sense that it's highly mathematical and difficult to read. Clowe et al.'s 2006 rebuttal is more readable and claims to show that the Angus model doesn't work. 

In any case, that particular variant of MOND was later ruled out by pulsar timing. Many other modified theories of gravity have apparently been ruled out by the recent discovery of a gravitational wave with an optical counterpart, since they predict that the speed of gravity should be significantly slower than that of light. The difficulty with all of these results is that it's much harder to say whether they rule out the MOND premise entirely or just its many relativistic flavours.


MOND versus CDM



Since I don't really want this lecture/blog post to go on for ever and ever, let me round off with a brief summary of how MOND currently fares against CDM. Keep in mind that this list is not my attempt at objective truth - it's highly subjective, based on both my own reading and what people have told me, and undoubtedly contains errors ! Here's the list; I'll go through it very briefly below. It was inspired by Stacy McGaugh's original version, with which I disagree about almost everything, but that's okay... check it out for yourself.




  • Both MOND and CDM explain flat rotation curves very well; I don't think there's much to choose between them on that score. 
  • MOND explains the Tully-Fisher relation, but there are some hints (emphasis again, only hints) that it might not be real, which would cause problems for MOND. CDM can explain the Tully-Fisher relation only through highly complex models, but if the TFR isn't real then it's unclear how that affects the CDM predictions.
  • MOND gets a bonus for predicting MDAR whereas CDM merely explains it, though let's not go nuts since this appears to have been a pre-existing feature of CDM simulations that no-one reported.
  • MOND has severe problems explaining the motions of galaxies in clusters. In fact it requires some dark matter to explain the observations, making it deeply unsatisfying for many. On the other hand it doesn't need nearly as much dark matter (a factor of a few) as CDM does (factors of tens).
  • MOND might actually have its own missing galaxy problem. The problem is that we don't yet have any MOND simulations so we don't know how much substructure it predicts. Of course, CDM has a severe missing galaxy problem, though complex baryonic physics might solve this.
  • MOND has problems explaining the large-scale structures in the Universe, whereas CDM has no difficulties. But it's still very early days for MOND simulations.
  • MOND might be able to explain planes of satellites; it's not yet clear. But in my view these planes don't actually exist, so if MOND predicts them, that would cause it problems. CDM might be able to explain planes with complex physics, but since they don't exist, that actually makes things easier for CDM.
  • MOND has problems explaining relativistic effects because it's fundamentally difficult to reconcile with general relativity. We just don't know how difficult pulsar timing, lensing, and gravitational waves are to incorporate into MOND. Currently they are fatal for certain relativistic versions of MOND, but that doesn't mean they are fatal for the underlying theory. Of course, CDM has no difficulty explaining these.
  • MOND has problems explaining the power spectrum of the cosmic microwave background. CDM does not.
  • Finally, if you believe MOND is correct, then you have to accept that our theory of gravity, which has been very well-tested on Solar System scales, is wrong or incomplete. But if you're a supporter of CDM, then you have to accept that the standard model of particle physics is wrong or incomplete. So in that sense it's a case of arguing about which bit of physics you think is broken. [At this last point, an audience member who I later learned is a supporter of satellite planes - oops - started shaking his head. I've no idea why - the other points I can see as debatable, but I can't see how you could argue against this one !]
These are quite exciting times for MOND. It's just starting to become possible to do simulations with MONDian gravity, so the next few years should really start to answer whether MOND can address these difficulties or whether they'll prove fatal.


The Future : With Big Data Comes Big Responsibility

Xkcd's list of telescope names versus actual instruments.
So, to bring this to a close, where are we going ? For simulations the future seems obvious : we'll keep increasing computational power and the sophistication of the models, testing other theories of gravity as well as refining the baryonic physics. But we're also going to have a slew of new, vastly more powerful instruments. Both in the radio and the optical, sensitivity and resolution are going to increase by orders of magnitude, even using telescopes which are currently under construction. Personally I'm most excited by the radio side of things, since we'll finally be able to directly measure gas content at high redshift. That should give us a much better idea of how gas content changes over time and so how star formation really works. And we should of course remember the possibility that we'll detect entirely new things that we didn't expect at all.

But all this comes with a price :


This sounds good, but it isn't true. The problem is knowing which information is important and which is irrelevant or misleading. And the extreme opposite of having no information is just as bad : you can have far too much information instead !


Or as the other saying goes, "garbage in, garbage out". Traditionally this is used for simulations, where their increasing sophistication may not help if the models are fundamentally wrong. But it applies equally well to observations. More data just risks more complexity and thus more elephants. Fancy algorithms for processing data are just BS if they're not used correctly. And I'll say it again : data doesn't solve problems. Statistics doesn't solve problems. People solve problems... except, of course, for when they make things worse. It's going to be more important than ever to understand both statistics and physics if we're going to make meaningful progress in this field. And on that note, I shall return to writing regular blog posts.