SFR-Stellar Mass-Size Paper

This is the notebook for the MS paper.


In [12]:
import numpy as np
from pylab import *
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')

Stellar Mass limit

We need to decide what our stellar mass limit will be. This matters because at a given absolute magnitude limit, the lowest-SFR objects will have the highest $M_*/L$ and hence the highest $M_*$. Conversely, at a fixed $M_*$ cut, the lowest-SFR objects will be the faintest. So, if we set our $M_*$ cut too low, objects with low SFR will fall below our detection limit.

Could this cause a bias such that we miss the most suppressed star-forming galaxies, or those with the smallest 24um disks?
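A minimal sketch of this completeness argument. The magnitude limit, the maximum $M_*/L$, and the solar absolute magnitude used here are illustrative placeholders, not the survey's actual values:

```python
import numpy as np

def mass_limit(M_abs_lim, ML_max, M_sun_abs=4.65):
    """Conservative stellar mass completeness limit for a magnitude-limited
    sample: at the magnitude limit, the highest-M*/L (lowest-SFR) galaxy
    has the highest stellar mass, so we use ML_max."""
    logL = 0.4 * (M_sun_abs - M_abs_lim)   # log10 luminosity in solar units
    return np.log10(ML_max) + logL          # log10(M*/Msun)

# hypothetical numbers: limit of M_r = -19, max M*/L of 3 for a red, low-SFR disk
print(mass_limit(-19.0, 3.0))
```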


In [13]:
run ~/github/LCS/python/Python3/LCS_MSpaper.py


normalizing by radius of disk
nothing happening here
(1800, 1800)
Using UV + IR SFR

Different selection cuts

This shows how all the different selection cuts manifest themselves in the SFR-$M_*$ plane. In all plots, red points are the objects removed by the flag.

The first plot is for all the galaxies and the second and third are separated by core and exterior.

The horizontal green line is the SFR corresponding to our $L_{IR}$ limit. I took the 0.086 from Elbaz and divided it by 1.74 to convert to a Chabrier IMF (Salim+16).

The blue line is my fit to the MS (see below for details).
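The IMF conversion above is just a constant factor:

```python
import numpy as np

# Convert the Elbaz LIR-based SFR limit to a Chabrier IMF
# using the factor of 1.74 from Salim+16.
sfr_lim_salpeter = 0.086                    # Msun/yr, from Elbaz
sfr_lim_chabrier = sfr_lim_salpeter / 1.74  # Msun/yr, Chabrier
print(np.log10(sfr_lim_chabrier))           # log SFR of the green line
```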


In [9]:
g.plotSFRStellarmasssel()


****************fit parameters [ 0.70194788 -7.4065244 ]

In [11]:
g.plotSFRStellarmasssel(subsample='core')


****************fit parameters [ 0.70194788 -7.4065244 ]

In [12]:
g.plotSFRStellarmasssel('exterior')


****************fit parameters [ 0.70194788 -7.4065244 ]

Final Sample

This shows the SFR-$M_*$ distribution for all galaxies and for those that pass the final selection. It seems that making a cut at $\log(M_*)>9.5$ excludes galaxies at the bottom of the main sequence. The cut below, at $\log(M_*)>9.7$, seems to work.

The solid lines show SFR(MS), the dashed SFR(MS)/5. Black is Elbaz+11, salmon is Salim+07 for their pure SF sample, and blue is our fit to the non-AGN galaxies above our $L_{IR}$ limit with $\log(M_*)>9.5$ and with SFR > SFR$_{MS}$(Elbaz; $M_*$)/6.

This seems to show that there isn't any differential selection between core and exterior. However, the Elbaz and Salim lines do seem to lie above ours. The difference is well in excess of the factor of 1.58 that you would expect from an IMF mismatch.
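For reference, the MS and MS/5 lines can be reconstructed from the printed fit parameters (slope and intercept of log SFR vs. log $M_*$); a minimal sketch:

```python
import numpy as np

# Fit parameters from the cell output below: log SFR = m*logM + b
m, b = 0.77272569, -8.0923806

def log_sfr_ms(logmass):
    """Main-sequence log SFR from our linear fit."""
    return m * logmass + b

logmass = np.linspace(9.7, 11.0, 50)
ms = log_sfr_ms(logmass)         # solid line: SFR(MS)
ms_over5 = ms - np.log10(5.0)    # dashed line: SFR(MS)/5
print(log_sfr_ms(10.0))
```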


In [4]:
g.plotSFRStellarmassall()
g.plotelbaz()
g.plotsalim07()


****************fit parameters [ 0.77272569 -8.0923806 ]

In [5]:
g.plotSFRStellarmassallenv()


****************fit parameters [ 0.77272569 -8.0923806 ]
The SFR-mass plot coded by size

On the left I show all the galaxies in our sample, and on the right the running median. Both environments have similar median SFRs but different sizes. The solid blue line is a fit to the non-AGN galaxies above our $L_{IR}$ limit with $\log(M_*)>9.5$ and with SFR > SFR$_{MS}$(Elbaz; $M_*$)/10.
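A minimal sketch of the running-median step; the binning scheme and the toy data here are illustrative, not the actual choices in the code:

```python
import numpy as np

def running_median(logmass, y, bins):
    """Median of y in bins of log stellar mass (NaN for empty bins)."""
    centers = 0.5 * (bins[:-1] + bins[1:])
    med = np.full(centers.size, np.nan)
    for i in range(centers.size):
        sel = (logmass >= bins[i]) & (logmass < bins[i + 1])
        if sel.any():
            med[i] = np.median(y[sel])
    return centers, med

# fake data just to exercise the function
rng = np.random.default_rng(0)
lm = rng.uniform(9.7, 11.0, 200)
lsfr = 0.77 * lm - 8.09 + rng.normal(0, 0.3, lm.size)
centers, med = running_median(lm, lsfr, np.arange(9.7, 11.01, 0.3))
```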


In [13]:
g.plotSFRStellarmass_sizebin()
g.plotSFRStellarmass_sizebin(btcutflag=False)


/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:404: RuntimeWarning: invalid value encountered in less
  flag = (self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut)
/Users/grudnick/anaconda/lib/python3.5/site-packages/matplotlib/axes/_axes.py:2813: MatplotlibDeprecationWarning: Use of None object as fmt keyword argument to suppress plotting of data values is deprecated since 1.4; use the string "none" instead.
  warnings.warn(msg, mplDeprecation, stacklevel=1)
No handles with labels found to put in legend.
****************fit parameters [ 0.70194788 -7.4065244 ]
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:472: RuntimeWarning: invalid value encountered in less
  flag = (~self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut)
No handles with labels found to put in legend.
****************fit parameters [ 0.70194788 -7.4065244 ]

Galaxies in the core have smaller R24/Rd than external galaxies at most stellar masses.


In [16]:
g.plotSFRStellarmass_musfrbin()


No handles with labels found to put in legend.
****************fit parameters [ 0.70194788 -7.4065244 ]

Galaxies in the core have higher muSFR than external galaxies at lower stellar masses; however, the external galaxies have higher muSFR at higher masses. This result doesn't change if I color code by log(median(musfr)) or median(log(musfr)). It also does not qualitatively change if I remove the B/T cut.


In [17]:
g.musfr_size()


muSFR is computed as $0.5\,{\rm SFR}/(\pi R_{24}^2)$. The dashed line is $y = \log_{10}(1/x^2)$ and so shows the expected correlation. It is not clear that there is any residual correlation.
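As a sanity check on the dashed reference line: the function name here is mine, but the formula matches the `sfrdense` line from LCS_MSpaper.py visible in the warnings below:

```python
import numpy as np

def sigma_sfr(sfr, r24):
    """SFR surface density within R24: half the SFR over pi*R24^2,
    mirroring sfrdense = 0.5*SFR/(np.pi*mipssize**2) in LCS_MSpaper.py."""
    return 0.5 * sfr / (np.pi * r24**2)

# doubling R24 at fixed SFR lowers Sigma_SFR by a factor of 4,
# which is the slope of the y = log10(1/x^2) reference line
print(sigma_sfr(1.0, 1.0) / sigma_sfr(1.0, 2.0))
```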


In [22]:
g.sizediff_musfrdiff_mass()


/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1009: RuntimeWarning: divide by zero encountered in true_divide
  self.sfrdense = 0.5 * self.SFR_USE / (np.pi * self.mipssize**2)
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1028: RuntimeWarning: invalid value encountered in less
  cflag = (self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut) & (abs(lsfrdiff) < lsfrthresh)
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1034: RuntimeWarning: invalid value encountered in less
  eflag = (~self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut) & (abs(lsfrdiff) < lsfrthresh)
****************fit parameters [ 0.77272569 -8.0923806 ]
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1059: RuntimeWarning: invalid value encountered in true_divide
  sizerat_rat = sbinc / sbine
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1060: RuntimeWarning: invalid value encountered in true_divide
  sizerat_rat_err = np.sqrt(sizerat_rat**2 * ((sbine_err/sbine)**2 + (sbinc_err/sbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1061: RuntimeWarning: invalid value encountered in true_divide
  sizerat_rat_errbt = np.sqrt(sizerat_rat**2 * ((symarre/sbine)**2 + (symarrc/sbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1063: RuntimeWarning: invalid value encountered in true_divide
  sizerat_rat_errbthigh = np.sqrt(sizerat_rat**2 * ((sbine_err_bthigh/sbine)**2 + (sbinc_err_bthigh/sbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1064: RuntimeWarning: invalid value encountered in true_divide
  sizerat_rat_errbtlow = np.sqrt(sizerat_rat**2 * ((sbine_err_btlow/sbine)**2 + (sbinc_err_btlow/sbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1110: RuntimeWarning: invalid value encountered in true_divide
  musfr_rat = mubinc / mubine
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1111: RuntimeWarning: invalid value encountered in true_divide
  musfr_rat_err = np.sqrt(musfr_rat**2 * ((mubine_err/mubine)**2 + (mubinc_err/mubinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1112: RuntimeWarning: invalid value encountered in true_divide
  musfr_rat_errbt = np.sqrt(musfr_rat**2 * ((symarre/mubine)**2 + (symarrc/mubinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1115: RuntimeWarning: invalid value encountered in true_divide
  musfrrat_rat_errbthigh = np.sqrt(musfr_rat**2 * ((mubine_err_bthigh/mubine)**2 + (mubinc_err_bthigh/mubinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1116: RuntimeWarning: invalid value encountered in true_divide
  musfrrat_rat_errbtlow = np.sqrt(musfr_rat**2 * ((mubine_err_btlow/mubine)**2 + (mubinc_err_btlow/mubinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1151: RuntimeWarning: invalid value encountered in true_divide
  sfr_rat = sfrbinc / sfrbine
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1152: RuntimeWarning: invalid value encountered in true_divide
  sfr_rat_err = np.sqrt(sfr_rat**2 * ((sfrbine_err/sfrbine)**2 + (sfrbinc_err/sfrbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1153: RuntimeWarning: invalid value encountered in true_divide
  sfr_rat_errbt = np.sqrt(sfr_rat**2 * ((symarre/sfrbine)**2 + (symarrc/sfrbinc)**2))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1156: RuntimeWarning: invalid value encountered in true_divide
  sfr_rat_errbthigh = np.sqrt(sfr_rat**2 * (((sfrbine_err_bthigh/sfrbine)**2 + (sfrbinc_err_bthigh/sfrbinc)**2)))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1157: RuntimeWarning: invalid value encountered in true_divide
  sfr_rat_errbtlow = np.sqrt(sfr_rat**2 * (((sfrbine_err_btlow/sfrbine)**2 + (sfrbinc_err_btlow/sfrbinc)**2)))

This shows that the sizes are systematically lower in the core. It also shows that the ratio of the SFR surface densities between the two environments has no clear trend with mass. The result does not qualitatively change if I remove the B/T cut. Finally, the results don't change if I do them in slices around the main sequence.


In [6]:
g.sfr_offset()
g.sfr_offset(btcutflag=False)


****************fit parameters [ 0.77272569 -8.0923806 ]
Core
median lower 68% SFRdiff confidence interval -1.0893786687705784
median lower 90% SFRdiff confidence interval -1.4389721689718158
External
median lower 68% SFRdiff confidence interval -0.7552370456869824
median lower 90% SFRdiff confidence interval -1.3698421497471287
Core has lower 68% SFR limit 0.931 of the time
Core has lower 90% SFR limit 0.781 of the time
KS Test:
D =   0.20
p-vale = 0.01955 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   1.56
p-vale = 0.07278 (prob that samples are from same distribution)
****************fit parameters [ 0.77272569 -8.0923806 ]
Core
median lower 68% SFRdiff confidence interval -1.0893786687705784
median lower 90% SFRdiff confidence interval -1.4389721689718158
External
median lower 68% SFRdiff confidence interval -0.7552370456869824
median lower 90% SFRdiff confidence interval -1.3698421497471287
Core has lower 68% SFR limit 0.931 of the time
Core has lower 90% SFR limit 0.781 of the time
KS Test:
D =   0.20
p-vale = 0.02149 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   1.56
p-vale = 0.07278 (prob that samples are from same distribution)

Although there is a tail to lower SFR in the core, we can't reject the hypothesis that these are drawn from the same distribution. Below I'm showing some versions of this in different mass slices.
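The two-sample comparisons above can be reproduced with scipy; the arrays here are random stand-ins for the core and external $\Delta\log({\rm SFR})$ distributions, not the real samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# stand-ins for the core and external Delta log SFR distributions
core = rng.normal(-0.1, 0.3, 80)
external = rng.normal(0.0, 0.3, 120)

D, p_ks = stats.ks_2samp(core, external)       # Kolmogorov-Smirnov
ad = stats.anderson_ksamp([core, external])    # k-sample Anderson-Darling
print(D, p_ks, ad.statistic)
```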


In [20]:
g.sfr_offset(logmassmin=9.7,logmassmax=10.3)


****************fit parameters [ 0.70194788 -7.4065244 ]
Core
median lower 68% SFRdiff confidence interval -0.34441894784132643
median lower 90% SFRdiff confidence interval -0.4648416515774514
External
median lower 68% SFRdiff confidence interval -0.24682737761804674
median lower 90% SFRdiff confidence interval -0.4343083910965646
Core has lower 68% SFR limit 0.839 of the time
Core has lower 90% SFR limit 0.751 of the time
KS Test:
D =   0.25
p-vale = 0.13925 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.33
p-vale = 0.49180 (prob that samples are from same distribution)

In [21]:
g.sfr_offset(logmassmin=10.,logmassmax=10.5)


****************fit parameters [ 0.70194788 -7.4065244 ]
Core
median lower 68% SFRdiff confidence interval -0.4250797824757129
median lower 90% SFRdiff confidence interval -0.6346058035596869
External
median lower 68% SFRdiff confidence interval -0.0880567600607236
median lower 90% SFRdiff confidence interval -0.19527288997964115
Core has lower 68% SFR limit 0.999 of the time
Core has lower 90% SFR limit 0.973 of the time
KS Test:
D =   0.44
p-vale = 0.00701 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   3.76
p-vale = 0.00999 (prob that samples are from same distribution)

In [22]:
g.sfr_offset(logmassmin=10.3,logmassmax=10.8)


****************fit parameters [ 0.70194788 -7.4065244 ]
Core
median lower 68% SFRdiff confidence interval -0.5238791478725346
median lower 90% SFRdiff confidence interval -0.7044161981486567
External
median lower 68% SFRdiff confidence interval -0.1693015218958962
median lower 90% SFRdiff confidence interval -0.39040925040081476
Core has lower 68% SFR limit 0.967 of the time
Core has lower 90% SFR limit 0.983 of the time
KS Test:
D =   0.40
p-vale = 0.13586 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   0.68
p-vale = 0.17416 (prob that samples are from same distribution)

It doesn't look like the differences become significant in different mass bins, at least not without a lot of fiddling with the binning.


In [23]:
g.plotmusfr_optdisksize()


Core - Spearman Rank
rho =  -0.4310605286361902
p =  0.00010136184603349731
No handles with labels found to put in legend.
External - Spearman Rank
rho =  -0.15225431451326593
p =  0.12851922028012175
No handles with labels found to put in legend.

At a fixed $R_e(r)$, galaxies with lower $\Sigma_{SFR}$ also have lower $\Sigma_{\star}$. Clusters are missing a small number of galaxies at high $R_e(r)$ with large $\Sigma_{SFR}$, but it's not clear if that's significant. My guess is that's what's driving the poor correlation in the external sample. At fixed $R_e(r)$ it also appears that the cluster could have a different $\Sigma_\star$ than the external sample, but it's not clear. $\Sigma_\star$ is measured using the single Sersic fits to the galaxy from the NSA. We might want to redo these using GIM2D to be consistent, or compute $R_e$ analytically from the bulge-disk fits, again for consistency.

The horizontal dashed line is for reference only.
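For the record, a sketch of $\Sigma_\star$ under the same half-the-total-inside-the-radius convention as $\Sigma_{SFR}$; the convention and the numbers here are my assumptions, while the notebook actually takes $R_e$ from the NSA single-Sersic fits:

```python
import numpy as np

def sigma_star(mstar, re_kpc):
    """Stellar mass surface density within Re, assuming half the
    total mass lies inside Re (the same convention as Sigma_SFR)."""
    return 0.5 * mstar / (np.pi * re_kpc**2)

# e.g. a log(M*) = 10 galaxy with Re = 3 kpc, in Msun/kpc^2
print(np.log10(sigma_star(10**10.0, 3.0)))
```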


In [24]:
g.plotmusfr_mustar()


Core - Spearman Rank
rho =  0.2811756664388243
p =  0.013875680189961867
No handles with labels found to put in legend.
External - Spearman Rank
rho =  0.4368225790556267
p =  4.948314882468881e-06
No handles with labels found to put in legend.

This might just be telling us that smaller stellar disks also have smaller SF disks. But interestingly, it's not just size but also surface density.


In [25]:
g.musfr_mustar_ks()


3 bins of mass
########## 9.7 10.1
log mass range  9.7 10.1
SFR surface density
KS Test:
D =   0.41
p-vale = 0.01599 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   2.24
p-vale = 0.03835 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.31
p-vale = 0.12282 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   0.71
p-vale = 0.16866 (prob that samples are from same distribution)
########## 10.1 10.5
log mass range  10.1 10.5
SFR surface density
KS Test:
D =   0.46
p-vale = 0.01572 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   1.42
p-vale = 0.08325 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.43
p-vale = 0.02748 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   1.46
p-vale = 0.08060 (prob that samples are from same distribution)
########## 10.5 10.899999999999999
log mass range  10.5 10.899999999999999
SFR surface density
KS Test:
D =   0.50
p-vale = 0.31803 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.69
p-vale = 0.72159 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.58
p-vale = 0.16454 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   0.32
p-vale = 0.25108 (prob that samples are from same distribution)

This shows the distributions of $\Sigma_{SFR}$ and $\Sigma_\star$. While the probability that these are different distributions is high with this mass binning, as I show below the significance of the difference goes away with a slightly different mass binning. So I would say that they are consistent with identical distributions.


In [26]:
g.musfr_mustar_ks(logmassmin=9.6, logmassmax=10.8,dlogmass=0.3)


4 bins of mass
########## 9.6 9.9
log mass range  9.6 9.9
SFR surface density
KS Test:
D =   0.35
p-vale = 0.13495 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.25
p-vale = 0.45454 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.30
p-vale = 0.27527 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.92
p-vale = 0.92169 (prob that samples are from same distribution)
########## 9.9 10.2
log mass range  9.9 10.2
SFR surface density
KS Test:
D =   0.35
p-vale = 0.13495 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.19
p-vale = 0.42603 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.25
p-vale = 0.49734 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -1.15
p-vale = 1.18727 (prob that samples are from same distribution)
########## 10.2 10.5
log mass range  10.2 10.5
SFR surface density
KS Test:
D =   0.55
p-vale = 0.00982 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   1.79
p-vale = 0.05884 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.35
p-vale = 0.22772 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.16
p-vale = 0.41226 (prob that samples are from same distribution)
########## 10.5 10.799999999999999
log mass range  10.5 10.799999999999999
SFR surface density
KS Test:
D =   0.50
p-vale = 0.31803 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.69
p-vale = 0.72159 (prob that samples are from same distribution)
Stellar mass surface density
KS Test:
D =   0.58
p-vale = 0.16454 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   0.32
p-vale = 0.25108 (prob that samples are from same distribution)

In [27]:
g.matchsamp_mass()


*********fit parameters [-2.28122189 -0.07450221]

This plot shows mass-matched samples constructed by choosing all external galaxies within +/-0.3 dex in mass. I omit core galaxies with no field galaxies in this mass range. Each point shows how one of the core galaxies differs from the value for the median of the matched sample. The red line shows the intrinsic correlation you'd expect in the right-hand axis, given that $\Sigma_{SFR}$ is inversely proportional to $R_{24}^2$. This assumes that the optical disks are the same size and that the SFRs for the two samples are the same.

The blue line shows a simple fit to the black points assuming uniform errors. There appears to be no residual correlation beyond what is expected. I wonder what would happen if I included errors or bootstrapped the fit.
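A minimal sketch of the mass-matching step; the function name and toy arrays are mine, while the real code runs over the full catalog:

```python
import numpy as np

def match_by_mass(logm_core, logm_ext, y_ext, dlogm=0.3):
    """For each core galaxy, the median of y over external galaxies
    within +/- dlogm dex in log stellar mass; NaN if none match
    (those core galaxies are omitted from the plot)."""
    med = np.full(logm_core.size, np.nan)
    for i, m in enumerate(logm_core):
        sel = np.abs(logm_ext - m) < dlogm
        if sel.any():
            med[i] = np.median(y_ext[sel])
    return med

logm_core = np.array([9.8, 10.4, 11.5])
logm_ext = np.array([9.7, 9.9, 10.3, 10.5])
y_ext = np.array([0.1, 0.3, -0.2, 0.0])
print(match_by_mass(logm_core, logm_ext, y_ext))
```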


In [28]:
g.plot_n24diff_mipssizediff()
g.plot_n24diff_mipssizediff(btcutflag=False)


for the left panel
rho =  -0.2978700133978618
p =  0.008966092510207671
Galaxies with low  R24, high n, and low Sigma SFR, all relative
[ 68305. 146606.  72659. 166167. 103648.] [0.82470399 0.63997495 0.84069818 0.54306495 0.46031362] [-0.77416639 -0.70206155 -0.34333089 -0.48186984 -0.22615193]
*********fit parameters for the right panel [-2.04043823 -0.06905   ]
for the left panel
rho =  -0.35468118195956455
p =  0.00020534904751949354
Galaxies with low  R24, high n, and low Sigma SFR, all relative
[ 70696.  43836.  68305. 146606. 146636.  72623.  72659. 146115. 146121.
 166167.  89063. 103648.] [0.48801178 0.68571132 0.80879045 0.63997495 0.50543755 0.67067313
 0.81145966 1.11565781 0.7639848  0.52715141 0.56149685 0.52024817] [-0.44040556 -0.39567618 -0.75196443 -0.70206155 -0.64535498 -0.23901859
 -0.3222343  -0.41349331 -0.388137   -0.41769712 -0.82635195 -0.22615193]
*********fit parameters for the right panel [-1.97141621 -0.09535164]

This is now constructed using samples matched in stellar mass and optical disk size. The left panel is color-coded by the difference in SFR. From van der Wel et al. 2012 Fig. 7 it looks like bigger sizes result in slightly larger sersic n. This is the opposite of the trend in the left panel, which indicates that larger sizes yield lower sersic n. It also looks like galaxies with lower $n$ have lower SF. Also from van der Wel et al. 2012, it appears that making the sersic $n$ larger makes the integrated profile brighter. That would mean that increased $n$ could result in systematically higher SFR, which could drive some of the correlation of $\Delta \Sigma_{SFR}$ with $\Delta n_{24}$.

In the right panel, the scatter is lower than in the matchsampmass plot, indicating that the scatter there may have been driven by the scatter in optical disk size. The green line is the median $\Delta \log \Sigma_{SFR}$. There may be a weak trend, but I need to do a better job with the errors to confirm this. The red line is the expected correlation, and the blue line is the fit (without data uncertainties) to the data.

In the plot below, I highlight the sources with $\Delta \log(n_{24})>0.45$ and $\Delta \log({\rm SFR})<-0.2$. These are the red/orange points at the top of the plot, and the x's in the figure below.


In [11]:
g.plotSFRStellarmass_matchsamp()
g.plotSFRStellarmass_matchsamp(btcutflag=False)


/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1592: RuntimeWarning: invalid value encountered in less
  cind = np.where((self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1593: RuntimeWarning: invalid value encountered in less
  eind = np.where((~self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut))
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1594: RuntimeWarning: invalid value encountered in less
  cflag = (self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut)
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1595: RuntimeWarning: invalid value encountered in less
  eflag = (~self.membflag & self.sampleflag) & (self.gim2d.B_T_r < btcut)
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1632: RuntimeWarning: divide by zero encountered in true_divide
  self.sfrdense = 0.5 * self.SFR_USE / (np.pi * self.mipssize**2)
/Users/grudnick/github/LCS/python/Python3/LCS_MSpaper.py:1642: RuntimeWarning: invalid value encountered in less
  mmatchflag = (eflag) & (abs(dlMstar) < dlMstarsel) & (self.s.NSAID[i] != self.s.NSAID) & (abs(drd) < drdsel)
****************fit parameters [ 0.70194788 -7.4065244 ]
****************fit parameters [ 0.70194788 -7.4065244 ]
****************fit parameters [ 0.70194788 -7.4065244 ]
****************fit parameters [ 0.70194788 -7.4065244 ]

In [10]:
g.sfr_offset_matchsamp(btcutflag=False)


****************fit parameters [ 0.77272569 -8.0923806 ]
##############
SF compared to morphological sample
KS Test:
D =   0.21
p-vale = 0.03524 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   2.84
p-vale = 0.02231 (prob that samples are from same distribution)
##############
morphological sample compared to normal sizes
KS Test:
D =   0.15
p-vale = 0.49422 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.94
p-vale = 0.94957 (prob that samples are from same distribution)
##############
morphological sample compared to small sizes
KS Test:
D =   0.24
p-vale = 0.23432 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =  -0.42
p-vale = 0.53958 (prob that samples are from same distribution)
##############
normal sizes  compared to small sizes
KS Test:
D =   0.30
p-vale = 0.10333 (prob that samples are from same distribution)
Anderson-Darling test Test:
D =   0.57
p-vale = 0.19464 (prob that samples are from same distribution)

This shows the distribution of SF w.r.t. the main sequence for different samples of galaxies. Top: all the galaxies in the SF sample. Second: all the galaxies in the morphological sample. Third: the galaxies in the morphological sample with normal 24um sizes compared to a set of matched external galaxies. Bottom: the galaxies in the morphological sample with small 24um sizes compared to a set of matched external galaxies. All of the samples are consistent except for the top and second panels, which are marginally consistent. This doesn't really depend on where I draw the boundary between small and normal.


In [11]:
g.sersicint()


This plot shows how the luminosity of a sersic profile depends on the radius at which I truncate the profile. This is for different sersic indices, and all curves are normalized to have the same central intensity. The total luminosity is computed at $R<7R_e$.
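The enclosed-light curve has a closed form via the incomplete gamma function; a sketch (the normalization conventions here are mine, not necessarily those of `sersicint`):

```python
import numpy as np
from scipy.special import gammainc, gammaincinv

def frac_light(R_over_Re, n):
    """Fraction of a Sersic profile's total light inside radius R, for
    I(R) = I0 * exp(-b_n * (R/Re)**(1/n)), with b_n from the exact
    condition that half the light falls inside Re."""
    b = gammaincinv(2.0 * n, 0.5)
    return gammainc(2.0 * n, b * R_over_Re**(1.0 / n))

# by construction, half the light is inside Re for any n
print(frac_light(1.0, 1.0), frac_light(1.0, 4.0))
# truncating at 7 Re captures essentially all the light for n = 1,
# but noticeably less for high n, whose profiles have extended wings
print(frac_light(7.0, 1.0), frac_light(7.0, 4.0))
```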


In [14]:
g.r24_re_msdist()
g.r24_re_msdist(btcutflag=False)


****************fit parameters [ 0.77272569 -8.0923806 ]
****************fit parameters [ 0.77272569 -8.0923806 ]