Droplets

This Jupyter notebook carries out the analyses in the upcoming journal article on gravitationally unbound, coherent structures with significant velocity gradients. The notebook is edited to work with the data and Python scripts in the GitHub repo.

0. Abstract and Outline

The project looks for coherent cores with significant velocity gradients in the L1688 region in Ophiuchus and the B18 region in Taurus, out of the four nearby star forming regions covered by the GBT Ammonia Survey (GAS) Data Release 1 (Friesen and Pineda et al., 2017). One goal of the project is to update the numbers for physical properties of (potentially) rotational motions within these structures. The relation between the velocity gradient and the size was first examined using observations of ammonia molecular line emission by Goodman et al. (1993). With an improved physical resolution of ~4000 AU (the FWHM beam size at the distance of Ophiuchus and Taurus), we hope to provide more reliable and relevant numbers for simulations and analytical models, especially ones concerned with disk formation inside star forming cores.

Subsequent analyses in this project show that the structures are possibly gravitationally unbound, despite subsonic velocity dispersions and relatively high column densities. This behavior is unexpected in the final stages of monolithic star formation driven by gravity. Due to their small sizes and their seeming unboundedness, we term these structures "droplets," to distinguish them from "cores," a term often reserved for bound, star forming, sub-parsec structures. The project aims to provide a first guess at how the "droplets" form and what role they could play in the star formation process.

This notebook is organized as follows. In I. Identification of Structures, we go through how we identify these structures and how we define their boundaries. In II. Physical Properties of Droplets, we derive the basic physical properties of the "droplets," based on the property maps from ammonia line fitting to the GAS DR1 data. In III. Kinematics and Dynamics, we look for possible explanations of the formation of "droplets" and the role they play in star formation. In particular, we look into the spatial distribution of ram pressure within and around the "droplets," to see whether the droplets could be pressure-bound. In all sections of this notebook, we also provide the code to generate the tables and plots that eventually appear in the corresponding parts of the article.

Further discussions are included in the article hosted on Authorea. This notebook, though filled with definitions essential for users to understand the analyses, is not meant to be as complete or continuous as the journal article itself. Please refer to the article for details.

CAUTION: This is work in progress.

[The square brackets in this notebook host the technical explanations of the following code cells. They should be treated as code comments.]

Authors

  • Hope Chen (leading and corresponding, at hhchen@cfa.harvard.edu; Harvard-Smithsonian Center for Astrophysics)
  • Jaime Pineda (Max-Planck-Institut für Extraterrestrische Physik)
  • Alyssa Goodman (Harvard-Smithsonian Center for Astrophysics)
  • Andreas Burkert (University Observatory Munich)

Code Dependencies

The version numbers listed below correspond to the versions used when this notebook was compiled. Most Python packages are backward compatible, but there is no guarantee. Please report problems via the issue tracker of the Droplet repo.

  • numpy: v1.13.1
  • scipy: v0.19.1
  • astropy: v2.0.2
  • FITS_tools: v0.0.dev
  • matplotlib: v2.0.2
  • pandas: v0.20.3

In [1]:
%matplotlib inline
import sys
import os
from collections import *

# numerical packages
import numpy as np
import scipy

# FITS I/O, WCS, units, constants, and regridding
from astropy.io import fits
import astropy.wcs as wcs
import astropy.units as u
import astropy.constants as c
import FITS_tools as fits_tools

# plotting
import matplotlib.pyplot as plt
import matplotlib.colors as colors
#from matplotlib import rcParams

# tables
import pandas as pd

# project-specific modules in this repo
from Droplets import *
from stat_tools import *
from constants import *
import styles

Main Variables in this Notebook

The variables are derived along the way, following the steps outlined above. This list is just for bookkeeping. (In the original notebook, each variable links to the code cell where it is first defined.)

0.1 Data and Targets

The GAS DR1 includes observations of the ammonia hyperfine line emission made with the Green Bank Telescope (GBT). By fitting Gaussian profiles to the ammonia hyperfine lines, Friesen and Pineda et al. (2017) derived the property maps of four nearby star forming regions: B18 in Taurus, L1688 in Ophiuchus, NGC1333 in Perseus, and Orion A North. These physical properties include the NH$_3$ velocity centroid (in the LSR frame; $V_\text{LSR}$), the NH$_3$ velocity dispersion ($\sigma_V$), the NH$_3$ column density ($N_{\text{NH}_3}$), the excitation temperature of the NH$_3$ hyperfine lines ($T_\text{ex}$), and the kinetic temperature ($T_\text{kin}$). Besides the property maps included as part of the GAS DR1, we also derive the peak main beam temperature from the trimmed and baseline-subtracted data cubes from observations of the NH$_3$ (1, 1) hyperfine lines. We look for coherent structures using these physical properties in the B18 region and the L1688 region, whose proximity (at distances of ~135 and ~137.3 pc, respectively; Schlafly et al., 2014; Ortiz-León et al., 2017) grants us the highest physical resolution (the GBT FWHM beamsize of 32" at 23 GHz corresponds to ~4300 AU).

[The Python dictionary distances stores the distances that are used throughout this paper, as astropy.units quantities. The code cell below is for demonstrative purposes; for a complete list of constants used in this paper, see constants.py.]


In [2]:
distances = {'L1688': 137.3*u.pc, 'B18': 135.*u.pc}
## uncertainties
distances['L1688_e'] = 6.*u.pc
distances['B18_e'] = 20.*u.pc

The GAS data are supplemented by the column density and dust temperature maps derived using Herschel observations of dust continuum emission. The Herschel observations were made as part of the Herschel Gould Belt Survey (GBS; André et al., 2010), and the data are from the Herschel Science Archive. Ayushi Singh and Peter Martin at the University of Toronto derived maps of the optical depth and the dust temperature by fitting the SED from the SPIRE observations to opacity-corrected blackbody emission ("gray-body emission"; Hildebrand, 1983). Singh and Martin assumed a dust opacity at 1 THz ($\kappa_\text{1THz}$) of 10 cm$^2$ g$^{-1}$ with a power-law dependence on frequency with index $\beta = 1.62$, and a gas-to-dust ratio of 100. The resulting maps can be found in the GitHub ./data/Herschel/ folder, with key parameters used in the SED fitting accessible in the FITS headers. The FWHM beamsize of SPIRE 500 µm is 36", which matches well with the GBT beamsize.
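
[For reference, the gray-body model takes a simple form in the optically thin limit. The following is a minimal sketch for illustration only (the function name is ours), not the Singh and Martin pipeline.]

import numpy as np
import astropy.units as u
import astropy.constants as c

def graybody(nu, T_dust, tau_1THz, beta=1.62):
    # Optically thin gray-body intensity, B_nu(T_dust) * tau_nu,
    # with tau_nu = tau_1THz * (nu / 1 THz)**beta (Hildebrand, 1983).
    B_nu = 2.*c.h*nu**3./c.c**2./(np.exp((c.h*nu/(c.k_B*T_dust)).decompose())-1.)/u.sr
    tau_nu = tau_1THz*((nu/(1.*u.THz)).decompose())**beta
    return (B_nu*tau_nu).to(u.MJy/u.sr)

# e.g., the SPIRE 500 micron band, for T_dust = 15 K and tau_1THz = 1e-3
print(graybody((c.c/(500.*u.um)).to(u.GHz), 15.*u.K, 1.e-3))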

The full GAS DR1 dataset is hosted on Dataverse, and includes the original data cubes from GBT observations of various molecular lines. The peak main beam temperature maps were not included in the set of property maps in the GAS DR1, but were instead derived from the trimmed and baseline-subtracted data cubes. Due to the GitHub policy on large data files, the Droplets GitHub repo includes only the data maps that are needed in the analyses, but not the data cubes from which the property maps were derived.

[The datasets are read and stored in dict_data. The maps of the error in each physical property were produced for the GAS DR1 alongside the property maps, and are stored in dict_data with a prefix "e". The property maps in the GAS DR1 are flagged by the detection level and the goodness of the fit. In the arrays directly read in from files in this GitHub repo and in the GAS DR1 dataverse, the flagged pixels within the observed areas have values of 0. In the following code cell, we remove the flags by converting them to NaN values, for ease of later calculations. This is not done for the maps of the peak main beam temperature, which are derived from the data cubes that include the full observed areas.]


In [2]:
# data folder (within the current directory)
direcData = os.getcwd()+'/data/'
## lists of property maps to be read in
list_propertiesGAS = ['Vlsr', 'Sigma', 'N_NH3', 'Tex', 'Tkin']
list_propertiesHerschel = ['colden', 'temp']


# Read the data.
dict_data = {'L1688': {}, 'B18': {}}
for reg in ['L1688', 'B18']:
    
    # GAS DR1 Property Maps
    for prop in list_propertiesGAS:
        ## data
        data = fits.open(direcData+'GAS_DR1/'+reg+'/'+reg+'_'+prop+'_DR1_rebase3_flag.fits')[0].data
        data[data == 0.] = np.nan ## Remove the flags
        dict_data[reg][prop] = data
        
        ## error maps
        data = fits.open(direcData+'GAS_DR1/'+reg+'/'+reg+'_e'+prop+'_DR1_rebase3_flag.fits')[0].data
        data[data == 0.] = np.nan ## Remove the flags
        dict_data[reg]['e'+prop] = data
    ## header
    dict_data[reg]['header_GAS'] = fits.open(direcData+'GAS_DR1/'+reg+'/'+reg+'_Vlsr_DR1_rebase3_flag.fits')[0].header
    
    # GAS DR1 Peak Main Beam Temperature
    dict_data[reg]['Tpeak'] = fits.open(direcData+'GAS_DR1/'+reg+'/'+reg+'_Tpeak_DR1_rebase3_trim.fits')[0].data
    
    # Herschel
    for prop in list_propertiesHerschel:
        ## data
        dict_data[reg][prop] = fits.open(direcData+'Herschel/'+reg+'/'+reg+'_'+prop+'_masked.fits')[0].data
        
        ## error maps
        dict_data[reg]['e'+prop] = fits.open(direcData+'Herschel/'+reg+'/'+reg+'_'+prop+'_err_masked.fits')[0].data
    ## header
    dict_data[reg]['header_Herschel'] = fits.open(direcData+'Herschel/'+reg+'/'+reg+'_colden_masked.fits')[0].header

Several properties that are immediately derivable from the property maps include the total velocity dispersion ($\sigma_\text{tot}$) and its thermal and non-thermal components ($\sigma_\text{T}$ and $\sigma_\text{NT}$, respectively). These can be derived from the NH$_3$ linewidth ($\sigma_{\text{NH}_3}$) and the kinetic temperature of the gas ($T_\text{kin}$), assuming that the different molecular species are in thermal equilibrium. The equation used here is: $$\sigma_\text{tot}^2 = \sigma_{\text{NH}_3}^2 - \frac{k_B T_\text{kin}}{m_{\text{NH}_3}} + \frac{k_B T_\text{kin}}{m_\text{ave}} \text{,}\ \ \ \ \ \text{[1]}$$ where $m_{\text{NH}_3}$ is the molecular weight of NH$_3$ and $m_\text{ave}$ is the average particle weight. Throughout this paper, we use $m_\text{ave} = 2.37\ \text{a.m.u.}$ based on the results presented by Kauffmann et al. (2008), and $m_{\text{NH}_3} = 17.031\ \text{a.m.u.}$ according to the IUPAC value.

The total velocity dispersion can be further separated into the thermal and the non-thermal components. The thermal component of the velocity dispersion is the velocity dispersion expected of an average particle ($m_\text{ave} = 2.37\ \text{a.m.u.}$) at the kinetic temperature, $T_\text{kin}$. That is, $$\sigma_\text{T}=\sqrt{\frac{k_\text{B}T_\text{kin}}{m_\text{ave}}}\ \text{.}\ \ \ \ \ \text{[2]}$$ This is also the sound speed of a uniform medium composed of the average particles, at $T_\text{kin}$. The non-thermal component of the velocity dispersion is the NH$_3$ linewidth beyond what can be explained by the thermal velocity dispersion of the NH$_3$ particles. That is, $$\sigma_\text{NT}=\sqrt{\sigma_{\text{NH}_3}^2-\frac{k_\text{B}T_\text{kin}}{m_{\text{NH}_3}}}\ \text{.}\ \ \ \ \ \text{[3]}$$ The thermal and the non-thermal components are related to the total velocity dispersion by a squared sum: $$\sigma_\text{tot}^2 = \sigma_\text{T}^2+\sigma_\text{NT}^2\ \text{.}\ \ \ \ \ \text{[4]}$$

[The derived properties, $\sigma_\text{tot}$, $\sigma_\text{T}$, and $\sigma_\text{NT}$, are stored in the same dictionary: dict_data. The following code cell might generate a RuntimeWarning regarding an invalid value encountered in sqrt, which by default would simply produce a NaN value at the pixel(s). The warning message arises in the unlikely event where the observed NH$_3$ linewidth is narrower than the thermal linewidth of the NH$_3$ molecules expected at the observed kinetic temperature. This happens at only ~0.01% of the sky areas observed in the L1688 and B18 regions combined. The definition of the variable mass below is merely for demonstrative purposes. For the full list of constants used in this paper, see constants.py.]


In [3]:
# mass for the NH3 molecule and an average particle in ISM
mass = {'NH3': 17.031*u.u, 'average': 2.37*u.u}


# Derive the total velocity dispersion, and its thermal and non-thermal components.
for reg in ['L1688', 'B18']:
    
    # NH3 linewidth and kinetic temperature from GAS DR1
    sigmaNH3 = dict_data[reg]['Sigma']
    Tkin = dict_data[reg]['Tkin']
    
    
    # derived properties
    ## thermal and non-thermal components
    sigmaNT = np.sqrt((sigmaNH3*u.km/u.s)**2. - c.k_B*Tkin*u.K/mass['NH3']).to(u.km/u.s).value
    sigmaT = np.sqrt(c.k_B*Tkin*u.K/mass['average']).to(u.km/u.s).value
    ## total velocity dispersion
    sigmaTot = np.sqrt((sigmaNT*u.km/u.s)**2.+(sigmaT*u.km/u.s)**2.).to(u.km/u.s).value
    
    
    # store the derived maps
    dict_data[reg]['SigmaNT'] = sigmaNT
    dict_data[reg]['SigmaT'] = sigmaT
    dict_data[reg]['SigmaTot'] = sigmaTot


//anaconda/lib/python2.7/site-packages/astropy/units/quantity.py:641: RuntimeWarning: invalid value encountered in sqrt
  *arrays, **kwargs)

For easier comparison between the GAS DR1 data and the Herschel maps, we project the Herschel maps onto the GAS DR1 grid. As mentioned above, the GBT FWHM beamsize at 23 GHz (32") is similar to the FWHM beamsize of Herschel 500 µm (36"). Thus, we can safely assume that the regridding does not introduce a significant bias.

[The regridding is done using the Python package FITS_tools. The newly projected images replace the original images in dict_data.]


In [4]:
# Regrid the Herschel maps.
for reg in ['L1688', 'B18']:
    
    for prop in list_propertiesHerschel:
        
        # Regrid using hcongrid.
        ## data
        image = dict_data[reg][prop].copy()
        header1 = dict_data[reg]['header_Herschel'] ## original projection
        header2 = dict_data[reg]['header_GAS'] ## projection to interpolate into
        
        dict_data[reg][prop] = fits_tools.hcongrid.hcongrid(image, header1, header2)
        
        ## error maps
        image = dict_data[reg]['e'+prop].copy()
        header1 = dict_data[reg]['header_Herschel'] ## original projection
        header2 = dict_data[reg]['header_GAS'] ## projection to interpolate into
        
        dict_data[reg]['e'+prop] = fits_tools.hcongrid.hcongrid(image, header1, header2)

In the analyses below, we use the YSO catalogs from Dunham et al. (2015) and Rebull et al. (2010) to determine the existence of YSO sources potentially associated with the target structures in Ophiuchus and in Taurus, respectively. Parsed versions of the catalogs, which include only Class 0/I and flat spectrum YSOs above certain "grades" of reliability assigned in the papers just mentioned, can be found in the data folder of this GitHub repo. The original, full catalogs can be found on VizieR on the corresponding catalog pages: catalogs in Dunham et al. (2015) and catalogs in Rebull et al. (2010).

The parsed versions of the catalogs include only Class 0/I and flat spectrum YSOs (according to the definitions in the corresponding papers mentioned above) that are not flagged as AGBs in Dunham et al. (2015) nor ranked below "A-" in Rebull et al. (2010). The catalogs include the coordinates, the object names in the original surveys, and the classification. In the case of L1688.csv, the column "alpha_prime" corresponds to the extinction corrected spectral index, $\alpha^\prime$, in Dunham et al. (2015), and is used to determine the class of the YSOs. According to Dunham et al. (2015), YSOs with $\alpha^\prime \geq 0.3$ belong to Class 0/I, and YSOs with $0.3 > \alpha^\prime \geq -0.3$ belong to the "flat spectrum" category.
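
[For reference, the class cuts on $\alpha^\prime$ could be reproduced along the following lines. This is a sketch for illustration only, assuming the column name "alpha_prime" in L1688.csv described above.]

import os
import pandas as pd

# Class cuts on the extinction corrected spectral index (Dunham et al., 2015):
# Class 0/I: alpha' >= 0.3; flat spectrum: -0.3 <= alpha' < 0.3.
catalog = pd.read_csv(os.getcwd()+'/data/YSO_catalogs/L1688.csv')
class0I = catalog[catalog['alpha_prime'] >= .3]
flatspec = catalog[(catalog['alpha_prime'] >= -.3) & (catalog['alpha_prime'] < .3)]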

[Since only the pixel coordinates on the GAS data projection are needed, the celestial coordinates are converted using the GAS headers before being stored in dict_YSOs. The resulting array of pixel coordinates has a length equal to the number of YSOs in the catalog and a width of two, corresponding to the x (Axis 1 in a numpy array) and the y (Axis 0 in a numpy array) coordinates.]


In [5]:
# data folder (within the current directory)
direcData = os.getcwd()+'/data/'

# Read the data.
dict_YSOs = {}
for reg in ['L1688', 'B18']:
    
    ## catalogs
    catalog = pd.read_csv(direcData+'YSO_catalogs/'+reg+'.csv')
    
    ## header used for converting YSO coordinates to pixel coordinates
    header = dict_data[reg]['header_GAS']
    
    ## Convert YSO coordinates to pixel coordinates
    projection = wcs.WCS(header)
    dict_YSOs[reg] = projection.wcs_world2pix(np.c_[catalog['RA'].values, catalog['Dec'].values], 0)

0.2 Sources from Goodman et al. (1993)

Goodman et al. (1993) presented a survey of 43 sources with observations of NH$_3$ line emission (see the SIMBAD object list; several objects are associated with different SIMBAD identifiers than in the paper). The properties of the sources are summarized in Table 1 and Table 2 of the paper. Unfortunately, no digitization exists for the paper and the tables, so we manually transcribe the two tables; the results can be found in the GitHub ./data/Goodman93/ folder.

To compare with the properties of the 43 sources observed by Goodman et al. (1993), we update the tables with 1) peak positions in the FK5 J2000.0 frame, 2) an updated mass of average particles (2.37 a.m.u.; Kauffmann et al., 2008), 3) updated measurements of distances, and 4) updated derived properties based on the new particle weight and distances. The updated distances are summarized below:

  1. Regions associated with the molecular cloud in Perseus: PER3, PER6, and B5. We adopt distances measured by Schlafly et al. (2014) using PanSTARRS-1 photometry, which are 315±32 pc for the western part of the Perseus molecular cloud (including PER3 and PER6) and 260±26 pc for the eastern part (including B5).

  2. Regions associated with the molecular cloud in Taurus: L1489, L1498, L1495, L1495NW, L1495SE, TAU11, TAU16, B217, L1524, TMC-2A, L1534 (TMC-1A), L1527, TMC-1C, and L1517B. We adopt a distance of 135±20 pc, measured by Schlafly et al. (2014).

  3. Regions associated with $\lambda$ Orionis: L1582A and B35A. We adopt a distance of 420±42 pc, measured by Schlafly et al. (2014).

  4. A region associated with the molecular cloud and the YSO cluster in Ophiuchus (sometimes under the name of $\rho$ Oph): L1696A. We adopt a distance of 137.3±6 pc, measured by Ortiz-León et al. (2017) using parallax.

  5. Regions associated with clouds and clumps in Oph N: L43/RNO90, L43, L260 (a.k.a. L255), L158, L234E, L234A, and L63. These regions are usually associated with the Ophiuchus complex or, on a larger scope, the Upper Sco-Oph-Cen complex. Goodman et al. (1993) adopted the same distance for these regions as for L1696A. Here we use an updated distance measurement of 125±18 pc to the Ophiuchus complex by Schlafly et al. (2014). This is in good agreement with the widely used 125±45 pc, measured by de Geus et al. (1989).

  6. Regions associated with Cepheus Flare: The Cepheus Flare spans more than 10 degrees from North to South on the plane of the sky, and is known to have a complicated structure with multiple concentrations of material at different distances. Here we adopt distance measurements for different regions in Cepheus Flare, and note that these distances were used by Kauffmann et al. (2008) side-by-side. Note, however, that Schlafly et al. (2014) measured 360±35 pc for the southern part of Cepheus Flare and 900±90 pc for the northern part of Cepheus Flare. See discussions in Schlafly et al. (2014).

    a. L1152: 325±13 pc, measured by Straižys et al. (1992) using photometry.

    b. L1082C, L1082A, and L1082B: 400±50 pc, measured by Bourke et al. (1995) using photometry.

    c. L1174 and L1172A: 288±25 pc, measured by Straižys et al. (1992) using photometry.

    d. L1251A, L1251E, and L1262A: we update the distance used by Kauffmann et al. (2008) based on Kun (1998; 300$^{+50}_{-10}$ pc) with a more recent measurement of 286±20 pc made by Zdanavičius et al. (2011) using photometry.

  7. Other regions with distances measured from masers:

    a. L1400G and L1400K: 170±50 pc, measured by Montillaud et al. (2015).

    b. L134A: 110±10 pc, measured by Montillaud et al. (2015).

  8. Regions of which the distances are not updated since 1990s, but are cited recently. Here we provide a list of the original references and the most recent year when each reference was cited.

    a. L483: 200 pc (Dame & Thaddeus, 1985), with citations as recent as 2017.

    b. L778: 200 pc (Schneider & Elmegreen, 1979), with citations as recent as 2017.

    c. B361: 350 pc (Schmidt, 1975), with citations as recent as 2010.

    d. L1031B: 900 pc (Hilton & Lahulla, 1995), with citations as recent as 2017.

And the derived properties are affected by the updates in distance as follows (see the sketch after this list):

  1. Mass ($M$): Since the mass was calculated from the number density derived by modeling the radiative transfer of NH$_3$ line emission, the mass estimates scale with the distance to the cubic order, $M \propto D^3$.

  2. Sizes ($r_\text{major}$, $r_\text{minor}$, and $R$): Sizes $\propto D$.

  3. The velocity gradient ($\mathcal{G}$): $\mathcal{G} \propto D^{-1}$.

  4. The product of the gradient and the radius ($\mathcal{G}\times R$): The product remains unchanged, since the effect of a new distance cancels out.

  5. The ratio of rotational and gravitational energies ($\beta$): $\beta \propto \mathcal{G}^2 \propto D^{-2}$.

  6. The specific angular momentum ($J/M$): $J/M \propto \mathcal{G}R^2 \propto D$.
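
[The rescaling itself is mechanical. The following is a minimal sketch of how a single row could be updated, with hypothetical column names; the conversion actually used is wrapped in ./Goodman93_updates.py.]

def rescale_row(row, D_new, D_old):
    # Rescale the distance-dependent properties by the distance ratio.
    f = D_new/D_old
    row['M'] *= f**3.         # mass: M ~ D^3
    row['Reff'] *= f          # sizes ~ D
    row['Gradient'] /= f      # velocity gradient: G ~ D^-1
    # G x R is unchanged: the factors of f cancel.
    row['Beta'] /= f**2.      # beta ~ G^2 ~ D^-2
    row['JoverM'] *= f        # J/M ~ G R^2 ~ D
    row['Distance'] = D_new
    return row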

Based on these updates, we also create new columns including information on the uncertainties in the distance measurements and the radii used by Goodman et al. (1993) in subsequent calculations, which are the geometric means of the FWHMs along the major and minor axes ($r_\text{major}$ and $r_\text{minor}$): $$R_\text{eff} = \sqrt{r_\text{major}\times r_\text{minor}}\ \ \text{.}\ \ \ \ \ \text{[5]}$$

Unfortunately, the change due to an updated average particle weight cannot be applied to the total velocity dispersion, since no values of $T_\text{kin}$ and $\sigma_{\text{NH}_3}$ used in the calculations of $\sigma_\text{tot}$ were given by Goodman et al. (1993), except pointers to the work done by Benson and Myers (1989), Ladd 1991 (thesis work), and private communication with Benson in 1992. We include the peer reviewed values in Benson and Myers (1989) and the published numbers in Ladd et al. (1994), which is based on Ladd 1991, in the updated table (Column "Tkin" and Column "SigmaNH3" in table1_updated.csv), even though calculating $\sigma_\text{tot}$ based on these numbers usually results in disagreement with the numbers presented by Goodman et al. (1993), with discrepancies of up to 7%. The values reported via private communication with Benson were never made publicly available (covering a total of 5 regions out of the 43 regions presented by Goodman et al., 1993). Since, for typical values of $\sigma_{\text{NH}_3}$ and $T_\text{kin}$, the change of average particle weight from 2.33 a.m.u. to 2.37 a.m.u. (Kauffmann et al., 2008) usually affects $\sigma_\text{tot}$ by less than 1%, we do not update the measurements of $\sigma_\text{tot}$ presented by Goodman et al. (1993) with the updated average particle weight. See Figure Extra 1 for a visual comparison.

[The following code cell runs the code used to update Table 1 and Table 2 from Goodman et al. (1993), and the static versions are saved in the GitHub ./data/Goodman93/ folder. The code used to convert and update Table 1 and Table 2 in Goodman et al. (1993) is wrapped in ./Goodman93_updates.py. Note that values in the original Table 1 and Table 2 are rounded to the nearest significant digits. No rounding is done, however, for the updated values below. This is to guarantee a continuous and consistent conversion over all data points when compared to the properties derived in this paper. Please use the original tables to find the significant digits when needed. The updated tables are read in using the pandas package.]


In [6]:
# Update and convert values in the original tables.
import Goodman93_updates

# Read the updated tables.
direcData = os.getcwd()+'/data/Goodman93/'
dict_tables = {}
dict_tables['Goodman93_table1'] = pd.read_csv(direcData+'table1_updated.csv',
                                              index_col = False)
dict_tables['Goodman93_table2'] = pd.read_csv(direcData+'table2_updated.csv',
                                              index_col = False)

In Figure Extra 1, $\sigma_\text{tot}$ presented in Table 1 in Goodman et al. (1993) is compared to values calculated from $T_\text{kin}$ and $\sigma_{\text{NH}_3}$ in Benson and Myers (1989) and Ladd et al. (1994), assuming $m_\text{ave}$ of 2.33 or 2.37 a.m.u. In a total of 12 regions (B5, L1495-NW, L1400G, B217, L1524, TMC-2A, L234E, L63, L1152, L1174, B361, and L1251A), the difference due to the different assumptions of the average particle weight is visibly smaller than the difference between $\sigma_\text{tot}$ from Goodman et al. (1993) and the values calculated from $T_\text{kin}$ and $\sigma_{\text{NH}_3}$ in Benson and Myers (1989) and Ladd et al. (1994). In another 7 regions (L1495, L1495-SE, TAU11, TAU16, L43/RNO90, L483, and L1251E), either Benson and Myers (1989) and Ladd et al. (1994) provided no estimates of $T_\text{kin}$ or $\sigma_{\text{NH}_3}$ (L1495 and L1495-SE), or the data obtained via private communication with Benson by Goodman et al. (1993) were never published. The discrepancies and the lack of data occur in 19 regions (out of the 43 regions in Table 1 in Goodman et al., 1993), and have made updates using the new average particle weight impractical.

Figure Extra 1. The plot shows the difference between $\sigma_\text{tot}$ presented in Table 1 and the values calculated from $T_\text{kin}$ and $\sigma_{\text{NH}_3}$ in Benson and Myers (1989) and Ladd et al. (1994), assuming $m_\text{ave}$ of 2.33 or 2.37 a.m.u. (see Eq. 1). The region names are labeled on the x-axis. [This plot is not included in the article (thus the name). Change plotNewWeight to plot/un-plot estimates of $\sigma_\text{tot}$ using $m_\text{ave} = 2.37\ \text{a.m.u.}$]

[The codes used to produce the following plot are wrapped in the plotTable1Sigmas function, in ./Droplets.py.]


In [7]:
table = dict_tables['Goodman93_table1']
fig = plotTable1Sigmas(table, plotNewWeight = False)  ## <<< Change the code in this line!


I. Identification of Structures

We look for coherent structures (Goodman et al., 1998) in the L1688 region in Ophiuchus and the B18 region in Taurus. Goodman et al. (1998) proposed that these coherent structures represent the smallest scale in the self-similar process of star formation in turbulent clouds, within which the dominant gravity drives the final stage of star formation. The coherent structures are defined by a boundary of sharp transition in linewidth, from supersonic linewidths outside to subsonic linewidths inside the coherent structures. Thus, our selection and definition of target structures are based on a comparison between the thermal ($\sigma_\text{T}$) and non-thermal ($\sigma_\text{NT}$) components of the velocity dispersion. We additionally require that the coherent structures show a local peak in column density.

Note that Goodman et al. (1993) did not impose a criterion regarding the velocity dispersion when selecting targets. In this paper, since we intend to look for structures much smaller than the targets observed by Goodman et al. (1993) and often more embedded within star forming regions, we try to avoid line of sight confusion by choosing only coherent structures.

[The following code cell reads in results that will be explained later in this notebook, including the masks for the target structures and the maps of predicted $V_\text{LSR}$ based on the velocity gradient fits. The process of defining the boundary will be explained in this section. The fitting of the velocity gradient, on the other hand, is done in the next section, and the maps of the predicted $V_\text{LSR}$ are read in here simply to ease examination of how well a single linear velocity gradient fits the $V_\text{LSR}$ field within each structure.]


In [7]:
# directory to the masks
direcMasks = os.getcwd()+'/results/masks/'
dict_masks = {'L1688': {}, 'B18': {}}
# directory to the maps of predicted Vlsr
direcResults = os.getcwd()+'/results/Vlsr_predicted/'
dict_Vlsr_predicted = {'L1688': {}, 'B18': {}}

# Read in the maps.
for reg in ['L1688', 'B18']:
    if reg == 'L1688':
        listStructures = list(range(1, 13))+['extra']
    elif reg == 'B18':
        listStructures = range(1, 7)
        
    for structure in listStructures:
        ## masks
        dict_masks[reg][structure] = fits.open(direcMasks+reg+'/'\
                                               +reg+'_'+str(structure)+'.fits')[0].data.astype(bool)
        
        ## predicted Vlsr
        dict_Vlsr_predicted[reg][structure] = fits.open(direcResults+reg+'/'\
                                                        +reg+'_'+str(structure)+'_Vlsr_predicted.fits')[0].data

The quantitative criterion for defining the boundary of each target structure varies slightly from structure to structure, depending on the local environment each core sits in. The boundary is built following the steps below:

  1. Start with the contour where the NH$_3$ velocity dispersion, $\sigma_{\text{NH}_3}$, equals the velocity dispersion one would expect when the non-thermal component matches the sound speed of an average particle, $$\sigma_\text{NT} = c_\text{s, ave} \text{,}$$ at the median $T_\text{kin}$ measured in the targeted region (a rectangular box enclosing a potential structure; see the sketch after this list).

  2. If the contour forms a closed shape, and the shape contains a region with a condensed spatial distribution of the NH$_3$ (1, 1) peak main beam temperature, $T_\text{peak}$, and a constrained range of velocity centroids, $V_\text{LSR}$, then the contour is used to define the boundary of the structure. This means that the boundary is coincident with either or both of the following: a) a significant increase in $T_\text{peak}$, or b) a significant change in $V_\text{LSR}$.

  3. If the criteria in Step 2 can be met by manually editing (adding or removing pixels from) the mask by less than 10% of the area, then the edited mask is used as the boundary of the structure.

  4. If the criteria in Step 2 cannot be achieved by manually editing the mask, then a contour of constant $T_\text{peak}$ and/or a range of $V_\text{LSR}$ around the median velocity centroid in the original mask are used to define the structure. This situation often happens when the original contour is too spatially extended or has a hole near the center of the target region.

  5. Note that neither the Herschel column density map nor the Herschel dust temperature map is quantitatively used in defining the boundary. The Herschel maps are instead used to examine potential contamination from embedded sources, and to confirm that the boundary centers around a local rise in column density and a drop in dust temperature.

  6. Lastly, the size of the resulting structure has to be larger than the size of the GBT beam at 23 GHz (~32") in both the R.A. and the Dec. directions.
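
[For reference, the contour level in Step 1 follows directly from the constants defined in Section 0.1. The following is a minimal sketch; the variable box is a hypothetical pixel selection standing in for the rectangular region around a candidate structure.]

import numpy as np
import astropy.units as u
import astropy.constants as c

# Step 1: the expected NH3 linewidth when sigma_NT = c_s,ave, evaluated at the
# median Tkin inside a rectangular box around the candidate structure.
box = (slice(100, 140), slice(100, 140))  # hypothetical pixel ranges
Tkin_med = np.nanmedian(dict_data['L1688']['Tkin'][box])*u.K
cs_ave = np.sqrt(c.k_B*Tkin_med/mass['average'])  # sound speed of an average particle
sigmaNH3_sonic = np.sqrt(cs_ave**2. + c.k_B*Tkin_med/mass['NH3']).to(u.km/u.s)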

As a result, we include 12 structures in L1688 and 6 structures in B18. (See Figure 1 for an example of such a structure; see Figure 2 for all the structures in either region, plotted on top of several key property maps.) There could potentially be more structures that satisfy the criteria above, but based on the GAS maps, these structures are probably more embedded in the star forming clusters and could suffer from contamination along the line of sight. We include an "extra" structure in the L1688 region, which is isolated and has a clear velocity gradient, but does not have a subsonic linewidth. This "extra" structure is included for the purpose of comparison. Besides the "extra" structure, we number the structures following the guidelines recommended by the International Astronomical Union, in increasing R.A. order (from West to East).

Figure 1. Maps of the Herschel column density ($N_{\text{H}_2}$), the Herschel dust temperature ($T_\text{dust}$), the NH$_3$ peak main beam temperature ($T_\text{peak}$), the NH$_3$ velocity dispersion ($\sigma_{\text{NH}_3}$), the velocity centroids ($V_\text{LSR}$), and the predicted velocity centroid from a 2D linear fit to the observed velocities, of a target structure. The thick contour marks the boundary of the droplet in view, and the crosshair shows the centroid weighted by $T_\text{peak}$ that is used as the position of the structure in the following analyses. The thin red contours on the $\sigma_{\text{NH}_3}$ map mark where the NH$_3$ linewidth equals the linewidth expected from a line with a sonic non-thermal component (see Step 1 above). The star symbols denote the positions of Class 0/I and flat spectrum YSOs from the YSO catalogs (Dunham et al., 2015; Rebull et al., 2010). The GBT FWHM beam at 23 GHz is shown on the bottom left of each panel. [Change the code to view another structure. The first input is the region ("L1688" or "B18"), and the second input is the structure number (from 1 to 12 for L1688; from 1 to 6 for B18; L1688 also has an "extra" core that does not satisfy all the criteria, but is included for comparison). The static versions of the plots can be found in the GitHub ./plots/droplets/ folder.]

[The following code cell could generate a RuntimeWarning due to a conditional statement that compares the map of NH$_3$ linewidth, $\sigma_{\text{NH}_3}$, and the linewidth expected when the non-thermal component equals the sound speed of an average particle, $\sigma_\text{NT} = c_\text{s, ave}$. Such a comparison is invalid at pixels with NaN values, where the pixels are flagged. The plotting is wrapped in the plotDroplet function in Droplets.py; feel free to modify it to your own taste. The plots included in the paper are generated in the same way, but with annotations made outside this IPython notebook (in macOS Keynote).]


In [9]:
list_dictionaries = [dict_data, dict_masks, dict_YSOs, dict_Vlsr_predicted]
fig = plotDroplet('L1688', 6, list_dictionaries)  ## <<< Change the code in this line!


Droplets.py:454: RuntimeWarning: invalid value encountered in less
  axis.contour((list_images[i] < NT_sonic),
//anaconda/lib/python2.7/site-packages/matplotlib/colors.py:929: RuntimeWarning: invalid value encountered in less_equal
  mask |= resdat <= 0

Figure 2. Maps of the Herschel column density ($N_{\text{H}_2}$), the Herschel dust temperature ($T_\text{dust}$), the NH$_3$ peak main beam temperature ($T_\text{peak}$), the NH$_3$ kinetic temperature ($T_\text{kin}$), the velocity centroids ($V_\text{LSR}$), and the NH$_3$ velocity dispersion ($\sigma_{\text{NH}_3}$), of the entire region. The colored contours mark the boundaries of the droplets in the region. The star symbols denote the positions of Class 0/I and flat spectrum YSOs from the YSO catalogs (Dunham et al., 2015; Rebull et al., 2010). The GBT FWHM beam at 23 GHz is shown on the bottom left of each panel. User's choice: a box can be plotted to highlight the region shown in Figure 1. [Change the code to view either L1688 or B18 (the first input of the function "plotRegion"). Input a structure number to the variable "chooseStructure" to highlight the structure (from 1 to 12 for L1688; from 1 to 6 for B18; L1688 also has an "extra" core that does not satisfy all the criteria, but is included for comparison). The static versions of the plots can be found in the GitHub ./plots/regionMaps/ folder.]

[The code for plotting this figure is wrapped in the plotRegion function in Droplets.py.]


In [11]:
list_dictionaries = [dict_data, dict_masks, dict_YSOs, dict_Vlsr_predicted]
fig = plotRegion('L1688', list_dictionaries, chooseStructure = 6)  ## <<< Change the code in this line!


I.1 The Coherence of the Structures

Since Goodman et al. (1998), multiple attempts to observe the coherent cores have been made (for example, Caselli et al., 2002). As described in the previous paragraphs, one key property of such coherent cores is a sharp transition from supersonic velocity dispersion outside the core to subsonic velocity dispersion inside. In the first direct observations of a coherent core, in the B5 region in Perseus, Pineda et al. (2010) showed that the NH$_3$ velocity dispersion decreases with increasing peak antenna temperature around the coherent core (Figure 4 in Pineda et al., 2010), with the increasing peak temperature serving as a proxy for increasing density. Here we examine L1688 and B18 in similar plots of the NH$_3$ velocity dispersion, $\sigma_{\text{NH}_3}$, against the peak main beam temperature, $T_\text{peak}$, in Figure 3. We find that a similar trend exists for the L1688 and B18 regions. We also find that the target structures defined in Section I sit at the low velocity dispersion-high peak main beam temperature end of the trend, similar to the coherent core in B5 (Pineda et al., 2010).

Figure 3. $\sigma_{\text{NH}_3}$ as a function of $T_\text{peak}$, for both L1688 and B18. The colored dots are values measured at pixels inside the target structures, color coded to match the contours in Figure 2; the gray dots represent the rest of the pixels. The contours show the cumulative distributions, each of which encircles 25%, 50%, 75%, or 95% of the pixels. The horizontal lines represent the expected NH$_3$ linewidths ($\sigma_{\text{NH}_3}$) when the non-thermal component ($\sigma_\text{NT}$) is equal to the sonic and half-sonic speeds ($c_\text{s,ave}$) of average particles, at $T_\text{kin} = 10\ K$. [Change the xscale input to plot $T_\text{peak}$ on the logarithmic or the linear scale ('log' or 'linear'). The static versions of the plots can be found in the GitHub ./plots/TpeakSigma/ folder.]

[The code for plotting this figure is wrapped in the plotTpeakSigma function in Droplets.py.]


In [8]:
list_dictionaries = [dict_data, dict_masks, dict_YSOs, dict_Vlsr_predicted]
fig = plotTpeakSigma(list_dictionaries, xscale = 'log')  ## <<< Change the code in this line!


The radial profiles of the identified structures also demonstrate that these structures have subsonic velocity dispersions and show sharp transitions from supersonic to subsonic velocity dispersions at the edge, where measurements are available (limited by the detection of NH$_3$ (1, 1) line emission; Figure 4). Generally, the thermal component of the velocity dispersion is larger than the non-thermal component (Figure 5), again, where the measurements are available.

Friesen and Pineda et al. (2017) used the 3-$\sigma$ S/N level in the NH$_3$ (1, 1) line emission to mask out points with low detection levels/high uncertainties in the fitting of $\sigma_{\text{NH}_3}$, $V_\text{LSR}$, and $T_\text{ex}$, and additionally required the 3-$\sigma$ S/N level in the NH$_3$ (2, 2) emission to mask out points with high uncertainties in the fitting of $T_\text{kin}$ and $N_{\text{NH}_3}$. Since the calculation of $\sigma_\text{T}$ and $\sigma_\text{NT}$ requires both $\sigma_{\text{NH}_3}$ and $T_\text{kin}$, there are fewer pixels where we have confident measurements of $\sigma_\text{T}$ and $\sigma_\text{NT}$. This results in fewer solid dots inside the effective radius in Figure 5.

Figure 4. The radial profile of the NH$_3$ velocity dispersion, $\sigma_{\text{NH}_3}$, of each structure. The solid green dots represent $\sigma_{\text{NH}_3}$ measured at pixels inside each structure, and the transparent green band shows the 1-$\sigma$ distribution of $\sigma_{\text{NH}_3}$ measured at all pixels within each radius bin of width $\sim0.01$ pc. The horizontal dashed and dotted lines represent the expected $\sigma_{\text{NH}_3}$ when the non-thermal component equals the sonic and half-sonic speeds of average particles at $T_\text{kin} = 10\ K$, i.e. $\sigma_\text{NT}=c_\text{s,ave}$ and $\sigma_\text{NT}=0.5\ c_\text{s,ave}$. The vertical line shows the effective radius, $R_\text{eff}$, of each structure, and the transparent gray band shows the uncertainty of $R_\text{eff}$. Similar plots for the thermal and the non-thermal components ($\sigma_\text{T}$ and $\sigma_\text{NT}$) are shown in Figure 5. The structure numbering is shown in the upper right corner. [The plotSigma input can be changed to plot the thermal and the non-thermal components of the velocity dispersion, as shown in Figure 5. The static version of the plot can be found in the GitHub ./plots/Sigmas/ folder.]

[The following code cell could generate a RuntimeWarning, because the maps of velocity dispersions include pixels flagged for bad fits and thus having NaN values. The RuntimeWarning occurs when the distance bin contains only pixels with NaN values. The code for plotting this figure is wrapped in the plotSigmas function in Droplets.py.]


In [8]:
list_dictionaries = [dict_data, dict_masks, dict_YSOs, dict_Vlsr_predicted]
fig = plotSigmas(list_dictionaries, plotSigma = 'sigma', plotRfromA = False)## <<< See below for plotting components!


//anaconda/lib/python2.7/site-packages/numpy/lib/function_base.py:4011: RuntimeWarning: All-NaN slice encountered
  r = func(a, **kwargs)
//anaconda/lib/python2.7/site-packages/numpy/lib/nanfunctions.py:1427: RuntimeWarning: Degrees of freedom <= 0 for slice.
  keepdims=keepdims)

Figure 5. Similar to Figure 4, showing the profiles of the thermal and non-thermal components of the velocity dispersion ($\sigma_\text{T}$ and $\sigma_\text{NT}$) of each structure. The solid red dots are $\sigma_\text{T}$ measured at pixels inside each structure, and the solid blue dots, $\sigma_\text{NT}$. The transparent red and blue bands show the 1-$\sigma$ distributions of $\sigma_\text{T}$ and $\sigma_\text{NT}$ measured at all pixels within each radius bin of width $\sim0.01$ pc. The horizontal dashed and dotted lines show the sonic and half-sonic speeds of average particles at $T_\text{kin}=10\ K$. (The sonic speeds are directly comparable to the thermal velocity dispersion, so they are plotted instead of the expected $\sigma_{\text{NH}_3}$.) Similar to Figure 4, the vertical line shows the effective radius of each structure. The structure numbering is shown in the upper right corner. [The plotSigma input can be changed to plot the NH$_3$ velocity dispersion, as shown in Figure 4. The static version of the plot can be found in the GitHub ./plots/Sigmas/ folder.]

[The following code cell could generate a RuntimeWarning, because the maps of velocity dispersions include pixels flagged for bad fits and thus having NaN values. The RuntimeWarning occurs when the distance bin contains only pixels with NaN values. The code for plotting this figure is wrapped in the plotSigmas function in Droplets.py.]


In [9]:
list_dictionaries = [dict_data, dict_masks, dict_YSOs, dict_Vlsr_predicted]
fig = plotSigmas(list_dictionaries, plotSigma = 'components', plotRfromA = False)## <<< See above for plotting sigma!


II. Physical Properties of Droplets

With the property maps from the GAS DR1 (Friesen and Pineda et al., 2017; the original data are in the designated dataverse and the maps used in the analyses below are in the GitHub ./data/GAS_DR1/ folder), we can derive basic physical properties including the mass, the size, and the velocity dispersion of each structure (for the definition of structures, see Section I). We can then analyze the velocity features, including the gradient and the dispersion, and compare them to those of the structures presented by Goodman et al. (1993).

II.1 Basic Properties (Mass, Size, and Velocity Dispersion)

Before analyzing the velocity features of the structures defined using data from the GAS DR1 (Friesen and Pineda et al., 2017), we compare their basic physical properties, including the mass, size, and velocity dispersion, to those of the structures presented by Goodman et al. (1993).

The mass ($M$), the size (characterized by $R_\text{eff}$), and the velocity dispersion ($\sigma_\text{tot}$) are often related to each other through the so-called Larson's Law (Larson, 1981). Even though there are later updates on the relationship between the mass, the size, and the velocity dispersion (refs xx), as well as observations that show deviations from the original laws (refs xx), we use it as a reference point in our comparison of these physical properties between the structures presented in this work and by Goodman et al. (1993). We do not intend to provide a quantitative update on Larson's Law. The following analyses present characteristic distributions of the droplets in the parameter spaces, and through these analyses, we confirm that the droplets sit at the lowest end of a continuous relationship between the mass, the size, and the velocity dispersion at larger scales.

Similar to Table 1 in Goodman et al. (1993), we derive a catalog of the target structures in this paper with their basic properties, including: the distance ($D$), the celestial coordinates (R.A. and Dec., in the FK5 J2000.0 frame), the velocity centroid in the LSR frame ($V_\text{LSR}$), the kinetic temperature ($T_\text{kin}$), the NH$_3$ velocity dispersion ($\sigma_{\text{NH}_3}$), the total velocity dispersion ($\sigma_\text{tot}$) and its thermal and non-thermal components ($\sigma_\text{T}$ and $\sigma_\text{NT}$), the mass ($M$), the FWHMs along the major and minor axes ($r_\text{major}$ and $r_\text{minor}$), the effective radius ($R=\sqrt{r_\text{major}\times r_\text{minor}}$), the shape positional angle ($PA_\text{shape}$), and the axis ratio ($AR$). We also estimate the uncertainty of each property. The uncertainties of properties that are directly derived from the property maps in the GAS DR1 are estimated from the corresponding error maps. The uncertainties of properties calculated from the directly derived properties are estimated through error propagation, assuming no correlation among the properties.

Details of how to derive each property are given below:

  1. Distance ($D$): For L1688 in Ophiuchus, we adopt a distance of 137.3±6 pc to the Ophiuchus cluster (conventionally named "$\rho$ Oph"), as measured using the parallax of stellar systems with Very Long Baseline Array (VLBA) observations by Ortiz-León et al. (2017). For B18 in Taurus, we adopt a distance of 135±20 pc to the molecular cloud in Taurus, as measured using Pan-STARRS1 photometry by Schlafly et al. (2014). These are also the distances used by Friesen and Pineda et al. (2017) for the GAS DR1.

  2. Celestial Coordinates (R.A. and Dec.): The position of each target structure corresponds to the centroid weighted by the peak NH$_3$ main beam temperature ($T_\text{peak}$). The coordinates are in the FK5 frame, with the equinox set to J2000.0. See Figure 1 for where the centroid sits with respect to the boundary.

  3. LSR Velocity ($V_\text{LSR}$): The LSR velocity of each structure is estimated by calculating the median of the velocity centroids at pixels within the boundary.

  4. Kinetic Temperature ($T_\text{kin}$): The kinetic temperature is the median of $T_\text{kin}$ at pixels within the boundary. Notice that to be able to fit for $T_\text{kin}$, there has to be substantial detection of both NH$_3$ (1, 1) and (2, 2) line emission. In the GAS DR1, both the (1, 1) and (2, 2) lines have to have signal-to-noise ratios of ≥3 to be included in the resulting property map of the kinetic temperature.

  5. NH$_3$ Velocity Dispersion ($\sigma_{\text{NH}_3}$): The NH$_3$ velocity dispersion is the median of $\sigma$ in the fitted Gaussian profiles for pixels within the boundary.

  6. Total Velocity Dispersion ($\sigma_\text{tot}$): With the kinetic temperature and the NH$_3$ velocity dispersion, we can derive the total velocity dispersion (Eq. 1). Here we calculate $\sigma_\text{tot}$ from the measured (median) $T_\text{kin}$ and $\sigma_{\text{NH}_3}$, instead of taking the median value on the map of total velocity dispersion derived in Section 0.1.

  7. Thermal Component of Velocity Dispersion ($\sigma_\text{T}$): We can also calculate the thermal component of the velocity dispersion from $T_\text{kin}$ (Eq. 2). We calculate $\sigma_\text{T}$ from the measured $T_\text{kin}$, instead of taking the median value on the map of the velocity dispersion thermal component derived in Section 0.1. This guarantees that the squared sum of the thermal and the non-thermal velocity dispersion components equals the squared total velocity dispersion for each target structure (see Eq. 4).

  8. Non-thermal Component of Velocity Dispersion ($\sigma_\text{NT}$): Similarly, we can calculate the non-thermal component of the velocity dispersion from the kinetic temperature and the NH$_3$ velocity dispersion (Eq. 3). Here we calculate $\sigma_\text{NT}$ from the measured (median) $T_\text{kin}$ and $\sigma_{\text{NH}_3}$, instead of taking the median value on the map of the non-thermal component derived in Section 0.1. This guarantees that the squared sum of the thermal and the non-thermal velocity dispersion components equals the squared total velocity dispersion for each target structure (see Eq. 4).

  9. Mass ($M$): The mass of each target structure is derived from the Herschel column density map (see Section 0.1).

  10. FWHMs along the major and the minor axes ($r_\text{major}$ and $r_\text{minor}$): We first determine the major axis by finding the spatial direction that explains most of the spatial distribution of each target structure. The calculation is weighted by the peak NH$_3$ main beam temperature, which is characteristic of the internal gas density structure. The minor axis is then the spatial direction perpendicular to the major axis. We then calculate the 2nd moments of the spatial distribution projected onto the major and the minor axes. The FWHMs are then estimated through conversion with a constant factor, $\text{FWHM} = 2\sqrt{2 \ln{2}} \times \sigma$, where $\sigma$ is the 2nd moment along the major or the minor axis. (See the sketch after this list.)

  11. Effective Radius ($R_\text{eff}$): The effective radius is the geometric mean of the FWHMs along the major and the minor axes. See Eq. 5.

  12. Shape Positional Angle ($\text{PA}_\text{shape}$): The shape positional angle is simply the position angle of the major axis derived above.

  13. Aspect Ratio ($\text{AR}$): The aspect ratio is the ratio between the FWHMs along the major and the minor axes. That is, $\text{AR}=r_\text{major}/r_\text{minor}$.
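
[The $T_\text{peak}$-weighted moments in Item 10 are wrapped in statBasic2D (in stat_tools.py). For reference, the principal axes could be recovered along the following lines; this is a sketch for illustration, not the repo implementation.]

import numpy as np

def weighted_axes(weights, x, y):
    # FWHMs along the principal axes of a weighted spatial distribution,
    # from the eigen-decomposition of the weighted covariance of the
    # pixel coordinates.
    w = weights/np.nansum(weights)
    mx, my = np.nansum(w*x), np.nansum(w*y)
    cxx = np.nansum(w*(x-mx)**2.)
    cyy = np.nansum(w*(y-my)**2.)
    cxy = np.nansum(w*(x-mx)*(y-my))
    vals, vecs = np.linalg.eigh(np.array([[cxx, cxy], [cxy, cyy]]))  # ascending eigenvalues
    fwhm = 2.*np.sqrt(2.*np.log(2.))  # sigma-to-FWHM conversion
    # major-axis FWHM, minor-axis FWHM, and the major-axis direction
    return fwhm*np.sqrt(vals[1]), fwhm*np.sqrt(vals[0]), vecs[:, 1]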


In [8]:
dict_table1 = defaultdict(list)
list_columns = ['ID', 'Distance', 'eDistance', 'RA', 'Dec',
                'Vlsr', 'eVlsr', 'Tkin', 'eTkin', 'Sigma', 'eSigma',
                'SigmaTot', 'eSigmaTot', 'SigmaT', 'eSigmaT', 'SigmaNT', 'eSigmaNT',
                'M', 'eM', 'major', 'emajor', 'minor', 'eminor', 'Reff', 'eReff',
                'PA', 'AspectRatio', 'eAspectRatio', 'nYSOin', 'nYSOnearby']


ReffComp = []
for reg in ['L1688', 'B18']:
    
    if reg == 'L1688':
        listStructures = list(range(1, 13))+['extra']
    elif reg == 'B18':
        listStructures = range(1, 7)
    
    
    # header
    header = dict_data[reg]['header_GAS']
    ## scales with astropy units
    distance = distances[reg]
    edistance = distances[reg+'_e']
    pixscale = np.radians(abs(header['CDELT1']))*distance
    epixscale = np.radians(abs(header['CDELT1']))*edistance
    # property maps
    ## Tpeak and statistics
    mapTpeak = dict_data[reg]['Tpeak']
    ## Vlsr
    mapVlsr = dict_data[reg]['Vlsr']
    mapVlsr_e = dict_data[reg]['eVlsr']
    ## Tkin
    mapTkin = dict_data[reg]['Tkin']
    mapTkin_e = dict_data[reg]['eTkin']
    ## SigmaNH3
    mapSigma = dict_data[reg]['Sigma']
    mapSigma_e = dict_data[reg]['eSigma']
    ## column density
    mapColden = dict_data[reg]['colden']
    mapColden_e = dict_data[reg]['ecolden']
    
        
    for structure in listStructures:
        
        # Prepare the property maps
        ## mask
        mask = dict_masks[reg][structure]
        ## statistics
        meshx, meshy = np.meshgrid(range(mapTpeak.shape[1]), range(mapTpeak.shape[0]))
        stat = statBasic2D(mapTpeak[mask], (meshy[mask], meshx[mask]))
        stat.calculate()
        
        # structure ID
        ID = reg+'_'+str(structure)
        dict_table1['ID'].append(ID)
        
        # distance (D) in pc
        D = distance.to(u.pc).value
        eD = edistance.to(u.pc).value
        dict_table1['Distance'].append(D)
        dict_table1['eDistance'].append(eD)
        
        # RA & Dec
        ceny, cenx = stat.mom1
        RA, Dec = wcs.WCS(header).wcs_pix2world([[cenx, ceny]], 0)[0]
        dict_table1['RA'].append(RA)
        dict_table1['Dec'].append(Dec)
        
        # Vlsr in km/s
        Vlsr = np.nanmedian(mapVlsr[mask])
        eVlsr = np.nanmedian(mapVlsr_e[mask])
        dict_table1['Vlsr'].append(Vlsr)
        dict_table1['eVlsr'].append(eVlsr)
        
        # Tkin in K
        Tkin = np.nanmedian(mapTkin[mask])
        eTkin = np.nanmedian(mapTkin_e[mask])
        dict_table1['Tkin'].append(Tkin)
        dict_table1['eTkin'].append(eTkin)
        
        # SigmaNH3 in km/s
        Sigma = np.nanmedian(mapSigma[mask])
        eSigma = np.nanmedian(mapSigma_e[mask])
        dict_table1['Sigma'].append(Sigma)
        dict_table1['eSigma'].append(eSigma)
        
        # Sigmas
        ## SigmaT in km/s
        SigmaT = np.sqrt(c.k_B*Tkin*u.K/mass['average'])
        eSigmaT = .5*np.sqrt(2.37*u.u/(c.k_B*Tkin*u.K))*(c.k_B/mass['average'])*eTkin*u.K
        dict_table1['SigmaT'].append(SigmaT.to(u.km/u.s).value)
        dict_table1['eSigmaT'].append(eSigmaT.to(u.km/u.s).value)
        ## SigmaNT in km/s
        SigmaNT = np.sqrt((Sigma*u.km/u.s)**2.-c.k_B*Tkin*u.K/mass['NH3'])
        eSigmaNT = (.5*1./np.sqrt((Sigma*u.km/u.s)**2.-c.k_B*Tkin*u.K/mass['NH3']))**2.\
                   *((2.*Sigma*u.km/u.s*eSigma*u.km/u.s)**2.+(-c.k_B/mass['NH3']*eTkin*u.K)**2.)
        eSigmaNT = np.sqrt(eSigmaNT)
        dict_table1['SigmaNT'].append(SigmaNT.to(u.km/u.s).value)
        dict_table1['eSigmaNT'].append(eSigmaNT.to(u.km/u.s).value)
        ## SigmaTot in km/s
        SigmaTot = np.sqrt(SigmaT**2.+SigmaNT**2.)
        eSigmaTot = (.5*1./np.sqrt(SigmaT**2.+SigmaNT**2.))**2.\
                    *((2.*SigmaT*eSigmaT)**2.+(2.*SigmaNT*eSigmaNT)**2.)
        eSigmaTot = np.sqrt(eSigmaTot)
        dict_table1['SigmaTot'].append(SigmaTot.to(u.km/u.s).value)
        dict_table1['eSigmaTot'].append(eSigmaTot.to(u.km/u.s).value)
        
        # mass in Msun
        M = np.nansum((mapColden[mask]-np.nanmin(mapColden[mask]))*u.cm**-2.*mass['average']*pixscale**2.)
        eM = np.nansum((pixscale**2.*mapColden_e[mask]*u.cm**-2.*mass['average'])**2.
                       +((mapColden[mask]-np.nanmin(mapColden[mask]))*u.cm**-2.*mass['average']
                         *(2.*pixscale*epixscale))**2.
                       +(pixscale**2.
                         *mapColden_e[mask][np.nanargmin(mapColden[mask])]*u.cm**-2.*mass['average'])**2.)
        eM = np.sqrt(eM)
        dict_table1['M'].append(M.to(u.Msun).value)
        dict_table1['eM'].append(eM.to(u.Msun).value)
        
        # FWHMs
        ## major axis in pc
        major = pixscale*stat.major_sigma.value*(2.*np.sqrt(2.*np.log(2.)))
        emajor = epixscale*stat.major_sigma.value*(2.*np.sqrt(2.*np.log(2.)))
        dict_table1['major'].append(major.to(u.pc).value)
        dict_table1['emajor'].append(emajor.to(u.pc).value)
        ## minor axis in pc
        minor = pixscale*stat.minor_sigma.value*(2.*np.sqrt(2.*np.log(2.)))
        eminor = epixscale*stat.minor_sigma.value*(2.*np.sqrt(2.*np.log(2.)))
        dict_table1['minor'].append(minor.to(u.pc).value)
        dict_table1['eminor'].append(eminor.to(u.pc).value)
        
        # Reff in pc
        #Reff = np.sqrt(major*minor)
        Reff = stat.radius.value*pixscale*(2.*np.sqrt(2.*np.log(2.)))
        statComp = statBasic2D(mask.astype(float)[mask], (meshy[mask], meshx[mask]))
        statComp.calculate()
        ReffComp.append((statComp.radius.value*pixscale*(2.*np.sqrt(2.*np.log(2.)))).to(u.pc).value)
        
        eReff = np.sqrt((.5*(1./np.sqrt(major*minor))*major*eminor)**2.
                        +(.5*(1./np.sqrt(major*minor))*minor*emajor)**2.)
        
        dict_table1['Reff'].append(Reff.to(u.pc).value)
        dict_table1['eReff'].append(eReff.to(u.pc).value)
        
        # PA shape in degrees, E of N; -90 to 90
        PA = stat.position_angle.value
        if (PA >= 0.) and (PA <= 180.):
            PA = PA - 90.
        elif (PA >= -180.) and (PA < 0.):
            PA = PA + 90.
        dict_table1['PA'].append(PA)
            
        # AR (aspect ratio), dimensionless
        AR = (major/minor).decompose().value
        eAR = np.sqrt((emajor/minor)**2. + (major*eminor/minor**2.)**2.)
        eAR = eAR.decompose().value
        dict_table1['AspectRatio'].append(AR)
        dict_table1['eAspectRatio'].append(eAR)
        
        # other properties
        ## YSOs
        YSOin = [dict_masks[reg][structure][YSO[1], YSO[0]]\
                 if ((YSO[0] in range(dict_masks[reg][structure].shape[1]))\
                 and (YSO[1] in range(dict_masks[reg][structure].shape[0])))\
                 else False for YSO in np.around(dict_YSOs[reg]).astype(int)]
        YSOin = np.array(YSOin)
        ## YSOs within 2 Reff of the center but not inside the boundary
        YSOnearby = [np.hypot(cenx-YSO[0], ceny-YSO[1])*pixscale < (2.*Reff)\
                     for YSO in dict_YSOs[reg]]
        YSOnearby = np.array(YSOnearby)
        YSOnearby = (YSOnearby & ~YSOin)
        dict_table1['nYSOin'].append(sum(YSOin))
        dict_table1['nYSOnearby'].append(sum(YSOnearby))


dict_table1 = pd.DataFrame(dict_table1,
                           columns = list_columns)
dict_tables['table1'] = dict_table1

We then fit the velocity distribution within the boundary of each structure to a 2D 1st-order polynomial (a plane function), the same functional form fitted by Goodman et al. (1993). That is, $$V_\text{LSR,fit}=c_xx+c_yy+c_0\ \text{,}\ \ \ \ \ \text{[6]}$$ where $x$ and $y$ are two perpendicular axes on the plane of the sky. The velocity gradient is then the 2D vector of the two 1st-order coefficients, $(c_x, c_y)$, pointing from low to high values of $V_\text{LSR}$ (i.e., from the more blue-shifted end to the more red-shifted end). Here we use R.A. and Dec. as the two axes, but the magnitude of the fitted gradient is invariant under rotation of the axes.
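[Below is a minimal sketch of the plane fit in Eq. 6, for illustration only: the actual fit is performed by statGrad, defined in the repo scripts, whose internals may differ. The astropy.modeling calls follow the documented API (weights are inverse standard deviations, which realizes the inverse-variance weighting described in the text); the axis/sign conventions for the position angle are assumptions.]

from astropy.modeling import models, fitting

def fitVlsrPlane(mapVlsr, mapVlsr_e, mask):
    # Pixel coordinates; only points inside the boundary mask are fitted.
    meshy, meshx = np.indices(mapVlsr.shape)
    model0 = models.Polynomial2D(degree = 1)   # Vlsr = c0 + cx*x + cy*y
    fitter = fitting.LevMarLSQFitter()
    fitted = fitter(model0,
                    meshx[mask], meshy[mask], mapVlsr[mask],
                    weights = 1./mapVlsr_e[mask])
    cx, cy = fitted.c1_0.value, fitted.c0_1.value
    Grad = np.hypot(cx, cy)                    # in km/s per pixel
    PAGrad = np.degrees(np.arctan2(cx, cy))    # position angle, E of N
    return Grad, PAGrad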

From the velocity gradient ($\mathcal{G}$ as the magnitude and $\text{PA}_\mathcal{G}$ as the direction on the plane of the sky), we derive the related properties included in Table 2 of Goodman et al. (1993): the total difference in velocity ($\mathcal{G}\times R$), the ratio between the rotational and the gravitational energies ($\beta$), and the specific angular momentum ($J/M$). The details of how each property is calculated are given below; a short numerical example follows the list.

  1. Velocity Gradient ($\mathcal{G}$ as the magnitude and $\text{PA}_\mathcal{G}$ as the position angle indicating the direction): We fit $V_\text{LSR}$ within the boundary of each structure to a 2D 1st-order polynomial (Eq. 6), weighted by the inverse of the squared uncertainty in $V_\text{LSR}$. We then calculate the magnitude and the direction of the resulting velocity gradient vector, $(c_x, c_y)$. The magnitude is in units of km s$^{-1}$ pc$^{-1}$, and the direction is presented as a position angle in degrees, increasing from zero eastward of North.

  2. Total Difference in Velocity ($\mathcal{G}\times R$): This is simply the velocity gradient magnitude multiplied by the effective radius. It equals the true total velocity difference only in the ideal case where the velocity field is a single linear gradient and the core is spherical.

  3. The Ratio between the Rotational and the Gravitational Energy ($\beta$): Following Eq. 5 in Goodman et al. (1993), $$\beta = \frac{1}{2}\frac{p}{q}\frac{\omega^2R^3}{GM}\ \text{,}$$ where $p$ and $q$ are the shape parameters that relate the internal rotational and density structures to the bulk properties. Goodman et al. (1993) defined $p$ and $q$ by $I = pMR^2$ and $\left|E_\text{grav}\right| = qGM^2/R$. The angular velocity, $\omega$, is assumed to relate to the observed velocity gradient through an inclination angle, $i$, by $\omega=\mathcal{G}/\sin{i}$, assuming that the observed velocity gradient can be fully attributed to rotation. Goodman et al. (1993) assumed a constant density and solid-body rotation, corresponding to $p = 2/5$ and $q = 3/5$, and also assumed $\sin{i} = 1$. Note that attributing the observed velocity gradient entirely to rotational motion is a strong assumption, and can introduce biases wherever there is core-scale turbulence or internal motions such as outflows.

    Even though $\beta$ calculated in this work is directly compared to values presented by Goodman et al. (1993), the mass is estimated in a critically different way. The mass in Goodman et al. (1993) was taken from Benson & Myers (1989), whose NH$_3$ line-fitting model gives an estimate of the total number density; from that number density and the effective radius, Benson & Myers (1989) calculated the mass. In the GAS DR1, however, the model used in the NH$_3$ line fitting gives the ammonia column density instead. Rather than risk introducing more biases by assuming an abundance, we calculate the mass from the Herschel column density map, essentially following the "clipping" scheme of Rosolowsky et al. (2008), independently of the NH$_3$ observations and fitting.

  4. Specific Angular Momentum ($J/M$): We can also calculate the specific angular momentum, $J/M = pR^2\omega$, where $p$ is again the coefficient that correlates the moment of inertia, which depends on the internal rotational profile, with the bulk measurements of mass and radius, $I = pMR^2$. Again, the angular velocity, $\omega$, is assumed to relate to the velocity gradient by $\omega = \mathcal{G}/\sin{i}$, and again, equating the observed velocity gradient with the intrinsic angular velocity is a strong assumption.

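[A worked example of items 3 and 4 with round, purely illustrative numbers (not a measured droplet), taking $p = 2/5$, $q = 3/5$, and $\sin{i} = 1$, so that $\frac{1}{2}\frac{p}{q} = \frac{1}{3}$:]

Grad0 = 1.*u.km/u.s/u.pc
Reff0 = .05*u.pc
M0 = 1.*u.Msun

beta0 = (1./(3.*c.G))*(Grad0**2.*Reff0**3./M0)
print beta0.decompose()            # dimensionless; ~0.01 for these numbers

JoverM0 = (2./5.)*Reff0**2.*Grad0
print JoverM0.to(u.km/u.s*u.pc)    # = 0.001 km/s pc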

In [9]:
dict_table2 = defaultdict(list)
list_columns = ['ID', 'Gradient', 'eGradient', 'Direction', 'eDirection',
                'GradientR', 'eGradientR', 'beta', 'ebeta', 'JoverM', 'eJoverM',
                'QA']

for reg in ['L1688', 'B18']:
    
    if reg == 'L1688':
        listStructures = list(range(1, 13))+['extra']
    elif reg == 'B18':
        listStructures = range(1, 7)
        
    # header
    header = dict_data[reg]['header_GAS']
    ## scales with astropy units
    distance = distances[reg]
    edistance = distances[reg+'_e']
    pixscale = np.radians(abs(header['CDELT1']))*distance
    epixscale = np.radians(abs(header['CDELT1']))*edistance
    # property maps
    ## Vlsr
    mapVlsr = dict_data[reg]['Vlsr']
    mapVlsr_e = dict_data[reg]['eVlsr']
    ## Tkin
    mapTkin = dict_data[reg]['Tkin']
    mapTkin_e = dict_data[reg]['eTkin']
    ## SigmaNH3
    mapSigma = dict_data[reg]['Sigma']
    mapSigma_e = dict_data[reg]['eSigma']
    ## column density
    mapColden = dict_data[reg]['colden']
    mapColden_e = dict_data[reg]['ecolden']
    
    
    for structure in listStructures:
        
        # structure ID
        ID = reg+'_'+str(structure)
        dict_table2['ID'].append(ID)
        
        # Prepare the property maps
        ## mask
        mask = dict_masks[reg][structure]
        ## statistics
        stat = statGrad(dict_data, dict_masks, reg, structure)
        ## properties from table 1
        ### Reff
        Reff = dict_table1[dict_table1['ID'] == ID]['Reff'].values[0]
        eReff = dict_table1[dict_table1['ID'] == ID]['eReff'].values[0]
        ### Mass
        M = dict_table1[dict_table1['ID'] == ID]['M'].values[0]
        eM = dict_table1[dict_table1['ID'] == ID]['eM'].values[0]
        
        
        # Gradients
        ## magnitude in km/s/pc
        Gradient = (stat.Grad*u.km/u.s/pixscale).to(u.km/u.s/u.pc).value
        eGradient = (stat.eGrad*u.km/u.s/pixscale)**2.\
                    +(stat.Grad*u.km/u.s*epixscale/pixscale**2.)**2.
        eGradient = np.sqrt(eGradient).to(u.km/u.s/u.pc).value
        dict_table2['Gradient'].append(Gradient)
        dict_table2['eGradient'].append(eGradient)
        ## PA in degrees
        Direction = stat.PAGrad
        eDirection = stat.ePAGrad
        dict_table2['Direction'].append(Direction)
        dict_table2['eDirection'].append(eDirection)
        
        # difference in velocity in km/s
        GradientR = (Gradient*u.km/u.s/u.pc*Reff*u.pc).to(u.km/u.s).value
        eGradientR = (eGradient*u.km/u.s/u.pc*Reff*u.pc)**2.\
                     +(Gradient*u.km/u.s/u.pc*eReff*u.pc)**2.
        eGradientR = np.sqrt(eGradientR).to(u.km/u.s).value
        dict_table2['GradientR'].append(GradientR)
        dict_table2['eGradientR'].append(eGradientR)
        
        # beta (dimensionless; ratio of rotational to gravitational energy)
        beta = (1./(3.*c.G))*((Gradient*u.km/u.s/u.pc)**2.*(Reff*u.pc)**3./(M*u.Msun))
        beta = beta.decompose().value
        ebeta = ((2./(3.*c.G))*((Reff*u.pc)**3.*Gradient*u.km/u.s/u.pc/(M*u.Msun))*eGradient*u.km/u.s/u.pc)**2.\
                +((1./c.G)*(Gradient*u.km/u.s/u.pc)**2.*(Reff*u.pc)**2./(M*u.Msun)*eReff*u.pc)**2.\
                +((1./(3.*c.G))*(Gradient*u.km/u.s/u.pc)**2.*(Reff*u.pc)**3./(M*u.Msun)**2.*eM*u.Msun)**2.
        ebeta = np.sqrt(ebeta).decompose().value
        dict_table2['beta'].append(beta)
        dict_table2['ebeta'].append(ebeta)
        
        # J/M in km/s * pc
        JoverM = (2./5.)*(Reff*u.pc)**2.*(Gradient*u.km/u.s/u.pc)
        JoverM = JoverM.to(u.km/u.s*u.pc).value
        eJoverM = ((4./5.)*(Reff*u.pc)*(Gradient*u.km/u.s/u.pc)*(eReff*u.pc))**2.\
                  +((2./5.)*(Reff*u.pc)**2.*(eGradient*u.km/u.s/u.pc))**2.
        eJoverM = np.sqrt(eJoverM).to(u.km/u.s*u.pc).value
        dict_table2['JoverM'].append(JoverM)
        dict_table2['eJoverM'].append(eJoverM)
        
# Assessment of the "quality" of the linear fit, entered by hand;
# the order follows the loop above: 13 L1688 structures, then 6 B18.
dict_table2['QA'] = ['good', 'bad', 'bad', 'good', 'good',
                     'good', 'good', 'good', 'bad', 'good',
                     'good', 'good', 'bad',
                     'bad', 'good', 'good', 'good', 'bad',
                     'good']
dict_table2 = pd.DataFrame(dict_table2,
                           columns = list_columns)
dict_tables['table2'] = dict_table2


WARNING: Model is linear in parameters; consider using linear fitting methods. [astropy.modeling.fitting]
WARNING:astropy:Model is linear in parameters; consider using linear fitting methods.
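
[The warnings above are emitted by astropy.modeling because the plane in Eq. 6 is linear in its parameters, so the iterative Levenberg-Marquardt fitter is not strictly necessary; a linear least-squares fitter would be expected to reach the same solution. The warnings are harmless.]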

In [10]:
# the beam size is taken as 3 pixels at the map pixel scale,
# converted to pc at the distance of L1688
beamsize = 3.*np.radians(abs(dict_data['L1688']['header_GAS']['CDELT1']))*distances['L1688']
beamsize = beamsize.to(u.pc).value

# reference dispersions at a fiducial gas temperature of 10 K
Tkin0 = 10.*u.K

# observed NH3 line width when the non-thermal component is sonic/half-sonic
Sigma_Sonic = np.sqrt(c.k_B*Tkin0/mass['average']+c.k_B*Tkin0/mass['NH3']).to(u.km/u.s).value
Sigma_halfSonic = np.sqrt(.5**2.*c.k_B*Tkin0/mass['average']+c.k_B*Tkin0/mass['NH3']).to(u.km/u.s).value

# total (thermal + non-thermal) dispersion of the mean particle in the same cases
SigmaTot_Sonic = np.sqrt(2.*c.k_B*Tkin0/mass['average']).to(u.km/u.s).value
SigmaTot_halfSonic = np.sqrt((1.+.5**2.)*c.k_B*Tkin0/mass['average']).to(u.km/u.s).value

# the sonic and half-sonic speeds themselves
Sonic = np.sqrt(c.k_B*Tkin0/mass['average']).to(u.km/u.s).value
halfSonic = np.sqrt(.5**2.*c.k_B*Tkin0/mass['average']).to(u.km/u.s).value
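
[A quick numerical check of the reference values above, assuming mass['average'] ≈ 2.37 u and mass['NH3'] ≈ 17 u:]

print Sonic, halfSonic               # ~0.19 and ~0.09 km/s at 10 K
print Sigma_Sonic, SigmaTot_Sonic    # ~0.20 and ~0.26 km/s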

In [256]:
fig = plotProperties('AspectRatio', 'Gradient', 'log-log', annotate = True)
#plt.savefig('/Users/hopechen/Desktop/test1.png')



In [156]:
dict_tables['table2'].keys()


Out[156]:
Index([u'ID', u'Gradient', u'eGradient', u'Direction', u'eDirection',
       u'GradientR', u'eGradientR', u'beta', u'ebeta', u'JoverM', u'eJoverM',
       u'QA'],
      dtype='object')

In [64]:
feat = 'SigmaTot'
tb = 'table1' if feat in dict_tables['table1'].keys() else 'table2'

print dict_tables['Goodman93_'+tb][feat].min(), dict_tables['Goodman93_'+tb][feat].max()
print dict_tables[tb][feat].min(), dict_tables[tb][feat].max()


0.186850796063 0.445893945151
0.191244338924 0.411401703317

In [203]:
from matplotlib import ticker

In [238]:
# Display names, math symbols, and units for each table column, collected in
# dictionaries so that the three lookups below stay in sync.  (Raw strings
# keep the backslashes intact in the mathtext labels.)
dict_names = {'Gradient': 'Gradient',
              'Direction': 'Gradient Direction',
              'GradientR': r'Gradient $\times$ Radius',
              'beta': 'Rotational to Grav. Energy Ratio',
              'JoverM': 'Specific Angular Momentum',
              'Distance': 'Distance',
              'Vlsr': 'Velocity',
              'SigmaTot': 'Total Velocity Dispersion',
              'M': 'Mass',
              'major': 'Major Axis',
              'minor': 'Minor Axis',
              'Reff': 'Effective Radius',
              'PA': 'Shape Position Angle',
              'AspectRatio': 'Aspect Ratio',
              'Tkin': 'Kinetic Temperature',
              'Sigma': 'NH$_3$ Velocity Dispersion',
              'SigmaT': 'Vel. Dispersion Thermal Comp.',
              'SigmaNT': 'Vel. Dispersion Non-thermal Comp.'}

dict_mathnames = {'Gradient': r'$\mathcal{G}$',
                  'Direction': r'PA$_\mathcal{G}$',
                  'GradientR': r'$\mathcal{G}\times R_{eff}$',
                  'beta': r'$\beta$',
                  'JoverM': '$J/M$',
                  'Distance': '$D$',
                  'Vlsr': '$V_{LSR}$',
                  'SigmaTot': r'$\sigma_{tot}$',
                  'M': '$M$',
                  'major': '$r_{major}$',
                  'minor': '$r_{minor}$',
                  'Reff': '$R_{eff}$',
                  'PA': 'PA$_{shape}$',
                  'AspectRatio': 'AR',
                  'Tkin': '$T_{kin}$',
                  'Sigma': r'$\sigma_{{NH}_3}$',
                  'SigmaT': r'$\sigma_T$',
                  'SigmaNT': r'$\sigma_{NT}$'}

dict_units = {'Gradient': '[km s$^{-1}$ pc$^{-1}$]',
              'Direction': '[deg.]',
              'GradientR': '[km s$^{-1}$]',
              'beta': '',
              'JoverM': '[km s$^{-1}$ pc]',
              'Distance': '[pc]',
              'Vlsr': '[km s$^{-1}$]',
              'SigmaTot': '[km s$^{-1}$]',
              'M': r'[M$_\odot$]',
              'major': '[pc]',
              'minor': '[pc]',
              'Reff': '[pc]',
              'PA': '[deg.]',
              'AspectRatio': '',
              'Tkin': '[K]',
              'Sigma': '[km s$^{-1}$]',
              'SigmaT': '[km s$^{-1}$]',
              'SigmaNT': '[km s$^{-1}$]'}

def getName(columnName):
    return dict_names.get(columnName)

def getMathName(columnName):
    return dict_mathnames.get(columnName)

def getUnitName(columnName):
    return dict_units.get(columnName)


def plotProperties(feature1, feature2, scales, annotate = False):
    fig = plt.figure(figsize = (14., 14.))
    ax = fig.gca()
    fig.subplots_adjust(left = .09, right = .97, bottom = .08, top = .96)

    features = [feature1, feature2]
    tables = ['table1' if feature in dict_tables['table1'].keys() else 'table2' for feature in features]
    scales = scales.split('-')
    # Set axes.
    ## x-axis
    iaxis = 0
    ax.set_xscale(scales[iaxis])
    if features[iaxis] in ['Reff', 'major', 'minor']:

        fig.subplots_adjust(bottom = .07, top = .93)

        if scales[iaxis] == 'linear':
            ax.set_xlim(8e-3, .95)
        elif scales[iaxis] == 'log':
            ax.set_xlim(8e-3, .95)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
    elif features[iaxis] in ['SigmaTot']:

        if scales[iaxis] == 'linear':
            ax.set_xlim(9.5e-2, .65)
        elif scales[iaxis] == 'log':
            ax.set_xlim(9.5e-2, 1.02)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['Sigma', 'SigmaT', 'SigmaNT']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(3e-2, .65)
        elif scales[iaxis] == 'log':
            ax.set_xlim(3e-2, .65)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['M']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-1e1, 8.1e2)
        elif scales[iaxis] == 'log':
            ax.set_xlim(1.2e-2, 9.5e2)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 5))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['Tkin']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(5., 30.)
        elif scales[iaxis] == 'log':
            ax.set_xlim(.25, 120.)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['Vlsr']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-5., 12.5)
        elif scales[iaxis] == 'log':
            raise ValueError('Vlsr goes to negative values. No log scale is possible.')
            
    elif features[iaxis] in ['Distance']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(105., 950.)
        elif scales[iaxis] == 'log':
            ax.set_xlim(95., 1050.)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12) 
            
    elif features[iaxis] in ['PA', 'Direction']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-180., 180.)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(45.))
        elif scales[iaxis] == 'log':
            raise ValueError('Direction angles go to negative values. A log scale introduces biases.')
            
    elif features[iaxis] in ['Gradient']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(.05, 7.)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(2.))
        elif scales[iaxis] == 'log':
            ax.set_xlim(.11, 8.)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['AspectRatio']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(.85, 5.3)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(1.))
        elif scales[iaxis] == 'log':
            ax.set_xlim(.85, 5.3)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 1))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['beta']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-.05, 1.7)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(.2))
        elif scales[iaxis] == 'log':
            ax.set_xlim(.0009, 2.1)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['JoverM']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-.005, .15)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(.02))
        elif scales[iaxis] == 'log':
            ax.set_xlim(2.3e-5, .21)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
            
    elif features[iaxis] in ['GradientR']:
        
        if scales[iaxis] == 'linear':
            ax.set_xlim(-.05, 1.1)
            ax.xaxis.set_major_locator(ticker.MultipleLocator(.2))
        elif scales[iaxis] == 'log':
            ax.set_xlim(.003, 1.1)
            ax.tick_params(axis='x', which='major', pad=6.)
            ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='x', which='minor', labelsize=12)
    
    if features[iaxis] in ['beta', 'AspectRatio']:
        ax.set_xlabel(getName(features[iaxis])+', '\
                      +getMathName(features[iaxis]))
    else:
        ax.set_xlabel(getName(features[iaxis])+', '\
                      +getMathName(features[iaxis])+' '\
                      +getUnitName(features[iaxis]))
            
    ## y-axis
    iaxis = 1
    ax.set_yscale(scales[iaxis])
    if features[iaxis] in ['Reff', 'major', 'minor']:

        fig.subplots_adjust(left = .08, right = .93)

        if scales[iaxis] == 'linear':
            ax.set_ylim(8e-3, .95)
        elif scales[iaxis] == 'log':
            ax.set_ylim(8e-3, .95)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
    elif features[iaxis] in ['SigmaTot']:

        if scales[iaxis] == 'linear':
            ax.set_ylim(9.5e-2, .65)
        elif scales[iaxis] == 'log':
            ax.set_ylim(9.5e-2, 1.02)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
    elif features[iaxis] in ['Sigma', 'SigmaT', 'SigmaNT']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(3e-2, .65)
        elif scales[iaxis] == 'log':
            ax.set_ylim(3e-2, .65)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['M']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-1e1, 8.1e2)
        elif scales[iaxis] == 'log':
            ax.set_ylim(1.2e-2, 9.5e2)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['Tkin']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(5., 30.)
        elif scales[iaxis] == 'log':
            ax.set_ylim(.25, 120.)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['Vlsr']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-5., 12.5)
        elif scales[iaxis] == 'log':
            raise ValueError('Vlsr goes to negative values. No log scale is possible.')
            
    elif features[iaxis] in ['Distance']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(105., 950.)
        elif scales[iaxis] == 'log':
            ax.set_ylim(95., 1050.)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['PA', 'Direction']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-180., 180.)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(45.))
        elif scales[iaxis] == 'log':
            raise ValueError('Direction angles go to negative values. A log scale introduces biases.')
            
    elif features[iaxis] in ['Gradient']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(.05, 7.)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(2.))
        elif scales[iaxis] == 'log':
            ax.set_ylim(.11, 8.)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['AspectRatio']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(.85, 5.3)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(1.))
        elif scales[iaxis] == 'log':
            ax.set_ylim(.85, 5.3)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 1))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['beta']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-.05, 1.7)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(.2))
        elif scales[iaxis] == 'log':
            ax.set_ylim(.0009, 2.1)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['JoverM']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-.005, .15)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(.02))
        elif scales[iaxis] == 'log':
            ax.set_ylim(2.3e-5, .21)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    elif features[iaxis] in ['GradientR']:
        
        if scales[iaxis] == 'linear':
            ax.set_ylim(-.05, 1.1)
            ax.yaxis.set_major_locator(ticker.MultipleLocator(.2))
        elif scales[iaxis] == 'log':
            ax.set_ylim(.003, 1.1)
            ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            ax.tick_params(axis='y', which='minor', labelsize=12)
            
    if features[iaxis] in ['beta', 'AspectRatio']:
        ax.set_ylabel(getName(features[iaxis])+', '\
                      +getMathName(features[iaxis]))
    elif features[iaxis] in ['Reff', 'major', 'minor']:
        
        if scales[iaxis] == 'linear':
            labelpad = 0.
        elif scales[iaxis] == 'log':
            labelpad = -4.
        
        ax.set_ylabel(getName(features[iaxis])+', '\
                      +getMathName(features[iaxis])+' '\
                      +getUnitName(features[iaxis]),
                      labelpad = labelpad)
    else:
        ax.set_ylabel(getName(features[iaxis])+', '\
                      +getMathName(features[iaxis])+' '\
                      +getUnitName(features[iaxis]))
            
            

    
    # Style X-axis.
    iaxis = 0
    if features[iaxis] in ['Reff', 'major', 'minor']:
        ax.fill_betweenx(ax.get_ylim(), 0., beamsize,
                         linewidth = 0.,
                         color = 'gray',
                         alpha = .15,
                         zorder = 0)

        axtw = ax.twiny()
        axtw.set_xlim(*(ax.get_xlim()*u.pc).to(u.AU).value)
        axtw.set_xscale(scales[iaxis])
        if scales[iaxis] == 'linear':
            
            axtw.xaxis.set_major_locator(ticker.MultipleLocator(50000))
            
        elif scales[iaxis] == 'log':
            
            axtw.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            axtw.tick_params(axis='x', which='minor', labelsize=12)
            
        axtw.set_xlabel(getName(features[iaxis])+', '\
                        +getMathName(features[iaxis])+' [AU]',
                        labelpad = 7.)
    elif features[iaxis] in ['Sigma']:
        ### Plot the expected line widths
        ax.plot([Sigma_Sonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot([Sigma_halfSonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
        if scales[1] == 'linear':
            textY = ax.get_ylim()[0]+.97*(ax.get_ylim()[1]-ax.get_ylim()[0])
        elif scales[1] == 'log':
            textY = np.log10(ax.get_ylim()[0])\
                    +.97*(np.log10(ax.get_ylim()[1])-np.log10(ax.get_ylim()[0]))
            textY = 10.**textY
        if scales[0] == 'linear':
            textX1 = Sigma_Sonic+.01*(ax.get_xlim()[1]-ax.get_xlim()[0])
            textX2 = Sigma_halfSonic+.01*(ax.get_xlim()[1]-ax.get_xlim()[0])
        elif scales[0] == 'log':
            textX1 = np.log10(Sigma_Sonic)\
                    +.01*(np.log10(ax.get_xlim()[1])-np.log10(ax.get_xlim()[0]))
            textX1 = 10.**textX1
            
            textX2 = np.log10(Sigma_halfSonic)\
                    +.01*(np.log10(ax.get_xlim()[1])-np.log10(ax.get_xlim()[0]))
            textX2 = 10.**textX2
        ax.text(textX1, textY,
                '$\sigma_{NT}=c_{s, ave}$',
                verticalalignment = 'top',
                horizontalalignment = 'left',
                rotation = 270)
        ax.text(textX2, textY,
                '$\sigma_{NT}=0.5c_{s, ave}$',
                verticalalignment = 'top',
                horizontalalignment = 'left',
                rotation = 270)
        
    elif features[iaxis] in ['SigmaTot']:
        ax.plot([SigmaTot_Sonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot([SigmaTot_halfSonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
    elif features[iaxis] in ['SigmaT', 'SigmaNT']:
        ax.plot([Sonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot([halfSonic]*2, ax.get_ylim(),
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
    elif features[iaxis] in ['Tkin']:
        ax.plot([Tkin0.value]*2, ax.get_ylim(),
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        
    elif features[iaxis] in ['PA']:
        
        ax.fill_betweenx(ax.get_ylim(), ax.get_xlim()[0], -90.,
                         linewidth = 0.,
                         color = 'gray',
                         alpha = .15,
                         zorder = 0)
        ax.fill_betweenx(ax.get_ylim(), 90., ax.get_xlim()[1],
                         linewidth = 0.,
                         color = 'gray',
                         alpha = .15,
                         zorder = 0)
        
    # Style Y-axis.
    iaxis = 1
    if features[iaxis] in ['Reff', 'major', 'minor']:
        ax.fill_between(ax.get_xlim(), 0., beamsize,
                        linewidth = 0.,
                        color = 'gray',
                        alpha = .15,
                        zorder = 0)
        axtw = ax.twinx()
        axtw.set_ylim(*(ax.get_ylim()*u.pc).to(u.AU).value)
        axtw.set_yscale(scales[iaxis])
        if scales[iaxis] == 'linear':
            
            axtw.yaxis.set_major_locator(ticker.MultipleLocator(50000))
            plt.setp(axtw.yaxis.get_majorticklabels(), rotation = 90)
            
        elif scales[iaxis] == 'log':
            
            axtw.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
            axtw.tick_params(axis='y', which='minor', labelsize=12)
        
        axtw.set_ylabel(getName(features[iaxis])+', '\
                        +getMathName(features[iaxis])+' [AU]',
                        labelpad = 3.)
    elif features[iaxis] in ['Sigma']:
        ### Plot the expected line widths
        ax.plot(ax.get_xlim(), [Sigma_Sonic]*2,
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot(ax.get_xlim(), [Sigma_halfSonic]*2,
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
        if scales[0] == 'linear':
            textX = ax.get_xlim()[0]+.97*(ax.get_xlim()[1]-ax.get_xlim()[0])
        elif scales[0] == 'log':
            textX = np.log10(ax.get_xlim()[0])\
                    +.97*(np.log10(ax.get_xlim()[1])-np.log10(ax.get_xlim()[0]))
            textX = 10.**textX
        if scales[1] == 'linear':
            textY1 = Sigma_Sonic+.03*(ax.get_ylim()[1]-ax.get_ylim()[0])
            textY2 = Sigma_halfSonic+.03*(ax.get_ylim()[1]-ax.get_ylim()[0])
        elif scales[1] == 'log':
            textY1 = np.log10(Sigma_Sonic)\
                    +.03*(np.log10(ax.get_ylim()[1])-np.log10(ax.get_ylim()[0]))
            textY1 = 10.**textY1
            
            textY2 = np.log10(Sigma_halfSonic)\
                    +.03*(np.log10(ax.get_ylim()[1])-np.log10(ax.get_ylim()[0]))
            textY2 = 10.**textY2
        ax.text(textX, textY1,
                '$\sigma_{NT}=c_{s, ave}$',
                verticalalignment = 'center',
                horizontalalignment = 'right')
        ax.text(textX, textY2,
                '$\sigma_{NT}=0.5c_{s, ave}$',
                verticalalignment = 'center',
                horizontalalignment = 'right')
        
    elif features[iaxis] in ['SigmaTot']:
        ax.plot(ax.get_xlim(), [SigmaTot_Sonic]*2,
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot(ax.get_xlim(), [SigmaTot_halfSonic]*2,
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
    elif features[iaxis] in ['SigmaT', 'SigmaNT']:
        ax.plot(ax.get_xlim(), [Sonic]*2,
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
        ax.plot(ax.get_xlim(), [halfSonic]*2,
                color = 'k',
                linestyle = ':',
                linewidth = 3.,
                dashes = (3., 3.))
    elif features[iaxis] in ['Tkin']:
        ax.plot(ax.get_xlim(), [Tkin0.value]*2,
                color = 'k',
                linestyle = '--',
                linewidth = 3.,
                dashes = (20., 10.))
    elif features[iaxis] in ['PA']:
        ax.fill_between(ax.get_xlim(), ax.get_ylim()[0], -90.,
                        linewidth = 0.,
                        color = 'gray',
                        alpha = .15,
                        zorder = 0)
        ax.fill_between(ax.get_xlim(), 90., ax.get_ylim()[1],
                        linewidth = 0.,
                        color = 'gray',
                        alpha = .15,
                        zorder = 0)
        


    for QA in set(dict_tables['table2']['QA']):

        if QA == 'good':
            facecolor = 'k'
            edgecolor = 'k'
        elif QA == 'bad':
            facecolor = 'w'
            edgecolor = 'k'

        for stars in ['inside', 'nearby', 'none']:

            if stars == 'inside':
                maskplot = (dict_tables['table2']['QA'].values == QA)\
                           &(dict_tables['table1']['nYSOin'].values != 0.)
                ## styles
                marker = '*'
                size = 500.
            elif stars == 'nearby':
                maskplot = (dict_tables['table2']['QA'].values == QA)\
                           &(dict_tables['table1']['nYSOin'].values == 0.)\
                           &(dict_tables['table1']['nYSOnearby'].values != 0.)
                ## styles
                marker = '^'
                size = 300.
            elif stars == 'none':
                maskplot = (dict_tables['table2']['QA'].values == QA)\
                           &(dict_tables['table1']['nYSOin'].values == 0.)\
                           &(dict_tables['table1']['nYSOnearby'].values == 0.)
                ## styles
                marker = 'o'
                size = 300.

            
            xthis, ythis = dict_tables[tables[0]][features[0]][maskplot],\
                           dict_tables[tables[1]][features[1]][maskplot]
            ax.scatter(xthis, ythis,
                       c = facecolor,
                       marker = marker,
                       s = size,
                       edgecolors = edgecolor,
                       linewidth = 2.,
                       zorder = 100)

            xerr = (dict_tables[tables[0]]['e'+features[0]]\
                   if ('e'+features[0]) in dict_tables[tables[0]].keys()\
                   else np.zeros(len(dict_tables[tables[0]]))*np.nan)
            yerr = (dict_tables[tables[1]]['e'+features[1]]\
                   if ('e'+features[1]) in dict_tables[tables[1]].keys()\
                   else np.zeros(len(dict_tables[tables[1]]))*np.nan)


            ax.errorbar(xthis, ythis,
                        xerr = xerr[maskplot],
                        yerr = yerr[maskplot],
                        linestyle = 'none',
                        linewidth = 2.,
                        color = 'k',
                        alpha = .8,
                        zorder = 99)
            
            
                
            
            ## Triangle of Larson's Laws
            if (features[0] == 'Reff') and (features[1] == 'SigmaTot'):
                ax.plot(ax.get_xlim(),
                        np.sqrt((1.1*np.array(ax.get_xlim())**.38*u.km/u.s)**2.
                                -c.k_B*Tkin0/(12.*u.u)
                                +c.k_B*Tkin0/mass['average']).to(u.km/u.s).value,
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
            elif (features[1] == 'Reff') and (features[0] == 'SigmaTot'):
                ax.plot(np.sqrt((1.1*np.array(ax.get_ylim())**.38*u.km/u.s)**2.
                                -c.k_B*Tkin0/(12.*u.u)
                                +c.k_B*Tkin0/mass['average']).to(u.km/u.s).value,
                        ax.get_ylim(),
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
            
                
            if (features[0] == 'Reff') and (features[1] == 'M'):
                ax.plot(ax.get_xlim(),
                        (1.1/.42)**5.*np.array(ax.get_xlim())**1.9,
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
            elif (features[1] == 'Reff') and (features[0] == 'M'):
                ax.plot((1.1/.42)**5.*np.array(ax.get_ylim())**1.9,
                        ax.get_ylim(),
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
                
            if (features[0] == 'M') and (features[1] == 'SigmaTot'):
                ax.plot(ax.get_xlim(),
                        np.sqrt((.42*np.array(ax.get_xlim())**.2*u.km/u.s)**2.
                                -c.k_B*Tkin0/(12.*u.u)
                                +c.k_B*Tkin0/mass['average']).to(u.km/u.s).value,
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
            elif (features[1] == 'M') and (features[0] == 'SigmaTot'):
                ax.plot(np.sqrt((.42*np.array(ax.get_ylim())**.2*u.km/u.s)**2.
                                -c.k_B*Tkin0/(12.*u.u)
                                +c.k_B*Tkin0/mass['average']).to(u.km/u.s).value,
                        ax.get_ylim(),
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
                
            # parallelity
            if (features[0] in ['PA', 'Direction']) and (features[1] in ['PA', 'Direction']):
                
                ## parallel
                ax.plot(ax.get_xlim(), ax.get_ylim(),
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
                
                ## anti-parallel
                ax.plot([0., 180.], [-180., 0.],
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
                ax.plot([-180., 0.], [0., 180.],
                        color = 'gray',
                        linestyle = '-',
                        linewidth = 1.,
                        zorder = 1)
                
                ## perpendicular
                ax.plot([90., 180.], [-180., -90.],
                        color = 'gray',
                        linestyle = '--',
                        linewidth = 1.,
                        zorder = 1)
                ax.plot([-90., 180.], [-180., 90.],
                        color = 'gray',
                        linestyle = '--',
                        linewidth = 1.,
                        zorder = 1)
                ax.plot([-180., -90.], [90., 180.],
                        color = 'gray',
                        linestyle = '--',
                        linewidth = 1.,
                        zorder = 1)
                ax.plot([-180., 90.], [-90., 180.],
                        color = 'gray',
                        linestyle = '--',
                        linewidth = 1.,
                        zorder = 1)
                

    xGoodman93, yGoodman93 = dict_tables['Goodman93_'+tables[0]][features[0]],\
                             dict_tables['Goodman93_'+tables[1]][features[1]]
    ax.plot(xGoodman93, yGoodman93,
            linestyle = 'none',
            markeredgewidth = 0.,
            marker = '.',
            color = 'gray',
            markersize = 35.,
            alpha = .25,
            zorder = 2)
    
    
    if annotate == True:
        ## Goodman 93
        if scales[0] == 'linear':
            if (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textX = np.nanpercentile(xGoodman93, 60.)
                horizontalalignment = 'center'
                verticalalignment = 'center'
            elif (features[0] == 'Direction') and (features[1] == 'PA'):
                textX = 170.
                horizontalalignment = 'right'
                verticalalignment = 'top'
            elif (features[0] == 'PA') and (features[1] == 'Direction'):
                textX = 0.
                horizontalalignment = 'center'
                verticalalignment = 'center'
            else:
                textX = np.nanpercentile(xGoodman93, 95.)
                horizontalalignment = 'right'
                verticalalignment = 'center'
        elif scales[0] == 'log':
            if (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textX = 10.**np.nanpercentile(np.log10(xGoodman93), 60.)
                horizontalalignment = 'center'
                verticalalignment = 'center'
            else:
                textX = 10.**np.nanpercentile(np.log10(xGoodman93), 95.)
                horizontalalignment = 'right'
                verticalalignment = 'center'
        if scales[1] == 'linear':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textY = np.nanpercentile(yGoodman93, 40.)
            elif (features[0] == 'Direction') and (features[1] == 'PA'):
                textY = 80.
            elif (features[0] == 'PA') and (features[1] == 'Direction'):
                textY = 155.
            else:
                textY = np.nanpercentile(yGoodman93, 95.)
        elif scales[1] == 'log':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textY = 10.**np.nanpercentile(np.log10(yGoodman93), 40.)
            else:
                textY = 10.**np.nanpercentile(np.log10(yGoodman93), 95.)
            
        ax.text(textX, textY,
                'Goodman et al. (1993)',
                horizontalalignment = horizontalalignment,
                verticalalignment = verticalalignment,
                weight = 'bold',
                alpha = .35,
                zorder = 200)
        
        ## this work
        xthis, ythis = dict_tables[tables[0]][features[0]][dict_tables['table2']['QA'] == 'good'],\
                       dict_tables[tables[1]][features[1]][dict_tables['table2']['QA'] == 'good']
        if scales[0] == 'linear':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textX = np.nanpercentile(xthis, 5.)
                horizontalalignment = 'center'
            elif (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textX = np.nanpercentile(xthis, 40.)
                horizontalalignment = 'center'
            elif (features[0] == 'Direction') and (features[1] == 'PA'):
                textX = -170.
                horizontalalignment = 'left'
            elif (features[0] == 'PA') and (features[1] == 'Direction'):
                textX = 0.
                horizontalalignment = 'center'
            else:
                textX = np.nanpercentile(xthis, 50.)
                horizontalalignment = 'center'
        elif scales[0] == 'log':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textX = 10.**np.nanpercentile(np.log10(xthis), 5.)
                horizontalalignment = 'center'
            elif (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textX = 10.**np.nanpercentile(np.log10(xthis), 40.)
                horizontalalignment = 'center'
            else:
                textX = 10.**np.nanpercentile(np.log10(xthis), 50.)
                horizontalalignment = 'center'
        if scales[1] == 'linear':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textYd = np.nanpercentile(ythis, 60.)
                textYu = np.nanpercentile(ythis, 60.)
            elif (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textYd = np.nanpercentile(ythis, 95.)
                textYu = np.nanpercentile(ythis, 95.)
            elif (features[0] == 'Direction') and (features[1] == 'PA'):
                textYd, textYu = 80., 80.
            elif (features[0] == 'PA') and (features[1] == 'Direction'):
                textYd, textYu = -155., -155.
            else:
                textYd = np.nanmin(ythis)-(np.nanpercentile(ythis, 25.)-np.nanmin(ythis))
                textYu = np.nanmax(ythis)+(np.nanmax(ythis)-np.nanpercentile(ythis, 75.))
        elif scales[1] == 'log':
            if (features[0] == 'AspectRatio') and (features[1] == 'Gradient'):
                textYd = 10.**np.nanpercentile(np.log10(ythis), 75.)
                textYu = 10.**np.nanpercentile(np.log10(ythis), 75.)
            elif (features[0] == 'Gradient') and (features[1] == 'AspectRatio'):
                textYd = 10.**np.nanpercentile(np.log10(ythis), 95.)
                textYu = 10.**np.nanpercentile(np.log10(ythis), 95.)
            else:
                textYd = np.nanmin(np.log10(ythis))\
                         -(np.nanpercentile(np.log10(ythis), 25.)-np.nanmin(np.log10(ythis)))
                textYd = 10.**textYd
                textYu = np.nanmax(np.log10(ythis))\
                         +(np.nanmax(np.log10(ythis))-np.nanpercentile(np.log10(ythis), 75.))
                textYu = 10.**textYu
        
        textY = textYd if (textYd > ax.get_ylim()[0])\
                else textYu
        verticalalignment = 'top' if (textYd > ax.get_ylim()[0]) else 'bottom'
        ax.text(textX, textY,
                '"Droplet"',
                horizontalalignment = horizontalalignment,
                verticalalignment = verticalalignment,
                style = 'italic',
                weight = 'bold',
                zorder = 201)
    
    
    return fig

In [87]:
fig = plt.figure(figsize = (14., 14.))
ax = fig.gca()
fig.subplots_adjust(left = .09, right = .97, bottom = .07, top = .93)

a, b = 'Reff', 'SigmaTot'
scales = 'log-log'

features = [a, b]
tables = ['table1' if feature in dict_tables['table1'].keys() else 'table2' for feature in features]
scales = scales.split('-')
# Set axes.
## x-axis
iaxis = 0
ax.set_xscale(scales[iaxis])
if features[iaxis] in ['Reff', 'major', 'minor']:
    if scales[iaxis] == 'linear':
        ax.set_xlim(8e-3, .89)
    elif scales[iaxis] == 'log':
        ax.set_xlim(8e-3, .89)
        ax.tick_params(axis='x', which='major', pad=6.)
        ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
        ax.tick_params(axis='x', which='minor', labelsize=12)
elif features[iaxis] in ['Sigma', 'SigmaTot', 'SigmaT', 'SigmaNT']:
    if scales[iaxis] == 'linear':
        ax.set_xlim(8.1e-2, .5)
    elif scales[iaxis] == 'log':
        ax.set_xlim(8.1e-2, 1.05)
        ax.tick_params(axis='x', which='major', pad=6.)
        ax.xaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
        ax.tick_params(axis='x', which='minor', labelsize=12)
## y-axis
iaxis = 1
ax.set_yscale(scales[iaxis])
if features[iaxis] in ['Reff', 'major', 'minor']:
    if scales[iaxis] == 'linear':
        ax.set_ylim(8e-3, .89)
    elif scales[iaxis] == 'log':
        ax.set_ylim(8e-3, .89)
        ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
        ax.tick_params(axis='y', which='minor', labelsize=12)
elif features[iaxis] in ['Sigma', 'SigmaTot', 'SigmaT', 'SigmaNT']:
    if scales[iaxis] == 'linear':
        ax.set_ylim(8.1e-2, .5)
    elif scales[iaxis] == 'log':
        ax.set_ylim(8.1e-2, 1.05)
        ax.yaxis.set_minor_formatter(FuncFormatter2(ticks_format, interval = 2))
        ax.tick_params(axis='y', which='minor', labelsize=12)
        
    
    
# Style X-axis.
iaxis = 0
if features[iaxis] in ['Reff', 'major', 'minor']:
    ax.fill_betweenx(ax.get_ylim(), 0., beamsize,
                     color = 'gray',
                     alpha = .15,
                     zorder = 0)
    
    axtw = ax.twiny()
    axtw.set_xlim(*(ax.get_xlim()*u.pc).to(u.AU).value)
    axtw.set_xscale(scales[iaxis])
    #axtw.set_xlabel('Radius [AU]')
elif features[iaxis] in ['Sigma']:
    ### Plot the expected line widths
    ax.plot([Sigma_Sonic]*2, ax.get_ylim(),
            color = 'k',
            linestyle = '--',
            linewidth = 3.,
            dashes = (20., 10.))
    ax.plot([Sigma_halfSonic]*2, ax.get_ylim(),
            color = 'k',
            linestyle = ':',
            linewidth = 3.,
            dashes = (3., 3.))
# Style Y-axis.
iaxis = 1
if features[iaxis] in ['Reff', 'major', 'minor']:
    ax.fill_between(ax.get_xlim(), 0., beamsize,
                    color = 'gray',
                    alpha = .15,
                    zorder = 0)
    axtw = ax.twinx()
    axtw.set_ylim(*(ax.get_ylim()*u.pc).to(u.AU).value)
    axtw.set_yscale(scales[iaxis])
    #axtw.set_ylabel('Radius [AU]')
elif features[iaxis] in ['Sigma']:
    ### Plot the expected line widths
    ax.plot(ax.get_xlim(), [Sigma_Sonic]*2,
            color = 'k',
            linestyle = '--',
            linewidth = 3.,
            dashes = (20., 10.))
    ax.plot(ax.get_xlim(), [Sigma_halfSonic]*2,
            color = 'k',
            linestyle = ':',
            linewidth = 3.,
            dashes = (3., 3.))
    



for QA in set(dict_tables['table2']['QA']):
    
    if QA == 'good':
        edgecolor = 'k'
        alpha = 1.
    elif QA == 'bad':
        edgecolor = 'k'
        alpha = .35
    
    for stars in ['inside', 'nearby', 'none']:
        
        if stars == 'inside':
            maskplot = (dict_tables['table2']['QA'].values == QA)\
                       &(dict_tables['table1']['nYSOin'].values != 0.)
            ## styles
            marker = 'o'
            facecolor = 'k'
            hatch = 'none'
            size = 300.
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = facecolor,
                   marker = marker,
                   s = size,
                   edgecolors = edgecolor,
                   linewidth = 2.,
                       alpha = alpha,
                   zorder = 100)
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = 'w',
                   marker = marker,
                   s = size,
                   edgecolors = 'none',
                   linewidth = 2.,
                   zorder = 99)
        elif stars == 'nearby':
            maskplot = (dict_tables['table2']['QA'].values == QA)\
                       &(dict_tables['table1']['nYSOin'].values == 0.)\
                       &(dict_tables['table1']['nYSOnearby'].values != 0.)
            ## styles
            marker = 'o'
            facecolor = 'w'
            hatch = '/////'
            size = 300.
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = facecolor,
                   marker = marker,
                   s = size,
                   edgecolors = edgecolor,
                   linewidth = 2.,
                   alpha = alpha,
                   hatch = hatch,
                   zorder = 100)
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = 'w',
                   marker = marker,
                   s = size,
                   edgecolors = 'none',
                   linewidth = 2.,
                   zorder = 99)
        elif stars == 'none':
            maskplot = (dict_tables['table2']['QA'].values == QA)\
                       &(dict_tables['table1']['nYSOin'].values == 0.)\
                       &(dict_tables['table1']['nYSOnearby'].values == 0.)
            ## styles
            marker = 'o'
            facecolor = 'w'
            hatch = 'none'
            size = 300.
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = facecolor,
                   marker = marker,
                   s = size,
                   edgecolors = edgecolor,
                   linewidth = 2.,
                   alpha = alpha,
                   zorder = 100)
            ax.scatter(dict_tables[tables[0]][features[0]][maskplot],
                   dict_tables[tables[1]][features[1]][maskplot],
                   c = 'w',
                   marker = marker,
                   s = size,
                   edgecolors = 'none',
                   linewidth = 2.,
                   zorder = 99)
        
        
        
        
        xerr = (dict_tables[tables[0]]['e'+features[0]]\
               if ('e'+features[0]) in dict_tables[tables[0]].keys()\
               else np.zeros(len(dict_tables[tables[0]]))*np.nan)
        yerr = (dict_tables[tables[1]]['e'+features[1]]\
               if ('e'+features[1]) in dict_tables[tables[1]].keys()\
               else np.zeros(len(dict_tables[tables[1]]))*np.nan)
        
            
        ax.errorbar(dict_tables[tables[0]][features[0]][maskplot],
                    dict_tables[tables[1]][features[1]][maskplot],
                    xerr = xerr[maskplot],
                    yerr = yerr[maskplot],
                    linestyle = 'none',
                    linewidth = 2.,
                    color = 'k',
                    alpha = alpha,
                    zorder = 98)
        

ax.plot(dict_tables['Goodman93_'+tables[0]][features[0]], dict_tables['Goodman93_'+tables[1]][features[1]],
        linestyle = 'none',
        markeredgewidth = 0.,
        marker = '.',
        color = 'gray',
        alpha = .2,
        markersize = 17.,
        zorder = 2) 

plt.savefig('/Users/hopechen/Desktop/test.png')
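
[A quick consistency check: the effective radii stored in ReffComp plotted against the Reff values recorded in table1, with the red line marking equality.]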



In [45]:
plt.plot(ReffComp, dict_tables['table1']['Reff'].values,
         'k.')

plt.plot([0., 1.], [0., 1.],
         'r-')

plt.xlim(.017, .078)
plt.ylim(.017, .078)


Out[45]:
(0.017, 0.078)

In [41]:
ReffComp


Out[41]:
[0.03836354489735999,
 0.021024925591785912,
 0.024724010430864717,
 0.034103707075614616,
 0.02662088852096129,
 0.03481427559847372,
 0.027758418791312345,
 0.027691068446366467,
 0.04886994137090299,
 0.03760793172970954,
 0.05931314331718893,
 0.048375004606611476,
 0.036768705740107974,
 0.07829828800288409,
 0.05432439696023242,
 0.05627248746717553,
 0.05813545912951838,
 0.08605644289703347,
 0.05941088977091703]
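
[The following cell overlays histograms of the four velocity-dispersion components ($\sigma_{tot}$, $\sigma_{NH_3}$, $\sigma_{NT}$, and $\sigma_T$), each pooled across every table in dict_tables and drawn as a step histogram with linear binning between the pooled extrema.]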

In [40]:
features = ['SigmaTot', 'SigmaNH3', 'SigmaNT', 'SigmaT']
scales = ['linear', 'linear', 'linear', 'linear']
fig = plt.figure(figsize = (16., 12.))
ax = fig.gca()

for i in range(len(features)):
    feature, scale = features[i], scales[i]
    
    data = np.concatenate([table[feature] if (feature in table.keys()) else []
                           for table in dict_tables.values()])
    
    if scale == 'linear':
        bins = np.linspace(np.nanmin(data), np.nanmax(data), 20)
    elif scale == 'log':
        bins = np.logspace(np.log10(np.nanmin(data)), np.log10(np.nanmax(data)), 20)
    
    ax.hist(data,
            bins = bins,
            range = (bins.min(), bins.max()),
            histtype = 'step',
            color = ssk_colors[i])


II.3 Full Virial Analysis

Appendix I. Comparison to other works


In [13]:
from astropy.coordinates import SkyCoord
from matplotlib import patches, cm

direcTable = '/Users/hopechen/Documents/projects/git_projects/data/Kirk_Oph/'
testTable = pd.read_csv(direcTable+'L1688-MasterCoreCat-gs.txt',
                        delim_whitespace = True)
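
[The catalog dispersions are decomposed the same way as for the droplets: the thermal dispersion of the mean particle is $\sigma_T = \sqrt{k_B T_{kin}/m_{average}}$, the non-thermal component removes the thermal broadening of the NH$_3$ line, $\sigma_{NT} = \sqrt{\sigma_{NH_3}^2 - k_B T_{kin}/m_{NH_3}}$, and the total is $\sigma_{tot} = \sqrt{\sigma_T^2 + \sigma_{NT}^2}$. The sketch below applies the decomposition to a single measurement (assuming the NH$_3$ and mean-particle masses of 17.031 u and 2.37 u used elsewhere in this notebook, and the numpy/astropy namespaces already imported); the cell after it applies the same decomposition to the catalog read above.]


In [ ]:
def decompose(sigma_obs, tkin, m_obs = 17.031*u.u, m_avg = 2.37*u.u):
    # thermal dispersion of the mean particle
    sigma_T = np.sqrt(c.k_B*tkin/m_avg).to(u.km/u.s)
    # non-thermal component, after removing the tracer's thermal broadening
    sigma_NT = np.sqrt(sigma_obs**2. - c.k_B*tkin/m_obs).to(u.km/u.s)
    # total dispersion
    return sigma_T, sigma_NT, np.sqrt(sigma_T**2. + sigma_NT**2.)

# e.g., a 0.3 km/s observed NH3 dispersion at 12 K
decompose(.3*u.km/u.s, 12.*u.K)
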

In [14]:
testDerived = {'Reff': testTable['EFFRADIUS(pc)'].values,
               'Sigma': testTable['LINEWIDTH(cm/s)'].values/1e5,
               'Tkin': testTable['TKIN(K)'].values}
testDerived['SigmaT'] = np.sqrt(c.k_B*testDerived['Tkin']*u.K/mass['average']).to(u.km/u.s).value
testDerived['SigmaNT'] = np.sqrt((testDerived['Sigma']*u.km/u.s)**2.
                                 -c.k_B*testDerived['Tkin']*u.K/mass['NH3']).to(u.km/u.s).value
testDerived['SigmaTot'] = np.sqrt(testDerived['SigmaT']**2. + testDerived['SigmaNT']**2.)
    

testDerived = pd.DataFrame(testDerived)
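
[The cell below overlays the catalog core ellipses on the GAS $\sigma_{NH_3}$ map of L1688 and re-derives each core's $\sigma_{NH_3}$, $T_{kin}$, and $T_{peak}$ as the median of the corresponding GAS property map within the ellipse footprint.]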

In [73]:
reg = 'L1688'
wcs_GAS = wcs.WCS(dict_data[reg]['header_GAS'])
pixscale = np.median(testTable['EFFRADIUS(pc)'].values/np.sqrt(testTable['S1'].values*testTable['S2'].values))

fig = plt.figure(figsize = (14., 14.*dict_data[reg]['Sigma'].shape[0]/dict_data[reg]['Sigma'].shape[1]))
ax = fig.gca(projection = wcs_GAS)

ax.imshow(dict_data[reg]['Sigma'],
          cmap = 'YlGnBu',
          norm = colors.Normalize(0., .4))


xmesh, ymesh = np.meshgrid(np.arange(dict_data[reg]['Sigma'].shape[1]),
                           np.arange(dict_data[reg]['Sigma'].shape[0]))

SigmaMap, TkinMap, TpeakMap = [], [], []
for i, structure in enumerate(list(range(1, 13))+['extra']):
    
    ax.contour(dict_masks[reg][structure],
               levels = [.5],
               colors = 'w',
               linewidths = 5.)
    
    ax.contour(dict_masks[reg][structure],
               levels = [.5],
               colors = ssk_colors[i],
               linewidths = 3.)
    
    
for i in range(len(testTable)):
    raStr, decStr = testTable[['RA', 'DEC']].loc[i].values
    icoord = SkyCoord(raStr, decStr,
                      unit=(u.hourangle, u.deg))
    ra, dec = icoord.ra.value, icoord.dec.value
    
    beam_center = (ra * u.deg, dec * u.deg)
    '''
    beam_size = (testTable[['EFFRADIUS(pc)']].loc[i].values[0]*u.pc/distances['L1688']).decompose().value
    beam_size = np.degrees(beam_size) * u.deg
    beam = wcsaxes.SphericalCircle(beam_center,
                                   beam_size,
                                   edgecolor = 'r',
                                   facecolor = 'none',
                                   linewidth = 2.,
                                   zorder = 999,
                                   transform = ax.get_transform('fk5'))
    '''
    beam_width = (testTable[['S1']].loc[i].values[0]*pixscale*u.pc/distances['L1688']).decompose().value
    beam_width = 1.5*np.degrees(beam_width) * u.deg
    beam_height = (testTable[['S2']].loc[i].values[0]*pixscale*u.pc/distances['L1688']).decompose().value
    beam_height = 1.5*np.degrees(beam_height) * u.deg
    beam_angle = -(testTable[['ANGLE']].loc[i].values[0] + 90.)
    beam = patches.Ellipse((beam_center[0].value, beam_center[1].value),
                           beam_width.value, beam_height.value,
                           angle = beam_angle,
                           edgecolor = 'r',
                           facecolor = 'none',
                           linewidth = 1.5,
                           zorder = 999,
                           transform = ax.get_transform('fk5'))
    ax.add_patch(beam)
    
    # in pix system
    raPix, decPix = wcs_GAS.wcs_world2pix([[ra, dec]], 0)[0]
    s1Pix = beam_width.value/2./abs(dict_data[reg]['header_GAS']['CDELT1'])
    s2Pix = beam_height.value/2./abs(dict_data[reg]['header_GAS']['CDELT1'])
    angleRad = -np.radians(beam_angle)
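    # The rotated ellipse is recast as an implicit conic,
    #     A*x**2 + B*x*y + C*y**2 + D*x + E*y + F = 0,
    # with semi-axes (s1Pix, s2Pix) and center (raPix, decPix) in pixel
    # coordinates (assuming square pixels, |CDELT1| = |CDELT2|); pixels
    # where the quadratic form evaluates to a negative value lie inside
    # the ellipse.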
    A = s1Pix**2.*np.sin(angleRad)**2.+s2Pix**2.*np.cos(angleRad)**2.
    B = 2.*(s2Pix**2.-s1Pix**2.)*np.sin(angleRad)*np.cos(angleRad)
    C = s1Pix**2.*np.cos(angleRad)**2.+s2Pix**2.*np.sin(angleRad)**2.
    D = -2.*A*raPix - B*decPix
    E = -B*raPix - 2.*C*decPix
    F = A*raPix**2. + B*raPix*decPix + C*decPix**2. - s1Pix**2.*s2Pix**2.
    
    beamMask = ((A*xmesh**2.+B*xmesh*ymesh+C*ymesh**2.+D*xmesh+E*ymesh+F) < 0.)
    
    '''
    ax.contour(beamMask,
               levels = [.5],
               colors = 'r',
               linewidths = 2.,
               zorder = 1000)
    '''
    
    SigmaMap.append(np.nanmedian(dict_data[reg]['Sigma'][beamMask]))
    TkinMap.append(np.nanmedian(dict_data[reg]['Tkin'][beamMask]))
    TpeakMap.append(np.nanmedian(dict_data[reg]['Tpeak'][beamMask]))

    

testDerived['SigmaMap'] = SigmaMap
testDerived['TkinMap'] = TkinMap

testDerived['SigmaTMap'] = np.sqrt(c.k_B*testDerived['TkinMap'].values*u.K/mass['average']).to(u.km/u.s).value
testDerived['SigmaNTMap'] = np.sqrt((testDerived['SigmaMap'].values*u.km/u.s)**2.
                                    -c.k_B*testDerived['TkinMap'].values*u.K/mass['NH3']).to(u.km/u.s).value
testDerived['SigmaTotMap'] = np.sqrt(testDerived['SigmaTMap']**2. + testDerived['SigmaNTMap']**2.)

testDerived['TpeakMap'] = TpeakMap


ax.coords[0].set_major_formatter('hh:mm')
ax.coords[0].set_axislabel('R.A. [J2000]')
ax.coords[1].set_major_formatter('dd:mm')
ax.coords[1].set_axislabel('Dec. [J2000]',
                           minpad = -.5)

#direcComparison = '/Users/hopechen/Documents/projects/git_projects/Droplets/tests/comparison_Kirk/'
#plt.savefig(direcComparison+'skyMap.png')
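
[The two panels below compare the catalog values of $\sigma_{NH_3}$ and $T_{kin}$ against the medians re-derived from the GAS maps above; the red line marks equality.]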



In [72]:
rcParams['figure.subplot.wspace'] = .15
rcParams['figure.subplot.bottom'] = .13
rcParams['figure.subplot.top'] = .98

fig, axes = plt.subplots(figsize = (23., 11.),
                         ncols = 2)

ax = axes[0]
ax.plot(testDerived['Sigma'], testDerived['SigmaMap'],
         'k.')

ax.plot([0., 1.], [0., 1.],
         'r-')

ax.set_xlim(.16, .91)
ax.set_ylim(.16, .91)
ax.set_yticks([.2, .4, .6, .8])
ax.set_xlabel('Original $\sigma_{NH_3}$ [km s$^{-1}$]')
ax.set_ylabel('Re-calculated $\sigma_{NH_3}$ [km s$^{-1}$]')


ax = axes[1]
ax.plot(testDerived['Tkin'], testDerived['TkinMap'],
         'k.')

ax.plot([0., 30.], [0., 30.],
         'r-')

ax.set_xlim(7.4, 28.5)
ax.set_ylim(7.4, 28.5)
ax.set_yticks([10., 15., 20., 25.])
ax.set_xlabel('Original $T_{kin}$ [K]')
ax.set_ylabel('Re-calculated $T_{kin}$ [K]')

#direcComparison = '/Users/hopechen/Documents/projects/git_projects/Droplets/tests/comparison_Kirk/'
#plt.savefig(direcComparison+'recalculation.png')


Load the old tables for comparison


In [27]:
direcFinal = '/Users/hopechen/Documents/projects/GAS/projects/GASAng_final/'
d_table = pd.read_csv(direcFinal+'table_full.csv', index_col = 0)  # pd.DataFrame.from_csv is deprecated

direcProject = '/Users/hopechen/Documents/projects/GAS/projects/GASAng/'
d_AG = pd.read_csv(direcProject+'Goodman93.csv', index_col = 0)

## pixscales
d_pixscale = {}
for reg in ['L1688', 'B18']:
    if reg == 'L1688':
        distance = 137.3
    elif reg == 'B18':
        distance = 135.
    
    hdr = dict_data[reg]['header_GAS']
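    # small-angle conversion: a pixel of |CDELT1| deg spans |CDELT1|*3600
    # arcsec, and 1 arcsec at a distance of d pc subtends d AU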
    d_pixscale[reg] = (abs(hdr['CDELT1'])*3600.*distance*u.AU).to(u.pc).value
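
[The cell below reproduces the linewidth-size plot: the droplets from d_table, coded by quality assessment (QA) and protostar content; the Goodman et al. (1993) sample in gray; the Larson linewidth-size relation; the expected total dispersions for sonic and half-sonic non-thermal motions at 10 K; and the catalog cores color-coded by the median $T_{peak}$.]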

In [95]:
###
rcParams['figure.subplot.left'] = .09
rcParams['figure.subplot.right'] = .97
rcParams['figure.subplot.bottom'] = .07
rcParams['figure.subplot.top'] = .93
rcParams['font.size'] = 30
pixscale = d_pixscale['L1688']
###

# prepare
xx, yy = d_table['Radius'].values, d_table['Vdisp_total'].values
xerr, yerr = d_table['Radius_ERR'].values, d_table['Vdisp_total_ERR'].values
## color coded by the ratio of the thermal support to the total kinetic energy.
ccode = d_table['Vdisp_T'].values**2./d_table['Vdisp_total'].values**2.
## AG
#xAG, yAG = (d_AG['radius'].values*u.AU).to(u.pc).value, d_AG['sigmaV']
xAG, yAG = dict_tables['Goodman93_table1']['R'], dict_tables['Goodman93_table1']['SigmaTot']
## limits
xmin, xmax = 5e-3, 1e0
ymin, ymax = 9e-2, 1.1e0
## expected vel dispersion
#NT10K1 = (np.sqrt(c.k_B*10.*u.K/(17.031*u.u)+c.k_B*10.*u.K/(2.37*u.u))).to(u.km/u.s).value
#NT10K2 = (np.sqrt((c.k_B*10.*u.K/(17.031*u.u)+.5**2.*c.k_B*10.*u.K/(2.37*u.u)))).to(u.km/u.s).value
## sig total
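## NT10K1: sigma_tot when the non-thermal component is sonic at 10 K
## (sigma_NT = c_s, so sigma_tot = sqrt(2)*c_s for mu = 2.37);
## NT10K2: sigma_tot when sigma_NT = 0.5*c_s (sigma_tot = sqrt(1.25)*c_s)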
NT10K1 = (np.sqrt(2.*c.k_B*10.*u.K/(2.37*u.u))).to(u.km/u.s).value
NT10K2 = (np.sqrt((.5**2.+1.)*c.k_B*10.*u.K/(2.37*u.u))).to(u.km/u.s).value

Thermal = (np.sqrt(c.k_B*10.*u.K/(2.37*u.u))).to(u.km/u.s).value

fig = plt.figure(figsize = (14., 14.))
ax = fig.gca()

norm_cores = colors.Normalize(.5, 4.)
for QA in set(d_table['QA']):
    for stars in set(d_table['protostars']):
        
        sel = ((d_table['QA'] == QA)&(d_table['protostars'] == stars)).values
        xxP, yyP = xx[sel], yy[sel]
        ccodeP = ccode[sel]
        xerrP, yerrP = xerr[sel], yerr[sel]
        
        
        size = 360.
        if stars == 'inside':
            marker = '*'
            size = 600.
        elif stars == 'nearby':
            marker = '^'
        elif stars == 'no':
            marker = 'o'
            
        if QA == 'good':
            mfcolor = 'k'
            mecolor = 'k'
        elif QA == 'bad':
            mfcolor = 'w'
            mecolor = 'k'
            
        ax.scatter(xxP, yyP,
                   c = mfcolor,
                   marker = marker,
                   s = size,
                   edgecolors = 'k',
                   linewidth = 2.,
                   zorder = 100,
                   alpha = 1.)

        ax.errorbar(xxP, yyP, xerr = xerrP, yerr = yerrP,
                    linestyle = 'none',
                    linewidth = 2.,
                    color = 'k',
                    alpha = .8,
                    zorder = 99)
            
        
        
           

        


# plot
#ax.scatter(xx, yy,
#           c = ccode,
#           cmap = 'RdYlBu_r',
#           norm = colors.Normalize(vmin = .5, vmax = 1.),
#           s = 240.,
#           edgecolors = 'k',
#           linewidth = .5,
#           zorder = 100)

#ax.errorbar(xx, yy, xerr = xerr, yerr = yerr,
#            linestyle = 'none',
#            linewidth = 2.,
#            color = 'k',
#            alpha = .8,
#            zorder = 99)
## AG
ax.plot(xAG, yAG,
        linestyle = 'none',
        marker = '.',
        color = 'gray',
        markersize = 35.,
        alpha = .25,
        zorder = 2)


## beam
ax.fill_between([ax.get_xlim()[0], 3.*pixscale], 2*[ymin], 2*[ymax],
                color = 'gray',
                alpha = .1,
                edgecolor = 'none',
                zorder = 0)


## Larson's linewidth-size relation, sigma = 1.1*(R/pc)^0.38 km/s (Larson 1981)
ax.plot([xmin, xmax], 1.1*np.array([xmin, xmax])**.38,
        color = 'gray',
        linestyle = '-',
        linewidth = 1.,
        zorder = 1)
## the same relation converted to sigma_tot at 10 K: the tracer's thermal
## width (12 u adopted here) removed, the mean particle's added
ax.plot([xmin, xmax],
        np.sqrt((1.1*np.array([xmin, xmax])**.38*u.km/u.s)**2.
                -c.k_B*10.*u.K/(12.*u.u)
                +c.k_B*10.*u.K/mass['average']).to(u.km/u.s).value,
        color = 'gray',
        linestyle = '--',
        linewidth = 1.,
        zorder = 1)

### Plot the expected line widths
ax.plot([xmin, xmax], [NT10K1]*2,
        color = 'k',
        linestyle = '--',
        linewidth = 3.,
        dashes = (20., 10.))
ax.plot([xmin, xmax], [NT10K2]*2,
        color = 'k',
        linestyle = ':',
        linewidth = 3.,
        dashes = (3., 3.))
ax.plot([xmin, xmax], [Thermal]*2,
        color = 'gray',
        linestyle = '-',
        linewidth = 3.)

## Ronan and Helen's work
## (catalog cores color-coded by the median Tpeak; `marker` keeps its
## last value from the loop above)
ax.scatter(testDerived['Reff'], testDerived['SigmaTot'],
           c = TpeakMap,
           cmap = 'plasma',
           marker = marker,
           norm = norm_cores,
           s = 200.,
           edgecolors = 'k',
           linewidth = 2.,
           zorder = 98)

# adjust the plot
ax.set_xlabel('Radius [pc]')
ax.set_ylabel(r'$\sigma_{tot}$ [km s$^{-1}$]',
              labelpad = -10.)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.set_xscale('log')
ax.set_yscale('log')


axtw = ax.twiny()
axtw.set_xlim(*(ax.get_xlim()*u.pc).to(u.AU).value)
axtw.set_xscale('log')
axtw.set_xlabel('Radius [AU]')


direcComparison = '/Users/hopechen/Documents/projects/git_projects/Droplets/tests/comparison_Kirk/'
plt.savefig(direcComparison+'comparison_Tpeak.png')
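
[The same comparison in the mass-size plane: the droplets coded by QA and protostar content, the Goodman et al. (1993) sample in gray, the Larson mass-size relation, and the catalog cores color-coded by the median $T_{peak}$.]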



In [100]:
###
rcParams['figure.subplot.left'] = .09
rcParams['figure.subplot.right'] = .97
rcParams['figure.subplot.bottom'] = .07
rcParams['figure.subplot.top'] = .93
rcParams['font.size'] = 30
pixscale = d_pixscale['L1688']
###

# prepare
xx, yy = d_table['Radius'].values, d_table['Mass'].values
xerr, yerr = d_table['Radius_ERR'].values, d_table['Mass_ERR'].values
## color coded by the ratio of the thermal support to the total kinetic energy.
ccode = d_table['Vdisp_T'].values**2./d_table['Vdisp_total'].values**2.
## AG
#xAG, yAG = (d_AG['radius'].values*u.AU).to(u.pc).value, d_AG['mass'].values
xAG, yAG = dict_tables['Goodman93_table1']['R'], dict_tables['Goodman93_table1']['M']
## limits
xmin, xmax = 5e-3, 1e0
ymin, ymax = 5e-3, 1e3

fig = plt.figure(figsize = (14., 14.))
ax = fig.gca()

# plot (with shapes)
for QA in set(d_table['QA']):
    for stars in set(d_table['protostars']):
        
        sel = ((d_table['QA'] == QA)&(d_table['protostars'] == stars)).values
        xxP, yyP = xx[sel], yy[sel]
        ccodeP = ccode[sel]
        xerrP, yerrP = xerr[sel], yerr[sel]
        
        
        size = 360.
        if stars == 'inside':
            marker = '*'
            size = 600.
        elif stars == 'nearby':
            marker = '^'
        elif stars == 'no':
            marker = 'o'
            
        if QA == 'good':
            mfcolor = 'k'
            mecolor = 'k'
        elif QA == 'bad':
            mfcolor = 'w'
            mecolor = 'k'
            
        ax.scatter(xxP, yyP,
                   c = mfcolor,
                   marker = marker,
                   s = size,
                   edgecolors = 'k',
                   linewidth = 2.,
                   zorder = 100,
                   alpha = 1.)

        ax.errorbar(xxP, yyP, xerr = xerrP, yerr = yerrP,
                    linestyle = 'none',
                    linewidth = 2.,
                    color = 'k',
                    alpha = .8,
                    zorder = 99)
            
            
        



# plot
#ax.errorbar(xx, yy, xerr = xerr, yerr = yerr,
#            linestyle = 'none',
#            linewidth = 2.,
#            marker = '.',
#            markersize = 28.,
#            color = 'k',
#            zorder = 100)

## AG
ax.plot(xAG, yAG,
        linestyle = 'none',
        marker = '.',
        markersize = 35.,
        color = 'gray',
        alpha = .4,
        zorder = 3)

## beam
ax.fill_between([ax.get_xlim()[0], 3.*pixscale], 2*[ymin], 2*[ymax],
                color = 'gray',
                alpha = .1,
                edgecolor = 'none',
                zorder = 0)

## Larson's mass-size relation: combining sigma = 1.1*L^0.38 with
## sigma = 0.42*M^0.20 gives M = (1.1/0.42)^5 * L^1.9
ax.plot([xmin, xmax], (1.1/.42)**5.*np.array([xmin, xmax])**1.9,
        color = 'gray',
        linestyle = '--',
        linewidth = 1.,
        zorder = 1)


## Ronan and Helen's work
ax.scatter(testDerived['Reff'], testTable['MASS'],
           c = TpeakMap,
           cmap = 'plasma',
           marker = marker,
           norm = norm_cores,
           s = 200.,
           edgecolors = 'k',
           linewidth = 2.,
           zorder = 98)


# adjust the plot
ax.set_xlabel('Radius [pc]')
ax.set_ylabel(r'Mass [M$_\odot$]',
              labelpad = -10.)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.set_xscale('log')
ax.set_yscale('log')

axtw = ax.twiny()
axtw.set_xlim(*(ax.get_xlim()*u.pc).to(u.AU).value)
axtw.set_xscale('log')
axtw.set_xlabel('Radius [AU]')


direcComparison = '/Users/hopechen/Documents/projects/git_projects/Droplets/tests/comparison_Kirk/'
plt.savefig(direcComparison+'comparison_SizeMass.png')


Appendix II. Gas and Dust
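
[The two cells below examine the NH$_3$ abundance in L1688: first a histogram of the pixel-by-pixel abundance inside the droplet boundaries compared to all pixels, then a map of the abundance with the droplet boundaries overlaid. Note that ratioMap is defined in the second cell, which was executed first, as the In [39]/In [41] execution counts show.]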


In [41]:
reg = 'L1688'

fig = plt.figure(figsize = (14., 10.))
ax = fig.gca()

ax.hist(ratioMap[np.isfinite(ratioMap)],
        bins = np.logspace(-9., -7.5, 150),
        histtype = 'step',
        color = 'k',
        linewidth = 1.,
        alpha = .4)

maskAll = np.zeros(ratioMap.shape, dtype = bool)

for i, structure in enumerate(list(range(1, 13))):
    
    '''
    ax.hist(ratioMap[np.isfinite(ratioMap)&dict_masks[reg][structure]],
            bins = np.logspace(-9., -7.5, 150),
            histtype = 'step',
            color = ssk_colors[i],
            linewidth = 2.)
    '''
    
    maskAll = maskAll | dict_masks[reg][structure]
    
ax.hist(ratioMap[np.isfinite(ratioMap)&maskAll],
        bins = np.logspace(-9., -7.5, 150),
        color = 'k',
        linewidth = 1.)

ax.set_xscale('log')

ax.set_xlabel('NH$_3$ Abundance [ratio of column density]')
ax.set_ylabel('Number of Pixels')


Out[41]:
<matplotlib.text.Text at 0x16610c890>

In [39]:
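# NH3 abundance map: ratio of the NH3 column density (stored as log10 in
# 'N_NH3') to the total column density map 'colden'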
ratioMap = 10.**dict_data['L1688']['N_NH3']/dict_data['L1688']['colden']
wcs_GAS = wcs.WCS(dict_data['L1688']['header_GAS'])

fig = plt.figure(figsize = (14., 14.*ratioMap.shape[0]/ratioMap.shape[1]))
ax = fig.gca(projection = wcs_GAS)

ax.imshow(ratioMap,
          cmap = 'YlGn_r',
          norm = colors.LogNorm())


for i, structure in enumerate(list(range(1, 13))+['extra']):
    
    ax.contour(dict_masks[reg][structure],
               levels = [.5],
               colors = 'w',
               linewidths = 5.)
    
    ax.contour(dict_masks[reg][structure],
               levels = [.5],
               colors = ssk_colors[i],
               linewidths = 3.)
    

ax.coords[0].set_major_formatter('hh:mm')
ax.coords[0].set_axislabel('R.A. [J2000]')
ax.coords[1].set_major_formatter('dd:mm')
ax.coords[1].set_axislabel('Dec. [J2000]')


