Suicide Comorbidities (using Brown MySQL server)

This script runs a PubMed comorbidities pipeline with the following characteristics:

  • Main MeSH heading: Suicide
  • UMLS filtering concepts: "Disease or Syndrome", "Mental or Behavioral Dysfunction", and "Neoplastic Process"
  • Articles analysed: all MEDLINE 2017AA articles tagged with Suicide as a MeSH heading. Note that this is equivalent to searching PubMed using Suicide[MH:noexp]. Total number of articles found: 33806
  • UMLS concept filtering: comorbidities are analysed on all other MeSH descriptors associated with the specified UMLS concepts
  • This script uses the following Brown MySQL databases:
    • medline
    • umls_meta
    • pubmed_miner

In [2]:
# Optional for running Step 1. However, database concurrency sometimes causes errors.
# If that happens, reduce the number of processes or comment out the line entirely.
# Fewer processes means a longer run. At times I've successfully used 12 workers, at others only 2.
# addprocs(2);

In [3]:
using Revise #used during development to detect changes in module - unknown behavior if using multiple processes
using PubMedMiner

In [4]:
#Settings
const mh = "Suicide"
const concepts = ("Disease or Syndrome", "Mental or Behavioral Dysfunction", "Neoplastic Process");

1. Save filtered occurrences

The following code saves to the pubmed_miner database a table containing the list of PMIDs and MeSH descriptors that match the specified filtering criteria.


In [5]:
overwrite = false
@time save_semantic_occurrences(mh, concepts...; overwrite = overwrite)


INFO: 33806 Articles related to MH:Suicide
INFO: ----------------------------------------
INFO: Start all articles
INFO: Using concept table: MESH_T047
INFO: Using results table: suicide_mesh_t047
INFO: Table exists and will remain unchanged
INFO: Using concept table: MESH_T048
INFO: Using results table: suicide_mesh_t048
INFO: Table exists and will remain unchanged
INFO: Using concept table: MESH_T191
INFO: Using results table: suicide_mesh_t191
INFO: Table exists and will remain unchanged
  1.650369 seconds (1.25 M allocations: 59.220 MiB, 1.50% gc time)

2. Retrieve results and analyze simple occurrences and co-occurrences


In [6]:
using FreqTables

@time occurrence_df = get_semantic_occurrences_df(mh, concepts...)
@time mesh_frequencies = freqtable(occurrence_df, :pmid, :descriptor);

info("Found ", size(occurrence_df, 1), " related descriptors")


INFO: Using concept table: MESH_T047
INFO: Using results table: suicide_mesh_t047
INFO: Using concept table: MESH_T048
INFO: Using results table: suicide_mesh_t048
  1.042235 seconds (1.04 M allocations: 47.162 MiB, 2.15% gc time)
INFO: Using concept table: MESH_T191
INFO: Using results table: suicide_mesh_t191
  1.479638 seconds (1.20 M allocations: 176.893 MiB, 2.79% gc time)
INFO: Found 32017 related descriptors
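The `freqtable` call above is, conceptually, a contingency count over (pmid, descriptor) pairs. A stdlib-only sketch on hypothetical sample data (recent Julia syntax; this illustrates the idea, not FreqTables internals):

```julia
# Toy (pmid, descriptor) pairs standing in for occurrence_df rows (hypothetical data)
occ_pairs = [(1, "Depression"), (1, "Alcoholism"), (2, "Depression"), (3, "Schizophrenia")]

pmids = sort(unique(first.(occ_pairs)))
descs = sort(unique(last.(occ_pairs)))
row = Dict(p => i for (i, p) in enumerate(pmids))
col = Dict(d => j for (j, d) in enumerate(descs))

# PMID x descriptor count matrix, the analogue of mesh_frequencies
counts = zeros(Int, length(pmids), length(descs))
for (p, d) in occ_pairs
    counts[row[p], col[d]] += 1
end

# Column sums give per-descriptor article counts, as used for the bar chart below
desc_totals = vec(sum(counts, dims=1))
```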

In [7]:
using PlotlyJS
using NamedArrays

# Visualize frequency 
topn = 50
mesh_counts = vec(sum(mesh_frequencies, 1))
count_perm = sortperm(mesh_counts, rev=true)
mesh_names = collect(keys(mesh_frequencies.dicts[2]))

#traces
#remove from plot for better scaling
freq_trace = PlotlyJS.bar(; x = mesh_names[count_perm[1:topn]], y= mesh_counts[count_perm[1:topn]], marker_color="orange")

data = [freq_trace]
layout = Layout(;title="$(topn) Most Frequent MeSH Descriptors",
                 showlegend=false,
                 margin= Dict(:t=> 70, :r=> 0, :l=> 50, :b=>200),
                 xaxis_tickangle = 90,)
plot(data, layout)



WARNING: deprecated syntax "abstract Shell".
Use "abstract type Shell end" instead.
Out[7]:

3. Pair Statistics

  • Mutual information
  • Chi-square
  • Co-occurrence matrix
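These statistics all derive from a binary article × term occurrence matrix. A minimal sketch on a toy matrix showing the two core definitions (recent Julia syntax; no claim about BCBIStats internals):

```julia
# Toy 0/1 occurrence matrix X: rows = articles, columns = terms (hypothetical data)
X = [1 1 0;
     1 0 1;
     1 1 1;
     0 1 0]
n = size(X, 1)

# Co-occurrence matrix: C[i,j] = number of articles tagged with both terms i and j
C = X' * X

# Pointwise mutual information: pmi(i,j) = log( p(i,j) / (p(i) * p(j)) )
p = vec(sum(X, dims=1)) ./ n          # marginal term probabilities
pmi(i, j) = log((C[i, j] / n) / (p[i] * p[j]))
```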

In [8]:
using BCBIStats.COOccur
using StatsBase

# Co-occurrence matrix - only for the top MeSH descriptors
# min_frequency = 5 -- alternatively, compute topn based on a minimum frequency
top_occ = mesh_frequencies.array[:, count_perm[1:topn]]
top_mesh_labels = mesh_names[count_perm[1:topn]]
top_occ_sp = sparse(top_occ)
top_coo_sp = top_occ_sp' * top_occ_sp


#Pointwise Mutual Information
pmi_sp = BCBIStats.COOccur.pmi_mat(top_coo_sp)
#chi2
top_chi2= BCBIStats.COOccur.chi2_mat(top_occ, min_freq=0);
#correlation
corrcoef = BCBIStats.COOccur.corrcoef(top_occ);


WARNING: sqrt{T <: Number}(x::AbstractArray{T}) is deprecated, use sqrt.(x) instead.
Stacktrace:
 [1] depwarn(::String, ::Symbol) at ./deprecated.jl:70
 [2] sqrt(::Array{Float64,2}) at ./deprecated.jl:57
 [3] #corrcoef#14(::Int64, ::Function, ::Array{Int64,2}) at /Users/isa/.julia/v0.6/BCBIStats/src/COOccur/COOccur.jl:66
 [4] corrcoef(::Array{Int64,2}) at /Users/isa/.julia/v0.6/BCBIStats/src/COOccur/COOccur.jl:57
 [5] include_string(::String, ::String) at ./loading.jl:515
 [6] include_string(::Module, ::String, ::String) at /Users/isa/.julia/v0.6/Compat/src/Compat.jl:464
 [7] execute_request(::ZMQ.Socket, ::IJulia.Msg) at /Users/isa/.julia/v0.6/IJulia/src/execute_request.jl:154
 [8] eventloop(::ZMQ.Socket) at /Users/isa/.julia/v0.6/IJulia/src/eventloop.jl:8
 [9] (::IJulia.##14#17)() at ./task.jl:335
while loading In[8], in expression starting on line 17

Plot Matrix of Pair Statistics


In [9]:
function plot_stat_mat(stat_mat, labels)
    # Zero out the diagonal so the off-diagonal structure dominates the color scale
    stat_trace = heatmap(x=labels, y=labels, z=full(stat_mat - spdiagm(diag(stat_mat))))

    data = [stat_trace]
    layout = Layout(;
                     showlegend=false,
                     height = 900, width=900,
                     margin= Dict(:t=> 300, :r=> 0, :l=> 200, :b=>0),
                     xaxis_tickangle = 90, xaxis_autotick=false, yaxis_autotick=false,
                     yaxis_autorange = "reversed",
                     xaxis_side = "top", 
                     xaxis_ticks = "", yaxis_ticks = "")
    plot(data, layout)
end


Out[9]:
plot_stat_mat (generic function with 1 method)

Correlation Coefficient


In [10]:
plot_stat_mat(corrcoef, top_mesh_labels)


Out[10]:

Pointwise Mutual Information


In [11]:
plot_stat_mat(pmi_sp, top_mesh_labels)


Out[11]:

Plot Co-Occurrences Graph


In [12]:
using PlotlyJSFactory

p = create_chord_plot(top_coo_sp, labels = top_mesh_labels)
relayout!(p, title="Co-occurrences between top 50 MeSH terms")
JupyterPlot(p)


Out[12]:

Association Rules

  • Compute using apriori algorithm (eclat version)
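The metrics apriori reports can be stated directly: support is P(lhs and rhs), confidence is P(rhs | lhs), and lift is confidence relative to the baseline P(rhs). A toy sketch of the three definitions (hypothetical data; not ARules internals):

```julia
# Toy 0/1 matrix: rows = articles, columns = terms (hypothetical data)
X = Bool[1 1 0;
         1 1 1;
         0 1 1;
         1 0 0;
         0 0 1]
n = size(X, 1)

# Fraction of articles containing all the given term columns
supp(cols...) = count(i -> all(X[i, c] for c in cols), 1:n) / n

# Rule: term 1 => term 2
support    = supp(1, 2)                 # P(lhs and rhs)
confidence = supp(1, 2) / supp(1)       # P(rhs | lhs)
lift       = confidence / supp(2)       # confidence relative to P(rhs)
```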

In [13]:
using ARules
using DataTables


WARNING: Method definition ==(Base.Nullable{S}, Base.Nullable{T}) in module Base at nullable.jl:238 overwritten in module NullableArrays at /Users/isa/.julia/v0.6/NullableArrays/src/operators.jl:99.
WARNING: Method definition append!(NullableArrays.NullableArray{WeakRefStrings.WeakRefString{T}, 1}, NullableArrays.NullableArray{WeakRefStrings.WeakRefString{T}, 1}) in module Data at /Users/isa/.julia/v0.6/DataStreams/src/DataStreams.jl:344 overwritten in module DataTables at /Users/isa/.julia/v0.6/DataTables/src/abstractdatatable/io.jl:318.
WARNING: Method definition describe(AbstractArray{T, N} where N where T) in module StatsBase at /Users/isa/.julia/v0.6/StatsBase/src/scalarstats.jl:559 overwritten in module DataTables at /Users/isa/.julia/v0.6/DataTables/src/abstractdatatable/abstractdatatable.jl:381.

In [14]:
mh_occ = convert(BitArray{2}, mesh_frequencies.array)

# We don't need to remove Suicide (the main heading) because it is not one of the selected UMLS semantic types
# mh_col = mesh_frequencies.dicts[2][mh]
# mh_occ[:, mh_col] = zeros(size(mh_occ,1))

@time suicide_rules = apriori(mh_occ, supp = 0.001, conf = 0.1, maxlen = 9)

# Pretty-print the rules: invert the descriptor lookup (index => name)
suicide_lkup = Dict(zip(values(mesh_frequencies.dicts[2]), keys(mesh_frequencies.dicts[2])))
rules_dt = ARules.rules_to_datatable(suicide_rules, suicide_lkup, join_str = " | ");


WARNING: Compat.AsyncCondition is deprecated, use Base.AsyncCondition instead.
  likely near /Users/isa/.julia/v0.6/IJulia/src/kernel.jl:31
WARNING: Compat.AsyncCondition is deprecated, use Base.AsyncCondition instead.
  likely near /Users/isa/.julia/v0.6/IJulia/src/kernel.jl:31
  0.855651 seconds (513.62 k allocations: 59.484 MiB, 4.06% gc time)

In [15]:
println(head(rules_dt))
println("Found ", size(rules_dt, 1), " rules")


6×5 DataTables.DataTable
│ Row │ lhs                                  │
├─────┼──────────────────────────────────────┤
│ 1   │ {Acne Vulgaris}                      │
│ 2   │ {HIV Infections}                     │
│ 3   │ {Acquired Immunodeficiency Syndrome} │
│ 4   │ {Acquired Immunodeficiency Syndrome} │
│ 5   │ {Acute Disease}                      │
│ 6   │ {Acute Disease}                      │

│ Row │ rhs                                │ supp        │ conf     │ lift     │
├─────┼────────────────────────────────────┼─────────────┼──────────┼──────────┤
│ 1   │ Depression                         │ 0.000996133 │ 0.515152 │ 3.07505  │
│ 2   │ Acquired Immunodeficiency Syndrome │ 0.00158209  │ 0.2      │ 20.4383  │
│ 3   │ HIV Infections                     │ 0.00158209  │ 0.161677 │ 20.4383  │
│ 4   │ Substance-Related Disorders        │ 0.00123052  │ 0.125749 │ 1.28813  │
│ 5   │ Chronic Disease                    │ 0.0014063   │ 0.102564 │ 5.10309  │
│ 6   │ Mental Disorders                   │ 0.00181648  │ 0.132479 │ 0.617221 │
Found 476 rules
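Since lift = confidence / P(rhs), each rule's baseline right-hand-side frequency can be recovered from the table. A sanity check on rule 1 ({Acne Vulgaris} => Depression), using the numbers printed above:

```julia
# Values taken from rule 1 in the output above
conf = 0.515152
lift = 3.07505

# lift = conf / P(rhs), so the implied baseline frequency of Depression is:
p_rhs = conf / lift            # roughly 0.17 of the corpus carries Depression
```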

Frequent Item Sets


In [16]:
supp_int = round(Int, 0.001 * size(mh_occ, 1))
@time root = frequent_item_tree(mh_occ, supp_int, 9);

supp_lkup = gen_support_dict(root, size(mh_occ, 1))
item_lkup = mesh_frequencies.dicts[2]
item_lkup_t = Dict(zip(values(item_lkup), keys(item_lkup)))
freq = ARules.suppdict_to_datatable(supp_lkup, item_lkup_t);


  0.033465 seconds (68.66 k allocations: 34.563 MiB, 24.23% gc time)

In [17]:
println(head(freq))
println("Found ", size(freq, 1), " frequent itemsets")


6×2 DataTables.DataTable
│ Row │ itemset                                                │ supp │
├─────┼────────────────────────────────────────────────────────┼──────┤
│ 1   │ {Alcoholism,Schizophrenia,Substance-Related Disorders} │ 33   │
│ 2   │ {Hypertension}                                         │ 57   │
│ 3   │ {Depression,Substance-Related Disorders}               │ 236  │
│ 4   │ {Adjustment Disorders,Schizophrenia}                   │ 19   │
│ 5   │ {Alcoholism,Depressive Disorder, Major}                │ 41   │
│ 6   │ {Child Abuse,Substance-Related Disorders}              │ 31   │
Found 518 frequent itemsets
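What `frequent_item_tree` computes can be sketched by brute force on a toy matrix: keep every itemset whose absolute support meets the threshold (here only up to pairs, for brevity; no claim about ARules' eclat-style enumeration):

```julia
# Toy 0/1 matrix: rows = articles, columns = terms (hypothetical data)
X = Bool[1 1 0;
         1 1 1;
         0 1 1;
         1 1 0]
min_supp = 2                     # absolute support threshold, like supp_int above
m = size(X, 2)

# Number of articles containing every column in cols
support(cols) = count(i -> all(X[i, c] for c in cols), 1:size(X, 1))

frequent = Dict{Vector{Int}, Int}()
for i in 1:m                               # singletons
    s = support([i])
    s >= min_supp && (frequent[[i]] = s)
end
for i in 1:m, j in i+1:m                   # pairs (depth 2 only, for brevity)
    s = support([i, j])
    s >= min_supp && (frequent[[i, j]] = s)
end
```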

Visualization of Frequent Item Sets

  • Basic visualization of frequent item sets using a Sankey diagram (experimental - use with caution)
  • Future work includes a better layout for more links, as well as the ability to dynamically change the number of itemsets

In [18]:
function fill_sankey_data!(node, sources, targets, vals)
    if length(node.item_ids) >1
        push!(sources, node.item_ids[end-1]-1)
        push!(targets, node.item_ids[end]-1)
        push!(vals, node.supp)
    end
    if has_children(node)     
        for nd in node.children
            fill_sankey_data!(nd,  sources, targets, vals)
        end
    end
end


Out[18]:
fill_sankey_data! (generic function with 1 method)

In [19]:
sources = []
targets = []
vals = []
fill_sankey_data!(root, sources, targets, vals);

In [20]:
# size(sources)
topn_links = 50
freq_vals_perm = sortperm(vals, rev=true)
s = sources[freq_vals_perm[1:topn_links]]
t = targets[freq_vals_perm[1:topn_links]]
v = vals[freq_vals_perm[1:topn_links]]
l = mesh_names

println("Found ", length(sources), " links, showing ", topn_links)


Found 358 links, showing 50

In [21]:
pad = 1e-7
trace=sankey(orientation="h",
             node = attr(domain=attr(x=[0,1], y=[0,1]), pad=pad, thickness=pad, line = attr(color="black", width= 0.5),
                         label=l), 
             link = attr(source=s, target=t, value = v))

layout = Layout(width=900, height=1100)
    

plot([trace], layout)


Out[21]:
