Import packages and set up notebook


In [1]:
from __future__ import absolute_import, division, print_function
import dic
from cycler import cycler
from IPython.display import HTML
from base64 import b64encode
import matplotlib.pyplot as plt
import numpy as np
import os
import warnings

rc_dict = {'font.size': 14, 
           'font.family': 'sans-serif', 
           'font.sans-serif': ['Arial'],
           'axes.titlesize':'large',
           'lines.linewidth': 2,
           'axes.prop_cycle': cycler('color', ('#2196F3', # blue
                                               '#F44336', # red
                                               '#4CAF50', # green
                                               '#9C27B0', # purple
                                               '#FFB300', # amber
                                               '#795548', # brown
                                               '#607D8B'  # blue grey
                                              ))
}

plt.rcParams.update(rc_dict)
%matplotlib notebook

Enter data variables

These variables must be manually entered to match your test parameters.


In [2]:
# path to the .mat files exported by VIC software:
dic_directory = "/home/ryan/Desktop/sample_006_mat/"

# path to the raw image files and their extension:
image_directory = "/home/ryan/Desktop/sample_006_dic/"
image_extension = ".tif"

# CSV file saved by VIC software during analysis:
csv_filename = "/home/ryan/BTSync/dic_load/FCC_2x2x1_sample-006.csv"

# column in the CSV file that contains the load data in volts
csv_load_column = 9

# full-scale load of the MTS card (N) divided by its 10 V output range
newtons_per_volt = 4448 / 10.0

# sample area is needed to calculate stress from load data
lattice_parameter = 30.0
sample_area = (2 * lattice_parameter)**2
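
As a quick sanity check on units (assuming lattice_parameter is in millimetres, so dividing newtons by square millimetres gives stress directly in MPa):

# 4448 N / 10 V = 444.8 N per volt and (2 * 30 mm)^2 = 3600 mm^2,
# so one volt of load signal corresponds to roughly 0.12 MPa of stress.
print(newtons_per_volt / sample_area)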

Load data files

This loads the DIC files (with .mat extension), the image files, and the MTS data file.

I assume the left camera was used for the reference images. If you used the right camera, simply change the reference_camera_filenames variable to right_camera_filenames.

Warnings will be issued if a possible error is detected when loading the data.


In [3]:
dic_filenames = dic.get_filenames(dic_directory, ".mat", prepend_directory=True)
left_camera_filenames, right_camera_filenames = dic.get_image_filenames(image_directory, image_extension, 
                                                                        prepend_directory=True)

# set this to the camera filenames used for the reference during the correlation:
reference_camera_filenames = left_camera_filenames

print("Found {:d} DIC files at {}".format(len(dic_filenames), dic_directory))
print("Found {:d} image files at {}".format(len(reference_camera_filenames), image_directory))
if len(dic_filenames) != len(reference_camera_filenames):
    warnings.warn("The number of camera images must equal the number of DIC files.")
if os.path.exists(csv_filename):
    print("Found CSV file {}".format(csv_filename))
    mts_load = dic.load_csv_data(csv_filename, csv_load_column, scale=newtons_per_volt)
    print("MTS load data imported from CSV file.")
else:
    warnings.warn("Unable to find CSV file: {}".format(csv_filename))


Found 199 DIC files at /home/ryan/Desktop/sample_006_mat/
Found 199 image files at /home/ryan/Desktop/sample_006_dic/
Found CSV file /home/ryan/BTSync/dic_load/FCC_2x2x1_sample-006.csv
MTS load data imported from CSV file.

Execute the cell and use the plot to choose the placement of the extensometers

Clicking on the plot adds a point. Adding two points creates an extensometer. Repeat until the desired number of extensometers has been created.


In [4]:
extensometers = dic.place_extensometers(reference_camera_filenames[0], dic_filenames[0])


Calculate stress-strain

Strain will be calculated from the average strain of the extensometers created above. Stress is simply found by dividing the MTS load data by the sample area. They are calculated together to ensure that the same number of stress and strain data points is produced.


In [5]:
stress, strain = dic.stress_strain(dic_filenames, mts_load, area=sample_area, extensometers=extensometers)


Calculating stress-strain: 100%|██████████| 199/199 [00:03<00:00, 54.23it/s]
Strain calculation completed.
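
Conceptually, each extensometer is a pair of tracked points whose engineering strain is the relative change in the distance between them, and stress is just load over area. A minimal sketch of that idea (this is not the actual dic.stress_strain implementation, and the point-tracking step from the .mat files is omitted):

import numpy as np

def extensometer_strain(p0_ref, p1_ref, p0_cur, p1_cur):
    # Engineering strain: change in gauge length over the initial gauge length.
    length_0 = np.linalg.norm(np.subtract(p1_ref, p0_ref))
    length = np.linalg.norm(np.subtract(p1_cur, p0_cur))
    return (length - length_0) / length_0

def stress_from_load(load_newtons, area_mm2):
    # N / mm^2 gives stress directly in MPa.
    return np.asarray(load_newtons) / area_mm2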

Smooth the stress-strain curve

Often the output from the MTS is noisy. This will smooth out some of the measurement noise. Increase window_len for greater smoothing.


In [6]:
smoothed_stress = dic.smooth(stress, window_len=10)

axes_options = {
    "xlabel": "Compressive strain, $\\epsilon_{1}$", 
    "ylabel": "Compressive stress, $\\sigma_{1}$ (MPa)",
    "xlim"  : (0, 0.06),
    "xticks": np.arange(0, 0.08, 0.02),
    "ylim"  : (0, 0.6),
    "yticks": np.arange(0, 0.8, 0.2)
}

fig, ax = dic.plot_xy(-strain, -stress, axes_options=axes_options, plot_options={"label": "Raw data"})
dic.plot_xy(-strain, -smoothed_stress, ax=ax, plot_options={"label": "Smoothed"})
plt.legend()
plt.show()
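
dic.smooth is used as a black box here; if you need a similar effect without the dic package, a plain moving-average filter works (a sketch only, assuming a boxcar window; the actual edge handling in dic.smooth may differ):

import numpy as np

def moving_average(y, window_len=10):
    # Pad the ends by reflection so the smoothed curve keeps its length,
    # then convolve with a flat (boxcar) window of width window_len.
    window = np.ones(window_len) / window_len
    padded = np.pad(np.asarray(y, dtype=float), window_len, mode="reflect")
    return np.convolve(padded, window, mode="same")[window_len:-window_len]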


Prepare frame creator

A frame creator is an object that, when called with a frame number i, returns a matplotlib.figure.Figure corresponding to that frame. The frame creator's __len__ method must return the number of frames the creator intends to create (usually the number of DIC files/images). Those are the only two requirements when creating a custom frame creator. Often a frame creator will need to be written for your specific use case. I've created one that will plot a contour overlay and an (x, y) plot.
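
For illustration, a bare-bones frame creator that satisfies the protocol could look like the sketch below (unrelated to the OverlayWithStressStrainFrameCreator used in the next cell):

import matplotlib.pyplot as plt

class SimpleFrameCreator(object):
    """Plots one (x, y) point per frame; illustrates the required interface."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        # Number of frames this creator can produce.
        return len(self.x)

    def __call__(self, i):
        # Return a matplotlib Figure for frame i.
        fig, ax = plt.subplots()
        ax.plot(self.x[:i + 1], self.y[:i + 1])
        ax.plot(self.x[i], self.y[i], "o")
        return fig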


In [7]:
xy_axes_options = {
    "xlabel": "Compressive strain, $\\epsilon_{1}$", 
    "ylabel": "Compressive stress, $\\sigma_{1}$ (MPa)",
    "xlim"  : (0, 0.06),
    "xticks": np.arange(0, 0.08, 0.02),
    "ylim"  : (0, 0.9),
    "yticks": np.arange(0, 1.2, 0.3)
}

xy_plot_options = {
    "color" : (0., 0., 0., 0.7),
    "linewidth": 1.5
}

point_plot_options = {
    "markersize": 6,
    "color": plt.get_cmap("viridis")(0.5)
}

vmin = -0.04
vmax = -vmin
step = 0.02
levels = 32

overlay_contour_options = {
    "levels": np.linspace(vmin, vmax, levels)
}

colorbar_options = {
    "title": "$\\epsilon_{\\mathsf{min}}$",
    "ticks": np.arange(vmin, vmax+step, step)
}

plt.close("all")
frame_creator = dic.OverlayWithStressStrainFrameCreator(reference_camera_filenames, dic_filenames, "e2", 
                    (-strain, -smoothed_stress), figure_width=11, fractional_padding=0.5,
                    overlay_contour_options=overlay_contour_options, 
                    xy_axes_options=xy_axes_options, xy_plot_options=xy_plot_options, 
                    colorbar_options=colorbar_options, point_plot_options=point_plot_options)
fig = frame_creator(90)
plt.show()


Export video frames

This will save .jpg files to the specified output_directory using the given options. Notice that the frame creator from above is given as the first argument.


In [8]:
savefig_options = {
    "dpi" : 300,
    "bbox_inches": "tight"
}

dic.export_frames(frame_creator, output_directory="example/image_sequence/", 
                  output_filename="frame.jpg", savefig_options=savefig_options)


Exporting frames: 100%|██████████| 199/199 [04:13<00:00,  1.30s/it]
Frames successfully exported.
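
Under the hood this is presumably a loop over the frame creator; something like the sketch below would produce the same kind of output (a hypothetical stand-in, not the actual dic.export_frames code, and the real file-naming convention may differ):

import os
import matplotlib.pyplot as plt

def export_frames_sketch(frame_creator, output_directory, output_filename, savefig_options):
    base, ext = os.path.splitext(output_filename)
    if not os.path.isdir(output_directory):
        os.makedirs(output_directory)
    for i in range(len(frame_creator)):
        fig = frame_creator(i)                      # build the Figure for frame i
        frame_path = os.path.join(output_directory, "{}_{:03d}{}".format(base, i, ext))
        fig.savefig(frame_path, **savefig_options)  # e.g. frame_000.jpg, frame_001.jpg, ...
        plt.close(fig)                              # free memory between frames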

Convert frames into a video

Convert the image sequence into a video. This requires FFmpeg to be installed, and the directory containing the executable must be on your PATH.
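
If you prefer to call FFmpeg yourself, the conversion is roughly equivalent to the command sketched below (assumed flags; the exact arguments built by dic.image_sequence_to_video may differ):

import subprocess

subprocess.call([
    "ffmpeg",
    "-i", "example/image_sequence/frame_%3d.jpg",  # input image sequence
    "-c:v", "libx264",                             # H.264 encoding
    "-crf", "23",                                  # quality (lower = better, larger file)
    "-vf", "scale=720:-2",                         # width 720 px, even height
    "-pix_fmt", "yuv420p",                         # widely compatible pixel format
    "example/movie.mp4",
])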


In [9]:
input_template = "example/image_sequence/frame_%3d.jpg"
output_filename = "example/movie.mp4"
dic.image_sequence_to_video(input_template, output_filename, crf=23, scale=(720, -1))


Converting images into video...
ffmpeg version 3.1.3 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 6.1.1 (GCC) 20160802
  configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avisynth --enable-avresample --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libass --enable-libbluray --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-netcdf --enable-shared --enable-version3 --enable-x11grab
  libavutil      55. 28.100 / 55. 28.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 41.100 / 57. 41.100
  libavdevice    57.  0.101 / 57.  0.101
  libavfilter     6. 47.100 /  6. 47.100
  libavresample   3.  0.  0 /  3.  0.  0
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
  libpostproc    54.  0.100 / 54.  0.100
Input #0, image2, from 'example/image_sequence/frame_%3d.jpg':
  Duration: 00:00:07.96, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 2788x1081 [SAR 1:1 DAR 2788:1081], 25 fps, 25 tbr, 25 tbn, 25 tbc
[swscaler @ 0x558bce5a8740] deprecated pixel format used, make sure you did set range correctly
[libx264 @ 0x558bce599fe0] using SAR=4047/4064
[libx264 @ 0x558bce599fe0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0x558bce599fe0] profile High, level 2.2
[libx264 @ 0x558bce599fe0] 264 - core 148 r2699 a5e06b9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[mp4 @ 0x558bce599340] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, mp4, to 'example/movie.mp4':
  Metadata:
    encoder         : Lavf57.41.100
    Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 720x278 [SAR 96883:97290 DAR 2788:1081], q=-1--1, 25 fps, 12800 tbn, 25 tbc
    Metadata:
      encoder         : Lavc57.48.101 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
frame=  199 fps= 90 q=-1.0 Lsize=     140kB time=00:00:07.84 bitrate= 145.9kbits/s speed=3.56x    
video:136kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.322668%
[libx264 @ 0x558bce599fe0] frame I:1     Avg QP:21.24  size: 18362
[libx264 @ 0x558bce599fe0] frame P:50    Avg QP:21.88  size:  2129
[libx264 @ 0x558bce599fe0] frame B:148   Avg QP:19.93  size:    96
[libx264 @ 0x558bce599fe0] consecutive B-frames:  0.5%  1.0%  0.0% 98.5%
[libx264 @ 0x558bce599fe0] mb I  I16..4: 16.7% 50.0% 33.3%
[libx264 @ 0x558bce599fe0] mb P  I16..4:  0.1%  0.1%  0.0%  P16..4: 10.0%  5.4%  4.6%  0.0%  0.0%    skip:79.8%
[libx264 @ 0x558bce599fe0] mb B  I16..4:  0.0%  0.1%  0.0%  B16..8:  5.5%  0.2%  0.0%  direct: 0.1%  skip:94.1%  L0:31.9% L1:65.8% BI: 2.3%
[libx264 @ 0x558bce599fe0] 8x8 transform intra:52.4% inter:40.3%
[libx264 @ 0x558bce599fe0] coded y,uvDC,uvAC intra: 29.1% 17.7% 17.2% inter: 2.6% 3.2% 2.6%
[libx264 @ 0x558bce599fe0] i16 v,h,dc,p: 65% 23% 11%  1%
[libx264 @ 0x558bce599fe0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 34% 18% 40%  2%  2%  1%  1%  1%  1%
[libx264 @ 0x558bce599fe0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 27% 14%  7%  8%  4%  3%  4%  5%
[libx264 @ 0x558bce599fe0] i8c dc,h,v,p: 84%  7%  8%  1%
[libx264 @ 0x558bce599fe0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x558bce599fe0] ref P L0: 57.7% 23.1% 14.1%  5.2%
[libx264 @ 0x558bce599fe0] ref B L0: 69.6% 26.4%  4.0%
[libx264 @ 0x558bce599fe0] ref B L1: 96.6%  3.4%
[libx264 @ 0x558bce599fe0] kb/s:139.72

Display the output


In [11]:
with open(output_filename, "rb") as video_file:
    video = video_file.read()
video_encoded = b64encode(video)
video_tag = \
"""
<video alt="example video" controls>
    <source src="data:video/mp4;base64,{0}" type="video/mp4"/>
</video>
""".format(video_encoded.decode('ascii'))
HTML(data=video_tag)


Out[11]: