2020-04-05 17:45:31,476 INFO tune.py:60 -- Tip: to resume incomplete experiments, pass resume='prompt' or resume=True to run()
2020-04-05 17:45:31,477 INFO tune.py:223 -- Starting a new experiment.
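The tip logged above can be applied like this (a hedged sketch, not part of this run: `train_func` is a placeholder name for the experiment's actual trainable, and only the keyword arguments are taken from the log):

```python
# Passing resume=True (or resume="prompt") to tune.run() makes Ray Tune pick
# up incomplete trials from the experiment logdir instead of starting fresh.
run_kwargs = dict(
    name="automl",               # matches "Result logdir: ~/ray_results/automl"
    local_dir="~/ray_results",   # parent directory of the experiment logdir
    resume=True,                 # resume incomplete trials, per the tip above
)

# With Ray installed, the call would look like:
# from ray import tune
# tune.run(train_func, **run_kwargs)
```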
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 2.2/16.6 GB
WARNING:tensorflow:From /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/ray/tune/logger.py:127: The name tf.VERSION is deprecated. Please use tf.version.VERSION instead.
WARNING:tensorflow:From /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/ray/tune/logger.py:132: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 2/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 2.2/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 1, 'PENDING': 2})
PENDING trials:
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: PENDING
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING
(pid=9119) Prepending /home/shane/.local/lib/python3.6/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
(pid=9119) Prepending /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
(pid=9118) Prepending /home/shane/.local/lib/python3.6/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
(pid=9118) Prepending /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
(pid=9118) /home/shane/.local/lib/python3.6/site-packages/bigdl/util/engine.py:41: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, please use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9118) warnings.warn(warning_msg)
(pid=9118) /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/util/engine.py:42: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, you are recommended to use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9118) warnings.warn(warning_msg)
(pid=9183) Prepending /home/shane/.local/lib/python3.6/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
(pid=9183) Prepending /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
(pid=9119) /home/shane/.local/lib/python3.6/site-packages/bigdl/util/engine.py:41: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, please use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9119) warnings.warn(warning_msg)
(pid=9119) /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/util/engine.py:42: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, you are recommended to use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9119) warnings.warn(warning_msg)
(pid=9184) Prepending /home/shane/.local/lib/python3.6/site-packages/bigdl/share/conf/spark-bigdl.conf to sys.path
(pid=9184) Prepending /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/share/conf/spark-analytics-zoo.conf to sys.path
(pid=9183) /home/shane/.local/lib/python3.6/site-packages/bigdl/util/engine.py:41: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, please use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9183) warnings.warn(warning_msg)
(pid=9183) /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/util/engine.py:42: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, you are recommended to use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9183) warnings.warn(warning_msg)
(pid=9184) /home/shane/.local/lib/python3.6/site-packages/bigdl/util/engine.py:41: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, please use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9184) warnings.warn(warning_msg)
(pid=9184) /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/zoo/util/engine.py:42: UserWarning: Find both SPARK_HOME and pyspark. You may need to check whether they match with each other. SPARK_HOME environment variable is set to: /home/shane/shane/spark, and pyspark is found in: /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/pyspark/__init__.py. If they are unmatched, you are recommended to use one source only to avoid conflict. For example, you can unset SPARK_HOME and use pyspark only.
(pid=9184) warnings.warn(warning_msg)
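The repeated UserWarning above flags a SPARK_HOME/pyspark mismatch. A minimal way to follow the warning's own suggestion (an assumption here: the conda environment's pip-installed pyspark is the intended single Spark source for this run) is to drop the environment variable before launching:

```python
import os

# Follow the warning's suggestion: unset SPARK_HOME so only the pip-installed
# pyspark in the active environment is used, avoiding the version conflict.
os.environ.pop("SPARK_HOME", None)
assert "SPARK_HOME" not in os.environ
```

The same effect can be had in the launching shell with `unset SPARK_HOME`.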
(pid=9184) 2020-04-05 17:45:32,982 WARNING worker.py:204 -- Calling ray.get or ray.wait in a separate thread may lead to deadlock if the main thread blocks on this thread and there are not enough resources to execute more tasks
(pid=9184) LSTM is selected.
(pid=9119) LSTM is selected.
(pid=9119) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
(pid=9119) Instructions for updating:
(pid=9119) If using Keras pass *_constraint arguments to layers.
(pid=9184) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
(pid=9184) Instructions for updating:
(pid=9184) If using Keras pass *_constraint arguments to layers.
(pid=9184) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
(pid=9184) Instructions for updating:
(pid=9184) Use tf.where in 2.0, which has the same broadcast rule as np.where
(pid=9119) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
(pid=9119) Instructions for updating:
(pid=9119) Use tf.where in 2.0, which has the same broadcast rule as np.where
(pid=9184) 2020-04-05 17:45:34.923735: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/shane/torch/install/lib:/home/shane/torch/install/lib:/home/shane/torch/install/lib:
(pid=9184) 2020-04-05 17:45:34.923761: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
(pid=9184) 2020-04-05 17:45:34.923778: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (shane-workstation): /proc/driver/nvidia/version does not exist
(pid=9184) 2020-04-05 17:45:34.923945: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=9119) 2020-04-05 17:45:34.948973: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/shane/torch/install/lib:/home/shane/torch/install/lib:/home/shane/torch/install/lib:
(pid=9119) 2020-04-05 17:45:34.948990: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
(pid=9119) 2020-04-05 17:45:34.949005: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (shane-workstation): /proc/driver/nvidia/version does not exist
(pid=9119) 2020-04-05 17:45:34.949176: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=9119) 2020-04-05 17:45:34.953666: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3591790000 Hz
(pid=9119) 2020-04-05 17:45:34.953915: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7efe5526d160 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
(pid=9119) 2020-04-05 17:45:34.953928: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
(pid=9184) 2020-04-05 17:45:34.947009: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3591790000 Hz
(pid=9184) 2020-04-05 17:45:34.947372: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f0b612f2950 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
(pid=9184) 2020-04-05 17:45:34.947394: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
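The CUDA lines above are expected on a CPU-only host: TensorFlow probes for libcuda.so.1 and the NVIDIA kernel driver, finds neither, and falls back to CPU (hence the XLA "Host" device). This sketch reproduces the same driver check the log reports, so the errors can be confirmed benign:

```python
import os

# Check the same indicator TensorFlow reported: the NVIDIA kernel driver's
# /proc entry. Its absence means the cuInit failure above is harmless and
# the run simply proceeds on CPU.
if os.path.exists("/proc/driver/nvidia/version"):
    print("NVIDIA kernel driver present")
else:
    print("no NVIDIA kernel driver; CPU-only run")
```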
WARNING:tensorflow:From /home/shane/anaconda3/envs/automl/lib/python3.6/site-packages/ray/tune/logger.py:110: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 13 s, 1 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 23 s, 2 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 15 s, 1 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 33 s, 3 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 27 s, 2 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 33 s, 3 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 40 s, 3 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 43 s, 4 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 52 s, 4 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 64 s, 6 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 52 s, 4 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 74 s, 7 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 64 s, 5 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 84 s, 8 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 76 s, 6 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 94 s, 9 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 89 s, 7 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'RUNNING': 2, 'PENDING': 1})
PENDING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: PENDING
RUNNING trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9119], 94 s, 9 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 101 s, 8 iter
(pid=9118) LSTM is selected.
(pid=9118) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
(pid=9118) Instructions for updating:
(pid=9118) If using Keras pass *_constraint arguments to layers.
(pid=9118) WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
(pid=9118) Instructions for updating:
(pid=9118) Use tf.where in 2.0, which has the same broadcast rule as np.where
(pid=9118) 2020-04-05 17:47:22.771295: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/shane/torch/install/lib:/home/shane/torch/install/lib:/home/shane/torch/install/lib:
(pid=9118) 2020-04-05 17:47:22.771348: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
(pid=9118) 2020-04-05 17:47:22.771372: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (shane-workstation): /proc/driver/nvidia/version does not exist
(pid=9118) 2020-04-05 17:47:22.771796: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=9118) 2020-04-05 17:47:22.799066: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3591790000 Hz
(pid=9118) 2020-04-05 17:47:22.799519: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f88bd0f39a0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
(pid=9118) 2020-04-05 17:47:22.799543: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 1, 'RUNNING': 2})
RUNNING trials:
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 112 s, 9 iter
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 1, 'RUNNING': 2})
RUNNING trials:
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 112 s, 9 iter
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9118], 12 s, 1 iter
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 1, 'RUNNING': 2})
RUNNING trials:
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9184], 112 s, 9 iter
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9118], 18 s, 2 iter
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 2/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 2, 'RUNNING': 1})
RUNNING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9118], 24 s, 4 iter
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9184], 124 s, 10 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 2/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 2, 'RUNNING': 1})
RUNNING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9118], 30 s, 6 iter
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9184], 124 s, 10 iter
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 2/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 2, 'RUNNING': 1})
RUNNING trials:
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: RUNNING, [2 CPUs, 0 GPUs], [pid=9118], 36 s, 8 iter
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9184], 124 s, 10 iter
2020-04-05 17:48:02,052 INFO ray_trial_executor.py:180 -- Destroying actor for trial train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
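The hint Tune prints above points at a real knob: each trial here runs in a fresh actor, and actor startup (including the Spark/BigDL initialization visible earlier in the log) can dominate short trials like these 10-iteration runs. A minimal sketch of the arguments such a run could pass, with actor reuse enabled; `train_func` is a placeholder for the AutoML trainable, not the library's internal code, and the values are copied from this run's status lines:

```python
# Hypothetical kwargs for ray.tune.run, mirroring this run's settings.
tune_kwargs = {
    "resources_per_trial": {"cpu": 2, "gpu": 0},  # matches "[2 CPUs, 0 GPUs]" per trial above
    "num_samples": 3,                             # three trials, as in this log
    "reuse_actors": True,                         # skip per-trial actor creation overhead
}

# With Ray installed this would be invoked roughly as:
#   from ray import tune
#   tune.run(train_func, **tune_kwargs)
```

Whether `reuse_actors=True` is safe depends on the trainable resetting its state cleanly between configs, which is an assumption here, not something this log confirms.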
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/4 CPUs, 0/0 GPUs (0/1.0 ps, 0/1.0 trainer)
Memory usage on this node: 3.0/16.6 GB
Result logdir: /home/shane/ray_results/automl
Number of trials: 3 ({'TERMINATED': 3})
TERMINATED trials:
- train_func_0_batch_size=64,dropout_2=0.39015,lr=0.0055474,lstm_1_units=16,lstm_2_units=16,past_seq_len=73,selected_features=['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9119], 104 s, 10 iter
- train_func_1_batch_size=64,dropout_2=0.41106,lr=0.0094198,lstm_1_units=128,lstm_2_units=32,past_seq_len=74,selected_features=['WEEKDAY(datetime)' 'IS_WEEKEND(datetime)' 'MONTH(datetime)'
'IS_AWAKE(datetime)' 'HOUR(datetime)' 'DAY(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9184], 124 s, 10 iter
- train_func_2_batch_size=64,dropout_2=0.25682,lr=0.009007,lstm_1_units=16,lstm_2_units=64,past_seq_len=40,selected_features=['WEEKDAY(datetime)' 'DAY(datetime)' 'IS_AWAKE(datetime)'
'MONTH(datetime)']: TERMINATED, [2 CPUs, 0 GPUs], [pid=9118], 42 s, 10 iter
The best configurations are:
selected_features : ['DAY(datetime)' 'HOUR(datetime)' 'IS_WEEKEND(datetime)'
'IS_BUSY_HOURS(datetime)' 'WEEKDAY(datetime)']
model : LSTM
lstm_1_units : 16
dropout_1 : 0.2
lstm_2_units : 16
dropout_2 : 0.39014785774361804
lr : 0.005547442119849576
batch_size : 64
epochs : 1
past_seq_len : 73
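The winning trial (`train_func_0`) can be captured as a plain dict for later retraining. A sketch with the values copied from the summary above; the dict shape is illustrative, not the library's exact return type, and the "+1 for the target column" in the window-shape calculation is an assumption about the pipeline, not something stated in the log:

```python
# Best configuration found by the search, transcribed from the log output.
best_config = {
    "selected_features": ["DAY(datetime)", "HOUR(datetime)", "IS_WEEKEND(datetime)",
                          "IS_BUSY_HOURS(datetime)", "WEEKDAY(datetime)"],
    "model": "LSTM",
    "lstm_1_units": 16,
    "dropout_1": 0.2,
    "lstm_2_units": 16,
    "dropout_2": 0.39014785774361804,
    "lr": 0.005547442119849576,
    "batch_size": 64,
    "epochs": 1,
    "past_seq_len": 73,
}

# If the target series is appended to the selected features (an assumption),
# each training window feeding the first LSTM layer would have shape
# (past_seq_len, n_features + 1) = (73, 6).
window_shape = (best_config["past_seq_len"],
                len(best_config["selected_features"]) + 1)
```

Note that `past_seq_len` was itself searched (73 vs. 74 vs. 40 across the three trials), so the input shape differs per trial, not just the layer sizes.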
WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/init_ops.py:97: calling GlorotUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/init_ops.py:97: calling Orthogonal.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/init_ops.py:97: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use tf.config.threading APIs.
WARNING:tensorflow:From /home/shane/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
CPU times: user 5.36 s, sys: 775 ms, total: 6.13 s
Wall time: 2min 32s