\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[a4paper,left=2.9cm,right=2cm,top=2cm,bottom=2.25cm]{geometry}

\title{Response to reviewer's comments}
\date{}
\usepackage{setspace}
\usepackage{amssymb}

\usepackage{subcaption}
\usepackage{float}
\usepackage{xcolor}
\captionsetup{compatibility=false}
\usepackage{url}
\usepackage{tabularx}
\usepackage{lineno}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{natbib}
\begin{document}
\onehalfspacing

\maketitle

\textbf{Manuscript Title:} {Predicting Total Sediment Load Transport in Rivers using Regression Techniques, Extreme Learning and Deep Learning Models} \\

\textbf{Dear editors and reviewers:}

We appreciate your valuable comments and suggestions; they were very
helpful in improving the quality of our work. We have revised our paper
carefully according to the comments and suggestions provided to us.
A summary of the changes is as follows: \\

\textbf{Response to Reviewer 2:}

\textbf{General Comment:} The paper focuses on the application of various machine learning models to predict total sediment load transport in rivers. The manuscript is clearly written. As far as I am concerned, the methods are not novel in themselves but have been culled from a few existing papers. I think that the authors would do well to introduce the specific novelty in this paper. My comments are as follows:\\

\textbf{Q2.1) The methods seem sound and well-explained. As far as I am concerned, the methods are not novel in themselves but have been culled from a few existing papers. I think the authors would do well to make their debt to these papers clearer (as well as the specific novelty introduced in this paper).}

\textbf{Answer:} We thank the reviewer for reviewing our manuscript and
providing valuable comments for improving it. We first describe the
research gap that we aim to address, which in turn helps to
ascertain the novelty of this research.

\textbf{Research gap}: The following research gaps exist in the
current studies with reference to total sediment load prediction.

1) \textbf{Usage of limited data or a specific environment for predicting total
sediment load:} A major limitation of the majority of existing
studies on total sediment load prediction is that most of them have focused on
utilizing ML algorithms to develop predictive models for only one hydrological
station or river, or used a series of data collected from experiments performed
on a laboratory flume. As the magnitude and behavior of the total sediment load
differ for each river, the suitability of certain ML algorithms for the task of
total sediment load prediction may vary depending on the river under
consideration. Some ML algorithms may perform well and produce a good
prediction of total sediment load for a hydrological station at a particular
river, but may not perform well in predicting total sediment load for a
different river, due to variance in anthropogenic and natural factors. Further,
most of the studies concentrate either on the bed load or the
suspended sediment load. So, models that are tuned to perform well on
suspended sediment load might under-perform when applied to predict bed load,
and vice versa.

2) \textbf{Empirical Equations:} Current studies published on
total sediment load transport often employ empirical methods derived from
individual laboratory experiments or from specific rivers under a particular
environment. Depending on the data collection conditions, the same empirical
formula may yield completely different results when the underlying environment
changes. As a result, it is challenging for a researcher to select an
appropriate formula for a given river
\citep{vanoni1975river, yang1996sediment}.

\begin{table}[!h]
\caption{The individual datasets used in this study, all compiled by
Brownlie \citep{brownlie1981compilation}. The motivation of our research is
to develop a generic predictive total sediment load model built on datasets
from both flume and field studies. The undertaken study has 11 flume
experiments and 6 field experiments, demonstrating a good mix of flume and
field data.}
\label{tab:brownlie-studies}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
\textbf{Sl} & \textbf{Author/Agency} & \textbf{Type} &
\textbf{River Body / Flume} & \textbf{Citation/Agency} \\
\midrule
1 & U. S. Bureau of Reclamation & Field Data &
Colorado River, US & U. S. Bureau of Reclamation \\ \midrule
2 & Einstein, H. A. & Field Data & Mountain Creek &
\cite{einstein1944bed} \\ \midrule
3 & Mahmood & Field Data & ACOP Canal &
\cite{mahmood1979selected} \\ \midrule
4 & Milhous, R. T. & Field Data & Oak Creek &
\cite{milhous1973sediment} \\ \midrule
5 & Nordin and Beverage & Field Data & Rio Grande River &
\cite{nordin1965sediment} \\ \midrule
6 & Simons & Field Data & American Canal &
\cite{simons1957theory} \\ \midrule
7 & Guy et al. & Lab Data & CSU Data (Experimental) &
\cite{guy1966summary} \\ \midrule
8 & Einstein and Chien & Lab Data & Experimental Flume &
\cite{einstein1955effect} \\ \midrule
9 & Gilbert and Murphy & Lab Data & Experimental Flume &
\cite{gilbert1914transportation} \\ \midrule
10 & Meyer-Peter, E., and Muller, R. & Lab Data & Experimental Flume &
\cite{meyer1948formulas} \\ \midrule
11 & Paintal, A. S. & Lab Data & Experimental Flume &
\cite{paintal1971concept} \\ \midrule
12 & Satoh, S., et al. & Lab Data & Experimental Flume &
\cite{satoh1958research} \\ \midrule
13 & Soni, J. P. & Lab Data & Experimental Flume &
\cite{soni1980short} \\ \midrule
14 & Straub, L. G. & Lab Data & Experimental Flume &
\cite{straub1954terminal}, \cite{straub1958experiments} \\ \midrule
15 & Taylor, B. D. & Lab Data & Experimental Flume &
\cite{taylor1971temperature} \\ \midrule
16 & USWES & Lab Data & Experimental Flume &
\begin{tabular}[c]{@{}l@{}}U.S. Army Corps of Engineers\\ Waterways
Experiment Station\end{tabular} \\ \midrule
17 & Vanoni and Hwang & Lab Data & Experimental Flume &
\cite{vanoui1967relation} \\ \bottomrule
\end{tabular}
}
\end{table}

3) \textbf{Effect of combinations of different characteristics affecting total
sediment load:} Most of the existing studies consider all available variables
for predicting the total sediment load. Total sediment load transport depends
on the following characteristics, i.e., \textit{Sediment}, \textit{Geometry},
and \textit{Dynamic}. However, it is important to ascertain the effect of each
of these characteristics individually, as well as their combinations, on the
prediction of total sediment load. This matters because many studies may
contain measurements from only one of the \textit{Sediment}, \textit{Geometry},
and \textit{Dynamic} characteristics, or a combination of them.

This creates a noteworthy research gap for our study, wherein lies the
\textbf{novelty} of this research work. We analyze in depth whether
there is a model or algorithm capable of producing accurate total
sediment load predictions for a dataset comprising multiple different rivers
and laboratory flume data. The present study contributes towards addressing
this research gap through the development of predictive models for total
sediment load based on the dataset compiled by Brownlie
\citep{brownlie1981compilation}. Brownlie's dataset comprises observations
for both laboratory and field conditions. In our study, we have used 17 unique
datasets, comprising 11 datasets of lab data and 6 datasets of field data
(see Table \ref{tab:brownlie-studies}). Moreover, Brownlie's dataset comprises
both bed load as well as suspended sediment load data. So, the eventual model
tuned to Brownlie's dataset can be considered more robust, as it covers
multiple different rivers, laboratory flume data, and measurements pertaining
to bed load as well as suspended sediment load.

The usage of such a comprehensive dataset, consisting of data from
heterogeneous sources, allows ML and DL methods to produce a robust
model that can be used for the prediction of total sediment load. Thus, the
usage of Brownlie's dataset helps to overcome the research gaps noted in
points 1 and 2 above. For addressing the research gap highlighted in point 3,
we consider various combinations of the characteristics, i.e.,
\textit{Sediment}, \textit{Geometry}, and \textit{Dynamic}, that might affect
the total sediment load transport. To the best of our knowledge, we are the
first to analyze the impact of the various combinations of these
characteristics on total sediment load transport. The combinations are as
follows:
\begin{enumerate}
\item $C = f(Sediment) = f(d_{50}, C_g, G_s)$
\item $C = f(Geometry) = f(y, BF)$
\item $C = f(Dynamic) = f(Q, \tau_b, Sf)$
\item $C = f(Sediment, Geometry) = f(d_{50}, C_g, G_s, y, BF)$
\item $C = f(Geometry, Dynamic) = f(y, BF, Q, \tau_b, Sf)$
\item $C = f(Sediment, Dynamic) = f(d_{50}, C_g, G_s, Q, \tau_b, Sf)$
\item $C = f(Sediment, Geometry, Dynamic) = f(d_{50}, C_g, G_s, y, BF, Q, \tau_b, Sf)$
\end{enumerate}

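For illustration, the seven input combinations above can be expressed as feature subsets; a minimal sketch, where the short names are ASCII placeholders for the manuscript's variables:

```python
# Hypothetical feature-subset encoding of the seven combinations evaluated
# in the study (names are placeholders for d_50, C_g, G_s, y, BF, Q, tau_b, Sf).
SEDIMENT = ["d50", "Cg", "Gs"]   # sediment characteristics
GEOMETRY = ["y", "BF"]           # geometry characteristics
DYNAMIC = ["Q", "tau_b", "Sf"]   # dynamic characteristics

# The seven combinations, numbered as in the enumerate list above.
combinations = {
    1: SEDIMENT,
    2: GEOMETRY,
    3: DYNAMIC,
    4: SEDIMENT + GEOMETRY,
    5: GEOMETRY + DYNAMIC,
    6: SEDIMENT + DYNAMIC,
    7: SEDIMENT + GEOMETRY + DYNAMIC,
}
```

Each subset would then be used to select the input columns of the training data before fitting a model.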
The specific novelties introduced in this work are:
\begin{enumerate}
\item Usage of Brownlie's dataset, which comprises multiple different rivers,
laboratory flume data, and measurements pertaining to bed load as well as
suspended sediment load, helping us to develop a more robust prediction model.
\item We consider various combinations of the \textit{Sediment},
\textit{Geometry}, and \textit{Dynamic} characteristics and analyze their
impact on total sediment load transport prediction.
\item We compare and contrast deep learning and ML models. We have compared
our proposed DNN model with extreme learning machine (ELM), support vector
regression (SVR), linear regression (LR), and existing empirical equations.
We conclude that DNN models are more effective than ML models and empirical
models for total sediment load prediction.
\end{enumerate}

We agree with the reviewer's point that the methods used in the research are
standard ones, but to the best of our knowledge, they have never been compared
and contrasted in the manner we have done in this work. In addition, methods
like ELM, SVR, and DNN are recommended for fitting data that exhibit
non-linearity. Total sediment load transport is a quite complex phenomenon, as
it involves a large number of variables (e.g., $d_{50}$, $C_g$, $G_s$, $Q$,
$\tau_b$, $Sf$, etc.) that often have non-linear relationships between them,
making our proposed methods a viable choice to fit the data and provide robust
predictions. (Pages 3, 4)\\

\textbf{Q2.2) The dataset used in the paper was published in 1981 by Brownlie et al. The authors should use the most recent dataset to demonstrate the usefulness of the algorithm.}

\textbf{Answer:} We thank the reviewer for the critical comments. One of
the major issues in the field of sediment transport is the lack of
data sharing in the open literature, as well as the lack of comprehensive
datasets. This prompted us to make use of the dataset compiled by Brownlie
\citep{brownlie1981compilation}. Brownlie's dataset comprises observations
for both laboratory and field conditions. Moreover, it comprises both bed
load as well as suspended sediment load data. In our study, we have used 17
unique datasets, comprising 11 datasets of lab data and 6 datasets of field
data (see Table \ref{tab:brownlie-studies}). So, the eventual model tuned to
Brownlie's dataset can be considered more robust, as it covers multiple
different rivers, laboratory flume data, and measurements pertaining to bed
load as well as suspended sediment load. The ranges of the variables used in
this study are shown in Table \ref{Data_description}. It can be seen from
these ranges that the dataset is comprehensive in nature and can aid in the
development of a more robust model for the prediction of total sediment load.
However, if the reviewer can provide us with pointers to a more recent dataset
on sediment transport, we shall be happy to evaluate our models on those
too.\\

\begin{table}[ht]
\centering
\caption{Statistical description of \citet{brownlie1981compilation}'s
dataset used in this study.
Notations: $y$ (flow depth), ${BF}$ (bed form of the channel), $Q$ (channel discharge), $Sf$ (friction/energy slope), ${\tau_b}$ (bed shear stress), $d_{50}$ (median diameter of sediment particles), $C_g$ (gradation coefficient of the sediment particles), $G_s$ (specific gravity) and $C$ (total sediment load).}
\label{Data_description}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllllllllll@{}}
\toprule
\multirow{4}{*}{} & & $Q$ & $y$ & $\tau_b$ & $Sf$ & $d_{50}$ & $C_g$ & $G_s$ & $BF$ & $C$ \\ \midrule
& Mean & 15.0735 & 0.2989 & 4.4480 & 0.0045 & 0.0016 & 1.3973 & 2.6484 & 2.3340 & 3087.6026 \\ \cmidrule(l){2-11}
& Standard deviation & 66.1484 & 0.6200 & 5.2332 & 0.0048 & 0.0034 & 0.4057 & 0.0327 & 2.2096 & 5861.1493 \\ \cmidrule(l){2-11}
Overall set & Minimum & 0.0006 & 0.0079 & 0.2600 & 0 & 0.0002 & 1 & 2.2500 & 0 & 0.0010 \\ \cmidrule(l){2-11}
& Maximum & 486.8233 & 4.2977 & 51.0500 & 0.0275 & 0.0270 & 3.8500 & 2.6800 & 8 & 52238 \\ \cmidrule(l){2-11}
& Count & 1880 & 1880 & 1880 & 1880 & 1880 & 1880 & 1880 & 1880 & 1880 \\ \midrule
\multicolumn{11}{l}{} \\ \midrule
\multirow{3}{*}{}
& Mean & 14.5515 & 0.2943 & 4.516 & 0.0045 & 0.0016 & 1.3962 & 2.6483 & 2.3218 & 3158.5306 \\ \cmidrule(l){2-11}
& Standard deviation & 65.443 & 0.6105 & 5.2688 & 0.0048 & 0.0035 & 0.4083 & 0.0333 & 2.212 & 6054.0704 \\ \cmidrule(l){2-11}
Training set & Minimum & 0.0006 & 0.0079 & 0.26 & 0 & 0.0002 & 1 & 2.25 & 0 & 0.001 \\ \cmidrule(l){2-11}
& Maximum & 486.8233 & 4.2977 & 51.05 & 0.0275 & 0.027 & 3.85 & 2.68 & 8 & 52238 \\ \cmidrule(l){2-11}
& Count & 1504 & 1504 & 1504 & 1504 & 1504 & 1504 & 1504 & 1504 & 1504 \\ \midrule
\multicolumn{11}{l}{} \\ \midrule
\multirow{3}{*}{}
& Mean & 17.1618 & 0.3175 & 4.1758 & 0.0044 & 0.0014 & 1.4015 & 2.6489 & 2.383 & 2803.8908 \\ \cmidrule(l){2-11}
& Standard deviation & 68.9481 & 0.6572 & 5.0857 & 0.0047 & 0.0031 & 0.3957 & 0.0298 & 2.2023 & 5013.0451 \\ \cmidrule(l){2-11}
Testing set & Minimum & 0.0011 & 0.0133 & 0.3171 & 0.0001 & 0.0002 & 1 & 2.25 & 0 & 0.004 \\ \cmidrule(l){2-11}
& Maximum & 412.2933 & 3.6576 & 47.4954 & 0.0247 & 0.026 & 3.46 & 2.68 & 7 & 27200 \\ \cmidrule(l){2-11}
& Count & 376 & 376 & 376 & 376 & 376 & 376 & 376 & 376 & 376 \\ \bottomrule
\end{tabular}%
}
\end{table}

\textbf{Q2.3) In DNN model, the number of neurons in each layer are chosen 256, 256, 256, 64, 512 respectively, whether the model will have better prediction results by choosing other values.}

\textbf{Answer:} The comment is well taken. There are no fixed rules for
choosing the number of hidden layers and the number of neurons; it is
essentially a trial-and-error process. We have used hyperparameter tuning to
optimize the hyperparameters systematically, and we have experimented with
various configurations of the network architecture.

There are infinitely many possible combinations of hyperparameters, so it is
intractable to test every configuration. By analyzing the bias-variance
trade-off of the network on the available dataset, we have arrived at this
network configuration. Increasing the network capacity further leads to poor
generalization on unseen data and results in poor prediction quality, as shown
in Figure \ref{dnn_comb}.

In addition, we conducted 20 independent runs to inspect the reproducibility
of the reported results. We tried different numbers of neurons in the DNN
model for different input variable combinations, as shown in Table
\ref{tab:results_neurons}. It can be seen from Table
\ref{tab:results_neurons} that, among all the neuron combinations, the
proposed structure performs the best: one input layer with seven neurons;
five hidden layers with 256, 256, 256, 64, and 512 neurons, respectively; one
output layer with one neuron; 100 epochs with batch size one; the
\textit{`adam'} optimizer with learning rate 0.01; and the `relu' activation
function. Hence, we chose the number of neurons in each layer as 256, 256,
256, 64, 512, respectively.\\

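As a rough illustration of the capacity of the chosen architecture, the number of trainable parameters implied by the layer widths can be tallied directly; this is a back-of-the-envelope sketch and does not reproduce our training code:

```python
# Layer widths of the proposed DNN: 7 inputs, five hidden layers
# (256, 256, 256, 64, 512), and 1 output neuron.
widths = [7, 256, 256, 256, 64, 512, 1]

# Each fully connected layer contributes fan_in * fan_out weights
# plus fan_out biases.
params = sum(fan_in * fan_out + fan_out
             for fan_in, fan_out in zip(widths, widths[1:]))
```

Counts like this are one way to compare the relative capacity of the candidate architectures in Table \ref{tab:results_neurons}.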
\begin{figure}[!h]
\centering
\includegraphics[scale=.45]{dnn_justification.pdf}
\caption{Number of hidden layers with respect to R$^2$ in the DNN model for
all input variables. It can be seen that the model performs best with 5
hidden layers.}
\label{dnn_comb}
\end{figure}
\begin{table}[ht]
\centering
\caption{Performance of the DNN model using different numbers of neurons when we consider a combination of all characteristics, i.e., Sediment, Geometry, Dynamic. \textbf{Bold} indicates the best performance.}
\label{tab:results_neurons}
\begin{tabular}{@{}lllllll@{}}
\toprule
Number of neurons & I$_d$ & PCC & MSE & NSE & RMSE & R$^2$ \\ \midrule
128-128-128-64-512-1 & 0.975 & 0.959 & 0.065 & 0.911 & 0.255 & 0.920 \\ \midrule
512-512-512-64-512-1 & 0.984 & 0.969 & 0.045 & 0.939 & 0.211 & 0.939 \\ \midrule
\begin{tabular}[c]{@{}l@{}}256-256-256-64-512-1\\ (Our proposed model)\end{tabular} & \textbf{0.989} & \textbf{0.979} & \textbf{0.042} & \textbf{0.958} & \textbf{0.204} & \textbf{0.959} \\ \bottomrule
\end{tabular}%
\end{table}

\textbf{Q2.4) Figure 6 in the paper is too blurry, the authors should have used a high resolution to accommodate the readers.}

\textbf{Answer:} We thank the reviewer for the valuable comments. We have
incorporated your suggestion and made the figure sharper.
(\textcolor{black}{Figure 6}, Page 21)\\
\clearpage
\textbf{Q2.5) The authors should give corresponding time complexity and space complexity analysis to demonstrate the usefulness of the model.}

\textbf{Answer:} We thank the reviewer for the valuable suggestion. The time
complexity and space complexity of each model are shown in Table
\ref{tab:complexities}, and we explain each of them in detail below.

\begin{table}[htpb]
\centering
\caption{Summary of the time and space complexity of each model used in this
study.}
\label{tab:complexities}
\begin{tabular}{@{}ccc@{}}
\toprule
Models & Time Complexity & Space Complexity \\ \midrule
DNN & $\mathcal{O}(ne(ij + jk + kp + pq + qr + rl))$ &
$\mathcal{O}(z)$ \\
ELM & $\mathcal{O}(n(ij+jk))$ &
$\mathcal{O}(z)$ \\
SVR & $\mathcal{O}(n^2)$ &
$\mathcal{O}(k)$ \\
Linear Regression & $\mathcal{O}(k^2(n+k))$
& $\mathcal{O}(k)$ \\ \bottomrule
\end{tabular}%
\end{table}

\section{DNN}
\subsection{Time Complexity}

\begin{itemize}
\item \textbf{Time complexity of matrix multiplication:}
Training a DNN using back-propagation is usually implemented with matrices.\\
The time complexity of the matrix multiplication
$M_{ij}*M_{jk}$ is simply $\mathcal{O}(i*j*k)$.
\item \textbf{Feed-forward propagation algorithm:}
The feed-forward propagation algorithm is as follows. First, to go from layer $i$ to $j$, you compute\\
\begin{center}
$S_j=W_{ji}*Z_i$.
\end{center}
Then you apply the activation function\\
\begin{center}
$Z_j=f(S_j)$
\end{center}
where $S_{j}$ is the intermediate feature after applying the weights $W_{ji}$, and $Z_i$ is the activation of layer $i$.\\
If we have $N$ layers (including the input and output layers), this runs
$ N-1 $ times.\\
This study computes the time complexity of the forward pass for a deep
neural network with 7 layers, including the input and output layers:
input layer $i$, $j$ neurons in the first hidden layer, $k$ neurons in
the second hidden layer, $p$ neurons in the third hidden layer, $q$
neurons in the fourth hidden layer, $r$ neurons in the fifth hidden
layer, and $l$ neurons in the output layer.

Since there are 7 layers, we need 6 matrices to represent the weights
between these layers. Let us denote them by W$_{ji}$, W$_{kj}$,
W$_{pk}$, W$_{qp}$, W$_{rq}$, and W$_{lr}$, where W$_{ji}$ is a matrix
with $j$ rows and $i$ columns (W$_{ji}$ thus contains the weights going
from layer $i$ to layer $j$).

Assume $t$ training examples. For propagating from layer $i$ to $j$, we first have
\begin{center}
$S_{jt}=W_{ji}*Z_{it}$
\end{center}
and this operation (i.e., matrix multiplication) has $\mathcal{O}(j*i*t)$ time complexity. Then we apply the activation function
\begin{center}
$Z_{jt}=f(S_{jt})$
\end{center}
and this has $\mathcal{O}(j*t)$ time complexity, because it is an element-wise operation.

So, in total, we have
\begin{center}
$\mathcal{O}(j*i*t+j*t)$ = $\mathcal{O}(j*t*(i+1))$ = $\mathcal{O}(j*i*t)$
\end{center}
Using the same logic, for going $j \rightarrow k$, we have $\mathcal{O}(k*j*t)$, and so on for the remaining layers. In total, the time complexity of feed-forward propagation is
\begin{center}
$\mathcal{O}(i*j*t + j*k*t + k*p*t + p*q*t + q*r*t + r*l*t)$ = $\mathcal{O}(t(ij + jk + kp + pq + qr + rl))$
\end{center}
\item \textbf{Back-propagation algorithm:}
The back-propagation algorithm proceeds as follows; for clarity of notation, we illustrate it on a generic network $i \rightarrow j \rightarrow k \rightarrow l$ with output layer $l$, and the same accounting extends to our seven-layer network. Starting from the output layer $l \rightarrow k$, we compute the error signal $E_{lt}$, a matrix containing the error signals for the nodes at layer $l$\\
\begin{center}
$E_{lt}={f}'(S_{lt})\odot (Z_{lt}-O_{lt})$
\end{center}
where $\odot$ denotes element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: each column is the error signal for one training example.\\
We then compute the `delta weight', $D_{lk}\in \mathbb{R}^{l*k}$ (between layer $l$ and layer $k$)
\begin{center}
$D_{lk} = E_{lt} * Z_{tk}$
\end{center}
where $Z_{tk}$ is the transpose of $Z_{kt}$.\\
We then adjust the weights,
\begin{center}
$W_{lk} = W_{lk}-D_{lk}$
\end{center}
For $l\rightarrow k$, we thus have the time complexity $\mathcal{O}(lt+lt+ltk+lk) = \mathcal{O}(l*t*k)$.\\
Now, going back from $k \rightarrow j$, we first have
\begin{center}
$E_{kt} = f'(S_{kt})\odot (W_{kl}*E_{lt})$
\end{center}
Then
\begin{center}
$D_{kj} = E_{kt}*Z_{tj}$
\end{center}
And then
\begin{center}
$W_{kj} = W_{kj}-D_{kj}$
\end{center}
where $W_{kl}$ is the transpose of $W_{lk}$. For $k \rightarrow j$, we have the time complexity
\begin{center}
$\mathcal{O}(kt+klt+ktj+kj) = \mathcal{O}(k*t(l+j))$
\end{center}
And finally, for $j \rightarrow i$, we have $\mathcal{O}(j*t(k+i))$. In total, we have
\begin{center}
$\mathcal{O}(ltk+tk(l+j)+tj(k+i)) = \mathcal{O}(t*(lk+kj+ji))$
\end{center}
which is the same as for the feed-forward pass. Since the two passes have the same complexity, the total complexity for one epoch of this generic network is
\begin{center}
$\mathcal{O}(t*(ij+jk+kl))$
\end{center}
Multiplying by the number of epochs $e$ and applying the same accounting to our seven-layer network gives
\begin{center}
$\mathcal{O}(e*t*(ij + jk + kp + pq + qr + rl))$
\end{center}
\end{itemize}
Therefore,
\textit{Time Complexity} = $\mathcal{O}(ne(ij + jk + kp + pq + qr + rl))$\\
where $n$ is the number of data points (denoted $t$ in the derivation above),
$e$ is the number of epochs, $i$ is the number of input layer neurons, $j$ is
the number of neurons in the first hidden layer, $k$ in the second hidden
layer, $p$ in the third hidden layer, $q$ in the fourth hidden layer, $r$ in
the fifth hidden layer, and $l$ in the output layer.
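To make the derived bound concrete, the per-layer products can be evaluated for the architecture used in this study; this is a quick sketch of relative operation counts only, not wall-clock time:

```python
# Plugging the study's values into the derived cost
# O(n * e * (ij + jk + kp + pq + qr + rl)).
n, e = 1880, 100                       # data points, epochs
i, j, k, p, q, r, l = 7, 256, 256, 256, 64, 512, 1

pairwise = i*j + j*k + k*p + p*q + q*r + r*l   # per-example, per-epoch term
ops_total = n * e * pairwise                   # total dominant operations
```

The dominant terms are the 256-by-256 hidden-layer products, which is why the hidden widths, rather than the 7 inputs, drive the training cost.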
\subsection{Space Complexity}
The space complexity of the DNN model depends on the number of inputs the model has, because the number of inputs determines the number of weights in the first layer, which need to be stored in memory.\\
If gradient descent (GD) and back-propagation (BP) are used to train
the model, at each training iteration (i.e., a GD update) we need to
store all the matrices that represent the parameters (or weights) of
the model, as well as the gradients and the learning rate (or other
hyper-parameters). Let us denote the vector containing all
parameters of the model as $\theta \in \mathbb{R}^z$, so it has $z$
components. The gradient vector has the same dimensionality as
$\theta$, so we need to store at least $2z+1$ values, and
$2z+1$ = $\mathcal{O}(z)$.\\
\textit{Space Complexity} = $\mathcal{O}(z)$, where $z$ is the total
number of parameters.

In this study, we have $ n=1880 $ data points, $ e=100 $ epochs, input
layer $ i=7 $, $ j = 256 $ neurons in the first hidden layer, $ k=256 $
neurons in the second hidden layer, $ p=256 $ neurons in the third hidden
layer, $ q= 64 $ neurons in the fourth hidden layer, $ r=512 $ neurons in
the fifth hidden layer, and $ l=1 $ neuron at the output layer, as we need
to predict only one variable, i.e., the total sediment load transport.
\section{ELM}
\subsection{Time Complexity}
\textit{Time Complexity} = $\mathcal{O}(n(ij+jk))$

\subsection{Space Complexity}
\textit{Space Complexity} = $\mathcal{O}(z)$\\
where $n$ is the number of observations, $i$ is the number of neurons in the input layer, $j$ is the number of neurons in the hidden layer, $k$ is the number of neurons in the output layer, and $z$ is the total number of parameters.

This study uses $n$ = 1880 observations, $i$ = 7 input neurons, $j$ = 90 hidden neurons, and $k$ = 1 neuron at the output layer.
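A minimal, generic ELM regression sketch (an assumed textbook-style implementation for illustration, not our exact code) shows why the cost is dominated by a single hidden-layer product: the input-to-hidden weights are random and fixed, and only the output weights are solved for in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=90):
    """Fit a generic ELM: random hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], hidden))  # random input->hidden weights
    b = rng.standard_normal(hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                   # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage on synthetic data mirroring the study's shapes (7 inputs, 90 hidden).
X = rng.standard_normal((200, 7))
y = X.sum(axis=1)                                  # arbitrary synthetic target
W, b, beta = elm_fit(X, y, hidden=90)
pred = elm_predict(X, W, b, beta)
```

Because no iterative back-propagation is needed, the epoch factor $e$ present in the DNN bound disappears from the ELM bound.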
\section{SVR}
\subsection{Time Complexity}
\textit{Time Complexity} = $\mathcal{O}(n^2)$
\subsection{Space Complexity}
\textit{Space Complexity} = $\mathcal{O}(k)$\\
where $n$ is the number of data points and $ k $ is the number of
support vectors.
\section{Linear Regression}
\subsection{Time Complexity}
The linear regression coefficients are computed as
\begin{center}
$A = (X^{T}X)^{-1}X^{T}Y$
\end{center}
If $X$ is an $(n * k)$ matrix and $Y$ is an $(n * 1)$ matrix:
\begin{enumerate}
\item $(X^{T}X)$ takes $\mathcal{O}(n*k^2)$ time and produces a $(k * k)$ matrix.
\item The matrix inversion of a $(k * k)$ matrix takes $\mathcal{O}(k^3)$ time.
\item $(X^{T}Y)$ takes $\mathcal{O}(n*k)$ time and produces a $(k * 1)$ matrix.
\item The final matrix multiplication of the $(k * k)$ and $(k * 1)$ matrices takes $\mathcal{O}(k^2)$ time.
\end{enumerate}
So, \textit{Time Complexity} = $\mathcal{O}(k^2*n + k^3 + k*n + k^2)$ = $\mathcal{O}(k^2*n + k^3)$ = $\mathcal{O}(k^2(n+k))$.
% \textit{Testing Time Complexity} = $\mathcal{O}(n)$
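The closed-form solution above can be sketched in a few lines on synthetic data (illustrative only; in practice a numerically stabler solver such as `np.linalg.lstsq` is preferred over an explicit inverse):

```python
import numpy as np

# Normal-equation solution A = (X^T X)^{-1} X^T Y on noise-free synthetic data.
rng = np.random.default_rng(1)
n, k = 100, 8                          # n observations, k predictors
X = rng.standard_normal((n, k))
true_A = np.arange(1.0, k + 1.0)       # arbitrary true coefficients
Y = X @ true_A                         # noise-free targets

# O(n k^2) for X^T X, O(k^3) for the inverse, O(n k) for X^T Y.
A = np.linalg.inv(X.T @ X) @ X.T @ Y
```

With noise-free targets and $n > k$, the recovered coefficients match the true ones up to floating-point error.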
\subsection{Space Complexity}
In linear regression, after training the model we obtain $W$ and $b$, where $W$ is a vector of dimension $k$. Given any new point, we have to compute
\begin{center}
Y = $W^T * X + b$
\end{center}
to predict the new value of $Y$ and check the accuracy of the model. As $b$ is independent of the input size, the space required to store $b$ is $\mathcal{O}(1)$, and $W^T * X + b$ takes $\mathcal{O}(k)$ space. Now, $W$ is a vector of size $k$, so the space complexity of $W$ is $\mathcal{O}(k)$. \\
Therefore, \textit{Space Complexity} = $\mathcal{O}(k)$ \\

\textbf{Q2.6) The workflow of the proposed model is more complicated and difficult to repeat. Can the author make the code open source to help more learners?}

\textbf{Answer:} Thanks for this constructive suggestion. We have
incorporated your suggestion, and the workflow of the proposed model is now
much simpler to understand. The workflow is also shown in this reviewer
response so that the reviewer can check the new image here itself (see
Figure \ref{fig:workflow}). We would love to make our code open source for
the learning community; after the review process is complete, we shall make
the code open source for the benefit of the community. The figure can be
found on Page 19 of the manuscript.\\
\begin{figure}[!t]
\centering
\includegraphics[scale=.68]{architecture_sediment_transport.pdf}
\caption{The overall flow of the proposed model for the prediction of total sediment load transport.}
\label{fig:workflow}
\end{figure}

\textbf{\textbf{Q2.7)} I would suggest adding a paragraph about the limitation of the proposed methodology in the Conclusion part. Also, future works need to be discussed in detail.}

\textbf{Answer:} We appreciate the reviewer's comment.

\textbf{Limitations:} Total sediment load transport is a complex
phenomenon, as it involves a large number of variables (e.g.,
$y$ (flow depth), $BF$ (bed form of the channel), $Q$ (channel
discharge), $Sf$ (friction slope), $\tau_b$ (bed shear stress),
$d_{50}$ (median diameter of sediment particles), $C_g$ (gradation
coefficient), $G_s$ (specific gravity)) that often have non-linear
relationships between them. It is certainly possible that parameters
such as the Froude number, viscosity, and water surface width have
an impact on the total sediment load prediction. Since Brownlie's
dataset does not contain these parameters, our models are not
tuned to incorporate their effect. In general, ML/DL models perform
well on the dataset they are trained on. In our study, we have used
Brownlie's dataset, a comprehensive dataset comprising both
bed load and suspended sediment load data, in addition to flume
and field data from various researchers. Despite this comprehensive
dataset, it is possible that some other dataset would have ranges of
variables outside the range of values considered in this
study, or a different data distribution. For such datasets, it is
possible that the proposed model may not perform well. However, this
issue exists for all ML/DL models, which assume that the target
data lie within the range of the training sample or follow a
data distribution similar to that of the training sample.

\textbf{Future Work:} Future work requires testing these predictions at an
even larger field scale and investigating a larger range of input variables
(e.g., $y$ (flow depth), $BF$ (bed form of the channel), $Q$ (channel
discharge), $Sf$ (friction slope), $\tau_b$ (bed shear stress), $d_{50}$
(median diameter of sediment particles), $C_g$ (gradation coefficient),
$G_s$ (specific gravity)) in order to test the
efficacy of the proposed model. We also wish to set up an in-house
laboratory flume and undertake different sets of experiments to collect
data for further testing the performance of the proposed models.
Although the current study uses $d_{50}$, $C_g$, $G_s$, $Q$, $\tau_b$, $Sf$, etc.,
as input variables for prediction, variables such as the Froude number and
viscosity may also have an impact on total sediment load prediction. Thus, we would like to
explore dataset(s) that include these variables, so that their
effect on the total sediment load prediction
can be ascertained. We also aim to build a web-based tool that can be used
by researchers to predict total sediment load using various ML/DL
techniques. (Line 465-485, Page 13)\\

\textbf{Response to Reviewer 3:}

\textbf{General Comment:} The authors presented a study on total sediment load transport in rivers, which is challenging and complex. The paper provides a fluent read. Moreover, the paper is technically sound. With respect, I would like to figure out some points to improve the quality of the paper.\\

\textbf{\textbf{Q3.1)} The paper has repetitive many abbreviations such as PCC and NSE. Where the abbreviation is used for the first time, its full name should be given only once time.}

\textbf{Answer:} The comment is well taken. We have corrected it.\\

\textbf{\textbf{Q3.2)} The motivation of the study should be given in
Introduction section.}

\textbf{Answer:} We thank the reviewer for the careful review. In water resource planning and management, total sediment transport challenges are significant. Clearly, the prediction of total sediment load transport is of significant importance in the area of hydraulics. The total sediment load varies as the underlying environment or the prevailing conditions change. Its prediction is a complex phenomenon, as it involves a large number of variables (e.g., $y$ (flow depth), $BF$ (bed form of the channel), $Q$ (channel discharge), $Sf$ (friction slope), $\tau_b$ (bed shear stress), $d_{50}$ (median diameter of sediment particles), $C_g$ (gradation coefficient), $G_s$ (specific gravity)) whose relationships are often non-linear and multi-dimensional in nature. Due to this complexity, non-linearity, and multi-dimensionality, the system becomes difficult to analyse analytically. In addition, these variables tend to take on values that are unique to field and flume investigations, so an assumption made for one particular environment may not hold true in another, making the prediction erroneous and unusable. The primary motivation of this study is to check the applicability of advanced ML and DNN models for the prediction of total sediment load transport so that more accurate and generic sediment transport models can be built. The same can be found on Line 38-43, Page 1 and Line 50-54, Page 2.\\

\textbf{\textbf{Q3.3)} Authors used linear regression, support vector
regression, extreme learning machine, and DNN-based models
for the prediction of the total sediment load transport. Are the sub-sets
used in training and testing these models the same? This should be
emphasized. It is also recommended to perform 5-fold or 10-fold
cross-validation experiments.}

\textbf{Answer:} Thank you for pointing this out. Yes, the sub-sets used in
training and testing these models are the same. We have performed 5-fold and
10-fold cross-validation experiments for all models, and their results are
shown in Table \ref{tab:results_with_rank}.

From Table \ref{tab:results_with_rank}, we can see that
without cross-validation, our proposed DNN performs
best; a similar observation can be made for the extreme learning machine (ELM)
method. In the case of support vector regression (SVR) and linear regression (LR),
10-fold cross-validation performs best. However, both SVR and LR are
underperforming methods, so even though they perform well under 10-fold
cross-validation, their performance remains poor compared to the results
obtained for the proposed DNN method. Here, we show the results
for the combination of all three characteristics only, i.e.,
\textit{Sediment}, \textit{Geometry}, and \textit{Dynamic}, as this
combination performed the best across all the methods. The
other six combinations are omitted as they perform poorly compared to
the combination of all three characteristics.\\
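The k-fold procedure can be sketched as follows (plain NumPy for illustration, on synthetic data rather than the Brownlie dataset; an ordinary least-squares fit stands in for any of the four models):

```python
import numpy as np

def kfold_rmse(X, y, n_splits=5, seed=42):
    """Shuffle the indices, split into n_splits folds, and return the
    average test RMSE over the folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_splits)
    rmses = []
    for i in range(n_splits):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != i])
        # least-squares fit with an intercept column, standing in for a model
        A = np.c_[X[train], np.ones(len(train))]
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[X[test], np.ones(len(test))] @ coef
        rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(rmses))

# synthetic demo data (not the Brownlie dataset)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(200)
print(kfold_rmse(X, y, n_splits=5))
```

Swapping `n_splits=5` for `n_splits=10` reproduces the two cross-validation settings compared in the table.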

\begin{table}[ht]
\centering
\caption{Comparison of all models with and without
cross-validation. Rank 1 represents the best result, while rank 3
indicates the worst.}
\label{tab:results_with_rank}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccccccccccccc@{}}
\toprule
Models & \multicolumn{3}{c}{DNN} & & \multicolumn{3}{c}{ELM} & & \multicolumn{3}{c}{SVR} & & \multicolumn{3}{c}{LR} \\ \midrule
Fold & 5-fold & 10-fold & No & & 5-fold & 10-fold & No & & 5-fold & 10-fold & No & & 5-fold & 10-fold & No \\ \midrule
I$_d$ & 0.946 & 0.985 & 0.989 & & 0.933 & 0.963 & 0.970 & & 0.961 & 0.981 & 0.967 & & 0.925 & 0.964 & 0.927 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 3 & 1 & 2 & & 3 & 1 & 2 \\ \midrule
PCC & 0.909 & 0.977 & 0.979 & & 0.874 & 0.932 & 0.943 & & 0.926 & 0.964 & 0.943 & & 0.860 & 0.931 & 0.869 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 3 & 1 & 2 & & 3 & 1 & 2 \\ \midrule
MSE & 0.141 & 0.051 & 0.042 & & 0.178 & 0.124 & 0.111 & & 0.098 & 0.060 & 0.115 & & 0.191 & 0.116 & 0.245 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 2 & 1 & 3 & & 2 & 1 & 3 \\ \midrule
NSE & 0.790 & 0.934 & 0.958 & & 0.735 & 0.841 & 0.889 & & 0.855 & 0.923 & 0.885 & & 0.715 & 0.852 & 0.755 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 3 & 1 & 2 & & 3 & 1 & 2 \\ \midrule
RMSE & 0.376 & 0.226 & 0.204 & & 0.422 & 0.353 & 0.333 & & 0.312 & 0.245 & 0.339 & & 0.437 & 0.341 & 0.495 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 2 & 1 & 3 & & 2 & 1 & 3 \\ \midrule
R$^2$ & 0.827 & 0.955 & 0.959 & & 0.765 & 0.869 & 0.889 & & 0.857 & 0.929 & 0.889 & & 0.740 & 0.867 & 0.755 \\
Rank & 3 & 2 & 1 & & 3 & 2 & 1 & & 3 & 1 & 2 & & 3 & 1 & 2 \\ \midrule
Average Rank & 3 & 2 & \textbf{1} & & 3 & 2 & \textbf{1} & & 3 & \textbf{1} & 2 & & 3 & \textbf{1} & 2 \\ \bottomrule
\end{tabular}%
}
\end{table}
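For completeness, the evaluation metrics reported in Table \ref{tab:results_with_rank} can be computed as in the sketch below (NumPy, on a tiny made-up example; we assume $I_d$ denotes Willmott's index of agreement, consistent with common hydrological usage):

```python
import numpy as np

def metrics(obs, pred):
    """Compute the evaluation metrics used in the comparison table."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err2 = np.sum((obs - pred) ** 2)
    mse = err2 / len(obs)
    nse = 1.0 - err2 / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe
    pcc = float(np.corrcoef(obs, pred)[0, 1])            # Pearson's r
    # index of agreement (assumed here to be Willmott's I_d)
    i_d = 1.0 - err2 / np.sum((np.abs(pred - obs.mean())
                               + np.abs(obs - obs.mean())) ** 2)
    return {"I_d": i_d, "PCC": pcc, "MSE": mse, "NSE": nse,
            "RMSE": float(np.sqrt(mse)), "R2": pcc ** 2}

m = metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print({k: round(v, 3) for k, v in m.items()})
```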
\textbf{\textbf{Q3.4)} Furthermore, what are the limitations of this study? Clarifying the limitations of a study allows the readers to understand better under which conditions the results should be interpreted.}

\textbf{Answer:} Thank you for the feedback and for pointing this out.

\textbf{Limitations:} Total sediment load prediction is a complex
phenomenon, as it involves a large number of independent variables
(e.g., $y$ (flow depth), $BF$ (bed form of the channel), $Q$ (channel
discharge), $Sf$ (friction slope), $\tau_b$ (bed shear stress),
$d_{50}$ (median diameter of sediment particles), $C_g$ (gradation
coefficient), $G_s$ (specific gravity)). It is certainly
possible that parameters such as the Froude number and viscosity have
an impact on the total sediment load prediction. Since Brownlie's
dataset does not contain these parameters, our models are not
tuned to incorporate their effect. In general, ML/DL models perform
well on the dataset they are trained on. In our study, we have used
Brownlie's dataset, a comprehensive dataset comprising both
bed load and suspended sediment load data, in addition to flume
and field data from various researchers. Despite this comprehensive
dataset, it is possible that some other dataset would have ranges of
variables outside the range of values considered in this
study, or a different data distribution. For such datasets, it is
possible that the model may not perform well. However, this issue
exists for all ML/DL models, which assume that the target
data lie within the range of the training sample or follow a
data distribution similar to that of the training sample. (Line 465-477, Page 13)\\
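One simple guard against this failure mode, sketched below for tabular features (toy numbers of ours, standing in for e.g. flow depth and discharge), is to flag test samples whose features fall outside the training range:

```python
import numpy as np

def out_of_range_mask(X_train, X_test):
    """True for each test row with any feature outside the training min/max."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return ((X_test < lo) | (X_test > hi)).any(axis=1)

# toy example: the second test row has an out-of-range second feature
X_train = np.array([[0.1, 10.0], [0.3, 25.0], [0.2, 18.0]])
X_test = np.array([[0.15, 12.0], [0.25, 90.0]])
print(out_of_range_mask(X_train, X_test))  # [False  True]
```

Predictions for flagged rows are extrapolations and should be treated with caution, for this model or any other ML/DL model.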

\textbf{Response to Reviewer 4:}

\textbf{General Comment:} The manuscript ``Predicting Total Sediment Load Transport in Rivers using Regression Techniques, Extreme Learning and Deep Learning Models'' applied four machine learning algorithms to predict the total sediment load transport and compared their performance with the empirical models in previous studies. Acceptable accuracy was obtained based on the systematic comparison between models or selected variables. This manuscript is recommended to be accepted after minor revisions.\\

\textbf{\textbf{Q4.1)} Introduction: The first paragraph: ``This study
focuses on the prediction of total sediment load transport, which involves
a combination of bed load as well as suspended load.'' We typically explain
the purpose of the study in the last paragraph, after describing the
background of the study step by step in preceding paragraphs of the
introduction.}

\textbf{Answer:}
We thank the reviewer for the critical comments and review. We have now re-arranged the Introduction section to incorporate the changes suggested by the reviewer. (Line 160-171, Page 4)\\

\textbf{\textbf{Q4.2)} Introduction: The first paragraph: ``Sediment particles found on \ldots those sediments that that are transported \ldots'' ``that that''?}

\textbf{Answer:} We thank the reviewer for pointing this out. We have now
corrected it. (Line 35, Page 1)\\
\textbf{\textbf{Q4.3)} Introduction: 4th to 12th paragraph: the review of the applications of machine learning in predicting sediment load transport is too lengthy and unfocused. While it is necessary to overview the application scenarios, types, and performance of the applied machine learning in literature, it is also necessary to compare and comment on the variables used in these studies and their performance, as the authors described these variables in 2nd paragraph, and stated that one of the innovations of this paper is the comparison of these variables in the last paragraph.}

\textbf{Answer:} We thank the reviewer for the suggestion. As per your
suggestion, we have revised the Introduction section and made it more
succinct.

With regard to comparing the variables used in this study with those in the literature,
a direct performance comparison of the proposed methodology with existing
studies is not possible. The reason is that, to the best of the
authors' knowledge, no study has used the exact dataset used in this work;
researchers have used only subsets of Brownlie's dataset, so a
vis-\`a-vis comparison is not possible.
In Table \ref{tab:aire:comparison}, we highlight a few studies that have worked on the prediction of sediment transport. As shown in Table \ref{tab:aire:comparison}, existing studies are focused on data collected from a river or a flume in a specific environment. In addition, they work on either bed load or suspended load. They also use all the available variables for prediction and do not analyze the effect of the Sediment, Geometry, and Dynamic characteristics on the prediction. Thus, the analysis using combinations of the Sediment, Geometry, and Dynamic characteristics provides an innovative approach for determining their usefulness in the prediction of total sediment load. (Pages 3,4)\\

\begin{table}[!t]
\centering
\caption{Comparison of the proposed method with previous studies. All the studies listed here have worked explicitly on rivers or flumes. The Brownlie dataset considered in this work comprises 11 flume
experiments and 6 field experiments, demonstrating a good mix of flume and
field data. In addition, the datasets below contain only a few points compared to our dataset of 1880 points. None of the studies here provide an analysis of the effect of variables in the form of Sediment, Dynamic, and Geometry characteristics. Notations: Flow discharge ($Q (m^3/s)$), Flow velocity ($V (m/s)$), Water-surface width ($B (m)$), Flow depth ($Yo (m)$), Cross sectional area of flow ($A (m^2)$), Hydraulic radius ($R (m)$), Channel slope ($So$), Bed load ($Tb (kg/s)$), Suspended load ($Tt (kg/s)$), Total bed material load ($Tj (kg/s)$), Median sediment size ($d_{50} (mm)$), Manning's $n$, daily stream flow ($Q$), daily mean concentration of suspended sediment ($C$), and daily suspended-sediment discharge or load ($SL$).}
\label{tab:aire:comparison}
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
\textbf{Paper} & \textbf{Methods} & \textbf{Variables Used} & \textbf{Dataset} & \textbf{Cons} \\ \midrule

\cite{melesse2011suspended} & ANN, MLR, MNLR, ARIMA
& \begin{tabular}[c]{@{}l@{}} $P, Q, Q(t-1)$\\ $SL, SL(t-1)$ \end{tabular}
& \begin{tabular}[c]{@{}l@{}} Mississippi, Missouri, \\ Rio Grande river \end{tabular}
& \begin{tabular}[c]{@{}l@{}} Only three rivers' data. \\ Only suspended load considered. \\ No explicit split of variables into \\ Sediment, Dynamic, and Geometry. \end{tabular} \\ \midrule

\cite{ghani2011prediction} & Various ANN methods
& \begin{tabular}[c]{@{}l@{}} $Q, V, B, Yo, A, R$ \\ $Tb, Tt, Tj, d_{50}, n$ \end{tabular} & \begin{tabular}[c]{@{}l@{}} Kurau, Langat, \\ and Muda River \end{tabular}
& \begin{tabular}[c]{@{}l@{}} Total 214 points in the dataset. \\ Only 3 rivers' data considered. \\ Only one column combination used. \\ No explicit split of variables into \\ Sediment, Geometry, and Dynamic. \end{tabular}
\\ \midrule

\cite{chang2012appraisal} & FFNN, ANFIS, and GEP
& \begin{tabular}[c]{@{}l@{}} $Q, V, B, Yo, A, R$ \\ $Tb, Tt, Tj, d_{50}, n$ \end{tabular} & \begin{tabular}[c]{@{}l@{}} Kurau, Langat, \\ and Muda River \end{tabular}
& \begin{tabular}[c]{@{}l@{}} Total 214 points in the dataset. \\ Only 3 rivers' data considered. \\ No explicit split of variables into \\ Sediment, Dynamic, and Geometry. \\ GEP takes 48 hours for training. \end{tabular}
\\ \midrule

\cite{waikhom2017prediction} & Empirical Eq. of \cite{yang1973incipient}
& \begin{tabular}[c]{@{}l@{}} $Q, Yo, B, d_{50}, So$ \end{tabular} & Shetrunji River data
& \begin{tabular}[c]{@{}l@{}} Only one river's data. \\ No explicit split of variables into \\ Sediment, Geometry, and Dynamic. \end{tabular}
\\ \midrule

\cite{khosravi2020bedload} & \begin{tabular}[c]{@{}l@{}} M5P, RT, RF, REPT, \\ BA-M5P, BA-RF, BA-RT, \\ and BA-REPT \end{tabular}
& \begin{tabular}[c]{@{}l@{}} $V, \tau, Q, V*, S, Y, d_{50}, RR$ \end{tabular} & Flume experiments
& \begin{tabular}[c]{@{}l@{}} Total 72 points in the dataset. \\ Only flume data considered. \\ No explicit split of variables into \\ Sediment, Geometry, and Dynamic. \end{tabular}
\\ \midrule

\textbf{Proposed Method} & LR, SVR, ELM, DNN
& \begin{tabular}[c]{@{}l@{}} $d_{50}, C_g, G_s, y, BF, Q,
\tau_b, Sf$ \end{tabular} & \begin{tabular}[c]{@{}l@{}} 11 flume
and \\ 6 field experiments \end{tabular}
& \begin{tabular}[c]{@{}l@{}} 1880 data points. \\ Analysis of variables into \\ Sediment, Geometry, and Dynamic. \end{tabular}
\\ \bottomrule

\end{tabular}%
}
\end{table}

\textbf{\textbf{Q4.4)} 3.1 and the preceding part of 3.2 until 3.2.1: It's
better to put them in the Methodology section, meanwhile, the authors have
explained the seven models (f, f, f, \ldots) in Introduction.} \\
\textbf{Answer:} We thank the reviewer for the valuable comments. We have
incorporated your suggestions: subsections 3.1 and 3.2 are now part of the
Methodology section (Sections 2.4 and 2.5, Page 8). One of the novelties of
the paper is that we have analyzed the impact of the parameters that affect
the total sediment load prediction both individually and in
combination. In order to emphasize this, we briefly mention the seven
models in the Introduction section. (Line 150-159, Page 3)

\clearpage
\bibliographystyle{cas-model2-names}
\bibliography{all_references}

\end{document}