This paper reviews the information-theoretic methods used for inferring gene regulatory networks. Mutual information has been widely used as a dependency measure to estimate undirected interactions between genes from steady-state data, whereas time-series data additionally allow the direction of interactions to be inferred. Since two genes may interact with each other via an intermediate gene, their mutual information may suggest a direct dependency where only an indirect one exists. To resolve this issue, the data processing inequality and conditional mutual information have been employed. Mutual information, being a symmetric measure, cannot predict directed edges from steady-state data alone, while algorithms using time-series data can be computationally expensive because more data must be processed. Therefore, non-symmetric measures such as mixing coefficients have recently been proposed in the literature, and the algorithms built on these techniques are also discussed in this article. The estimation of information-theoretic metrics, a core component of all these methods, is explained. Performance metrics that are frequently used to test the robustness and accuracy of the algorithms are also described, and some avenues for future research are proposed.
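To make the estimation step concrete, the following is a minimal sketch of the histogram-based plug-in estimator of mutual information commonly used in such pipelines; the function name, bin count, and the data-processing-inequality comment are illustrative assumptions, not the specific estimator of any one reviewed algorithm.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (in nats) between two expression profiles.

    Discretizes x and y into a joint histogram and evaluates
    I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ).
    The bin count is a free parameter; 8 is an arbitrary choice here.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Data processing inequality (DPI), as used for pruning indirect edges:
# if gene A influences gene C only through gene B (A -> B -> C), then
# I(A;C) <= min(I(A;B), I(B;C)), so the weakest edge in each fully
# connected gene triplet can be flagged as a likely indirect interaction.
```

Because the estimate is a plug-in KL divergence between the empirical joint and the product of its marginals, it is always non-negative, and dependent gene pairs score markedly higher than independent ones given enough samples.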