Abstract: Nonlinear channel impairments are a major obstacle in fiber-optic communication systems. To enable higher data rates in these systems, the complexity of the underlying digital signal processing algorithms that compensate for these impairments must be reduced. Deep learning-based methods have proven successful in this area; however, reducing their computational complexity remains an open problem. In this paper, a low-complexity convolutional recurrent neural network (CNN+RNN) is considered for deep learning of long-haul optical fiber communication systems in which the channel is governed by the nonlinear Schrödinger equation. This approach reduces the computational complexity by balancing the computational load: short-temporal-distance features are captured using strided convolutional layers with ReLU activation, and long-distance features are captured using a many-to-one recurrent layer. We demonstrate that, for a 16-QAM 100 Gsymbol/s system over a 2000 km optical link of 20 spans, the proposed approach achieves the bit-error rate of digital back-propagation (DBP) with substantially fewer floating-point operations (FLOPs) than the recently proposed learned DBP, as well as the non-model-driven deep learning-based equalization methods using end-to-end MLP, CNN, RNN, and bi-RNN models.

Index Terms: Fiber-optic communications, deep learning, nonlinear channel impairments, convolutional recurrent neural networks.
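The abstract's architectural idea (downsampling strided convolutions with ReLU for short-range features, followed by a many-to-one recurrent layer summarizing long-range structure) can be illustrated with a minimal NumPy sketch. All layer sizes, the tanh RNN cell, and the two-channel (I/Q) input are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def strided_conv1d_relu(x, w, b, stride=2):
    """Strided 1-D convolution with ReLU.
    x: (T, C_in) sequence, w: (K, C_in, C_out) kernel, b: (C_out,) bias.
    The stride downsamples the sequence, reducing downstream compute."""
    K, C_in, C_out = w.shape
    out_len = (x.shape[0] - K) // stride + 1
    out = np.empty((out_len, C_out))
    for i in range(out_len):
        seg = x[i * stride : i * stride + K]                     # (K, C_in) window
        out[i] = np.tensordot(seg, w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)                                  # ReLU

def rnn_many_to_one(x, Wx, Wh, bh):
    """Simple tanh RNN over the feature sequence; only the final
    hidden state is returned (many-to-one), summarizing long-range context."""
    h = np.zeros(Wh.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ Wx + h @ Wh + bh)
    return h

# Illustrative dimensions: 64-sample window, 2 channels (I and Q),
# 8 convolutional features, 16 hidden units.
rng = np.random.default_rng(0)
T, C_in, C_conv, H = 64, 2, 8, 16
x = rng.standard_normal((T, C_in))
w = rng.standard_normal((5, C_in, C_conv)) * 0.1
b = np.zeros(C_conv)
Wx = rng.standard_normal((C_conv, H)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
bh = np.zeros(H)

feats = strided_conv1d_relu(x, w, b, stride=2)  # (30, 8): short-range features, halved length
h_last = rnn_many_to_one(feats, Wx, Wh, bh)     # (16,): long-range summary vector
print(feats.shape, h_last.shape)
```

In a full equalizer, the final hidden state would feed a small dense layer that outputs the equalized symbol; the stride-2 downsampling is what shifts FLOPs away from the recurrent stage.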