An Experimental View of How Recent Advances in Related Areas such as Machine Translation Can Be Adopted for Encoder-Decoder Chatbots

Abstract

Chatbots aim at automatically providing a conversation between a human and a computer. While there is a long track of research in rule-based and retrieval-based approaches, generation-based approaches are emerging promisingly, solving problems such as responding at inference time to queries that were not previously seen during development or training. In this paper, we offer an experimental view of how recent advances in related areas such as machine translation can be adopted for chatbots. In particular, we compare how alternative encoder-decoder deep learning architectures perform in the context of chatbots. Our study concludes that a fully attention-based architecture is able to outperform the recurrent neural network baseline system.
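The two architecture families compared above differ in how they summarize the input: recurrent encoder-decoders compress it sequentially, while fully attention-based models (as introduced by Vaswani et al., 2017) relate every output position to every input position directly. A minimal NumPy sketch of the scaled dot-product attention at the core of the latter — the function name, shapes, and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (m, d), K: (n, d), V: (n, dv) -> output (m, dv) and weights (m, n)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over inputs
    return weights @ V, weights

# toy example: 2 decoder positions attending over 3 encoder positions
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, np.allclose(w.sum(axis=-1), 1.0))  # (2, 4) True
```

Each output row is a weighted mixture of the value vectors, so no fixed-size bottleneck is needed between encoder and decoder, which is one intuition behind the attention-based system's advantage over the recurrent baseline.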

Country : India

Sneha Priya Budha

Assistant Professor, Department of Electronics and Communication Engineering, Malla Reddy College of Engineering for Women, Hyderabad-500100, Telangana, India

IRJIET, Volume 1, Issue 3, December 2017, pp. 33–39

